Join us as a Founding Generalist and take real ownership over core programs that shape the AI safety talent pipeline.
Apply now. Know someone who might be a good fit? If your referral gets hired and completes 6 months with us, we'll pay you a $5,000 referral bonus.
Kairos is a nonprofit accelerating talent into AI safety and policy. In just one year, we've trained over 600 people through our flagship programs.
Our fellows have published at NeurIPS and ICML, been featured in Time Magazine, and gone on to lead impactful work across top research labs, think tanks, and policy orgs. We've helped dozens land AI safety roles and supported hundreds in taking meaningful actions in the space.
We see ourselves as a portfolio of highly impactful projects in a fast-evolving field, and we're just getting started. By 2026, we plan to launch even more ambitious initiatives addressing critical gaps in the ecosystem and continue rapidly scaling up our efforts.
As a Founding Generalist, you'll have real ownership over core programs that shape the AI safety talent pipeline. You'll take on a wide range of responsibilities that combine strategy, relationship-building, and execution. This isn't a typical operations role—you'll be a key builder on our team, taking on high-stakes work with significant autonomy.
An ideal candidate will have most or all of the following characteristics:
We are looking for someone with high agency, strong judgment, and deep alignment with our mission. You care deeply about impact and are excited to build in a fast-paced, high-trust environment.
You demonstrate strong internal motivation and ownership. You proactively upskill on complex tasks and reliably drive toward Kairos's goals.
You're motivated by reducing catastrophic risks from AI and ensuring a positive transition to transformative AI. You take individual responsibility while supporting the team. When challenges arise, you roll up your sleeves and contribute wherever you can make a difference.
You pursue truth over comfort. You recognize that accurate models of the world enable better decisions. You actively resist motivated reasoning, maintain intellectual humility, and stay open to changing your mind. When discussing ideas, you focus on understanding reality rather than defending existing views.
You move fast and adapt quickly when new information arises. You develop hypotheses about how to create change and act decisively (even under uncertainty) while staying ready to pivot when needed.
You're collaborative and pro-social. You see other people working on AI safety as allies and support shared goals. You reward reflection on mistakes rather than punishing them, and you handle problems constructively.
While not required, we prefer candidates who also have:
The salary range is $90,000–$150,000, depending on experience, seniority, and location, with the potential for additional compensation for exceptional candidates. We will also pay for work-related travel and expenses. If you work from an AI safety office in Berkeley, London, or Cambridge, MA, we'll cover meals and office expenses.
10% 401(k) contribution or equivalent pension contribution
Access to office space in Berkeley, London, or Cambridge, MA, or coworking access if you're based elsewhere
Flexible working hours, competitive health insurance, dental and vision coverage, generous vacation policy, and professional development budget
Kairos is a small, dynamic, and high-trust team motivated by the urgent challenge of making advanced AI go well for humanity.
We also believe meaningful work should be enjoyable. We support each other's well-being, celebrate wins, and maintain a healthy sense of humor even when the work is challenging.
If you're excited about supporting the next generation of AI safety talent and want to make a tangible impact in this critical field, we'd love to hear from you.