We're a nonprofit accelerating the next generation of AI safety and policy talent.
The development of transformative AI will be one of the most consequential events in human history. Getting it right requires building a robust ecosystem of researchers, policymakers, and technical professionals who can navigate the complex challenges ahead. At Kairos, we focus on interventions that can scale—reaching hundreds or thousands of people with programs that combine rigorous mentorship with practical research experience.
Our approach is simple: identify talented people early, connect them with experienced mentors, and give them the structure and support they need to do meaningful work.
Through our programs—SPAR (Supervised Program for Alignment Research) and the Pathfinder Fellowship—we've supported 45+ publications and helped 120+ people worldwide take their first serious steps into AI safety and policy.
We operate with urgency. If transformative AI arrives in the next 2–10 years, we need to be building capacity now. That means staying lean, moving fast, and constantly reassessing whether our programs are delivering the impact we need.
We design programs that can reach large numbers of people without sacrificing quality. Rather than hand-picking a small cohort, we build infrastructure that lets motivated individuals find mentorship and do meaningful work—whether that's publishing research through SPAR or getting early-career support through the Pathfinder Fellowship.
Impact comes first. We're not optimizing for popularity or organizational prestige—we're optimizing for talent development that actually moves the needle on AI safety and policy. When we have to choose between what looks good and what works, we choose what works.
The field is changing rapidly, and so are we. We maintain the speed and adaptability to pivot our strategy as new evidence comes in or the landscape shifts. Our small team size is a feature, not a bug—it lets us make decisions quickly and test new approaches without excessive process.
We pursue truth rigorously. That means being honest about uncertainty, updating our beliefs when the data demands it, and building accurate models of how our programs create impact. We'd rather have clarity than false confidence, and we actively resist motivated reasoning—even when it might be more comfortable to believe we're on the right track.
Co-Director
Agustín brings extensive experience in AI safety research and community building. He previously worked on alignment research and has been instrumental in growing the AI safety ecosystem globally.
Co-Director
Neav has a background in operations and program management, with a focus on scaling impactful initiatives. She has been pivotal in developing the infrastructure that supports our growing programs.
Founding Generalist
Rebecca is a versatile contributor who has helped shape Kairos from its inception. Her work spans strategy, communications, and program development, ensuring our initiatives run smoothly and effectively.