Astra Fellowship

Astra is Constellation’s flagship fellowship, built to accelerate AI safety research and talent.


As AI advances at unprecedented speed, preparing for its risks is critical. Astra brings exceptional people into the field and connects them with leading mentors and research opportunities.

Complete this expression of interest form to be notified when our next cohort launches!


Astra is a fully funded, five-month, in-person program at Constellation’s Berkeley research center.

Fellows advance frontier AI safety projects with guidance from expert mentors and dedicated research management and career support from Constellation’s team.

Over 80% of Astra’s first cohort are now working full-time in AI safety roles at organizations such as Redwood Research, METR, Anthropic, OpenAI, Google DeepMind, the Center for AI Standards and Innovation, and the UK AI Security Institute. This round, we want to go further: placing even more people in the highest-impact roles and helping fellows launch new initiatives to tackle urgent but neglected problems.

Trusted by the best

What we're looking for


We’re looking for talented people who are excited to pursue new ideas and projects that advance safe AI. You may be a strong fit if you:
  • Are motivated to reduce catastrophic risks from advanced AI

  • Bring technical or domain-specific experience relevant to the focus areas (e.g., empirical research, security, governance, policy, strategy, field-building)

  • Would like to transition into a full-time AI safety role or start your own AI safety focused organization

Prior AI safety experience is not required. Many of our most impactful fellows entered from adjacent fields and quickly made significant contributions. If you're interested but not sure you meet every qualification, we’d still encourage you to apply.

Eli Lifland & Romeo Dean, AI Futures Project

“Astra was a really important program for us. We first started working with Daniel Kokotajlo through Astra, and the scenario we started developing during the program eventually became AI 2027. After Astra, we also co-founded the AI Futures Project together.”

Michael Chen, METR

"I worked with METR during Astra, and joined METR’s policy team immediately after the fellowship. Participating in Astra directly led to my current role.”

Aryan Bhatt, Redwood Research

“Astra was an incredibly important opportunity. I was able to work closely with my mentor Buck, who taught me a lot about doing good research. That eventually led me to my role at Redwood Research, where I now run an entire team.”

Martin Soto, Member of Technical Staff, UK AISI

"I'm really glad I participated in Astra! Endless conversations (both with my mentor and everyone else at Constellation), ranging from theoretical alignment to frontier governance, were an invaluable source of learning, knowledge and opportunities."

Mentors

Mentors span two focus areas: Empirical Research and Strategy & Governance.

  • Joe Benton, Anthropic
  • Aryan Bhatt, Redwood Research
  • Owain Evans, Truthful AI
  • Neev Parikh, METR
  • Buck Shlegeris, Redwood Research
  • Jake Mendel, Coefficient Giving
  • Jasmine Wang, OpenAI
  • Tomek Korbak, OpenAI
  • Raymond Douglas, Alignment of Complex Systems Research Group
  • Fin Moorhouse, Forethought
  • Max Nadeau, Coefficient Giving
  • Julian Stastny, Redwood Research
  • Catherine Brewer, Coefficient Giving
  • Dave Banerjee, IAPS
  • Erich Grunewald, IAPS
  • Theo Bearman, IAPS
  • Ashwin Acharya, Independent
  • Ben Chang, Independent
  • Gabriel Kulp, RAND
  • Mauricio Baker, RAND
  • Edward Kembery, Safe AI Forum
  • Isabella Duan, Safe AI Forum
  • Sabrina Shih, Safe AI Forum
  • Yawen Duan, Safe AI Forum

Application

  • Applications open: Aug 28
  • Applications close: Sep 26

Decisions & Onboarding

  • Acceptances sent: Nov 6
  • Onboarding finishes: Dec 31

Program

  • Program starts: Jan 5
  • Program ends: Mar 31
  • Extension starts: Mar 31
  • Extension ends: Jun 30
Harry Mayne, Spring 2026 Cohort

“Working with Owain Evans gave me the opportunity to learn how to conduct great empirical research and communicate it effectively. The mentors, organization, and cohort are constantly challenging your way of thinking and helping you develop into the best version of yourself as a researcher.”

Yong Zheng-Xin, Spring 2026 Cohort

“I love being part of a diverse community of smart and driven people, who all care deeply about AI safety. Being an Astra Fellow has given me direct access to experienced researchers, which has been incredibly meaningful for my growth.”

Andy Wang, Spring 2026 Cohort

“Astra is everything a junior AI safety researcher could ask for. The program surrounds you with talented people, abundant resources, and a culture that motivates you to realize your impact. Because of Astra, I am now confident that I can meaningfully contribute to AI safety.”
Fellowship Benefits

We provide the resources and support needed for fellows to pursue full-time research.


Stipends

A monthly stipend of $8,400 for the duration of the program.


Research Budget

~$15K per empirical research fellow per month for compute.


Visa Support

We provide support and guidance for international applicants navigating the visa process.

Additional benefits

1
Workspace & Community
Ongoing collaboration at Constellation’s Berkeley research center, where you’ll have access to more than 300 weekly visitors and frequent AI safety convenings (e.g., shared daily meals, weekly seminars, workshops, tabletop exercises, and conferences).
2
Mentorship & Research Management
Weekly mentorship from senior experts and research management support from Constellation’s team (via 1:1s, small group meetings, office hours, Slack collaboration).
3
Placement Services
Many fellows go on to join organizations participating in Astra; others are connected to opportunities across our network.
4
Incubation Services
We also provide advisory services for fellows launching new projects and organizations (e.g., business operations, communication, hiring, fundraising & more).


How to apply

Applications for our January 2026 cohort are now closed.

Complete this expression of interest form to be notified when our next cohort launches, possibly as soon as Summer 2026!

