Astra Fellowship

The Astra Fellowship pairs fellows with experienced advisors to collaborate on a two- to three-month AI safety research project. Fellows will be part of a cohort of talented researchers working out of the Constellation offices in Berkeley, CA, allowing them to connect and exchange ideas with leading AI safety researchers. The program will take place between January 4 and March 15, 2024, though the start and end dates are flexible and partially remote participation is possible. The deadline to apply has passed.

Advisors

Ajeya Cotra

Senior Program Officer, Open Philanthropy

Buck Shlegeris

CEO, Redwood Research

Daniel Kokotajlo

Research Scientist, OpenAI Governance Team

Ethan Perez

Research Lead, Anthropic

Evan Hubinger

Research Lead, Anthropic

Fabien Roger

Member of Technical Staff, Redwood Research

Hjalmar Wijk

Member of Technical Staff, METR

Lukas Finnveden

Research Analyst, Open Philanthropy

Megan Kinniment

Member of Technical Staff, METR

Owain Evans

Research Associate, Future of Humanity Institute

Richard Ngo

Research Scientist, OpenAI Governance Team

Rob Long (with Ethan Perez)

Research Associate, Center for AI Safety (and Research Lead, Anthropic)

Tom Davidson

Senior Research Analyst, Open Philanthropy

Frontier Model Redteaming, Compute Governance, & Cybersecurity

Various Positions

Program details

We will provide housing and transportation within Berkeley for the duration of the program. In addition, we recommended Astra invitees to AI Safety Support (an Australian charity) for independent research grants, and it has decided to award accepted Astra applicants grants of $15,000 for 10 weeks of independent AI safety research.

Fellows will conduct research from Constellation’s shared office space, and lunch and dinner will be provided daily. Individual advisors will choose when and how to interact with their fellows, but most advisors will work out of the Constellation office frequently. There will be regular invited talks from senior researchers, social events with Constellation members, and opportunities to receive feedback on research. Before the program begins, we may also provide tutorial support to fellows interested in going through Constellation and Redwood Research’s MLAB curriculum.

We expect to inform all applicants whether they have progressed to the second round of applications within a week of their applying, and to make final decisions by December 1, 2023. For more details on the application process, see the FAQ below.

Testimonials

"Participating in MLAB [the Machine Learning for Alignment Bootcamp, jointly run by Constellation and Redwood Research] was probably the biggest single direct cause for me to land my current role. The material was hugely helpful, and the Constellation network is awesome for connecting with AI safety organizations."

Haoxing Du

Member of Technical Staff, METR

“Having research chats with people I met at Constellation has given rise to new research directions I hadn't previously considered, like model organisms. Talking with people at Constellation is how I decided that existential risk from AI is non-trivial, after having many back and forth conversations with people in the office. These updates have had large ramifications for how I’ve done my research, and significantly increased the impact of my research.”

Ethan Perez

Research Lead, Anthropic

"Speaking with AI safety researchers in Constellation was an essential part of how I formed my views on AI threat models and AI safety research prioritization. It also gave me access to a researcher network that I've found very valuable for my career.”

Sam Marks

Postdoc, David Bau's Interpretable Neural Networks Lab

“Participating in MLAB was likely the most important thing I did for upskilling to get my current position and has generally been quite valuable for my research via gaining intuitions on how language models work, gaining more Python fluency, and better understanding ML engineering. I’m excited for other people to have similar opportunities.”

Ansh Radhakrishnan

Researcher, Anthropic

FAQ

If your question isn't answered here, please reach out to programs@constellation.org.

What is Constellation?
Who is eligible to apply?
What do fellows tend to do after the program?
What is the application process like?
What if I’m not available for these dates?
How does this compare to the Visiting Researcher Program?
What housing and travel will you cover?
Can I refer someone?
Will I receive any compensation for participating in this program?