The Astra Fellowship pairs fellows with experienced advisors to collaborate on a two- or three-month AI safety research project. Fellows will be part of a cohort of talented researchers working out of the Constellation offices in Berkeley, CA, allowing them to connect and exchange ideas with leading AI safety researchers. The program will take place between January 4 and March 15, 2024, though the start and end dates are flexible and partially remote participation is possible. The deadline to apply has passed.
In addition to these advisors, the Astra Fellowship offered other potential advisors with specializations in frontier model red-teaming, compute governance, and cybersecurity.
We will provide housing and transportation within Berkeley for the duration of the program. Additionally, we have recommended Astra invitees to AI Safety Support (an Australian charity) for independent research grants, and it has decided to provide grants of $15k for 10 weeks of independent research to accepted Astra applicants in support of their AI safety research.
Fellows will conduct research from Constellation’s shared office space, and lunch and dinner will be provided daily. Individual advisors will choose when and how to interact with their fellows, but most advisors will work out of the Constellation office frequently. There will be regular invited talks from senior researchers, social events with Constellation members, and opportunities to receive feedback on research. Before the program begins, we may also provide tutorial support to fellows interested in going through Constellation and Redwood Research’s MLAB curriculum.
We expect to inform all applicants within a week of their application whether they have progressed to the second round, and to make final decisions by December 1, 2023. For more details on the application process, see the FAQ below. The deadline to apply has passed.
"Participating in MLAB [the Machine Learning for Alignment Bootcamp, jointly run by Constellation and Redwood Research] was probably the biggest single direct cause for me to land my current role. The material was hugely helpful, and the Constellation network is awesome for connecting with AI safety organizations."
“Having research chats with people I met at Constellation has given rise to new research directions I hadn't previously considered, like model organisms. Talking with people at Constellation is how I decided that existential risk from AI is non-trivial, after having many back and forth conversations with people in the office. These updates have had large ramifications for how I’ve done my research, and significantly increased the impact of my research.”
"Speaking with AI safety researchers in Constellation was an essential part of how I formed my views on AI threat models and AI safety research prioritization. It also gave me access to a researcher network that I've found very valuable for my career.”
“Participating in MLAB was likely the most important thing I did for upskilling to get my current position and has generally been quite valuable for my research via gaining intuitions on how language models work, gaining more Python fluency, and better understanding ML engineering. I’m excited for other people to have similar opportunities.”
If your question isn't answered here, please reach out to programs@constellation.org.
Constellation is a research center dedicated to safely navigating the development of transformative AI. We host a number of organizations, teams, and individuals working on topics including alignment, dangerous capability evaluations, and AI governance, in addition to running field-building programs such as this one.
We welcome a wide range of applicants. We expect professionals working in related industries, graduate students, and exceptionally promising undergraduates to be good fits, but we are also excited about applicants with other backgrounds. If you are unsure about your fit, please err on the side of applying. We especially encourage women and underrepresented minorities to apply.
If you and your advisor would like to continue your project after the program ends, we may be able to provide support. While individual cases will vary, we are generally excited to help fellows complete their research.
Additionally, over 15 participants in past programs, such as Constellation and Redwood Research’s Machine Learning for Alignment Bootcamps, are (as of the time of writing) working at Anthropic, ARC Evals, ARC Theory, Google DeepMind, OpenAI, Open Philanthropy, and Redwood Research.
The application process involves two rounds.
Applications will be processed on a rolling basis up until the deadline, November 17th. We encourage you to submit the application as soon as convenient so that, if you progress to the next round, you will have more time to complete the advisor-specific questions. We plan to get back to all candidates by November 20th with information on their next steps.
While we prefer that participants be present for the full duration of the program from January through March, we can accommodate variable start and end dates as long as you are able to join for the majority of the program. If you aren't available for any of the program dates, we still recommend filling out the application, since we may run future iterations of this program.
The Visiting Research Program provides an opportunity for established researchers to spend time at Constellation while continuing their full-time research. The Astra Fellowship allows people interested in starting new research to work with an experienced advisor, who will provide guidance and project direction. If you would like to continue your own research, we recommend applying to the Visiting Research Program. If you would like to be paired with an advisor to start a new project, we recommend applying to this program.
We will cover travel to and from Berkeley, CA, in addition to housing for the time you are here. Constellation provides lunch and dinner on weekdays. Feel free to email programs@constellation.org with any questions.
Yes. Please email programs@constellation.org with their name, their email (if you'd like us to reach out to them), and (optionally) a short sentence on why you think they'd be a good fit.
Constellation is not compensating you for your participation in this program. However, AI Safety Support (an Australian charity) has decided to provide grants of $15k for 10 weeks of independent research to accepted Astra applicants in support of their AI safety research.