Advancing AI safety collaboration, talent, and insight

Programs at Constellation

Visiting Fellows

The Visiting Fellows program provides an opportunity for professionals working in our focus areas to connect with leading researchers, exchange ideas, and find collaborators while continuing their work from our offices in Berkeley, CA.

Constellation Residency

The Constellation Residency is a year-long salaried position for experienced researchers, engineers, entrepreneurs, and other professionals to pursue self-directed work in one of our focus areas.

Workshops

We expect to offer 1–2-day intensive workshops for experts working in or transitioning into our focus areas. Express interest in our workshops here.

General Hosting and Visitors

We host individuals and teams doing mission-aligned work that doesn’t fit within our other programs. Participants come from nonprofit organizations, universities, AI companies, think tanks, and governments. Visitors are offered short- and long-term workspace at the center (fees apply), along with opportunities to connect with researchers and other contributors.

"I think this has probably been the period of four weeks in which I learned the most about any topic, throughout my whole life."

Gabriel Wu

Constellation program participant and Director, AI Safety Student Team at Harvard

Past programs

Astra Fellowship

The Astra Fellowship pairs fellows with experienced advisors to collaborate on a three-month AI safety research project.

Machine Learning Alignment Bootcamp (MLAB)

We hosted and helped run MLAB, which introduced experienced programmers to machine learning skills and concepts relevant to safety research. There are no current plans to run additional MLAB sessions, but you can request access to the MLAB curriculum.

Collaborative residency on model internals

We hosted and helped run a one-month collaborative research effort on transformer model internals. The program brought researchers together to develop mechanistic explanations of model behaviors using recently developed interpretability techniques, including the causal scrubbing methodology.