The Anthropic AI Safety Fellowship 2026 is a four-month, full-time research program for emerging AI researchers and technologists focused on the risks, reliability, and governance of advanced artificial intelligence systems. Starting with cohorts in July 2026, the fellowship aims to develop practical expertise in AI safety, interpretability, security, and alignment through hands-on projects and collaboration with Anthropic researchers.
About the Anthropic AI Safety Fellowship 2026 Program
The Anthropic AI Safety Fellowship is hosted by Anthropic, with in-person participation at its offices in Berkeley, California, and London, UK, as well as remote options for eligible candidates based in the United States, United Kingdom, or Canada. Fellows work closely with Anthropic’s research teams on safety-critical problems, such as making large AI models more interpretable, robust, and secure. The program emphasizes producing tangible research outputs, including technical tools, models, or research papers that advance the broader AI safety field.
Funding Size
- Weekly stipend: $3,850 USD
- Compute and research support: ~$15,000 USD per month
- Duration: Four months (July–October 2026)
This combination of financial support and compute resources enables participants to focus on impactful AI safety work without financial or infrastructure constraints.
Who Can Apply
The fellowship is open to individuals who:
- Have strong Python programming skills
- Possess a background in computer science, mathematics, physics, or other quantitative disciplines
- Are early-career researchers, engineers, or technologists
- Do not require visa sponsorship (must already be legally able to work in the US, UK, or Canada)
A PhD is not required, but applicants should demonstrate technical depth and a strong interest in AI safety research. The selection process typically includes an evaluation of technical experience, research interests, and references, and may involve interviews or technical assessments.
Geographic Eligibility
Eligible participants must already have the legal right to work in one of the following countries:
- United States
- United Kingdom
- Canada
Anthropic does not provide visa sponsorship for this fellowship.
Sector or Thematic Focus
The fellowship supports research themes including:
- AI safety and alignment
- Interpretability and explainability of models
- Security and robustness of AI systems
- Practical mitigation strategies for harmful outputs
- Development of tools and research contributions to the AI safety community
Application Process
Applicants must submit their materials through the official Anthropic Fellows portal. The application typically requires:
- Selection of preferred research tracks
- Technical resume or CV
- Description of relevant experience and technical skills
- References or letters of recommendation
- Possible follow-up technical assessments or discussions with mentors
Required Materials
- Completed online application
- CV or professional resume
- Technical summary of relevant skills and projects
- References from academic or professional mentors
- Work authorization proof (US, UK, or Canada)
Key Dates
- Application deadline for July 2026 cohort: 26 April 2026
- Fellowship start: July 2026
Anthropic also reviews strong applications on a rolling basis for future cohorts beyond July 2026.
Selection Notes
Selection is competitive and based on technical aptitude, research potential, alignment with AI safety themes, and clarity of expressed research interests. Applicants who demonstrate strong problem-solving skills and relevant project experience are more likely to be admitted.

