The Anthropic Fellows Program is a structured research initiative created by Anthropic to develop emerging talent in AI research, engineering, and AI safety. It is designed to support individuals with strong technical potential, regardless of formal experience, and to provide them with the resources, mentorship, and funding required to conduct impactful empirical research on frontier AI safety priorities.
Anthropic is a public benefit corporation focused on building reliable, interpretable, and steerable AI systems that are safe and beneficial for society. The organization brings together researchers, engineers, policy experts, and operational leaders to advance the long-term development of safe artificial intelligence systems.
Program Structure and Objectives
The Fellows Program is designed as an intensive, full-time research experience aimed at producing high-quality public outputs, such as research papers or technical reports. Fellows are expected to work on empirical projects using external infrastructure such as open-source models and public APIs.
Key objectives of the program include:
Advancing AI safety and security research through hands-on empirical work
Producing publishable research outputs
Building talent pipelines into frontier AI research roles
Supporting interdisciplinary collaboration across AI-related fields
Encouraging exploration of scalable, real-world AI safety problems
In previous cohorts, more than 80% of fellows produced research publications, reflecting the program’s emphasis on execution and scientific contribution.
Program Duration, Compensation, and Logistics
The program runs for approximately four months on a full-time basis. Fellows are expected to work around 40 hours per week, with the possibility of extension based on performance and project needs.
Participants receive structured support, including:
Weekly stipend:
3,850 USD (United States)
2,310 GBP (United Kingdom)
4,300 CAD (Canada)
Research funding for compute and infrastructure (~$15,000/month)
Access to shared workspace locations in:
Berkeley, California
London, United Kingdom
Optional remote participation for eligible candidates in the US, UK, or Canada
Access to mentorship from leading researchers at Anthropic
Integration into a broader AI safety and research community
Visa sponsorship is not provided for fellows, and participants must already have work authorization in the US, UK, or Canada.
Application Timeline and Process
Applications are reviewed on a rolling basis, with structured cohorts beginning periodically. The next cohort begins on July 20, 2026, and applications must be submitted by April 26, 2026, to be considered.
The interview and selection process typically includes:
Initial application screening
Reference checks
Technical assessments and interviews
Research-focused discussions with potential mentors
Applicants are encouraged to apply even if they do not meet every listed qualification, as the program values potential, motivation, and research curiosity over rigid credential requirements.
Fellowship Workstreams
The program is organized into multiple specialized workstreams. Applicants may be considered across all areas based on their skills and preferences.
AI Safety Fellows
This workstream focuses on reducing catastrophic risks from advanced AI systems.
Research areas include:
Scalable oversight of advanced models
Adversarial robustness and AI control
Mechanistic interpretability of model internals
Model organisms of misalignment
AI welfare and evaluation frameworks
Typical profiles include candidates with:
Experience in empirical machine learning research
Strong Python programming ability
Interest in interpretability or alignment problems
Contributions to open-source AI research
AI Security Fellows
This track focuses on identifying and mitigating security vulnerabilities in AI systems.
Key focus areas:
Offensive security and vulnerability research
Red teaming and adversarial testing
LLM security and safety evaluation
Ideal candidates often demonstrate:
Experience in cybersecurity or penetration testing
Bug bounty participation or CVE reporting
Strong open-source contributions in ML or security domains
Ability to solve ambiguous technical problems independently
ML Systems & Performance Fellows
This stream emphasizes infrastructure, scalability, and systems-level ML engineering.
Work may include:
Building high-performance ML systems
Developing simulation environments for AI workloads
Optimizing training and inference pipelines
Supporting infrastructure-heavy research projects
Strong candidates typically have:
Experience with distributed systems and ML infrastructure
Engineering expertise in large-scale computing systems
Ability to balance research and production-grade engineering
Reinforcement Learning Fellows
This track focuses on reinforcement learning research and applied experimentation.
Key areas include:
RL environments for model training
Generalization studies in reinforcement learning
Model-based tools for training data analysis
Algorithm development and experimentation
Ideal profiles include:
Strong ML systems engineering experience
Familiarity with training and fine-tuning models
Ability to debug complex model training processes
Economics & Societal Impacts Fellows
This workstream explores the broader societal implications of AI systems.
Research topics include:
Economic impacts of AI adoption
Labor market and workforce transformation studies
Human-AI collaboration analysis
Model evaluation for societal well-being
Policy-relevant empirical research
Strong candidates often have:
Background in economics, social science, or data analysis
Strong writing and communication skills
Ability to interpret ambiguous empirical results
Interest in AI policy and societal impact
Candidate Profile and Requirements
Across all workstreams, ideal candidates typically demonstrate:
Strong motivation to improve AI safety and societal outcomes
Technical proficiency in Python programming
Ability to work full-time during the program duration
Experience in computer science, mathematics, physics, or related disciplines
Comfort working in fast-paced collaborative research environments
Strong communication and execution skills
Additional advantages include:
Open-source contributions
Prior ML or systems research experience
Domain expertise relevant to a chosen workstream
Program Philosophy and Values
Anthropic’s research approach is grounded in the belief that AI safety is a critical global challenge requiring large-scale, collaborative scientific effort. The organization treats AI research as an empirical science, similar in rigor and methodology to physics or biology.
Core values include:
Focus on long-term AI safety and alignment
Emphasis on interpretability and steerability of AI systems
Collaborative, cross-disciplinary research culture
Commitment to transparency and public research dissemination
Inclusion of diverse perspectives in shaping AI futures
Disclaimer: Global South Opportunities (GSO) is not the organization offering the program. For any inquiries, please contact the official organization directly. Please do not send applications or CVs to GSO, as we are unable to process them. Due to the high volume of emails we receive daily, we may not be able to respond to all inquiries. Thank you for your understanding.