OpenAI begins recruiting for external AI safety research fellowships

OpenAI is accepting applications for its paid fellowship program, which provides funding to external researchers working on safety issues related to advanced AI systems. The program, called the OpenAI Safety Fellowship, will run from September 14, 2026 to February 5, 2027. Applications close on May 3, 2026, and successful applicants will be notified by July 25, 2026.

OpenAI Safety Fellowship

This fellowship is open to researchers, engineers, and practitioners outside of OpenAI. Priority research areas include safety assessment, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agent monitoring, and high-severity misuse. OpenAI says it prefers work that is empirically grounded and technically strong.

Where fellows work

Fellows will have access to the workspace of Constellation, a Berkeley-based nonprofit that supports AI safety research; remote participation is also possible. They will work alongside other fellows and receive guidance from OpenAI staff.

By the end of the program, each fellow is expected to produce substantial research output, such as papers, benchmarks, or datasets. The fellowship includes a monthly stipend, computing support, and ongoing mentorship. Fellows receive API credits but do not have access to OpenAI's internal systems.

Who can apply

OpenAI accepts candidates from computer science, social science, cybersecurity, privacy, human-computer interaction, and related fields. The selection process emphasizes research ability, technical judgment, and execution. No specific educational background is required, but letters of recommendation must be submitted as part of the application.
