About The Team
The Intelligence & Investigations (I2) team detects and disrupts abuse and strategic risk so people can use our products safely. Within I2, Strategic Intelligence & Analysis is building a first-of-its-kind user-risk measurement function: policy-grounded baselines, confidence intervals, and attribution that reveal how real users engage with frontier AI—and how that changes when we ship mitigations. We sit between Safety Systems, Data Science, Integrity, and Product, turning heterogeneous safety signals into decision intelligence that appears in executive dashboards, launch/post-launch readouts, and weekly briefs.
About The Role
As the Data Science Manager for user-risk measurement and quantitative forecasting, you will define the strategy and standards, hire and mentor a team, and deliver decision-ready metrics and narratives that shape safety and product direction across OpenAI. You will turn heterogeneous safety signals into trustworthy baselines, connect behavior shifts to product/policy mitigations, and build short-/medium-term forecasts that help leaders prioritize with confidence.
This role is based in San Francisco, CA (hybrid, 3 days/week). Relocation support is available.
In This Role, You Will
- Define the measurement and forecasting strategy and operating model; align policy-grounded definitions, governance, and quality bars across partners
- Build user-level baselines and confidence intervals for rare-event harms using principled sampling and inference; institute stability and drift checks
- Ship executive-grade reporting: dashboard tiles, weekly 1-pagers, monthly deep dives, and launch/post-launch readouts that drive action
- Implement mitigation attribution and change-tracking; back-test launches and connect outcomes to specific interventions and external events
- Own data interfaces and SLOs across data science schemas; ensure privacy-by-design data paths and auditable method notes
- Build automated systems and pipelines to clean and organize unstructured data from disparate sources
- Act as the single analytics entry point for cross-functional partners across our Safety Systems, Data Science, Integrity, Product, and Policy teams; resolve definitions, standards, and timelines
- Build and lead the team; mentor experienced ICs; foster an inclusive, principled, high-standards culture
You Might Thrive In This Role If You
- Have 7+ years in data science, measurement/causal inference, forecasting, modeling, or risk analytics in high-stakes domains
- Have deep strength in probability sampling and inference, time-series methods, backtesting, parametric and non-parametric modeling, imputation, and uncertainty quantification for rare-event estimation; comfort with time-varying metrics and survival analysis
- Write strong Python and SQL; are fluent with modern warehouses and notebook-to-production workflows; communicate crisply to executives and engineers
- Bring experience in integrity/fraud/safety or adjacent high-stakes analytics; have led multiple complex workstreams and mentored senior peers
- Have experience managing data scientists, quantitative analysts, or similar roles
- Balance near-term execution with long-term vision; raise analytical standards and create clarity across teams
- Bonus: Familiarity with Airflow
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.