
AI Offense Defense Dynamics Lead Researcher
Job Description
Research Lead for AI Risk Management and Alignment
About CARMA
CARMA (Center for AI Risk Management & Alignment), a fiscally-sponsored project of Social & Environmental Entrepreneurs, Inc., is dedicated to mitigating the potential risks of artificial intelligence while maximizing its benefits for humanity.
The Opportunity
We are seeking a highly motivated and experienced Research Lead to join our team. In this role, you will play a critical part in developing and advancing our understanding of AI risk management and alignment.
Benefits
This is a 100% remote position with occasional travel required.
Qualifications
Education:
- Master's degree or higher in Computer Science, Cybersecurity, Criminology, Security Studies, AI Policy, Risk Management, or a related field.
Experience:
- Minimum 3 years of experience in complex systems modeling, risk assessment methodologies, or security analysis.
Skills:
- Demonstrated understanding of dual-use technologies, modern AI systems, and sociotechnical interactions.
- Proven ability to develop both qualitative and quantitative frameworks, including experience with semi-quantitative models.
- Strong analytical and research skills with a record of relevant publications or research contributions related to technology risk, governance, or security.
- Excellent communication and collaboration skills, with the ability to translate complex technical concepts into actionable guidance for diverse audiences.
Preferred Qualifications:
- PhD in a relevant field.
- Experience working with AI safety and alignment research communities.
- Familiarity with policy development and implementation processes.
Research & Analysis
- Develop and refine quantitative and qualitative frameworks to assess the potential societal impacts of AI, both beneficial and harmful.
- Design analytical models and simulations to identify critical points for policy interventions aimed at mitigating AI risks.
- Construct and operationalize a comprehensive taxonomy and framework for understanding offense-defense dynamics within AI systems.
Prediction & Guidance
- Build empirically-grounded analytical frameworks to predict which AI system features are more likely to be exploited for offensive purposes versus defensive applications.
- Translate research findings into actionable recommendations and guidance for policymakers, AI developers, and other relevant stakeholders.
Collaboration & Dissemination
- Actively participate in internal and external collaborations to advance the understanding of AI risk and promote responsible development.
- Contribute to the publication of research findings in peer-reviewed journals and other relevant platforms.
Selection Process
Applicants for the AI Offense Defense Dynamics Lead Researcher position at Future of Life Organizations' CARMA first submit a resume and cover letter. These materials are reviewed to identify candidates whose qualifications align with the job description. Shortlisted candidates are then invited to a series of interviews, which include technical assessments of their understanding of AI systems, risk assessment methodologies, and relevant research areas, along with behavioral questions to evaluate communication, collaboration, and problem-solving skills.
Following the interviews, the hiring team will deliberate and select the most qualified candidate based on their experience, skills, and potential to contribute to CARMA's mission. The final candidate will undergo a reference check before receiving a formal offer of employment.
How to Apply
To apply, read all of the information on the job listing page carefully, then follow the apply link to the company's application portal.
Enter your personal details and any other information the company requests, paying close attention to the instructions and completing all required fields accurately.
Before submitting, double-check your application: make sure your contact information is correct and up to date and that the application accurately reflects your qualifications and experience.
Important Note
Submitting an application with incorrect or incomplete information could harm your chances of being selected for an interview.
About Future of Life Organizations
Future of Life Institute (FLI) is a leading non-profit organization dedicated to mitigating existential risks posed by advanced technologies, particularly artificial intelligence. Through research, advocacy, and public education, FLI strives to ensure that artificial intelligence benefits all of humanity.
FLI's work encompasses a wide range of critical areas, including AI safety, biosecurity, and global catastrophic risks. The organization collaborates with leading researchers, policymakers, and industry experts to develop practical solutions and promote the responsible development and deployment of transformative technologies. FLI's mission is to secure a future in which humanity thrives alongside advanced technologies, safeguarding our planet and the prosperity of generations to come.
Ready to Apply?
Join Future of Life Organizations and take your career to the next level. We're looking for talented individuals like you!
Apply for this Job