What to Read First
- If you’re new to the topic, start with the five-page overview Reducing Long-Term Catastrophic Risks from Artificial Intelligence.
- If you don’t think human-level AI is possible this century, read Intelligence Explosion: Evidence and Import.
- If you think that safe AI is the default outcome, or that designing safe AI will be easy, see The Singularity and Machine Ethics or Complex Value Systems are Required to Realize Valuable Futures.
- If you want to see how cognitive biases could affect one’s thinking about AI risk, see Cognitive Biases Potentially Affecting Judgment of Global Risks or Not Built to Think About AI.
- If you want to know what can be done to reduce AI risk, see How to Purchase AI Risk Reduction or So You Want to Save the World.
Resources for Other Researchers
- Our other publications (mostly older ones).
- Research by others about AI risk.
- The Singularity Institute’s continuously updated BibLaTeX file and Mendeley group.
- Journals that may publish papers on AI risk.
- Forthcoming and desired articles on AI risk.
- Keep up with the very latest research relevant to Friendly AI by subscribing to the Friendly AI Research blog.
- For an overview of what research can be done on the AI risk problem, see So You Want to Save the World.
- For more, see IntelligenceExplosion.com and Friendly-AI.com.