Dunchead/ai-safety

AI Safety Project

The AI Safety Project aims to map the landscape of AI risk and to understand the challenges we must overcome as a global community.

AI development is moving at such a pace that there is a real chance we will fall victim to risks before we fully understand them, or are even aware of them. This is probably already happening.

Community-driven initiatives are essential to keep up with this rapidly evolving landscape. The goals of this project are:

  • Create a simple framework for understanding the landscape of AI risk and the main challenges we need to overcome
  • Share key resources and potential solutions
  • Provide a space for community collaboration that is not driven by economic incentives

Please contribute!

This is a collaborative project and a work in progress. We encourage contributions in any of the following areas:

  • Additional risks/challenges
  • Suggested solutions
  • Key links/resources

Feel free to submit your own ideas.

Bugfixes/improvements for the website itself are also welcome.

Contributions can be made in two ways:

  1. As posts in the discussion forum under 'AI risks, challenges and solutions'
  2. As pull requests (most content is in script.js)

Contributor guidelines

This is a high-level map, not an encyclopedia.

The goal is to create a clear outline of the major issues, with links to only the most useful resources for further reading; we are not trying to compile a detailed or comprehensive list. Too much information is part of the problem we are trying to solve.

Yes please:
✅ Key points not already covered, concisely written
✅ Links/resources that are especially clear, original, or significant
✅ Restructuring that improves the content while maintaining the clarity of design

No thanks:
❌ Verbose or repetitive text
❌ Links/resources that do not offer much additional value
❌ Non-essential restructuring of the design/content