jaebooker/AI-Policy-Simulator

Research question: What are the probabilities that governments will coordinate versus compete when developing strong artificial intelligence?

Methodology: A simulation of world governments, used to assess the chances that they will coordinate on AI research, and which mechanisms might be critical for that coordination to occur. I do this using elements of game theory, where each government acts as an individual agent and certain rewards and punishments are attached to Coordinating and Defecting.

Coordinating means two agents share information and agree to certain safety mechanisms. Depending on how many agents are sharing information, this might speed up an agent's progress or slow it down: the agents gain from shared information but also act more cautiously. Defecting means an agent pursues research on its own, without sharing information, potentially creating more hazardous AIs. Under certain conditions, a single defecting agent could cause drastically negative outcomes for all other agents, while coordination between agents could lead to faster innovation and rewards.

With more iterations, I might add strategies beyond Coordinate and Defect. An agent might be able to choose Imposter, hiding the fact that it is defecting while gaining the rewards of coordination, but with each iteration running the risk of being discovered. If discovered, the other agents would cease coordinating with it, and future agents would be unlikely to coordinate once an Imposter has been exposed. There could also be the option of Spy, where an agent defects on the surface but spies on other agents to gain the benefits of coordination; the consequence of being discovered would be an inability to coordinate with other agents in the future.

I am also curious about adding a Critical Research Point: a point of no return past which an agent becomes impossible to compete with, due to accelerating returns from the AIs it creates. I could also vary the agents' initial rates of progress, so that some progress much faster on their own than others. If that's the case, would slower agents group together to compete with faster ones, or would they instead resort to spying? A minimal sketch of the core game loop appears below.
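To make the core loop concrete, here is a minimal sketch of the iterated Coordinate/Defect game with an Imposter strategy. The payoff values, class and agent names, and the 20% per-round discovery chance are illustrative assumptions of mine, not parameters from the actual simulation.

```python
import random

# Illustrative payoffs (assumptions, not the simulation's actual values):
# mutual coordination yields steady shared progress; defecting against a
# coordinator pulls the defector ahead; mutual defection is slow for both.
PAYOFFS = {
    ("coordinate", "coordinate"): (3, 3),
    ("coordinate", "defect"):     (0, 5),
    ("defect",     "coordinate"): (5, 0),
    ("defect",     "defect"):     (1, 1),
}

DISCOVERY_PROB = 0.2  # assumed per-round chance an Imposter is exposed

class Government:
    """One national agent; strategy is 'coordinate', 'defect', or 'imposter'."""
    def __init__(self, name, strategy):
        self.name = name
        self.strategy = strategy
        self.progress = 0.0
        self.exposed = False  # an exposed Imposter loses coordination partners

    def move(self):
        # An undiscovered Imposter *appears* to coordinate; once exposed,
        # it openly defects because nobody will coordinate with it.
        if self.strategy == "imposter":
            return "defect" if self.exposed else "coordinate"
        return self.strategy

def play_round(a, b):
    """Resolve one pairwise interaction and update both agents' progress."""
    move_a, move_b = a.move(), b.move()
    pay_a, pay_b = PAYOFFS[(move_a, move_b)]

    # An unexposed Imposter secretly earns the defector's payoff while its
    # partner believes coordination is happening, but risks discovery.
    if a.strategy == "imposter" and not a.exposed:
        pay_a, _ = PAYOFFS[("defect", move_b)]
        if random.random() < DISCOVERY_PROB:
            a.exposed = True
    if b.strategy == "imposter" and not b.exposed:
        _, pay_b = PAYOFFS[(move_a, "defect")]
        if random.random() < DISCOVERY_PROB:
            b.exposed = True

    a.progress += pay_a
    b.progress += pay_b

def simulate(agents, rounds=50):
    for _ in range(rounds):
        for i in range(len(agents)):
            for j in range(i + 1, len(agents)):
                play_round(agents[i], agents[j])
    return agents

if __name__ == "__main__":
    random.seed(0)
    agents = simulate([
        Government("A", "coordinate"),
        Government("B", "coordinate"),
        Government("C", "defect"),
        Government("D", "imposter"),
    ])
    for g in sorted(agents, key=lambda g: -g.progress):
        print(f"{g.name}: strategy={g.strategy!r}, "
              f"progress={g.progress:.0f}, exposed={g.exposed}")
```

The Critical Research Point could be layered onto this sketch by checking each agent's progress against a threshold after every round and ending the run once any agent crosses it, declaring that agent impossible to catch.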

Tools: The simulation is built in Python.
