Project summary
Samotsvety Forecasting produced conditional forecasts focusing on two potential policies that seek to reduce AI risk. The first, Policy PAUSE, is the implementation of a 6-month moratorium on training AI systems more powerful than GPT-4, as recommended by the FLI open letter.
The second policy concerns the signing or implementation of a proposed AI treaty. The treaty includes provisions on banning large AI training runs, dismantling large GPU clusters, and promoting international cooperation and capacity building in AI safety.
Tolga, a member of Samotsvety, wants to continue improving the forecasts and the treaty.
You can request the treaty, the forecasts, and related materials by emailing contact@bilge.no.
What are this project's goals, and how will they be achieved?
Draw attention to the need for international coordination to regulate AI progress, especially by preventing unsafe development (e.g., via a global moratorium).
Make progress on what this treaty could actually say, so that once the international community begins working on it, there is already a good model to draw from.
Figure out what is best/most important to include in a potential treaty and what is especially worth fighting for.
How will this funding be used?
Professional web developer: $2k
Web hosting for 1 year: $1k
Retrospective compensation for forecasters' work: $2k
Forecasters' compensation for the next project: $2k
Compensation for Tolga's and others' work on the treaty: $2k
Who is on the team and what's their track record on similar projects?
Tolga is part of a leading forecasting team, Samotsvety. He also has a track record with several other groups.
Simeon Campos, Akash Wasil, and Olivia Jimenez will also be part of this project, helping to disseminate and implement the results of the forecasts; each has their own track record in this area.
How could this project be actively harmful?
It could accelerate the drive towards an early AI treaty, but that drive might produce not the strong treaty we are suggesting but a much weaker one that makes it harder to get a strong AI treaty later.
If we are maximally successful in getting a strong AI treaty agreed, the institutions it sets up could become captured by badly motivated or badly incentivized people.
What are the most likely causes and outcomes if this project fails? (premortem)
The most likely cause of failure is that the project doesn't get much traction.
What other funding is this person or project getting?
This project hasn't had any funding so far, and all work done so far has been done on a voluntary basis. Tolga has not received any grants. Samotsvety has received retrospective funding for a couple of previous forecasting projects (e.g., nuclear forecasting) but is currently unfunded. Tolga is part of some other forecasting groups (Swift Centre, INFER, Good Judgment), from which he earns about $1k per month in total.