AI-Plans.com Critique-a-Thon $500 Prize Fund Proposal

Technical AI safety

Kabir Kumar

Active Grant
$500 raised
$500 funding goal
Fully funded and not currently accepting donations.

Project summary


AI-Plans.com is a contributable compendium of alignment plans and the criticisms against them. We currently have over 100 alignment plans on the site and are in the process of adding more. Several alignment researchers, including Tom Everitt, Dan Hendrycks, and Stuart Russell, have expressed interest in the site, and other researchers have already been finding useful papers there and submitting plans of their own.
We are hosting a critique-a-thon starting on the 1st of August, lasting 10 days, with a prize fund of $500.

What are this project's goals and how will you achieve them?

The goal of this project is to encourage high-quality critiques of alignment plans on AI-Plans.com. Critiques will be judged on accuracy, precision, communication, evidence, and novelty by myself, members of the team, and a couple of alignment researchers. The top three critiques will receive prizes from the fund, with two honorable mentions also receiving a prize.

How will this funding be used?


The funding will be used to award the top critiques from the critique-a-thon. This will incentivize high-quality critiques and help improve the content on AI-Plans.com. The prize fund will be split as follows:

  • 1st place: $200

  • 2nd place: $125

  • 3rd place: $75

  • Honorable mention 1: $50

  • Honorable mention 2: $50
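As a quick sanity check, the five prizes above sum exactly to the $500 fund. A minimal sketch of that arithmetic (amounts taken from the list above):

```python
# Prize breakdown for the critique-a-thon, as listed above.
prizes = {
    "1st place": 200,
    "2nd place": 125,
    "3rd place": 75,
    "Honorable mention 1": 50,
    "Honorable mention 2": 50,
}

# The payouts exhaust the prize fund exactly.
total = sum(prizes.values())
print(total)  # 500
```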

Who is on your team and what's your track record on similar projects?

Our team includes Jonathan Ng, an alignment researcher, who will review some of the critiques. Other members of the team include a QA expert with many years of experience and Azai, who has a strong background in mathematics. We also have a consultant who is CompTIA certified, highly skilled in cybersecurity and red-teaming, and is also a professor.
Dr. Peter S. Park, an MIT postdoc in the Tegmark lab, has agreed to be a judge.
We have successfully launched AI-Plans.com in beta and have already added over 100 alignment plans to the site.
I myself have helped get a start-up off the ground from nothing, going door to door, to the point of having 3 branches, thousands of customers, and schools requesting internships. Along the way I saw many other start-ups, with far more qualified people, fail completely, and I learnt what it takes to fail (overconfidence in the product, lack of outreach and market research, laziness, among other things) and what it takes to succeed: determination and a sharp, user-focused mind.
I have been assisting Stake Out AI with narrative-building and proofreading, and helping out at VAISU as well. I'm confident in my ability to break down the reasons an idea can and will fail, and then find ways to reach in and extract something valuable.

What are the most likely causes and outcomes if this project fails? (premortem)

If this project fails, it will most likely be due to a lack of participation or low-quality submissions. That would mean less content being added to AI-Plans.com and slower progress towards our goal of creating a comprehensive compendium of alignment plans and criticisms. Even so, this failure mode is not very likely: we already had more than 10 participants on the day we announced the critique-a-thon.

What other funding are you or your project getting?

This project and AI-Plans.com are currently unfunded passion projects. The requested $500 for prizes would be the first and only external funding the site has received thus far.

Similar projects
  • AI-Plans.com, by Kabir Kumar. Science & technology, Technical AI safety, AI governance. $5.37K raised.

  • AI-Plans.com: Alignment Research Platform, by Kabir Kumar. $0 raised.

  • Ranked, Contributable Compendium of Alignment Plans - AI-plans.com, by Kabir Kumar. "Making a simple, easy-to-read platform, where alignment plans and their criticisms can be seen and ranked. Currently in Stage 1." Technical AI safety. $0 raised.

  • The AI Arena - ludi.life, by Francis Dierick. "Online platform where AIs and humans race to solve puzzles." Technical AI safety, AI governance, Long-Term Future Fund. $0 raised.

  • Run five international hackathons on AI safety research, by Esben Kran Christensen. "Six-month support for a Program Manager to organize and execute international AI safety hackathons with Apart Research." Technical AI safety. $10.9K raised.

  • Shallow review of AI safety 2024, by Gavin Leech. $20.9K raised.

  • AI For Humans Workshop and Hackathon at Edge Esmeralda, by Dhruv Sumathi. "Talks and a hackathon on AI safety, d/acc, and how to empower humans in a post-AGI world." Science & technology, Technical AI safety, AI governance, Biosecurity, Global catastrophic risks. $0 raised.

  • [AI Safety Workshop @ EA Hotel] Autostructures, by Sahil. "Scaling meaning without fixed structure (...dynamically generating it instead.)" $8.55K raised.