
Jaeson's Independent Alignment Research and work on Accelerating Alignment

Technical AI safety

Jaeson Booker

Grant (not funded, $0 raised)

Project summary

Alignment Research: continue my research into collective intelligence systems for alignment and mechanism design for AI Safety.

First write-up here: https://www.lesswrong.com/posts/2SCSpN7BRoGhhwsjg/using-consensus-mechanisms-as-an-approach-to-alignment

Mechanism design for AI Safety: https://www.lesswrong.com/posts/4NScyGegfL7Dv4u7G/mechanism-design-for-ai-safety-reading-group-curriculum

Current rudimentary forms of collective intelligence networks: https://drive.google.com/file/d/1VnsobL6lIAAqcA1_Tbm8AYIQscfJV4KU/view

Accelerating Alignment: use AI Safety Strategy to further accelerate alignment by supporting others' alignment endeavors: onboarding, mentoring, and connecting people with relevant research and organizations. Funds would be used only to accelerate alignment, not to lengthen timelines. We are growing in numbers, and I have many ideas for how this ecosystem can be developed further. We recently funded a prize pool hosted by AI Safety Plans, and have many more ideas in mind for the future. With AI Safety Support gone, we need new organizations to fill the void and provide the assistance aspiring alignment researchers need.

Discord group: https://discord.gg/e8mAzRBA6y

Website: https://ai-safety-strategy.org/

What are this project's goals and how will you achieve them?


The goals are to find novel solutions and new angles for tackling the alignment problem, to accelerate the onboarding of new talent into alignment work, and to improve the overall trajectory toward a better future.

How will this funding be used?

One year's salary: $96,000 USD

Funding prizes and other ideas for accelerating alignment: $25,000 USD

One full-time or several part-time hires for onboarding and mentoring prospective alignment researchers: $100,000 USD

These tiers will (roughly) be funded in sequence, each once the previous one is satisfied. For example: if I receive enough funding for an annual salary, I will begin funding prizes; if I receive enough to also fund prizes, I will bring new talent onto the team.

What's your track record on similar projects?

Organizational: I have founded several tech startup companies, led several teams (including as Project Manager), been a founding member of several other companies, and completed most of a Master of Business Administration.

Mechanism Design: I have experience in mechanism design and consensus engineering, including my work at MOAT (creating the first decentralized energy token for the BSV network), Algoracle (contributing to the white paper for the first oracle network on Algorand), designing a form of decentralized voting for companies, and helping incentivize philanthropy at Project Kelvin. I also worked as a Senior Cybersecurity Analyst, auditing blockchain contracts for security vulnerabilities.

AI Safety: I took the AI Safety Fundamentals courses (both Technical and Governance) in 2021. I worked on building a simulation for finding cooperation between governments on AI safety while staying at the Centre For Enabling EA Learning & Research (CEEALAR). I received a grant from the Centre for Effective Altruism and Effective Ventures to further my self-study of alignment research. I attended SERI MATS in the fall, under John Wentworth's online program. I have also read extensively on the topic and contributed to various discussions and blog posts, one of which won a Superlinear prize.

Other: I also TA'd and helped design the curriculum for the first university blockchain class while in undergrad, and have mentored and offered consultation to people new to the field.

What are the most likely causes and outcomes if this project fails? (premortem)

The most likely cause of failure is that alignment is hard, and getting more people working on the problem doesn't guarantee results.

What other funding are you or your project getting?

I have so far received $1,000 for my alignment research.
