Manifund
[Funded] Gabriel Mukobi Summer Research

Technical AI safety

Gabe Mukobi

Not funded · Grant · $0 raised

Note: This project now has funding from outside Manifund, so probably look elsewhere for opportunities!

Project summary

Gabe is requesting $5000 to pay for LLM compute to run experiments.

From Gabe's proposal on Nonlinear Network:

  • I’m seeking around $5000 for AI model API compute funds (GPT-3.5/4, Claude, and PaLM) for multipolar coordination failure evaluations during the Existential Risk Alliance (ERA)’s summer research fellowship.

    • As a rough BOTEC, I might imagine running 12 experiments × 1024 data points per experiment × 8192 tokens per data point × $0.045 / 1000 tokens for GPT-4 ≈ $4530.

    • The amount of funding is somewhat flexible. With less funding, I’d just be able to run fewer experiments or have to use worse models (like gpt-3.5-turbo). With more funding, I’d have more room to run more complicated experiments.

    • I commit to giving back any extra funds remaining at the end.

  • I’ll be working on this project with mentorship from Alan Chan (Mila, Krueger Lab) and Jesse Clifton (CLR, Cooperative AI Foundation).

    • Unfortunately, ERA and these mentors do not have clear compute budgets they could allocate for my project, which is why I’m seeking funding.

    • That said, it’s probably not the worst outcome if you didn’t fund this: I might still be able to get compute funds through them, it would just be more difficult.
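The BOTEC in the quote above can be sanity-checked with a few lines of arithmetic. This is just a verification sketch; the quantities and the $0.045 / 1K-token GPT-4 price are taken from the proposal as stated, not from current API pricing:

```python
# Cost check for the quoted BOTEC (prices/quantities as stated in the proposal).
experiments = 12
points_per_experiment = 1024
tokens_per_point = 8192
usd_per_1k_tokens = 0.045  # GPT-4 rate quoted in the proposal

total_tokens = experiments * points_per_experiment * tokens_per_point
cost_usd = total_tokens / 1000 * usd_per_1k_tokens
print(f"{total_tokens:,} tokens -> ${cost_usd:,.0f}")  # 100,663,296 tokens -> $4,530
```

The product comes to about $4,530, matching the figure in the proposal (the stated $5,000 ask leaves a small buffer).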

Other sources of funding

Gabe is considering applying to an OpenAI grant program or receiving compute from his lab.

Similar projects
Gabe Mukobi
Empowering AI Governance - Grad School Costs Support for Technical AIS Research
9-month university tuition support for technical AI safety research focused on empowering AI governance interventions.
Technical AI safety, AI governance · $0 raised

Ethan Josean Perez
Compute and other expenses for LLM alignment research
4 different projects (finding RLHF alignment failures, debate, improving CoT faithfulness, and model organisms)
Technical AI safety · $400K raised

Lawrence Chan
Exploring novel research directions in prosaic AI alignment
3 month
Technical AI safety · $30K raised

Alex Cloud
Compute for 4 MATS scholars to rapidly scale promising new method pre-ICLR
Technical AI safety · $16K raised

Agustín Martinez Suñé
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents
Seeking funding to develop and evaluate a new benchmark for systematically assessing safety of LLM-based agents
Technical AI safety, AI governance · $1.98K raised

Scott Viteri
Attention-Guided-RL for Human-Like LMs
Compute Funding
Technical AI safety · $3.1K raised

Jaeson Booker
Funding to attend AI Conclave
A month-long on-site campus to deeply understand and shape AI
$0 raised

Lucy Farnik
Discovering latent goals (mechanistic interpretability PhD salary)
6-month salary for interpretability research focusing on probing for goals and "agency" inside large language models
Technical AI safety · $1.59K raised