Transformational Coaching for AI Alignment Researchers

Technical AI safety · AI governance · EA community · Global catastrophic risks

Jared Lucas

Not funded · Grant · $0 raised

Project summary

I provide transformational 1-on-1 coaching to AI alignment researchers and existential risk contributors, helping them reduce burnout, navigate complexity, and stay aligned with their mission. This grant will subsidize 3-month coaching engagements for individuals who lack institutional support, improving retention, clarity, and resilience in a field where mental load and moral stress are high. By supporting the people behind high-stakes alignment work, this project strengthens the foundation for long-term impact.

What are this project's goals? How will you achieve them?

Goals:

  1. Reduce burnout and cognitive overload among AI alignment researchers

  2. Increase clarity, emotional resilience, and purpose alignment

  3. Improve retention and output in the AI safety field by supporting key contributors

How:

  • Deliver 3-month 1-on-1 coaching engagements to 15–20 alignment researchers

  • Use presence-based, non-coercive coaching to help clients navigate internal blocks

  • Offer subsidized or grant-funded access to those without organizational support

  • Track qualitative outcomes via feedback, testimonials, and post-engagement reflection

  • Engage with alignment orgs and events to reach and serve contributors most at risk of attrition

How will this funding be used?

The funding will subsidize 3-month 1-on-1 coaching engagements for AI alignment researchers, covering coaching compensation, self-employment taxes, essential tools (e.g., scheduling, encrypted communication), outreach to under-resourced contributors, and a small contingency buffer. It enables me to provide high-quality support at no or reduced cost to 15–20 individuals who are contributing to existential risk reduction but lack institutional well-being support.

Who is on your team? What's your track record on similar projects?

I’m the sole team member and founder of Jared Lucas LLC. Over the past three years, I’ve coached 20 high-capacity individuals, including AI alignment researchers, founders, and mission-driven leaders. I’m trained in the Aletheia coaching paradigm, integrating presence-based methods and deep metaphysical inquiry. Clients report reduced burnout, clearer decision-making, and renewed alignment with their purpose. This project builds directly on that track record with a targeted focus on existential risk contributors.

What are the most likely causes and outcomes if this project fails?

The most likely causes of failure would be insufficient outreach to the intended AI alignment researchers, or lower-than-expected uptake of coaching engagements. If that occurs, the primary outcome would be underutilization of the available coaching capacity, limiting the project's impact. To mitigate this, I plan to partner with alignment-focused orgs, attend key events, and adapt offerings based on demand. Even in a low-uptake scenario, a smaller cohort would still receive deep, high-leverage support.

How much money have you raised in the last 12 months, and from where?

In the last 12 months, I have not raised any external funding. All activities have been self-funded through personal savings and early revenue from a small number of paying coaching clients. This is my first formal fundraising effort for this project.

Similar projects

Jaeson Booker
Jaeson's Independent Alignment Research and work on Accelerating Alignment
Collective intelligence systems, Mechanism Design, and Accelerating Alignment
Technical AI safety · $0 raised

Sandy Fraser
Concept-anchored representation engineering for alignment
New techniques to impose minimal structure on LLM internals for monitoring, intervention, and unlearning.
Technical AI safety · Global catastrophic risks · $0 raised

Lawrence Chan
Exploring novel research directions in prosaic AI alignment
3 month
Technical AI safety · $30K raised

Ronak Mehta
Coordinal Research: Accelerating the research of safely deploying AI systems.
Funding for a new nonprofit organization focusing on accelerating and automating safety work.
Technical AI safety · $40.1K raised

Alex Lintz
Funding for AI safety comms strategy & career transition support
Mostly retroactive funding for prior work on AI safety comms strategy as well as career transition support.
AI governance · Long-Term Future Fund · Global catastrophic risks · $39K raised

Michaël Rubens Trazzi
Making 52 AI Alignment Video Explainers and Podcasts
EA Community Choice · $15.3K raised

Siao Si Looi
Building and maintaining the Alignment Ecosystem
12 months funding for 3 people to work full-time on projects supporting AI safety efforts
Technical AI safety · AI governance · EA community · Global catastrophic risks · $0 raised