
Ambitious AI Alignment Seminar

Technical AI safety, Global catastrophic risks

Mateusz Bagiński

Proposal (Grant)
Closes February 24th, 2026
$0 raised
$20,000 minimum funding
$179,520 funding goal


Project summary

We are going to gather ~35 exceptional people at the Hostačov Chateau in the Czech countryside for a five-weekend seminar running from March 13th to April 13th (though we may push the start date back as late as May if we do not secure sufficient funding in time). The seminar will engage participants with a wide range of technical AI safety topics so that they can develop a deep understanding of them, focusing on the topics we judge most likely to be important for taking serious shots at superintelligence alignment.

The threshold of $179,520 is the amount required to prepare and run the month-long seminar (budget breakdown below). Additional funding will allow us to extend the retreat into a year-long program: the AFFINE Fellowship, which will involve awarding grants to the ~10 most promising candidates and co-locating them in several places where they can receive relevant support to continue their learning and research for another 11 months (one such place is CEEALAR, the "EA Hotel").

What are this project's goals? How will you achieve them?

The primary goals of the seminar, as well as of the fellowship it may be extended into, are the following:

  • Get more people who can genuinely understand and think about the problem of AI alignment and AI X-risk, so that they can take good shots at building pieces of a solution.

  • Have more people who can properly explain the issue to governments in a way that is productive (instead of backfiring).

  • Have people who can start reasonably shaped orgs once funding is abundant (which we expect to happen later this year or early 2027 at the latest).

  • The problem at hand is very difficult, so we do not expect novel and promising research outputs within the time frame of the program. It would, however, be a very welcome surprise.

We will achieve these goals through a carefully designed month-long intensive that prioritizes deep technical learning within a collaborative rather than competitive environment. The program structure differs fundamentally from other AI safety fellowships by emphasizing community formation and peer learning alongside technical rigor.

The month unfolds through four distinct phases designed to maximize both intellectual depth and collaborative relationships. Week 1 focuses on community formation, with participants rotating through different small groups to build relationships across the entire cohort while beginning to engage with foundational technical material. Week 2 transitions to intensive technical engagement as participants self-select into stable working pods of three to five people for deeper collaborative work. Week 3 reaches peak intellectual intensity with sustained deep technical work in established pods. Week 4 integrates learning through presentations and reflection while preparing participants for either continuation into the year-long fellowship or transition to other impactful work.

Rather than passively consuming lectures, participants will share their learning with each other through structured showcases and peer instruction, which research shows produces dramatically better retention than traditional formats. The Czech countryside setting removes urban distractions while providing space for both focused solo work and spontaneous collaboration. The program rhythm alternates between intensive technical engagement and explicit recovery time, preventing the burnout that plagues many month-long intensives. The design also accounts for predictable challenges—social overload, energy crashes, status competition—through structural choices rather than just good intentions.

Crucially, the selection for continuation into the year-long fellowship will happen because of collaborative excellence, not despite it. We're looking for participants who help others learn, who integrate across disciplines, and who build rather than hoard knowledge. The goal extends beyond producing ten individual researchers to creating a cohesive network that continues collaborating after the month ends, whether at CEEALAR or elsewhere.

Conditional on securing an additional $55k or more, the seminar will be extended into the year-long AFFINE Fellowship. (See here for an explanation of why a year-long fellowship is needed.)

How will this funding be used?

The first "valuable" (i.e., "we can use this money for something concretely useful in service of this project") threshold of $20k is meant to cover Mateusz's work on the retreat until getting the final decisions from our big funders on whether they finance the retreat and/or the fellowship (all in the scenario where funding for the retreat from other sources is not secured).

The second threshold of $179,520 will cover Mateusz's work on preparing the retreat as well as the costs of running it (including pay for him and the other staff).

The maximum amount of $1,561,120 will suffice to fund the entire Fellowship roughly as we would ideally run it. Intermediate amounts will be used to cover as much of the Fellowship as we can; roughly, less money means fewer fellows and/or smaller stipends. (We also provide a utility function over money, made with plex's tool, which you may want to use in your own funding applications as well.)

A detailed budget for the minimum amount is in the following table; the budget for the maximum funding amount can be made available upon private request.

(We made the budget before settling on the minimum useful value, so it does not take into account the $20k minimum.)

Normalized utility values are:

  • $180k -- 28%

  • $220k -- 34%

  • $1,560k -- 100% (concave-ish)
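As a rough illustration of how intermediate amounts translate into value, here is a minimal sketch that piecewise-linearly interpolates between the points above. The interpolation is our own simplifying assumption for illustration only, not the exact curve produced by plex's tool (the true curve is somewhat concave near the top):

```python
# Illustrative sketch only: piecewise-linear interpolation between the
# normalized utility points listed above. Amounts below the first point
# and above the last are clamped for simplicity.
POINTS = [(180_000, 0.28), (220_000, 0.34), (1_560_000, 1.00)]

def utility(amount: float) -> float:
    """Approximate normalized utility of a total funding amount in USD."""
    if amount <= POINTS[0][0]:
        return POINTS[0][1]
    if amount >= POINTS[-1][0]:
        return POINTS[-1][1]
    for (x0, y0), (x1, y1) in zip(POINTS, POINTS[1:]):
        if amount <= x1:
            return y0 + (y1 - y0) * (amount - x0) / (x1 - x0)

print(round(utility(700_000), 2))  # ~0.58 under this linear assumption
```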

Who is on your team? What's your track record on similar projects?

Mateusz Bagiński - Lead (technical, applications)

Mateusz studied cognitive science (BSc, MSc) and worked as a programmer at a startup developing software for enhancing collective sense-making. After completing his dissertation, he decided to transition into technical AI safety research: upskilling, helping build AI Safety Info, and participating in several AI safety hackathons. Eventually, he settled on theoretical/agent foundations research as the field that is most important, most neglected, and best suited to his interests and skills. He was a PIBBSS Fellow in 2024, mentored by Tsvi Benson-Tilsen (ex-MIRI).

Mateusz will be responsible for designing the program, selecting the candidates, and ensuring that everything runs as smoothly as possible on the research side. The latter will involve helping the participants with their learning and research (acting as a sort of secondary mentor), connecting participants with mentors, resources, or other participants, and generally being on the lookout for ways in which the program could be improved.

Sofie Meyer - Humans Lead

Sofie's background is in cognitive neuroscience (BSc, PhD, postdoc, Google Scholar) and several experiential practices and trainings: ten years of Zen meditation, two years of existential psychotherapy training, six months of circling facilitation training and certification, five years supporting co-counselling courses, two years volunteering at Maytree Sanctuary, and three months of teaching cognitive behavioral therapy group facilitation skills at Rethink Wellbeing. She also facilitates Core Transformation, Focusing, and Internal Family Systems processes.

Professionally, she has led user research at two mental health tech startups, one focused on depression and tracking cognitive effects of medication, the other on using cognitive behavioral therapy to treat social anxiety in working women. Currently, she designs AI chatbots for global health at Turn.io and serves as Chair of EA Denmark and board member of Giv Effektivt (LinkedIn).

She loves facilitating nuanced conversations and creating space and emotional safety to enable brilliant people to truth-seek. She aims to bring compassionate, well-regulated, honest, evidence-based support and tools to humans and teams navigating complex cognitive and emotional challenges.

Attila Ujvari - Event design

As Executive Director of CEEALAR, he's transforming a residential facility in Blackpool into a professionalized incubator for AI safety researchers and entrepreneurs working on GCR reduction. Over the past six months, he's revitalized the infrastructure and implemented productivity frameworks and community systems that have dramatically improved resident outcomes.

Before CEEALAR, Attila spent 15+ years building systems that unlock human potential: managing cross-functional teams of 18+ at Ericsson, overseeing operations for 1,100+ soldiers across four continents in the Army National Guard, and scaling operational processes as Director of Operations at V School. He's taught professional courses, provided career counseling and academic planning in college, and tutored students navigating complex learning pathways.

His foundation in Hungary runs intensive hackathons that bring cross-disciplinary groups together around singular problems—exactly the dynamic needed here. As a group embodiment facilitator, he creates experiences that connect people not just professionally, but holistically.

He's not an AI safety researcher, but the person who builds the conditions for researchers to do their best work. This seminar needs someone who understands how to design intensive learning experiences, manage group dynamics at scale, and create the rhythms that turn ambitious people into effective collaborators.

DeAnza College, Stanford University, Amherst College.

TBD - Ops & Volunteer Lead

The venue provides food and basics, but we'll want a full-time person to make all the thousand minor things work, probably assisted by volunteers.

plex - Vision & Network

plex has dedicated almost his entire adult life and the vast majority of his funds to trying to avert the AI apocalypse. The world is still nowhere near safe, so this has not yet been sufficiently successful, but he has built or inspired many neat things, including a weirdly high fraction of the existential safety ecosystem's infrastructure.

What are the most likely causes and outcomes if this project fails?

We actually consider it very likely that the project "fails" in the sense that it completes with none of the Fellows producing clearly promising research outputs or directions toward building pieces of a solution. The cause of this would be that the problem being tackled is extremely difficult, very slippery, and offers poor feedback loops with reality.

However, even in that case, the three theories of change we outlined in the section above will still likely be achieved: we are going to have more people who can (1) think about the problem; (2) explain it to governments; (3) be able to start good technical AI X-risk-reducing orgs when funding becomes abundant.

The primary type of "disappointing failure" we can foresee befalling this project is failing to produce promising individuals with a deep understanding of the alignment problem. The most likely causes would be failing to recruit the right people or to provide them the right sort of support (in terms of environment, including social environment, and mentorship).

In order to prevent this failure mode, we are going to do all of the following:

  1. Get a large pool of potentially useful mentors.

  2. Mateusz will be continuously assessing how the program is going for every participant.

  3. We will have a full-time employee specialized in working with humans (Sofie), so that obstacles such as demotivation due to a lack of clear results, the emotional weight of the problem, or mental health problems more generally are less of a hindrance to the participants' journeys.

  4. We will utilize our extensive social networks, as well as high-quality paid services, to recruit highly promising individuals.

  5. We are going to use CEEALAR as a well-proven longer-term environment for researchers.

How much money have you raised in the last 12 months, and from where?

Zero. We just started.

We are in conversation with a donor who is potentially interested in funding the retreat (fully or partially). One function of this post is to gather the opinions of relevant people in public, so that the donor can be better informed about the value of the endeavor proposed here.

Additional info

Selection criteria for the fellows:

  • Highly technically skilled (e.g., maths, technical philosophy, finance, founder/CEO types, sharp PhDs/researchers in various fields, top-level science communication, etc.)

  • Would care about saving the world and all their friends if they thought human extinction was likely.

  • Decent team players, non-disruptive to group cohesion.

  • (Existing understanding of AI Safety is not required. Starting with a ~blank slate is fine and good.)
