David Glidden
3 days ago
What progress have you made since your last update?
See previous comment
What are your next steps?
See previous comment
Is there anything others could help you with?
See previous comment
David Glidden
3 days ago
We continue to host these meetups monthly! 10-20 attendees come each month, and we've had speakers from Manifold and PredictIt, as well as journalists and thought leaders like Robin Hanson.
While technically our pilot period has concluded, we are still open to funding for future meetups.
Shep Riley
3 days ago
Thank you @saulmunn!
I have not applied to the LTFF yet - my understanding was that this was more relevant to the EAIF, which is not taking applications from individuals, I believe for the same EV-related reasons.
I will look more into the LTFF though; I may have been wrong about that. Thanks again!
Saul Munn
3 days ago
donating* $20 to signal support. on a 20m scan, this looks promising! there were no parts of this doc that were obviously missing.
have you applied to the LTFF? if so, what's the status on that? if not, why not?
[*] for annoying operational reasons, i unfortunately can't send the money right now. @shepriley if it's been >a week since you've seen this and i haven't operationally donated the $20, please feel free to ping me!
Saul Munn
3 days ago
thanks for writing this up!
(1)
[I] stay up to date with global AI safety communities
could you explain what this concretely means? for example — do you read the Alignment Forum, the EA Forum, or LessWrong? do you read AI-safety-relevant papers? do you attend EAGs, or alignment-related events? do you regularly schedule calls with people in the AI safety community?
and more specifically, how much have you connected with existing AI safety community builders (particularly those in the SF Bay Area, London, and Boston)?
(2)
Currently, there are no ... collaborators.
i would strongly suggest finding some collaborators! makes everything more motivating & fun, and also straightforwardly multiplies how much work you can do.
(3)
All work to date has been volunteer driven.
what work is that? it'd be great to see any work you've done as a volunteer!
(4)
How will this funding be used?
[... full section ...]
it'd be useful if, in this section, you gave a broad breakdown of how you expect to split the money between the different expense categories. additionally, i think you should be a bit more detailed about the stipend you intend to pay yourself/your collaborators. (tbc, it seems totally reasonable & likely the right call to pay yourselves, but having more detail about this would be good.)
(5)
Who is on your team? What's your track record on similar projects?
Kristina Vaia: Connector, networker, and passionate advocate for AI safety. Excited to build a community around what I care about more than anything: connecting people and making AI safety accessible and actionable in Los Angeles.
i'd love to see more concreteness here. some more-specific prompts (but please try to answer the general "lacks specificity" point rather than over-indexing on these particular questions):
what have you worked on in the past? particularly stuff that's AI-safety- or community-building-related, but also just cool/interesting/ambitious things you've done.
what's your professional background?
what projects have you led or centrally organized in the past?
have you ever led a team of people? how did that go?
what's your level of knowledge about AI safety?
are there people who could talk about your past work (i.e. references)? if so, maybe drop their names?
where can we learn more about you (e.g. LinkedIn, personal website, blog, etc)? [note: consider hyperlinking to your LinkedIn in this section.]
etc
(6)
what is the current landscape of AI safety work in Los Angeles? to what extent are you plugged into it?
you said in a different comment:
There was an AI Safety Event in Marina del Rey last month. ... AE Studio in Venice, CA is an AI product development company with an active alignment team. ... There is also significant crossover between members of the Effective Altruism LA, LA Rationality, and AI safety communities. ... UCLA hosts an AI safety research club ... USC has an org for AI alignment and safety ...
i'd be keen for more details about (a) your understanding of the AI safety community in LA; (b) the extent to which you're currently plugged in.
some example prompts for concreteness (again, don't index too hard on these exact questions):
to what extent are you in contact with the university club organizers at USC and UCLA? or with AE Studio?
have you been to any AI safety events in LA? how many?
etc
(7)
Amount Raised: $0
have you applied for grants elsewhere (& in particular, the LTFF)? what's the current status of those applications?
if you've applied & heard back, what was the response?
if you haven't applied, why not?
Apart Research
3 days ago
Thank you so much for the thoughtful reflections and your support, Austin - it genuinely means a lot to our team.
Regarding Esben's transition, we want to provide more context to help clarify how we're approaching this inflection point. Esben has transitioned to chairman of the board and is serving as interim CEO, with the main task of finding a strong replacement together with the other board members and the rest of the leadership team (Jason and Jaime).
While Esben is building Seldon to complement Apart's work as a for-profit sister organization, we view the efforts as mission-aligned but distinctly scoped: Apart remains focused on accessible, public-good-driven AI safety research and mid-career talent incubation. The launch of Seldon creates an opportunity for even stronger synergies between mission-driven research and the rapid scaling of important AGI security technology.
With recent strategy retreats held in conjunction with the fundraiser, the whole team is excited about the changes currently underway: reprioritizing key projects, deploying learnings from our impact evaluations, restructuring our academic involvement for more direct impact, building out our Sprints team to run higher-frequency sprints and quarterly frontier research workshops, and reorganizing our org structure to accommodate all these changes and more.
As mentioned in our update, we are excited to keep all our supporters up to date and involved through a dedicated newsletter as we maximize the impact of every dollar donated during this campaign. Again, thank you, and thanks to everyone who's taken part in our journey.
- The Apart Research Team
Romain Deléglise
6 days ago
Observing the AI deceiving is probably not extremely difficult, and relatively useful, so go for it!
Apart Research
6 days ago
We're in the final countdown of our fundraising campaign now. We have raised a total of $619,921 so far, which takes us past our first four milestones toward our final goal of $954,800.
🫶 Massive thanks from our whole team to everyone who contributed:
...and many more
Notable progress since we launched this campaign:
Published the nearly 20-page Impact Report, a thorough overview of how Apart has impacted the world through talent development and direct research.
Hosted two hackathons: the Mechanistic Router Interpretability and ControlAI events
Received 158 testimonials from individuals around the globe who have directly benefited from Apart, in addition to 10+ from other organizations we have collaborated with
Received blog post submissions to the Studio from top projects at three hackathons, and accepted some to continue their work under the guidance of mentors from Apart, Martian, and ControlAI
5 blog posts have been published after being submitted to the Studio and receiving our feedback, with more coming out in the very near future.
Accepted 3 more projects into the Apart Lab Fellowship, all of which intend to submit to NeurIPS 2025 workshops in late August.
You can read in-depth reports of the progress during the campaign in our newsletter:
2.5-week update: Sharing that we got past the first milestone with Richard Ngo's generous donation.
3.5-week update: Announcing that our support had now blown past our fourth milestone and extending our campaign by another 25 days.
7-week update: The final update released together with this progress update.
During the fundraiser, we hosted a virtual retreat with the team to consolidate all our learnings and bring them into an updated and stronger Apart Research. We're really excited to share more with you in our donor newsletter over the next months.
Every dollar counts and our fundraiser is still going! Join more than 70 other private donors to keep Apart and AI safety going for the long term.
Thank you.
Austin Chen
7 days ago
Apologies for the delay, approved now as part of our portfolio to improve animal welfare!
Alexandria Beck
9 days ago
@Austin Is it possible to share how long the admin approval phase might take? ESR is eager to get started on this project. Thanks!
Kristina Vaia
9 days ago
Yup. There was an AI Safety Event in Marina del Rey last month, hosted by BlueDot Impact, AI Safety Awareness Project, and AE Studio; technologists, researchers, and students interested in AI safety participated. AE Studio in Venice, CA is an AI product development company with an active alignment team. The CEO (Judd Rosenblatt) is a well-known figure in the LA tech community and would be a valuable contact. There is also significant crossover between members of the Effective Altruism LA, LA Rationality, and AI safety communities; these groups usually share interests and members, making them great sources. UCLA hosts an AI safety research club focused on the development and impact of advanced AI systems; reaching out to the club's leadership and active members can help seed AISLA with more students and researchers. USC has an org for AI alignment and safety and can be contacted as well. There are also a ton of tech companies in LA that have AI teams: Snapchat, Hulu, Google, and Apple.
Neel Nanda
10 days ago
Do you know of specific people who would be excited about this community? Do you have a sense of specific people you'd reach out to? I think that having a sense of the latent demand would make evaluating how promising this is much easier.
Austin Chen
10 days ago
Approving this grant as a low-cost intervention to spread high-quality writing on AI safety. Thanks to Aidar for proposing this and Neel for funding!
Robert looman
10 days ago
Thanks for checking out GENESIS. I’m going all-in on building an AGI that isn’t just smarter, but understandable and safe. My prototypes already hit over 2 million tokens/sec on CPUs — no billion-dollar GPU farms required. Every reasoning path is traceable like code, which I believe is the only real path to scalable alignment. This isn’t just a replacement for current LLMs; it’s an evolution toward AGI that belongs to everyone, not just a few labs. Happy to answer any technical questions or talk through the vision. Let’s build something better.
Saul Munn
10 days ago
Not really, this amount of money we've gotten here is totally sufficient for our experiment needs. If we get a positive result we'll apply to several places for a more in-depth trial, but for now we're set.
gotcha, makes sense.
(Your link is to a chemistry Anki deck :-)
lol, thx 😅
Neel Nanda
10 days ago
I suggested Bryce apply, and have funded this for two months. Open source research tooling is really valuable for accelerating the work of people outside big orgs, TransformerLens is pretty popular, and I've often heard complaints about the problem this is solving.
Conflict of interest: I created TransformerLens (though I haven't been involved for a while), and several of my projects would benefit from this tooling, though only as a side effect of it benefiting the interp community as a whole. I don't financially benefit in any way from this.
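For readers unfamiliar with TransformerLens: here's a minimal sketch of the kind of workflow the library enables, based on its standard public usage (the model choice, prompt, and hook name below are illustrative examples, not anything specific to the project funded here).

```python
from transformer_lens import HookedTransformer

# Load a small pretrained model into TransformerLens's standardized architecture.
model = HookedTransformer.from_pretrained("gpt2")

tokens = model.to_tokens("Interpretability tooling lowers the barrier to entry.")

# run_with_cache returns the logits plus every intermediate activation,
# keyed by hook name: the core affordance interp researchers rely on.
logits, cache = model.run_with_cache(tokens)

# e.g. per-head attention outputs at layer 0: [batch, position, head, d_head]
print(cache["blocks.0.attn.hook_z"].shape)
```

Tooling that makes this one-line access to internals reliable and well-maintained is what lets researchers outside big orgs run interpretability experiments without rebuilding infrastructure each time.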