Thanks @Austin!
The market for grants
Manifund helps great charities get the funding they need. Discover amazing projects, buy impact certs, and weigh in on what gets funded.

Matthew A Cator
1 day ago
Clarifying the smallest falsifiable version of this grant:
The near-term goal is not to prove that Golem Physics is “the” solution to hallucination or agent control. The goal is to make the current working system inspectable by outside reviewers.
A successful $5k phase would produce:
1. A reviewer walkthrough showing source material → claim extraction → lattice coordinate → verification state → speech/silence decision.
2. Updated public metrics and runtime traces.
3. 20–50 end-to-end claim traces that reviewers can inspect manually.
4. A benchmark plan comparing Golem against simpler baselines such as RAG with citations, LLM self-checking, and structured provenance systems.
5. A short Golem-to-Constraint-Native walkthrough showing how verification-before-voice maps to proof-gated action-before-execution.
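One way to picture a single end-to-end claim trace from item 1 above, as a record a reviewer could inspect manually (a hypothetical sketch; the field names, enum values, and decision rule are illustrative, not Golem's actual schema):

```python
from dataclasses import dataclass
from enum import Enum

class VerificationState(Enum):
    VERIFIED = "verified"
    UNVERIFIED = "unverified"
    CONTRADICTED = "contradicted"

@dataclass
class ClaimTrace:
    """One inspectable end-to-end record: source -> claim -> coordinate -> state -> decision."""
    source_material: str                  # where the claim came from
    extracted_claim: str                  # the claim pulled from the source
    lattice_coordinate: tuple            # position in the verification lattice
    verification_state: VerificationState
    speak: bool                           # final speech/silence decision

    @staticmethod
    def decide(state: VerificationState) -> bool:
        # Verification-before-voice: only verified claims are voiced.
        return state is VerificationState.VERIFIED

# Illustrative trace (all values invented for the sketch)
trace = ClaimTrace(
    source_material="doc-17, paragraph 3",
    extracted_claim="The melting point of gallium is about 30 C",
    lattice_coordinate=(4, 2, 1),
    verification_state=VerificationState.VERIFIED,
    speak=ClaimTrace.decide(VerificationState.VERIFIED),
)
```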
The thing I most want from funders/reviewers is not blind confidence. It is critique on whether these artifacts would make the system legible enough for a stronger follow-on evaluation.
Connacher Murphy
2 days ago
We have released a live benchmark at https://agentisland.ai/. Give it a look!
We're running more games now and testing for provider preference in the output data.
Godbless A. Arhin
2 days ago
I am available to answer any questions you may have concerning the project.
Caroline Araujo de Oliveira
3 days ago
Project update / final update
Thank you to everyone who followed and supported this project.
This project was originally designed as a dedicated movement-building initiative to host in-person events in Colombia, Brazil, and Thailand, bringing together activists, influencers, local organizations, and other potential advocates to strengthen collaboration and promote more effective animal advocacy in the Global South.
Since the project did not receive the funding we hoped for through Manifund, we were not able to implement it exactly as originally proposed. However, with the support of other funding sources, we adapted the scope and integrated this work into our broader 2025 movement-building and campaign activities.
In practice, this allowed us to carry out a significant amount of community-building and public-facing work across several countries. According to our 2025 Year in Review, Sinergia Animal held more than 250 gatherings, training sessions, and other in-person or online activities throughout the year. Our activist base grew from 500 to 847 trained and engaged volunteers, representing a 69.4% expansion, and we expanded our reach into 13 new cities across Latin America and Southeast Asia. These activities strengthened our grassroots capacity and helped enable simultaneous campaign actions in up to five cities at once.
This work also supported broader campaign outcomes. In 2025, Sinergia Animal carried out 135 street actions across seven countries, helping increase public awareness, pressure companies, and support progress toward meaningful corporate reforms. We also secured 23 new corporate commitments to reduce animal suffering, including 16 cage-free egg commitments and seven pig welfare commitments, while our accountability campaigns led 12 major corporations, including Cargill, IKEA, and Colombina, to improve transparency and reinforce their cage-free transitions.
Beyond public actions and volunteer mobilization, we continued strengthening community engagement and development efforts. Our 2025 report highlights participation in VegFests in Argentina and Brazil, Colombia’s first wine-tasting fundraiser, and LATAM’s first Sinergia Day, which engaged around 70 supporters in Chile and Peru.
While the final format was different from the original Manifund proposal, the core goal remained the same: strengthening the animal advocacy movement in the Global South by bringing people together, creating opportunities for engagement, and building the grassroots capacity needed to achieve concrete wins for animals. We are closing this project with gratitude and with the understanding that, although the dedicated version of the project was not funded, its objectives were meaningfully advanced through adapted activities supported by other resources.
A slightly shorter, more "platform update" style version, in case the Manifund field is small:
Final update
This project was originally designed to fund dedicated movement-building events in Colombia, Brazil, and Thailand. Since we did not receive the funding we hoped for through this campaign, we adapted the scope and integrated this work into Sinergia Animal’s broader 2025 movement-building and campaign activities, with support from other funding sources.
In 2025, Sinergia Animal held more than 250 gatherings, training sessions, and other in-person or online activities, growing our activist base from 500 to 847 trained and engaged volunteers — a 69.4% expansion. We also expanded into 13 new cities across Latin America and Southeast Asia, strengthening our grassroots capacity and enabling simultaneous campaign actions across up to five cities.
This adapted work contributed to a strong year for our campaigns and community engagement. Sinergia carried out 135 street actions across seven countries, secured 23 new corporate commitments to reduce animal suffering, and helped 12 major corporations improve transparency and reinforce their cage-free transitions. We also participated in VegFests in Argentina and Brazil and organized community-led initiatives, including Colombia’s first wine-tasting fundraiser and LATAM’s first Sinergia Day, engaging around 70 supporters in Chile and Peru.
Although the final format differed from the original proposal, the core objective was meaningfully advanced: strengthening the animal advocacy movement in the Global South by creating opportunities for connection, training, public engagement, and collaboration.
We appreciate all the support we received here and invite you to keep up with our work. For more information, please refer to our 2025 Report.
Ahmed Abdelhamed
3 days ago
Hello @NeelNanda, I hope you can take a look, or at least tell me whether it's the right path. Thanks!!
Modeling Cooperation
3 days ago
@swante Thank you so much for supporting our work! What an incredibly generous donation; we truly appreciate it! Together with the matched funds from SFF, your contribution covers Modeling Cooperation's budget for 7 months, the majority of a year, so this is incredibly valuable for us. A heartfelt thank you from the whole Modeling Cooperation team. Jonas
Swante Scholz
3 days ago
I've heard good things about the Intelligence Rising workshops, and research on AI competition dynamics seems highly relevant. Happy to cover the requested $26k.
Austin Chen
3 days ago
I love the design of this site!
every donation platform should look like partiful
is a fantastic slogan, will consider it ;)
Jasmine Brazilek
3 days ago
More forecasters should become grant-makers; the skill overlap is excellent, and Marcus is highly capable. Rapid funds like this are exactly what the field needs, and it's plainly underfunded. I can't contribute financially, but I wanted to register my support.
Romain Deléglise
3 days ago
I have heard good feedback about Tom's previous work at PauseAI; this seems like a good opportunity to continue the project.
Guenin Nicolas
4 days ago
If Quark-AI captures the fossil, Transcendeur tests what the fossil becomes when placed under argumentative pressure.
Ryan Kidd
4 days ago
I am also making a small donation as a sign of support. I have a lot of respect for Marcus and this seems like an awesome initiative!
Ruby M.
4 days ago
I fear your demonstration puts too much weight on whether a piece of text sounds like it came from an assistant, which suggests a small training foundation of probably mostly assistant-prompted generated text. It is quite good at telling when something sounds like stock-standard AI slop, but I feel anyone can do that. When I change just a few characters (replace the em dashes with hyphens, replace the slanted quotation marks with standard ones, remove one period from each ellipsis, and swap the speakers' names for Speaker 1/Speaker 2), the estimate drops from -0.15 human (wrong) to -0.95 human (wrong).
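The surface edits described above are mechanical enough to script. A minimal sketch (the helpers `normalize` and `anonymize_speakers` are my own illustrative names, not part of the demo):

```python
def normalize(text: str) -> str:
    """Apply the character-level perturbations described above."""
    text = text.replace("\u2014", "-")  # em dashes -> plain hyphens
    text = text.replace("\u201c", '"').replace("\u201d", '"')  # slanted -> straight double quotes
    text = text.replace("\u2018", "'").replace("\u2019", "'")  # slanted -> straight single quotes
    text = text.replace("...", "..")    # remove one period from each ellipsis
    return text

def anonymize_speakers(text: str, names: list) -> str:
    """Replace named speakers with generic Speaker N labels."""
    for i, name in enumerate(names, start=1):
        text = text.replace(name, "Speaker {}".format(i))
    return text
```

If edits this shallow flip the score from -0.15 to -0.95, the detector is likely keying on punctuation style rather than content.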
I threw in a few passages from rather nuanced, heavy conversations I have had, as well as some conversations with a smaller model. It got perfect marks against the local llama model that fits on my GPU, but at the moment it seems to be worse than a coin flip for ambiguous text, which is confusing: ambiguous text is exactly the gap these detectors are supposed to fill, since obvious AI text is obvious. Pangram's research paper on their method seems to describe what you are doing, except done in a way that doesn't rely on AI to rate AI work; instead it attacks the false-positive rate by continually training on false positives, so the model learns what a false positive actually looks like, which in my opinion has improved Pangram's results enormously. I thought it was useless before, but their methodology is worth examining. Here, the explanations the models gave, both on the experimental page and the other ones I saw while looking at the network requests (as one broke), massively underestimated what an LLM can do and made sweeping assumptions about the depth or philosophical difficulty of a passage.
At times it seems to rate a passage purely on whether it sounds empathetic enough to be human and doesn't sound fake. Unfortunately, we live in a time where nuanced, well-spoken AI will be commonplace soon enough, so that's not good enough either. I struggle to know whether more training would solve this: if the task is to distinguish meaningful text from fake-meaningful text that an AI writes, and the judge is itself an AI, how can it know what is meaningful under that very definition?
I submitted all my entries with correct/incorrect feedback so you can review them if you want. It's an interesting idea for an alternative solution to this problem, with an uncommon training style for the purpose, but I fail to see the things it's intended to show off.
Mu Zi
4 days ago
Update Date: April 26, 2026
Updated Materials Added:
• I have prepared cleaner, more condensed current versions of the RStar paper draft and the external application packet.
• The revised paper is now centered on a narrower core claim: execution-time authorization continuity as a deterministic runtime invariant. The current framing is: RStar does not try to make probabilistic agents deterministic; it makes execution permission deterministic at the final dispatch boundary.
• The revised application packet also makes the next phase more concrete. The proposed 90-day plan focuses on real-framework replay evidence: adapters for agent frameworks, an 8–12 scenario authorization-drift matrix, with/without-RStar replay logs, core metrics, and a reviewer-facing walkthrough package.
• This update narrows the project scope rather than expanding it. RStar is not presented as a general-purpose agent governance platform. It addresses a specific execution-boundary question: after identity, policy engines, gateways, observability, and approval surfaces have done their work, is this exact action still authorized now under the current actor, thread, delegation chain, policy state, resource target, and evidence state?
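The execution-boundary question in the last bullet can be pictured as a deterministic check at the final dispatch point (a hypothetical sketch; every name here is illustrative, not RStar's actual API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorizationContext:
    """Snapshot of everything the dispatch-time check depends on."""
    actor: str
    thread: str
    delegation_chain: tuple
    policy_version: int
    resource_target: str
    evidence_hash: str

def still_authorized(granted: AuthorizationContext,
                     current: AuthorizationContext) -> bool:
    # Deterministic invariant: the action executes only if the context
    # under which permission was granted is unchanged at dispatch time.
    return granted == current

# Illustrative scenario: policy state drifts between approval and dispatch.
granted = AuthorizationContext("alice", "t-1", ("alice", "agent-7"),
                               policy_version=12,
                               resource_target="db/payments",
                               evidence_hash="abc123")
drifted = AuthorizationContext("alice", "t-1", ("alice", "agent-7"),
                               policy_version=13,  # policy changed mid-run
                               resource_target="db/payments",
                               evidence_hash="abc123")
```

The point of such a check is that it sits after identity, policy engines, and approvals have run, and answers only one yes/no question at the moment of execution.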
Current materials:
1. RStar_Workshop_Paper_v8.2_2026-04-26_1252_MuZi
2. RStar_Application_Packet_v1.4_2026-04-26_1252_MuZi