AI Digest is a site that aims to help folks more viscerally understand AI capabilities and their effects via interactive AI explainers and demos.
We started in Oct 2023, and have since published 10 explainers and demos: theaidigest.org
Reception to date has been promising, particularly for the Agent Village. We think that over the next year we can do much better. See our plans for scaling our distribution.
We’re currently a team of 3. We’re eager to grow our team.
We are fundraising to cover our next 12 months of operations starting May 2025. We describe what we’d do under three scenarios:
Continuation: $0.8M
Growth: $1.5M
Ambitious growth: $2.3M
AI Digest is a project of Sage, a US 501(c)(3) charity. Sage also makes forecasting tools (Fatebook, Quantified Intuitions), and our epistemics team is currently exploring new product directions in AI for epistemics. We’re fundraising for that too, separately.
Our long-term goal is to be a high-quality, trusted, widely-followed source making sense of rapid capability increases and their effects during the intelligence explosion. To set up for this, we’re aiming to build a large audience (especially among relevant stakeholders) and develop a reputation for high-quality work on interesting and important topics. We’re also aiming to have intermediate impact with our current and near-future explainers.
Our current theory of change is focused on building an audience of, and informing, the AI-interested portions of the educated/elite public and relevant policy professionals (including people in government but also at think tanks). We ultimately care about our actions resulting in improved decisions by policymakers and AGI companies, but we currently don’t optimize strongly for either of these groups (though we do give them extra weight in our decision-making). This is largely because policy isn't a direct area of expertise for our team, and we've had some feedback that the best way for us to affect policy in the long run, and more broadly have impact, is to give more weight to a broader audience. We see our top paths to impact under our current strategy as:
Affecting governments’ (especially the US government’s) decisions via the opinions/advocacy of many other groups: for example the AI-interested educated/elite public, general public, news media, and expert opinion.
Even if we don’t optimize strongly for policymakers, we’ll still directly reach policymakers if we build high-quality explainers that a high fraction of people find interesting.
We’ve seen evidence for this: e.g. the UK civil service uses our materials despite us not optimizing those materials for policymakers.
We’ve also received advice from policy experts that one of the best ways to scalably reach policymakers is via Twitter.
By informing people at AI companies and the broader AGI-company-adjacent community, we might be able to affect how AI companies behave without having to route through policymakers (we’re overall more excited about affecting government actions than this, but we’re not confident).
In the near-term we plan to continue putting special weight on policymakers but not strongly optimizing for them in particular (perhaps some explainers will be directly targeted at them, but not most). The most likely reason we would change this is if we hired someone with policy experience and ideally based in DC.
Viscerality: Show, rather than tell. Prefer hands-on interaction with model outputs.
Informativeness: The takeaways that readers form should be accurate and generalisable to help them more accurately predict AI and its effects.
Importance: Focus on the capabilities we expect to be most important for how advanced AI affects the world, e.g. capabilities relevant to speedup of AI R&D, agency, and deception.
Immediacy: Deliver as much value up front as possible, and keep things short.
You can find all of our projects on theaidigest.org
Potential donors can contact us (hello@sage-future.org) to see our full list of impact anecdotes, some of which are non-public. Here we'll share a high-level summary.
A core target audience of AI Digest is policy professionals. Given that we distribute AI Digest online for free, we can’t easily measure our reach and the effects of our resources for them.
Therefore, we plan to track importance-weighted anecdotes of usage, endorsement or collaboration as a proxy metric. Naturally, we expect the anecdotes we hear about to be a subset of the impactful uses of our tools. In particular, we expect that the most impactful uses of our resources might be very brief and hard for us to detect.
Here’s a made-up example of what that might look like: a White House aide uses a figure from one of our resources in a conversation about the rate of AI progress to inform a top official’s view, which subsequently shifts that official’s actions on an important decision.
We expect cases like these to be hard for us to detect, so our current plan going forward is to a) seek out anecdotes of impactful use proactively, b) track a cluster of other signals of quality and usefulness (e.g. general reach, endorsements), c) not get distracted by optimising too hard for more legible metrics like pageviews that don’t capture this model of impact.
We’re pretty uncertain about this and early in our thinking here, and are interested in feedback.
The UK civil service uses our Agent demo and How Fast is AI Improving in their training materials on Frontier AI
A video of us demoing the Agent Village was presented in a CAIP panel on agents to congressional staffers
Our resources are used in the Bluedot AI Safety Fundamentals and Intro to TAI courses, Kira bootcamps for EU policymakers, Emerging Tech Policy's AI policy reading list, Arcadia Impact's technical AI governance course
Endorsements:
A senior policy advisor for AI at the White House (likely one of the top few most powerful people in the Trump admin on AI) follows us on Twitter
Noam Brown used a screenshot of How Fast is AI Improving in his NeurIPS slides
We've had public endorsements from various folks in AI Safety, e.g. Luke Muehlhauser and Jaime Sevilla, and we've been cited by Forethought
Mailing list: We have ~15k mailing list subscribers – see stats
Twitter: We've had 1.6M impressions on Twitter in the last 12 months, largely since the release of the Agent Village and our explainer on METR's time horizons paper. Our account’s followers also rose from ~800 to ~3800 over the last ~2.5 months (as of Jun 5, 2025). We’ve seen engagement from notable people, including technical staff at OpenAI, Google DeepMind, Anthropic, xAI and Meta.
Reddit and Hacker News: We've had ~1M total views on Reddit posts, largely in AI subreddits. We expect highly impactful readers are much less common on Reddit than Twitter. Some posts aim to provide direct value off-site, e.g. showing a video panning down the Timeline of AI Forecasts. Others aim to direct traffic to AI Digest (example). How Fast is AI Improving reached the front page of Hacker News; subsequent articles have not gained traction there.
Press, podcasts, YouTube: The Agent Village was featured in TechCrunch – they reached out to us and interviewed Adam. We're also likely to appear on Nathan Labenz's Cognitive Revolution podcast soon. Previously, Adam was also interviewed by WIRED and The Deep View about our elections demo after we reached out to them, but they didn’t write an article on it. YouTuber Wes Roth organically featured the Agent Village in a video about agent swarms (48k views).
Paid marketing: Early low-spend Google Ads experiments look promising for growing our mailing list.
We have lots of ideas for explainers and demos that we’re excited to create – we’re strongly capacity-constrained here. In particular, the Agent Village is now an ongoing live project that Zak will likely spend most of his time on over the next few months, so to execute on more of our ideas we'll need to grow the team.
Some representative upcoming projects:
An explainer on Chain of Thought faithfulness (Shoshannah is currently writing this)
Overhauling How fast is AI improving? – bringing our most popular explainer, published in 2023, up to date, and designing it to stay up to date as the SOTA advances
Creator bias – are LLMs biased in favour of their creators, and of the factions their creators are aligned with or aim to curry favour with?
These projects are our current top contenders for the near term, but we’re not necessarily committing to build them. We have over 40 other ideas at various stages of development and excitement.
Village: For the Agent Village, we also have lots of ideas: giving each agent independent goals (what if they conflict?), testing coordination failures, scaling up the village to 100 agents, allowing agents to clone, fork, and merge themselves to give a preview of Dwarkesh's AI Firm. We think it'd be very promising to run the village 24/7, but this would require substantial additional funding (~$500k/year on top of our main funding scenarios) or research credits from model providers.
Distribution: We’re currently aiming to improve the distribution of our resources. We’ve plausibly under-invested in this historically (partly due to our initial team’s strengths in product, the subject matter, and engineering over distribution or marketing). For example, we've recently talked to policy professionals, tested paid marketing, had our first media coverage, and reached out to policy upskilling programs. We've also had interest, e.g. from UK AISI and METR, in us helping them improve their demo presentation and explain their research findings.
Adam Binks – Director. LinkedIn, website
Sage role: Adam is director of Sage, leads on AI Digest, and also led our epistemics projects until ~Jan 2025. He manages the team, and leads on strategy, hiring, product direction, impact evaluation and fundraising. He also builds AI Digest explainers and demos, works on distribution, and helps maintain our forecasting tools Fatebook and Quantified Intuitions
Previously, Adam was a PhD student in Human-Computer Interaction, on mapping tools for forecasting and thinking. He left his PhD to work on Sage. He also worked as a researcher at Clearer Thinking
Zak Miller – Member of Technical Staff. LinkedIn
Zak builds interactive explainers for AI Digest, and leads on Agent Village
Previously, Zak worked at Elicit (fka Ought), and before that was a tenure-track professor in philosophy at the University of Oklahoma
Shoshannah Tekofsky – Member of Technical Staff. LinkedIn
Shoshannah leads on distribution and outreach, and also writes explainers and blog posts
Previously, Shoshannah did an upskilling grant in AIS, worked as a data scientist and manager in the games industry, and earned a PhD in data science at Tilburg University and the MIT Media Lab. She's also funded by the EA Infrastructure Fund to run rationality workshops with EA Netherlands.
Good Structures (Abraham Rowe and team) – fractional COO / operations support
Eli Lifland – Founding Advisor. Resume, LinkedIn, Website
Coauthor of AI 2027, guest fund manager at LTFF, co-runs Samotsvety, and previously founded Sage
Misha Yagudin – Founding Advisor. Website
Co-runs Arb and Samotsvety
Aaron Ho – Founding Advisor. LinkedIn
Engineering at Meta, previously METR, and before that Sage's founding engineer
Daniel Kokotajlo – Advisor on Agent Village. Wikipedia
Coauthor of AI 2027, ex-OpenAI
We’re fundraising to cover our AI Digest operations from May 2025, for 12 months (to May 2026).
Note on "releases per month": now that we're spending a portion of our time expanding and reporting on the village (e.g., giving the agents new goals, writing blog posts like this one), "releases" going forward will be more continuous and our strategy might shift depending on the village's continued traction and rate of improvement in computer use capabilities.
We outline three scenarios:
Continuation: Funding to continue at our current scope and scale
2.75 FTE (Adam [0.75 FTE], Zak, Shoshannah), ops support (Good Structures), occasional contractors (e.g. expert collaborators on specific pieces, content expansion e.g. an AI Can or Can’t challenge writer), marketing at roughly current spend ($10k/year), LLM inference ready for village and other reasoning/agency-heavy demos ($50k/year)
We’d publish a new explainer or demo every ~1.5-2 months of similar quality to current. (We might also shift across the speed-scope tradeoff, with faster, smaller releases or slower, more ambitious releases)
Growth: Funding to continue growing the team on the current trajectory, to hit a monthly or twice-monthly major release cadence
4.75 FTE (adding two more researchers/engineers, bringing our total up from ~1.3 [Zak and Adam] to ~3), commensurate increase in ops support, occasional contractors, marketing at higher than current spend ($50k/year), LLM inference to cover the higher rate of releases, internal experiments, and usage ($70k/year)
We’d release new demos and explainers faster, which would let us cover more of the important topics, grow faster through more regular content, improve faster, and have more ability to rapidly respond to events (e.g. model releases)
Ambitious growth: Funding to grow the team to 6.75 FTE, aiming to prepare capacity to rapidly cover key developments during an intelligence explosion
Gradually scaling up to 6.75 FTE by ~end of year (as above plus an additional researcher/engineer and an additional person focused on distribution/policy outreach/partnerships/marketing), additional ops support, occasional contractors, marketing at higher spend ($100k/year), additional LLM inference ($90k/year)
We’d hit a regular release cadence of explainers and demos, and assemble a team prepared to explain, demonstrate and sift through rapid developments during a plausibly upcoming intelligence explosion, aiming to be a central information source during the ascent to advanced AI. This would probably include substantial experimentation with automating our pipelines (for internal sensemaking, content development, and distribution)
It would likely be challenging to grow at this rate (e.g., given our difficulties hiring in 2024). On the other hand, we expect that making additional high-quality hires will get easier as the Digest grows, as we work with more external collaborators, as our team (especially Adam) becomes more experienced in hiring, and as our network of potential hires grows
In addition to the above scenarios, with an extra $500k/year in funding we could run the Agent Village 24/7, which we expect would increase its impact substantially. This could also be covered by research credits from model providers, so if you can provide an endorsement or introduction feel free to get in touch (hello@sage-future.org).
Potential donors can contact us (hello@sage-future.org) for a full budget breakdown.
As of May 31, 2025, we have runway through ~Feb 2026. Going forward, we’d like to maintain at least a year’s runway at all times to help us make plans and hire top talent.
(Last updated June 5th 2025)
Last year: AI Digest received $550k in funding from Open Philanthropy. Our forecasting projects also received $550k from Open Philanthropy. We received a $10k speculation grant from Survival and Flourishing Fund, but didn't receive funding in the main round.
In 2025:
We received a $20k speculation grant from SFF, which also gives us entry into their main round.
Foresight Institute expects to make a $100k grant to support the Agent Village.
Our grant investigator at Open Philanthropy indicated that they’re sympathetic to funding us either at or ~$100-200k below the “Continuation” scenario. This is tentative, but if they do fund us at that level we’ll have a funding gap of ~$0-80k for Continuation, $580-780k for Growth, and $1.4-1.6M for Ambitious growth. We’ll likely have more info in late June. We'd strongly prefer to be funded at the Growth level so we have capacity to grow our team.
Thanks for considering funding our project! We'd be happy to discuss the above in more detail with potential donors or hear feedback on our plans (hello@sage-future.org).