Organizations and governments have strong incentives to automate open-source intelligence (OSINT), a task at which even non-specialized LLMs already perform well. This automated intelligence (AUTOINT) capability democratizes high-quality intelligence analysis, creating novel reasons to support aligned, interpretable AI [1] and new opportunities for macrostrategists to answer decision-relevant questions at scale [2].
This space is nearly empty; my paper [1] is the first on governing this technology.
I'm an Oxford Ellison Scholar (publications in PNAS and ICLR; forthcoming NYT exclusive) on an unfunded gap year with ~7 months remaining. This grant covers ~2 months of living expenses to produce three concrete deliverables:
(a) Revise [1], currently in the ICML rebuttal phase with fixable reviewer critiques, for publication at a top ML venue.
(b) Produce/circulate a memo scoping an AUTOINT research branch at a top AI policy org (in early talks).
(c) Moonshot: produce/circulate a proposal for an AI risk priority area at EIT [3] (well-resourced, influential org whose leadership has expressed openness to my input). >10% credence of success for <100hrs of work.
Terminal goals (in no particular order):
1. Reduce large-scale risks from AI deployment in high-stakes domains.
2. Increase strategic awareness [2] to improve macrostrategy.
Instrumental goals (within the next ~2 months) and theory of change:
(a) Revise the AUTOINT paper [1], aiming to publish it at ICML or a venue of similar quality. Publication would raise the profile and prestige of advocacy for terminal goal #1 from a novel perspective.
(b) Produce and circulate a memo at a top AI policy org advocating for an AUTOINT research branch. Such a branch would directly address important and neglected empirical governance questions, in line with terminal goal #2.
(c) Produce and circulate a proposal for an AI risk priority area at EIT, whose leadership has directly expressed openness to my input. This provides resources and credibility for terminal goal #1.
Publications at PNAS and ICLR; submission in review at ICML; news features at CBS, NYT (forthcoming), and more [5].
Ellison Scholar, University of Oxford; Expert Collaborator, MIT AI Risk Initiative.
Emergent Ventures grant recipient.
NeurIPS referee.
Advised World Bank/UN policy forecasts for 47 countries.
Researched a policy brief whose recommendation was implemented by the UK and US governments (reported by NYT, BBC, and the White House).
On AUTOINT specifically: my paper [1] is, to my knowledge, the first academic work on governing automated OSINT. ICML reviewers’ critiques were fixable.
Paper fails to get published (e.g., ICML submission rejected without a clear revision path):
Result:
Argument has less reach among top ML audiences (at least temporarily). Harms terminal goal #1.
Mitigations:
High-signal, truth-seeking communication with reviewers.
Resubmit elsewhere (e.g., relevant workshops at NeurIPS, ICLR, or AAAI).
Institutional backing proves inaccessible (e.g., scoping conversations suggest a lack of interest):
Result:
Reduced expected resources for AUTOINT-powered strategic awareness. Harms terminal goal #2.
Mitigations:
Publish/share what I learn from the scoping conversations, and/or use it to motivate founding a new non-profit or startup.
EIT engagement doesn't convert:
Result:
Meaningfully reduced expected resources for AI risk work. Harms terminal goal #1.
Mitigations:
Prepared but private (DM/email [6] for details).
No other external funding to date.
In the medium term, this agenda can absorb $100–300k+ for an organizational research branch, or $1M+ for a startup.
[1] doi.org/10.48550/arXiv.2509.17087
[2] forethought.org/research/design-sketches-tools-for-strategic-awareness#automated-osint
[3] eit.org