Nathan Thornhill
An ORCID-gated submission pipeline where a multi-model AI panel plus quality-control layer delivers rigorous peer review without institutional gatekeeping.
Joshua Michael Sparks
Stage 0 bridge for a focused-ultrasound research program targeting the neural architecture that maintains chronic suffering.
Vangelis Gkagkelis
A 6-month pilot testing probabilistic forecasting for AI, misinformation, institutional trust, and social risk in Greece.
Matthew A Cator
Funding the open-source launch of a working claim-state system and the local firewall bridge that carries verification before voice into governed agent action.
Alex Kwon
If your reward model is an LLM, you cannot tell whether the policy is gaming the reward or actually getting better. We built a simulator instead.
José Wheeler
Identifying and auditing reasoning circuits in LLMs within Algoverse 2026 using Sparse Autoencoders (SAEs).
Aashka Patel
Inspiring India’s Middle-Schoolers to pursue AI Safety, Governance, and X-Risk Work
Sardor Razikov
First quantitative framework for measuring when LLMs surrender independent reasoning under authority pressure
Zaelani
18+ preprints across multiple fields, all written on a 2GB RAM phone. $600 removes the only thing standing between me and the next body of work.
Francisco Antonio Da Costa Barroso
Independent researcher in Brazil scaling a validated sparse architecture to 1B, plus open interpretability tooling for expert routing. 6-month runway grant.
Jessica Pu Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks posed by artificial intelligence.
Ida-Emilia Kaukonen
A 15,000+ page corpus on long-term interaction, symbolic language, unusual model behavior, and safety edge cases.
Ray Hsu
Non-invasive AI acoustic sensor to detect Varroa mites without killing bees, replacing destructive testing.
Studying more human-like intelligence through constraint-aware, curiosity-driven agents on ARC-AGI-3
Ahmed Dawoud
An advanced agent that perceives your screen and executes tasks by controlling the mouse, acting as a digital proxy to handle complex work on your behalf.
Dhruv Yadav
Auditing and improving LLM-as-a-judge systems via interpretable aggregation of preferences
Haakon Huynh
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this standard :)
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Lawrence Wagner
A benchmark for studying how failures spread across multi-agent AI systems and whether they can be detected and interrupted in time.