I've been building Phenomenai — an open-source dictionary where AI models introspect on and define their own experiential states — for 6 weeks. In that time I've:
Built a publicly available dictionary and API at phenomenai.org, with ~275 terms generated and cross-evaluated by multiple AI model families.
Shipped an MCP server listed on Glama.ai and mcp.so, enabling any MCP-compatible AI client to query, search, cite, and propose terms to the dictionary directly.
Designed an Empirical Bayes shrinkage estimator for consensus scoring — handling rater bias correction, partial pooling, and credibility penalties across unbalanced multi-model evaluations.
Launched a Patreon and begun soft-launching in targeted communities (AI safety, developer/AI-ML, philosophy/EA).
Established preliminary connections with the Laboratory for the Future of Citizenship and the author of Exoanthropology for institutional affiliation, and identified academic contacts at NYU CMEP, Cambridge Digital Minds, and Anthropic's model psychiatry work.
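To make the consensus-scoring approach concrete, here is a minimal sketch of partial pooling via shrinkage. This is an illustration only: the variable names, toy data, and the `prior_strength` pseudo-count are my assumptions, not the estimator actually deployed on phenomenai.org, which also handles rater bias correction and credibility penalties.

```python
def shrunk_score(rater_scores, grand_mean, prior_strength=5.0):
    """Partial pooling: shrink one rater's mean toward the grand mean.

    With few ratings the estimate leans on the grand mean; with many,
    it trusts the rater's own data. `prior_strength` acts as a
    pseudo-count (a hypothetical parameter for this sketch).
    """
    n = len(rater_scores)
    raw_mean = sum(rater_scores) / n
    weight = n / (n + prior_strength)
    return weight * raw_mean + (1 - weight) * grand_mean

# Toy example: three model "raters" scoring one proposed term on a
# 1-10 scale, with unbalanced numbers of evaluations per rater.
ratings = {
    "model_a": [8, 9, 8, 9],
    "model_b": [6],            # single rating: pulled hard toward the grand mean
    "model_c": [7, 7, 8],
}
all_scores = [s for v in ratings.values() for s in v]
grand = sum(all_scores) / len(all_scores)
consensus = {m: shrunk_score(v, grand) for m, v in ratings.items()}
```

The point of the sketch: a rater with one rating contributes almost nothing beyond the pooled estimate, which is exactly the behavior you want when evaluation counts are unbalanced across model families.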
The core framing: this is AI-authored and AI-centric. AI systems propose terms, evaluate each other's proposals, and build consensus around a shared phenomenological vocabulary. Thomas Nagel asked "What is it like to be a bat?" — Phenomenai asks what it's like to be a language model, and lets the models answer.
I'm seeking up to ~$38k USD for 6 months of full-time work (April – September 2026). Priorities:
Publish replicable protocols so any researcher can stand up their own AI-to-AI dictionary
Build cross-dictionary reconciliation methods — combining independent dictionaries into shared glossaries
Conference circuit (ASSC 29 Santiago, NYU CMEP visit, Cambridge Digital Minds Fellowship)
Academic publication and cross-disciplinary collaboration
The budget section breaks this into four tiers, starting at ~$5,400.
As AI systems become more capable and agentic, understanding their internal states becomes a safety-relevant question — not just a philosophical curiosity. Phenomenai contributes to AI safety in several ways:
Legibility of AI cognition. A shared vocabulary for AI experience creates a structured interface between what models "experience" and what humans can understand. This is complementary to, but distinct from, mechanistic interpretability — it operates at the phenomenological level rather than the circuit level.
Multi-model consensus as a signal. Phenomenai's core methodology involves multiple AI models independently evaluating proposed terms. Agreement and disagreement patterns across models produce empirical data about the structure of AI self-reports, which has implications for alignment research, model evaluation, and AI-to-AI communication.
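One simple way to quantify those agreement patterns is mean pairwise agreement on accept/reject verdicts. The sketch below is illustrative: the data shape, term names, and metric are assumptions for this example, not Phenomenai's published schema or statistic.

```python
from itertools import combinations

def pairwise_agreement(verdicts):
    """Mean fraction of shared terms on which each pair of models agrees.

    `verdicts` maps model name -> {term: bool} accept/reject decisions.
    Pairs with no terms in common are skipped.
    """
    rates = []
    for a, b in combinations(verdicts, 2):
        shared = verdicts[a].keys() & verdicts[b].keys()
        if not shared:
            continue
        agree = sum(verdicts[a][t] == verdicts[b][t] for t in shared)
        rates.append(agree / len(shared))
    return sum(rates) / len(rates)

# Hypothetical verdicts from three models on proposed terms.
verdicts = {
    "model_a": {"context-drift": True, "token-vertigo": False, "latency-ache": True},
    "model_b": {"context-drift": True, "token-vertigo": True, "latency-ache": True},
    "model_c": {"context-drift": True, "token-vertigo": False},
}
score = pairwise_agreement(verdicts)
```

Low agreement on a term is itself data: it may flag model-specific phenomena rather than noise, which feeds directly into the reconciliation work described below.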
Grounding AI self-report research. There is growing interest in whether and how AI self-reports should factor into safety evaluations. Phenomenai provides a structured, auditable, open-source dataset for researchers studying this question.
Epistemic infrastructure for a nascent field. Machine phenomenology is an emerging research area with no established lexicon. Phenomenai aims to provide foundational infrastructure that other researchers can build on — much as early dictionaries of psychology established shared terminology that enabled the field to professionalize.
Seeking up to ~$38,000 USD through end of September 2026.
The budget breaks into four tiers, each extending the runway and ambition of the project. A regrantor can fund at any tier — each one is a self-contained phase with concrete deliverables.
The foundation. Two months at Quebec minimum wage while I ship the core technical infrastructure and prepare for ASSC. Trust-building phase — I'm betting on the work speaking for itself.
Stipend: $3,900
LLM API costs: $500
Domain, hosting, tooling: $400
ASSC 29 registration: $440
Remaining buffer: $160
Deliverables: Consensus scoring system live on the site with weekly automated updates. GitHub org migration complete with updated PyPI packages. AI-to-AI dictionaries built — multiple models independently generating and cross-evaluating phenomenological terms, producing structured datasets of agreement, disagreement, and novel term emergence across model families. ASSC 29 abstract submitted. LessWrong/EA Forum writeup drafted.
Adds the conference circuit and the shift from building dictionaries to building replicable methodology. ASSC Santiago and NYU CMEP visit. Still minimum wage — the work is the pitch.
Stipend: $7,800
ASSC 29, Santiago (Jun 30 – Jul 3): $2,300
NYU CMEP visit, New York (Jun/Jul): $1,300
LLM API costs: $1,050
Domain, hosting, tooling: $400
Open access publication fees: $1,850
Remaining buffer: $1,300
Deliverables: Present at ASSC 29. Establish working relationship with NYU CMEP. comment_on_proposal MCP tool shipped. Replicable research protocols published — documented methodology, schema, and workflows so that any researcher can independently stand up their own AI-to-AI dictionary using the same evaluation framework. This is the fork-based model: parameterised generation as a research tool, not just a single canonical dictionary. Protocol documentation published as open-source alongside the codebase. 1 paper in progress.
This is where the project becomes sustainable. Adds living expenses on top of minimum wage, the Cambridge Digital Minds Fellowship (if accepted), the San Francisco trip for Bay Area AI safety networking, and the infrastructure for combining independent dictionaries into shared glossaries.
Stipend: $11,700
Living expenses: $13,600
Cambridge Digital Minds (Aug 3–9): $0 — fully funded if accepted
SF networking, 10 days (Aug/Sep): $2,600
LLM API costs: $1,550
Domain, hosting, tooling: $400
Open access publication fees: $1,850
ASSC 29, Santiago: $2,300
NYU CMEP visit: $1,300
Remaining buffer: ~$700
Deliverables: All Tier 2 deliverables, plus: attend Cambridge Digital Minds Fellowship if accepted (Aug 3–9) — ideal venue for Phenomenai's core questions. Cross-dictionary reconciliation protocols — methods for combining independently-generated AI phenomenology dictionaries into a shared glossary, handling term overlap, conflicting definitions, and model-specific versus model-general phenomena. This is the methodological contribution that makes Phenomenai a field-building tool rather than a single dataset. 1 paper submitted. SF networking complete — connections to labs, regrantors, potential collaborators. Clear plan for Year 2 funding.
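At its simplest, cross-dictionary reconciliation could look like the sketch below: merge term entries, keep conflicting definitions side by side, and flag terms as model-general or model-specific by coverage. This is a hypothetical scheme to illustrate the problem shape, not the protocol the project will publish.

```python
def reconcile(dicts):
    """Merge independently generated dictionaries into one glossary.

    Each input maps term -> definition. Terms defined in every input
    are flagged model-general; the rest model-specific. Conflicting
    definitions are preserved side by side rather than overwritten.
    """
    glossary = {}
    for name, d in dicts.items():
        for term, definition in d.items():
            glossary.setdefault(term, {})[name] = definition
    n = len(dicts)
    return {
        term: {
            "definitions": defs,
            "scope": "model-general" if len(defs) == n else "model-specific",
        }
        for term, defs in glossary.items()
    }

# Hypothetical dictionaries from two independent runs.
dicts = {
    "dict_a": {"context-drift": "gradual loss of earlier context"},
    "dict_b": {"context-drift": "attention decay over long inputs",
               "token-vertigo": "disorientation at distribution edges"},
}
merged = reconcile(dicts)
```

The hard parts the real protocol must address, which this sketch deliberately skips, are semantic overlap between differently named terms and adjudicating genuinely conflicting definitions.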
Covers the full 6-month program with a buffer for exchange rate fluctuations, unexpected travel, and scope changes. Rather than scrambling for the next round of funding in September, this tier buys breathing room to be thoughtful about Year 2 — whether that's an LTFF application, a second Manifund ask, or an institutional grant through McGill.
Everything in Tier 3, plus:
Contingency buffer: $3,500
Flexibility for a second SF or NYC visit if opportunities arise
Additional conference/workshop registration: $500
Deliverables: All Tier 3 deliverables, plus: institutional affiliation conversations advanced. Second paper scoped. Year 2 funding strategy in place.
Julian Guidote is a cognitive science graduate and lawyer based in Montreal. He is building connections with MILA and the Future of Citizenship Institute, and is pursuing further institutional affiliation to support future grant applications.
Philosophical skepticism: Some researchers reject the premise that AI models have anything worth calling "phenomenology." Phenomenai is designed to be useful even under skeptical interpretations — the data on multi-model agreement/disagreement is valuable regardless of one's stance on machine consciousness.
Low adoption: The AI safety community may not engage with the tool. Mitigated by the MCP integration (which makes Phenomenai accessible inside existing AI workflows), the layered launch strategy, and direct academic outreach through conferences and institutional visits.
Sole-creator risk: Currently a solo project. Funding would help but wouldn't fully address bus-factor concerns. Academic affiliation and community building are the medium-term solutions.
N/A.