The first AI Safety Creator Affiliate Program

Science & technology · Technical AI safety · AI governance · EA community · Forecasting

Akshyae Singh

Proposal · Grant
Closes December 10th, 2025
$0 raised
$10,000 minimum funding
$55,000 funding goal

Project Summary

We're building the first creator affiliate program specifically designed to disseminate AI safety research to diverse audiences through micro-influencers. Rather than concentrating attention on a single large channel, we're testing a distributed approach: partnering with 100+ tech content creators (10k-100k followers) to translate complex AI safety research into accessible content for their unique audiences.

The insight: Marketing research consistently shows that 10 micro-creators with 50k followers each generate higher engagement and trust than 1 mega-creator with 500k followers (Stack Influence, 2025; Forbes, 2021). We're applying this principle to AI safety communication.

The Bigger Mission:

This program is part of our larger vision at Explainable (formerly Signal Creators, www.signalcreators.com): building comprehensive communication infrastructure for the AI safety field. We turn complex research from AIS organizations into powerful narratives, and we want to build a sustainable movement for AI safety communication that scales as the field grows. We also want to make AI safety a conversation that's relevant to everyone; partnering with micro-creators who represent their own communities and niches is the first step. Read more on the mission HERE


What are this project's goals? How will you achieve them?

Primary Goal: Reach 10M+ views of AI safety content across 100+ creators by Q1 2026.

Secondary Goals:

  • Validate the micro-creator distribution model for technical research at scale

  • Build replicable systems for creator onboarding and quality control

  • Generate empirical data on AI safety communication effectiveness across platforms, formats, and narrative approaches

  • Create a sustainable pipeline for translating technical research into accessible content

The Research Component:

With 100+ creators producing 250-300 videos, each with their unique take on the same underlying research, we want to generate data on:

  • Which narrative frameworks resonate with different demographics (threat-focused vs. opportunity-focused)

  • How technical depth affects engagement (ELI5 vs. intermediate breakdowns)

  • Platform-specific optimization (TikTok vs. YouTube Shorts vs. Instagram Reels)

  • Which metaphors and analogies best communicate complex AI safety concepts

  • Audience sentiment patterns through comment analysis

Current Progress:

  • 10 creators onboarded, producing content across platforms, 15+ videos posted

  • 1M+ total views achieved. Some examples:

    • The Neural Guide: "what does AI choose: Christians or atheists?" https://www.instagram.com/reel/DQXGdRTDIj6/?igsh=MzRlODBiNWFlZA%3D%3D&utm_campaign=&utm_medium=email&utm_source=newsletter

    • _Dbrogle: "GPT values Trump’s life at 1/3 billionth that of an American citizen" https://www.instagram.com/reel/DQYIQYLjDx9/?igsh=MzRlODBiNWFlZA%3D%3D&utm_campaign=&utm_medium=email&utm_source=newsletter

    • the_hybrid_professor: "so AIs have preferences now?" https://www.instagram.com/reel/DQWddDFDYO1/?igsh=MzRlODBiNWFlZA%3D%3D&utm_campaign=&utm_medium=email&utm_source=newsletter

    • Futurebriefing: "AI bias doesn't necessarily come from biased data" https://www.instagram.com/reel/DQYxlN9Dd_W/?igsh=MzRlODBiNWFlZA%3D%3D&utm_campaign=&utm_medium=email&utm_source=newsletter

  • Active outreach to 100+ additional creators in our pipeline (growing by 3-5 creators every 2 weeks; this pace is constrained because we don't have the budget to meet market rates)

  • Partnerships with the Center for AI Safety, Apart Research, and Control AI to distribute their research.

  • Refined processes for creator briefs, quality control, and performance tracking.

How We'll Scale:

  1. Creator Recruitment: Targeting creators in the 10k-100k follower range across YouTube, TikTok, Instagram, and Twitter who focus on tech, science, or futurism. We've identified 100+ potential partners through direct outreach.

  2. Content Pipeline: We translate technical AI safety papers into creator briefs with key talking points, visuals, and narrative hooks tailored to different audiences. Critically, we give creators flexibility in presentation; this creative freedom enables natural experiments in communication effectiveness.

  3. Quality Control: Pre-publication review ensures accuracy while respecting each creator's unique voice. We've refined this process with our first 10 creators.

  4. Performance Tracking: We're measuring views, engagement rates, demographics, comment sentiment, and content variables to optimize our approach continuously. We will publish basic tracking data here: https://explainable.work/metrics (to be updated).
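As a rough illustration of the kind of per-video record this tracking could produce, here is a minimal sketch. The field names and example values are illustrative assumptions; the actual schema at https://explainable.work/metrics may differ.

```python
from dataclasses import dataclass

@dataclass
class VideoMetrics:
    """One row of a hypothetical per-video performance tracker (illustrative only)."""
    creator_handle: str           # e.g. "the_hybrid_professor"
    platform: str                 # "tiktok", "youtube_shorts", "instagram_reels", ...
    brief_topic: str              # which research brief the video covers
    narrative_frame: str          # e.g. "threat-focused" vs. "opportunity-focused"
    technical_depth: str          # e.g. "eli5" vs. "intermediate"
    views: int                    # view count at the measurement date
    engagement_rate: float        # (likes + comments + shares) / views
    comment_sentiment: float      # aggregate sentiment score from comment analysis
    audience_demographics: dict   # platform-reported age / geography breakdown

# Illustrative record, not real data
example = VideoMetrics(
    creator_handle="example_creator",
    platform="instagram_reels",
    brief_topic="example research brief",
    narrative_frame="threat-focused",
    technical_depth="eli5",
    views=42_000,
    engagement_rate=0.06,
    comment_sentiment=0.2,
    audience_demographics={"18-24": 0.45, "25-34": 0.35},
)
```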

Why Micro-Creators Matter:

The 10k-100k follower range is the sweet spot:

  • Higher trust: Audiences perceive them as authentic and relatable

  • Better engagement: 2-5x higher rates than mega-influencers

  • Greater volume: Far more micro-creators exist, enabling diverse audience reach

  • Cost effectiveness: More views per dollar (see budget breakdown)

  • Niche targeting: Each creator reaches distinct audience segments

  • Natural experiments: More creators = more data on what works

This fundamentally differs from traditional AI safety content that concentrates attention on single channels. Instead of one perspective reaching 1M people, we're testing 100 perspectives reaching 100k people each.

How will this funding be used?

100% of funding goes directly to creator payments. No overhead, no administrative costs: every dollar pays for content creation.

Creator Compensation Structure:

Short-form creators typically charge $2-5 CPM (cost per thousand views). Our performance-based rates (calculated at day 7 post-publication):

  • Tier 1 (5k-50k views): $100 (~$2-4 CPM)

  • Tier 2 (50k-100k views): $200 (~$2-4 CPM)

  • Tier 3 (100k+ views): $400 (~$4 CPM)

These rates are competitive enough to attract creators passionate about AI safety while maximizing reach per dollar.
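As a sanity check on the CPM ranges above, here is a minimal sketch of how the flat payouts translate into an effective CPM. The tier thresholds and payouts come from the table above; the function names, boundary handling, and example view counts are illustrative assumptions.

```python
def creator_payout(views_day7: int) -> int:
    """Flat payout (USD) implied by the day-7 view count. Boundary handling is assumed."""
    if views_day7 >= 100_000:
        return 400   # Tier 3
    if views_day7 >= 50_000:
        return 200   # Tier 2
    if views_day7 >= 5_000:
        return 100   # Tier 1
    return 0         # below the 5k threshold, no payout is specified

def effective_cpm(views_day7: int) -> float:
    """Cost per thousand views implied by the flat payout."""
    return creator_payout(views_day7) / (views_day7 / 1000)

for v in (40_000, 80_000, 150_000):
    print(f"{v:,} views -> ${creator_payout(v)} payout, ~${effective_cpm(v):.2f} CPM")
# 40,000 views -> $100 payout, ~$2.50 CPM
# 80,000 views -> $200 payout, ~$2.50 CPM
# 150,000 views -> $400 payout, ~$2.67 CPM
```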

Tiered Funding Approach:

Here's a realistic distribution of videos across payout tiers (50% Tier 1, 30% Tier 2, 20% Tier 3):

Minimum Funding ($20,000) - Proof of Concept Expansion:

  • 30 creators, ~100 videos

  • Projected 4-5M views

  • Validates scalability beyond initial 10 creators

  • Generates preliminary data on cross-platform effectiveness

  • Cost per 1,000 views: ~$4-5

Median Funding ($35,000) - Substantial Scale Test:

  • 75 creators, ~175 videos

  • Projected 7-9M views

  • Robust dataset for analyzing optimal content strategies

  • Demonstrates viability at meaningful scale

  • Cost per 1,000 views: ~$3.90-5

Ideal Funding ($55,000) - Full-Scale Deployment:

  • 100 creators, ~275 videos

  • Projected 11-14M views

  • Comprehensive research on AI safety communication

  • Establishes proven, replicable distribution channel

  • Cost per 1,000 views: ~$3.90-5

All tiers deliver exceptional cost-effectiveness compared to traditional advertising ($10-30 CPM) while generating original research data that benefits the entire AI safety ecosystem.
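As a rough check that the tier projections above are internally consistent, here is a minimal sketch of the arithmetic. The 50/30/20 video mix and the $100/$200/$400 payouts come from the proposal; the representative per-tier view counts are illustrative assumptions chosen to fall inside the stated tier ranges.

```python
MIX = {"tier1": 0.5, "tier2": 0.3, "tier3": 0.2}             # share of videos per tier (from the proposal)
PAYOUT = {"tier1": 100, "tier2": 200, "tier3": 400}           # USD per video (from the proposal)
ASSUMED_VIEWS = {"tier1": 20_000, "tier2": 55_000, "tier3": 100_000}  # illustrative assumptions

def project(budget: float) -> tuple[int, float, float]:
    """Return (video count, total views, cost per 1,000 views) for a given budget."""
    avg_cost = sum(MIX[t] * PAYOUT[t] for t in MIX)           # ~$190 per video
    avg_views = sum(MIX[t] * ASSUMED_VIEWS[t] for t in MIX)   # ~46,500 views per video
    videos = budget / avg_cost
    total_views = videos * avg_views
    return round(videos), total_views, budget / (total_views / 1000)

for budget in (20_000, 35_000, 55_000):
    videos, views, cost_per_1k = project(budget)
    print(f"${budget:,}: ~{videos} videos, ~{views / 1e6:.1f}M views, ~${cost_per_1k:.2f} per 1,000 views")
# $20,000: ~105 videos, ~4.9M views, ~$4.09 per 1,000 views
# $35,000: ~184 videos, ~8.6M views, ~$4.09 per 1,000 views
# $55,000: ~289 videos, ~13.5M views, ~$4.09 per 1,000 views
```

Under these assumptions the projections roughly match the figures quoted for each funding level; higher or lower average views per tier shift the totals accordingly.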

Who is on your team? What's your track record on similar projects?

Akshyae Singh (Program Lead): Leading the Frame Fellowship with Mox (by Manifund) starting January 2026 (https://framefellowship.com). Responsible for overseeing the current 1M+ views across 10 creators. Building Explainable with the mission of bridging the AI safety communication gap through creator partnerships. Established partnerships with the Center for AI Safety and Apart Research for research distribution. Scaled the community to 300+ members and ran events with 400+ registrations from creators, researchers, and founders within 1.5 months of launch. In touch with top AIS creators such as AI in Context, Species, Rob Miles, and Michael Trazzi. Before this, I co-organized EA SF and grew AISF under Seldon Labs. Earlier, I founded an AI consultancy, grew it to 50+ engineers, and built AI products for clients including Google, Amazon, Cisco, GumGum, the Japanese government, and others. I've also worked at KPMG, NASA, and Sony.

Zac Lovat (Creator Lead): 60M+ impressions across social media. Building Explainable alongside Akshyae. Ex-Adobe Ambassador. Deep expertise in platform algorithms, content virality, and creator psychology. Leading the creator partnership and management efforts. Brings industry credibility when recruiting creators and proven strategies for content optimization.

Melynna Garcia (Operations): Currently Ops @ Palisade Research. Ensures smooth operations and will help build and expand partnerships with AIS orgs.

What are the most likely causes and outcomes if this project fails?

Primary Risk Factors:

  1. Creator churn: Creators drop out due to competing priorities or low audience engagement

    • Mitigation: Building a 2x pipeline; ongoing creator support; our 10 current creators show strong retention

  2. Content quality issues: Accuracy concerns or oversimplification

    • Mitigation: Pre-publication review process refined with first 10 creators; partnerships with technical orgs for fact-checking

  3. Platform algorithm changes: Shifts in content promotion

    • Mitigation: Multi-platform strategy across 4+ platforms reduces single-algorithm dependence

  4. Audience saturation: Same audiences see multiple creators' content

    • Mitigation: Targeting creators with different demographics, platforms, and styles; research component identifies saturation patterns

Failure Outcomes:

  • Soft failure: Reaching 50-70% of target metrics still provides valuable data and significant reach

  • Hard failure: Unable to maintain engagement at scale, though our 1M+ views from 10 creators suggest the model works at small scale

What we learn regardless: Quantitative data on which platforms, formats, and messaging work best for AI safety content. This intelligence benefits the entire field whether we hit numerical targets or not. Even partial success generates the first systematic dataset on public AI safety communication effectiveness.


Why This Matters

Most AI safety communication targets people already in the field or adjacent technical communities. We're testing whether distributed micro-creator content can reach the broader public (gamers, tech enthusiasts, science students) who will be affected by AI but aren't currently engaging with safety research.

This program serves dual purposes:

  1. Immediate impact: Getting AI safety concepts in front of millions who wouldn't otherwise encounter them

  2. Research value: Generating systematic data on which communication strategies actually work for AI safety across diverse audiences and platforms.

Our goal is to make conversations around AI safety and risk mainstream. The world has been talking about Skynet and The Terminator for decades. We feel that the narrative AI safety currently has, siloed and disconnected, is a bottleneck to getting more traction, funding, and talent into the field.

We have proof it works at small scale (1M+ views, 10 creators). Now we need funding to prove it works at large scale and generate the data to optimize it.


Low-budget: $20,000 (30 creators, ~100 videos, 4-5M views)

Median funding: $35,000 (75 creators, ~175 videos, 7-9M views)

Ideal funding: $55,000 (100 creators, ~275 videos, 11-14M views)
