Dads Against AGI Inc. funds AI existential risk media projects, the first of which is For Humanity. Many more projects are under development.
For Humanity: An AI Risk Podcast recently surpassed a major milestone—over 100,000 subscribers on YouTube. This podcast has become one of the fastest-growing grassroots channels dedicated to raising awareness about the existential threat posed by artificial general intelligence (AGI). The funds we are seeking will directly support a sustained ad campaign to expand our reach, wake up new audiences, and further cement this podcast as a critical voice in the broader public discourse around AI safety.
We believe it is absolutely essential to bring conversations about AI risk out of academic and technical circles and into the awareness of everyday citizens—especially parents, families, and civic-minded individuals who may not realize the urgency of the threat. That’s where this campaign comes in: pushing For Humanity into mainstream visibility through targeted advertising.
What are this project's goals? How will you achieve them?
Our goal is to add 100,000+ more subscribers and continue to wake up new nodes in new communities to spread the word. I am using a novel strategy of YouTube shorts on topics unrelated to AI, but with an AI risk message baked in. Some of these are converting 20-30% of the viewers into podcast subscribers.
Every dollar we raise will go directly toward Google and YouTube ad placements that promote both full podcast episodes and shorts that serve as high-converting entry points into the content. We’re targeting viewers by interest, behavior, and content consumption patterns, allowing us to reach groups that mainstream AI risk messaging tends to miss—parents, workers, young adults, and older Americans alike.
This funding will allow us to scale our ad testing, double down on what’s converting, and expand into new markets and regions where awareness of AI risk remains low.
This project is led by two passionate, deeply committed individuals with complementary backgrounds in journalism, technology, and activism.
John Sherman is a Peabody Award-winning citizen journalist on an urgent mission to wake up the general public to the risk of AI extinction. He cannot understand why—when AI CEOs openly warn that their technology could end all life on Earth—so few seem to believe them. A father of boy-girl twins who are now freshmen in college, John is in this fight for one reason: a desperate effort to save his children’s lives.
John is the host of the rapidly growing For Humanity: An AI Risk Podcast, which now has more than 100,000 subscribers on YouTube. He is also a small business owner and entrepreneur. As CEO and Creative Director of Storyfarm, his sixteen-year-old Addy Award-winning creative video agency, John has helped major brands and institutions tell human-centered stories that resonate. From 1998 to 2010, he was an investigative TV news reporter, earning journalism’s highest honors—including the Peabody Award, duPont-Columbia Award, National Emmy, and National Edward R. Murrow Award. He grew up in Washington, DC, as the son of a congressional staffer who served the federal government for 45 years.
Louis Berman is the co-founder of Dads Against AGI and a driving force behind the podcast’s strategic growth. He has founded companies, led high-performing teams, and built advanced software platforms. He’s even spent time at massive telescopes, filling his eyeballs with photons, and remains one of the rarest of creatures: a former currency trader who isn’t broke.
Louis is the CTO and co-founder of SquidEyes, LLC, a company bringing institutional-grade, hedge-fund-style currency trading tools to the retail and CTA markets. Before that, he served as Chief Technologist at EPAM Systems (2020–2022), where he led Azure strategy for the top IT services firm on the Fortune “100 Fastest-Growing Companies” list three years running. At EPAM, he drove transformation projects for major clients including Disney, Walgreens, Harley-Davidson, Edward Jones, and Intrado, and was the firm’s top cloud expert.
From 2015 to 2020, Louis worked at Microsoft as a Cloud Solutions Architect. He helped deploy one of the largest RDMA-style clusters in Azure for DuPont, modernized over 500 workloads for Bentley Systems, and supported Comcast’s sweeping migration to the Azure cloud—touching brands like NBCUniversal, Telemundo, and DreamWorks. Earlier, at Neudesic, he conceived and developed Windows 8 XAML Store apps, including a well-received POC for Toyota’s Innovation Fair.
Now based outside Philadelphia, Louis lives with his wife, a gifted set and costume designer. He is a grassroots PauseAI US lobbyist, working to mitigate the risks of superintelligent AI through advocacy, education, and direct policy engagement. He has authored two books on existential risk: An AI-Safety Primer—available for free in flipbook form—and CATASTROPHE: THE UNSTOPPABLE THREAT OF AI, available via Amazon or free download.
Together, John and Louis are building a dynamic, fast-growing platform that blends emotionally resonant storytelling with rigorous, accessible analysis—designed to wake up the world to the most urgent threat of our time.
Our primary goal is to make AI x-risk a dinner-table conversation on every street in America. To do that, we need to at least 10x our subscriber base, reaching 1,000,000+ YouTube subscribers within the next 6–9 months. But more than just numbers, we aim to activate new communities, inspire conversations, and give people tools and language to talk about AI risk with others in their lives.
To achieve this, I’ve been experimenting with an innovative and highly effective strategy: YouTube Shorts that cover a wide range of topics not directly related to AI—but with an embedded AI risk message. These shorts may start with a viral idea, a life tip, a meme, or a surprising fact—but they end with a brief but powerful warning about AGI. The conversion rates from some of these videos are extraordinary, with 20–30% of viewers becoming podcast subscribers.
This stealth-style messaging approach is helping us bypass resistance and reach people who would never search for “AI risk” on their own. We are hitting pockets of the internet that traditional campaigns never reach—and it’s working.
This project cannot afford to fail—because the stakes are too high. The future of our children, our families, and our civilization is on the line. I am personally committed to this work because I believe that once people understand the risk of AGI, they care deeply. They take action. They become advocates themselves.
If we don’t scale this project, we miss the opportunity to wake up millions before it's too late. But let me be clear:
I will not fail. I do not fail. I cannot fail.
This isn’t just a media campaign. This is my life’s mission.
In the past year, we’ve raised approximately $30,000 in grassroots support, primarily from podcast listeners and YouTube subscribers who believe in the mission. We also have several pending funding commitments from supporters who’ve seen the recent growth and momentum and want to help us scale further.
We’ve done a lot with a little—and with your help, we can do even more.