AI scam research targeting seniors

Science & technology · AI governance

Fred Heiding

Active grant
$3,500 raised
$5,000 funding goal

Project summary

We’re seeking $3,000 to support our research on spear-phishing and AI-driven scams with a focus on elderly participants. Our research demonstrates how AI systems can be leveraged to create persuasive scam messages targeting older adults, achieving click-through rates comparable to those of human hackers.  

Seniors are among the demographics that resonate most strongly with policymakers and the public. When they are targeted by scams, the issue draws attention and prompts action. Presenting our findings to the broader cybersecurity community will help build defenses, influence regulation, and ultimately protect millions of at-risk individuals.

To maximize impact, we’re also collaborating with a Pulitzer Prize–winning journalist from a leading news outlet, who is writing a feature story based on our work. Their reporting will help bring our findings to the public and decision-makers through a compelling, accessible narrative.

We are currently unable to use institutional funds from the Harvard Kennedy School due to significant university-wide funding restrictions, driven by political pressure on academic institutions. This microgrant would enable us to share our urgent and timely research with the people who need to hear it.

What are this project's goals? How will you achieve them?

We aim to use our research to raise awareness of AI-enabled scams targeting seniors and to build momentum for both technical defenses and policy responses.

How we’ll achieve our goals:

  • Present at BSides Las Vegas: Share our findings directly with leading cybersecurity professionals, journalists, and policymakers.

  • Engage policymakers through emotion and data: Seniors are not just victims; they are parents, grandparents, and constituents. By showing how easily AI can deceive them, we reach the hearts and priorities of decision-makers.

  • Showcase defensive tools: Demo early prototypes of personalized spam filters and AI-powered training tools for older adults (a rough sketch of the filtering idea follows this list).

  • Continue research on cognitive vulnerability: Test which training methods help seniors resist personalized scam messages and refine our models accordingly.
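
To make the filtering idea concrete, here is a minimal, purely illustrative sketch of how a personalized scam-message scorer could work. Everything in it (the profile fields, feature set, weights, and example data) is a hypothetical assumption for illustration, not our actual prototype:

```python
# A minimal, illustrative sketch of a personalized scam-message scorer.
# All profile fields, feature names, weights, and example data are
# hypothetical assumptions, not the design of our actual prototype.
import re
from dataclasses import dataclass, field

URGENCY_CUES = {"urgent", "immediately", "act now", "suspended", "verify"}

@dataclass
class UserProfile:
    name: str
    bank: str                          # institution the user actually uses
    known_contacts: set[str] = field(default_factory=set)

def scam_score(message: str, sender: str, profile: UserProfile) -> float:
    """Return a 0-1 risk score for an incoming message; higher = riskier."""
    text = message.lower()
    unknown_sender = sender not in profile.known_contacts
    score = 0.0
    # Urgency language is a classic social-engineering cue.
    if any(cue in text for cue in URGENCY_CUES):
        score += 0.3
    # Messages that use the recipient's real name or bank are far more
    # dangerous when they come from an unknown sender -- this per-user
    # context is what makes the filter "personalized".
    if unknown_sender and (profile.name.lower() in text
                           or profile.bank.lower() in text):
        score += 0.4
    # Links from unknown senders add further risk.
    if unknown_sender and re.search(r"https?://", text):
        score += 0.3
    return min(score, 1.0)

profile = UserProfile(name="Margaret", bank="First National",
                      known_contacts={"daughter@example.com"})
msg = ("Margaret, your First National account has been suspended. "
       "Act now: http://fn-secure.example/verify")
print(scam_score(msg, "alerts@fn-secure.example", profile))  # -> 1.0
```

A deployed filter would learn such signals from data rather than hard-code them; the sketch only shows why per-user context (name, bank, known contacts) is the key ingredient of personalized defense.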


How will this funding be used?

  • Travel and lodging for both of us to attend BSides Las Vegas, where we’ve been invited to present two accepted talks on our research.

  • Tickets to DEF CON to further share our findings and connect with adjacent hacker and policy communities.

  • Design of demo materials and audience handouts.

  • In-person networking with cybersecurity researchers, industry leaders, and journalists.

  • Outreach to policymakers and public sector partners attending both conferences.

We see this microgrant as a catalyst: it enables critical real-world engagement that accelerates impact and lays the groundwork for further collaboration and support.

We expect this funding to cover a portion of these activities, with the primary focus on enabling conference participation to share our findings with the broader cybersecurity community. Any additional research work will be scaled based on available resources.


Who is on your team? What's your track record on similar projects?

Fred Heiding

  • Research Fellow at Harvard Kennedy School’s Belfer Center (Defense, Emerging Technology, and Strategy)

  • Leads research on AI-powered phishing, cybersecurity strategy, and cyber policy

  • Co-author of “Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns” (arXiv:2412.00586)

Simon Lermen

  • Independent AI safety researcher; MATS scholar (summer 2025)

  • Author of “LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B”

  • Focuses on language model vulnerabilities, agent behavior, and AI misuse

  • Co-author of “Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns” (arXiv:2412.00586) 

What are the most likely causes and outcomes if this project fails?

  • Inability to attend BSides, DEF CON, and other conferences:
    Without funding, we won’t be able to travel and present our work in person. This would sharply limit our ability to engage with the cybersecurity community, policymakers, and funders, diminishing the impact and reach of our research.

  • Missed opportunity to reach key political audiences:
    These conferences are rare venues where we can connect with senior and more conservative policymakers in a setting where AI risks to seniors resonate deeply. Losing this channel would reduce the emotional and strategic traction of our message.

  • Loss of media momentum:
    Our planned feature story with a Pulitzer Prize–winning journalist hinges on a timely, in-person presentation of our work. Missing the event may jeopardize this coverage and the broader awareness it could generate.

  • Reduced chances of future collaboration and follow-on funding:
    Events like BSides and DEF CON are critical hubs for forming long-term research and industry partnerships. Without attending, we risk missing connections that could lead to future grants, deployments, or policy pilots.

How much money have you raised in the last 12 months, and from where?


We have not yet raised any funding for this specific project. This would be our first external support for taking this research to the public.
