
AI scam research targeting seniors

Science & technology · AI governance

Fred Heiding

Proposal · Grant
Closes September 30th, 2025
$0 raised
$8,000 minimum funding
$20,000 funding goal


Project summary

We previously received a Manifund grant to support our research evaluating AI-powered social engineering targeting senior citizens. Thanks to that grant, we presented our work at BSides and the AI Security Forum in Las Vegas. Our work will be featured in a Reuters article authored by a three-time Pulitzer Prize winner and in The Economist’s Project 2030 series (https://impact.economist.com/progress-2030/). We have also presented our research to members of Congress and Senate staff in Washington, D.C.

We are now seeking $20k to scale our research and implement it in a real-world setting. First, we will build on our earlier pilot to scale our implementation of AI-assisted phishing tests targeting older adults. We will simulate realistic messages (email, text, voice) in a safe and friendly environment, evaluating different attack and persuasion strategies across LLMs and model versions. Participants will learn how attacks can be personalized based on publicly available information and identify the attack strategies to which they are most susceptible. The pilot was well received by participants and sparked strong interest in continuing our work in this area. All collected data is anonymized and used to populate ScamBench (https://scambench.com/).

Building on this work, we will also pursue collaborations to investigate how AI-assisted social engineering enables full-scale attack scenarios. For example, AI-assisted cyberattacks that use phishing to gain access to a system, then employ living-off-the-land techniques for persistent data exfiltration, similar to the threats recently discussed by Anthropic (https://www.nbcnews.com/tech/security/hacker-used-ai-automate-unprecedented-cybercrime-spree-anthropic-says-rcna227309).

What are this project's goals? How will you achieve them?

We aim to scale our evaluations of AI-assisted phishing with more robust data and develop new defense tools, such as personalized spam filters. We will also explore how to improve resilience against larger attacks, where phishing is only one part of a longer sequence. Lastly, we will continue raising awareness of the dangers of AI-powered cyberattacks, particularly among policymakers in Washington and tech-industry leaders.

How we’ll achieve our goals:

  • Onboard two Harvard undergraduate seniors to help with the development of the AI phishing agents and integration with ScamBench.

  • Partner with red-team researchers to investigate how our work can contribute to mitigating larger attack chains. 

  • Expand our pilot programs to help senior communities.

  • Publish an academic paper on our latest findings in AI-assisted phishing.

  • Continue collaborating with journalists at leading outlets such as The Economist, Reuters, and Time.

  • Engage policymakers and technologists through in-person appearances at conferences, panel discussions, and demo sessions.

How will this funding be used?

  • $4,000 per month for four months to cover Fred’s health insurance, rent, and living expenses after his Harvard funding ends on September 1 (his affiliation remains), while he secures new long-term funding.

  • Provide a one-time stipend of $1,000-$2,000 to each of the two students who help conduct the research during the fall semester and winter recess.

We view this microgrant as a bridge: it lets us leverage the momentum from our recent publications to scale our work and explore new, high-impact areas while we secure long-term funding.

We expect this funding to cover part of these activities, primarily Fred’s temporary stipend, enabling him to continue leading the research. Any additional research work will be scaled based on available resources.

Who is on your team? What's your track record on similar projects?

Fred Heiding

  • Research Fellow in the DETS (Defense, Emerging Technology, and Strategy) group at Harvard Kennedy School’s Belfer Center.

  • Leads research on AI and cybersecurity, cybersecurity strategy, and cyber policy.

  • Co-author of papers like “Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns” (arXiv:2412.00586).

Simon Lermen

  • Independent AI safety researcher doing MATS during the summer and fall of 2025.

  • Author of “LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B.”

  • Focuses on language model vulnerabilities, agent behavior, and AI misuse.

  • Co-author of “Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns” (arXiv:2412.00586).

  • Simon is fully funded via MATS’s extension program.

What are the most likely causes and outcomes if this project fails?

  • Scams targeting seniors remain unaddressed. Seniors lost $4.9 billion to scams in 2024, a 43% increase over 2023 (https://www.aarp.org/money/scams-fraud/fbi-report-fraud-2024.html). Many more scams go unreported due to social stigma and shame.

  • Failure to capitalize on significant media momentum. We will soon be featured in major news outlets like The Economist and Reuters. Without direct follow-up activity, this visibility risks fading, leaving us unable to convert attention into new collaborations, research initiatives, and policy impact.

  • Missed opportunity to reach key audiences. AI-assisted scams targeting seniors resonate strongly with policymakers, illustrating the risks of rapidly advancing AI. They show that the AI race already has winners and losers, and demonstrate how we can support those who risk being left behind. Losing this channel would reduce the emotional and strategic traction of our message.

  • Missed opportunity to research how AI-assisted phishing enables larger cyberattacks. Phishing is a component of a wide range of attacks, from business email compromise to nation-state espionage and sabotage. Failing to research this link means missing insights critical for defending against escalatory AI-enabled cyber operations.

How much money have you raised in the last 12 months, and from where?

We received a $4,000 Manifund microgrant in June. Beyond that, we have not received any external funding for this specific project.
