
Analyzing media discourses on AI risks

AI governance

Monica Ulloa

Proposal · Grant
Closes June 22nd, 2025
$0 raised
$5,500 minimum funding
$26,980 funding goal


Project summary

This research project aims to analyze the evolution of public, media, and political discourse surrounding the risks associated with advanced artificial intelligence (AI). Contrary to the assumption that attention to these risks has grown steadily, public discourse on AI risks, particularly extreme risks, has fluctuated significantly and has recently lost prominence compared to its peak visibility in 2023. The study seeks to understand how interpretative frameworks, frequency, key actors, and contextual conditions influence the presence or absence of these risks in public conversations. The goal is to generate meta-analytical knowledge on the dynamics of visibility, legitimacy, and discursive transformation regarding AI risks, providing tools to navigate this changing environment.

What are this project's goals? How will you achieve them?

This project aims to analyze how existential and catastrophic risks related to artificial intelligence are framed in English-language media, and how these representations shape public understanding, urgency, and governance responses.

Using a combination of qualitative coding and critical discourse analysis, I will:

  • Build and analyze a corpus of media content (2020–2025) from a diverse range of outlets (mainstream, tech, and specialized).

  • Identify dominant narratives and framings.

  • Trace how AI risk narratives emerge, escalate, or fade over time, and what triggers these cycles.

  • Map whose voices are legitimized in shaping these narratives (governments, researchers, industry actors) and what perspectives are excluded.

Rather than approaching the issue from a purely technical or regulatory lens, this project focuses on the mediated construction of risk, offering insights into how fear, legitimacy, and action are negotiated in public discourse. It will result in a public-facing report that supports researchers, communicators, and policymakers working on AI risk governance.

How will this funding be used?

Funding will enable me to work full-time on this project for 8 months, with the following breakdown:

  • Researcher salary: $3,000/month (total $24,000)

  • Software: Atlas.ti license ($1,980 for 1 year, professional non-student rate)

  • Digital infrastructure: cloud storage and media access tools ($1,000)

Total: $26,980

With minimum funding ($5,500), I would complete a scaled-down version of the project: a 2-month, part-time analysis of a smaller media sample, which would still inform future work and outputs.

Who is on your team? What's your track record on similar projects?

I will conduct this project independently. I hold a Master's in Science and Technology Studies and currently serve as:

  • AI Governance Manager at Carreras con Impacto, a nonprofit supporting professionals working on high-impact careers, including AI governance and global risk mitigation.

  • Policy Transfer Officer at the Observatorio de Riesgos Catastróficos Globales, where I bridge research and policy on global catastrophic risks.

I specialize in qualitative methods and critical discourse analysis. Relevant to this project, my peer-reviewed article analyzing the securitization of AI in governmental documents using discourse analysis will be published in the Q1-ranked Revista de Estudios Sociales (Universidad de los Andes) in July 2025. Reviewers have commended its rigorous methodology.

I have presented at prominent international events, including the First Latin American Conference on Global Catastrophic Risks (UNAM, 2024) and the CSER Global Catastrophic Risk Conference (University of Cambridge, 2024).

This project marks a strategic pivot in my career toward AI risk communication, an area I believe is both urgent and underserved—especially from perspectives grounded in qualitative research and critical media analysis.

What are the most likely causes and outcomes if this project fails?

Failure would likely stem from insufficient funding, limiting the scope of analysis and the ability to produce a comprehensive report. In that case, the project would still generate partial findings, but would miss the opportunity to uncover broader narrative patterns or engage meaningfully with experts in the field.

The main risk is underreach rather than misdirection: without adequate support, the full value of the project—to improve how AI risks are understood and communicated—would not be realized.

How much money have you raised in the last 12 months, and from where?

I have not received external funding for independent research in the past year. My current work is institutionally supported, but this proposal represents my first effort to build an independent research and communication agenda around AI risk.
