There has been much debate about whether people engaged in EA and longtermism should frame their efforts and outreach in terms of ‘effective altruism’, ‘longtermism’, ‘existential risk’, ‘existential security’, ‘global priorities research’, or only in terms of specific causes, such as AI safety and pandemic prevention. (1)(2)(3)(4)(5)(6)(7)(8)
However, these debates have largely been based on a priori speculation about which of these approaches would be most persuasive. We propose to run experiments directly testing public responses to these different framings. This will allow us to assess how responses to the different approaches vary across dimensions such as popularity, persuasiveness, interest, and dislike.
We propose to conduct public attitudes surveys and message testing experiments, which would address these questions. We would then make reports of the results publicly available (e.g. on the EA Forum) so that the community can update based on these findings.
Our goal is for these results to inform the decisions of EA/longtermist decision-makers across the community, ranging from those at core movement-building organizations to funders, individual movement builders, and others. We see these results as potentially influencing decisions both large (should we stop promoting “effective altruism” and refocus our efforts on alternative brands or individual causes?) and small (should I frame my new program or my individual outreach in terms of “longtermism” or just “risks from AI”?).
These studies would also assess how effects might differ across groups (e.g. students, or respondents of different genders, ages, and races). Such analyses may therefore also help the EA and longtermist communities become more representative and more diverse, by avoiding messages which are off-putting to particular groups.
The proposed project would include studies such as:
Experiments to understand how responses to the ‘effective altruism’ brand compare to responses to alternative brandings (e.g. ‘high impact giving’, ‘global priorities’)
Experiments to compare responses to ‘longtermism’, ‘existential risk’, ‘existential security’ or specific catastrophic risks.
Experiments to assess the impact of effective altruism representing a broader or narrower array of cause areas (e.g. how outreach is affected by EA being presented primarily in terms of AI risk vs. a broader array of causes).
Gathering qualitative data from respondents about their impressions of, and associations with, the terms, and assessing whether there are any systematic misunderstandings or surprising impressions.
The funding will be used for a combination of survey costs (e.g. compensating participants in the studies and platform fees) and staff costs (to design, run, analyze and report on the surveys).
The exact cost of each survey depends on the length of the survey instrument and on the sample size needed to achieve adequate statistical power for a given design, which is influenced by factors including (but not limited to) how many messages we are testing, whether we are weighting the sample to be representative of a given population, and how many messages each participant receives.
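As a rough illustration of how these factors drive sample size (and hence cost), the sketch below uses Python's statsmodels to compute the per-arm sample size needed to detect a small standardized difference between framings. The effect size, power level, and number of messages are placeholder assumptions, not figures from any planned study.

```python
# Illustrative power calculation for a simple two-arm message test.
# Effect size, alpha, and power are placeholder assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants per arm needed to detect a "small" standardized effect
# (d = 0.2) at 80% power and a 5% false-positive rate.
n_per_arm = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Participants per message arm: {n_per_arm:.0f}")

# Testing more messages adds arms, and correcting for multiple
# comparisons (here, a Bonferroni-adjusted alpha for comparing each of
# five messages against a common baseline) raises the per-arm n further.
n_messages = 5
n_corrected = analysis.solve_power(effect_size=0.2,
                                    alpha=0.05 / n_messages,
                                    power=0.8)
print(f"Per-arm n with Bonferroni-corrected alpha: {n_corrected:.0f}")
```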
The exact number and design of the studies we can run will depend on the amount of funds raised. With the maximum requested funding, we will run multiple studies using different designs to increase robustness. This will also allow us to run larger studies, letting us identify small differences between messages or sub-populations with greater precision. With the minimum requested funding, we will run and publicly report on one study (including necessary pilot studies) to provide a proof of concept and help the community assess the utility of this approach.
Rethink Priorities’ Survey and Data Analysis Team is composed of Senior Behavioral Scientists Willem Sleegers (LinkedIn, Scholar, Website), who is also a Research Affiliate at Tilburg University, and Jamie Elsey (LinkedIn, Scholar, Website), and is managed by Principal Research Director David Moss (LinkedIn, Scholar); all three worked on multiple academic projects prior to joining Rethink Priorities.
Rethink Priorities’ Survey and Data Analysis Team has an extensive track record of conducting high-quality projects targeted at the interests of actors in the EA and longtermist space. Since hiring Jamie and Willem less than two years ago, we have completed over 40 substantial projects, plus over 50 smaller consultations, including multiple commissions for Open Philanthropy, the Centre for Effective Altruism, 80,000 Hours, Forethought Foundation, Longview and others.
As most (>80%) of these projects are private commissions, we cannot share many of them; however, they have included:
Many message testing experiments for EA orgs
e.g. the examples discussed by Will MacAskill here
Tests of different ads (written and video)
Tests of different names for orgs/projects
Surveys of public attitudes
Surveys of the EA community
EA Survey 2014-2022
Impact surveys / reports
A couple of major orgs / projects (private)
Academic reports on
Given our strong track record as a team and organization, we consider the risk of operational difficulties or a low-quality product to be low.
One possible ‘failure’ mode is that the results are accurate and robust but practically uninteresting. For example, all the messages/framings we test might perform equally well, with no significant differences between them. However, this would only be a qualified ‘failure’: learning that these differences in framing make little difference to receptiveness to our causes would itself be useful, might inform decisions, and could redirect EA attention away from unfruitful speculation about whether one approach is better than another.
Another practical failure mode is that the results are useful and actionable, but decision-makers do not update on them. Our core plan involves publishing the results publicly on the EA Forum, where relevant audiences are likely to encounter them. However, we will also reach out directly to some of the most relevant decision-makers regarding the results. With larger amounts of funding, we will be able to dedicate more time to describing and illustrating the results of the studies and to engaging in more outreach, so that decision-makers are aware of the findings and know how to make use of them. We also believe that the project being funded through a public mechanism like Manifund could itself help advertise it and increase its visibility.
A final potential failure mode is that decision-makers update on the results more than is warranted. Our first line of defense against this is to be very clear in our write-up about which conclusions are or are not warranted by the findings, to provide clear quantifications of the magnitude of, and uncertainty around, the effects we find, and to make these quantifications accessible to decision-makers. In addition, with more funding, we can run multiple studies to replicate results with different designs and examine different audiences in more detail to ensure that results are robust.
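As a minimal sketch of the kind of quantification we have in mind, the example below reports the difference in the share of respondents expressing interest under two framings, together with a 95% confidence interval. The counts and framing labels are invented purely for illustration.

```python
# Illustrative quantification of an effect and its uncertainty:
# difference in the share of "interested" respondents under two
# framings, with a 95% confidence interval. All counts are invented.
import numpy as np
from scipy import stats

interested_a, n_a = 312, 1000   # hypothetical framing A
interested_b, n_b = 341, 1000   # hypothetical framing B

p_a, p_b = interested_a / n_a, interested_b / n_b
diff = p_b - p_a

# Wald standard error for a difference in two independent proportions.
se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
z = stats.norm.ppf(0.975)
ci_low, ci_high = diff - z * se, diff + z * se

print(f"Difference in interest: {diff:+.1%} "
      f"(95% CI {ci_low:+.1%} to {ci_high:+.1%})")
```

Reporting an interval of this kind, rather than only whether a difference is ‘significant’, makes it easier for readers to judge how large an effect plausibly is and how much weight to place on it.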
Our department has received no funding for this project or similar projects. We have also received no funding to provide general support for our department.
We have received many individual commissions from different decision-makers for various projects. However, these are private, so the results typically cannot be published.