Social science is facing a replication crisis, and the root problem is the lack of replications. Academics have little incentive to perform replications, since replications do not yield original findings and are not valued by journals. Hence, few replications are attempted. Knowing that their work will not be replicated, academics invest in quantity of research over quality. The result is unreliable findings that cannot be used by decision-makers during crises.
We can fix these incentives by systematically replicating new research, so that researchers expect their work to be scrutinized as a regular practice. If their findings are not robust, their work will not be cited (or, worse, will be retracted), affecting their promotion and tenure outcomes. To maintain their reputations, researchers will improve research design, and journals will raise peer review standards. The outcome is a scientific literature containing reliable knowledge, to help guide humanity through the long-term future.
My proposal is to do replication work for one year. As a specific first step, I will replicate or reanalyze the literature on the effects of air pollution, with the goal of convincing philanthropists and policymakers to take air pollution more seriously. There is some support for this literature being credible, but literatures can be wrong, and thorough vetting is needed to tell whether problems exist. I have a track record of finding research that isn't robust, so I'm well-positioned to give the air pollution literature a passing grade (if warranted). Since the literature spans dozens of papers, I plan to do shallower reanalyses, replicating 2-4 papers per month; based on my contracting work for Open Philanthropy, deeper dives can take 1-2 months each.
Doing direct replications will make researchers believe that their work will be scrutinized, which will motivate them to improve research design. To maximize media exposure (and hence pressure on researchers’ reputations), I will write tweet threads and submit formal write-ups to academic journals. I will also focus on recent papers, to give feedback to the scientific community within the media cycle. Since academics are unlikely to check results by running code, I will make videos where I walk through the replication code step-by-step, to show that my results are verifiable.
Doing replications for one year will allow me to publish and build a reputation, and make connections with other researchers in the open science movement. After one year of direct replication work, I will re-evaluate and investigate other approaches to promoting replications: prediction markets for replication, forecasting tournaments for replication, the Unjournal, etc.
I have a strong track record of finding flaws in published economics research. In my dissertation, I reanalyzed the literature on meritocratic promotion in China, finding that the empirical evidence did not hold up to scrutiny (article forthcoming in the journal Research and Politics). I have since done replication work for Open Philanthropy, on the effect of air pollution on mortality, the economic impacts of measles vaccination, and the effect of tech cluster size on innovation (article under review at the journal American Economic Review). I’ve also reanalyzed Lisa Cook’s research on racial violence and patenting.
Total: $125,000 USD.
- $104,000 for my salary
- $1,000 for journal fees
- $10,000 to attend conferences
- $10,000 buffer
This is for one year of work. With less funding, I would scale down the project proportionally.
No response.
I define successful outcomes to include: widespread media coverage; replications published in the same journal as the original paper; retractions of severely flawed papers; journals implementing new peer review standards; and academic departments accounting for replications/retractions in tenure/promotion decisions.
The probability of these outcomes depends on the quality of the research that I look at. For example, I’m more likely to get media coverage and a retraction if I discover flaws in a famous article. That said, I roughly estimate these unconditional probabilities for one year of work:
- Media coverage (at least one Vox-level piece): 45%
- Published replications (at least one, in the same journal): 25%
- Retraction (at least one): 5%
- Changing peer review/promotion standards: 3%
Jason
8 months ago
Do you think the attempted replications you have done so far have had measurable impact? If so, how are you measuring that? If not, what do you think the missing ingredient for impact is?
Petar Buyukliev
8 months ago
Just to be clear, do you plan to actually replicate the studies, or just re-analyze their data?