Is Social Research Really Not Better Than Alchemy? How a Many-Analyst Study Produced “A Hidden Universe of Uncertainty”
Katrin Auspurg & Josef Brüderl, LMU Munich
Abstract:
Starting with Silberzahn and Uhlmann (2015), many-analysts studies have become a popular design for examining the robustness and credibility of scientific research. We argue that the current design of many-analysts studies tends to exaggerate the unreliability of science. We show this by reassessing the findings of a recent, widely discussed many-analysts study by Breznau, Rinke, Wuttke et al. [“Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty,” PNAS 119 (2022)]. That study found that, for unidentifiable (“hidden”) reasons, the many analysts’ results varied widely even though all tested the same social science hypothesis (that immigration reduces social policy support) with the same data. Surprisingly, only a small proportion of the variance in results (5%) was explainable by visible methodological choices made by the analysts. This finding is particularly worrisome because, the reasons being hidden, no ways to improve research could be suggested. Some even concluded that social research is dominated by “dark methods.” Such stark conclusions warrant close examination.
In our re-analysis we found several pitfalls, including re-scaling errors and overlooked effect size moderators. We therefore argue that the study’s findings do not justify its stark conclusions. More generally, we offer several recommendations on how future many-analysts studies might avoid misleading results. The general takeaway from our investigation is that, to produce valid results on research certainty, many-analysts projects must adhere to basic meta-analytical guidelines. We also discuss alternative methods for assessing the credibility of social science research.