When can scientific findings from experimental psychology be confidently applied to legal issues? And when applications have clear limits, do legal commentators readily acknowledge them? To address these questions, we survey recent findings from an emerging field of research on research (i.e., metaresearch). We find that many aspects of experimental psychology’s research and reporting practices threaten the validity and generalizability of legally relevant research findings, including those relied on by courts and policy-setting bodies. As a case study, we appraise the empirical claims relied on by commentators who assert that implicit bias deeply affects legal proceedings and practices, and that training can reduce that bias. We find that these claims carry many indicia of unreliability. Only limited evidence indicates that interventions designed to reduce prejudicial behavior through implicit bias training are effective, and the research area shows many signs of publication bias. To examine whether law journal articles acknowledge these limits, we collected a sample of 100 law journal articles mentioning “implicit bias training” published from 2017 to 2021. Of those 100 articles, 58 recommend implicit bias training, and only 8 of those 58 express any skepticism about its effectiveness. Overall, only 19 articles express skepticism about implicit bias training. We end with recommendations for law journal authors, researchers, and practitioners aimed at more credible application of psychology findings in legal research and policy. Our focus is on how empirical research can best be used to address our most important social issues, including racism.
Jason Chin, Alexander Holcombe, Kathryn Zeiler, Patrick Forscher & Ann Guo,
Metaresearch, Psychology, and Law: A Case Study on Implicit Bias
Available at: https://scholarship.law.bu.edu/faculty_scholarship/3422