Wednesday, April 1, 2026

Score: 50/50 for science

A massive 7-year project aiming to examine the replicability of social science has come to a close. The project, called SCORE, found (as previous studies have before it) that only about half of the tested papers could be replicated successfully when new researchers tackled the same question with new data. It's a glass half-empty / half-full situation: the authors say this is just a sign of scientists being human, making the occasional honest mistake, being messy in failing to report exactly what they did and how they did it, and running up against the strange-but-true fact that doing something slightly differently can legitimately yield entirely different results.

The team also tested whether people or machines could predict if a paper would replicate (by checking anything from the reputation of the authors to the sample size and the statistical power of the finding). People scored 76-78% accuracy at their best; computers failed miserably, though newer AI-based systems are doing far better.

Read my news story in Nature https://www.nature.com/articles/d41586-026-00955-5

Along with a Q&A with Brian Nosek, one of the most famous names in replicability studies and an advocate of open science https://www.nature.com/articles/d41586-026-00972-4
