Fraud, Science & the Story That Wins
Why do some scientists commit scientific fraud?
If you’re a scientist, “How would I know?” is probably your smartest answer here. But your real answer is probably something like “ambition,” “ego,” and/or “they’re not good enough scientists.”
Here’s what interests me: All of the above answers (including “How would I know?”) are also the default answers many scientists still give if you ask them “Why would a scientist give a TED talk, start a podcast or a newsletter, or say yes to multiple media requests for interviews?”
There’s been a longstanding unspoken analogy in science: The motivation to do fraudulent research and the motivation to offer public expertise are at their core the same — smarmy, shameful, less than, an affront to the integrity and complexity of real science.
I’m thinking about that analogy a week after news that the well-known behavioral psychology researcher Dan Ariely is under suspicion of inserting fraudulent data into a famous paper about how to nudge people toward honesty (pause for nervous laughter), and that the fraud was transparently obvious…except that no one detected it until nine years after the paper was published.
All the evidence points to Ariely, whom Science magazine describes as a “superstar honesty researcher,” as the one who fudged the data. Here’s what’s different this time: While Ariely fits the profile of an ambitious scientist (he has a TED talk, three New York Times bestsellers, and a research center at Duke), very little of the reaction I’ve seen to this news follows the usual science-purist hot take of “this is what you get when scientists want to become media stars.” Instead, this case (and there’s apparently at least one more fraud that could be attributed to Ariely) is prompting soul searching among some scientists about how much more scientific fraud is probably going undetected.
For example, in the wake of the Ariely story, ecologists Jeremy Fox and Stephen B. Heard both asked “Why are scientific frauds so obvious?” on their respective blogs. (One answer: Well, those are the ones that get caught.)
I also like Fox’s answer: “There’s an important sense in which the shoddiest scientific fraud and the most careful art forgery are exactly the same. Both are designed to stand up to the scrutiny they’re likely to receive.”
In other words: Art forgeries frequently undergo intense inspection by forgery experts, but hardly anyone ever looks at the data underlying a scientific paper. The problem might not be limited to a few psychopaths; it might instead be a systemic pathology.
This is why, for instance, Jonatan Pallesen is arguing that “no study should be trusted if it doesn’t release the data…regardless of which journal it was published in.” Most casual observers would be surprised to hear that releasing data isn’t always, or even often, the norm in many fields. These observers simply assume that “science” = “work that passes a high level of scrutiny.”
Science is a story — one whose social trust and authority flow from its rigorous, systemic commitment to an ethos apart. If fraud is commonplace and our systems of scrutiny inadequate to detect it, then that story will eventually fall apart.
This week the statistician Andrew Gelman relayed on his blog the story of “an anonymous correspondent who happens to be an economist” (and a fan of the Atlanta Braves baseball club) who thought they had found a weird but striking correlation between Opening Day results by the Braves in any given year and how the rest of the Braves’ season that year turned out. “The first day is 8 times as important as a random day!” the correspondent wrote Gelman.
But then the correspondent wrote back again. Now they had run the same regression using all the other days in the season and found “plenty of other days that are higher and a bunch of days are negative. It’s just flat out random….” The first discovery turned out not to be a discovery once they widened their aperture.
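Here’s a minimal sketch of what that second pass looks like, using simulated seasons rather than real Braves data (the team-strength numbers and the 162-game schedule are just assumptions for illustration): run the same regression for every day of the schedule and see whether Opening Day’s coefficient still stands out.

```python
# Illustrative only: simulated seasons, not the correspondent's actual analysis.
# Question: does the result on any one day "predict" the rest of the season
# once you run the regression for every day, not just Opening Day?
import numpy as np

rng = np.random.default_rng(0)
n_seasons, n_games = 60, 162

# Each simulated season gets a true team strength; each game is a win (1) or loss (0).
strength = rng.normal(0.5, 0.04, size=n_seasons)            # per-season win probability
games = (rng.random((n_seasons, n_games)) < strength[:, None]).astype(float)

def slope_for_day(d):
    """OLS slope of rest-of-season win % on the result of game d, across seasons."""
    x = games[:, d]
    rest = np.delete(games, d, axis=1).mean(axis=1)          # win % excluding game d
    x_c, y_c = x - x.mean(), rest - rest.mean()
    return (x_c @ y_c) / (x_c @ x_c)

slopes = np.array([slope_for_day(d) for d in range(n_games)])

print(f"Opening Day slope: {slopes[0]:+.4f}")
print(f"All-days slopes:   mean {slopes.mean():+.4f}, sd {slopes.std():.4f}")
print(f"Days with a larger slope than Opening Day: {(slopes > slopes[0]).sum()}")
```

On simulated data like this, Opening Day’s slope should land comfortably inside the spread of all 162 slopes, with plenty of days above it and plenty below zero, which is exactly the correspondent’s “flat out random” result.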
Gelman writes:
The lesson here is, as always, to take the problem you’re studying and embed it in a larger hierarchical structure. You don’t always have to go to the trouble of fitting a multilevel model; it can be enough sometimes to just place your finding as part of the larger picture. This might not get you tenure at Duke, a Ted talk, or a publication in Psychological Science circa 2015, but those are not the only goals in life. Sometimes we just want to understand things.
I like Gelman’s blog a lot. But the real moral of the economist’s story, and of Ariely’s, might instead be: Take every chance you can to work and think transparently, in public, under maximum scrutiny, including that TED talk. Invite scrutiny not only of your approach, methods, and findings, but also of your conclusions and recommendations.
The opposite of fraud or bad public expertise isn’t scientific hermeticism. It’s doing science and public expertise in ways that invite scrutiny. And in the case of public expertise, that means being as public as possible.
The best story isn’t the most entertaining. It’s the one that stands up.