Interesting article from Andrew Ferguson in The Weekly Standard, “Making It All Up”: “Behavioral science suffers from these afflictions only more so. Surveys have shown that published studies in social psychology are five times more likely to show positive results—to confirm the experimenters’ hypothesis—than studies in the real sciences.”
This has been swirling around the science community for years. The article refers to a ten-year-old study by Ioannidis showing how statistical manipulation can be used to cook scientific results, and (even more interestingly) can produce something a lot like “cooked” results even when researchers follow proper protocols with all integrity.
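Ioannidis’s point about honest methods still yielding spurious findings can be sketched with a toy simulation (Python, with invented numbers; this is my illustration, not anything from the article): if a field runs many experiments where no real effect exists, the conventional p < .05 threshold alone guarantees a steady trickle of “positive” results.

```python
# Toy illustration: run many experiments where the null hypothesis is TRUE
# (there is no real effect) and count how many still clear the conventional
# p < .05 bar purely by chance.
import math
import random

random.seed(42)

ALPHA_CUTOFF = 1.96   # two-sided z critical value for p < .05
N_EXPERIMENTS = 1000
SAMPLE_SIZE = 50

false_positives = 0
for _ in range(N_EXPERIMENTS):
    # Each "experiment" samples from a standard normal: the true mean is 0,
    # so any significant result here is a false positive.
    sample = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_SIZE)]
    mean = sum(sample) / SAMPLE_SIZE
    # Known sd = 1, so the z statistic is mean / (1 / sqrt(n)).
    z = mean / (1.0 / math.sqrt(SAMPLE_SIZE))
    if abs(z) > ALPHA_CUTOFF:
        false_positives += 1

# By construction, roughly 5% of these null experiments should come out
# "significant" even though no effect exists anywhere.
print(f"{false_positives} of {N_EXPERIMENTS} null experiments were 'significant'")
```

Selective reporting then compounds the problem: if only the “hits” get published, those chance results come to look like the whole literature.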
Part of the problem—especially in behavioral research—is that there’s enormous pressure to produce and publish positive results. The best defense against spurious results is to repeat the experiment to see if the same effect appears again. Ferguson reports that a recent Reproducibility Project found that a full two-thirds of published behavioral studies failed to produce the same result a second time around.
There are all kinds of factors in play here. Some researchers are dishonest, altering data, suppressing negative results, and so on. Others are naive to the way even the best research designs can produce false positives. Others succumb to the entirely human pressure to publish. And once they publish, they face another entirely human pressure, which is to trust their work no matter what.
I conducted a major research project several years ago for the mission organization I worked with, using a lengthy self-report survey with a very large sample. It was a paper survey, just prior to the availability of Internet surveying, and it took a fair monetary investment to get it done. The leader who funded it wanted above all else to get a certain result so he could encourage other leaders to make certain adjustments. The survey did not return the result he wanted. He was not happy.
He was particularly disturbed that the results almost showed what he was looking for, but not to a degree that achieved statistical significance. I tried everything to help him understand that the difference between the means was too small, relative to the variance and given the size of the sample, to reach significance.
(I have just illustrated another problem with the research literature: it’s easy to speak in such a way that few people even know what you’re talking about, and yet impress them well enough that they’ll assume you know what you’re doing. That wasn’t especially technical, as statistics go, but I’m sure it would be opaque to many readers anyway.)
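For readers who want that jargon unpacked, here is a minimal sketch (Python, with invented numbers standing in for the survey data—nothing here is from the actual study) of what “the difference between the means was too small, given the variance and the sample size” looks like in practice.

```python
# Hypothetical illustration: two groups whose true means differ only
# slightly relative to their spread, so the observed difference may well
# fail to reach statistical significance.
import math
import random

random.seed(0)

group_a = [random.gauss(50.0, 15.0) for _ in range(100)]
group_b = [random.gauss(53.0, 15.0) for _ in range(100)]

def welch_t(x, y):
    """Welch's t statistic for the difference between two sample means."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variance of x
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)  # sample variance of y
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

t = welch_t(group_a, group_b)
# For samples this size, |t| must exceed roughly 1.97 for p < .05.
print(f"t = {t:.2f}, significant at .05: {abs(t) > 1.97}")
```

The same mean difference of 3 points would easily reach significance if the spread were smaller or the samples much larger; significance is always a three-way bargain among effect size, variance, and sample size.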
I was pressured heavily by my funding source to report a certain positive result. Do you suppose it’s possible I’m not the only person that’s ever happened to? Do you suppose it’s possible any researchers anywhere have ever given in to such pressure? (That’s intended to be an easy question.) But it doesn’t have to be a funding source putting on the pressure: it could also be the tenure committee.
Part of that study of mine was reproduced a few years later. It failed to match one of my most favored findings. That was when I discovered another totally human effect rising up inside me: “MY study was better! MY study was right!” My study’s methods probably were better in this case, but I couldn’t help noticing how resistant I was on the inside even to thinking about my findings being wrong. I didn’t even want to think about whether the later study was valid, much less explore to see what could be learned from it.
The point of all this? Science is a human enterprise. I grew up in one of the most science-saturated cities in America, barring university locations: Midland, Michigan, research home to the Dow Chemical and Dow Corning corporations. Some of my friends there have wondered if I have something against science. I don’t. My dad was a world expert in the production of hyper-pure silicon, and I’m pretty proud of my dad for it. I’m also well aware of how dependent this blog is on silicon.
I could go on to boast of my appreciation for science in many other ways, but it would be silly; everyone appreciates science, I think. There is an atheist conceit out there that Christians are anti-science, but that itself is an anti-scientific opinion arising from bias and skewed sampling procedures that would make any Research Methods 101 student blush.
My only problem with science is the way we sometimes ascribe it almost godlike properties as the one source of all reliable knowledge. There are people who describe it that way. They’re naive to the human element in science (not to mention a host of other errors in their theories of knowledge).
I am not a working scientist, but I have done enough original research to be able to recognize there are forces in play beyond a pure search for truth. Can a scientist recognize and reject those forces? I think I did, in the end; and I’m pretty sure most scientists do, most of the time. But not all of the time. Sometimes the forces go unrecognized, sometimes scientists give in to normal human influences leading them to discount results they don’t like, and sometimes (a minority of times, I’m sure) scientists lie.
One more point before I go. This seems to be more of a problem in the behavioral sciences than the physical sciences. It’s harder to get away with mistakes in chemistry or physics than in sociology or anthropology. Why? My answer to that comes in two parts. The first is that science is the one method that shines above all in discovering what’s true about the natural world of physical cause and effect. The second part I leave to you to work out. (If your answer is any version of “too many variables,” I would consider that a good explanation, but not good enough.)