“Making It All Up” — The Weekly Standard: Science is a Human Enterprise


Interesting article from Andrew Ferguson in The Weekly Standard, “Making It All Up”: “Behavioral science suffers from these afflictions only more so. Surveys have shown that published studies in social psychology are five times more likely to show positive results—to confirm the experimenters’ hypothesis—than studies in the real sciences.”

This has been swirling around the science community for years. The article refers to a ten-year-old study by Ioannidis showing how statistical manipulation can be used to cook scientific results, and (even more interestingly) can produce something a lot like “cooked” results even when researchers follow proper protocols with all integrity.

Part of the problem—especially in behavioral research—is that there’s enormous pressure to produce and publish positive results. The best defense against spurious results is to repeat the experiment to see if the same effect appears again. Ferguson reports that the recent Reproducibility Project found that a full two-thirds of published behavioral studies failed to produce the same result a second time around.

There are all kinds of factors in play here. Some researchers are dishonest, altering data, suppressing negative results, and so on. Others are naive to the way even the best research designs can produce false positives. Others succumb to the entirely human pressure to publish. And once they publish, they face another entirely human pressure, which is to trust their work no matter what.
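How even honest designs produce false positives is easy to demonstrate: a perfectly run experiment on a pure-chance process will still reach the conventional p &lt; .05 cutoff about one time in twenty. A minimal simulation, using only Python’s standard library (all numbers here are illustrative, not from any real study):

```python
import math
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def false_positive_rate(n_experiments=2000, n_flips=100, z_cutoff=1.96):
    """Run many 'experiments' on a fair coin (the null hypothesis is
    true by construction) and count how often chance alone produces a
    'statistically significant' result at the two-tailed .05 level."""
    hits = 0
    for _ in range(n_experiments):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        # z statistic for the observed proportion against the true 0.5
        z = (heads - n_flips * 0.5) / math.sqrt(n_flips * 0.25)
        if abs(z) >= z_cutoff:
            hits += 1
    return hits / n_experiments

rate = false_positive_rate()
print(f"false positive rate: {rate:.3f}")  # hovers near .05, by construction
```

Every one of those “hits” would look like a publishable positive result, even though there is nothing there — which is exactly why replication matters.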

I conducted a major research project several years ago for the mission organization I worked with, using a lengthy self-report survey with a very large sample. It was a paper survey, just prior to the availability of Internet surveying, and it took a fair monetary investment to get it done. The leader who funded it wanted above all else to get a certain result so he could encourage other leaders to make certain adjustments. The survey did not return the result he wanted. He was not happy.

He was particularly disturbed that the results almost showed what he was looking for, but not to a degree that achieved statistical significance. I tried everything to help him understand that the difference between the means was too small, relative to the variance and the sample size, to reach significance.

(I have just illustrated another problem with the research literature: it’s easy to speak in such a way that few people even know what you’re talking about, and yet impress them well enough that they’ll assume you know what you’re doing. That wasn’t especially technical, as statistics go, but I’m sure it would be opaque to many readers anyway.)
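For readers who want the jargon unpacked: the point is that a visible gap between two group averages can still be too small, relative to how spread out the answers are and how many people answered, to count as more than noise. A sketch with entirely made-up numbers (the function and figures are hypothetical, not from my actual survey):

```python
import math

def two_sample_z(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Large-sample test: is the difference between two means big
    enough, given the variability and the sample sizes, to be
    unlikely under chance alone? Values past ~1.96 reach p < .05."""
    standard_error = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return (mean_a - mean_b) / standard_error

# A visible-looking gap (3.9 vs. 3.7 on a 5-point scale), but with
# wide spread (sd = 1.4) and modest groups of 120 each:
print(round(two_sample_z(3.9, 3.7, 1.4, 1.4, 120, 120), 2))  # 1.11 -- short of 1.96

# The identical gap with much larger groups clears the cutoff:
print(round(two_sample_z(3.9, 3.7, 1.4, 1.4, 800, 800), 2))  # 2.86
```

The same difference between means can be “significant” or not depending on the variance and the sample size — which is what my funder found so hard to accept.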

I was pressured heavily by my funding source to report a certain positive result. Do you suppose it’s possible I’m not the only person that’s ever happened to? Do you suppose it’s possible any researchers anywhere have ever given in to such pressure? (That’s intended to be an easy question.) But it doesn’t have to be a funding source putting on the pressure: it could also be the tenure committee.

Part of that study of mine was reproduced a few years later. It failed to match one of my most favored findings. That was when I discovered another totally human effect rising up inside me: “MY study was better! MY study was right!” My methods probably were better in this case, but I couldn’t help noticing how resistant I was, on the inside, even to thinking about my findings being wrong. I didn’t even want to consider whether the later study was valid, much less explore what could be learned from it.

The point of all this? Science is a human enterprise. I grew up in one of the most science-saturated cities in America, barring university locations: Midland, Michigan, research home to the Dow Chemical and Dow Corning corporations. Some of my friends there have wondered if I have something against science. I don’t. My dad was a world expert in the production of hyper-pure silicon, and I’m pretty proud of my dad for it. I’m also well aware of how dependent this blog is on silicon.

I could go on to boast of my appreciation for science in many other ways, but it would be silly; everyone appreciates science, I think. There is an atheist conceit out there that Christians are anti-science, but that itself is an anti-scientific opinion, arising from bias and skewed sampling procedures that would make any Research Methods 101 student blush.

My only problem with science is the way we sometimes ascribe it almost godlike properties as the one source of all reliable knowledge. There are people who describe it that way. They’re naive to the human element in science (not to mention a host of other errors in their theories of knowledge).

I am not a working scientist, but I have done enough original research to be able to recognize there are forces in play beyond a pure search for truth. Can a scientist recognize and reject those forces? I think I did, in the end; and I’m pretty sure most scientists do, most of the time. But not all of the time. Sometimes the forces go unrecognized, sometimes scientists give in to normal human influences leading them to discount results they don’t like, and sometimes (a minority of times, I’m sure) scientists lie.

One more point before I go. This seems to be more of a problem in the behavioral sciences than the physical sciences. It’s harder to get away with mistakes in chemistry or physics than in sociology or anthropology. Why? My answer to that comes in two parts. The first is that science is the one method that shines above all in discovering what’s true about the natural world of physical cause and effect. The second part I leave to you to work out. (If your answer is any version of “too many variables,” I would consider that a good explanation, but not good enough.)


8 Responses to ““Making It All Up” — The Weekly Standard: Science is a Human Enterprise”

  1. My answer to the second part is that there is more reason for the scientist to care about the answer. In chemistry or physics there is less reason for someone to hope for some particular solution, so he can just accept what is there. But in sociology or anthropology someone may hope for a particular solution because it fits with his worldview, it seems to favor his politics or religion, or whatever.

    You can “recognize and reject” the things that lead away from truth to a certain extent, by caring more about truth, but you cannot do this 100% of the time and in every possible way, because that would imply you care about nothing except truth, which is impossible for a human being (and would be a bad thing even if it were possible).

  2. It’s harder to get away with mistakes in chemistry or physics than in sociology or anthropology.

    People are a lot harder to test. They don’t necessarily tell the truth on surveys, they can change their minds, and sometimes they don’t know what they think.

  3. I got a kick out of this. My daughter and I have been following this fairly closely since it occurred, as she is currently getting a PhD in Marketing and taking psychology research methodology classes. She has an undergraduate degree in Statistics, so she has spent a lot of time on the subjects of scientific method, causality, and replicability. I asked her if she thought marketing research was fraught with the same shoddy research problems. She replied that, even more than hard sciences like physics and chemistry, marketing has an immediate check on the research that is performed: if we do it right, stuff sells. I don’t know whether I buy it completely, and we both got a laugh. Maybe the check on social research is whether the expected social response occurs (stuff sells).

  4. One question I have in the back of my mind is why do people expect to get the same results from the same experiment? Why not accept the different results and conclude that nature is not regular all the time – that human beings are not mechanistic machines? There are probably many answers to this question depending on who it is you ask.

  5. Because the whole assumption when publishing results in a science journal is that these results will generalize to other like circumstances.

    So if the question is whether people will respond X when presented with stimulus Y, then that’s either generally true or generally not true (or sometimes true or partly true…). There are many Xs and Ys for which this relation would be generally true, and the social sciences try to discover those Xs and Ys. There’s nothing wrong with that.

  6. It is an assumption, but what is the underlying justification, considering that everything changes over time? Culture strongly influences human behavior, so I wouldn’t expect the same experiment to generally yield the same results ten years later.

    I realize I’m speaking from a position of ignorance so maybe all I’m doing is making that all too clear. 🙂

  7. @Tom

    Because the whole assumption when publishing results in a science journal is that these results will generalize to other like circumstances.

    Which is a reasonable assumption, because without it why would people bother doing any research at all?

  8. Despite the fact that the field of Medicine is generally referred to as a “Practice” (something every patient is wise to remember), I think it is safe to say that medical research is considered a “hard science.” Nevertheless, we have all seen the precise human temptations you mention, plus greed, selfish desires, and Lord knows what else, present the world with countless “faultless studies” that are later proven not only incorrect but oftentimes the opposite of the original findings. That doesn’t necessarily mean the science is wrong, but it continues to demonstrate that many of the humans who practice it are.