Signature in the Cell: A View of Its Reviewers

I have a telephone interview scheduled with Dr. Stephen C. Meyer, author of Signature in the Cell: DNA and the Evidence for Intelligent Design, this afternoon (update: find that interview here). Part of my preparation has been reading through reviews of the book on Amazon, where I noticed some patterns that I decided to quantify. The results do not represent the whole world of the Intelligent Design controversy—there’s nothing random about the sampling—but they are intriguing nonetheless.*

Strong Reactions

Of the 200-plus reviews of the book posted on Amazon, 75 percent were highly positive (5 stars) and 17 percent were very negative (1 star). Six percent were 4-star ratings, and the remaining 1 percent (percentages are rounded) were either 2 or 3 stars. This is about as non-normal a distribution as you’ll ever see; an upside-down bell curve, with a skew toward the positive. The obvious interpretation of a response set like this is that the book produces strong reactions. One way or another it’s a great book, according to its reviewers: either great in its contribution to science, or else greatly upsetting and disturbing to science.

As I went through the reviews in detail, I coded each one on four factors:

  • Had the reviewer actually read the book?
  • Did the reviewer introduce theological considerations into their assessment of the book?
  • Did the reviewer come out with a dogmatic statement for or against Intelligent Design?
  • Did the reviewer use negative and/or abusive language in describing representatives of the position they oppose?


Did They Read the Book?

On the first of these I gave the benefit of the doubt to each reviewer: I assumed they had read the book unless they stated otherwise, or unless they said something that clearly demonstrated they had not read it. One reviewer I placed in that latter category said that the book presented no research and no scientific predictions. That’s an old anti-ID trope that I’m guessing he or she pulled from the usual set of talking points. In the case of this book, it’s just obviously false, as anyone should be able to see even by taking a quick look at the book’s Amazon preview. Obviously this reviewer did not invest even that minimal effort before giving the book a negative review.

Some reviewers said they “skimmed” the book or read only portions of it. I placed them into a middle grouping. A small number said they had read the book, yet seriously misrepresented its content, which strongly suggests they had not. Rather than concluding that they were lying, I placed them into the same middle group. Thus there were three categories:

  • Likely read the book (granting the benefit of the doubt).
  • Communicated that they read only part of the book, or said they had read it but made errors of representation that make that claim doubtful.
  • Communicated that they did not read the book.


So then, who read the book? Of those who rated the book favorably (5 stars), 94 percent likely read it, 2 percent communicated that they had not read it, and 4 percent were in the middle grouping. Of those who gave the book a 1-star rating, only 26 percent likely read the book. About 43 percent of the very negative ratings came from people who read the book only in part (or whose reading of it was in doubt), and 31 percent came from people who felt free to pronounce their opinion without reading it at all.

Here’s another way to look at the same information. Three-quarters (75 percent) of reviews on Amazon were very favorable. Counting only those who (with the benefit of the doubt) actually read the book, however, that proportion jumps to 87 percent.
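For those who want to see the arithmetic behind that jump, here is a back-of-the-envelope sketch. The counts are hypothetical, reconstructed from the rounded percentages above (assuming roughly 200 reviews); the read rates for the 2- to 4-star groups are my guesses, since only the 5-star and 1-star rates are given, so the exact 87 percent figure depends on the actual counts in the datasheet.

```python
# Hypothetical counts, back-calculated from the rounded percentages above.
# Only the 5-star (94%) and 1-star (26%) "likely read it" rates are stated
# in the post; the middle groups are guesses, for illustration only.
reviews = {
    # rating: (number of reviews, estimated number who likely read the book)
    5: (150, round(150 * 0.94)),  # 141 likely readers
    4: (12, 12),                  # assumed: essentially all read it
    3: (2, 2),                    # assumed
    2: (2, 2),                    # assumed
    1: (34, round(34 * 0.26)),    # about 9 likely readers
}

total_reviews = sum(n for n, _ in reviews.values())
total_readers = sum(r for _, r in reviews.values())

print(f"5-star share of all reviews:    {reviews[5][0] / total_reviews:.0%}")  # ~75%
print(f"5-star share of likely readers: {reviews[5][1] / total_readers:.0%}")  # mid-to-high 80s
```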

Theological Considerations?

What about theology? I used a simple yes/no distinction for this one. If theological considerations were present in the review, either in the arguments (pro or con) or in the conclusions the reviewer drew, I coded the review as a positive on the theology scale.

I’ll start again with those who gave the book a 5-star rating. In this group, only 8 percent introduced theological considerations into their reviews. But among those who gave the book a 1-star rating, 51 percent made theological considerations a part of their case. The positive reviewers got it right: the book is about science and philosophy of science, not about theology. One wonders how those 51 percent missed that.

Dogmatic In Their Views?

Both of the analyses above were significant on a chi-square test at the p < .001 level (reported by the statistical software as “.000”). The third analysis did not have enough cases to allow for significance testing, and I must make it clear that it’s not as reliable a measure as the first two. The question was: Did the reviewer make a dogmatic statement for or against ID, to the point of saying that the question is settled once and for all? That was hard for me to operationalize in a bias-free way, so please take all due care in interpreting what I present next, because its reliability is not at all certain.
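For the statistically curious, here is a rough sketch of the chi-square test I just mentioned for the first two analyses. The contingency table uses hypothetical counts back-calculated from the rounded percentages above (roughly 150 five-star and 34 one-star reviews); it illustrates the test, not my actual datasheet.

```python
# Illustrative chi-square test of star rating vs. read-the-book status.
from scipy.stats import chi2_contingency

#            read it  partial/doubtful  did not read it
observed = [
    [141,  6,  3],   # 5-star reviewers (~94% / 4% / 2%)
    [  9, 15, 10],   # 1-star reviewers (~26% / 43% / 31%)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.2g}")
# A split this lopsided yields p far below .001 -- the value statistical
# software rounds off and reports as ".000".
```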

As I coded the reviews (keeping that disclaimer in mind), this is how it came out:

Of those who rated the book with 1 star, only 9 percent avoided dogmatism of the sort just described. More than nine-tenths said something to the effect that the question is settled and there’s no need to pursue it any further. Many of them were more colorful than that: the question is settled, and attempts to keep pursuing it are just lies from the “Dishonesty Institute.”

But those who rated the book highly were more open-minded on the issue: only 20 percent of that group made statements to the effect that “the question is now settled.”

I wondered whether those who had not read the book might be more (or less) dogmatic in their view of ID than those who had read it. The result: if the analysis included just those who reviewed the book negatively, no significant difference was found (those who read the book were neither more nor less dogmatic than those who had not read it). If I included all the reviews, there was a definite difference between those who had read the book and those who had not read it. But that doesn’t reveal anything we didn’t already know from earlier analysis showing that among the 75 percent of reviewers who rated the book with 5 stars, the overwhelming majority had read the book and had also avoided dogmatism in their conclusions.

Personally Negative or Abusive Language

Another analysis that was significant at the p < .001 level (again reported as “.000”) concerned personally negative language. Negative reviewers of the book were very negative: 86 percent of 1-star raters used pejorative personal language (accusations of stupidity, unthinkingness, or worse) with respect to Meyer or ID proponents generally. Positive reviewers were not perfectly kind either: 13 percent made personally critical comments about their opponents.

But ID opponents’ language was considerably stronger. My coding for negative language was inclusive: I counted any negative personal reference whatsoever made to holders of the opposing viewpoint, even at mild levels like, “Those who disagree with x must not be thinking clearly.” Some of the language was genuinely abusive (accusations of dishonesty, lying, or even “mendacious intellectual pornography”). Abusive language of that degree was present in fully 57 percent of 1-star reviews of the book, but only 3 percent of 5-star reviews.

Conclusions

I can’t say what larger population these reviewers represent, so these conclusions can only fairly be taken as a description of the actual reviewers on Amazon. Negative reviewers were much less likely than positive reviewers to have read the book, and much more likely to have their opinions of the book affected by theological considerations. To the (unknown) extent that my third analysis was reliable, they were probably also much more likely to deliver closed, dogmatic conclusions about the overall issues involved. The Personal Negativity analysis showed that ID opponents among this set of reviewers are putting a lot of negative emotion into the controversy. Other reviewers have wondered why, and have made suggestions. Is it anger? Fear (feeling significantly threatened)? They don’t know, and I can’t say either, but one way or the other it’s clear that ID opponents among this set of reviewers are not behaving very well in public.

And for my final comment I’ll draw upon my own impressions of the larger world of controversy on this issue. ID antagonists, in my experience, typically complain that proponents are closed-minded and theologically driven. This group of reviewers turns that completely upside down. ID opponents often charge proponents with anti-intellectualism, too. But who in this group of reviewers was more likely to spout an opinion without bothering to read what they were talking about?

*Note 1/16/10: I have revisited the reviews and taken a slower, more thorough look at each of them. I generated more reliable operational definitions for the factors, slightly revised some of the numbers here based on that review, and added another factor. My coded datasheet is available for your inspection and review.
