- Knowledge and Bias: A First Response to Tom Clark
- Knowledge and Error: Second Response to Tom Clark
- Knowledge and Evidence: Third Response to Tom Clark
- Tom Clark, Empiricism, and Ethics
- Tom Clark, Empiricism, and Ethics, Part Two
Yesterday I made my first response to Tom Clark’s naturalistic epistemology, pointing to self-contradictions I believe it contains. I use the term “naturalistic epistemology” intentionally, for it seems to me his approach to knowledge is very strongly biased toward naturalistic conclusions.
It was not these internal contradictions, however, that interested me most about his paper. It was his approach to knowledge in general, which today I am looking at from an exploratory angle. I do not expect this to lead to a definitive statement, “X approach to knowledge is wrong,” or “Y approach is right.” I’m not entirely sure as I begin writing where this will lead, actually. Writing is a learning process: you find out what you really think. As I begin here, I only think I know what I think.
It begins with the observation that Clark leans heavily on two knowledge filters: the “insulation requirement” and the “public object requirement.” He explains them thus:
To back up our claim that experience captures reality we must rule out such influences, insulating our beliefs as best we can from subjective bias and possibly mistaken conventional wisdom.
Unless there’s intersubjective data, a public object of some sort we can all in principle see or sense in some fashion and thus agree exists, it doesn’t matter how many millions of individuals report subjective experiences of [G]od or the soul: they could all be mistaken, just as all those reporting experiences of alien abduction could be (and likely are) mistaken.
He goes on to add that these two “constitute basic epistemic good practice, without which no factual claim about the world has credibility.”
I called them knowledge filters, for that is what they are: conditions that must be applied to any putative knowledge before it can be accepted as real knowledge. The entire thrust of his epistemology in this article, in fact, is pointed toward filters. He is very concerned to achieve complete certainty before a putative piece of information is admitted into the realm of knowledge. Similarly he also says,
The only reliable basis for knowledge, the only route from subjectivity to objectivity, is to relentlessly subject a belief to doubt, then to allay the doubt (or confirm it) by gathering evidence that’s independent of one’s commitment to the belief. To the extent that worldviews, however widely held, fail to test their factual claims using publicly available evidence, and to the extent these claims are incapable of being tested, they fail as contenders for truth.
Religious and other non-empirical ways of knowing don’t sufficiently respect the distinction between appearance and reality, between subjectivity and objectivity. They are not sufficiently on guard…
If it is not “relentlessly” tested, it cannot be called knowledge.
He is probably aware of a statistical technique called power analysis.* It is a mathematical means of estimating, before a research project is undertaken, how likely the study is to support a hypothesis, given that the hypothesis is actually true. Any research that involves examining a sample of a larger population is prone to errors of two opposite kinds. By chance, you might take a sample that makes the hypothesis appear to be true when actually it is not. Or by chance, you might take a sample that makes the hypothesis appear not to be true when it actually is.
Power analysis can help a researcher estimate the chances of making the second error: what is the probability that, by chance, we would miss the effect just because our sample didn’t represent the whole population accurately? There are two factors contributing to statistical power that matter little to our discussion here—sample size and the strength of the hypothesized effect. There’s another factor that matters a great deal to this discussion: how high is the bar being set for this research? How certain do we think we have to be before we’ll say the outcome would count as real knowledge? The higher the bar you set for it to count, the more likely you are to commit the second error I mentioned: considering the hypothesis not to be true when it actually is.
There is therefore a trade-off: the tighter the filter—the more you protect yourself from the error of falsely seeing an effect where none exists—the more likely you are to miss an effect where it actually exists. In statistical power analysis we can quantify that relationship: a research design with extremely tight filters has low power to detect reality. I think there is an analogy between this and matters of theology and philosophy. It seems to me that Clark’s filters are so tight, his design has nearly zero power to detect non-natural realities, if they exist. He has made certain that whether they exist or not he could never see them.
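To make the trade-off concrete, here is a small sketch of my own (nothing from Clark's paper): the power of a simple one-sided z-test for a fixed effect and sample size, computed at increasingly strict significance thresholds. The effect size of 0.3 and the sample of 50 are assumed numbers chosen purely for illustration.

```python
from statistics import NormalDist

def power_one_sided_z(effect_size: float, n: int, alpha: float) -> float:
    """Power of a one-sided one-sample z-test (sigma = 1) for a true
    standardized effect `effect_size`, sample size `n`, threshold `alpha`."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)      # the "bar" the evidence must clear
    shift = effect_size * n ** 0.5     # expected test statistic if the effect is real
    return 1 - z.cdf(z_crit - shift)   # chance a real effect clears the bar

for alpha in (0.05, 0.01, 0.001, 1e-6):
    p = power_one_sided_z(effect_size=0.3, n=50, alpha=alpha)
    print(f"alpha={alpha:g}  power={p:.3f}")
```

On these assumed numbers, tightening the threshold from 5% to one in a million drives the chance of detecting this (real, by construction) effect from roughly two in three down to well under one percent: the filter itself, not the evidence, determines the verdict.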
I suggest that this is poor research design. One’s epistemology should not be of a sort that prevents one from seeing God, if God exists.
Now I think Clark might respond this way: “I’m being appropriately careful not to call putative ‘knowledge’ of God knowledge unless we can really know that it is knowledge.” To this I have three general responses:
1) The filters, as I wrote yesterday, are unreasonably tight, so tight that they filter out even themselves. If my analysis is correct they are self-contradictory.
2) He is not just holding theological knowledge at arm’s length, taking an agnostic stance. He makes a number of positive anti-theological assertions:
When it comes to representing reality, there is no coherent, ethically responsible substitute for science and other empirical disciplines. The alternatives—faith-based religions, empirically unfounded secular ideologies, and commercial agendas hostile to evidence—often claim to be objective representations of how the world is in various respects, but have no entitlement to such claims.
There’s consequently no reason to grant [any religious system] any domain of cognitive competence.
This is to say that being epistemically responsible, not taking appearances at face value, inevitably pushes us toward intersubjectivity and science. This in turn heightens the plausibility of the claim that there’s nothing over and above the natural world, what science shows to exist.
Such a way of knowing, were it available, would give us confidence that [G]od, the soul, contra-causal free will, and perhaps other phenomena science can’t confirm (paranormal powers, astrological influences, etc.) actually exist. The difficulty, however, is that there’s no epistemic space in which to construct such an alternative.
Indeed, the Palinesque parochialism that disdains correction by science and knowledge-based expertise is manifestly dangerous.
Although organizations promoting science shouldn’t be contemptuous of religious faith and revelation—that’s counter-productive and unwarranted—they should challenge the idea that non-empiricism has cognitive competence in some purportedly real domain, such as the supernatural.
If we take ourselves to be governed by rational rules of evidence, then as Victor Stenger argues we should agree that [G]od is a failed hypothesis.
A more reasonable approach would be to recognize that his epistemic filters are so tight that his examination of the God “hypothesis” employs a research design with impossibly low power to detect him. A responsible researcher would acknowledge this and say, “We did not find God, but we did not show that he does not exist, either. Further research with more power would be required before drawing that conclusion.” Clark is not as careful as that.
3) Why set the filters so tight? In science, the tightness of the filter (the significance level) is set by comparing risks. We know there’s always a possibility that we’ll reach a false conclusion by chance, so we ask which error would be easier to live with if we made it: supporting a false hypothesis, or failing to support one that’s true. Often researchers will accept a 5% chance of calling a false hypothesis true (these things are easily quantifiable through basic statistics). Sometimes that’s too generous, and researchers will accept no more than a 1% chance, or less. Clark’s filters, if they could be quantified, would surely be tighter than that. What is the risk to him if he opens them up?
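The two error rates being weighed here can be estimated directly. Below is a rough Monte Carlo sketch of my own construction (the effect size, sample size, and thresholds are all assumed for illustration): draw samples from a world with no effect to count false positives, and from a world with a real effect to count misses.

```python
import random
from statistics import NormalDist, mean

random.seed(0)

def z_stat(sample):
    """One-sample z statistic against a null mean of 0 (sigma assumed 1)."""
    return mean(sample) * len(sample) ** 0.5

def error_rates(alpha, n=50, effect=0.3, trials=5000):
    """Estimate (false-positive rate, missed-effect rate) at threshold alpha."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    # World 1: no effect exists; how often do we "see" one anyway?
    fp = sum(z_stat([random.gauss(0, 1) for _ in range(n)]) > z_crit
             for _ in range(trials)) / trials
    # World 2: a real effect exists; how often do we fail to see it?
    fn = sum(z_stat([random.gauss(effect, 1) for _ in range(n)]) <= z_crit
             for _ in range(trials)) / trials
    return fp, fn

for alpha in (0.05, 0.01, 0.001):
    fp, fn = error_rates(alpha)
    print(f"alpha={alpha:<6} false-positive rate={fp:.3f}  missed-effect rate={fn:.3f}")
```

The simulation makes the asymmetry visible: each tightening of the threshold buys a small reduction in false positives at the cost of a much larger increase in missed real effects.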
The danger is that he would see spiritual reality where none exists. The risk that comes from keeping the filters too tightly closed is that he will fail to see a spiritual reality that actually does exist. Pascal’s Wager intrudes on one’s thoughts here, but this is different. This is not about one’s decision about belief, but something prior to that: the way a person approaches the question of belief. I submit that failing to see a spiritual reality that actually exists is very dangerous, more dangerous than falsely “seeing” one that does not; and that therefore creating an epistemology that cannot see anything but the natural world is not wise.
So that is my exploration of what I take to be the ideas behind Tom Clark’s epistemology. If I am right about any of this, then here is where we have arrived: his epistemology is seriously flawed. Even if his filters were not self-contradictory, as I suggested yesterday, they would still be poorly designed for detecting what might possibly exist in reality.
*You have my permission to skip this paragraph and the next if you wish, though I really am trying to speak English in them. Those with a background in statistics may cringe at the resulting imprecision of my expression, but then if you know statistics, you don’t need me to be that precise anyway. This is for those who do not.