“Evolution News & Views: My ‘Neuroscience Denial’?”


Some of you have been watching the exchange between Dr. Steven Novella and Dr. Michael Egnor on neuroscience and mind. The most recent installments are here (Dr. Egnor’s), responding to this from Dr. Novella. Dr. Egnor writes in summary to his post,

I’ll get back to Dr. Novella’s specific arguments in my ensuing posts, but Dr. Novella’s invocation of “neuroscience denialism” leaves me dumbfounded.

He’s right. Dr. Novella is playing rhetorical games. This time it’s almost pathetically transparent. Why, oh why do anti-ID people not engage the real argument?


26 Responses to “Evolution News & Views: My ‘Neuroscience Denial’?”

  1. While I haven’t the time to read all of the commentary referenced in this post – I look forward to reading it in the near future.

    Thank you for your posts.

  2. M = Materialism
    D = Dualism

    P_prior(M) = 0.5, P_prior(D) = 0.5.

    P(central nervous system|M) = 1
    P(central nervous system|D) < 0.5

    P(brain with neurons|M) = 1
    P(brain with neurons|D) < 0.5

    P(physical memory storage|M) = 1
    P(physical memory storage|D) < 0.5

    P(neural computing capacity|M) = 1
    P(neural computing capacity|D) < 0.5

    P(Capgras Delusion|M) = 1
    P(Capgras Delusion|D) < 0.5

    P(Alzheimer’s|M) = 1
    P(Alzheimer’s|D) < 0.5

    + hundreds of other predictions of neuroscientific model of mind

    P(variety of ESP effects|M) << 1
    P(variety of ESP effects|D) ~ 0.5

    Observations:
    central nervous system
    brain with neurons
    physical memory storage
    neural computing capacity
    Capgras Delusion
    Alzheimer’s

    + hundreds of other VERIFIED predictions of neuroscientific model of mind

    NOT variety of ESP effects

    Conclusion:
    P(D) << 1
    P(M) approximately 1.

    Sigh.
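The update this commenter is gesturing at can be made concrete in a few lines of Python (a toy illustration using only the priors and likelihoods stated above; the 0.5 figures are the commenter’s upper bounds, not measured values):

```python
# Toy Bayesian update over M (materialism) vs. D (dualism),
# using the priors and likelihoods stated in the comment.
prior = {"M": 0.5, "D": 0.5}

# P(observation|M) = 1, P(observation|D) = 0.5 (an upper bound).
likelihood = {"M": 1.0, "D": 0.5}

observations = [
    "central nervous system",
    "brain with neurons",
    "physical memory storage",
    "neural computing capacity",
    "Capgras delusion",
    "Alzheimer's",
]

post = dict(prior)
for _ in observations:
    # Multiply in this observation's likelihood, then renormalize.
    unnorm = {h: post[h] * likelihood[h] for h in post}
    total = sum(unnorm.values())
    post = {h: unnorm[h] / total for h in unnorm}

print(post)  # P(M) = 64/65 ≈ 0.985, P(D) = 1/65 ≈ 0.015
```

Each observation doubles the odds in favor of M, so six observations move 1:1 prior odds to 64:1, which is the shape of the commenter’s conclusion.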

  3. P(Capacity for Rationality|M) < 0.1
    P(Capacity for Rationality|D) = 1

    P(Consciousness Exists and is Not Illusory|M) < 0.1
    P(Consciousness Exists and is Not Illusory|D) = 1

    Conclusion: D

  4. What the heck, I’ll give it a try…

    P(induction leads to true beliefs) < P(deduction leads to true beliefs)

    P(Beliefs about M are true|M) < P(Beliefs about M are true|D)

  5. Oh, and by the way, doctor(logic), did you read what Egnor wrote before you responded? It doesn’t show in what you wrote, if you did.

    I did read it. I think I lost some brain cells in the process.

    P(Capacity for Rationality|M) < 0.1
    P(Capacity for Rationality|D) = 1

    Really? Why does dualism by itself guarantee rationality? Or theism, for that matter. Couldn’t God (or a god) fool us into thinking we can think rationally when we don’t?

    Capacity for rationality is an assumption of dualistic theories just as much as it is an assumption in naturalistic theories. The belief that our rational capability is not an assumption of naturalism is a myth.

    And, as for materialism, I assume you refer to the EAAN. Can you not imagine any environment in which rationality has a survival advantage? How about an environment whose rules change faster than the reproduction rate of the species? Or, equivalently, a species whose niche is migrating from one environment to another environment with other rules?

    Of course, it is so. Rationality is a faster algorithm than genetics. A single individual can learn and adapt to an environment faster than is possible by just selecting members of species for fitness over many generations.

    Rationality as a strategy is a trade-off. If the environment were in perfect stasis, never changing, rationality would not be a good strategy. Crocodiles might be as perfectly adapted to their environment as any animal could be. But they are not adapted to adapting. They would not be good in a desert or a temperate forest or the tundra. Humans are adapted to adapting. They can move from one environment to another in a generation. They can create tools and plan ahead. If humans were trapped in a single environment, then you could make an argument that there’s no survival advantage to rationality, but it’s plainly not the case.

    You can see why science of mind is the next front in the war on evolutionary science. The ID position and the dualist position are both crippled by the same god-of-the-gaps reasoning. They both ignore the statistics, and both fail to make any predictions.

    P(Consciousness Exists and is Not Illusory|M) < 0.1

    If oceans reduce to H2O, they don’t cease to be oceans. The same goes for consciousness. Consciousness is only an illusion if you define consciousness to be non-physical (i.e., beg the question).

  6. Is anything “beyond” materialism “dualism”? If so, surely the inequalities:
    P(central nervous system|D) < 0.5
    P(brain with neurons|D) < 0.5
    P(physical memory storage|D) < 0.5

    are “out of the ether”. If not, then surely there is no need to beat upon such a bizarre straw-man position?

  7. Doug,

    So you’re saying that given what was known (or, rather, what wasn’t known) 500 years ago, you would have thought that souls don’t have memory, nor means of information processing, nor the ability to communicate with flesh?

    I’ll go back to a court case example, since that’s the simplest analogy.

    Suppose a jewel is stolen, and your friend Fred is a suspect. You like Fred, so you instinctively believe Fred is innocent. However, Fred’s fingerprints are soon found on the disabled security mechanism, and his bank account is mysteriously $200K larger than it was before the theft. Consequently, the police believe Fred is guilty. However, you still insist that Fred is innocent, and must have been framed. The problem is that you lack evidence, and you cannot explain how the circumstantial evidence came to incriminate Fred.

    If we start from a statement of the event and a description of the evidence, it is rational to conclude that Fred is guilty. Why? Because Fred’s prints being on the mechanism is predicted by the Fred Is Guilty theory, not by the Fred Is Innocent theory.

    P( Fred’s prints | Fred Guilty) >
    P( Fred’s prints | Fred Innocent)

    Surely,

    P(Fred’s prints|Fred Innocent) < 0.5
    P($200K deposit|Fred Innocent) < 0.5

    However, to defend Fred, you theorize that, in some unknown way, someone framed Fred for the crime. Alas, the next day, the closed circuit TV tapes are discovered showing Fred appearing at the scene just before the crime takes place. Things look even more grim for Fred.

    P(CCTV footage|Fred Innocent) < 0.5

    It would now seem that it is perhaps 8 times more likely that Fred is guilty than innocent (three pieces of evidence, each at least twice as likely under guilt: 2 × 2 × 2 = 8).

    So you formulate the New and Revised Fred Is Innocent theory. In this theory, someone else stole the jewel, then planted the evidence and spoofed the CCTV evidence. Clearly, the unknown evil genius behind the theft can pass through security systems at will, has vast quantities of cash, and can fabricate evidence with ease. There seems to be nothing the unknown bandit cannot do. The only problem is that you have no evidence to back up your theory. The bandit is very good at covering his tracks.

    When the case goes to court and the prosecution says

    P(Fred’s prints|Fred Innocent) < 0.5

    You shout…

    “That’s a straw man! We are not here to defend the generic Fred Is Innocent theory! No one who believes Fred is innocent actually believes that Fred’s prints were not planted by some unseen supervillain! Fred was framed by someone unseen, someone beyond the reach of the courts! And you, the jury, ought to place this finely-tuned Fred Is Innocent theory on an equal footing with the Fred Is Guilty theory!”

    The jury begins to laugh at you.

    What’s the lesson? You cannot fine-tune your theory unless you successfully predict something as peculiar as your fine-tuning. Suppose that it is 8 times more likely that the Fred Is Guilty theory is true than the generic Fred Is Innocent theory. You then devise the Supervillain Fred Is Innocent theory that accounts for the evidence. In order to win over the jury, you need to point to evidence for the supervillain that is 8 times more likely under the Supervillain theory than under the No Supervillain theory. But, alas, the supervillain is so good at covering his tracks that no scientific evidence ever appears. So you’ll never, ever win the case.
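The “8 times” figure comes from multiplying one likelihood ratio per piece of evidence. A minimal sketch (assuming, per the inequalities above, each item is certain under guilt and at most half as likely under innocence):

```python
# Each piece of evidence: P(E|guilty) = 1.0, P(E|innocent) = 0.5,
# so each contributes a likelihood ratio of 2 in favor of guilt.
evidence = ["fingerprints", "$200K deposit", "CCTV footage"]

odds_guilt = 1.0  # prior odds of guilt, 1:1
for _ in evidence:
    odds_guilt *= 1.0 / 0.5  # multiply in this item's likelihood ratio

print(odds_guilt)  # 8.0 — guilt is 8 times more likely than innocence
```

To overturn this, the Supervillain theory would need its own evidence carrying a combined likelihood ratio of at least 8 the other way.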

  8. Yoohoo! Doctor Logic! Over here! I’m over here!

    That is to say, “do you really imagine that what you just posted addressed my point? Your ‘so you’re saying’ was SO distant from anything I was trying to say! Your court example is SO alien to my argument. Let me try to be more clear:”

    Materialism (M) is relatively easy to define. In fact, M is by definition definable. As your last post correctly implies, non-Materialism (~M) is not. As a result, it is farcical to start assigning values to q in constructs like:
    p({anything}|~M) < q

    My point, since it was missed so profoundly, was simply that Dualism (D) is not necessarily the same thing as ~M. If there exists a D such that assignments such as given above are not farcical, then it is indeed a straw man (in effect, there exists another ~M, which is not the same as the straw man D, for which assignments such as you have made are thoroughly farcical).

    Note that the undefinability of D is not necessarily a point against it. After all, things like language, understanding, rationality, and consciousness are also beyond definition due to the inherent recursion…

  9. Hey Doctor Logic… I wouldn’t want you to think that all the effort you put into your court analogy was wasted, so let me attempt to address it.

    You are suggesting that
    M ~ Fred guilty
    ~M ~ Fred innocent

    But I’ve never met an opponent of materialism who actually treats ~M in that way.

    You see, we treat ~M like “someone *paid* Fred to steal a jewel”. That is to say, ~M can certainly implicate Fred in *precisely* the *same* ways that M can: Fred’s prints and Fred’s bank account say *nothing* about whether someone or other paid Fred to commit the crime.

    We don’t say that materialism is *wrong* — we say that it is *insufficient*. As a doctor of logic, I’m sure you’ll understand the distinction.

  10. Dualism (D) is not necessarily the same thing as ~M

    ….

    We don’t say that materialism is *wrong* — we say that it is *insufficient*.

    Good point, Doug. I often think of D as “M + something that M can’t accomplish”, or to put it briefly, D = M+.

    Hope your Christmas was enjoyable.

  11. Hi Steve,

    Lovely Christmas, thanks. 🙂

    It is unfortunate that our inclination is to use the descriptive label Dualism to mean either M+ or ~M.

    There are M+/~M ideas which are barely related to a Cartesian Dualism. For example, there is a (for want of a better label) “knowable materialism”, in which there is a knowable m which is “smaller” than M, the “complete” materialism. Since there is a non-trivial component of M which does not belong to m, we can never know if M is a sufficient explanation for consciousness. Since it is so easy to confuse m and M, one can present this position in such a way as to appear to be a materialist to a dualist, and to appear to be a dualist to a materialist 😀

    cheers, Doug

  12. Doug,

    You see, we treat ~M like “someone *paid* Fred to steal a jewel”. That is to say, ~M can certainly implicate Fred in *precisely* the *same* ways that M can: Fred’s prints and Fred’s bank account say *nothing* about whether someone or other paid Fred to commit the crime.

    We don’t say that materialism is *wrong* — we say that it is *insufficient*.

    Unfortunately, this doesn’t help your case. For what is physics insufficient? Is physics insufficient to explain memory? Recognition? Computation? Subconsciousness?

    Suppose I formulated dualism hundreds of years ago. A priori, physical memory need not be explained as a physical system. It could go either way. A posteriori, we found memory was a physical system. What did dualists do in light of the new physical explanation? They re-tuned (fine-tuned) their dualism to say that the inexplicable part of cognition must be one of those things not yet explained by science. But in exchange for that fine-tuning, they got nothing in return. (That’s like fitting a curve to some data points and still not being able to predict where other data points will lie.) If, tomorrow, some other mental feature is explained by materialism, dualists will just re-tune and act as if experience has nothing to do with their thesis.

    This is a dualism of the gaps. It’s a fallacy because either the claim is meaningless (irrelevant to experience), or else it fails to reduce confidence in the thesis by ignoring its own fine-tuning.

    What experiences would reduce your confidence in dualism? Are there any?

    Suppose I argue that the physical description of water is insufficient to explain everything about water. For example, we have never simulated Niagara Falls at the scale of molecules, so there are some things about water not fully understood. Indeed, it may be impossible to simulate water perfectly, so there will always be some gap between our experience of actual water and our physical model of water.

    But can’t we still say that it is likely that physics completely describes water? Why is there no water dualism?

    After all, things like language, understanding, rationality, and consciousness are also beyond definition due to the inherent recursion…

    I disagree that these things are beyond definition, but it doesn’t really matter. Naturalism and materialism don’t purport to define these things, or to say what is the proper way of thinking. Naturalism purports to explain how a mechanism can think according to the conventional definitions.

  13. Hey Doctor Logic!
    Why would you ask me to defend a “dualism of hundreds of years ago”? Besides it being a convenient straw man for you, I mean… 😉

    For what is physics insufficient?

    Physics does not even begin to address the hard problem of consciousness. But let’s start small. Since language is insufficient to explain language, how do you propose to have physics explain language? Physics is language-limited! The explanatory gap is enormous. And an “X of the gaps” isn’t fallacious just because you say so. For all your faith in physics, this is one gap that just isn’t going to disappear any time soon. The only people I’ve met who think otherwise are either very young or fervent ideologues. How old are you, Doctor Logic? After you’ve studied the matter for a decade or so, I’ll be delighted to see you attempt to claim with a straight face that you “think it is likely that physics explains consciousness”.

    I disagree that these things are beyond definition

    Then feel free to offer a definition for any of them! And I promise that I won’t give you a hard time when you realize (for the ninety-fifth time) that your definitions still require “fine-tuning” 😉

  14. For what it is worth…

    The wielding of the sub-phrase “…of the gaps” like a weapon is just a bluff. It derives from a time of giddy enthusiasm in which a model of science as a “gap-closer” was mindlessly embraced. Is such a model consistent with the fact that the number of scientific papers published in refereed journals is increasing annually? Of course not. The fact is that true science uncovers more unanswered questions than it answers! As a result, the (apparent) “gaps” are growing rather than shrinking. Knowledge is Mandelbrotian: the finer resolution we achieve, the greater the complexity beneath.

  15. Doug,

    Why would you ask me to defend a “dualism of hundreds of years ago”? Besides it being a convenient straw man for you, I mean… 😉

    Because you are apparently incapable of seeing what you’re doing in context.

    What are the consequences of your dualism? Is dualism an empty, trivial concept to you? What experiences would be different if your dualism were not true? If none of your experiences would be any different, then why do you have any confidence in dualism? What does the claim even mean?

    Since language is insufficient to explain language, how do you propose to have physics explain language? Physics is language-limited! The explanatory gap is enormous.

    Note that I’m not saying that everything can be rationally explained. The rules of rationality cannot be rationally explained without circularity (in any system). For example, you can’t use God to explain rational thinking. Fortunately, materialism is not trying to explain the rules of rational thinking. It’s showing that mechanisms can think according to those rules.

    Besides, how did you learn the meaning of the word milk? How does someone in France learn the meaning of the word lait? It’s empirical. You see and taste milk and associate the use of the word with the experiences. The same thing applies to mental ideas and processes. There’s always some uncertainty in meaning that is learned this way, but it’s shared empirical fact that enables us to learn languages, especially our native tongue. Languages are symbols for experience, potential experience, future experience. So now you ask whether physics (a calculus of physical experience) could explain how we create symbols for experience? I think it can and does. There are models of auto-associative neural networks that are capable of this sort of self-organization. Saying that language limits our ability to explain language is just an excuse not to think about the issue.
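For concreteness, the kind of auto-associative self-organization mentioned here can be illustrated with a tiny Hopfield-style network that stores a pattern via Hebbian weights and recalls it from a corrupted cue. This is a toy sketch for illustration, not the specific models alluded to:

```python
# Tiny Hopfield-style auto-associative memory: store bipolar (+1/-1)
# patterns with a Hebbian outer product, then recall a stored pattern
# from a corrupted cue by repeated thresholded updates.
def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    # Sequentially update each unit to the sign of its weighted input.
    s = list(state)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
cue = [1, -1, 1, -1, -1, 1]   # last two bits flipped
print(recall(w, cue))  # recovers [1, -1, 1, -1, 1, -1]
```

The network settles back to the stored pattern from the degraded input, which is the “association of symbols with experience” point in miniature; whether such mechanisms scale to human concept learning is, of course, exactly what is in dispute in this thread.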

    The wielding of the sub-phrase “…of the gaps” like a weapon is just a bluff. It derives from a time of giddy enthusiasm in which a model of science as a “gap-closer” was mindlessly embraced. Is such a model consistent with the fact that the number of scientific papers published in refereed journals is increasing annually? Of course not. The fact is that true science uncovers more unanswered questions than it answers! As a result, the (apparent) “gaps” are growing rather than shrinking. Knowledge is Mandelbrotian: the finer resolution we achieve, the greater the complexity beneath.

    Au contraire. What you have written here is a bluff. The “…of the gaps” phrase is not, and never has been, a denial that there are gaps in the sciences. The phrase describes a fallacious reliance on those gaps in order to ignore the rules of induction and probability.

    Go back to the court example. In Fred’s case, “innocence of the gaps” is irrational, and it doesn’t make Fred any less likely to be guilty.

    Now, you said that you weren’t advocating for Fred’s innocence, but instead advocated for the possibility that Fred had help. What counts as help? If Fred took the bus to the scene, does that count as Fred getting the bus driver’s help? Did the cows that went into Fred’s hamburger lunch count as help? If so, that’s a pretty trivial claim.

    Suppose we look at your theory that Fred was paid to commit the crime, presumably by someone else who wanted the crime committed. Here’s your mistake. You think the competing theories are

    1) An entity wanted the job done. That entity was Fred who performed the job personally.

    2) An entity wanted the job done. That entity is unknown, and instead of performing the job personally, instead of finding any number of other people, chose Fred.

    Given that we only find evidence E that Fred did the job, would you say that P(2|E) was the same as P(1|E)?

    How about this:

    3) Unseen Entity Y hypnotized Entity X, convincing her that she wanted the job done. Entity X, instead of performing the job personally, instead of finding any number of other people, chose Fred.

    How about P(3|E)?

    4) Unseen entity, Z, hypnotized unseen entity Y, convincing her she wanted the crime committed. Instead of committing the crime personally, or picking another victim, unseen Entity Y hypnotizes unseen Entity X, convincing her that she wanted the job done. Entity X, instead of performing the job personally, instead of finding any number of other people, chose Fred.

    How about P(4|E)?

    Why are the more complex theories less likely? It’s because the number of ways the hidden entities could have implemented their plans is higher, and finding Fred as the guilty party picks out just one of the ways the hidden entities could have done their work.

    So I really don’t see what your dualism does for you (except confuse you and others). Either it’s a trivial statement about gaps in our knowledge or about ambient conditions, or else it’s fallacious.

  16. Dr. L.

    Far too much of your last post was too remote from my position to address. Forgive me for ignoring it.

    The rules of rationality cannot be rationally explained without circularity (in any system).

    Excellent. A point we can agree on. We can work from here.

    [M]aterialism is not trying to explain the rules of rational thinking. It’s showing that mechanisms can think according to those rules.

    So without being able to describe the rules, it is going to show that the rules are being followed… Please explain… take your time!

    Languages are symbols…

    I’m afraid that this is simply false. Words, perhaps, are symbols. Language is not.

    But this is a delightful analogy. Let’s work with it.
    S (i.e., Symbolism) ~ M & L (i.e., Language) ~ C (i.e., Consciousness)

    Symbolism and Materialism are things that we can wrap our heads around. There are still philosophers of language who claim that language “simply” derives from Symbolism. There are still philosophers of mind who claim that consciousness “simply” derives from Materialism. But this is a vastly more difficult subject.

    Are you willing to engage in a discussion of whether Language derives from Symbolism? Would you permit the analogy?

  17. Doug,

    I’m afraid that this is simply false. Words, perhaps, are symbols. Language is not.

    True. X + Y = Z has no meaning until you introduce the semantics of language. Contra DL, it’s clear that the semantics of language are not the same as the syntax of symbols. The question for the materialist is how the semantic content arrived on the scene, and most importantly, how do we *know* we can *know* its truth value in relation to reality?

    Logic can’t help you with the latter unless you FIRST have semantic content. Logic can’t tell you if ‘X > Y’ or ‘if X, then Y’ until you know what the symbols mean.

    Does materialism show that mechanisms can think according to the rules of rationality, as DL claims? Obviously not, because logic isn’t enough, as I’ve already shown. It’s also not enough to physically show matter behaving according to the rules of rationality. A robot may pass a Turing test, but it cannot reason. It can’t tell you if its program leads to true beliefs about reality. It cannot tell you if Line 75 of its program has an error in it. It simply does as it is told.

  18. Doug,

    Far too much of your last post was too remote from my position to address. Forgive me for ignoring it.

    Um, no. That’s too convenient. If you cannot say what would be different in experience if your claim were true, then even you don’t know what you’re advocating. I’m not going to play your game your way.

    [M]aterialism is not trying to explain the rules of rational thinking. It’s showing that mechanisms can think according to those rules.

    So without being able to describe the rules, it is going to show that the rules are being followed… Please explain… take your time!

    “Explain” is not the same thing as “describe”.

    If you cannot even describe the rules of rationality, how will you know rational thinking when you see it? What makes you think you’re rational?

    So, I don’t buy your line here. If you can tell whether someone else is being rational, then you can tell whether a mechanism is being rational. Neither you nor I can prove that what we recognize or describe as “rational information processing and behavior” is the way people ought to think, but naturalism does not try to prove this. The axioms of rationality are built-in to every rational worldview, including naturalism and materialism.

    S (i.e., Symbolism) ~ M & L (i.e., Language) ~ C (i.e., Consciousness)…

    Would you permit the analogy?

    I haven’t got the faintest idea what your analogy is, but I don’t see how I can stop you from expounding upon it, if you can.

    However, saying “languages are symbols” was an oversimplification, so I’m not going to be distracted on that point. The point is that neural networks can learn concepts the way humans do, can associate sounds and symbols with concepts, and can combine these features to create intentional languages. I’m happy to consider actual predictions about mechanisms. Are you?

  19. even you don’t know what you’re advocating

    It certainly is inconvenient when your opponent refuses to acknowledge the tidy little straw man you’ve so carefully constructed for him, isn’t it?

    The point is that neural networks can learn concepts the way humans do, can associate sounds and symbols with concepts, and can combine these features to create intentional languages

    What dreamland do you come from?
    Have you ever worked with an Artificial Neural Network? Do you have any idea how foolish what you just wrote is to anyone with actual experience with these things? You ask me to state what would be different in my experience if my claim were true. Well… if my claim were false then your above quote wouldn’t be so hilariously over-optimistic.

  20. Well, Doug, I think we’re done.

    Your technique of pretending you know what you’re talking about, all the while not addressing any of the issues, is a waste of my time.

    If you’re ever sincerely interested in how the brain works and what our models can already do, watch this video.

  21. Dear Dr. L,

    I’m not pretending. I’ve been employed doing research for the world’s market leader in speech recognition for the last fourteen years.

    Let me give you a clue. Artificial Neural Networks (ANNs) are a reasonable model of the human cerebellum. The cerebellum, in case you are interested, is the “unconscious” brain. When the “conscious brain” (i.e., the cerebral neocortex) presents information to the cerebellum, it “learns” in very much the same way that ANNs do. However, this is not supportive of the claim that “[ANNs] can learn concepts the way humans do”.

    The piece that you are missing is that the “information presentation” is the tough job. When the cerebellum is being trained, the cerebral neocortex does this job. But when ANNs are trained, the human researcher (i.e., a distant cerebral neocortex) is still doing this job! That is to say, the conscious part of learning is still being done by the human being. That we can simulate the unconscious part of learning with ANNs is interesting, informative, and even practical. But please do not delude yourself into thinking that this even begins to address the business of conscious learning.

    cheers, Doug

  22. I did check out the link. I’m familiar with the work. If a researcher uses the label “belief”, it does not magically make the computer have beliefs. If a researcher refers to “semantics”, it does not magically provide the computer with understanding. They have models of the neo-cortex like paper airplanes are models of F-15s.

    If the work were so ground-breaking (Jeff used the words “scalable”, “simple”, “robust” and “powerful”), would it be too much to ask that it be demonstrated to perform better than conventional ANNs on a real-world task in the two and a half years since that video was released?

  23. Jeff in the Numenta video lists people, cars, buildings, etc as “causes” when these are conditions necessary for the cause. The cause that results in the belief is the pattern directly interacting with the ANN – which is the electrical signal, or brain signal. I think that is an important distinction that needs to be made. Also, he uses the term belief, but I don’t think he is using it in the same way that it is normally used (I see Doug said the same).

    Anyway, correctly identifying and categorizing the recurring, high-level pattern (which is brain signals) is one thing (syntax). Knowing what the brain signals *mean* is quite another (semantics). What is it about Pattern A that gives it a certain meaning – a certain truth value – in relation to Pattern B? In other words, what makes Brain Signal A true in relation to Brain Signal B? The ANN has no ability to know a valid syllogism from an invalid one, and so all it can do is ‘recognize’ patterns. That certainly has value, but it’s NOT reasoning.

    Can you imagine any conceivable way that an ANN would come to know that the high-level pattern (brain signal) it detects has any value – that the high-level pattern “human life” should be valued over the high-level pattern “ANN” or “plant life”? I can’t either.
