“How thoughts arise”


Science Centric has a report on a new, more effective approach to simulating neural networks:

In their doctoral theses, Arvind Kumar and Sven Schrader have simulated large neuronal networks that, for the first time, take this neuronal feature into account. Especially in the neocortex, neurones are intensely interconnected, i.e. they receive many input signals that can modify the integration of subsequent signals. Taking the special features of such highly interconnected networks into account yields simulations that are in excellent agreement with recordings from biological nerve cells in the intact brain. The new virtual network thus reflects reality better than previous models.

This is of course relevant to the mind-brain question we’ve looked at often here. The article’s headline is “How thoughts arise.” For a dualist like myself, a better headline would be, “How neural aspects of thoughts arise,” or “How neural correlates of thoughts arise.” For it seems quite obvious to me that viewing thoughts as strictly neural events will not work. I’ll re-state one problem with it here.

We know that thoughts cause things. Suppose someone says to me:

All men are mortal.
Socrates is a man.
Is Socrates mortal?

My thoughts will run through what I know about men, and I will agree that all men are mortal. I will assume for the sake of discussion that Socrates is a man (he was once, at any rate). I will consider whether the answer is logically entailed by the statements I’ve been given. So I will think, yes, Socrates is mortal. Then I will answer aloud, “Yes, Socrates is mortal.” (Or I could be a wise guy and say, “Who wants to know, and what do I get if I answer right?”)
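
(A side note for the technically inclined: the mechanical half of that inference is trivial to write out. Here is a minimal Python sketch of my own devising, not anything from the article. It reaches the right answer by matching symbols, yet nothing in it has the faintest idea what “mortal” or “Socrates” means; that gap is the point I develop below.)

    # Premises stored as bare symbols; the program only matches strings.
    facts = {("man", "Socrates")}   # "Socrates is a man."
    rules = [("man", "mortal")]     # "All men are mortal." (mortal(x) follows from man(x))

    def entails(predicate, subject):
        """True if the premises syntactically entail predicate(subject)."""
        if (predicate, subject) in facts:
            return True
        # A rule (p, q) licenses the step from p(subject) to q(subject).
        return any(q == predicate and entails(p, subject) for p, q in rules)

    print(entails("mortal", "Socrates"))   # True -- but "mortal" is just a string here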

I have thoughts, which cause other thoughts, which ultimately cause physical actions in the world. Note that the progression of causes is in virtue of the content of the thoughts. It’s what the thoughts are about. The scientists quoted in this article want to address that question:

‘But it does not suffice that the brain is just active,’ adds Rotter. ‘The activity pattern must somehow be connected to a meaning.’ When we remember, our brain has to make associations and has to produce meaningful behaviour. How meaningful patterns arise in the ocean of neuronal network activity will be subject of new investigations by Rotter and his colleagues at the Bernstein Centre.

But there indeed is the rub for those who would say thoughts are just neural activity. That activity must “somehow be connected to a meaning.” The activity must be about a meaning. It is the meaning of the Socrates question above that causes the subsequent thoughts and spoken words.

Nobody has yet determined how a physical state, condition, or action can be about something else. How can a rock be about a seashore? It can be on it, it can be near it, but it cannot be about it. How can ink on a page be about love? That’s a little tougher, in the case of a love sonnet, for instance; but the answer is it can only be about love if there is a translation to that in the mind of the reader. The ink itself has no aboutness to it. The rock on the seashore cannot mean anything in itself, unless there is a mind to make a link of meaning with it.

There is good reason to believe this aboutness problem for physical entities cannot be solved; that a physical thing could never, in principle, be about something else. Thus neurons, no matter how complexly interrelated, cannot by themselves be connected to a meaning.

There is even better reason to believe that even if there were a solution to this, it wouldn’t be discovered through science. It’s a philosophical question, not an observational question. Still, it can’t hurt for them to work on it, and I wish them great success in learning more as they proceed.

[From Science Centric | News | How thoughts arise]

22 Responses to “ “How thoughts arise” ”

  1. Here’s my hypothesis for the problem of aboutness, one that requires nothing beyond materialism. It’s only a hypothesis, and it may be pure bunk, but I’ll play devil’s advocate and try to defend it.

    The neuronal, materialistic component of any thought otherwise described by Tom as requiring a non-materialistic component is sufficiently “about” the content of the thought to the same extent that tropism (the tendency for plants to grow toward the sun) is “about” the sun. That is, the purely physical shape of the letters of the word “tree” triggers change in the brain (given its pattern-matching abilities), purely materialistically and neuronally, and this is all that “about” needs to be, er, well, about.

    If we reduce what is required of the word “about,” if we reduce what must absolutely be the case when something is “about” something else, then materialism still works. When we say that A “means” B, all we need mean is that A induces a particular type of materialistic change in the brain that we call B.

  2. Paul,

    I basically agree with you. I would put it like this. I know what my thoughts are about because I know what experiences they would correlate with. I know what my thoughts about chopping down a tree are about because I could distinguish experiences of a tree from a non-tree, and experiences of chopping down from not chopping down. And I don’t have to be having those experiences at the time I make the intentional reference. Nor need I ever have those experiences, so I can refer intentionally to counterfactuals.

    To perform this feat, a person/machine needs the ability to recognize stuff, and the ability to anticipate his/its recognition or potential recognition of stuff. This is all straightforward from a physicalist/technology perspective.

    Throwing up examples of inanimate objects (“How can a rock be about a seashore?”), or simplistic computer programs to support the idea that machines can’t have intentionality is silly. Those things don’t have recognition, nor anticipation of recognition. This is like someone in 1700 saying that man will never fly like birds because all prior examples of humans moving through air have been mere ‘falling’. We know that’s not good argumentation. It’s a gaps argument.

    The root of the problem, IMO, is that dualists are profoundly uninterested in what intentionality is or means. They don’t want to ask “How is it that I myself know when a thought is about something?” Dualists claim that we must simply accept without question that intentionality is inscrutable. I’m flabbergasted every time I’m exposed to this utter lack of curiosity. It’s like they’re not sincerely interested in the question, but only in their resulting conclusions about supernaturalism.

    “Throwing up examples of inanimate objects (“How can a rock be about a seashore?”), or simplistic computer programs to support the idea that machines can’t have intentionality is silly. Those things don’t have recognition, nor anticipation of recognition.”

    That’s the point. They’re not the kind of things that can have those properties. I don’t know how neurons could either. Your analogy from the 1700s is even further removed from relevance.

    Dualists claim that we must simply accept without question that intentionality is inscrutable. I’m flabbergasted every time I’m exposed to this utter lack of curiosity. It’s like they’re not sincerely interested in the question, but only in their resulting conclusions about supernaturalism.

    No, doctor(logic). It’s not that we’ve never asked the question, or never been interested in other views on intentionality. We’ve looked at them and find them lacking.

    But what do you mean by “inscrutable,” anyway? We all know intentionality very well, by personal experience. I don’t find it beyond understanding at all! But it’s not explainable in terms of physical causation, because physical causation, ontologically, is the wrong kind of thing to explain it.

  4. Tom,

    Those things don’t have recognition, nor anticipation of recognition.

    That’s the point. They’re not the kind of things that can have those properties. I don’t know how neurons could either.

    But you do know how they can because we’ve discussed it many times. Neural networks can recognize a pattern after seeing just a part of a pattern. A hierarchical temporal network can recognize a part of a melody, and know to expect the rest of the melody before it hears it. If the melody is different, the network can quickly detect that difference.
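
    To make that concrete, here is a toy Python sketch of sequence anticipation. It is nothing like a real hierarchical temporal memory (just my own illustrative toy, the simplest possible next-note predictor), but it shows recognition and expectation arising from nothing but stored transitions.

        # Toy next-note predictor: learns which note follows which,
        # then anticipates the rest of a melody and flags deviations.
        from collections import defaultdict

        transitions = defaultdict(set)   # note -> notes that have followed it

        def train(melody):
            for a, b in zip(melody, melody[1:]):
                transitions[a].add(b)

        def listen(melody):
            for a, b in zip(melody, melody[1:]):
                expected = transitions[a]
                if not expected:
                    print(f"no expectation yet after {a}")
                elif b in expected:
                    print(f"after {a}, {b} was anticipated")
                else:
                    print(f"surprise: after {a}, expected {sorted(expected)} but heard {b}")

        train(["C", "E", "G", "E", "C"])    # the familiar melody
        listen(["C", "E", "G", "F", "C"])   # a variation: the F is flagged as unexpected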

    So why would you say that neural networks cannot anticipate or recognize?

    More to my point, why would you point to things that cannot anticipate or recognize (e.g., rocks) instead of things that can?

    But what do you mean by “inscrutable,” anyway? We all know intentionality very well, by personal experience.

    You think intentionality is inscrutable because you assume it doesn’t reduce, not even into other non-physical things.

    Let me put it this way. Would it be reasonable to assume that my ability to understand the Socrates syllogism is unique and different from my ability to understand a similar Plato syllogism? No. There are common ingredients to the processing of syllogisms. Things like logic and generalization. More to the point, I know when I am evaluating a syllogism incorrectly. I know when one of the ingredients is missing.

    In contrast, you assume that in every case in which your thought is about a thing, that aboutness is irreducible to anything more fundamental. You simply claim to know your thoughts are about something, without giving consideration to in what way you know that your thoughts are about something, or in what way your thoughts could fail to be about something.

    But if you do consider those things, you find that aboutness reduces to anticipation, expectation, prediction, and recognition. These are all things that can be done physically because we have built components that do this.

  5. DL:

    But if you do consider those things, you find that aboutness reduces to anticipation, expectation, prediction, and recognition. These are all things that can be done physically because we have built components that do this.

    Yes, we have built this. We build machines that are an extension of our minds. We give them anticipation, expectation, prediction, and recognition. You assume this can be done from scratch, starting with raw materials and energy, but there is nothing to suggest it is even possible. The historical scorecard looks like this….

    Intelligence producing intelligence: billions of data points. Non-intelligence producing intelligence: zero data points (unless you beg the question).

    You claim to hold beliefs that coincide with the statistical data. How do you explain the fact that you hold a belief that is contrary to the historical data? It doesn’t bother me that you do hold this belief, but it does bother me that you seem to be violating your own rules.

  6. doctor(logic),

    “But you do know how [neurons] can [recognize, anticipate, exhibit intentionality] because we’ve discussed it many times. Neural networks can recognize a pattern after seeing just a part of a pattern. A hierarchical temporal network can recognize a part of a melody, and know to expect the rest of the melody before it hears it. If the melody is different, the network can quickly detect that difference.”

    First point, which I think you would readily agree on: this is not a slam-dunk simple and obvious point of discussion. Good thinkers stand on opposite sides of this issue, which indicates it cannot be obvious, or it would not be so much in dispute.

    Second point, which you would probably also agree with: our conclusions on this are going to be colored by our overall belief sets. You have a vested interest in finding that neurons can do these things, and I have the opposite interest. But it’s not just self-interest; it’s a background of beliefs, which we each consider to be justified in our respective cases.

    Neither of us has the advantage there, though. That we have background beliefs does not mean that neither of us can be right or wrong on this, nor does it mean we cannot make our case for our opinions by focusing just on this question.

    I have led with that this time because I want to bring some of the complexities to the surface. This is not necessarily for doctor(logic), but for other readers. I have observed both Christians and non-believers lately acting as though their position was completely obvious and easily proved, and as if their opponents were merely stupid or morally deficient. Both stupidity and moral deficiency can lead one to false conclusions, but good thinkers of good will (within the limits of our sinful nature, in my view) can also make mistakes. This came up a couple of times recently: To the extent that you can make your opponent’s position look ridiculous, to that extent you probably do not understand it.

    Enough of that, though. Yes, we have discussed this often. Here’s why I have trouble agreeing that your answer solves the matter. You say that neural networks can recognize a pattern. (I assume now that you are referring to computer simulations of biological neural networks; there would be no point in saying that about humans or intelligent mammals.) What does recognize mean? I need your input on this, but I’ll take a stab at this: it means nothing beyond returning a programmed response to some output device.

    Is this the same as what happens when a human recognizes a pattern? We humans assign meaning to patterns, such as light and dark on the computer screen. We know that words mean something, and we know what they mean (provided the writer and reader meet each other as intended through the written word). It seems to me there are two functions operating within me at that point. One is that which decodes the pattern, and the other is that which reflects, considers, knows meaning. The second function is entirely missing from machine networks. The meaning of the pattern is not known until an intelligent observer interprets it.

    Second problem: this doesn’t touch the intentionality problem. It doesn’t begin to show that the computer’s processing is about the stimulus, or that its output is about the patterns it processes. Of course on first reflection that seems wrong: if the computer’s working is not about the stimulus or the patterns, then what is going on? It certainly seems to be about those things.

    There is anthropomorphizing there, however. The computer does not recognize that a melody is taking place, or that there is music. The computer can generate numbers corresponding to the shape of an audio waveform. Wait; check that–the computer cannot generate numbers at all. The computer can generate voltage states corresponding to numbers corresponding to audio waveforms. It can perform calculations — no, we’re anthropomorphizing again, darn it! — it can generate various voltage states corresponding to various logical and numeric operations, corresponding to calculations comparing two audio waveforms.

    What computers can do is mechanically generate voltage states, store them, and manipulate them. In what way is a voltage state (or any set of them) about music? Do voltage states mean music? Voltage states can initiate processes leading to outputs that have nothing at all to do with music, and the computer doesn’t give a whit about it. You could turn the output into colors on a monitor, or into words we could read as predictions about the time and place of Elvis’s next appearance, and the computer would be–and even here I’m anthropomorphizing–the computer would be oblivious to the difference. (I’m not sure even “oblivious” can be accurately ascribed to a computer.)
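
    To make that concrete, here is a toy Python comparison of two “waveforms.” It is only an illustrative sketch of my own, but notice that the very same function behaves identically whether the numbers came from a melody, a stock ticker, or a random number generator; nothing in the computation is about music.

        # Compare two sequences of numbers, sample by sample.
        # Nothing here is "about" music: the same arithmetic applies
        # no matter what the numbers happen to encode.

        def total_difference(a, b):
            return sum(abs(x - y) for x, y in zip(a, b))

        audio_samples = [0.0, 0.5, 0.9, 0.5, 0.0]   # could be a melody...
        stock_prices  = [0.0, 0.5, 0.9, 0.5, 0.0]   # ...or daily prices, or anything
        other_numbers = [0.0, 0.4, 0.8, 0.4, 0.0]

        print(total_difference(audio_samples, other_numbers))   # same result either way;
        print(total_difference(stock_prices, other_numbers))    # the numbers carry no meaning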

    To a later point:

    “But if you do consider those things, you find that aboutness reduces to anticipation, expectation, prediction, and recognition.”

    No, there is no reducing there at all. These are all ways in which aboutness is experienced, but aboutness is not built from these things as molecules are built from atoms.

  7. Steve,

    Yes, we have built this.

    So you are ready to acknowledge that a machine, even if built by a human, can have intentional thought?

    Intelligence producing intelligence: billions of data points.

    I assume you’re referring to humans. Unfortunately for your example, when humans have children, they aren’t designing them. They’re only, um, well, you know. It is a natural process that makes people, just like it is a natural process that makes rabbits.

  8. DL:

    So you are ready to acknowledge that a machine, even if built by a human, can have intentional thought?

    I think so, but it would be determined by the program/design. In other words, no free-will in the true sense of the word.

    Unfortunately for your example, when humans have children, they aren’t designing them.

    I don’t see the problem here. Intelligence begets intelligence but unintelligence doesn’t. Where’s the problem?

    It is a natural process that makes people, just like it is a natural process that makes rabbits.

    This ‘natural process’ is an example of intelligence producing intelligence. Without begging the question, where is the evidence for unintelligence producing intelligence?

  9. Tom,

    What does recognize mean? I need your input on this, but I’ll take a stab at this: it means nothing beyond returning a programmed response to some output device.

    As long as you broaden the term programmed. Neural networks can be self-programming.

    Here’s an example. Take a hierarchical temporal memory connected to a virtual vision system. Show the eye of this system 50 glyphs. The network will automatically create 50 outputs corresponding to the different glyphs. (The system might be sort of random, and so output #1 might refer to a different glyph if the training is done differently. What’s important is that you get a new output for each new glyph.)

    Such a simple network has no other inputs or feedback mechanisms with which to attach significance to the things it is seeing. However, it is creating for itself what you might call primitive “concepts” for the glyphs that it sees.

    When I first saw the demonstration of this network, I was confused. The exposure of the glyphs to the network was called ‘training’, and I couldn’t figure out where the human corrective input was getting into the system. I intuitively thought that there had to be a human saying “That’s glyph #2 so (spank!/reward!) light up LED #2.” Of course, the reason for my confusion was that there was no human input identifying the glyphs. The system taught itself to recognize the repeated patterns it was being exposed to, just as I would learn to recognize a set of abstract patterns. I don’t need someone else to tell me “this is pattern #1 and this is pattern #2” in order to learn and recognize the patterns as distinct, unnumbered, patterns. If I had never seen cats or Latin characters, I would have no idea that an “A” glyph is a letter and a stick figure of a cat is just a drawing. However, I would learn to recognize both kinds of glyphs as abstract patterns, and create concepts for them. That’s just what neural networks can do.

    Bottom line: neural networks are self-programming and self-organizing.
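
    Here is a crude Python sketch of that self-organization. It is nothing like an HTM, just nearest-prototype matching of my own invention, but it makes the point: the system creates its own output category for each sufficiently new pattern it sees, with no human labelling anything.

        # Self-organizing categorizer: each sufficiently novel pattern
        # earns its own category; repeats are recognized as old ones.

        def distance(a, b):
            return sum(x != y for x, y in zip(a, b))

        prototypes = []    # patterns the system has invented categories for
        THRESHOLD = 2      # how different a pattern must be to count as new

        def categorize(pattern):
            for i, proto in enumerate(prototypes):
                if distance(pattern, proto) <= THRESHOLD:
                    return i                    # recognized an existing category
            prototypes.append(pattern)
            return len(prototypes) - 1          # a brand-new, self-assigned category

        glyph_a = (0,1,1,0, 1,0,0,1, 1,1,1,1, 1,0,0,1)   # a toy 4x4 glyph
        glyph_b = (1,1,1,0, 1,0,0,1, 1,1,1,0, 1,1,1,0)   # a different toy glyph
        noisy_a = (0,1,1,0, 1,0,0,1, 1,1,1,1, 1,0,1,1)   # the first glyph, one pixel flipped

        print(categorize(glyph_a))   # 0 -- a new category, invented by the system
        print(categorize(glyph_b))   # 1 -- another new category
        print(categorize(noisy_a))   # 0 -- recognized despite the noise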

    Let’s now suppose that some of the glyphs are danger signs. Well, a neural net cannot know this just by looking at the glyphs (and neither could we), but, as before, it doesn’t need to be programmed explicitly. What it needs is access to inputs that correlate with the danger. If the glyph is a warning of high temperature, then the neural net needs temperature inputs of some kind. Once it has these, the self-organizing ability of the network will recognize that the symbol is a signifier of high temperature. And so on.
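
    Continuing the toy sketch above (again purely hypothetical code of my own, not a real system): give the same categorizer a temperature input and it can learn, from nothing but co-occurrence statistics, that one glyph category is a signifier of heat. Nobody programs the association explicitly.

        # Learn which glyph categories co-occur with high temperature.
        from collections import defaultdict

        hot_counts  = defaultdict(int)   # category -> times seen while hot
        seen_counts = defaultdict(int)   # category -> times seen at all

        def observe(category, temperature):
            seen_counts[category] += 1
            if temperature > 80.0:
                hot_counts[category] += 1

        def signifies_heat(category):
            seen = seen_counts[category]
            return seen > 0 and hot_counts[category] / seen > 0.8

        for t in (95, 97, 93):    # category 0 keeps showing up when it is hot
            observe(0, t)
        for t in (20, 22, 25):    # category 1 shows up only in the cold
            observe(1, t)

        print(signifies_heat(0))   # True  -- a learned correlation, not an explicit rule
        print(signifies_heat(1))   # False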

    Continued…

  10. Is this the same as what happens when a human recognizes a pattern? We humans assign meaning to patterns, such as light and dark on the computer screen. We know that words mean something, and we know what they mean (provided the writer and reader meet each other as intended through the written word).

    But how do you know what the symbols mean? You don’t magically know them. You know them because you correlated those symbols with the sounds and other physical inputs you have been having since infancy.

    Suppose that, as a baby, I was exposed to nothing more than the symbols on a computer screen. In that case, I would recognize the symbols, but I would have no idea what they meant.

    One is that which decodes the pattern, and the other is that which reflects, considers, knows meaning. The second function is entirely missing from machine networks. The meaning of the pattern is not known until an intelligent observer interprets it.

    Agreed, but this is not a differentiator between humans and machines. If a neural network machine is not equipped with sensors and emotions, and does not learn about the world as we do, then we should not expect it to do more than recognize the raw symbols it is exposed to. It just doesn’t get the information it needs, not even in principle.

    There are two reasons why modern computers don’t have intentionality. First, for most technological applications, we only want limited recognition. We don’t want a car steering computer to get confused by the style of surrounding architecture. We want the system to focus narrowly on steering a car safely through intersections according to definitive rules. And we don’t want it unlearning how to do that, either!

    Second, for the next 10-20 years, we will lack neural computing power comparable with human minds. So even if our models of neural computing were correct, we would have difficulty replicating minds for decades.

    I agree with you that when we talk about contemporary computers “knowing” this or that, we’re generally anthropomorphizing. However, we’re not anthropomorphizing when we say computers can recognize or predict things (which is weaker than knowledge or aboutness).

    When you say that computers don’t think “about” music in the way that we do, that’s true too, but it does not constitute support for your case. Not many (if any) people in cognitive science are pretending that contemporary computers think about the world with “aboutness”. They have a very good idea about what it takes to think that way, and computers are not doing that.

    That’s where my flight analogy comes in. It would have done no good to have said in 1850 that flying machines are impossible because looms don’t fly. Even in 1850 it would have been clear that looms lack both aerodynamic lift and thrust. Even if the engineers of the day could not make a machine that could fly, nor know precisely what types of wings make effective lift, nor what engines could propel a vehicle through air, they at least knew what a machine needed. And looms don’t have those things.

    In the same way, your sample physical systems don’t have what we already know it takes to think about stuff, and we both agree they don’t. They’re not a test of your belief.

  11. Steve,

    Without begging the question, where is the evidence for unintelligence producing intelligence?

    Aren’t you begging the question in the same way?

    You are assuming that we pass on some “spirit of intelligence” to an embryo. That’s just one possibility for what is happening, and it’s a possibility that doesn’t match up well with biology.

    If you assume that all human intellects were created by passing on this spirit, then you have billions of examples. But that just assumes your conclusion to prove your conclusion.

    Similarly, if I assumed my conclusion that intelligence is physical, I could not legitimately claim billions of examples to prove my conclusion. Fortunately for me, that’s not the basis of my argument. If intelligence is non-physical, then we ought to see (or have seen) intelligence that is physically unbreakable. We ought not have seen physical correlates for everything. And yet we do. Sure, it’s still possible that intelligence is non-physical, but it would have to be that astonishingly peculiar variety of non-physical mind that looks exactly like plain ol’ physics.

  12. DL:
    Maybe this is a better way to make my point and show that I’m not begging the question.

    a) Living beings produced because other living beings participated somehow in the causal chain of events: massive pile of data points that we know of

    b) Living beings produced because living beings were totally absent from the causal chain of events: zero data points that we know of

  13. Interesting post.

    SteveK,

    a) Living beings produced because other living beings participated somehow in the causal chain of events: massive pile of data points that we know of

    b) Living beings produced because living beings were totally absent from the causal chain of events: zero data points that we know of

    You still seem to be begging the question, as investigation into abiogenesis is going forward instead of being refuted.
    I’m assuming here that you’d place God at the start of this causal chain of events. Why do you feel you need to fit God in this gap in our knowledge? Do you believe it is a valid hypothesis? Does it increase what we know?

  14. You still seem to be begging the question, as investigation into abiogenesis is going forward instead of being refuted.

    I’m reporting what empirical science has demonstrated to be true so far. How is this begging the question? I’ve assumed nothing about future findings one way or another. I even qualified each statement with “that we know of.”

    I’m assuming here that you’d place God at the start of this causal chain of events.

    The causal chain of life? Yes, I would. Others might say it was some other living being.

    Why do you feel you need to fit God in this gap in our knowledge?

    I have theological and philosophical reasons that I won’t bother to go into. However, DL wanted empirical/scientific reasons so that is what I supplied. As I said above, current scientific knowledge says both (a) and (b) are true. If you have zero known empirical/scientific data points in your corner then what does that say about your scientific theory?

    Do you believe it is a valid hypothesis?

    Sure. It’s not a scientific hypothesis, but it’s logically sound.

    Does it increase what we know?

    If it’s true, then yes. If it’s not, then yes but only in the sense that we now know it’s not true.

  15. If it’s true, then yes. If it’s not, then yes but only in the sense that we now know it’s not true.

    How do we check if your hypothesis is true or not? Does it involve testing every other competing hypothesis until that is the only thing left?

  16. Havok,

    You still seem to be begging the question, as investigation into abiogenesis is going forward instead of being refuted.

    I’m assuming here that you’d place God at the start of this causal chain of events. Why do you feel you need to fit God in this gap in our knowledge? Do you believe it is a valid hypothesis? Does it increase what we know?

    OOL research is proceeding and certainly ought to; and it’s true that a naturalistic explanation of life is “not being refuted.” It’s being frustrated instead. All attempts to get to such an explanation are running up against severe chicken-and-egg problems (A requires B’s existence before it can develop, but B requires A’s existence first). So it’s proving to be among the hardest of all scientific problems.

    That’s no reason to inhibit research into it. I’m wide open to research that could potentially support a position I initially disagree with. (So is the whole ID community, by the way, unlike some of their opponents.)

    I put God at the origin of life for multiple reasons, as Steve also alluded to. For one, there is knowledge of God from outside of science, and by that we know He is creator. (That’s a one-sentence summary of hundreds of pages that ought to be written and in fact have been.) For another, there is no other viable explanation on the table.

    Which leads to your last question here:

    How do we check if your hypothesis is true or not? Does it involve testing every other competing hypothesis until that is the only thing left?

    That’s a time-honored method in science, but it only works well when we’re fairly confident we know and can test the entire set of potential hypotheses. I certainly support the effort to come up with every potential hypothesis and test it. But we’ll not likely ever come to an agreed conclusion, by that means, that God designed the first life. It requires knowledge from other disciplines to go with it.

    And you asked whether it would increase our knowledge. Sure! We would progress from, “we don’t have a clue how life formed, maybe by one natural means, maybe by another, maybe by God”– to “we know God accomplished the first steps.” We would decrease ignorance and increase knowledge. Since God is knowable personally, this opens the door to increased knowledge in multiple dimensions. It would not, despite some straw man objections, lead to a conclusion that we should then give up research into early life. Every Christian in science knows that if God is assumed to be behind nature, then every scientific advance is an advance in understanding the mind of God.

  17. For one, there is knowledge of God from outside of science, and by that we know He is creator. (That’s a one-sentence summary of hundreds of pages that ought to be written and in fact have been.) For another, there is no other viable explanation on the table.

    Since God is knowable personally, this opens the door to increased knowledge in multiple dimensions.

    I understand that science cannot disprove a deistic or basic theistic God, though things like the laws of thermodynamics would appear to put extreme limits on any intervention.
    Regardless, when you claim that God can be known personally, and yet billions of people around the world have different conceptions of this personal experience of God, I become suspicious of any personal revelation.
    I also become skeptical of claims that this God has spoken to men, and that these revelations have been written down, yet these revelations appear to be internally and externally contradictory.
    So, how do I pick the right revelation? Can I even trust my personal experience, when it seems everyone’s revelation is different and often contradictory?

  18. (So is the whole ID community, by the way, unlike some of their opponents.)

    Well, the main problem which seems to have been had, concerning the ID community, is the attempt to push it into schools science curriculum, when it hasn’t proven itself scientifically. If ID folk want to simply research, investigate etc ID, I can’t see that there’d be a problem. Though money for an unorthodox theory can be hard to come by, ID is not alone here 🙂

  19. Hoo boy. I had a hard day yesterday and the day before–unexpected tasks that took up hours at work, and more serious, my daughter got a cerebral contusion from an incident in her PD class. And then this.

    Look, this thing about the schools is a straw man, it keeps getting repeated and repeated and repeated, and it’s just wrong. It’s been wrong for years. Would y’all please stop and listen to what we’re saying, for pete’s sake?!

    The Dover school board pushed an approach to teaching ID in their schools that was unwise and contrary to the advice of ID leaders.

    Actual leaders of the ID movement are not pushing for ID to be taught in schools, and haven’t been. Not for I don’t know how long–years, at least. Here is a document from 2004, and here is one from 2003. Would you like to see one that’s seven years old?

    They/we are advocating that evolution’s evidential challenges be taught. That’s not advocating for ID to be taught. They/we are not trying to “push” what “hasn’t been proven scientifically” into “schools [sic] science curriculum [sic]”!

    I’ll probably regret the tone of this comment later. But man, I am so tired of saying the same thing over and over and over again.

  20. Hi Havok,
    If the main problem you had with the ID community were its (alleged) attempts to get ID taught in schools, then I daresay we’d never have heard from you. I, for instance, have never been so interested in the school curricula of other countries that I was prompted to debate the surrounding issues on various websites. If it’s true that this is happening in your country and that you find it to be the main problem, then you are probably wasting your time arguing against belief in God on American websites.
    I would suggest that the main problem isn’t what communities determine to teach in their own schools but what ID says about your worldview.

  21. The Dover school board pushed an approach to teaching ID in their schools that was unwise and contrary to the advice of ID leaders.

    Agreed.

    Actual leaders of the ID movement are not pushing for ID to be taught in schools, and haven’t been. Not for I don’t know how long–years, at least. Here is a document from 2004, and here is one from 2003. Would you like to see one that’s seven years old?

    That’s good news. It seems I don’t have a problem with these ID leaders. I assume they’re getting down to doing the scientific research, instead of pursuing things as outlined in the infamous wedge document?

    I would suggest that the main problem isn’t what communities determine to teach in their own schools but what ID says about your worldview.

    Not really. Yes, there are places in my country (Australia) where intelligent design is taught in science class. Yes I do have a problem with that – not because of what ID means for my world view, but because ID is not mainstream science. In essence it is somewhat immature. Evolutionary theory remains the major explanation for biological diversity. Should ID become the major explanation, then I’d be happy for it to be taught.
