[A researcher] placed the noncommunicative patient in a magnetic scanner and asked her to imagine playing tennis or to imagine visiting the rooms in her house. You and I have no trouble doing these tasks. In healthy volunteers given these instructions, regions of the brain involved in motor planning, spatial navigation and imagery light up. They did likewise in the unfortunate woman. Her brain activity in various regions far outlasted the briefly spoken words and in their specificity cannot be attributed to a brain reflex. The pattern of activity appeared quite willful…
But this small argument doesn’t work. I don’t see any way to structure it other than as follows, which I take as faithful to the terms of the article:
1. Let “consciousness” be an awareness of one’s environment or of the people in it.
2. Let “a brain reflex” be a brain responding to a stimulus in a way that does not require consciousness. (This last part makes the response a reflex, as opposed to what we usually think of as “cognition.”)
3. Let “a noncommunicative person” be one who cannot indicate that he or she has consciousness. (The lack of such an indication in fact propels the researchers toward the search for signs of consciousness in the patient other than deliberate communication.)
4. Inversely, let “a communicative person” be one who can offer such indications.
5. Noncommunicative person x’s brain responds to stimulus a in manner b, just as we would expect communicative person y’s brain to.
6. Person y’s brain’s response b to stimulus a requires consciousness (in the “willful” imagining of the required material).
7. Therefore, any brain’s response b to stimulus a must require consciousness.
8. Given 1–7, x’s response must not be a reflex.
9. Given 1–8, x must have consciousness.
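
The leap happens at step 7. A minimal schematic makes it visible (the notation is mine, not the article’s): let R(z) mean “z’s brain exhibits response b to stimulus a,” and let C(z) mean “z consciously wills that response.”

```
Premises:   R(x)                  (step 5)
            R(y) and C(y)         (steps 5 and 6)
Claimed:    for all z, R(z) → C(z)   (step 7)
```

A single observed pairing of R and C in one person does not license the universal claim; that gap is exactly what my objection to 7 presses on.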
First, I question the truth of premise 5. Although y’s brain’s response would include b, y’s brain would also likely respond in ways that led to things like talking: in short, the kinds of things that differentiate x from y in the first place. The section of the review covering this study mentions scans of 17 noncommunicative patients but no scans of communicative patients, a clear lack of a control group. In short, premise 5 seems akin to arguing that, although you ate the ice cream and the cone and I tossed my cone away, we both ate the same thing.
More crucially, I see no reason to claim, as in 7, that just because we expect y to involve consciousness in her response b, we should expect every response b to involve consciousness. Suppose, for example, that conscious experience requires a functioning brain structure p (as is widely held, and as some collected thoughts by Christof Koch, the author of the review, might lead us to believe). Couldn’t damage localized to p then allow for certain responses b in the brain that are not included in consciousness?
I grant that it’s difficult to imagine responding to a command to visualize playing tennis without consciously making the decision to do so. But that could be a failure of imagination. Couldn’t there be some analog to blindsight, in which the brain processes a stimulus without any accompanying awareness, at work here? Do we require the filter of consciousness in order to imagine? Or, as during dreams or under anesthesia, can the brain operate in ways that resemble consciousness without actually having it?
The researchers would answer this charge, I take it, by pointing to the conclusion in 8 (hence also the flow of the argument in the article, in which 8 comes last). Even granting 7, though, the argument begs the question. Highly condensed, it looks like this: x has response b, which we take (because y also has it) to require consciousness; therefore b cannot be a reflex; therefore b requires consciousness.
To avoid the charge of circularity, you have to knock 8 out of the argument. Without it, though, you have no answer to my objection to 7. When we measure things that neither are consciousness nor certainly require it, how can we claim that we’re measuring consciousness, even indirectly?