Unconscious Consciousness?

A recent review at Scientific American covers “new and ingenious ways to measure consciousness” in noncommunicative (i.e., vegetative) patients:

[A researcher] placed the noncommunicative patient in a magnetic scanner and asked her to imagine playing tennis or to imagine visiting the rooms in her house. You and I have no trouble doing these tasks. In healthy volunteers given these instructions, regions of the brain involved in motor planning, spatial navigation and imagery light up. They did likewise in the unfortunate woman. Her brain activity in various regions far outlasted the briefly spoken words and in their specificity cannot be attributed to a brain reflex. The pattern of activity appeared quite willful…

But the argument implicit in this passage, that the scan results demonstrate consciousness, doesn’t work. I don’t see any way to structure it other than as follows, which I take as faithful to the terms of the article:

  1. Let “consciousness” be an awareness of one’s environment or of the people in it.
  2. Let “a brain reflex” be a brain responding to a stimulus in a way that does not require consciousness. (This last part makes the response a reflex, as opposed to what we usually think of as “cognition.”)
  3. Let “a noncommunicative person” be one who cannot indicate that he or she has consciousness. (The lack of such an indication in fact propels the researchers towards the search for signs of consciousness in the patient other than deliberate communication.)
  4. Inversely, let “a communicative person” be one who can offer such indications.
  5. Noncommunicative person x’s brain responds to stimulus a in manner b, just as we would expect communicative person y’s brain to respond.
  6. Person y’s brain’s response b to stimulus a requires consciousness (in the “willful” imagining of the required material).
  7. Therefore, any brain’s response b to stimulus a must require consciousness.
  8. Given 1–7, x’s response must not be a reflex.
  9. Given 1–8, x must have consciousness.

First, I question premise 5’s soundness. Although y’s brain’s response would include b, y’s brain would also likely respond in ways that lead to things like talking—in short, the kinds of things that differentiate x from y in the first place. The section of the review covering this study mentions scans of 17 noncommunicative patients, but no scans of communicative patients: a clear lack of a control group. In short, premise 5 seems akin to arguing that, although you ate the ice cream and the cone and I tossed my cone away, we both ate the same thing.

More crucially, I see no reason to claim, as in 7, that just because we expect y to involve consciousness in her response b, we should expect every response b to involve consciousness. Let’s assume that conscious experience requires a functioning brain structure p, for example (as is widely held, and as some collected thoughts by the author of the review, Christof Koch, might lead us to believe). Couldn’t it then be that damage localized to p allows certain responses b to occur in the brain without those responses ever entering consciousness?

I grant that it’s difficult to imagine responding to a command to visualize playing tennis without consciously deciding to do so. But that difficulty could simply be a failure of imagination on my part. Couldn’t there be some analog to blindsight at work in this broader realm of consciousness? Do we require the filter of consciousness in order to imagine? Or, as during dreams or under anesthesia, can the brain operate in ways that resemble consciousness without actually having it?

The researchers would answer this charge, I take it, by pointing to the conclusion in 8 (hence also the flow of the argument in the article, in which 8 comes last). Even granting 7, though, the argument begs the question. Highly condensed, it looks like this: x has response b, which we take (because y also has it) to require consciousness; therefore b cannot be a reflex; therefore b requires consciousness.
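To make the circle easier to see, here is one schematic rendering of that condensed chain. The notation is mine, not the article’s or the researchers’: let C(b) stand for “response b requires consciousness” and R(b) for “response b is a brain reflex,” with definition 2 making R(b) equivalent to not-C(b).

$$
C(b) \;\Rightarrow\; \neg R(b) \;\Rightarrow\; C(b)
$$

The proposition the argument most needs to defend, C(b), appears as its own starting point; step 8 merely restates 7 in the vocabulary of definition 2, so it cannot do any independent work.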

To avoid the charge of circularity, you have to knock 8 out of the argument. Without it, though, you have no answer to my objection to 7. When we measure things that are not themselves consciousness and do not certainly require it, how can we claim that we’re measuring consciousness, even indirectly?

Disgust Science, Disgusting Journalism

I’ve just read on LiveScience.com that “[b]ooks are just as powerful as movies” at triggering “delight, pain, or disgust” reactions in the brain. As is so often the case on LiveScience, this gripping opening represents a kind of yellow science journalism that has intensified its hold on the popular imagination in the last few years.

For one thing, in the single study writer Andrea Thompson discusses, a team of Dutch scientists (Jabbi et al.) set out to test the triggering of disgust by various stimuli: print, motion picture, foul-tasting beverage. The study concerns disgust exclusively; there’s no hint of delight or pain in it, nor in the rest of the article.

Moreover, contrary to what’s suggested by the article’s title—“Books Still Rival Movies For Stirring Emotions”—the scientists themselves make no quantitative comparison between the power of books and movies. That is, nobody’s saying “just as powerful” besides Thompson, not in their interviews or in the study. The parts of the study discussed on LiveScience.com show only that similar responses can be generated by various stimuli.

Finally, those portions of the study absent from the article in fact show deep differences among the responses generated by the stimuli. Let’s take a closer look at its methods to see why and how these differences emerge. At stake is not only a better understanding of the study, but the success or failure of the LiveScience article as a piece of science journalism.


First, participants in the study were shown three-second films of a disgust reaction: A person on the screen takes a drink and then appears disgusted. There were also control films showing pleasurable and neutral responses. As they watched, an fMRI machine captured changes in blood flow to various regions of their brains. During the disgust clips, these changes included an increase in activity in the region that injury and other studies suggest is a crucial part of the disgust reaction. (Jabbi et al., 2)
One of the team members describes the next step in Thompson’s article:

“Later on, we asked them to read and imagine short emotional scenarios. […] For instance, walking along a street, bumping into a reeking, drunken man, who then starts to retch, and realizing that some of his vomit had ended up in your own mouth.”

In both cases, as well as in a third in which the subjects tasted something, well, disgusting, the anterior insula and adjacent frontal operculum lit up on the fMRI. (Jabbi et al., 3–4) So, we conclude, books are as effective at provoking disgust as movies, right?

Not a chance. There are crucial differences between a three-second clip of a man taking a drink and then experiencing disgust and the process of reading and imagining a scenario like the one described above, and similar differences between participants’ reactions to the two stimuli.

First, the clips and the written scenarios differ in duration. The scripts given to the participants took much longer than three seconds to read and process; the study gives the reading time as 35 seconds (2). That the scripts required so much more time than the clips to induce the desired reaction belies any suggestion that the former are “just as powerful” as the latter.

Second, the clips and the scripts differ in terms of the depth of the empathetic reaction they allow. Reading a disgusting scenario and imagining oneself in it seems much more likely to trigger a deep emotional reaction than just seeing a face on-screen for three seconds (mirror neurons be damned). The study confirms this differing depth; the greatest average change to the fMRI signal during clip viewing was about 0.1%, compared to about 0.66% for the script-reading portions of the experiment (figs. 1b and 1c, page 3).
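For what it’s worth, the two gaps just described are easy to state as raw ratios. This is only a back-of-the-envelope comparison of the figures quoted above, setting aside the many caveats about comparing reading times and fMRI signal changes across conditions:

$$
\frac{35\ \text{s}}{3\ \text{s}} \approx 11.7, \qquad \frac{0.66\%}{0.10\%} = 6.6
$$

By these crude measures, the scripts took roughly an order of magnitude longer to do their work and produced several times the peak signal change of the clips. Whatever those differences mean physiologically, they are not what “just as powerful” suggests.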

And of course, these clips don’t replicate the experience of watching a “movie.” Typically, narrative and generic context as well as cinematic style and technique allow or assist viewers in developing empathic responses to persons on-screen. All of that is absent or extremely limited in a laboratory setting, and in the films themselves. (You can find stills of the clips in another study, requiring a paid subscription or institutional access.)


Are these problems with the study? Or just with the reporting? The study’s abstract ends:

[T]his shared region however [sic] was found to be embedded in distinct functional circuits during the three modalities, suggesting why observing, imagining and experiencing an emotion feels so different. (1)

In other words, the study acknowledges the apples-to-oranges nature of the comparisons between the different stimulus-response pairs, even as it shows their similarities. So what do we make of LiveScience’s expansion of the conversation from the realm of clips and scripts to that of “movies” and “books”?

It seems to me that one of the biggest problems with science reporting in its current state—and not just on the web—is the indiscriminate use of facts in the service of stories with artificially inflated “wow” factors. There’s a diminished ethical standard for much science journalism, in which obfuscation, equivocation, and fallacious conclusions or conjectures are all widely accepted, or at least routinely tolerated. By looking closely at one article and the study it concerns, I hope to have shown some of the problems to which this climate can lead.

A Phenomenology of the Clipboard

I’ve noticed: When I have an image or sentence or URL copied (or cut) to the clipboard, but not yet pasted into its destination, my left hand—with which I always perform my pasting operations—feels, somehow, different.

I struggle to describe the sensation, but my hand and wrist feel a little higher in my brain’s list of sensory priorities, like there’s an urgency attached to the nerves there. The phenomenon, albeit subtle, has nonetheless proved effective as a safeguard against the accidental overwriting of clipped items with new copy operations.

The effect is heightened with increases in a clip’s “importance,” a flexible term in this case. Usually, “importance” maps onto one of two factors, or a combination of them:

  • a lack of redundancy (the clip doesn’t exist elsewhere, or not in a readily accessible location)
  • a personal attachment to the details of the clip (I’m particularly attached to this way of structuring that paragraph and don’t want to figure it out again).

The effect is also particularly intense when the destination of the clip is unknown at the time I perform the original copy operation. In those cases, I have not only to remember to paste the clip, but also to figure out how I intend to use it. My conscious cognitive faculties are more heavily “booked,” in other words.

To recap, the intensity (and also the likelihood of occurrence) of the cognitive outsourcing effect seems to correlate independently with these two factors:

  1. an increase in the consequences of losing the clip
  2. an increase in the chance of losing the clip

It is as though my cerebral cortex, in great demand when I’m sitting at my computer doing cognitively intense work, makes use of my sensory cortex, mostly unengaged at those times, for assistance: “Here, sensory center, take this responsibility—we’re too busy.”

This is a sensible system indeed, though its effects can be distractingly intense. Sometimes, when I have failed to paste for several minutes, I relieve the pressure by pasting into a new text document or browser window, and the sensation quickly dissipates.

(I am then left with a separate feeling, a kind of emotional vulnerability associated with having unsaved work open. But that’s fodder for another post.)

The Eyes Have It

Here’s an article on how human eyes might be indicators of personality. More pits in the iris correlate with “tender, warm, and trusting” personalities, and more curves around the edge correlate with neurosis and impulsiveness.

The most important objection here, it seems to me, is that correlation does not necessarily indicate causation. However, the study did provide at least some provisional notes on causation. One gene, PAX6, controls embryonic iris development. Scientists have also found that mutations in PAX6 correlate with “impulsiveness and poor social skills.”

I suppose this just puts the correlation ≠ causation charge at one remove. In other words, the crucial question is whether PAX6 mutations cause social issues. If you grant that, then this study of irises makes perfect sense.
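One way to picture the “one remove” (the diagram is my gloss, not the study’s): if a single gene shapes both the iris and temperament, the iris-personality correlation can arise entirely through that common cause.

$$
\text{iris features} \;\longleftarrow\; \textit{PAX6} \;\longrightarrow\; \text{personality traits}
$$

On this picture the iris causes nothing about personality; both sit downstream of the gene, so the “biomarker” is only as trustworthy as the PAX6-to-personality link itself.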

Even if the study is sound, I think this particular “biomarker” would vary in importance based on the individual and on the culture. David Schmitt has argued that “humans evolved a pluralistic mating repertoire that differs in adaptive ways across sex and temporal context, personal characteristics […], and facultative features of culture and local ecology” (259).1 That argument seems readily portable to questions of more general personality evaluation (that is, to contexts in which mate choice is not necessarily the kind of decision being made).

Anecdotally, I find myself mistrustful of people with small pupils.

1 Schmitt, David P. “Fundamentals of Human Mating Strategies.” In The Handbook of Evolutionary Psychology, edited by David M. Buss, 258–91. Hoboken, NJ: John Wiley & Sons, 2005.

Keep Your Eye on the Ball

Sharon Begley, one of the better science journalists (and science editor for the Wall Street Journal), wrote an article in 2003 called “This Year, Try Getting Your Brain into Shape.” (Sorry for the link to a reproduction; the original’s not available for free.)

The piece details a study at the University of Wisconsin that looked into the neurological effects of Jon Kabat-Zinn’s “mindfulness” meditation. What’s remarkable about the study’s conclusions is that meditation apparently increases the firing of neurons in patterns associated with strong positive emotions—whether or not the subject is meditating at the moment such firings are measured.

Begley alludes to other studies of neural plasticity showing that repeated mental rehearsal of a physical movement can affect the brain as much as actually performing it. In Blink, Malcolm Gladwell discusses some such studies involving sports training; it turns out that making a habit of watching a professional swing a golf club, let’s say, can improve one’s swing somewhat dramatically.

All this makes me wonder, though: What, precisely, is being imagined in meditation? It’s certainly not a movement, and, at least in my experience, it’s not any particular emotional state. That is, one doesn’t sit on a cushion striving for happiness or envisioning a joyful life (though to be fair, some people may do just that). Kabat-Zinn’s brand of meditation, like so many others, is about observing the emotional states that are present, and naming thoughts as they come and go (e.g., “judging, judging”) without taking any other particular mental action.

Or at least, that’s what practitioners say. But Gladwell also discusses the ways that, for example, professional athletes tend to be mistaken when describing the mechanics of their swings, shots, pitches, and so on. In baseball, it seems, it isn’t quite possible to “look the ball into the bat,” as per the oft-repeated tidbit of guidance. Keeping our eye on the ball may be just a story we tell ourselves because the description somehow meshes with our conscious experiences of swinging, even though it does not correspond to physical reality.

Could the same be true of meditators? Could the conscious experience of mindfulness meditation—nonjudgmental observation of thoughts and feelings—be something we work towards only in order to allow or encourage the unconscious mind to do the real work of meditation? If counting your breath or “noticing, noticing” your thoughts as they drift through is akin to keeping your eye on the ball (something that Ted Williams famously swore he was doing), then what’s happening below the surface that leads to neural patterns associated with joy? What allows us to make contact with that ball, despite our flawed description about how we did so?

It would take some research to make anything but the most rudimentary, speculative suggestions, but in the meantime, here’s something of that latter kind (which I would call evolutionary philosophy in order to underscore the distance from any actual scientific practice): In my mind, it wouldn’t be unreasonable to think that humans and our closest primate ancestors could increase their reproductive success by enjoying simple, somewhat repetitive tasks for some part of the day, all or most days.

In other words, if my clan’s primary or sole source of protein is this sago palm, I and my offspring would do well to enjoy grinding its inner layers to a pulpy paste, some part of the day, every day. If I have no sago palm, but the forest where I live is scattered with small animals and the occasional large one, I’d better enjoy the tedium of walking in the woods all day every day, and indeed of coming back all but empty-handed most of the time. (I have often thought of the shallow promise embedded in this “most of the time” while playing hand after hand of no-limit Texas hold ’em with the lads.)

Given these possibilities for our mental architecture (and I promise to do some research here in the coming days or weeks), it seems at least possible that the performance of a simple, repetitive mental task (or a physical one—hence the existence of yoga) at more or less the same time daily, either in isolation or in a group of people performing a similar task, could provoke our brains in such a way that they become, well, happy.