As readers of this blog will have
noted, I write often about the hubris of scientists, especially evolutionary
biologists and neuroscientists, who are trying to reduce human behavior to the
most primitive level of unconscious, uncontrollable, and irrational reflexes,
born out of random mutations and their utility in species survival.
I
used to think that some institutions had my back on this, notably the Jesuits,
who educated me for eight crucial years. Reading Plato as early as high school,
and taking courses in Logic and Epistemology, Metaphysics, Ethics, and the like
every semester of college, I was proud of the inquiry tradition of these
schools. (For those who might
think Catholic education is necessarily blindered, please read philosopher
Martha Nussbaum’s chapters on Notre Dame and BYU in Cultivating Humanity; and remember the Jesuits are well to the left
of the Holy Cross fathers.)
So
I was delighted, at first, to see an article about a young scientist at my alma
mater who had received a national grant for her work. The piece, entitled “Moral Compass,” described the work of a
woman studying “what she calls moral intuition.” My school seemed to be
continuing the tradition of asking the Big Questions, as did Jesus, Aquinas,
Kant, and all those moral giants we were introduced to in our philosophy and
theology classes.
But my hopes were dashed when I
found that the scholar was approaching the problem with the tools of neurology,
not philosophy, in her ominously named “morality lab.” Specifically, she used Transcranial Magnetic Stimulation to jolt the brain’s right temporoparietal junction (RTPJ), then functional MRI to watch brain activity as subjects
heard about a person who tried but failed to poison a friend. The conclusion: people who have had
that part of their brain temporarily zapped are more lenient toward the failed
attempt than unzapped subjects.
All well and good, if perhaps a little truncated for a general readership. We all pretty much know that damage to various parts of the brain can cause behavioral changes (from aggression to depression to lack of judgment), and that exercising various parts of the brain can build greater connections and even increase the overall size of those parts (London cab drivers with enlarged geographic memories, violinists with more developed hand areas). But the article went on to
quote the young scientist thus: “For a while now, I’ve been really interested
in moral intuitions and where they come from—and the extent to which people share
these intuitions. When someone has
a different set of intuitions, how do you know who’s right?”
That last sentence floored me for two reasons: first, the assumption that where moral intuitions come from is answerable in terms of synaptic locations; and second, even more startling, the suggestion that who’s right could be determined by examining the brain activity of the person having the intuition.
“Moral intuitions” here apparently means immediate judgments, as studied in such classes as Harvard professor Michael Sandel’s “Justice,” widely available on the Internet. In such exercises, people are asked to respond quickly to a
case, then to examine the validity of their intuition, and to elucidate
possible reasons for and against it.
(The young scientist in question apparently attended exactly this class,
which set her off on her career path.)
Are these intuitions instantaneous
eruptions from a segment of our brain, or are they more the result of life’s
experience, both direct and indirect?
Since the publication of Malcolm Gladwell’s Blink, there has been a lot of discussion of this question. Most writers and researchers have concluded that our “intuition” is most valid when it is a rapid judgment based on long familiarity with a given situation, as in his cases of art critics who doubted a statue’s authenticity and tennis players who could predict faults on serves as soon as the stroke began. Aren’t “moral intuitions,” then, more likely to be right when they’re grounded in years of thought, discussion, reading, and life experience? (The child’s outburst that “it’s not fair” that rain spoiled
the trip to the beach surely isn’t of the same validity as the later
declaration that “it’s not fair” that he should be punished for cheating on a
paper when he didn’t do so.)
So if I were asked where a moral
judgment came from, I would suggest many possible answers: universal
perceptions of what is harmful to oneself or to others, cultural upbringing,
social conditions, ethical reflection, and on and on. “Above and behind the right ear” would never even occur to
me. Would it to you?
It’s interesting that Socrates took
up the same issue over two millennia ago.
While in prison, he tells us:
“I
heard someone who had a book of Anaxagoras, out of which he read that mind was
the disposer and cause of all, and I was quite delighted at the notion of this…
What hopes I had formed, and how grievously was I disappointed! As I proceeded,
I found my philosopher altogether forsaking mind or any other principle of
order…I might compare him to a person who…when he endeavored to explain the
causes of my actions, went on to show that I sit here because my body is made
up of bones and muscles; and the bones, as he would say, are hard and have
ligaments which divide them, and the muscles are elastic, and they cover the
bones, which have also a covering or environment of flesh and skin which
contains them; and as the bones are lifted at their joints by the contraction
or relaxation of the muscles, I am able to bend my limbs, and this is why I am
sitting here in a curved posture… forgetting to mention the true cause, which
is that the Athenians have thought fit to condemn me, and accordingly I have
thought it better and more right to remain here and undergo my sentence…
There
is surely a strange confusion of causes and conditions in all this. It may be
said, indeed, that without bones and muscles and the other parts of the body I
cannot execute my purposes. But to say that I do as I do because of them, and
that this is the way in which mind acts, and not from the choice of the best,
is a very careless and idle mode of speaking. I wonder that they cannot
distinguish the cause from the condition.”
That same error of confusing causes and conditions is
apparently still with us.
Further, what exactly does the study, or the snippet of it that the college magazine published, tell us? That people are more lenient toward the person who fails to commit a misdeed when their RTPJ is disrupted. We don’t know whether we’re talking about averages across groups or the same people under two conditions. Nor do we know whether subjects judged these would-be killers extremely harshly and then somewhat less harshly, or fairly harshly and then very leniently. Or whether a disrupted RTPJ left them with more scope to consider the situation in a broader light, or simply made them indifferent to the matter and so less punitive.
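To make that ambiguity concrete, here is a minimal sketch in Python, with invented blame ratings purely for illustration (the article reports none of the study’s actual numbers or design): two very different patterns of judgment that a summary like “more lenient after disruption” would fit equally well.

# Purely hypothetical numbers for illustration; not data from the study.
# Blame ratings on a 1-7 scale (7 = maximally blameworthy), before and
# after RTPJ disruption, for the same five imagined subjects.

# Pattern A: extremely harsh judgments that become somewhat less harsh.
harsh_before, harsh_after = [7, 7, 6, 7, 7], [6, 6, 5, 6, 6]

# Pattern B: fairly harsh judgments that become very lenient.
mild_before, mild_after = [5, 6, 5, 5, 6], [2, 2, 1, 2, 2]

def avg(ratings):
    """Mean blame rating across subjects."""
    return sum(ratings) / len(ratings)

for label, before, after in [("A", harsh_before, harsh_after),
                             ("B", mild_before, mild_after)]:
    print(f"Pattern {label}: average blame {avg(before):.1f} -> {avg(after):.1f}")

Both patterns yield the headline finding, yet they describe morally quite different shifts; without the underlying numbers, the reader cannot tell which occurred.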
Most of all, how could this study
in any way tell us which moral intuition was right? Is any moral intuition by a person with an apparently whole
RTPJ “right”? If two people whose RTPJs
appear similar come to different judgments, who’s right? Whatever the truth of the old question
whether “ought” can be derived from “is,” there’s not much likelihood of ever being
able to prove that a point of view is right by examining the neuro-bio-electro-chemical
events that accompany holding or stating that point of view.
If we could do that, imagine the
effects. No need for debates to choose among candidates – just put them in the scanner. Criminals could prove their innocence by showing a clean RTPJ
or whatever other locus was relevant, or could plead the “defective RTPJ”
defense. Unless we could agree on
the precise definition of health or superiority in RTPJs, we could always
dismiss others’ views: this liberal has an overactive RTPJ, that conservative an underactive one, so he’s too soft-hearted and she’s too severe.
(Maybe we could define a healthy RTPJ as the site of Right, True, and
Proper Judgments.)
One way of looking at this project
is that it’s running backward. It
makes perfect sense to examine the brains of people whose views or actions are
highly atypical, to see if there’s a biological contribution: does the
dyslexic, autistic or sociopathic person have a specific brain
abnormality? Does the gifted
person have some different abnormality? (The second has so far been much harder
to come upon than the first.) But when two apparently normal people hold differing moral intuitions, say on war, capital punishment, abortion, or hate speech, does it make any sense to think that we can examine their brains not only to find out why they differ (I expect there are dozens of places in the brain, from memories to emotions to others we can barely dream of, that go into a complex decision), but to say who is right?
We need, and I believe will always
need, separate criteria for deciding what is right and for discovering what
neural correlates happen when we make a moral choice. Trying to do otherwise
can lead in one of two directions: fruitless quests to find moral and immoral
synapses, or frightening efforts to control behavior by altering the brains of
the morally wrong. Nazi eugenics was bad enough, but judging and condemning people for the state of their brains
as others were once judged and exterminated because of their noses or foreheads
would be even more disastrous.