Let me start by saying I’m no math whiz. The fact that all three of my college roommates were math majors had no impact whatsoever on my understanding of the field. In fact all I recall from conversations with them about their disciplines comes down to this: from the point of view of topology, a human being is just like a doughnut, and there’s a formula that can prove there’s always one spot on earth where the wind isn’t blowing, and one spot on a head with no hair.
That being said, I think I know enough math and science to disqualify me for membership in the NRA and the Republican Party. I thought my beef with them was limited to evolution and global warming, but now I see it extends to simple statistics and basic scientific method as well.
I’m talking about the NRA’s gun logic, and its gun proposal. Let’s review their recent arguments: guns are not the problem. Violent video games are the problem, and media coverage of these events. The solution: guns in every school. (Let's skip the fact that the great majority of mass killings happen in places other than schools, from malls to movie theaters to religious establishments, and most often in workplaces.)
What do you do when you’re building a hypothesis about what causes Phenomenon B? You look at the possible factors, and eliminate them one by one until you’re left with unique characteristics of the environment where B happens, or as close as you can get. That’s how we finally proved smoking caused cancer, microbes caused disease, seat belts saved lives, etc.
So let’s try that. The U.S. has violent video games and media coverage of violence. Let’s compare some other places that have both, and let’s choose places as like the U.S. as possible. We’ll use English-speaking countries that share a lot of our heritage: Canada, right next door, England, and, say, Australia. Do any of them ban violent video games? No. Is there any reason to believe these games are not sold there, as they are here? No. Do any of them avoid media coverage of violent events? Apparently not: replacing “U.S.” with “Canada,” “Britain” or “Australia” in a Google search of “Sandy Hook coverage” reveals 114 million hits for the U.S., 92 million for Canada, 32 million for Australia, and 28 million for Britain. In fact, Canada has almost three hits for every Canadian, and Australia more than one per Australian. (As a sidebar, the video game industry has pointed out that since the ’90s sales of video games have quadrupled, while rates of homicide by juveniles have decreased by 71%.)
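If you want to check the per-capita arithmetic yourself, here is a rough sketch in Python. The hit counts are the ones just quoted; the 2012 population figures (in millions) are round-number estimates of my own, not from any of the sources above.

```python
# Rough per-capita check of the Google-hit figures quoted above.
# Hit counts come from the searches described in the text; the 2012
# populations (in millions) are round-number estimates, i.e. assumptions.
hits_millions = {"U.S.": 114, "Canada": 92, "Australia": 32, "Britain": 28}
population_millions = {"U.S.": 314, "Canada": 35, "Australia": 23, "Britain": 63}

for country, hits in hits_millions.items():
    per_person = hits / population_millions[country]
    print(f"{country}: about {per_person:.1f} hits per resident")
# Canada works out to roughly 2.6 hits per Canadian ("almost three"),
# Australia to roughly 1.4 per Australian ("more than one").
```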
Now what about gun ownership? The U.S. has nearly 90 privately owned guns per 100 people, Canada 30, Australia 15, and Britain 6. Homicide rates in these four countries: 4.8 per 100,000 in the U.S., 1.8 per 100,000 in Canada, 1.4 in Australia, and 1.2 in Britain.
Most important, firearms account for 67% of all U.S. homicides, 26% of Canadian homicides, and 8% and 6% of Australian and British homicides, respectively. Putting it at its simplest, your chance of being killed by a gun in Britain is about 1 in 1.6 million; in the U.S. it’s about 1 in 30,000.
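Here is the same kind of back-of-the-envelope check for those “1 in X” odds, using nothing but the homicide rates and firearm shares just quoted. Treat it as a rough annual-risk sketch, not an actuarial table.

```python
# Back-of-the-envelope odds of dying by gunfire, using only the figures
# quoted above: homicides per 100,000 people and the share committed with firearms.
homicide_per_100k = {"U.S.": 4.8, "Canada": 1.8, "Australia": 1.4, "Britain": 1.2}
firearm_share = {"U.S.": 0.67, "Canada": 0.26, "Australia": 0.08, "Britain": 0.06}

for country, rate in homicide_per_100k.items():
    firearm_rate = rate * firearm_share[country]   # firearm homicides per 100,000
    odds = round(100_000 / firearm_rate)           # "1 in odds" chance in a given year
    print(f"{country}: about 1 in {odds:,}")
# The U.S. comes out near 1 in 31,000 and Britain near 1 in 1.4 million,
# in the same ballpark as the figures in the text.
```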
I wish I were a great chart-maker or statistician, but it’s pretty clear that the number of guns a country has is the key variable in murder rates, at least when the factors the NRA has proposed are weighed against the guns themselves.
One more excursion into numbers. What would we need to put the NRA’s “guard in every school” plan into effect? We’d need more than 132,000 full-time guards, assuming exactly one per school. How does that compare with the protections we have now? It’s more than all the police officers in the 36 largest police departments in American cities. If we eliminate New York, which has an astonishing 26% of those 131,000 officers, it means more trained officers than all the other 106 cities with over 250 police combined. It’s actually equal to 47% of all the police in all the 867 cities listed by the FBI. It’s more than three times the size of the Coast Guard, and nearly ¾ as large as the Marines.
And what would that cost? If we paid these people the same amount as the lowest starting police officer’s salary in the country, it would be around 4.2 billion dollars. If we paid them a teacher’s average salary, it would be 6.2 billion, before even counting the cost of arming and training them. That’s about 20 times the NRA budget, so I’m afraid they couldn’t help much even if they wanted to.
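For anyone who wants to see where those billions come from, here is a quick sketch. The guard count comes from the paragraph above; the two salary figures are rough assumptions chosen to illustrate the arithmetic, not official pay scales.

```python
# The cost arithmetic from the paragraph above. The guard count comes from
# the post; the two salary figures are rough assumptions on my part.
guards = 132_000                   # roughly one full-time guard per school
police_starting_salary = 32_000    # assumed lowest starting police salary, $/year
teacher_average_salary = 47_000    # assumed average teacher salary, $/year

print(f"At the low police salary: ${guards * police_starting_salary / 1e9:.1f} billion per year")
print(f"At the teacher average:   ${guards * teacher_average_salary / 1e9:.1f} billion per year")
# Roughly $4.2 billion and $6.2 billion, before arming and training anyone.
```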
And of course they don’t. What they want is to preserve an antiquated right that has now been extended far beyond what any signer of the Constitution could have imagined, when no gun existed that could fire more than one shot before being reloaded by hand, a process that took between 20 seconds and a minute. Maybe that’s the answer: go back to the hallowed “strict construction” and allow anyone who wants one to own a single-shot weapon that takes at least 20 seconds to reload. Make everyone who wants more than that build a case (e.g., showing they’re engaged in a dangerous profession, or are a proven hunter or have been trained by one to use the standard weapons for hunting game), or else spike their guns and hang them on the wall as antiques from a more savage and violent era.
"When we are born we weep that we are come to this great stage of fools" - William Shakespeare "To me the meanest flower that blows can give thoughts that do often lie too deep for tears." William Wordsworth
Sunday, December 2, 2012
How Not to Write
As anyone can see, I’ve been away for a while. The main reason is course work; I’m taking two classes for my certification in mediation and organizational conflict. Unlike last semester, I’ve fallen into the land of the pure social scientist. My courses require formal submissions in APA style, and I’ve done six papers in that genre. The experience has been so traumatic that I fear I’ve forgotten how to write like a normal human being, so aside from the time it’s taken to produce 34,000 words in this dialect, I’ve been reluctant to risk confusing real writing with what’s done in social science courses.
But the last paper is in, and it’s time to start recovering. I thought the best way to do that would be to contrast what’s been required for the past four months with the way people normally write.
Now I know every field has its jargon. But some fields are worse than others. English and history, I would say, except when contaminated by dogmas like semiotics, can actually produce something approaching real communication. There are peculiarities, of course. Mike Rose, in his wonderful Lives on the Boundary, said that when you write an English paper about a play or a novel, you’re not supposed to do what any normal person does when they’re talking about plays or novels: tell what happens and tell if it was any good. Instead you’re supposed to delve deeper into matters of style, theme, archetype, ambiguity, etc. that prove you can read more carefully than the average best-seller consumer. But you’re still connecting your reader to the book, often with extensive quotations. You may also take issue in English and history with other prior writers, whose case you describe in more or less detail before demolishing it.
The key here is that you’re writing about things that have been written, whether imaginatively or historically. Often you’re actually reading excellent writing, which may improve your own.
But in the world of social psychology and the like, none of the above applies. You don’t follow most of the rules of ordinary discourse, and you almost seem to avoid illuminating your reader. Take the mention of Mike Rose, above. This might be rewritten as: “Rose (Year) has analyzed the narrative-evaluative paradigm and its inapplicability to the academic setting.” You would then have to supply the full reference to Rose in a list of references, like “Rose, M. (Year). Lives on the Boundary. New York: Penguin.” That, of course, is an oversimplification. To do the job right, you might have to put “(Rose, 1995/2005)” to distinguish when the book came out from the edition you consulted. But you aren’t giving a page number or even a chapter number, so if anyone wanted to find out if Rose really said what you say he said, they would have to read the whole book. Even more complex, if you bought the book while traveling, you might feel you should cite the country (out of 8) where Penguin has offices; or maybe you should say “London,” because that’s where their registered office is. This often means interrupting the flow of your thoughts to track down all the data, or else facing hours and hours of citation management just when you’re done and would like a walk, a beer, or some other distraction, like reading a real book.
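If you wanted to mechanize the drudgery, the bookkeeping amounts to little more than string-pasting. Here is a toy sketch, with the fields and the Rose example values purely illustrative, of what an in-text citation and a reference-list entry boil down to.

```python
# A toy sketch of APA-style bookkeeping: paste fields into an in-text
# citation and a reference-list entry. Fields and values are illustrative only.
def apa_in_text(last, year):
    return f"({last}, {year})"

def apa_reference(last, initial, year, title, city, publisher):
    return f"{last}, {initial}. ({year}). {title}. {city}: {publisher}."

print(apa_in_text("Rose", "Year"))
print(apa_reference("Rose", "M", "Year", "Lives on the Boundary", "New York", "Penguin"))
```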
The wisest among the professors I’ve had in this program explained to me that my problem is I’m not the intended reader of the article. The authors are writing for the select group of people working in the same field or sub-field, who know Rose inside out, and they’re just trying to tell the readers that they’re filling in a hole left by Rose and whoever else they cite, so that the others can see whether it’s a hole they need to know about while they’re filling in whatever hole they’ve staked out.
Question 1, then, is why are we reading people who are not writing for us, and whom we can’t understand until we’ve read everyone else? Question 2 is, really? One article I read had 77 references for seven pages. What are the odds that reader X or Y has read all 77 things that writer A has read, and remembers them in such detail that a single word and a last name brings it all back? (One of the interesting tricks played is that these writers also cite everything they’ve ever written that is remotely germane to the current piece. Are they just showing off, or are they listing 8 other articles so that a research engine will tick off 8 more citations of their work for “mine is bigger than yours” judgments by the powers that be? They also play the Alphonse-Gaston game: if Larry, Curly and Moe do three pieces of work together, they evidently negotiate whose name goes first, so each of them, or at least each with clout, can get “lead author” props.)
Believe me, I do not exaggerate the time and energy spent citing. I have counted paragraphs where 66 words are actual text and 61 are citations in parentheses. My own long papers have consisted of 80% writing and 20% references, not counting the parentheses in the text that lead you to the 20% at the back of the paper. This proportion is required by the demand, articulated by one of my teachers, that you need to cite everything that is not your own opinion or observation. Mention D-Day and you’d better have evidence that it happened on June 6, 1944. Quote the phrase “the rest is silence” and you’d better credit Shakespeare. I’m not kidding. For a sentence that said Lord of the Flies and The Fountainhead reflected views of their era about human nature, I was told to give full citations: last name of author, first initial, date of publication, and city, with details if the city isn’t a famous one. Do I have a copy of either book nearby? What edition should I cite? Will any of my readers go to New York to buy Lord of the Flies and read it to see if I’m right?
Now it’s easy to play this game. Think of an idea you want to include, state it in a word or two (Oedipus complex, cognitive dissonance, conditioned response), go look on your shelf or in Google, and you’ve got another citation. My 14-page paper had 102 citations; my 347-page doctoral thesis 112.
What I find worse than the tedium and the impenetrability, the cliquishness and the petty point-scoring, is the impersonality. No one gets a first name, no one’s argument is given any scope. (A teacher even said it’s bad writing to quote other people: just paraphrase them.) Everyone else’s work is simply one more pebble piled on the mound that will get you to the top of tenure hill.
To paraphrase Edgar Lee Masters: Tick, tick, tick. Such little citations. While Homer and Whitman roared in the pines. (“Petit the Poet,” Spoon River Anthology, written for all time. Read it.)
Wednesday, October 24, 2012
Straws in the Wind
Several minor incidents in the last couple of weeks have gotten me thinking about the ways of the world. Not that I want to go all Andy Rooney on you, but sometimes you do just have to ask if anybody’s noticed.
Example 1:
A mile or so from my home there’s a little square that sits on the edge of the Boston town line. Not much there: a Dunkin’ Donuts (this is Massachusetts after all), a liquor store, pizzeria, gas station, neighborhood bar, and tiny, imperiled post office with friendly staff and no lines. One of its roads passes under a commuter rail line, and sitting above the embankment for the trains is a billboard, which mixes ads and public service announcements every few weeks or so.
This week, however, it sports the most distressing PSA I’ve ever seen. Next to a photo of a young black boy are these words in giant letters:
MURDER
IT'S NOT OKAY
My first impulse was to say “I knew that.” Then I began to wonder who doesn’t? And in our age of hyperbole, who decided to take this understated approach?
There are, I believe, a number of “It’s Not Okay” campaigns, or similar, around. I’ve seen the “It’s Not Acceptable” campaign about name-calling, with Jane Lynch and Lauren Potter, and I find it very impressive. But “It’s Not Acceptable” seems like a tougher stance than “It’s Not Okay.” The latter sounds rather playground or parent-child to me: It’s not okay to take the last cookie, leave someone out of the game when choosing sides, or bite your older brother. But is murder now just “not okay”? Isn’t not being okay part of the definition of murder, as opposed to, say, justifiable homicide, self-defense, or a few other types of death-dealing that have at one or another time, in one or another place, been socially sanctioned?
We do live in the age of water-boarding, drone strikes, and stand your ground laws, but none of these seem exactly relevant to this ad, which almost undermines itself. After all, I do a fair number of “not okay” things from time to time: slide through a yellow light at the last minute, feed the parking meter, say I’ve only had two glasses of wine when I’ve really had three. Those are not okay. But murder? That’s forbidden by all the laws of God and man, in every religion I know of, and with the strictest of penalties for murderers of almost any crime on the books. What young person with a grudge (because that’s certainly what the image suggests to me) will be dissuaded from a drive-by shooting by a sign at the intersection telling him “It’s Not Okay”? As far as I'm concerned, this billboard is not okay.
Example 2 (a and b) :
I’m taking classes, as I’ve mentioned before, in conflict resolution. Most of my classmates are young enough to be my grandchildren, and in general I am impressed with their commitment to make the world a better place, and with the work many of them are already doing toward that end. I certainly never did as much at their age as they are doing now: working in the State House, volunteering as mediators, or even working their way through school while carrying full-time jobs.
But every once in a while, one of them says something that makes me realize how differently we see the world.
Two of these happened recently in my Theory of Conflict class. In one case, we were discussing a famous 1949-54 study called “The Robbers Cave,” in which a group of social psychologists took two groups of kids to a summer camp. Neither group knew the other existed until they were brought together and urged to compete for prizes. They became antagonistic and aggressive toward each other, but when the adults arranged “real” challenges that could only be solved by their working together, the rivalries diminished and cooperation increased.
An interesting study, to be sure. But I raised my hand to suggest that extrapolating from 11- and 12-year-old behavior to fundamentals of group behavior was rather dubious, especially given the intervening fifty years of research on brain development and its impact on judgement. Another student responded that maybe these kids were closer to “real human nature” than adults would be.
“Real human nature”? I wondered. Did he mean literally that the underdeveloped young of a species are more true to the type than the adults? Or that aggression and conflict are what he thinks of as human nature, and everything else is a veneer covering the brutish and nasty reality of our biology? The long arms of Social Darwinism and Freud’s rampaging id still stretch into the twenty-first century. Is the impulse to resolve conflict peacefully that motivates students in our program “against nature”? If so, are we doomed to fight a losing battle?
There’s a lot of recent research that challenges the “red in tooth and claw” image of human, and general mammalian, nature. The discovery of bonobo culture, evidence that being social and helpful may be a better survival strategy than dominance among the great apes (Alan Alda does better than Arnold Schwarzenegger is the way one wag put it), and studies showing that chimps, and even rats, will refuse to take a reward that costs a peer suffering, are among many that suggest cooperation, altruism, and group solidarity may be as well-founded in our makeup as survival of the individually fittest.
That’s example two (a). Two (b) comes from the same class. In a small group discussion, five of us were asked to design ways of calming tension in a multicultural community dealing with economic struggles and racial tension. When someone suggested using the churches of the various ethnic groups to connect young people around projects, one student said, “That never works.” She went on to say that she hated religion, and asserted that “You should do what’s right because it’s right, not because someone in an old book said it’s right.” (She also mentioned that she was upset that the school did not have an atheist alliance. But hold that.)
An interesting view. But how do you know what’s right? Because you have reasoned to a “right” that eluded all the people before you – the ones who wrote or set down the books? Because you met someone who persuaded you of their vision of what’s right? Or because you just knew, from birth, or from some other moment of insight, what was right? Given a choice between those who listen to a long tradition of wisdom and analysis of right and wrong, or someone who just “got it” between birth and today, whether from solo ratiocination, sitting at the feet of a master, or the promptings of their own heart, I think I’d feel safest with the first. To go back to our beginning, would I rather be surrounded by people who had heard “Thou shalt not kill” from the time they could understand the words, people who had worked out the sanctity of human life all by themselves, or people who need a poster in Hyde Park to tell them “Murder: It’s Not Okay”? Tell me what you think.
Sunday, September 9, 2012
The "Business" of Government, Part One
One of the biggest themes of the Republican campaign in this election is Mitt Romney’s alleged business acumen. Of course, some debate his track record, and others the relevance of business skills to the presidency. Be that as it may, I’ve been thinking about Republicans, Democrats, and business wisdom, and in the next blogs I’ll apply some of the tenets of the past decade’s biggest business book, Jim Collins’s Good to Great, to the two parties. GtoG seems particularly apt, since going from good (or bad) to great is what every presidential candidate promises he’ll do for America.
For the unfamiliar, Collins’s Good to Great followed a number of businesses that had made a leap from average to dominant in their fields, paired with similar companies that had not leapt forward (e.g. Walgreens vs. Eckerd, Circuit City vs. Silo). His research team found certain characteristics that they believe consistently distinguished such companies.
The first of these I’ll consider is “First Who…Then What.” The idea is that great companies put together the right team of people and only then decide new directions for the company. The popular phrase that captures this theme is “getting the right people on the bus.” So I decided to choose one particular seat on the bus – the relief driver, so to speak – the Vice Presidents and VP candidates of the two major parties.
In my lifetime (which I’ll stretch to include prenatal life, to be fair to Republicans), the two parties have put forth 25 candidates for the position (besides those three who have stepped up after deaths or resignations): 11 Republicans and 14 Democrats.
Here they are:
Republican VPs: Richard Nixon, Spiro Agnew, George H.W. Bush, Dan Quayle, Dick Cheney
Democratic VPs: Harry Truman, Alben Barkley, Lyndon Johnson, Hubert Humphrey, Walter Mondale, Al Gore, Joe Biden
Republican Candidates: Earl Warren, Henry Cabot Lodge, Jr., William E. Miller, Bob Dole, Jack Kemp, Sarah Palin
Democratic Candidates: John Sparkman, Estes Kefauver, Ed Muskie, Sargent Shriver, Geraldine Ferraro, Joe Lieberman, John Edwards
Now let’s ask if these were the right people to put in the relief driver’s seat. The first cut:
Each party has had two VPs who later ran for and won the presidency (I won’t insult you by naming them.) The Republicans have had two VPs who lost (Papa Bush is on both lists), the Democrats three.
But digging deeper, how about the general quality of the choices? We could look at it this way: Were any of the losing or non-running VPs plausible presidential candidates? Obviously the five who ran (Bush, Dole, Humphrey, Mondale, Gore) were. Judging from history, Kefauver, who would have been the nominee in 1952 if primaries had functioned as they do now, Ed Muskie, and Joe Biden could easily be added to the list. Equally obviously, Spiro Agnew and John Edwards scandaled or grafted themselves out of the running. On the Republican side, I’d give a loud no to Quayle, Cheney, and Palin, as well as William E. Miller who, although only 50 when he lost in 1964, left public life completely thereafter. I would give a yes to the early Republican losers, Warren and Lodge, though neither ever expressed interest in the presidency. That’s three yeses and five no’s for the Republicans; five yeses and one no for the Democrats. (Let’s pair Jack Kemp and Geraldine Ferraro as unlikelies but not no’s; we’ll get to Sargent Shriver and a few early ones later.)
Oddly, each party has had one VP candidate it would later consider a traitor: Warren and Lieberman. Taking this a step further, while no Democrats other than Lieberman would be rejected by their party if they were running today, Warren (usually labeled “progressive”) and Lodge (“moderate internationalist”) would be as unlikely to finish in the running today as Jon Huntsman did.
In the broadest sense, which party has nominated more people whom history might regard as “great Americans”? In Tier One I’d put Harry Truman and Lyndon Johnson, whose records in civil rights alone earn them that honor. Humphrey, Lodge, and Warren come close, for their many impacts on history and the extent of their service. George H.W. Bush is the only other Republican contender, and perhaps deserves a Tier Three slot. (I’m steering clear of Bush and Iran-Contra, as I have with Johnson and Vietnam, Truman and the atomic bomb.) Gore, Mondale and Muskie all deserve mention, perhaps a notch below Bush, though Gore’s the only Nobel Prize winner among these VPs. Sargent Shriver may in fact rank higher than several of these: his role in starting the Peace Corps, Job Corps, and Head Start matches Warren’s for long-term impact. Then there’s Kefauver, who not only could have been president, but also was one of the bravest Democrats of his generation, one of three southern Democrats (with Al Gore’s father and Lyndon Johnson) to refuse to sign the Southern Manifesto of 1956, which objected to Brown v. Board of Education.
Finally, there’s the category of disgraces and laughingstocks. John Edwards certainly belongs here, but his personal sins pale in comparison to the crimes of Nixon and Agnew, while the sheer triviality of Palin, Miller, and Quayle is unmatched in Democratic circles. Considering life after the VP run, far more Democrats than Republicans made contributions after their moment or years in the VP spotlight: Gore, Ferraro (UN Commission on Human Rights), Mondale (Ambassador to Japan, Minnesota A.G.), John Sparkman, Kefauver, Alben Barkley, Lieberman (all returned to the Senate), Ed Muskie (Secretary of State). On the Republican side, only Dole and Warren played significant roles after their Vice Presidential runs.
Summing up: which party shows the better business sense in getting the right people on the vice presidential bus? In my book, the Democrats come out way ahead; since 1960, only Bob Dole and Papa Bush have been people of stature, while Nixon, Agnew, Quayle and Palin (and I’d add Cheney) have been disasters or embarrassments. Post-World War II, all the vice presidents or candidates whose major achievements will go down in history unmarred by their crimes are either Democrats or old Republicans who would be thoroughly repudiated today.
Next: “Confront the Brutal Facts”
Saturday, September 1, 2012
Brain Damage
As readers of this blog will have noted, I write often about the hubris of scientists, especially evolutionary biologists and neuroscientists, who are trying to reduce human behavior to the most primitive level of unconscious, uncontrollable, and irrational reflexes, born out of random mutations and their utility in species survival.
I used to think that some institutions had my back on this, notably the Jesuits, who educated me for eight crucial years. Reading Plato as early as high school, and taking courses in Logic and Epistemology, Metaphysics, Ethics, and the like every semester of college, I was proud of the inquiry tradition of these schools. (For those who might think Catholic education is necessarily blindered, please read philosopher Martha Nussbaum’s chapters on Notre Dame and BYU in Cultivating Humanity; and remember the Jesuits are well to the left of the Holy Cross fathers.)
So I was delighted, at first, to see an article about a young scientist at my alma mater who had received a national grant for her work. The piece, entitled “Moral Compass,” described the work of a woman studying “what she calls moral intuition.” My school seemed to be continuing the tradition of asking the Big Questions, as did Jesus, Aquinas, Kant, and all those moral giants we were introduced to in our philosophy and theology classes.
But my hopes were dashed when I found that the scholar was approaching the problem with the tools of neurology, not philosophy, in her ominously named “morality lab.” Specifically, she was using Transcranial Magnetic Stimulation to jolt the brain’s right temporoparietal junction (RTPJ), then using functional MRI to watch brain activity as subjects heard about a person who tried but failed to poison a friend. The conclusion: people who have had that part of their brain temporarily zapped are more lenient toward the failed attempt than unzapped subjects.
All well and good, if a little truncated perhaps for the reading audience. We all pretty much know that damage to various parts of the brain can cause behavioral changes (from aggression to depression to lack of judgment), as well as that using various parts of the brain can develop greater connections and even greater overall size of those parts (London cab-drivers with large geographic memories, violinists with more developed hand areas). But the article went on to quote the young scientist thus: “For a while now, I’ve been really interested in moral intuitions and where they come from—and the extent to which people share these intuitions. When someone has a different set of intuitions, how do you know who’s right?”
That last sentence floored me for two reasons: that where moral intuitions come from is answerable in terms of synaptic locations, and even more, that knowing who’s right could be determined by examining the brain activity of the person having the intuition.
Moral intuitions apparently mean immediate judgments, as studied in such classes as Harvard professor Michael Sandel’s “Justice,” widely available on the Internet. In these studies, people are asked to respond quickly to a case, then to examine the validity of their intuition, and to elucidate possible reasons for and against it. (The young scientist in question apparently attended exactly this class, which set her off on her career path.)
Are these intuitions instantaneous eruptions from a segment of our brain, or are they more the result of life’s experience, both direct and indirect? Since the publication of Malcolm Gladwell’s Blink, there has been a lot of discussion of this question, and most writers and researchers have concluded that our “intuition,” as in his cases of art critics who doubted a statue’s authenticity, and tennis players who could predict faults on serves as soon as the stroke was beginning, is most valid when it is a rapid judgment based on long familiarity with a given situation. Aren’t “moral intuitions” more likely to be right when they’re grounded in years of thought, discussion, reading, and life experience? (The child’s outburst that “it’s not fair” that rain spoiled the trip to the beach surely isn’t of the same validity as the later declaration that “it’s not fair” that he should be punished for cheating on a paper when he didn’t do so.)
So if I were asked where a moral judgment came from, I would suggest many possible answers: universal perceptions of what is harmful to oneself or to others, cultural upbringing, social conditions, ethical reflection, and on and on. “Above and behind the right ear” would never even occur to me. Would it to you?
It’s interesting that Socrates took up the same issue over two millennia ago. While in prison, he tells us:
“I heard someone who had a book of Anaxagoras, out of which he read that mind was the disposer and cause of all, and I was quite delighted at the notion of this… What hopes I had formed, and how grievously was I disappointed! As I proceeded, I found my philosopher altogether forsaking mind or any other principle of order…I might compare him to a person who…when he endeavored to explain the causes of my actions, went on to show that I sit here because my body is made up of bones and muscles; and the bones, as he would say, are hard and have ligaments which divide them, and the muscles are elastic, and they cover the bones, which have also a covering or environment of flesh and skin which contains them; and as the bones are lifted at their joints by the contraction or relaxation of the muscles, I am able to bend my limbs, and this is why I am sitting here in a curved posture… forgetting to mention the true cause, which is that the Athenians have thought fit to condemn me, and accordingly I have thought it better and more right to remain here and undergo my sentence…
There is surely a strange confusion of causes and conditions in all this. It may be said, indeed, that without bones and muscles and the other parts of the body I cannot execute my purposes. But to say that I do as I do because of them, and that this is the way in which mind acts, and not from the choice of the best, is a very careless and idle mode of speaking. I wonder that they cannot distinguish the cause from the condition.”
That same error of confusing causes and conditions is apparently still with us.
Further, what exactly does the study, or the snippet from it the college magazine published, tell us? That people are more lenient toward the person who fails to commit a misdeed when their RTPJ is disrupted. We don’t know whether we’re talking about averages or about the same people under two circumstances. We don’t know whether they judged these would-be killers extremely harshly and then somewhat less harshly, or fairly harshly and then very leniently. Nor do we know whether a disrupted RTPJ left them with more scope to consider the situation in a broader light, or made them indifferent to the matter and so less punitive.
Most of all, how could this study in any way tell us which moral intuition was right? Is any moral intuition by a person with an apparently whole RTPJ “right”? If two people whose RTPJs appear similar come to different judgments, who’s right? Whatever the truth of the old question whether “ought” can be derived from “is,” there’s not much likelihood of ever being able to prove that a point of view is right by examining the neuro-bio-electro-chemical events that accompany holding or stating that point of view.
If we could do that, imagine the effects. No need for debates to choose a candidate – just put them in the scanner. Criminals could prove their innocence by showing a clean RTPJ or whatever other locus was relevant, or could plead the “defective RTPJ” defense. Unless we could agree on the precise definition of health or superiority in RTPJs, we could always dismiss others’ views: this liberal has an overactive RTPJ, that conservative an underactive one, so he is too soft-hearted, she’s too severe. (Maybe we could define a healthy RTPJ as the site of Right, True, and Proper Judgments.)
One way of looking at this project is that it’s running backward. It makes perfect sense to examine the brains of people whose views or actions are highly atypical, to see if there’s a biological contribution: does the dyslexic, autistic or sociopathic person have a specific brain abnormality? Does the gifted person have some different abnormality? (The second has so far been much harder to come upon than the first.) But when two apparently normal people hold differing moral intuitions, say on war, capital punishment, abortion, or hate speech, does it make any sense to think that we can examine their brains to find out not only why they differ (I expect there are dozens of places in the brain, from memories to emotions to others we can barely dream of, that go into a complex decision), but also who is right?
We need, and I believe will always need, separate criteria for deciding what is right and for discovering what neural correlates accompany a moral choice. Trying to do otherwise can lead in one of two directions: fruitless quests to find moral and immoral synapses, or frightening efforts to control behavior by altering the brains of the morally wrong. Nazi eugenics was bad enough, but judging and condemning people for the state of their brains, as others were once judged and exterminated because of their noses or foreheads, would be even more disastrous.
Monday, August 13, 2012
My own Shangri-Las
“No utopia has ever been described in which any sane man would on any conditions consent to live, if he could possibly escape.” -- Alexander Gray
While Gray may be right about the utopias of social engineers, from Plato to Skinner, I think many of us would be happy to live in the worlds created by imaginative writers, past and present. As a boy, I lost myself in King Arthur’s and Robin Hood’s England, Tarzan’s Africa, Arthur Ransome’s Swallowdale, and many others. C.S. Lewis’s Narnia and J.R.R. Tolkien’s Middle-earth came later and lasted much longer; I still dip into their Otherworlds occasionally.
But these days I have less interest in dwelling among knights, apes, fauns or hobbits, and more in imagining plausible communities of ordinary, or not so ordinary, humans who promise a richer moral and social life than exists in contemporary America. In my reading over the past few years I’ve found three such imaginary places, all of which I share with others as often as possible. Two are from the frequently underestimated world of mystery writing, and one from a master of the short story, the novel, the essay, and the lyric poem.
To start with the detectives, my first recommendation is P.L. Gaus’s Amish mysteries. Gaus, a college professor in Ohio’s Amish country (despite Pennsylvania’s greater prominence, Ohio’s Amish population is equally large), has written seven – and counting – mysteries in which the pacific world of the Amish is rent by violence, and three “English” characters (as the Amish call all outsiders) come to their assistance. The primary one of these is, oddly enough, a local college professor – ironically, of military history. (Gaus himself was a chemist until his recent retirement; obviously no connection there.)
Along with his alter ego, the irascible local sheriff, and occasional help from a non-Amish pastor and carpenter, Professor Michael Branden often takes on himself the burden of protecting those who will not fight either to avenge or to defend themselves. In one novel, an Amish bishop explains why his people believe so completely in non-violence: “We are taught that we are to be harmless as doves. It means a lot of things, but one thing is that the harm we do is always harmful to us, if for no other reason than the guilt that we shoulder for it.” As with HBO’s The Newsroom, some will find these books “preachy,” but in my view they’re none the worse for that, and the leaven of crime and mystery can attract even readers who may slide over the homilies – to their own detriment. (The novels, beginning with The Blood of the Prodigal, are published by Plume.)
A more secular Shangri-La can be found in the woods of southern Quebec, in Louise Penny’s Three Pines, the setting of another ongoing series featuring Inspector Armand Gamache of the Sûreté du Québec (the chief inspector for the whole Francophone province). Penny has won an extraordinary number of mystery writer awards, so a lot of people must agree with me. Three Pines is a hidden gem of a village, with a gourmet bistro, a bookstore, several practicing artists, a national treasure of a poet, a great house with a grim history, and, unfortunately, a penchant for attracting murderers as well as refugees from the urban world.
Penny’s novels go much deeper into the art of the mystery novel: Gamache is a French Canadian version of such thoughtful British sleuths as Inspector Dalgliesh and Lord Peter Wimsey (note that they were all created by women, and none is a muscular, quick-on-the-draw Lothario). His skill is not the ratiocinative brilliance of Holmes, but an ability to delve deep into human motivation; he always insists that the strangest crime makes complete sense if seen from the viewpoint of the killer, and that the roots of crime are often buried far in the past.
Gamache is also a deeply kind man, giving second chances to junior officers who have failed and even betrayed him. He serves as a center around whom circle a collection of complicated characters, especially in Three Pines, which is home, among others, to a gay couple who own the bistro and the B&B, a former psychologist now running the bookstore, and Ruth Zardo, a brilliant poet with a heart of – well, something slightly soft – buried beneath geological layers of rage. Penny also has two unusual characteristics for a series writer: she can allow her characters to change radically (or reveal totally unexpected layers), and, like David Simon in The Wire and Treme, she has not hesitated to shock us with the loss of favorites from her cast. Every new Penny, therefore, is not only a whodunit, but a who will surprise us and how. (The first in the series is Still Life; all are published by Minotaur.)
Wendell Berry’s Port William, Kentucky, like Ohio’s Amish country, is an American enclave in which different mores and values survive the political and economic distortion that so plagues most of our world. The key term in Berry’s vision is “membership.” In his good society, people understand that they are connected by more than blood, marriage, proximity, or even friendship. Rather, they are connected by a common humanity that appears sometimes as secular, sometimes as spiritual. As one of the town’s eldest members says, "The way we are, we are members of each other. All of us. Everything. The difference ain't in who is a member and who is not, but in who knows it and who don't."
Being members of each other means many things to Berry’s characters. It means protecting a stranger who is hurt in a Saturday night brawl; it means helping a young couple purchase a farm where they had worked for years, but to which they have no legal claim; it means taking care of a drunken uncle who falls again and again, without ever hoping that he will change; and it means loving a woman from afar all her life, just because you believe she deserves to have someone love her.
It does not mean living in eternal peace and harmony. In and around Port William there are not only town drunks; there are abusive husbands, greedy landowners, thieves, and even an occasional murder. Because of course there are those who don’t know or won’t admit they are members of each other. For the ones who do, however, there is a deep awareness that we are more than producers and consumers. As an older character says to a younger, “we’re dealing in goods and services that we didn’t make, that can’t exist at all except as gifts. Everything about a place that’s different from its price is a gift. Everything about a man or woman that’s different from their price is a gift. The life of a neighborhood is a gift…you’re friends and neighbors, you work together, and so there’s lots of giving and taking without a price – some that you don’t remember, some that you never knew about. You don’t send a bill. You don’t, if you can help it, keep an account. Once the account is kept and the bill presented, the friendship ends, the neighborhood is finished.”
That’s the kind of Shangri-La that just might exist, or be brought into existence for a time, like an evanescent subatomic particle. I’d like to be there when it happens. I’d like to help make it happen.
(Berry’s Port William appears and reappears in novels and short stories written over the past half century, which together tell about more than a century of the town’s life. His short stories are collected in That Distant Land; among his several novels my favorite is Jayber Crow. Most are published by Counterpoint.)
Monday, July 23, 2012
The National Collegiate Almighty Association
“While the future's there for anyone to change, still you know it seems/It would be easier sometimes to change the past.” -- Jackson Browne
What is the most powerful force on our planet, or maybe the cosmos? Evidently it’s the NCAA. Apparently they have two powers no other earthly body has: they can change the past and they can punish the dead. Yes, the old Soviet Union and many other dictatorships have tried, airbrushing out the now-out-of-favor from photographs, altering historical records, and such. Countries try to change the past: Turkey insists it never practiced genocide on the Armenians. But there’s always someone, often the world’s majority, to call them on it.
In ancient times – what we used to call the Dark Ages – some nations and religions would dig up corpses and drive stakes through their hearts, burn them, or hang them for purported misdeeds. You’d think we’d be beyond such primitive thinking.
But the NCAA has borrowed from the playbooks of the Inquisition and the Stalinist era in its sanctions against Penn State.
True, sports authorities have sometimes changed the past, but with great inconsistency. Sign a wrong scorecard in golf, and you either get the poorer score you signed, or forfeit if you signed a better score than you made. So your birdie is now a par, your par a bogey. But that’s in the rulebook. For decades, almost every sport refused to change a wrong call in the immediate past, except for umpires in tennis overruling line calls. Now we have instant replay in one baseball event, and many football, basketball, and hockey situations. But baseball refuses to correct an obviously wrong call, even when it costs a player a perfect game and does not alter the game’s outcome in any way. And soccer absolutely will not sanction corrections, even though its referees have an impossible real-time job.
Sports also change the past when a violation has been discovered that falsifies the game’s outcome – ineligible players, drug enhancements, etc. Of course they do this with total inconsistency too: Barry Bonds, Alex Rodriguez, and Mark McGwire still hold over a dozen home run records despite steroid use, while cyclists and Olympic athletes are stripped of their titles for the same infraction.
In the so-called real world, very few democracies change the past because of later discoveries. Imagine if they did: Colin Powell stripped of his rank because he later told the UN that Iraq had weapons of mass destruction on railroad cars. Ronald Reagan’s and George Bush’s names removed from all airports and other public buildings if future documentation proves them complicit in Iran-Contra. Let’s not even mention J. Edgar Hoover.
One fundamental principle in our civil and criminal society is that the dead cannot be held accountable for their misdeeds. When Ken Lay died before he could be sentenced, his conviction was vacated, not because of any new evidence, but because his appeal had not run its course. You can’t try a dead person, so you can’t convict him. Innocent until proven guilty, especially in the afterlife.
Now we come to Penn State. Because of allegations that involve one deceased coach and one former coach, a team loses thirteen years of games that it once won, perhaps the largest reality alteration in the history of sports. Who can doubt that the punishment is aimed primarily at the late Joe Paterno, condemned in the Freeh report post-mortem by the testimony of the indicted and disgraced living? During the years of the forfeits, Penn State players won two Butkus awards (LaVar Arrington and Paul Posluszny). If they made no tackles in those years, do they forfeit the awards as well? Does every Penn State player who was drafted in those years get undrafted because he did nothing during his college days?
This is not to excuse anything done by Joe Paterno or anyone else at Penn State. As Mark Antony said of Caesar, “If it were so, it was a grievous fault, and grievously hath [he] answered for it.” But to pursue a man beyond the grave serves no rational purpose: does the NCAA really think we need to deter future sports programs from similar behaviors? Does it claim that if Penn State had turned in Sandusky in 2001, when the crime occurred in the athletic facility, its football program would have been devastated retroactively to 1999? Do we benefit as a society by punishing the dead and by imagining that a declaration can alter history, even the small history that is college sports? I think not. The best thing about the NCAA decision is its fine of Penn State and its designation of the money for abuse prevention. Most of us hope we can change the future; only time travel movies try to do it by changing the past.
Wednesday, July 11, 2012
For the Birds
No, I don’t spend all my time indoors reading and carping. Sometimes I turn my curmudgeonry toward the natural world, especially out here on Martha’s Vineyard. Last year, for example, I squirrel-proofed my bird feeders by stringing them (the feeders, not the squirrels) from thin wires between the house and a tree – six or more feet high. With no tightrope wide enough for their skills, the gray robbers are content to pick up what the birds drop, like the family dog lying under the chair of the sloppiest eater among the children.
Victory – but short-lived. After a week or so, the colorful array of finches, cardinals, bluejays, redwing blackbirds, chickadees and others was almost entirely driven off by a large and growing mob of grackles.
Note to the pedantic: there is no term of venery for grackles as there is for owls (parliament), quail (bevy), starlings (murmuration), or, most hyperbolic, wrens (herd? Who thought that up?). One blogger has suggested flash mob, which does capture their abruptness, but is much too pleasant. I think they should be put in the lineup with their bigger cousins: a murder of crows and a mugging of grackles sounds about right to me. Subnote: the British use “grockles” to refer to mobs of tourists – one of the best Britishisms since “bumph,” which means both toilet paper and any tedious pile of paper that requires your reluctant attention.
But back to Quiscalus quiscula. (The dictionary makers list this odd name as “of uncertain origin,” and suggest a possibility that it comes from the Spanish quisquilla, worthless fellow. Sounds apt to me – maybe it all goes back to the Latin for “who,” as in “Who the hell are these birds anyway?”) I called my good birding friend Peter Tacy in Connecticut, who advised me on these Mafiosi – the black and purple combo suggests a similar fashion sense, doesn’t it? Peter explained that grackles shift from insects to seeds after mating, that they do indeed take over feeders, and that they can not only chase off smaller birds, but also add them to their diets. His suggested cure was to put out unattractive food, such as nyjer thistle seeds, which worked well, except that almost no one else liked the seeds either and they got wet and clogged up the feeder.
Fast forward to this June, when I opened up my wallet for a specifically designed nyjer feeder (the little birds seem much happier with this one, especially a mated pair of finches who often dine together), and a large-bird-proof feeder that shuts down when anyone heavier than a cardinal tries to eat by perching on its ring.
At last we have a winner, and a source of great entertainment. The jays stop by occasionally, but give up almost instantly. Everyone else dines successfully. But best of all is watching a grackle spend several minutes trying to beat the system.
First he lands and tries a feeder hole, but it’s closed. He studies it, then moves to the next hole. (There are six.) Still no joy. He glances up again at the tube, where he sees plenty of seed. So he looks into a hole, which of course is not a hole as long as he is perched. Around the perching ring again a few times. Now he stretches high up to stare right into the cylinder. Looks like seed from here too, he must think. Around the ring again, checking holes, up to look at the seed. Then (for he doesn’t worry about predators as do almost all the other birds) he looks around for the culprit. Am I being punked? Where’s Ashton Kutcher? Around again, check the holes, check the tube. Then straight up as if imploring the Great Grackle in the Sky for help. For a good five minutes I watch him, as smaller birds perch on the wire, balancing caution and hope. Finally he gives up. I turn my head to watch him fly, and when I snap back to the feeder, the house finches are already dining.
Ah, sweet triumph! Anyone know a parallel strategy for the mob feeders at Morgan Stanley, Barclays, etc., etc.?
Friday, June 29, 2012
Appealing to a Higher Court
Although the Supreme Court has allowed the Affordable Care Act to stand, there are those who still contend that, like birth
control and artificial insemination, the act is against the Natural Law, as
developed primarily in the Christian tradition over two millennia. Amherst legal scholar Hadley Arkes has
been a prime advocate of this view in numerous articles, including “Natural Rights Trumps Obamacare, Or Should” in the magazine First Things, December 2011. As a card-carrying Jesuit fellow traveler, I couldn’t help
trying my hand at a response. For
those of you interested, here’s what I had to say:
Five hundred years before Christ, Heraclitus observed, “although the Logos is common
to all, most men live as if each of them had a private intelligence of his
own.” Heraclitus’s Logos, like the Dao or the Natural Law (mutatis mutandis), defines the principles which govern the universe,
and from which humans stray at their peril. Hadley Arkes’s natural law challenge to America’s health care plan substitutes a private intelligence for the well-defined principles of natural law, perhaps in an effort to align his position with concepts more universal than any particular constitutional theory.
The challenge needs to be examined on several levels.
At the highest, is there a reasoned basis for believing that natural law is
incompatible with universal health care? Is there consensus among natural law
authorities, or in common practice, for this belief? Finally, assuming no disqualifying
argument against universal health care, is there evidence that such a system
would be inferior to the prior one as regards its effect on the natural rights
of Americans?
Arkes evidently perceives
two ways in which mandated health care violates natural law. The first is that the law abrogates a specific
natural right. Quoting a recent legal brief, Arkes maintains that “Imposing on
people a contract they do not want would be quite as wrong as dissolving,
without their consent, a contract they had knowingly made.” Oddly, the
statement is itself entirely correct, but the implication drawn from it
entirely wrong. As a citizen, I have multiple contracts imposed on me that I
may or may not want: to pay taxes, to contribute to Social Security and
Medicare, to serve my country if called in time of war, to support any children
I may have helped conceive, and on and on. I may also have contracts I have knowingly made dissolved without my consent, including unlawful contracts (e.g. murder for hire), contracts lawful but restricted (e.g. bigamous marriages), and contracts entirely lawful but overridden by the concerns of the state (e.g. property contracts abrogated by eminent domain).
Arkes’s second, more powerful objection is based on
the possibility that mandated health care would impinge upon the fundamental
right to life. Arkes makes this contention twice, first asserting, “The
generous provision of care to the poor would come along with controls that
could deny to ordinary people the medical care they would regard as necessary
to the preservation of their own lives, perhaps even when they were willing to
pay for that care themselves,” and again, “this scheme of national medical care
is virtually bound to produce a scheme of rationing, as it has produced that
rationing in Britain and Canada, denying medical care to people now entirely
reliant on the government for their care. The serious question then is whether
this denial to people of the means to preserve their own lives, with means
quite legitimate, touches the ground of natural rights.
I believe that it
does.”
This position, though dramatically stated, totters on not one, but four cracked legs. First, it contends that these harms “could,” and more strongly are “virtually bound” to happen, offering in evidence only that they have allegedly come about in two other countries. This contention can
be challenged in two ways. Empirically, there are more than thirty countries
with universal health care, so experience in two is not proof of inevitability,
or even likelihood. But there is also a rational objection. Actions that have a possible, or even highly likely, harmful outcome may be entirely permissible if
precautions can be taken to avoid the outcome, or if the action is the only apparent
way to avoid a worse result. A surgery virtually bound to result in the death
of a patient may still be ethical, even when the patient is physically unable
to give consent, if it is deemed to be the patient’s only hope. A rescue effort
likely to result in the death of the rescuer is still legitimate, even heroic,
when the same action (plunging into a raging torrent, for example) would be
suicidal if not for the intent of the rescuer.
Even if rationing were to evolve, it need not deny
people “the means to preserve their own life.” A society may deny orthopedic or
other corrective surgery for very elderly patients or others whose lives are
minimally affected by their physical limitation, yet offer all available
interventions to patients with life-threatening diseases.
Further, no system could leave people “entirely
reliant on the government for their care,” unless it were combined with
draconian regulations against travel that would themselves be against all
contemporary standards of natural rights. This last point deserves further
examination, particularly in light of the rationale for adopting universal
health care.
Here, to invert Anatole France’s quip that “The law, in its majestic equality, forbids
the rich as well as the poor to sleep under bridges, to beg in the streets, and
to steal bread,” we can say the law equally allows rich and poor to travel
abroad for health care denied them at home. Even today, large numbers of
Americans – estimates range as high as 1.3 million – seek medical care abroad. Some
even travel to Canada, especially to the Shouldice Hospital for hernia care, as
well as to Mexico, Thailand, and other countries. Estimates of global medical tourism run to several million.
While the wealthy – or even comfortable, since an
international trip plus treatment may cost less than the same in a U.S. hospital
– could seek whatever means of life preservation they desire, those now unable
to afford medical care would be assured of the means to preserve life, at least
to a greater extent than under the present system.
If there is no rational conflict between universal health care and natural law, does informed opinion nonetheless hold that such a conflict exists? Apparently not. The Roman Catholic Church, for example, has consistently described
universal health care as both a right of persons and a duty of governments. As
Richard McBrien noted in the National Catholic Reporter (October 5, 2009), “The
teaching that health care is a right rather than a privilege was articulated by
Pope John XXIII in his encyclical, Pacem
in Terris....The pope began that encyclical with a list of rights, the
first set of which pertained to the right to life and a worthy standard of
living. Included in these rights were the right to ‘food, clothing, shelter,
medical care, rest and finally the necessary social services.’” More recently,
Catholic News Service reported, “Pope Benedict XVI and other church leaders
said it was the moral responsibility of nations to guarantee access to health
care for all of their citizens, regardless of social and economic status or
their ability to pay. Access to adequate medical attention, the pope said in a
written message Nov. 18, was one of the ‘inalienable rights’ of man.” (November
18, 2010) The same view has been consistently held by the American hierarchy,
as in a recent letter to Congress: “As national debate about a major
Congressional health care bill continues, the U.S. bishops have called for
‘genuine’ health care reform that protects human life and provides
comprehensive health care access. Bishop of Rockville Centre, New York William
F. Murphy, writing a July 17 letter to Congress on behalf of the U.S.
Conference of Catholic Bishops (USCCB), commented: ‘Genuine health care reform
that protects the life and dignity of all is a moral imperative and a vital
national obligation.’” (Catholic News Agency, July 22, 2009).
This last item suggests a further place to seek common wisdom regarding natural law
and health care. As implied by Bishop Murphy, natural law requires respect for
human life from conception. We might therefore suppose that state opposition to
abortion and to universal health care would go hand in hand. Yet the eleven
European countries – all with overwhelming Catholic or Orthodox majorities – prohibiting
abortion all provide universal health care, as do the four largest South
American (and Catholic) countries prohibiting abortion.
If the rational case against universal health care lacks substance, and the weight
of opinion among Christian leaders and nations favors such care, can we examine
empirically whether mandated health care in fact tends toward the preservation
of natural rights?
If, as Professor Arkes correctly notes, the ability to preserve life “with means quite legitimate” is a part of natural law, the current American health system is far worse than those of most developed countries at preserving that right. (A
problem arises, of course, when “legitimate” is introduced into a discussion
based on natural law. Certain means of preserving life, such as killing another
to provide yourself with a lifesaving part of that person’s body, may be
illegitimate. Many forms of legitimacy, however, are arbitrary decisions of
governments or societies, and may have some, or no, basis in natural law. A
Hindu, for example, may find killing a cow to provide life-saving human nourishment
to be illegitimate, without in any way deriving this decision from natural
law.)
Not only is American life expectancy lower than in 19 of 23 European, North American, and Pacific countries, and our standing on newborn mortality and indices of child well-being just as poor or poorer, but we “lead” the rest of the civilized world in the number of people who die of curable illnesses before age 75, and trail
in years of healthy life expectancy for those over 60. During this century, each
of these countries, even financially beleaguered Greece, has reduced deaths
from amenable causes (so-called “avoidable mortality”) at rates at least twice
as fast as has the U.S. Journalist T.R. Reid observed in 2010 that “Government
and academic studies reported that more than 20,000 Americans die in the prime
of life each year from medical problems that could be treated, because they
can’t afford to see a doctor.” More
than 100,000 avoidable deaths occur each year in America, from this and other
causes.
Given these facts, can we understand, if not agree with, Professor Arkes’s position? Several
of his points offer a possible answer. Although the phrase “natural law”
appears prominently at the beginning of his article, it is far from central to
his case. The term itself appears only seven times in the text (as opposed to
24, for example, for “government” and 15 for “constitution”). He focuses
primarily on Constitutional law, historical precedents and concern for “the
vast enlargement of the reach and powers of the state.” Despite his title, he
openly states that “the most serious argument
against Obamacare is that it threatens to change the American regime in a grave
way: that it sweeps past the constitutional restraints intended to ensure a
federal government ‘limited’ in its ends, confined to certain ‘enumerated’
powers, and respecting a domain of local responsibilities that it has no need
or rationale for displacing.”
These arguments are serious ones, and can and should stand on their own, rather than
attempting to climb onto the shoulders of the natural law. But beneath a
commitment to originalism and limited government is an unexamined premise with
greater explanatory power.
In invoking the natural law Arkes speaks entirely of
natural rights, and never mentions natural duties. However, in such sources as
the Catholic Encyclopedia, natural law is identified with duties, rather than
rights. (Not to deny that duties and rights are often obverse and reverse of
the same principle.) In Arkes’s
rights-focused (dare we say libertarian?) discussion, an individual right to
preserve life is paramount, but the duty to see that others’ lives are
preserved is nowhere mentioned. This and the off-hand assertion of “a natural
right not to be coerced into buying things we have no wish to buy,” and fears of
government power, all suggest that the individual’s right to make choices is
the foundation of Arkes’s edifice.
It may be more than a linguistic anomaly that this
places the natural law argument on the “pro-choice” side. For in espousing a
radical (in the etymological if not the political sense) commitment to
individual choice, Arkes aligns himself with one of the chief ailments of our
age. As David Bentley Hart observes in Atheist
Delusions, “In even our gravest political and ethical debates—regarding
economic policy, abortion, euthanasia, assisted suicide, censorship, genetic
engineering, and so on—‘choice’ is a principle not only frequently invoked, by
one side or both, but often seeming to exercise an almost mystical supremacy
over all other concerns.” It should not exercise such supremacy over health care for all Americans.