Tuesday, December 25, 2012

Guns by the Numbers

    Let me start by saying I’m no math whiz.  The fact that all three of my college roommates were math majors had no impact whatsoever on my understanding of the field.  In fact, all I recall from conversations with them about their disciplines comes down to this: from the point of view of topology, a human being is just like a doughnut, and there’s a formula that can prove there’s always one spot on earth where the wind isn’t blowing, and one spot on a head with no hair.
    That being said, I think I know enough math and science to disqualify me from membership in the NRA and the Republican Party.  I thought my beef with them was limited to evolution and global warming, but now I see it extends to simple statistics and basic scientific method as well.
    I’m talking about the NRA’s gun logic, and its gun proposal.  Let’s review their recent arguments: guns are not the problem; violent video games and media coverage of these events are.  The solution: guns in every school.  (Let's skip the fact that the great majority of mass killings happen in places other than schools, from malls to movie theaters to religious establishments, and most often in workplaces.)
  What do you do when you’re building a hypothesis about what causes Phenomenon B?  You look at the possible factors, and eliminate them one by one until you’re left with unique characteristics of the environment where B happens, or as close as you can get.  That’s how we finally proved smoking caused cancer, microbes caused disease, seat belts saved lives, etc.
    So let’s try that.  The U.S. has violent video games and media coverage of violence.  Let’s compare some other places that have both, and let’s choose places as like the U.S. as possible.  We’ll use English-speaking countries that share a lot of our heritage: Canada, right next door, England, and, say, Australia.  Do any of them ban violent video games?  No.  Is there any reason to believe these games are not sold there, as they are here?  No.  Do any of them avoid media coverage of violent events?  Apparently not: replacing “U.S.” with “Canada,” “Britain” or “Australia” in a Google search of “Sandy Hook coverage” reveals 114 million hits for the U.S., 92 million for Canada, 32 million for Australia, and 28 million for Britain.  In fact, Canada has almost three hits for every Canadian, and Australia more than 1 per Australian.  (As a sidebar, the video game industry has pointed out that since the 90s sales of video games have quadrupled, while rates of homicide by juveniles have decreased by 71%.)
    Now what about gun ownership?  The U.S. has nearly 90 privately owned guns per 100 people, Canada 30, Australia 15, and Britain 6. Homicide rates in these four countries: 4.8 per 100,000 in the U.S., 1.8 per 100,000 in Canada, 1.4 in Australia, and 1.2 in England.
Most important, firearms account for 67% of all U.S. homicides, 26% of Canadian homicides, and 8% and 6% of Australian and British homicides, respectively.  Putting it at its simplest, your chance of being killed by a gun in Britain is about 1 in 1.6 million; in the U.S. it’s about 1 in 30,000. 
    I wish I were a great chart-maker or statistician, but it’s pretty clear that the number of guns a country has is the key variable in murder rates, at least when the factors the NRA has proposed are weighed against the guns themselves.
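    If you’d like to check the arithmetic yourself, here is a minimal sketch in Python.  The homicide rates, firearm shares, and search-hit counts are the figures quoted above; the 2012 population numbers are rough estimates of my own, so treat the output as ballpark figures rather than official statistics.

```python
# A minimal sketch of the arithmetic above.  Rates, firearm shares, and
# "Sandy Hook coverage" hit counts are the figures quoted in the post;
# the 2012 populations are rough assumptions, so results are ballpark only.

countries = {
    # name: (homicides per 100,000, share by firearm, Google hits, approx. 2012 population)
    "U.S.":      (4.8, 0.67, 114_000_000, 314_000_000),
    "Canada":    (1.8, 0.26,  92_000_000,  35_000_000),
    "Australia": (1.4, 0.08,  32_000_000,  23_000_000),
    "Britain":   (1.2, 0.06,  28_000_000,  63_000_000),
}

for name, (rate, share, hits, pop) in countries.items():
    gun_rate = rate * share / 100_000  # gun homicides per person per year
    print(f"{name}: {hits / pop:.1f} coverage hits per person, "
          f"roughly 1 gun homicide per {round(1 / gun_rate):,} people per year")

# Lands near 1 in 31,000 for the U.S. and 1 in 1.4 million for Britain, the same
# ballpark as the figures quoted above, with Canada and Australia in between.
```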
    One more excursion into numbers.  What would it take to put the NRA's "guard in every school" proposal into effect?  We’d need more than 132,000 full-time guards, assuming exactly one per school.  How does that compare with the protections we have now?  It’s more than all the police officers in the 36 largest police departments in American cities.  Take away New York, which accounts for an astonishing 26% of those 131,000 officers, and the guard force would still outnumber the officers of all the other 106 cities with more than 250 police combined.  It’s equal to 47% of all the police in all the 867 cities listed by the FBI.  It’s more than 3 times the Coast Guard, and nearly ¾ as large as the Marines.
    And what would that cost?  If we paid these people the lowest starting police officer’s salary in the country, it would come to around 4.2 billion dollars a year; if we paid them a teacher’s average salary, 6.2 billion, before the cost of arming and training them.  That’s about 20 times the NRA budget, so I’m afraid they couldn’t help much even if they wanted to.
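    For anyone who wants to check those totals, here’s another back-of-the-envelope sketch.  The two salary figures are my own assumptions, picked to be consistent with the totals above rather than drawn from any official pay scale.

```python
# A back-of-the-envelope check of the "guard in every school" cost figures above.
# The per-guard salaries below are assumptions consistent with the post's totals,
# not official salary data.

N_GUARDS = 132_000  # one full-time guard per school, as stated above

for label, salary in [
    ("lowest starting police salary (assumed)", 32_000),  # USD per year
    ("average teacher salary (assumed)", 47_000),         # USD per year
]:
    total = N_GUARDS * salary
    print(f"{label}: about ${total / 1e9:.1f} billion per year")

# Prints roughly $4.2 billion and $6.2 billion per year, before the cost of
# arming and training anyone.
```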
    And of course they don’t.  What they want is to preserve an antiquated right that has now been extended far beyond what any signer of the Constitution could have imagined, when no gun existed that could fire more than one shot before being reloaded by hand, a process that took between 20 seconds and a minute.  Maybe that’s the answer: go back to the hallowed “strict construction” and allow anyone who wants one to own a single-shot weapon that takes at least 20 seconds to reload.  Make everyone who wants more than that build a case (e.g., showing they’re engaged in a dangerous profession, or are a proven hunter, or have been trained by one to use the standard weapons for hunting game), or else spike their guns and hang them on the wall as antiques from a more savage and violent era.

Sunday, December 2, 2012

How Not to Write

As anyone can see, I’ve been away for a while.  The main reason is course work; I’m taking two classes for my certification in mediation and organizational conflict.  Unlike last semester, I’ve fallen into the land of the pure social scientist.  My courses require formal submissions in APA style, and I’ve done six papers in that genre.  The experience has been so traumatic that I fear I’ve forgotten how to write like a normal human being, so aside from the time it’s taken to produce 34,000 words in this dialect, I’ve been reluctant to risk confusing real writing with what’s done in social science courses.
    But the last paper is in, and it’s time to start recovering.  I thought the best way to do that would be to contrast what’s been required for the past four months with the way people normally write.
    Now I know every field has its jargon.  But some fields are worse than others.  English and history, I would say, except when contaminated by dogmas like semiotics, can actually produce something approaching real communication.  There are peculiarities, of course.  Mike Rose, in his wonderful Lives on the Boundary, said that when you write an English paper about a play or a novel, you’re not supposed to do what any normal person does when they’re talking about plays or novels: tell what happens and tell if it was any good.  Instead you’re supposed to delve deeper into matters of style, theme, archetype, ambiguity, etc. that prove you can read more carefully than the average best-seller consumer.  But you’re still connecting your reader to the book, often with extensive quotations.  You may also take issue in English and history with other prior writers, whose case you describe in more or less detail before demolishing it.
    The key here is that you’re writing about things that have been written, whether imaginatively or historically.  Often you’re actually reading excellent writing, which may improve your own.
    But in the world of social psychology and the like, none of the above applies.  You don’t follow most of the rules of ordinary discourse, and you almost seem to avoid illuminating your reader.  Take the mention of Mike Rose, above.  This might be re-written as: “Rose (Year) has analyzed the narrative-evaluative paradigm and its inapplicability to the academic setting.”  You would then have to supply the full reference to Rose in a list of references, like “Rose, M. (Year). Lives on the Boundary. New York: Penguin.”  That, of course, is an oversimplification.  To do the job right, you might have to put “(Rose, 1995/2005)” to distinguish when the book came out from the edition you consulted.  But you aren’t giving a page number or even a chapter number, so if anyone wanted to find out if Rose really said what you say he said, they would have to read the whole book.  Even more complex, if you bought the book while traveling, you might feel you should cite the country (out of 8) where Penguin has offices; or maybe you should say “London,” because that’s where their registered office is.  This often means interrupting the flow of your thoughts to track down all the data, or else facing hours and hours of citation management just when you’re done and would like a walk, a beer, or some other distraction, like reading a real book.
    The wisest among the professors I’ve had in this program explained to me that my problem is I’m not the intended reader of the article.  The authors are writing for the select group of people working in the same field or sub-field, who know Rose inside out, and they’re just trying to tell the readers that they’re filling in a hole left by Rose and whoever else they cite, so that the others can see whether it’s a hole they need to know about while they’re filling in whatever hole they’ve staked out.
    Question 1, then, is why are we reading people who are not writing for us, and whom we can’t understand until we’ve read everyone else?  Question 2 is, really?  One article I read had 77 references for seven pages.  What are the odds that reader X or Y has read all 77 things that writer A has read, and remembers them in such detail that a single word and a last name bring it all back?  (One of the interesting tricks played is that these writers also cite everything they’ve ever written that is remotely germane to the current piece.  Are they just showing off, or are they listing 8 other articles so that a research engine will tick off 8 more citations of their work for “mine is bigger than yours” judgments by the powers that be?  They also play the Alphonse-Gaston game: if Larry, Curly and Moe do three pieces of work together, they evidently negotiate whose name goes first, so each of them, or at least each with clout, can get “lead author” props.)
    Believe me, I do not exaggerate the time and energy spent citing.  I have counted paragraphs where 66 words are actual text and 61 are citations in parentheses.  My own long papers have consisted of 80% writing and 20% references, not counting the parentheses in the text that lead you to the 20% at the back of the paper.  This proportion is required by the demand, articulated by one of my teachers, that you need to cite everything that is not your own opinion or observation.  Mention D-Day and you’d better have evidence that it happened on June 6, 1944.  Quote the phrase “the rest is silence” and you’d better credit Shakespeare.  I’m not kidding.  For a sentence that said Lord of the Flies and The Fountainhead reflected views of their era about human nature, I was told to give full citations: last name of author, first initial, date of publication, and city, with details if the city isn’t a famous one.  Do I have a copy of either book nearby?  What edition should I cite?  Will any of my readers go to New York to buy Lord of the Flies and read it to see if I’m right?
    Now it’s easy to play this game.  Think of an idea you want to include, state it in a word or two (Oedipus complex, cognitive dissonance, conditioned response), go look on your shelf or in Google, and you’ve got another citation.  My 14-page paper had 102 citations; my 347-page doctoral thesis 112. 
    What I find worse than the tedium and the impenetrability, the cliquishness and the petty point-scoring, is the impersonality.  No one gets a first name, no one’s argument is given any scope.  (A teacher even said it’s bad writing to quote other people: just paraphrase them.)  Everyone else’s work is simply one more pebble piled on the mound that will get you to the top of tenure hill.
    To paraphrase Edgar Lee Masters: Tick, tick, tick.  Such little citations. While Homer and Whitman roared in the pines.  (“Petit the Poet,” Spoon River Anthology, written for all time.  Read it.)

Wednesday, October 24, 2012

Straws in the Wind


            Several minor incidents in the last couple of weeks have gotten me thinking about the ways of the world.  Not that I want to go all Andy Rooney on you, but sometimes you do just have to ask if anybody’s noticed.

Example 1:

A mile or so from my home there’s a little square that sits on the edge of the Boston town line.  Not much there: a Dunkin’ Donuts (this is Massachusetts after all), a liquor store, pizzeria, gas station, neighborhood bar, and tiny, imperiled post office with friendly staff and no lines.  One of its roads passes under a commuter rail line, and sitting above the embankment for the trains is a billboard, which mixes ads and public service announcements every few weeks or so.
            This week, however, it sports the most distressing PSA I’ve ever seen.  Next to a photo of a young black boy are these words in giant letters:
                                                     MURDER   
                           IT'S NOT OKAY
My first impulse was to say “I knew that.”  Then I began to wonder: who doesn’t?  And in our age of hyperbole, who decided to take this understated approach?
            There are, I believe, a number of “It’s Not Okay” campaigns, or similar, around.  I’ve seen the “It’s Not Acceptable” campaign about name-calling, with Jane Lynch and Lauren Potter, and I find it very impressive.  But “It’s Not Acceptable” seems like a tougher stance than “It’s Not Okay.”  The latter sounds rather playground or parent-child to me: It’s not okay to take the last cookie, leave someone out of the game when choosing sides, or bite your older brother.  But is murder now just “not okay”?  Isn’t not being okay part of the definition of murder, as opposed to, say, justifiable homicide, self-defense, or a few other types of death-dealing that have at one or another time, in one or another place, been socially sanctioned?
            We do live in the age of water-boarding, drone strikes, and stand-your-ground laws, but none of these seem exactly relevant to this ad, which almost undermines itself.  After all, I do a fair number of “not okay” things from time to time: slide through a yellow light at the last minute, feed the parking meter, say I’ve only had two glasses of wine when I’ve really had three.  Those are not okay.  But murder?  That’s forbidden by all the laws of God and man, in every religion I know of, and it carries the strictest penalties of almost any crime on the books.  What young person with a grudge (because that’s certainly what the image suggests to me) will be dissuaded from a drive-by shooting by a sign at the intersection telling him “It’s Not Okay”?  As far as I'm concerned, this billboard is not okay.
Example 2 (a and b) :
            I’m taking classes, as I’ve mentioned before, in conflict resolution.  Most of my classmates are young enough to be my grandchildren, and in general I am impressed with their commitment to make the world a better place, and with the work many of them are already doing toward that end.  I certainly never did as much as they have when I was their age: working in the State House, volunteering as mediators, or even working their way through school while carrying full-time jobs.
            But every once in a while, one of them says something that makes me realize how differently we see the world.
            Two of these happened recently in my Theory of Conflict class.  In one case, we were discussing a famous 1949-54 study called “The Robbers Cave,” in which a group of social psychologists took two groups of kids to a summer camp.  Neither group knew the other existed until they were brought together and urged to compete for prizes.  They became antagonistic and aggressive toward each other, but when the adults arranged “real” challenges that could only be solved by working together, the rivalries diminished and cooperation increased.
            An interesting study, to be sure.  But I raised my hand to suggest that extrapolating from 11- and 12-year-old behavior to fundamentals of group behavior was rather dubious, especially given the intervening fifty years of research on brain development and its impact on judgement.  Another student responded that maybe these kids were closer to “real human nature” than adults would be.
            “Real human nature”? I wondered.  Did he mean literally that the underdeveloped young of a species are more true to the type than the adults?  Or that aggression and conflict are what he thinks of as human nature, and everything else is a veneer covering the brutish and nasty reality of our biology?  The long arms of Social Darwinism and Freud’s rampaging id still stretch into the twenty-first century.  Is the impulse to resolve conflict peacefully that motivates students in our program “against nature”?  If so, are we doomed to fight a losing battle?
            There’s a lot of recent research that challenges the “red in tooth and claw” image of human, and general mammalian, nature.  The discovery of bonobo culture, evidence that being social and helpful may be a better survival strategy than dominance among the great apes (as one wag put it, Alan Alda does better than Arnold Schwarzenegger), and studies showing that chimps, and even rats, will refuse to take a reward that costs a peer suffering, are among many findings that suggest cooperation, altruism, and group solidarity may be as well-founded in our makeup as survival of the individually fittest.
            That’s example two(a).  Two(b) comes from the same class.  In a small group discussion, five of us were asked to design ways of calming a multicultural community dealing with economic struggles and racial tension.  When someone suggested using the churches of the various ethnic groups to connect young people around projects, one student said “That never works.”  She went on to say that she hated religion, and asserted that “You should do what’s right because it’s right, not because someone in an old book said it’s right.”  (She also mentioned that she was upset that the school did not have an atheist alliance.  But hold that.)
            An interesting view.  But how do you know what’s right?  Because you have reasoned to a “right” that eluded all the people before you – the ones who wrote or set down the books?  Because you met someone who persuaded you of their vision of what’s right?  Or because you just knew, from birth, or from some other moment of insight, what was right?  Given a choice between those who listen to a long tradition of wisdom and analysis of right and wrong, or someone who just “got it” between birth and today, whether from solo ratiocination, sitting at the feet of a master, or the promptings of their own heart, I think I’d feel safest with the first.  To go back to our beginning, would I rather be surrounded by people who had heard “Thou shalt not kill” from the time they could understand the words, people who had worked out the sanctity of human life all by themselves, or people who need a poster in Hyde Park to tell them “Murder: It’s Not Okay”?  Tell me what you think.

Sunday, September 9, 2012

The "Business" of Government, Part One


            One of the biggest themes of the Republican campaign in this election is Mitt Romney’s alleged business acumen.  Of course, some debate his track record, and others the relevance of business skills to the presidency.  Be that as it may, I’ve been thinking about Republicans, Democrats, and business wisdom, and in the next blogs I’ll apply some of the tenets of the past decade’s biggest business book, Jim Collins’s Good to Great, to the two parties.  GtoG seems particularly apt, since going from good (or bad) to great is what every presidential candidate promises he’ll do for America.
            For the unfamiliar, Collins’s Good to Great followed a number of businesses that had made a leap from average to dominant in their fields, paired with similar companies that had not leapt forward (e.g., Walgreens vs. Eckerd, Circuit City vs. Silo).  His research team found certain characteristics that they believe consistently distinguished such companies.
            The first of these I’ll consider is “First Who…Then What.”  The idea is that great companies put together the right team of people and only then decide new directions for the company.  The popular phrase that captures this theme is “getting the right people on the bus.”  So I decided to choose one particular seat on the bus – the relief driver, so to speak – the Vice Presidents and VP candidates of the two major parties.
            In my lifetime (which I’ll stretch to include prenatal life, to be fair to Republicans), the two parties have put forth 25 candidates for the position (besides those three who have stepped up after deaths or resignations): 11 Republicans and 14 Democrats.   Here they are:

Republican VPs                                                Democratic VPs
Richard Nixon                                                Harry Truman
Spiro Agnew                                                   Alben Barkley                                   
George H.W. Bush                                          Lyndon Johnson
Dan Quayle                                                     Hubert Humphrey
Dick Cheney                                                   Walter Mondale
                                                                        Al Gore
                                                                        Joe Biden

Republican Candidates                                    Democratic Candidates
Earl Warren                                                       John Sparkman
Henry Cabot Lodge, Jr.                                     Estes Kefauver
William E. Miller                                              Ed Muskie
Bob Dole                                                           Sargent Shriver
Jack Kemp                                                        Geraldine Ferraro
Sarah Palin                                                        Joe Lieberman
                                                                          John Edwards

Now let’s ask if these were the right people to put in the relief driver’s seat.  The first cut:
Each party has had two VPs who later ran for and won the presidency (I won’t insult you by naming them.)  The Republicans have had two VPs who lost (Papa Bush is on both lists), the Democrats three. 
            But digging deeper, how about the general quality of the choices?  We could look at it this way:  Were any of the losing or non-running VPs plausible presidential candidates?
Obviously the five who ran (Bush, Dole, Humphrey, Mondale, Gore) were.  Judging from history, Kefauver, who would have been the nominee in 1952 if primaries had functioned as they do now, Ed Muskie, and Joe Biden could easily be added to the list.  Equally obviously, Spiro Agnew and John Edwards scandaled or grafted themselves out of the running.  On the Republican side, I’d give a loud no to Quayle, Cheney, and Palin, as well as to William E. Miller, who, although only 50 when he lost in 1964, left public life completely thereafter.  I would give a yes to the early Republican losers, Warren and Lodge, though neither ever expressed interest in the presidency.  That’s three yeses and five no’s for the Republicans; five yeses and one no for the Democrats.  (Let’s pair Jack Kemp and Geraldine Ferraro as unlikelies but not no’s; we’ll get to Sargent Shriver and a few early ones later.)
Oddly, each party has had one VP candidate it would later consider a traitor: Warren and Lieberman.  Taking this a step further, while no Democrats other than Lieberman would be rejected by their party if they were running today, Warren (usually labeled “progressive”) and Lodge (“moderate internationalist”) would be as unlikely to finish in the running today as Jon Huntsman was.
In the broadest sense, which party has nominated more people whom history might regard as “great Americans”?  In Tier One I’d put Harry Truman and Lyndon Johnson, whose records in civil rights alone earn them that honor.  Humphrey, Lodge, and Warren come close, for their many impacts on history and the extent of their service.  George H.W. Bush is the only other Republican contender, and perhaps deserves a Tier Three slot.  (I’m steering clear of Bush and Iran-Contra, as I have with Johnson and Vietnam, Truman and the atomic bomb.)  Gore, Mondale and Muskie all deserve mention, perhaps a notch below Bush, though Gore’s the only Nobel Prize winner among these VPs.  Sargent Shriver may in fact rank higher than several of these: his role in starting the Peace Corps, Job Corps, and Head Start matches Warren’s for long-term impact.  Then there’s Kefauver, who not only could have been president, but also was one of the bravest Democrats of his generation, one of three southern Democrats (with Al Gore’s father and Lyndon Johnson) to refuse to sign the Southern Manifesto of 1956, which attacked Brown vs. Board of Education.
Finally, there’s the category of disgraces and laughingstocks.  John Edwards certainly belongs here, but his personal sins pale in comparison to the crimes of Nixon and Agnew, while the sheer triviality of Palin, Miller, and Quayle is unmatched in Democratic circles.  Considering life after the VP run, far more Democrats than Republicans made contributions after their moment or years in the VP spotlight: Gore, Ferraro (UN Commission on Human Rights), Mondale (Ambassador to Japan, Minnesota A.G.), John Sparkman, Kefauver, Alben Barkley, Lieberman (all returned to the Senate), Ed Muskie (Secretary of State).  Among the Republicans, only Dole and Warren played significant roles after their Vice Presidential runs.
Summing up: which party shows the better business sense in getting the right people on the vice presidential bus?  In my book, the Democrats come out way ahead; since 1960, the only Republican choices of stature have been Bob Dole and Papa Bush, while Nixon, Agnew, Quayle and Palin (and I’d add Cheney) have been disasters or embarrassments.  Post-World War II, all the vice presidents or candidates whose major achievements will go down in history unmarred by their crimes are either Democrats or old Republicans who would be thoroughly repudiated today.

Next: “Confront the Brutal Facts”



Saturday, September 1, 2012

Brain Damage


As readers of this blog will have noted, I write often about the hubris of scientists, especially evolutionary biologists and neuroscientists, who are trying to reduce human behavior to the most primitive level of unconscious, uncontrollable, and irrational reflexes, born out of random mutations and their utility in species survival.
            I used to think that some institutions had my back on this, notably the Jesuits, who educated me for eight crucial years. Reading Plato as early as high school, and taking courses in Logic and Epistemology, Metaphysics, Ethics, and the like every semester of college, I was proud of the inquiry tradition of these schools.  (For those who might think Catholic education is necessarily blindered, please read philosopher Martha Nussbaum’s chapters on Notre Dame and BYU in Cultivating Humanity; and remember the Jesuits are well to the left of the Holy Cross fathers.)
            So I was delighted, at first, to see an article about a young scientist at my alma mater who had received a national grant for her work.  The piece, entitled “Moral Compass,” described the work of a woman studying “what she calls moral intuition.” My school seemed to be continuing the tradition of asking the Big Questions, as did Jesus, Aquinas, Kant, and all those moral giants we were introduced to in our philosophy and theology classes.
But my hopes were dashed when I found that the scholar was approaching the problem with the tools of neurology, not philosophy, in her ominously named “morality lab.”  Specifically, she was using Transcranial Magnetic Stimulation to jolt the brain’s right temporoparietal junction (RTPJ), then functional MRI to watch brain activity as subjects heard about a person who tried but failed to poison a friend.  The conclusion: people who have had that part of their brain temporarily zapped are more lenient toward the failed attempt than unzapped subjects.
All well and good, if a little truncated perhaps for the reading audience.  We all pretty much know that damage to various parts of the brain can cause behavioral changes (from aggression to depression to lack of judgment), and that heavy use of particular parts of the brain can increase their connections and even their overall size (London cab-drivers with large geographic memories, violinists with more developed hand areas).  But the article went on to quote the young scientist thus: “For a while now, I’ve been really interested in moral intuitions and where they come from—and the extent to which people share these intuitions.  When someone has a different set of intuitions, how do you know who’s right?”
That last sentence floored me for two reasons: first, the assumption that where moral intuitions come from can be answered in terms of synaptic locations, and second, even more startling, the assumption that who’s right could be determined by examining the brain activity of the person having the intuition.
Moral intuitions apparently mean immediate judgments, as studied in such classes as Harvard professor Michael Sandel’s “Justice,” widely available on the Internet.  In these studies, people are asked to respond quickly to a case, then to examine the validity of their intuition, and to elucidate possible reasons for and against it.  (The young scientist in question apparently attended exactly this class, which set her off on her career path.)
Are these intuitions instantaneous eruptions from a segment of our brain, or are they more the result of life’s experience, both direct and indirect?  Since the publication of Malcolm Gladwell’s Blink, there has been a lot of discussion of this question, and most writers and researchers have concluded that our “intuition,” as in his cases of art critics who doubted a statue’s authenticity, and tennis players who could predict faults on serves as soon as the stroke was beginning, is most valid when it is a rapid judgment based on long familiarity with a given situation.  Aren’t “moral intuitions” more likely to be right when they’re grounded in years of thought, discussion, reading, and life experience?  (The child’s outburst that “it’s not fair” that rain spoiled the trip to the beach surely isn’t of the same validity as the later declaration that “it’s not fair” that he should be punished for cheating on a paper when he didn’t do so.)
So if I were asked where a moral judgment came from, I would suggest many possible answers: universal perceptions of what is harmful to oneself or to others, cultural upbringing, social conditions, ethical reflection, and on and on.  “Above and behind the right ear” would never even occur to me.  Would it to you?
It’s interesting that Socrates took up the same issue over two millennia ago.  While in prison, he tells us:

“I heard someone who had a book of Anaxagoras, out of which he read that mind was the disposer and cause of all, and I was quite delighted at the notion of this… What hopes I had formed, and how grievously was I disappointed! As I proceeded, I found my philosopher altogether forsaking mind or any other principle of order…I might compare him to a person who…when he endeavored to explain the causes of my actions, went on to show that I sit here because my body is made up of bones and muscles; and the bones, as he would say, are hard and have ligaments which divide them, and the muscles are elastic, and they cover the bones, which have also a covering or environment of flesh and skin which contains them; and as the bones are lifted at their joints by the contraction or relaxation of the muscles, I am able to bend my limbs, and this is why I am sitting here in a curved posture… forgetting to mention the true cause, which is that the Athenians have thought fit to condemn me, and accordingly I have thought it better and more right to remain here and undergo my sentence…
There is surely a strange confusion of causes and conditions in all this. It may be said, indeed, that without bones and muscles and the other parts of the body I cannot execute my purposes. But to say that I do as I do because of them, and that this is the way in which mind acts, and not from the choice of the best, is a very careless and idle mode of speaking. I wonder that they cannot distinguish the cause from the condition.”

That same error of confusing causes and conditions is apparently still with us.

Further, what exactly does the study, or the snippet from it the college magazine published, tell us?  That people are more lenient toward the person who fails to commit a misdeed when their RTPJ is disrupted.  We don’t know whether we’re talking about averages or about the same people under two circumstances.  We don’t know whether they judged these would-be killers extremely harshly and then somewhat less harshly, or fairly harshly and then very leniently.  And we don’t know whether a disrupted RTPJ left them with more scope to consider the situation in a broader light, or simply made them indifferent to the matter and so less punitive.
Most of all, how could this study in any way tell us which moral intuition was right?  Is any moral intuition by a person with an apparently whole RTPJ “right”?  If two people whose RTPJs appear similar come to different judgments, who’s right?  Whatever the truth of the old question whether “ought” can be derived from “is,” there’s not much likelihood of ever being able to prove that a point of view is right by examining the neuro-bio-electro-chemical events that accompany holding or stating that point of view.
If we could do that, imagine the effects.  No need for debates to choose a candidate – just put them in the scanner.  Criminals could prove their innocence by showing a clean RTPJ or whatever other locus was relevant, or could plead the “defective RTPJ” defense.  Unless we could agree on the precise definition of health or superiority in RTPJs, we could always dismiss others’ views:  this liberal has an overactive RTPJ, that conservative an underactive one, so he is too soft-hearted, she’s too severe.  (Maybe we could define a healthy RTPJ as the site of Right, True, and Proper Judgments.)
One way of looking at this project is that it’s running backward.  It makes perfect sense to examine the brains of people whose views or actions are highly atypical, to see if there’s a biological contribution: does the dyslexic, autistic or sociopathic person have a specific brain abnormality?  Does the gifted person have some different abnormality? (The second has so far been much harder to come upon than the first.) But when two apparently normal people hold differing moral intuitions, say on war, capital punishment, abortion, or hate speech, does it make any sense to think that we can examine their brains to find out not only why they differ (I expect there are dozens of places in the brain, from memories to emotions to others we can barely dream of, that go into a complex decision), but to say who is right?
We need, and I believe will always need, separate criteria for deciding what is right and for discovering what neural correlates happen when we make a moral choice. Trying to do otherwise can lead in one of two directions: fruitless quests to find moral and immoral synapses, or frightening efforts to control behavior by altering the brains of the morally wrong.  Nazi eugenics were bad enough, but judging and condemning people for the state of their brains as others were once judged and exterminated because of their noses or foreheads would be even more disastrous.


Monday, August 13, 2012

My Own Shangri-Las


“No utopia has ever been described in which any sane man would on any conditions consent to live, if he could possibly escape.”                                                                                    -- Alexander Gray

            While Gray may be right about the utopias of social engineers, from Plato to Skinner, I think many of us would be happy to live in the worlds created by imaginative writers, past and present.  As a boy, I lost myself in King Arthur’s and Robin Hood’s England, Tarzan’s Africa, Arthur Ransome’s Swallowdale, and many others.  C.S. Lewis’s Narnia and J.R.R. Tolkien’s Middle Earth came later and lasted much longer; I still dip into their Otherworlds occasionally.
            But these days I have less interest in dwelling among knights, apes, fauns or hobbits, and more in imagining plausible communities of ordinary, or not so ordinary, humans who promise a richer moral and social life than exists in contemporary America.  In my reading over the past few years I’ve found three such imaginary places, all of which I share with others as often as possible.  Two are from the frequently underestimated world of mystery writing, and one from a master of the short story, the novel, the essay, and the lyric poem.
            To start with the detectives, my first recommendation is P.A. Gaus’s Amish mysteries.  Gaus, a college professor in Ohio’s Amish country (despite Pennsylvania’s greater prominence, Ohio’s Amish population is equally large), has written seven – and counting – mysteries in which the pacific world of the Amish is rent by violence, and three “English” characters (as the Amish call all outsiders) come to their assistance.  The primary one of these is, oddly enough, a local college professor – of military history, ironically.  (Gaus himself was a chemist until his recent retirement; obviously no connection there.)
Along with his alter ego, the irascible local sheriff, and occasional help from a non-Amish pastor and carpenter, Professor Michael Branden often takes on himself the burden of protecting those who will not fight either to avenge or to defend themselves.  In one novel, an Amish bishop explains why his people believe so completely in non-violence:  “We are taught that we are to be harmless as doves.  It means a lot of things, but one thing is that the harm we do is always harmful to us, if for no other reason than the guilt that we shoulder for it.”  Like HBO’s The Newsroom, some will find these books “preachy,” but in my view they’re none the worse for that, and the leaven of crime and mystery can attract even readers who may slide over the homilies – to their own detriment.  (The novels, beginning with The Blood of the Prodigal, are published by Plume.)
A more secular Shangri-la can be found in the woods of southern Quebec, in Louise Penny’s Three Pines, the setting of another ongoing series featuring Inspector Armand Gamache of the Sûreté du Québec (the Chief Inspector for the whole Francophone province).  Penny has won an extraordinary number of mystery-writer awards, so a lot of people must agree with me.
Three Pines is a hidden gem of a village, with a gourmet bistro, a bookstore, several practicing artists, a national treasure of a poet, a great house with a grim history, and, unfortunately, a penchant for attracting murderers as well as refugees from the urban world.
Penny’s novels reach much deeper into the art of the mystery novel than most: Gamache is a French Canadian version of such thoughtful British sleuths as Inspector Dalgliesh and Lord Peter Wimsey (note that they were all created by women, and none is a muscular, quick-on-the-draw Lothario).  His skill is not the ratiocinative brilliance of Holmes, but an ability to delve deep into human motivation; he always insists that the strangest crime makes complete sense if seen from the viewpoint of the killer, and that the roots of crime are often buried far in the past.
Gamache is also a deeply kind man, giving second chances to junior officers who have failed and even betrayed him.  He serves as a center around whom circles a collection of complicated characters, especially in Three Pines, which is home, among others, to a gay couple who own the bistro and the B&B, a former psychologist now running the bookstore, and Ruth Zardo, a brilliant poet with a heart of – well, something slightly soft – buried beneath geological layers of rage.  Penny also has two unusual characteristics for a series writer – she can allow her characters to change radically (or reveal totally unexpected layers), and, like David Simon in The Wire and Treme, she has not hesitated to shock us with the loss of favorites from her cast.  Every new Penny, therefore, is not only a whodunit, but a who-will-surprise-us-and-how.  (The first in the series is Still Life; all are published by Minotaur.)
Wendell Berry’s Port William, Kentucky, like Ohio’s Amish country, is an American enclave in which different mores and values survive the political and economic distortion that so plagues most of our world.  The key term in Berry’s vision is “membership.”  In his good society, people understand that they are connected by more than blood, marriage, proximity, or even friendship.  Rather they are connected by a common humanity that appears sometimes as secular, sometimes as spiritual.  As one of the town’s eldest members says, "The way we are, we are members of each other. All of us. Everything. The difference ain't in who is a member and who is not, but in who knows it and who don't."
Being members of each other means many things to Berry’s characters.  It means protecting a stranger who is hurt in a Saturday night brawl; it means helping a young couple purchase a farm where they had worked for years, but to which they have no legal claim; it means taking care of a drunken uncle who falls again and again, without ever hoping that he will change, and it means loving a woman from afar all her life, just because you believe she deserves to have someone love her.
It does not mean living in eternal peace and harmony.  In and around Port William there are not only town drunks; there are abusive husbands, greedy landowners, thieves, and even an occasional murder.  Because of course there are those who don’t know or won’t admit they are members of each other.   For the ones who do, however, there is a deep awareness that we are more than producers and consumers.  As an older character says to a younger, “we’re dealing in goods and services that we didn’t make, that can’t exist at all except as gifts.  Everything about a place that’s different from its price is a gift.  Everything about a man or woman that’s different from their price is a gift.  The life of a neighborhood is a gift…you’re friends and neighbors, you work together, and so there’s lots of giving and taking without a price – some that you don’t remember, some that you never knew about.  You don’t send a bill. You don’t, if you can help it, keep an account.  Once the account is kept and the bill presented, the friendship ends, the neighborhood is finished.”
That’s the kind of Shangri-la that just might exist, or be brought into existence for a time, like an evanescent subatomic particle.  I’d like to be there when it happened.  I’d like to help make it happen.
 
(Berry’s Port William appears and reappears in novels and short stories written over the past half century, which together tell of more than a century of the town’s life.  His short stories are collected in That Distant Land; among his several novels my favorite is Jayber Crow.  Most are published by Counterpoint.)

Monday, July 23, 2012

The National Collegiate Almighty Association


“While the future's there for anyone to change, still you know it seems/It would be easier sometimes to change the past.”                                       --  Jackson Browne


What is the most powerful force on our planet, or maybe the cosmos?  Evidently it’s the NCAA.  Apparently they have two powers no other earthly body has: they can change the past and they can punish the dead.  Yes, the old Soviet Union and many other dictatorships have tried, airbrushing out the now-out-of-favor from photographs, altering historical records, and such.  Countries try to change the past: Turkey insists it never practiced genocide on the Armenians.  But there’s always someone, often the world’s majority, to call them on it.
In ancient times – what we used to call the Dark Ages – some nations and religions would dig up corpses and drive stakes through their hearts, or burn or hang them, for purported misdeeds.  You’d think we’d be beyond such primitive thinking.
But the NCAA has borrowed from the playbooks of the Inquisition and the Stalinist era in its sanctions against Penn State.
True, sports authorities have sometimes changed the past, but with great inconsistency.  Sign a wrong scorecard in golf, and you either get the poorer score you signed, or forfeit if you signed a better score than you made.  So your birdie is now a par, your par a bogey.  But that’s in the rulebook.  For decades, almost every sport refused to change a wrong call in the immediate past, except for umpires in tennis overruling line calls.  Now we have instant replay in one baseball situation, and many football, basketball, and hockey situations.  But baseball refuses to correct an obviously wrong call, even when it costs a player a perfect game and does not alter the game’s outcome in any way.  And soccer absolutely will not sanction corrections, even though its referees have an impossible real-time job.
Sports also change the past when a violation has been discovered that falsifies the game’s outcome – ineligible players, drug enhancements, etc.  Of course they do this with total inconsistency too:  Barry Bonds, Alex Rodriguez, and Mark McGwire still hold over a dozen home run records despite steroid use, while cyclists and Olympic athletes are stripped of their titles for the same infraction. 
In the so-called real world, very few democracies change the past because of later discoveries.  Imagine if they did: Colin Powell stripped of his rank because he later told the UN that Iraq had weapons of mass destruction on railroad cars.  Ronald Reagan and George Bush’s names removed from all airports and other public buildings if future documentation proves them complicit in Iran-Contra.  Let’s not even mention J. Edgar Hoover.
One fundamental principle in our civil and criminal society is that the dead cannot be held accountable for their misdeeds. When Ken Lay died in prison, his conviction was vacated, not because of any new evidence, but because his appeal had not run its course.  You can’t try a dead person, so you can’t convict him.  Innocent until proven guilty, especially in the afterlife.
Now we come to Penn State.  Because of allegations that involve one deceased coach and one former coach, a team loses thirteen years of games that it once won, perhaps the largest reality alteration in the history of sports.  Who can doubt that the punishment is aimed primarily at the late Joe Paterno, condemned in the Freeh report post-mortem by the testimony of the indicted and disgraced living?  During the years of the forfeits, Penn State players won two Butkus awards (LaVar Arrington and Paul Posluszny).  If they made no tackles in those years, do they forfeit the awards as well?  Does every Penn State player who was drafted in those years get undrafted because he did nothing during his college days?
            This is not to excuse anything done by Joe Paterno or anyone else at Penn State.  As Marc Antony said of Caesar, “If it were so, it was a grievous fault, and grievously hath [he] answered for it.”  But to pursue a man beyond the grave serves no rational purpose: does the NCAA really think we need to deter future sports programs from similar behaviors?  Does it claim that if Penn State had turned in Sandusky in 2001, when the crime occurred in the athletic facility, its football program would have been devastated retroactively to 1999?  Do we benefit as a society by punishing the dead and by imagining that a declaration can alter history, even the small history that is college sports?  I think not.  The best thing about the NCAA decision is its fine of Penn State and designation of the money for abuse prevention.  Most of us hope we can change the future: only time travel movies try to do it by changing the past.