Tracking down the seat of moral reasoning
Joshua Greene pushes into a new field
Moral philosophers have long grappled with ethical questions, creating
hypotheticals that test basic beliefs about right and wrong. For
example:
A trolley is running down a track out of control. If it keeps going, it
will run over five unsuspecting people hanging out on the track.
You can prevent this disaster by throwing a switch, redirecting the
trolley onto a siding where it will kill one person. Do you hit the
switch?
“Most people say that that’s OK, not great, but OK, and I have the same
intuition that most people have,” says Joshua D. Greene, an Assistant
Professor of Psychology in Harvard’s Faculty of Arts and
Sciences. Greene is interested not only in what answers people give to
these sorts of questions, but also in what kinds of intuitions drive
such moral decision making and which regions of the brain they involve.
And he is attempting to answer those questions by literally looking
inside the brains of volunteers as they grapple with classic moral
dilemmas.
One might think that the answers to Greene’s questions wouldn’t be
affected by slight variations in the philosophical constructs, but one
would be wrong. For instance, suppose the trolley problem is changed
somewhat:
You’re watching the first drama unfold from a footbridge and the only
way to save the five people from certain death is to push a large
person standing next to you off the bridge and onto the tracks.
“He’ll get crushed by the trolley and die, but the five people will
live, so it’s another five-over-one trade-off,” explains Greene, “but
there it seems like it’s wrong.” The question is: Why do we think it’s
all right to sacrifice one life for five in one case, but not in the
other, and is there a good justification for the difference in our
intuitions?
Expanding a new area of exploration
According to Dartmouth College philosophy professor Walter
Sinnott-Armstrong, philosophers have traditionally been divided on the
matter of how we come to have these intuitions. Some, like Immanuel
Kant and his followers, believed moral judgments were rational,
reasoned. Others, like David Hume and his followers, believed they were
emotional judgments. But Harvard’s Greene picks up where classical
philosophers left off, testing the hypothesis that both camps are right
by asking people to make these judgments while recording their brain
processes with a functional magnetic resonance imaging (fMRI) scanner.
In so doing, says Dartmouth’s Sinnott-Armstrong, Greene has helped establish a new field: neurophilosophy.
In Greene’s first neuroimaging study, volunteers responded to questions
that appeared as text on a screen, which they viewed from inside the
scanner using a mirror placed in front of their eyes. The machine
recorded an image of the brain every one to three seconds. In this
study, Greene saw increased activity in the medial prefrontal cortex,
which is involved in emotion and social behavior, when volunteers
considered cases like the footbridge case. That study, Greene says,
marked the first time that anyone had correlated a pattern of neural
activity with moral judgment behavior.
“I have the hypothesis that somehow the idea of pushing the guy off the
footbridge in the second case is more emotionally salient,” Greene
says, whereas throwing a switch is less so. In the switch case, he
suggests, it may be that our utilitarian brain is doing the thinking.
So, even though the problem is numerically the same, one life for five,
we arrive at different moral judgments because different brain systems
are involved in thinking through each version.
About eight years ago, Greene teamed up with Jonathan Cohen, director of
the Center for the Study of Brain, Mind and Behavior at Princeton, and
became the first neurophilosopher. Dartmouth’s Sinnott-Armstrong says
that since Greene’s second publication, programs in the empirical study
of philosophy have emerged at universities across the country.
In 2002, Greene completed his doctorate in philosophy at Princeton and
in 2006, he came to Harvard as an assistant professor. His corner
office on the 14th floor of Harvard’s William James Hall overlooks much
of the University, including the new site of the brain imaging center,
which is still under construction. For now, he is conducting his
experiments at the Massachusetts General Hospital imaging
center, working with variations on the classic trolley problem.
Weighing a baby’s life
“So it’s wartime,” Greene begins, presenting another problem, “and
you’re hiding in the basement with some fellow villagers and your baby.
The enemy soldiers are outside and they have orders to kill all
remaining civilians, and if they find you they’re gonna kill you and
your baby and everybody else.”
This is known as the “crying baby scenario,” and it is of particular
interest to Greene because it offers a hard choice between emotional
and reasoned responses. People disagree about which answer is correct,
and many take a long time to reach any answer. Greene wants to show
that in these cases, a part of the brain that is associated with
competing impulses is more active.
“Your baby starts to cry,” he continues, “and if you don’t cover your
baby’s mouth, the soldiers will find you and kill everybody. But if you
do, then your baby will smother to death. Is it morally acceptable for
you to do this?” he asks. “It’s a horrible question,” he continues.
“No one likes to think about it very much, and some people say, ‘I
guess so’ and some people say, ‘No, no that would be wrong.’ In those
difficult kinds of cases you see more activity in the part of the brain
called the anterior cingulate cortex, which tends to become active when
there are competing behavioral responses, and not just in a moral
context.”
Greene’s hypothesis suggests that we have an emotional impulse to think
it’s wrong to smother the baby, as well as a utilitarian impulse to
weigh the number of deaths under each possible outcome. Moreover,
different parts of the brain are at work in the emotional and the
utilitarian cases.
A multi-faceted process
“So if you had to just sum it up,” says Greene, “I guess the overall
lesson is that moral judgment is not a single kind of process … at
least in my view not a single moral faculty or moral sense, rather it’s
different systems in the brain in some cases competing with each other.”
Sinnott-Armstrong says Greene has “done more than anyone to show how
neuroscience can illuminate traditional philosophical disputes.” His
contributions have prompted “traditional moral philosophers [to] get
much clearer about which questions they are asking, and that itself is
a significant contribution to philosophy.” He has also “introduce[d] a
new method for understanding moral judgments.”
But how do cultural differences fit into Greene’s approach to moral
intuition? “The trend,” says Greene, “is that at least with these
trolley sorts of questions, people’s intuitions are surprisingly stable
across cultures. But we know that there’s a lot of cultural variability
in terms of people’s moral intuitions because people from different
cultures have quite stark moral disagreements.”
Greene is careful to point out that his work does not suggest that
we’re born with an innate set of moral judgments. “Just because
something is in the brain doesn’t mean it’s hardwired,” he says. Just
as we aren’t born speaking English or Chinese, we aren’t born with a
fixed moral sense.
Morality and neurons
But in the end, all moral decision making is a neurological process of
some kind, Greene says. “We’re used to thinking of ourselves as having
brains, but not being brains. That is, we think of ourselves as having
a soul … and it’s sort of separate from the physical brain stuff, and
if there’s anything that your soul traditionally does, it makes moral
judgments… . So, thinking of moral decision making as a physical brain
process and as merely a physical brain process, that may be a shock for
a lot of people. … But the more we understand,” Greene believes, “the
less likely it seems that there’s something going on beyond the firing
of neurons in certain complicated patterns.”
But what exactly do trolleys and neurons have in common? What might an
out-of-control railcar reveal about our moral brains at large?
“I view these little trolley problems as like fruit flies,” says
Greene, explaining that “they’re sort of simple little systems that are
still surprisingly complicated. And they’re nice because you can bring
them into the lab and poke at them and prod at them and they’re simple
enough that you can study them and get meaningful results … but they’re
complex enough that they capture something that’s interesting about the
more complicated real-world moral dilemmas that we’re really interested
in.” By understanding how our brain deals with lab-grown conundrums,
Greene hopes we might learn how to better handle broader moral and
social problems.