There are rats here. Big ones. Stuck following Google Maps on my first night in this part of Washington, D.C., I found myself facing a family of them hours after arriving in the city. It occurred to me that our showdown would be a good test: I’ve heard that it’s important to “challenge yourself” during the summer.
My test of courage went south when I began imagining how, in my CVS-bought flip flops, one of the creatures would bite onto my big toe and send me scurrying to George Washington Hospital for weeks of rabies vaccines. I fled towards the cover of streetlights feeling a mixture of terror and embarrassment. The notion that I needed to do anything other than continue walking at my ordinary pace was, I told myself, “just an emotional reaction.” And sometimes I like to think that my judgments come from a more trustworthy source.
What’s so wrong with emotional reactions? Quite a lot, according to some. Harvard psychologist Joshua D. Greene ’97 has argued that understanding the emotional origin of common intuitions should change the way we answer the most important questions about our moral relationships. Through fMRI experiments, for instance, Greene and others find that individuals using “conscious reasoning” tend to think that we ought to promote the good of the most people, no matter how it comes about, while subjects who favor constraints on promoting some consequences generally do so on the basis of “automatic emotional responses.”
Hypothetical moral dilemmas called “trolley problems” illustrate this distinction. According to Greene, individuals using conscious, cognitive processes favor pushing a person in front of a trolley to save five others. Subjects responding to their emotional impulses, meanwhile, think that doing so is gravely immoral.
Chalking moral reasoning up to “automatic emotional responses” feels disappointing. Who wouldn’t want to be on the side of the “conscious reasoners”? And if Greene is right that our rational processes support rescuing the most people possible, then maybe they also require that I start donating far more of my time and resources to aid others.
But first we need to explain why our “emotional” judgments count for less. Greene argues by analogy: consider a romantic relationship between adults who eventually discover that they are siblings, but decide to stay together while taking precautions to ensure they remain childless. The logic goes that if our condemnation stems from a historic fear of inbreeding, we should discard that emotional judgment when birth defects are not at issue. Understanding the origins of moral intuitions allows us to see when they are irrelevant or unreliable guides to the situation at hand.
As interesting as it might be for neuroscience to answer these fundamental questions, the psychological research tells us nothing about which moral principles are true in cases of incest or anything else. If someone thinks that genetic similarity makes incest impermissible, she isn’t making a mistake because she trusts her emotions. Her error lies, rather, in giving an unconvincing account of what makes some relationships wrong and others right.
When we remove the specter of genetic disease from our thinking, we need to explain the wrongness of incest in some other way, or realize that we have no reason to object to it. For example, we might discover that incest is morally concerning when it endangers family structures we take to be important. Or perhaps we will appeal to other concerns about sibling relationships.
In any case, the core questions about the ethical propriety of such relationships are decided not by appeal to the emotional or cognitive sources of our judgments, but by the kinds of justifications we can give for them. If we change our minds about incest in some scenarios, we should do so only because reasoning about the features of such cases supports our conclusion.
Remember the rats? What made my judgment that I was in imminent danger wrong-headed was not (even marginally) the fact that it arose as an automatic emotional response, though it certainly did, but the observation that no rat was capable of causing me harm. In some respects, judgments about far more significant topics operate similarly: Our emotions may lead us astray, like anything else, but not because they are emotions. We need to decide how to act the way people always have, without psychological substitutes for difficult moral reasoning. Ethics doesn’t come with any shortcuts.
Gabriel H. Karger ’18 is a philosophy concentrator in Mather House.