Morality: The Amazing Side-Taking Machine


Modern biology and psychology point to a few big surprises about morality. First, the human moral sense is among the most impressive engineering marvels in the natural world. Morality is on par with complex adaptations such as the vertebrate eye, hive-making in honeybees, and the neural control systems that stabilize bird flight. Second, morality is among the most selfish, coercive, backstabbing, and destructive adaptations that evolution has ever produced in its four-billion-year history. This shocker comes with a crucial corollary: morality is different from its kinder, gentler, and more ancient doppelganger, altruism. Humans show many forms of altruism and compassion that do not require or depend on the capacity for moral judgment.

Let me point to the mounting evidence for these claims[1], starting with the idea that human morality is an incredible software program built by evolution.

Evolution by natural selection, though blind and purposeless, is the best computer programmer in the universe. Animal brains are computers in the technical sense that they receive inputs from sensors such as eyes and ears, process this information, and then deliver outputs, which ultimately resolve into muscle contractions. Think about the intricate patterns of muscle contractions when a peregrine falcon dives at 200 mph to strike a bird in midair, a beaver builds a dam, or a human recites love poetry to a mate. Who, or what, calculates all of these nested sets of muscle twitches?

The answer is that evolution installed in the brains of each animal species specialized neural programs that orchestrate behaviors to solve the adaptive problems associated with the species’ ecological niche. These evolved programs far outperform human engineering even on relatively simple tasks like navigating obstacles. Someday your smartphone might have an app to fly like a bird weaving through tangled trees, but for now evolution’s flight controllers, and countless other programs, remain unmatched. This modern understanding suggests that morality, too, is performed by specialized software, simply because all complex behaviors are products or byproducts of highly organized neural programs.

Evidence from experiments reveals the mind’s moral algorithms. Psychologists have been busy dissecting moral judgments in the laboratory. The most straightforward method is to present vignettes to participants (the inputs to moral programs) and measure their judgments (the outputs). By varying one scenario element at a time, researchers can describe the input-output mappings of moral judgment, i.e., its information-processing structure. This research is ongoing but already shows intricate and systematic patterns, enough to make researchers compare moral judgment to language—a paragon of complex information-processing and a holy grail of artificial intelligence.

If morality is a computer program, then what is it designed to do? The seemingly obvious answer is that morality makes people nice to each other. Darwin held this view and so do many scientists today. These theorists seek to understand why people show niceties such as cooperation, honesty, restrained aggression, and respect for property. This work has generated many compelling theories to explain how evolution can favor nice behaviors, and these accounts are supported by evidence of analogous behaviors in many non-human species.[2]

Indeed, these evolutionary theories might be too successful. If there are so many evolutionary pathways to nice behaviors, and if many animals are cooperative, including bees, bats, hyenas, and monkeys, then perhaps the elaborate paraphernalia of human morality—explicit rules of behavior, moral taboos, moral debates, accusations, impartiality, punishments—are not needed to make people nice. Right?

This is exactly what psychological research indicates. Developmental evidence shows that children are nice to people before acquiring adult-like moral judgment. Moreover, when children develop moral judgment, it does not prevent them from taking actions they judge wrong, such as lying or stealing. In adults, research shows that moral judgments differ from and can even oppose altruistic motives. Research on hypocrisy shows that people are mostly motivated to appear moral rather than to actually abide by their moral judgments. Research on “motivated reasoning” shows that people deviously craft moral justifications to push their own agendas. In short, people can be nice without morality and nasty with morality—altruism and morality are independent.

In fact, humans are more eager to judge other people than to follow their own moral advice. Moral condemnation of other people’s behavior is distinctly, perhaps uniquely, human. So, what is the evolutionary function of condemnation? Again, there’s a seemingly obvious answer that condemnation makes people nice, this time with sticks of punishment rather than carrots. And again, experiments show this intuition fails under scrutiny: people’s judgments do not systematically fit predictions of the hypothesis that condemnation functions to deter harmful or non-cooperative behavior. 

Here is a distinctive human problem that just might explain our distinctive moral condemnation: Humans, more than any other species, support each other in fights, whether fistfights, yelling matches, or gossip campaigns. In most animal species, fights are mano-a-mano or between fixed groups. Humans, however, face complicated conflicts in which bystanders are pressured to choose sides in other people’s fights, and it’s unclear who will take which side. Think about the intrigues of family feuds, office politics, or international relations.

One side-taking strategy is supporting the higher-status fighter, like a boss against a coworker or a parent against a child. However, this encourages bullies because higher-ups can exploit their position. Another strategy is to form alliances with friends and loyally support them. Alliances deflate bullies but create another problem: When everyone sides with their own friend, the group tends to split into evenly matched sides and fights escalate. This is costly for bystanders because they get scuffed up fighting their friends’ battles.

Moral condemnation offers a third strategy for choosing sides. People can use moral judgment to assess the wrongness of fighters’ actions and then choose sides against whoever was most immoral. When all bystanders use this strategy, they all take the same side and avoid the costs of escalated fighting. That is, moral condemnation functions to synchronize people’s side-taking decisions. This moral strategy is, of course, mostly unconscious just like other evolved programs for vision, movement, language, and so on.
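The coordination logic behind these strategies can be sketched in a toy simulation. This is a hypothetical illustration, not a model from the research itself: it simply contrasts loyalty-based side-taking, where each bystander backs their own friend (modeled here as an independent coin flip), with moral side-taking, where every bystander consults the same shared rule about who acted wrongly.

```python
import random

def take_sides(n_bystanders, strategy, seed=0):
    """Simulate bystanders choosing sides in a dispute between A and B.

    'friends': each bystander backs whichever disputant they are loyal to
    (an independent coin flip here), so the group tends to split evenly.
    'moral': every bystander consults the same shared rule, which names
    one disputant the wrongdoer, so everyone lands on the same side.
    """
    rng = random.Random(seed)
    wrongdoer = rng.choice(["A", "B"])  # fixed once by the shared moral rule
    sides = []
    for _ in range(n_bystanders):
        if strategy == "friends":
            sides.append(rng.choice(["A", "B"]))  # loyalties vary person to person
        else:  # 'moral' side-taking
            sides.append("B" if wrongdoer == "A" else "A")  # oppose the wrongdoer
    return sides

def imbalance(sides):
    """Fraction of bystanders on the majority side (1.0 = unanimous)."""
    return max(sides.count("A"), sides.count("B")) / len(sides)

# Loyalty-based side-taking yields a near-even, escalation-prone split;
# moral side-taking is unanimous by construction.
print(imbalance(take_sides(1000, "friends")))  # close to 0.5
print(imbalance(take_sides(1000, "moral")))    # 1.0
```

The point of the sketch is the contrast in the final numbers: a majority fraction near 0.5 means two evenly matched factions and costly escalation, while 1.0 means the bystanders have synchronized and the fight is lopsided from the start.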

For moral side-taking to work, the group needs to invent and debate moral rules to cover the most common fights—rules about violence, sex, resources, etc. Humans are quite motivated to do just this. Once moral rules are established, people can use accusations of wrongdoing as coercive threats to turn the group, including your family and friends, against you.

The side-taking hypothesis fits many otherwise puzzling observations from the laboratory and the world around us. For one thing, it explains why people sometimes oppose their own family or friends if they act immorally. Of course, people are not always impartial because they must weigh the value of their relationships against the costs of opposing other bystanders. Morality makes us betray friends and family when their (alleged) wrongdoing causes us too much trouble.

The side-taking model also explains why moral condemnation is so destructive. Moral accusations divide friends and destroy marriages. Morality motivates hate crimes, genocide, terrorism, drug wars, denial of abortion services to women, discrimination against homosexual people, and many other abuses and inhumanities. Morality is destructive because moral rules are not designed to promote beneficial behavior (although they sometimes do so incidentally) but instead to synchronize side-taking.

We can add masterful programming and cunning strategy to the amazing wonders of morality. But, to be better people, we ought to seek our inner altruist because our inner moralist is a devil in disguise.

[1] For further reading and supporting citations for these arguments, see:

DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112, 281-299.

DeScioli, P., & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological Bulletin, 139, 477-496.

[2] On cooperative behaviors in non-human animals, see:

Davies, N. B., Krebs, J. R., & West, S. A. (2012). An introduction to behavioural ecology (4th ed.). Hoboken, NJ: Wiley.

Dawkins, R. (1976). The selfish gene. Oxford: Oxford University Press.

De Waal, F. B. (1996). Good natured: The origins of right and wrong in humans and other animals. Cambridge: Harvard University Press.

Dugatkin, L. A. (1997). Cooperation among animals. New York: Oxford University Press.

Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314, 1560-1563.

Ridley, M. (1996). The origins of virtue. London: Penguin Books.

Sachs, J. L., Mueller, U. G., Wilcox, T. P., & Bull, J. J. (2004). The evolution of cooperation. The Quarterly Review of Biology, 79, 135-160.


  • Peter DeScioli

    Peter DeScioli is a professor of political science at Stony Brook University. Professor DeScioli’s research combines evolutionary theory, cognitive science, and game theory to study how the human mind uses principles of strategy to solve problems in the social world.
