019 - Understanding Your Divided Mind: Kahneman, Haidt and Greene


Argument Ninjas need to acquire a basic understanding of the psychology of human reasoning. This is essential for improving the quality of our own reasoning, and for mastering skills in communication and persuasion.

On this episode I take you on a guided tour of our divided mind. I compare and contrast the dual-process theories of Daniel Kahneman (Thinking, Fast and Slow), Jonathan Haidt (The Righteous Mind) and Joshua Greene (Moral Tribes). The simple mental models these authors use should be part of every critical thinker's toolbox.

My other goal with this episode is to help listeners think more critically about dual-process theories in cognitive science, to better understand the state of the science and the diversity of views that fall under this label.

In This Episode:

  • Why it's important to cultivate multiple mental models (2:40)
  • Kahneman and Tversky: biases and heuristics (4:20)
  • Example: the availability heuristic (5:30)
  • Cognitive biases originating from mismatches between the problem a heuristic was designed to solve, and the problem actually faced (8:20)
  • Dual-process theories in psychology that pre-date System 1 and System 2 (9:35)
  • The System 1 - System 2 distinction (12:00)
  • Kahneman's teaching model: System 1 and System 2 as personified agents (18:30)
  • Example: "Answering an Easier Question" (19:30)
  • How beliefs and judgments are formed: System 1 --> System 2 (22:20)
  • System 2 can override System 1 (23:35)
  • Assessing Kahneman's model (25:40)
  • Introduction to Jonathan Haidt (28:40)
  • The Elephant and the Rider model (30:50)
  • Principles for changing human behavior, based on the Elephant and the Rider model (33:00)
  • Introduction to Haidt's moral psychology (34:00)
  • Haidt's dual-process view of moral judgment (34:30)
  • Moral reasoning as an adaptation for social influence (35:20)
  • Moral intuitions as evolutionary adaptations (36:30)
  • Introduction to the moral emotions (six core responses) (37:50)
  • Liberal versus conservative moral psychology (39:20)
  • The moral matrix: it "binds us and blinds us" (40:30)
  • What an enlightened moral stance would look like (41:55)
  • Assessing Haidt's model (42:40)
  • Introduction to Joshua Greene (46:20)
  • Greene's digital camera model: presets vs manual mode (47:20)
  • When preset mode (moral intuition) is unreliable (50:52)
  • When should we rely on System 2, "manual mode" (52:40)
  • Greene's consequentialist view of moral reasoning (53:10)
  • How Greene's dual-process view of moral judgment differs from Haidt's (53:30)
  • Summary: the value of multiple mental models for critical thinking (55:55)

Quotes:

"And as critical thinkers, we shouldn’t shy away from having multiple models that address the same, or similar, phenomena. On the contrary, we should try to accumulate them. Because each of these represents a different perspective, a different way of thinking, about a set of complex psychological phenomena that are important for us to understand. " "Kahneman is inviting us to think of System 1 and System 2 like characters, in something like the way that the movie Inside Out personified emotions like happy, sad, anger and disgust. We identify with System 2, our conscious reasoning self that holds beliefs and makes decisions. But System 2 isn’t in the driver’s seat most of the time. Most of the time, the source of our judgments and decisions is System 1. System 2 more often plays the role of side-kick, but a side-kick who is under the delusion that he or she is the hero." "The rider can reason and argue and plan all it wants, but if you can’t motivate the elephant to go along with the plan, it’s not going to happen. So we need to pay attention to the factors that influence the elephant, that influence our automatic, intuitive, emotion-driven cognitive processes." "[According to Haidt] our moral psychology was designed by evolution to unite us into teams, divide us against other teams, and blind us from the truth. This picture goes a long way to explaining why our moral and political discourse is so divisive and so uncompromising. But what is the “truth” to which we are blind?"


Introduction

On this episode I want to introduce a very important topic. We like to talk about mental models on this show. Mental models that can help us think critically and be more effective communicators and persuaders.

These can come in all shapes and sizes, but some models are more important than others because they’re models of the process of reasoning itself. Models of how our minds function, how biases arise in our thinking, and why we behave the way we do.

These models come in families that have core features in common, but that differ in other respects.

The most influential among these families are what are known as “dual-process” models of cognition. Many of you are already familiar with the distinction between System 1 and System 2 thinking. That’s the family I’m talking about.

These aren’t the only kinds of models that are useful for critical thinking purposes, but they’re very important. So at some point in the education of an Argument Ninja, you need to be introduced to the basic idea of a dual-process view of the mind.

From a teaching perspective, that first model needs to be simple, intuitive, and useful as a tool for helping us become better critical thinkers.

Luckily, we’ve got a few to choose from. They’ve been provided for us by psychologists who work in this tradition and who write popular books for a general audience.

So that’s one of my goals with this episode. To introduce you to the ways that some prominent psychologists talk about dual process reasoning, and the simple conceptual models they’ve developed to help communicate these ideas.

Specifically, we’re going to look at dual process models in the work of Daniel Kahneman, Jonathan Haidt, and Joshua Greene. Kahneman you may know as the author of Thinking, Fast and Slow. Haidt is familiar to many of the listeners of this show; he’s the author of The Righteous Mind, and he introduced the well-known metaphor of the Elephant and the Rider. Joshua Greene isn’t quite as famous as Kahneman or Haidt, but his work in moral psychology overlaps in many ways with Haidt’s, and he introduces a very interesting mental model for dual process thinking in his book Moral Tribes.

I have another goal for this podcast. That goal is to help you, the listener, become better critical thinkers and consumers of information about these writers, these mental models, and dual process theories of the mind in general.

Why is this necessary? Because it’s easy for people to develop misconceptions about these models and what they tell us about human reasoning.

Part of the problem is that most people are introduced to dual-process thinking through one of these popular psychology books, either through Kahneman or Haidt or some other author. And a lot of people don’t read beyond that one book, that one exposure. At least on the topic of dual-process thinking.

So it’s easy for a reader to come to think that one particular author’s version of dual-process thinking represents the final word on the subject.

When your only exposure is one or two popular science books, you don’t have a chance to see these models from the perspective of the author as a working scientist within a community of working scientists who are engaged in very specific scientific projects, trying to answer very specific questions, and who often disagree with one another.

The reality is that there isn’t just one dual-process theory of human cognition. There are many dual-process theories. And not all of them are compatible.

The territory is much larger and more complex than any given map we may have in our hands. That’s important to know.

And as critical thinkers, we shouldn’t shy away from having multiple models that address the same, or similar, phenomena. On the contrary, we should try to accumulate them. Because each of these represents a different perspective, a different way of thinking, about a set of complex psychological phenomena that are important for us to understand.

With these multiple models in our heads, we can then think more critically and creatively about our own reasoning and behavior, and the behavior of other people.

Daniel Kahneman and the Biases and Heuristics Research Program

Let’s start with Daniel Kahneman and the model he presents in his 2011 book Thinking, Fast and Slow.

The book is a combination of intellectual biography and an introduction to dual-process thinking in psychology for the layperson.

It became a best-seller partly due to Kahneman’s status as a Nobel Prize winner in Economics in 2002.

But the Nobel Prize was based on the work he did with Amos Tversky on cognitive biases and heuristics, work that led to a revolution in psychology and launched the field of behavioral economics.

In 1974 they published an article, titled “Judgment under Uncertainty: Heuristics and Biases”, that summarized a number of studies they conducted on how human beings reason about probabilities.

They showed that there’s a significant and predictable gap between how we ought to reason, based on the standard rules of statistics and probability, and how we in fact reason.

This gap between how we ought to reason and how we in fact reason is what they called a cognitive “bias”.

In order to explain these systematic errors in our judgment, they introduced the idea that our brains use shortcuts, or heuristics, to answer questions about chance and probability.

For example, if we’re asked to estimate the frequency of events of a certain kind, like being hit by lightning, or winning the lottery, or dying in a car accident, and we have to assign a probability to these events, how do we do this?

If you were forced to write down an answer right now, you’d come up with something. But how do you decide what to write down?

Well, Kahneman and Tversky suggested that what our brains do is swap out these hard questions for an easier question. The easier question is this: How easy is it for me to imagine examples of the events in question?

And then our brains follow a simple rule, a heuristic, for generating a judgment: The easier it is for me to imagine examples of the events in question, the higher I will judge the probability of events of this type. The harder it is for me to imagine examples, the lower I will judge the probability.

So if it’s easy for me to recall or imagine examples of people being killed by lightning, I’ll judge the probability of being killed by lightning to be higher than if I struggle to imagine such examples.

This particular shortcut they called the “availability heuristic”, because we base our judgments on how available these examples are to our memory or imagination.
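To make the mechanics concrete, here’s a minimal sketch of the availability heuristic as question substitution. This is my own illustration, not code from Kahneman and Tversky: the memory counts and the squashing constant are made-up stand-ins for “ease of recall”.

```python
# A toy model of the availability heuristic (illustrative only).
# The hard question "how probable is X?" gets swapped for the easier
# question "how readily can I recall examples of X?".

def recall_ease(event, memory):
    """Stand-in for System 1's ease of recall: the more vivid
    examples stored in memory, the closer the score gets to 1."""
    examples = memory.get(event, 0)
    return examples / (examples + 3)  # made-up squashing constant

def availability_estimate(event, memory):
    """The judged probability tracks ease of recall, not the true
    base rate -- which is exactly where the bias comes from."""
    return recall_ease(event, memory)

# Vivid news stories stock memory with lightning deaths, so the
# heuristic inflates the judged risk of the rarer, more dramatic event.
memory = {"killed by lightning": 5, "killed falling out of bed": 0}
print(availability_estimate("killed by lightning", memory))        # ~0.63
print(availability_estimate("killed falling out of bed", memory))  # 0.0
```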

In the paper, Kahneman and Tversky introduced several other heuristics, including what they called the “representativeness” heuristic, and the “anchoring and adjustment” heuristic.

These heuristics are themselves hypotheses that can be tested, and this launched a whole research program devoted to testing such hypotheses and looking for new ones.

And over the past forty years, there’s been an explosion of research on cognitive biases. The Wikipedia page “List of cognitive biases” has over two hundred entries.

Now, at this early stage, in the 1970s, no one was using the language of System 1 and System 2. But the idea of our brains using two distinct methods of answering these questions was implicit in the experiments and the analysis of the results.

There are the fast, automatic shortcuts that our brains seem to default to, that generate our gut responses. And there’s the slower, more deliberate reasoning we do when, for example, we’re consciously trying to apply our knowledge of probability and statistics to work out a solution.

This becomes the template for the System 1, System 2 distinction that Kahneman features so prominently in Thinking, Fast and Slow.

It’s important to remember that our heuristic-based, System 1 reasoning isn’t always wrong. In fact, the view that most researchers hold is that heuristic reasoning is highly adaptive. It generates results that are good enough, most of the time, and it’s fast and automatic.

Many of these heuristics have evolutionary origins. We have them precisely because they were adaptive for survival in our ancestral past.

But heuristic reasoning works best when there’s a good match between the kind of problems that the heuristic was designed to solve efficiently, and the problem that an organism is actually facing. If there’s a mismatch, then we’re inclined to call the resulting judgment an error.

And one can argue that modern life poses more problems of this type, where there’s a mismatch and our initial judgments don’t give the best answers.

I’ll give a common example. In our ancestral environment it may have been adaptive to have a craving for sweets and salty foods, to over-eat on these food sources when we come across them, because such food sources were few and far between.

But in our modern environment our craving for sweets and salty foods is no longer adaptive, because we’ve created an environment where we have easy access to them all the time, and over-eating results in obesity, diabetes and so on. Now that adaptive shortcut has become an unhealthy bias.

Now, over time, as new heuristics and different kinds of cognitive biases were discovered, it became natural to see all of this as pointing toward a more general dual-process view of human reasoning and behavior.

This is the picture that Kahneman lays out in Thinking, Fast and Slow.

Dual-Process Theories in Psychology

But we shouldn’t think that the biases and heuristics program was the only source for this view.

Dual-process views have a history that predates Kahneman’s use of this language, and that runs along independent paths.

Kahneman himself borrows the language of System 1 and System 2 from Keith Stanovich and Richard West, from a 2000 paper titled “Individual Differences in Reasoning: Implications for the Rationality Debate”.

Stanovich and West used the System 1/System 2 language back in 2000. But modern dual-process theories appeared in different areas of psychology much earlier.

Seymour Epstein, for example, introduced a dual-process view of personality and cognition back in 1973, in his work on what he called “cognitive-experiential self theory”.

Epstein argued that people operate using two separate systems for information processing: analytical-rational and intuitive-experiential. The analytical-rational system is deliberate, slow, logical and rule-driven. The intuitive-experiential system is fast, automatic, associative and emotionally driven. He treated these as independent systems that operate in parallel and interact to produce behavior and conscious thought.

Sound familiar?

Dual-process theories were also introduced back in the 1980s by social psychologists studying social cognition and persuasion.

Shelly Chaiken, for example, called her view the “heuristic-systematic” model of information processing. The model states that people process persuasive messages in one of two ways: heuristically or systematically.

This view is closely related to Petty and Cacioppo’s model of the same phenomena, which they called the “elaboration likelihood model”. They argued that persuasive messages get processed by what they called the peripheral route or the central route.

In both of these cases, these styles of information processing would line up today with the System 1, System 2 distinction.

There are lots of examples like this in the literature. So what you have is a variety of dual-process views that have a family resemblance to one another. The cognitive biases and heuristics tradition is just one member of this family.

But the similarities among these views suggest a convergence on a general dual-system view of the mind and behavior, and there was a temptation to lump all these distinctions together.

For example, it’s quite common in popular psychology books or online articles to see dual process views presented as a single generic theory, with System 1 and System 2 as the headers for a long list of attributes that are claimed to fall under each category.

So, you’ll hear people say that:

  • System 1 processing is unconscious, System 2 processing is conscious.
  • System 1 is automatic, System 2 is controlled.
  • System 1 is low effort, System 2 is high effort.
  • Fast vs. slow. Implicit vs. explicit. Associative vs. rule-based. Contextual vs. abstract. Pragmatic vs. logical. Parallel vs. sequential.

Here are some associations related to evolutionary thinking.

  • System 1 is claimed to be evolutionarily old, System 2 evolutionarily recent.
  • System 1 expresses “evolutionary rationality,” in the sense that it’s adaptive for survival, while System 2 expresses individual or personal rationality.
  • System 1 processes are shared with animals, System 2 processes are uniquely human.
  • System 1 is nonverbal, System 2 is linked to language.
  • System 1 processing is independent of general intelligence and working memory, while System 2 processing is linked to general intelligence and limited by working memory.
  • Emotion and feeling are linked directly to System 1 processes, while analytic reasoning is linked to System 2 processes.

Now, as tempting as it is to imagine that these lists are describing some general theory of the mind and behavior, that’s not the case.

There is no general dual-process theory that is worthy of being called a “theory”.

What there is is a collection of theories and explanatory models that have homes in different branches of psychology and cognitive science, and that share a family resemblance.

It’s more helpful to divide them into sub-groups, so you can actually compare them.

So there are dual-process theories of judgment and decision-making. The biases and heuristics tradition that Kahneman pioneered is in this group.

There are dual-process theories of social cognition, which focus on conscious and unconscious processing of social information. The “elaboration likelihood” model of persuasion is in this group.

And there are dual-process theories of reasoning. And by reasoning I mean deductive reasoning, how people reason about the kinds of logical relationships you might study in a formal logic class. Why are some inferences easy for us to recognize as valid or invalid, and some much harder to recognize? Formal logic doesn’t ask this question, but psychologists have been studying this for decades.

So there’s more diversity in dual-process views than you would learn from reading popular psychology books.

There’s also a lot more disagreement among these authors than these books would suggest.

However, that doesn’t mean that there aren’t useful models that we can extract from this literature, that we can use to help us become better critical thinkers. There are.

This is actually one of Kahneman’s goals in Thinking, Fast and Slow. So let’s look at how Kahneman tries to do this.

Kahneman's Teaching Model

One thing to remember is that when an academic is in “teaching mode” they can get away with making broad generalizations that they would never say in front of their academic peers.

When Kahneman introduces the System 1, System 2 distinction, he’s in teaching mode. He believes that if the reader can successfully internalize these concepts, they can help us make better judgments and decisions.

So he starts out with a standard description.

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control”.

Here are some examples of skills and behaviors that he attributes to System 1.

  • Detect that one object is more distant than another.
  • Orient to the source of a sudden sound.
  • Complete the phrase “bread and …”.
  • Make a “disgust face” when shown a horrible picture.
  • Detect hostility in a voice.
  • The answer to 2 + 2 is …?
  • Read words on large billboards.
  • Drive a car on an empty road.
  • Find a strong move in chess (if you’re a chess master).
  • Understand simple sentences.

On the other hand, “System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice and concentration”.

Here are some examples of System 2 activities.

  • Brace for the starter gun in a race.
  • Focus attention on the clowns in the circus.
  • Focus on the voice of a particular person in a crowded and noisy room.
  • Look for a woman with white hair.
  • Search memory to identify a surprising sound.
  • Maintain a faster walking speed than is natural for you.
  • Monitor the appropriateness of your behavior in a social situation.
  • Count the occurrences of the letter “a” on a page of text.
  • Tell someone your phone number.
  • Compare two washing machines for overall value.
  • Fill out a tax form.
  • Check the validity of a complex logical argument.

In all these situations you have to pay attention, and you’ll perform less well, or not at all, if you’re not ready or your attention is distracted.

Then Kahneman says, “the labels of System 1 and System 2 are widely used in psychology, but I go further than most in this book, which you can read as a psychodrama with two characters.”

“When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.” (p. 21)

So, Kahneman is inviting us to think of System 1 and System 2 like characters, in something like the way that the movie Inside Out personified emotions like happy, sad, anger and disgust.

We identify with System 2, our conscious reasoning self that holds beliefs and makes decisions. But System 2 isn’t in the driver’s seat most of the time. Most of the time, the source of our judgments and decisions is System 1. System 2 more often plays the role of side-kick, but a side-kick who is under the delusion that he or she is the hero.

Now, as Kahneman works through one chapter after another in the book, he introduces new psychological principles and then tries to reframe them as attributes of these two characters.

For example, there’s a chapter on how we substitute one question for another. I gave an example of that already. This is chapter 9 in Thinking, Fast and Slow, called “Answering an Easier Question”.

You’re given a target question, and if it’s a difficult question to answer, System 1 will try to answer a different, easier question, which Kahneman calls the “heuristic question”.

If the target question is “How much would you contribute to save an endangered species?”, the heuristic question is “How much emotion do I feel when I think of dying dolphins?”.

If the target question is “How happy are you with your life these days?”, the heuristic question is “What is my mood right now?”

If the target question is “How popular will the president be six months from now?”, the heuristic question is “How popular is the president right now?”.

If the target question is “How should financial advisers who prey on the elderly be punished?”, the heuristic question is “How much anger do I feel when I think of financial predators?”

If the target question is “This woman is running in the primary. How far will she go in politics?”, the heuristic question is “Does this woman look like a political winner?”.

Now, there’s a step still missing. Because if I’m asked how much money I would contribute to a conservation campaign, and all I say is “I feel really bad for those animals”, that’s not answering the question. Somehow I have to convert the intensity of my feelings to a dollar value.

And that’s something that System 1 can do as well. Kahneman and Tversky called it “intensity matching”.

Feelings and dollar amounts can both be ordered on a scale. High intensity, low intensity. High dollar amount, low dollar amount.

System 1 is able to pick out a dollar amount that matches the intensity of my feelings. The answer I give, the dollar amount that actually pops into my head, is the amount that System 1 has determined is the appropriate match for my feelings. Similar intensity matchings are possible for all these questions.

So, substitution plus intensity matching is a type of System 1 heuristic processing that we use in a variety of situations.
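Here’s one way to picture intensity matching in code. This is a sketch of my own, assuming a simple linear match between scales; the actual psychological process isn’t specified anywhere near this precisely.

```python
# A toy model of intensity matching (my construction, not Kahneman
# and Tversky's): a feeling's position on its scale is mapped to the
# same relative position on a dollar scale.

def intensity_match(feeling, feeling_scale=(0.0, 10.0),
                    dollar_scale=(0.0, 500.0)):
    """Linearly match a point on the feeling scale to the
    corresponding point on the dollar scale."""
    lo_f, hi_f = feeling_scale
    lo_d, hi_d = dollar_scale
    position = (feeling - lo_f) / (hi_f - lo_f)  # 0 = none, 1 = maximal
    return lo_d + position * (hi_d - lo_d)

# "How much emotion do I feel about dying dolphins?" -> 7 out of 10,
# so a $350 contribution "feels right". No cost-benefit analysis involved.
print(intensity_match(7.0))  # 350.0
```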

Now, how would we describe the personality of System 1, if we were trying to imagine a character who behaves like this?

When I used to teach this material in a classroom I would sometimes try to act out this persona, someone who perceives reality through the filter of their emotions and immediate impulses.

But we have to remember that in Kahneman’s story, System 1 isn’t who we identify with. In most cases we’re not consciously aware of these processes. We identify with System 2. So how does System 2 relate to System 1?

Well, in class I would present System 2 as a lazy, half-awake student who is good at math and solving problems when she’s alert and paying attention, but she spends most of her time in that half-awake state.

A belief or a judgment starts in System 1. What System 1 outputs are impressions, intuitions, impulses, feelings. Not quite fully formed judgments.

These are then sent on to System 2, which converts these into consciously held beliefs and judgments and voluntary actions.

Now, when System 2 is in this low-effort, half-awake mode, what it basically does is rubber-stamp the outputs delivered by System 1.

So what eventually comes out of your mouth, or pops into your head, is really just a version of what System 1 decided, based on your impressions, intuitions, feelings, and the heuristic processing that goes along with these.

Most of the time, this system works fine.

But when System 1 encounters a problem or a task that is surprising, or that it has a hard time handling, it can call on System 2 to provide the more detailed and specific processing that might solve the problem at hand.

With our two characters, this involves System 1 telling System 2 to wake up: we need you to do some thinking that requires focus and attention.

Now, in this alert, active mode, System 2 has the capacity to override or modify the outputs of System 1. This is what we want it to do if these System 1 outputs are biased, if they’re prone to error.

System 2 can tell System 1, wait a minute … I might want the conclusion of this argument to be true, but that doesn’t make it a good argument. And look, I can see a problem with the logic right here …

Or System 2 can do things like imagine hypothetical scenarios, and run mental simulations to see how different actions might have different outcomes, and then pick the action that gives the best outcome in these simulations. Like imagining what the outcomes will be if I go to this party tonight rather than study for my final exam, which is at 9 o’clock in the morning.

But System 2 has a limited capacity for this kind of work. System 2 is fundamentally lazy. Its default mode is to minimize cognitive effort, when it can get away with it.

This is an important element of this dual-process model. Kahneman calls it the “lazy controller”. Stanovich calls it the “cognitive miser”. The principle actually applies to both System 1 and System 2, but it’s a particular issue for System 2.
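If it helps to see this division of labor laid out mechanically, here’s a toy sketch of the two characters interacting. The “surprising” flag and the attention parameter are my own inventions; Kahneman doesn’t specify the handoff this literally.

```python
# A toy sketch of the System 1 / System 2 handoff (illustrative only).

from dataclasses import dataclass

@dataclass
class Intuition:
    answer: str       # an impression or feeling, not yet a full judgment
    surprising: bool  # did System 1 hit something it can't handle?

def system1(target_question):
    """Fast and automatic: substitute an easier heuristic question
    (e.g. 'How happy are you with your life?' becomes 'What is my
    mood right now?') and answer that instead."""
    return Intuition(answer="whatever my current mood suggests",
                     surprising=False)

def system2(intuition, attention_available):
    """The lazy controller: rubber-stamp the intuition unless it is
    flagged as surprising AND attention is available for real work."""
    if intuition.surprising and attention_available:
        return "effortful, worked-out answer"   # override or refine
    return intuition.answer                     # endorse System 1's output

# Tired or distracted, System 2 lets the biased output sail through.
belief = system2(system1("How happy are you with your life these days?"),
                 attention_available=False)
print(belief)
```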

So one source of error can come from System 1, when its outputs are biased.

Another source of error can come from System 2, when it fails to override the biased judgments that System 1 feeds it, or when it doesn’t know how to come up with a better answer.

System 2 can fail to do this for any number of reasons: because you’re tired or distracted, or because you don’t have the right background knowledge or training or cognitive resources to work out a better answer.

But this is the basic idea of how System 1 and System 2 interact, according to Kahneman. And this is how cognitive biases fit within this framework.

Overall, the division of labor between System 1 and System 2 is highly efficient. It minimizes effort and optimizes performance. It works well most of the time because System 1 is generally very good at what it does.

But the vulnerabilities are real, and what’s more distressing is that these vulnerabilities can be exploited. There’s a whole field of persuasion practice that is organized around exploiting these vulnerabilities.

Assessing Kahneman's Model

So, what should we think of this model?

Well, my concern isn’t so much with whether it accurately reflects the consensus on dual-process theories in psychology. It’s Kahneman’s perspective on dual-process theories of judgment and decision-making, filtered through his own history as a central contributor to this field, and his aim to write a book that is accessible to the public.

It’s not hard to find respected people in the field who take issue with various elements of the story that Kahneman tells.

My interest is in how useful this model is for teaching critical thinking, as a tool for improving how people reason and make decisions.

From that standpoint, it’s got a lot of virtues.

I can draw a diagram on a chalkboard and describe the basic picture of how System 1 and System 2 interact, and how cognitive biases appear and fit within this picture. I can describe how these personified characters behave and interact with each other, which is extremely useful.

If I can get students to internalize this picture at some point, that’s a useful mental model to have in anyone’s critical thinking toolbox.

And this is exactly what Kahneman had in mind when he was writing the book. He’s very clear that System 1 and System 2 are fictions. Useful fictions, but still fictions.

Here’s a quote: “System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters.

Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home.

You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book?

The answer is that the characters are useful because of some quirks of our minds, yours and mine.

A sentence is understood more easily if it describes what an agent does than if it describes what something is, what properties it has.

In other words, “System 2” is a better subject of a sentence than “mental arithmetic”.

The mind — especially System 1 — appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities.

Why call them System 1 and System 2 rather than the more descriptive “automatic system” and “effortful system”?

The reason is simple: “Automatic system” takes longer to say than “System 1” and therefore takes more space in your working memory.

This matters, because anything that occupies your working memory reduces your ability to think.

You should treat “System 1” and “System 2” as nicknames, like Bob and Joe, identifying characters that we will get to know over the course of this book.

The fictitious systems make it easier for me to think about judgment and choice, and will make it easier for you to understand what I say”.

Unquote.

So, part of the explanation for the language that Kahneman uses is that he’s in teaching mode.

Jonathan Haidt: The Elephant and the Rider

Now, let’s talk about another popular model for dual-process thinking, Jonathan Haidt’s “elephant and the rider” model.

Haidt introduced this metaphor in his 2006 book The Happiness Hypothesis, which is subtitled Finding Modern Truth in Ancient Wisdom.

Haidt is a social psychologist who specializes in moral psychology and the moral emotions. And he very much views his work as a contribution to the broader field of positive psychology, which you can think of very roughly as the scientific study of the strengths that enable individuals and communities to thrive, and what sorts of interventions can help people live happier and more fulfilled lives.

Haidt has always been interested in how people from different cultures and historical periods pursue their collective goals and conceive the good life. The thesis that he’s been pushing for most of his career is that the scientific study of human nature and human flourishing has been handicapped by an overly narrow focus on modern, western, industrialized cultures.

He thinks we should look at human cultures across space and time, and try to develop an account of human nature that explains both the common patterns and the differences that we see across cultures.

When we do this, we get a richer picture of human psychology and human values, and new insights into how we can live happier and more meaningful lives.

It’s in this context that Haidt introduces the metaphor of the elephant and the rider. It’s part of a discussion about the various ways that we experience the human mind as divided, as a source of internal conflict.

I want to lose weight and get in shape but I constantly fail to make lasting changes to my eating and exercise habits.

I want to be more patient with other people but I get triggered and can’t control my emotions.

I want to get started early on this writing project, but I find myself procrastinating on YouTube and Facebook, and I end up waiting until the last minute again.

I want to make positive changes in my life but my mind and my body seem to be conspiring against me.

This is where Haidt introduces the metaphor as a model for our divided mind, a mind with two distinct operating modes that sometimes come into conflict.

Imagine yourself as a rider sitting on top of an elephant. You’re holding the reins in your hands, and by pulling one way or the other you can tell the elephant to turn, to stop, or to go. You can direct things, but only when the elephant doesn’t have desires of its own. When the elephant really wants to do something, like stop and eat grass, or run away from something that scares it, there’s nothing the rider can do to stop it, because it’s too powerful. The elephant, ultimately, is the one in control, not the rider.

The rider represents our conscious will, the self that acts on the basis of reasons, that can plan and formulate goals. The elephant represents our gut feelings, our visceral reactions, emotions and intuitions that arise automatically, outside of conscious control.

If this sounds familiar, there’s a reason for that. Haidt is connecting this model to dual process theories of cognition. The rider is effectively System 2, the elephant is System 1.

The great virtue of this metaphor, from a teaching standpoint, is that it vividly depicts a key thesis about the relationship between these two systems. It’s not an equal partnership. The elephant, System 1, is the primary motivator and driver of our actions. The rider, System 2, has a role to play, but it’s not in charge; System 1 is in charge.

Kahneman says something similar, but when you just have the labels, System 1 and System 2, the asymmetry of the relationship isn’t apparent on the surface. But with the image of the rider atop a huge elephant, it’s implicit in the image itself.

The purpose of this model, for Haidt, is to help us understand failures of self-control, and how spontaneous thoughts and feelings can seem to come out of nowhere. And it can give us guidance in thinking about how to change our own behavior, and the behavior of others.

The principle is simple. The rider can reason and argue and plan all it wants, but if you can’t motivate the elephant to go along with the plan, it’s not going to happen. So we need to pay attention to the factors that influence the elephant, that influence our automatic, intuitive, emotion-driven cognitive processes. If we can motivate the elephant in the right way, then the rider can be effective in formulating goals and coming up with a plan to achieve those goals, because the core motivational structures of the elephant and the rider are aligned. But once that alignment breaks, and the elephant is following a different path, the rider is no longer effective.

From a teaching perspective, this is the value of the rider and the elephant model.

But it has its limits. It doesn’t say much about these System 1 processes other than that they’re hard to control. And it doesn’t give us much insight into the philosophical and psychological themes that Haidt is actually interested in, which have to do with moral psychology and the moral emotions.

That’s the topic of his next book, published in 2012: The Righteous Mind, subtitled “Why Good People Are Divided by Politics and Religion”.

Haidt's Moral Psychology

In this book, Haidt argues that our moral judgments have their origins in the elephant, in System 1 automatic processes.

You can think of the book as an exercise in unpacking the various evolutionary and cultural sources of our moral intuitions, our moral “gut feelings”, and examining how this bears on our modern political and religious differences.

Now, here’s an important question: what’s the role of the rider in our moral psychology?

Haidt has a specific thesis about this.

Intuitively, it feels like we have a capacity to reason about moral issues and convince ourselves of a moral viewpoint on the basis of those reasons.

But Haidt thinks this is largely an illusion. This rarely happens.

For Haidt, the primary role of the rider, in our moral psychology, is to justify our moral judgments to other people, to convince other people that they should hold the same judgment as us.

So we come up with reasons and present them to others. But we didn’t arrive at our original moral judgment on the basis of these reasons. We arrived at that moral judgment based on intuitive, automatic processing in System 1, processing that is going on below the surface, largely outside of our conscious control.

The reasoning that we come up with to justify our judgment is a System 2 process. But the main function of this kind of reasoning is to rationalize, to others, a judgment that has been made on very different grounds.

In other words, for Haidt, the primary function of moral reasoning, the reason why we have the capacity at all, is social persuasion, to convince others. Not to convince ourselves, though sometimes we do that too. And certainly not to arrive at timeless truths about morality.

Now, he grants that it doesn’t feel this way, to us. It doesn’t feel like all we’re doing when we argue about ethical or political issues is rationalize a position that we’ve arrived at by other means.

It feels like we’re arguing about moral facts that can be true or false. It feels like we are reasoning our way to knowledge of an objective morality.

But for Haidt, all of this is an illusion. An illusion manufactured by our minds. There are no moral truths of this kind.

This is not to say that our moral intuitions are meaningless, that they have no cognitive content. They do. But it’s not the kind of content that most of us think it is.

Haidt would say that our moral intuitions are adaptations, they’re a product of our evolutionary history that is subsequently shaped by culture.

As adaptations, our evolved moral intuitions served the survival needs of our evolutionary ancestors by making us sensitive to features of our physical and social environment that can harm us or otherwise undermine our ability to survive.

So we have natural aversions to things that cause physical pain, disease, and so on. We’re wired to be attracted to things that promote our self-interest and to avoid things that undermine our self-interest.

But humans are also a social species. Our primate ancestors lived in groups and survived because of their ability to function within groups. Parent-child groups, kin groups, and non-kin groups.

The most distinguishing feature of human cultures is highly coordinated social activity, even among genetically unrelated members of a group. We are an ultra-social species.

That means that at some point we had to learn how to cooperate within large groups to promote the goals of the group, not just individual self-interest.

Haidt’s view is that our moral psychology developed to solve the evolutionary problem of cooperation within groups.

Now, if you’re familiar with Haidt’s approach to the moral emotions you know that he thinks there are six distinct categories of moral value that are correlated with distinctive moral emotions.

There’s care, the desire to help those in need and avoid inflicting harm.

There’s liberty, the drive to seek liberation from constraints and to fight oppression.

There’s fairness, the impulse to impose rules that apply equally to all and avoid cheating.

There’s loyalty, the instinct to affirm the good of the group and punish those who betray it.

There’s authority, the urge to uphold hierarchical relationships and avoid subverting them.

And there’s sanctity, the admiration of purity and disgust at degradation.

Each of these values is correlated with a moral emotion or an intuitive moral response. For all of these, Haidt gives an evolutionary story for why these responses would be adaptive in promoting the survival of individuals or groups and for coordinating social behavior.

You might also be familiar with Haidt’s favorite visual metaphor for these instinctive moral responses. They’re like taste receptors on our tongue.

When we’re born we all have an innate capacity to respond to different tastes, like sweet, bitter, sour, salty, and so on. But children from different cultures are exposed to different foods that emphasize some flavors over others. So the palate of a person born and raised in India or China ends up being quite different from the palate of a person raised on a typical American diet.

Similarly, nature provides a first draft of our moral psychology that we all share. But then culture and experience revise this first draft, emphasizing certain values and deemphasizing others.

Now, Haidt’s book focuses on the differences in moral psychology between liberals and conservatives. He argues that modern, so-called liberal cultures tend to emphasize the moral significance of the values of care, liberty and fairness, and they tend to downplay the moral significance of the values of loyalty, authority and sanctity.

By contrast, conservative cultures, and traditional cultures more generally, uphold the moral importance of all six categories of value.

Conservative moral psychology treats values like loyalty, authority and sanctity as morally important, morally relevant, in a way that liberal moral psychology does not.

Haidt’s own view is that we need to allow space for both moralities. They complement one another. Society is better off with both in play.

This is a very quick introduction to Haidt’s work, and there’s a lot more to say about it, but my main interest here is how he thinks about moral intuition and moral reasoning, his dual-process, “elephant and rider” view of moral psychology.

And I’m interested in how Haidt thinks this model can help us think more critically about our own reasoning, and specifically about the way we approach ethical and political disagreements.

So let’s push on just a bit further. What, ultimately, does Haidt think our moral psychology was designed to do?

Here’s his answer, which has been much quoted and discussed.

Our moral psychology was designed by evolution to unite us into teams, divide us against other teams, and blind us from the truth.

This picture goes a long way to explaining why our moral and political discourse is so divisive and so uncompromising.

But what is the “truth” to which we are blind?

It’s this: the moral world that we inhabit, the “moral matrix” within which we live, is not the only one that can be rationally justified, even though each of us is convinced that ours is.

In other words, we think our righteousness is justified. “We’re right, they’re wrong”. But this conviction in our own rightness is itself a part of our moral psychology, part of our moral matrix, that has been selected for its capacity to unite us into teams and divide us against other teams. That feeling of righteousness that we experience is nothing more than an evolutionary survival tool.

Now, given this view, what would an enlightened moral stance look like?

For Haidt, an enlightened moral stance is one that allows us to occasionally slip out from under our own moral matrix and see the world as it truly is.

This is essential to cultivate what Haidt calls “moral humility”, to get past our own sense of self-righteousness.

This is valuable because doing so will allow us to better see how other people view the world, and will contribute to greater sympathy and understanding between cultures.

And doing so will increase our capacity for constructive dialogue that has a real chance of changing people's behavior.

That’s what Haidt believes.

Assessing Haidt's Model

Let me summarize the parts of this that I like, from a critical thinking perspective.

I like the elephant and rider model, for the reasons I mentioned earlier. It’s a great way to introduce dual process thinking, and it captures some important features of the asymmetry between System 1 and System 2 that are harder to explain if you’re just working with these labels.

I think Haidt’s work on moral emotions and moral psychology is very important. It paints a naturalistic, evolutionary picture of the nature of morality that will be hard to swallow for many people who aren’t already disposed to think this way. In fact, he gives a similar account of the naturalistic origins of religion. So it’s a direct challenge to devout religious belief, to religious views of morality, and even to many traditional secular views of morality. But I think the exercise of trying to see things from this perspective is a valuable one.

Also, Haidt’s empirical work on differences in moral psychology has some immediate applications to moral persuasion.

The basic rule is this: if you’re a conservative and you want to persuade a liberal, you should try to appeal to liberal moral values, like care, fairness and liberty, even if you yourself are motivated differently.

If you’re a liberal trying to convince a conservative, you can appeal to these values too, but you’ll do better if you can make a case that appeals to conservative values of loyalty, authority or sanctity.

Robb Willer, a sociologist at Stanford, has been studying the effectiveness of moral persuasion strategies that are inspired by Haidt’s framework. I’ll share some links in the show notes.
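As a toy illustration of this reframing rule, here’s a sketch. The foundation weights are invented for illustration only; they loosely follow Haidt’s claim that liberals weight care, fairness and liberty most heavily, while conservatives weight all six foundations.

```python
# A toy sketch of moral reframing (my construction, inspired by the
# Haidt/Willer line of work): pick the framing whose moral foundation
# the audience weights most heavily.

FOUNDATION_WEIGHTS = {
    # Illustrative weights only, not measured values.
    "liberal":      {"care": 1.0, "fairness": 1.0, "liberty": 1.0,
                     "loyalty": 0.2, "authority": 0.2, "sanctity": 0.2},
    "conservative": {"care": 0.7, "fairness": 0.7, "liberty": 0.7,
                     "loyalty": 1.0, "authority": 1.0, "sanctity": 1.0},
}

def best_framing(audience, framings):
    """Choose the argument framing whose foundation the audience
    weights most heavily."""
    weights = FOUNDATION_WEIGHTS[audience]
    foundation = max(framings, key=lambda f: weights.get(f, 0.0))
    return framings[foundation]

framings = {
    "care": "This policy protects vulnerable families from harm.",
    "sanctity": "This policy keeps our land pure and unspoiled.",
}
print(best_framing("conservative", framings))  # the sanctity framing
print(best_framing("liberal", framings))       # the care framing
```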

I also like Haidt’s views on moral humility, and I like this notion of cultivating an ability to step outside our own moral matrix, if only for a short time — to see our tribal differences from the outside, and how they operate to simultaneously bind us and blind us. That’s a skill that takes practice to develop, but from a persuasion standpoint I think it’s an essential, “argument ninja” skill.

Now let me offer a few words of caution.

I know there are some anti-PC, anti-SJW audiences who view Haidt as something of an intellectual hero and who seem eager to swallow just about everything he says, but just as with Kahneman, his popular work doesn’t necessarily reflect the internal debates within his discipline, or the degree of consensus there is within the field about the topics he writes about.

So, just so you know: there’s a wide range of opinion, both positive and negative, about Haidt’s work, among psychologists and outside his field, especially among evolutionary biologists and philosophers.

There’s disagreement about the moral categories he uses; there’s considerable disagreement about his thesis that moral reasoning is almost never effective at motivating moral action or revising moral beliefs; there’s a ton of debate over his use of group selectionist arguments in his evolutionary psychology; and among philosophers, there’s a large contingent that believes that Haidt simply begs the question on a number of important philosophical positions, that he draws much stronger conclusions about the nature of ethics than his descriptive psychology alone would justify.

Now, these debates are par for the course for any prominent academic, and they tend to stay within their academic silos. They don’t have much impact on Haidt’s reputation as a public intellectual.

But when I’m teaching this material, I have to remind people that there’s a difference between presenting a position that I happen to think has important insights, and uncritically endorsing whatever the author says on the subject.

The more models we have the better. So in that spirit, I’d like to introduce a third dual-process model of reasoning. This one comes from Joshua Greene, who also works on moral reasoning and moral psychology. But his take-away message is quite a bit different from Haidt’s.

Joshua Greene: The Digital Camera Model

Joshua Greene is an experimental psychologist and philosopher. He’s Professor of Psychology at Harvard, and he’s director of Harvard’s Moral Cognition Lab.

He published a book in 2013 called Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. He covers a lot of the same ground as Haidt, in that he endorses a broadly dual-process view of cognition, and specifically a dual-process view of moral judgment.

Greene also agrees with Haidt that our moral psychology was designed to bind us into groups that we belong to, and pit us against groups that we don’t belong to.

However, Greene is much more optimistic about the role that moral reasoning can and ought to play in our moral psychology.

But let’s start with Greene’s preferred model for our divided mind, because I think it has a lot to recommend it. Kahneman has his System 1 and System 2 agents. Haidt has his Elephant and the Rider. Joshua Greene’s model is the different modes of operating a digital camera.

Here’s how it goes. A modern digital SLR camera has two modes of operation.

There’s the point-and-shoot mode that offers you different PRESETS for taking pictures under specific conditions. Landscape, daylight, sunny. Sunset, sunrise. Night portrait, night snapshot. Action or motion shot. And so on.

If you’re facing one of these conditions you just set the preset, point the camera and shoot, and all the work is done. You get a good quality picture.

But if you need more control of the camera settings, you can switch to manual mode.

There you can make individual adjustments to the aperture (the f-stop), the shutter speed, focus, white balance, filters, the lens you’re using, and so forth.

Now, that’s a lot more work. It takes more knowledge and effort to operate the camera in manual mode, and actually take a good picture. But for those who can do it, it’s fantastic.

In general, it’s good to have both options, the preset mode and the manual mode.

The presets work well for the kind of standard photographic situations that the manufacturer of the camera anticipated.

The manual mode is necessary if your goal or situation is NOT the kind of thing the camera manufacturer could anticipate.

Both are good for different purposes. It’s optimal to have both, because they allow us to navigate tradeoffs between efficiency and flexibility.

So, what’s the analogy here?

Preset mode is like our System 1, fast thinking. Heuristic shortcuts function like cognitive point-and-shoot presets.

Manual mode is like System 2, slow thinking. Conscious, deliberate calculation functions like manual mode.

System 1 presets, or heuristics, work well for the kind of standard cognitive tasks that they were designed for.

System 2 manual mode is necessary to solve problems that your automatic presets were NOT designed to solve.

As with the camera, both modes are good for different purposes, and we need both, for the same reason. They allow us to navigate the tradeoff between efficiency and flexibility.

Now, Greene applies these distinctions directly to moral psychology.

System 1 generates our intuitive moral judgments. System 2 is responsible for deliberate moral reasoning.

Both modes are good for different purposes, and we need both.

Notice how different this is already from Haidt’s view. Joshua Greene isn’t minimizing the role of deliberate moral reasoning, and he’s not suggesting that moral reasoning does nothing more than rationalize moral intuitions.

Greene thinks that our moral intuitions, operating in automatic preset mode, give good results when the intuitive response is adaptive and appropriate.

Like, if you go out of your way to do something good for me, my natural intuitive response is to feel grateful and to feel like I now owe you something in return. So I’m more inclined to help you when you need it, or accept a request from you.

That’s a natural response of moral reciprocity, and the basic instinct is hard-wired into us. You scratch my back, I’ll scratch yours. Reciprocal altruism.

But when we’re dealing with problem situations that are fundamentally new, our automatic settings aren’t trained to solve these problems.

Here are some examples.

Consider our modern ability to kill at a distance.

Historically, killing was usually a face-to-face affair. Our emotional responses to personal contact are conditioned by this.

Our emotional responses were never conditioned for cases where we can kill at a distance, like bombing, or with drones.

So our moral emotions don’t respond as strongly to the prospect of lives lost when killing is conducted at a distance, compared to when it’s done face-to-face.

Similarly, our ability to save lives at a distance is a relatively new situation. If we see a child drowning in a pool across the street, we would judge someone who simply walked past as some kind of moral monster.

But if we’re given information about children dying in other parts of the world, and that only a few dollars from us could save a life, we don’t judge those who fail to donate those dollars as moral monsters.

Our natural moral sympathies diminish, they fall off, with distance.

Another example is almost anything to do with intercultural contact between groups. Our intuitive moral psychology is designed to facilitate cooperating within groups, not between groups. It’s much easier for us to harm, discredit and dehumanize people who we see as outsiders.

Another example is any situation that involves long time frames, uncertain outcomes, or distributed responsibility.

This is what we’re facing with the problem of global climate change. It’s the perfect example because it involves all three.

There is nothing in our evolutionary or cultural history that could train our moral emotions to respond appropriately to this problem.

So, for all these reasons, Greene argues that we should treat our automatic moral intuitions in these cases as unreliable.

When this is the case, what we should do is place greater emphasis on our deliberate moral reasoning, our System 2 reasoning.

What kind of reasoning is that? Well, that’s another part of the story that I don’t have time to get into, but Greene has an argument that System 2 moral reasoning basically involves figuring out the actions that will maximize good consequences and minimize bad consequences.

And he argues that this is what we ought to do. So Greene is defending a form of consequentialist moral reasoning in contexts where we have reason to believe that our intuitive moral judgments are unreliable.

So, to sum up, Greene and Haidt have very similar, dual-process, evolutionary views of moral psychology.

But they have very different views about the role of deliberate moral reasoning within this scheme. Haidt is skeptical, Greene is much more optimistic.

And notice that Greene’s digital camera model of dual process reasoning also includes a new element that we haven’t seen before. Haidt has the Elephant and the Rider. Greene has the automatic preset mode and the manual mode. But implicit in Greene’s model is a third element, the camera operator, the person who has to decide which mode to use in a given situation.

Greene chose this model because what’s most important for Greene is this meta-cognitive skill, the skill of deciding when we can rely on our intuitive moral responses, and when we shouldn’t trust them and should switch over to a more deliberate form of moral reasoning. There’s nothing like this in Haidt’s model.
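You could sketch that third element, the operator’s mode choice, like this. The list of “familiar” situations and the decision rule are my own stand-ins for Greene’s much richer account.

```python
# A toy sketch of Greene's camera operator (illustrative only):
# decide which mode to trust for a given moral situation.

# Situations our automatic settings were "trained" for, per Greene:
FAMILIAR = {"face-to-face reciprocity", "cooperation within the group"}

def choose_mode(situation):
    """Meta-cognition: trust the presets on familiar ground, switch
    to manual mode when the problem is fundamentally new."""
    if situation in FAMILIAR:
        return "preset mode: rely on moral intuition (fast, cheap)"
    return "manual mode: deliberate moral reasoning (slow, flexible)"

print(choose_mode("face-to-face reciprocity"))
print(choose_mode("climate change: long time frames, uncertain outcomes, "
                  "distributed responsibility"))
```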

And one final important difference between Haidt and Greene is that they also have very different views about the moral values that should matter to us.

Haidt thinks that in general we shouldn’t privilege liberal over conservative moral values, that society is better off if we allow the full range of moral values to thrive.

But Greene’s argument suggests that we should privilege one of these liberal moral values, specifically the value of care for human welfare.

The sorts of consequences that Greene is talking about involve the happiness and suffering of individuals. So our system 2 moral reasoning, according to Greene, should (a) have priority in these fundamentally new problem situations, and (b) be focused on determining those actions that promote happiness and minimize suffering of individuals.

That’s quite different from Jonathan Haidt’s position.

Now, for the sake of being fair, I should add that there are just as many philosophers who take issue with Greene as with Haidt, so neither has a special advantage in this regard. Most moral philosophers are very cautious about inferring normative ethical conclusions from these kinds of empirical arguments.

The Value of Multiple Models

What can we take away from this survey of dual process thinking in psychology? What’s the critical thinking upshot?

Well, remember I talked about the value of having multiple mental models. We’ve got three different authors, giving three different versions of a dual process view of the mind, with three different mental models to represent these processes.

Kahneman has his System 1 and 2 actors, Haidt has the Elephant and the Rider, and Greene has the digital SLR camera.

They’ve all got useful things to say, but the problems that motivate their work are different, and for that reason, the models they use are different. Our goal as critical thinkers should be to understand why the author chose the model they did, and why they thought it was important for their purposes.

And we can apply dual process thinking to a wider range of situations, because we understand better the problems that these authors were trying to solve when they introduced those models.

And it’s important to remember that we don’t have to choose between them. We want different points of view, we want different perspectives. There might be some tension between them, and the overall picture may be messier, but reality is messier, and the multiplicity of models helps us to see that.

A Final Word

I want to end with another suggestion. I’m a big fan of dual process models. I think some version of these ideas is going to survive and will always be part of our understanding of human reasoning.

But saying this doesn’t commit you to any deeper view of how the mind and the brain work, or of the fundamental processes responsible for the patterns in the observable phenomena that we see.

So you should know that there’s a lot of work being done by different people trying to show that, for a given class of such phenomena, a single process theory, rather than a dual process theory, is able to explain the patterns.

But this is just science. You’ve got observable phenomena and generalizable patterns within those phenomena. Then you’ve got hypotheses about the different types of processes that might explain these patterns. You should expect to see multiple, competing hypotheses at this level. This fact doesn’t invalidate the observable patterns, or the ways we can use those patterns to predict and intervene in human behavior.

And it’s worth remembering that Kahneman could have done all of his work without understanding what a neuron is, or anything to do with the physical underpinnings of information processing in the brain and the body.

We shouldn’t be surprised that scientists who work more closely with the brain tend to think about these issues very differently. Philosophers of mind and cognitive science also tend to think of these issues differently.

So, my suggestion is that, even if you’re a fan of dual process models, as I am, you should be open to the possibility that at a more fundamental level, the dual process distinction may not hold up, or how we think about the distinction will be radically different from how we might think of it now.

And this is okay. There are lots of areas in science like this.

Think about the Bohr model of the atom that you learn in high school science class. You’ve got electrons moving around the nucleus in different orbits, or shells, at different energy levels. And you can explain how atoms absorb and emit photons of radiation by showing how electrons move from one orbit to another.

It’s a very useful model, you can use it to predict and explain all sorts of interesting phenomena.

But the distance between that simple model, and our modern understanding of the fundamental nature of particles and forces, as represented in quantum field theory, say, is almost unimaginable.

At that level, the language of localized particles “orbiting” and “spinning” is just a figure of speech, a way of talking about a reality that is far removed from the ordinary way we use those concepts.

We shouldn’t be surprised if something similar happens here. Except in the case of the mind and the brain, we don’t have anything like a fundamental physical theory, so there’s even more room for possibilities that we haven’t even imagined yet.