Thinking fast and slow chapter 9: Answering an easier question

This chapter is key to understanding Kahneman's model of biases. In a nutshell:

We have feelings and intuitions about everything that interests us. When facing a difficult problem, instead of really trying to solve it, we substitute an easier one for it: a problem for which we do have an intuitive answer. We solve the simpler problem, but we believe we have found an answer to the original one.

For example, when having to decide in a political election who is best suited to rule the country (a difficult problem indeed), we substitute a simpler one: whose face do I find more trustworthy/attractive/authentic? System 1 finds an answer to that simpler problem and we vote for the most trustworthy/attractive face, thinking that we have found an answer to the conundrum of who is best suited to govern the country.

A very elegant experiment demonstrates this effect by showing how a correlation between dating and happiness appears or not depending on the order of the questions. If a group of students is asked how happy their life is and then how many dates they have had in the last month, no correlation shows up. But if they are asked about their dating life first, the correlation emerges.

The chapter ends with a table stating what system 1 usually does.

Let one of our New Year's resolutions be: may I be aware when I'm substituting a simpler question for a complex one.

Thinking, Fast and Slow. 8 How Judgments Happen

Following the schedule of our Reading Calendar

After chapter 7, where we encountered different examples of biases in reasoning and confronted the terrible Halo Effect, we come back now to a more descriptive chapter in preparation for what is to come.

System 1 is continuously on, scanning the environment and making many judgments about the possible threats and opportunities that surround us. Its assessments are constrained by its capabilities and limitations: it very often resorts to approximate heuristics, because it is asked questions that require an analytical approach it cannot provide.

System 1 is good at grasping averages and textures but very bad at counting and giving exact numerical answers. It compares and classifies different intensities, and it is even able to compare and match intensities across different categories and domains. Finally, in what Kahneman calls the "mental shotgun", system 2 can direct the assessment activity of system 1 toward certain issues, but only in an imprecise way: system 1 will assess not only that particular issue but many other similar or related ones.

We are promised that in future chapters we will be shown how all these characteristics of system 1 assessment activity bring us to bias and error.

Thinking Fast and Slow 7. A machine for jumping to conclusions

When I first read this book, in 2011, this chapter changed my life. Or better said, I decided that this book should change my life. Unfortunately, most of the time I am a stupid machine that continuously jumps to conclusions, but thanks to Kahneman, once in a while I can stop myself in the middle of a thought and realize: "Damn you! This is the halo bias" or "Come on, you are deep inside the confirmation bias again."

It also helped me change habits. For example, after reading Kahneman's observations about exams and grading, as a teacher I now use multiple-choice exams whenever possible to avoid some biases.

I would make it mandatory in every school to explain these biases and have students analyze how often they control their own thoughts. And here is the main problem: it is very easy to spot these biases in other people, but very difficult to find them in our own trains of thought. As Feynman said: "The first principle is that you must not fool yourself, and you are the easiest person to fool."

Let me present these biases briefly:

The confirmation bias: we are cognitive machines that, by default, believe. In order to remove a belief, system 2 has to work hard. If you started out thinking that the only cause of the last economic crisis was the unmitigated greed of the evil 1%, then it is going to be very difficult for you to change views; you will interpret every further development of the crisis accordingly, paying close attention to the elements that favor your position and dismissing or reinterpreting those that go against it.

The halo bias: if a person, a theory or a concept is first presented with a few positive characteristics, then we will assume that its other, unrelated characteristics must also be positive, even if we have no good reason to think so. We saw a stunning example of this when, after winning the election, Barack Obama was awarded the Nobel Peace Prize.

The What You See Is All There Is bias: when facing a decision, we pay attention only to whatever is salient at the moment, and we are unable to look for other evidence that might present the problem in a different light. You consider a food that is "90% fat free" a healthy choice: almost no fat! But if you could look at it from another point of view, you would realize that it contains a good deal of fat: 10%.

One of the corollaries of Gödel's second incompleteness theorem is that if a logical system can prove its own consistency, then it is certainly inconsistent. In a similar way, if a person reads this chapter and thinks it has nothing to do with him or her, then it really has a lot to do with that person.

Thinking, Fast and Slow. 6. Norms, Surprises, and Causes

Following the schedule of our Reading Calendar

This chapter is devoted to conveying a single but powerful idea: our system 1 is constantly monitoring our environment, checking that everything falls into an expected order of things. When something happens that doesn't fully fit that order, system 1 generates a feeling of surprise and immediately begins to look for a causal explanation for that fact.

We humans are a kind of machine for finding causal explanations of everything that happens around us. There are two sets of rules that we apply when looking for causes. One applies to the way physical objects interact with each other: these rules involve movement, impact, velocity, pushing, being pushed, falling, and so on. The other set is used to understand interactions between human beings, where we use completely different categories: help, explanation, pretense, love, betrayal, hope, and so on.

This idea, that we are machines desperately looking for a story, is very Talebian, and in fact Kahneman uses an example from Taleb in the chapter. Also Talebian is the idea that this human tendency is a source of cognitive bias: system 1 generates lots of spurious ideas that, if not checked thoroughly by system 2 (and remember that system 2 is too lazy and too busy for thoroughness), become a cause of erroneous judgment.

Let's see how this concept develops in future chapters.

Thinking Fast and Slow: 5. Cognitive Ease

Following our schedule in our reading calendar, it is time for chapter 5.

In this chapter Kahneman describes a very interesting phenomenon, cognitive ease: the feeling that the situation is fine, nothing worries us, and the tasks at hand seem easy to solve.

We might be in such a state simply because the task is easy, but there are other, very different and incongruent causes: the experience is a repeated one, we are looking at a clear display, we have received priming we didn't notice, or we are just in a good mood.

This generates biases and errors in our judgment. For example, if a name feels familiar just because it was primed to us before, we might wrongly think that we remember the person because they are famous, when actually it is just a name the researcher made up.

The most astonishing, fascinating and unexpected result in the chapter, for me, was that if students have to struggle with how a problem is stated, they are more likely to switch to system 2 and solve it correctly. For example, if a mathematical problem is printed in a small font and poorly reproduced, experimental subjects are more likely to solve it correctly than if it is presented very clearly, in which case they rely on system 1 instead and make mistakes.

This clearly goes against the idea that the more clearly a problem is presented, the easier it is to understand. But if you think about it, it makes perfect sense, and there are plenty of other experiments that show similar results. It sounds like we have to revise some classical ideas about how to teach…

I have some reservations about some of the experiments, which are described only in passing, making it difficult to establish their real relevance. For example, the experiment in which investors prefer stocks with fluent names like Emmi or Swiss-first over Geberit or Ypsomed: how exactly was it conducted? If the subjects were simply told to choose from a list of stocks they didn't know, with no extra information provided, then of course they had to invent some criterion, or use an unconscious one. But my question is: if these were real investors rather than experimental subjects, and they had to use their own money to buy stocks, would they just pick the nice-sounding names? Of course not. So the experiment doesn't prove that people choose stocks depending on how the name sounds; it only shows that, confronted with a pure lab problem far removed from any real one, people make educated guesses so as not to look like fools.

A similar problem arises with the experiment involving Turkish words in a newspaper. People considered "good" those words they were familiar with, because the words had been published in the newspaper before and they had seen them several times. The more often they saw them, the more familiar the words became, and the more they tended to judge them "good". But what does "good" mean here if the subjects do not understand Turkish? Probably not much.

Still, the general thesis of the chapter holds together well, and we have all experienced cognitive ease, so we can see that it makes sense. The chapter also gives us another disturbing fact: if learning material is presented in a super-easy way, it might be less helpful than material that is more difficult to process.