Thinking fast and slow: Conclusions

Following the schedule of our Reading Calendar

In this conclusion, System 2 is presented as “fictitious Econs, who live in the land of theory” while System 1 is “real human beings”. However, the real, experiencing self is both System 1 and System 2. Most of this final chapter develops that idea: humans are both System 1 and System 2.

This final chapter reads more like the “future research” closing remarks of an academic journal paper than a set of applications for living a better life, which is a pity.

Two selves

The main bias here is duration neglect: we value “a short period of intense joy over a long period of moderate happiness”, which leads us to make unwise decisions about how to live our lives, missing out on a long happy period if it might have a poor ending. However, the stories we tell ourselves about our lives also matter, so any policy aiming to improve the wellbeing of a country’s citizens should consider both.

Econs and Humans

Here Kahneman discusses the idea of rationality itself and points out how divergent the common-sense interpretation of rationality is from the idea economists have of it. Here lies the most relevant contribution of Kahneman’s work: pointing out how misled economics is, and all the different things that need to be considered in order to make economics a little more realistic, dealing with real human beings and not mathematical abstractions.

Let me quote Kahneman on the subject:

“I often cringe when my work with Amos is credited with demonstrating that human choices are irrational, when in fact our research only showed that Humans are not well described by the rational-agent model”

That doesn’t mean that humans don’t need any help or that our decisions are perfect. In the following pages things get a little more interesting, as Kahneman gets political and states that the libertarian vision of human beings put forward by the Chicago School of Economics is basically flawed, as it is based on such a limited and unrealistic view of rationality.

A behavioral school of economics, instead, points out the need for state regulation to offset the mistakes people make due to the biases described in the book. The Chicago School of Economics does not face such a problem, because it believes in a mythical rational human being that can’t be found in the real world.

Kahneman is not completely clear about his own beliefs, but he seems to favor some sort of “nudge” program in which citizens are nudged toward the “best option” for them by making it the default. The state would therefore have two main roles against biases: first, to guarantee that “best options” are offered by default, and second, to protect humans from con artists who may exploit the weaknesses our biases create.

This is a very relevant discussion, a key one, and I wish that Kahneman had devoted more time to it, instead of bugging us to death with lab experiments.

Two Systems

Here Kahneman develops the main idea of this conclusion, how humans are a very complex and fascinating mixture of System 1 and System 2, and gives a fairer description of System 1: our intuition, besides being a source of biases, is also responsible for lots of things we do rightly and effortlessly. However, sometimes it does very wrong things, and we have to be aware of that.

This section looks mostly like a way of distancing himself from Gigerenzer’s view of “bounded rationality”, done in an oblique way by stating that “the heuristic answer is not necessarily simpler or more frugal”, playing with Gigerenzer’s paper (and book) “Fast and Frugal Heuristics”.

When considering the applications of all this knowledge in our daily life, Kahneman is -fortunately- very realistic. It is very difficult to change the way System 1 behaves. We have to put a lot of effort into it.

The main mechanism for detecting such biases is simple: we have to recognize that our mind is trying a shortcut to solve a very complex problem, slow down, and bring in System 2 to make a more detailed analysis of the problem.

But this is not easy, as Kahneman states:

Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions and the planning fallacy as it was before I made a study of these issues.

And this is because:

The voice of reason may be much fainter than the loud and clear voice of an erroneous intuition, and questioning your intuitions is unpleasant when you face the stress of a big decision. More doubt is the last thing you want when you are in trouble.

Other tools that can be helpful:

  • Organizations making decisions instead of individuals, as they have to think and process decisions more slowly.
  • Having a richer language, so we can label errors and biases in a more detailed manner.

Like human beings, this book is a peculiar mix of System 1 and System 2: good, helpful intuitions on how humans see the world, and boring academic discussions that are only relevant to other academics. It is not the best book I’ve ever read, and certainly not the most entertaining one, but I’ve learned quite a lot about how other people and I make decisions and analyze the world, and overall I am happy to have read it.



Thinking Fast and Slow: 37 Experienced Well-Being

Following the schedule of our Reading Calendar

In this chapter Kahneman revisits our idea of well-being, based on the notion of flow developed by psychologist Mihaly Csikszentmihalyi. The main idea is to substitute the notion of “general well-being”, which is clearly too ambiguous to be of any help, with that of “experienced well-being”, asking people to recall their experiences in a given day (the Day Reconstruction Method, or DRM).

The main aim of this chapter is to argue for this “experienced well-being” as a better way to measure well-being and inform policies aimed at ensuring greater well-being for citizens.

The results described are not very spectacular. Some of them are common sense: suffering is unevenly distributed, and some people seem to have no worries in a whole day while others spend most of their time in emotional distress and pain. Active hobbies are better than passive leisure like watching TV. Having children generates more stress but produces a higher life evaluation. Some results are a little less evident, like the fact that people with better education tend to report higher stress, or that religion does not help to reduce depression or worry. Severe poverty also amplifies bad effects beyond the merely economic, and illness has a more negative effect on people in extreme poverty.

Probably the most quoted and surprising result is how experienced well-being grows with income, but only up to a point. Past a $75,000 income in high-cost areas, there is no relevant increase in experienced well-being. Money gives happiness, but only to a certain point; beyond that, it is irrelevant.

When developing his studies, Kahneman used the DRM because “Experience sampling is expensive and burdensome.” But that is no longer true: you can design an app and ask thousands of volunteers to participate in research on well-being. This has actually been done in several experiments, but the results are quite similar to those obtained by Kahneman and his team, as far as I can tell.

Thinking Fast and Slow: 36 Life as a Story

Following the schedule of our Reading Calendar

Well… This chapter is like really well… boring.

My rephrasing: some friends of Kahneman, after watching “Total Recall”, decided to run a pointless experiment about how important memories are when evaluating former experiences. He put it in his book so they would get some extra credit. Plus, the chapter starts with a little digression on La Traviata, so he also sounds cool and intellectual. Meh!


Thinking Fast and Slow: 33 Reversals

Following the schedule of our Reading Calendar

In this chapter we are presented with a different type of bias: how we humans make different decisions depending on whether or not we can compare two different outcomes. When we are presented with two different situations to evaluate, the way we decide is highly influenced by the way the information is presented. In one of the first experiments described, people choose a safer bet over a riskier one when the two are presented together, even if the riskier bet can give them more money. However, if the bets are presented separately and one has to state the minimum amount of money one would accept for each of them, people tend to put a lot more money on the riskier bet.

Of course, such results are devastating for the Morgenstern/von Neumann model, which supposes that the choices humans make are “rational” and consistent, but this systematic inconsistency in the way we make decisions should worry us too, as it shows up in several experiments related to trials and juries.

And we can forget very basic facts. In another experiment, when people are asked separately to give a donation either to save dolphins from pollution or to save farmworkers from skin cancer, they tend to give more money to the dolphin cause. The emotional impact of an endangered dolphin is greater than that of a farmer getting skin cancer; after all, farmers are supposed to work under the sun… However, when the two causes are presented together, people give more money to the farmworkers. After all, they are human, and dolphins are not.

The main reason for such a discrepancy is that when we have to choose among several options, it is more likely that System 2 activates. So this chapter gives us an interesting heuristic: when you suspect you are making a rushed System 1 evaluation, consider other options and force System 2 to show up. I think I’ll start using it right now.


Thinking Fast and Slow 31: Risk Policies

Following the schedule of our Reading Calendar

As if on a rollercoaster, we are back to a pseudo-bias in which stupid humans are unable to think the way probability experts think they should.

It is clear that Carlos and I share the view of rationality that Kahneman presents here:

it helps us see the logical consistency of Human Preferences for what it is -a hopeless mirage.

However, I don’t think that the example presented in the first pages is a proof of such a consideration.

In the example, first we have to choose between a sure gain and a certain probability of winning some more. Without trying any calculation, most people will settle for the sure gain. However, if we are presented with two choices that both involve probabilities, then we’ll make the calculation.

I don’t think that’s a problem, really. There is another way of reading the two situations. In the first situation we are faced with a real decision, and in the real world going for the sure thing is, after all, not a bad heuristic. However, in the second situation it is clear from the framing of the problem that the experimenter wants us to make probabilistic calculations, so we do.

If the main point here were that we humans are not rational in the sense described by Morgenstern and von Neumann, I’d agree completely, but Kahneman seems to argue that because we are not computing probabilities all the time, we are doomed to failure and are irrational.

The main problem described in this chapter -Samuelson’s problem- is boring as hell and it is another elaboration on how far humans are from being perfect statisticians.

The last part of the chapter, on risk policies, makes sense -have a risk policy and apply it routinely whenever a relevant problem arises- but the description is too broad to be actionable.

Besides, nothing new has been presented. This chapter is mostly dispensable.



Thinking Fast and Slow 29: The Fourfold Pattern

Following the schedule of our Reading Calendar

Here Kahneman studies how we combine the expected probability of an event with the value we assign to it in order to weight different outcomes: how desirable each one is for a person and how likely it is to happen.

Two main biases are presented here: the possibility effect and the certainty effect. Changing something from impossible to merely improbable has a disproportionate effect on us. That’s the reason we buy lottery tickets: they raise a 0% probability of gaining something to a very low probability. It is also the reason we fear very small risks: the operation is quite safe, but there is a 1% risk that something bad happens.

In the certainty effect, knowing that something is going to happen for sure, rather than with a 95% or a 99% probability, also has a disproportionate effect on us.

So Kahneman’s point here is that there is a relevant difference between the real probability of an outcome and the weight people give it. This clashes with Morgenstern and von Neumann’s theory of rational choice, which expects humans to be natural calculators of real mathematical probabilities.

To solve this problem within the prospect theory model, Kahneman and Tversky proposed what is known as the Fourfold Pattern. This schema predicts the feelings and behavior of real humans by considering the four combinations of two variables: probability (high or low) and expected value (gains or losses). In this model emotions play a very substantial role, powered by risk aversion.
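The possibility and certainty effects can be put in numbers. The weighting function below is not from this chapter: it is the form Tversky and Kahneman later fitted in cumulative prospect theory (1992), with their published parameter γ = 0.61 for gains, so treat it as an illustrative sketch rather than the book’s own formula.

```python
# Sketch of a probability weighting function in the form used by
# Tversky & Kahneman's cumulative prospect theory (1992):
#   w(p) = p^g / (p^g + (1 - p)^g)^(1/g)
# g = 0.61 is their published estimate for gains (assumed here).

def w(p: float, g: float = 0.61) -> float:
    """Decision weight people assign to an outcome of probability p."""
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

# Possibility effect: a tiny probability is overweighted.
print(round(w(0.01), 3))  # ≈ 0.055, far more than 0.01

# Certainty effect: a near-certain probability is underweighted.
print(round(w(0.99), 3))  # ≈ 0.912, noticeably less than 0.99
```

With g = 1 the function collapses to w(p) = p, i.e. the perfect probability calculator that rational-choice theory expects.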


This chapter has the same problem we discussed when talking about Mr. W’s speciality or Linda the accountant. Yes, people have a different impression of risk and probability than the one taught in mathematics. So what? Consider the possibility effect: I am myopic and I need glasses. Surgery could solve that, but let’s say that there is a very small risk (less than 1%) of losing my sight and becoming blind. I don’t care what a mathematician might say, I’m not going to take that risk; I prefer wearing glasses. If that makes me irrational, so be it.

Kahneman is sympathetic with people like me, and admits that “The decisions described by the fourfold pattern are not obviously unreasonable”, but says that in the long run they are much more costly.

The general discussion of gambling as a way to study how we make decisions reminded me of Taleb and his critique in Antifragile of how casinos are a very, very bad model for studying risk. In real life there is no way to ponder the probabilities of a certain event happening, so at best casino-based models are useless for any real risk assessment.

Thinking Fast and Slow 27: The Endowment Effect

In this chapter we find some more examples of “theory-induced blindness” and how the idea of indifference can lead us to forget about previous conditions while making a decision. This is clearly a bad idea, but it seems quite common in economics, which doesn’t consider loss aversion when a person has to move from one job to an “equivalent” one.

The endowment effect, tested in various original experiments, shows how, when something belongs to us, we suddenly value it more. When I am shown an item, a cup for example, I wouldn’t pay more than $5 for it. But once it is mine, and I’m asked how much I’d want to receive for it, I might end up asking for $15.

Prospect theory can explain that, pointing out that the reference point used to value the cup has changed.

Not everything generates an endowment effect. To do so, it has to be an item you use; if you hold it for exchange, the effect won’t be activated. So if you sell cups, you won’t feel any special relationship with them and will just sell them at market value.

According to other experiments, physical possession of the object is key to creating an endowment effect. Being poor can also diminish the endowment effect or even make it disappear.

My overall impression is that any decision involves so many emotional and irrational factors that prospect theory can’t really predict whether the endowment effect is going to be activated, and with what strength. Unfortunately, Kahneman seems to be at his best when explaining why we are not rational and less good when proposing alternative models.

Thinking Fast and Slow 25: Bernoulli’s Errors

Following the schedule of our Reading Calendar.

We are entering a new section of the book, Part IV. What is explained in this section made Kahneman and his colleague and friend Tversky famous, and gave Kahneman his Nobel Prize in Economics: challenging the assumptions at the core of economic theory on how we make decisions.

Kahneman explains the main ideas behind the Morgenstern and von Neumann model of how we make decisions -game theory- the experiments he and Tversky developed in order to check whether this theory applied to real people, and how this led to their prospect theory of how real humans make decisions. Prospect theory is described in more detail in the next chapter.


Morgenstern (left) and von Neumann (right) the two guys behind Game Theory and “rational” economics.

The main case discussed here is the “bias” of preferring the sure thing (like $80) to a complex gamble in which one has an 80% chance to win $100 and a 20% chance to win only $10, which makes an expected value of $82.
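The expected-value arithmetic behind that comparison is straightforward:

```python
# Expected value of the gamble described above:
# an 80% chance of $100 plus a 20% chance of $10.
outcomes = [(0.80, 100), (0.20, 10)]
expected_value = sum(p * x for p, x in outcomes)
print(expected_value)  # 82.0 — slightly better than the sure $80
```

Most people still take the sure $80, which is exactly the deviation from expected-value maximization the chapter is about.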

We are also presented with a formulation of loss aversion, which implies that losing something weighs more in a decision than winning the equivalent. The pain of losing $100 is not compensated by the happiness of winning $100.
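A sketch of how prospect theory captures this asymmetry: the value function below uses the parameter estimates Tversky and Kahneman published in 1992 (curvature α ≈ 0.88, loss-aversion coefficient λ ≈ 2.25), which are assumptions imported from that later paper, not numbers given here.

```python
# Sketch of the prospect-theory value function, with the 1992
# Tversky & Kahneman parameter estimates (assumed):
#   v(x) = x^a             for gains  (a = 0.88)
#   v(x) = -lam * (-x)^a   for losses (lam = 2.25, loss aversion)

def value(x: float, a: float = 0.88, lam: float = 2.25) -> float:
    return x**a if x >= 0 else -lam * (-x) ** a

# Losing $100 hurts more than winning $100 pleases:
print(round(value(100), 1))   # ≈ 57.5
print(round(value(-100), 1))  # ≈ -129.5
print(value(100) + value(-100) < 0)  # True — a 50/50 bet feels like a loss
```

The λ > 1 factor is the whole story: it makes a symmetric gamble feel net-negative, which is exactly why the pain of losing $100 is not compensated by winning $100.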

The first time I read this book I wondered how this is possible. And I’m wondering again: if humans do not behave at all like the rational decision makers that Bernoulli, Morgenstern and von Neumann imagined, if traditional economics is wrong at its very core, why are we still using it to make economic, political and sociological models of how markets are going to develop and what people are going to do?

An aside comment. Consider John von Neumann. He was one of the main authors helping to create:

  • One of the first computers
  • The mathematical theory behind such computers (the von Neumann architecture)
  • The atom bombs
  • The theory of Mutually Assured Destruction (MAD), which defined the Cold War
  • The formalism of quantum mechanics
  • Game theory and its applications to economics

And yet he is quite unknown; very few people have heard of him. Maybe he embodies the 20th century so closely that nobody feels really comfortable with such a character.




Thinking Fast and Slow Chapter 23: The Outside View

Following the schedule of our reading calendar

This is another Talebian chapter, touching on a subject that Taleb also develops in Antifragile: the planning fallacy, and how difficult it is to estimate how long a complex project is going to take.

The main point here is to observe how different planning looks if you take the inside view -that of the people working on the project- versus an outside view, which brings in statistical information on how long, on average, a certain type of project takes. From the inside, the temptation to think “this is different” is so great that one focuses only on the specific, unique case and does not consider the statistics of the class to which the case belongs.

Once again the examples -in this case of planning fallacies- are depressing, like the new Scottish Parliament building, estimated to cost £40 million and ending up costing £431 million. But the cure for this fallacy, as Kahneman explains, is simple: just get statistics on how long similar projects took and how much they cost before you make your estimates.

We can all draw relevant lessons for our daily lives if we consider the outside view. A couple of years ago or so, I realized that I sucked at planning how much time any project was going to take, and then I realized I was taking the inside view: my mind was set on the idea that “next month is going to be different”. Then I realized it was more proper to take the outside view and make a simple calculation: if, in the last ten months, I’ve been so busy with other stuff that I couldn’t finish so-and-so, why do I think the next month is going to be different? And now I suck a little less at planning.
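That outside-view correction is just a base-rate forecast. A minimal sketch, with invented numbers (neither the weeks nor the projects come from the book):

```python
# Outside view as a base-rate forecast: instead of asking
# "how long do I feel this project will take?", average what
# comparable past projects actually took. All numbers are made up.
past_project_weeks = [6, 9, 14, 8, 11]  # hypothetical history

inside_estimate = 4  # the optimistic "this time is different" guess
outside_estimate = sum(past_project_weeks) / len(past_project_weeks)

print(outside_estimate)  # 9.6 weeks — more than double the inside view
```

The point of the exercise is not the exact average but the switch of reference: the forecast comes from the class of similar projects, not from the story you tell about this one.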

Thank you Daniel, thank you Nassim