Thinking, Fast and Slow. 21. Intuitions vs. Formulas

Following the schedule of our reading calendar

This is another devastating chapter on “experts”. Kahneman is not debunking the whole idea of expertise (there can be experts in making violins or building bridges, say); his target is the idea of experts in what he calls “low-validity environments”, that is, environments with so much irreducible uncertainty that predictions rapidly turn pointless. Economics, policy, law, and almost anything relevant to daily human life are low-validity environments.

More specifically, we are introduced to a shocking idea: very simple algorithms predict future outcomes much better than experts do. A simple combination of two or three variables in a basic mathematical formula makes better predictions than any expert, who tries to be clever and impressive and, therefore, makes a lot of mistakes. Even when the experts are given the formula, they make worse predictions than the formula alone. The reason, of course, is that experts think they know far more about the subject than the common man or woman, so they try to outsmart the formula.

We are not talking here about elegant algorithms based on elaborate regression formulae, but about very, very simple linear models (a*x + b). That sounds preposterous to social scientists, who consider such formulae inhuman, unreal and artificial, while expert textpert knowledge is holistic, rich, subtle and so on.

Expert, textpert, choking smokers,
Don’t you think the joker laughs at you?
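
Lyrics aside, a toy example makes the (a*x + b) point concrete. If I remember the chapter correctly, Kahneman quotes Robyn Dawes’s “improper” unit-weight formula for predicting marital stability; the sketch below uses that formula with invented sample numbers.

```python
# A minimal sketch of the kind of "improper" linear model the chapter
# praises: fixed unit weights, two variables, no expert cleverness.
# The formula (frequency of lovemaking minus frequency of quarrels) is
# Dawes's marital-stability example; the sample couples are invented.

def marital_stability(lovemaking_per_week: int, quarrels_per_week: int) -> int:
    """Unit-weight linear model: positive means stable, negative means trouble."""
    return lovemaking_per_week - quarrels_per_week

print(marital_stability(3, 1))  #  2 -> looks stable
print(marital_stability(1, 4))  # -3 -> looks like trouble
```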

Another scientific result to make your day miserable: humans are inconsistent when evaluating information. If you give an expert the same information twice on different days, he will probably give you a different answer each time.

The chapter ends with a short story in which Kahneman describes how he developed a simple algorithm, based on just six dimensions scored from 1 to 5, to select recruits for the Israeli army. We are invited to apply a similar method in our daily lives when we have to make a complex decision, like whether or not to hire someone for a job: pick no more than six dimensions relevant to the problem and score each of them on a scale of 1 to 5 (1 = very weak, 5 = very strong), as in the sketch below.
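
The whole method fits in a few lines of code. A minimal sketch (the six dimensions and the candidates’ ratings are invented for illustration):

```python
# Kahneman's interview advice: rate each dimension 1-5 independently,
# then decide on the total score, resisting the urge to "just feel" the answer.
# The dimensions and ratings below are hypothetical.

DIMENSIONS = ["technical skill", "reliability", "sociability",
              "communication", "initiative", "punctuality"]

def total_score(ratings: list[int]) -> int:
    """Sum of six independent 1-5 ratings (1 = very weak, 5 = very strong)."""
    assert len(ratings) == len(DIMENSIONS), "rate every dimension"
    assert all(1 <= r <= 5 for r in ratings), "ratings must be between 1 and 5"
    return sum(ratings)

candidate_a = [4, 3, 5, 4, 2, 5]
candidate_b = [5, 2, 3, 3, 4, 4]

print(total_score(candidate_a))  # 23
print(total_score(candidate_b))  # 21 -> hire candidate A, even if your gut disagrees
```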

I’d say that lately in the book the connection between the experimental results and the distinction between system 1 and system 2 is not that relevant. When Kahneman talks about them, they are a sort of general background, and no longer the mechanism that explains why we make mistakes. I have the impression that this section is no longer about a rational system that works well (system 2) being hijacked by a more intuitive system (system 1), but about a general claim that humans are irrational.


Thinking, Fast and Slow. 20. The Illusion of Validity

Following the schedule of our Reading Calendar

In part 3 of the book we have entered Nassim Taleb territory, as David commented in the previous chapter. That is good and that is bad: good because the subject is fascinating, bad because we have already read a lot about it. In this particular chapter, all the concepts and studies referred to are already known to us. It is not even a matter of two different guys with similar ideas; the feeling is that Kahneman and Taleb were teaching the same course at some college.

Having said that, and supposing that one comes to the subject for the first time, this chapter is superb. Reading and understanding it is one of the most humiliating experiences that a young, optimistic man, full of energy and hope in humanity, can go through. If it were not for the fact that he is going to forget all of it within a week, it could turn him into an old man in a matter of days.

The chapter begins with a bomb:

Considering how little we know, the confidence we have in our beliefs is preposterous.

and elaborates on that till the end.

Some ideas stand out:

– We FEEL that we know even when we KNOW that we don’t know. This is simply devastating. You cannot trust yourself anymore. Devastating.

– Finance expertise is bullshit. This is not that shocking.

– Human-sciences experts are useless. Their talk is just an invention that creates imaginary order in the events of the past and predicts the future with an accuracy similar to chance. I have thought for a long time that, after every kind of expert (including experts in Soviet politics) totally failed to predict the fall of the Soviet Union, no one should ever pay attention to these guys (and of course not pay them money either).

– All experts are dangerous. This is a really shocking point. Experts, even those who are not blatant CNN charlatans, are dangerous because they show a bigger level of overconfidence than the rest of us. Even if they are often right, when they are wrong they can mislead us all into disaster.

And now a personal reflection. I have known all of this for quite a long time. Do I keep it in mind in my daily life, or did I forget it after a few days?

As far as the experts issue is concerned, it really became part of my life. I forgot all the data and studies presented in the chapter, but I forged an idea in my mind that I use regularly: “when you hear the word EXPERT, run”.

However, with the other issue, things are not that easy. It is so hard to admit that the things you feel you know, you don’t really know. If someone has developed a method to keep that idea present in your mind, please tell me.

In God we trust.

Better to trust Him even if there were no God than to trust ourselves.

Thinking, Fast and Slow. 19. The Illusion of Understanding

Following our reading calendar

We are now entering part III of the book, which discusses the problems related to overconfidence. It is a very Talebian chapter, and it actually starts by describing the narrative fallacy from The Black Swan.

Good stories provide a simple and coherent account, but they lead to a sense of inevitability: this had to happen exactly as it did, because this and that happened before. It is coherent. It is a story.

But, Kahneman says, there is no real understanding in stories, just the illusion of understanding. By focusing only on the events that make the story coherent, we forget about all the random things that also happened; had any of them been different, the final result would have been completely different too.

We are introduced to the halo effect in business and how, thanks to a coherent success story, we end up thinking that everything an entrepreneur or a CEO does is brilliant and that if we do the same we will be successful too.

Most productivity posts are based on that halo effect: “Ten things that successful CEOs do before breakfast”, “Steve Jobs practiced Zen meditation and you should too!”. So are books like Built to Last, which presents a compact description of the series of decisions that some successful companies took and explains why they would be good for your company too. That this was just an illusion of understanding based on the halo effect was confirmed when, some years later, the difference between the companies described in Built to Last and less successful ones had shrunk to almost nothing.

In a brave and defiant mood, Kahneman proposes that we get rid of the verb “to know” in certain contexts:

I have heard of too many people who “knew well before it happened that the 2008 financial crisis was inevitable.” This sentence contains a highly objectionable word, which should be removed from our vocabulary in discussions of major events. The word is, of course, knew. Some people thought well in advance that there would be a crisis, but they did not know it.

We have discussed before Kahneman’s tendency to switch from English to Statistich without warning, but here he is right, and he is pointing to a very important problem. We love to think that we know so many things, but actually we don’t. It does not differ much from one of the principles of black swans: beforehand, nobody expects X (the Spanish Inquisition, for example), but after X has happened, lots of “experts” will show up to tell us that it was clear it had to happen for this and that reason. Kahneman calls that type of reasoning the hindsight bias.

A variant of this bias is the outcome bias, by which, when the outcome of a random event (like a black swan) is pretty bad, whoever was in charge is accused of not having been diligent enough to predict it.

Do you remember how, some years ago, scientists in Italy were sentenced to six years in prison because they had failed to forecast the earthquake that destroyed the town of L’Aquila? That is how powerful this bias can be.

As usual, Kahneman is taciturn and moderate, but his observations are pretty gloomy. Humans need stories to feel that they really understand something, but stories will not give us the truth, just a coherent, made-up selection of events.


Thinking, Fast and Slow. 18. Taming Intuitive Predictions

Following the schedule of our Reading Calendar

This chapter comes closely tied to the previous one. In chapter 17 we were told about the phenomenon of regression to the mean, how easily we ignore it in our intuitive processes, and how difficult it is to grasp even when we reflect on it. Kahneman goes a little too far for my taste, talking about it as a kind of obscure, incomprehensible phenomenon that eludes even the greatest minds. I would at least have liked more examples and elaboration on that.

Now we are shown how not taking regression to the mean into account results in biased predictions. The chapter begins with an overall introduction to the prediction business and draws the distinction between analytic predictions (system 2) and intuitive ones (system 1). It then shows that inside the intuitive wagon there are both predictions based on system 1 quickly and effectively performing tasks that system 2 has taught it (expert intuitions) and predictions where system 1 does as it pleases (everyday predictions). At that point of the chapter, that is, page one, I was very excited about the prospect of a global analysis of the world of prediction, and I have to admit I was a little disappointed that the chapter centered only on the regression issue.

The author gives a procedure to correct our intuitive predictions by taking regression to the mean into account. As in the previous chapter, everything gets too long and complicated. What he simply means is: “estimate how much of the past result was luck, and assume that in the future such luck will not be there”. And that’s it.
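
For the record, the correction itself is a one-liner. A minimal sketch of the chapter’s four-step recipe, using Kahneman’s GPA example with invented numbers:

```python
# A minimal sketch of the chapter's recipe for taming an intuitive
# prediction: regress it toward the mean in proportion to how well the
# evidence actually predicts the outcome. The numbers are invented.

def corrected_prediction(baseline: float, intuitive: float, validity: float) -> float:
    """Move from the population mean toward the intuitive prediction.
    validity is the estimated correlation between evidence and outcome:
    1.0 keeps the intuition intact, 0.0 falls back to the mean."""
    return baseline + validity * (intuitive - baseline)

baseline_gpa = 3.0    # average GPA in the reference population
intuitive_gpa = 3.8   # the GPA that "matches" the impressive evidence
validity = 0.3        # rough correlation between the evidence and GPA

print(corrected_prediction(baseline_gpa, intuitive_gpa, validity))  # 3.24
```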

And I kept growing uneasy as I read the chapter, for several reasons, all of them intuitive as always, because these are deep questions that need a lot of reflection.

First, his procedure for correcting predictions is a kind of blending of the colors of past observations that generates a gray vision of the future: on average closer to reality, but dull as death. There is no room in this system for extreme predictions. This cannot be good. He addresses this critique later in the chapter.

Second, and he also comments on this, accuracy is not always the main goal of a prediction. Statisticians laugh at people who play the lottery because its expected value is negative. But playing the lottery is perfectly rational: the harm of losing one dollar is, in practical terms, zero, so no matter how low the probability of winning 5 million may be, and whatever the expected value may say, the rational thing is to play the lottery if you have fun spending the time on it. The quick calculation below shows what the statisticians are laughing at.
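
Here is the expected-value arithmetic, with a hypothetical ticket price, prize and odds (the real numbers depend on the lottery):

```python
# Toy expected-value calculation for a lottery ticket.
# The price, prize and odds below are hypothetical.

ticket_price = 1.0        # dollars
prize = 5_000_000.0       # dollars
p_win = 1 / 14_000_000    # assumed probability of hitting the jackpot

expected_value = p_win * prize - ticket_price
print(f"{expected_value:.2f}")  # -0.64: on average each ticket loses 64 cents
```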

Third, there are anchoring effects, some of them hidden. Past grades may be, let us imagine, 100% the result of luck. But if future grades are decided by teachers who are aware of the past grades, those teachers may create a spurious causality where there was none. And if you are a purist statistician, you may end up over-correcting and yielding predictions far worse than those of the average Joe. The world is populated by “regression to the mean non-correctors”. So, since the world you live in is the result not only of the laws of physics but also of the acts of people, it may sometimes happen that your predictions about the future are wrong because the future is influenced by people who are not correcting anything.

I cannot provide a rational elaboration of point 3. I simply smell a “rational fool”.