Thinking, Fast and Slow is a really good book about how the human mind works. It summarizes decades of research done by the author, Daniel Kahneman, who is an astonishingly accomplished person. The book is really well written and packed with useful information, with only brief sections of lesser importance. Among other things, the book teaches you to understand your mind and its weaknesses. This equips you to catch your brain when it is falling for fallacies, to defend yourself against others who are trying to influence it, and, if you like, to influence others.

I found most of the concepts discussed in this book very important, so I will summarize them here to spare myself from re-listening to this long book.

Firstly, the base concept is that your mind has two systems that are constantly working. System 1 is really fast and operates automatically, intuitively, involuntarily, and effortlessly. System 2 is the conscious you: slow, lazy, deliberate, and good at reasoning and problem solving. The interaction between these two systems is often the cause of many of the flaws of the mind mentioned below. Derek Muller of Veritasium explains this in more depth in his two videos here and here. Remember: your System 1, which jumps to conclusions based on heuristics, does more work than you would like to admit.

Regression to the mean explains why a lot of people believe negative feedback works well and positive feedback does not; both beliefs are flawed. Regression to the mean is easiest explained by an example. We can, in almost all cases, assume that a person's performance on a task varies according to a bell curve: the person is most likely to get a result close to their average, and less likely to get any extraordinarily good or bad result. For simplicity, assume the curve is stationary (in reality it shifts slightly as your skills increase or decrease). Under these assumptions, a person who gets a really good result is likely to get a worse result the next time, simply because the majority of possible outcomes are worse than that good result. This leads people to believe that positive feedback is bad, since they are most likely to give positive feedback right after someone has excelled. For the same reason, a lot of people believe their negative feedback makes the recipient perform better, even though the feedback and the improvement are unrelated. This bias is extremely prevalent in sports. Joe Hanson does a better job explaining the concept of luck here. The small simulation below makes the effect concrete.
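A minimal sketch of this, with made-up numbers: every attempt is an independent draw from the same bell curve, so feedback cannot possibly matter, yet the follow-ups to great results still look worse.

```python
import random

random.seed(42)

# Every attempt is an independent draw from the same bell curve
# (mean 100, sd 15), so feedback has no effect by construction.
N = 100_000
first = [random.gauss(100, 15) for _ in range(N)]
second = [random.gauss(100, 15) for _ in range(N)]

# Keep only the trials where the first attempt was exceptional.
pairs = [(a, b) for a, b in zip(first, second) if a > 130]

avg_first = sum(a for a, _ in pairs) / len(pairs)
avg_second = sum(b for _, b in pairs) / len(pairs)
print(f"exceptional first attempt: {avg_first:.1f}")
print(f"average follow-up:         {avg_second:.1f}")
# The follow-up regresses toward the mean of 100 even though no praise
# or criticism was given between the two attempts.
```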

Brains are inherently bad at statistics; even trained statisticians often make serious mistakes. For example, suppose a person is described as a stereotypical programmer (quiet, not good with people, etc.) and you are asked to guess what kind of university degree the person has. The easy thing to do is to jump the gun and guess computer science. What you should do instead is start from the base rate (of all living people with a university degree, how big is the portion with a computer science degree?) and then adjust from that point based on the given circumstances. The sketch below shows how strongly the base rate dominates.
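A sketch of the base-rate logic using Bayes' rule; all of the probabilities here are made up for illustration, not taken from the book.

```python
# Hypothetical numbers for the stereotypical-programmer example.
p_cs = 0.03                # base rate: share of degree holders with a CS degree
p_fits_given_cs = 0.40     # P(fits the stereotype | CS degree)
p_fits_given_other = 0.05  # P(fits the stereotype | any other degree)

# Bayes' rule: P(CS | fits) = P(fits | CS) * P(CS) / P(fits)
p_fits = p_fits_given_cs * p_cs + p_fits_given_other * (1 - p_cs)
p_cs_given_fits = p_fits_given_cs * p_cs / p_fits
print(f"P(CS degree | fits the stereotype) = {p_cs_given_fits:.2f}")  # ~0.20
# Even a description that fits programmers eight times better than everyone
# else cannot overcome a 3% base rate: the odds still favor another degree.
```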

The law of small numbers states that small sample sizes are more prone to extreme outcomes. Our brains consistently give results from small samples much more credibility than the statistics warrant. Even statisticians and scientists consistently design studies with sample sizes that carry an extremely high risk of producing the wrong result. For example, a study concluded that the best-performing schools are small, but this does not mean that small schools are better: had the study looked for the worst schools instead, it would have found that the worst schools are small too. The simulation below reproduces the effect.
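A sketch of the school example with invented numbers: every student passes with the same 50% probability, so no school is genuinely better or worse, yet both extremes are filled with small schools.

```python
import random

random.seed(0)

# 300 schools of varying (illustrative) sizes; every student passes
# with the same 50% probability, so all differences are pure noise.
sizes = [random.choice([20, 100, 1000]) for _ in range(300)]
results = []
for size in sizes:
    passed = sum(random.random() < 0.5 for _ in range(size))
    results.append((passed / size, size))

results.sort()
print("worst five (pass rate, size):", results[:5])
print("best five  (pass rate, size):", results[-5:])
# Both extremes are dominated by the 20-student schools: small samples
# swing furthest from the true 50% rate, in both directions.
```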

Availability bias plays a very important role in how people view the world. It makes you underestimate or overestimate the frequency of an event based on how easily you can retrieve examples, rather than on statistical calculation. For example, it is easy to overestimate your contribution to a group, since you can easily think of examples of what you have done. The bias is also heavily influenced by news coverage. A recent terror attack shown on the news makes us more scared of terror attacks. This, in turn, makes the news more likely to feature terror attacks, both because the audience now cares more about the topic and because news businesses make money by grabbing viewers' attention. The result is a self-reinforcing feedback loop (an availability cascade) in which even minor news stories get sensationalized. A less extreme example that turns up in the media a lot is plane crashes. This is discussed in more detail in Ryan Holiday's Trust Me, I'm Lying.

The hindsight illusion (or outcome bias) is very prevalent in today's society. As humans, we think we understand the past, which makes us believe we can predict the future. In reality, luck usually plays a bigger role than we like to admit. Because humans badly want a consistent world view, we invent explanations for events, even though these explanations are often flawed. This causes us to praise bad decisions that happened to turn out well and blame good decisions that happened to turn out badly. Always be aware of this in fields where circumstances play a big role.

Other minor things worth noting:

  • What you see is all there is – your brain really wants to jump to conclusions and create a coherent story based on limited or no data. This is the cause of many fallacies in your mind, and is especially clear in the halo effect and in estimations.
  • Difficult questions are often unconsciously substituted with easier ones.
  • Loss aversion. People value losses and gains asymmetrically: losses are avoided to a larger degree than equal gains are pursued. This has interesting implications for insurance companies. Explained in depth here; see also the prospect-theory sketch after this list.
  • Use algorithms and quantitative measurements when hiring and making similar decisions.
  • Confirmation bias – System 1 is gullible and biased to believe; System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy and often lazy.
  • Anchors cause you to make skewed estimates based on previously heard quantities. House listing prices, first job offers, etc. are examples of this. Mobilize System 2 to avoid being overly affected by these anchors!
  • Stereotypes are not as bad as they may appear. Most of the time they are founded on some kind of truth, and you are better off treating them as such. Of course, there are many unfounded stereotypes in today's society that just end up hurting people.
  • Use algorithms instead of gut feelings and intuitions! About 60% of the studies on this show that simple algorithms outperform experts, and most of the rest show no significant difference. On top of this, experts asked to predict things do only slightly better than random chance. Confidence is not a good measure of how likely an expert is to be correct.
  • When you have to rely on human intuition, always try to create as short a feedback loop as possible.
  • The planning fallacy – plans are often based on a best-case scenario without serious consideration of the worst case. Always try to find base rates for similar projects. You can also use the premortem technique to identify potential problems. Don't fall for the common 'what you see is all there is' fallacy.
  • Once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws.
  • People value certainty. A move from 98% to 100% certainty is valued significantly more than a move from 90% to 92%, even though both add the same expected value. This is also illustrated in the sketch after this list.
  • Humans have two selves, the remembering self and the experiencing self, which often have conflicting goals. As you might expect, the remembering self wants to create good memories, while the experiencing self wants to feel good in the moment. Climbing a mountain is a good example of where the two conflict.
  • The peak and the end matter the most when memories are formed; duration does not. Subjects consistently chose to endure objectively more pain because it made for a better memory, which makes the experiencing self suffer more than it has to. The peak-end rule applies to a lot of other areas as well: movies, jobs, relationships, etc.
  • Happiness – the two selves complicate the definition of a happy life. Should the experiencing self or the remembering self be evaluated? This question is especially important for government policy.
  • The author raises interesting cases where libertarian policies are not as good as they seem. Should the government protect people against their own mistakes, like not saving for retirement or getting addicted? Or, more interestingly, should the government protect people against others who exploit the quirks of their System 1 and the laziness of their System 2? One idea in the book is to give people nudges in directions that serve their own long-term interests, for example by changing the default options for pension savings or requiring companies to offer contracts that are simple enough to be read and understood by human customers.
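To make the loss-aversion and certainty bullets above concrete, here is a rough sketch of prospect theory's value and probability-weighting functions. The functional forms and parameters (alpha = 0.88, lambda = 2.25, gamma = 0.61) are the estimates from Tversky and Kahneman's 1992 paper; the gambles themselves are made up for illustration.

```python
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Felt value of a gain or loss x relative to a reference point.
    Losses are scaled up by lambda, which captures loss aversion."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight a person attaches to a stated probability p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion: a fair coin flip for +/-$100 has zero expected money
# value, but its felt value is clearly negative, so most people decline.
print(f"felt value of a fair +/-$100 flip: {0.5 * value(100) + 0.5 * value(-100):.1f}")

# Certainty effect: both moves add the same two percentage points, yet the
# jump that eliminates all risk carries a far larger change in decision weight.
print(f"98% -> 100%: {weight(1.00) - weight(0.98):.3f}")  # ~0.13
print(f"90% ->  92%: {weight(0.92) - weight(0.90):.3f}")  # ~0.03
```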