Tuesday 06 December 2011 at 11:47 am.


How do shark attacks save lives? How can computers beat the experts? Why is politics dysfunctional, and why does it fool the voters? Why does intuition fail us?

And more. The reviewer, Freeman Dyson, writes a brilliantly simple review of the work of Daniel Kahneman. Dyson is certainly among the top ten physicists in the world. Kahneman, a Nobel laureate, is probably at the top of the heap of geniuses exploring and quantifying how the brain (or mind) really works. The full review follows.

How to Dispel Your Illusions

Freeman Dyson

Thinking, Fast and Slow
by Daniel Kahneman


Daniel Kahneman, New York City, September 2011

In 1955, when Daniel Kahneman was twenty-one years old, he was a lieutenant in the Israeli Defense Forces. He was given the job of setting up a new interview system for the entire army. The purpose was to evaluate each freshly drafted recruit and put him or her into the appropriate slot in the war machine. The interviewers were supposed to predict who would do well in the infantry or the artillery or the tank corps or the various other branches of the army. The old interview system, before Kahneman arrived, was informal. The interviewers chatted with the recruit for fifteen minutes and then came to a decision based on the conversation. The system had failed miserably. When the actual performance of the recruit a few months later was compared with the performance predicted by the interviewers, the correlation between actual and predicted performance was zero.

Kahneman had a bachelor’s degree in psychology and had read a book, Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence by Paul Meehl, published only a year earlier. Meehl was an American psychologist who studied the successes and failures of predictions in many different settings. He found overwhelming evidence for a disturbing conclusion. Predictions based on simple statistical scoring were generally more accurate than predictions based on expert judgment.

A famous example confirming Meehl’s conclusion is the “Apgar score,” invented by the anesthesiologist Virginia Apgar in 1953 to guide the treatment of newborn babies. The Apgar score is a simple formula based on five vital signs that can be measured quickly: heart rate, breathing, reflexes, muscle tone, and color. It does better than the average doctor in deciding whether the baby needs immediate help. It is now used everywhere and saves the lives of thousands of babies. Another famous example of statistical prediction is the Dawes formula for the durability of marriage. The formula is “frequency of love-making minus frequency of quarrels.” Robyn Dawes was a psychologist who worked with Kahneman later. His formula does better than the average marriage counselor in predicting whether a marriage will last.
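Both formulas are simple enough to sketch in code. In this Python illustration the Apgar scoring follows the standard clinical convention (each of the five signs rated 0, 1, or 2, then summed), and the Dawes score is just the subtraction Dyson quotes; the example inputs are invented.

```python
def apgar_score(heart_rate, breathing, reflexes, muscle_tone, color):
    """Standard Apgar convention: each vital sign rated 0, 1, or 2, then summed (0-10)."""
    signs = (heart_rate, breathing, reflexes, muscle_tone, color)
    if not all(s in (0, 1, 2) for s in signs):
        raise ValueError("each sign must be rated 0, 1, or 2")
    return sum(signs)

def dawes_score(lovemaking_per_week, quarrels_per_week):
    """Dawes's marriage formula: frequency of love-making minus frequency of quarrels."""
    return lovemaking_per_week - quarrels_per_week

print(apgar_score(2, 2, 1, 2, 1))  # 8 -- a healthy newborn
print(dawes_score(2, 5))           # -3 -- a negative score predicts trouble
```

The point of both formulas is the same one Meehl made: a transparent sum or difference of a few measured quantities outperforms an expert's holistic impression.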

Having read the Meehl book, Kahneman knew how to improve the Israeli army interviewing system. His new system did not allow the interviewers the luxury of free-ranging conversations with the recruits. Instead, they were required to ask a standard list of factual questions about the life and work of each recruit. The answers were then converted into numerical scores, and the scores were inserted into formulas measuring the aptitude of the recruit for the various army jobs. When the predictions of the new system were compared to performances several months later, the results showed the new system to be much better than the old. Statistics and simple arithmetic tell us more about ourselves than expert intuition.
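The mechanics of such a system are easy to sketch. In this Python illustration the trait names, weights, and scores are all hypothetical inventions; only the structure follows Dyson's description: fixed factual questions yield numerical scores, and a fixed formula per army job combines them.

```python
# Hypothetical traits and per-job weights -- the real army formulas are not public.
INFANTRY_WEIGHTS = {"punctuality": 2, "sociability": 3, "responsibility": 5}
TANK_CORPS_WEIGHTS = {"punctuality": 5, "sociability": 1, "responsibility": 4}

def aptitude(scores, weights):
    """Combine per-trait scores (each 1-5, from standard factual questions)
    into a single weighted aptitude number for one army job."""
    return sum(weights[trait] * scores[trait] for trait in weights)

recruit = {"punctuality": 4, "sociability": 2, "responsibility": 5}
print(aptitude(recruit, INFANTRY_WEIGHTS))    # 39
print(aptitude(recruit, TANK_CORPS_WEIGHTS))  # 42
```

The interviewer's judgment is confined to scoring the answers; the comparison across jobs is left entirely to arithmetic, which is exactly what removes the illusion of validity from the loop.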

Reflecting fifty years later on his experience in the Israeli army, Kahneman remarks in Thinking, Fast and Slow that it was not unusual in those days for young people to be given big responsibilities. The country itself was only seven years old. “All its institutions were under construction,” he says, “and someone had to build them.” He was lucky to be given this chance to share in the building of a country, and at the same time to achieve an intellectual insight into human nature. He understood that the failure of the old interview system was a special case of a general phenomenon that he called “the illusion of validity.” At this point, he says, “I had discovered my first cognitive illusion.”

Cognitive illusions are the main theme of his book. A cognitive illusion is a false belief that we intuitively accept as true. The illusion of validity is a false belief in the reliability of our own judgment. The interviewers sincerely believed that they could predict the performance of recruits after talking with them for fifteen minutes. Even after the interviewers had seen the statistical evidence that their belief was an illusion, they still could not help believing it. Kahneman confesses that he himself still experiences the illusion of validity, after fifty years of warning other people against it. He cannot escape the illusion that his own intuitive judgments are trustworthy.

An episode from my own past is curiously similar to Kahneman’s experience in the Israeli army. I was a statistician before I became a scientist. At the age of twenty I was doing statistical analysis of the operations of the British Bomber Command in World War II. The command was then seven years old, like the State of Israel in 1955. All its institutions were under construction. It consisted of six bomber groups that were evolving toward operational autonomy. Air Vice Marshal Sir Ralph Cochrane was the commander of 5 Group, the most independent and the most effective of the groups. Our bombers were then taking heavy losses, the main cause of loss being the German night fighters.

Cochrane said the bombers were too slow, and the reason they were too slow was that they carried heavy gun turrets that increased their aerodynamic drag and lowered their operational ceiling. Because the bombers flew at night, they were normally painted black. Being a flamboyant character, Cochrane announced that he would like to take a Lancaster bomber, rip out the gun turrets and all the associated dead weight, ground the two gunners, and paint the whole thing white. Then he would fly it over Germany, and fly so high and so fast that nobody could shoot him down. Our commander in chief did not approve of this suggestion, and the white Lancaster never flew.

The reason why our commander in chief was unwilling to rip out gun turrets, even on an experimental basis, was that he was blinded by the illusion of validity. This was ten years before Kahneman discovered it and gave it its name, but the illusion of validity was already doing its deadly work. All of us at Bomber Command shared the illusion. We saw every bomber crew as a tightly knit team of seven, with the gunners playing an essential role defending their comrades against fighter attack, while the pilot flew an irregular corkscrew to defend them against flak. An essential part of the illusion was the belief that the team learned by experience. As they became more skillful and more closely bonded, their chances of survival would improve.

When I was collecting the data in the spring of 1944, the chance of a crew reaching the end of a thirty-operation tour was about 25 percent. The illusion that experience would help them to survive was essential to their morale. After all, they could see in every squadron a few revered and experienced old-timer crews who had completed one tour and had volunteered to return for a second tour. It was obvious to everyone that the old-timers survived because they were more skillful. Nobody wanted to believe that the old-timers survived only because they were lucky.

At the time Cochrane made his suggestion of flying the white Lancaster, I had the job of examining the statistics of bomber losses. I did a careful analysis of the correlation between the experience of the crews and their loss rates, subdividing the data into many small packages so as to eliminate effects of weather and geography. My results were as conclusive as those of Kahneman. There was no effect of experience on loss rate. So far as I could tell, whether a crew lived or died was purely a matter of chance. Their belief in the life-saving effect of experience was an illusion.
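The shape of that analysis can be reconstructed in a short sketch. The data here are synthetic, built under Dyson's own null hypothesis: loss is pure chance (a flat 5 percent per operation, regardless of experience), and the records are subdivided into "packages" standing in for weather and geography. The within-package correlations then hover near zero, as his real data did.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # a package with no variation carries no signal
    return cov / (vx * vy) ** 0.5

# Synthetic sorties: experience = operations already flown; loss is pure chance.
random.seed(0)
package_correlations = []
for _ in range(10):  # ten small packages, as in the subdivided analysis
    experience = [random.randint(1, 30) for _ in range(200)]
    lost = [1.0 if random.random() < 0.05 else 0.0 for _ in range(200)]
    package_correlations.append(pearson(experience, lost))

mean_r = sum(package_correlations) / len(package_correlations)
print(f"mean within-package correlation: {mean_r:+.3f}")  # close to zero
```

Subdividing before correlating is the methodological point: a raw pooled correlation could be confounded by weather or target, while near-zero correlations inside every small, homogeneous package leave chance as the only explanation standing.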

The demonstration that experience had no effect on losses should have given powerful support to Cochrane’s idea of ripping out the gun turrets. But nothing of the kind happened. As Kahneman found out later, the illusion of validity does not disappear just because facts prove it to be false. Everyone at Bomber Command, from the commander in chief to the flying crews, continued to believe in the illusion. The crews continued to die, experienced and inexperienced alike, until Germany was overrun and the war finally ended.

Another theme of Kahneman’s book, proclaimed in the title, is the existence in our brains of two independent systems for organizing knowledge. Kahneman calls them System One and System Two. System One is amazingly fast, allowing us to recognize faces and understand speech in a fraction of a second. It must have evolved from the ancient little brains that allowed our agile mammalian ancestors to survive in a world of big reptilian predators. Survival in the jungle requires a brain that makes quick decisions based on limited information. Intuition is the name we give to judgments based on the quick action of System One. It makes judgments and takes action without waiting for our conscious awareness to catch up with it. The most remarkable fact about System One is that it has immediate access to a vast store of memories that it uses as a basis for judgment. The memories that are most accessible are those associated with strong emotions, with fear and pain and hatred. The resulting judgments are often wrong, but in the world of the jungle it is safer to be wrong and quick than to be right and slow.

System Two is the slow process of forming judgments based on conscious thinking and critical examination of evidence. It appraises the actions of System One. It gives us a chance to correct mistakes and revise opinions. It probably evolved more recently than System One, after our primate ancestors became arboreal and had the leisure to think things over. An ape in a tree is not so much concerned with predators as with the acquisition and defense of territory. System Two enables a family group to make plans and coordinate activities. After we became human, System Two enabled us to create art and culture.

The question then arises: Why do we not abandon the error-prone System One and let the more reliable System Two rule our lives? Kahneman gives a simple answer to this question: System Two is lazy. To activate System Two requires mental effort. Mental effort is costly in time and also in calories. Precise measurements of blood chemistry show that consumption of glucose increases when System Two is active. Thinking is hard work, and our daily lives are organized so as to economize on thinking. Many of our intellectual tools, such as mathematics and rhetoric and logic, are convenient substitutes for thinking. So long as we are engaged in the routine skills of calculating and talking and writing, we are not thinking, and System One is in charge. We only make the mental effort to activate System Two after we have exhausted the possible alternatives.

System One is much more vulnerable to illusions, but System Two is not immune to them. Kahneman uses the phrase “availability bias” to mean a biased judgment based on a memory that happens to be quickly available. It does not wait to examine a bigger sample of less cogent memories. A striking example of availability bias is the fact that sharks save the lives of swimmers. Careful analysis of deaths in the ocean near San Diego shows that on average, the death of each swimmer killed by a shark saves the lives of ten others. Every time a swimmer is killed, the number of deaths by drowning goes down for a few years and then returns to the normal level. The effect occurs because reports of death by shark attack are remembered more vividly than reports of drownings. System One is strongly biased, paying more prompt attention to sharks than to riptides that occur more frequently and may be equally lethal. In this case, System Two probably shares the same bias. Memories of shark attacks are tied to strong emotions and are therefore more available to both systems.


An American bomber being prepared at a Royal Air Force base in Great Britain for a daylight raid over occupied France, 1942; photograph by Robert Capa

Kahneman is a psychologist who won a Nobel Prize for economics. His great achievement was to turn psychology into a quantitative science. He made our mental processes subject to precise measurement and exact calculation, by studying in detail how we deal with dollars and cents. By making psychology quantitative, he incidentally achieved a powerful new understanding of economics. A large part of his book is devoted to stories illustrating the various illusions to which supposedly rational people succumb. Each story describes an experiment, examining the behavior of students or citizens who are confronted with choices under controlled conditions. The subjects make decisions that can be precisely measured and recorded. The majority of the decisions are numerical, concerned with payments of money or calculations of probability. The stories demonstrate how far our behavior differs from the behavior of the mythical “rational actor” who obeys the rules of classical economics.

A typical example of a Kahneman experiment is the coffee mug experiment, designed to measure a form of bias that he calls the “endowment effect.” The endowment effect is our tendency to value an object more highly when we own it than when someone else owns it. Coffee mugs are intended to be useful as well as elegant, so that people who own them become personally attached to them. A simple version of the experiment has two groups of people, sellers and buyers, picked at random from a population of students. Each seller is given a mug and invited to sell it to a buyer. The buyers are given nothing and are invited to use their own money to buy a mug from a seller. The average prices offered in a typical experiment were: sellers $7.12, buyers $2.87. Because the price gap was so large, few mugs were actually sold.
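A minimal simulation makes the failure to trade concrete. The individual valuations below are invented, chosen so that their averages match the figures Dyson quotes ($7.12 and $2.87); the matching rule (best bid against lowest ask) is a simple hypothetical market mechanism, not part of the original experiment.

```python
# Invented valuations whose averages match the figures quoted in the review.
sellers = [7.50, 6.80, 7.00, 7.25, 7.05]   # asking prices in dollars
buyers  = [3.00, 2.50, 2.90, 3.10, 2.85]   # bids in dollars

avg_ask = sum(sellers) / len(sellers)
avg_bid = sum(buyers) / len(buyers)

# Match the highest bid against the lowest ask, and so on down the book;
# a trade happens only when a bid meets or exceeds an ask.
trades = sum(1 for ask, bid in zip(sorted(sellers), sorted(buyers, reverse=True))
             if bid >= ask)

print(f"average ask ${avg_ask:.2f}, average bid ${avg_bid:.2f}, trades: {trades}")
# prints: average ask $7.12, average bid $2.87, trades: 0
```

With every ask more than double every bid, not a single trade clears; the endowment effect shows up not as a price but as an absence of transactions.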

The experiment convincingly demolished the central dogma of classical economics. The central dogma says that in a free market, buyers and sellers will agree on a price that both sides regard as fair. The dogma is true for professional traders trading stocks in a stock market. It is untrue for nonprofessional buyers and sellers because of the endowment effect. Trading that should be profitable to both sides does not occur, because most people do not think like traders.

Our failure to think like traders has important practical consequences, for good and for evil. The main consequence of the endowment effect is to give stability to our lives and institutions. Stability is good when a society is peaceful and prosperous. Stability is evil when a society is poor and oppressed. The endowment effect works for good in the German city of Munich. I once rented a home there for a year, a few miles from the city center. Across the street from our home was a real farm with potato fields and pigs and sheep. The local children, including ours, went out to the fields after dark, made little fires in the ground, and roasted potatoes. In a free-market economy, the farm would have been sold to a developer and converted into a housing development. The farmer and the developer would both have made a handsome profit. But in Munich, people were not thinking like traders. There was no free market in land. The city valued the farm as public open space, allowing city dwellers to walk over grass all the way to the city center, and allowing our children to roast potatoes at night. The endowment effect allowed the farm to survive.

In poor agrarian societies, such as Ireland in the nineteenth century or much of Africa today, the endowment effect works for evil because it perpetuates poverty. For the Irish landowner and the African village chief, possessions bring status and political power. They do not think like traders, because status and political power are more valuable than money. They will not trade their superior status for money, even when they are heavily in debt. The endowment effect keeps the peasants poor, and drives those of them who think like traders to emigrate.

At the end of his book, Kahneman asks the question: What practical benefit can we derive from an understanding of our irrational mental processes? We know that our judgments are heavily biased by inherited illusions, which helped us to survive in a snake-infested jungle but have nothing to do with logic. We also know that, even when we become aware of the bias and the illusions, the illusions do not disappear. What use is it to know that we are deluded, if the knowledge does not dispel the delusions?

Kahneman answers this question by saying that he hopes to change our behavior by changing our vocabulary. If the names that he invented for various common biases and illusions, “illusion of validity,” “availability bias,” “endowment effect,” and others that I have no space to describe in this review, become part of our everyday vocabulary, then he hopes to see the illusions lose their power to deceive us. If we use these names every day to criticize our friends’ mistaken judgments and to confess our own, then perhaps we will learn to overcome our illusions. Perhaps our children and grandchildren will grow up using the new vocabulary and will automatically correct their congenital biases when making judgments. If this miracle happens, then future generations will owe a big debt to Kahneman for giving them a clearer vision.

One thing that is notably absent from Kahneman’s book is the name of Sigmund Freud. In thirty-two pages of endnotes there is not a single reference to his writings. This omission is certainly no accident. Freud was a dominating figure in the field of psychology for the first half of the twentieth century, and a fallen tyrant for the second half of the century. In the article on Freud in Wikipedia, we find quotes from the Nobel Prize–winning immunologist Peter Medawar—psychoanalysis is the “most stupendous intellectual confidence trick of the twentieth century”—and from Frederick Crews:

Step by step, we are learning that Freud has been the most overrated figure in the entire history of science and medicine—one who wrought immense harm through the propagation of false etiologies, mistaken diagnoses, and fruitless lines of enquiry.

In these quotes, emotions are running high. Freud is now hated as passionately as he was once loved. Kahneman evidently shares the prevalent repudiation of Freud and of his legacy of writings.

Freud wrote two books, The Psychopathology of Everyday Life in 1901 and The Ego and the Id in 1923, which come close to preempting two of the main themes of Kahneman’s book. The psychopathology book describes the many mistakes of judgment and of action that arise from emotional bias operating below the level of consciousness. These “Freudian slips” are examples of availability bias, caused by memories associated with strong emotions. The Ego and the Id describes two levels of the mind that are similar to the System Two and System One of Kahneman, the Ego being usually conscious and rational, the Id usually unconscious and irrational.

There are huge differences between Freud and Kahneman, as one would expect for thinkers separated by a century. The deepest difference is that Freud is literary while Kahneman is scientific. The great contribution of Kahneman was to make psychology an experimental science, with experimental results that could be repeated and verified. Freud, in my view, made psychology a branch of literature, with stories and myths that appeal to the heart rather than to the mind. The central dogma of Freudian psychology was the Oedipus complex, a story borrowed from Greek mythology and enacted in the tragedies of Sophocles. Freud claimed that he had identified from his clinical practice the emotions children feel toward their parents that he called the Oedipus complex. His critics have rejected that claim. So Freud became to his admirers a prophet of spiritual and psychological wisdom, and to his detractors a quack doctor pretending to cure imaginary diseases. Kahneman took psychology in a diametrically opposite direction, not pretending to cure ailments but only trying to dispel illusions.

It is understandable that Kahneman has no use for Freud, but it is still regrettable. The insights of Kahneman and Freud are complementary rather than contradictory. Anyone who strives for a complete understanding of human nature has much to learn from both of them. The scope of Kahneman’s psychology is necessarily limited by his method. His method is to study mental processes that can be observed and measured under rigorously controlled experimental conditions. Following this method, he revolutionized psychology. He discovered mental processes that can be described precisely and demonstrated reliably. He discarded the poetic fantasies of Freud.

But together with the poetic fantasies, he discarded much else that was valuable. Since strong emotions and obsessions cannot be experimentally controlled, Kahneman’s method did not allow him to study them. The part of the human personality that Kahneman’s method can handle is the nonviolent part, concerned with everyday decisions, artificial parlor games, and gambling for small stakes. The violent and passionate manifestations of human nature, concerned with matters of life and death and love and hate and pain and sex, cannot be experimentally controlled and are beyond Kahneman’s reach. Violence and passion are the territory of Freud. Freud can penetrate deeper than Kahneman because literature digs deeper than science into human nature and human destiny.

William James is another great psychologist whose name is not mentioned in Kahneman’s book. James was a contemporary of Freud and published his classic work, The Varieties of Religious Experience: A Study in Human Nature, in 1902. Religion is another large area of human behavior that Kahneman chooses to ignore. Like the Oedipus complex, religion does not lend itself to experimental study. Instead of doing experiments, James listens to people describing their experiences. He studies the minds of his witnesses from the inside rather than from the outside. He finds the religious temperament divided into two types that he calls once-born and twice-born, anticipating Kahneman’s division of our minds into System One and System Two. Since James turns to literature rather than to science for his evidence, the two chief witnesses that he examines are Walt Whitman for the once-born and Leo Tolstoy for the twice-born.

Freud and James were artists and not scientists. It is normal for artists who achieve great acclaim during their lifetimes to go into eclipse and become unfashionable after their deaths. Fifty or a hundred years later, they may enjoy a revival of their reputations, and they may then be admitted to the ranks of permanent greatness. Admirers of Freud and James may hope that the time may come when they will stand together with Kahneman as three great explorers of the human psyche, Freud and James as explorers of our deeper emotions, Kahneman as the explorer of our more humdrum cognitive processes. But that time has not yet come. Meanwhile, we must be grateful to Kahneman for giving us in this book a joyful understanding of the practical side of our personalities.
