HOW WE THINK, BIAS, BLINDNESS

Bias, Blindness and How We Truly Think (Part 1): Daniel Kahneman

Rolling the Dice


Illustration by Bob Gill

Most of us view the world as more benign than it really is, our own attributes as more favorable than they truly are, and the goals we adopt as more achievable than they are likely to be. We also tend to exaggerate our ability to forecast the future, which fosters overconfidence.

In terms of its consequences for decisions, the optimistic bias may well be the most significant cognitive bias. Because optimistic bias is both a blessing and a risk, you should be both happy and wary if you are temperamentally optimistic.

Optimism is normal, but some fortunate people are more optimistic than the rest of us. If you are genetically endowed with an optimistic bias, you hardly need to be told that you are a lucky person -- you already feel fortunate.

Optimistic people play a disproportionate role in shaping our lives. Their decisions make a difference; they are inventors, entrepreneurs, political and military leaders -- not average people. They got to where they are by seeking challenges and taking risks. They are talented and they have been lucky, almost certainly luckier than they acknowledge.

A survey of founders of small businesses concluded that entrepreneurs are more sanguine than midlevel managers about life in general. Their experiences of success have confirmed their faith in their judgment and in their ability to control events. Their self-confidence is reinforced by the admiration of others. This reasoning leads to a hypothesis: The people who have the greatest influence on the lives of others are likely to be optimistic and overconfident, and to take more risks than they realize.

Optimistic Bias

The evidence suggests that an optimistic bias plays a role -- sometimes the dominant role -- whenever people or institutions voluntarily take on significant risks. More often than not, risk-takers underestimate the odds they face and, because they misread the risks, optimistic entrepreneurs often believe they are prudent, even when they are not. Their confidence sustains a positive mood that helps them obtain resources from others, raise the morale of their employees and enhance their prospects of prevailing. When action is needed, optimism, even of the mildly delusional variety, may be a good thing.

An optimistic temperament encourages persistence in the face of obstacles, but this persistence can be costly. A series of studies by Thomas Astebro sheds light on what happens when optimists get bad news. (His data came from Canada’s Inventor’s Assistance Program, which provides inventors with objective assessments of the commercial prospects of their ideas; the program’s forecasts of failure are remarkably accurate.)

In Astebro’s studies, discouraging news led about half of the inventors to quit after receiving a grade that unequivocally predicted failure. However, 47 percent of them continued development efforts even after being told that their project was hopeless, and on average these individuals doubled their initial losses before giving up.

Significantly, persistence after discouraging advice was relatively common among inventors who had a high score on a personality measure of optimism. This evidence suggests that optimism is widespread, stubborn and costly.

In the market, of course, belief in one’s superiority has significant consequences. Leaders of large businesses sometimes make huge bets in expensive mergers and acquisitions, acting on the mistaken belief that they can manage the assets of another company better than its current owners do. The stock market commonly responds by downgrading the value of the acquiring firm, because experience has shown that such efforts fail more often than they succeed. Misguided acquisitions have been explained by a “hubris hypothesis”: The executives of the acquiring firm are simply less competent than they think they are.

Risk Takers

The economists Ulrike Malmendier and Geoffrey Tate identified optimistic chief executive officers by the amount of company stock that they owned personally and observed that highly optimistic leaders took excessive risks. They assumed debt rather than issue equity and were more likely to “overpay for target companies and undertake value-destroying mergers.” Remarkably, the stock of the acquiring company suffered substantially more in mergers if the CEO was overly optimistic by the authors’ measure. The market is apparently able to identify overconfident CEOs.

This observation exonerates the CEOs from one accusation even as it convicts them of another: The leaders of enterprises who make unsound bets don’t do so because they are betting with other people’s money. On the contrary, they take greater risks when they personally have more at stake. The damage caused by overconfident CEOs is compounded when the business press anoints them as celebrities; the evidence indicates that prestigious awards to the CEO are costly to stockholders.

The authors write, “We find that firms with award-winning CEOs subsequently underperform, in terms both of stock and of operating performance. At the same time, as CEO compensation increases, CEOs spend more time on activities such as writing books and sitting on outside boards, and they are more likely to engage in earnings management.”

I have had several occasions to ask founders and participants in innovative startups this question: To what extent will the outcome of your effort depend on what you do in your company? The answer comes quickly, and in my small sample it has never been less than 80 percent. Even when they are not sure they will succeed, these bold people think their fate is almost entirely in their own hands. They know less about their competitors than about their own plans and actions, and so they find it natural to imagine a future in which the competition plays little part.

Competition Neglect

Colin Camerer, who coined the concept of competition neglect, illustrated it with a quote from a chairman of Disney Studios. Asked why so many big-budget movies are released on the same holidays, he said, “Hubris. Hubris. If you only think about your own business, you think, ‘I’ve got a good story department, I’ve got a good marketing department’ … and you don’t think that everybody else is thinking the same way.” The competition isn’t part of the decision. In other words, a difficult question has been replaced by an easier one.

This is a kind of dodge we all make, without even noticing. We use fast, intuitive thinking -- System 1 thinking -- whenever possible, and switch over to more deliberate and effortful System 2 thinking only when we truly recognize that the problem at hand isn’t an easy one.

The question that studio executives needed to answer is this: Considering what others will do, how many people will see our film? The question they did consider is simpler and refers to knowledge that is most easily available to them: Do we have a good film and a good organization to market it?

Organizations that take the word of overconfident experts can expect costly consequences. A Duke University study of chief financial officers showed that those who were most confident and optimistic about how the Standard & Poor’s index would perform over the following year were also overconfident and optimistic about the prospects of their own companies, which went on to take more risks than others.

As Nassim Taleb, the author of “The Black Swan,” has argued, inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid. However, optimism is highly valued; people and companies reward the providers of misleading information more than they reward truth tellers. An unbiased appreciation of uncertainty is a cornerstone of rationality -- but it isn’t what organizations want. Extreme uncertainty is paralyzing under dangerous circumstances, and the admission that one is merely guessing is especially unacceptable when the stakes are high. Acting on pretended knowledge is often the preferred approach.

Medical Certainty

Overconfidence also appears to be endemic in medicine. A study of patients who died in the intensive-care unit compared autopsy results with the diagnoses that physicians had provided while the patients were still alive. Physicians also reported their confidence. The result: “Clinicians who were ‘completely certain’ of the diagnosis ante-mortem were wrong 40 percent of the time.” Here again, experts’ overconfidence is encouraged by their clients. As the researchers noted, “Generally, it is considered a weakness and a sign of vulnerability for clinicians to appear unsure.”

According to Martin Seligman, the founder of positive psychology, an “optimistic explanation style” contributes to resilience by defending one’s self-image. In essence, the optimistic style involves taking credit for successes but little blame for failures.

Organizations may be better able to tame optimism than individuals are. The best idea for doing so was contributed by Gary Klein, my “adversarial collaborator” who generally defends intuitive decision-making against claims of bias.

Klein’s proposal, which he calls the “premortem,” is simple: When the organization has almost come to an important decision but hasn’t committed itself, it should gather a group of people knowledgeable about the decision to listen to a brief speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome has been a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”

As a team converges on a decision, public doubts about the wisdom of the planned move are gradually suppressed and eventually come to be treated as evidence of flawed loyalty. The suppression of doubt contributes to overconfidence in a group where only supporters of the decision have a voice. The main virtue of the premortem is that it legitimizes doubts.

Furthermore, it encourages even supporters of the decision to search for possible threats not considered earlier. The premortem isn’t a panacea and doesn’t provide complete protection against nasty surprises, but it goes some way toward reducing the damage of plans that are subject to the biases of uncritical optimism.

(Daniel Kahneman, a professor of psychology emeritus at Princeton University and professor of psychology and public affairs emeritus at Princeton’s Woodrow Wilson School of Public and International Affairs, received the Nobel Memorial Prize in Economic Sciences for his work with Amos Tversky on decision making. This is the first in a four-part series of condensed excerpts from his new book, “Thinking, Fast and Slow,” just published by Farrar, Straus and Giroux. The opinions expressed are his own. Read Part 2, Part 3 and Part 4.)

Bias, Blindness and How We Truly Think (Part 2): Daniel Kahneman


Illustration by Bob Gill

In 1738, the Swiss scientist Daniel Bernoulli argued that a gift of 10 ducats has the same utility to someone who already has 100 ducats as a gift of 20 ducats to someone whose current wealth is 200 ducats.

It was one of the earliest known efforts to look at the relationship between mind and matter -- between the magnitude of a stimulus and the intensity or quality of subjective experience. And it tells us something about how people make choices between gambles and sure things.

Bernoulli was right, of course: We normally speak of changes of income in terms of percentages, as when we say, “She got a 30 percent raise.” The idea is that a 30 percent raise may evoke a fairly similar psychological response for the rich and for the poor, which an increase of $100 will not do.

The psychological response to a change of wealth is inversely proportional to the initial amount of wealth, which suggests that utility is a logarithmic function of wealth. If this function is accurate, the same psychological distance separates $100,000 from $1 million, and $10 million from $100 million.

Bernoulli drew on his psychological insight into the utility of wealth to propose a radically new approach to the evaluation of gambles, an important topic for the mathematicians of his day. Earlier thinkers had assumed that gambles are assessed by their expected value: a weighted average of the possible outcomes, where each outcome is weighted by its probability. For example, the expected value of 80 percent chance to win $100 and 20 percent chance to win $10 is $82 (0.8 x 100 + 0.2 x 10).
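To spell out that arithmetic, here is a minimal sketch in Python of the expected-value rule those earlier thinkers assumed; the probabilities and prizes are the ones from the example above.

```python
# Expected value: weight each possible outcome by its probability and sum.
# The gamble from the text: an 80% chance to win $100 and a 20% chance to win $10.
gamble = [(0.80, 100), (0.20, 10)]  # (probability, dollar outcome)

expected_value = sum(p * x for p, x in gamble)
print(expected_value)  # 82.0
```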

Taking the Gamble

Now ask yourself this question: Which would you prefer to receive as a gift, this gamble or $80 for sure? Almost everyone prefers the sure thing. If people valued uncertain prospects by their expected value, they would prefer the gamble, because $82 is more than $80. Bernoulli pointed out that people do not in fact evaluate gambles in this way.

He observed that most people dislike risk, and if they are offered a choice between a gamble and an amount equal to its expected value they will pick the sure thing. In fact a risk-averse decision maker will choose a sure thing that is less than the expected value, in effect paying a premium to avoid the uncertainty.

Bernoulli invented psychophysics to explain this aversion to risk. His idea was straightforward: People’s choices are based not on dollar values but on the psychological values of outcomes, their utilities. The psychological value of a gamble is therefore not the weighted average of its possible dollar outcomes; it is the average of the utilities of these outcomes, each weighted by its probability.

Bernoulli proposed that the diminishing marginal value of wealth (in the modern jargon) is what explains risk aversion -- the common preference that people generally show for a sure thing over a favorable gamble of equal or slightly higher expected value.

Consider the choice between having equal chances to have 1 million ducats or 7 million ducats and having 4 million ducats with certainty. If you calculate the expected value of the gamble, it comes out to 4 million ducats -- the same as the sure thing. The psychological utilities of the two options are different, however, because of the diminishing utility of wealth: The increase in utility from 1 million ducats to 4 million is greater than the increase from 4 million to 7 million. Bernoulli’s insight was that a decision maker with diminishing marginal utility for wealth will be risk-averse.
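To make the comparison concrete, here is a small sketch that assumes the logarithmic utility function mentioned above; the logarithm is only one convenient way to represent diminishing marginal utility, and Bernoulli's argument needs nothing more specific than that.

```python
import math

def utility(wealth):
    # A logarithmic utility of wealth: each successive increment adds less utility.
    return math.log(wealth)

# The claim about equal psychological distances: the utility gap between
# $100,000 and $1 million equals the gap between $10 million and $100 million.
gap_low = utility(1_000_000) - utility(100_000)
gap_high = utility(100_000_000) - utility(10_000_000)
print(math.isclose(gap_low, gap_high))  # True -- both gaps equal log(10)

# The ducats example: a 50/50 gamble on 1 million or 7 million ducats,
# versus 4 million ducats for certain (wealth in millions for readability).
expected_value_gamble = 0.5 * 1 + 0.5 * 7                      # 4.0, same as the sure thing
expected_utility_gamble = 0.5 * utility(1) + 0.5 * utility(7)  # about 0.97
utility_sure_thing = utility(4)                                # about 1.39

print(expected_value_gamble, expected_utility_gamble, utility_sure_thing)
# The expected values are equal, but the sure thing has higher expected
# utility, so a decision maker with diminishing marginal utility prefers it.
```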

Bernoulli’s Moral Expectation

Bernoulli’s essay is a marvel of concise brilliance. He applied his new concept of expected utility (which he called “moral expectation”) to compute how much a merchant in St. Petersburg would be willing to pay to insure a shipment of spice from Amsterdam if “he is well aware of the fact that at this time of year of one hundred ships which sail from Amsterdam to Petersburg, five are usually lost.” His utility function explained why poor people buy insurance and why richer people sell it to them.
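The insurance calculation can be sketched the same way. Only the one-in-twenty chance of losing a ship comes from Bernoulli's example; the merchant's wealth and the value of the cargo below are invented purely for illustration.

```python
import math

# Hypothetical merchant: 10,000 ducats of wealth, of which a 4,000-ducat cargo
# is at sea. Bernoulli's figure: 5 of 100 ships are lost, so p_loss = 0.05.
wealth, cargo, p_loss = 10_000.0, 4_000.0, 0.05

# Expected utility of sailing uninsured, with logarithmic utility of wealth.
eu_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - cargo)

# The merchant should accept any premium up to the point where the certain,
# insured wealth has the same utility as the uninsured gamble:
# log(wealth - premium) = eu_uninsured.
max_premium = wealth - math.exp(eu_uninsured)

print(round(max_premium))  # about 252 ducats, versus an expected loss of only
                           # 0.05 * 4,000 = 200: the risk-averse merchant will
                           # pay more than the actuarial value of the risk.
```

A far wealthier insurer faces the same 200-ducat expected loss against a fortune large enough that its utility is nearly linear, so any premium above 200 is attractive to it; that asymmetry is the sense in which poor people buy insurance and richer people sell it to them.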

That Bernoulli’s theory prevailed for so long is even more remarkable when you see that, in fact, it is seriously flawed. The errors are found not in what it asserts explicitly but in what it tacitly assumes.

Consider, for example, the following scenario: Today, Jack and Jill each have wealth of $5 million. Yesterday, Jack had $1 million, and Jill had $9 million. Are they equally happy? (Do they have the same utility?)

Bernoulli’s theory assumes that the utility of their wealth is what makes people more or less happy. Jack and Jill have the same wealth, and the theory therefore asserts that they should be equally happy. But you do not need a degree in psychology to know that today Jack is elated and Jill despondent. Indeed, we know that Jack would be a great deal happier than Jill even if he had only $2 million today while she has $5 million. So Bernoulli’s theory must be wrong.

The happiness that Jack and Jill experience is determined by the recent change in their wealth.

For another example of what Bernoulli’s theory misses, consider Anthony and Betty: Anthony, whose current wealth is $1 million, and Betty, whose current wealth is $4 million, are both offered a choice between a gamble and a sure thing: equal chances to end up with $1 million or $4 million, or $2 million for sure.

In Bernoulli’s account, Anthony and Betty face the same choice: Their expected wealth is $2.5 million if they take the gamble and $2 million if they opt for the sure thing. Bernoulli would therefore expect Anthony and Betty to make the same choice, but this prediction is incorrect.

Theory-Induced Blindness

Here again, the theory fails because it does not account for Anthony and Betty’s different reference points. Anthony may think, “If I choose the sure thing, my wealth will double. This is very attractive. Or, I can take a gamble with equal chances to quadruple my wealth or to gain nothing.”

Betty would think differently: “If I choose the sure thing, I lose half of my wealth with certainty, which is awful. Alternatively, I can take a gamble with equal chances to lose three-quarters of my wealth or lose nothing.”

You can sense that Anthony and Betty are likely to make different choices because the sure-thing option of owning $2 million makes Anthony happy and makes Betty miserable. Note also how the sure outcome differs from the worst outcome of the gamble: For Anthony, it is the difference between doubling his wealth and gaining nothing; for Betty, it is the difference between losing half her wealth and losing three-quarters of it.

Betty is much more likely to take her chances, as others do when faced with very bad options. As I have told their story, neither Anthony nor Betty thinks in terms of states of wealth: Anthony thinks of gains, and Betty thinks of losses. The psychological outcomes they assess are entirely different, although the possible states of wealth they face are the same.

Because Bernoulli’s model lacks the idea of a reference point, expected utility theory does not account for the obvious fact that the outcome that is good for Anthony is bad for Betty. His model could explain Anthony’s risk aversion, but it can’t explain Betty’s preference for the gamble, a risk-seeking behavior that is often observed in entrepreneurs and in generals when all their options are bad.
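A brief sketch of the two descriptions may help. The final states of wealth are identical for Anthony and Betty; what changes is each outcome expressed relative to current wealth, the reference point that Bernoulli's model leaves out.

```python
# Anthony starts with $1 million, Betty with $4 million (figures in millions).
# Both face the same options: $2 million for sure, or a 50/50 gamble that
# leaves them with either $1 million or $4 million.
sure_outcome = 2.0
gamble_outcomes = (1.0, 4.0)

def as_changes(reference_wealth):
    # Re-describe the same final states as gains or losses from a reference point.
    sure = sure_outcome - reference_wealth
    gamble = tuple(outcome - reference_wealth for outcome in gamble_outcomes)
    return sure, gamble

print(as_changes(1.0))  # (1.0, (0.0, 3.0))   -- Anthony: a sure gain, or gain nothing / gain 3
print(as_changes(4.0))  # (-2.0, (-3.0, 0.0)) -- Betty: a sure loss, or lose 3 / lose nothing
# Bernoulli's theory sees only the identical final states; described as changes,
# Anthony is choosing among gains and Betty among losses, which is why their
# choices are likely to differ.
```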

All this is rather obvious, isn’t it? One could easily imagine Bernoulli himself constructing similar examples and developing a more complex theory to accommodate them; for some reason, he did not. One could also imagine colleagues of his time disagreeing with him, or later scholars objecting as they read his essay; for some reason, they didn’t either.

The mystery is how a conception that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: Once you have accepted a theory, it is extraordinarily difficult to notice its flaws. As the psychologist Daniel Gilbert has observed, disbelieving is hard work.

(Daniel Kahneman, a professor of psychology emeritus at Princeton University and professor of psychology and public affairs emeritus at Princeton’s Woodrow Wilson School of Public and International Affairs, received the Nobel Memorial Prize in Economic Sciences for his work with Amos Tversky on decision making. This is the second in a four-part series of condensed excerpts from his new book, “Thinking, Fast and Slow,” just published by Farrar, Straus and Giroux. The opinions expressed are his own. See Part 1, Part 3 and Part 4.)

Bias, Blindness and How We Truly Think (Part 3): Daniel Kahneman


Illustration by Bob Gill

Source: Science / Paul J. Whalen et al.

Take a look at the photos of two pairs of eyes, and take note: Your heartbeat accelerated when you looked at the left-hand figure. In fact, it accelerated even before you could label what is so eerie about the picture.

After some time, you may have recognized the eyes of a terrified person. The eyes on the right, narrowed by the raised cheeks of a smile, express happiness -- and they are not nearly as exciting.

When the two pictures were presented to people in a brain scanner, each was shown for less than two one-hundredths of a second and immediately masked by a random display of dark and bright squares. None of the observers ever consciously recognized what they had seen, but one part of their brain evidently knew: the amygdala, which has a primary role as the “threat center” of the brain, although it is also activated in other emotional states.

Information about the threat probably traveled via a superfast neural channel that feeds directly into a part of the brain that processes emotions, bypassing the visual cortex that supports the conscious experience of “seeing.” The same circuit also causes schematic angry faces to be processed faster and more efficiently than schematic happy ones. Some experimenters have reported that an angry face “pops out” from a happy crowd, but a single happy face does not stand out in an angry crowd.

Bad Is Stronger

The brains of humans contain a mechanism that is designed to give priority to bad news. No comparably rapid mechanism for recognizing good news has been detected. Threats are privileged above opportunities, as they should be. Loss aversion is one of many manifestations of a broad negativity dominance in people.

In a paper titled “Bad Is Stronger Than Good,” the authors summarized the evidence: “Bad emotions, bad parents and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good. The self is more motivated to avoid bad self-definitions than to pursue good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones.”

John Gottman, an expert in marital relations, observed that the long-term success of a relationship depends far more on avoiding the negative than on seeking the positive. Gottman estimated that a stable relationship requires that good interactions outnumber bad ones by at least 5-to-1.

Some distinctions between good and bad are hardwired into our biology. Infants enter the world ready to respond to pain as bad and to sweet (up to a point) as good. In many situations, however, the boundary between good and bad is a reference point that changes over time and depends on the immediate circumstances.

Imagine that you are out in the country on a cold night, inadequately dressed for the torrential rain. When you find a large rock that provides some shelter, the moment is intensely pleasurable. But the relief will not last long; you will soon be shivering behind the rock, driven by your renewed suffering to seek better shelter.

Loss aversion refers to the relative strength of two motives: We are driven more strongly to avoid losses than to achieve gains. A reference point is sometimes the status quo, but it can also be a goal in the future: not achieving a goal is a loss; exceeding it is a gain.

Bogey vs. Birdie

The economists Devin Pope and Maurice Schweitzer, at the University of Pennsylvania, suggest that golf provides the perfect example of a reference point: par. For a professional golfer, a birdie (one stroke under par) is a gain, and a bogey (one stroke over par) is a loss. Failing to make par is a loss, but missing a birdie putt is a forgone gain, not a loss. Pope and Schweitzer analyzed more than 2.5 million putts to test their prediction that players would try harder when putting for par than when putting for a birdie.

They were right. Whether the putt was easy or hard, at every distance from the hole, players were more successful when putting for par than for a birdie.

Tiger Woods was one of the golfers in their study. If in his best years Woods had managed to putt as well for birdies as he did for par, his average tournament score would have improved by one stroke and his earnings by almost $1 million per season.

If you are set to look for it, the asymmetric intensity of the motives to avoid losses and to achieve gains shows up almost everywhere. It is an ever-present feature of negotiations, especially of renegotiations of an existing contract, the typical situation in labor negotiations, and in international discussions of trade or arms limitations. Loss aversion creates an asymmetry that makes agreements difficult to reach.

Negotiations over a shrinking pie are especially difficult because they require an allocation of losses. People tend to be much more easygoing when they bargain over an expanding pie.

In the world of territorial animals, the principle of loss aversion explains the success of defenders. A biologist observed that “when a territory holder is challenged by a rival, the owner almost always wins the contest -- usually within a matter of seconds.”

Losers Try Harder

In human affairs, the same simple rule explains much of what happens when institutions attempt to reform themselves, in “reorganizations” and “restructuring” of companies and in efforts to rationalize a bureaucracy, simplify the tax code or reduce medical costs. As initially conceived, plans for reform almost always produce many winners and some losers while achieving an overall improvement.

If the affected parties have any political influence, however, potential losers will be more active and determined than potential winners; the outcome will be biased in their favor and inevitably more expensive and less effective than initially planned.

When Richard Thaler, Jack Knetsch and I studied public perceptions of what constitutes unfair behavior on the part of merchants, employers and landlords, we found that the moral rules by which the public evaluates what companies may or may not do draw a crucial distinction between losses and gains. The basic principle is that the existing wage, price or rent sets a reference point, which has the nature of an entitlement that must not be infringed. It is considered unfair for the company to impose losses on its customers or workers relative to the reference transaction, unless it must do so to protect its own entitlement. Consider this example:

A hardware store has been selling snow shovels for $15. The morning after a large snowstorm, the store raises the price to $20. Eighty-two percent of participants in the survey rated the action unfair, though the hardware store was behaving appropriately, according to the standard economic model.

Participants evidently viewed the pre-blizzard price as a reference point and the raised price as a loss that the store imposes on its customers, not because it must but simply because it can. We found that the exploitation of market power to impose losses on others is unacceptable.

Rules of Fairness

Different rules governed what a company could do to improve its profits or to avoid reduced profits. When a company faced lower production costs, the rules of fairness did not require it to share the bonanza with either its customers or its workers. Of course, our respondents liked a company better, and described it as more fair, if it was generous when its profits increased, but they did not brand as unfair a company that did not share.

They showed indignation only when a company exploited its power to break informal contracts with workers or customers, and to impose a loss on others in order to increase its profit. The important task for students of economic fairness is not to identify ideal behavior but to find the line that separates acceptable conduct from actions that invite opprobrium and punishment.

More recent research has also shown that fairness concerns are economically significant. Employers who violate rules of fairness are punished by reduced productivity, and merchants who follow unfair pricing policies can expect to lose sales. People who learned that a merchant was charging less for a product that they had bought at a higher price reduced their future purchases from that supplier by 15 percent, an average loss of $90 per customer. The customers evidently perceived the lower price as the reference point and thought of themselves as having sustained a loss by paying more than appropriate.

The influence of loss aversion and entitlements extends beyond the realm of financial transactions. One study of legal decisions found many examples of sharp distinction between actual losses and forgone gains. For example, a merchant whose goods were lost in transit may be compensated for costs he actually incurred, but is unlikely to be compensated for lost profits.

The familiar rule that possession is nine-tenths of the law confirms the moral status of the reference point. If people who lose suffer more than people who merely fail to gain, they may also deserve more protection from the law.

(Daniel Kahneman, a professor of psychology emeritus at Princeton University and professor of psychology and public affairs emeritus at Princeton’s Woodrow Wilson School of Public and International Affairs, received the Nobel Memorial Prize in Economic Sciences for his work with Amos Tversky on decision making. This is the third in a four-part series of condensed excerpts from his new book, “Thinking, Fast and Slow,” just published by Farrar, Straus and Giroux. The opinions expressed are his own. See Part 1, Part 2 and Part 4.)

Bias, Blindness and How We Truly Think (Part 4): Daniel Kahneman


Illustration by Bob Gill

Early in the days of my work on the measurement of experience, I saw Verdi’s opera “La Traviata.” Known for its gorgeous music, it is also a moving story of the love between a young aristocrat and Violetta, a woman of the demimonde.

The young man’s father approaches Violetta and persuades her to give up her lover, to protect the honor of the family and the marriage prospects of the young man’s sister. In an act of supreme self-sacrifice, Violetta pretends to reject the man she adores. She soon relapses into consumption (tuberculosis). In the final act, she lies dying as her beloved rushes to Paris to see her. Hearing the news, she is transformed with hope and joy, but she is also deteriorating quickly.

No matter how many times you have seen the opera, you are gripped by the tension and fear of the moment: Will the lover arrive in time? He does, of course, some love duets are sung and, after 10 minutes of music, Violetta dies.

I wondered: Why do we care so much about those last 10 minutes? I realized that I did not care at all about the length of Violetta’s life. Her missing a year of happy life would not have moved me, but her missing the last 10 minutes mattered. Furthermore, the emotion I felt about the reunion would not have changed if I had learned that they had a week together.

Memorable Stories

If the lover had come too late, however, “La Traviata” would have been a different story. A story is about significant events and memorable moments, not about time passing. This is how the remembering self works: It composes stories and keeps them for future reference.

It is not only at the opera that we think of life as a story and wish it to end well. When we hear about the death of a woman estranged from her daughter for years, we want to know whether they were reconciled as death approached. We do not care only about the daughter’s feelings -- it is the narrative of the mother’s life that we wish to improve.

Caring for people often takes the form of concern for the quality of their stories, not for their feelings. Indeed, we can be deeply moved even by events that change the stories of people already dead. We feel pity for a man who died believing in his wife’s love for him when we hear that she had a lover for many years and stayed with her husband only for his money. We pity the husband although he had lived a happy life. We feel the humiliation of a scientist who made a discovery that was proved false after she died, although she did not feel the humiliation.

Most important, we all care intensely for the narrative of our own life and very much want it to be a good story, with a decent hero.

The psychologist Ed Diener and his students wondered whether duration neglect and the peak-end rule governed evaluations of entire lives. They used a short description of a fictitious character called Jen, a never-married woman with no children, who died instantly and painlessly in an automobile accident. In one version of Jen’s story, she was extremely happy throughout her life, which lasted either 30 or 60 years. Another version added to Jen’s life five extra years that were pleasant but less so than the previous 30 or 60. After reading a biography of Jen, each participant was asked to assess Jen’s total happiness.

Doubling the duration of Jen’s life, it turned out, had no effect whatsoever on people’s judgments of the total happiness that Jen experienced. Diener and his students also found that adding five “slightly happy” years to a very happy life caused participants to substantially reduce their evaluations of the total happiness of Jen’s life.
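That pattern is what one would expect if people judge a whole life by the average quality of a representative moment rather than by the sum of its moments. A minimal sketch, using entirely invented happiness ratings, reproduces both findings: doubling the duration leaves the average untouched, while tacking on mildly happy years pulls it down.

```python
# Hypothetical happiness ratings per year on an arbitrary 0-10 scale.
very_happy_30 = [9] * 30
very_happy_60 = [9] * 60
with_extra_5 = [9] * 30 + [5] * 5   # five pleasant but less happy years added

def total_happiness(life):
    # A duration-sensitive measure: sum over every year lived.
    return sum(life)

def average_happiness(life):
    # A duration-neglecting measure: the quality of a typical year.
    return sum(life) / len(life)

print([total_happiness(x) for x in (very_happy_30, very_happy_60, with_extra_5)])
# [270, 540, 295] -> by total, longer and extended lives score strictly higher.
print([round(average_happiness(x), 2) for x in (very_happy_30, very_happy_60, with_extra_5)])
# [9.0, 9.0, 8.43] -> by average, doubling the duration changes nothing and the
# "slightly happy" extra years make the whole life look worse, as participants judged.
```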

Diener and his students also collected data on the effect of the extra five years from participants who made both judgments in immediate succession. In spite of my experience with judgment errors, I did not believe that reasonable people could say that adding five slightly happy years to a life would make it worse. I was wrong. The intuition that the disappointing extra five years made the whole life worse was overwhelming.

Duration Doesn’t Matter

Diener and his students initially thought that the results represented the folly of the young people who participated in their experiments, but the pattern did not change when parents and older friends answered the same questions. In intuitive evaluations of lives, peaks and ends matter, but duration does not.

The pains of labor and the benefits of vacations come up as objections to the idea of duration neglect: We all share the intuition that it is worse for labor to last 24 hours than six hours, and that six days at a good resort is better than three. Duration appears to matter in these situations, but this is only because the quality of the end changes with the length of the episode. The mother is more depleted and helpless after 24 hours, and the vacationer is more refreshed after six days. What matters when we intuitively assess such episodes is the progressive deterioration or improvement of the ongoing experience, and how the person feels at the end.

Consider the choice of a vacation. Do you prefer to enjoy a relaxing week at the familiar beach you visited last year? Or do you hope to enrich your store of memories? Distinct industries have developed to cater to these alternatives: Resorts offer restorative relaxation; tourism is about helping people construct stories and collect memories.

The frenetic picture-taking of tourists suggests that storing memories is often an important goal, which shapes both the vacation plans and the experience. The photographer does not view the scene as a moment to be savored but as a future memory to be designed. Pictures may be useful to the remembering self -- though we rarely look at them for very long, or as often as we expected, or even at all -- but picture-taking is not necessarily the best way for the tourist’s experiencing self to enjoy a view.

In many cases we evaluate touristic vacations by the story and the memories that we expect to store. But in other situations -- love comes to mind -- the declaration that the moment will never be forgotten changes the character of the moment. A self-consciously memorable experience gains a significance that it would not otherwise have.

Benefits of Vacation

Diener’s team provided evidence that it is the remembering self that chooses vacations. They asked students to record daily evaluations of their experiences during spring break. The students also provided a global rating of the vacation at its end. Finally, they indicated whether or not they intended to repeat the vacation.

Statistical analysis established that the intentions for future vacations were determined by the final evaluation -- even when that score did not accurately represent the quality of the experience described in the diaries.

A thought experiment about your next vacation will allow you to observe your attitude to your experiencing self: At the end of the vacation, all pictures and videos will be destroyed. Furthermore, you will swallow a potion that will wipe out all your memories of the vacation. How would this affect your vacation plans? How much would you be willing to pay for it, relative to a normally memorable vacation?

My impression is that the elimination of memories greatly reduces the value of the experience.

Imagine a painful operation during which you will scream in pain and beg the surgeon to stop. However, you are promised an amnesia-inducing drug that will wipe out any memory of the episode. Here again, my observation is that most people are remarkably indifferent to the pains of their experiencing self. Some say they don’t care at all. Others share my feeling, which is that I feel pity for my suffering self but not more than I would feel for a stranger in pain.

I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.

