

Future Babble

Why Expert Predictions Fail - and Why We Believe Them Anyway

by (author) Dan Gardner

Publisher
McClelland & Stewart
Initial publish date
Sep 2011
Category
Future Studies, Social History, Cognitive Psychology
  • Paperback / softback

    ISBN
    9780771035135
    Publish Date
    Sep 2011
    List Price
    $21.00


Description

In 2008, as the price of oil surged above $140 a barrel, experts said it would soon hit $200; a few months later it plunged to $30. In 1967, they said the USSR would have one of the fastest-growing economies in the year 2000; in 2000, the USSR did not exist. In 1911, it was pronounced that there would be no more wars in Europe; we all know how that turned out. Face it, experts are about as accurate as dart-throwing monkeys. And yet every day we ask them to predict the future — everything from the weather to the likelihood of a catastrophic terrorist attack. Future Babble is the first book to examine this phenomenon, showing why our brains yearn for certainty about the future, why we are attracted to those who predict it confidently, and why it’s so easy for us to ignore the trail of outrageously wrong forecasts.

In this fast-paced, example-packed, sometimes darkly hilarious book, journalist Dan Gardner shows how seminal research by UC Berkeley professor Philip Tetlock proved that pundits who are more famous are less accurate — and the average expert is no more accurate than a flipped coin. Gardner also draws on current research in cognitive psychology, political science, and behavioral economics to discover something quite reassuring: The future is always uncertain, but the end is not always near.

About the author

DAN GARDNER is a prize-winning journalist and author of Risk: Why We Fear the Things We Shouldn't — and Put Ourselves in Greater Danger. He is a senior writer and columnist at the Ottawa Citizen, and a popular public speaker. He holds a law degree and master's in history.

Excerpt: Future Babble: Why Expert Predictions Fail - and Why We Believe Them Anyway (by (author) Dan Gardner)

1
INTRODUCTION

“The end of everything we call life is close at hand and cannot be evaded.” H.G. Wells, 1946


George Edward Scott, my mother’s father, was born in an English village near the city of Nottingham. It was 1906. We can be sure that anyone who took notice of George’s arrival in the world agreed that he was a very lucky baby. There was the house he lived in, for one thing. It was the work of his father, a successful builder, and it was, like the man who built it, correct, confident, and proudly Victorian. Middle-class prosperity was evident throughout, from the sprawling rooms to the stained-glass windows and the cast-iron bathtub with a pull-cord that rang a bell downstairs. A maid carrying a bucket of hot water would arrive in due course.

And there was the country and the era. Often romanticized as the “long Edwardian summer,” Britain at the beginning of the twentieth century was indeed a land of peace and prosperity, if not strawberries and champagne. Britain led the world in industry, science, education, medicine, trade, and finance. Its empire was vaster than any in history, its navy invincible. The great and terrible war with Napoleon’s France was tucked away in dusty history books and few worried that its like would ever come again.

It was a time when “Progress” was capitalized. People were wealthier. They ate better and lived longer. Trade, travel, and communication steadily expanded, a process that would be called, much later, globalization. Science advanced briskly, revealing nature’s secrets and churning out technological marvels, each more wonderful than the last, from the train to the telegraph to the airplane. The latest of these arrived only four years before George Scott was born, and in 1912, when George was six, his father gathered the family in a field to witness the miracle of a man flying through the air in a machine. The pilot waved to the gawkers below. “Now I’ve seen it,” George’s grandmother muttered. “But I still don’t believe it.”

And the future? How could it be anything but grand? In 1902, the great American economist John Bates Clark imagined himself in 2002, looking back on the last hundred years. He pronounced himself profoundly satisfied. “There is certainly enough in our present condition to make our gladness overflow” and to hope that “the spirit of laughter and song may abide with us through the years that are coming,” Clark wrote. The twentieth century had been a triumph, in Clark’s imagining. Technology had flourished, conflict between labour and capital had vanished, and prosperity had grown until the slums were “transformed into abodes of happiness and health.” Only trade had crossed borders, never armies, and in the whole long century not a shot had been fired in anger. Of course this was only to be expected, Clark wrote, even though some silly people in earlier generations had actually believed war could happen in the modern world – “as if nations bound together by such economic ties as now unite the countries of the world would ever disrupt the great industrial organism and begin fighting.”

At the time, Clark’s vision seemed as reasonable as it was hopeful, and it was widely shared by eminent persons. “We can now look forward with something like confidence to the time when war between civilized nations will be as antiquated as the duel,” wrote the esteemed British historian, G.P. Gooch, in 1911. Several years later, the celebrated Manchester Guardian journalist H.N. Norman was even more definitive. “It is as certain as anything in politics can be, that the frontiers of our modern national states are finally drawn. My own belief is that there will be no more wars among the six Great Powers.”

One day, a few months after H.N. Norman had declared the arrival of eternal peace, George Scott fetched his father’s newspaper. The top story was the latest development in the push for Irish home rule. Below that was another headline. “War Declared,” it read.

It was August 1914. What had been considered impossible by so many informed experts was now reality. But still there was no need to despair. It would be “the war to end all wars,” in H.G. Wells’s famously optimistic phrase. And it would be brief. It had to be, wrote the editors of the Economist, thanks to “the economic and financial impossibility of carrying out hostilities many more months on the present scale.”

For more than four years, the industry, science, and technology that had promised a better world slowly ground millions of men into the mud. The long agony of the First World War shattered empires, nations, generations, and hopes. The very idea of progress came to be scorned as a rotten illusion, a raggedy stage curtain now torn down and discarded.

In defeated Germany, Oswald Spengler’s dense and dark Decline of the West was the runaway best-seller of the 1920s. In victorious Britain, the Empire was bigger but the faith in the future that had sustained it faded like an old photograph left in the sun. The war left crushing debts and the economy staggered. “Has the cycle of prosperity and progress closed?” asked H.G. Wells in the foreword to a book whose title ventured an even bleaker question: Will Civilisation Crash? Yes to both, answered many of the same wise men who had once seen only peace and prosperity ahead. “It is clear now to everyone that the suicide of civilization is in progress,” declared the physician and humanitarian Albert Schweitzer in a 1922 lecture at Oxford University. It may have been “the Roaring Twenties” in the United States – a time of jazz, bathtub gin, soaring stocks, and real estate speculation – but it was a decade of gloom in Britain. For those who thought about the future, observes historian Richard Overy, “the prospect of imminent crisis, a new Dark Age, became a habitual way of looking at the world.”

My grandfather’s fortunes followed Britain’s. His father’s business declined, prosperity seeped away, and the bathtub pull-cord ceased to summon the downstairs maid. In 1922, at the age of fifteen, George was apprenticed to a plumber. A few years later, bowing to the prevailing sense that Britain’s decline was unstoppable, he decided to emigrate. A coin toss – heads Canada, tails Australia – settled the destination. With sixty dollars in his pocket, he landed in Canada. It was 1929. He had arrived just in time for the Great Depression.

A horror throughout the industrialized world, the Great Depression was especially savage in North America. Half the industrial production of the United States vanished. One-quarter of workers were unemployed. Starvation was a real and constant threat for millions. Growing numbers of desperate, frightened people sought salvation in fascism or communism. In Toronto, Maple Leaf Gardens was filled to the rafters not for a hockey game but a Stalinist rally, urging Canadians to follow the glorious example of the Soviet Union. Among the leading thinkers of the day, it was almost a truism that liberal democracy and free-market capitalism were archaic, discredited, and doomed. Even moderates were sure the future would belong to very different economic and political systems.

In 1933, the rise to power of the Nazis added the threat of what H.G. Wells called the “Second World War” in his sci-fi novel The Shape of Things to Come. Published the same year Adolf Hitler became chancellor of Germany, The Shape of Things to Come saw the war beginning in 1940 and predicted it would become a decade-long mass slaughter, ending not in victory but the utter exhaustion and collapse of all nations. Military analysts and others who tried to imagine another Great War were almost as grim. The airplanes that had been so wondrous to a young boy in 1912 would fill the skies with bombs, they agreed. Cities would be pulverized. There would be mass psychological breakdown and social disintegration. In 1934, Britain began a rearmament program it could not afford for a war that, it increasingly seemed, it could not avoid. In 1936, as Nazi Germany grew stronger, the program was accelerated.

A flicker of hope came from the United States, where economic indicators jolted upward, like a flat line on a heart monitor suddenly jumping. It didn’t last. In 1937, the American economy plunged again. It seemed nothing could pull the world out of its death spiral. “It is a fact so familiar that we seldom remember how very strange it is,” observed the British historian G.N. Clark, “that the commonest phrases we hear used about civilization at the present time all relate to the possibility, or even the prospect, of its being destroyed.”

That same year, George Scott’s second daughter, June, was born. It is most unlikely that anyone thought my mother was a lucky baby.

The Second World War began in September 1939. By the time it ended in 1945, at least forty million people were dead, the Holocaust had demonstrated that humanity was capable of any crime, much of the industrialized world had been pounded into rubble, and a weapon vastly more destructive than anything seen before had been invented. “In our recent history, war has been following war in ascending order of intensity,” wrote the influential British historian Arnold Toynbee in 1950. “And today it is already apparent that the War of 1939–45 was not the climax of this crescendo movement.” Ambassador Joseph Grew, a senior American foreign service officer, declared in 1945 that “a future war with the Soviet Union is as sure as anything in this world.” Albert Einstein was terrified. “Only the creation of a world government can prevent the impending self-destruction of mankind,” declared the man whose name was synonymous with genius. Some were less optimistic. “The end of everything we call life is close at hand and cannot be evaded,” moaned H.G. Wells.

Happily for humanity, Wells, Einstein, and the many other luminaries who made dire predictions in an era W.H. Auden dubbed “The Age of Anxiety” were all wrong. The end of life was not at hand. War did not come. Civilization did not crumble. Against all reasonable expectation, my mother turned out to be a very lucky baby, indeed.

Led by the United States, Western economies surged in the postwar decades. The standard of living soared. Optimism returned, and people expressed their hope for a brighter future by getting married earlier and having children in unprecedented numbers. The result was a combination boom – economic and baby – that put children born during the Depression at the leading edge of a wealth-and-population wave. That’s the ultimate demographic sweet spot. Coming of age in the 1950s, they entered a dream job market. To be hired at a university in the early 1960s, a professor once recalled to me, you had to sign your name three times “and spell it right twice.” Something of an exaggeration, to be sure. But the point is very real. Despite the constant threat of nuclear war, and lesser problems that came and went, children born in the depths of the Great Depression – one of the darkest periods of the last five centuries – lived their adult lives amid peace and steadily growing prosperity. There has never been a more fortunate generation.

Who predicted that? Nobody. Which is entirely understandable. Even someone who could have foreseen that there would not be a Third World War – which would have been a triumph of prognostication in its own right – would have had to correctly forecast both the baby boom and the marvellous performance of post-war economies. And how would they have done that? The baby boom was caused by a post-war surge in fertility rates that sharply reversed a downward trend that had been in place for more than half a century. Demographers didn’t see it coming. No one did. Similarly, the dynamism of the post-war economies was a sharp break from previous trends that was not forecast by experts, whose expectations were much more pessimistic. Many leading economists even worried that demobilization would be followed by mass unemployment and stagnation. One surprise after another. That’s how the years unfolded after 1945. The result was a future that was as unpredictable as it was delightful – and a generation born at what seemed to be the worst possible time came to be a generation born at the most golden of moments.

The desire to know the future is universal and constant, as the profusion of soothsaying techniques in human cultures – from goats’ entrails to tea leaves – demonstrates so well. But certain events can sharpen that desire, making it fierce and urgent. Bringing a child into the world is one such force. What will the world be like for my baby? My great-grandfather undoubtedly asked himself that question when his little boy was born in 1906. He was a well-read person, and so he likely paid close attention to what the experts said. George Edward Scott was a very lucky baby, he would have concluded. And any intelligent, informed person would have agreed. Thirty-one years later, when my grandfather held his infant daughter in his arms, he surely asked himself the same question, and he, too, would have paid close attention to what the experts said. And he would have feared for her future, as any intelligent, informed person would have.

My great-grandfather was wrong. My grandfather was wrong. All those intelligent, informed people were wrong. But mostly, the experts were wrong.

They’re wrong a lot, those experts. History is littered with their failed predictions. Whole books can be filled with them. Many have been.

Some failed predictions are prophecies of disaster and despair. In the 1968 book The Population Bomb, which sold millions of copies, Stanford University biologist Paul Ehrlich declared “the battle to feed all of humanity is over. In the 1970s, the world will undergo famines – hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” But there weren’t mass famines in the 1970s. Or in the 1980s. Thanks to the dramatic improvements in agriculture collectively known as “the Green Revolution” – which were well underway by the time Ehrlich wrote his book – food production not only kept up with population growth, it greatly surpassed it. Ehrlich thought that was utterly impossible. But it happened. Between 1961 and 2000, the world’s population doubled but the calories of food consumed per person increased 24 per cent. In India, calories per person rose 20 per cent. In Italy, 26 per cent. In South Korea, 44 per cent. Indonesia, 69 per cent. China had experienced a famine that killed some 30 million people in the dark years between 1959 and 1961, but in the 40 years after that horror China’s per capita food consumption rose an astonishing 73 per cent. And the United States? In the decades after The Population Bomb was published, fears that people would not get enough to eat were forgotten as American waistlines steadily expanded. The already-substantial consumption of the average American rose 32 per cent, and the United States became the first nation in history to struggle with an epidemic of obesity.

In 1977, President Jimmy Carter called for the “moral equivalent of war” to shift the American economy off oil because, he said, the production of oil would soon fail to keep up with demand. When that happened, oil prices would soar and never come down again – the American economy would be devastated and the American dream would turn brown and die like an unwatered suburban lawn. Eight years later, oil prices fell through the floor. They stayed low for two decades.

A small library could be filled with books predicting stock market crashes and economic disasters that never happened, but the giant of the genre was published in 1987. The hardcover edition of economist Ravi Batra’s The Great Depression of 1990 hit the top spot on the New York Times best-seller list and spent a total of ten months on the chart; the paperback stayed on the list for an astonishing nineteen months. When the American economy slipped into recession in 1990, Batra looked prophetic. When the recession proved to be mild and brief, he seemed less so. When the 1990s roared, he looked foolish, particularly when he spent the entire decade writing books predicting a depression was imminent. In 1990, Jacques Attali – intellectual, banker, former adviser to French president François Mitterrand – published a book called Millennium, which predicted dramatic change on the other side of the year 2000. Both the United States and the Soviet Union would slowly lose their superpower status, Attali wrote. Their replacements would be Japan and Europe. As for China and India, they “will refuse to fall under the sway of either the Pacific or the European sphere,” but it would be hard for these desperately poor countries to resist. Catastrophic war was “possible, even probable.” However, Attali cautioned, this future isn’t quite chiselled in stone. “If a miracle were to occur” and China and India were to be “integrated into the global economy and market, all strategic assumptions underpinning my prognostications would be overturned. That miracle is most unlikely.” Of course, that “miracle” is precisely what happened. And almost nothing Attali predicted came true.

Even economists who win Nobel Prizes have been known to blow big calls. In 1997, as Asian economies struggled with a major currency crisis, Paul Krugman – New York Times columnist and winner of the Nobel in 2008 – worried that Asia must act quickly. If not, he wrote in Fortune magazine, “we could be looking at a true Depression scenario – the kind of slump that 60 years ago devastated societies, destabilized governments, and eventually led to war.” Krugman’s prescription? Currency controls. It had to be done or else. But mostly, it wasn’t done. And Asia was booming again within two years.

Pessimists have no monopoly on forecasting flops, however. Excited predictions of the amazing technologies to come – Driverless cars! Robot maids! Jet packs! – have been dazzling the public since the late nineteenth century. These old forecasts continue to entertain today, though for quite different reasons. And for every bear prophesying blood in the stock markets, there is a bull who is sure things will only get better. The American economist Irving Fisher was one. “Stock prices have reached what looks like a permanently high plateau,” the esteemed economist assured nervous investors. “I do not feel there will soon be, if ever, a 50 or 60 point break from present levels, such as they have predicted. I expect to see the stock market a good deal higher within a few months.” That was October 17, 1929. The market crashed the following week. But that crash was none of Britain’s concern, the legendary John Maynard Keynes believed. “There will be no serious consequences in London resulting from the Wall Street Slump,” Keynes wrote. “We find the look ahead decidedly encouraging.” Shortly afterward, Britain sank with the rest of the world into the Great Depression.

Another bull market, this one in the late 1990s, produced a bookshelf full of predictions so giddy they made Irving Fisher sound like Eeyore. The most famous was the 1999 book Dow 36,000 by James Glassman and Kevin Hassett. “If you are worried about missing the market’s big move upward, you will discover that it’s not too late,” Glassman and Hassett wrote. Actually, it was too late. Shortly after Dow 36,000 was published, the Dow peaked at less than 12,000 and started a long, painful descent.

Paul Ehrlich can also take consolation in the fact that many of the optimists who assailed his writing were not much better at predicting the future. “The doomsayers who worry about the prospect of starvation for a burgeoning world population” will not see their terrible visions realized, Time magazine reported in 1966. The reason? Aquaculture. “RAND experts visualize fish herded and raised in offshore pens as cattle are today. Huge fields of kelp and other kinds of seaweed will be tended by undersea ‘farmers’ – frogmen who will live for months at a time in submerged bunkhouses. The protein-rich underseas crop will probably be ground up to produce a dull-tasting cereal that eventually, however, could be regenerated chemically to taste like anything from steak to bourbon.” The same RAND Corporation experts agreed that “a permanent lunar base will have been established long before A.D. 2000 and that men will have flown past Venus and landed on Mars.” Herman Kahn, a founder of the Hudson Institute and a determined critic of Ehrlich, was similarly off the mark in a thick book called The Year 2000, published in 1967. It is “very likely,” Kahn wrote, that by the end of the century nuclear explosives would be used for excavation and mining, “artificial moons” would be used to illuminate large areas at night, and there would be permanent undersea colonies. Kahn also expected that one of the world’s fastest-growing economies at the turn of the millennium would be that of the Soviet Union.

So pessimists and optimists both make predictions that look bad in hindsight. What about left versus right? Not much difference there, either. There are plenty of examples of liberal experts making predictions that go awry, like Jonathan Schell’s belief that Ronald Reagan’s arms buildup was putting the world on course for nuclear war. “We have to admit that unless we rid ourselves of our nuclear arsenals a holocaust not only might occur but will occur – if not today, then tomorrow; if not this year, then the next,” Schell wrote in 1982. “One day – and it is hard to believe it will not be soon – we will make a choice. Either we will sink into the final coma and end it all or, as I trust and believe, we will awaken to the truth of our peril . . . and rise up to cleanse the earth of nuclear weapons.” The stock of failed predictions on the right is equally rich. It was, for example, a “slam dunk” that Saddam Hussein’s weapons of mass destruction would be discovered following the American invasion of Iraq in 2003 and that, as Vice-President Dick Cheney said, American soldiers would be “greeted as liberators” by the grateful Iraqi people. “A year from now,” observed neo-conservative luminary Richard Perle in September 2003, “I’ll be very surprised if there is not some grand square in Baghdad named after President Bush.”

So the inaccuracy of expert predictions isn’t limited to pessimists or optimists, liberals or conservatives. It’s also not about a few deluded individuals. Over and over in the history of predictions, it’s not one expert who tries and fails to predict the future. It’s whole legions of experts.

Paul Ehrlich’s bleak vision in The Population Bomb was anything but that of a lone crank. Countless experts made similar forecasts in the 1950s and 1960s. In 1967, the year before Ehrlich’s book appeared, William and Paul Paddock – one an agronomist, the other a foreign service officer – published a book whose title said it all: Famine 1975! When biologist James Bonner reviewed the Paddocks’ book in the journal Science, he emphasized that “all serious students of the plight of the underdeveloped nations agree that famine among the peoples of the underdeveloped nations is inevitable.” The only question was when. “The U.S. Department of Agriculture, for example, sees 1985 as the beginning of the years of hunger. I have guessed publicly that the interval 1977–1985 will bring the moment of truth, will bring a dividing point at which the human race will split into the rich and the poor, the well-fed and the hungry – two cultures, the affluent and the miserable, one of which must inevitably exterminate the other. . . . I stress again that all responsible investigators agree that the tragedy will occur.”

There was also an expert consensus in support of Jimmy Carter’s prediction of perpetually rising oil prices. And Jacques Attali’s belief that Japan and Europe would eclipse the United States and dominate the world economy in the twenty-first century was standard stuff among strategic thinkers. As for Jonathan Schell’s fear that Ronald Reagan’s policies would plunge the world into a nuclear inferno, it dominated university faculties and brought millions of protestors to the streets. And, as easy as it is to forget now, support for the invasion of Iraq was widespread among foreign policy analysts and politicians, most of whom were confident weapons of mass destruction would be uncovered and American forces greeted as liberators.

We are awash in predictions. In newspapers, blogs, and books, on radio and television, every day, without fail, experts tell us how the economy will perform next year or whether a foreign conflict will flare into war. They tell us who will win the next election, and whether the price of oil will rise or fall, housing sales will grow or shrink, stock markets will soar or dive. Occasionally, the experts lift their eyes to more distant horizons. I recently read a cover story in Time magazine that claimed the first ten years of the twenty-first century were “the decade from hell” and went on to explain “why the next one will be better.” But what made the “decade from hell” what it was? Events that confounded the expectations of most experts. The 9/11 terrorist attacks. The debacle in Iraq. Hurricane Katrina. The financial crisis of 2008 and the global recession. If the previous decade was shaped by uncertainty and surprise – and no one can seriously argue it was not – why would we expect the next ten years to be so much more predictable? But simple questions like that are seldom asked. Instead, the predictions are churned out, one after another, like widgets on an assembly line. I recently read a description of the Chinese economy in 2040. And American suburbs in 2050. And now I’m reading an article that explains “why Europe will outshine North America in the 21st century.” There are apparently no limits to the vision of these wise men and women. Experts peer into the distant future and warn of great wars and conflicts. They tell us what’s in store for the climate, globalization, food, energy, and technology. They tell us all about the world of our children and our grandchildren. And we listen.

Economists, in particular, are treated with the reverence the ancient Greeks gave the Oracle of Delphi. But unlike the notoriously vague pronouncements that once issued from Delphi, economists’ predictions are concrete and precise. Their accuracy can be checked. And anyone who does that will quickly conclude that economists make lousy soothsayers: “The record of failure to predict recessions is virtually unblemished,” wrote IMF economist Prakash Loungani in one of many papers demonstrating the near-universal truth that economists’ predictions are least accurate when they are most needed. Not even the most esteemed economists can claim significant predictive success. Retired banker and financial writer Charles Morris examined a decade’s worth of forecasts issued by the brilliant minds who staff the White House’s Council of Economic Advisors. Morris started with the 1997 forecast. There would be modest growth, the council declared; at the end of the year, the American economy had grown at a rate more than double the council’s forecast. In 1998, the story was much the same. And in 1999. In 2000, the council “sharply raised both their near- and medium-term outlooks – just in time for the dot-com bust and the 2001–2002 recession.” The record for the Bush years was “no better,” Morris writes. But it was the forecast for 2008 that really amazes: “The 2008 report expected slower but positive growth in the first half of the year, as investment shifted away from housing, but foresaw a nice recovery in the second half, and a decent year overall. Their outlook for 2009 and 2010 was for a solid three per cent real growth with low inflation and good employment numbers,” Morris writes. “In other words, they hadn’t a clue.”

And they weren’t alone. With very few exceptions, economists did not foresee the financial and economic meltdown of 2008. Many economists didn’t recognize the crisis for what it was even as it was unfolding. In December 2007 – months after the credit crunch began and the very moment that would officially mark the beginning of the recession in the United States – BusinessWeek magazine ran its annual chart of detailed forecasts for the year ahead from leading American analysts. Under the headline A Slower But Steady Economy, every one of fifty-four economists predicted the U.S. economy wouldn’t “sink into a recession” in 2008. The experts were unanimous that unemployment wouldn’t be too bad, either, leading to the consensus conclusion that 2008 would be a solid but unspectacular year. One horrible year later – as people watching the evening news experienced the white-knuckle fear of passengers in a plunging jet – BusinessWeek turned to the economists who had so spectacularly blown that year’s forecast and asked them to tell its readers what would happen in 2009. There was no mention of the previous year’s fiasco, only another chart filled with reassuringly precise numbers. The headline: A Slower But Steady Economy.

By definition, experts know much about their field of expertise. Economists can – usually – look around and tell us a great deal about the economy, political scientists can do the same for politics and government, ecologists for the environment, and so on. But the future? All too often, their crystal balls work no better than those of fortune tellers. And since rational people don’t take seriously the prognostications of Mysterious Madam Zelda or any psychic, palm reader, astrologer, or preacher who claims to know what lies ahead, they should be skeptical of expert predictions. And yet we are not skeptical. No matter how often expert predictions fail, we want more. This strange phenomenon led Scott Armstrong, an expert on forecasting at the Wharton School of the University of Pennsylvania, to coin his “seer-sucker” theory: “No matter how much evidence exists that seers do not exist, suckers will pay for the existence of seers.” Sometimes we even go back to the very people whose predictions failed in the past and listen, rapt, as they tell us how the future will unfold.

This book explains why expert predictions fail and why we believe them anyway.

The first part of the answer lies in the nature of reality and the human brain. The world is complicated – too complicated to be predicted. And while the human brain may be magnificent, it is not perfect, thanks to a jumble of cognitive wiring that makes systematic mistakes. Try to predict an unpredictable world using an error-prone brain and you get the gaffes that litter history.

As for why we believe expert predictions, the answer lies ultimately in our hard-wired aversion to uncertainty. People want to know what’s happening now and what will happen in the future, and admitting we don’t know can be profoundly disturbing. So we try to eliminate uncertainty however we can. We see patterns where there are none. We treat random results as if they are meaningful. And we treasure stories that replace the complexity and uncertainty of reality with simple narratives about what’s happening and what will happen. Sometimes we create these stories ourselves, but, even with the human mind’s bountiful capacity for self-delusion, it can be hard to fool ourselves into thinking we know what the future holds for the stock market, the climate, the price of oil, or a thousand other pressing issues. So we look to experts. They must know. They have Ph.D.s, prizes, and offices in major universities. And thanks to the news media’s preference for the simple and dramatic, the sort of expert we are likely to hear from is confident and conclusive. They know what will happen; they are certain of it. We like that because that is how we want to feel. And so we convince ourselves that these wise men and women can do what wise men and women have never been able to do before. Fundamentally, we believe because we want to believe.

We need to see this trap for what it is, especially at this moment. Over the last several years, we have experienced soaring prices for commodities, food shortages, talk of an “age of scarcity,” the bursting of a real estate bubble that ruined millions of middle-class home owners, growing evidence of environmental catastrophe, a financial crisis that upset conventional economic wisdom, and a global economic recession the like of which has not been seen since the Second World War. Uncertainty? The air is electric with it. It’s precisely in times such as these that the desire to know what the future holds becomes a ravenous hunger. We’ve seen it happen before. The 1970s may be remembered as the era of disco and bad fashion but it was, in reality, a tumultuous and unsettling time that created an enormous demand to know what lay ahead. The result was a profusion of detailed and compelling expert predictions, many of them involving the very same issues – oil, food, terrorism, recession, unemployment, deficits and debt, inflation, environmental crisis, the decline of the United States – we are grappling with today. Most of them turned out to be wrong, some hilariously so. That doesn’t prove that similar predictions in the present will also fall flat, but it does provide a valuable reminder to be skeptical when experts claim to know what lies in our future.

That sort of skepticism doesn’t come easily, but it is possible. As natural as it is to want to hear predictions, and to believe them, we do not have to. With effort, we can learn to accept reality when we do not, and cannot, know what lies ahead.

Of course, that still leaves us with a big problem because, in our lives and businesses, we all have to make plans and forecasts. If the future is unpredictable, doesn’t that mean all our planning and forecasting is pointless? Not if we go about it the right way. Certain styles of thinking and decision making do a far better job of groping amid the inky blackness of the future to find a path ahead. These styles can be learned and applied, with results that are positive, although far from perfect. And that leads to the ultimate conclusion, which is one we do not want to accept but must: There are no crystal balls, and no style of thinking, no technique, no model will ever eliminate uncertainty. The future will forever be shrouded in darkness. Only if we accept and embrace this fundamental fact can we hope to be prepared for the inevitable surprises that lie ahead.


Are Experts Really So Bad?

But now I have to pause and make an admission: My whole argument is based on the belief that expert predictions have a lousy track record. But I haven’t actually proved that, at least not yet.

So far, I’ve presented a number of expert predictions that failed. Or rather, I’ve presented a number of expert predictions that I think failed. But not everyone would agree. Many people insist even today that Paul Ehrlich was essentially on the mark in The Population Bomb. One of those people is Paul Ehrlich. In a 2009 essay, Ehrlich acknowledged that the book “underestimated the impact of the Green Revolution” and so the starvation he expected wasn’t as bad as he predicted. But the book’s grim vision was basically accurate, he insisted. In fact, its “most serious flaw” was that it was “much too optimistic about the future.”

I’ll take a closer look at Ehrlich’s defence of The Population Bomb later. What matters here is that the failure of Ehrlich’s prediction is disputed, and untangling that dispute is complicated. That’s typical because expert predictions are common and so are failed predictions. But experts who agree that their predictions failed are rare. As Paul Ehrlich did, they will often concede that they were off on some details here and there. But flat-out wrong? No. Never. Unless pinned down by circumstances as firmly as a butterfly in a display case, they will resolutely deny being wrong.

“I was almost right” is a standard dodge. Another is “It would have happened if I hadn’t been blindsided by an unforeseeable event.” And then there is the claim that the prediction was a “self-negating prophecy,” that it caused others to act and it was those actions that prevented the predicted event from happening. Remember Y2K? The more excitable experts claimed the world’s computers would crash on January 1, 2000, and take civilization with them. When nothing remotely like that happened, the doomsters boasted that their predictions had prompted massive remediation efforts that had saved humanity from certain doom: You’re welcome.

Another mental manoeuvre is the wait-and-see twist. Many predictions have only vague time frames and so, when an observer thinks a forecast has failed, the expert can insist time isn’t up yet: Wait and see. A variation on this is the off-on-timing gambit, which is used when a prediction comes with a clear time frame and the prediction clearly fails within the allotted time: The expert grudgingly concedes that the predicted event hasn’t happened within the time frame but he insists that’s a minor detail. What matters is that the prediction will come to pass. Eventually. Some day. Paul Ehrlich, for example, acknowledges that the famines he predicted for the 1970s didn’t happen, at least not to the extent he expected, but he insists that his analysis was sound and the disasters he foresaw are still coming. “The probability of a vast catastrophe looms steadily larger,” he wrote in 2009, forty-one years after The Population Bomb warned of imminent peril. Similarly, when a journalist reminded Richard Perle in 2008 that five years earlier he had predicted a grand square in Baghdad would be named after George W. Bush within one year, Perle didn’t respond with a forthright admission of error. Instead, he insisted Bush could still get his Baghdad square. It would just take a little longer than anticipated.

A third defence involves carefully parsing the language of the forecast so that a statement that was intended to be a rock-solid prediction that Event X would happen – and is taken that way by the media and the public – is shown to be much more elastic. “I didn’t actually say Event X would certainly happen,” the expert explains. “I said ‘It could happen.’” And implicit in the phrase “could happen” is the possibility that the predicted event may not happen. Thus, the fact that the event did not happen does not mean the prediction was “wrong.” This line of reasoning is often heard from liberal experts who claimed, in the early 1980s, that Ronald Reagan’s policies put the world in danger. Very few said nuclear war was “inevitable.” They only said Reagan’s belligerence made war more likely. Does the fact that there was no war prove they were wrong? Not at all, they say. It’s like a weather forecaster who says there is a 70 per cent chance of rain. He can’t be blamed if the sun shines because an implicit part of his forecast was “30 per cent chance of sunshine.” People should just be glad they got lucky.

Obviously, I’m being a little sarcastic here because these arguments are often weak and self-serving. But not always. Sometimes there is real substance in them and they have to be taken seriously. Predictions about the damage a widening hole in the ozone layer would do, for example, did cause governments to make policy changes that would ensure the predictions did not come true: That’s a genuine “self-negating prophecy.” It is also undeniably true to say, as Paul Ehrlich does, that the failure of population growth to cause famines in the 1970s does not prove population growth will not cause famines sometime in the future. And the fact that there was no nuclear war in the 1980s really does not prove that Ronald Reagan’s policies did not raise the risk of war.

Put all this together and it means there are substantial question marks over many of the failed predictions I presented. Did they really fail? I think so. But reasonable people can and do disagree. Sorting out who’s right isn’t easy. Different observers will come to different conclusions. In some cases, the truth may never be known.

And there’s an even bigger objection that can be raised to my claim about the fallibility of expert predictions: Even if we accept that my examples of failed predictions really are failed predictions, they don’t actually prove that expert predictions routinely fail. They only prove that some expert predictions have failed. Even if I were to stuff whole chapters with examples, all I would prove is that many expert predictions have failed. What would be missing is what’s needed to prove my point: the rate of failure. If, say, 99 out of 100 predictions fail, we would probably be better off consulting fortune cookies. But if one in 100 fails, expert predictions really should be treated with hushed reverence.

So how do we figure out the rate of failure? The first thing we would have to do is expand our inquiry beyond the misses to the hits. And there are hits. Here’s one: In 1981, energy expert Amory Lovins predicted that sometime between 1995 and 2005 the world would see “the effective collapse of the Soviet Union from internal political stress.” That’s pretty impressive, and there are plenty of others like it in the pages of books, magazines, and journals.

But still I wouldn’t be able to prove much. What’s the total number of experts I would be examining? What’s the total number of predictions they made? Over what period? Simply adding the hits and misses I collected wouldn’t tell me any of that and so I still wouldn’t know the rate at which predictions fail.

And if that’s not complicated enough, there’s another frustrating problem to contend with: Imagine someone who throws a dart and – smack! – he hits the bull’s eye. Does that prove he is a great dart thrower? Maybe. But there’s no way to be sure based on that one dart. If he throws a second, third, and fourth dart and they all hit the bull’s eye, it’s increasingly reasonable to think we are witnessing skill, not luck. But what if he throws dozens more darts and not one hits the bull’s eye? What if he often misses the board entirely? What if this person leaves darts scattered around the room, even a few stuck in the ceiling? In that case, his bull’s eye is probably a fluke. Amory Lovins’s amazing prediction about the Soviet Union is a case in point. It was only one of dozens of predictions Lovins made in the same forecast and almost all the others completely missed the board. (By the end of the 1980s, Lovins predicted, nuclear power programs would “persist only in dictatorships,” oil and gas would be scarce and fantastically expensive, unemployment would be high and persistent, the unreliability of food supplies in American cities would give rise to “urban farming and forestry” . . . and so on.) Once you know that, you know it probably wasn’t keen geopolitical insight that produced Lovins’s bull’s eye. It was luck.

At this point, I suspect, your head is swimming. That’s the point, I’m afraid. Figuring out how good experts are at predicting the future seems like a simple task but if we take logic and evidence seriously, it’s actually very difficult.

The media have occasionally taken a stab at sorting this out. In 1984, the Economist asked sixteen people to make ten-year forecasts of economic growth rates, inflation rates, exchange rates, oil prices, and other staples of economic prognostication. Four of the test subjects were former finance ministers, four were chairmen of multinational companies, four were economics students at Oxford University, and four were, to use the English vernacular, London dustmen. A decade later, the Economist reviewed the forecasts and discovered they were, on average, awful. But some were more awful than others: The dustmen tied the corporate chairmen for first place, while the finance ministers came last. Many other publications have conducted similar exercises over the years, with similarly humiliating results. The now-defunct magazine Brill’s Content, for one, compared the predictions of famous American pundits with a chimpanzee named Chippy, who made his guesses by choosing among flashcards. Chippy consistently matched or beat the best in the business.

As suggestive and entertaining as these stunts are, they are not, to say the least, scientifically rigorous. Rising to that level requires much more: It requires an experiment that is elaborate, expensive, and exhausting.


The Experiment

The first thing the experiment needs is a very large group of experts. The group should be as diverse as possible, with experts from different fields, different political leanings, different institutional affiliations, and different backgrounds. At the very beginning of the experiment, the experts should answer a battery of questions designed to test political orientation, world view, personality, and thinking style.

The experts must be asked clear questions whose answers can later be shown to be indisputably true or false. That means vague pronouncements about “weakening state authority” or “growing public optimism” won’t do. Even a question like “Will relations between India and Pakistan be increasingly strained?” – which is the standard language of TV pundits – isn’t good enough. Questions have to be so precise that no reasonable person would argue about what actually happened – which means asking questions like “Will the official unemployment rate be higher, lower, or the same a year from now?” and “Will India and Pakistan go to war within the next five years?”

For each prediction, experts must state how likely they think it is to actually happen. If they are dead certain something will happen, that is a 100 per cent probability. If they are sure it won’t happen, it’s a zero per cent probability. In between these extremes, experts will be required to attach precise percentages to guesses rather than use vague terms like “improbable” or “very likely.” There’s no room for fudging when someone says, “There is a 30 per cent chance India and Pakistan will go to war within the next five years.”

The experiment must obtain a very large number of predictions from each expert in order to allow statistical analysis that can expose lucky hits for what they are. It also allows us to get past the problem of judging predictions in which the expert says the chance of something happening is, for example, “70 per cent.” If the expert is perfectly accurate, then a broad survey of his predictions will show that in 70 per cent of the cases in which he said there was a 70 per cent chance of something happening, it actually happened. Similarly, 60 per cent of the outcomes said to have a 60 per cent chance of happening should have happened. This measure of accuracy is called “calibration.”

But there’s more to the story than calibration. After all, someone who sat on the fence with every prediction – “Will it happen? I think the odds are 50/50” – would likely wind up with a modestly good calibration score. We can get predictions like that from a flipped coin. What we want in a forecaster, ideally, is someone with a godlike ability to predict the future. The gods don’t bother with middling probabilities and they certainly don’t say, “The odds are 50/50.” The gods say, “This will certainly happen” or “This is impossible.” So there must be a second measure of accuracy to go along with calibration. Experts should be scored by confidence. This means that an expert who said there is a 100 per cent chance of something happening that actually did happen would score more points than another expert who had said there was only a 70 per cent chance of it happening. This measure is called “discrimination.”

A third measure must also be generated by answering the same questions that are put to the experts using a variety of simple and arbitrary rules. For example, there is the “no change” rule: No matter what the question is, always predict there will be no change. These results will create benchmarks against which the experts’ results can be compared.
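To make these yardsticks concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy construction, not Tetlock’s actual scoring procedure; the function names, grouping-by-stated-probability approach, and sample numbers are all invented for illustration. It checks calibration by comparing stated probabilities with observed frequencies, uses a simple confidence-weighted score in the spirit of discrimination, and scores the mindless “no change” rule as a benchmark.

    # Illustrative sketch only, not the actual scoring used in the experiment.
    from collections import defaultdict

    def calibration_table(forecasts):
        """forecasts: list of (stated_probability, outcome) pairs, where outcome
        is 1 if the predicted event happened and 0 if it did not. Returns, for
        each stated probability, how often those events actually happened; a
        well-calibrated forecaster's 70% calls come true about 70% of the time."""
        buckets = defaultdict(list)
        for prob, outcome in forecasts:
            buckets[round(prob, 1)].append(outcome)
        return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

    def confidence_score(forecasts):
        """Crude stand-in for 'discrimination' as described above: a correct call
        made at 100% earns more than one made at 70%, and a confident miss is
        penalized. Scores run from 0 (always confidently wrong) to 1 (always
        confidently right); permanent fence-sitting at 50/50 scores 0.5."""
        return sum(p if hit else 1 - p for p, hit in forecasts) / len(forecasts)

    def no_change_benchmark(no_change_outcomes):
        """The mindless rule from the text: always predict 'no change' with full
        confidence. no_change_outcomes holds 1 wherever 'no change' came true."""
        return confidence_score([(1.0, hit) for hit in no_change_outcomes])

    # Toy usage with made-up numbers.
    expert = [(0.9, 1), (0.7, 1), (0.7, 0), (0.5, 0), (0.2, 0)]
    print(calibration_table(expert))             # stated probability vs. observed frequency
    print(confidence_score(expert))              # the expert's confidence-weighted accuracy
    print(no_change_benchmark([1, 1, 0, 1, 1]))  # what the mindless rule scores on the same events

A timid forecaster who always said “50/50” would look respectable on the first measure and mediocre on the second, which is why both measures, plus the benchmark rules, are needed.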

And finally, the experiment must continue over the course of many years. That will allow for questions involving time frames ranging from the short term – one to two years – to longer-term predictions covering five, ten, even twenty years, ensuring that the experiment will require experts to make predictions in times of stability and surprise, prosperity and recession, peace and war. When the passage of time has revealed the correct answers, the experts should be shown how well they did and be given the opportunity to explain the results.

It’s difficult to exaggerate how demanding this experiment would be. It would be expensive, complicated, and require the patience of Job. But most of all, it would require a skilled and devoted researcher prepared to give a big chunk of his life to answering one question: How accurate are expert predictions?

Fortunately, there is such a researcher. He is Philip Tetlock.

Today, Tetlock is a much-honoured psychologist at the University of California’s Haas School of Business. In 1984, he was a newly tenured academic who had just been appointed to a new committee of the National Research Council, a branch of the National Academy of Sciences, arguably the most prestigious scientific body in the world. The committee’s remit was nothing less than figuring out how social scientists could help avoid nuclear war and the end of civilization.

“It’s hard to recreate the tenor of the times,” Tetlock recalls, “but there was a lot of uneasiness.” It was the height of the “second” Cold War. The Reagan White House was stockpiling nuclear weapons, the Red Army was fighting CIA-backed guerrillas in Afghanistan, and the death of Leonid Brezhnev had put the Soviet regime into transition, though to what no one could be sure. Watching television in living rooms across the United States, Americans were shocked when the evening news reported a Soviet fighter jet had shot down a Korean Airlines passenger jet that had strayed into Soviet airspace; then they were terrified by a made-for-TV drama, The Day After, about the ash and tears of life following a nuclear exchange.

At this perilous moment, the committee brought together an array of renowned social scientists, along with one junior professor from the University of California. “I mostly sat at the table and listened very quietly to the arguments going back and forth,” Tetlock says. “The liberals and conservatives in particular had very different assessments of the Soviet Union. The conservative view as of 1984 was not that they could bring the Soviet Union down, but that they could effectively contain and deter it. Whereas the liberal view of the Soviet Union was that the conservatives [in the White House] were increasing the influence of the hardliners in the Kremlin and that they were going to trigger a neo-Stalinist retrenchment.” Tetlock started tracking down and interviewing respected experts – in universities, governments, think-tanks, and the media – about the current situation and where it was headed. With a good sense of the prevailing expert opinions, he waited to see what the future would bring.

He didn’t have to wait long. In March 1985, Mikhail Gorbachev took control of the Soviet Union and dramatic liberalization followed. Neither liberals nor conservatives had expected that. But neither side took the surprise as evidence that their understanding of the situation was flawed or incomplete. Instead, they saw it as proof they had been right all along. “The conservatives argued that we forced the Soviets’ hand, that we compelled these dramatic concessions in the late eighties,” Tetlock says. “Whereas the liberal view is that the Soviet elite had learned from the failings of the economy [and that] if anything, Reagan had slowed down that process of learning and change.”

It was hard to avoid the suspicion that what did or did not happen was almost irrelevant. The experts had their stories and they were sticking to them. “Each side was very well prepared to explain whatever happened,” Tetlock says. “I found that puzzling and intriguing and worth pursuing.”

And so Tetlock designed, prepared, and launched the massive experiment described above.

Scouring his multidisciplinary networks, Tetlock recruited 284 experts – political scientists, economists, and journalists – whose jobs involve commenting or giving advice on political or economic trends. All were guaranteed anonymity because Tetlock didn’t want anyone feeling pressure to conform or worrying about what their predictions would do to their reputations. With names unknown, all were free to judge as best they could.

Then the predictions began. Over many years, Tetlock and his team peppered the experts with questions. In all, they collected an astonishing 27,450 judgements about the future. It was by far the biggest exercise of its kind ever, and the results were startlingly clear.

On “calibration,” the experts would have been better off making random guesses. Tetlock puts it a little more acidly: The experts would have been beaten by “a dart-throwing chimpanzee,” he says. On “discrimination,” however, the experts did a little better. They were still terrible, but not quite so terrible. When the scores for “calibration” and “discrimination” were combined, the experts beat the chimp by a whisker. Technically, at least. In practical terms, that whisker is irrelevant. The simple and disturbing truth is that the experts’ predictions were no more accurate than random guesses.

Astrologers and psychics can make random guesses as well as Harvard professors, so it’s hard not to look at these results and conclude that those who seek forecasts of the future would be well-advised to consult fortune cookies or the Mysterious Madam Zelda. They’re cheaper. And you can eat a fortune cookie.

But that’s not Philip Tetlock’s conclusion. Serious skepticism about the ability of experts to predict the future is called for, he says. But just as important as the dismal collective showing of experts in his experiment is the wide variation among individual experts. “There’s quite a range. Some experts are so out of touch with reality, they’re borderline delusional. Other experts are only slightly out of touch. And a few experts are surprisingly nuanced and well-calibrated.”

What distinguishes the impressive few from the borderline delusional is not whether they’re liberal or conservative. Tetlock’s data showed political beliefs made no difference to an expert’s accuracy. The same was true of optimists and pessimists. It also made no difference if experts had a doctorate, extensive experience, or access to classified information. Nor did it make a difference if experts were political scientists, historians, journalists, or economists.

What made a big difference is how they think.

Experts who did particularly badly – meaning they would have improved their results if they had flipped a coin through the whole exercise – were not comfortable with complexity and uncertainty. They sought to “reduce the problem to some core theoretical theme,” Tetlock says, and they used that theme over and over, like a template, to stamp out predictions. These experts were also more confident than others that their predictions were accurate. Why wouldn’t they be? They were sure their One Big Idea was right and so the predictions they stamped out with that idea must be, too.

Experts who did better than the average of the group – and better than random guessing – thought very differently. They had no template. Instead, they drew information and ideas from multiple sources and sought to synthesize it. They were self-critical, always questioning whether what they believed to be true really was. And when they were shown that they had made mistakes, they didn’t try to minimize, hedge, or evade. They simply acknowledged they were wrong and adjusted their thinking accordingly. Most of all, these experts were comfortable seeing the world as complex and uncertain – so comfortable that they tended to doubt the ability of anyone to predict the future. That resulted in a paradox: The experts who were more accurate than others tended to be much less confident that they were right.

In a famous essay, the political philosopher Isaiah Berlin recalled a fragment of an ancient Greek poem. “The fox knows many things,” the warrior-poet Archilochus wrote, “but the hedgehog knows one big thing.” In Berlin’s honour, Tetlock dubbed his experts “foxes” and “hedgehogs.”

Foxes beat hedgehogs. Tetlock’s data couldn’t be more clear. On both calibration and discrimination, complex and cautious thinking trounced simple and confident. By cross-checking other factors in the data, Tetlock also found that hedgehogs who are ideologically extreme are even worse forecasters than others of their kind. He even found that when hedgehogs made predictions involving their particular specialty, their accuracy declined. And it got worse still when the prediction was for the long term.

Put all that together and there’s a very clear lesson: If you hear a hedgehog make a long-term prediction, it is almost certainly wrong. Treat it with great skepticism. That may seem like obscure advice but take a look at the television panels, magazines, books, newspapers, and blogs where predictions flourish. The sort of expert typically found there is the sort who is confident, clear, and dramatic. The sort who delivers quality sound bites and compelling stories. The sort who doesn’t bother with complications, caveats, and uncertainties. The sort who has One Big Idea. Yes, the sort of expert typically found in the media is precisely the sort of expert who is most likely to be wrong. This explains one of the most startling findings to emerge from Philip Tetlock’s data: The bigger the media profile of an expert, the less accurate his predictions are.

Paul Ehrlich is a hedgehog. So are many of the other famous experts whose failed predictions I mentioned earlier. That’s not a coincidence. “There is a serious problem with overconfidence in many experts,” Tetlock concludes. “But with the proper interventions and proper encouragement, people can be induced to become more self-critical, thoughtful, and foxlike.” And better able to see what’s coming.

But only to a modest extent, I’m afraid. I wish it were otherwise but reality is stubborn. As delightful as it would be to think we could train ourselves to predict the future with ease and accuracy, even the predictions of the wisest foxes in Tetlock’s experiment were miles from perfect. In fact, predictions made by applying mindless rules such as “always predict no change” beat not only the hedgehogs in the experiment but the foxes as well. No matter how clever we are, no matter how sophisticated our thinking, the brain we use to make predictions is flawed and the world is fundamentally unpredictable.

In Dante Alighieri’s vision of hell, fortune tellers and diviners are condemned to spend eternity with their heads twisted backwards, unable to see ahead, as they had tried to do in life. This seems a little harsh. We all want to see the future. It’s human nature. But it is also within us to understand what we can and cannot do – and to know that although attempting to do what we cannot do may not be a mortal sin worthy of eternal damnation, it is folly.

Editorial Reviews

“It’s rare for a book on public affairs to say something genuinely new, but Future Babble is genuinely arresting, and should be required reading for journalists, politicians, academics, and anyone who listens to them. Mark my words: if Future Babble is widely read, then within 3.7 years the number of overconfident predictions by self-anointed experts talking through their hats will decline by 46.2%, and the world will become no less than 32.1% wiser.”
– Steven Pinker, Harvard College Professor of Psychology, Harvard University, and author of How the Mind Works and The Stuff of Thought
“Well-researched, well-reasoned, and engagingly written. I’m not making any predictions, but we can only hope that this brilliant book will shock the human race, and particularly the chattering expert class, into a condition of humility about proclamations about the future.”
– John Mueller, author of Overblown and Political Scientist, Ohio State University

“As Yogi Berra observed, 'it's tough to make predictions, especially about the future.' In this brilliant and engaging book, Dan Gardner shows us how tough forecasting really is, and how easy it is to be convinced otherwise by a confident expert with a good story. This is must reading for anyone who cares about the future.”
– Paul Slovic, Professor of Psychology, University of Oregon

“If you are paying a lot of money for forecasting services – be they crystal ball gazers or math modelers or something in between – put your orders on hold until you have had a chance to read this book – a rare mix of superb scholarship and zesty prose. You may want to cancel, or at least re-negotiate the price. For the rest of us who are just addicted to what experts are telling us everyday in every kind of media about what the future holds, Future Babble will show you how to be a bit smarter than what you usually hear.”
– Philip Tetlock, author of Expert Political Judgment and Mitchell Professor of Organizational Behavior, Haas School of Business, University of California
