July 2, 2009

Making Luck Work For You

Booz & Co. has an interesting piece on the value of experts. Not what you might expect from a consulting firm, and that’s just the point: you may not know what you think you know. If you’re really smart, you’ll keep that in mind. Call it enlightened self-doubt. That doesn’t mean that experts, scientists, and teachers don’t know what they’re talking about. It just means that you (and they) have to be careful not to overplay your intellectual hand.

Nassim Nicholas Taleb, author of The Black Swan: The Impact of the Highly Improbable, introduces an engaging lesson in business forecasting from Dance with Chance: Making Luck Work for You, by Spyros Makridakis, Robin Hogarth, and Anil Gaba.

Just about every human decision about the future is tainted by a gap — the difference between what we think we know and what we actually know. The more expert we are, the wider the gap is likely to be. The story below, an excerpt from the book Dance with Chance, is a classic example of an expert-busting enterprise. The experts in this case are highly sophisticated statisticians and professors who are using up-to-date models. Most of us assume that sophistication helps us understand the future. But in fact, sophistication makes things worse; it invites misplaced focus on the complicated. In the end, if we expect to make better decisions we need to look for experts who understand the limits of what they know and the relative value of simpler methods.
— Nassim Nicholas Taleb

Excerpted from chapter 9 of
Dance with Chance: Making Luck Work for You

As an expert in statistics, working in a business school during the 1970s, one of the authors…couldn’t fail to notice that executives were deeply preoccupied with forecasting. Their main interest lay in various types of business and economic data: the sales of their firm, its profits, exports, exchange rates, house prices, industrial output…and a host of other figures. It bugged the professor greatly that practitioners were making these predictions without recourse to the latest, most theoretically sophisticated methods developed by statisticians like himself. Instead, they preferred simpler techniques which — they said — allowed them to explain their forecasts more easily to senior management. The outraged author decided to teach them a lesson. He embarked on a research project that would demonstrate the superiority of the latest statistical techniques. Even if he couldn’t persuade business people to adopt his methods, at least he’d be able to prove the precise cost of their attempts to please the boss.

Horror of horrors, the practitioners’ simple, boss-pleasing techniques turned out to be more accurate than the statisticians’ clever, statistically sophisticated methods. To be honest, neither [was] particularly great, but there was no doubt that the statisticians had served themselves a large portion of humble pie.

At first, the professor and his assistant were so taken aback by their results that they suspected they’d made a mistake. They made extensive checks on their own calculations but could find no errors at all. The initial shock now over, they began to cheer up. If there’s one thing that makes up for an academic proving himself wrong, it’s the opportunity to show that other eminent authorities are wrong too. So the professor submitted a paper on his surprising and important findings to a prestigious, learned journal and waited for the plaudits to start rolling in. This in itself turned out to be another forecasting error. The paper was rejected on the grounds that the results didn’t square with statistical theory! Fortunately, another journal did decide to publish the paper, but they insisted on including comments from the leading statisticians of the day. The experts were not impressed. Among the many criticisms was a suggestion that the poor performance of the sophisticated methods was due to the inability of the author to apply them properly.

Undaunted, the valiant statistician and his faithful assistant set out to prove their critics wrong. This time around they collected and made forecasts for even more sets of data (1,001 in total, as computers were much faster by this time), from the worlds of business, economics and finance. As before, the series were separated into two parts: the first used to develop forecasting models and make predictions; and the second used to measure the accuracy of the various methods. But there was a new and cunning plan. Instead of doing all the work himself, the author asked the most renowned experts in their fields — both academics and practitioners — to forecast the 1,001 series. All in all, fourteen experts participated and compared the accuracy of seventeen methods.
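
The excerpt never shows the mechanics, but the fit-then-score protocol it describes is simple to sketch. The following is a minimal, hypothetical Python illustration, not code from the book or the competitions: a synthetic series is split in two, a deliberately simple method (repeat the last value) and a more elaborate one (an autoregressive model) are built on the first part, and both are scored on the held-out second part. The function names, the AR(6) stand-in for a "sophisticated" method, and the MAPE accuracy measure are all assumptions made for illustration.

    import numpy as np

    def mape(actual, forecast):
        # Mean absolute percentage error. The competitions used several
        # accuracy measures; picking MAPE here is an assumption.
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        return np.mean(np.abs((actual - forecast) / actual)) * 100

    def naive_forecast(train, horizon):
        # The simplest possible method: repeat the last observed value.
        return np.full(horizon, train[-1])

    def ar_forecast(train, horizon, order=6):
        # A more "sophisticated" method: an AR(order) model fit by least
        # squares, forecasting recursively over the horizon.
        X = np.column_stack([train[i:len(train) - order + i] for i in range(order)])
        y = train[order:]
        A = np.column_stack([np.ones(len(y)), X])
        coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
        history = list(train)
        preds = []
        for _ in range(horizon):
            lags = np.array(history[-order:])
            preds.append(coefs[0] + lags @ coefs[1:])
            history.append(preds[-1])
        return np.array(preds)

    # Synthetic stand-in for one economic series: trend plus noise.
    rng = np.random.default_rng(42)
    series = 100 + 0.5 * np.arange(120) + rng.normal(0, 5, size=120)

    train, test = series[:100], series[100:]   # first part: build the models
    horizon = len(test)                        # second part: measure accuracy

    print(f"naive MAPE: {mape(test, naive_forecast(train, horizon)):.2f}%")
    print(f"AR(6) MAPE: {mape(test, ar_forecast(train, horizon)):.2f}%")

Run the same comparison across many series and average the errors, and you have the skeleton of the study described above.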

This time, there were no bad surprises for the professor. The findings were exactly the same as in his previous research. Simpler methods were at least as accurate as their complex and statistically sophisticated cousins. The only difference was that there were no experts to criticize, as most of the world’s leading authorities had taken part.

That was way back in 1982. Since then, the author has organized two further forecasting “competitions” to keep pace with new developments and eliminate the new criticisms that academics have ingeniously managed to concoct. The latest findings, published in 2000, consisted of 3,003 economic series, an expanding range of statistical methods, and a growing army of experts. However, the basic conclusion — supported by many other academic studies over the past three decades — remains steadfast. That is, when forecasting, always use the KISS principle: Keep It Simple, Statistician.

— Spyros Makridakis, Robin Hogarth, and Anil Gaba
