Book Reviews


The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Politics, Business, Economies and Culture by James Surowiecki

 

Rating: (Outstanding book; read it now)

 


Collective

Would you believe that groups are often smarter than the smartest people in them? I didn’t until I read James Surowiecki’s new book, The Wisdom of Crowds. My experience has been that groups acting in concert tend to “dumb down” the results. Surowiecki lays out a different case, backed up by lots of facts and examples. Do you think compromise and consensus produce the best results? According to Surowiecki, “…the best way for a group to be smart is for each person in it to think and act as independently as possible.” Four conditions characterize wise crowds: diversity of opinion, independence, decentralization, and aggregation. In The Wisdom of Crowds, Surowiecki explores each of these characteristics, explaining how each works best and how each can lead to trouble. Here’s an excerpt from Chapter 2, Section III that may revise your thinking about experts:

 

The fact that cognitive diversity matters does not mean that if you assemble a group of diverse but thoroughly uninformed people, their collective wisdom will be smarter than an expert’s. But if you can assemble a diverse group of people who possess varying degrees of knowledge and insight, you’re better off entrusting it with major decisions rather than leaving them in the hands of one or two people, no matter how smart those people are. If this is difficult to believe—in the same way that March’s assertions are hard to believe—it’s because it runs counter to our basic intuitions about intelligence and business. Suggesting that the organization with the smartest people may not be the best organization is heretical, particularly in a business world caught up in a ceaseless “war for talent” and governed by the assumption that a few superstars can make the difference between an excellent and a mediocre company. Heretical or not, it’s the truth: the value of expertise is, in many contexts, overrated.

Now, experts obviously exist. The play of a great chess player is qualitatively different from the play of a merely accomplished one. The great player sees the board differently, he processes information differently, and he recognizes meaningful patterns almost instantly. As Herbert A. Simon and W. G. Chase demonstrated in the 1970s, if you show a chess expert and an amateur a board with a chess game in progress on it, the expert will be able to re-create from memory the layout of the entire game. The amateur won’t. Yet if you show that same expert a board with chess pieces irregularly and haphazardly placed on it, he will not be able to re-create the layout. This is impressive testimony to how thoroughly chess is imprinted on the minds of successful players. But it also demonstrates how limited the scope of their expertise is. A chess expert knows about chess, and that’s it. We intuitively assume that intelligence is fungible, and that people who are excellent at one intellectual pursuit would be excellent at another. But this is not the case with experts. Instead, the fundamental truth about expertise is that it is, as Chase has said, “spectacularly narrow.”

More important, there’s no real evidence that one can become expert in something as broad as “decision making” or “policy” or “strategy.” Auto repair, piloting, skiing, perhaps even management: these are skills that yield to application, hard work, and native talent. But forecasting an uncertain future and deciding the best course of action in the face of that future are much less likely to do so. And much of what we’ve seen so far suggests that a large group of diverse individuals will come up with better and more robust forecasts and make more intelligent decisions than even the most skilled “decision maker.”

We’re all familiar with the absurd predictions that business titans have made: Harry Warner of Warner Bros. pronouncing in 1927, “Who the hell wants to hear actors talk?,” or Thomas Watson of IBM declaring in 1943, “I think there is a world market for maybe five computers.” These can be written off as amusing anomalies, since over the course of a century, some smart people are bound to say some dumb things. What can’t be written off, though, is the dismal performance record of most experts.

Between 1984 and 1999, for instance, almost 90 percent of mutual-fund managers underperformed the Wilshire 5000 Index, a relatively low bar. The numbers for bond-fund managers are similar: in the most recent five-year period, more than 95 percent of all managed bond funds underperformed the market. After a survey of expert forecasts and analyses in a wide variety of fields, Wharton professor J. Scott Armstrong wrote, “I could find no studies that showed an important advantage for expertise.” Experts, in some cases, were a little better at forecasting than laypeople (although a number of studies have concluded that nonpsychologists, for instance, are actually better at predicting people’s behavior than psychologists are), but above a low level, Armstrong concluded, “expertise and accuracy are unrelated.” James Shanteau is one of the country’s leading thinkers on the nature of expertise, and has spent a great deal of time coming up with a method for estimating just how expert someone is. Yet even he suggests that “experts’ decisions are seriously flawed.”

Shanteau recounts a series of studies that have found experts’ judgments to be neither consistent with the judgments of other experts in the field nor internally consistent. For instance, the between-expert agreement in a host of fields, including stock picking, livestock judging, and clinical psychology, is below 50 percent, meaning that experts are as likely to disagree as to agree. More disconcertingly, one study found that the internal consistency of medical pathologists’ judgments was just 0.5, meaning that a pathologist presented with the same evidence would, half the time, offer a different opinion. Experts are also surprisingly bad at what social scientists call “calibrating” their judgments. If your judgments are well calibrated, then you have a sense of how likely it is that your judgment is correct. But experts are much like normal people: they routinely overestimate the likelihood that they’re right. A survey on the question of overconfidence by economist Terrance Odean found that physicians, nurses, lawyers, engineers, entrepreneurs, and investment bankers all believed that they knew more than they did. Similarly, a recent study of foreign-exchange traders found that 70 percent of the time, the traders overestimated the accuracy of their exchange-rate predictions. In other words, it wasn’t just that they were wrong; they also didn’t have any idea how wrong they were. And that seems to be the rule among experts. The only forecasters whose judgments are routinely well calibrated are expert bridge players and weathermen. It rains on 30 percent of the days when weathermen have predicted a 30 percent chance of rain.
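[A reviewer’s aside, not part of the excerpt: “calibration” is easy to check with data. Here’s a minimal Python sketch, using a made-up forecast record of my own invention, that groups probability forecasts into buckets and compares each bucket’s stated probability with how often the event actually happened. A well-calibrated forecaster’s 30 percent calls should come true about three times in ten, like the weathermen’s rain predictions.]

    from collections import defaultdict

    def calibration_table(forecasts, outcomes):
        """Bucket probability forecasts, then compare each bucket's stated
        probability with the observed frequency of the event."""
        buckets = defaultdict(list)
        for p, happened in zip(forecasts, outcomes):
            buckets[round(p, 1)].append(happened)  # group to nearest 10 percent
        for p in sorted(buckets):
            hits = buckets[p]
            print(f"forecast {p:.0%}: event occurred {sum(hits) / len(hits):.0%} "
                  f"of the time ({len(hits)} forecasts)")

    # Hypothetical record: ten 30-percent calls and ten 70-percent calls.
    forecasts = [0.3] * 10 + [0.7] * 10
    outcomes = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0,   # 3 of 10 came true
                1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # 7 of 10 came true
    calibration_table(forecasts, outcomes)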

Armstrong, who studies expertise and forecasting, summarized the case this way: “One would expect experts to have reliable information for predicting change and to be able to utilize the information effectively. However, expertise beyond a minimal level is of little value in forecasting change.” Nor was there evidence that even if most experts were not very good at forecasting, a few titans were excellent. Instead, Armstrong wrote, “claims of accuracy by a single expert would seem to be of no practical value.” This was the origin of Armstrong’s “seer-sucker theory”: “No matter how much evidence exists that seers do not exist, suckers will pay for the existence of seers.”

Again, this doesn’t mean that well-informed, sophisticated analysts are of no use in making good decisions. (And it certainly doesn’t mean that you want crowds of amateurs trying to collectively perform surgery or fly planes.) It does mean that however well-informed and sophisticated an expert is, his advice and predictions should be pooled with those of others to get the most out of him. (The larger the group, the more reliable its judgment will be.) And it means that attempting to “chase the expert,” looking for the one man who will have the answers to an organization’s problem, is a waste of time. We know that the group’s decision will consistently be better than most of the people in the group, and that it will be better decision after decision, while the performance of human experts will vary dramatically depending on the problem they’re asked to solve. So it is unlikely that one person, over time, will do better than the group.

Now, it’s possible that a small number of genuine experts—that is, people who can consistently offer better judgments than those of a diverse, informed group—do exist. The investor Warren Buffett, who has consistently outperformed the S&P 500 Index since the 1960s, is certainly someone who comes to mind. The problem is that even if these superior beings do exist, there is no easy way to identify them. Past performance, as we are often told, is no guarantee of future results. And there are so many would-be experts out there that distinguishing between those who are lucky and those who are genuinely good is often a near-impossible task. At the very least, it’s a job that requires considerable patience: if you wanted to be sure that a successful money manager was beating the market because of his superior skill, and not because of luck or measurement error, you’d need many years, if not decades, of data. And if a group is so unintelligent that it will flounder without the right expert, it’s not clear why the group would be intelligent enough to recognize an expert when it found him.

We think that experts will, in some sense, identify themselves, announcing their presence and demonstrating their expertise by their level of confidence. But it doesn’t work that way. Strangely, experts are no more confident in their abilities than average people are, which is to say that they are overconfident like everyone else, but no more so. Similarly, there is very little correlation between experts’ self-assessment and their performance. Knowing and knowing that you know are apparently two very different skills.

If this is the case, then why do we cling so tightly to the idea that the right expert will save us? And why do we ignore the fact that simply averaging a group’s estimates will produce a very good result? Richard Larrick and Jack B. Soll suggest that the answer is that we have bad intuitions about averaging. We assume averaging means dumbing down or compromising. When people are faced with the choice of picking one expert or picking pieces of advice from a number of experts, they try to pick the best expert rather than simply average across the group. Another reason, surely, is our assumption that true intelligence resides only in individuals, so that finding the right person—the right consultant, the right CEO—will make all the difference. In a sense, the crowd is blind to its own wisdom. Finally, we seek out experts because we get, as the writer Nassim Taleb asserts, “fooled by randomness.” If there are enough people out there making predictions, a few of them are going to compile an impressive record over time. That does not mean that the record was the product of skill, nor does it mean that the record will continue into the future. Again, trying to find smart people will not lead you astray. Trying to find the smartest person will.
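[Another reviewer’s aside: Larrick and Soll’s point about averaging can be made concrete with a small simulation of my own, using assumed skill levels rather than anything drawn from their research. Even when one judge in a group really is better than the rest, picking the apparent star from a short track record often loses to simply averaging everyone.]

    import random

    random.seed(7)

    TRIALS = 2000
    averaging_wins = 0
    for _ in range(TRIALS):
        # Ten judges forecast the same quantity; judge 0 is genuinely
        # better (error std dev 5) than the other nine (std dev 10).
        sds = [5] + [10] * 9
        history = [[random.gauss(0, sd) for _ in range(5)] for sd in sds]
        future = [random.gauss(0, sd) for sd in sds]  # one new forecast error each

        # "Chase the expert": follow whoever looked best in the short history.
        best = min(range(10), key=lambda i: sum(e * e for e in history[i]))
        expert_error = abs(future[best])

        # Or average all ten forecasts; the crowd's error is the mean of errors.
        crowd_error = abs(sum(future) / len(future))

        if crowd_error < expert_error:
            averaging_wins += 1

    print(f"Averaging beat the chosen 'expert' in {averaging_wins / TRIALS:.0%} of trials")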

I’ve enjoyed Surowiecki’s articles in The New Yorker and Slate for several years, so I came to The Wisdom of Crowds with an open mind about what he had to say. Beyond the good writing, Surowiecki brings some new thinking to disrupt my entrenched opinions and attitudes, and I’m open to the possibility that my thinking about group processes may be flawed. I’ve awarded The Wisdom of Crowds our top rating for several reasons: the premises require thinking and reflection; Surowiecki supports them with facts and examples; the notes disclose ample sources for further investigation; the writing is good; and the material covers a wide array of applications. The challenge for managers and leaders of groups small and large is how to foster diversity of opinion, independence, and decentralization, and how to aggregate the group’s knowledge into better decisions and better action. Reading The Wisdom of Crowds and thinking about these issues provides a good beginning.
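For readers who want to see the statistical heart of the argument, here is one last sketch of my own (not from the book, and assuming unbiased, independent guesses): simulate a diverse crowd estimating a single quantity, the way Galton’s fairgoers guessed the weight of an ox in the story that opens the book, and count how many individuals beat the simple average.

    import random

    random.seed(42)

    TRUE_WEIGHT = 1198   # pounds, echoing Galton's ox-weighing story
    CROWD_SIZE = 500

    # Diverse, independent guesses: each individual is quite noisy alone.
    guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(CROWD_SIZE)]

    crowd_estimate = sum(guesses) / len(guesses)   # aggregation by simple averaging
    crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

    # Count the individuals whose guesses beat the crowd's average.
    better = sum(1 for g in guesses if abs(g - TRUE_WEIGHT) < crowd_error)

    print(f"Crowd estimate: {crowd_estimate:.0f} (off by {crowd_error:.1f} pounds)")
    print(f"Individuals more accurate than the crowd: {better} of {CROWD_SIZE}")

With unbiased, independent errors, the average’s error shrinks roughly with the square root of the crowd size, so only a handful of the 500 simulated guessers typically beat it. When guesses share a bias, or individuals copy one another, the advantage fades, which is exactly why Surowiecki insists on diversity and independence.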

Steve Hopkins, June 25, 2004

 

© 2004 Hopkins and Company, LLC

 

The recommendation rating for this book appeared in the July 2004 issue of Executive Times

URL for this review: http://www.hopkinsandcompany.com/Books/The Wisdom of Crowds.htm

 

For Reprint Permission, Contact:

Hopkins & Company, LLC • 723 North Kenilworth Avenue • Oak Park, IL 60302
Phone: 708-466-4650 • Fax: 708-386-8687

E-mail: books@hopkinsandcompany.com

www.hopkinsandcompany.com