Executive Times

2007 Book Reviews

The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb

Rating: **** (Highly Recommended)
Unknowns

Nassim Nicholas Taleb
  tells readers of his new book, The Black
  Swan, that “a black swan is a highly improbable event with three
  principal characteristics: it is unpredictable; it carries a massive impact;
  and after the fact, we concoct an explanation that makes it appear less
random and more predictable than it was.” This quirky and thought-provoking
  book promotes the notion that randomness lives, and blasts those who rely on
  data that doesn’t include the impact of the highly improbable, which does
occur. We place too much emphasis on the odds that the past will repeat
itself. He calls the predictability of the bell curve “Mediocristan” and the
wild gyrations of the world we really live in “Extremistan.” Here’s an
excerpt, pp. 158-163:

The Beauty of Technology: Excel Spreadsheets

In the not too distant
  past, say the precomputer days, projections remained
  vague and qualitative, one had to make a mental effort to keep track of them,
  and it was a strain to push scenarios into the future. It took pencils,
  erasers, reams of paper, and huge wastebaskets to engage in the activity. Add
  to that an accountant’s love for tedious, slow work. The activity of
projecting, in short, was effortful, undesirable, and marred with self-doubt.

But things changed with the
  intrusion of the spreadsheet. When you put an Excel spreadsheet into
  computer-literate hands you get a “sales projection” effortlessly extending
  ad infinitum! Once on a page or on a computer screen, or, worse, in a
  PowerPoint presentation, the projection takes on a life of its own, losing
  its vagueness and abstraction and becoming what philosophers call reified,
invested with concreteness; it takes on a new life as a tangible object.

My friend Brian Hinchcliffe suggested the following idea when we were
  both sweating at the local gym. Perhaps the ease with which one can project
  into the future by dragging cells in these spreadsheet programs is
  responsible for the armies of forecasters confidently producing longer-term
  forecasts (all the while tunneling on their assumptions). We have become
  worse planners than the Soviet Russians thanks to these potent computer
  programs given to those who are incapable of handling their knowledge. Like
  most commodity traders, Brian is a man of incisive and sometimes brutally
painful realism.

A classical mental
  mechanism, called anchoring, seems to be at work here. You lower your anxiety
  about uncertainty by producing a number, then you
  “anchor” on it, like an object to hold on to in the middle of a vacuum. This
  anchoring mechanism was discovered by the fathers of the psychology of
  uncertainty, Danny Kahneman and Amos Tversky, early in their heuristics and biases project. It
  operates as follows. Kahneman and Tversky had their subjects spin a wheel of fortune. The
  subjects first looked at the number on the wheel, which they knew was random, then they were
  asked to estimate the number of African countries in the United Nations.
  Those who had a low number on the wheel estimated a low number of African
nations; those with a high number produced a higher estimate.

Similarly, ask someone to provide you with the last four digits of his social
security number. Then ask him to estimate the number of dentists in Manhattan.
The estimate he produces will be correlated with that four-digit number.

We use reference points in
  our heads, say sales projections, and start building beliefs around them
  because less mental effort is needed to compare an idea to a reference point
  than to evaluate it in the absolute (System
  1 at work!). We cannot work without a point of reference. So the introduction of a
  reference point in the forecaster’s mind will work wonders. This is no
different from a starting point in a bargaining episode: you open with a high
  number (“I want a million for this house”); the bidder will answer “only
eight-fifty”—the discussion will be determined by that initial level.

The Character of Prediction Errors

Like many biological
  variables, life expectancy is from Mediocristan,
  that is, it is subjected to mild randomness. It is not scalable, since the
  older we get, the less likely we are to live. In a developed country a
  newborn female is expected to die at around 79, according to insurance
  tables. When she reaches her 79th birthday, her life expectancy, assuming
  that she is in typical health, is another 10 years. At the age of 90, she
  should have another 4.7 years to go. At the age of 100, 2.5 years. At the age of 119, if she miraculously lives that
  long, she should have about nine months left. As she lives beyond the
  expected date of death, the number of additional years to go decreases. This
  illustrates the major property of random variables related to the bell
  curve. The conditional expectation of additional life drops as a person gets
older.

With human projects and
  ventures we have another story. These are often scalable, as I said in
  Chapter 3. With scalable variables, the ones from Extremistan,
  you will witness the exact opposite effect. Let’s say a project is expected
  to terminate in 79 days, the same expectation in days as the newborn female
  has in years. On the 79th day, if the project is not finished, it will be
  expected to take another 25 days to complete. But on the 90th day, if the project
  is still not completed, it should have about 58 days to go. On the 100th, it should have 89 days to go. On the
  119th, it should have an extra 149 days. On day 600, if the project is not
  done, you will be expected to need an extra 1,590 days. As you see, the
longer you wait, the longer you will be expected to wait.

Let’s say you are a refugee
  waiting for the return to your homeland. Each day that passes you are getting
  farther from, not closer to, the day of triumphal return. The same applies to
  the completion date of your next opera house. If it was expected to take two
  years, and three years later you are asking questions, do not expect the
  project to be completed any time soon. If wars last on average six months,
  and your conflict has been going on for two years, expect another few years
  of problems. The Arab-Israeli conflict is sixty years old,
  and counting—yet it was considered “a simple problem” sixty years ago.
  (Always remember that, in a modern environment, wars last longer and kill
more people than is typically planned.)

Another example: Say that you send
  your favorite author a letter, knowing that he is busy and has
  a two-week turnaround. If three weeks later your mailbox is still empty, do
  not expect the letter to come tomorrow—it will take on average another three
  weeks. If three months later you still have nothing, you will have to expect
  to wait another year. Each day will bring you closer to your death but
further from the receipt of the letter.

This subtle but extremely
  consequential property of scalable randomness is unusually counterintuitive.
  We misunderstand the logic of large deviations from the norm. I will get deeper into
  these properties of scalable randomness in Part Three. But let us say for now
that they are central to our misunderstanding of the business of prediction.
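To make the contrast between Mediocristan and Extremistan concrete, here is a
short simulation sketch (an editorial illustration, not from the book). It
assumes a bell-curve (normal) model for lifespans and a heavy-tailed Pareto
model for project durations, both with made-up parameters, and measures the
expected additional time once a given threshold has already been passed.

# Illustrative sketch (not Taleb's): conditional expected remaining time
# under a thin-tailed versus a heavy-tailed model. All parameters are
# assumptions chosen only to echo the numbers in the excerpt.
import numpy as np

rng = np.random.default_rng(0)

# Mediocristan stand-in: lifespans roughly bell-curve distributed (years).
lifespans = rng.normal(loc=79, scale=10, size=1_000_000)

# Extremistan stand-in: project durations with a Pareto (power-law) tail (days).
durations = (rng.pareto(1.5, size=1_000_000) + 1) * 30

for t in (79, 90, 100, 110):
    extra_life = (lifespans[lifespans > t] - t).mean()
    extra_work = (durations[durations > t] - t).mean()
    print(f"past {t}: expected extra life {extra_life:5.1f} yrs | "
          f"expected extra project time {extra_work:7.1f} days")

# The first column shrinks as the threshold rises; the second grows --
# "the longer you wait, the longer you will be expected to wait."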
DON’T CROSS A RIVER IF IT IS (ON AVERAGE) FOUR FEET DEEP

Corporate and government
  projections have an additional easy-to-spot flaw: they do not attach a possible error rate to their
  scenarios. Even in the absence of Black Swans this omission would be a
mistake.

I once gave a talk to a group of policy wonks. The attendees were tame and
  silent. What I was telling them was against everything they believed and
  stood for; I had gotten carried away with my aggressive message, but they
  looked thoughtful, compared to the testosterone-charged characters one
  encounters in business. I felt guilty for my aggressive stance. Few asked
  questions. The person who organized the talk and invited me must have been
  pulling a joke on his colleagues. I was like an aggressive atheist making his
  case in front of a synod of cardinals, while dispensing with the usual
formulaic euphemisms.

Yet some members of the
  audience were sympathetic to the message. One anonymous person (he is
  employed by a governmental agency) explained to me privately after the talk
  that in January 2004 his department was forecasting the price of oil for
  twenty-five years later at $27 a
  barrel, slightly higher than what it was at the time. Six months later,
  around June 2004, after oil doubled in price, they had to revise their
  estimate to $54 (the price of oil
  is currently, as I am writing these lines, close to $79 a barrel). It did not
dawn on them that it was ludicrous to forecast a second time given that
their forecast was off so early and so markedly, or that this business of
forecasting had to be somehow questioned. And they were looking twenty-five
years ahead! Nor did it hit them that there was something called an error
rate to take into account.*

Forecasting without incorporating an error rate uncovers three fallacies, all
arising from the same misconception about the nature of uncertainty.

The first fallacy: variability matters. The first error
  lies in taking a projection too seriously, without heeding its accuracy. Yet,
for planning purposes, the accuracy of your forecast matters far more than the
forecast itself. I will explain it as follows.

Don’t cross a river if it is four feet deep on average.

You
  would take a different set of clothes on your trip to some remote destination
  if I told you that the temperature was expected to be seventy degrees
Fahrenheit, with an expected error rate of forty degrees, than if I told you
  that my margin of error was only five degrees. The policies we need to make
  decisions on should depend far more on the range of possible outcomes than on
  the expected final number. I have seen, while working for a bank, how people
  project cash flows for companies without wrapping them in the thinnest layer
  of uncertainty. Go to the stockbroker and check on what method they use to
  forecast sales ten years ahead to “calibrate” their valuation models. Go find
  out how analysts forecast government deficits. Go to a bank or
  security-analysis training program and see how they teach trainees to make
  assumptions; they do not teach you to build an error rate around those
  assumptions—but their error rate is so large that it is far more significant
than the projection itself!
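To see why the error band matters more than the point forecast, here is a
minimal sketch (an editorial illustration, not the author's) of the river
maxim: the same four-foot average depth implies very different risks depending
on the spread. The normal model of depth and the six-foot danger threshold are
assumptions.

# Illustrative sketch: same mean depth, different error bands, different risk.
from math import erf, sqrt

def prob_deeper_than(threshold_ft, mean_ft, sigma_ft):
    """P(depth > threshold) under an assumed normal model of river depth."""
    z = (threshold_ft - mean_ft) / sigma_ft
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

MEAN_DEPTH = 4.0   # the "average" forecast, in feet
DANGER = 6.0       # assumed depth at which crossing becomes dangerous

for sigma in (0.5, 2.0, 4.0):    # tight versus wide error bands
    p = prob_deeper_than(DANGER, MEAN_DEPTH, sigma)
    print(f"error band of about +/-{sigma} ft -> "
          f"P(depth > {DANGER:.0f} ft) = {p:.3f}")

# The mean never changes; only the spread does -- and with it the decision
# whether to cross.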
The second fallacy lies in failing to take into account forecast degradation
as the projected period
  lengthens. We do not realize the full extent of the difference between near
  and far futures. Yet the degradation in such forecasting through time becomes
  evident through simple introspective examination—without even recourse to
  scientific papers, which on this topic are suspiciously rare. Consider
  forecasts, whether economic or technological, made in 1905 for the following
  quarter of a century. How close to the projections did 1925 turn out to be? For a convincing experience, go read George
  Orwell’s 1984. Or look at more
  recent forecasts made in 1975 about the prospects for the new millennium.
  Many events have taken place and new technologies have appeared that lay
  outside the forecasters’ imaginations; many more that were expected to take
  place or appear did not do so. Our forecast errors have traditionally been
  enormous, and there may be no reasons for us to believe that we are suddenly
  in a more privileged position to see into the future compared to our blind
predecessors. Forecasting by bureaucrats tends to be used for anxiety relief
rather than for adequate policy making.

The third fallacy, and
  perhaps the gravest, concerns a misunderstanding of the random character of
  the variables being forecast. Owing to the Black Swan, these variables can
  accommodate far more optimistic—or far more pessimistic—scenarios than are
  currently expected. Recall from my experiment with Dan Goldstein testing the
  domain-specificity of our intuitions, how we tend to make no mistakes in Mediocristan, but make large ones in Extremistan
as we do not realize the consequences of the rare event.

What is the implication
  here? Even if you agree with a given forecast, you have to worry about the
  real possibility of significant divergence from it. These divergences may be
  welcomed by a speculator who does not depend on steady income; a retiree,
however, with set risk attributes cannot afford such gyrations.

I would go even
  further and, using the argument about the depth of the river, state that it
  is the lower bound of estimates (i.e., the worst case) that matters when
  engaging in a policy—the worst case is far more consequential than the
  forecast itself. This is particularly true if the bad scenario is not
acceptable. Yet the current phraseology makes no allowance for that. None.

It is often said that “is wise he who can see things coming.” Perhaps the
wise one is the one who knows that he cannot see things far away.

Get Another Job

The two typical replies I
  face when I question forecasters’ business are: “What should he do? Do you
  have a better way for us to predict?” and “If you’re so smart, show me your
  own prediction.” In fact, the latter question, usually boastfully presented,
  aims to show the superiority of the practitioner and “doer” over the
  philosopher, and mostly comes from people who do not know that I was a
  trader. If there is one advantage of having been in the daily practice of
uncertainty, it is that one does not have to take any crap from bureaucrats.

One of my clients asked for
  my predictions. When I told him I had none, he was offended and decided to
  dispense with my services. There is in fact a routine, unintrospective
  habit of making businesses answer questionnaires and fill out paragraphs
  showing their “outlooks.” I have never had an outlook and have never made
  professional predictions—but at least I
  know that I cannot forecast and a small number of people (those I care
about) take that as an asset.

There are those people who
  produce forecasts uncritically. When asked why they forecast, they answer,
  “Well, that’s what we’re paid to do here.” My suggestion: get another
job.

This suggestion is not too
  demanding: unless you are a slave, I assume you have some amount of control
  over your job selection. Otherwise this becomes a problem of ethics, and a
  grave one at that. People who are trapped in their jobs who forecast simply
  because “that’s my job,” knowing pretty well that their forecast is
  ineffectual, are not what I would call ethical. What they do is no different
  from repeating lies simply because “it’s my job.” Anyone who causes harm by
  forecasting should be treated as either a fool or a liar. Some forecasters
  cause more damage to society than criminals. Please, don’t drive a school
bus blindfolded.

* While
  forecast errors have always been entertaining, commodity prices have been a
great trap for suckers. Consider the 1970 forecasts of oil prices, which
entirely missed the enormous rise of the following decade. Also note
  this additional aberration: since high oil prices are marking up their
  inventories, oil companies are making record bucks and oil executives are
  getting huge bonuses because “they did a good job”—as if they brought profits
by causing the rise of oil prices.

The Black
  Swan is an unusual book by an unusual author. Odds are that reading it
  will increase your appreciation of what you don’t and can’t know, and what a
big impact that has on your life.

Steve Hopkins, July 25, 2007

Go to Executive Times Archives

The recommendation rating for this book appeared in the August 2007 issue of
Executive Times.

URL for this review: http://www.hopkinsandcompany.com/Books/The Black Swan.htm

For Reprint Permission, Contact: Hopkins & Company, LLC • E-mail:
books@hopkinsandcompany.com