Book Reviews



The Science of Good and Evil: Why People Cheat, Gossip, Care, Share and Follow the Golden Rule by Michael Shermer


Rating: (Mildly Recommended)



It’s usually philosophers or theologians who explore moral issues. Michael Shermer, a scientist, uses the scientific method to explore good and evil in his new book, The Science of Good and Evil. Taking a skeptical, scientific approach yields some interesting perspectives. Here’s an excerpt from Chapter 6, “How We Are Moral: Absolute, Relative, and Provisional Ethics,” pp. 166-178:

Provisional Ethics


If we are going to try to apply the methods of science to thinking about moral issues and ethical systems, here is the problem as I see it: as soon as one makes a moral decision—an action that is deemed right or wrong—it implies that there is a standard of right versus wrong that can be applied in other situations, to other people, in other cultures (in a manner that one might apply the laws of planetary geology to planets other than our own). But if that were the case, then why is that same standard not obvious and in effect in all cultures (as, in the above analogy, geological forces operate in the same manner on all planets)? Instead, observation reveals many such systems, most of which claim to have found the royal road to Truth and all of which differ in degrees significant enough that they cannot be reconciled (as if gravity operated on some planets but not others). If there is no absolute moral standard and instead only relative values, can we realistically speak of right and wrong? An action may be wise or unwise, prudent or imprudent, profitable or unprofitable within a given system. But is that the same as right or wrong?

So, both absolutism and relativism violate clear and obvious observations: there is a wide diversity of ethical theories about right and wrong moral behavior; because of this there are disputes about what constitutes right and wrong both between ethical theories and moral systems as well as within them; we behave both morally and immorally; humans desire a set of moral guidelines to help us determine right and wrong; there are moral principles that most ethical theories and moral systems agree are right and wrong. Any viable ethical theory of morality must account for these observations. Most do not.

In thinking about this problem I asked myself this question: how do we know something is true or right? In science, claims are not true or false, right or wrong in any absolute sense. Instead, we accumulate evidence and assign a probability of truth to a claim. A claim is probably true or probably false, possibly right or possibly wrong. Yet probabilities can be so high or so low that we can act as if they are, in fact, true or false. Stephen Jay Gould put it well: “In science, ‘fact’ can only mean ‘confirmed to such a degree that it would be perverse to withhold provisional assent.’” That is, scientific facts are conclusions confirmed to such an extent it would be reasonable to offer our provisional agreement. Heliocentrism—that the earth goes around the sun and not vice versa—is as factual as it gets in science. That evolution happened is not far behind heliocentrism in its factual certainty. Other theories in science, particularly within the social sciences (where the subjects are so much more complex), are far less certain and so we assign them much lower probabilities of certitude. In a fuzzy logic manner, we might say heliocentrism and evolution are .9 on a factual scale, while political, economic, and psychological theories of human social and individual behavior are much lower on the fuzzy scale, perhaps in the range of .2 to .5. Here the certainties are much fuzzier, and so fuzzy logic is critical to our understanding of how the world works, particularly in assigning fuzzy fractions to the degrees of certainty we hold about those claims. Here we find ourselves in a very familiar area of science known as probabilities and statistics. In the social sciences, for example, we say that we reject the null hypothesis at the .05 level of confidence (where we are 95 percent certain that the effect we found was not due to chance), or at the .01 level of confidence (where we are 99 percent certain), or even at the .0001 level of confidence (where the odds of the effect being due to chance are only one in ten thousand). The point is this: there is a sliding scale from high certainty to high doubt about the factual validity of a particular claim, which is why science traffics in probabilities and statistics in order to express the confidence or lack of confidence a claim or theory engenders.
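The null-hypothesis logic Shermer describes can be made concrete with a short simulation. The sketch below is my own illustration, not from the book: given an observed 65 heads in 100 coin tosses, it estimates how often chance alone (a fair coin) would produce a result at least that extreme, then applies the .05 rejection threshold.

```python
import random

def p_value_by_simulation(observed_heads, n_tosses=100, trials=100_000, seed=1):
    """Estimate P(chance gives >= observed_heads) under a fair-coin null hypothesis."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
        if heads >= observed_heads:
            extreme += 1
    return extreme / trials

p = p_value_by_simulation(65)
print(f"estimated p-value: {p:.4f}")
print("reject the null at the .05 level" if p < 0.05 else "fail to reject the null")
```

Run as written, the estimated p-value comes out well below .05, so by the convention Shermer cites we would provisionally reject the hypothesis that the result was due to chance.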

The same way of thinking has application to morals and ethics. Moral choices in a provisional ethical system might be considered analogous to scientific facts, in being provisionally right or provisionally wrong, provisionally moral or provisionally immoral:


In provisional ethics, moral or immoral means confirmed to such an extent it would be reasonable to offer provisional assent.

Provisional is an appropriate word here, meaning “conditional, pending confirmation or validation.” In provisional ethics it would be reasonable for us to offer our conditional agreement that an action is moral or immoral if the evidence for and the justification of the action is overwhelming. It remains provisional because, as in science, the evidence and justification might change. And, obviously, some moral principles have less evidence and justification for them than others, and therefore they are more provisional and more personal.

Provisional ethics provides a reasonable middle ground between absolute and relative moral systems. Provisional moral principles are applicable for most people in most circumstances most of the time, yet flexible enough to account for the wide diversity of human behavior, culture, and circumstances. What I am getting at is that there are moral principles by which we can construct an ethical theory. These principles are not absolute (no exceptions), nor are they relative (anything goes). They are provisional—true for most people in most circumstances most of the time. And they are objective, in the sense that morality is independent of the individual. Moral sentiments evolved as part of our species; moral principles, therefore, can be seen as transcendent of the individual, making them morally objective. Whenever possible, moral questions should be subjected to scientific and rational scrutiny, much as nature’s questions are subjected to scientific and rational scrutiny. But can morality become a science?



Fuzzy Provisionalism


One of the strongest objections to be made against provisional ethics is that if it is not a form of absolute morality, then it must be a form of relative morality, and thus another way to intellectualize one’s ego-centered actions. But this is looking at the world through bivalent glasses, committing the either-or fallacy by assuming the law of the excluded middle must apply.

Here again, fuzzy logic has direct applications to moral thinking. In the discussion of evil, we saw how fuzzy fractions assigned to evil deeds assisted us in assessing the relative merits or demerits of human actions. Fuzzy logic also helps us see our way through a number of moral conundrums. When does life begin? Binary logic insists on a black-and-white Aristotelian A or not-A answer. Most pro-lifers, for example, believe that life begins at conception—before conception not-life, after conception, life. A or not-A. With fuzzy morality we can assign a probability to life—before conception 0, the moment of conception .1, one month after conception .2, and so on until birth, when the fetus becomes a 1.0 life-form. A and not-A. You don’t have to choose between pro-life and pro-choice, themselves bivalent categories still stuck in an Aristotelian world (more on this in the next chapter).
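Shermer’s fuzzy fractions are simply degrees of membership on a 0-to-1.0 scale. The sketch below is a hypothetical illustration of that idea, not the book’s model: the linear ramp and the 40-week horizon are my assumptions; the book supplies only the endpoints (.1 at conception, 1.0 at birth).

```python
def degree_of_life(weeks_since_conception, weeks_at_birth=40):
    """Fuzzy degree of membership in 'life' on Shermer's 0-to-1.0 scale.

    The linear ramp and 40-week horizon are illustrative assumptions only;
    the book gives just .1 at conception and 1.0 at birth.
    """
    if weeks_since_conception < 0:
        return 0.0                      # before conception: not-life
    ramp = 0.1 + 0.9 * weeks_since_conception / weeks_at_birth
    return min(ramp, 1.0)               # caps at 1.0 from birth onward

for w in (-1, 0, 4, 40):
    print(w, round(degree_of_life(w), 2))
```

Unlike the binary A or not-A, the function returns intermediate values (for instance, about .2 one month after conception), which is exactly the A-and-not-A gradation the excerpt describes.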

Death may also be assigned in degrees. “If life has a fuzzy boundary, so does death,” fuzzy logician Bart Kosko explains. “The medical definition of death changes a little each year. More information, more precision, more fuzz.” But isn’t someone either dead or alive? A or not-A? No. “Fuzzy logic may help us in our fight against death. If you can kill a brain a cell at a time, you can bring it back to life a cell at a time just as you can fix a smashed car a part at a time.” A and not-A. Birth is fuzzy and provisional and so is death. So is murder. The law is already fuzzy in this regard. There are first-degree murder, second-degree murder, justifiable homicide, self-defense homicide, genocide, infanticide, suicide, crimes of passion, crimes against humanity. A and not-A. Complexities and subtleties abound. Nuances rule. Our legal systems have adjusted to this reality; so, too, must our ethical systems. Fuzzy birth. Fuzzy death. Fuzzy murder. Fuzzy ethics.



Moral Intuition and the Captain Kirk Principle


Long before he penned the book that justified laissez-faire capitalism, Adam Smith became the first moral psychologist when he observed:

“Nature, when she formed man for society, endowed him with an original desire to please, and an original aversion to offend his brethren. She taught him to feel pleasure in their favorable, and pain in their unfavorable regard.” Yet, by the time he published The Wealth of Nations in 1776, Smith realized that human motives are not so pure:

“It is not from the benevolence of the butcher, the brewer or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves not to their humanity, but to their self-love, and never talk to them of our necessities, but of their advantage.”

Is our regard for others or for ourselves? Are we empathetic or egotistic? We are both. But how we can strike a healthy balance between serving self and serving others is not nearly as rationally calculable as we once thought. Intuition plays a major role in human decision making—including and especially moral decision making—and new research is revealing both the powers and the perils of intuition. Consider the following scenario: imagine yourself a contestant on the classic television game show Let’s Make a Deal. You must choose one of three doors. Behind one of the doors is a brand-new automobile. Behind the other two doors are goats. You choose door number one. Host Monty Hall, who knows what is behind all three doors, shows you what’s behind door number two, a goat, then inquires: would you like to keep the door you chose or switch? It’s fifty-fifty, so it doesn’t matter, right? Most people think so. But their intuitive feeling about this problem is wrong. Here’s why: you had a one in three chance to start, but now that Monty has shown you one of the losing doors, you have a two-thirds chance of winning by switching doors. Think of it this way: there are three possibilities for the three doors: (1) good bad bad; (2) bad good bad; (3) bad bad good. In possibility one you lose by switching, but in possibilities two and three you can win by switching. Here is another way to reason around our intuition: there are ten doors; you choose door number one and Monty shows you doors number two through nine, all goats. Now would you switch? Of course you would, because your chances of winning increase from one in ten to nine in ten. This is a counterintuitive problem that drives people batty, including mathematicians and even statisticians.
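The two-thirds claim is easy to verify with a quick Monte Carlo simulation. The sketch below is my own illustration, not from the book: it plays the game many times, always switching, and reports the win rate.

```python
import random

def monty_hall_switch_win_rate(trials=100_000, seed=1):
    """Simulate Monty Hall games; return the fraction won by always switching."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)           # door hiding the car
        pick = rng.randrange(3)          # contestant's initial choice
        # Monty opens a goat door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining unopened door
        switched = next(d for d in range(3) if d != pick and d != opened)
        wins += (switched == car)
    return wins / trials

print(f"win rate when switching: {monty_hall_switch_win_rate():.3f}")
```

The printed rate lands close to 0.667, matching Shermer’s argument that switching wins two times out of three.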

Intuition is tricky. Gamblers’ intuitions, for example, are notoriously flawed (to the profitable delight of casino operators). You are playing the roulette wheel and hit five reds in a row. Should you stay with red because you are on a “hot streak” or should you switch because black is “due”? It doesn’t matter because the roulette wheel has no memory, but try telling that to the happy gambler whose pile of chips grows before his eyes. So-called hot streaks in sports are equally misleading. Intuitively, don’t we just know that when the Los Angeles Lakers’ Kobe Bryant is hot he can’t miss? It certainly seems like it, particularly the night he broke the record for the most three-point baskets in a single game, but the findings of a fascinating 1985 study of “hot hands” in basketball by Thomas Gilovich, Robert Vallone, and Amos Tversky—who analyzed every basket shot by the Philadelphia 76ers for an entire season—do not bear out this conclusion. They discovered that the probability of a player hitting a second shot did not increase following an initial successful basket beyond what one would expect by chance and the average shooting percentage of the player. What they found is so counterintuitive that it is jarring to the sensibilities: the number of streaks, or successful baskets in sequence, did not exceed the predictions of a statistical coin-flip model. That is, if you conduct a coin-flipping experiment and record heads or tails, you will encounter streaks. On average and in the long run, you will flip five heads or tails in a row once in every thirty-two sequences of five tosses. Players may feel “hot” when they have games that fall into the high range of chance expectations, but science shows that this intuition is an illusion.
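The coin-flip model is easy to check for yourself. The short simulation below (my own illustration, not the study's code) tosses a large number of independent five-flip sequences and confirms that all five coming up heads occurs about once in every thirty-two sequences, with no streakiness required.

```python
import random

def five_heads_frequency(sequences=320_000, seed=1):
    """Fraction of 5-toss sequences that come up all heads (expected: 1/32)."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < 0.5 for _ in range(5))
        for _ in range(sequences)
    )
    return hits / sequences

freq = five_heads_frequency()
print(f"all-heads frequency: {freq:.4f} (1/32 = {1/32:.4f})")
```

Streaks like this fall out of pure chance, which is exactly why the Gilovich, Vallone, and Tversky data looked no streakier than a coin.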

These are just a couple of the countless ways our intuitions about the world lead us astray: we rewrite our past to fit present beliefs and moods, we badly misinterpret the source and meaning of our emotions, we are subject to the hindsight bias where after the fact we surmise that we knew it all along, we succumb to the self-serving bias where we think we are far more important than we really are, we see illusory correlations that do not exist (superstitions), and we fall for the confirmation bias where we look for and find evidence for what we already believe. Our intuitions also lead us to fear the wrong things. Let us return to Adam Smith. According to Smith’s theory, our moral sentiments lead us to observe what happens to others, empathize with their pain, then turn to our own self-interest in dreaded anticipation of the same disaster befalling us. The week I wrote this section the ABC television news program 20/20 ran a story about kids who dropped heavy stones off freeway overpasses that smashed through car windows, killing or maiming the passengers within. The producers appealed to the fearful side of our nature by introducing viewers to the hapless victims with mangled faces and shattered lives, evoking our empathy; they then engaged our self-love with the rhetorical question: “could this happen to you?”

Could it? Not likely. In fact, it is so unlikely you would be better off worrying about lightning striking you. Then why do we worry about such matters? Because our moral intuitions have been hijacked by what University of Southern California sociologist Barry Glassner calls a “culture of fear.” Who created this culture? Ultimately we did, by buying into the rumors and hearsay that pass for factual data fed to us by the media and other sources. But those factoids and reports had to come from somewhere. Follow the money and those who traffic in fear mongering. Politicians, for example, can win elections by grossly exaggerating (and sometimes outright lying about) crime and drug-use percentages under their opponent’s watch. Advocacy groups profit (literally) from fear campaigns that heighten an expectation of doom (to be thwarted just in time, if the donor’s contribution is beefy enough). Think of conservatives decrying the demise of the family or liberals proclaiming the destruction of the environment.

Religions play on our fears by hyping up the doom and gloom of this world to make the next world seem all the more appealing. On May 17, 1999, an evangelical Christian friend of mine insisted that we are in the “end times” because the Bible prophesied an increase in immorality and malfeasance. Since everyone knows crime is an epidemic problem in America that worsens by the year (“just look at the recent Columbine shooting,” he enthused), the end is nigh. I remember the date because it was the same day the FBI released its findings that we are in the midst of the longest decline in crime rates since the bureau began collecting data in 1930. In other words, we are confronted with the paradox of being more fearful than we have ever been at the same time that things have never been so safe. “In the late 1990s the number of drug users had decreased by half compared to a decade earlier,” Glassner explains, yet the “majority of adults rank drug abuse as the greatest danger to America’s youth.” Ditto the economy, where “the unemployment rate was below 5 percent for the first time in a quarter century. Yet pundits warned of imminent economic disaster.” In this century alone modern medicine and social hygiene practices and technologies have nearly doubled our life span and improved our health immeasurably, but Glassner points out that if you tally up the reported disease statistics, out of 280 million Americans, 543 million of us are seriously ill!

How can this be? Benjamin Disraeli had an answer: lies, damn lies, and statistics. We may be good storytellers, but we are lousy statisticians. Glassner shows, for example, that women in their forties believe they have a 1 in 10 chance of dying from breast cancer, but their real lifetime odds are more like 1 in 250. He notes that some “feminists helped popularize the frightful but erroneous statistic that two of three teen mothers had been seduced and abandoned by adult men” when in reality it “is more like one in ten, but some feminists continued to cultivate the scare well after the bogus stat had been definitively debunked.” The bigger problem here is the law of large numbers, where million-to-one odds happen 280 times a day in America, and of those the most sensational dozen make the evening news, especially if captured on video. Stay tuned—film at eleven!

Herein lies the problem for our moral sensibilities. We are fed numbers daily that we cannot comprehend about threats to our security we cannot tolerate. Better safe than sorry, right? Not necessarily. Pathological fear takes a dramatic toll on our psyches and wallets. “We waste tens of billions of dollars and person-hours every year,” Glassner notes, “on largely mythical hazards like road rage, on prison cells occupied by people who pose little or no danger to others, on programs designed to protect young people from dangers that few of them ever face, on compensation for victims of metaphorical illnesses, and on technology to make airline travel—which is already safer than other means of transportation—safer still.”

Of all the institutions feeding our fears, the media takes center stage for sensationalism (“if it bleeds, it leads”). An Emory University study revealed that the leading cause of death in men—heart disease—received the same amount of coverage as the eleventh-ranked vector: homicide. Not surprisingly, drug use, the lowest-ranking risk factor associated with serious illness and death, received as much attention as the second-ranked risk factor, poor diet and lack of exercise. From 1990 to 1998, America’s murder rate decreased by 20 percent while the number of murder stories on network newscasts increased by an incredible 600 percent (and this doesn’t count O. J. Simpson stories). The fact is, there is no evidence that secondhand smoke causes cancer or that cell-phone use generates brain tumors; likewise, Gulf War Syndrome appears to be a chimera, television does not cause violence, Satanic cults are phantasmagorical, most recovered memories of childhood abuse are nothing more than false memories planted by bad therapists, silicone breast implants cause nothing more than metastatic litigation, the drug war was lost decades ago, and the drug emperor has no clothes—he’s butt naked and it’s high time someone said it. We would be well-advised to remember the law of large numbers, and to keep in mind that we have selective memory of the most egregious events and that most of our fears are illusory—the vaporous product of a culture of fear of which we are both creators and victims.

These notable shortcomings to our intuitive instincts aside, however, there is something quite empowering about intuition that cannot be dismissed, especially in the moral realm. In fact, intuition is so ingrained into the human psyche that it cannot be separated from intellect (witness the aforementioned intuitive afflictions). So integrated are intuition and intellect that I have coalesced them into what I call the Captain Kirk Principle, from an episode of Star Trek entitled “The Enemy Within.” Captain James T. Kirk has just beamed up from planet Alpha 177, where magnetic anomalies have caused the transporter to malfunction, splitting Kirk into two beings. One is cool, calculating, and rational. The other is wild, impulsive, and irrational. Rational Kirk must make a command decision to save the landing party now stranded on the planet because of the malfunctioning transporter. (Why they could not just send down a shuttlecraft to rescue them is never explained, and thus this episode has contributed to the long list of Star Trek bloopers.) Because his intellect and intuition have been split, Kirk is paralyzed with indecision, bemoaning to Dr. McCoy: “I can’t survive without him [irrational Kirk]. I don’t want to take him back. He’s like an animal—a thoughtless, brutal animal. And yet it’s me.” This psychological battle between intellect and intuition was played out in nearly every episode of Star Trek in the characters of the ultrarational Mr. Spock and hyperemotional Dr. McCoy, with Captain Kirk as the near-perfect embodiment of both. Thus, I call this balance the Captain Kirk Principle: intellect is driven by intuition, intuition is directed by intellect.

For most scientists, intuition is the bête noire of a rational life, the enemy within to beam away faster than a Vulcan in heat. Yet the Captain Kirk Principle is now finding support from a rich new field of scientific inquiry brilliantly summarized by psychologist David G. Myers, who demonstrates through countless well-documented experiments that intuition—“our capacity for direct knowledge, for immediate insight without observation or reason”—is as much a part of our thinking as analytic logic. Physical intuition, of course, is well known and accepted as part of an athlete’s repertoire of talents—Michael Jordan and Tiger Woods come to mind. But there are social, psychological, and moral intuitions as well that operate at a level so fast and subtle that they cannot be considered a product of rational thought. Harvard’s Nalini Ambady and Robert Rosenthal, for example, discovered that the evaluations of teachers by students who saw a mere thirty-second video of the teacher were remarkably similar to those of students who had taken the entire course. Even three two-second video clips of the teacher yielded a striking .72 correlation with the course student evaluations! How can this be? We have an intuitive sense about people that allows us to make reasonably accurate snap judgments about them.

Research consistently shows how even unattended stimuli can subtly affect us. In one experiment, for example, researchers flashed emotionally positive scenes (a kitten or a romantic couple) or negative scenes (a werewolf or a dead body) for forty-seven milliseconds before subjects viewed slides of people. Although subjects reported seeing only a flash of light for the initial emotionally charged scenes, they gave more positive ratings to people whose photos had been associated with the positive scenes. In other words, something registered somewhere in the brain. That also appears to be the situation in the case of a patient who was unable to recognize her own hand, and when asked to use her thumb and forefinger to estimate the size of an object was unable to do it. Yet when she reached for the object her thumb and forefinger were correctly placed. Another study revealed that stroke patients who have lost a portion of their visual cortex are consciously blind in part of their field of vision. When shown a series of sticks, they report seeing nothing, yet unerringly identify whether the unseen sticks are vertical or horizontal. That’s weird.

Intuition especially plays a powerful role in “knowing” other people. The best predictor of how well a psychotherapist will work out for you is your initial reaction in the first five minutes of the first session. The reason is that for psychotherapy (talk therapy), research shows that no one modality or style is better than any other. It does not matter what type or how many degrees the therapist has, or what particular school the therapist attended, or whom the therapist trained under. What matters most is how well suited the therapist is for you, and only you can make that judgment, one best made through intuition, not intellect. Similarly, people with dating experience know within minutes whether or not they will want to see a first date again. That assessment is not made through tallying up the pluses and minuses of the date in some intellectual process equivalent to a mental ledger; we don’t usually ask for a date’s résumé or curriculum vitae before agreeing to a second date. But we do perform something like this in a quick intuitive assessment based on subtle cues—body language, facial expressions, voice tone and volume, wit and humor, politeness, and so forth—all of which can be assessed relatively quickly.

To the extent that lie detection through the observation of body language and facial expressions is accurate (overall not very), women are better at it than men because they are more intuitively sensitive to subtle cues. In experiments in which subjects observe someone either truth telling or lying, although no one is consistently correct in identifying the liar, women are correct significantly more often than men. Women are also superior in discerning which of two people in a photo was the other’s supervisor, whether a male-female couple is a genuine romantic relationship or a posed phony one, and when shown a two-second silent video clip of an upset woman’s face, women guess more accurately than men whether she is criticizing someone or discussing her divorce. People who are highly skilled in identifying “micromomentary” facial expressions are also more accurate in judging lying. In testing such professionals as psychiatrists, polygraphists, court judges, police officers, and secret service agents on their ability to detect lies, only secret service agents trained to look for subtle cues scored above chance. Most of us are not good at lie detection because we rely too heavily on what people say rather than on what they do. Subjects with damage to the brain that renders them less attentive to speech are more accurate at detecting lies, such as aphasic stroke victims who were able to identify liars 73 percent of the time when focusing on facial expressions (normal subjects did no better than chance). In support of an evolutionary explanation of a moral sense, research shows that we may be hardwired for such intuitive thinking: a patient with damage to parts of his frontal lobe and amygdala (the fear center) is prevented from understanding social relations or detecting cheating, particularly in social contracts, even though cognitively he is otherwise normal. Cheating detection in social relations, such as in the role of gossip in small groups, is a vital part of our evolutionary heritage.

Although most secular theories of morality are rationalist theories, recent research on moral intuition reveals that the Captain Kirk Principle is at work in the moral realm as well. University of Virginia social psychologist Jonathan Haidt, for example, has demonstrated that the mind makes quick and automatic moral judgments similar to how we make aesthetic judgments. We do not reason our way to a moral decision; we jump right in, then later rationalize the quick decision. Our moral intuitions are more emotional than rational. Haidt’s “social intuitionist” theory says that moral feelings come first, then the rationalization of those moral feelings. “Could human morality really be run by the moral emotions, while moral reasoning struts about pretending to be in control?” Haidt asks. He answers his own question thusly: “Moral judgment involves quick gut feelings, or affectively laden intuitions, which then trigger moral reasoning.” In other words, research supports our usual distinction between morality (thoughts and behaviors about right and wrong) and ethics (theories about moral thoughts and behaviors). In this context, ethics is an expression of emotional moral intuitions aimed at convincing others of the rational validity of our intuitions.

Consider the following moral dilemma and how our moral intuitions respond: you witness a runaway trolley headed for five people. If you throw a switch to derail the trolley, it will save the five but send it down another track to kill one person. Would you do it? Most people say that they would. Rationally, it seems justified: sacrificing one life to save five seems like the logical thing to do. However, consider this minor modification of the moral dilemma: you witness a runaway trolley headed for five people. You can stop the trolley by pushing a person onto the track, killing that one individual but saving five lives in the process. Would you do it? It is the same moral calculation, but most say they would not do it. Why? Princeton University’s Joshua Greene believes he has found a reason through brain imaging technology. In presenting these moral dilemmas to subjects and recording what is going on inside their brains as they think about them, the second scenario of pushing the person onto the tracks triggered the subjects’ brains to light up in their emotional areas (normally active when feeling sad and frightened) much more than when they were thinking about the first scenario. The difference in these two scenarios is that in the first one the subject is emotionally detached by being one step removed from the killing process—to save five lives by killing one person, one has only to flip a switch to derail the trolley car. The trolley killed the individual, not the subject. In the second scenario the subject is emotionally involved—to save five lives by killing one person, one has to be directly and viscerally responsible for killing another person. Moral judgment is not calculatingly rational. It is intuitively emotional.

Cognitive biases also play a powerful role in our moral intuitions. The self-serving bias, for example, which dictates that we tend to see ourselves in a more positive light than others actually see us, leads us to think we are more moral than others. National surveys, for instance, show that most businesspeople believe they are more moral than other businesspeople. Even social psychologists who study moral intuition think they are more moral than other social psychologists! And we all believe that we will be rewarded for our ethical behavior. A U.S. News & World Report study asked Americans who they think is most likely to make it to heaven: 19 percent said O. J. Simpson, 52 percent said former President Bill Clinton, 60 percent said Princess Diana, 65 percent chose Michael Jordan, and, not surprisingly, 79 percent elected Mother Teresa. But the person survey takers thought most likely to go to heaven, at 87 percent, was the survey taker him- or herself!

Consistent with these experimental results are studies that show people are more likely to rate themselves superior in “moral goodness” than in “intelligence,” and community residents overwhelmingly see themselves as caring more about the environment and other social issues than other members of the community do. In one College Entrance Examination Board survey of 829,000 high school seniors, none rated themselves below average in the category “ability to get along with others,” 60 percent rated themselves in the top 10 percent, and 25 percent said they were in the top 1 percent. Likewise, just as behaviors determine perceptions—smokers overestimate the number of people who smoke, for example—moral behaviors determine moral perceptions: liars overestimate the number of lies other people tell. One study found that people who cheat on their spouses and income taxes overestimate the number of others who do so.

Although in science we eschew intuition because of its many perils, we would do well to remember the Captain Kirk Principle that intellect and intuition are complementary, not competitive. Without intellect our intuition may drive us unchecked into emotional chaos. Without intuition we risk failing to resolve complex social dynamics and moral dilemmas, as Dr. McCoy explained to the indecisive rational Kirk: “We all have our darker side—we need it! It’s half of what we are. It’s not really ugly—it’s human. Without the negative side you couldn’t be the captain, and you know it! Your strength of command lies mostly in him.”

If you’re scientifically bent, you’re likely to enjoy The Science of Good and Evil. If you’re more philosophically inclined, you may find this book frustrating to read.

Steve Hopkins, March 23, 2004


© 2004 Hopkins and Company, LLC


The recommendation rating for this book appeared in the April 2004 issue of Executive Times

URL for this review: Science of Good and Evil.htm


For Reprint Permission, Contact:

Hopkins & Company, LLC • 723 North Kenilworth Avenue • Oak Park, IL 60302
Phone: 708-466-4650 • Fax: 708-386-8687