Book Reviews



The Fabric of the Cosmos: Space, Time, and the Texture of Reality by Brian Greene
Rating: ••• (Recommended)





Weavings

At least once a year, I try to tackle a science book to humble myself about all the things I don’t know. This year’s book was Brian Greene’s The Fabric of the Cosmos, which tries to answer the question, “What is Reality?” for general readers. In what is usually a readable style, Greene helps explain String Theory, Inflationary Cosmology, and the Uncertainty Principle. One thing that brought me great pleasure in reading this book was the realization that my contemporaries who were physics majors awoke each Saturday morning for a required class in the late 1960s and early 1970s, learning about things that have since been proven wrong. I’m glad I slept in. Greene teaches readers in this version of a Physics for Poets course, and is gentle enough to direct those of us with a passive interest to skip some sections of the book that become a bit abstruse (I plowed through despite his warnings and came out unscathed). Piece by piece, Greene explains the state of modern physics, describing some of the questions that have been answered and some that remain. Here’s an excerpt from the end of Part II, “Time and Experience,” Chapter 7, “Time and the Quantum,” pp. 208-216:

Decoherence and Quantum Reality

When you first encounter
the probabilistic aspect of quantum mechanics, a natural reaction is to think
that it is no more exotic than the probabilities that arise in coin tosses or
roulette wheels. But when you learn about quantum interference, you realize
that probability enters quantum mechanics in a far more fundamental way. In
everyday examples, various outcomes—heads versus tails, red versus black, one
lottery number versus another—are assigned probabilities with the
understanding that one or another result will definitely happen and that each
result is the end product of an independent, definite history. When a coin
is tossed, sometimes the spinning motion is just right for the toss to come out heads and sometimes it’s just right for the toss to come out tails. The 50-50 probability we assign to each outcome refers not just to the final result—heads or tails—but also to the histories that lead to each outcome.
Half of the possible ways you can toss a coin result in heads, and half
result in tails. The histories themselves, though, are totally separate,
isolated alternatives. There is no sense in which different motions of the
coin reinforce each other or cancel each other out. They’re all independent. But in quantum mechanics, things are
different. The alternate paths an electron can follow from the two slits to
the detector are not separate, isolated histories. The possible histories
commingle to produce the observed outcome. Some paths reinforce each other,
while others cancel each other out. Such quantum interference between the
various possible histories is responsible for the pattern of light and dark
bands on the detector screen. Thus, the telltale difference between the
quantum and the classical notions of probability is that the former is
subject to interference and the latter is not. Decoherence is a widespread phenomenon that forms
a bridge between the quantum physics of the small and the classical physics
of the notsosmall by suppressing quantum interference—that is, by diminishing
sharply the core difference between quantum and classical probabilities. The
importance of decoherence was realized way back in
the early days of quantum theory, but its modern incarnation dates from a
seminal paper by the German physicist Dieter Zeh in
1970, and has since been developed by many researchers, including Erich Joos, also from Germany, and Wojciech
Zurek, of the Los Alamos National Laboratory in New
Mexico. Here’s the idea. When Schrödinger’s equation is applied in a simple situation
such as single, isolated photons passing through a screen with two slits, it
gives rise to the famous interference pattern. But there are two very special
features of this laboratory example that are not characteristic of real-world happenings. First, the things we encounter in day-to-day life are larger and more complicated than a single photon. Second, the things we encounter in day-to-day life are not isolated: they interact with us and with the
environment. The book now in your hands is subject to human contact and, more
generally, is continually struck by photons and air molecules. Moreover,
since the book itself is made of many molecules and atoms, these constantly
jittering constituents are continually bouncing off each other as well. The
same is true for pointers on measuring devices, for cats, for human brains,
and for just about everything you encounter in daily life. On astrophysical
scales, the earth, the moon, asteroids, and the other planets are continually
bombarded by photons from the sun. Even a grain of dust floating in the
darkness of outer space is subject to continual hits from low-energy microwave photons that have been streaming through space since a short time after the big bang. And so, to understand what quantum mechanics says about real-world happenings—as opposed to pristine laboratory experiments—we should apply Schrödinger’s equation to these more complex, messier
situations. In essence, this is what Zeh emphasized, and his work, together with that of many
others who have followed, has revealed something quite wonderful. Although
photons and air molecules are too small to have any significant effect on the
motion of a big object like this book or a cat, they are able to do something
else. They continually “nudge” the big object’s wavefunction,
or, in physics-speak, they disturb its coherence: they blur its
orderly sequence of crest followed by trough followed by crest. This is critical, because a wavefunction’s
orderliness is central to generating interference effects (see Figure 4.2).
And so, much as adding tagging devices to the double-slit experiment blurs
the resulting wavefunction and thereby washes out
interference effects, the constant bombardment of objects by constituents of
their environment also washes out the possibility of interference
phenomena. In turn, once quantum interference is no longer possible, the
probabilities inherent to quantum mechanics are, for all practical purposes,
just like the probabilities inherent to coin tosses and roulette wheels. Once
environmental decoherence blurs a wavefunction, the exotic nature of quantum probabilities
melts into the more familiar probabilities of day-to-day life.
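Greene’s contrast between classical and quantum probability, and the way decoherence bridges them, can be made concrete with a short numerical sketch. This is not from the book: the two-path setup, the equal amplitudes of 0.5, and the model of each environmental encounter as a uniformly random phase kick are illustrative assumptions.

```python
import cmath
import random

# Two quantum histories arrive at the same detector point with equal
# amplitude 0.5, differing by a relative phase `delta` (illustrative values).

def quantum_probability(delta):
    """Coherent case: add the amplitudes first, then square."""
    a1 = 0.5
    a2 = 0.5 * cmath.exp(1j * delta)
    return abs(a1 + a2) ** 2  # 0.5 + 0.5*cos(delta): bright and dark bands

def classical_probability():
    """Incoherent case: square each amplitude, then add (no cross term),
    like the separate, independent histories of a coin toss."""
    return abs(0.5) ** 2 + abs(0.5) ** 2  # always 0.5

def decohered_probability(delta, n_env=200_000):
    """Toy decoherence: each environmental encounter adds a random phase
    kick; averaging over many encounters washes out the cross term."""
    total = 0.0
    for _ in range(n_env):
        kick = random.uniform(0.0, 2.0 * cmath.pi)
        total += abs(0.5 + 0.5 * cmath.exp(1j * (delta + kick))) ** 2
    return total / n_env

if __name__ == "__main__":
    print(quantum_probability(0.0))        # 1.0  (paths reinforce)
    print(quantum_probability(cmath.pi))   # ~0.0 (paths cancel)
    print(classical_probability())         # 0.5
    print(decohered_probability(0.0))      # ~0.5 (interference washed out)
```

After the phase average, the quantum answer coincides with the classical one for every `delta`, which is the sense in which decoherence turns exotic quantum probabilities into coin-toss probabilities.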
This suggests a resolution of the quantum measurement puzzle, one that, if
realized, would be just about the best thing we could hope for. I’ll describe
it first in the most optimistic light, and then stress what still needs to be
done. If a wavefunction
for an isolated electron shows that it has, say, a 50 percent chance of being
here and a 50 percent chance of being there, we must interpret these
probabilities using the full-fledged weirdness of quantum mechanics. Since both
of the alternatives can reveal themselves by commingling and generating an
interference pattern, we must think of them as equally real. In loose
language, there’s a sense in which the electron is at both locations.
What happens now if we measure the electron’s position with a non-isolated, everyday-sized laboratory instrument? Well, corresponding to the electron’s
ambiguous whereabouts, the pointer on the instrument has a 50 percent chance
of pointing to this value and a 50 percent chance of pointing to that value.
But because of decoherence, the pointer will not
be in a ghostly mixture of pointing at both values; because of decoherence, we can interpret these probabilities
in the usual, classical, everyday sense. Just as a coin has a 50 percent
chance of landing heads and a 50 percent chance of landing tails, but lands either
heads or tails, the pointer has a 50 percent chance of pointing to
this value and a 50 percent chance of pointing to that value, but it will
definitely point to one or the other. Similar reasoning applies for all other
complex, non-isolated objects. If a quantum
calculation reveals that a cat, sitting in a closed box, has a 50 percent
chance of being dead and a 50 percent chance of being alive—because there is a 50 percent chance that an electron will hit a booby-trap mechanism that
subjects the cat to poison gas and a 50 percent chance that the electron
misses the booby trap—decoherence suggests that the
cat will not be in some absurd mixed state of being both dead and
alive. Although decades of heated debate have been devoted to issues like “What does it mean for a cat to be both dead and alive?” and “How does the act of opening the box and observing the cat force it to choose a definite status, dead or alive?”, decoherence suggests that long
before you open the box, the environment has already completed billions of
observations that, in almost no time at all, turned all mysterious quantum
probabilities into their less mysterious classical counterparts. Long before
you look at it, the environment has compelled the cat to take on one, single,
definite condition. Decoherence forces much of the
weirdness of quantum physics
to “leak” from large objects since, bit by bit, the quantum weirdness is carried
away by the innumerable impinging particles from the environment. It’s hard to imagine a more satisfying
solution to the quantum measurement problem. By being more realistic and
abandoning the simplifying assumption that ignores the environment—a
simplification that was crucial to making progress during the early
development of the field—we would find that quantum mechanics has a builtin
solution. Human consciousness, human experimenters, and human observations
would no longer play a special role since they (we!) would simply be elements
of the environment, like air molecules and photons, which can interact with a
given physical system. There would also no longer be a stage one / stage two
split between the evolution of the objects and the experimenter who measures
them. Everything—observed and observer—would be on an equal footing.
Everything—observed and observer—would be subject to precisely the same
quantum mechanical law as is set down in Schrödinger’s equation. The act of
measurement would no longer be special; it would merely be one specific
example of contact with the environment. Is that it? Does decoherence
resolve the quantum measurement problem? Is decoherence
responsible for wavefunctions’ closing the door on
all but one of the potential outcomes to which they can lead? Some think so.
Researchers like Robert Griffiths, of Carnegie Mellon; Roland Omnès, of Orsay; the Nobel
laureate Murray Gell-Mann, of the Santa Fe Institute; and Jim Hartle, of the University of California at Santa Barbara,
have made great progress and claim that they have developed decoherence into a complete framework (called decoherent histories) that solves the
measurement problem. Others, like myself, are
intrigued but not yet fully convinced. You see, the power of decoherence is that it successfully removes the
artificial barrier Bohr erected between large and small physical systems,
making everything subject to the same quantum mechanical formulas. This is
important progress and I think Bohr would have found it gratifying. Although
the unresolved quantum measurement problem never diminished physicists’
ability to reconcile theoretical calculations with experimental data, it did
lead Bohr and his colleagues to articulate a quantum mechanical framework
with some distinctly awkward features. Many found the framework’s need for
fuzzy words about wavefunction collapse or the
imprecise notion of “large” systems belonging to the dominion of classical
physics, unnerving. To a significant extent, by taking account of decoherence, researchers have rendered these vague ideas
unnecessary. However, a key issue that I skirted in
the description above is that even though decoherence
suppresses quantum interference and thereby coaxes weird quantum
probabilities to be like their familiar classical counterparts, each of
the potential outcomes embodied in a wave function still vies for
realization. And so we are still left wondering how one outcome “wins”
and where the many other possibilities “go” when that actually happens. When
a coin is tossed, classical physics gives an answer to the analogous
question. It says that if you examine the way the coin is set spinning with
adequate precision, you can, in principle, predict whether it will
land heads or tails. On closer inspection, then, precisely one outcome is
determined by details you initially overlooked. The same cannot be said in
quantum physics. Decoherence allows quantum
probabilities to be interpreted much like classical ones, but does not
provide any finer details that select one of the many possible outcomes to
actually happen. Much in the spirit of Bohr, some
physicists believe that searching for such an explanation of how a single,
definite outcome arises is misguided. These physicists argue that quantum
mechanics, with its updating to include decoherence,
is a sharply formulated theory whose predictions account for the behavior of
laboratory measuring devices. And according to this view, that is the
goal of science. To seek an explanation of what’s really going on, to
strive for an understanding of how a particular outcome came to be, to
hunt for a level of reality beyond detector readings and computer
printouts betrays an unreasonable intellectual greediness. Many others, including me, have a
different perspective. Explaining data is what science is about. But
many physicists believe that science is also about embracing the theories
data confirms and going further by using them to gain maximal insight into
the nature of reality. I strongly suspect that there is much insight to be
gained by pushing onward toward a complete solution of the measurement
problem. Thus, although there is wide agreement
that environment-induced decoherence is a crucial part of the structure spanning the quantum-to-classical divide, and while
many are hopeful that these considerations will one day coalesce into a
complete and cogent connection between the two, far from everyone is
convinced that the bridge has yet been fully built.

Quantum Mechanics and the Arrow of Time

So where do we stand on
the measurement problem, and what does it mean for the arrow of time? Broadly
speaking, there are two classes of proposals for linking common experience with quantum reality. In the first class (for example, wavefunction as knowledge; Many Worlds; decoherence), Schrödinger’s equation is the be-all and end-all of the story; the proposals simply provide
different ways of interpreting what the equation means for physical reality.
In the second class (for example, Bohm; Ghirardi-Rimini-Weber), Schrödinger’s equation must be supplemented with other equations (in Bohm’s case, an equation that shows how a wavefunction pushes a particle around) or it must be modified (in the Ghirardi-Rimini-Weber case, to incorporate a new, explicit collapse mechanism). A key question for determining the impact on time’s arrow is whether these proposals introduce a fundamental asymmetry between one direction in time and the other. Remember, Schrödinger’s equation, like the classical laws it replaces, treats both directions in time on an equal footing. In the first class of proposals, the Schrödinger framework is not at all modified, so temporal symmetry is maintained. In the
second class, temporal symmetry may or may not survive, depending on the
details. For example, in Bohm’s approach, the new
equation proposed does treat time future and time past on an equal footing
and so no asymmetry is introduced. However, the proposal of Ghirardi, Rimini, and Weber
introduces a collapse mechanism that does have a temporal arrow—an “uncollapsing” wavefunction,
one that goes from a spiked to a spread-out shape, would not conform to the
modified equations. Thus, depending on the proposal, quantum mechanics,
together with a resolution to the quantum measurement puzzle, may or may not
continue to treat each direction in time equally. Let’s consider the
implications of each possibility. If
time symmetry persists (as I suspect it will), all of the reasoning and all
of the conclusions of the last chapter can be carried over with little change
to the quantum realm. The core physics that entered our discussion of time’s
arrow was the time-reversal symmetry of classical physics. While the basic language and framework of quantum physics differ from those of classical physics—wavefunctions instead of positions and velocities; Schrödinger’s equation instead of the classical equations of motion—the same temporal symmetry holds, so the reasoning carries over. We would thus come to the same puzzle encountered in Chapter 6. If we take our
observations of the world right now as given, as undeniably real, and if
entropy should increase both toward the future and toward the past, how do we
explain how the world got to be the way it is and how it will subsequently
unfold? And the same two possibilities would present themselves: either all
that we see popped into existence by a statistical fluke that you would
expect to happen every so often in an eternal universe that spends the vast
majority of its time being totally disordered, or, for some reason, entropy
was astoundingly low just following the big bang and for the last 14 billion
years things have been slowly unwinding and will continue to do so toward the
future. As in Chapter 6, to avoid the quagmire of not trusting memories,
records, and the laws of physics, we focus on the second option—a low-entropy
bang—and seek an explanation for how and why things began in such a special
state. If, on the other hand, time symmetry is
lost—if the resolution of the measurement problem that is one day accepted
reveals a fundamental asymmetric treatment of future versus past within
quantum mechanics— it could very well provide the most straightforward
explanation of time’s arrow. It might show, for instance, that eggs splatter
but don’t unsplatter because, unlike what we found
using the laws of classical physics, splattering solves the full quantum
equations but unsplattering doesn’t. A reverse-run
movie of a splattering egg would then depict motion that couldn’t happen in
the real world, which would explain why we’ve never seen it. And that would
be that. Possibly. But even though this would
seem to provide a very different explanation of time’s arrow, in reality it
may not be as different as it appears. As we emphasized in Chapter 6, for the
pages of War and Peace to become increasingly disordered they must
begin ordered; for an egg to become disordered through splattering, it must
begin as an ordered, pristine egg; for entropy to increase toward the
future, entropy must be low in the past so things have the potential to
become disordered. However, just because a law treats past and future
differently does not ensure that the law dictates a past with lower entropy.
The law might still imply higher entropy toward the past (perhaps entropy
would increase asymmetrically toward past and future), and it’s even possible
that a timeasymmetric law would be unable to say anything about the past at
all. The latter is true of the Ghirardi-Rimini-Weber proposal, one of the only substantive time-asymmetric proposals on the market. Once their collapse mechanism does its trick, there is no way to undo it—there is no way to start from the collapsed wavefunction and evolve it back to its previous spread-out form. The detailed form of the wavefunction is lost in the collapse—it turns into a spike—and so it’s impossible to “retrodict” what things were like at any time before the collapse occurred. Thus, even though a time-asymmetric law
would provide a partial explanation for why things unfold in one temporal
order but never in the reverse order, it could very well call for the same
key supplement required by time-symmetric laws: an explanation for why entropy was low in the distant past. Certainly, this is true of the time-asymmetric modifications to quantum mechanics that have so far been proposed. And so, unless some future discovery reveals two features, both of which I consider unlikely—a time-asymmetric solution to the quantum measurement problem that,
additionally, ensures that entropy decreases toward the past—our effort to
explain the arrow of time leads us, once again, back to the origin of the
universe, the subject of the next part of the book. As these chapters will make clear,
cosmological considerations wend their way through many mysteries at the
heart of space, time, and matter. So on the journey toward modern cosmology’s
insights into time’s arrow, it’s worth our while not
to rush through the landscape, but rather, to take a well-considered stroll
through cosmic history.

In addition to Greene’s readable prose, simple explanations, and examples, The Fabric of the Cosmos contains 85 illustrations that provide one more way of grasping the material presented. By the end of the book, readers will come away with some of the threads of string theory, and an understanding, no matter how counterintuitive it may seem, of what reality is all about.

Steve Hopkins, August 26, 2004



© 2004 Hopkins and Company, LLC

The recommendation rating for this book appeared in the September 2004 issue of Executive Times.

URL for this review: http://www.hopkinsandcompany.com/Books/The Fabric of the Cosmos.htm

For Reprint Permission, Contact: Hopkins & Company, LLC • Email: books@hopkinsandcompany.com
