Information Processing

Parallel universes

This BBC documentary follows Mark Everett, the rock singer son of Hugh Everett III (the discoverer of many worlds quantum mechanics) as he seeks to understand his father’s legacy. Unfortunately I can only find the snippet linked to above — does anyone have a copy? Apparently Hugh Everett and his son were not close. Mark Everett explains that the first time he held his father was on the day he discovered him dead. Hugh was 51, Mark 19.

Another related (and possibly copyright violating) request: anyone have a copy of this Believer article by novelist Rivka Galchen on many worlds quantum mechanics?

BBC News:

…Mark Oliver Everett’s own career path couldn’t have been more different from that of his father.

Mark Everett is the creative force behind the successful American cult rock band Eels. He is the first to admit that he can barely add up a restaurant tip and knows virtually nothing about quantum physics.

The splitting universe

But the main reason Mark decided to participate in the documentary was that he has always felt estranged from his father, and this would be an opportunity to understand his father better.

Along the way, Mark meets many of his father’s old colleagues and also younger physicists who have been inspired by Hugh Everett’s work. …

Written by infoproc

June 20, 2008 at 8:40 pm

Are you Gork?


Slide from this talk.

Survey questions:

1) Could you be Gork the robot? (Do you split into different branches after observing the outcome of, e.g., a Stern-Gerlach measurement?)

2) If not, why not? e.g.,

I have a soul and Gork doesn’t

Decoherence solved all that! See previous post.

I don’t believe that quantum computers will work as designed, e.g., sufficiently large algorithms or subsystems will lead to real (truly irreversible) collapse. Macroscopic superpositions larger than whatever was done in the lab last week are impossible.

QM is only an algorithm for computing probabilities — there is no reality to the quantum state or wavefunction or description of what is happening inside a quantum computer.

Stop bothering me — I only care about real stuff like the Higgs mass / SUSY-breaking scale / string Landscape / mechanism for high-Tc / LIBOR spread / how to generate alpha.

Written by infoproc

April 27, 2008 at 3:20 pm

Feynman and Everett

A couple of years ago I gave a talk at the Institute for Quantum Information at Caltech about the origin of probability — i.e., the Born rule — in many worlds (“no collapse”) quantum mechanics. It is often claimed that the Born rule is a consequence of many worlds — that it can be derived from, and is a prediction of, the no collapse assumption. However, this is only true in a particular limit of infinite numbers of degrees of freedom — it is problematic when only a finite number of degrees of freedom are considered.
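
The finite-N problem can be made concrete with a toy calculation (my illustration, not taken from the talk): consider N independent measurements of a spin prepared with Born probability p of "up". Each length-N outcome string is one branch. If one naively counts branches, the typical branch shows "up" about half the time regardless of p; it is only the total squared amplitude (the Born weight) that concentrates on branches with "up" frequency near p, and only as N becomes large. A short Python sketch:

    # Toy comparison of branch counting vs. Born weights for N repeated
    # spin measurements on a|up> + b|down>, with p = |a|^2 (illustration only).
    from math import comb

    def branch_statistics(p, N, eps=0.05):
        """Fraction of branches, and total Born weight, of outcome strings
        whose observed 'up' frequency k/N lies within eps of p."""
        n_near = 0          # number of such outcome strings (branches)
        weight_near = 0.0   # their total squared amplitude (Born weight)
        for k in range(N + 1):
            if abs(k / N - p) <= eps:
                n_near += comb(N, k)
                weight_near += comb(N, k) * p**k * (1 - p)**(N - k)
        return n_near / 2**N, weight_near

    for N in (10, 100, 1000):
        frac, weight = branch_statistics(p=0.9, N=N)
        print(f"N={N:5d}  branch fraction near p: {frac:.3e}  Born weight: {weight:.3f}")

As N grows the Born weight near p approaches 1 while the fraction of branches near p goes to zero; for any finite N the two notions of "how likely" disagree, which is the sticking point alluded to above.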

After the talk I had a long conversation with John Preskill about many worlds, and he pointed out to me that both Feynman and Gell-Mann were strong advocates: they would go so far as to browbeat visitors on the topic. In fact, both claimed to have invented the idea independently of Everett.

Today I noticed a fascinating paper on the arXiv posted by H.D. Zeh, one of the developers of the theory of decoherence:

Feynman’s quantum theory

H. D. Zeh

(Submitted on 21 Apr 2008)

A historically important but little known debate regarding the necessity and meaning of macroscopic superpositions, in particular those containing different gravitational fields, is discussed from a modern perspective.

The discussion analyzed by Zeh, concerning whether the gravitational field need be quantized, took place at a relativity meeting at the University of North Carolina in Chapel Hill in 1957. Feynman presented a thought experiment in which a macroscopic mass (the source of a gravitational field) is placed in a superposition state. One of the central points is, necessarily, whether the wavefunction describing the macroscopic system must collapse, and if so exactly when. The discussion sheds some light on Feynman's (early) thoughts on many worlds and his exposure to Everett's ideas, which apparently occurred even before their publication (see below).

Nowadays no one doubts that large and complex systems can be placed in superposition states. This capability is at the heart of quantum computing. Nevertheless, few have thought through the implications for the necessity of the “collapse” of the wavefunction describing, e.g., our universe as a whole. I often hear statements like “decoherence solved the problem of wavefunction collapse”. I believe that Zeh would agree with me that decoherence is merely the mechanism by which the different Everett worlds lose contact with each other! (And, clearly, this was already understood by Everett to some degree.) Incidentally, if you read the whole paper you can see how confused people — including Feynman — were about the nature of irreversibility, and the difference between effective (statistical) irreversibility and true (quantum) irreversibility.

Zeh: Quantum gravity, which was the subject of the discussion, appears here only as a secondary consequence of the assumed absence of a collapse, while the first one is that “interference” (superpositions) must always be maintained. … Because of Feynman’s last sentence it is remarkable that neither John Wheeler nor Bryce DeWitt, who were probably both in the audience, stood up at this point to mention Everett, whose paper was in press at the time of the conference because of their support [14]. Feynman himself must have known it already, as he refers to Everett’s “universal wave function” in Session 9 – see below.

Toward the end of the conference (in the Closing Session 9), Cecile DeWitt mentioned that there exists another proposal that there is one “universal wave function”. This function has already been discussed by Everett, and it might be easier to look for this “universal wave function” than to look for all the propagators. Feynman said that the concept of a “universal wave function” has serious conceptual difficulties. This is so since this function must contain amplitudes for all possible worlds depending on all quantum-mechanical possibilities in the past and thus one is forced to believe in the equal reality [sic!] of an infinity of possible worlds.

Well said! Reality is conceptually difficult, and it seems to go beyond what we are able to observe. But he is not ready to draw this ultimate conclusion from the superposition principle that he always defended during the discussion. Why should a superposition not be maintained when it involves an observer? Why “is” there not an amplitude for me (or you) observing this and an amplitude for me (or you) observing that in a quantum measurement – just as it would be required by the Schrödinger equation for a gravitational field? Quantum amplitudes represent more than just probabilities – recall Feynman’s reply to Bondi’s first remark in the quoted discussion. However, in both cases (a gravitational field or an observer) the two macroscopically different states would be irreversibly correlated to different environmental states (possibly including you or me, respectively), and are thus not able to interfere with one another. They form dynamically separate “worlds” in this entangled quantum state.

Feynman then gave a resume of the conference, adding some “critical comments”, from which I here quote only one sentence addressed to mathematical physicists:

Feynman: “Don’t be so rigorous or you will not succeed.”

(He explains in detail how he means it.) It is indeed a big question what mathematically rigorous theories can tell us about reality if the axioms they require are not, or not exactly, empirically founded, and in particular if they do not even contain the most general axiom of quantum theory: the superposition principle. It was the important lesson from decoherence theory that this principle holds even where it does not seem to hold. However, many modern field theorists and cosmologists seem to regard quantization as of secondary or merely technical importance (just providing certain "quantum corrections") for their endeavours, which are essentially performed by using classical terms (such as classical fields). It is then not surprising that the measurement problem never comes up for them. How can anybody do quantum field theory or cosmology at all nowadays without first stating clearly whether he/she is using Everett's interpretation or some kind of collapse mechanism (or something even more speculative)?

Previous posts on many worlds quantum mechanics.

Written by infoproc

April 23, 2008 at 8:05 pm

Many Worlds: A brief guide for the perplexed

I added this to the earlier post 50 years of Many Worlds and thought I would make it into a stand-alone post as well.

Many Worlds: A brief guide for the perplexed

In quantum mechanics, states can exist in superpositions, such as (for an electron spin)

(state)   =   (up)   +   (down)

When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) “collapses” to one of the two possible outcomes:

(up)     or     (down),

with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even if we have specified the state above as precisely as is allowed by nature, we are still left with a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction.
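
For concreteness, here is a minimal numerical sketch of this two-outcome example (my illustration; the equal amplitudes give the 1/2, 1/2 probabilities quoted above):

    # Born probabilities for an equal superposition of up and down (sketch).
    import numpy as np

    up = np.array([1.0, 0.0])
    down = np.array([0.0, 1.0])
    state = (up + down) / np.sqrt(2)      # (state) = (up) + (down), normalized

    probs = np.abs(state) ** 2            # Born rule: squared amplitudes
    print(probs)                          # -> [0.5 0.5]

    # Repeated Copenhagen-style measurements sample one outcome per run
    # with these probabilities.
    rng = np.random.default_rng(0)
    outcomes = rng.choice(["up", "down"], size=10_000, p=probs)
    print((outcomes == "up").mean())      # close to 0.5

Even with the state specified exactly, only the probabilities of the two outcomes are determined.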

There is no satisfactory understanding of how or exactly when the Copenhagen wavefunction “collapse” proceeds. Indeed, collapse introduces confusing issues like consciousness: what, exactly, constitutes an “observer”, capable of causing the collapse?

Everett suggested we simply remove wavefunction collapse from the theory. Then the state always evolves in time according to the Schrödinger equation. Suppose we follow our electron state through a device which measures its spin: for example, by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether the deflection was up or down. The whole process is described by the Schrödinger equation, with the final state being

(state)   =   (up) (device recorded up)   +   (down) (device recorded down)

Here “device” could, but does not necessarily, refer to the human or robot brain which saw the detector bulb flash. What matters is that the device is macroscopic and has a large number of degrees of freedom (e.g., of order Avogadro’s number). In that case, as noted by Everett, the two sub-states of the world (or device) after the measurement are effectively orthogonal (have essentially zero overlap). In other words, the quantum state describing a huge number of emitted red photons and zero emitted green photons is orthogonal to the complementary state.

If a robot or human brain is watching the experiment, it perceives a unique outcome just as predicted by Copenhagen. Success! The experimental outcome is predicted by a simpler (sans collapse) version of the theory. The tricky part: there are now necessarily parts of the final state (wavefunction) describing both the up and down outcomes (I saw red vs I saw green). These are the many worlds of the Everett interpretation.

Personally, I prefer to call it No Collapse instead of Many Worlds — why not emphasize the advantageous rather than the confusing part of the interpretation?

Do the other worlds exist? Can we interact with them? These are the tricky questions remaining…

Some eminent physicists who (as far as I can tell) believe(d) in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, Sidney Coleman … In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett!

Written by infoproc

July 23, 2007 at 12:26 am

50 years of Many Worlds

Max Tegmark has a nice essay in Nature on the Many Worlds (MW) interpretation of quantum mechanics.

Previous discussion of Hugh Everett III and MW on this blog.

Personally, I find MW more appealing than the conventional Copenhagen interpretation, which is certainly incomplete. This point of view is increasingly common among those who have to think about the QM of isolated, closed systems: quantum cosmologists, quantum information theorists, etc. Tegmark correctly points out in the essay below that, despite a common misconception, progress in our understanding of decoherence in no way takes the place of MW in clarifying the problems of measurement and wavefunction collapse.

However, I believe there is a fundamental problem with deriving Born’s rule for probability of outcomes in the MW context. See research paper here and talk given at Caltech IQI here.

A brief guide for the perplexed:
In quantum mechanics, states can exist in superpositions, such as (for an electron spin)

(state)   =   (up)   +   (down)

When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) “collapses” to one of the two possible outcomes:

(up)     or     (down),

with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even though we have specified the state above as precisely as is allowed by nature, we are still left with a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction.

There is no satisfactory understanding of how or exactly when the Copenhagen wavefunction “collapse” proceeds. Indeed, collapse introduces confusing issues like consciousness: what, exactly, constitutes an “observer”, capable of causing the collapse?

Everett suggested we simply remove wavefunction collapse from the theory. Then the state always evolves in time according to the Schrödinger equation. Suppose we follow our electron state through a device which measures its spin: for example, by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether the deflection was up or down. The whole process is described by the Schrödinger equation, with the final state being

(state)   =   (up) (device recorded up)   +   (down) (device recorded down)

Here “device” could, but does not necessarily, refer to the human or robot brain which saw the detector bulb flash. What matters is that the device is macroscopic and has a large number of degrees of freedom (e.g., of order Avogadro’s number). In that case, as noted by Everett, the two states of the world (or device) after the measurement are effectively orthogonal (have essentially zero overlap). In other words, the quantum state describing a huge number of emitted red photons and zero emitted green photons is orthogonal to the complementary state.
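
A toy model shows how quickly that overlap vanishes (my own sketch, with an assumed per-subsystem overlap; nothing depends on the details): if each of N microscopic subsystems of the device records the outcome even slightly differently in the two branches, the overlap of the two device states is a single-subsystem overlap raised to the Nth power.

    # Effective orthogonality of macroscopically different device states (sketch).
    # Each of N subsystems records "red" vs. "green" imperfectly, so the two
    # single-subsystem record states have overlap c with |c| < 1; the overlap
    # of the full N-subsystem product states is then c**N.
    import numpy as np

    theta = 0.3                                  # assumed per-subsystem recording angle
    red   = np.array([np.cos(theta),  np.sin(theta)])
    green = np.array([np.cos(theta), -np.sin(theta)])

    c = abs(np.dot(red, green))                  # single-subsystem overlap (< 1)
    for N in (10, 100, 1000):
        print(N, c**N)                           # overlap of the two branches

    # For N of order Avogadro's number the overlap is exp(N * log c) with
    # log c < 0 and N ~ 6e23: utterly negligible.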

If a robot or human brain is watching the experiment, it perceives a unique outcome just as predicted by Copenhagen. Success! The experimental outcome is predicted by a simpler (sans collapse) version of the theory. The tricky part: there are now necessarily parts of the final state (wavefunction) describing both the up and down outcomes (I saw red vs I saw green). These are the many worlds of the Everett interpretation.

Personally, I prefer to call it No Collapse instead of Many Worlds — why not emphasize the advantageous rather than the confusing part of the interpretation?

Do the other worlds exist? Can we interact with them? These are the tricky questions remaining…

Some eminent physicists who (as far as I can tell) believe in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, … In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett!

Many lives in many worlds

Max Tegmark, Nature

Almost all of my colleagues have an opinion about it, but almost none of them have read it. The first draft of Hugh Everett’s PhD thesis, the shortened official version of which celebrates its 50th birthday this year, is buried in the out-of-print book The Many-Worlds Interpretation of Quantum Mechanics. I remember my excitement on finding it in a small Berkeley book store back in grad school, and still view it as one of the most brilliant texts I’ve ever read.

By the time Everett started his graduate work with John Archibald Wheeler at Princeton University in New Jersey, quantum mechanics had chalked up stunning successes in explaining the atomic realm, yet debate raged on as to what its mathematical formalism really meant. I was fortunate to get to discuss quantum mechanics with Wheeler during my postdoctoral years in Princeton, but never had the chance to meet Everett.

Quantum mechanics specifies the state of the Universe not in classical terms, such as the positions and velocities of all particles, but in terms of a mathematical object called a wavefunction. According to the Schrödinger equation, this wavefunction evolves over time in a deterministic fashion that mathematicians term ‘unitary’. Although quantum mechanics is often described as inherently random and uncertain, there is nothing random or uncertain about the way the wavefunction evolves.
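
An illustrative aside (mine, not Tegmark's): unitarity just means that the evolution is deterministic and preserves the norm of the wavefunction. A minimal numerical sketch, assuming a randomly chosen two-level Hamiltonian:

    # Unitary Schrodinger evolution is deterministic and norm-preserving (sketch).
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    H = (A + A.conj().T) / 2                  # a random Hermitian "Hamiltonian"

    psi0 = np.array([1.0, 0.0], dtype=complex)
    for t in (0.0, 0.5, 1.0, 2.0):
        U = expm(-1j * H * t)                 # time-evolution operator, unitary
        psi_t = U @ psi0
        print(t, np.linalg.norm(psi_t))       # norm stays 1 up to rounding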

The sticky part is how to connect this wavefunction with what we observe. Many legitimate wavefunctions correspond to counterintuitive situations, such as Schrödinger’s cat being dead and alive at the same time in a ‘superposition’ of states. In the 1920s, physicists explained away this weirdness by postulating that the wavefunction ‘collapsed’ into some random but definite classical outcome whenever someone made an observation. This add-on had the virtue of explaining observations, but rendered the theory incomplete, because there was no mathematics specifying what constituted an observation — that is, when the wavefunction was supposed to collapse.

Everett’s theory is simple to state but has complex consequences, including parallel universes. The theory can be summed up by saying that the Schrödinger equation applies at all times; in other words, that the wavefunction of the Universe never collapses. That’s it — no mention of parallel universes or splitting worlds, which are implications of the theory rather than postulates. His brilliant insight was that this collapse-free quantum theory is, in fact, consistent with observation. Although it predicts that a wavefunction describing one classical reality gradually evolves into a wavefunction describing a superposition of many such realities — the many worlds — observers subjectively experience this splitting merely as a slight randomness (see ‘Not so random’), with probabilities consistent with those calculated using the wavefunction-collapse recipe.

Gaining acceptance

It is often said that important scientific discoveries go through three phases: first they are completely ignored, then they are violently attacked, and finally they are brushed aside as well known. Everett’s discovery was no exception: it took more than a decade before it started getting noticed. But it was too late for Everett, who left academia disillusioned [1].

Everett’s no-collapse idea is not yet at stage three, but after being widely dismissed as too crazy during the 1970s and 1980s, it has gradually gained more acceptance. In an informal poll taken at a conference on the foundations of quantum theory in 1999, physicists rated the idea more highly than the alternatives, although many more physicists were still ‘undecided’ [2]. I believe the upward trend is clear.

Why the change? I think there are several reasons. Predictions of other types of parallel universes from cosmological inflation and string theory have increased tolerance for weird-sounding ideas. New experiments have demonstrated quantum weirdness in ever larger systems. Finally, the discovery of a process known as decoherence has answered crucial questions that Everett’s work had left dangling.

For example, if these parallel universes exist, why don’t we perceive them? Quantum superpositions cannot be confined, as most quantum experiments are, to the microworld. Because you are made of atoms, if atoms can be in two places at once in superposition, so can you.

The breakthrough came in 1970 with a seminal paper by H. Dieter Zeh, who showed that the Schrödinger equation itself gives rise to a type of censorship. This effect became known as ‘decoherence’, and was worked out in great detail by Wojciech Zurek, Zeh and others over the following decades. Quantum superpositions were found to remain observable only as long as they were kept secret from the rest of the world. The quantum card in our example (see ‘Not so random’) is constantly bumping into air molecules, photons and so on, which thereby find out whether it has fallen to the left or to the right, destroying the coherence of the superposition and making it unobservable. Decoherence also explains why states resembling classical physics have special status: they are the most robust to decoherence.
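
An illustrative aside (my sketch, not part of the essay): in the simplest models of decoherence, the interference terms of the system's reduced density matrix are multiplied by the overlap of the corresponding environment states, and that overlap shrinks exponentially with the number of environment degrees of freedom that have "found out" the outcome. Assuming a toy environment of n qubits:

    # Decoherence in miniature: a system qubit in (|0> + |1>)/sqrt(2) entangles
    # with n environment qubits that each imperfectly record the branch.  The
    # reduced density matrix of the system keeps its off-diagonal (interference)
    # terms only up to the overlap <E1|E0>, which decays exponentially with n.
    import numpy as np

    def reduced_density_matrix(n_env, theta=0.2):
        e0 = np.array([np.cos(theta),  np.sin(theta)])   # environment record of "0"
        e1 = np.array([np.cos(theta), -np.sin(theta)])   # environment record of "1"
        overlap = np.dot(e0, e1) ** n_env                # <E1|E0> for n product qubits
        return 0.5 * np.array([[1.0, overlap],
                               [overlap, 1.0]])

    for n in (0, 1, 5, 20, 50):
        print(n, reduced_density_matrix(n)[0, 1])        # coherence decays toward zero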

Science or philosophy?

The main motivation for introducing the notion of random wavefunction collapse into quantum physics had been to explain why we perceive probabilities and not strange macroscopic superpositions. After Everett had shown that things would appear random anyway (see ‘Not so random’) and decoherence had been found to explain why we never perceive anything strange, much of this motivation was gone. Even though the wavefunction technically never collapses in the Everett view, it is generally agreed that decoherence produces an effect that looks like a collapse and smells like a collapse.

In my opinion, it is time to update the many quantum textbooks that introduce wavefunction collapse as a fundamental postulate of quantum mechanics. The idea of collapse still has utility as a calculational recipe, but students should be told that it is probably not a fundamental process violating the Schrödinger equation so as to avoid any subsequent confusion. If you are considering a quantum textbook that does not mention Everett and decoherence in the index, I recommend buying a more modern one.

After 50 years we can celebrate the fact that Everett’s interpretation is still consistent with quantum observations, but we face another pressing question: is it science or mere philosophy? The key point is that parallel universes are not a theory in themselves, but a prediction of certain theories. For a theory to be falsifiable, we need not observe and test all its predictions — one will do.

Because Einstein’s general theory of relativity has successfully predicted many things we can observe, we also take seriously its predictions for things we cannot, such as the internal structure of black holes. Analogously, successful predictions by unitary quantum mechanics have made scientists take more seriously its other predictions, including parallel universes.

Moreover, Everett’s theory is falsifiable by future lab experiments: no matter how large a system they probe, it says, they will not observe the wavefunction collapsing. Indeed, collapse-free superpositions have been demonstrated in systems with many atoms, such as carbon-60 molecules. Several groups are now attempting to create quantum superpositions of objects involving 10^17 atoms or more, tantalizingly close to our human macroscopic scale. There is also a global effort to build quantum computers which, if successful, will be able to factor numbers exponentially faster than classical computers, effectively performing parallel computations in Everett’s parallel worlds.

The bird perspective

So Everett’s theory is testable and so far agrees with observation. But should you really believe it? When thinking about the ultimate nature of reality, I find it useful to distinguish between two ways of viewing a physical theory: the outside view of a physicist studying its mathematical equations, like a bird surveying a landscape from high above, and the inside view of an observer living in the world described by the equations, like a frog being watched by the bird.

From the bird perspective, Everett’s multiverse is simple. There is only one wavefunction, and it evolves smoothly and deterministically over time without any kind of splitting or parallelism. The abstract quantum world described by this evolving wavefunction contains within it a vast number of classical parallel storylines (worlds), continuously splitting and merging, as well as a number of quantum phenomena that lack a classical description. From their frog perspective, observers perceive only a tiny fraction of this full reality, and they perceive the splitting of classical storylines as quantum randomness.

What is more fundamental — the frog perspective or the bird perspective? In other words, what is more basic to you: human language or mathematical language? If you opt for the former, you would probably prefer a ‘many words’ interpretation of quantum mechanics, where mathematical simplicity is sacrificed to collapse the wavefunction and eliminate parallel universes.

But if you prefer a simple and purely mathematical theory, then you — like me — are stuck with the many-worlds interpretation. If you struggle with this you are in good company: in general, it has proved extremely difficult to formulate a mathematical theory that predicts everything we can observe and nothing else — and not just for quantum physics.

Moreover, we should expect quantum mechanics to feel counterintuitive, because evolution endowed us with intuition only for those aspects of physics that had survival value for our distant ancestors, such as the trajectories of flying rocks.

The choice is yours. But I worry that if we dismiss theories such as Everett’s because we can’t observe everything or because they seem weird, we risk missing true breakthroughs, perpetuating our instinctive reluctance to expand our horizons. To modern ears the Shapley–Curtis debate of 1920 about whether there was really a multitude of galaxies (parallel universes by the standards of the time) sounds positively quaint.

Everett asked us to acknowledge that our physical world is grander than we had imagined, a humble suggestion that is probably easier to accept after the recent breakthroughs in cosmology than it was 50 years ago. I think Everett’s only mistake was to be born ahead of his time. In another 50 years, I believe we will be more used to the weird ways of our cosmos, and even find its strangeness to be part of its charm.

Written by infoproc

July 16, 2007 at 4:59 pm