## Feynman and Everett

A couple of years ago I gave a talk at the Institute for Quantum Information at Caltech about the origin of probability — i.e., the Born rule — in many worlds (“no collapse”) quantum mechanics. It is often claimed that the Born rule is a *consequence* of many worlds — that it can be derived from, and is a prediction of, the no collapse assumption. However, this is only true in a particular limit of infinite numbers of degrees of freedom — it is problematic when only a finite number of degrees of freedom are considered.

After the talk I had a long conversation with John Preskill about many worlds, and he pointed out to me that both Feynman and Gell-Mann were strong advocates: they would go so far as to browbeat visitors on the topic. In fact, both claimed to have invented the idea independently of Everett.

Today I noticed a fascinating paper on the arXiv posted by H.D. Zeh, one of the developers of the theory of decoherence:

Feynman’s quantum theory
H. D. Zeh

(Submitted on 21 Apr 2008)

A historically important but little known debate regarding the necessity and meaning of macroscopic superpositions, in particular those containing different gravitational fields, is discussed from a modern perspective.

The discussion analyzed by Zeh, concerning whether the gravitational field need be quantized, took place at a relativity meeting at the University of North Carolina in Chapel Hill in 1957. Feynman presents a thought experiment in which a macroscopic mass (source for the gravitational field) is placed in a superposition state. One of the central points is necessarily whether the wavefunction describing the macroscopic system must collapse, and if so exactly when. The discussion sheds some light on Feynman’s (early) thoughts on many worlds and his exposure to Everett’s ideas, which apparently occurred even before their publication (see below).

Nowadays no one doubts that large and complex systems can be placed in superposition states. This capability is at the heart of quantum computing. Nevertheless, few have thought through the implications for the necessity of the “collapse” of the wavefunction describing, e.g., our universe as a whole. I often hear statements like “decoherence solved the problem of wavefunction collapse”. I believe that Zeh would agree with me that decoherence is merely the mechanism by which the different Everett worlds lose contact with each other! (And, clearly, this was already understood by Everett to some degree.) Incidentally, if you read the whole paper you can see how confused people — including Feynman — were about the nature of irreversibility, and the difference between effective (statistical) irreversibility and true (quantum) irreversibility.

Zeh: Quantum gravity, which was the subject of the discussion, appears here only as a secondary consequence of the assumed absence of a collapse, while the first one is that “interference” (superpositions) must always be maintained. …

Because of Feynman’s last sentence it is remarkable that neither John Wheeler nor Bryce DeWitt, who were probably both in the audience, stood up at this point to mention Everett, whose paper was in press at the time of the conference because of their support [14]. Feynman himself must have known it already, as he refers to Everett’s “universal wave function” in Session 9 – see below.

Toward the end of the conference (in the Closing Session 9), Cecile DeWitt mentioned that there exists another proposal that there is one “universal wave function”. This function has already been discussed by Everett, and it might be easier to look for this “universal wave function” than to look for all the propagators.

Feynman said that the concept of a “universal wave function” has serious conceptual difficulties. This is so since this function must contain amplitudes for all possible worlds depending on all quantum-mechanical possibilities in the past and thus one is forced to believe in the equal reality [sic!] of an infinity of possible worlds.

Zeh: Well said! Reality is conceptually difficult, and it seems to go beyond what we are able to observe. But he is not ready to draw this ultimate conclusion from the superposition principle that he always defended during the discussion. Why should a superposition not be maintained when it involves an observer? Why “is” there not an amplitude for me (or you) observing this and an amplitude for me (or you) observing that in a quantum measurement – just as it would be required by the Schrödinger equation for a gravitational field? Quantum amplitudes represent more than just probabilities – recall Feynman’s reply to Bondi’s first remark in the quoted discussion. However, in both cases (a gravitational field or an observer) the two macroscopically different states would be irreversibly correlated to different environmental states (possibly including you or me, respectively), and are thus not able to interfere with one another. They form dynamically separate “worlds” in this entangled quantum state.

Feynman then gave a resume of the conference, adding some “critical comments”, from which I here quote only one sentence addressed to mathematical physicists:

Feynman: “Don’t be so rigorous or you will not succeed.” (He explains in detail how he means it.)

Zeh: It is indeed a big question what mathematically rigorous theories can tell us about reality if the axioms they require are not, or not exactly, empirically founded, and in particular if they do not even contain the most general axiom of quantum theory: the superposition principle. It was the important lesson from decoherence theory that this principle holds even where it does not seem to hold. However, many modern field theorists and cosmologists seem to regard quantization as of secondary or merely technical importance (just providing certain “quantum corrections”) for their endeavours, which are essentially performed by using classical terms (such as classical fields). It is then not surprising that the measurement problem never comes up for them.

How can anybody do quantum field theory or cosmology at all nowadays without first stating clearly whether he/she is using Everett’s interpretation or some kind of collapse mechanism (or something even more speculative)?

Previous posts on many worlds quantum mechanics.

Actually I think Feynman wasn’t happy with many worlds… from the original PhysComp conference:

“There are all kinds of questions like this, and what I’m trying to do is to get you people who think about computer-simulation possibilities to pay a great deal of attention to this, to digest as well as possible the real answers of quantum mechanics, and see if you can’t invent a different point of view than the physicists have had to invent to describe this. In fact the physicists have no good point of view. Somebody mumbled something about a many-world picture, and that many-world picture says the wave function psi is what’s real, and damn the torpedoes if there are so many variables, N^R. All these different worlds and every arrangement of configurations are all there just like our arrangement of configurations, we just happen to be sitting in this one. It’s possible, but I’m not very happy with it.”

Dave Bacon, April 23, 2008 at 10:12 pm

Well, I’m not happy about it either, but I don’t see any other sensible interpretation! Copenhagen is (to me) ill-defined (when does “collapse” happen, exactly?) and the Bayesian “qm is about what the observer knows (information), really” is a much more limited theory than the usual ones — try answering questions about quantum gravity and quantum cosmology with that perspective!

Here is Gell-Mann claiming that Feynman is a many worlder (decoherent historicist, in Gell-Mann and Hartle’s language; from a letter in Physics Today, Feb. 1999):

It is worth mentioning that the figure caption on the last page of the article is misleading. The photograph shows Richard Feynman and one of us (Gell-Mann), and the caption describes Gell-Mann as “one of the most sensible critics of orthodox quantum theory” and Feynman as “one of its most sensible defenders.” In fact, both physicists held very similar views of quantum mechanics. Some months before Feynman’s death in 1988, Gell-Mann described to a class at Caltech the status of our work on decoherent histories at that time. Feynman was in attendance, and at the end of the class, he stood up, and some of the students expected an exciting argument. But his comment was, “I agree with everything you said.”

http://www.math.rutgers.edu/~oldstein/papers/qtwoe/qtwoe.html

steve, April 24, 2008 at 12:11 am

Dieter Zeh responds below. I guess trying to figure out someone else’s interpretation of quantum mechanics has a significant intrinsic uncertainty!

*****************************

Dear Professor Hsu,

Thank you for your information. I can confirm what you say in your sentence that starts with “I believe that Zeh would agree …” (in your blog).

However, I am a bit surprised about what you say in your second paragraph – especially after Feynman’s reaction in the Chapel Hill discussion. I talked to Murray Gell-Mann on several occasions (unfortunately not with Feynman), but I think that he did not interpret Everett quite correctly. He did not particularly like the wave function (he used density matrices – indicating that they or the wave function were just tools for him). So he needed operators and projections for their interpretation (to form histories, which are NOT branching wave functions but discrete events). When he claimed that he and Hartle independently discovered Everett, he simply meant that they do not use a collapse. Occasionally they spoke of their theory as “post-Everett” quantum mechanics. I never quite understood it, but Robert Griffiths once asked me not to quote his papers any more, since they “have nothing to do with our decoherence approach”.

So I wonder what Gell-Mann may have said when Feynman agreed with him (according to the comment by “Steve”).

Best regards,

Dieter Zeh

P.S.: Perhaps I should have written this in your blog.

steve, April 24, 2008 at 6:09 pm

It’s all incoherence theory to me.

Dr. Hsu, could you just sum up in two or three basic sentences your current understanding of the physical nature of the universe. Thanks.

Is life really just a dream?

Quercus, April 25, 2008 at 4:26 pm

Hi Steve,

It seems to me that decoherence and many worlds (modulo your finiteness corrections) seem to explain Born probabilities pretty reasonably. Is it the use of the phrase “many worlds” that throws people off? Can I summarize the idea as: there is a huge “universe wave function” that is unitarily evolving, and with decoherence and the central limit theorem you get Born QM? Sounds more reasonable than collapse to me. What exactly bothers people? What am I missing?

best,

Carson

Carson Chow, April 25, 2008 at 7:31 pm

Quercus: when it comes to interpretations of qm, no one really knows what the ultimate answers are!

Carson:

I think people are unhappy with the “other branches (worlds)”. But, often such people have not thought through the more conventional approaches (e.g. Copenhagen) thoroughly enough to realize they are not just unpalatable, but even logically incomplete. (See the Weinberg excerpt on one of my linked blog pages.)

I believe that the no collapse interpretation is logically complete, and its main problem is the Born rule (the existence of the other branches does not bother me). One has to accept that there are many more branches where physicists have not seen empirical evidence for the Born rule than there are branches like ours where it seems to work. It turns out that the “maverick” branches all have small norm, but there is nothing in the standard formulation which says we should ignore them. Why, then, do we happen to live on a non-maverick branch? Zeh would say we just have to assume this a priori. I might hope that there is some dynamical reason that small norm branches somehow go away…
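The norm-versus-count tension can be made concrete with a toy calculation (the function name, the parameter values, and the 5% cutoff are my illustrative choices, not anything from the discussion): for N repeated measurements of a qubit with Born probability p for “up”, the branches whose observed frequency deviates appreciably from p are the overwhelming majority by sheer count, yet carry almost none of the total squared norm.

```python
from math import comb

def maverick_stats(p, N, eps):
    """For N measurements with P(up) = p, split the 2**N outcome branches
    into 'maverick' (observed frequency off from p by more than eps) and
    the rest; return the mavericks' total squared norm and count fraction."""
    norm = 0.0    # total Born weight (squared norm) of maverick branches
    count = 0.0   # fraction of branches, counted equally, that are maverick
    for k in range(N + 1):                      # k = number of 'up' outcomes
        if abs(k / N - p) > eps:
            norm += comb(N, k) * p**k * (1 - p)**(N - k)
            count += comb(N, k) / 2**N
    return norm, count

norm, count = maverick_stats(p=0.9, N=200, eps=0.05)
# norm is a few percent, while count is essentially 1: almost every branch
# by count is maverick, but almost all the norm sits on non-maverick ones.
```

So unless something privileges norm (the Born rule itself, or a dynamical mechanism of the sort hoped for above), counting branches alone gives the wrong statistics.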

Some critics of no collapse claim there is a “basis problem”, but I believe that decoherence + the assumption of local interactions solves this problem.
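As a toy illustration of that claim (entirely my own sketch, with made-up numbers): put a system in the superposition (|0⟩ + |1⟩)/√2 and let each of n environment particles scatter off it locally, so that the environment state picks up a slightly different imprint in each branch. The off-diagonal (interference) element of the system’s reduced density matrix is suppressed by the product of the per-scattering environment overlaps, so even a barely distinguishing imprint kills interference after many scatterings.

```python
from math import cos

def coherence(n, overlap):
    # Magnitude of the off-diagonal element of the reduced density matrix
    # after n scatterings, each leaving environment states with the given
    # overlap |<e0|e1>|; it starts at 0.5 and decays geometrically.
    return 0.5 * abs(overlap) ** n

c = cos(0.1)  # each single scattering barely distinguishes the branches
vals = [coherence(n, c) for n in (0, 100, 1000, 5000)]
# vals falls from 0.5 toward zero: interference is negligible long before
# any single scattering could be said to have "measured" the system.
```

Because realistic interactions are local in position, it is position-like (pointer) states whose superpositions lose coherence fastest, which is one way to see how decoherence selects the basis.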

Everett simply assumed the primacy of unitary (Schrödinger) evolution, and showed that all the other stuff (the *appearance* of collapse, and Born probabilities in a certain limit) followed as consequences.

There seems to be some dispute about what Feynman believed, or whether Gell-Mann and Hartle’s “decoherent histories” is the same as Griffiths’s or Everett’s formulation, but it does seem that all would agree that the Copenhagen collapse of the wavefunction is unnecessary, though its removal then implies the existence of other branches.

steve, April 25, 2008 at 8:12 pm

Just by the by, I studied physics as an undergrad (MS in OR subsequently). It never ceases to amaze me that the questions physics-type people ruminate on extensively get ignored by thinkers in other fields, yet end up being fundamental to how those fields’ own problems are framed.

David, April 26, 2008 at 3:05 pm

I’m a bit confused. I think I’m misunderstanding the relationship between entanglement and decoherence. I find myself thinking that entanglement would prevent branching into multiple strongly decohered universes on a macroscopic scale. I understand that, according to the MWI, the entire phase space defined by what is possible according to the laws of physics could be said to exist. But what if those laws dictate that the vast majority of possible interactions of each particle in a macroscopic object would result in that particle becoming entangled again with the macroscopic object, and with the entire world, due to the constant barrage of photons and other particles that all macroscopic objects have to sustain? Why wouldn’t this result in a sort of feedback cycle of quantum inbreeding (in-tangling) that prevents any macroscopic splitting of worlds? Just as in evolution, where inbreeding could be said to cause speciation, no speciation will occur unless a small (quantum?) subset of the population stops breeding (entangling genes) with the rest. What have I got wrong?

Spencer Hargiss, September 7, 2008 at 5:19 am