Monthly Archives: December 2016

More on Entropy and the Arrow of Time

This is somewhat technical. It’s for those interested in the puzzle of how we get the irreversible processes we see all around us from laws that are supposedly reversible. The trick: they are not all reversible. A crucial part of the physics of Nature involves an irreversible step that has long been neglected. The paper is an invited contribution to the journal Entropy. Click here to read.



Observation is Measurement, but Measurement is not necessarily “Observation”

“By final [state], we mean at that moment the probability is desired—that is, when the experiment is “finished.” –Richard P. Feynman, Feynman Lectures, Vol. 3

The challenge of defining measurement is evident in the excerpt from Feynman’s famous Lectures on Physics quoted above: when is the experiment ‘finished’? This remark arises in his discussion of when to add amplitudes and when to add probabilities, in order to arrive at the correct probability of a particular quantum process:

“Suppose you only want the amplitude that the electron arrives at x, regardless of whether the photon was counted at [detector 1 or detector 2]. Should you add the amplitudes [for those detections]? No! You must never add amplitudes for different and distinct final states. Once the photon is accepted by one of the photon counters, we can always determine which alternative occurred if we want, without any further disturbance to the system…do not add amplitudes for different final conditions, where by ‘final’ we mean at the moment the probability is desired—that is, when the experiment is ‘finished’. You do add the amplitudes for the different indistinguishable alternatives inside the experiment, before the complete process is finished. At the end of the process, you may say that ‘you don’t want to look at the photon’. That’s your business, but you still do not add the amplitudes. Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.” [Feynman 1965 Vol 3, 3-7; original italics and quotations]
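Feynman’s rule can be sketched numerically. The amplitudes below are made-up illustrative numbers, not from any actual experiment; the point is only the structure of the calculation: indistinguishable alternatives get their amplitudes added before squaring, while distinct final states get their squared magnitudes added, with no interference term.

```python
# A minimal numerical sketch of Feynman's rule (illustrative amplitudes only).

# Hypothetical complex amplitudes for two alternative processes:
a1 = complex(0.6, 0.2)
a2 = complex(0.3, -0.5)

# Case 1: the alternatives are indistinguishable inside the experiment.
# Add the amplitudes first, then square the magnitude.
p_indistinguishable = abs(a1 + a2) ** 2

# Case 2: the alternatives end in different, distinct final states
# (e.g., the photon registers in counter 1 vs. counter 2).
# Square each magnitude first, then add; no interference term appears.
p_distinct = abs(a1) ** 2 + abs(a2) ** 2

# The two prescriptions differ by exactly the interference (cross) term:
interference = 2 * (a1 * a2.conjugate()).real
print(p_indistinguishable, p_distinct, interference)
```

Whether that cross term is present or absent is what Feynman’s passage is about: Nature supplies the interference whenever the alternatives are in fact indistinguishable, regardless of whether anyone ‘looks.’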

We’ve already observed here in previous posts (e.g., this one) that TI provides the process, missing in standard quantum theory, that triggers the measurement transition and, in Feynman’s terms, tells us when the experiment is ‘finished.’ As Feynman noted, Nature is going to behave this way whether or not you look at something—and TI is what tells us how she behaves that way! (Feynman, however, was not taking absorption into account, so he could not really pin down what makes the process ‘finished.’) That process is the response of absorbers. While there can be no deterministic, mechanistic account of what “causes” the final outcome, the measurement problem is solved by TI to the extent that it succeeds in defining what a “measurement” is, and that definition does not require reference to anything outside the theory itself.

We’ll return to the very interesting question of the apparent mystery of the choice of one outcome out of many eligible ones in later posts regarding free will. For now, we’re going to focus on the history of how the concept of an “external conscious observer” became entangled (pardon the pun) with quantum theory in a dysfunctional way, to the detriment of both the study of consciousness and the study of quantum theory.

John Von Neumann and the “Measurement Transition”

It was the brilliant mathematical physicist John von Neumann who put the awkward but functional machinery of quantum theory on a rigorous mathematical footing. Von Neumann observed that there seemed to be two different processes at work in the successful application of the theory: (A) the deterministic evolution of Schrödinger’s famous equation for the wave function, and (B) the mysterious indeterministic evolution that occurred during a measurement. While he provided a useful and seemingly correct mathematical description of this Process B occurring during measurement, he could provide no physical reason for it. And indeed, without including absorber response, there simply is no physical reason for it. Since von Neumann (along with everyone else working in quantum theory, except for a few physicists exploring the “direct action theory of fields,” which is the basis for TI) was unaware of the possibility of absorber response, he and the vast majority of physicists concluded that there is no physical reason for the measurement transition.
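In standard textbook notation, the two processes can be summarized schematically as follows (using this post’s A/B labels; von Neumann’s own numbering ran the other way):

```latex
% Process A: deterministic, unitary Schrödinger evolution
i\hbar \frac{\partial}{\partial t}\,|\psi(t)\rangle = H\,|\psi(t)\rangle,
\qquad |\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle .

% Process B: the indeterministic measurement transition.
% A superposition over eigenstates |a_i> of the measured observable
% yields a single outcome |a_k>, with probability given by the Born rule:
|\psi\rangle = \sum_i c_i\,|a_i\rangle
\;\longrightarrow\; |a_k\rangle
\quad \text{with probability } |c_k|^2 .
```

Process A is continuous and reversible; Process B is discontinuous and irreversible, and it is this second step for which von Neumann could supply no physical trigger.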

Thus was born the resort to the “consciousness of an external observer.” In this form, consciousness was a mysterious and primitive notion, detached from scientific examination, since it was by definition external to the processes under scientific study. Of course, according to TI, Process B corresponds to a specific process under scientific study, so TI does not need to resort to the “consciousness of an external observer” in this way. However, TI in no way denies consciousness! Under TI, the topic of consciousness and subjective awareness regains its place as a legitimate subject of study without serving as an ineffective placeholder for a missing part of quantum theory.

Why is the appeal to an external consciousness ineffective? Because there is no way to say where the required “external consciousness” enters. That is, it smuggles in an ill-defined (and arguably undefinable) dividing line between the “nonconscious” things in the experiment and the “external conscious observer.” In the Schrödinger’s Cat experiment, isn’t the Cat conscious? Why can’t he “collapse the wave function”? Why is he just an internal system and not an “external observer”?

This puzzle is the so-called “Wigner’s Friend” variation on the Cat Paradox. Eugene Wigner, a famous physicist, noted that every observer of the box with the Cat himself becomes entangled with the previous participating systems (atom, Geiger counter, vial of gas, etc.). So, if Wigner is the one who opens the box, he must be treated by quantum theory as simply a new part of the entanglement, lacking any reason for “collapsing” anything. Appealing to a friend entering the room and looking at Wigner as the relevant “conscious observer” doesn’t help, because then the friend becomes entangled also; and then the friend’s friend, etc. Without any basis for Process B, the chain of entanglement necessarily continues; there is no principled way to say that a “conscious observer” is external to anything, or even what a “conscious observer” is! Yet, since (apart from TI) there is no way around this, the notion of a “conscious observer” as crucial to accounting for measurement results has hung on like a ragged band-aid that has long since ceased to protect the wound.

So we have the following curious situation: owing to the press of history and the long-intractable problem of explaining measurement (without including absorber response), it is now often considered naïve to expect that measurement can be defined without resort to “consciousness.” The failure to solve the measurement problem has been elevated to the “lesson” that “quantum theory is a theory about the observer,” and/or that “quantum theory tells us that consciousness is necessary to collapse the wave function.” While there is truth in the point that the type of outcome that will occur depends on how a quantum system is detected (a point addressed by TI, as we’ll see in later posts), the appeal to an ill-defined notion of “consciousness” fails to serve the function for which it is invoked.

Decoherence Fail

Another very prominent way to dispose of the measurement problem is to say that “decoherence” solves it. This approach assumes the so-called “Many Worlds Interpretation,” in which one denies that Process B ever really occurs. The only thing that is supposed to be going on is Process A, the deterministic evolution of all the quantum systems in the universe, all components of one gigantic universal quantum state. The claim is that if one considers only a part of that gigantic state, for technical reasons we won’t go into here, its mathematical description will be the same as the one we get from Process B—that is, it will look as though it has undergone the Process B measurement transition, even though it hasn’t.

The notion of “decoherence” is invoked to try to explain why the system we’re looking at will appear to have undergone Process B. Decoherence is the argument that a quantum system is interacting with a very large number of other, distinguishable systems in its environment; since we are not interested in those other systems, we just average over whatever they are doing and look only at the resulting description of our system of interest. When we do that, our system seems to be in the state resulting from Process B (basically a list of outcomes with probabilities attached to them). Then we see only one of those outcomes because we are in a particular “branch” of the Many Worlds, the other outcomes occurring in other branches. (Of course, that raises numerous other intractable questions about what it means to be “Me in this branch” as opposed to “Me in a different branch.”)

If we ignore the troubling questions about which “Me” I am, this sounds like a way to get around the measurement problem. However, it doesn’t really work. For one thing, the mathematical description of the part of the universe we’re looking at (say, our Geiger counter in the Schrödinger’s Cat experiment) is not an exact match for the Process B transition—it’s close, but it’s really not the same. (In technical terms, the matrix describing the system has off-diagonal elements, even if they are very small. They need to be strictly zero in order to pass as a real measurement transition.) Another, deeper reason why it doesn’t work is given in an earlier blog post. Put briefly, the whole program is circular: it depends on assuming that the kinds of objects required to be distinguishable, in order to effect the appropriate decoherence, must have been distinguishable from the beginning—before there were any ‘conscious human observers’ around. If the universe was one giant quantum state (with any and all possible quantum entanglement), where did this distinguishability come from? It has to be put in by hand, in a circular and ad hoc way, seemingly based only on our otherwise unexplained experiences as observers.
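The point about small-but-not-strictly-zero off-diagonal elements can be seen in a toy model (all numbers here are made up for illustration): a single qubit entangled with a two-dimensional stand-in for its environment, whose two states overlap slightly. Averaging over the environment (the partial trace) leaves a reduced density matrix whose off-diagonal coherence is suppressed by that overlap, but never exactly zero.

```python
import numpy as np

# Toy model: system qubit entangled with a 2-d "environment" stand-in.
# |Psi> = c0 |0>|e0> + c1 |1>|e1>, with overlap <e0|e1> small but nonzero.
c0 = c1 = 1 / np.sqrt(2)

e0 = np.array([1.0, 0.0])
e1 = np.array([0.1, np.sqrt(1 - 0.01)])   # overlap <e0|e1> = 0.1

# Full pure state on the 4-dimensional system (x) environment space.
psi = c0 * np.kron([1.0, 0.0], e0) + c1 * np.kron([0.0, 1.0], e1)
rho_full = np.outer(psi, psi.conj())

# "Average over the environment": partial trace over the second factor.
rho_sys = np.trace(rho_full.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Diagonal entries look like Process B probabilities (~0.5 each), but the
# off-diagonal coherence c0*c1*<e0|e1> = 0.05 is small, not strictly zero.
print(rho_sys)
```

A larger, more realistic environment would drive the overlap (and hence the off-diagonal element) exponentially close to zero, but never exactly to zero; that is the technical point above.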

Thus, standard quantum theory always ends up getting stuck on an ill-defined, primitive appeal to a “conscious observer,” outside whatever it is the theory is describing. In contrast, including absorption in the theory allows us to quantitatively explain the conditions for the measurement transition of Process B (even if it is inherently indeterministic). Then consciousness becomes freed from its misguided use as an ineffective explanatory band-aid, and can be considered instead in the more appropriate context of such topics as the “Hard Problem.” This is the argument that if we assume that all matter is inherently nonconscious (as in Descartes’ conception of matter as pure physical extension and nothing else), then no process involving that sort of dead matter can ever lead to anything conscious. That is, every aspect of the behavior of such a system is accounted for without its ever having any consciousness or subjective experience.

From this standpoint, it may very well be that consciousness and the capacity for subjective experience is an essential ground to all that is. TI itself takes no position on that issue, which is a metaphysical one. But there would be no inconsistency with TI in taking the fundamental ontology of the universe as consciousness or mental in nature. In such a picture, there would be no artificial dividing line between non-conscious stuff and conscious stuff; all would be inherently conscious. Then the arising of conscious biological organisms would not involve any sudden discontinuity, but would be a process in which consciousness gradually manifests itself in more and more complex and volitionally capable forms, through self-organization. Again, all of that is speculative and open for debate–not part of TI, although certainly consistent with it.

Returning now to the title of the post: TI provides an account of the “measurement transition” from within the theory, by taking into account the response of absorbers. Thus, all we need for “measurement” is an absorber—and this is well-defined in the relativistic version of TI (see this and either of my books for details). Now, observers like humans have absorbers too (our sense organs)—so of course when we interact with quantum objects, we trigger the measurement transition! This is why we can see, e.g., a single photon from a light source like the Sun. But the fact that we are ‘human observers’ is not what is required for that transition. Being a human being is a sufficient but not necessary condition for the measurement transition. The existence of an absorber, whether a human retinal cell or just a ground-state atom, is the necessary and sufficient condition.

Now, are all absorbers (indeed all quantum systems) inherently conscious? Does a photon really make a “choice” as to whether to go through a polarizer or not, as Heisenberg mused? Do quantum systems have some primitive form of volition? Freeman Dyson certainly thought so: “…mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call ‘chance’ when they are made by electrons” (from Disturbing the Universe). So this may be the case; but again, that’s a separate, and now well-defined, issue. Indeed, it can figure in providing a basis for free will (already explored here). But we no longer need to use consciousness as an ineffective band-aid for measurement in quantum theory. The study of consciousness deserves better.