Schrödinger’s Cat Scratches Back with the Transactional Interpretation


(c) 2016 Ruth E. Kastner

The Arrow of Time from an Overlooked Physical Law

I’m reblogging this as a counterpoint to a recent book by John Gribbin, “The Time Illusion,” claiming that the ‘block world’ picture of spacetime is settled science. In fact, it is not. There is no real physical evidence for the ‘block world’ model, and there are counterexamples to the claim that relativity requires such a model. The inability of the block world to provide a complete explanation for the 2nd Law of Thermodynamics is another reason to keep an open mind regarding alternative, ‘growing universe’ models.


More on Entropy and the Arrow of Time

This is somewhat technical. It’s for those interested in the puzzle of how we get the irreversible processes we see all around us from laws that are supposedly reversible. The trick: they are not all reversible. A crucial part of the physics of Nature involves an irreversible step that has long been neglected. The paper is an invited contribution to the journal Entropy. Click here to read.



Observation is Measurement, but Measurement is not necessarily “Observation”

“By final [state], we mean at that moment the probability is desired—that is, when the experiment is “finished.” –Richard P. Feynman, Feynman Lectures, Vol. 3

The challenge of defining measurement is evident in the excerpt from Feynman’s famous Lectures on Physics quoted above: when is the experiment ‘finished’? This remark arises in his discussion of when to add amplitudes and when to add probabilities, in order to arrive at the correct probability of a particular quantum process:

“Suppose you only want the amplitude that the electron arrives at x, regardless of whether the photon was counted at [detector 1 or detector 2]. Should you add the amplitudes [for those detections]? No! You must never add amplitudes for different and distinct final states. Once the photon is accepted by one of the photon counters, we can always determine which alternative occurred if we want, without any further disturbance to the system…do not add amplitudes for different final conditions, where by ‘final’ we mean at the moment the probability is desired—that is, when the experiment is ‘finished’. You do add the amplitudes for the different indistinguishable alternatives inside the experiment, before the complete process is finished. At the end of the process, you may say that ‘you don’t want to look at the photon’. That’s your business, but you still do not add the amplitudes. Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.” [Feynman 1965 Vol 3, 3-7; original italics and quotations]
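Feynman’s rule can be made concrete with a small numerical sketch (my own illustration; the specific amplitude values are arbitrary and not from the Lectures):

```python
import numpy as np

# Two illustrative complex amplitudes for the alternatives.
# The magnitudes and relative phase are invented, chosen only to
# expose the interference cross-term.
a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
a2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi / 3)

# Indistinguishable alternatives ("inside the experiment"):
# add the amplitudes first, then square.
p_indistinguishable = abs(a1 + a2) ** 2

# Distinguishable final states (photon counted at detector 1 or 2):
# add the probabilities of the separate alternatives.
p_distinguishable = abs(a1) ** 2 + abs(a2) ** 2

print(p_indistinguishable)  # 1.5: includes the interference cross-term
print(p_distinguishable)    # 1.0: no interference term
```

The difference between the two results is exactly the cross-term 2·Re(a1·a2*), which survives only when the alternatives are indistinguishable — Feynman’s point that “you must never add amplitudes for different and distinct final states.”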

We’ve already observed here in previous posts (e.g., this one) that TI provides the process, missing in standard quantum theory, that triggers the measurement transition and, in Feynman’s terms, tells us when the experiment is ‘finished.’ As Feynman noted, Nature is going to behave this way whether or not you look at something—and TI is what tells us how she behaves that way! (Yet Feynman was not taking into account absorption, so he could not really pin down what makes the process ‘finished.’) That process is the response of absorbers. While there can be no deterministic, mechanistic account of what “causes” the final outcome, the measurement problem is solved by TI to the extent that it succeeds in defining what a “measurement” is, and that definition does not require reference to anything outside the theory itself.

We’ll return to the very interesting question of the apparent mystery of the choice of one outcome out of many eligible ones in later posts regarding free will. For now, we’re going to focus on the history of how the concept of an “external conscious observer” became entangled (pardon the pun) with quantum theory in a dysfunctional way, to the detriment of both the study of consciousness and the study of quantum theory.

John Von Neumann and the “Measurement Transition”

It was the brilliant mathematical physicist John von Neumann who put the awkward, but functional, machinery of quantum theory on a rigorous mathematical footing. Von Neumann observed that there seemed to be two different processes at work in the successful application of the theory: (A) the deterministic evolution of Schrödinger’s famous equation for the wave function, and (B) the mysterious indeterministic evolution that occurred during a measurement. While he provided a useful and seemingly correct mathematical description of this Process B occurring during measurement, he could provide no physical reason for it. And indeed, without including absorber response, there simply is no physical reason for it. Since von Neumann, like everyone else working in quantum theory (except for a few physicists exploring the “direct action theory of fields,” which is the basis for TI), was unaware of the possibility of absorber response, he and the vast majority of physicists concluded that “There is no physical reason for the measurement transition.”

Thus was born the resort to the “consciousness of an external observer.” In this form, consciousness was a mysterious and primitive notion, detached from scientific examination, since it was by definition external to the processes under scientific study. Of course, according to TI, Process B corresponds to a specific process under scientific study, so TI does not need to resort to the “consciousness of an external observer” in this way. However, TI in no way denies consciousness! Under TI, the topic of consciousness and subjective awareness regains its place as a legitimate subject of study without serving as an ineffective placeholder for a missing part of quantum theory.

Why is the appeal to an external consciousness ineffective? Because there is no way to say where the required “external consciousness” enters. That is, it smuggles in an ill-defined (and arguably undefinable) dividing line between the “nonconscious” things in the experiment and the “external conscious observer.” In the Schrödinger’s Cat experiment, isn’t the Cat conscious? Why can’t he “collapse the wave function”? Why is he just an internal system and not an “external observer”?

This puzzle is the so-called “Wigner’s Friend” variation on the Cat Paradox. Eugene Wigner, a famous physicist, noted that every observer of the box with the Cat becomes himself entangled with the previous participating systems (atom, Geiger counter, vial of gas, etc). So, if Wigner is the one who opens the box, he must be treated by quantum theory as simply a new part of the entanglement, lacking any reason for “collapsing” anything. Appealing to a friend entering the room and looking at Wigner as the relevant “conscious observer” doesn’t help, because then the friend becomes entangled also; and then the friend’s friend, etc. Without any basis for Process B, the chain of entanglement necessarily continues; there is no principled way to say that a “conscious observer” is external to anything, or even what a “conscious observer” is! Yet, since (apart from TI) there is no way around this, the notion of a “conscious observer” as crucial to accounting for measurement results has hung on like a ragged band-aid that has long since ceased to protect the wound.

So we have the following curious situation: owing to the press of history and the long-intractable problem of explaining measurement (without including absorber response), it is now often considered naïve to expect that measurement can be defined without resort to “consciousness.” The failure to solve the measurement problem has been elevated to the “lesson” that “quantum theory is a theory about the observer,” and/or that “quantum theory tells us that consciousness is necessary to collapse the wave function.” While there is truth in the point that the type of outcome that will occur is dependent on how a quantum system is detected—which is addressed by TI, as we’ll see in later posts–the appeal to an ill-defined notion of “consciousness” fails to serve the function for which it is invoked.

Decoherence Fail

Another very prominent way to dispose of the measurement problem is to say that “decoherence” solves it. This approach assumes the so-called “Many Worlds Interpretation,” in which one denies that Process B ever really occurs. The only thing that is supposed to be going on is Process A, the deterministic evolution of all the quantum systems in the universe, all components of one gigantic universal quantum state. The claim is that if one considers only a part of that gigantic state, for technical reasons which we won’t go into here, its mathematical description will be the same as the one that we get from Process B—that is, it will look as though it has undergone the Process B measurement transition, even though it hasn’t.

The notion of “decoherence” is invoked to try to explain why the system we’re looking at will appear to have undergone Process B. Decoherence is the argument that a quantum system is interacting with a very large number of other, distinguishable systems in its environment, and that we are not interested in those other systems, so we just average over whatever they are doing and look only at the resulting description of our system of interest. When we do that, our system seems to be in the state resulting from Process B (basically a list of outcomes with probabilities attached to them). Then, we only see one of those outcomes because we are in a particular “branch” of the Many Worlds, the other outcomes occurring in other branches. (Of course, that begs numerous other intractable questions about what it means to be “Me in this branch” as opposed to “Me in a different branch.”)

If we ignore the troubling questions about which “Me” I am, this sounds like a way to get around the measurement problem. However, it doesn’t really work. For one thing, the mathematical description of the part of the universe we’re looking at (say our Geiger counter in the Schrödinger Cat experiment) is not exactly a match for the Process B transition—it’s close, but it’s really not the same. (In technical terms, the matrix describing the system has off-diagonal elements, even if they are very small. They need to be strictly zero in order to pass as a real measurement transition.) Another, deeper reason why it doesn’t work is given in an earlier blog post. Put briefly, the whole program is circular: it depends on assuming that the kinds of objects required to be distinguishable, in order to effect the appropriate decoherence, must have been distinguishable from the beginning–before there were any ‘conscious human observers’ around. If the universe was one giant quantum state (with any and all possible quantum entanglement), where did this distinguishability come from? It has to be put in by hand, in a circular and ad hoc way, seemingly based only on our otherwise unexplained experiences as observers.
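The point about the off-diagonal elements can be illustrated with a toy numerical model (my own sketch, not from the post; the overlap value and amplitudes are invented). A system correlated with n environment systems has its off-diagonal (coherence) terms suppressed by a factor that shrinks rapidly with n but never becomes exactly zero:

```python
import numpy as np

# Toy model: a qubit in the state a|0> + b|1> becomes correlated with
# n environment qubits, each ending up in |e0> or |e1> depending on
# the system bit.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Each environment qubit's two "pointer" states are nearly, but not
# exactly, orthogonal; their overlap is what survives in the
# off-diagonal terms of the reduced density matrix.
e0 = np.array([1.0, 0.0])
e1 = np.array([0.1, np.sqrt(1 - 0.01)])  # <e0|e1> = 0.1

overlap = e0 @ e1  # single-environment-qubit overlap

for n in (1, 5, 20):
    # Tracing out n environment qubits multiplies the coherence term
    # by overlap**n; the diagonal probabilities are untouched.
    rho = np.array([[abs(a) ** 2,          a * b * overlap ** n],
                    [a * b * overlap ** n, abs(b) ** 2]])
    print(n, rho[0, 1])  # tiny for large n, but never exactly zero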

Thus, standard quantum theory always ends up getting stuck on an ill-defined, primitive appeal to a “conscious observer,” outside whatever it is the theory is describing. In contrast, including absorption in the theory allows us to quantitatively explain the conditions for the measurement transition of Process B (even if it is inherently indeterministic). Then consciousness becomes freed from its misguided use as an ineffective explanatory band-aid, and can be considered instead in the more appropriate context of such topics as the “Hard Problem.” This is the argument that if we assume that all matter is inherently nonconscious (as in Descartes’ conception of matter as pure physical extension and nothing else), then no process involving that sort of dead matter can ever lead to anything conscious. That is, every aspect of the behavior of such a system is accounted for without its ever having any consciousness or subjective experience.

From this standpoint, it may very well be that consciousness and the capacity for subjective experience is an essential ground to all that is. TI itself takes no position on that issue, which is a metaphysical one. But there would be no inconsistency with TI in taking the fundamental ontology of the universe as consciousness or mental in nature. In such a picture, there would be no artificial dividing line between non-conscious stuff and conscious stuff; all would be inherently conscious. Then the arising of conscious biological organisms would not involve any sudden discontinuity, but would be a process in which consciousness gradually manifests itself in more and more complex and volitionally capable forms, through self-organization. Again, all of that is speculative and open for debate–not part of TI, although certainly consistent with it.

Returning now to the title of the post: TI provides an account of the “measurement transition” from within the theory, by taking into account the response of absorbers. Thus, all we need for “measurement” is an absorber–and this is well-defined in the relativistic version of TI (see this and either of my books for details). Now, observers like humans have absorbers too (our sense organs)–so of course when we interact with quantum objects, we trigger the measurement transition! This is why we can see, e.g., a single photon from a light source like the Sun. But the fact that we are ‘human observers’ is not what is required for that transition. Being a human being is a sufficient but not necessary condition for the measurement transition. The existence of an absorber, whether a human retinal cell or just a ground state atom, is the necessary and sufficient condition.

Now, are all absorbers (indeed all quantum systems) inherently conscious? Does a photon really make a “choice” as to whether to go through a polarizer or not, as Heisenberg mused? Do quantum systems have some primitive form of volition? Freeman Dyson certainly thought so: “…mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call “chance” when they are made by electrons.” (from Disturbing the Universe.) So this may be the case; but again, that’s a separate, and now well-defined, issue. Indeed, it can figure in providing a basis for free will (already explored here). But we no longer need to use consciousness as an ineffective band-aid for measurement in quantum theory. The study of consciousness deserves better.


The Arrow of Time from an Overlooked Physical Law


In this post, I’m going to disagree with the following statement by physicist Sean Carroll concerning the nature of time:

“The weird thing about the arrow of time is that it’s not to be found in the underlying laws of physics. It’s not there. So it’s a feature of the universe we see, but not a feature of the laws of the individual particles. So the arrow of time is built on top of whatever local laws of physics apply.”–Sean Carroll,

That is a common position, but it could very well be wrong. Specifically, what could be wrong with it is the claim that the arrow of time is “not to be found in the underlying laws of physics.” That claim comes from ignoring the possibility that there could be real, dynamical, irreversible collapse in quantum theory. If there is such collapse, that provides the missing link between physical theory and the phenomena we see that reflect the arrow of time.

First, it should be noted that collapse has been a formal part of standard quantum mechanics since the brilliant mathematician/physicist John von Neumann formalized the theory back in the 1920s. Von Neumann referred explicitly to collapse as a discontinuous, indeterministic process, and noted that it was irreversible. However, in recent decades, it has become fashionable to ignore collapse, which means to (explicitly or implicitly) use an Everettian or “Many-Worlds” approach to quantum theory. The Everettian approach denies that collapse ever occurs, so in that interpretation, all the laws are time-reversible. This assumption underlies the usual negative conclusion (exemplified above by Carroll’s statement) about the existence of any physical law that could account for the irreversibility we see around us.

This evolution toward Everettianism has occurred for several reasons, probably chief among them the ad hoc nature of many of the specific models of collapse, which make changes to quantum theory. Alternatively, many physicists assume that collapse is just something that happens in our minds–that it corresponds to updating our own subjective information about the world as we advance through spacetime. But in that case, it is assumed that we somehow ‘move through’ the world, following an unexplained arrow of time. Clearly, if we are going to just help ourselves to an arrow of time in our ‘movement through the world,’ we are not explaining it.

Carroll’s assumption that the arrow of time has to be ‘built on top’ of laws that lack such an arrow involves appealing to notions of entropy increase–roughly, the idea that in a closed system, disorder always increases over time. But entropy increase, which is a time-asymmetric law, cannot itself be obtained from the allegedly underlying time-symmetric laws; that’s part of the ‘mystery’ of time’s arrow. Moreover, trying to get time’s arrow from entropy considerations alone involves identifying the future solely with the direction of decreasing order in systems. This identification rules out identifying a future direction with processes of increasing order, which are commonplace (e.g., plant growth). Allowing exceptions for ordering processes associated with living things, on the basis that they are open systems, doesn’t take into account that the universe as a whole is a closed system, and that such order-increasing processes take place, alongside other order-decreasing processes, within that closed system. Since entropy both increases and decreases all around us, and yet our experience is always future-directed, appealing to entropy increase is inadequate to the task of explaining time’s arrow.

Thus, the problem will not be properly solved unless physical laws really do have some irreversible component. But maybe they do: maybe we should not be neglecting collapse. And there is a model of collapse that does not involve changing the basic theory–it’s the Transactional Interpretation (TI). The transactional process corresponds precisely to von Neumann’s intrinsically irreversible ‘measurement’ process. According to TI, ‘measurement’ is not about the consciousness of an observer (a very common misconception)–rather, it’s a real, physical process. That process is defined non-arbitrarily here, here, and here.

Thus, we gain an irreversible step at a fundamental level of physical systems. For example, take a closed box of gas. With only time-symmetric (reversible) laws, it’s actually impossible to explain why entropy does not decrease in that box of gas. Appealing to ‘random thermal interactions’ doesn’t help, because the sort of ‘randomness’ one needs is time-asymmetric (this is explained very nicely by Price).

With collapse included, as in the transactional process, the thermal interactions between the gas molecules give rise to true randomness. Each such interaction consists of one or more photons being delivered from one gas molecule to another, in an irreversible process (the technical term is ‘non-unitary’). One molecule is the emitter and the other is the absorber, and the process of delivery of the photon(s) establishes the future direction. (For details on this account of spacetime emergence, see this paper.)
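The contrast between reversible dynamics and collapse-like randomness can be shown with a toy numerical sketch (my own illustration, not part of TI itself; the four-state “gas” and the transition probabilities are invented). Under a reversible, permutation-style evolution the entropy of a coarse-grained distribution never changes, while a genuinely stochastic, irreversible map drives it to the maximum:

```python
import numpy as np

# Shannon entropy (in bits) of a probability distribution.
def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A low-entropy starting distribution over 4 coarse-grained states.
p0 = np.array([0.97, 0.01, 0.01, 0.01])

# Reversible (deterministic, permutation) dynamics just relabels
# the states, so entropy can never change.
perm = np.eye(4)[[1, 2, 3, 0]]

# A genuinely stochastic map (each column a probability distribution),
# standing in for irreversible, collapse-like interactions.
stoch = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.7, 0.1, 0.1],
                  [0.1, 0.1, 0.7, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])

p_rev, p_irr = p0.copy(), p0.copy()
for _ in range(50):
    p_rev = perm @ p_rev    # entropy: constant forever
    p_irr = stoch @ p_irr   # entropy: climbs toward the 2-bit maximum

print(shannon_entropy(p_rev))  # unchanged from the initial entropy
print(shannon_entropy(p_irr))  # approaches 2.0 bits (uniform)
```

The reversible map shuffles probabilities without ever spreading them, so no amount of such dynamics alone yields the monotonic entropy growth we observe; the stochastic map, by contrast, is irreversible by construction.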

Interestingly, this picture also disagrees with the common assumption that ‘even in empty space, time and space still exist.’ (S. Carroll, same reference) However, Einstein himself also disagreed with that common assumption: he stated that ‘There is no such thing as an empty space, i.e. a space without field. Space-time does not claim existence on its own, but only as a structural quality of the field.’ (Einstein, Relativity and the Problem of Space.) The transactional account of spacetime emergence is completely consistent with Einstein’s observation. In that account, transactions establish, through exchanges of mass/energy, the structure that we call ‘spacetime.’ Without those transfers of mass/energy, there is no spacetime, and therefore no arrow of time.

Science and Spirit: A Troublemaker In The Cave (Part 2)

Recall that in Part I of this post, I discussed an option (ii) in which scientists make a non-scientific, metaphysical choice when they presume that scientific theories are only about the world of appearance (as opposed to a realm that may not be observable). Now, that is certainly a choice a scientist can make–but for it to be an intellectually responsible choice, the chooser must recognize that it is a metaphysical choice that goes well beyond anything mandated by the discipline of empirical science. This is because empirical science simply has nothing to say about whether there is any unobservable realm, and whether or not its theories refer to such a realm.

I’ll begin this second installment with a bad example of choice (ii) being made irresponsibly:

“The claim that the fields of modern physics have anything to do with the “field of consciousness” is false. The notion that what physicists call “the vacuum state” has anything to do with consciousness is nonsense. The claim that large numbers of people meditating helps reduce crime and war by creating a unified field of consciousness is foolishness of a high order. The presentation of the ideas of modern physics side by side, and apparently supportive of, the ideas of the Maharishi about pure consciousness can only be intended to deceive those who might not know any better. (Pagels).”

Not only does Pagels make a metaphysical (extra-scientific) assumption denying any possible understanding of physical theory as describing anything beyond the world of appearance, he even goes so far as to impute nefarious motives to those who propose such an understanding. Let us return to Plato’s Cave to illustrate the problem with Pagels’ assertion. (The following is my own parable, of course.)


A Troublemaker in the Cave

Cast of Characters

Bohrus
Suacus
Qbus
Pageles
Socrates

Late evening, The Cave. The five prisoners are chatting.

Bohrus: Hey guys, I’ve come up with a theory that successfully accounts for what is happening on this wall with these moving shapes.

Suacus: Wow, great! Well done, Bohrus!

Bohrus: Hold on a minute, I’m not sure it’s good news. There are some strange discontinuities going on in these phenomena, at a level we can’t see clearly, and that doesn’t make sense to me. Also, the tiny shapes are not compatible with the big, spread-out shapes–we can never see them both at the same time.

Qbus: But when you apply your theory, it successfully describes our perceptions, doesn’t it?

Bohrus: Yes. I suppose we could just say that the shape world is Complementary.

Qbus: Great term, it has a nice ring to it. What do you think, Socrates?

Socrates: Well, looking at the mathematical structure of this theory, it has more than the 2 dimensions of this wall we’re looking at.

Suacus: So? It works, doesn’t it? Socrates, you are so annoying–always looking for trouble. Always stirring up the pot.

Socrates: Sorry. (Fiddles with chains.)

Same room in the Cave. The next morning.

Pageles: Hey, what were you guys chatting about last night? I dozed off early.

Suacus: You missed something great. Bohrus here developed a theory that successfully describes the behavior of these shapes on the wall. Now, if we want to predict what these shapes are doing, all we have to do is apply his theory. It’s awesome.

Bohrus: Aw Suacus, you’re too kind. I’m still a bit troubled by the perplexities of the theory. The formalism seems to involve invisible entities called Shuanta that seem to transcend the frame of this wall and give rise to mutually incompatible phenomena.

Qbus: But didn’t we decide to call that Complementarity? And isn’t your theory just about our perceptions? Isn’t that the whole lesson of your new theory, in fact? We should enlighten our colleagues with this new insight, and give it a name–maybe a combination of a great logician and a modern art movement, all wrapped up in one cool-sounding term.

Bohrus: Yes of course, you’re right. There is no Shuantum world. There is only my abstract Shuantum-Mechanical description.

Suacus: There ya go. It’s all good.

Pageles: Wow, I missed a lot. You guys are awesome. Count me in.

Bohrus: (Noticing dangling chains next to him) Hey, where’s Socrates?

Later that afternoon.

Socrates (rushing in breathlessly): Hey Bohrus! You know that theory you just came up with? Well, I just saw the entities it’s describing! To see them, you have to go outside the Cave! There’s this bright light, I think it’s what the Mystics refer to as “Consciousness”–and all these huge, multidimensional objects that we can’t see in here. When the light shines on them, we see their shadows, but only one side at a time! That’s why you’re getting this ‘Complementarity’ thing!

Suacus: Goodness, Socrates! Sit down and get comfortable, put your chains back on, and stop talking nonsense.

Bohrus: We just decided there is no Shuantum World, so whatever you think you saw, you’re delusional.

Qbus: Yes, you’ve been talking to too many of those religious charlatans, and they’re messing with your head.

Pageles: We’ve all heard those stories about a strange invisible realm of higher beings and/or ‘consciousness’ that has somehow created all the phenomena we see here. But those are just superstitions and myths created by primitive, unscientific people. The claim that our scientific shadow-studies have anything to do with some domain outside this Cave is false. The notion that what we scientists call “shuanta” has anything to do with anything outside this wall is nonsense. The presentation of Bohrus’ theory side by side, and apparently supportive of, the ideas of the Mystics about a realm beyond the Cave can only be intended to deceive those who might not know any better. Now sit down and shut up, before we feed you hemlock.


Now, of course, Socrates might be wrong. He might be deluded. He may simply have drunk one too many Cave-Cocktails. But the point is that his scientific colleagues cannot use empirical science to make that judgment, one way or the other. And when they try to do that, they misuse their scientific authority by extending it beyond its legitimate purview.













Science and Spirit: Two Sides of the Coin of Understanding (Part I)

  1. The Boundary: Scientific vs. Philosophical or Spiritual Inquiry

It might be said that religion begins where science ends. And it may be turning out that quantum theory has indeed taken us to that point. But first of all, let’s take a quick look at what science is. Science is fundamentally about the observable world — it’s about what we can collectively observe and measure, and about which we have some basis for supposing that we’re all looking at the same thing and seeing it in essentially the same way. Thus, it is fundamentally based on a clear subject-object distinction, where in general many inquiring subjects (theorists and experimenters) are analyzing and measuring the same object. However, as we move to smaller and smaller scales of observation, we find that this is not so easy or straightforward to do; and this is because we run into a fundamental problem with our usual assumption that we can separate our modes of detection (which is required for any observation) from what it is we are trying to observe.

At the quantum level, ‘objects’ behave in what is called a ‘contextual’ manner. That is, they exhibit different kinds of behavior based on how we choose to measure them. This is the well-known ‘wave-particle duality’, in which a quantum object such as an electron will exhibit wavelike interference in an experiment designed to measure its wavelike (extended, non-localized) properties, but it will exhibit particle-like behavior (such as a spot on a detection screen) in an experiment designed to localize it. This tells us that the same underlying reality (electron as a quantum system) can give rise to very different phenomena, and that we can never ‘pin down’ that underlying reality to one unambiguous phenomenon. This is not just a pragmatic difficulty: the theoretical description of the underlying reality—the so-called ‘wave function’ that is the solution to the Schrödinger equation of quantum theory—has a mathematical property that literally says that the electron is neither a wave nor a particle, but potentially both.

“Potentially” is the operative word here. Werner Heisenberg, a key pioneer of quantum theory, had this to say about quantum objects described by this ‘wave function’ or ‘probability wave’: “The probability wave …was a quantitative version of the old concept of “potentia” in Aristotelian philosophy. It introduced something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.” [1]

He also put it this way:

“Atoms and the elementary particles themselves… form a world of potentialities or possibilities rather than things of the facts.” [2]

By “things of the facts,” Heisenberg meant the empirically observable world–the world of appearance. Thus, he understood that quantum theory was pointing to something beyond the world of appearance, and in order to do that, he was allowing for the possibility that reality consists of more than the world of appearance. In doing so, he was of course venturing beyond empirical science and into philosophical territory. And of course, beyond the purely philosophical lies the domain of spiritual inquiry.

  2. Appearance vs. Reality

In the West, the ancient Greek philosopher Plato already had useful insights into this distinction between the observable and the unobservable levels of reality. He said that reality consisted of two different levels: (i) the level of appearance and (ii) the level of fundamental reality–the underlying, hidden reality, which he conceived of as a realm of “Perfect Forms.” His famous allegory of the Cave was designed to illustrate this distinction. In this story, prisoners are chained deep in a cave, facing a wall on which shadows are cast. The wall is all that they can see, and the phenomena on the wall seem to them to be their entire reality. However, unbeknownst to the prisoners, just outside the mouth of the cave there is a bright light, and people are coming and going between the light and the prisoners, carrying various objects whose shadows are cast on the wall. For Plato, the exterior of the cave, the objects being carried by the people, and the bright light comprise the hidden world of perfect forms (the fundamental reality), while the wall upon which the prisoners gaze is our ordinary world of experience.

We encounter the same contrast between a fundamental, unmanifest reality and an emanated, manifest world of appearance in the Vedic concept of ‘Maya’. While this term has been used in various ways throughout the Eastern world, one of its chief uses is to denote the world of appearance as distinct from–and even as obscuring–the underlying, hidden reality. As mythologist Wendy Doniger observes, “to say that the universe is an illusion (māyā) is not to say that it is unreal; it is to say, instead, that it is not what it seems to be, that it is something constantly being made. Māyā not only deceives people about the things they think they know; more basically, it limits their knowledge.”[3] This is very similar to Plato’s allegorical warning that we are deceived when we take the phenomenal ‘shadow play’ as the final story about reality.

The 18th-century German philosopher Immanuel Kant also distinguished two fundamental aspects of objects: (1) the object of appearance and (2) the ‘thing-in-itself’, apart from its appearances, which he stated was unknowable. Kant used the Greek term ‘noumenon’ for this second unseen aspect of an object, which translates roughly as ‘object of the mind’. Kant also proposed that there are ‘categories of experience’ that make knowledge of the world of appearance possible. But, unlike Plato and the Eastern philosophers and theologians, Kant assumed that “knowledge” was only about the world of appearance–he held that the world of noumena was unknowable. Kant’s ‘categories of experience’ consisted of concepts like space, time, and causality. But we should take note that Kant proclaimed that Euclidean space was an ‘a priori’ category of understanding, meaning a necessary concept behind any knowable phenomenon—an assertion which has since been decisively falsified by relativity’s non-Euclidean accounts of spacetime. This error illustrates the danger of making categorical assumptions about what principles are required (or conversely, are to be excluded) for gaining knowledge about reality, whether at the level of appearance or otherwise.

Twentieth-century philosopher Bertrand Russell also had some interesting things to say about the distinction between appearance and reality. In the first chapter of his book, The Problems of Philosophy, he takes us on an exploration of an ordinary table, which leads to an unexpected puzzle. He notes that the table appears differently depending on the conditions under which we observe it, and even to different people who may have different visual capabilities. Finally, he says:

“the real shape [of the table] is not what we see; it is something inferred from what we see. And what we see is constantly changing in shape as we move about the room so that here again the senses seem not to give us the truth about the table itself, but only about the appearance of the table. Similar difficulties arise when we consider the sense of touch. It is true that the table always gives us a sensation of hardness, and we feel that it resists pressure. But the sensation we obtain depends upon how hard we press the table, and also upon what part of the body we press with. Thus the various sensations due to various pressures or various parts of the body cannot be supposed to reveal directly any definite property of the table, but at most to be signs of some property which perhaps causes all the sensations, which is not actually apparent in any of them. … it becomes evident that the real table, if there is one, is not the same as what we immediately experience by sight or touch or hearing. The real table, if there is one, is not immediately known to us at all but must be an inference from what is immediately known. Thus, two very difficult questions at once arise: (i) is there a real table at all? (ii) If so, what sort of object can it be?”[4]

Recall that this was the same contrast that Plato highlighted in his allegory of The Cave. He noted that the world of appearance is quite different from the real world or the underlying reality–just as, according to the concept of maya, reality is not what it appears to be. Bertrand Russell laid out quite effectively how hard it is to actually know anything about the underlying reality: something as trivial and obvious as a table has been analyzed to the point where it seems to have almost disappeared; we are having trouble getting at what the real table is, or even whether there really is one at all.

This is a notorious problem in philosophy, and there are various approaches to solving this problem and perhaps getting around it. There are a great many modern philosophers who feel that they may have resolved this problem by revising the whole way that we approach the question of how we know about the “real table” that we think is out there. But the bottom line is that we have to take into account that what is directly accessible to us, especially as scientists, is the world of appearance. Western empirical science is first and foremost about the world of appearance by definition, because it’s about what we can observe. In that respect, it must be limited to the ‘Cave.’ It is very hard to justify, within empirical science, saying anything at all about the reality that underlies the appearances.

On the other hand, it is Western science that came up with quantum theory, which ironically seems to point to a domain outside the Cave, in that the mathematical properties of the theory dictate that what it describes is not something that can be contained within the Cave! This is the fundamental source of the controversy over the interpretation of quantum theory–it is why many practitioners of quantum theory wish to deny that the theory actually describes anything real. To do so would be to admit that ‘reality’ must go beyond the Cave-world of appearance.

Thus, while science as a system of knowledge is very rigorous and capable of providing us with well corroborated theories, when we want to talk about those theories as providing facts, we need to take into account that what we take as facts have to be limited to the world of appearance. When we consider using science to talk about an underlying reality, we enter into some very difficult and puzzling issues –and much attendant controversy–because we are faced with a choice: either (i) to acknowledge that science cannot answer all our questions about reality, or (ii) if we want to insist that it does, to make a philosophical (as opposed to scientific!) choice that all there is to reality is the world of appearance. The reason that (ii) cannot be a purely scientific choice is that empirical science, being limited to the world of appearance, cannot itself determine whether or not there is an aspect of reality beyond the world of appearance! If we opt for (i), as scientists and as philosophers of science, we celebrate the power and utility of science, but we acknowledge its limitations as well. In the next installment, we’ll look more closely at the pitfalls of opting for (ii), especially without being aware that doing so goes well beyond science.

[1] Heisenberg, W. (2007). Physics and Philosophy. New York: HarperCollins.

[2] Ibid.

[3] Wendy Doniger O’Flaherty (1986). Dreams, Illusion, and Other Realities. University of Chicago Press, p. 119.

[4] Russell, B. (1912). The Problems of Philosophy. Public Domain.

[5] As quoted in Jammer, M. (1993). Concepts of Space: the History of Theories of Space in Physics. New York: Dover Books. p. 189.




How light ‘smells’ all its possible paths from source to destination

“Now in the further development of science, we want more than just a formula. First we have an observation, then we have numbers that we measure, then we have a law which summarizes all the numbers. But the real glory of science is that we can find a way of thinking such that the law is evident.” -Richard Feynman

Quantum theory tells us that light is somehow both a wave and a particle. It behaves like a particle pursuing an ordinary ray-like path in some situations; but in others, its wave nature cannot be ignored. In this post, we revisit Feynman’s delightful account of the principle of least action, which can help to explain the propagation of light under all these changing circumstances. He starts by considering the principle of least time (a simplified form of the principle of least action), about which he says:

“The idea of causality, that it goes from one point to another, and another, and so on, is easy to understand. But the principle of least time is a completely different philosophical principle about the way nature works. Instead of saying it is a causal thing, that when we do one thing, something else happens, and so on, it says this: we set up the situation, and light decides which is the shortest time, or the extreme one, and chooses that path. But what does it do, how does it find out? Does it smell the nearby paths, and check them against each other? The answer is, yes, it does, in a way.”

Feynman liked to picture light as always being a particle, and came up with a way to explain its wavelike behavior based on the particle’s ability to explore all possible paths in spacetime. This is what he meant by his metaphor of light ‘smelling’ its way from a source to a final destination. He thought of a particle of light starting out from its source and exploring all the infinite possible routes to get to a final destination, judging the best route by the way the neighboring routes compare with it in terms of the time they would take. If those neighboring routes take a very different time from the route being considered, that route gets rejected; but if the neighboring routes take about the same time, that route is chosen.
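Feynman's route-comparison story can be caricatured in a short numerical sketch (a toy model only; the geometry, wavelength, and bands of paths below are arbitrary choices for illustration). Each path contributes a unit phasor whose phase is set by the path length; paths near the straight line have nearly equal lengths and add up coherently, while detour paths have rapidly varying lengths and largely cancel:

```python
import numpy as np

# Toy model of light "smelling" nearby paths. A photon goes from source S
# to detector D via an intermediate point x on a vertical line halfway
# between them. Each path contributes a phasor exp(i * 2*pi * L(x) / lambda);
# we average the phasors over a narrow band of paths.

wavelength = 0.5     # arbitrary units
half_dist = 50.0     # horizontal distance from S (or D) to the middle line

def path_length(x):
    # S = (-half_dist, 0), D = (+half_dist, 0), path passes through (0, x)
    return 2.0 * np.hypot(half_dist, x)

def band_amplitude(x_lo, x_hi, n=200_000):
    # Magnitude of the average phasor contributed by paths in [x_lo, x_hi]
    x = np.linspace(x_lo, x_hi, n)
    phasors = np.exp(2j * np.pi * path_length(x) / wavelength)
    return abs(phasors.mean())

near = band_amplitude(-1.0, 1.0)   # paths close to the straight line
far = band_amplitude(9.0, 11.0)    # detour paths, same band width

print(f"near-straight band of paths: {near:.3f}")
print(f"detour band of paths:        {far:.3f}")
# The near-straight phasors point in almost the same direction and add up;
# the detour phasors wind through full cycles and mostly cancel.
```

This is just the stationary-phase behavior behind "the neighboring routes take about the same time": only the band of paths whose lengths barely differ survives the comparison.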

Clearly this is a complicated and sophisticated process! If we think of light as a little particle doing this route comparison for every possible route, we might wonder how light ever manages to get anywhere!


A photon examining and comparing all possible routes with one another.

It turns out that if we just stick to the wave picture, we can see quite readily how the behavior of light emerges naturally. But one might ask, what happened to the particle nature of light? We’ll see that it emerges after the wave has done its exploratory work.

Below is a slightly modified version of Feynman’s picture of a wave of light encountering an opening in a screen (notice that even Feynman, who thought of light as a particle, had to include its wave nature!). Two different possible sizes of the opening are shown; the dashed lines show the initially large opening closing down to just a tiny pinhole. For the wider opening, a lot of the initial wave gets through, and is relatively undisturbed by its passage through the hole, so it continues to propagate in the original direction (shown in blue wavefronts and the blue arrow). Most of the light is received at point A in a straight, ‘ray-like’ path from the source.

Feynman Figure 1

However, for the tiny opening, the wave is greatly disturbed by its passage, and spreads out as it exits from the hole (this is called ‘diffraction’). This situation is shown in red. We see this sort of thing all the time with ordinary waves, such as water waves. For this case, the light has a much greater chance of ending up at B or C, which was very unlikely with the wider opening.
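The trade-off between opening size and spreading is the standard single-slit diffraction result (textbook wave optics, not specific to any interpretation): the first dark fringe sits at sin θ = λ/a, so the central beam spreads over roughly that half-angle. The wavelength and slit widths below are illustrative choices:

```python
import numpy as np

# First diffraction minimum for a slit of width a: sin(theta) = lambda / a.
# A wide opening barely spreads the beam; a near-pinhole spreads it widely.

wavelength = 500e-9  # 500 nm, green light (illustrative)

for slit_width in (100e-6, 1e-6):  # a wide opening vs. a near-pinhole
    half_angle = np.degrees(np.arcsin(wavelength / slit_width))
    print(f"slit {slit_width * 1e6:6.1f} um -> "
          f"central-beam half-angle {half_angle:5.2f} deg")
```

With the 100-micron opening the light stays in an essentially ray-like beam (a fraction of a degree), while the 1-micron pinhole fans the wave out over tens of degrees, which is why points B and C become likely destinations.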

In the Transactional Interpretation (TI), all quantum objects such as photons are fundamentally wavelike. They do all their basic ‘exploring’ as waves, and it’s only in the very final stage that a particle-like behavior emerges. In TI, a photon begins life as an ‘offer wave’ (OW for short) emanating from an emitter. But at subtler levels (the relativistic level), it turns out that an OW is only emitted if it also gets responses — ‘confirmation waves’ from systems (such as atoms) that are eligible to absorb its energy. The interaction between an emitting atom and one or more (usually many) absorbing atoms is a kind of mutual negotiation, and both are necessary to get the process started. Once the process starts, the OW still has to decide which of many responding atoms it will choose for its energy deposit. All of this goes on in the background, beneath or beyond the spacetime theater. It’s akin to actors taking their places before a scene is filmed–only the final filmed scene is the spacetime process. But in this case, many actors are called but in the end only two are chosen: the emitter and the ‘winning’ absorber. Then the filming proceeds — and that is the actual process that occurs in spacetime. The selection of one absorber and the delivery of a chunk of energy is the point where the discrete, particle-like aspect enters. The delivered chunk of energy is the “particle” or quantum.

All stages except the final choosing of the winning absorber are carried out with the wavelike aspect–this is the De Broglie wave, named after the French physicist Louis de Broglie, who first proposed that not only light, but material particles like electrons have a wavelike aspect as well.

So in the TI picture, we don’t have a photon of light having to examine all possible paths. We just have a wave undergoing natural wavelike interference. It is that interference that becomes part of the negotiation between the emitter and all its potential absorbers. Some potentially absorbing atoms may not respond at all, if the offered wave undergoes completely destructive interference before it reaches them. On the other hand, the wave can constructively interfere and provide a large OW component that elicits a correspondingly large CW response from potential absorbers that it reaches. Feynman’s ‘sum over paths’ boils down to a description of the behavior of the interfering OW. The particle of light–the photon–emerges only at the final stage, when one of the responding absorbers ‘wins’ the contest and absorbs a quantum of electromagnetic energy–a photon.

Decoherence in the Everettian Picture: Why It Fails


[Note: this is an adapted excerpt from the introductory chapter to a collected volume, Quantum Structural Studies, forthcoming from World Scientific (eds. R.E. Kastner, J. Jeknic-Dugic, and G. Jaroszkiewicz).]

The idea that unitary-only dynamics can lead naturally to preferred observables, such that decoherence suffices to explain the emergence of classical phenomena (e.g., Zurek 2003), has been shown in the peer-reviewed literature to be problematic. However, claims continue to be made that this approach, also known as ‘Quantum Darwinism,’ is the correct way to understand classical emergence.

The problem of basis ambiguity in the unitary-only theory is laid out particularly clearly by Bub, Clifton and Monton (1998), and the difficulty highlighted by them is not resolved through decoherence arguments alone. This is because decoherence is relational rather than absolute (Dugic and Jeknic-Dugic 2012; Zanardi et al 2004). In order to get off the ground with a particular structure, “Quantum Darwinism”-type arguments depend on assuming special initial conditions of separable, localizable degrees of freedom, along with suitable interaction Hamiltonians, which amount to “seeds” of classicality from the outset.
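As a reference point for the critique that follows, the decoherence mechanism itself can be sketched in a toy model. Note that the sketch must presuppose a particular system/environment split and a particular "recording" interaction before it can even be written down; that presupposition is exactly the point at issue. The angle and environment sizes are arbitrary illustrative choices:

```python
import numpy as np

# Toy decoherence model: a qubit starts in (|0> + |1>)/sqrt(2), and each of
# N environment qubits partially "records" it, taking state e0 if the qubit
# is |0> and e1 if it is |1>. After tracing out the environment, the qubit's
# coherence (the off-diagonal term of its reduced density matrix) is
# 0.5 * <e1|e0>**N, which decays as more of the environment gets correlated.

theta = 0.3                                    # per-qubit recording angle
e0 = np.array([1.0, 0.0])
e1 = np.array([np.cos(theta), np.sin(theta)])  # overlaps e0 only partially

def coherence(n_env):
    # |psi> = (|0>|e0...e0> + |1>|e1...e1>) / sqrt(2);
    # tracing out n_env environment qubits leaves 0.5 * <e1|e0>**n_env
    return 0.5 * abs(np.dot(e0, e1)) ** n_env

for n in (0, 5, 20, 80):
    print(f"N = {n:3d} environment qubits -> coherence {coherence(n):.4f}")
```

The suppression of the off-diagonal term is real enough given the assumed split; the argument in the text is that the split itself (premises 2 and 3 below) is the hidden classical input.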

Under these circumstances, the purported explanation of classical emergence becomes
circular (Kastner, 2014a, 2015). But circularity is not the only problem with the decoherence-based attempt to explain the emergence of classicality. In what follows we examine the logical structure of the argument and find a further, serious flaw: affirming the consequent.

2. The logical flaws of “Quantum Darwinism”

The structure of the Quantum Darwinism argument is as follows:
1. the quantum dynamics is unitary-only, and
2. the universe has initially separable, localizable degrees of freedom such as distinguishable atoms, and
3. those degrees of freedom interact via Hamiltonians that do not re-entangle them; therefore,
4. classicality emerges.

For decoherence to account for the emergence of classicality under the assumption of unitary-only (U-O) evolution (approximately and only in a “FAPP” sense, see below), all three premises must hold. However, classicality is implicitly contained in 2 and 3 through the partitioning of the universal degrees of freedom into separable, localized substructures interacting via Hamiltonians that do not re-entangle them, so (given U-O) one has to put in classicality to get classicality out. Premises 2 and 3 are special initial conditions on the early universe that may not hold–certainly they are not the most general case for an initially quantum universe. Yet it seems common for researchers assuming U-O to assert that 2 and 3 also must hold without question. This actually amounts to the fallacy of affirming the consequent, as follows: one observes that we have an apparently classical world (affirm 4), and then one asserts that 1, 2 and 3 therefore must hold.

The insistence on 2 appears, for example, in Wallace’s invocation of “additional structure on the Hilbert Space” as ostensibly part of the basic formalism (Wallace 2012, p. 14-15). Such additional structure–preferred sets of basis vectors and/or a particular decomposition of the Hilbert space–is imposed when quantum theory is applied to specific situations in the laboratory. However, what we observe in the laboratory is the already-emergent classical world, in which classical physics describes our macroscopic measuring instruments and quantum physics is applied only to prepared quantum systems that are not already entangled with other (environmental) degrees of freedom.

If the task is to explain how we got to this empirical situation from an initially quantum-only universe, then clearly we cannot assume what we are trying to explain; i.e., that the universe began with quasi-localized quantum systems distinguishable from each other and their environment, as it appears to us today. Yet Wallace includes this auxiliary condition imposing structural separability under a section entitled “The Bare Formalism” (by which he means U-O), despite noting that we assign the relevant Hilbert space structures “in practice” to empirical laboratory situations. The inclusion of this sort of auxiliary condition in the “bare formalism” cannot be legitimate, since such imposed structures are part of the application of the theory to a particular empirical situation. They thus constitute contingent information, and are therefore not aspects of the “bare formalism,” any more than, for example, field boundary conditions are part of the bare theory of electromagnetism.

These separability conditions are auxiliary hypotheses to which we cannot simply help ourselves, especially since the most general state of an early quantum universe is not one that comes with preferred basis vectors and/or distinguishable degrees of freedom. Thus, the addition of this condition amounts to asserting (2), and becomes (at best) circular reasoning, or (at worst) outright affirming of the consequent, illicitly propping up the claim that quasi-classical world “branches” naturally appear in an Everettian (unitary-only) picture.

Now, to be charitable: perhaps unitary-only theorists are tacitly assuming that (1) is not subject to question; i.e. they  take it as a “given.” If one presumes the truth of (1) in this way, then (2) and (3) seem required in order to arrive at our current apparently classical world. If (1) were really known to be true, the logical structure of the argument would be: “2 and 3 if and only if 4”. So, rather than reject the argument based on its circularity, such researchers seem to assume that the consequent is evidence for the truth of premises 2 and 3 (i.e., 2 and 3 together are seen as the only way that we could have arrived at the classical macro-phenomena we now experience). The possibility that the dynamics may not be wholly unitary–the falsity of the unitary-only premise (1)–does not seem to be considered. However, the need to use a circular argument in order to preserve the claims of Quantum Darwinism should prudently be taken as an indication that the U-O assumption (1) may well be false, and that non-unitary collapse is worth exploring for a non-circular account of how classically well-defined structures arise in a world described fundamentally by quantum theory. (Such an account is proposed in Kastner (2012) and (2014b). In that account (‘possibilist transactional interpretation’ or PTI), decoherence can of course occur under circumstances discussed in Zurek (2003), as a deductive consequence of quantum theory under certain specified conditions; but decoherence alone is neither necessary nor sufficient as an explanation for everyday classical phenomena such as the observed determinacy of macroscopic objects. Decoherence is not necessary because classical emergence can arise through a specific collapse process in PTI, and decoherence is not sufficient because it does not solve the measurement problem (cf. Bub 1997, p. 231).)

3. Conclusion.

Everettian unitary-only quantum theory seems to have become so “mainstream” that in many quarters it now appears to be considered the “standard” theory, replacing the theory consisting of Schrodinger unitary evolution plus von Neumann non-unitary measurement transition. Yet the only way to arrive at the world of classical phenomena we experience in the unitary-only theory is to assume classicality at the outset–and even this is only approximate and “FAPP,” since it fails to solve the measurement problem, as noted in Bub 1997, Section 8.2. The “decoherence” process as invoked in service of “Quantum Darwinism” is at best circular and at worst amounts to the logical fallacy of affirming the consequent. The alleged utility of decoherence is greatly overstated and illusory. It is time to consider the possibility that Everett might have been wrong.


Bub J., Clifton R., Monton B., 1998, The Bare Theory Has No Clothes. In Quantum Measurement: Beyond Paradox, eds. Healey R. A., Hellman G., Minnesota Studies in the Philosophy of Science 17, 32-51.

Dugić M., Jeknić-Dugić J., 2012, Parallel decoherence in composite quantum systems, Pramana 79, 199.

Dugić M., Arsenijević M., Jeknić-Dugić J., 2013, Quantum correlations relativity, Sci. China Phys., Mech. Astron. 56, 732.

Jeknić-Dugić J., Dugić M., Francom A., 2014, Quantum Structures of a Model-Universe: Questioning the Everett Interpretation of Quantum Mechanics, Int. J. Theor. Phys. 53, 169.

Kastner R. E., 2012, The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility. Cambridge: Cambridge University Press.

Kastner R. E., 2014a, Einselection of pointer observables: The new H-theorem?, Stud. Hist. Phil. Mod. Phys. 48, 56.

Kastner R. E., 2014b, The Emergence of Spacetime: Transactions and Causal Sets, forthcoming in Beyond Peaceful Coexistence, ed. I. Licata.

Kastner R. E., 2015, Classical selection and quantum Darwinism, Phys. Today 68, 8.

Wallace D., 2012, The Emergent Multiverse: Quantum Theory according to the Everett Interpretation. Oxford University Press.

Zanardi P., Lidar D. A., Lloyd S., 2004, Quantum Tensor Product Structures are Observable Induced, Phys. Rev. Lett. 92.

Zurek W. H., 2003, Decoherence, einselection, and the quantum origins of the classical, Rev. Mod. Phys. 75, 715.

The Quantum and the “Preternatural”

I was recently reminded of the somewhat archaic term ‘preternatural’ while watching the classic 1963 horror flick “The Haunting.” In this amazing film, a scientist interested in occult matters (including, especially, ghosts) decides to investigate Hill House, a nearly century-old mansion notorious for a string of untimely deaths and widely considered to be haunted. He and several other hand-picked participants take up residence in the house and become subject to various terrifying experiences (I won’t include any spoilers here).

The remarkable feature of the film, from my standpoint as a philosopher of science, was the sophistication of the film’s treatment of scientific inquiry through the persona of the ‘ghost-hunting’ scientist. In his attempts to assuage their fears (on the one hand) or dislodge their skepticism (on the other), he engages his fellow residents/subjects in conversation about his goals and methods. He tells them that he is convinced that there is an understandable explanation behind the phenomena, even though that explanation might involve forces or entities previously unknown. These sorts of phenomena he refers to as preternatural. He notes that in ancient times, magnetic phenomena were viewed suspiciously in this way: they were either feared or denied, since no “natural” explanation was known for them. Yet eventually, science was able to account for magnetic phenomena in terms of the notion of a force that acts according to specific laws, and now it is viewed as perfectly “natural.” So the preternatural, in this context, means something at first disturbing and incomprehensible that nevertheless may become familiar and comprehensible once we better understand it through an expanded conceptual awareness. In that sense, the preternatural is distinguished from the supernatural (which means completely outside the domain of natural scientific explanation).

We have been face to face with a very similar situation ever since the discovery of quantum phenomena. Einstein famously called the nonlocal features of quantum entanglement “spooky action at a distance.”  Just as ancient people faced with magnetic phenomena often denied them because they had no “natural” explanation, many researchers want to deny that such nonlocal phenomena reflect anything that really exists. This is because such phenomena don’t have what many researchers can accept as a natural explanation, where what is currently viewed as “natural” is referred to as “local realism.”

Local realism boils down to the idea that all influences are conveyed from one well-localized object to another on a well-defined spacetime trajectory (like a baseball going from the pitcher to the catcher). In fact, progress was made in explaining magnetic (and also electric) phenomena when physicists could explain those in terms of what is called a ‘field of force’. This classical notion of a field of force is a ‘local realistic’ one, in that it accounts for the motions of objects under the influence of these forces in a local, spacetime-connected way: the force is carried by a kind of ‘bucket brigade’ through space and time at no more than the speed of light.

However, it is now well known (through Bell’s theorem) that quantum influences cannot be explained through this bucket brigade picture of classical fields. The influence due to a measurement on one member of an entangled pair of quanta is communicated apparently instantaneously to the other, no matter how far away it is.
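The Bell-theorem claim can be made concrete with the standard CHSH quantity (textbook quantum mechanics, not specific to any interpretation; the measurement angles below are the usual optimal choices):

```python
import numpy as np

# CHSH check for the singlet (maximally entangled) state: the correlation
# between spin measurements along directions a and b is E(a, b) = -cos(a - b).
# Any local-realistic ("bucket brigade") model obeys |S| <= 2; the quantum
# prediction reaches 2*sqrt(2) at the angles chosen below.

def E(a, b):
    return -np.cos(a - b)  # singlet-state correlation

a, a2 = 0.0, np.pi / 2             # Alice's two measurement settings
b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two measurement settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"CHSH |S| = {abs(S):.4f}  (local-realism bound: 2)")
```

Since |S| exceeds 2, no assignment of pre-existing local values carried through spacetime can reproduce the correlations; that is the precise sense in which the ‘bucket brigade’ picture fails.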

Many researchers, faced with these results, throw up their hands and say that there can be no natural explanation for the phenomena in terms of real things; that no realistic explanation is possible. Since no self-respecting scientist will dabble in the supernatural, such researchers turn to antirealism: they deny that there is anything physically real beneath these phenomena. In doing so, they assume that ‘natural’ or ‘realistic’ can only mean a ‘bucket brigade’ spacetime process, as described above for classical fields. But perhaps there is an alternative: recognize that these phenomena need not be viewed suspiciously as supernatural, but that they are merely preternatural; and that in order to understand them, we must expand our viewpoint concerning what counts as ‘natural’.

This expansion consists in the idea that there may be more to reality than spacetime, and that quantum theory is what describes that subtler, unseen reality. In this picture, quantum processes underlying the nonlocal entanglement phenomena (and other strange phenomena such as ‘collapse of the wavefunction’) take place in a realm beneath and beyond the spacetime realm. In fact, collapse is what gives rise to spacetime events. For more on this expanded view of reality, see my new book.