
Below is an excerpt from a paper by Fuchs, Mermin, and Schack entitled “An Introduction to QBism with an Application to the Locality of Quantum Mechanics”

(https://arxiv.org/pdf/1311.5253.pdf):

“II. AGAINST NONLOCALITY

There is no nonlocality in quantum theory; there are only some nonlocal interpretations of quantum mechanics. ….

.. QBism makes sense of quantum mechanics by taking an unfamiliar perspective on scientific theories and the scientists who use them. QBist quantum mechanics is local because its entire purpose is to enable any single agent to organize her own degrees of belief about the contents of her own personal experience. No agent can move faster than light: the space-time trajectory of any agent is necessarily time-like. Her personal experience takes place along that trajectory.

Therefore when any agent uses quantum mechanics to calculate “[cor]relations between the manifold aspects of [her] experience”, those experiences cannot be space-like separated. Quantum correlations, by their very nature, refer only to time-like separated events: the acquisition of experiences by any single agent. Quantum mechanics, in the QBist interpretation, cannot assign correlations, spooky or otherwise, to space-like separated events, since they cannot be experienced by any single agent. Quantum mechanics is thus explicitly local in the QBist interpretation. And that’s all there is to it.”

What Henson did was to point out that one can replace “quantum theory” or “quantum mechanics” in the above Qbism argument with a hypothetical *nonlocal* theory called “NL theory” or “NL mechanics.” That is, NL is an explicitly nonlocal theory that allows violations of relativity such as controllable superluminal signaling (although it prohibits observers themselves from moving faster than light). This gives the following (with the replacements made):

“There is no nonlocality in NL theory; there are only some nonlocal interpretations of NL mechanics. ….

.. QBism makes sense of NL mechanics by taking an unfamiliar perspective on scientific theories and the scientists who use them. QBist NL mechanics is local because its entire purpose is to enable any single agent to organize her own degrees of belief about the contents of her own personal experience. No agent can move faster than light: the space-time trajectory of any agent is necessarily time-like. Her personal experience takes place along that trajectory.

Therefore when any agent uses NL mechanics to calculate “[cor]relations between the manifold aspects of [her] experience”, those experiences cannot be space-like separated. NL correlations, by their very nature, refer only to time-like separated events: the acquisition of experiences by any single agent. NL mechanics, in the QBist interpretation, cannot assign correlations, spooky or otherwise, to space-like separated events, since they cannot be experienced by any single agent. NL mechanics is thus explicitly local in the QBist interpretation. And that’s all there is to it.”

Thus, the Qbism ‘against nonlocality’ argument regarding QM must also deny the nonlocality of *an explicitly nonlocal theory*, as long as that theory doesn’t allow agents themselves to travel faster than light. This is a *reductio ad absurdum*: it demonstrates that the Qbism argument for locality leads to the self-contradictory conclusion that a nonlocal theory is local.

Another way to see the problem with the Qbism argument is to consider a medical test for the existence of some disease. Suppose Mr. X has a symptom, such as shortness of breath, and he goes in for tests to determine the cause. Unfortunately, Mr. X has lung cancer, but the test fails to disclose it. (All medical tests have some fallibility in this way, but good tests have a very small chance of this sort of error.) This is called a ‘false negative’: Mr. X really does have a disease, but the test for the disease yields the wrong answer: it says he lacks the disease when in fact he has it. (A false positive would say that someone has the disease when in fact he does not).

The Qbism test for nonlocality is whether an agent will experience correlations between spacelike separated events. But since a timelike-restricted agent can *never* encounter spacelike separated events, this ‘test’ trivially yields a negative result for nonlocality. It is thus *guaranteed* to find ‘no nonlocality’ in any theory in which observers are restricted to timelike paths, whether the theory itself is local or not. So it will yield a false negative for a theory that is explicitly nonlocal, as long as that theory does not allow agents themselves to move faster than light. This is like a test for a disease that is guaranteed to yield the result ‘no disease’ whether or not the disease is present (provided there is some condition, analogous to the timelike-restricted observer, that masks the disease).
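The informational emptiness of such a guaranteed-negative test can be made quantitative with a simple Bayesian calculation (a minimal sketch; the function name and all numbers here are hypothetical, chosen only to illustrate the point): if the test returns ‘negative’ with probability 1 whether or not the disease is present, then observing a negative result leaves the prior probability of disease completely unchanged.

```python
# Illustrative sketch: a test that always comes back "negative" carries no
# information about the disease. All numbers below are hypothetical.

def posterior_disease_given_negative(prior, p_neg_given_disease, p_neg_given_healthy):
    """Bayes' rule: P(disease | negative test result)."""
    p_neg = p_neg_given_disease * prior + p_neg_given_healthy * (1 - prior)
    return p_neg_given_disease * prior / p_neg

prior = 0.10  # hypothetical base rate of the disease

# A good test: it rarely misses the disease (5% false-negative rate).
good = posterior_disease_given_negative(prior, p_neg_given_disease=0.05,
                                        p_neg_given_healthy=0.95)

# The 'locality-test' analogue: guaranteed negative in all circumstances.
useless = posterior_disease_given_negative(prior, p_neg_given_disease=1.0,
                                           p_neg_given_healthy=1.0)

print(f"good test:    P(disease | negative) = {good:.4f}")     # well below the prior
print(f"useless test: P(disease | negative) = {useless:.4f}")  # exactly equal to the prior
```

The second result equals the prior exactly: a negative outcome from the always-negative test tells us nothing, which is the sense in which the QBist locality criterion has no informative content.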

Would it help Qbists to argue that there could be no theory such as NL that would allow controllable superluminal signaling while restricting observers to timelike trajectories? No, since there is no logical necessity for a theory to describe observers by the same laws that it applies to physical objects or events. Moreover, it is certainly conceivable to have a theory that allows superluminal behavior on the part of some objects but not observers, as defined by the theory. So one cannot escape this conclusion by objecting that such a theory would not be realistic or correct, and/or would not apply to our world. The issue, raised by Qbists themselves, is not whether a theory (such as quantum mechanics) is correct or true, but *whether or not it is local*. And they are forced by their own criterion to pronounce ‘local’ even a hypothetical nonlocal theory with explicitly relativity-violating controllable signaling, whether or not such a theory could ever be true. This contradiction is what reveals the Qbism locality ‘test’ as having no real informative content about whether any theory (good or bad) is local.

If we don’t want to subject Mr. X to a test guaranteed to say that there are no cancer cells even when there are (given a loophole analogous to the timelike-restricted observer), then we should probably be leery of an interpretive criterion guaranteed to say that a theory is ‘local’ even when, like ‘NL’, it allows explicitly superluminal nonlocal signaling between events.

A final remark regarding what is meant by ‘about the world’ in the title of this post: I can imagine a self-described Qbist objecting that Qbism is ‘about the world’, and that the Qbist approach is an enlightened view of what it means for a theory to be about the world, in that ‘world’ can only mean ‘the subjective experiencing of an agent’. Such a view denies that the theoretical quantities of quantum theory (such as quantum states) refer to specific definable entities beyond the observer’s perceptions. In essence, Qbism assumes that we are moving through unspeakable ‘quantum stuff’ and that we can (and should) say nothing more about it than that. But in fact it is perfectly possible and reasonable to suppose that quantum theory is indeed saying something about this ‘quantum stuff’: i.e., that quantum states really do describe constituents of reality and their nature. It’s just that their nature is highly non-classical, and that is evidenced by their nonlocality.

Thus, many researchers still think of quantum nonlocality as a kind of disease that needs to be eliminated, denied, or explained away in order for the theory to ‘make sense’. An alternative approach is to admit that maybe the theory is indeed telling us that Nature has a certain kind of hidden nonlocality. For a way to make sense of quantum theory without denying its essential nature, I invite the reader to consider this book. It is certainly true that what we perceive is contextual in some sense, but quantum theory has much more to tell us about the nature of this contextuality than the usual instrumentalist assumption, as in Qbism, that we cannot know anything about the nature of the interaction between observers and the quantum ‘stuff’ and therefore must treat everything external to our perceptions as an unknowable ‘black box’. The transactional picture allows us to understand at least something about the nature of our interaction with the unseen aspects of reality that we describe by quantum states and forces (see the above book link for that account, and stay tuned for my 2019 followup book, pictured below.)


This is a preprint version of this paper: AIP Conference Proceedings 1841, 020002 (2017)

R. E. Kastner

INTRODUCTION

Several researchers have proposed interpretations of quantum theory that involve explicit time symmetry [1-5]. This paper discusses the inconsistency between causal dynamism and the block world ontology implicit in these time-symmetric interpretations of quantum mechanics. While the present author has also proposed an interpretation involving time-symmetric field propagation based on the direct-action theory of fields (the ‘possibilist transactional interpretation,’ PTI) [6], my approach has a temporal asymmetry owing to the response of absorbers, which breaks the time symmetry. (For a discussion of this, see [7], [8].) In addition, PTI treats spacetime events as unidirectionally emergent from the underlying time-symmetric processes, which are seen as taking place in a pre-spatiotemporal domain of Heisenbergian potentiae. So the observations herein do not apply to PTI, which has a growing spacetime ontology in which the future is open; there is no future spacetime boundary condition. The focus of this paper is solely on those time-symmetric theories that seek to ‘restore time symmetry’ at the level of spacetime, such that the distinction between past and future is merely perspectival rather than ontological. Such interpretations all have future spacetime boundary conditions.

It should be emphasized at the outset that this author is certainly sympathetic towards the philosophical project of exploring time symmetry and investigating whether our experience of temporal directionality is ontologically based or merely perspectival. As far as I can tell, there is no way to settle this question empirically; we have to invoke other considerations such as fruitfulness of the approach, economy and consistency of explanation, realism versus non-realism, etc., in order to try to assess which interpretation might be a correct description of nature. The present paper aims only to observe that what is often said about these theories in dynamical terms cannot really be upheld, since their explicit time symmetry forces a static ontology. Thus my criticism is not based on a priori rejection of the block world ontology; it is possible that we do live in a block world. Rather, I simply offer an observation that the dynamical stories presented in these interpretations are inconsistent with the ontology they require.

CAUSATION, FORWARD AND/OR BACKWARD

Time-symmetric interpretations such as the two-state vector formalism (TSVF) [5] and time-symmetric hidden variables (TSHV) such as those of Price and Wharton [1,2] and Sutherland [3,4] are presented as involving a bidirectional causal flow. (We might consider these to be theories, rather than mere interpretations, of quantum mechanics, since they all involve additional theoretical structure beyond that of standard quantum mechanics, so I will refer to them as theories in this discussion.) In TSVF, a system is taken as being described by both its pre-selection state |ψ⟩ and post-selection state |φ⟩, through the putatively bidirectional ‘two-state vector’ ⟨φ| |ψ⟩. The theories of Price, Wharton, and Sutherland involve time-symmetric hidden variables (TSHV), i.e., hidden variables that have dependence on both past and future events. Sutherland’s is a time-symmetric version of the Bohmian theory in which position is the hidden variable. Price, in [1], coined the term ‘advanced action’ for the idea that dynamical influences can propagate from an event E occurring at t toward past times tp < t.

This author is aware of the highly nontrivial task of defining causation, and this paper by no means pretends to offer a categorical definition. I will simply lay out what I think is a common understanding of what it means to say that one event causes or influences another event (the latter being perhaps a weaker proposal but still involving dynamism). I believe this common understanding is what is in play when the above theories are presented, so that is why I think it should suffice for the present discussion. For example, Sutherland expresses the view that retrocausation can ‘save locality’ in quantum theory, by explaining the distant connections between entangled EPR pairs in terms of a dynamical “zig-zag” influence going backward and then forward again in time between one particle, back to the source, and then forward to the other particle.(1)

In addition, advocates of the TSVF suggest that a ‘future event influences a past event’ (e.g., [9]). As noted above, in TSVF a system is described by a “two-state vector” (TSV), ⟨φ| |ψ⟩. Aharonov et al. explicitly state that the post-selection state propagates toward the past; the TSV is taken as a bi-directional entity. So it is assumed that dynamical propagation is occurring in spacetime. All these time-symmetric theories have in common a final boundary condition on the universe, and a determinate set of spacetime events in between the initial and final boundary conditions. (It should be noted that the correlations discussed in [9] are straightforwardly predicted by standard quantum mechanics and therefore do not require explicit spacetime retrocausation.(2))
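For the reader's reference, the standard quantitative content of the TSVF is the Aharonov-Bergmann-Lebowitz (ABL) rule, stated here from the general literature rather than from the text above: given pre-selection |ψ⟩ and post-selection |φ⟩, the probability of an intermediate measurement outcome a_j is

```latex
% ABL rule: probability of intermediate outcome a_j given pre-selection
% |psi> and post-selection |phi>; \Pi_{a_j} projects onto the a_j eigenspace.
P(a_j \mid \psi, \phi) \;=\;
  \frac{\bigl|\langle \phi \,|\, \Pi_{a_j} \,|\, \psi \rangle\bigr|^{2}}
       {\sum_k \bigl|\langle \phi \,|\, \Pi_{a_k} \,|\, \psi \rangle\bigr|^{2}}
```

The formula is symmetric under interchange of the pre- and post-selection states, which is the formal basis of the ‘two-vector’ reading; the point at issue in this paper is that the symmetry of a probability rule does not by itself license the claim that anything physically propagates toward the past.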

Let us now see whether we can make sense of the dynamical stories accompanying these theories. First, consider a sequence of events (even a deterministic one) appearing one after another in a single temporal direction (either forward or backward). We can apply a counting process to each sequence: the first event to appear is indexed by 0, the second by 1, etc. If we like, we can index a past-directed sequence by 0, -1, -2, …; or we can use the same positive indices and consider them values of -t. This is just an arbitrary convention, given a unidirectional sequence of events: an observer experiencing a sequence of events growing in this way, in either direction, would presumably not be aware of any difference in temporal orientation. One event is simply experienced as following another. In this circumstance, one can meaningfully think of event n as the cause of event n+1, since the occurrence of event n was a prerequisite for the appearance of event n+1. (For the past-directed sequence, one can think of event n as the cause of event n-1; event n is a prerequisite for the occurrence of event n-1.) Another way to describe the situation is that one event is being generated by another event; without its generating event n, event n+1 would never appear (and mutatis mutandis for the past-directed sequence). This account identifies a cause n of an event n±1 as a necessary and sufficient condition for the occurrence of n±1.

Thus, there is nothing strange about ‘retrocausation’ if it is taken as applying to a unidirectional sequence of events. It is just a convention in which the temporal index has the opposite sign. However, it is only in such a unidirectional sequence of events, appearing one after the other (where ‘after’ is relative to the orientation of the sequence numbering), that one can think of one event as causing another, in this ‘generating’ sense; the set of events is always increasing. In contrast, in a block world, all the events are given as an entire set, not as a growing sequence. It is not the case that any events are ‘coming into being’; all spacetime events exist up to and including the final instant of the universe. Therefore, no event is a prerequisite for, or generator of, any other event, since all events appear with equal priority. Put differently, for any event E occurring at t=0, events D and F preceding and following E have nothing to contribute to E’s existence. If event E is King Arthur’s coronation, event D is young Arthur pulling the sword from the stone, and F is Merlin approaching from the future to guide the young Arthur, event E has no need for either D or F, since it already exists in spacetime. Neither D nor F is a necessary and/or sufficient condition for E. The only dynamical role that could be played by D and F would be in establishing the spacetime manifold from some orthogonal manifold (which we could think of as ‘God creating the block world’).

One might think that since the laws of physics, including quantum mechanics, have a dynamical expression–e.g., the Schrödinger evolution describes a quantum state changing with time–that these laws are ‘in play’ even in a block world picture. So suppose we think of the spacetime manifold as analogous to the field between capacitor plates, where the plates play the part of the initial and final universal boundary conditions. The establishment of such a field is certainly a dynamical process (charging up the plates); but this corresponds to the initial establishment of the boundary conditions and the field (God creating the spacetime manifold, if you will). Once the plates are fully charged and the field established, we have a static situation–an electrostatic field. The latter is what corresponds to the spacetime manifold in the block world of the time-symmetric theories, with their specified boundary conditions and physical laws determining the configuration of all the events. There is no provision in these theories for the ‘becoming’ stage of the spacetime manifold. Perhaps that could be added as an additional ontological component; but it is not part of the proposals in their current form.

RETROCAUSATION ABSENT IN BLOCK WORLD TIME-SYMMETRIC THEORIES

According to the above, the block world ontology is static and acausal. This is also the conclusion of proponents of the Relational Block World (RBW) interpretation of quantum mechanics [11], who present their interpretation as an adynamical and acausal one, and who therefore propose a block world interpretation with a consistent ontology. In contrast, there is an ontological inconsistency in TSVF and TSHV theories to the extent that retrocausation is claimed to play an explanatory role in service of such perceived goals as ‘saving locality’. For example, Price has also argued that TSHV constitute a local explanation for the Bell correlations. The idea is that each of the particles in an entangled Bell state possesses time-symmetric hidden variables that give it a dependence not only on its prepared state but on its detection state, so that it is purportedly influenced by both of these. But in fact, all spacetime events are determinate; each particle’s entire trajectory already exists in the block world. So there is no influence propagating “anywhen” in spacetime; it is just a story tacked on to a set of events that already exist.

Thus, once we have all events specified in this way, there is no need for any additional story about propagating influences such as ‘advanced action’; it is superfluous. In fact it is the static block world that is doing the work of ‘saving locality’, not any dynamical process. Faster-than-light influences are eliminated because the quantum correlations are explained through violation of Einstein’s ‘being thus’ criterion (also called “separability,” quantified by Shimony [12] as “outcome independence”), rather than through faster-than-light signaling (quantified by Shimony as “parameter independence”). The point here is that the dynamical story is not an accurate reflection of the ontology of these theories. The different specific theories simply amount to different ways of accounting for the pattern of static relationships among spacetime events.
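The two conditions named here can be stated explicitly (the standard Jarrett-Shimony decomposition, supplied for reference; λ denotes the complete state of the pair, x and y the detector settings, and a and b the outcomes):

```latex
% Jarrett-Shimony decomposition of Bell factorizability.
% Parameter independence: no dependence on the distant *setting*:
P(a \mid x, y, \lambda) = P(a \mid x, \lambda)
% Outcome independence: no dependence on the distant *outcome*:
P(a \mid b, x, y, \lambda) = P(a \mid x, y, \lambda)
% Bell locality is the conjunction of the two:
P(a, b \mid x, y, \lambda) = P(a \mid x, \lambda)\, P(b \mid y, \lambda)
```

On the account described above, the block world theories preserve parameter independence (hence no superluminal signaling) while giving up outcome independence; it is this trade-off among static constraints, not any propagating influence, that accounts for the Bell correlations.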

In addition, in these theories the quantum state has been relegated to an epistemic role: it does not refer to an objective uncertainty but only to an epistemic uncertainty. In the view of this author, the epistemic approach is therefore a step toward non-realism about the quantum state, since any statistical uncertainty obtaining in the state descriptions refers only to the ignorance of an observer. Thus, the quantum state description does not ontologically refer. This is certainly an option one can take. However, it appears that proponents of these interpretations see them as realist approaches, which would seem to be in conflict with the fact that their quantum states do not refer to something in the world. One might argue against this that classical statistical mechanics (CSM) could be regarded as realist, so why shouldn’t taking the quantum state as a statistical description be considered realist? The difference is that CSM adopts the statistical approach despite the fact that in any given situation, the system’s properties can be considered determinate; they are in principle precisely specifiable. Thus, in classical physics, there are two levels of description, precise vs. statistical, that have different functions. In CSM, we can choose the statistical level of description by voluntarily introducing an uncertainty allowing the inclusion of other possible states of the system besides the one currently possessed by it. In contrast, there is only a single quantum description (the quantum state), which has an irreducible uncertainty not introduced voluntarily by the observer. It is a choice, not a necessity, to view that irreducible uncertainty as describing the observer rather than the system itself. Taking a theoretical object as referring to an observer’s knowledge rather than to the system clearly constitutes a step away from realism about the quantum state.

It thus seems to this author that the above time-symmetric theories and interpretations of quantum mechanics are presented in a way that does not reflect their ontological nature, in two chief ways: (1) They are presented as dynamical accounts when they are actually adynamical; and (2) They are presented as full-blown realist accounts when they are really epistemic about quantum states. In these theories, quantum states–whether single states or TSVs–are just observer-dependent labels placed on sets of events that are ontologically determinate and which have certain static relations amongst one another in the block world. What is doing the heavy lifting in these theories is the block world. The dynamical stories presented along with these accounts do no actual explanatory work, since there is no real dynamics.

FREE WILL CONSIDERATIONS

Finally, a few remarks about free will. While this is not a necessary part of my critique, it should be briefly addressed here, since proponents of TSHV theories have claimed that their theories are compatible with free will.

It is typically assumed that in order to have free will, we must (at least) have some causal control over our actions. According to the above arguments, one would therefore not have free will in either the TSVF or TSHV ontology. Suppose one attempted to counter this negative conclusion through the application of a particular theory of free will, such as the ‘agent interventionist’ theory. According to this theory, an agent Alice could be said to have free will if she has a ‘control knob’ that she can turn, which influences some other event. If her influence is forward-propagating, then she is considered as having free will towards the future by virtue of her control; if her influence is past-propagating, then she is considered as having free will toward the past. In the time-symmetric story told with these theories, theoretically Alice would have free will in both directions; but (as a time-asymmetric creature ‘moving through the block world’ in a particular direction), she has knowledge of past events, so she could only be considered to have free will towards the future because of her ignorance of future events. (The notion that observers ‘move through’ the block world is taken as primitive in block world theories. It cannot be explained from within the theory and must be assumed as an ad hoc principle.(3))

However, this ignorance-based account will not do because, as observed above, Alice actually exerts no dynamical influence in either temporal direction. Moreover, it is not up to her what her knob connects to in her future; that is a uniquely determined event. Her knob is not a control knob; it is just one of a pair of related events. So if Alice thinks she has control, she is simply mistaken. Ignorance of future events does not constitute control over future events, and being mistaken about whether one has control does not constitute free will. This account therefore purports to “save” free will by attributing it to what amounts to a delusion on Alice’s part. Incorrect information about events cannot reasonably be taken as any form of control over events. If anything, it signals lack of control: if “information is power,” then surely wrong information is (at best) lack of power.

CONCLUSION

It has been argued that time-symmetric theories and interpretations of quantum mechanics that imply a block world ontology cannot consistently be portrayed as involving dynamical influences propagating within spacetime. Such dynamical stories are just narrative overlays on a static ontology. The block world ontology is what ‘defangs’ the nonlocal correlations between entangled EPR particles by taking all outcomes as determinate in the block world, so that neither particle has to make a seemingly miraculous instantaneous decision as to which way to spin based on its partner’s distant measurement result. The story of the dynamical ‘zig-zag’ influence from one detector back in time to the source and forward again in time to the other detector is not a description of something that really occurs in the ontology. It is therefore misleading and, in the view of this author, should be dropped.

In addition, these theories relegate quantum states to observer-dependent epistemic descriptions whose statistical uncertainties reflect only the ignorance of observers regarding events that are determinate in the spacetime manifold. Underlying every state attribution is a complete set of determinate spacetime events, about which the observer who assigns the state is ignorant; the state therefore amounts to an epistemic probability distribution. Thus these approaches appear not to be fully realist, in the sense that they do not take the quantum state as referring to an observer-independent entity.

Finally, a general observation: it appears that much of the discussion of time-symmetric theories, both by proponents of such theories and even by more or less neutral researchers, is being undertaken against a backdrop of declining to take the static block world ontology of the theories seriously. For example, a recent paper by Leifer and Pusey [14] associates the term “retrocausation” with backward-propagating physical influences, but then actually defines it in terms of (epistemic) conditional statistical dependence on future events. Clearly, an epistemic statistical dependence and a propagating physical influence are two completely different things. In foundational studies of quantum theory, the distinction between an epistemic quantity and an ontological, physical process or property is a crucial one that bears on which sorts of interpretations are viable and which are not. Yet very commonly in the literature on time-symmetric theories, epistemic and ontological quantities seem to be viewed as interchangeable. This is a curious situation.

Now, one might reply that nobody really knows what causation is, or even if it exists at all, and that all causation/dynamism talk is really (or is reducible to) talk about our epistemic perspectives as observers. That is, a demand to refrain from applying dynamical narratives and claims to a block world ontology is often met with the claim that such a demand is unreasonable: that nobody should be expected to take the underlying ontology seriously when it comes to talking about physical influences and causation. However, such a response assumes that the restrictions of the static block world ontology apply equally to all possible ontologies, when that is not the case. Both the growing causet model of Sorkin et al. and the poset model of Knuth and colleagues (e.g., [15]) have physical dynamism built into the growth of the set, such that the dynamical experience of an observer corresponds to features of the ontology itself, not to the perspective of an observer assumed to be (somehow) moving through a static ontology. In such growing-universe (“becoming”) models, the future is ontologically open and, on that basis, genuine (non-illusory) intervention by an agent is at least in principle possible.(4) Dismissing these models as “speculative” is not a legitimate response: it could equally be said that time-symmetric hidden variables models are “speculative.” Thus, we have to avoid a double standard here. If a block world ontology is not strictly required by physical theory (as has been shown by Sorkin [17]), then the methodological claim that all “causation” talk is necessarily perspectival-only is refuted. In fact, it is in principle possible to have a model in which there is genuine dynamics and even genuine intervention by an agent, not just the illusion of intervention. Thus, intervention and dynamism are not reducible or equivalent to an epistemic-only account. Time-symmetric hidden variables models (as well as TSVF) cannot have it both ways. They are static, not dynamical, ontologies.

ACKNOWLEDGMENTS

The author is pleased to thank the organizer, Daniel Sheehan, and participants of the Workshop on Retrocausation (AAAS Conference, San Diego, CA; June 15-16, 2016) for the opportunity to present and discuss this paper and related ideas.

NOTES

1 This was stated, for example, in Sutherland’s presentation at the AAAS Workshop on Retrocausation (San Diego, June 15-16, 2016). The “zig-zag” account of EPR correlations is also given in Cramer’s original presentation of the transactional interpretation [10]. This author differs from Cramer in his assumption that all processes are spacetime processes, for that makes his original version of TI subject to the same criticism elaborated herein.

2 This issue will be elaborated in a separate work.

3 It is often supposed that one would be mistaken in thinking that this aspect of experience is something that a physical theory should be expected to explain, thus elevating a shortcoming of a particular theoretical approach to an ostensibly required and right-thinking methodological and metaphysical principle. But in fact, other approaches, such as a growing universe picture, can explain this temporally oriented aspect of our experience. An example is the causal set account of Sorkin et al. (e.g., [13]), which is fully compatible with relativity.

4 Arguments that no real intervention (free will) is possible even with quantum indeterminism are rebutted in [16].

REFERENCES

- H. Price, Time’s Arrow and Archimedes’ Point: New Directions for the Physics of Time (Oxford University Press, New York, 1996).
- H. Price and K. Wharton, “Disentangling the Quantum World,” Entropy 17, 7752-7767 (2015).
- R. Sutherland, “Lagrangian Formulation for Particle Interpretations of Quantum Mechanics: Single-ParticleCase.” Preprint, arXiv:1411.3762v2 (2014).
- R. Sutherland, “Lagrangian Description for Particle Interpretations of Quantum Mechanics — Entangled Many-Particle Case.” Preprint, arXiv:1509.02442.pdf (2015).
- Y. Aharonov and L. Vaidman, “The Two-State Vector Formalism of Quantum Mechanics: an Updated Review” in Time in Quantum Mechanics, Volume 1, Lecture Notes in Physics 734, 2nd ed., edited by Juan G. Muga, R. S. Mayato, Í. Egusquiza (Springer, 2008), pp. 399–447.

- R. E. Kastner, The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility (Cambridge University Press, Cambridge, 2012).
- R. E. Kastner, “The Broken Symmetry of Time,” AIP Conf. Proc. 1408, 7 (2011).
- R. E. Kastner, “Real and Virtual Photons in the Davies Theory of Time-Symmetric Electrodynamics,” Electronic Journal of Theoretical Physics 11, 75-86 (2014).
- Y. Aharonov, E. Cohen, A. Elitzur, “Can a Future Choice Affect a Past Measurement’s Outcome?” Preprint, arXiv:1206.6224 (2012).
- J. Cramer, “The Transactional Interpretation of Quantum Mechanics.” Reviews of Modern Physics 58, 647-688 (1986).
- M. Stuckey, M. Silberstein, T. McDevitt, “Relational blockworld: Providing a realist psi-epistemic account of quantum mechanics.” International Journal of Quantum Foundations 1, Issue 3, 123-170 (2015).
- A. Shimony, “Contextual Hidden Variables Theories and Bell’s Inequalities.” Brit. J. Phil. Sci. 35, 25-45 (1984).
- R. Sorkin, D. P. Rideout, Phys. Rev. D 61, 024002 (2000).
- M. Leifer, M. Pusey, http://arxiv.org/abs/1607.07871 (2016).
- K. Knuth, N. Bahreyni, “A Potential Foundation for Emergent Spacetime.” Journal of Mathematical Physics 55, 112501 (2014).
- R. E. Kastner, “The Born Rule and Free Will.” To appear in D. Aerts, S. Aerts, and C. deRonde (Eds.), Probing the Meaning of Quantum Mechanics: Physical, Philosophical, Mathematical and Logical Approaches. Singapore: World Scientific (forthcoming, 2016).

- R. D. Sorkin, “Relativity theory does not imply that the future already exists: a counterexample,” in V. Petkov (Ed.), Relativity and the Dimensionality of the World (Springer, 2007). Preprint, http://arxiv.org/abs/gr-qc/0703098

https://www.sciencenews.org/blog/context/quantum-mysteries-dissolve-if-possibilities-are-realities

In a nutshell, the measurement problem (MP) is this: given an interaction among quantum systems (such as an unstable atom, atoms comprising a Geiger counter, atoms comprising a vial of gas, a cat, a friend of Wigner, etc.), which of those interactions constitutes ‘measurement,’ and why? During the past several decades, worries about the MP largely abated due to a popular sense that environmental decoherence took care of defining measurement in a unitary-only picture (even though there were numerous criticisms of that approach—e.g., Dugić and Jeknić-Dugić, 2012; Fields, 2010; Kastner, 2014c). However, there remains a marked lack of consensus, and recently there has been a resurgence of concern around this issue. Griffiths goes so far as to remark that:

However, perhaps the situation is not so dire. The present author would like to issue a gentle reminder that there is in fact a strong contender for solving the measurement problem in the Relativistic Transactional Interpretation (e.g., Kastner, 2012), which must be carefully distinguished from the original TI of Cramer (1986). Making that distinction clear is a major objective of the present work. First, however, it is well known that about a decade after Cramer’s original proposal, Maudlin (1996; 2nd ed. 2002) raised what appeared at the time to be a fatal objection to TI, and at that point a consensus developed that TI was not viable. What went largely unnoticed after Maudlin’s apparent disposal of TI were several publications demonstrating that the objection was not in fact fatal (e.g., Marchildon, 2006; Kastner, 2006; Kastner, 2012, Chapter 5). More important, however, is that the Maudlin objection becomes completely nonviable once the relativistic level of the transactional picture (RTI) is taken into account (Kastner, 2017a).

In view of the ongoing concern about the MP, this more recent nullification of the Maudlin objection is briefly reviewed herein, along with the RTI solution to the measurement problem, including quantitative criteria for the processes of emission and absorption (Kastner, 2012, Section 6.3.4). The latter were taken as primitive in Cramer’s original account, apparently leading many researchers to discount it. The RTI development, which remedies these lacunae in the original TI, does not seem to have penetrated the community, since a recent review by L. Marchildon of Cramer’s latest book (Cramer, 2016) completely omits it. Based only on the older version of TI presented in Cramer’s book, Marchildon expresses his worry that

“In an important sense, TI is not better defined than the Copenhagen interpretation… in Cramer’s view, transactions play the part of collapse. True, they are somewhat immune to questions like ‘When does the collapse occur?,’ but they require emitters and absorbers. These should be macroscopic (classical) objects if transactions are truly irreversible. The classical-quantum distinction or apparatus definition therefore plagues Cramer’s view just as it does Bohr’s or von Neumann’s.” (Marchildon 2017)

In fact, however, this is no longer the case. Emission and absorption are now quantitatively defined at the microscopic level, as is the microscopic/macroscopic transition (although the latter is fundamentally indeterministic).1 So the issue leading to Marchildon’s assessment that TI fares no better than the Copenhagen interpretation is precisely what has been resolved in the relativistic extension of TI (RTI). Since this is a serious misunderstanding of the present status of the transactional interpretation, I shall deal with it first (following a brief review of the basic principles of TI), and shall subsequently review the nullification of the Maudlin challenge.

[For the full paper, click the link below. Agree? Disagree? Comments? Post them here.]

On the Status of the Measurement Problem (arXiv)
