RTI is a different theory from conventional quantum mechanics; it passes empirical tests routinely failed by the conventional theory

I continue to get claims that the Transactional Formulation of quantum mechanics (now called RTI) isn’t really a “theory” because it doesn’t lead to “empirically testable differences” from conventional quantum theory. So I thought I’d share an email reply to someone who recently made this claim:

But RTI does lead to empirically testable differences from conventional QM: it predicts that we’ll get measurement results, while the conventional theory fails to predict that (as illustrated by the macroscopic-superposition reductios like Schrödinger’s Cat and Wigner’s Friend, with their resulting inconsistencies). Thus, RTI continues to be confirmed by every empirical test (every experiment), while the conventional theory founders on the measurement problem (its failure to account for a measurement transition and observed outcomes) in every experiment. That’s the anomaly issue [that I discussed in a previous email]. RTI remedies that anomaly.

The fact that this prediction is not about deviations from the Born Rule does not disqualify it as an empirical prediction.

This situation is precisely analogous to someone insisting that general relativity has no empirically testable difference from Newton’s theory even though Newton’s theory fails to predict the already-observed precession of Mercury’s perihelion and GR does predict it. The insistence (which of course we recognize as absurd in this case) would come from a failure to admit that the observed Mercury precession is an anomaly for Newton’s theory, because its adherents have gotten so used to simply helping themselves to the observed data even though their theory does not account for it in any consistent way. Here, ‘Mercury precession’ plays the role of ‘measurement problem’. Of course that denial didn’t happen in the GR/Mercury case, but the measurement problem is in fact directly analogous to this sort of theoretical anomaly. Failure to recognize it as an anomaly does not make it any less an anomaly. (Laura Gradowski discusses this issue of anomaly suppression/avoidance in her dissertation: https://academicworks.cuny.edu/gc_etds/5091/. It’s a feature of theory entrenchment on the part of a community, one that blocks progress.)

The other issue is that one can have two different theories that ARE empirically equivalent. That’s an identified issue, called ‘empirical underdetermination of theories’. In that case, the choice between theories is guided by considerations like self-consistency, on which conventional QM already fails spectacularly (e.g., Frauchiger-Renner). Thus, even if RTI were empirically equivalent (which it isn’t, as I note above), it employs a different model of field behavior and is not subject to the FR-type inconsistencies.

The FR scenario was the nail in the coffin, even though there’s continued denial about that. What keeps people entrenched in the conventional theory is the enormous press of history and authority (all those smart guys couldn’t solve the measurement problem, so it must not be a real problem) and lack of recognition that there is a legitimate alternative theory.

In any case, it’s not tenable to claim that a theory that does predict what is observed (‘we get outcomes’) has no empirically testable differences from a theory that fails to predict what is observed (‘we get outcomes’). Even if it’s become customary to give conventional QM a pass on that failure.

P.S. The de Broglie-Bohm pilot wave approach also fails on Frauchiger-Renner, since an ‘inner observer’s’ state assignment, based on an alleged hidden-variable-based outcome, is subject to being wrong according to an ‘outer observer’ who (according to conventional QM) must describe the inner observer as being in a superposition.

The point is that conventional QM is doomed by its insistence on unbroken unitarity. RTI is a different theory in that unitarity IS explicitly broken; thus its ability to predict that measurement transitions occur, with observed outcomes.

5 thoughts on “RTI is a different theory from conventional quantum mechanics; it passes empirical tests routinely failed by the conventional theory”

  1. What attracts me personally to RTI is the manner in which it is founded on ‘transaction’, which involves a mediated exchange of mass/energy/information between an active ‘giver’ and an active ‘receiver’. In other words, there is a balanced foundation between the act of conferral and the act of reception/selection. This idea of a basic symmetry in the bilaterally active process of exchange is foundational in many mystical belief systems, including Taoism (Yang/Yin) and Kabbalah (Chochmah/Binah). It also has deep and important implications, I think, for how the natural world, including living organisms, really works on the basis of mediated informational exchange. And this, in turn, has important moral and ethical implications that, I think, are captured and made foundational in Baruch Spinoza’s ‘Ethics’, which seems a long way off from theoretical physics but is, I think, fundamentally connected nevertheless.

  2. I have recently watched a video of John Cramer discussing TI, and this discussion, with a couple of interesting folks asking him some good questions, highlighted (at least for me) some interesting ideas that I had not fully appreciated before…

    see: https://www.youtube.com/watch?v=vVrPerLn7sE

    One insight that was raised for me is that the communicative ‘handshake’ connection taking place in the quantum substratum between the emitter and absorber may actually be dynamically understood as a ‘resonance’ phenomenon, in which the potential emitter and absorber are sending out repeated waves that run backward and forward in time and thus can be understood as potentially setting up dynamically interactive ‘time loops’. The idea is that there may be a circularly interactive connection between the emitter sending forward multiple ‘retarded waves’ and absorbers sending backward multiple ‘advanced waves’, which can enter into a circularly dynamic ‘resonance’ that runs in a temporal loop, and that this may depend on some type of ‘phase locking’. This implies that one can use resonance theory to work out the dynamics of the process through which the inter-connection between the potential emitter and a particular potential absorber comes to ‘fruition’, forming the foundation for the actualization of the transfer of the mass/energy/information that then transitions, in the context of physical actuality (the Bohmian explicate order), from the emitter to the absorber. One aspect of the transactional interpretation that I find baffling is why it should be one particular absorber, among many potential absorbers, that forms the completed ‘connection’ with the emitter. What this raises is the question of how a resonance forms between the potential emitter and the one particular absorber that ends up being the ‘receiver’ that completes the connection.
