More on Entropy and the Arrow of Time

This is somewhat technical. It’s for those interested in the puzzle of how we get the irreversible processes we see all around us from laws that are supposedly reversible. The trick: they are not all reversible. A crucial part of the physics of Nature involves an irreversible step that has long been neglected. The paper is an invited contribution to the journal Entropy. Click here to read.

17 thoughts on “More on Entropy and the Arrow of Time”

  1. Your explanation of the 2nd law is convincing. But there’s a question about TI’s treatment of the measurement process. As you say, it happens in two steps. First, the transition from density operator to density matrix by completion of all OW/CW (offer wave/confirmation wave) transactions. This first step is well explained by TI. Second, the selection of a result: an eigenvalue of the observable operator. This is the so-called collapse, not explained by TI.

    RK>> “The second step in the measurement transition is non-unitary collapse to one of the outcomes from the set of possible outcomes … This can be understood as a generalized form of spontaneous symmetry breaking, a weighted symmetry breaking: i.e., actualization of one of a set of possible states where in general the latter may not be equally probable.”
    RK>> Now, of course there is no causal, deterministic story for how the system ends up in one state as opposed to the others; but that shouldn’t be a surprise to us by now. It’s a reflection of genuine ontological indeterminacy– which means: something truly unpredictable happens.

    So TI’s explanation of collapse is “spontaneous, ontologically indeterminate, truly unpredictable, probabilistic actualization”. That’s just the standard Copenhagen position. The collapse (second step) is unexplainable; it just happens.

    RK>> “Once you have a mixed state that can be interpreted epistemically, it’s legitimate to say that the system has a determinate but unknown property. In contrast, you can’t legitimately say that in the standard “decoherence” approach, because at best you only get improper mixed states, and those cannot be interpreted epistemically.”

    You’re right. For explaining the first step TI is much better than decoherence, because the correlations become exactly zero, not just approximate.

    RK>> So once you have an epistemic mixture, as in TI, the measurement problem is solved.

    No, there’s still the second step. One specific result must get selected, or chosen, or instantiated, from the epistemic mixture of determinate but unknown properties.

    RK>> Or maybe, as I’ve explored a little in my latest book, perhaps this is Nature’s way of leaving room for volition…?
    RK>> Remember Dyson’s remark that: “I think our consciousness is not just a passive epiphenomenon carried along by the chemical events in our brains, but is an active agent forcing the molecular complexes to make choices between one quantum state and another. In other words, mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call “chance” when they are made by electrons.” (Disturbing the Universe).

    It’s reasonable to speculate that “mind is inherent in every electron”, a theory sometimes called “conscious particles”.

    RK>> I should clarify that actually I don’t deny that consciousness has something to do with collapse. What I do deny is the standard appeal to an ill-defined ‘external conscious observer’. Absorber response triggers the measurement transition, as described in my 2nd law paper. Now, whether absorbers have consciousness is a separate — and very intriguing — issue, as I’ve noted above.

    Ok. The “standard ill-defined conscious observer” is the experimenter, or observer, who, according to Wigner for instance, causes the collapse. You deny that idea. Instead, maybe the particles or absorbers have some sort of “proto-consciousness” that collapses the wave. (Or, maybe not).

    The TI model – like every interpretation – is incomplete without an extra hypothesis or postulate to explain collapse, even if, like MWI, it tries to pretend there’s no collapse. You, Dyson and I agree that one possible hypothesis is the theory of conscious particles or absorbers, or something like that.

    ** BTW RK quotes are taken from the referenced paper, elsewhere on your blog, and also a PF thread.

    1. Thanks, but it’s not correct to say that TI’s treatment of collapse is the same as Copenhagen’s. Copenhagen has to appeal to the perceptions of an ill-defined ‘outside observer,’ whereas TI does not. What counts as an absorber (i.e., that which responds) is quantitatively well-defined in TI. In contrast, what counts as an ‘outside observer’ in an observer-dependent approach like Copenhagen is notoriously ill-defined; that’s the “Heisenberg Cut”.
      TI gives quantitative, albeit indeterministic, conditions for the non-unitary transition from a pure to a mixed state. If the resulting mixed state can be interpreted epistemically, which it can, then the transition from pure to mixed state subsumes indeterministic collapse to one outcome; they need not even be thought of as two distinct steps. I.e., an epistemic mixture is epistemic precisely because one outcome obtains, but we just don’t know which one.
      You’re essentially demanding a deterministic, causal explanation for a situation that is fundamentally indeterministic. If a process is truly indeterministic, then there simply is no continuous, causal story about why/how it ends up where it does. (The same kind of thing occurs in spontaneous symmetry breaking under the Higgs mechanism, but nobody considers that incomplete.) And (as you’ve noted above) this is actually a good thing, because it provides an opening for free will. See also: http://philsci-archive.pitt.edu/11893/
      The crucial point is that TI PHYSICALLY explains the measurement transition – why it has the form that it does, and its specific quantitative features (pure to mixed state with the Born Rule describing the weights). Copenhagen and other unitary-only approaches simply cannot explain the physics of the measurement transition and must take it as an ad hoc rule.
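
      To make those quantitative features concrete, here is a minimal numerical sketch (the two-level system and the amplitudes sqrt(0.7), sqrt(0.3) are hypothetical, chosen only for illustration). The pure-state density matrix carries off-diagonal coherences; the non-unitary transition zeroes them exactly, leaving a proper mixture whose weights are given by the Born Rule; the random draw at the end merely simulates the indeterministic actualization, it is not a causal model of it.

        import numpy as np

        rng = np.random.default_rng()

        # A pure two-level state (illustrative amplitudes only)
        psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
        rho_pure = np.outer(psi, psi.conj())   # has off-diagonal coherences

        # Step 1: non-unitary transition to a PROPER mixed state;
        # off-diagonals go exactly to zero, weights given by the Born Rule
        born_weights = np.abs(psi) ** 2
        rho_mixed = np.diag(born_weights)

        # Step 2: one outcome simply obtains. The pseudo-random draw only
        # simulates the indeterminism; no causal story underlies the selection.
        outcome = rng.choice([0, 1], p=born_weights)
        print(rho_mixed, outcome)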

      1. As I said, Copenhagen’s treatment of collapse is the same as TI’s in only one way: the result – the eigenvector selected – is inherently unpredictable. We all know there are also many differences, as you wrote. I am not “demanding a causal explanation for a situation that is fundamentally indeterministic”. I just want to clearly delineate the difference between the current explanation of standard QM and any (partially) causal explanation, such as, I think, the volitional hypothesis (let’s call it VH). It’s true, the two steps you defined “need not” be thought of as two steps. But I think you were right, originally, to do so. The first step is perfectly well understood – in TI, at least. It’s the second step, the indeterministic selection, which needs to be isolated, separated, for further study. VH addresses specifically that second step. It says the particle (or absorbers, or wavefunction) has some sort of “free will” and “chooses” the resulting eigenvector.

        My confusion is, you keep saying the situation is fundamentally indeterministic. That’s certainly true according to current QM (and TI, and Copenhagen). But at the same time you’re proposing VH (only as a possibility, of course). Now, I’d say VH is NOT fundamentally indeterministic, because the particle volitionally chooses an outcome. Perhaps your view is as follows. Even though the particle (or absorbers, or wavefunction, or whatever) “knows” what choice it “wants” to make, from our point of view it’s still indeterministic, because we can know nothing about the particle’s inner state of volition. Thus you say, in THE BORN RULE AND FREE WILL: WHY LIBERTARIAN AGENT-CAUSAL FREE WILL IS NOT “ANTISCIENTIFIC”:

        RK>> In the libertarian “agent causation” view of free will, free choices are attributable only to the choosing agent, as opposed to a specific cause or causes outside the agent.

        In terms of VH, this would seem to mean that the choice of the particle (or absorbers, wavefunction, whatever) can’t be known to us, since there is no cause or causes outside of it that we could observe and that would tell us anything about that choice.

        However in another blog posting you said:

        RK>> “As a realist interpretation, PTI does embrace the idea of understanding what is going on within physical objects …”

        So, I’m confused where you stand here. Question: Does volitional hypothesis imply the eigenvector selection is still fundamentally indeterministic, or not?

        Finally, I agree with your paper:

        RK>> An often-repeated claim in the philosophical literature on free will is that agent causation necessarily implies lawlessness, and is therefore “antiscientific.”
        RK>> … The main purpose of this paper is to argue that this argument fails because a human agent cannot be assumed to be modeled in this way.

  2. Thanks for your further comments/questions. I can’t agree with your characterization of the 2 approaches as the same regarding collapse. In TI, collapse could occur as a form of spontaneous symmetry breaking, as in the Higgs mechanism, OR it could occur as a result of volition; or it could turn out that spontaneous symmetry breaking is related to volition. These are all speculative issues (see below). But Copenhagen can’t even explain why we get an epistemic mixed state–it has no physical treatment of collapse. So I’m balking at comparing TI to Copenhagen even regarding collapse, because Copenhagen just has nothing at all to offer physically.
    And as I noted before, there is no requirement to treat the measurement transition as 2 different stages; once you have the epistemic mixed state, it’s logically consistent to say that collapse has taken place, even though there is no deterministic cause for it. Copenhagen can’t say any of that because it has nothing to say about the measurement transition.
    Now, regarding the question of “how collapse occurs”: I should clarify that this is really not part of the transactional interpretation. It is a purely metaphysical question that goes beyond the interpretation of the quantum formalism. It’s on a par with a question like “how does one vacuum state get chosen from the infinity of possible states in the Higgs mechanism?” Nobody says that the Higgs model is ‘incomplete’ because it does not provide an answer to that question. So in effect, it would be a double standard to say that TI is incomplete because it does not explain (i.e., specify the ‘missing cause’ of) the actualization of one outcome over the others.
    In discussing volition, I’m going beyond the TI formulation and exploring this metaphysical issue. But I am not taking a stand one way or the other, because I consider this issue speculative. It’s possible that volition is the ‘missing cause’, or it’s possible that collapse is just another form of spontaneous symmetry breaking as in the Higgs mechanism.
    Of course, it feels unsatisfying because we want to know ‘why’ something occurred in an indeterministic situation. And perhaps there is some missing cause for it – perhaps we can point to volition as the missing cause. But this is all speculative, a matter for further inquiry.
    Regarding your question “Does volitional hypothesis imply the eigenvector selection is still fundamentally indeterministic, or not?” I think it’s still indeterministic, if an act of volition is a primary (uncaused) cause, since there is no prior condition that determines that act of volition.
    Also, thanks for your comments on my paper on the Born Rule and free will.

  3. From “Theories of Consciousness” by William Seager: “we would have no idea whether the actual world contained type A or type B physical stuff”
    Actually, it’s more about some arbitrary “we” being willing (or not) to accept what descriptions of reality are telling “us”. (P)TI (as opposed to, for instance, the Copenhagen Interpretation) can communicate with other theories, such as Robert Rosen’s anticipatory systems and his view of biology as more fundamental than (conventional) physics and (conventional) computation, Ilya Prigogine’s indeterminism, and Terrence Deacon’s incomplete nature. That doesn’t imply that consciousness is in some mystical way more fundamental, but that we can observe different phenomena of the same reality in order to understand it (if we want to). One is free to deny “us” the right to understand what is going on, for as long as one wants.

      1. a link from your next blog post: https://www.wired.com/2010/02/what-is-time
        It doesn’t matter much whether it was written in 2010 or any other year. Or this text http://www.universetoday.com/130704/are-we-living-in-a-simulation where I’ve left a few comments. There’s no way to be convincing enough, to make it as obvious as, for instance, the fact that the Earth is round. I’ve had a discussion about Prigogine over the question of whether a measurement can be infinitely precise. But it can’t be. But what if it can? But it can’t…

  4. Thanks Ruth for another great paper on fundamental issues. I found it while googling ideas on the possibility of deriving the second law from quantum collapse (an idea that seems kind of obvious, as you say in the intro above). If fundamental (quantum) physics were time-reversible, there would be a conflict with thermodynamics, but von Neumann’s Process 1 (collapse) makes quantum physics irreversible, which suggests considering Process 1 as the microscopic origin of the second law.

    Re “In fact, von Neumann himself showed that his “Process 1” is irreversible and always entropy-increasing [link to VN’s book]. However, he seemed to have veered away from using that fact in deriving the Second Law, because he thought of the measurement transition as dependent on an external perceiving consciousness, and as such is not a real physical process.”

    Regardless of any specific mechanism for Process 1 (GRW, TI, or others), would it be correct to consider just the reality and (apparent) randomness of Process 1 as the microscopic origin of the second law?

    Also, is there a more recent and rigorous proof that “Process 1… is always entropy-increasing”? I believe Von Neumann’s proof in the book has been criticized.

    1. Thanks Giulio! Yes, I do think that a genuinely non-unitary process leads to entropy increase, independently of specific interpretations. This just seems to follow from the fact that a pure state has the lowest possible entropy, while a mixed state will always have more. But I’d be curious to see the criticisms of VN’s proof – is there a reference you could point to?
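
      For concreteness, that claim is easy to check numerically on a single qubit (the amplitudes below are illustrative only): the von Neumann entropy S = -Tr(rho ln rho) of any pure state is exactly zero, while the proper mixed state obtained from it always has S > 0.

        import numpy as np

        def vn_entropy(rho):
            # von Neumann entropy S = -Tr(rho ln rho), via eigenvalues
            evals = np.linalg.eigvalsh(rho)
            evals = evals[evals > 1e-12]           # convention: 0 ln 0 = 0
            return float(-np.sum(evals * np.log(evals)))

        # Pure state |psi> = sqrt(0.7)|0> + sqrt(0.3)|1>
        psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
        rho_pure = np.outer(psi, psi.conj())

        # Proper mixture after the non-unitary transition (Born weights)
        rho_mixed = np.diag(np.abs(psi) ** 2)

        print(vn_entropy(rho_pure))    # ~0.0: minimum entropy
        print(vn_entropy(rho_mixed))   # ~0.61 > 0: entropy has increased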

      1. For example Zeh, The Physical Basis of the Direction of Time, p. 130: “Would the collapse, if used in this way as part of the dynamics of wave functions, now specify an arrow of time that could perhaps even be responsible for irreversible thermodynamics?…”

        A mixed state is not yet the “real outcome” (unless one follows Everett). Something still needs to choose one of many alternative possible outcomes (which is why decoherence does not solve the quantum measurement problem, or so I think). I see Process 1 as decoherence plus choice, and only the choice part is non-unitary. So, does the transition to a single outcome always increase entropy?

      2. I think the key is that the transition from a pure to a mixed state really does introduce an objective ‘coarse graining’, and that’s all one needs for genuine entropy increase, which is based on probabilistic ‘blurring’ of the phase space trajectory. That is, the irreversible step does take place in the transition from pure to mixed state, without having to take into account the collapse to a particular outcome. So I do think that all one needs for entropy increase is that objective coarse-graining, which is not available in the deterministic unitary evolution of a pure state.
        But to clarify: there is no real mixed state obtained through ‘decoherence’ in a unitary-only account (i.e. Everett). At best you get an improper mixed state. You need a proper mixed state (available for example through TI via absorber response) to get real coarse-graining.

    1. Not sure if I mentioned this earlier or if you’ve read it, but this paper discusses these issues and why the transition from pure to mixed gives you entropy increase: https://arxiv.org/abs/1612.08734
      Basically, one needs ontological uncertainty to enter in order to get the increase in phase space volume corresponding to entropy increase. Without it, the trajectories just spread out over phase space, but there is no actual increase in volume. (The paper talks about ‘collapse’, but that’s the second step in the measurement transition, and one only really needs the first: the introduction of real uncertainty about which outcome obtains, as expressed by the mixed state).
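
      A toy classical analogue of that distinction (a sketch, under the simplifying assumptions of a Gaussian ensemble and a linear area-preserving map; none of this is from the paper itself): deterministic Liouville evolution can stretch and shear the distribution, but its phase-space volume, measured here by the determinant of the covariance matrix, stays exactly constant. Injecting genuine randomness at each step makes that volume grow.

        import numpy as np

        # Area-preserving linear map on a 2-D phase space (det = 1): a shear
        M = np.array([[1.0, 1.0],
                      [0.0, 1.0]])

        S_det = np.eye(2) * 0.1    # covariance of the deterministic ensemble
        S_rand = np.eye(2) * 0.1   # same start, but with stochastic kicks
        kick = 0.01                # variance of the random ('collapse-like') kicks

        for _ in range(20):
            S_det = M @ S_det @ M.T                        # volume-preserving
            S_rand = M @ S_rand @ M.T + kick * np.eye(2)   # genuine uncertainty

        print(np.linalg.det(S_det))    # still 0.01: sheared, but no volume growth
        print(np.linalg.det(S_rand))   # much larger: real volume increase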

      1. Thanks Ruth, this paper seems very similar to the paper linked in your OP.

        Decoherence shows that the (reduced) density matrix of a measured system plus measuring apparatus becomes diagonal after interaction with the environment, but the info that is lost in the reduced density matrix reappears (scrambled) in the environment. I am guessing that the second law can only be derived if the full density matrix (system plus apparatus plus environment) becomes diagonal. Correct?

        Is there a difference between “the system can be in one of several possible states, with classical probabilities” (mixed state) and “the system IS in one of several possible states, but we don’t know which one”? The two seem the same, but does the concept of potentia introduce a difference?

        I have been reading Albert’s book (Time and Chance). He basically says that “good” thermodynamic states (those for which entropy increases) are stable against random perturbations, while “bad” states (those for which entropy would decrease) are unstable against random perturbations. So any kind of really random perturbation (a “swerve” in Lucretius’ terms), and in particular any kind of really random quantum collapse, would enforce thermodynamics at the microscopic level.

      2. Re the first paragraph: the issue is that in unitary-only dynamics, the reduced “mixed” state is not a proper mixture for the system. In contrast, with genuine collapse, the system’s state is a proper mixture. Once you have a proper mixture for the system, which you get in TI, you don’t need to worry about the apparatus and environment.
        Re second paragraph: the two statements refer to the first and second step of the Von Neumann measurement transition. Both situations can be described by a proper mixed state, so it doesn’t really matter for the second law. Once we have a proper mixed state, we have the second law.
        Re the Albert material: yes, but it’s more precise to state the condition in terms of the proper mixed state. The difference between a proper and improper mixed state is that you get the latter by ‘tracing over’ (ignoring) the degrees of freedom of another component system (such as an ancillary particle or ‘apparatus’), where the composite of system+ancillary is in a pure state. The proper mixture obtains from the non-unitary ‘measurement’ transition without having to trace over or ignore any other system.
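
        To illustrate that ‘tracing over’ concretely (a sketch, using a Bell state as the hypothetical system+ancillary composite): the composite is pure, yet the reduced state of either subsystem comes out maximally mixed. Numerically it is indistinguishable from a proper mixture; the difference, which is the whole point, is that it was obtained by ignoring the partner system rather than through any non-unitary transition.

          import numpy as np

          # Bell state |Phi+> = (|00> + |11>)/sqrt(2): the composite is PURE
          phi = np.zeros(4)
          phi[0] = phi[3] = 1 / np.sqrt(2)
          rho_composite = np.outer(phi, phi)

          # Improper mixture: trace over the second (ancillary) subsystem
          rho4 = rho_composite.reshape(2, 2, 2, 2)    # indices (i, j; k, l)
          rho_reduced = np.einsum('ijkj->ik', rho4)   # partial trace over j

          print(rho_reduced)   # [[0.5, 0.0], [0.0, 0.5]]: looks maximally mixed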
