Alice and Bob Test the Basic Assumptions of Reality

Title: “A Strong No-Go Theorem on Wigner’s Friend Paradox.”

Author: Kok-Wei Bong et al.

Reference: https://arxiv.org/pdf/1907.05607.pdf 

There’s one thing nearly everyone in physics agrees upon: quantum theory is bizarre. Niels Bohr, one of its pioneers, famously said that “anybody who is not shocked by quantum mechanics cannot possibly have understood it.” Yet it is also undoubtedly one of the most precise theories humankind has concocted, with its intricacies continually verified in hundreds of experiments to date. It is difficult to wrap our heads around its concepts because the quantum world does not map onto everyday human experience; our daily lives reside in the classical realm, as does our language. In introductory quantum mechanics classes, the notion of a wave function often relies on shaky verbiage: we postulate a wave function that propagates in a wavelike fashion but is detected as an infinitesimally small point object, “collapsing” upon observation. The nature of this “collapse” — how exactly a wave function collapses, or whether it happens at all — constitutes what is known as the quantum measurement problem.

As a testament to its confounding qualities, there exists a long menu of interpretations of quantum mechanics. The most popular is the Copenhagen interpretation, which asserts that particles do not have definite properties until they are observed and the wavefunction undergoes a collapse. This is the quantum mechanics all undergraduate physics majors are introduced to, yet plenty more interpretations exist, some with slightly different flavorings of the Copenhagen dish — containing a base of wavefunction collapse with varying toppings. A new work, by Kok-Wei Bong et al., is now providing a lens through which to discern and test Copenhagen-like interpretations, casting the quantum measurement problem in a new light. But before we dive into this rich tapestry of observation and the basic nature of reality, let’s get a feel for what we’re dealing with. 

Above, a summary of the Copenhagen interpretation. In this interpretation, particles only gain properties upon measurement. Source: afriendman.org

The story starts as a historical one, with high-profile skeptics of quantum theory. In response to its advent, Einstein, Podolsky, and Rosen (EPR) argued that quantum mechanics must be an incomplete theory, motivating hidden variable theories that sought to retain an inherently deterministic picture of reality, consistent with relativity, in which quantum probabilities could be explained away by some unseen, underlying mechanism. Bell later formulated a theorem addressing the EPR paper, showing that the correlations predicted by quantum mechanics cannot be entirely reproduced by such hidden variables.

In seeking to show that quantum mechanics is an incomplete theory, EPR focused their work on what they found to be the most objectionable phenomenon: entanglement. Since entanglement is often misrepresented, let’s provide a brief overview here. When a particle decays into two daughter particles, we can perform subsequent measurements on each of those particles. When the spin angular momentum of one particle is measured, the spin angular momentum of the other particle is found to be exactly the value needed to add up to the total spin angular momentum of the original (pre-decay) particle. In this way, knowledge about one particle gives us knowledge about the other particle; the systems are entangled. A paradox seems to ensue, since it appears that some information must have been transmitted between the two particles instantaneously. Yet no usable information actually travels faster than light: the person measuring the second particle cannot know which spin was obtained for the first until that result is transmitted by ordinary means.
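As a concrete illustration (this explicit state is not written out in the article, but is standard), a spin-0 parent decaying into two spin-½ particles leaves the pair in the singlet state,

\vert \psi^- \rangle = \frac{1}{\sqrt{2}} \left( \vert\uparrow\rangle_1 \vert\downarrow\rangle_2 - \vert\downarrow\rangle_1 \vert\uparrow\rangle_2 \right).

A measurement finding particle 1 spin-up along a given axis then guarantees that particle 2 will be found spin-down along that same axis, and vice versa, so that the two spins always sum to zero.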

We can illustrate this by considering the classical analogue. Think of a ball splitting in two: each piece travels off in some direction, and the sum of the individual spin angular momenta equals the total spin angular momentum the ball had before it split. However, I am free to catch one of the pieces, or to perform some other state-altering measurement on it, and this does not affect the value obtained for the other piece. Once the pieces are free of each other, they can acquire angular momentum from outside influences, breaking the initial collective “total” of spin angular momentum. I am also free to track these results from a distance, and since we can physically watch the pieces come loose and fly off in opposite directions (a form of measurement), we have all the information we need about the system. In the quantum version, we are instead forced to confront a key feature of quantum measurement: measurement itself alters the system being measured. The classical and quantum pictures seem to contradict one another.

A visualization of quantum entanglement between two fermions (spin-1/2 particles): if one particle is measured to have spin +1/2, the other is simultaneously found to have spin -1/2.

Bell’s Theorem made this contradiction concrete and testable by considering two entangled qubits and predicting their correlations. He posited that, if a pair of spin-½ particles in a collective singlet state travel in opposite directions, their spins can be independently measured at distant locations along freely chosen axes. The probability of obtaining each pair of outcomes then depends on the relative angle between the two measurement axes. Over many iterations of this experiment, correlations can be constructed by averaging the products of the paired measurement outcomes. Comparing these quantum correlations to those allowed by a hidden variable theory, whose upper limit follows from assuming an underlying deterministic reality, yields inequalities that must hold if such hidden variable theories are viable. Experiments designed to test the assumptions of quantum mechanics have thus far all resulted in violations of Bell-type inequalities, leaving quantum theory on firm footing.
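The best-known example (not quoted explicitly above, but standard) is the CHSH form of Bell’s inequality. With two possible measurement axes a, a' on one side, b, b' on the other, and E denoting the averaged product of the ±1 outcomes, any local hidden variable theory requires

S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad \vert S \vert \leq 2,

whereas quantum mechanics applied to the singlet state, for which E(a,b) = -\cos\theta_{ab} with \theta_{ab} the angle between the two axes, can reach \vert S \vert = 2\sqrt{2} for suitably chosen axes.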

Now, the Kok-Wei Bong et al. research builds upon these foundations. By considering Wigner’s friend paradox, the team formulated a new no-go theorem (a type of theorem that asserts a particular situation to be physically impossible) that reconsiders our ideas of what reality means and which axioms we can use to model it. Bell’s Theorem, although seeking to test our baseline assumptions about the quantum world, still necessarily rests upon a few axioms. This new theorem shows that at least one of the following assumptions (collectively deemed the Local Friendliness assumptions), all of which had previously seemed entirely reasonable, must be incorrect if quantum theory is correct:

  1. Absoluteness of observed events: Every event exists absolutely, not relatively. While the event’s details may be observer-dependent, the existence of the event is not.
  2. Locality: Local settings cannot influence distant outcomes (no superluminal communication).
  3. No-superdeterminism: We can freely choose the settings in our experiment, and any variables relevant to the outcomes are not already correlated with those settings before the choice is made.

The work relies on the presumed ability of a superobserver, a special kind of observer who can manipulate the quantum states of an entire laboratory controlled by a friend, who is another observer. If the “friend” is cast as an artificial intelligence algorithm running inside a large quantum computer, with the programmer as the superobserver, this scenario becomes slightly less fantastical. Essentially, this thought experiment digs into our ideas about the scale of applicability of quantum mechanics: what an observer is, and whether quantum theory applies equally to all observers.

To illustrate this more precisely, and consider where we might hit some snags in this analysis, let’s look at a quantum superposition state,

\vert \psi \rangle = \frac{1}{\sqrt{2}} (\vert\uparrow \rangle + \vert\downarrow \rangle).

If we were to take everything we learned in university quantum mechanics courses at face value, we would recognize that, upon measurement, this state can be found in either the \vert\uparrow \rangle or \vert\downarrow \rangle state with equal probability. However, let us now turn our attention toward the Wigner’s friend scenario: imagine that Wigner has a friend inside a laboratory performing an experiment, while Wigner himself is outside the laboratory, positioned ideally as a superobserver (he can freely perform any and all quantum experiments on the laboratory from his vantage point). Going back to the superposition state above, it remains true that Wigner’s friend observes either the up or the down state with 50% probability upon measurement. However, if quantum mechanics applies to the laboratory as a whole, then the friend-plus-particle system must evolve unitarily, so Wigner, still positioned outside the laboratory, continues to describe the laboratory as a superposition of states with ill-defined measurement outcomes. Hence, a paradox, one arising from the fundamental assumption that quantum mechanics applies at all scales and to all observers. This is the heart of the quantum measurement problem.
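Schematically (this explicit expression is an illustration, not written out in the article), Wigner’s unitary description assigns the combined laboratory the entangled state

\vert \Psi \rangle = \frac{1}{\sqrt{2}} \left( \vert\uparrow \rangle \vert \text{friend saw } \uparrow \rangle + \vert\downarrow \rangle \vert \text{friend saw } \downarrow \rangle \right),

in which no definite outcome has occurred, even though the friend inside the laboratory is certain that she recorded one.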

An illustration of the setup of the extended Wigner’s friend scenario, now including laboratories controlled by Charlie and Debbie with superobservers Alice and Bob. Charlie and Debbie make measurements on an entangled state, while Alice and Bob make measurements on the laboratories of Charlie and Debbie, respectively. Source: Kok-Wei Bong et al.

Now, let’s extend this scenario, taking our usual friends Alice and Bob as superobservers of two separate laboratories. Charlie, in the laboratory observed by Alice, holds a spin-½ system with an associated Hilbert space, while Debbie, in the laboratory observed by Bob, holds her own spin-½ system with an associated Hilbert space; the two systems are prepared in an entangled state. Within their separate laboratories, Charlie and Debbie measure the spins of their particles along the z-axis and record their results. Then Alice and Bob, still situated outside the laboratories of their respective friends, can each make one of three different types of measurements on these laboratories, chosen at random. First, Alice could look inside Charlie’s laboratory, view his result, and adopt it as her own. Second, Alice could restore the laboratory to some previous state. Third, Alice could erase Charlie’s record of results and instead perform her own measurement directly on the particle. Bob can perform the corresponding measurements on Debbie’s laboratory.

With this setup, the new theorem identifies a set of inequalities obeyed by Local Friendliness correlations; these extend the inequalities given by Bell’s Theorem and can be violated independently of them. The authors then concocted a proof-of-principle experiment, which relies on explicitly treating the friends Charlie and Debbie as qubits rather than people. Using the three measurement settings and choosing systems of polarization-encoded photons, the photon paths play the role of the “friends” (the photon either takes Charlie’s path, Debbie’s path, or some superposition of the two). After running this experiment many thousands of times, the authors found that their Local Friendliness inequalities were indeed violated, implying that at least one of the three initial assumptions cannot be correct.
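To give a flavor of how such correlation-based tests work in practice, here is a minimal sketch (my own illustration, using the simpler CHSH inequality rather than the paper’s actual Local Friendliness inequalities) of estimating correlators from repeated measurements on a simulated singlet state and checking them against the classical bound:

```python
import numpy as np

# Illustration only: estimating correlators for the CHSH inequality from
# repeated (simulated) measurements on a two-qubit singlet state. The actual
# Local Friendliness inequalities tested in the paper involve more settings
# and are not reproduced here.

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
singlet = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

rng = np.random.default_rng(0)

def spin_observable(theta):
    """Spin along an axis at angle theta in the x-z plane (outcomes +1 or -1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

def correlator(theta_a, theta_b, n_shots=100_000):
    """Estimate E(a, b), the average product of outcomes, by sampling."""
    AB = np.kron(spin_observable(theta_a), spin_observable(theta_b))
    outcomes, vecs = np.linalg.eigh(AB)      # eigenvalues are +1 / -1
    probs = np.abs(vecs.T @ singlet) ** 2    # Born-rule probabilities
    samples = rng.choice(outcomes, size=n_shots, p=probs / probs.sum())
    return samples.mean()

# CHSH combination: any local hidden-variable theory requires |S| <= 2.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlator(a, b) - correlator(a, b2) + correlator(a2, b) + correlator(a2, b2)
print(f"S = {S:.3f} (local hidden variables: |S| <= 2; quantum singlet: about -2.83)")
```

Running this reproduces the quantum prediction of magnitude 2√2, comfortably outside the range allowed by local hidden variable theories; the experiment in the paper builds analogous statistics for its Local Friendliness inequalities.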

The primary difference between this work and Bell’s Theorem is that it contains no prior assumptions about the underlying determinism of reality, including any hidden variables that could be used to predetermine the outcomes of events. The theorem is therefore built upon assumptions strictly weaker than those behind Bell’s inequalities, meaning that its violation leads to strictly stronger conclusions. This paves a promising pathway for future questions and experiments about the nature of observation and measurement, narrowing down the large menu of interpretations of quantum mechanics. Which of the assumptions (absoluteness of observed events, locality, or no-superdeterminism) is the one that fails remains an open question. While the first two are widely used throughout physics, the assumption of no-superdeterminism digs down into the question of what measurement really means and what counts as an observer. These points will doubtless be in contention as physicists continue to explore the oddities that quantum theory has to offer, but this new theorem offers promising results on the path to understanding the quirky quantum world.

Further Reading:

  1. More details on Bell’s Theorem: https://arxiv.org/pdf/quant-ph/0402001.pdf 
  2. Frank Wilczek’s column on entanglement: https://www.quantamagazine.org/entanglement-made-simple-20160428/ 
  3. Philosophical issues in quantum theory: https://plato.stanford.edu/entries/qt-issues/ 


Amara McCune

Amara McCune is a PhD student in theoretical physics at UC Santa Barbara, focusing on phenomenology. She has bachelor’s degrees in physics and mathematics from Stanford University and currently spends the majority of her time in the theory groups of UC Berkeley and Lawrence Berkeley National Lab. Her interests include BSM model building, the interface of cosmology and particle physics, and flavor physics.
