An Anomalous Anomaly: The New Fermilab Muon g-2 Results

By Andre Sterenberg Frankenthal and Amara McCune

This is the final post of a three-part series on the Muon g-2 experiment. Check out posts 1 and 2 on the theoretical and experimental aspects of g-2 physics. 

The last couple of weeks have been exciting in the world of precision physics and stress tests of the Standard Model (SM). The Muon g-2 Collaboration at Fermilab released their very first results with a measurement of the anomalous magnetic moment of the muon to an accuracy of 462 parts per billion (ppb), which largely agrees with previous experimental results and amplifies the tension with the accepted theoretical prediction to a 4.2\sigma discrepancy. These first results feature less than 10% of the total data planned to be collected, so even more precise measurements are foreseen in the next few years.

But on the very same day that Muon g-2 announced their results and published their main paper in PRL, with supporting papers in Phys. Rev. A and Phys. Rev. D, Nature published a new lattice QCD calculation which seems to contradict previous theoretical predictions of the muon g-2 and moves the theory value much closer to the experimental one. There will certainly be hot debate in the coming months and years regarding the validity of this new calculation, but that does not stop it from muddying the waters in the g-2 sphere. We cover both the new experimental and theoretical results in more detail below.

Experimental announcement

The main paper in Physical Review Letters summarizes the experimental method and reports the measured numbers and associated uncertainties. The new Fermilab measurement of the muon g-2 is 3.3 standard deviations (\sigma) away from the predicted SM value. This means that, assuming all systematic effects are accounted for, the probability that the null hypothesis (i.e. that the true muon g-2 number is actually the one predicted by the SM) could result in such a discrepant measurement is less than 1 in 1,000. Combining this latest measurement with the previous iteration of the experiment at Brookhaven in the early 2000s, the discrepancy grows to 4.2\sigma, or roughly a 1 in 40,000 probability that it is just a statistical fluke. This is not yet the 5\sigma threshold that serves as the gold standard in particle physics for claiming a discovery, but it is a tantalizing result. The figure below from the paper illustrates the tension between experiment and theory.
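For readers who want to see where these probabilities come from, here is a minimal sketch (our own, assuming a simple Gaussian null hypothesis, not the Collaboration's statistical machinery) that converts a significance in standard deviations into a two-sided p-value with scipy:

```python
# Convert a significance in standard deviations into a p-value,
# assuming a Gaussian null hypothesis (illustration only).
from scipy.stats import norm

for sigma in (3.3, 4.2, 5.0):
    # two-sided p-value: probability of a fluctuation at least this large
    # in either direction if the true value were the SM prediction
    p = 2 * norm.sf(sigma)
    print(f"{sigma} sigma  ->  p = {p:.2e}  (about 1 in {1/p:,.0f})")
```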

Comparison between experimental measurements of the anomalous magnetic moment of the muon (right, top to bottom: Brookhaven, Fermilab, combination) and the theoretical prediction by the Standard Model (left). The discrepancy has grown to 4.2 sigma. Source: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.126.141801

This publication is just the first round of results planned by the Collaboration, and corresponds to less than 10% of the data that will be collected throughout the total runtime of the experiment. With this limited dataset, the statistical uncertainty (434 ppb) dominates over the systematic uncertainty (157 ppb), but that is expected to change as more data is acquired and analyzed. When the statistical uncertainty eventually dips below the systematic one, it will be critically important to control the systematics as much as possible to attain the ultimate target of a 140 ppb total uncertainty. The table below shows the actual measurements performed by the Collaboration.
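As a quick sanity check of how the quoted uncertainties combine (our own arithmetic, not the Collaboration's), adding the statistical and systematic components in quadrature reproduces the 462 ppb total quoted above:

```python
# Combine statistical and systematic uncertainties in quadrature (illustration only).
import math

stat_ppb = 434  # statistical uncertainty of the Run-1 measurement
syst_ppb = 157  # systematic uncertainty
total_ppb = math.hypot(stat_ppb, syst_ppb)
print(f"total uncertainty ~ {total_ppb:.0f} ppb")  # ~462 ppb, as quoted
```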

Table of measurements for each sub-run of the Run-1 period, from left to right: precession frequency, equivalent precession frequency of a proton (i.e. measures the magnetic field), and the ratio between the two quantities. Source: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.126.141801

The largest sources of systematic uncertainty stem from the electrostatic quadrupoles (ESQ) in the experiment. While the uniform magnetic field bends the muons into their circular orbit around the storage ring, the beam also needs to be kept confined to the horizontal plane. Four sets of ESQ uniformly spaced in azimuth provide this vertical focusing of the muon beam. However, after data-taking, two resistors in the ESQ system were found to be damaged. This means that the time profile of ESQ activation was not perfectly matched to the time profile of the muon beam. In particular, during the first 100 microseconds after each muon bunch injection, the muons did not receive the correct vertical focusing, which affected the expected phase of the “wiggle plot” measurement. All told, this issue added 75 ppb of systematic uncertainty to the budget. Nevertheless, because statistical uncertainties dominate in this first stage of the experiment, the unexpected ESQ damage was not a showstopper. The Collaboration expects this problem to be fully mitigated in subsequent data-taking runs.

To guard against any possible human bias, an interesting blinding policy was implemented: the master clock of the entire experiment was shifted by an unknown value, chosen by two people outside the Collaboration and kept in a vault for the duration of the data-taking and processing. Without knowing this shift, it is impossible to deduce the correct value of g-2. At the same time, the scheme still allows experimenters to carry the analysis through to the end, and only then remove the clock shift to reveal the unblinded measurement. In a way, this is like a key unlocking the final result. (This was not the only protection against bias, just the most salient and curious one.)

Lattice QCD + BMW results

On the same day that Fermilab announced the Muon g-2 experimental results, a group known as the BMW (Budapest-Marseille-Wuppertal) Collaboration published its own results on the theoretical value of muon g-2 using new techniques in lattice QCD. The group’s results can be found in Nature (the journal; jury’s still out on whether they’re in actual Nature), or at the preprint here. In short, their calculation lands much closer to the experimental value than previous predictions, bringing their methods into tension with the findings of earlier lattice QCD groups. What’s different? Let’s dive a little deeper into the details of lattice QCD to find out.

The BMW group published this plot, showing their results (top line) compared with the results of other groups using lattice QCD (remaining green squares) and the R-ratio (the usual data-driven techniques, red circles). The purple bar gives the range of values for the anomalous magnetic moment that would signal no new physics – we can see that the BMW results are closer to this region than the R-ratio results, which have similarly small error bars. Source: BMW Collaboration

As outlined in the first post of this series, the main tool of high-energy particle physics is perturbation theory, which we can think of graphically via Feynman diagrams, starting with tree-level diagrams and going to higher orders via loops. Equivalently, this corresponds to calculations in which terms are proportional to powers of some coupling parameter that describes the strength of the force in question. Each higher-order term comes with one more factor of the relevant coupling, and our errors in these calculations are generally attributable either to uncertainties in the coupling measurements themselves or to the neglect of higher-order terms.

These coupling parameters are secretly functions of the energy scale being studied, and so at each energy scale we need to recalculate these couplings. This makes sense intuitively because forces have different strengths at different energy scales — e.g. gravity is much weaker on a particle scale than a planetary one. In quantum electrodynamics (QED), for example, the coupling is fairly small at the energy scale of the electron. This means that we really don’t need to go to higher orders in perturbation theory, since these terms quickly become irrelevant with higher powers of this coupling. This is the beauty of perturbation theory: typically, we need only consider the first few orders, vastly simplifying the process.

However, QCD does not share this convenience, as it comes with a coupling parameter that decreases with increasing energy scale. At high enough energies, we can indeed employ the wonders of perturbation theory to make calculations in QCD (this high-energy behavior is known as asymptotic freedom). But at lower energies, at length scales around that of a proton, the coupling constant is greater than one, which means that the first-order term in the perturbative expansion is the least relevant term, with higher and higher orders making greater contributions. In fact, this signals the breakdown of the perturbative technique. Because the mass of the muon is in this same energy regime, we cannot use perturbation theory to calculate the hadronic contributions to g-2. We then turn to simulations, and since we cannot simulate continuous spacetime in its entirety (it consists of infinitely many points), we must instead break it up into a discretized set of points dubbed the lattice.

A visualization of the lattice used in simulations. Particles like quarks are placed on the points of the lattice, with force-carrying gauge bosons (in this case, gluons) forming the links between them. Source: Lawrence Livermore National Laboratory

This naturally introduces new sources of uncertainty into our calculations. To employ lattice QCD, we first need to consider which lattice spacing to use — the distance between neighboring spacetime points — where a smaller lattice spacing is preferable in order to come closer to a description of continuous spacetime. Introducing this lattice spacing comes with its own systematic uncertainties. Further, this discretization can be computationally challenging, as larger numbers of points quickly eat up computing power. Standard numerical techniques become too computationally expensive to employ, and so statistical techniques such as Monte Carlo integration are used instead, which again introduces sources of error.
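To give a flavor of why sampling errors appear at all, here is a toy Monte Carlo integration (entirely schematic and our own; real lattice QCD samples gauge-field configurations rather than a simple integrand):

```python
# Toy Monte Carlo integration: estimate the average of exp(-|x|^2) over a
# d-dimensional unit hypercube by random sampling. This only illustrates why
# statistical (sampling) errors accompany lattice calculations; actual lattice
# QCD samples quantum field configurations, not a simple function like this.
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 8, 100_000            # dimension and number of random points
x = rng.random((n_samples, d))       # uniform points in [0,1]^d
f = np.exp(-np.sum(x**2, axis=1))    # integrand evaluated at each point

estimate = f.mean()
stat_err = f.std(ddof=1) / np.sqrt(n_samples)   # Monte Carlo statistical error
print(f"integral estimate = {estimate:.5f} +/- {stat_err:.5f}")
```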

Difficulties are also introduced by the fact that a discretized space does not respect the same symmetries that a continuous space does, and some symmetries simply cannot be kept simultaneously with others. This leads to a challenge in which groups using lattice QCD must pick which symmetries to preserve as well as consider the implications of ignoring the ones they choose not to simulate. All of this adds up to mean that lattice QCD calculations of g-2 have historically been accompanied by very large error bars — that is, until the much smaller error bars from the BMW group’s recent findings. 

These results are not without controversy. The group employs a “staggered fermion” approach to discretizing the lattice, in which a single fermion field is placed on each lattice point, with the additional Dirac structure spread over neighboring points. Upon taking the “continuum limit,” or the limit in which the spacing between points on the lattice goes to zero (hence simulating a continuous space), this results in a theory with four copies of each fermion, rather than the sixteen that would arise from a naive discretization (the notorious fermion-doubling problem). There are a few advantages to this method, both in terms of reducing computational time and having smaller discretization errors. However, questions remain about the validity of this approach, and the lattice community is asking whether these results might be computing observables in some other quantum field theory, rather than the SM quantum field theory.

The future of g-2

Overall, while a 4.2\sigma discrepancy is certainly more alluring than the previous 3.7\sigma, the conflict between the experimental results and the Standard Model is still somewhat murky. It is crucial to note that the new 4.2\sigma benchmark does not include the BMW group’s calculations, and further incorporation of these values could shift the benchmark around. A consensus from the lattice community on the acceptability of the BMW group’s results is needed, as well as values from other lattice groups utilizing similar methods (which should be steadily rolling out as the months go on). It seems that the future of muon g-2 now rests in the hands of lattice QCD.

At the same time, more and more precise measurements should be coming out of the Muon g-2 Collaboration in the next few years, which will hopefully guide theorists in their quest to accurately predict the anomalous magnetic moment of the muon and help us reach a verdict on this tantalizing evidence of new boundaries in our understanding of elementary particle physics.

Further Reading

BMW paper: https://arxiv.org/pdf/2002.12347.pdf

Muon g-2 Collaboration papers:

  1. Main result (PRL): Phys. Rev. Lett. 126, 141801 (2021)
  2. Precession frequency measurement (Phys. Rev. D): Phys. Rev. D 103, 072002 (2021)
  3. Magnetic field measurement (Phys. Rev. A): Phys. Rev. A 103, 042208 (2021)
  4. Beam dynamics (to be published in Phys. Rev. Accel. Beams): https://arxiv.org/abs/2104.03240

(Almost) Everything You’ve Ever Wanted to Know About Muon g-2 – Experiment edition

This is post #2 of a three-part series on the Muon g-2 experiment. Check out Amara McCune’s post on the theory of g-2 physics for an excellent introduction to the topic.

As we all eagerly await the latest announcement from the Muon g-2 Collaboration on April 7th, it is a good time to think about the experimental aspects of the measurement, to appreciate just how difficult it is, and to recognize the persistent, collaborative effort that has gone into obtaining one of the most precise results in particle physics to date.

The main “output” of the experiment (after all data-taking runs are complete) is a single number: the g-factor of the muon, measured to a targeted, unprecedented accuracy of 140 parts per billion (ppb) at Fermilab’s Muon Campus, a four-fold improvement over the previous iteration of the experiment that took place at Brookhaven National Lab in the early 2000s. But to arrive at this seemingly simple result, a painstaking measurement effort is required. As a reminder (see Amara’s post for more details), what is actually measured is the anomalous magnetic moment of the muon, a_\mu, which quantifies the deviation of the muon’s g-factor from 2:

a_\mu = \frac{g-2}{2}.

Experimental method

The core of the experimental approach is the behavior of muons when subjected to a uniform magnetic field. If muons can be placed in a uniform circular trajectory around a storage ring with a uniform magnetic field, then they will travel around this ring with a characteristic frequency, referred to as the cyclotron frequency (symbol \omega_c). At the same time, if the muons are polarized, meaning that their spin vector points along a particular direction when first injected into the storage ring, then this spin vector will also rotate when subjected to a uniform magnetic field. The frequency of the spin vector rotation is called the spin frequency (symbol \omega_s).

If the cyclotron and spin frequencies of the muon were exactly the same, then it would have an anomalous magnetic moment a_\mu of zero. In other words, the anomalous magnetic moment measures the discrepancy between the behavior of the muon itself and that of its spin vector under a magnetic field. As Amara discussed at length in the previous post in this series, such a discrepancy arises because of specific quantum-mechanical contributions to the muon’s magnetic moment from several higher-order interactions with other particles. The difference between the two frequencies is referred to as the precession of the muon’s spin motion relative to its cyclotron motion.

If the anomalous magnetic moment is not zero, then one way to measure it is to directly record the cyclotron and spin frequencies and subtract them. In a way, this is what is done in the experiment: the anomalous precession frequency can be measured as

\omega_a = \omega_s - \omega_c = -a_\mu \frac{eB}{m_\mu}

where m_\mu is the muon mass, e is the muon charge, and B is the (ideally) uniform magnetic field. Once the precession frequency and the exact magnetic field are measured, one can immediately invert this equation to obtain a_\mu.
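As a rough numerical illustration (our own, using textbook constants rather than the measured values), plugging the nominal 1.45 T field and the known muon mass into this relation shows that the anomalous precession happens at a few hundred kHz, which is indeed the scale of the frequency actually fitted in the experiment:

```python
# Rough illustration: the anomalous precession frequency implied by
# |omega_a| = a_mu * e * B / m_mu, using nominal values (not the measured ones).
import math

e    = 1.602176634e-19   # elementary charge [C]
B    = 1.45              # nominal storage-ring field [T]
m_mu = 1.883531627e-28   # muon mass [kg]
a_mu = 1.166e-3          # approximate anomalous magnetic moment

omega_a = a_mu * e * B / m_mu      # angular frequency [rad/s]
f_a = omega_a / (2 * math.pi)      # ordinary frequency [Hz]
print(f"f_a ~ {f_a/1e3:.0f} kHz")  # ~229 kHz, i.e. ~0.229 MHz
```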

In practice, the best way to measure \omega_a is to rewrite the equation above into more experimentally amenable quantities:

a_\mu = \left( \frac{g_e}{2} \right) \left( \frac{\mu_p}{\mu_e} \right) \left( \frac{m_\mu}{m_e} \right) \left( \frac{\omega_a}{\langle \omega_p \rangle} \right)

where \mu_p/\mu_e is the proton-to-electron magnetic moment ratio, g_e is the electron g-factor, and \langle \omega_p \rangle is the free proton’s Larmor frequency averaged over the muon beam’s transverse spatial distribution. The Larmor frequency measures the precession of the proton’s magnetic moment about the magnetic field and is directly proportional to B. Written in this form, a_\mu has the considerable advantage that all of the other quantities have been independently and very accurately measured: to 0.00028 ppb (g_e), 3 ppb (\mu_p/\mu_e), and 22 ppb (m_\mu/m_e). Recalling that the final desired accuracy for the left-hand side of the equation above is 140 ppb leads to a budget of roughly 70 ppb for each of the \omega_a and \omega_p measurements. This is perhaps a good point to stop and appreciate just how small these uncertainty budgets are: 1 ppb is a one-part-in-10^9 level of accuracy!
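To get a feel for how these pieces combine, here is a small sketch (our own arithmetic, not from the experiment's papers) that adds the quoted contributions in quadrature; the externally measured constants are clearly negligible next to the two 70 ppb frequency budgets:

```python
# Combine the uncertainty contributions to a_mu in quadrature
# (rough illustration using the ppb values quoted in the text).
import math

contributions_ppb = {
    "g_e":        0.00028,  # electron g-factor
    "mu_p/mu_e":  3.0,      # proton-to-electron magnetic moment ratio
    "m_mu/m_e":   22.0,     # muon-to-electron mass ratio
    "omega_a":    70.0,     # target budget for the precession frequency measurement
    "<omega_p>":  70.0,     # target budget for the field measurement
}
total = math.sqrt(sum(v**2 for v in contributions_ppb.values()))
print(f"combined contribution ~ {total:.0f} ppb")
# ~101 ppb; the rest of the 140 ppb goal is taken up by statistical uncertainty.
```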

We have now distilled the measurement into two numbers: \omega_a, the anomalous precession frequency, and \omega_p, the free proton Larmor frequency which is directly proportional to the magnetic field (the quantity we’re actually interested in). Their uncertainty budgets are roughly 70 ppb for each, so let’s take a look at how they are able to measure these two numbers to such an accuracy. First, we’ll introduce the experimental setup, and then describe the two measurements.

Experimental setup

The polarized muons in the experiment are produced by a beam of pions, which are themselves produced when a beam of 8 GeV protons from Fermilab’s accelerator complex strikes a nickel-iron target. The pions are selected to have a momentum close to that required for the experiment: 3.11 GeV/c. Each pion then decays to a muon and a muon-neutrino (more than 99% of the time), and a very particular momentum is selected for the muons: 3.094 GeV/c. Only muons with this specific momentum (or very close to it) are allowed to enter the storage ring. This momentum has a special significance in the experimental design and is colloquially referred to as the “magic momentum” (and muons, upon entering the storage ring, travel along a circular trajectory with a “magic radius” corresponding to the magic momentum). The reason for this special momentum is, very simplistically, the fortuitous cancelation of some electric-field effects on the spin precession that would otherwise need to be accounted for and that would therefore reduce the accuracy of the measurement. Here’s a sketch of the injection pipeline:

Sketch of muon production and injection into the storage ring starting from a beam of protons at Fermilab. Source: David Sweigart’s thesis.

Muons with the magic momentum are injected into the muon storage ring, pictured below. The storage ring (the same one from Brookhaven, which was moved to Fermilab in 2013) is responsible for keeping muons circulating in orbit until they decay, with a vertical magnetic field of 1.45 T that is uniform to within 25 ppm (quite a feat, made possible by a painstaking effort called magnet “shimming”). The muon lifetime is 2.2 microseconds in its own frame of reference, but in the laboratory frame and with a 3.094 GeV/c momentum this increases to about 64 microseconds. The storage ring has a roughly 45 m circumference, so muons can travel up to hundreds of times around the ring before decaying.
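As a quick cross-check of those numbers (our own arithmetic with textbook values for the muon mass and lifetime), the Lorentz boost at the magic momentum stretches the roughly 2.2 microsecond rest-frame lifetime to about 64 microseconds:

```python
# Time dilation of the muon lifetime at the magic momentum (illustration only).
import math

p_GeV    = 3.094       # magic momentum [GeV/c]
m_GeV    = 0.1056584   # muon mass [GeV/c^2]
tau_rest = 2.197e-6    # muon lifetime at rest [s]

E_GeV   = math.sqrt(p_GeV**2 + m_GeV**2)  # total energy [GeV]
gamma   = E_GeV / m_GeV                   # Lorentz boost factor, ~29.3
tau_lab = gamma * tau_rest                # dilated lifetime in the lab frame
print(f"gamma ~ {gamma:.1f}, lab-frame lifetime ~ {tau_lab*1e6:.0f} microseconds")
```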

Photo of the Muon g-2 storage ring superconducting coil before assembly at Fermilab (2014). Source: personal photo. Link to the full assembled storage ring.

When they do eventually decay, the most likely decay products are positrons (or electrons, depending on the muon charge), accompanied by an electron (anti)neutrino and a muon (anti)neutrino. The latter two are neutral particles and essentially invisible, but the positrons are charged and therefore bend under the magnetic field in the ring. The magic momentum and magic radius only apply to muons – positrons will bend inwards and eventually hit one of the 24 calorimeters placed strategically around the ring. A sketch of the situation is shown below.

Muon decay and positron trajectories into the calorimeters of the Muon g-2 experiment at Fermilab. Different positron energies correspond to different bends under the magnetic field and different impact positions on the calorimeters. Source: Aaron Fienberg’s thesis.

Calorimeters are detectors that can precisely measure the total energy of a particle. Furthermore, with transverse segmentation, they can also measure the incident position of the positrons. The calorimeters used in the experiment are made of lead fluoride (PbF2) crystals, which are Cherenkov radiators and therefore have an extremely fast response (Cherenkov radiation is emitted promptly when an incident particle travels faster than light in a medium – not in vacuum, though, since that’s not possible!). Very precise timing information about decay positrons is essential to infer the position of the decaying muon along the storage ring, and the experiment manages to achieve a remarkable sub-100 ps precision on the positron arrival time (which is then compared to the muon injection time for an absolute time calibration).

\omega_a measurement

The key aspect of the \omega_a measurement is that the direction and energy distributions of the decay positrons are correlated with the direction of the spin of the decaying muons. So, by measuring the energy and arrival time of each positron with one of the 24 calorimeters, one can deduce (to some degree of confidence) the spin direction of the parent muon.

But recall that the spin direction itself is not constant in time — it oscillates with \omega_s frequency, while the muons themselves travel around the ring with \omega_c frequency. By measuring the energy of the most energetic positrons (the degree of correlation between muon spin and positron energy is highest for more energetic positrons), one should find an oscillation that is roughly proportional to the spin oscillation, “corrected” by the fact that muons themselves are moving around the ring. Since the position of each calorimeter is known, accurately measuring the arrival time of the positron relative to the injection of the muon beam into the storage ring, combined with its energy information, gives an idea of how far along in its cyclotron motion the muon was when it decayed. These are the crucial bits of information needed to measure the difference in the two frequencies, \omega_s and \omega_c, which is proportional to the anomalous magnetic moment of the muon.

All things considered, with the 24 calorimeters in the experiment one can count the number of positrons with some minimum energy (the threshold used is roughly 1.7 GeV) arriving as a function of time (remember, the most energetic positrons are more relevant since their energy and position have the strongest correlation to the muon spin). Plotting a histogram of these positrons, one arrives at the famous “wiggle plot”, shown below.

An example of a “wiggle plot” showing the number of positrons recorded by the calorimeters as a function of time. The oscillations are due to the precession of the spin motion compared to the cyclotron motion of the muon. Note: these are “blinded” results, and do not correspond to the real measurement yet (see text). Source: David Sweigart’s thesis.

This histogram of the number of positrons versus time is plotted modulo some time constant, otherwise it would be too long to show in a single page. But the characteristic features are very visible: 1) the overall number of positrons decreases as muons decay away and there are fewer of them around; and 2) the oscillation in the number of energetic positrons is due to the precession of the muon spin relative to its cyclotron motion — whenever muon spin and muon momentum are aligned, we see a greater number of energetic positrons, and vice-versa when the two vectors are anti-aligned. In this way, the oscillation visible in the plot is directly proportional to the precession frequency, i.e. how much ahead the spin vector oscillates compared to the momentum vector itself.

In its simplest formulation, this wiggle plot can be fitted to a basic five-parameter model:

N(t) = N_0 \; e^{-t/\tau} \left[ 1 + A_0 \cos(\omega_a t + \phi_0) \right]

where the five parameters are: N_0, the initial number of positrons; \tau, the time-dilated muon lifetime; A_0, the amplitude of the oscillation, set by the decay asymmetry that correlates the positron energy and direction with the muon spin; \omega_a, the sought-after anomalous precession frequency; and \phi_0, the phase of the oscillation.
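For the curious, here is a minimal sketch of what fitting this five-parameter model to a toy dataset might look like (entirely our own illustration with made-up numbers, using scipy, and not the Collaboration's analysis code):

```python
# Minimal sketch: fit the five-parameter wiggle model to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def wiggle(t, N0, tau, A0, omega_a, phi0):
    return N0 * np.exp(-t / tau) * (1 + A0 * np.cos(omega_a * t + phi0))

# Toy data with roughly realistic parameter values (time in microseconds).
rng = np.random.default_rng(1)
t = np.linspace(30, 600, 4000)
true = dict(N0=1e4, tau=64.4, A0=0.37, omega_a=2*np.pi*0.2291, phi0=2.0)
counts = rng.poisson(wiggle(t, **true))   # Poisson-fluctuated positron counts

popt, pcov = curve_fit(wiggle, t, counts, p0=[9e3, 60, 0.3, 1.44, 1.5])
print("fitted omega_a =", popt[3], "rad/us")  # compare to 2*pi*0.2291 ~ 1.4395
```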

The five-parameter model captures the essence of the measurement, but in practice, to arrive at the highest possible accuracy many additional effects need to be considered. Just to highlight a few: Muons do not all have exactly the right magic momentum, leading to orbital deviations from the magic radius and a different decay positron trajectory to the calorimeter. And because muons are injected in bunches into the storage ring and not one by one, sometimes decay positrons from more than one muon arrive simultaneously at a calorimeter — such pileup positrons need to be carefully separated and accounted for. A third major systematic effect is the presence of non-ideal electric and/or magnetic fields, which can introduce important deviations in the expected motion of the muons and their subsequent decay positrons. In the end, to correct for all these effects, the five-parameter model is augmented to an astounding 22-parameter model! Such is the level of detail that a precision measurement requires. The table below illustrates the expected systematic uncertainty budget for the \omega_a measurement.

Category | Brookhaven [ppb] | Fermilab [ppb] | Improvements
Gain changes | 120 | 20 | Better laser calibration; low-energy threshold
Pileup | 80 | 40 | Low-energy samples recorded; calorimeter transverse segmentation
Lost muons | 90 | 20 | Better collimation in ring
Coherent Betatron Oscillation | 70 | < 30 | Higher n value (frequency); better match of beam line to storage ring
Electric field and pitch | 50 | 30 | Improved tracker; precise storage ring simulations
Total | 180 | 70 |
Estimated systematic uncertainties for the \omega_a measurement, compared to the previous iteration of the experiment at Brookhaven. The total is added in quadrature. Adapted from the Muon g-2 Technical Design Report (TDR).

Note: the wiggle plot above was taken from David Sweigart’s thesis, which features a blinded analysis of the data, where \omega_a is replaced by R, and the two are related by:

\omega_a(R) = 2 \pi \left( 0.2291\ \text{MHz} \right) \left[ 1 + (R + \Delta R) / 10^6 \right] .

Here R is the blinded parameter that is used instead of \omega_a, and \Delta R is an arbitrary offset that is independently chosen by each analysis group. This ensures that results from one group do not influence the others and allows all analyses to share the same (unknown) reference. We can expect a similar analysis (and probably several different types of \omega_a analyses) in the announcement on April 7th, except that the blinded R modification will be removed and the true number unveiled.
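In code, this software blinding is just an affine map between the fitted R and the physical frequency. Here is a small sketch (ours; the offset and fitted value below are made up) of how one might convert back once a group's secret \Delta R is revealed:

```python
# Relation between the blinded parameter R and omega_a (illustration only;
# delta_R below is made up, since each analysis group keeps its own secret value).
import math

F_REF = 0.2291e6          # reference frequency [Hz]

def omega_a_from_R(R, delta_R):
    """omega_a(R) = 2*pi*(0.2291 MHz)*[1 + (R + delta_R)/1e6]."""
    return 2 * math.pi * F_REF * (1 + (R + delta_R) / 1e6)

delta_R  = 17.3           # hypothetical secret offset, in ppm units
R_fitted = -20.0          # hypothetical fitted (blinded) value
print(omega_a_from_R(R_fitted, delta_R))  # only meaningful once delta_R is revealed
```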

\omega_p measurement

The measurement of the Larmor frequency \omega_p (and of the magnetic field B) is equally important to the determination of a_\mu and proceeds separately from the \omega_a measurement. The key ingredient here is an extremely accurate mapping of the magnetic field with a two-prong approach: removable proton Nuclear Magnetic Resonance (NMR) probes and fixed NMR probes inside the ring.

The 17 removable probes sit inside a motorized trolley and circle around the ring periodically (every 3 days) to get a very clear and detailed picture of the magnetic field inside the storage ring (the operating principle is that the measured free proton precession frequency is proportional to the magnitude of the external magnetic field). The trolley cannot be run concurrently with the muon beam and so the experiment must be paused for these precise measurements. To complement these probes, 378 fixed probes are installed inside the ring to continuously monitor the magnetic field, albeit with less detail. The removable probes are therefore used to calibrate the measurements made by the fixed probes, or conversely the fixed probes serve as a sort of “interpolation” data between the NMR probe runs.

In addition to the magnetic field, an understanding of the muon beam transverse spatial distribution is also important. The \langle \omega_p \rangle term that enters the anomalous magnetic moment equation above is given by the average magnetic field (measured with the probes) weighted by the transverse spatial distribution of muons when going around the ring. This distribution is accurately measured with a set of three trackers placed immediately upstream of calorimeters at three strategic locations around the storage ring.
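Schematically, the weighting described above is just an average of the measured field map over the muon beam profile. A toy version (entirely our own, with made-up field and beam shapes; the 61.74 MHz scale is roughly the free-proton Larmor frequency in a 1.45 T field) might look like this:

```python
# Schematic: weight the field map by the muon beam's transverse distribution
# to obtain <omega_p> (illustration only, with invented shapes).
import numpy as np

x = y = np.linspace(-4.5, 4.5, 91)   # transverse position [cm]
X, Y = np.meshgrid(x, y)
omega_p_map = 61.74e6 * (1 + 1e-6 * (0.1 * X + 0.05 * Y**2))  # toy field map [Hz]
rho = np.exp(-(X**2 + Y**2) / (2 * 1.5**2))                   # toy beam profile

omega_p_avg = np.sum(omega_p_map * rho) / np.sum(rho)
print(f"<omega_p> ~ {omega_p_avg/1e6:.4f} MHz")
```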

The trackers feature pairs of straw-tube layers at stereo angles to each other that can accurately reconstruct the trajectory of decay positrons. The charged positrons ionize some of the gas molecules inside the straws, and the released charge is swept to electrodes at the straw ends by an electric field inside each straw. The amount and location of the charge yield information on the position of the positron, and the 8 layers of a tracker together give precise information on the positron trajectory. The magnetic field itself can additionally be corrected via a set of 200 concentric coils with independent current settings, to an accuracy of a few ppm when averaged azimuthally. The expected systematic uncertainty budget for the \omega_p measurement is shown in the table below.

Category | Brookhaven [ppb] | Fermilab [ppb] | Improvements
Absolute probe calibration | 50 | 35 | More uniform field for calibration
Trolley probe calibration | 90 | 30 | Better alignment between trolley and the plunging probe
Trolley measurement | 50 | 30 | More uniform field, less position uncertainty
Fixed probe interpolation | 70 | 30 | More stable temperature
Muon distribution | 30 | 10 | More uniform field, better understanding of muon distribution
Time-dependent external magnetic field | – | 5 | Direct measurement of external field, active feedback
Trolley temperature, others | 100 | 30 | Trolley temperature monitor, etc.
Total | 170 | 70 |
Estimated systematic uncertainties for the \langle \omega_p \rangle measurement, compared to the previous iteration of the experiment at Brookhaven. The total is added in quadrature. Adapted from the Muon g-2 Technical Design Report (TDR) and from arxiv:1909.13742.

Conclusions

The announcement on April 7th of the first Muon g-2 results at Fermilab (E989) is very exciting for those who have been following along over the past few years. Since data-taking has not been completed yet, these will likely not be the ultimate results produced by the Collaboration. But even if they only match the accuracy of the previous iteration of the experiment at Brookhaven (E821), we can already learn something about whether the central value of a_\mu shifts up or down or stays roughly constant. If it stays the same even after a decade of intense effort to make an entirely new measurement, this could be a strong sign of new physics lurking around! But let’s wait and see what the Collaboration has in store for us. Here’s a link to the event on April 7th.

Amara and I will conclude this series with a 3rd post after the announcement discussing the things we learn from it. Stay tuned!

Further Reading:

(Almost) Everything You’ve Ever Wanted to Know About Muon g-2, Theoretically

This is post #1 of a three-part series on the Muon g-2 experiment.

April 7th is an eagerly anticipated day. It recalls eagerly anticipated days of years past, which, just like the spring Wednesday one week from today, are marked with an announcement. It harkens back to the discovery of the top quark, the premier observation of tau neutrinos, or the first Higgs boson signal. There have been more than a few misfires along the way, like BICEP2’s purported gravitational wave background, but these days always beget something interesting for the future of physics, even if only an impetus to keep searching. In this case, all the hype surrounds one number: muon g-2. 

This quantity describes the anomalous magnetic dipole moment of the muon, the second-heaviest lepton after the electron, and has been the object of questioning ever since the first measured value was published at CERN in December 1961. Nearly sixty years later, the experiment has gone through a series of iterations, each seeking greater precision on its measured value in order to ascertain its difference from the theoretically-predicted value. New versions of the experiment, at CERN, Brookhaven National Laboratory, and Fermilab, seemed to point toward something unexpected: a discrepancy between the values calculated using the formalism of quantum field theory and the Muon g-2 experimental value. April 7th is an eagerly anticipated day precisely because it could confirm this suspicion. 

It would be a welcome confirmation, although certain to let loose a flock of ambulance-chasers eager to puzzle out the origins of the discrepancy (indeed, many papers are already appearing on the arXiv to hedge bets on the announcement). Tensions between our theoretical and measured values are, one could argue, exactly what physicists are on the prowl for. We know the Standard Model (SM) is incomplete, and our job is to fill in the missing pieces, tweak the inconsistencies, and extend the model where necessary. This task requires some notion of where we’re going wrong and where to look next. Where better to start than a tension between theory and experiment? Let’s dig in.

What’s so special about the muon?

The muon is roughly 207 times heavier than the electron but shares most of its other properties. Like the electron, it has a negative electric charge (of magnitude e), and like the other leptons it is not a composite particle, meaning there are no known constituents that make up a muon. Its larger mass proves auspicious for probing new physics, as it makes the muon particularly sensitive to the effects of virtual particles. These are not particles per se — as the name suggests, they are not strictly real — but are instead intermediate players that mediate interactions, and are represented by internal lines in Feynman diagrams like this:

Figure 1: The tree-level channel for muon decay. Source: Imperial College London

Above, we can see one of the main decay channels for the muon: first the muon decays into a muon neutrino \nu_{\mu} and a W^{-} boson, one of the three bosons that mediate weak interactions. Then, the W^{-} boson decays into an electron e^{-} and an electron antineutrino \bar{\nu}_{e}. However, we can’t “stop” this process and observe the W^{-} boson, only the final-state \nu_{\mu}, \bar{\nu}_{e}, and e^{-}. More precisely, this virtual particle is an excitation of the W^{-} quantum field; virtual particles conserve energy and momentum at each interaction vertex, but do not necessarily have the same mass as their real counterparts, and are essentially temporary excitations of the field.

Given the mass dependence, you could ask why we don’t instead carry out these experiments using the tau, the even heavier cousin of the muon; the reason has to do with lifetime. The muon is a short-lived particle, meaning it cannot travel long distances without decaying, but the roughly 64 microseconds of life that the accelerator gives it turns out to be enough to measure its decay products. Those products are exactly what our experiments are probing, as we would like to observe the muon’s interactions with other particles. The tau could actually be a similarly useful probe, especially as it could couple more strongly to beyond the Standard Model (BSM) physics due to its heavier mass, but we currently lack the detection capabilities for such an experiment (a few ideas are in the works).

What exactly is the anomalous magnetic dipole moment?

The “g” in “g-2” refers to a quantity called the g-factor, also known as the dimensionless magnetic moment due to its proportionality to the (dimension-ful) magnetic moment \mu, which describes the strength of a magnetic source. This relationship for the muon can be expressed mathematically as

\mu = g \frac{e}{2m_{\mu}} \textbf{S},

where \textbf{S} gives the particle’s spin, e is the charge of an electron, and m_{\mu} is the muon’s mass. Since the “anomalous” part of the anomalous magnetic dipole moment is the muon’s difference from g = 2, we further parametrize this difference by defining the anomalous magnetic dipole moment directly as

a_{\mu} = \frac{g-2}{2}

Where does this difference come from?

The calculation of the anomalous magnetic dipole moment proceeds mostly through quantum electrodynamics (QED), the quantum theory of electromagnetism (which includes photon and lepton interactions), but it also gets contributions from the electroweak sector (W^{-}, W^{+}, Z, and Higgs boson interactions) and the hadronic sector (quark and gluon interactions). We can explicitly split up the SM value of a_{\mu} according to each of these contributions,

a_{\mu}^{SM} = a_{\mu}^{QED} + a_{\mu}^{EW} + a_{\mu}^{Had}.

We classify the interactions of muons with SM particles (or, more generally, between any particles) according to their order in perturbation theory. Tree-level diagrams are interactions like the decay channel in Figure 1, which contain no closed loops and can be drawn graphically in a tree-like fashion. The next diagrams that contribute are at loop-level: they include additional internal lines that, as the name suggests, close into some loop-like shape (further orders up involve multiple loops). Calculating the total probability amplitude for a given process necessitates a sum over all possible diagrams, although higher-order diagrams usually do not contribute as much and can generally (but not always) be ignored. In the case of the anomalous magnetic dipole moment, the deviation from the tree-level value of g = 2 comes from including the loop-level processes from fields in all the sectors outlined above. We can visualize these effects through the following loop diagrams,

Figure 2: The loop contributions from each of QED, electroweak, and hadronic processes. Source: Particle Data Group

In each of these diagrams, a muon interacts with an external photon, with an internal loop of interactions involving some combination of particles. From left to right, the loop is comprised of: 1) two muons and a photon \gamma, 2) two muons and a Z boson, 3) two W bosons and a neutrino \nu, and 4) two muons and a photon \gamma, which has some interactions involving hadrons.

Why does this value matter?

In calculating the anomalous magnetic dipole moment, we sum over all of the Feynman loop diagrams that come from known interactions, and these can be directly related to terms in our theory (formally, operators in the Lagrangian) that give rise to a magnetic moment. Working in an SM framework, this means summing over the muon’s quantum interactions with all relevant SM fields, which show up as both external and internal Feynman diagram lines.

The current accepted experimental value is 116,592,091 \times 10^{-11}, while the SM prediction is 116,591,810 \times 10^{-11} (both come with error bars on the last 1-2 digits). Although they seem close, they differ by 3.7 \sigma (standard deviations), which is not quite the 5 \sigma threshold that physicists require to claim a discovery. Of course, this could change with next week’s announcement. Given the increased precision of the latest run of Muon g-2, the discrepancy could be confirmed at 4 \sigma or greater, which would certainly give credence to a mismatch.
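As a back-of-the-envelope check (our own, with rounded uncertainties), the quoted significance is just the difference of the two numbers divided by their combined uncertainty:

```python
# Back-of-the-envelope significance of the discrepancy (rounded inputs).
import math

a_exp, err_exp = 116_592_091e-11, 63e-11  # experimental value, approx. uncertainty
a_sm,  err_sm  = 116_591_810e-11, 43e-11  # SM prediction, approx. uncertainty

delta = a_exp - a_sm
sigma = delta / math.hypot(err_exp, err_sm)
print(f"discrepancy ~ {sigma:.1f} standard deviations")  # ~3.7
```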

Why do the values not agree?

You’ve landed on the key question. There could be several possible explanations for the discrepancy, lying at the roots of both theory and experiment. Historically, it has not been uncommon for anomalies to ultimately be traced back to some experimental or systematic error, having to do either with instrument calibration or with statistical fluctuations. Fermilab’s latest run of Muon g-2 aims to deliver a value with a precision of 140 parts per billion, while the SM calculation carries an uncertainty of roughly 400 parts per billion. This means that next week, the Fermilab Muon g-2 collaboration should be able to tell us whether these values agree.

Figure 3: The SM contributions to the anomalous magnetic dipole moment are detailed, with values given \times 10^{-11}. HVP is the hadronic vacuum polarization (a process in which the virtual photon loop contains a quark-antiquark pair), while HLbL is hadronic light-by-light scattering (a process that is similar but involves more virtual photons). These two are the main sources of uncertainty in the SM theory prediction. Source: Muon g-2 Theory Initiative.

On the theory side, the majority of the SM contribution to the anomalous magnetic dipole moment comes from QED, which is probably the most well-understood and well-tested sector of the SM. But there are also contributions from the electroweak and hadronic sectors — the former can also be calculated precisely, but the latter is much less understood and cannot be computed perturbatively from first principles. This is due to the fact that the muon’s mass scale is also near the scale of a phenomenon known as confinement, in which quarks cannot be isolated from the hadrons that they form. This has the effect of making calculations in perturbation theory (the prescription outlined above) much more difficult. These calculations can instead proceed from phenomenology (taking some input from experimental data) or from a technique called lattice QCD, in which processes in quantum chromodynamics (QCD, the theory of quarks and gluons) are computed on a discretized space using various computational methods.

Lattice QCD is an active area of research and its computations are in turn accompanied by large error bars, although the last 20 years of progress in this field have refined the calculations considerably from where they were the last time a Muon g-2 collaboration announced its results. The question of how much wiggle room theory can provide was addressed as part of the Muon g-2 Theory Initiative, which published its results last summer and used two different techniques to calculate and cross-check its value for the SM theory prediction. Their methods significantly improved upon previous uncertainty estimations, meaning that although one could argue that the theory should be better understood before pursuing further avenues for an explanation of the anomaly, this argument holds less weight in light of these advancements.

These further avenues would be, of course, the most exciting and third possible answer to this question: that this difference signals new physics. If particles beyond the SM interacted with the muon in such a way that generated loop diagrams like the ones above, these could very well contribute to the anomalous magnetic dipole moment. Perhaps adding these contributions to the SM value would land us closer to the experimental value. In this way, we can see the incredible power of Muon g-2 as a probe: by measuring the muon’s anomalous magnetic dipole moment to a precision comparable to the SM calculation, we essentially test the completeness of the SM itself.

What could this new physics be?

There are several places we can begin to look. The first and perhaps most natural is within the realm of supersymmetry, which predicts, via a symmetry between fermions (half-integer-spin particles) and bosons (integer-spin particles), further particle interactions for the muon that would contribute to the value of a_{\mu}. However, this idea probably ultimately falls short: any significant addition to a_{\mu} would have to come from particles in the mass range of 100-500 GeV, which we have been ardently searching for at CERN, to no avail. Some still hold out hope that supersymmetry may prevail in the end, but for now, there’s simply no evidence for its existence.

Another popular alternative has to do with the “dark photon”, a hypothetical particle that would mix with the SM photon (the ordinary photon) and thereby couple to charged SM particles, including the muon. Direct searches for such dark photons are underway, but this scenario is currently disfavored: under the standard assumption that dark photons decay primarily into pairs of charged leptons, the parameter space for their existence has been continually whittled down by experiments at BaBar and CERN.

In general, generating new physics involves inserting new degrees of freedom (fields, and hence particles) into our models. There is a vast array of BSM physics that is continually being studied. Although we have a few motivating hints about what new particles contributing to a_{\mu} could be, without sufficient underlying principles and evidence to make our case, it’s anyone’s game. A confirmation of the anomaly on April 7th would surely set off a furious search for potential solutions — though even a measurement precise enough to quash the anomaly would in itself be a wondrous and interesting result.

How do we make these measurements?

Great question! For this I defer to our resident Muon g-2 experimental expert, Andre Sterenberg-Frankenthal, who will be posting a comprehensive answer to this question in the next few days. Stay tuned.

Further Resources:

  1. Fermilab’s Muon g-2 website (where the results will be announced!): https://muon-g-2.fnal.gov/
  2. More details on contributions to the anomalous magnetic dipole moment: https://pdg.lbl.gov/2019/reviews/rpp2018-rev-g-2-muon-anom-mag-moment.pdf 
  3. The Muon g-2 Theory Initiative’s results in all of its 196-page glory: https://arxiv.org/pdf/2006.04822.pdf

Maleficent dark matter: Part II

In Part I of the series we saw how dark matter could cause mass extinction by inducing biosphere-wide cancer, stirring up volcanoes, or launching comets from the Oort cloud. In this second and final part, we explore its other options for maleficence.

World-devouring dark matter

The dark matter wind that we encountered in Part I has yet another trick to bring the show on this watery orb to an abrupt stop. As J. F. Acevedo,  J. Bramante, A. Goodman, J. Kopp, and T. Opferkuch put it in their abstract, “Dark matter can be captured by celestial objects and accumulate at their centers, forming a core of dark matter that can collapse to a small black hole, provided that the annihilation rate is small or zero. If the nascent black hole is big enough, it will grow to consume the star or planet.” Before you go looking for the user guide to an Einstein-Rosen bridge, we draw your attention to their main text: “As, evidently, neither the Sun nor the Earth has suffered this fate yet, we will be able to set limits on dark matter properties.” For once we are more excited about limits than discovery prospects.

Limits on dark matter from its sparing our planet and star. Image source: Acevedo et al.

R-rated dark matter

Enough about destruction of life en masse. Let us turn to selective executions.

“Macro dark matter” is the idea that dark matter comprises not elementary particles but composite objects that weigh anywhere between micrograms and tonnes, and scatter on nuclei with macroscopic geometric cross sections. As per J. J. Sidhu, R. J. Scherrer and G. Starkman, since the dark wind blows at around 300 km/s, a dark macro encountering a human body would produce something akin to a gunshot wound or a meteor strike, only more gruesome. Using 10 years of data on the well-monitored human population in the US, Canada and Western Europe, and assuming that it takes at least 100 J of energy deposition to cause significant bodily damage, they derive the limits on dark matter cross sections and masses shown in the adjoining figure.
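For a rough sense of scale (our own back-of-the-envelope numbers, not from the paper), compare the kinetic energy carried by a macro at the dark-wind speed with that 100 J threshold:

```python
# Back-of-the-envelope: kinetic energy of a macroscopic dark matter particle
# moving at the galactic dark-wind speed (illustration only; how much energy
# is actually deposited in a body depends on the macro's cross section).
v = 300e3                      # dark-wind speed [m/s]
for mass_kg in (1e-9, 1e-6):   # a 1 microgram and a 1 milligram macro
    ke = 0.5 * mass_kg * v**2
    print(f"mass {mass_kg:.0e} kg  ->  kinetic energy ~ {ke:,.0f} J")
```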

We’re afraid there’s nothing much you can do about a macro with your name on it.

Limits on dark matter from its sparing of human lives. Image source: Sidhu et al.

Inciteful dark matter

Dark matter could sometimes kill despite no interactions with the Standard Model beyond gravity. [Movie spoilers ahead.] In the film Dark Matter, a cosmology graduate student is discouraged from pursuing research on the titular topic by his advisor, who in the end rejects his dissertation. His graduation and Nobel Prize dreams thwarted, and confidante Meryl Streep’s constant empathy forgotten, the student ends up putting a bullet in the advisor and himself (yes, in that order). Senior MOND advocates, take note.

Vital dark matter

Lest we suspect by now that dark matter has a hotline to the Grim Reaper’s office, D. Hooper and J. H. Steffen clarify that it could in fact breathe life into desolate pebbles in the void. Without dark matter, rocky planets on remote orbits, or rogue planets ejected from their star system, are expected to be cold and inhospitable. But in galactic regions where dark matter densities are high, it could be captured by such planets, self-annihilate, and warm them from the inside to temperatures that liquefy water, paving the way for life to “emerge, evolve, and survive“. The fires of this mechanism would blaze on long after main sequence stars cease to shine!

And perhaps one day these creatures may use the very DNA they got from dark matter to detect it.

———————————————–

Bibliography.

[6] Dark Matter, Destroyer of Worlds: Neutrino, Thermal, and Existential Signatures from Black Holes in the Sun and Earth, J. F. Acevedo,  J. Bramante, A. Goodman, J. Kopp, and T. Opferkuch, arXiv: 2012.09176 [hep-ph]  

[7] Death and serious injury from dark matter, J. J. Sidhu, R. J. Scherrer and G. Starkman, Phys. Lett. B 803 (2020) 135300  

[8] Dark Matter and The Habitability of Planets, D Hooper & J. H. Steffen, JCAP 07 (2012) 046  

[9] New Dark Matter Detectors using DNA or RNA for Nanometer Tracking, A. Drukier, K. Freese, A. Lopez, D. Spergel, C. Cantor, G. Church & T. Sano, arXiv: 1206.6809 [astro-ph.IM]  

The LHC’s Newest Experiment

Article Title: “FASER: ForwArd Search ExpeRiment at the LHC”

Authors: The FASER Collaboration 

Reference: https://arxiv.org/abs/1901.04468

When the LHC starts up again for its 3rd run of data taking, there will be a new experiment on the racetrack. FASER, the ForwArd Search ExpeRiment at the LHC, is an innovative new experiment that, just like its acronym, will stretch LHC collisions to get the most we can out of them.

While the current LHC detectors are great, they have a (literal) hole. General purpose detectors (like ATLAS and CMS) are essentially giant cylinders with the incoming particle beams passing through the central axis of the cylinder before colliding. Because they have to leave room for the incoming beam of particles, they can’t detect anything too close to the beam axis. This typically isn’t a problem, because when a heavy new particle, like the Higgs boson, is produced, its decay products fly off in all directions, so it is very unlikely that all of the particles produced would end up moving along the beam axis. However, if you are looking for very light particles, they will often be produced in ‘imbalanced’ collisions, where one of the protons contributes a lot more energy than the other one, and the resulting particles therefore mostly carry on in the direction of the proton, along the beam axis. Because these general purpose detectors have to have a gap in them for the beams to enter, they have no hope of detecting such collisions.

That’s where FASER comes in.

A diagram of the FASER detector.

FASER is specifically looking for new light “long-lived” particles (LLP’s) that could be produced in LHC collisions and then carry on in the direction of the beam. Long-lived means that once produced they can travel for a while before decaying back into Standard Model particles. Many popular models of dark matter have particles that could fit this bill, including axion-like particles, dark photons, and heavy neutral leptons. To search for these particles, FASER will be placed approximately 500 meters down the line from the ATLAS interaction point, in a former service tunnel. It will be looking for the signatures of LLP’s that were produced in collisions at the ATLAS interaction point, traveled through the ground, and eventually decayed in the volume of the detector.

A map showing where FASER will be located, around 500 meters downstream of the ATLAS interaction point.

Any particles reaching FASER will have traveled through hundreds of meters of rock and concrete, which filters out a large fraction of the Standard Model particles produced in the LHC collisions. But the LLP’s FASER is looking for interact very feebly with the Standard Model, so they should sail right through. FASER also has dedicated detector elements to veto any remaining muons that might make it through the ground, allowing FASER to almost entirely eliminate any backgrounds that would mimic an LLP signal. This low background and its unique design will allow it to break new ground in the search for LLP’s in the coming LHC run.

A diagram showing how particles reach FASER. Starting at the ATLAS interaction point, protons and other charged particles get deflected away by the LHC, but the long-lived particles (LLP’s) that FASER is searching for would continue straight through the ground to the FASER detector.

In addition to its program searching for new particles, FASER will also feature a neutrino detector. This will allow it to detect the copious and highly energetic neutrinos produced in LHC collisions, which actually haven’t been studied yet. In fact, this will be the first direct detection of neutrinos produced at a particle collider, and will enable tests of neutrino properties at energies much higher than those of any previous human-made source.

FASER is a great example of physicists thinking up clever ways to get more out of our beloved LHC collisions. Currently being installed, it will be one of the most exciting new developments of the LHC Run III, so look out for their first results in a few years!

 

Read More: 

The FASER Collaboration’s Detector Design Page

Press Release for CERN’s Approval of FASER

Announcement and Description of FASER’s Neutrino program

Maleficent dark matter: Part I

We might not have gotten here without dark matter. It was the gravitational pull of dark matter, which makes up most of the mass of galactic structures, that kept heavy elements — the raw material of Earth-like rocky planets — from flying away after the first round of supernovae at the advent of the stelliferous era. Without this invisible pull, all structures would have been much smaller than seen today, and stars much more rare.

Thus with knowledge of dark matter comes existential gratitude. But the microscopic identity of dark matter is one of the biggest scientific enigmas of our times, and what we don’t know could yet kill us. This two-part series is about the dangerous end of our ignorance, reviewing some inconvenient prospects sketched out in the dark matter literature. Reader discretion is advised.

[Note: The scenarios outlined here are based on theoretical speculations of dark matter’s identity. Such as they are, these are unlikely to occur, and even if they do, extremely unlikely within the lifetime of our species, let alone that of an individual. In other words, nobody’s sleep or actuarial tables need be disturbed.]

The dark matter wind could blow in mischief. Image source: Freese et al.

Carcinogenic dark matter

Maurice Goldhaber quipped that “you could feel it in your bones” that protons are cosmologically long-lived, as otherwise our bodies would have self-administered a lethal dose of ionizing radiation. (This observation sets a lower limit on the proton lifetime at a comforting 10^7 times the age of the universe.) Could we laugh similarly about dark matter? The Earth is probably amid a wind of particle dark matter, a wind that could trigger fatal ionization in our cells if encountered too frequently. The good news is that if dark matter is made of weakly interacting massive particles (WIMPs), K. Freese and C. Savage report safety: “Though WIMP interactions are a source of radiation in the body, the annual exposure is negligible compared to that from other natural sources (including radon and cosmic rays), and the WIMP collisions are harmless to humans.

The bad news is that the above statement assumes dark matter is distributed smoothly in the Galactic halo. There are interesting cosmologies in which dark matter collects in high-density “clumps” (a.k.a. “subhalos”, “mini-halos”,  or “mini-clusters”). According to J. I. Collar, the Earth encountering these clumps every 30–100 million years could explain why mass extinctions of life occur periodically on that timescale. During transits through the clumps, dark matter particles could undergo high rates of elastic collisions with nuclei in life forms, injecting 100–200 keV of energy per micrometer of transit, just right to “induce a non-negligible amount of radiation damage to all living tissue“. We are in no hurry for the next dark clump.

Eruptive dark matter

If your dark matter clump doesn’t wipe out life efficiently via cancer, A. Abbas and S. Abbas recommend waiting another five million years. That is how long it takes for the clump’s dark matter to be gravitationally captured by the Earth, settle in its core, self-annihilate, and heat the mantle, setting off planet-wide volcanic fireworks. The resulting chain of events would end, as the authors rattle off enthusiastically, in “the depletion of the ozone layer, global temperature changes, acid rain, and a decrease in surface ocean alkalinity.”

Dark matter settling in the Earth’s core could spell doom. Image source: J. Bramante & A. Goodman.

Armageddon dark matter

If cancer and volcanoes are not dark matter’s preferred methods of prompting mass extinction, it could get the job done with old-fashioned meteorite impacts.

It is usually supposed that dark matter occupies a spherical halo that surrounds the visible, star-and-gas-crammed disk of the Milky Way. This baryonic pancake was formed when matter, starting out in a spinning sphere, cooled down by radiating photons and shrank in size along the axis of rotation; due to conservation of angular momentum the radial extent was preserved. No such dissipative process is known to govern dark matter, so it retains its spherical shape. However, a small component of dark matter might still have cooled by emitting some unseen radiation such as “dark photons”. That would result in a “dark disk” sub-structure co-existing in the Galactic midplane with the visible disk. Every 35 million years the Solar System crosses the Galactic midplane, and when that happens, a dark disk with a surface density of 10 M_\odot/pc^2 could tidally perturb the Oort Cloud and send comets shooting toward the inner planets, causing periodic mass extinctions. So suggest L. Randall and M. Reece, whose arXiv comment “4 figures, no dinosaurs” is as much part of the particle physics lore as Randall’s book that followed the paper, Dark Matter and the Dinosaurs.

We note in passing that SNOLAB, the underground laboratory in Sudbury, ON that houses the dark matter experiments DAMIC, DEAP, and PICO, and future home of NEWS-G, SENSEI, Super-CDMS and ARGO, is located in the Creighton Mine — where ore deposits were formed by a two-billion-year-old comet impact. Perhaps the dark disk nudges us to detect its parent halo.

A drift (horizontal passage) in Creighton Mine, 2.1 km underground. Around the corner is SNOLAB, where several experiments searching for dark matter are located. The mine owes its existence to a meteorite impact — perhaps triggered by a Galactic disk of dark matter. Photo: N. Raj.

——————
In the second part of the series we will look — if we’re still here — at more surprises that dark matter could have planned for us. Stay tuned.

Bibliography.

[1] Dark Matter collisions with the Human Body, K. Freese & C. Savage, Phys.Lett.B 717 (2012) 25-28.

[2] Clumpy cold dark matter and biological extinctions, J. I. Collar, Phys.Lett.B 368 (1996) 266-269.

[3] Volcanogenic dark matter and mass extinctions, S. Abbas & A. Abbas, Astropart.Phys. 8 (1998) 317-320.

[4] Dark Matter as a Trigger for Periodic Comet Impacts, L. Randall & M. Reece, Phys.Rev.Lett. 112 (2014) 161301.

[5] Dark Matter and the Dinosaurs, L. Randall, Harper Collins: Ecco Press (2015).

Adjectivous dark matter

How would you describe dark matter in one word? Mysterious? Ubiquitous? Massive? Theorists often try to settle the question in the title of a paper — with a single adjective. Thanks to this penchant we now have the possibility of cannibal dark matter. To be sure, in the dark world eating one’s own kind is not considered forbidden, not even repulsive. Just a touch selfish, maybe. And it could still make you puffy — if you’re inflatable and not particularly inelastic. Otherwise it makes you plain superheavy.

Below are more uni-verbal dark matter candidates in the literature. Some do make you wonder if the title preceded the premise. But they all remind you of how much fun it is, this quest for dark matter’s identity. Keep an eye out on arXiv for these gems!

anapole, asymmetric, atomic, brane-world, charged, co-interacting, coloured, cryptobaryonic, disformal, fluid, freeze-twin, gluequark, GUTzilla, homeopathic, impeded, inflaxion, Kaluza-Klein, luminous, macro, minimal, monodromy, \nu-inflaton, parafermionic, relaxion, resonant, self-destructing, self-interacting, singlet-doublet, spectator, super-cool, technicolor, topological, undulating, unparticle, wave.

Dark matter, in addition to consuming physicists, could also be cannibal. Image: Gan Khoon Lay, Noun Project.

Alice and Bob Test the Basic Assumptions of Reality

Title: “A Strong No-Go Theorem on Wigner’s Friend Paradox.”

Author: Kok-Wei Bong et al.

Reference: https://arxiv.org/pdf/1907.05607.pdf 

There’s one thing nearly everyone in physics agrees upon: quantum theory is bizarre. Niels Bohr, one of its pioneers, famously said that “anybody who is not shocked by quantum mechanics cannot possibly have understood it.” Yet it is also undoubtedly one of the most precise theories humankind has concocted, with its intricacies continually verified in hundreds of experiments to date. It is difficult to wrap our heads around its concepts because the quantum world does not map onto everyday human experience; our daily lives reside in the classical realm, as does our language. In introductory quantum mechanics classes, the notion of a wave function often relies on shaky verbiage: we postulate a wave function that propagates in a wavelike fashion but is detected as an infinitesimally small point object, “collapsing” upon observation. The nature of this “collapse” — how exactly a wave function collapses, or whether it does at all — comprises what is known as the quantum measurement problem.

As a testament to its confounding qualities, there exists a long menu of interpretations of quantum mechanics. The most popular is the Copenhagen interpretation, which asserts that particles do not have definite properties until they are observed and the wavefunction undergoes a collapse. This is the quantum mechanics all undergraduate physics majors are introduced to, yet plenty more interpretations exist, some with slightly different flavorings of the Copenhagen dish — containing a base of wavefunction collapse with varying toppings. A new work, by Kok-Wei Bong et al., is now providing a lens through which to discern and test Copenhagen-like interpretations, casting the quantum measurement problem in a new light. But before we dive into this rich tapestry of observation and the basic nature of reality, let’s get a feel for what we’re dealing with. 

Above, a summary of the Copenhagen interpretation. In this interpretation, particles only gain properties upon measurement. Source: afriendman.org

The story starts as a historical one, with high-profile skeptics of quantum theory. In response to its advent, Einstein, Podolsky, and Rosen (EPR) proposed hidden variable theories, which sought to retain a deterministic picture of reality consistent with relativity, in which quantum probabilities are explained away by some unseen, underlying mechanism. Bell later formulated a theorem addressing the EPR paper, showing that the probabilistic predictions of quantum mechanics cannot be entirely reproduced by local hidden variables.

In seeking to show that quantum mechanics is an incomplete theory, EPR focused their work on what they found to be the most objectionable phenomenon: entanglement. Since entanglement is often misrepresented, let’s provide a brief overview here. When a particle decays into two daughter particles, we can perform subsequent measurements on each of those particles. When the spin angular momentum of one particle is measured, the spin angular momentum of the other particle is simultaneously found to be exactly the value that adds up to the total spin angular momentum of the original particle (pre-decay). In this way, knowledge about one particle gives us knowledge about the other; the systems are entangled. A paradox seems to ensue, since it appears that some information must have been transmitted between the two particles instantaneously. Yet in practice entanglement comes down to a lack of sufficient information: the person measuring the second particle cannot know which result the first measurement gave until that result is communicated by ordinary means.
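
To make this concrete, here is a minimal numerical sketch (our own numpy illustration, not something from the paper) that builds the two-particle singlet state and shows that spin measurements along the same axis are always perfectly anti-correlated.

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Singlet state |psi> = (|up, down> - |down, up>) / sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Projectors onto the four possible joint outcomes along the z-axis
P_up, P_down = np.outer(up, up), np.outer(down, down)
outcomes = {
    ("up", "up"): np.kron(P_up, P_up),
    ("up", "down"): np.kron(P_up, P_down),
    ("down", "up"): np.kron(P_down, P_up),
    ("down", "down"): np.kron(P_down, P_down),
}

for (a, b), proj in outcomes.items():
    prob = singlet @ proj @ singlet
    print(f"P({a}, {b}) = {prob:.2f}")

# Only the anti-correlated outcomes (up, down) and (down, up) occur,
# each with probability 0.5: learning one spin fixes the other.
```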

We can illustrate this by considering the classical analogue. Think of a ball splitting in two: each piece travels in some direction, and the sum of the individual spin angular momenta equals the total spin angular momentum the ball had before it split. However, I am free to catch one of the pieces, or to perform a state-altering measurement on it, and this does not affect the value obtained for the other piece. Once the pieces are free of each other, they can acquire angular momentum from outside influences, breaking the collective initial “total” of spin angular momentum. I am also free to track these results from a distance, and since we can physically see the pieces come loose and fly off in opposite directions (a form of measurement), we have all the information we need about the system. In the quantum version, we are instead forced to confront a feature of quantum measurement: measurement itself alters the system being measured. The classical and quantum pictures seem to contradict one another.

A visualization of quantum entanglement between two fermions (spin-1/2 particles): if one particle is measured to have spin +1/2, the other is simultaneously found to have spin -1/2.

Bell’s Theorem made this contradiction concrete and testable by considering two entangled qubits and predicting their correlations. He posited that, if a pair of spin-½ particles in a collective singlet state travel in opposite directions, their spins can be independently measured at distant locations along freely chosen axes. The probability of obtaining outcomes corresponding to an entanglement scenario then depends on the relative angle between the two measurement axes. Over many iterations of this experiment, correlations can be constructed by averaging the products of the paired measurement results. Assuming instead a hidden variable theory, with an underlying deterministic reality, places an upper limit on these correlations, resulting in inequalities that must hold if such theories are viable. Experiments designed to test these assumptions have thus far all resulted in violations of Bell-type inequalities, leaving quantum theory on firm footing.
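
For a feel for the numbers, below is a small sketch of the standard CHSH version of a Bell test (a textbook illustration, not the analysis of the Bong et al. paper). For a spin singlet, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between spins measured along axes at angles a and b, while any local hidden-variable theory must satisfy |S| <= 2 for the CHSH combination S; the quantum prediction reaches 2\sqrt{2}.

```python
import numpy as np

def E(a, b):
    """Quantum correlation of a spin singlet for measurement angles a and b."""
    return -np.cos(a - b)

# Measurement settings (in radians) chosen to maximize the violation
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

# CHSH combination of the four correlations
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"|S| = {abs(S):.3f}  (local hidden variables require |S| <= 2)")
```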

Now, the Kok-Wei Bong et al. research builds upon these foundations. Via consideration of Wigner’s friend paradox, the team formulated a new no-go theorem (a type of theorem that asserts a particular situation to be physically impossible) that reconsiders what reality means and which axioms we can use to model it. Bell’s Theorem, although designed to test our baseline assumptions about the quantum world, still necessarily rests upon a few axioms. The new theorem shows that at least one of a small set of assumptions (deemed the Local Friendliness assumptions), each of which had previously seemed entirely reasonable, must be abandoned if quantum theory is correct:

  1. Absoluteness of observed events: Every event exists absolutely, not relatively. While the event’s details may be observer-dependent, the existence of the event is not.
  2. Locality: Local settings cannot influence distant outcomes (no superluminal communication).
  3. No-superdeterminism: We can freely choose the settings in our experiment and, before making this choice, our variables will not be correlated with those settings.

The work relies on the presumptive ability of a superobserver, a special kind of observer that is able to manipulate the states controlled by a friend, another observer. In the context of the “friend” being cast as an artificial intelligence algorithm in a large quantum computer, with the programmer as the superobserver, this scenario becomes slightly less fantastical. Essentially, this thought experiment digs into our ideas of the scale of applicability of quantum mechanics — what an observer is, and if quantum theory similarly applies to all observers.

To illustrate this more precisely, and consider where we might hit some snags in this analysis, let’s look at a quantum superposition state,

\vert \psi \rangle = \frac{1}{\sqrt{2}} (\vert\uparrow \rangle + \vert\downarrow \rangle).

If we were to take everything we learned in university quantum mechanics courses at face value, we could easily recognize that, upon measurement, this state can be found in either the \vert\uparrow \rangle or \vert\downarrow \rangle state with equal probability. However, let us now turn our attention toward the Wigner’s friend scenario: imagine that Wigner has a friend inside a laboratory performing an experiment, while Wigner himself is outside of the laboratory, positioned ideally as a superobserver (he can freely perform any and all quantum experiments on the laboratory from his vantage point). Going back to the superposition state above, it remains true that Wigner’s friend observes either the up or the down state with 50% probability upon measurement. However, quantum mechanics also tells us that, in the absence of a collapse, states must evolve unitarily. Wigner, still positioned outside the laboratory and treating the entire laboratory (friend included) as a quantum system, therefore continues to describe it as a superposition with ill-defined measurement outcomes. Hence, a paradox, and one formed from the fundamental assumption that quantum mechanics applies at all scales and to all observers. This is the heart of the quantum measurement problem.
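
The tension can be made explicit with a toy calculation. Below is a minimal numpy sketch (an illustration of the standard Wigner's friend argument, not the formalism of the paper) in which the friend's measurement is modeled, from Wigner's outside vantage point, as a unitary that copies the spin into the friend's memory. The friend, by contrast, assigns a definite but unknown outcome, i.e. a statistical mixture, and the two descriptions differ.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (up + down) / np.sqrt(2)   # the superposition state written above
ready = up                        # friend's memory starts out "ready"

# Wigner's description: the measurement is a unitary (a CNOT) that copies
# the spin into the memory qubit ("saw up" = memory stays up,
# "saw down" = memory flips to down).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
lab = CNOT @ np.kron(plus, ready)          # entangled spin-memory state
rho_wigner = np.outer(lab, lab)            # still a pure superposition

# Friend's description: a definite but unknown outcome, i.e. a 50/50 mixture
rho_friend = 0.5 * np.outer(np.kron(up, up), np.kron(up, up)) \
           + 0.5 * np.outer(np.kron(down, down), np.kron(down, down))

# Purity tr(rho^2) distinguishes the two accounts: 1.0 for Wigner's
# pure superposition, 0.5 for the friend's mixture.
print("Wigner:", np.trace(rho_wigner @ rho_wigner))
print("Friend:", np.trace(rho_friend @ rho_friend))
```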

An illustration of the setup of the extended Wigner’s friend scenario, now including laboratories controlled by Charlie and Debbie with superobservers Alice and Bob. Charlie and Debbie make measurements on an entangled state, while Alice and Bob make measurements on the laboratories of Charlie and Debbie, respectively. Source: Kok-Wei Bong et al.

Now, let’s extend this scenario, taking our usual friends Alice and Bob as superobservers to two separate laboratories. Charlie, in the laboratory observed by Alice, has a system of spin-½ particles with an associated Hilbert space, while Debbie, in the laboratory observed by Bob, has her own system of spin-½ particles with an associated Hilbert space. Within their separate laboratories, they make measurements of the spins of the particles along the z-axis and record their results. Then Alice and Bob, still situated outside the laboratories of their respective friends, can make three different types of measurements on the systems, one of which they choose to perform randomly. First, Alice could look inside Charlie’s laboratory, view his measurement, and assign it to her own. Second, Alice could restore the laboratory to some previous state. Third, Alice could erase Charlie’s record of results and instead perform her own random measurement directly on the particle. Bob can perform the same measurements using Debbie’s laboratory.

With this setup, the new theorem then identifies a set of inequalities derived from Local Friendliness correlations, which extend those given by Bell’s Theorem and can be independently violated. The authors then concocted a proof-of-principle experiment, which relies on explicitly thinking of the friends Charlie and Debbie as qubits rather than people. Using the three measurement settings and choosing systems of polarization-encoded photons, the photon paths become the “friends” (the photon either takes Charlie’s path, Debbie’s path, or some superposition of the two). After running this experiment thousands of times, the authors found that their Local Friendliness inequalities were indeed violated, implying that at least one of the three initial assumptions cannot be correct.

The primary difference between this work and Bell’s Theorem is that it contains no prior assumptions about the underlying determinism of reality, including any hidden variables that could be used to predetermine the outcomes of events. The theorem is therefore built upon assumptions strictly weaker than those of Bell’s inequalities, meaning that any violations lead to strictly stronger conclusions. This paves a promising pathway for future questions and experiments about the nature of observation and measurement, narrowing down the large menu of interpretations of quantum mechanics. Which of the assumptions — absoluteness of observed events, locality, or no-superdeterminism — is incorrect remains an open question. While the first two are widely used throughout physics, the assumption of no-superdeterminism digs down into the question of what measurement really means and what counts as an observer. These points will doubtless be in contention as physicists continue to explore the oddities that quantum theory has to offer, but this new theorem offers promising results on the path to understanding the quirky quantum world.

Further Reading:

  1. More details on Bell’s Theorem: https://arxiv.org/pdf/quant-ph/0402001.pdf 
  2. Frank Wilczek’s column on entanglement: https://www.quantamagazine.org/entanglement-made-simple-20160428/ 
  3. Philosophical issues in quantum theory: https://plato.stanford.edu/entries/qt-issues/ 

Machine Learning The LHC ABC’s

Article Title: ABCDisCo: Automating the ABCD Method with Machine Learning

Authors: Gregor Kasieczka, Benjamin Nachman, Matthew D. Schwartz, David Shih

Reference: arxiv:2007.14400

When LHC experiments look for the signatures of new particles in their data, they always apply a series of selection criteria to the recorded collisions. The selections pick out events that look similar to the sought-after signal. Often they then compare the observed number of events passing these criteria to the number they would expect from ‘background’ processes. If they see many more events in real data than the predicted background, that is evidence of the sought-after signal. Crucial to the whole endeavor is being able to accurately estimate the number of events that background processes would produce. Underestimate it and you may incorrectly claim evidence of a signal; overestimate it and you may miss the chance to find a highly sought-after signal.

However, it is not always easy to estimate the expected number of background events. While LHC experiments do have high-quality simulations of the Standard Model processes that produce these backgrounds, they aren’t perfect. In particular, processes involving the strong force (aka Quantum Chromodynamics, QCD) are very difficult to simulate, and refining these simulations is an active area of research. Because of these deficiencies, we don’t always trust background estimates based solely on simulation, especially when applying very specific selection criteria.

Therefore experiments often employ ‘data-driven’ methods, in which they estimate the number of background events using control regions in the data. One of the most widely used techniques is called the ABCD method.

An illustration of the ABCD method. The signal region, A, is defined as the region in which f and g are both greater than some value. The amount of background in region A is estimated using regions B, C, and D, which are dominated by background.

The ABCD method can be applied if the selection of signal-like events involves two independent variables f and g. If one defines the ‘signal region’, A (the part of the data in which we are looking for the signal), as having f and g each greater than some threshold, then one can use the neighboring regions B, C, and D to estimate the amount of background in region A. Provided the number of signal events outside region A is small, the number of background events in region A can be estimated as N_A = N_B \times (N_C/N_D).
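
As a sanity check of this formula, here is a toy numerical sketch (hypothetical numbers, not LHC data). The background is generated with f and g statistically independent, so the ABCD prediction N_B \times N_C / N_D reproduces the true background count in region A up to statistical fluctuations.

```python
import numpy as np

rng = np.random.default_rng(42)
# Two independent discriminating variables for 100,000 toy background events
f = rng.exponential(1.0, size=100_000)
g = rng.exponential(1.0, size=100_000)

f_cut, g_cut = 2.0, 2.0
A = np.sum((f > f_cut) & (g > g_cut))    # signal region (here: pure background)
B = np.sum((f > f_cut) & (g <= g_cut))
C = np.sum((f <= f_cut) & (g > g_cut))
D = np.sum((f <= f_cut) & (g <= g_cut))

print("true background in A:", A)
print("ABCD prediction:     ", B * C / D)
# If f and g were correlated, this closure would fail -- which is exactly
# the problem the decorrelation techniques discussed below address.
```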

In modern analyses, one of these selection requirements often involves the score of a neural network trained to identify the sought-after signal. Because neural networks are powerful learners, one has to be careful that they don’t accidentally learn about the other variable used in the ABCD method, such as the mass of the signal particle. If the two variables become correlated, the ABCD background estimate is no longer valid. This often means augmenting the neural network, either during training or after the fact, so that it is intentionally ‘de-correlated’ with respect to the other variable. While there are several known techniques to do this, it is still a tricky process, and a good background estimate often comes at the cost of reduced classification performance.

In this latest work the authors devise a way to have the neural networks help with the background estimate rather than hinder it. The idea is that, rather than training a single network to classify signal-like events, they simultaneously train two networks that both try to identify the signal. During this training they use a groovy technique called ‘DisCo’ (short for Distance Correlation) to ensure that the outputs of the two networks are statistically independent of each other. This forces the networks to learn to use independent information to identify the signal, which then allows the two network scores to be used quite easily as the two variables of an ABCD background estimate.
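
Distance correlation is a published statistic that vanishes only when two variables are statistically independent, which is what makes it a natural decorrelation penalty. Below is a minimal numpy sketch of that statistic (our own simplified illustration, not the authors' training code); in the Double DisCo setup a term of roughly this form, evaluated on the two network outputs in each training batch, is added to the classification loss.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D arrays."""
    def double_centered(z):
        d = np.abs(z[:, None] - z[None, :])     # pairwise distance matrix
        return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()
    A = double_centered(np.asarray(x, dtype=float))
    B = double_centered(np.asarray(y, dtype=float))
    dcov2 = (A * B).mean()                      # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
u = rng.normal(size=2000)
print(distance_correlation(u, rng.normal(size=2000)))  # small: independent inputs
print(distance_correlation(u, u**2))  # sizable: dependent, though linearly uncorrelated
```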

The authors try out this new technique, dubbed ‘Double DisCo’, on several examples. They demonstrate that they are able to obtain quality background estimates using the ABCD method while achieving great classification performance. They show that this method improves upon the previous state-of-the-art technique of decorrelating a single network from a fixed variable like mass and using cuts on the mass and classifier score to define the ABCD regions (called ‘Single DisCo’ here).

Using the task of identifying jets containing boosted top quarks, they compare the classification performance (x-axis) and the quality of the ABCD background estimate (y-axis) achievable with the new Double DisCo technique (yellow points) and the previous state-of-the-art Single DisCo (blue points). One can see that the Double DisCo method is able to achieve higher background rejection with a similar or better amount of ABCD closure.

While there have been many papers over the last few years about applying neural networks to classification tasks in high energy physics, not many have thought about how to use them to improve background estimates as well. Because of their importance, background estimates are often the most time consuming part of a search for new physics. So this technique is both interesting and immediately practical to searches done with LHC data. Hopefully it will be put to use in the near future!

Further Reading:

Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”

Recent ATLAS Summary on New Machine Learning Techniques “Machine learning qualitatively changes the search for new particles”

CERN Tutorial on “Background Estimation with the ABCD Method”

Summary of Paper of Previous Decorrelation Techniques used in ATLAS “Performance of mass-decorrelated jet substructure observables for hadronic two-body decay tagging in ATLAS”

Inside the Hologram

Article Title: Inside the Hologram: Reconstructing the bulk observer’s experience

Reference: arXiv:2009.04476 [hep-th]

 

Figure 1 (adapted from arXiv:2009.04476 [hep-th]):  The setup showing the reference system coupled to a black hole

The holographic principle has proven to be a remarkably powerful tool in high energy physics. It has allowed physicists to understand phenomena that were previously too difficult to treat computationally. The principle states that a theory describing gravity in a region of spacetime is equivalent to a quantum theory without gravity living on that region’s boundary. The gravity theory is typically called “the bulk” and the quantum theory is called “the boundary theory,” because it lives on the boundary of the gravitational spacetime. Because the quantum theory is defined on the boundary, it has one fewer dimension than the gravity theory. Like a dictionary, physical concepts in gravity can be translated into physical concepts in the quantum theory. This idea of holography was introduced a few decades ago, and a plethora of work has stemmed from it.

Much of the work on holography describes physics as seen from an asymptotic frame. This is a frame of reference that is “far away” from the system we are studying, i.e. out at the boundary where the quantum theory lives.

However, an open question remains. Instead of an asymptotic frame of reference, what about an internal reference frame, i.e. an observer inside the gravity theory, close to what we are studying? We do not yet have a quantum (boundary) framework for describing physics in such reference frames. The authors of this paper explore exactly this question: how can we describe the quantum physics of an internal reference frame?

For classical physics, the usual observer is a probe particle. Since the authors want to understand the quantum aspects of the observer, they choose an observer made up of a black hole that is entangled with a reference system. One way to see that black holes have quantum behavior is through Stephen Hawking’s work: he showed that black holes can emit thermal radiation, a prediction now known as Hawking radiation.

The authors proceed to compute the proper time along the observer’s worldline and the observer’s energy distribution. Moreover, they propose a new perspective on “time,” relating the notion of time in General Relativity to the notion of time in quantum mechanics, a relation that may be valid even outside the scope of holography.

The results are quite novel, as they fill some of the gaps in our knowledge of holography. They are also a step towards understanding what the notion of time means in holographic gravitational systems.