By Andre Sterenberg Frankenthal and Amara McCune
This is the final post of a three-part series on the Muon g-2 experiment. Check out posts 1 and 2 on the theoretical and experimental aspects of g-2 physics.
The last couple of weeks have been exciting in the world of precision physics and stress tests of the Standard Model (SM). The Muon g-2 Collaboration at Fermilab released their very first results with a measurement of the anomalous magnetic moment of the muon to an accuracy of 462 parts per billion (ppb), which largely agrees with previous experimental results and amplifies the tension with the accepted theoretical prediction to a 4.2σ discrepancy. These first results feature less than 10% of the total data planned to be collected, so even more precise measurements are foreseen in the next few years.
But on the very same day that Muon g-2 announced their results and published their main paper in PRL and supporting papers in Phys. Rev. A and Phys. Rev. D, Nature published a new lattice QCD calculation which seems to contradict previous theoretical predictions of the muon g-2 and moves the theory value much closer to the experimental one. There will certainly be hot debate in the coming months and years regarding the validity of this new calculation, but in the meantime it muddies the waters in the g-2 sphere. We cover both the new experimental and theoretical results in more detail below.
Experimental announcement
The main paper in Physical Review Letters summarizes the experimental method and reports the measured numbers and associated uncertainties. The new Fermilab measurement of the muon g-2 is 3.3 standard deviations (3.3σ) away from the predicted SM value. This means that, assuming all systematic effects are accounted for, the probability that the null hypothesis (i.e. that the true muon g-2 value is actually the one predicted by the SM) could result in such a discrepant measurement is less than 1 in 1,000. Combining this latest measurement with the previous iteration of the experiment at Brookhaven in the early 2000s, the discrepancy grows to 4.2σ, or roughly a 1 in 40,000 probability that it is just a statistical fluke. This is not yet the 5σ threshold that is the gold standard in particle physics for claiming a discovery, but it is a tantalizing result. The figure below from the paper illustrates well the tension between experiment and theory.
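For readers who like to check such numbers, here is a quick back-of-the-envelope conversion between a significance in σ and a probability (our own illustration, not the Collaboration's statistical machinery, which is considerably more careful):

```python
# Convert a significance in sigma to a (two-sided) p-value for a standard normal;
# purely illustrative -- the real analysis involves a full uncertainty budget.
from scipy.stats import norm

for sigma in (3.3, 4.2, 5.0):
    p = 2 * norm.sf(sigma)  # probability of a fluctuation at least this large, in either direction
    print(f"{sigma} sigma  ->  p = {p:.1e}  (about 1 in {1/p:,.0f})")

# 3.3 sigma -> about 1 in 1,000
# 4.2 sigma -> about 1 in 40,000
# 5.0 sigma -> about 1 in 1.7 million
```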
This first publication is just the first round of results planned by the Collaboration, and corresponds to less than 10% of the data that will be collected throughout the total runtime of the experiment. With this limited dataset, the statistical uncertainty (434 ppb) dominates over the systematic uncertainty (157 ppb), but that is expected to change as more data is acquired and analyzed. When the statistical uncertainty eventually dips below the systematic one, it will be critically important to control the systematics as much as possible in order to reach the ultimate target of a 140 ppb total uncertainty. The table below shows the actual measurements performed by the Collaboration.
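As a quick consistency check, the statistical and systematic uncertainties combine in quadrature (assuming they are independent) to give the overall precision quoted above:

$$ \sigma_{\text{total}} = \sqrt{\sigma_{\text{stat}}^2 + \sigma_{\text{syst}}^2} = \sqrt{(434\ \text{ppb})^2 + (157\ \text{ppb})^2} \approx 462\ \text{ppb}. $$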
The largest sources of systematic uncertainty stem from the electrostatic quadrupoles (ESQ) in the experiment. While the uniform magnetic field ensures the centripetal motion of muons in the storage ring, it is also necessary to keep them confined to the horizontal plane. Four sets of ESQ uniformly spaced in azimuth provide vertical focusing of the muon beam. However, after data-taking, two resistors in the ESQ system were found to be damaged. This means that the time profile of ESQ activation was not perfectly matched to the time profile of the muon beam. In particular, during the first 100 microseconds after each muon bunch injection, muons were not receiving the correct vertical focusing, which affected the expected phase of the “wiggle plot” measurement. All told, this issue added 75 ppb of systematic uncertainty to the budget. Nevertheless, because statistical uncertainties dominate in this first stage of the experiment, the unexpected ESQ damage was not a showstopper. The Collaboration expects this problem to be fully mitigated in subsequent data-taking runs.
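For context, the precession frequency is extracted by fitting the time spectrum of detected decay positrons with (at leading order) the classic five-parameter “wiggle” function, so anything that distorts the phase φ feeds directly into the extracted frequency ω_a:

$$ N(t) = N_0\, e^{-t/\gamma\tau_\mu}\left[1 + A\cos(\omega_a t + \varphi)\right], $$

where γτ_μ ≈ 64 μs is the time-dilated muon lifetime in the storage ring.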
To guard against any possible human bias, an interesting blinding policy was implemented: the master clock of the entire experiment was shifted by an unknown value, chosen by two people outside the Collaboration and kept in a vault for the duration of the data-taking and processing. Without knowing this shift, it is impossible to deduce the correct value of g-2. At the same time, this still allows experimenters to carry the analysis through to the end, and only then remove the clock shift to reveal the unblinded measurement. In a way, this is like a key unlocking the final result. (This was not the only protection against bias, just the most salient and curious one.)
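A toy sketch of the idea (purely our own illustration, not the Collaboration's actual procedure or code): every frequency extracted from the data carries a hidden scale factor until the very end of the analysis.

```python
# Toy illustration of clock-shift blinding: all timestamps carry a hidden
# ppm-level detuning, so any extracted frequency is off by an unknown factor
# until the secret value is revealed and divided back out.
import numpy as np

rng = np.random.default_rng(seed=42)

true_frequency = 229.1                            # pretend precession frequency (kHz)
secret_shift = 1 + rng.uniform(-25e-6, 25e-6)     # hidden clock detuning (kept "in a vault")

# Analyzers only ever see times measured with the detuned clock,
# so every frequency they fit is scaled by the unknown factor.
blinded_frequency = true_frequency / secret_shift

# ... the entire analysis, cross-checks, and systematics studies happen here ...

# Unblinding: the secret value is revealed and removed.
unblinded_frequency = blinded_frequency * secret_shift
print(unblinded_frequency)                        # recovers the true frequency
```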
Lattice QCD + BMW results
On the same day that Fermilab announced the Muon g-2 experimental results, a group known as the BMW (Budapest-Marseille-Wuppertal) Collaboration published its own results on the theoretical value of muon g-2 using new techniques in lattice QCD. The group’s results can be found in Nature (the journal; jury’s still out on whether they’re in actual Nature), or at the preprint here. In short, their calculation brings the theoretical prediction much closer to the experimental value than previous evaluations, putting their methods into tension with the findings of previous lattice QCD groups. What’s different? To answer that, let’s dive a little deeper into the details of lattice QCD.
As outlined in the first post of this series, the main calculational tool of high-energy particle physics is perturbation theory, which we can think of graphically via Feynman diagrams, starting with tree-level diagrams and going to higher orders via loops. Equivalently, this corresponds to calculations in which terms are proportional to powers of some coupling parameter that describes the strength of the force in question. Each higher-order term comes with one more factor of the relevant coupling, and the errors in these calculations are generally attributable either to uncertainties in the measured couplings themselves or to the neglect of higher-order terms.
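Schematically, the QED part of the muon g-2 prediction is a power series in the fine-structure constant α, each loop bringing another factor of α/π, led by Schwinger’s famous one-loop term:

$$ a_\mu^{\text{QED}} = \frac{1}{2}\frac{\alpha}{\pi} + C_2\left(\frac{\alpha}{\pi}\right)^2 + C_3\left(\frac{\alpha}{\pi}\right)^3 + \cdots $$

Since α/π ≈ 0.002, each successive term is suppressed by roughly a factor of a thousand, which is why truncating the series works so well here.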
These coupling parameters are secretly functions of the energy scale being studied, and so at each energy scale we need to recalculate them. This makes sense intuitively, because forces have different strengths at different scales (gravity, for instance, is much weaker on a particle scale than on a planetary one). In quantum electrodynamics (QED), for example, the coupling is fairly small at energy scales around the electron mass. This means that we really don’t need to go to very high orders in perturbation theory, since those terms quickly become irrelevant with higher powers of the coupling. This is the beauty of perturbation theory: typically, we need only consider the first few orders, vastly simplifying the process.
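As a concrete one-loop illustration of this running (keeping only electron loops for simplicity), the QED coupling slowly grows with the energy scale Q:

$$ \alpha(Q^2) \simeq \frac{\alpha(0)}{1 - \dfrac{\alpha(0)}{3\pi}\ln\!\left(\dfrac{Q^2}{m_e^2}\right)} $$

Including all charged fermions in the loops, α rises from about 1/137 at low energies to roughly 1/128 near the Z boson mass, still small enough for perturbation theory to converge quickly.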
However, QCD does not share this convenience, as it comes with a coupling parameter that decreases with increasing energy scale. At high enough energies, we can indeed employ the wonders of perturbation theory to make calculations in QCD (this high-energy behavior is known as asymptotic freedom). But at lower energies, at length scales around that of a proton, the coupling constant becomes of order one or larger, so higher-order terms are no longer suppressed; if anything, they contribute more and more. This signals the breakdown of the perturbative technique. Because the mass of the muon sits in this same energy regime, we cannot use perturbation theory to calculate the hadronic contributions to g-2. We then turn to simulations, and since we cannot simulate spacetime in its entirety (it consists of infinitely many points), we must instead break it up into a discretized set of points dubbed the lattice.
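For intuition, the standard one-loop textbook expression for the strong coupling shows the problem explicitly (quoted here only as an illustration):

$$ \alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda_{\text{QCD}}^2\right)} $$

where n_f is the number of active quark flavors and Λ_QCD is roughly 200 MeV. As Q drops toward Λ_QCD the logarithm shrinks and α_s blows up, and the muon mass (about 106 MeV) sits squarely in this strongly coupled regime.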
This naturally introduces new sources of uncertainty into our calculations. To employ lattice QCD, we first need to choose a lattice spacing, the distance between neighboring spacetime points, where a smaller spacing comes closer to describing continuous spacetime. Introducing this lattice spacing comes with its own systematic uncertainties. Further, the discretization can be computationally challenging, as larger numbers of points quickly eat up computing power. Standard numerical techniques become too computationally expensive to employ, so statistical sampling methods such as Monte Carlo integration are used instead, which introduces yet another source of error.
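A toy example of why Monte Carlo sampling brings statistical error along with it (our own illustration, vastly simpler than an actual lattice computation): estimating an integral from N random samples gives an uncertainty that shrinks only like 1/√N.

```python
# Toy Monte Carlo integration (illustrative only; real lattice QCD samples
# gauge-field configurations, not a one-dimensional integrand).
import numpy as np

rng = np.random.default_rng(seed=1)

def mc_estimate(n_samples):
    """Estimate the integral of sin(x) on [0, pi] (exact answer: 2) from random samples."""
    x = rng.uniform(0.0, np.pi, n_samples)
    return np.pi * np.sin(x).mean()

for n in (100, 10_000, 1_000_000):
    estimates = [mc_estimate(n) for _ in range(50)]
    print(f"N = {n:>9,}: {np.mean(estimates):.4f} +/- {np.std(estimates):.4f}")

# The spread shrinks roughly like 1/sqrt(N): cutting the statistical error by 10x
# requires 100x more samples -- the same trade-off lattice collaborations face.
```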
Difficulties are also introduced by the fact that a discretized space does not respect the same symmetries that a continuous space does, and some symmetries simply cannot be kept simultaneously with others. Groups using lattice QCD must therefore pick which symmetries to preserve and carefully consider the implications of the ones they give up. All of this means that lattice QCD calculations of g-2 have historically come with very large error bars. That was the case, at least, until the much smaller error bars of the BMW group’s recent findings.
These results are not without controversy. The group employs a “staggered fermion” approach to discretizing the fermions, in which the components of a Dirac fermion are distributed over neighboring lattice points. A naive discretization produces sixteen unwanted copies (“doublers”) of each fermion; the staggered construction reduces these to four “tastes” upon taking the “continuum limit,” the limit in which the spacing between lattice points goes to zero (hence recovering a continuous space), and a further “rooting” procedure is applied to get down to the single quark flavor one actually wants to simulate. There are a few advantages to this method, both in terms of reducing computational time and having smaller discretization errors. However, the validity of the rooting step is still debated, and part of the lattice community questions whether these results are computing observables of genuine QCD or of some other, subtly different quantum field theory.
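Schematically, the controversial step replaces the staggered fermion determinant in the path-integral weight by its fourth root (per quark flavor), intended to undo the residual four-fold taste multiplicity:

$$ \det D_{\text{staggered}}[U] \;\longrightarrow\; \left(\det D_{\text{staggered}}[U]\right)^{1/4} $$

Whether this rooted theory has exactly the right continuum limit is the question critics raise.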
The future of g-2
Overall, while a 4.2σ discrepancy is certainly more alluring than the previous 3.7σ, the conflict between the experimental results and the Standard Model is still somewhat murky. It is crucial to note that the new 4.2σ benchmark does not include the BMW group’s calculations, and incorporating those values could shift the significance around. A consensus from the lattice community on the acceptability of the BMW group’s results is needed, as well as values from other lattice groups using similar methods (which should be steadily rolling out in the coming months). It seems that the future of muon g-2 now rests in the hands of lattice QCD.
At the same time, more and more precise measurements should be coming out of the Muon g-2 Collaboration in the next few years, which will hopefully guide theorists in their quest to accurately predict the anomalous magnetic moment of the muon and help us reach a verdict on this tantalizing evidence of new boundaries in our understanding of elementary particle physics.
Further Reading
BMW paper (Nature 593, 51–55 (2021)): https://arxiv.org/pdf/2002.12347.pdf
Muon g-2 Collaboration papers:
- Main result (PRL): Phys. Rev. Lett. 126, 141801 (2021)
- Precession frequency measurement (Phys. Rev. D): Phys. Rev. D 103, 072002 (2021)
- Magnetic field measurement (Phys. Rev. A): Phys. Rev. A 103, 042208 (2021)
- Beam dynamics (to be published in Phys. Rev. Accel. Beams): https://arxiv.org/abs/2104.03240