(Almost) Everything You’ve Ever Wanted to Know About Muon g-2 – Experiment edition

This is post #2 of a three-part series on the Muon g-2 experiment. Check out Amara McCune’s post on the theory of g-2 physics for an excellent introduction to the topic.

As we all eagerly await the latest announcement from the Muon g-2 Collaboration on April 7th, it is a good time to think about the experimental aspects of the measurement and to appreciate just how difficult it is and the persistent and collaborative effort that has gone into obtaining one of the most precise results in particle physics to date.

The main “output” of the experiment (after all data-taking runs are complete) is a single number: the g-factor of the muon, measured to an unprecedented accuracy of 140 parts per billion (ppb) at Fermilab’s Muon Campus, a four-fold improvement over the previous iteration of the experiment that took place at Brookhaven National Lab in the early 2000s. But to arrive at this seemingly simple result, a painstaking measurement effort is required. As a reminder (see Amara’s post for more details), what is actually measured is the anomalous magnetic moment of the muon, a_\mu, which quantifies how much the g-factor deviates from 2 and is given by

a_\mu = \frac{g-2}{2}.

Experimental method

The core tenet of the experimental approach relies on the behavior of muons when subjected to a uniform magnetic field. If muons can be placed in a circular trajectory around a storage ring with a uniform magnetic field, then they will travel around this ring with a characteristic frequency, referred to as the cyclotron frequency (symbol \omega_c). At the same time, if the muons are polarized, meaning that their spin vectors point along a particular direction when first injected into the storage ring, then these spin vectors will also rotate when subjected to a uniform magnetic field. The frequency of the spin vector rotation is called the spin frequency (symbol \omega_s).

If the cyclotron and spin frequencies of the muon were exactly the same, then it would have an anomalous magnetic moment a_\mu of zero. In other words, the anomalous magnetic moment measures the discrepancy between the motion of the muon itself and the motion of its spin vector in a magnetic field. As Amara discussed at length in the previous post in this series, such a discrepancy arises because of quantum-mechanical contributions to the muon’s magnetic moment from higher-order interactions with other particles. The mismatch between the two frequencies means that the spin direction slowly precesses relative to the muon’s momentum as it goes around the ring, and this precession is what the experiment tracks.

If the anomalous magnetic moment is not zero, then one way to measure it is to directly record the cyclotron and spin frequencies and subtract them. In a way, this is what is done in the experiment: the anomalous precession frequency can be measured as

\omega_a = \omega_s - \omega_c = -a_\mu \frac{eB}{m_\mu}

where m_\mu is the muon mass, e is the muon charge, and B is the (ideally) uniform magnetic field. Once the precession frequency and the exact magnetic field are measured, one can immediately invert this equation to obtain a_\mu.
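
To get a feel for the numbers involved, here is a minimal sketch (my own back-of-the-envelope in Python, not the collaboration's analysis code) that inverts the relation above: plugging in the nominal 1.45 T storage-ring field and the roughly 0.2291 MHz anomalous precession frequency quoted later in this post recovers the familiar a_\mu of about 1.17 x 10^-3.

```python
# Minimal sketch: invert omega_a = a_mu * e * B / m_mu to recover a_mu,
# using the nominal 1.45 T storage-ring field and the ~0.2291 MHz
# precession frequency quoted later in the post. Constants are standard
# CODATA-style approximations.
import math

e    = 1.602176634e-19      # elementary charge [C]
m_mu = 1.883531627e-28      # muon mass [kg]
B    = 1.45                 # storage-ring magnetic field [T]

f_a     = 0.2291e6          # anomalous precession frequency [Hz] (approximate)
omega_a = 2 * math.pi * f_a # convert to angular frequency [rad/s]

a_mu = omega_a * m_mu / (e * B)
print(f"a_mu ~ {a_mu:.4e}")  # ~1.17e-3, close to the known anomalous moment
```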

In practice, the best way to measure \omega_a is to rewrite the equation above into more experimentally amenable quantities:

a_\mu = \left( \frac{g_e}{2} \right) \left( \frac{\mu_p}{\mu_e} \right) \left( \frac{m_\mu}{m_e} \right) \left( \frac{\omega_a}{\langle \omega_p \rangle} \right)

where \mu_p/\mu_e is the proton-to-electron magnetic moment ratio, g_e is the electron g-factor, and \langle \omega_p \rangle is the free proton’s Larmor frequency averaged over the muon beam transverse spatial distribution. The Larmor frequency measures the proton’s magnetic moment precession about the magnetic field and is directly proportional to B. Writing a_\mu in this form has the considerable advantage that all of the remaining quantities have been independently and very accurately measured: to 0.00028 ppb (g_e), to 3 ppb (\mu_p/\mu_e), and to 22 ppb (m_\mu/m_e). Recalling that the final desired accuracy for the left-hand side of the equation above is 140 ppb leads to a budget of roughly 70 ppb for each of the \omega_a and \omega_p measurements. This is perhaps a good point to stop and appreciate just how small these uncertainty budgets are: 1 ppb is one part in 1,000,000,000!

We have now distilled the measurement into two numbers: \omega_a, the anomalous precession frequency, and \omega_p, the free proton Larmor frequency which is directly proportional to the magnetic field (the quantity we’re actually interested in). Their uncertainty budgets are roughly 70 ppb for each, so let’s take a look at how they are able to measure these two numbers to such an accuracy. First, we’ll introduce the experimental setup, and then describe the two measurements.

Experimental setup

The polarized muons in the experiment are produced by a beam of pions, which are themselves produced when a beam of 8 GeV protons delivered by Fermilab’s accelerator complex strikes a nickel-iron target. The pions are selected to have a momentum close to the one required for the experiment: 3.11 GeV/c. Each pion then decays to a muon and a muon-neutrino (more than 99% of the time), and a very particular momentum is selected for the muons: 3.094 GeV/c. Only muons with this specific momentum (or very close) are allowed to enter the storage ring. This momentum has a special significance in the experimental design and is colloquially referred to as the “magic momentum” (and muons, upon entering the storage ring, travel along a circular trajectory with a “magic radius” which corresponds to the magic momentum). The reason for this special momentum is, very simplistically, the fortuitous cancellation of some electric and magnetic field effects that would need to be accounted for otherwise and that would therefore reduce the accuracy of the measurement. Here’s a sketch of the injection pipeline:

Sketch of muon production and injection into the storage ring starting from a beam of protons at Fermilab. Source: David Sweigart’s thesis.

Muons with the magic momentum are injected into the muon storage ring, pictured below. The storage ring (the same one from Brookhaven which was moved to Fermilab in 2013) is responsible for keeping muons circulating in orbit until they decay, with a vertical magnetic field of 1.45 T, uniform within 25 ppm (quite a feat and made possible via a painstaking effort called magnet “shimming”). The muon lifetime is about 2.2 microseconds in its own frame of reference, but in the laboratory frame and with a 3.094 GeV/c momentum this increases to about 64 microseconds. The storage ring has a roughly 45 m circumference, so muons can travel hundreds of times around the ring before decaying.
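
As a quick sanity check on these numbers, the lab-frame lifetime and the number of turns follow from elementary special relativity. The short Python sketch below (a back-of-the-envelope calculation, not experiment code) reproduces the roughly 64 microsecond lifetime and the "hundreds of turns" claim.

```python
# Back-of-the-envelope check of the numbers in this paragraph: the Lorentz
# boost of a 3.094 GeV/c muon stretches its ~2.2 us rest-frame lifetime to
# ~64 us, enough for a few hundred turns around the ~45 m ring.
import math

m_mu  = 0.1056584      # muon mass [GeV/c^2]
p     = 3.094          # "magic" momentum [GeV/c]
tau_0 = 2.197e-6       # muon lifetime at rest [s]
c     = 2.998e8        # speed of light [m/s]
circ  = 45.0           # approximate ring circumference [m]

E     = math.sqrt(p**2 + m_mu**2)    # total energy [GeV]
gamma = E / m_mu                     # Lorentz factor, ~29.3
tau_lab = gamma * tau_0              # dilated lifetime, ~64 us

turn_time = circ / c                 # ~150 ns per revolution (v ~ c)
print(f"gamma = {gamma:.1f}, lab lifetime = {tau_lab*1e6:.1f} us")
print(f"average number of turns before decaying ~ {tau_lab/turn_time:.0f}")
```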

Photo of the Muon g-2 storage ring superconducting coil before assembly at Fermilab (2014). Source: personal photo. Link to the full assembled storage ring.

When they do eventually decay, each muon most likely produces a positron (or an electron, depending on the muon charge) plus two neutrinos. The neutrinos are neutral and essentially invisible, but the positrons are charged and therefore bend under the magnetic field in the ring. The magic momentum and magic radius only apply to muons – positrons will bend inwards and eventually hit one of the 24 calorimeters placed strategically around the ring. A sketch of the situation is shown below.

Muon decay and positron trajectories into the calorimeters of the Muon g-2 experiment at Fermilab. Different positron energies correspond to different bends under the magnetic field and different impact positions on the calorimeters. Source: Aaron Fienberg’s thesis.

Calorimeters are detectors that can precisely measure the total energy of a particle. Furthermore, with transverse segmentation, they can also measure the incident position of the positrons. The calorimeters used in the experiment are made of lead fluoride (PbF2) crystals, which are Cherenkov radiators and therefore have an extremely fast response (Cherenkov radiation is emitted promptly when an incident particle travels faster than the speed of light in that medium – not faster than light in vacuum, since that’s not possible!). Very precise timing information about decay positrons is essential to infer the position of the decaying muon along the storage ring, and the experiment manages to achieve a remarkable sub-100 ps precision on the positron arrival time (which is then compared to the muon injection time for an absolute time calibration).

\omega_a measurement

The key aspect of the \omega_a measurement is that the direction and energy distributions of the decay positrons are correlated with the direction of the spin of the decaying muons. So, by measuring the energy and arrival time of each positron with one of the 24 calorimeters, one can deduce (to some degree of confidence) the spin direction of the parent muon.

But recall that the spin direction itself is not constant in time — it oscillates with \omega_s frequency, while the muons themselves travel around the ring with \omega_c frequency. By measuring the energy of the most energetic positrons (the degree of correlation between muon spin and positron energy is highest for more energetic positrons), one should find an oscillation that is roughly proportional to the spin oscillation, “corrected” by the fact that muons themselves are moving around the ring. Since the position of each calorimeter is known, accurately measuring the arrival time of the positron relative to the injection of the muon beam into the storage ring, combined with its energy information, gives an idea of how far along in its cyclotron motion the muon was when it decayed. These are the crucial bits of information needed to measure the difference in the two frequencies, \omega_s and \omega_c, which is proportional to the anomalous magnetic moment of the muon.

All things considered, with the 24 calorimeters in the experiment one can count the number of positrons with some minimum energy (the threshold used is roughly 1.7 GeV) arriving as a function of time (remember, the most energetic positrons are more relevant since their energy and position have the strongest correlation to the muon spin). Plotting a histogram of these positrons, one arrives at the famous “wiggle plot”, shown below.

An example of a “wiggle plot” showing the number of positrons recorded by the calorimeters as a function of time. The oscillations are due to the precession of the spin motion compared to the cyclotron motion of the muon. Note: these are “blinded” results, and do not correspond to the real measurement yet (see text). Source: David Sweigart’s thesis.

This histogram of the number of positrons versus time is plotted modulo some time constant, otherwise it would be too long to show in a single page. But the characteristic features are very visible: 1) the overall number of positrons decreases as muons decay away and there are fewer of them around; and 2) the oscillation in the number of energetic positrons is due to the precession of the muon spin relative to its cyclotron motion — whenever muon spin and muon momentum are aligned, we see a greater number of energetic positrons, and vice-versa when the two vectors are anti-aligned. In this way, the frequency of the oscillation visible in the plot is precisely the anomalous precession frequency \omega_a, i.e. the rate at which the spin vector gets ahead of the momentum vector.

In its simplest formulation, this wiggle plot can be fitted to a basic five-parameter model:

N(t) = N_0 \, e^{-t/\tau} \left[ 1 + A_0 \cos(\omega_a t + \phi_0) \right]

where the five parameters are: N_0, the initial number of positrons; \tau, the time-dilated muon lifetime; A_0, the amplitude of the oscillation which is related to the asymmetry in the positron’s transverse impact position; \omega_a, the sought-after spin precession frequency; and \phi_0, the phase of the oscillation.
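
To make the fitting procedure concrete, here is a toy version in Python: it generates a fake wiggle histogram from the five-parameter formula above and fits it back with scipy. The values of N_0, A_0 and \phi_0 are invented for the sketch; only the ~64 us lifetime and ~0.229 MHz frequency come from the numbers quoted in this post. This is a sketch of the idea, not the collaboration's fitting code.

```python
# Toy illustration of the five-parameter fit (not the 22-parameter model
# used by the experiment): generate a fake "wiggle" histogram and fit it.
import numpy as np
from scipy.optimize import curve_fit

def wiggle(t, N0, tau, A0, omega_a, phi0):
    """Five-parameter decay-plus-oscillation model (t in microseconds)."""
    return N0 * np.exp(-t / tau) * (1.0 + A0 * np.cos(omega_a * t + phi0))

rng = np.random.default_rng(42)
t = np.arange(0.0, 300.0, 0.149)                      # time bins [us]
truth = (1.0e4, 64.4, 0.37, 2 * np.pi * 0.2291, 2.1)  # N0, tau [us], A0, omega_a [rad/us], phi0
counts = rng.poisson(wiggle(t, *truth))               # Poisson-fluctuated bin contents

popt, pcov = curve_fit(wiggle, t, counts,
                       p0=(9.0e3, 60.0, 0.3, 1.439, 2.0),
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
print(f"fitted omega_a = {popt[3]:.5f} rad/us "
      f"(truth {truth[3]:.5f}, stat. error {np.sqrt(pcov[3, 3]):.5f})")
```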

The five-parameter model captures the essence of the measurement, but in practice, to arrive at the highest possible accuracy many additional effects need to be considered. Just to highlight a few: Muons do not all have exactly the right magic momentum, leading to orbital deviations from the magic radius and a different decay positron trajectory to the calorimeter. And because muons are injected in bunches into the storage ring and not one by one, sometimes decay positrons from more than one muon arrive simultaneously at a calorimeter — such pileup positrons need to be carefully separated and accounted for. A third major systematic effect is the presence of non-ideal electric and/or magnetic fields, which can introduce important deviations in the expected motion of the muons and their subsequent decay positrons. In the end, to correct for all these effects, the five-parameter model is augmented to an astounding 22-parameter model! Such is the level of detail that a precision measurement requires. The table below illustrates the expected systematic uncertainty budget for the \omega_a measurement.

| Category | Brookhaven [ppb] | Fermilab [ppb] | Improvements |
| --- | --- | --- | --- |
| Gain changes | 120 | 20 | Better laser calibration; low-energy threshold |
| Pileup | 80 | 40 | Low-energy samples recorded; calorimeter transverse segmentation |
| Lost muons | 90 | 20 | Better collimation in ring |
| Coherent Betatron Oscillation | 70 | < 30 | Higher n value (frequency); better match of beam line to storage ring |
| Electric field and pitch | 50 | 30 | Improved tracker; precise storage ring simulations |
| Total | 180 | 70 | |
Estimated systematic uncertainties for the \omega_a measurement, compared to the previous iteration of the experiment at Brookhaven. The total is added in quadrature. Adapted from the Muon g-2 Technical Design Report (TDR).

Note: the wiggle plot above was taken from David Sweigart’s thesis, which features a blinded analysis of the data, where \omega_a is replaced by R, and the two are related by:

\omega_a(R) = 2 \pi \left(0.2291 \text{ MHz} \right) \left[ 1 + (R + \Delta R) / 10^6 \right] .

Here R is the blinded parameter that is used instead of \omega_a, and \Delta R is an arbitrary offset that is independently chosen for each analysis group. This ensures that results from one group do not influence the others, while still giving all analyses the same (unknown) reference. We can expect a similar analysis (and probably several different types of \omega_a analyses) in the announcement on April 7th, except that the blinded R modification will be removed and the true number unveiled.
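
In code, the blinding scheme amounts to a one-line transformation. The sketch below uses made-up offsets purely for illustration: two groups working with different secret \Delta R values quote different R numbers, yet would agree on \omega_a once the offsets are revealed.

```python
# Sketch of the blinding relation quoted above. The offsets and R values
# below are invented for illustration; R and Delta R are in ppm.
import math

def omega_a_from_R(R, delta_R=0.0):
    """Return omega_a in rad/s for a blinded value R plus a secret offset."""
    return 2 * math.pi * 0.2291e6 * (1.0 + (R + delta_R) / 1e6)

# Two groups measuring the same physics with different secret offsets
# report different R values but agree once unblinded:
print(omega_a_from_R(R=-20.0, delta_R=+24.0))  # group A (hypothetical offset +24 ppm)
print(omega_a_from_R(R=+14.0, delta_R=-10.0))  # group B (hypothetical offset -10 ppm)
```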

\omega_p measurement

The measurement of the Larmor frequency \omega_p (and of the magnetic field B) is equally important to the determination of a_\mu and proceeds separately from the \omega_a measurement. The key ingredient here is an extremely accurate mapping of the magnetic field with a two-prong approach: removable proton Nuclear Magnetic Resonance (NMR) probes and fixed NMR probes inside the ring.

The 17 removable probes sit inside a motorized trolley and circle around the ring periodically (every 3 days) to get a very clear and detailed picture of the magnetic field inside the storage ring (the operating principle is that the measured free proton precession frequency is proportional to the magnitude of the external magnetic field). The trolley cannot be run concurrently with the muon beam and so the experiment must be paused for these precise measurements. To complement these probes, 378 fixed probes are installed inside the ring to continuously monitor the magnetic field, albeit with less detail. The removable probes are therefore used to calibrate the measurements made by the fixed probes, or conversely the fixed probes serve as a sort of “interpolation” data between the NMR probe runs.

In addition to the magnetic field, an understanding of the muon beam transverse spatial distribution is also important. The \langle \omega_p \rangle term that enters the anomalous magnetic moment equation above is given by the average magnetic field (measured with the probes) weighted by the transverse spatial distribution of muons when going around the ring. This distribution is accurately measured with a set of three trackers placed immediately upstream of calorimeters at three strategic locations around the storage ring.
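
For concreteness, \langle \omega_p \rangle is just a weighted average: the field map (expressed as a proton precession frequency) is averaged over the muon beam's transverse profile. The Python snippet below uses a completely invented field map and beam profile; only the ~61.74 MHz overall scale (the free-proton precession frequency expected in a 1.45 T field) is physically motivated.

```python
# Toy illustration of the beam-weighted field average <omega_p>. The field
# map and the muon beam profile are fake; only the ~61.74 MHz scale
# (free-proton precession frequency in 1.45 T) is a real number.
import numpy as np

x = np.linspace(-45.0, 45.0, 91)                      # transverse position [mm]
y = np.linspace(-45.0, 45.0, 91)
X, Y = np.meshgrid(x, y)

f_p = 61.74e6 * (1.0 + 1e-6 * X / 45.0)               # fake field map [Hz] with a ppm-level tilt
rho = np.exp(-((X - 5.0)**2 + Y**2) / (2 * 15.0**2))  # fake muon beam profile, slightly off-centre

f_p_avg = (f_p * rho).sum() / rho.sum()               # beam-weighted average
print(f"<omega_p>/2pi = {f_p_avg / 1e6:.6f} MHz")
```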

The trackers feature layers of gas-filled straw tubes set at stereo angles to each other, which can accurately reconstruct the trajectory of decay positrons. The charged positrons ionize some of the gas molecules inside the straws, and the released charge gets swept to electrodes at the straw end by an electric field inside the straw. The amount and location of the charge yield information on the position of the positron, and the 8 layers of a tracker together give precise information on the positron trajectory. With this approach, the magnetic field can be measured and then corrected via a set of 200 concentric coils with independent current settings to an accuracy of a few ppm when averaged azimuthally. The expected systematic uncertainty budget for the \omega_p measurement is shown in the table below.

| Category | Brookhaven [ppb] | Fermilab [ppb] | Improvements |
| --- | --- | --- | --- |
| Absolute probe calibration | 50 | 35 | More uniform field for calibration |
| Trolley probe calibration | 90 | 30 | Better alignment between trolley and the plunging probe |
| Trolley measurement | 50 | 30 | More uniform field, less position uncertainty |
| Fixed probe interpolation | 70 | 30 | More stable temperature |
| Muon distribution | 30 | 10 | More uniform field, better understanding of muon distribution |
| Time-dependent external magnetic field | – | 5 | Direct measurement of external field, active feedback |
| Trolley temperature, others | 100 | 30 | Trolley temperature monitor, etc. |
| Total | 170 | 70 | |
Estimated systematic uncertainties for the \langle \omega_p \rangle measurement, compared to the previous iteration of the experiment at Brookhaven. The total is added in quadrature. Adapted from the Muon g-2 Technical Design Report (TDR) and from arxiv:1909.13742.

Conclusions

The announcement on April 7th of the first Muon g-2 results at Fermilab (E989) is very exciting for those who have been following along over the past few years. Since data-taking has not been completed yet, these are likely not the ultimate results the Collaboration will produce. But even if they only match the accuracy of the previous iteration of the experiment at Brookhaven (E821), we can already learn something from whether the central value of a_\mu shifts up or down or stays roughly constant. If it stays the same even after a decade of intense effort to make an entirely new measurement (meaning the tension with the Standard Model prediction would persist), this could be a strong sign of new physics lurking around! But let’s wait and see what the Collaboration has in store for us. Here’s a link to the event on April 7th.

Amara and I will conclude this series with a 3rd post after the announcement discussing the things we learn from it. Stay tuned!

Further Reading:

The Delirium over Helium

Title: “New evidence supporting the existence of the hypothetic X17 particle”

Authors: A.J. Krasznahorkay, M. Csatlós, L. Csige, J. Gulyás, M. Koszta, B. Szihalmi, and J. Timár; D.S. Firak, A. Nagy, and N.J. Sas; A. Krasznahorkay

Reference: https://arxiv.org/pdf/1910.10459.pdf

This is an update to the excellent “Delirium over Beryllium” bite written by Flip Tanedo back in 2016 introducing the Beryllium anomaly (I highly recommend starting there first if you just opened this page). At the time, the Atomki collaboration in Debrecen, Hungary, had just found an unexpected excess in the angular correlation distribution of electron-positron pairs from internal pair conversion in transitions of excited states of Beryllium. According to them, this excess is consistent with a new boson of mass 17 MeV/c², nicknamed the “X17” particle. (Note: for reference, 1 GeV/c² is roughly the mass of a proton; for simplicity, from now on I’ll omit the “c²” by setting c, the speed of light, to 1 and just refer to masses in MeV or GeV. Here’s a nice explanation of this procedure.)

A few weeks ago, the Atomki group released a new set of results that uses an updated spectrometer and measures the same observable (positron-electron angular correlation) but from transitions of Helium excited states instead of Beryllium. Interestingly, they again find a similar excess on this distribution, which could similarly be explained by a boson with mass ~17 MeV. There are still many questions surrounding this result, and lots of skeptical voices, but the replication of this anomaly in a different system (albeit not yet performed by independent teams) certainly raises interesting questions that seem to warrant further investigation by other researchers worldwide.

Nuclear physics and spectroscopy

The paper reports the production of excited states of Helium nuclei from the bombardment of tritium atoms with protons. To a non-nuclear physicist this may not be immediately obvious, but nuclei can sit in excited states just as the electrons in atoms can. The entire quantum wavefunction of the nucleus is usually found in the ground state, but it can be excited by various mechanisms such as the proton bombardment used in this case. Protons with a specific energy (0.9 MeV) were fired at tritium atoms to initiate the reaction ³H(p, γ)⁴He, in nuclear physics notation. The equivalent particle physics notation is p + ³H → ⁴He* → ⁴He + γ (→ e⁺e⁻), where ‘*’ denotes an excited state.

This particular proton energy serves to excite the newly-produced Helium nuclei into a state with an energy of 20.49 MeV. This is sufficiently close to the Jπ = 0⁻ state (i.e. negative parity and total angular momentum J = 0), which is the second excited state in the ladder of Helium states. This state has a centroid energy of 21.01 MeV and a wide decay width (“sigma”) of 0.84 MeV. Note that the energy distributions of the first two excited states of Helium overlap quite a bit, so sometimes the nuclei will actually be found in the first excited state instead, which is not phenomenologically interesting in this case.

Figure 1. Sketch of the energy distributions for the first two excited quantum states of Helium nuclei. The second excited state (with centroid energy of 21.01 MeV) exhibits an anomaly in the electron-positron angular correlation distribution in transitions to the ground state. Proton bombardment with 0.9 MeV protons yields Helium nuclei at 20.49 MeV, therefore producing both first and second excited states, which are overlapping.

With this reaction, experimentalists can obtain transitions from the Jπ = 0⁻ excited state back to the ground state, which has Jπ = 0⁺. These transitions typically produce a gamma ray (photon) with 21.01 MeV energy, but occasionally the photon will internally convert into an electron-positron pair, which is the experimental signature of interest here. A sketch of the experimental concept is shown below. In particular, the two main observables measured by the researchers are the invariant mass of the electron-positron pair, and the angular separation (or angular correlation) between them, in the lab frame.

Figure 2. Schematic representation of the production of excited Helium states from proton bombardment, followed by their decay back to the ground state with the emission of an “X” particle. X here can refer to a photon converting into a positron-electron pair, in which case this is an internal pair creation (IPC) event, or to the hypothetical “X17” particle, which is the process of interest in this experiment. Adapted from 1608.03591.

The measurement

For this latest measurement, the researchers upgraded the spectrometer apparatus to include 6 arms instead of the previous 5. Below is a picture of the setup with the 6 arms shown and labeled. The arms are at azimuthal positions of 0, 60, 120, 180, 240, and 300 degrees, and oriented perpendicularly to the proton beam.

Figure 3. The Atomki nuclear spectrometer. This is an upgraded detector from the previous one used to detect the Beryllium anomaly, featuring 6 arms instead of 5. Each arm has both plastic scintillators for measuring electrons’ and positrons’ energies, as well as a silicon strip-based detector to measure their hit impact positions. Image credit: A. Krasznahorkay.

The arms consist of plastic scintillators to detect the scintillation light produced by the electrons and positrons striking the plastic material. The amount of light collected is proportional to the energy of the particles. In addition, silicon strip detectors are used to measure the hit position of these particles, so that the correlation angle can be determined with better precision.

With this setup, the experimenters can measure the energy of each particle in the pair and also their incident positions (and, from these, construct the main observables: invariant mass and separation angle). They can also look at the scalar sum of energies of the electron and positron (Etot), and use it to zoom in on regions where they expect more events due to the new “X17” boson: since the second excited state lives around 21.01 MeV, the signal-enriched region is defined as 19.5 MeV < Etot < 22.0 MeV. They can then use the orthogonal region, 5 MeV < Etot < 19 MeV (where signal is not expected to be present), to study background processes that could potentially contaminate the signal region as well.

The figure below shows the angular separation (or correlation) between electron-positron pairs. The red asterisks are the main data points, and consist of events with Etot in the signal region (19.5 MeV < Etot < 22.0 MeV). We can clearly see the bump occurring around angular separations of 115 degrees. The black asterisks consist of events in the orthogonal region, 5 MeV < Etot < 19 MeV. Clearly there is no bump around 115 degrees here. The researchers then assume that the distribution of background events in the orthogonal region (black asterisks) has the same shape inside the signal region (red asterisks), so they fit the black asterisks to a smooth curve (blue line), and rescale this curve to match the number of events in the signal region in the 40 to 90 degrees sub-range (the first few red asterisks). Finally, the re-scaled blue curve is used in the 90 to 135 degrees sub-range (the last few red asterisks) as the expected distribution.
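
The rescaling procedure is simple enough to sketch in a few lines of Python. Everything below is a cartoon with invented counts and an assumed exponential background shape, meant only to illustrate the logic: fit the orthogonal region, normalise in the 40 to 90 degree sub-range of the signal region, and predict the background at larger angles.

```python
# Cartoon of the background-estimation procedure described above, with
# completely made-up numbers and an assumed smooth background shape.
import numpy as np
from scipy.optimize import curve_fit

def smooth_bg(theta, a, b):
    """A simple falling exponential as the 'smooth' background shape."""
    return a * np.exp(-b * theta)

theta = np.arange(40.0, 136.0, 5.0)                      # angular bins [deg]
rng = np.random.default_rng(1)
bg_ortho  = rng.poisson(smooth_bg(theta, 5000.0, 0.03))  # orthogonal region (background only)
bg_signal = rng.poisson(smooth_bg(theta, 800.0, 0.03))   # signal region, background component

popt, _ = curve_fit(smooth_bg, theta, bg_ortho, p0=(4000.0, 0.02))

low = theta < 90.0                                        # normalisation sub-range (40-90 deg)
scale = bg_signal[low].sum() / smooth_bg(theta[low], *popt).sum()
predicted_high = scale * smooth_bg(theta[~low], *popt)    # expected background for 90-135 deg
print(predicted_high.round(1))
```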

Figure 4. Angular correlation between positrons and electrons emitted in Helium nuclear transitions to the ground state. Red dots are data in the signal region (sum of positron and electron energies between 19.5 and 22 MeV), and black dots are data in the orthogonal region (sum of energies between 5 and 19 MeV). The smooth blue curve is a fit to the orthogonal-region data, which is then re-scaled to be used as the background estimate in the signal region. The blue, black, and magenta histograms are Monte Carlo simulations of expected backgrounds. The green curve is a fit to the data under the hypothesis of a new “X17” particle.

In addition to the data points and fitted curves mentioned above, the figure also reports the researchers’ estimates of the physics processes that cause the observed background. These are the black and magenta histograms, and their sum is the blue histogram. Finally, there is also a green curve on top of the red data, which is the best fit to a signal hypothesis, that is, assuming that a new particle with mass 16.84 ± 0.16 MeV is responsible for the bump in the high-angle region of the angular correlation plot.

The other main observable, the invariant mass of the electron-positron pair, is shown below.

Figure 5. Invariant mass distribution of emitted electrons and positrons in the transitions of Helium nuclei to the ground state. Red asterisks are data in the signal region (sum of electron and positron energies between 19.5 and 22 MeV), and black asterisks are data in the orthogonal region (sum of energies between 5 and 19 MeV). The green smooth curve is the best fit to the data assuming the existence of a 17 MeV particle.

The invariant mass is constructed from the equation

m_{e^+e^-} = \sqrt{\left(1 - y^2\right) E_{\textrm{tot}}^2 \, \sin^2(\theta/2) + 2m_e^2 \left(1 + \frac{1+y^2}{1-y^2}\, \cos \theta \right)}

where all relevant quantities refer to electron and positron observables: Etot is as before the sum of their energies, y is the ratio of their energy difference over their sum (y \equiv (E_{e^+} - E_{e^-})/E_{\textrm{tot}}), θ is the angular separation between them, and me is the electron and positron mass. This is just one of the standard ways to calculate the invariant mass of two daughter particles in a reaction, when the known quantities are the angular separation between them and their individual energies in the lab frame.
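
As a quick check of the formula, the snippet below implements it directly; the example energies and opening angle are invented, chosen only to land near the claimed ~17 MeV region.

```python
# Direct implementation of the invariant-mass formula above, as a sanity
# check. The kinematic inputs in the example are made up.
import math

def pair_mass(E_plus, E_minus, theta_deg, m_e=0.511):
    """Invariant mass [MeV] of an e+e- pair from lab-frame energies [MeV]
    and opening angle [deg], keeping the O(m_e^2) correction term."""
    E_tot = E_plus + E_minus
    y = (E_plus - E_minus) / E_tot
    th = math.radians(theta_deg)
    m2 = ((1 - y**2) * E_tot**2 * math.sin(th / 2)**2
          + 2 * m_e**2 * (1 + (1 + y**2) / (1 - y**2) * math.cos(th)))
    return math.sqrt(m2)

print(f"{pair_mass(10.5, 10.0, 115.0):.1f} MeV")  # roughly 17 MeV for these inputs
```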

The red asterisks are again the data in the signal region (19.5 MeV < Etot < 22 MeV), and the black asterisks are the data in the orthogonal region (5 MeV < Etot < 19 MeV). The green curve is a new best fit to a signal hypothesis, and in this case the best-fit scenario is a new particle with mass 17.00 ± 0.13 MeV, which is statistically compatible with the fit in the angular correlation plot. The significance of this fit is 7.2 sigma, which means the probability of the background hypothesis (i.e. no new particle) producing such large fluctuations in data is less than 1 in 390,682,215,445! It is remarkable and undeniable that a peak shows up in the data — the only question is whether it really is due to a new particle, or whether perhaps the authors failed to consider all possible backgrounds, or even whether there may have been an unexpected instrumental anomaly of some sort.

According to the authors, the same particle that could explain the anomaly in the Beryllium case could also explain the anomaly here. I think this claim needs independent validation by the theory community. In any case, it is very interesting that similar excesses show up in two “independent” systems such as the Beryllium and the Helium transitions.

Some possible theoretical interpretations

There are a few particle interpretations of this result that can be made compatible with current experimental constraints. Here I’ll just briefly summarize some of the possibilities. For a more in-depth view from a theoretical perspective, check out Flip’s “Delirium over Beryllium” bite.

The new X17 particle could be the vector gauge boson (or mediator) of a protophobic force, i.e. a force that interacts preferentially with neutrons but not so much with protons. This would certainly be an unusual and new force, but not necessarily impossible. Theorists have to work hard to make this idea work, as you can see here.

Another possibility is that the X17 is a vector boson with axial couplings to quarks, which could explain, in the case of the original Beryllium anomaly, why the excess appears in only some transitions but not others. There are complete theories proposed with such vector bosons that could fit within current experimental constraints and explain the Beryllium anomaly, but they also include new additional particles in a dark sector to make the whole story work. If this is the case, then there might be new accessible experimental observables to confirm the existence of this dark sector and the vector boson showing up in the nuclear transitions seen by the Atomki group. This model is proposed here.

However, an important caveat about these explanations is in order: so far, they only apply to the Beryllium anomaly. I believe the theory community needs to validate the authors’ assumption that the same particle could explain this new anomaly in Helium, and to check that there aren’t any additional experimental constraints associated with the Helium signature. As far as I can tell, this has not been shown yet. In fact, the similar invariant mass is the only evidence so far that this could be due to the same particle. An independent and thorough theoretical confirmation is needed for high-stakes claims such as this one.

Questions and criticisms

In the years since the first Beryllium anomaly result, a few criticisms about the paper and about the experimental team’s history have been laid out. I want to mention some of those to point out that this is still a contentious result.

First, there is the group’s history of repeated claims of new particle discoveries every so often since the early 2000s. After these claims were refuted by more precise measurements, there was never a proper and thorough discussion of why the original excesses were seen in the first place, or why they subsequently disappeared. Especially for such groundbreaking claims, a consistent track record of solid experimental practice is very valuable when making new ones.

Second, others have mentioned that some fit curves seem to pass very close to most data points (n.b. I can’t seem to find the blog post where I originally read this or remember its author – if you know where it is, please let me know so I can give proper credit!). Take a look at the plot below, which shows the observed Etot distribution. In experimental plots, there is usually a statistical fluctuation of data points around the “mean” behavior, which is natural and expected. Below, in contrast, the data points are remarkably close to the fit. This doesn’t in itself mean there is anything wrong here, but it does raise an interesting question of how the plot and the fit were produced. It could be that this is not a fit to some prior expected behavior, but just an “interpolation”. Still, if that’s the case, then it’s not clear (to me, at least) what role the interpolation curve plays.

Figure 6. Sum of electron and positron energies distribution produced in the decay of Helium nuclei to the ground state. Black dots are data and the red curve is a fit.

Third, there is also the background fit to data in Figure 4 (black asterisks and blue line). As Ethan Siegel has pointed out, you can see how well the background fit matches data, but only in the 40 to 90 degrees sub-range. In the 90 to 135 degrees sub-range, the background fit is noticeably poorer. In a less favorable interpretation of the results, this may indicate that whatever effect is causing the anomalous peak in the red asterisks is also causing the less-than-ideal fit in the black asterisks, where no signal due to a new boson is expected. If the excess were caused by some instrumental error, for instance, you’d expect to see effects in both curves. In any case, the background fit (blue curve) constructed from the black asterisks does not actually model the bump region very well, which weakens the argument for using it throughout all of the data. A more careful analysis of the background is warranted here.

Fourth, another criticism comes from the simplistic statistical treatment the authors employ on the data. They fit the red asterisks in Figure 4 with the “PDF”:

\textrm{PDF}(e^+ e^-) = N_{Bg} \times \textrm{PDF}(\textrm{data}) + N_{Sig} \times \textrm{PDF}(\textrm{sig})

where PDF stands for “Probability Density Function”, and in this case they are combining two PDFs: one derived from data, and one assumed from the signal hypothesis. The two PDFs are then “re-scaled” by the expected number of background events (N_{Bg}) and signal events (N_{sig}), according to Monte Carlo simulations. However, as others have pointed out, when you multiply a PDF by a yield such as N_{Bg}, you no longer have a PDF! A variable that incorporates yields is no longer a probability. This may just sound like a semantics game, but it does actually point to the simplicity of the treatment, and makes one wonder if there could be additional (and perhaps more serious) statistical blunders made in the course of data analysis.
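
The point is easy to see numerically. In the sketch below (with arbitrary stand-in shapes and yields, nothing from the paper), scaling normalised PDFs by event yields gives an object that integrates to the total expected number of events rather than to one; it is an expected-event density of the kind used in extended maximum-likelihood fits, not a PDF.

```python
# Tiny numerical illustration: once normalised PDFs are scaled by yields,
# the result no longer integrates to one. Shapes and yields are arbitrary
# stand-ins, not the paper's actual distributions.
import numpy as np
from scipy.stats import norm, expon

x = np.linspace(0.0, 100.0, 10001)               # a stand-in invariant-mass axis [MeV]
dx = x[1] - x[0]

pdf_bg  = expon(scale=10.0).pdf(x)               # normalised background shape (arbitrary)
pdf_sig = norm(loc=17.0, scale=0.5).pdf(x)       # normalised signal shape (arbitrary)

N_bg, N_sig = 500.0, 50.0                        # made-up event yields
model = N_bg * pdf_bg + N_sig * pdf_sig          # the yield-scaled "PDF" criticised above

print((pdf_bg * dx).sum())   # ~1.0: a genuine PDF integrates to one
print((model * dx).sum())    # ~550 = N_bg + N_sig: not a probability density anymore
```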

Fifth, there is also of course the fact that no other experiments have seen this particle so far. This doesn’t mean that it’s not there, but particle physics is in general a field with very few “low-hanging fruits”. Most of the “easy” discoveries have already been made, and so every claim of a new particle must be compatible with dozens of previous experimental and theoretical constraints. It can be a tough business. Another example of this is the DAMA experiment, which has made claims of dark matter detection for almost 2 decades now, but no other experiments were able to provide independent verification (and in fact, several have provided independent refutations) of their claims.

I’d like to add my own thoughts to the previous list of questions and considerations.

The authors mention they correct the calibration of the detector efficiency with a small energy-dependent term based on a GEANT3 simulation. The updated version of the GEANT library, GEANT4, has been available for at least 20 years. I haven’t actually seen any results that use GEANT3 code since I’ve started in physics. Is it possible that the authors are missing a rather large effect in their physics expectations by using an older simulation library? I’m not sure, but just like the simplistic PDF treatment and the troubling background fit to the signal region, it doesn’t inspire as much confidence. It would be nice to at least have a more detailed and thorough explanation of what the simulation is actually doing (which maybe already exists but I haven’t been able to find?). This could also be due to a mismatch in the nuclear physics and high-energy physics communities that I’m not aware of, and perhaps nuclear physicists tend to use GEANT3 a lot more than high-energy physicists.

Also, it’s generally tricky to use Monte Carlo simulation to estimate efficiencies in data. One needs to make sure the experimental apparatus is well understood and be confident that their simulation reproduces all the expected features of the setup, which is often difficult to do in practice, as collider experimentalists know too well. I’d really like to see a more in-depth discussion of this point.

Finally, a more technical issue: from the paper, it’s not clear to me how the best fit to the data (red asterisks) was actually constructed. The authors claim:

Using the composite PDF described in Equation 1 we first performed a list of fits by fixing the simulated particle mass in the signal PDF to a certain value, and letting RooFit estimate the best values for NSig and NBg. Letting the particle mass lose in the fit, the best fitted mass is calculated for the best fit […]

When they let loose the particle mass in the fit, do they keep the “NSig” and “NBg” found with a fixed-mass hypothesis? If so, which fixed-mass NSig and which NBg do they use? And if not, what exactly was the purpose of performing the fixed-mass fits originally? I don’t think I fully got the point here.

Where to go from here

Despite the many questions surrounding the experimental approach, it’s still an interesting result that deserves further exploration. If it holds up with independent verification from other experiments, it would be an undeniable breakthrough, one that particle physicists have been craving for a long time now.

And independent verification is key here. Ideally, other experiments need to confirm that they also see this new boson before the acceptance of this result grows wider. Many upcoming experiments will be sensitive to a new X17 boson, as the original paper points out, so in the next few years we will actually have the possibility to probe this claim from multiple angles. Dedicated standalone experiments at the LHC such as FASER and CODEX-b will be able to probe signatures of long-lived particles produced at the proton-proton interaction point, and so should be sensitive to new particles such as axion-like particles (ALPs).

Another experiment that could have sensitivity to X17, and which has come online this year, is PADME (disclaimer: I am a collaborator on this experiment). PADME stands for Positron Annihilation into Dark Matter Experiment, and its main goal is to look for dark photons produced in the annihilation between positrons and electrons. You can find more information about PADME here, and I will write a more detailed post about the experiment in the future, but the gist is that PADME is a fixed-target experiment that fires a 550 MeV positron beam at a fixed target made of diamond (carbon atoms). The annihilation between positrons in the beam and electrons in the carbon atoms could give rise to a photon plus a new dark photon via kinetic mixing. By measuring the incoming positron and the outgoing photon momenta, we can infer the missing mass carried away by the (invisible) dark photon.

If the dark photon is the X17 particle (a big if), PADME might be able to see it as well. Our dark photon mass sensitivity is roughly between 1 and 22 MeV, so a 17 MeV boson would be within our reach. But more interestingly, using the knowledge of where the new particle hypothesis lies, we might actually be able to set our beam energy to produce the X17 in resonance (using a beam energy of roughly 282 MeV). The resonance beam energy increases the number of X17s produced and could give us even higher sensitivity to investigate the claim.
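
The ~282 MeV figure is simple kinematics: for a positron beam hitting atomic electrons (essentially at rest), the centre-of-mass energy reaches the X17 mass when the beam energy is roughly m_X²/(2 m_e). A two-line check:

```python
# Where the ~282 MeV figure comes from: resonant production of a 17 MeV
# state on electrons at rest. Pure kinematics, no PADME-specific code.
m_X = 17.0    # hypothesised X17 mass [MeV]
m_e = 0.511   # electron mass [MeV]

# s = 2*m_e*E_beam + 2*m_e**2 equals m_X**2 at resonance (target at rest)
E_res = (m_X**2 - 2 * m_e**2) / (2 * m_e)
print(f"resonant positron beam energy ~ {E_res:.0f} MeV")  # ~282 MeV
```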

An important caveat is that PADME can provide independent confirmation of X17, but cannot refute it. If the coupling between the new particle and our ordinary particles is too feeble, PADME might not see evidence for it. This wouldn’t necessarily reject the claim by Atomki, it would just mean that we would need a more sensitive apparatus to detect it.  This might be achievable with the next generation of PADME, or with the new experiments mentioned above coming online in a few years.

Finally, in parallel with the experimental probes of the X17 hypothesis, it’s critical to continue gaining a better theoretical understanding of this anomaly. In particular, an important check is whether the proposed theoretical models that could explain the Beryllium excess also work for the new Helium excess. Furthermore, theorists have to work very hard to make these models compatible with all current experimental constraints, so they can look a bit contrived. Perhaps a thorough exploration of the theory landscape could lead to more models capable of explaining the observed anomalies as well as evading current constraints.

Conclusions

The recent results from the Atomki group raise the stakes in the search for Physics Beyond the Standard Model. The reported excesses in the angular correlation between electron-positron pairs in two different systems certainly seem intriguing. However, there are still a lot of questions surrounding the experimental methods, and given the nature of the claims made, a crystal-clear understanding of the results and the setup needs to be achieved. Experimental verification by at least one independent group is also required if the X17 hypothesis is to be confirmed. Finally, parallel theoretical investigations that can explain both excesses are highly desirable.

As Flip mentioned after the first excess was reported, even if this excess turns out to have an explanation other than a new particle, it’s a nice reminder that there could be interesting new physics in the light mass parameter space (e.g. MeV-scale), and a new boson in this range could also account for the dark matter abundance we see leftover from the early universe. But as Carl Sagan once said, extraordinary claims require extraordinary evidence.

In any case, this new excess gives us a chance to witness the scientific process in action in real time. The next few years should be very interesting, and hopefully will see the independent confirmation of the new X17 particle, or a refutation of the claim and an explanation of the anomalies seen by the Atomki group. So, stay tuned!

Further reading

CERN news

Ethan Siegel’s Forbes post

Flip Tanedo’s “Delirium over Beryllium” bite

Matt Strassler’s blog

Quanta magazine article on the original Beryllium anomaly

Protophobic force interpretation

Vector boson with axial couplings to quarks interpretation

Lazy photons at the LHC

Title: “Search for long-lived particles using delayed photons in proton-proton collisions at √s = 13 TeV”

Author: CMS Collaboration

Reference: https://arxiv.org/abs/1909.06166 (submitted to Phys. Rev. D)

An interesting group of searches for new physics at the LHC that has been gaining more attention in recent years relies on reconstructing and identifying physics objects that are displaced from the original proton-proton collision point. Several theoretical models predict such signatures due to the decay of long-lived particles (LLPs) that are produced in these collisions. Theories with LLPs typically feature a suppression of the available phase space for the decay of these particles, or a weak coupling between them and Standard Model (SM) particles.

An appealing feature of these signatures is that backgrounds can be greatly reduced by searching for displaced objects, since most SM physics produces only prompt particles (i.e. particles produced immediately following the collision, within the primary vertex resolution). Given that the sensitivity to new physics is determined both by the presence of signal events and by the absence of background events, the expectation of low SM backgrounds increases the sensitivity to models with LLPs.

A recent search for new physics with LLPs performed by the CMS Collaboration uses delayed photons as its driving experimental signature. For this search, events of interest contain delayed photons and missing transverse momentum (MET)[1]. This signature is predicted at the LHC by various theories such as Gauge-Mediated Supersymmetry Breaking (GMSB), where long-lived supersymmetric particles produced in proton-proton (pp) collisions decay in a peculiar pattern, giving rise to stable particles that escape the detector (hence the MET) and also photons that are displaced from the interaction point. The expected signature is shown in Figure 1.

Figure 1. Example Feynman diagrams of Gauge-Mediated Supersymmetry Breaking (GMSB) processes that can give rise to final-state signatures at CMS consisting of two (left) or one (right) displaced photons and supersymmetric particles that escape the detector and show up as missing transverse momentum (MET).

The main challenge of this analysis is the physics reconstruction of delayed photons, something that the LHC experiments were not originally designed to do. Both the detector and the physics software are optimized for prompt objects originating from the pp interaction point, where the vast majority of relevant physics happens at the LHC. This difference is illustrated in Figure 2.

Figure 2. Difference between prompt and displaced photons as seen at CMS with the electromagnetic calorimeter (ECAL). The ECAL crystals are oriented towards the interaction point and so for displaced photons both the shape of the electromagnetic shower generated inside the crystals and the arrival time of the photons are different from prompt photons produced at the proton-proton collision. Source: https://cms.cern/news/its-never-too-late-photons-cms

In order to reconstruct delayed photons, a separate reconstruction algorithm was developed that specifically looked for signs of photons out-of-sync with the pp collision. Before this development, out-of-time photons in the detector were considered something of an ‘incomplete or misleading reconstruction’ and discarded from the analysis workflow.

In order to use delayed photons in analysis, a precise understanding of CMS’s calorimeter timing capabilities is required. The collaboration measured the timing resolution of the electromagnetic calorimeter to be around 400 ps, and that sets the detector’s sensitivity to delayed photons.

Other relevant components of this analysis include a dedicated trigger (for 2017 data), developed to select events consistent with a single displaced photon. The identification of a displaced photon at the trigger level relies on the shape of the electromagnetic shower it deposits on the calorimeter: displaced photons produce a more elliptical shower, whereas prompt photons produce a more circular one. In addition, an auxiliary trigger (used for 2016 data, before the special trigger was developed) requires two photons, but no displacement.

The event selection requires one or two well-reconstructed high-momentum photons in the detector (depending on the year), and at least 3 jets. The two main kinematic features of the event, the photon arrival time (i.e. how delayed it is relative to the pp collision) and the MET, are not cut on directly but are instead used to extract the signal and background yields (see below).

In general for LLP searches, it is difficult to estimate the expected background from Monte Carlo (MC) simulation alone, since a large fraction of backgrounds are due to detector inefficiencies and/or physics events that have poor MC modeling. Instead, this analysis estimates the background from the data itself, by using the so-called ‘ABCD’ method.

The ABCD method consists of placing events in data that pass the signal selection on a 2D histogram, with a suitable choice of two kinematic quantities as the X and Y variables. These two variables must be uncorrelated, a very important assumption. Then this histogram is divided into 4 regions or ‘bins’, and the one with the highest fraction of expected signal events becomes the ‘signal’ bin (call it the ‘C’ bin). The other three bins should contain mostly background events, and with the assumption that X and Y variables are uncorrelated, it is possible to predict the background in C by using the observed backgrounds in A, B, and D:

C_{\textrm{pred}} = \frac{B_{\textrm{obs}} \times D_{\textrm{obs}}}{A_{\textrm{obs}}}

Using this data-driven background prediction for the C bin, all that remains is to compare the actual observed yield in C to figure out if there is an excess, which could be attributed to the new physics under investigation:

\textrm{excess} = C_{\textrm{obs}} - C_{\textrm{pred}}
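
In its simplest form, the ABCD bookkeeping fits in a few lines; the snippet below uses invented yields purely to illustrate the arithmetic (the real analysis also propagates uncertainties, which this sketch ignores).

```python
# Bare-bones version of the ABCD method described above, with invented
# event counts. A, B and D are background-dominated control bins; the
# background expected in the signal bin C is predicted from them.
def abcd_prediction(A_obs, B_obs, D_obs):
    """Predicted background in the signal bin, assuming the two binning
    variables are uncorrelated."""
    return B_obs * D_obs / A_obs

A_obs, B_obs, D_obs = 400.0, 80.0, 50.0   # hypothetical control-bin yields
C_obs = 14.0                              # hypothetical signal-bin yield

C_pred = abcd_prediction(A_obs, B_obs, D_obs)   # 80 * 50 / 400 = 10 expected background
print(f"predicted background in C: {C_pred:.1f}")
print(f"excess over prediction:    {C_obs - C_pred:.1f}")
```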

As an example, Table 1 shows the number of background events predicted and the number of events in data observed for the 2016 run.

Table 1. Observed yield in data (N_{\textrm{obs}}^{\textrm{data}}) and predicted background yields (N_{\text{bkg(no C)}}^{\textrm{post-fit}} and N_{\textrm{bkg}}^{\textrm{post-fit}}) in the LHC 2016 run, for all four bins (C is the signal-enriched bin). The observed data is entirely compatible with the predicted background, and no excess is seen.

Combining all the data, the CMS Collaboration did not find an excess of events over the expected background, which would have suggested evidence of new physics. Using statistical analysis, the collaboration can place upper limits on the possible mass and lifetime of new supersymmetric particles predicted by GMSB, based on the absence of excess events. The final result of this analysis is shown in Figure 3, where a mass of up to 500 GeV for the neutralino particle is excluded for lifetimes of 1 meter (here we measure lifetime in units of length by multiplying by the speed of light: c\tau = 1 meter).

Figure 3. Exclusion plots for the GMSB model, featuring exclusion contours of masses and lifetimes for the lightest supersymmetric particle (the neutralino). At its most sensitive point (lifetimes around 1 meter) the CMS result excludes neutralino masses below roughly 500 GeV, while for lower masses (100-200 GeV) the excluded lifetimes extend up to around 100 meters.

The mass coverage of this search is higher than the previous search done by the ATLAS Collaboration with run 1 data (i.e. years 2010-2012 only), with a much higher sensitivity to longer lifetimes (up to a factor of 10 depending on the mass). But the ATLAS detector has a longitudinally-segmented calorimeter which allows them to precisely measure the direction of displaced photons, and when they do release their results for this search using run 2 data (2016-2018), it should also feature quite a large gain in sensitivity, potentially overshadowing this CMS result. So stay tuned for this exciting cat-and-mouse game between LHC experiments!

Further Reading

Footnotes

1: Here we use the notation MET instead of P_T^{\textrm{miss}} when referring to missing transverse momentum for typesetting reasons.

When light and light collide

Title: “Evidence for light-by-light scattering in heavy-ion collisions with the ATLAS detector at the LHC”

Author: ATLAS Collaboration

Reference: doi:10.1038/nphys4208

According to classical wave theory, two electromagnetic waves that happen to cross each other in space will not scatter off one another. In fact, this is a crucial feature of the conventional definition of a wave, in contrast to a corpuscle or particle: when two waves meet, they briefly occupy the same space at the same time, each without “knowing” about the other’s existence, and then continue on their way unchanged. Particles, on the other hand, do interact (or scatter) when they get close to each other, and the results of this encounter can be measured.

The mathematical backbone for this idea is the so-called superposition principle. It arises from the fact that the equations describing wave propagation are linear in the fields and sources, meaning that these quantities never appear squared or cubed. When two such waves happen to overlap, the linearity of the equations implies that the total field is just the sum of the two separate waves, each traveling along as if the other were not there: the equations cannot produce any new effect from their encounter.

This story gets more interesting after the insights by Albert Einstein who predicted the quantization of light, and the subsequent formal development of Quantum Electrodynamics in the 1940s by Shin’ichirō Tomonaga, Julian Schwinger and Richard Feynman. In the quantum theory of light, it is no longer “just” a wave, but rather a dual entity that can be treated as both a wave and as a particle called photon.

The new interpretation of light as a particle opens up the possibility that two electromagnetic waves may indeed interact when crossing each other, just as particles would. The mathematical equivalent statement is that the quantum theory of light yields wave equations that are not entirely linear anymore, but instead contain a small non-linear part. The non-linear contributions have to be tiny, otherwise we would already have detected the effects, but nonetheless they are predicted to occur by quantum theory. In this context, it is called light-by-light scattering.

Detection

In 2017, the ATLAS experiment at the LHC observed the first direct evidence of light-by-light scattering, using collisions of relativistic heavy ions. Since this is such a rare phenomenon, it took us a long time to become directly experimentally sensitive to it.

The experiment is based on so-called Ultra-Peripheral Collisions (UPC) of lead ions (Z=82) at a center-of-mass energy of 5.02 TeV per nucleon pair (about 5,000 times the mass of the proton). In UPC events, the two oncoming ions pass each other at large enough distances that they are unlikely to undergo hard scattering, and instead just “graze” each other. This means that the strong force is usually not involved in such interactions, since its range is tiny and essentially confined to within a nucleus. Instead, the electromagnetic interaction dominates in UPC events.

The figure below shows how UPCs proceed. The grazing of two lead ions leads to large electromagnetic fields in the space between the ions, and this interaction can be interpreted as an exchange of photons between them, since photons are the mediator of electromagnetism. Then, by also looking for two final-state photons in the ATLAS detector (the ‘X’ on the right in the figure), the light-by-light process, \gamma \gamma \rightarrow \gamma \gamma, can be probed (\gamma stands for photons).

Left: Feynman diagrams of light-by-light scattering. Two incoming photons interact and two outgoing photons are produced. Right: how to measure light-by-light scattering using Ultra-Peripheral Collisions (UPC) with lead ions in the LHC. Source: https://doi.org/10.1038/nphys4208

In order to isolate this particular process from all the other physics happening at the LHC, a series of increasingly tighter selections (or cuts) is applied to the acquired data. The final cuts are optimized to obtain the maximum possible sensitivity to the light-by-light scattering process. This sensitivity depends on how likely it is to select a candidate signal event, and similarly on how unlikely it is to select a background event. The main background processes that could mimic the signature of light-by-light scattering (two photons in the ATLAS calorimeter in a UPC lead-ion data run) include \gamma \gamma \rightarrow e^+ e^-, where the final-state electrons and positrons are misidentified by the detector as photons; central exclusive production (CEP) of photons in g g \rightarrow \gamma \gamma (where g is a gluon); and hadronic fakes from \pi^0 production in low-pT dijet events, where the neutral pions decay to pairs of photons.

The various applied selections begin with a dedicated trigger which selects events with moderate activity in the calorimeter and very little activity elsewhere. This is what is expected in a lead-ion UPC collision, since the ions just escape down the LHC beam pipe and are not detected, leaving the two photons as the only visible products. Then a set of selections is applied to ensure that the recorded activity in the calorimeter is compatible with two photons. These selections rely mostly on the shape of electromagnetic showers deposited on calorimeter crystals, which varies for different types of incident particles.

Finally, a series of extra selections is applied to minimize the number of possible background events, such as vetoing any events containing charged-particle tracks in the ATLAS tracker, which effectively removes \gamma \gamma \rightarrow e^+ e^- events with electrons and positrons mis-tagged as photons. Note that this also removes about 10% of the real light-by-light signal events, where final-state photons convert into electron-positron pairs after interacting with tracker material, but in this case the trade-off is certainly worth it. Another such selection is the requirement that the transverse momentum of the diphoton system be less than 2 GeV. This removes contributions from other fake-photon backgrounds (such as cosmic-ray muons), because it ensures that the net transverse momentum of the system is small and thus likely to originate in the ion-ion interaction.

The same exact set of selections is applied to both data and Monte Carlo (MC) simulations of the experiment. The MC simulations yield an estimate of how many background and signal events should be expected in data. The table below shows the results.

Selection cuts and number of events left after each cut, applied to both data and MC samples, including backgrounds and signal. Source: https://doi.org/10.1038/nphys4208

The penultimate row contains the yields after all cuts and a comparison between the total number of expected background events (2.6), expected light-by-light scattering events (7.3), and observed data (13). The sum of 2.6 + 7.3 = 9.9 events is certainly compatible with the observed data, given the quoted uncertainties in the last row. In fact, it is possible to estimate the significance of this result by asking how likely it would be for the background-only hypothesis (that is, pretending light-by-light scattering doesn’t exist and only including the backgrounds) to yield 13 or more observed events. This probability is tiny, 5\times10^{-6}, which corresponds to a significance of 4.4 sigma!
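
For reference, converting a p-value into a Gaussian significance is a one-liner; the snippet below reproduces the quoted 4.4 sigma, assuming (as is conventional) a one-sided tail.

```python
# Convert the quoted p-value into a one-sided Gaussian significance.
from scipy.stats import norm

p_value = 5e-6
significance = norm.isf(p_value)    # inverse survival function of a standard normal
print(f"{significance:.1f} sigma")  # ~4.4
```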

In addition to the number of events, in the figure below the paper also plots the diphoton invariant mass distribution for the 13 observed data events (black points), and for the MC simulations (signal in red and backgrounds in blue and gray). This comparison provides further evidence that we do indeed see light-by-light scattering in the ATLAS data.

Left: distribution of diphoton acoplanarity. Right: distribution of diphoton invariant mass with final selection cuts, both on data and MC backgrounds and signal. Source: https://doi.org/10.1038/nphys4208

Finally, given the observed number of events in data and the expected number of MC background events, it is possible to measure the cross-section of light-by-light scattering (as a reminder, the cross-section of a process measures how likely it is to occur in collisions). The ATLAS collaboration calculates the cross-section of light-by-light scattering with the formula:

\sigma_{\text{fid}} = \frac{N_{\text{data}} - N_{\text{bkg}}}{C \times \int L dt}

Where N_{\text{data}} is the number of observed events in data, N_{\text{bkg}} is the number of expected background events, \int L dt is the total amount of data collected (the integrated luminosity), and C is a correction factor which translates all of the detector inefficiencies into a single number. You can think of the entire denominator as the “effective” amount of data that was analyzed, and the numerator as the “effective” number of signal events that was seen. The ratio of the two quantities yields the probability of seeing light-by-light scattering in a single collision. The ATLAS collaboration found this value to be 70 \pm 24 \; (\text{statistical}) \pm 17 \;(\text{systematic}) nb, which agrees with the predicted values of 45 \pm 9 nb and 49 \pm 10 nb within uncertainties.
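
To see how the formula works in practice, here is a short sketch that plugs in the event counts quoted above. The correction factor C and the integrated luminosity are not given in this post, so the values below are placeholders chosen only so that the output lands near the published ~70 nb; they are assumptions, not numbers taken from the paper.

```python
# Plugging the numbers from the text into the cross-section formula. C and
# the integrated luminosity are placeholders (assumed, not from this post),
# chosen so the result lands near the published ~70 nb.
N_data = 13          # observed events
N_bkg  = 2.6         # expected background events
C      = 0.31        # detector correction factor (placeholder, assumed)
lumi   = 480.0       # integrated luminosity [ub^-1] (placeholder, assumed)

sigma_fid_ub = (N_data - N_bkg) / (C * lumi)        # fiducial cross-section [ub]
print(f"sigma_fid ~ {sigma_fid_ub * 1e3:.0f} nb")   # ~70 nb
```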

To conclude, the measurement of light-by-light scattering by ATLAS is an exciting result which offers us a direct glimpse into the stark differences between classical and quantum physics in an accessible (and dare I say) amusing way!