Planckian dark matter: DEAP edition

Title: First direct detection constraints on Planck-scale mass dark matter with multiple-scatter signatures using the DEAP-3600 detector.

Reference: https://arxiv.org/abs/2108.09405.

Here is a broad explainer of the paper via breaking down its title.

Direct detection.

This is the term for a kind of astronomy, 'dark matter astronomy', that has been in action since the 1980s. The word "astronomy" usually evokes telescopes pointing at something in the sky and catching its light. But one could also catch other things, e.g. neutrinos, cosmic rays and gravitational waves, to learn about what's out there: that counts as astronomy too! As touched upon elsewhere in these pages, we think dark matter is flying into Earth at about 300 km/s, making its astronomy a possibility. But we are yet to conclusively catch dark particles. The unique challenge, unlike astronomy with light or neutrinos or gravitational waves, is that we do not quite know the character of dark matter. So we must first imagine what it could be, and design a telescope/detector accordingly. That is challenging, too. We only really know that dark matter exists on the size scale of small galaxies: about 10^{19} metres. Our detectors, meanwhile, are at best a metre across. This vast gulf in scales can only be bridged by theoretical models.

Multiple-scatter signatures.

How heavy all the dark matter is in the neighbourhood of the Sun has been ballparked, but that does not tell us how far apart the dark particles are from each other, i.e. whether they are lightweights huddled close or anvils socially distanced. Usually dark matter experiments (there are dozens around the world!) look for dark particles bunched a few centimetres apart, called WIMPs. This experiment looked, for the first time, for dark particles that may be 30 kilometres apart. In particular, they looked for "MIMPs" (multiply interacting massive particles): dark matter that leaves a "track" in the detector as opposed to the single "burst" characteristic of a WIMP. As explained here, to discover very dilute dark particles like DEAP-3600 wanted to, one must necessarily look for tracks. So they carefully analyzed the waveforms of energy dumps in the detector (e.g., from radioactive material, cosmic muons, etc.) to pick out telltale tracks of dark matter.

Figure above: Simulated waveforms for two benchmark parameters.
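As a rough back-of-the-envelope check of the "30 kilometres apart" figure quoted above (my own estimate, not a number taken from the paper), assume Planck-mass particles make up a local dark matter density of about 0.4 GeV/cm^3:

n \sim \frac{\rho_{\rm local}}{m_{\rm Pl}} \approx \frac{0.4 \ {\rm GeV/cm^3}}{1.2 \times 10^{19} \ {\rm GeV}} \approx 3 \times 10^{-20} \ {\rm cm^{-3}} \ \Rightarrow \ n^{-1/3} \approx 3 \times 10^{6} \ {\rm cm} \approx 30 \ {\rm km}~.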

DEAP-3600 detector.

The largest dark matter detector built so far: the 130 cm-diameter, 3.3 tonne liquid argon-based DEAP ("Dark matter Experiment using Argon Pulse-shape discrimination") in SNOLAB, Canada. Three years of data recorded on whatever passed through the detector were used. That amounts to the greatest integrated flux of dark particles through a detector in any dark matter experiment so far, enabling them to probe the frontier of "diluteness" in dark matter.

Planck-scale mass.

By looking for the dilutest dark particles, DEAP-3600 is the first laboratory experiment to say something about dark matter that may weigh a “Planck mass” — about 22 micrograms, or 1.2 \times 10^{19} GeV/c^2 — the greatest mass an elementary particle could have. That’s like breaking the sound barrier. Nothing prevents you from moving faster than sound, but you’d transition to a realm of new physical effects. Similarly nothing prevents an experiment from probing dark matter particles beyond the Planck mass. But novel intriguing theoretical possibilities for dark matter’s unknown identity are now impacted by this result, e.g., large composite particles, solitonic balls, and charged mini-black holes.
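For reference, the Planck mass is built purely out of fundamental constants (a textbook definition, not something specific to this result):

m_{\rm Pl} = \sqrt{\frac{\hbar c}{G}} \approx 2.2 \times 10^{-8} \ {\rm kg} \approx 22 \ \mu{\rm g} \approx 1.2 \times 10^{19} \ {\rm GeV}/c^2~.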

Constraints.

The experiment did not discover dark matter, but thanks to its extensive search it has mapped out the dark matter masses and dark matter-nucleon scattering cross sections that are now ruled out.


Figure above: For two classes of composite dark matter models, DEAP-3600 limits on the cross section for dark matter scattering on nucleons versus the unknown dark matter mass. Also displayed are previously placed limits from various other searches.

[Full disclosure: the author was part of the experimental search, which was based on proposals in [a] [b]. It is hoped that this search leads the way for other collaborations to, using their own fun tricks, cast an even wider net than DEAP did.]

Further reading.

[1] Proposals for detection of (super-)Planckian dark matter via purely gravitational interactions:

Laser interferometers as dark matter detectors,

Gravitational direct detection of dark matter.

[2] Constraints on (super-)Planckian dark matter from recasting searches in etched plastic and ancient underground mica.

[3] A recent multi-scatter search for dark matter reaching masses of 10^{12} GeV/c^2.

[4] Look out for Benjamin Broerman's PhD thesis featuring results from a multi-scatter search in the bubble chamber-based PICO-40L.

A new boson at 151 GeV?! Not quite yet

Title: “Accumulating Evidence for the Associate Production of a Neutral Scalar with Mass around 151 GeV”

Authors: Andreas Crivellin et al.

Reference: https://arxiv.org/abs/2109.02650

Everyone in particle physics is hungry for the discovery of a new particle not in the standard model, one that will point the way forward to a better understanding of nature. Recent anomalies, the potential violation of Lepton Flavor Universality in B meson decays and the experimental confirmation of the muon g-2 anomaly, have renewed people's hopes that there may be new particles lurking nearby, within our experimental reach. While these anomalies are exciting, if they are confirmed they would be ‘indirect’ evidence for new physics, revealing a concrete hole in the standard model but not definitively saying what it is that fills that hole. We would then really like to ‘directly’ observe what is causing the anomaly, so we can know exactly what the new particle is and study it in detail. A direct observation usually involves being able to produce it in a collider, which is what the general-purpose experiments at the LHC (ATLAS and CMS) are designed to look for.

By now these experiments have performed hundreds of different analyses of their data searching for potential signals of new particles being produced in their collisions, and so far haven’t found anything. But in this recent paper, a group of physicists outside these collaborations argue that they may have missed such a signal in their own data. What’s more, they claim statistical evidence for this new particle at the level of around 5-sigma, which is the threshold usually corresponding to a ‘discovery’ in particle physics. If true, this would of course be huge, but there are definitely reasons to be a bit skeptical.

This group took data from various ATLAS and CMS papers that were looking for something else (mostly studying the Higgs) and noticed that several of them had an excess of events at a particular energy, 151 GeV. In order to see how significant these excesses were in combination, they constructed a statistical model that combines evidence from the many different channels simultaneously. They then evaluate the probability of there being an excess at the same energy in all of these channels without a new particle, find it to be extremely low, and thus claim evidence for this new particle at 5.1-sigma (local).

 
Figure 1 from the paper, showing the purported excess at 151 GeV in four different channels. The plots show the invariant mass spectrum of the hypothetical new boson in the different channels the authors consider. The authors have combined CMS and ATLAS data from different analyses and normalized everything consistently in order to make such a plot. The pink line shows the purported signal at 151 GeV. The largest significance comes from the channel where the new boson decays into two photons and is produced in association with something that decays invisibly (which produces missing energy).
A plot of the significance (p-value) as a function of the mass of the new particle. Combining all the channels, the significance reaches the level of 5-sigma. One can see that the significance is dominated by the diphoton channels.
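To get a feel for what combining channels does to a significance, here is a toy illustration using Fisher's method. This is not the authors' procedure (they perform a full likelihood fit with free signal strengths), and the p-values below are made up purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical local p-values for individual channels (illustrative, NOT from the paper)
p_values = [1e-3, 5e-3, 2e-2, 1e-2]

# Fisher's method: under the null hypothesis, -2*sum(ln p_i) follows a chi-square
# distribution with 2k degrees of freedom, where k is the number of independent channels
statistic = -2.0 * np.sum(np.log(p_values))
combined_p = stats.chi2.sf(statistic, df=2 * len(p_values))

# Express the combined p-value as an equivalent one-sided Gaussian significance
significance = stats.norm.isf(combined_p)
print(f"combined p = {combined_p:.1e}, about {significance:.1f} sigma")
```

Several mild excesses in independent channels can combine into an impressive-looking number, which is exactly why the way the channels are selected matters so much.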

This is of course a big claim, and one reason to be skeptical is that, because they don’t have a definitive model, they cannot predict exactly how much signal you would expect to see in each of these different channels. This means that when combining the different channels, they have to let the relative strength of the signal in each channel be a free parameter. They are also combining data from a multitude of different CMS and ATLAS papers, essentially selected because they show some sort of fluctuation around 151 GeV. This sort of cherry-picking of data, together with no constraints on the relative signal strengths, means that their final significance should be taken with several huge grains of salt.

The authors further attempt to quantify a global significance, which would account for the look-elsewhere effect, but due to the way they have selected their datasets it is not really possible in this case (in this humble experimenter’s opinion).

Still, with all of those caveats, it is clear that there are some excesses in the data around 151 GeV, and it should be worth the experimental collaborations’ time to investigate them further. Most of the data the authors use comes from control regions of analyses that were focused solely on the Higgs, so this motivates the experiments expanding their focus a bit to cover these potential signals. The authors also propose a new search that would be sensitive to their purported signal, which would look for a new scalar decaying to two new particles that decay to pairs of photons and bottom quarks respectively (H->SS*-> γγ bb).

 

In an informal poll on Twitter, most were not convinced a new particle has been found, but the ball is now in ATLAS and CMS’s court to analyze the data themselves and see what they find.

 

 

Read More:

“An Anomalous Anomaly: The New Fermilab Muon g-2 Results”, a ParticleBites post about one recent exciting anomaly

“The flavour of new physics”, CERN Courier article about the recent anomalies relating to lepton flavor violation

“Unveiling Hidden Physics at the LHC”, a recent whitepaper that contains a good review of the recent anomalies relevant for LHC physics

For a good discussion of this paper claiming a new boson, see this Twitter thread

Universality of Black Hole Entropy

A range of supermassive black holes lights up this new image from NASA’s NuSTAR. All of the dots are active black holes tucked inside the hearts of galaxies, with colors representing different energies of X-ray light.

It was not until the past few decades that physicists made remarkable experimental advances in the study of black holes, such as those from the Event Horizon Telescope and the Laser Interferometer Gravitational-Wave Observatory.

On the theoretical side, there are still lingering questions regarding the thermodynamics of these objects. It is well known that black holes have a simple formula for their entropy: as first postulated by Jacob Bekenstein and Stephen Hawking, the entropy is proportional to the area of the event horizon. The universality of this formula is quite impressive and has stood the test of time.

However, there is more to the story of black hole thermodynamics. Even though the entropy is proportional to the area at leading order, there are sub-leading terms that also contribute. Theoretical physicists like to focus on the logarithmic corrections to this formula and investigate whether they are just as universal as the leading term.
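Schematically, and suppressing factors of k_B, c and \hbar, the expansion being probed has the form (a generic parameterization rather than the specific result of the paper)

S = \frac{A}{4 \ell_{\rm Pl}^2} + C \log\!\left(\frac{A}{\ell_{\rm Pl}^2}\right) + \ldots~,

where \ell_{\rm Pl} is the Planck length and the question is how universal the coefficient C of the logarithm is.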

Examining a certain class of black holes in four dimensions, Hristov and Reys have shown that such a universal result may exist. They focused on a set of spacetimes that asymptote, at large radial distance, to a negatively curved spacetime called Anti-de Sitter space. These Anti-de Sitter spacetimes have been at the forefront of high energy theory due to the AdS/CFT correspondence.

Moreover, they found that the logarithmic term is proportional to the black hole's Euler characteristic, a topologically invariant quantity, multiplied by a single dynamical coefficient that depends on the spacetime background. Their work is a stepping stone toward understanding the structure of the entropy of these asymptotically AdS black holes.

How to find invisible particles in a collider

You might have heard that one of the big things we are looking for in collider experiments is the ever-elusive dark matter particle. But given that dark matter particles are expected to interact very rarely with regular matter, how would you know if you happened to make some in a collision? The so-called ‘direct detection’ experiments have to operate giant multi-ton detectors in extremely low-background environments in order to be sensitive to an occasional dark matter interaction. In the noisy environment of a particle collider like the LHC, in which collisions producing sprays of particles happen every 25 nanoseconds, the extremely rare interaction of dark matter with our detector is likely to be missed. But instead of finding dark matter by seeing it in our detector, we can instead find it by not seeing it. That may sound paradoxical, but it’s how most collider-based searches for dark matter work.

The trick is based on every physicist’s favorite principle: the conservation of energy and momentum. We know that energy and momentum will be conserved in a collision, so if we know the initial momentum of the incoming particles and measure everything that comes out, then any invisible particles produced will show up as an imbalance between the two. In a proton-proton collider like the LHC we don’t know the initial momentum of the particles along the beam axis, but we do know that they were traveling along that axis. That means that the net momentum in the direction perpendicular to the beam axis (the ‘transverse’ direction) should be zero. So if we see a momentum imbalance in the transverse plane, we know that there is some ‘invisible’ particle traveling in the opposite direction.

A sketch of what the signature of an invisible particle would look like in a detector. Note this is a 2D cross section of the detector, with the beam axis traveling through the center of the diagram. There are two signals measured in the detector moving ‘up’ away from the beam pipe. Momentum conservation means there must have been some particle produced which is traveling ‘down’ and was not measured by the detector. Figure borrowed from here.
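Here is a minimal sketch of the bookkeeping described above, with invented numbers rather than a real reconstruction algorithm:

```python
import math

# Each visible particle: (transverse momentum pt in GeV, azimuthal angle phi in radians).
# These values are made up for illustration.
visible = [(45.0, 0.3), (38.0, 0.4), (25.0, 2.8), (12.0, -2.9)]

# Sum the transverse momentum components of everything the detector saw
sum_px = sum(pt * math.cos(phi) for pt, phi in visible)
sum_py = sum(pt * math.sin(phi) for pt, phi in visible)

# The missing transverse momentum is minus that vector sum:
# whatever is needed to balance out the visible particles
met = math.hypot(sum_px, sum_py)
met_phi = math.atan2(-sum_py, -sum_px)
print(f"missing transverse momentum = {met:.1f} GeV at phi = {met_phi:.2f}")
```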

We normally refer to the amount of transverse momentum imbalance in an event as its ‘missing momentum’. Any collision in which an invisible particle was produced will have missing momentum as a tell-tale sign. But while it is a very interesting signature, missing momentum can actually be very difficult to measure. That’s because in order to tell if there is anything missing, you have to accurately measure the momentum of every particle in the collision. Our detectors aren’t perfect; any particles we miss, or whose momentum we mis-measure, will show up as a ‘fake’ missing energy signature.

A picture of a particularly noisy LHC collision, with a large number of tracks. Can you tell if there is any missing energy in this collision? It’s not so easy… Figure borrowed from here.

Even if you can measure the missing energy well, dark matter particles are not the only ones invisible to our detector. Neutrinos are notoriously difficult to detect and will not get picked up by our detectors, producing a ‘missing energy’ signature. This means that any search for new invisible particles, like dark matter, has to understand the background of neutrino production (often from the decay of a Z or W boson) very well. No one ever said finding the invisible would be easy!

However, particle physicists have been studying these processes for a long time, so we have gotten pretty good at measuring missing energy in our events and modeling the standard model backgrounds. Missing energy is a key tool that we use to search for dark matter, supersymmetry and other physics beyond the standard model.

Read More:

“What happens when energy goes missing?”, ATLAS blog post by Julia Gonski

“How to look for supersymmetry at the LHC”, blog post by Matt Strassler

“Performance of missing transverse momentum reconstruction with the ATLAS detector using proton-proton collisions at √s = 13 TeV” Technical Paper by the ATLAS Collaboration

“Search for new physics in final states with an energetic jet or a hadronically decaying W or Z boson and transverse momentum imbalance at √s= 13 TeV” Search for dark matter by the CMS Collaboration

Strings 2021 – an overview

Strings 2021 Flyer

It was that time of year again, when the entire string theory community comes together to discuss current research programs, the status of string theory and, more recently, the social issues common in the field. This annual conference has been held in various countries, but for the first time in its 35-year history it was hosted in Latin America, at the ICTP South American Institute for Fundamental Research (ICTP-SAIFR).

One positive aspect of its virtual platform has been the increase in the number of participants attending the conference. Similar to Strings 2020 held in South Africa, more than two thousand participants were registered for the conference. In addition to research talks on technical subtopics, participants were involved in daily informal discussions on topics such as the black hole information paradox, ensemble averaging, and cosmology and string theory. More junior participants were involved in the poster sessions and gong shows, held in the first week of the conference.

One particular discussion session I would like to point out was the panel on four generations of women in string theory, featuring women from different age groups discussing how they have dealt with issues of gender and implicit bias in their current or previous roles in academia.

To say the very least, the conference was a major success and has shown the effectiveness of virtual platforms for upcoming years, possibly including Strings 2022 to be held in Vienna.

For the string theory enthusiasts reading this, recordings of the conference can be found here.

Might I inquire?

“Is N=2 large?” queried Kitano, Yamada and Yamazaki in their paper title. Exactly five months later, they co-wrote with Matsudo a paper titled “N=2 is large”, proving that their question was, after all, rhetorical.

Papers ask the darndest things. Collected below are titular posers from the field’s literature that keep us up at night. 

Image: Rahmat Dwi Kahyo, Noun Project.

Who?

Who you gonna call?

Who is afraid of quadratic divergences?
Who is afraid of CPT violations?

How?

How big are penguins?

How bright is the proton?
How brightly does the glasma shine?
How bright can the brightest neutrino source be?

How stable is the photon?
(Abstract: “Yes, the photon.”)

How heavy is the cold photon?

How good is the Villain approximation?
How light is dark?

How many solar neutrino experiments are wrong?
How many thoughts are there?

How would a kilonova look on camera?
How does a collapsing star look?

How much information is in a jet?

How fast can a black hole eat?

How straight is the Linac and how much does it vibrate?
How do we build Bjorken’s 1000 TeV machine?

How black is a constituent quark?

How neutral are atoms?

How long does hydrogen live?

How does a pseudoscalar glueball come unglued?

How warm is too warm?

How degenerate can we be?

How the heck is it possible that a system emitting only a dozen particles can be described by fluid dynamics?

Bonus

How I spent my summer vacation

Why?

Why is F^2_\pi \gamma^2_{\rho \pi \pi}/m^2_\rho \cong 0?

Why trust a theory?

Why be natural?

Why do things fall?

Why do we flush gas in gaseous detectors?

Why continue with nuclear physics?

What and why are Siberian snakes?

Why does the proton beam have a hole?

Why are physicists going underground?

Why unify?

Why do nucleons behave like nucleons inside nuclei and not like peas in a meson soup?

Why i?

Bonus

The best why

Why I would be very sad if a Higgs boson were discovered

Why the proton is getting bigger

Why I think that dark matter has large self-interactions

Why Nature appears to ‘read books on free field theory’

String Dualities and Corrections to Gravity

Based on arXiv:2012.15677 [hep-th]

Figure 1: a torus is an example of a geometry that has T-duality

Physicists have been searching for ways to describe the interplay between gravity and quantum mechanics – quantum gravity – for the last century. The problem of finding a consistent theory of quantum gravity still looms over physicists to this day. Fortunately, string theory is the most promising candidate for such a task.

One of the strengths of string theory is that at low energies, the equations arising from it reduce precisely to Einstein’s theory of general relativity. Let’s break down what this means. First, we must make sure we know the definition of a coupling constant. Theories of physics are typically described by some parameter that signifies the strength of the interaction; this parameter is called the coupling constant of that theory. According to quantum field theory, the value of the coupling constant depends on the energy. We often plot the coupling constant against the logarithm of the energy to understand how the theory behaves at a certain energy scale. The slope of this plot is called the beta function, and a point where this function vanishes is called a fixed point. Fixed points are interesting since they imply that the quantum theory does not have any notion of scale.
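In symbols (a textbook definition, not anything specific to this paper), the beta function measures how a coupling g changes with the energy scale \mu, and a fixed point g_* is where that running stops:

\beta(g) \equiv \mu \frac{dg}{d\mu}~, \qquad \beta(g_*) = 0~.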

Back to string theory: its coupling constant is called α′ (pronounced "alpha-prime"). At weak coupling, when α′ is small, we can similarly compute the beta function for string theory. At the quantum level, string theory must have a vanishing beta function, and at the corresponding fixed point we find that Einstein’s equations of motion emerge. This is quite remarkable!

We can go even further. Due to the smallness of α′, we can expand the beta function perturbatively. All the subleading terms in α′, which are infinite in number, can be regarded as corrections to general relativity. Therefore, we can understand how general relativity is modified by string theory. It is technically challenging to compute these corrections, and little is known about what the full expansion looks like.

Fortunately for physicists, string theories are interesting in other ways that could help us figure out these corrections to gravity. In particular, the energy spectrum of a string on a circle of radius R looks exactly the same as that of a string on a circle of radius α′/R. This relation is called T-duality. An example of a geometry that has this duality is the torus, see Figure 1. Because we know that such dualities must hold for strings, we can use them to guess what the higher-order corrections must look like. Codina, Hohm and Marques took advantage of this idea to find the corrections at the third power of α′. Using a simple scenario where the graviton is the only field in the theory, they were able to predict what the corrections must be.
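To make the statement concrete, the mass spectrum of a closed bosonic string on a circle of radius R is, schematically,

M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \frac{2}{\alpha'}\left(N + \tilde{N} - 2\right)~,

where n is the momentum number and w the winding number. Sending R \to \alpha'/R while swapping n \leftrightarrow w leaves the spectrum untouched, which is the T-duality used as a guide here.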

This method can be applied at higher orders in α′, as well as to theories with more fields than the graviton, although technical challenges still arise. Due to the structure of how T-duality was used, the authors can also use their results to study cosmological models. Finally, the result confirms that string theory should be T-duality invariant at all orders in α′.

 

Measuring the Tau’s g-2 Too

Title: New physics and tau g-2 using LHC heavy ion collisions

Authors: Lydia Beresford and Jesse Liu

Reference: https://arxiv.org/abs/1908.05180

Since April, particle physics has been going crazy with excitement over the recent announcement of the muon g-2 measurement, which may be our first laboratory hint of physics beyond the Standard Model. The paper with the new measurement has racked up over 100 citations in the last month. Most of these papers are theorists proposing various models to try and explain the (controversial) discrepancy between the measured value of the muon’s magnetic moment and the Standard Model prediction. The sheer number of papers shows there are many, many models that can explain the anomaly. So if the discrepancy is real, we are going to need new measurements to whittle down the possibilities.

Given that the current deviation is in the magnetic moment of the muon, one very natural place to look next is the magnetic moment of the tau lepton. The tau, like the muon, is a heavier cousin of the electron. It is the heaviest lepton, coming in at 1.78 GeV, around 17 times heavier than the muon. In many models of new physics that explain the muon anomaly, the shift in the magnetic moment of a lepton is proportional to the mass of the lepton squared. This would explain why we are seeing a discrepancy in the muon’s magnetic moment and not the electron’s (though there is actually currently a small hint of a deviation for the electron too). It also means the tau should be around 280 times more sensitive than the muon to the new particles in these models. The trouble is that the tau has a much shorter lifetime than the muon, decaying away in just 10^{-13} seconds. This means that the techniques used to measure the muon’s magnetic moment, based on magnetic storage rings, won’t work for taus.
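Numerically, if the shift in the anomalous magnetic moment scales with the lepton mass squared, as in the models mentioned above, then

\frac{\Delta a_\tau}{\Delta a_\mu} \sim \frac{m_\tau^2}{m_\mu^2} = \left(\frac{1777 \ {\rm MeV}}{105.7 \ {\rm MeV}}\right)^2 \approx 280~.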

That’s where this new paper comes in. It details a new technique to try and measure the tau’s magnetic moment using heavy ion collisions at the LHC. The technique is based on light-light collisions (previously covered on Particle Bites), where two nuclei emit photons that then interact to produce new particles. Though in classical electromagnetism light doesn’t interact with itself (the beams from two spotlights pass right through each other), at very high energies each photon can split into new particles, like a pair of tau leptons, and those particles can then interact. Though the LHC normally collides protons, it also has runs colliding heavier nuclei like lead. Lead nuclei have more charge than protons, so they emit high energy photons more often and produce more light-light collisions than protons do.
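As a rough scaling argument (ignoring nuclear form factors and the much lower ion-beam luminosity), the equivalent photon flux of a nucleus grows like the square of its charge, so the two-photon production rate per collision in lead-lead is enhanced by roughly

Z^4 = 82^4 \approx 4.5 \times 10^{7}

relative to proton-proton collisions.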

Light-light collisions which produce tau leptons provide a nice environment to study the interaction of the tau with the photon. A particle’s magnetic properties are determined by its interaction with photons, so by studying these collisions one can measure the tau’s magnetic moment.

However, studying this process is easier said than done. These light-light collisions are “ultra peripheral” because the lead nuclei do not collide head on, so the taus produced generally don’t have a large amount of momentum away from the beamline. This can make them hard to reconstruct in detectors that have been designed to measure particles from head-on collisions, which typically have much more momentum. Taus can decay in several different ways, but always produce at least one neutrino, which will not be detected by the LHC experiments, further reducing the amount of detectable momentum and meaning some information about the collision will be lost.

However, one nice thing about these events is that they should be quite clean in the detector. Because the lead nuclei remain intact after emitting the photons, the taus won’t come along with the bunch of additional particles you often get in head-on collisions. The level of background processes that could mimic this signal also seems to be relatively small. So if the experimental collaborations spend some effort optimizing their reconstruction of low-momentum taus, it seems very possible to perform a measurement like this in the near future at the LHC.

The authors of this paper estimate that such a measurement with the currently available amount of lead-lead collision data would already supersede the previous best measurement of the tau’s anomalous magnetic moment, and further improvements could go much farther. Though the measurement of the tau’s magnetic moment would still be far less precise than those of the muon and electron, it could still reveal deviations from the Standard Model in realistic models of new physics. So given the recent discrepancy with the muon, the tau will be an exciting place to look next!

Read More:

An Anomalous Anomaly: The New Fermilab Muon g-2 Results

When light and light collide

Another Intriguing Hint of New Physics Involving Leptons

Sinusoidal dark matter: ANAIS Edition

Title: Annual Modulation Results from Three Years Exposure of ANAIS-112.

Reference: https://arxiv.org/abs/2103.01175.

This is an exciting couple of months to be a particle physicist. The much-awaited results from Fermilab’s Muon g-2 experiment delivered all the excitement we had hoped for. (Don’t miss our excellent theoretical and experimental refreshers by A. McCune and A. Frankenthal, and the post-announcement follow-up.) Not long before that, the LHCb collaboration confirmed the R_K flavor anomaly, a possible sign of violation of lepton universality, and set the needle at 3.1 standard deviations off the Standard Model (SM). That same month the ANAIS dark matter experiment took on the mighty DAMA/LIBRA, the subject of this post.

In its quest to confirm or refute its 20 year-old predecessor at Brookhaven National Lab, the Fermilab Muon g-2 experiment used the same storage ring magnet — though refurbished — and the same measurement technique. As the April 7 result is consistent with the BNL measurement, this removes much doubt from the experimental end of the discrepancy, although of course, unthought-of correlated systematics may lurk. A similar philosophy is at work with the ANAIS experiment, which uses the same material, technique and location (on the continental scale) as DAMA/LIBRA.

As my colleague M. Talia covers here and I touch upon here, an isotropic distribution of dark matter velocities in the Galactic frame would turn into an anisotropic “wind” in the solar frame as the Solar System orbits the center of the Milky Way. Furthermore, in the Earth’s frame the wind would alternately strengthen and weaken over the year, as the Earth’s orbital velocity adds to or subtracts from the Sun’s motion through the Galaxy. If we set up a “sail” in the form of a particle detector, this annual modulation could be observed, provided dark matter interacts with SM states. The event rate, with modulation amplitude S_m, can then be written as

R(t) = S_m \cos(\omega (t - t_0)) + R_0 \phi_{\rm bg}(t)~,

where

R(t) is the rate of event collection per unit mass of detector per unit energy of recoil at some time t,

\omega = 2\pi/(365 \ {\rm days}),

R_0 captures any unmodulated rate in the detector with \phi_{\rm bg} its probability distribution in time, and

t_0 is fixed by the start date of the experiment so that the event rate is highest when we move maximally upwind on June 02.
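As an illustration of what fitting for S_m looks like in practice, here is a toy version of such a fit, with invented data and an exponential background standing in for \phi_{\rm bg}(t); the real ANAIS analysis is considerably more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

OMEGA = 2 * np.pi / 365.0   # angular frequency in rad/day for a one-year period
T0 = 153.0                  # days from January 1 to June 2, when the rate should peak

def rate(t, R0, tau, Sm):
    """Toy event-rate model: exponentially decaying background plus annual modulation."""
    return R0 * np.exp(-t / tau) + Sm * np.cos(OMEGA * (t - T0))

# Fake three-year dataset with no injected modulation (Sm = 0), for illustration only
rng = np.random.default_rng(0)
t = np.arange(0.0, 3 * 365.0, 10.0)
data = rate(t, R0=3.0, tau=2000.0, Sm=0.0) + rng.normal(0.0, 0.05, size=t.size)

popt, pcov = curve_fit(rate, t, data, p0=[3.0, 1500.0, 0.01])
print(f"best-fit Sm = {popt[2]:.4f} +/- {np.sqrt(pcov[2, 2]):.4f} (arbitrary units)")
```

A best-fit modulation amplitude consistent with zero, as in this toy output, is essentially the kind of result ANAIS reports.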

The DAMA/LIBRA experiment in Italy’s Gran Sasso National Laboratory, using 250 kg of radiopure thallium-doped sodium-iodide [NaI(Tl)] crystals, claims to observe a modulation every year over the last 20 years, with S_m = 0.0103 \pm 0.0008 /day/kg/keV in the 2–6 keV energy range at the level of 12.9 \sigma.

It is against this serious claim that the experiments ANAIS, COSINE, SABRE and COSINUS have mounted a cross-verification campaign. Sure, the DAMA/LIBRA result is disfavored by conventional searches counting unmodulated dark matter events (see, e.g. Figure 3 here or this recent COSINE-100 paper). But it cannot get cleaner than a like-by-like comparison independent of assumptions about dark matter pertaining either to its microscopic behavior or to its phase space distribution in the Earth’s vicinity. Doing just that, ANAIS (for Annual Modulation with NaI Scintillators) in Spain’s Canfranc Underground Laboratory, using 112.5 kg of radiopure NaI(Tl) over 3 years, has a striking counter-claim summed up in this figure:

Figure caption: The ANAIS 3-year dataset is consistent with no annual modulation, in tension with DAMA/LIBRA's observation of significant modulation over 20 years.

ANAIS’ error bars are unsurprisingly larger than DAMA/LIBRA’s given their smaller dataset, but the modulation amplitude they measure is unmistakably consistent with zero and far from DAMA/LIBRA’s. The plot below is visual confirmation of non-modulation, with the label indicating the best-fit S_m under the modulation hypothesis.

Figure caption: ANAIS-112 event rate data in two energy ranges, fitted to a null hypothesis and a modulation hypothesis; the latter gives a fit consistent with zero modulation. The event rate falls with time because the time distribution of the background \phi_{\rm bg}(t) is modeled as an exponential function accounting for isotopic decays in the target material.

The ANAIS experimenters carry out a few neat checks of their result. The detector is split into 9 modules, and just to be sure there are no differences in systematics and backgrounds among them, every module is analyzed for a modulation signal. Next, they treat t_0 as a free parameter, equivalent to making no assumption about the direction of the dark matter wind. Finally, they vary the time bin size used in analyzing the event rate, such as in the figure above. In every case the measurement is consistent with the null hypothesis.

Exactly how far away is the ANAIS result from DAMA/LIBRA's? There are two ways to quantify it. In the first, ANAIS take their central values and uncertainties to compute a 3.3 \sigma (2.6 \sigma) deviation from DAMA/LIBRA's central values S_m^{\rm D/L} in the 1–6 keV (2–6 keV) bin. In the second, the ANAIS uncertainty \sigma^{\rm AN}_m is directly compared to DAMA/LIBRA using the ratio S_m^{\rm D/L}/\sigma^{\rm AN}_m, giving 2.5 \sigma and 2.7 \sigma in those energy bins. With 5 years of data, as currently scheduled, this latter sensitivity is expected to grow to 3 \sigma. And with 10 years it could reach 5 \sigma, at which point we can all go home.

Further reading.

[1] Watch out for the imminent results of the KDK experiment set out to study the electron capture decay of potassium-40, a contaminant in NaI; the rate of this background has been predicted but never measured.

[2] The COSINE-100 experiment in Yangyang National Lab, South Korea (note: same hemisphere as DAMA/LIBRA and ANAIS) published results in 2019 using a small dataset that couldn’t make a decisive statement about DAMA/LIBRA, but they are scheduled to improve on that with an announcement some time this year. Their detector material, too, is NaI(Tl).

[3] The SABRE experiment, also with NaI(Tl), will be located in both hemispheres to rule out direction-related systematics. One will be right next to DAMA/LIBRA at the Gran Sasso Laboratory in Italy, the other at Stawell Underground Physics Laboratory in Australia. ParticleBites’ M. Talia is excited about the latter.

[4] The COSINUS experiment, using undoped NaI crystals in Gran Sasso, aims to improve on DAMA/LIBRA by lowering the nuclear recoil energy threshold and with better background discrimination.

[5] Testing DAMA, article in Symmetry Magazine.

An Anomalous Anomaly: The New Fermilab Muon g-2 Results

By Andre Sterenberg Frankenthal and Amara McCune

This is the final post of a three-part series on the Muon g-2 experiment. Check out posts 1 and 2 on the theoretical and experimental aspects of g-2 physics. 

The last couple of weeks have been exciting in the world of precision physics and stress tests of the Standard Model (SM). The Muon g-2 Collaboration at Fermilab released their very first results with a measurement of the anomalous magnetic moment of the muon to an accuracy of 462 parts per billion (ppb), which largely agrees with previous experimental results and amplifies the tension with the accepted theoretical prediction to a 4.2\sigma discrepancy. These first results feature less than 10% of the total data planned to be collected, so even more precise measurements are foreseen in the next few years.

But on the very same day that Muon g-2 announced their results and published their main paper in PRL and supporting papers in Phys. Rev. A and Phys. Rev. D, Nature published a new lattice QCD calculation which seems to contradict previous theoretical predictions of the g-2 of the muon and moves the theory value much closer to the experimental one. There will certainly be hot debate in the coming months and years regarding the validity of this new calculation, but that does not stop it from muddying the waters in the g-2 sphere. We cover both the new experimental and theoretical results in more detail below.

Experimental announcement

The main paper in Physical Review Letters summarizes the experimental method and reports the measured numbers and associated uncertainties. The new Fermilab measurement of the muon g-2 is 3.3 standard deviations (\sigma) away from the predicted SM value. This means that, assuming all systematic effects are accounted for, the probability that the null hypothesis (i.e. that the true muon g-2 number is actually the one predicted by the SM) could result in such a discrepant measurement is less than 1 in 1,000. Combining this latest measurement with the previous iteration of the experiment at Brookhaven in the early 2000s, the discrepancy grows to 4.2\sigma, or roughly a 1 in 40,000 probability that it is just a statistical fluke. This is not yet the 5\sigma threshold that is the gold standard in particle physics for claiming a discovery, but it is a tantalizing result. The figure below from the paper illustrates well the tension between experiment and theory.

Comparison between experimental measurements of the anomalous magnetic moment of the muon (right, top to bottom: Brookhaven, Fermilab, combination) and the theoretical prediction by the Standard Model (left). The discrepancy has grown to 4.2 sigma. Source: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.126.141801
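For the curious, converting a quoted significance into a tail probability is a one-liner (here using a two-sided Gaussian convention; a one-sided convention gives roughly half these p-values):

```python
from scipy.stats import norm

# Two-sided Gaussian tail probability for the significances quoted in the text
for sigma in (3.3, 4.2):
    p = 2 * norm.sf(sigma)
    print(f"{sigma} sigma -> p = {p:.1e} (about 1 in {1 / p:,.0f})")
```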

This first publication is just the first round of results planned by the Collaboration, and corresponds to less than 10% of the data that will be collected throughout the total runtime of the experiment. With this limited dataset, the statistical uncertainty (434 ppb) dominates over the systematic uncertainty (157 ppb), but that is expected to change as more data are acquired and analyzed. When the statistical uncertainty eventually dips below the systematic one, it will be critically important to control the systematics as much as possible, to attain the ultimate target of a 140 ppb total uncertainty. The table below shows the actual measurements performed by the Collaboration.

Table of measurements for each sub-run of the Run-1 period, from left to right: precession frequency, equivalent precession frequency of a proton (i.e. measures the magnetic field), and the ratio between the two quantities. Source: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.126.141801
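As a quick consistency check, the statistical and systematic uncertainties quoted above add in quadrature to the total quoted at the start of this post:

\sqrt{(434 \ {\rm ppb})^2 + (157 \ {\rm ppb})^2} \approx 462 \ {\rm ppb}~.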

The largest sources of systematic uncertainties stem from the electrostatic quadrupoles (ESQ) in the experiment. While the uniform magnetic field ensures the centripetal motion of muons in the storage ring, it is also necessary to keep them confined to the horizontal plane. Four sets of ESQ uniformly spaced in azimuth provide vertical focusing of the muon beam. However, after data-taking, two resistors in the ESQ system were found to be damaged. This means that the time profile of ESQ activation was not perfectly matched to the time profile of the muon beam. In particular, during the first 100 microseconds after each muon bunch injection, muons were not getting the correct focusing momentum, which affected the expected phase of the “wiggle plot” measurement. All told, this issue added 75 ppb of systematic uncertainty to the budget. Nevertheless, because statistical uncertainties dominate in this first stage of the experiment, the unexpected ESQ damage was not a showstopper. The Collaboration expects this problem to be fully mitigated in subsequent data-taking runs.

To guard against any possible human bias, an interesting blinding policy was implemented: the master clock of the entire experiment was shifted by an unknown value, chosen by two people outside the Collaboration and kept in a vault for the duration of the data-taking and processing. Without knowing this shift, it is impossible to deduce the correct value of the g-2. At the same time, this still allows experimenters to carry out the analysis through the end, and only then remove the clock shift to reveal the unblinded measurement. In a way this is like a key unlocking the final result. (This was not the only protection against bias, only the more salient and curious one.)

Lattice QCD + BMW results

On the same day that Fermilab announced the Muon g-2 experimental results, a group known as the BMW (Budapest-Marseille-Wuppertal) Collaboration published its own results on the theoretical value of muon g-2 using new techniques in lattice QCD. The group’s results can be found in Nature (the journal; the jury’s still out on whether they’re in actual Nature), or in the preprint here. In short, their calculation comes much closer to the experimental value than previous determinations, putting their methods into tension with the findings of previous lattice QCD groups. What’s different? To find out, let’s dive a little deeper into the details of lattice QCD.

The BMW group published this plot, showing their results (top line) compared with the results of other groups using lattice QCD (remaining green squares) and the R-ratio (the usual data-driven techniques, red circles). The purple bar gives the range of values for the anomalous magnetic moment that would signal no new physics – we can see that the BMW results are closer to this region than the R-ratio results, which have similarly small error bars. Source: BMW Collaboration

As outlined in the first post of this series, the main tool of high-energy particle physics rests in perturbation theory, which we can think of graphically via Feynman diagrams, starting with tree-level diagrams and going to higher orders via loops. Equivalently, this corresponds to calculations in which terms are proportional to some coupling parameter that describes the strength of the force in question. Each higher order term comes with one more factor of the relevant coupling, and our errors in these calculations are generally attributable to either uncertainties in the coupling measurements themselves or the neglecting of higher order terms.

These coupling parameters are secretly functions of the energy scale being studied, and so at each energy scale, we need to recalculate these couplings. This makes sense intuitively because forces have different strengths at different energy scales — e.g. gravity is much weaker on a particle scale than a planetary one. In quantum electrodynamics (QED), for example, these couplings are fairly small when in the energy scale of the electron. This means that we really don’t need to go to higher orders in perturbation theory, since these terms quickly become irrelevant with higher powers of this coupling. This is the beauty of perturbation theory: typically, we need only consider the first few orders, vastly simplifying the process.

However, QCD does not share this convenience, as it comes with a coupling parameter that decreases with increasing energy scale. At high enough energies, we can indeed employ the wonders of perturbation theory to make calculations in QCD (this high-energy behavior is known as asymptotic freedom). But at lower energies, at length scales around that of a proton, the coupling constant is greater than one, which means that the first-order term in the perturbative expansion is the least relevant term, with higher and higher orders making greater contributions. In fact, this signals the breakdown of the perturbative technique. Because the hadronic contributions to the muon's g-2 live in this same energy regime, we cannot use perturbation theory in quantum field theory to calculate them. We then turn to simulations, and since we cannot simulate spacetime in its entirety (it consists of infinitely many points), we must instead break it up into a discretized set of points dubbed the lattice.

A visualization of the lattice used in simulations. Particles like quarks are placed on the points of the lattice, with force-carrying gauge bosons (in this case, gluons) forming the links between them. Source: Lawrence Livermore National Laboratory

This naturally introduces new sources of uncertainty into our calculations. To employ lattice QCD, we need to first consider which lattice spacing to use — the distance between each spacetime point — where a smaller lattice spacing is preferable in order to come closer to a description of spacetime. Introducing this lattice spacing comes with its own systematic uncertainties. Further, this discretization can be computationally challenging, as larger numbers of points quickly eat up computing power. Standard numerical techniques become too computationally expensive to employ, and so statistical techniques as well as Monte Carlo integration are used instead, which again introduces sources of error.

Difficulties are also introduced by the fact that a discretized space does not respect the same symmetries that a continuous space does, and some symmetries simply cannot be kept simultaneously with others. This leads to a challenge in which groups using lattice QCD must pick which symmetries to preserve as well as consider the implications of ignoring the ones they choose not to simulate. All of this adds up to mean that lattice QCD calculations of g-2 have historically been accompanied by very large error bars — that is, until the much smaller error bars from the BMW group’s recent findings. 

These results are not without controversy. The group employs a “staggered fermion” approach to discretizing the lattice, in which a single type of fermion known as a Dirac fermion is put on each lattice point, with additional structure described by neighboring points. Upon taking the “continuum limit”, the limit in which the spacing between points on the lattice goes to zero (hence simulating a continuous space), this results in a theory with four fermion “tastes”, rather than the sixteen copies produced by a naive discretization. There are a few advantages to this method, both in terms of reducing computational time and having smaller discretization errors. However, it is still unclear whether this approach is fully valid, and parts of the lattice community question whether these results are computing observables in some other quantum field theory, rather than the SM quantum field theory.

The future of g-2

Overall, while a 4.2\sigma discrepancy is certainly more alluring than the previous 3.7\sigma, the conflict between the experimental results and the Standard Model is still somewhat murky. It is crucial to note that the new 4.2\sigma benchmark does not include the BMW group’s calculations, and further incorporation of these values could shift the benchmark around. A consensus from the lattice community on the acceptability of the BMW group’s results is needed, as well as values from other lattice groups utilizing similar methods (which should be steadily rolling out as the months go on). It seems that the future of muon g-2 now rests in the hands of lattice QCD.

At the same time, more and more precise measurements should be coming out of the Muon g-2 Collaboration in the next few years, which will hopefully guide theorists in their quest to accurately predict the anomalous magnetic moment of the muon and help us reach a verdict on this tantalizing evidence of new boundaries in our understanding of elementary particle physics.

Further Reading

BMW paper: https://arxiv.org/pdf/2002.12347.pdf

Muon g-2 Collaboration papers:

  1. Main result (PRL): Phys. Rev. Lett. 126, 141801 (2021)
  2. Precession frequency measurement (Phys. Rev. D): Phys. Rev. D 103, 072002 (2021)
  3. Magnetic field measurement (Phys. Rev. A): Phys. Rev. A 103, 042208 (2021)
  4. Beam dynamics (to be published in Phys. Rev. Accel. Beams): https://arxiv.org/abs/2104.03240