The XENON1T Excess: The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world’s most sensitive dark matter experiments. The experiment consists of a huge tank of xenon placed deep underground in the Gran Sasso laboratory in Italy. It is a ‘direct-detection’ experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMPs (Weakly Interacting Massive Particles), which used to be everyone’s favorite candidate for dark matter. However, given recent null results from WIMP-hunting direct-detection experiments and from collider searches at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, which were designed to look for heavy WIMPs scattering off of xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles much lighter than traditional WIMPs would not leave much of an impact on a heavy xenon nucleus, but they can leave a signal in the detector if they instead scatter off of the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.
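The kinematic advantage of electron recoils can be sketched with a back-of-the-envelope calculation. This is a hedged toy: the 10 MeV dark matter mass and the galactic velocity below are illustrative assumptions, not numbers from the paper.

```python
import math

# Toy kinematics: the maximum recoil energy in elastic scattering is
# E_max = 2 mu^2 v^2 / m_T, where mu is the reduced mass of the dark
# matter particle and target, and v is the DM velocity (units of c).
# The 10 MeV mass below is a hypothetical light DM candidate chosen
# for illustration; it is not a value from the XENON1T paper.

def max_recoil_energy_gev(m_chi, m_target, v):
    mu = m_chi * m_target / (m_chi + m_target)  # reduced mass, GeV
    return 2.0 * mu**2 * v**2 / m_target

m_chi = 0.010      # hypothetical 10 MeV dark matter particle, GeV
m_e = 0.000511     # electron mass, GeV
m_xe = 122.0       # approximate mass of a xenon nucleus (A ~ 131), GeV
v = 1e-3           # typical galactic dark matter velocity, units of c

e_electron = max_recoil_energy_gev(m_chi, m_e, v)
e_nucleus = max_recoil_energy_gev(m_chi, m_xe, v)

print(f"max electron recoil: {e_electron * 1e9:.2f} eV")  # ~ 1 eV
print(f"max nuclear recoil:  {e_nucleus * 1e9:.1e} eV")   # ~ 1e-3 eV
```

The electron recoil comes out hundreds of times larger than the nuclear one, which is why light candidates are effectively invisible in nuclear recoils.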

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity. This extraordinary sensitivity is due to the experiment’s exquisite control over backgrounds and extremely low energy threshold for detection. What has gotten many physicists excited, beyond the sensitivity itself, is that the latest data show an excess of events above the expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim ‘evidence’ of an anomaly but short of the 5 sigma typically required to claim a discovery.
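For a sense of what these conventions mean, the one-sided Gaussian p-values behind the ‘3.5 sigma’ and ‘5 sigma’ thresholds can be computed directly:

```python
import math

# One-sided Gaussian p-value for a significance of z sigma:
# p = 0.5 * erfc(z / sqrt(2)).

def p_value(z_sigma):
    return 0.5 * math.erfc(z_sigma / math.sqrt(2.0))

print(f"3.5 sigma -> p = {p_value(3.5):.1e}")  # ~2.3e-4, 'evidence'
print(f"5.0 sigma -> p = {p_value(5.0):.1e}")  # ~2.9e-7, 'discovery'
```

In words: a 3.5 sigma fluctuation of the background happens by chance about once in four thousand tries, while 5 sigma corresponds to about one in three and a half million.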

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is: nothing. 3.5 sigma is not enough evidence to claim a discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal, and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment’s energy detection threshold. The so-called ‘efficiency turn-on’, the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe that is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note that the background line is flat below the excess), and the excess rises above this flat background. So to explain the excess, the efficiency would have to somehow be better at low energies than at high energies, which seems very unlikely. Alternatively, there would have to be a very strange, unaccounted-for bias in which some higher-energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration performed an electron reconstruction calibration using the radioactive decays of radon-220 over exactly this energy range and was able to model the turn-on and detection efficiency very well.
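To make the ‘turn-on’ idea concrete, here is a toy efficiency curve: a logistic function with made-up threshold and width, not XENON1T’s measured efficiency. The point is that above a couple of keV such a curve is already flat, so it cannot manufacture a rising excess there.

```python
import math

# A toy 'efficiency turn-on' curve: a logistic function rising from 0
# to 1 around a threshold energy. The threshold and width are made-up
# illustrative numbers, not XENON1T's measured efficiency.

def efficiency(e_kev, threshold=1.0, width=0.3):
    return 1.0 / (1.0 + math.exp(-(e_kev - threshold) / width))

for e in [1.0, 2.0, 3.0, 5.0]:
    print(f"{e:.0f} keV: efficiency = {efficiency(e):.3f}")
```

Between 3 and 5 keV the toy efficiency changes by well under a percent; a turn-on like this shapes the spectrum only in the lowest bin or two.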

Results of a calibration using radioactive decays of radon-220. One can see that the data in the efficiency turn-on (right around 2 keV) are modeled quite well and no excesses are seen.

However, the possibility of a novel Standard Model background is much more plausible. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to helium-3, an electron, and a neutrino with a half-life of around 12 years. The energy released in this decay is 18.6 keV, giving the electron an average energy of a few keV. The expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small: around 100 parts-per-billion of H2 would lead to enough tritium to explain the signal, which translates to just 3 tritium atoms per kilogram of liquid xenon. The collaboration tries its best to investigate this possibility, but can neither rule out nor confirm such a small amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by two orders of magnitude, so it is not impossible that they were able to avoid this small amount of contamination.
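A quick back-of-the-envelope check shows how few decays such a contamination implies. The 3 atoms/kg figure comes from the discussion above; the tonne-year exposure is a round number of order XENON1T’s, used only for illustration.

```python
import math

# Decay rate implied by a tiny tritium contamination: N atoms with
# half-life T_half decay at a rate N * ln(2) / T_half.

T_HALF_YEARS = 12.3   # tritium half-life in years
atoms_per_kg = 3.0    # contamination level quoted above

decays_per_kg_per_year = atoms_per_kg * math.log(2) / T_HALF_YEARS
print(f"{decays_per_kg_per_year:.3f} decays / kg / year")  # ~0.17

# Over an exposure of order a tonne-year (roughly XENON1T's scale),
# that is of order a hundred decays:
print(f"~{decays_per_kg_per_year * 1000:.0f} decays per tonne-year")
```

A contamination far too small to detect chemically can still produce a countable number of events in a detector this quiet.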

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down based on the data alone. The analysis specifically searched for two signals that would show up in exactly this energy range: axions produced in the sun, and solar neutrinos interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation slightly preferred. However, since this result was released, many have pointed out that these models would actually be in conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would give stars an additional way to release energy, causing them to cool at a faster rate than in the Standard Model. The strength of interaction between axions and electrons needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. Similar astrophysical constraints on neutrino magnetic moments also make that explanation unlikely.

This has left the door open for theorists to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing the opportunity! New explanations appear on the arXiv every day, with no sign of stopping. In the roughly two weeks between the XENON1T announcement and the writing of this post, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or being boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun with this, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing the XENONnT experiment, a follow-up to XENON1T. XENONnT will feature a larger active volume of xenon and a lower background level, potentially allowing them to confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow them to better determine the shape of the signal, possibly distinguishing between the tritium spectrum and a potential new-physics explanation. If the signal is real, other liquid xenon experiments like LUX and PandaX should also be able to independently confirm it in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

A Tau Neutrino Runs into a Candy Shop…

We recently discussed some curiosities in the data from the IceCube neutrino detector. This is a follow-up Particle Bite on some of the sugary nomenclature IceCube uses to characterize its events.

As we explained previously, IceCube is a gigantic ultra-high energy cosmic neutrino detector in Antarctica. These neutrinos have energies between 10-100 times higher than the protons colliding at the Large Hadron Collider, and their origin and nature are largely a mystery. One thing that IceCube can tell us about these neutrinos is their flavor composition; see e.g. this post for a crash course in neutrino flavor.

When neutrinos interact with ambient nuclei through a W boson (charged current interactions), the following types of events might be seen:

Typical charged current events in IceCube. Displays from the IceCube collaboration.

I refer you to this series of posts for a gentle introduction to the Feynman diagrams above. The key is that the high energy neutrino interacts with a nucleus, breaking it apart (the remnants are called X above) and ejecting a high energy charged lepton which can be used to identify the flavor of the neutrino.

  • Muons travel a long distance and leave behind a trail of Cerenkov radiation called a track.
  • Electrons don’t travel as far and deposit all of their energy into a shower. These are also sometimes called cascades because of the chain of particles produced in the ‘bang’.
  • Taus typically leave a more dramatic signal, a double bang, when the tau is formed and then subsequently decays into more hadrons (X’ above).

In fact, the tau events can be further classified depending on how this ‘double bang’ is resolved—and it seems like someone was playing a popular candy-themed mobile game when naming these:

Types of candy-themed tau events in IceCube from D. Cowan at the TeVPA 2 conference.

In this figure from the TeVPA 2 conference proceedings, we find some silly classifications of what tau events look like according to their energy:

  • Lollipop: The tau is produced outside the detector so that the first ‘bang’ isn’t seen. Instead, there’s a visible track that leads to the second (observable) bang. The track is the stick and the bang is the lollipop head.
  • Inverted lollipop: Similar to the lollipop, except now the first ‘bang’ is seen in the detector but the second ‘bang’ occurs outside the detector and is not observed.
  • Sugardaddy: The tau is produced outside the detector but decays into a muon inside the detector. This looks almost like a muon track except that the tau produces less Cerenkov light so that one can identify the point where the tau decays into a muon.
  • Double pulse: While this isn’t candy-themed, it’s still very interesting. This is a double bang where the two bangs can’t be distinguished spatially. However, since one bang occurs slightly after the other, one can distinguish them in the time: it’s a “double bang” in time rather than space.
  • Tautsie pop: This is a low energy version of the sugardaddy where the shower-to-track energy is used to discriminate against background.

While the names may be silly, counting these types of events in IceCube is one of the exciting frontiers of flavor physics. And while we might be forgiven for thinking that neutrino physics is all about measuring very ‘small’ things, let me share the following graphic from Francis Halzen’s recent talk at the AMS Days workshop at CERN, overlaying one of the shower events over Madison, Wisconsin to give a sense of scale:

From F. Halzen on behalf of the IceCube collaboration; from AMS Days at CERN 2015.

The Glashow Resonance on Ice

Are cosmic neutrinos trying to tell us something, deep in the Antarctic ice?


“Glashow resonance as a window into cosmic neutrino sources,”
by Barger, Lu, Learned, Marfatia, Pakvasa, and Weiler
Phys.Rev. D90 (2014) 121301 [1407.3255]

Related work: Anchordoqui et al. [1404.0622], Learned and Weiler [1407.0739], Ibe and Kaneta [1407.2848]

Is there a neutrino energy cutoff preventing Glashow resonance events in IceCube?

The IceCube Neutrino Observatory is a gigantic neutrino detector located in the Antarctic. Like an iceberg, only a small fraction of the lab is above ground: 86 strings extend to a depth of 2.5 kilometers into the ice, with each string instrumented with 60 detectors.

2 PeV event from the IceCube 3 year analysis; nicknamed “Big Bird.” From 1405.5303.

These detectors search for ultra high energy neutrinos by looking for the Cerenkov radiation emitted by the charged particles the neutrinos produce as they pass through the ice. This is really the optical version of a sonic boom. An example event is shown above, where the color and size of the spheres indicate the strength of the Cerenkov signal in each detector.

IceCube has released data for its first three years of running (1405.5303) and has found three events with very large energies of 1-2 peta-electron-volts: that’s roughly ten thousand times the mass of the Higgs boson. In addition, there’s a spectrum of neutrinos in the 10-1000 TeV range.

Glashow resonance diagram.

These ultra high energy neutrinos are believed to originate from outside our galaxy through processes involving particle acceleration by black holes. One expects the flux of such neutrinos to go as a power law of the energy, \Phi \sim E^{-\alpha}, where \alpha = 2 is an estimate from certain acceleration models. The existence of the three super high energy events at the PeV scale has led some people to think about a known deviation from the power law spectrum: the Glashow resonance. This is the sharp increase in the rate of neutrino interactions with matter coming from the resonant production of W bosons, as shown in the Feynman diagram to the left.

The Glashow resonance sticks out like a sore thumb in the spectrum. The position of the resonance is set by the energy required for an electron anti-neutrino to hit an electron at rest such that the center of mass energy is the W boson mass.

Sharp peak in the neutrino scattering rate from the Glashow resonance; image from Engel, Seckel, and Stanev in astro-ph/0101216.

If you work through the math on the back of an envelope, you’ll find that the resonance occurs for incident electron anti-neutrinos with an energy of 6.3 PeV; see the figure to the left. This is “right around the corner” from the 1-2 PeV events already seen, and one might wonder whether it’s significant that we haven’t seen anything.
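That back-of-the-envelope calculation fits in a few lines: requiring the center-of-mass energy of an anti-neutrino hitting an electron at rest to equal the W mass gives E_ν ≈ m_W²/(2m_e).

```python
# Glashow resonance: an electron anti-neutrino hitting an electron at
# rest makes an on-shell W when the center-of-mass energy equals m_W:
# s = m_e^2 + 2 m_e E_nu ~ m_W^2, so E_nu ~ m_W^2 / (2 m_e).

M_W = 80.4       # W boson mass, GeV
M_E = 0.000511   # electron mass, GeV

e_nu = M_W**2 / (2 * M_E)  # in GeV
print(f"E_nu = {e_nu / 1e6:.1f} PeV")  # ~6.3 PeV
```

The tiny electron mass in the denominator is what pushes the resonance up to PeV energies even though the W itself weighs only ~80 GeV.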

The authors of [1407.3255] have found that the absence of Glashow resonant neutrino events in IceCube is not yet a bona-fide “anomaly.” In fact, they point out that the future observation or non-observation of such neutrinos can give us valuable hints about the hard-to-study origin of these ultra high energy neutrinos. They present six simple particle physics scenarios for how high energy neutrinos can be formed from cosmic rays that were accelerated by astrophysical accelerators like black holes. Each of these processes predicts a ratio of neutrino and anti-neutrino flavors at Earth (including neutrino oscillation effects over long distances). Since the Glashow resonance only occurs for electron anti-neutrinos, the appearance or non-appearance of the resonance in future data can constrain what types of processes may have produced these high energy neutrinos.

In more speculative work, the authors of [1404.0622] suggest that the absence of Glashow resonance events may even point to some kind of new physics that imposes a “speed limit” on neutrinos propagating through space, preventing them from ever reaching 6.3 PeV (see top figure).

Further Reading:

  • 1007.1247, Halzen and Klein, “IceCube: An Instrument for Neutrino Astronomy.” A review of the IceCube experiment.
  • hep-ph/9410384, Gaisser, Halzen, and Stanev, “Particle Physics with High Energy Neutrinos.” An older review of ultra high energy neutrinos.

Neutrinoless Double Beta Decay Experiments

Title: Neutrinoless Double Beta Decay Experiments
Author: Alberto Garfagnini
Published: arXiv:1408.2455 [hep-ex]

Neutrinoless double beta decay is a theorized process that, if observed, would provide evidence that the neutrino is its own antiparticle. The relatively recent discovery of neutrino mass from oscillation experiments makes this search particularly relevant, since the Majorana mechanism, which requires the particle to be its own antiparticle, can also generate neutrino masses. A variety of experiments based on different techniques hope to observe this process. Before providing an experimental overview, we first discuss the theory itself.

Figure 1: Neutrinoless double beta decay.

Beta decay occurs when a nucleus emits an electron or positron along with a corresponding neutrino. Double beta decay is simply the simultaneous beta decay of two neutrons in a nucleus. “Neutrinoless,” of course, means that this decay occurs without the accompanying neutrinos; heuristically, the two neutrinos in the double beta decay annihilate with one another, which is only possible if they are self-conjugate. Figures 1 and 2 demonstrate the process by formula and image, respectively.

Figure 2: Double beta decay & neutrinoless double beta decay.

The lack of accompanying neutrinos in such a decay violates lepton number, meaning this process is forbidden unless neutrinos are Majorana fermions. Without delving into a full explanation, this simply means that the particle is its own antiparticle (more information is given in the references). The importance lies in the lepton number of the neutrino. By crossing symmetry, neutrinoless double beta decay is equivalent to a nucleus absorbing two neutrinos and decaying into two protons and two electrons (to conserve charge). The only way this process does not violate lepton number is if the lepton number of a neutrino is the same as that of an antineutrino; in other words, if they are the same particle.

The experiments currently searching for neutrinoless double beta decay can be classified according to the material used for detection. A partial list of active and future experiments is provided below.

1. EXO (Enriched Xenon Observatory): New Mexico, USA. The detector is filled with liquid 136Xe, which provides worse energy resolution than gaseous xenon, but this is compensated by the use of both scintillation and ionization signals. The collaboration finds no statistically significant evidence for 0νββ decay, and places a lower limit on the half-life of 1.1 × 10^25 years at 90% confidence.

2. KamLAND-Zen: Kamioka underground neutrino observatory near Toyama, Japan. Like EXO, the experiment uses liquid xenon, but in the past has required purification due to aluminum contamination in the detector. They report a lower limit on the 0νββ half-life of 2.6 × 10^25 years at 90% CL. Figure 3 shows the energy spectra of candidate events with the best-fit background.

Figure 3: KamLAND-Zen energy spectra of selected candidate events together with the best-fit backgrounds and 2νββ decays.

3. GERDA (Germanium Detector Array): Laboratori Nazionali del Gran Sasso, Italy. GERDA utilizes high-purity 76Ge diodes, which provide excellent energy resolution but typically suffer from very large backgrounds. To prevent signal contamination, GERDA has ultra-pure shielding that protects measurements from environmental radiation backgrounds. The half-life is bounded below at 90% confidence by 2.1 × 10^25 years.

 4. MAJORANA: South Dakota, USA. This experiment is under construction, but a prototype is expected to begin running in 2014. If results from GERDA and MAJORANA look good, there is talk of building a next-generation germanium experiment that combines diodes from each detector.

 5. CUORE: Laboratori Nazionali del Gran Sasso, Italy. CUORE is a 130Te bolometric direct detector, meaning that it has two layers: a crystal absorber that converts deposited energy into heat, and a sensor that detects the induced temperature change. The experiment is currently under construction, so there are no definite results, but it expects to begin taking data in 2015.
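To get a feel for what these half-life limits mean experimentally, here is a rough conversion from a half-life limit to an implied decay rate. The 100 kg of pure ¹³⁶Xe is a round illustrative mass, not the enriched mass of any particular experiment.

```python
import math

AVOGADRO = 6.022e23
MOLAR_MASS_XE136_KG = 0.136  # kg/mol for Xe-136

# N atoms with half-life T_half decay at a rate N * ln(2) / T_half.
# The 100 kg of pure Xe-136 below is a round illustrative mass,
# not any one experiment's actual enriched mass.

def decays_per_year(mass_kg, half_life_years):
    n_atoms = mass_kg / MOLAR_MASS_XE136_KG * AVOGADRO
    return n_atoms * math.log(2) / half_life_years

# At EXO's quoted lower limit of 1.1e25 years:
print(f"{decays_per_year(100.0, 1.1e25):.0f} decays/year at the limit")
```

Even at a half-life fifteen orders of magnitude longer than the age of the universe, ~10²⁶ atoms yield a few dozen decays per year, which is why these searches hinge on suppressing backgrounds to almost nothing.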

While these results do not show the existence of 0νββ decay, such an observation would demonstrate that neutrinos are Majorana fermions and give an estimate of the absolute neutrino mass scale. A continued non-observation would also be informative, pushing the half-life limits higher and constraining the allowed Majorana parameter space, though it could not by itself prove that the neutrino is not its own antiparticle. To get a better limit on the half-life, more advanced detector technologies are necessary; it will be interesting to see whether MAJORANA and CUORE have better sensitivity to this process.


Fractional particles in the sky

Title: Goldstone Bosons as Fractional Cosmic Neutrinos

Author: Steven Weinberg (University of Texas, Austin)
Published: Phys.Rev.Lett. 110 (2013) 241301 [arXiv:1305.1971]

The Standard Model includes three types of neutrinos—the nearly-massless, charge-less partners of the leptons. Recent measurements from the Planck satellite, however, find that the ‘effective number of neutrinos’ in the early universe is N_\text{eff} = 3.36 \pm 0.34. This is consistent with the Standard Model, but one may wonder what it would mean if this number really were a fractional amount larger than three.

Physically, N_\text{eff} is actually a count of the number of light particles during recombination: the time in the early universe when the temperature had cooled enough for protons and electrons to form hydrogen. A snapshot of this era is imprinted on the cosmic microwave background (CMB). Particles whose masses are much less than the temperature—like neutrinos—are effectively ‘radiation’ during this era and affect the features of the CMB; see the appendix below for a rough sketch. In this way, cosmological observations can tell us about the spectrum of light particles.

The number N_\text{eff} is defined as part of the ratio between photon and non-photon contributions to the ‘radiation’ density of the universe. It is normalized to count the number of light fermion–anti-fermion pairs. In this paper, Steven Weinberg points out that a light bosonic particle can give a fractional contribution to this count. First, fermionic and bosonic contributions to the energy density differ by a factor of 7/8 due to the difference between Fermi and Bose statistics. Second, a boson that is its own antiparticle has half the degrees of freedom of a fermion–anti-fermion pair, picking up an additional 1/2, so that a light boson should contribute

\displaystyle \Delta N_\text{eff} = \frac{1}{2} \left(\frac{7}{8}\right)^{-1} = \frac{4}{7} = 0.57.

We have two immediate problems:

  1. This is still larger than the observed mean that we’d like to hit, \Delta N_\text{eff} = 0.36.
  2. We’re implicitly assuming a new light scalar particle but quantum corrections generically make scalars very massive. (This is the essence of the Hierarchy problem associated with the Higgs mass.)

To address the second point, Weinberg assumes the new particle is a Goldstone boson—scalar particles which are naturally light because they’re associated with spontaneous symmetry breaking. For example, the lowest energy state of a ferromagnet breaks rotational symmetry since all the spins align in one direction. “Spin wave” excitations cost little energy and behave like light particles. Similarly, the strong force breaks chiral symmetry—which relates the behavior of left- and right-handed fermions. The pions are Goldstone bosons from this breaking and indeed have masses much smaller than other nuclear states like the proton. In this paper, Weinberg imagines that a new symmetry is broken spontaneously and the resulting Goldstone boson is the light state which can contribute to the number of light degrees of freedom in the early universe, N_\text{eff}.

This set up also gives a way to address the first problem, how do we reduce the contribution of this particle, \Delta N_\text{eff}, to better match what we observe in the CMB? One crucial assumption in our estimate for \Delta N_\text{eff} was that the new light particle was in thermal equilibrium with neutrinos. As the universe cooled, the other Standard Model particles became too heavy to be produced thermally and their entropy had to go towards heating up the lighter particles. If the Goldstone boson fell out of thermal equilibrium too early—say, its interaction rate became too small to overcome the expanding distance between it and other particles—it won’t be heated by the heavy Standard Model particles. Because only the neutrinos are heated, the Goldstone contributes much less than 4/7 to N_\text{eff}. (A sketch of the argument is in the appendix below.)

Weinberg points out that there’s an intermediate possibility: if the Goldstone boson just happens to go out of thermal equilibrium when only the muons, electrons, and neutrinos are still thermally accessible, then the only temperature increase for the neutrinos that isn’t picked up by the Goldstone comes from the muon. The expression for the entropy goes like

\displaystyle s \sim \left(2 + \frac{7}{8} N_\text{SM}\right) T^3

where “SM” refers to the number of Standard Model fermionic degrees of freedom: a left-handed electron, a right-handed electron, a left-handed muon, a right-handed muon, and three left-handed neutrinos, together with their antiparticles, for 14 in all. (See this discussion on handedness.) The famous 7/8 shows up for the fermions. In order to conserve entropy when the two muons annihilate away, the other particles have to heat up by a factor of (57/43)^{1/3}. Meanwhile, the Goldstone boson temperature stays constant since it doesn’t interact enough with the other particles to heat up. The contribution of the Goldstone to the effective number of light particles in the early universe is thus scaled down:

\displaystyle \Delta N_\text{eff} = \frac{4}{7} \times \left(\frac{43}{57}\right)^{4/3} = 0.39.

This is now quite close to the \Delta N_\text{eff} = 0.36 \pm 0.34 measured from the CMB. Weinberg goes on to construct an explicit example of how the Goldstone might interact with the Higgs to produce the correct interaction rates. As an example of further model building, he then notes that one may further construct models of dark matter where the broken symmetry that produced the Goldstone is associated with the stability of the dark matter particle.
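This dilution factor is easy to check numerically by counting the entropy degrees of freedom before and after the muons annihilate away:

```python
# Entropy degrees of freedom before muon annihilation: the photon (2)
# plus e, mu, and three nu species with their antiparticles: 14
# fermionic degrees of freedom weighted by 7/8. After the muons
# annihilate, only 10 fermionic degrees of freedom remain.

g_before = 2 + (7 / 8) * 14   # = 57/4
g_after = 2 + (7 / 8) * 10    # = 43/4

# Entropy conservation heats the coupled particles by
# (g_before / g_after)^(1/3) = (57/43)^(1/3), while the decoupled
# Goldstone stays cold; its 4/7 contribution is diluted by the fourth
# power of the inverse temperature ratio:
delta_neff = (4 / 7) * (g_after / g_before) ** (4 / 3)
print(f"Delta N_eff = {delta_neff:.2f}")  # 0.39
```

The fourth power appears because the energy density of radiation scales as T⁴.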



We briefly sketch how light particles can affect the cosmic microwave background. For details, see 1104.2333, the Snowmass paper 1309.5383, or the review in the PDG. Particles ‘decouple’ from the rest of the thermal particles in the early universe when their interaction rate is smaller than the expansion rate of the universe: the universe expands too quickly for the particles to stay in thermal equilibrium.

Neutrinos happen to decouple just before thermal electrons and positrons begin to annihilate. The energy from those annihilations thus goes into heating the photons. From entropy conservation one can determine the fixed ratio between the neutrino and photon temperatures. This, in turn, allows one to determine the relative number and energy densities.

Additional contributions to the effective number of light particles N_\text{eff} thus lead to an increase in the energy density. In the radiation dominated era of the universe, this increases the expansion rate (Hubble parameter). One can then use two observables to pin down the additional contribution to N_\text{eff}.
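As a rough sketch of this step, one can compute how an extra ΔN_eff shifts the Hubble rate. The radiation density relative to photons is 1 + (7/8)(4/11)^{4/3} N_eff, using the standard neutrino-to-photon temperature ratio (4/11)^{1/3}; the Standard Model value N_eff = 3.046 is an assumption here, not a number quoted in the post.

```python
# Radiation energy density relative to photons, using the standard
# neutrino-to-photon temperature ratio T_nu/T_gamma = (4/11)^(1/3).
# N_eff = 3.046 is the Standard Model value (an assumption here);
# 0.39 is the diluted Goldstone contribution discussed earlier.

def radiation_density_over_photons(n_eff):
    return 1 + (7 / 8) * (4 / 11) ** (4 / 3) * n_eff

# H scales as the square root of the density in the radiation era:
h_ratio = (radiation_density_over_photons(3.046 + 0.39)
           / radiation_density_over_photons(3.046)) ** 0.5
print(f"Hubble rate increases by {100 * (h_ratio - 1):.1f}%")
```

A percent-level shift in H is exactly the size of effect the sound horizon and diffusion scale described below can probe.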

CMB with the sound horizon \theta_s and diffusion scale \theta_d illustrated. Image from Lloyd Knox.

Tension between gravitational pull and pressure from radiation produces acoustic oscillations in the microwave background. Two features which are sensitive to the Hubble parameter are:

  1. The sound horizon. This is the scale of acoustic oscillations and can be seen in the peaks of the CMB power spectrum. The angular sound scale goes like 1/H.
  2. The diffusion scale. This measures the damping of small-scale oscillations from photon diffusion. This scale goes like \sqrt{1/H}.

A heuristic picture of what these scales correspond to is shown in the figure. The measurement of these two parameters thus gives a fit for the Hubble parameter, which in turn gives a fit for the effective number of light particles in the early universe, N_\text{eff}.