The Fermi LAT Data Depicting Dark Matter Detection

The center of the galaxy is brighter than astrophysicists expected. Could this be the result of the self-annihilation of dark matter? Chris Karwin, a graduate student from the University of California, Irvine, presents the Fermi collaboration's analysis.

Editor’s note: this is a guest post by one of the students involved in the published result.

Presenting: Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Authors: The Fermi-LAT Collaboration (ParticleBites blogger is a co-author)
Reference: arXiv:1511.02938; Astrophys. J. 819 (2016) no. 1, 44
Artist rendition of the Fermi Gamma-ray Space Telescope in orbit. Image from NASA.

Introduction

Like other telescopes, the Fermi Gamma-Ray Space Telescope is a satellite that scans the sky collecting light. Unlike many telescopes, it searches for very high-energy light: gamma-rays. The satellite's main component is the Large Area Telescope (LAT). When this detector is hit with a high-energy gamma-ray, it measures the energy and the direction in the sky from which it originated. The data provided by the LAT is an all-sky photon counts map:

All-sky counts map of gamma-rays. The color scale corresponds to the number of detected photons. Image from NASA.
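
To make the idea of a counts map concrete, here is a minimal, entirely hypothetical sketch of binning photons with reconstructed sky directions into pixels. Real analyses use the dedicated Fermi tools and careful event selection; everything below (directions, pixel size) is made up for illustration:

```python
import numpy as np

# Entirely hypothetical sketch: turn a list of photon arrival directions
# (Galactic longitude l and latitude b, in degrees) into a counts map.
rng = np.random.default_rng(0)
l = rng.uniform(-180.0, 180.0, size=100_000)  # fake reconstructed longitudes
b = rng.uniform(-90.0, 90.0, size=100_000)    # fake reconstructed latitudes

# 0.5-degree pixels: counts_map[i, j] = photons landing in pixel (i, j)
counts_map, b_edges, l_edges = np.histogram2d(
    b, l, bins=[np.arange(-90, 90.5, 0.5), np.arange(-180, 180.5, 0.5)]
)
print(counts_map.shape, int(counts_map.sum()))  # (360, 720) pixels, 100000 photons
```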

In 2009, researchers noticed that there appeared to be an excess of gamma-rays coming from the galactic center. This excess is found by making a model of the known astrophysical gamma-ray sources and then comparing it to the data.

What makes the excess so interesting is that its features seem consistent with predictions from models of dark matter annihilation. Dark matter theory and simulations predict:

  1. The distribution of dark matter in space. The gamma rays coming from dark matter annihilation should follow this distribution, or spatial morphology.
  2. The particles to which dark matter directly annihilates. This gives a prediction for the expected energy spectrum of the gamma-rays. (Both ingredients enter the standard annihilation flux formula, sketched just below.)
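
Schematically, these two ingredients factorize in the textbook expression for the gamma-ray flux from a self-annihilating dark matter particle of mass m_χ (a standard form, not quoted from the paper):

```latex
\frac{d\Phi}{dE\,d\Omega} =
\underbrace{\frac{\langle\sigma v\rangle}{8\pi m_\chi^2}\,\frac{dN_\gamma}{dE}}_{\text{particle physics: energy spectrum}}
\times
\underbrace{\int_{\rm l.o.s.} \rho^2(r)\,d\ell}_{\text{astrophysics: spatial morphology}}
```

The line-of-sight integral of the squared density (the "J-factor") fixes the spatial morphology, while the annihilation channels fix dN_γ/dE.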

Although a dark matter interpretation of the excess is a very exciting scenario that would tell us new things about particle physics, there are also other possible astrophysical explanations. For example, many physicists argue that the excess may be due to an unresolved population of millisecond pulsars. Another possible explanation is that it is simply due to mis-modeling of the background. Regardless of the physical interpretation, the primary objective of the Fermi analysis is to characterize the excess.

The main systematic uncertainty of the experiment is our limited understanding of the backgrounds: the gamma rays produced by known astrophysical sources. In order to include this uncertainty in the analysis, four different background models are constructed. Although these models are methodically chosen so as to account for our lack of understanding, it should be noted that they do not necessarily span the entire range of possible error. For each of the background models, a gamma-ray excess is found. With the objective of characterizing the excess, additional components are then added to the model. Among the different components tested, it is found that the fit is most improved when dark matter is added. This is an indication that the signal may be coming from dark matter annihilation.

Analysis

This analysis focuses on the gamma rays coming from the galactic center. However, when looking towards the galactic center, the telescope detects all of the gamma-rays coming from both the foreground and the background. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources.

Schematic of the experiment. We are interested in gamma-rays coming from the galactic center, represented by the red circle. However, the LAT detects all of the gamma-rays coming from the foreground and background, represented by the blue region. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources. Image adapted from Universe Today.

An overview of the analysis chain is as follows. The model of the observed region comes from performing a likelihood fit of the parameters for the known astrophysical sources. A likelihood fit is a statistical procedure that calculates the probability of observing the data given a set of parameters. In general there are two types of sources:

  1. Point sources such as known pulsars
  2. Diffuse sources due to the interaction of cosmic rays with the interstellar gas and radiation field
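
As a toy illustration of what "likelihood" means here, consider a binned Poisson likelihood with a single free normalization. All numbers below are hypothetical; the real fit has many parameters for many sources:

```python
import numpy as np

# Toy binned Poisson likelihood: the probability of the observed counts
# given a model. Numbers and the single free parameter are hypothetical.
rng = np.random.default_rng(1)
model_shape = np.abs(rng.normal(1.0, 0.3, size=1000))  # fake model map at norm = 1
data = rng.poisson(2.0 * model_shape)                  # fake observed counts, true norm = 2.0

def log_likelihood(norm):
    """log P(data | norm), dropping the data-only log(k!) term."""
    mu = norm * model_shape          # expected counts in each pixel
    return np.sum(data * np.log(mu) - mu)

# Scan the normalization; the maximum should land near the true value 2.0
norms = np.linspace(0.5, 4.0, 351)
best = norms[np.argmax([log_likelihood(n) for n in norms])]
print(f"best-fit normalization: {best:.2f}")
```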

Parameters for these two types of sources are fit at the same time. One of the main uncertainties in the background is the cosmic ray source distribution: the number of cosmic ray sources as a function of distance from the center of the galaxy. Cosmic rays are believed to come from supernovae, but the source distribution of supernova remnants is not well determined, so other tracers must be used. In this context a tracer is a measurement that can be made to infer the distribution of supernova remnants. This analysis uses both the distribution of OB stars and the distribution of pulsars as tracers. The former refers to OB associations, regions of O-type and B-type stars; these hot, massive stars are progenitors of supernovae. The distribution of pulsars is used as the other tracer since pulsars are the end state of supernovae. These two extremes serve to encompass much of the uncertainty in the cosmic ray source distribution, although, as mentioned earlier, they do not necessarily bracket the full range of possible error. Two of the four background model variants come from these distributions.

An overview of the analysis chain. In general there are two types of sources: point sources and diffuse sources. The diffuse sources are due to the interaction of cosmic rays with interstellar gas and radiation fields. Spectral parameters for the diffuse sources are fit concurrently with the point sources using a likelihood fit. The question mark represents a component possibly missing from the model, such as dark matter.

The information pertaining to the cosmic rays, gas, and radiation fields is input into a propagation code called GALPROP. This produces an all-sky gamma-ray intensity map for each of the physical processes that produce gamma-rays:

  1. Production of neutral pions, which quickly decay into gamma-rays, from the interaction of cosmic ray protons with the interstellar gas
  2. Cosmic ray electrons up-scattering low-energy photons of the radiation field via inverse Compton scattering
  3. Cosmic ray electrons interacting with the gas, producing gamma-rays via bremsstrahlung radiation
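
Written as schematic reactions (notation mine, for orientation only):

```latex
\begin{aligned}
&p_{\rm CR} + p_{\rm gas} \to \pi^0 + X,\qquad \pi^0 \to \gamma\gamma
  && \text{(neutral pion decay)}\\
&e^-_{\rm CR} + \gamma_{\rm low\text{-}energy} \to e^- + \gamma_{\rm high\text{-}energy}
  && \text{(inverse Compton)}\\
&e^-_{\rm CR} + N_{\rm gas} \to e^- + N_{\rm gas} + \gamma
  && \text{(bremsstrahlung)}
\end{aligned}
```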

Residual map for one of the background models. Image from arXiv:1511.02938.

The maps of all the processes are then tuned to the data. In general, tuning is a procedure by which the background models are optimized for the particular data set being used. This is done using a likelihood analysis. There are two different tuning procedures used for this analysis: one tunes only the normalization of the maps, and the other tunes both the normalization and the extra degrees of freedom related to the gas emission interior to the solar circle. These two tuning procedures, performed for the two cosmic ray source models, make up the four different background models.
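
As a hedged sketch of what the normalization-tuning step might look like in code (hypothetical templates and normalizations, not the collaboration's actual pipeline), several component maps can be scaled simultaneously by maximizing a Poisson likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch of "tuning": re-fit the normalizations of several
# component intensity maps (pi0 decay, inverse Compton, bremsstrahlung)
# to the counts data with a Poisson likelihood. Everything here is fake.
rng = np.random.default_rng(2)
templates = np.abs(rng.normal(1.0, 0.5, size=(3, 2000)))  # three fake maps
true_norms = np.array([1.5, 0.8, 0.3])
data = rng.poisson(true_norms @ templates)                # fake counts map

def neg_log_like(norms):
    mu = norms @ templates            # total expected counts per pixel
    return np.sum(mu - data * np.log(mu))

fit = minimize(neg_log_like, x0=np.ones(3), bounds=[(1e-6, None)] * 3)
print(fit.x)  # recovered normalizations, close to [1.5, 0.8, 0.3]
```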

Point source models are then determined for each background model, and the spectral parameters for both diffuse sources and point sources are simultaneously fit using a likelihood analysis.

Results and Conclusion

Best fit dark matter spectra for the four different background models. Image from arXiv:1511.02938.

In the plot of the best fit dark matter spectra for the four background models, the hatching of each curve corresponds to the statistical uncertainty of the fit. The systematic uncertainty can be interpreted as the region enclosed by the four curves. Results from other analyses of the galactic center are overlaid on the plot. This result shows that the galactic center analysis performed by the Fermi collaboration allows a broad range of possible dark matter spectra.

The Fermi analysis has shown that, within systematic uncertainties, a gamma-ray excess coming from the galactic center is detected. In order to try to explain this excess, additional components were added to the model. Among the additional components tested, it was found that the fit is most improved with the addition of a dark matter component. However, this does not establish that a dark matter signal has been detected. There is still a good chance that the excess may be due to something else, such as an unresolved population of millisecond pulsars or mis-modeling of the background. Further work must be done to better understand the background and better characterize the excess. Nevertheless, it remains an exciting prospect that the gamma-ray excess could be a signal of dark matter.


Background reading on dark matter and indirect detection:

Can’t Stop Won’t Stop: The Continuing Search for SUSY

Title: “Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in √s = 13 TeV pp collisions with the ATLAS detector”
Author: The ATLAS Collaboration
Publication: Submitted 13 June 2016, arXiv:1606.03903

Things at the LHC are going great. Run II of the Large Hadron Collider is well underway, delivering higher energies and more luminosity than ever before. ATLAS and CMS also have an exciting lead to chase down: the diphoton excess that was first announced in December 2015. So what do lots of new data and a mysterious new excess have in common? They mean that we might finally get a hint at the elusive theory that keeps refusing our invitations to show up: supersymmetry.

Figure 1: Feynman diagram of stop decay from proton-proton collisions.

People like supersymmetry because it fixes a host of things in the Standard Model. But most notably, it generates an extra Feynman diagram that cancels the quadratic divergence of the Higgs mass due to the top quark contribution. This extra diagram comes from the stop, the top quark's superpartner. So a natural SUSY solution would have a light stop mass, ideally somewhere close to the top mass of about 175 GeV. This expected low mass due to "naturalness" makes the stop a great place to start looking for SUSY. But according to the newest results from the ATLAS Collaboration, we're not going to be so lucky.
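
Schematically, and assuming the SUSY relation between the top and stop couplings, the leading quadratically divergent pieces cancel (a textbook sketch, not taken from the paper):

```latex
\delta m_H^2 \;\supset\;
\underbrace{-\frac{3y_t^2}{8\pi^2}\,\Lambda^2}_{\text{top loop}}
\;+\;
\underbrace{\frac{3y_t^2}{8\pi^2}\,\Lambda^2}_{\text{stop loops}}
\;+\;\mathcal{O}(\log\Lambda)
```

The leftover pieces grow only logarithmically with the cutoff Λ and are proportional to the stop soft masses, which is exactly why naturalness wants the stops to be light.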

Using the full 2015 dataset (about 3.2 fb⁻¹), ATLAS conducted a search for pair-produced stops, each decaying to a top quark and a neutralino, the latter playing the role of the lightest supersymmetric particle. The top then decays as tops do, to a W boson and a b quark. The W usually can do what it wants, but in this case the group chose to select for one W decaying leptonically and one decaying to jets (leptons are easier to reconstruct, but have a lower branching ratio from the W, so it's a trade-off). This whole process is shown in Figure 1. So that gives a lepton from one W, jets from the other, and missing energy from the neutrino for a complete final state.
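
To put a rough number on that trade-off (a back-of-the-envelope with approximate PDG branching fractions, not figures from the paper), requiring exactly one of the two W bosons to decay to an electron or muon gives

```latex
2 \times \mathrm{BR}(W\to\ell\nu) \times \mathrm{BR}(W\to q\bar{q}')
\;\approx\; 2 \times 0.21 \times 0.68 \;\approx\; 0.29 ,
```

so only about 29% of stop pair events land in this channel, in exchange for a much cleaner signature than the all-hadronic mode.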

Figure 2: Transverse mass distribution in one of the signal regions.
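
For orientation, the transverse mass plotted here is presumably the standard one-lepton definition, built from the lepton transverse momentum and the missing transverse momentum (this conventional form is assumed, not quoted from the paper):

```latex
m_T \;=\; \sqrt{\,2\, p_T^{\ell}\, E_T^{\rm miss}\,
  \left(1 - \cos\Delta\phi\!\left(\ell,\, \mathbf{p}_T^{\rm miss}\right)\right)}
```

For background events where a W decays to a lepton and a neutrino, m_T has an endpoint near the W mass, so an excess in the high-m_T tail is where new physics would show up.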

The paper does report an excess in the data, with a significance around 2.3 sigma. In Figure 2, you can see this excess overlaid with all the known background predictions, and two possible signal models for various gluino and stop masses. This signal in the 700-800 GeV mass range is right around the current limit for the stop, so it’s not entirely inconsistent. While these sorts of excesses come and go a lot in particle physics, it’s certainly an exciting reason to keep looking.

Figure 3 shows our status with the stop and neutralino, using 8 TeV data. All the shaded regions here are mass points for the stop and neutralino that physicists have excluded at 95% confidence. So where do we go from here? You can see a sliver of white space on this plot that hasn't been excluded yet; that part is tough to probe because the mass splitting is so small that the neutralino emerges almost at rest, making it very hard to notice. It would be great to check out that parameter space, and there's an effort underway to do just that. But at the end of the day, only more time (and more data) can tell.

(P.S. This paper also reports a gluino search—too much to cover in one post, but check it out if you’re interested!)

Figure 3: Limit curves for stop and neutralino masses, with 8 TeV ATLAS dataset.

References & Further Reading

  1. “Supersymmetry, Part I (Theory),” PDG Review
  2. “Supersymmetry, Part II (Experiment),” PDG Review
  3. ATLAS Supersymmetry Public Results Twiki
  4. “Opening up the compressed region of stop searches at 13 TeV LHC”, arXiv:1506.00653 [hep-ph]


Respecting your “Elders”

Theoretical physicists have designed a new framework in which dark matter interactions can explain the observed amount of dark matter in the universe today. This elastically decoupling dark matter framework is a hybrid of conventional and novel dark matter models.

Presenting: Elastically Decoupling Dark Matter
Authors: Eric Kuflik, Maxim Perelstein, Nicolas Rey-Le Lorier, Yu-Dai Tsai
Reference: arXiv:1512.04545; Phys. Rev. Lett. 116, 221302 (2016)

The particle identity of dark matter is one of the biggest open questions in physics. The simplest and most widely assumed explanation is that dark matter is a weakly-interacting massive particle (WIMP). Assuming that dark matter starts out in thermal equilibrium in the hot plasma of the early universe, the present cosmic abundance of WIMPs is set by the balance of two effects:

  1. When two WIMPs find each other, they can annihilate into ordinary matter. This depletes the number of WIMPs in the universe.
  2. The universe is expanding, making it harder for WIMPs to find each other.

This process of “thermal freeze out” leads to an abundance of WIMPs controlled by the dark matter mass and interaction strength. The term “weakly-interacting massive particle” comes from the observation that a dark matter particle with roughly the mass of the weak force carriers, interacting through the weak nuclear force, gives the experimentally measured dark matter density today.
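
In equations, both effects appear in the standard Boltzmann equation for the WIMP number density n, where H is the Hubble expansion rate and n_eq the equilibrium density (a textbook form, not specific to this paper):

```latex
\frac{dn}{dt} + 3Hn \;=\; -\langle\sigma v\rangle\left(n^2 - n_{\rm eq}^2\right)
```

Freeze-out happens when the annihilation rate drops below H. Numerically, the relic density works out to roughly Ω_χ h² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩, so a weak-scale cross section of about 3 × 10⁻²⁶ cm³ s⁻¹ lands near the observed value, the celebrated “WIMP miracle.”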

Two ways for a new particle, X, to produce the observed dark matter abundance: (left) WIMP annihilation into Standard Model (SM) particles versus (right) SIMP 3-to-2 interactions that reduce the amount of dark matter.

More recently, physicists noticed that dark matter with very large interactions with itself (not with ordinary matter) can produce the correct dark matter density in another way. These “strongly interacting massive particle” (SIMP) models regulate the amount of dark matter through 3-to-2 interactions that reduce the total number of dark matter particles, rather than through annihilation into ordinary matter.
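
Schematically, the 3-to-2 process changes the collision term of the Boltzmann equation from the WIMP form above to (again a schematic form, not quoted from the paper):

```latex
\frac{dn}{dt} + 3Hn \;=\; -\langle\sigma v^2\rangle_{3\to2}\left(n^3 - n^2\, n_{\rm eq}\right)
```

The n³ dependence means the process stays efficient only while the dark matter is dense, which is why the self-coupling, rather than the coupling to ordinary matter, sets the relic abundance in the pure SIMP limit.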

The authors of 1512.04545 have proposed an intermediate road that interpolates between these two types of dark matter: the “elastically decoupling dark matter” (ELDER) scenario. ELDERs have both of the above interactions: they can annihilate pairwise into ordinary matter, or sets of three ELDERs can turn into two ELDERs.

Thermal history of ELDERs, adapted from 1512.04545.

The cosmic history of these ELDERs is as follows:

  1. ELDERs are produced in the thermal bath immediately after the big bang.
  2. Pairs of ELDERs annihilate into ordinary matter. Like WIMPs, they interact weakly with ordinary matter.
  3. As the universe expands, the rate for annihilation into Standard Model particles falls below the rate at which the universe expands.
  4. Assuming that the ELDERs interact strongly amongst themselves, the 3-to-2 number-changing process still occurs. Because this process distributes the energy of 3 ELDERs in the initial state to 2 ELDERs in the final state, the two outgoing ELDERs have more kinetic energy: they're hotter. This turns out to largely counteract the effect of the expansion of the universe (see the kinematics sketched below).
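
The heating in step 4 is just energy conservation for nonrelativistic particles of mass m (a schematic estimate):

```latex
3\left(m + \langle E_k\rangle\right) \;=\; 2\left(m + \langle E_k'\rangle\right)
\quad\Longrightarrow\quad
\langle E_k'\rangle \;=\; \tfrac{3}{2}\langle E_k\rangle + \tfrac{m}{2}
```

Each 3-to-2 reaction converts one particle's rest mass into kinetic energy for the two survivors, injecting of order m/2 per reaction and counteracting the cooling from cosmic expansion.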

The neat effect here is that the abundance of ELDERs is actually set by the interaction with ordinary matter, like WIMPs. However, because they have this 3-to-2 heating period, they are able to produce the observed present-day dark matter density for very different choices of interactions. In this sense, the authors show that this framework opens up a new “phase diagram” in the space of dark matter theories:

A "phase diagram" of dark matter models. The vertical axis represents the dark matter self-coupling strength while the horizontal axis represents the coupling to ordinary matter.
A “phase diagram” of dark matter models. The vertical axis represents the dark matter self-coupling strength while the horizontal axis represents the coupling to ordinary matter.

Background reading on dark matter: