Dark matter or Pulsars?

Title: 3FGL Demographics Outside the Galactic Plane using Supervised Machine Learning: Pulsar and Dark Matter Subhalo Interpretations
Publication: arXiv:1605.00711, accepted to ApJ

The universe has a way of keeping scientists guessing. For over 70 years, scientists have been trying to understand the particle nature of dark matter. We’ve buried detectors deep underground to shield them from backgrounds, smashed particles together at inconceivably high energies, and dedicated instruments to observing regions where we have measured dark matter to be a dominant component. Like any good mystery, this has yielded more questions than answers.

There are a lot of ideas as to what the distribution of dark matter looks like in the universe. One example comes from a paper by L. Pieri et al. (PRD 83, 023518 (2011), arXiv:0908.0195), who simulated what the gamma-ray sky would look like from dark matter annihilation into b-quarks. The results of their simulation are shown below. The plot is an Aitoff projection in galactic coordinates (meaning that the center of the galaxy is at the center of the map).

Gamma-ray sky map from dark matter annihilation into bb using the Via Lactea simulation between 3-40 GeV. L. Pieri et al., PRD 83, 023518 (2011), arXiv:0908.0195

The obvious bright spot is the Galactic center. This is because the center of the Milky Way has the highest nearby density of dark matter (F. Iocco, Pato, Bertone, Nature Physics 11, 245–248 (2015)). Just for some context, the center of the Milky Way is ~8.5 kiloparsecs, or about 27,700 light years, away from us… so it’s a big neighborhood. However, the center of the galaxy is particularly hard to observe because the Galaxy itself obstructs our view. As it turns out, there are lots of stars, gas, and dust in our Galaxy 🙂

This map also shows us that there are other regions of high dark matter density away from the Galactic center. These could be dark matter halos, dwarf spheroidal galaxies, galaxy clusters, or anything else with a high density of dark matter. The paper I’m discussing uses this simulation in combination with the Fermi-LAT 3rd source catalog (3FGL) (Fermi-LAT Collaboration, Astrophys. J. Suppl 218 (2015) arXiv:1501.02003).

Over 1/3 of the sources in the 3FGL are unassociated with a known astrophysical source (this means we don’t know what source is yielding gamma rays). The paper analyzes these sources to see if their gamma-ray flux is consistent with dark matter annihilation or if it’s more consistent with the spectral shape from pulsars, rapidly rotating neutron stars with strong magnetic fields that emit radio waves (and gamma-rays) in very regular pulses. These are a fascinating class of astrophysical objects and I’d highly recommend reading up on them (See NASA’s site). The challenge is that the gamma-ray flux from dark matter annihilation into b-quarks is surprisingly similar to that from pulsars (See below).
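To get a feel for this degeneracy, here is a rough sketch (my own illustration, not the paper’s fitting code) comparing the two spectral shapes: a standard power law with an exponential cutoff for a pulsar, and the Bergström et al. fitting form for annihilation into b-quarks. All parameter values below are illustrative.

```python
import numpy as np

def pulsar_spectrum(E_GeV, gamma=1.5, E_cut=3.0):
    """Standard gamma-ray pulsar parameterization: a power law with an
    exponential cutoff (the index and cutoff here are illustrative)."""
    return E_GeV ** (-gamma) * np.exp(-E_GeV / E_cut)

def dm_bb_spectrum(E_GeV, m_chi=30.0):
    """Approximate photon yield for chi chi -> b bbar, using the
    Bergstrom et al. fitting form dN/dx ~ 0.73 e^(-7.8 x) / x^1.5,
    with x = E / m_chi."""
    x = E_GeV / m_chi
    return 0.73 * np.exp(-7.8 * x) / x**1.5

E = np.logspace(-0.5, 1.5, 100)  # ~0.3 to ~30 GeV
psr = pulsar_spectrum(E)
dm = dm_bb_spectrum(E)
# Both are steeply falling curves that cut off at a few GeV -- the
# spectral degeneracy the paper has to contend with.
```

Plotted on a log-log scale, the two shapes are hard to tell apart at Fermi-LAT statistics, which is exactly why the classification has to lean on machine learning rather than simple spectral cuts.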

Gamma-ray spectra of dark matter annihilating into b-quarks and taus. (Image produced by me using DMFit)
Gamma-ray spectrum from pulsars in the globular cluster 47 Tuc. (Fermi-LAT Collaboration)


Using two different machine learning techniques, they found 34 candidate sources away from the Galactic plane whose spectra are consistent with both dark matter annihilation and pulsars. Generally, if a source can be explained by something other than dark matter, that’s the accepted interpretation, so the currently favored astrophysical interpretation for these objects is pulsars. Yet these objects could also be interpreted as dark matter annihilation taking place in ultra-faint dwarf galaxies or dark matter subhalos. Unfortunately, Fermi-LAT spectra alone are not sufficient to break the degeneracy between the two scenarios. The distribution of the 34 dark matter subhalo candidates found in this work is shown below.

Galactic distribution of 34 high-latitude Galactic candidates (red circles) superimposed on a smoothed Fermi LAT all-sky map for energies E ≥ 1 GeV based on events collected during the period 2008 August 4–2015 August 4 (Credit: Fermi LAT Collaboration). High-latitude 3FGL pulsars (blue crosses) are also plotted for comparison.


The paper presents scenarios consistent with both the pulsar interpretation and the dark matter interpretation. If the sources are pulsars, the 34 candidates are in excellent agreement with predictions from a new population model that predicts many more pulsars than have currently been found. If they are dark matter substructures, the authors place upper limits on the number of Galactic subhalos surviving today and on the dark matter annihilation cross section. The cross-section limits are shown below.


Upper limits on the dark matter annihilation cross section for the b-quark channel using the 14 subhalo candidates very far from the galactic plane (>20 degrees) (black solid line). The dashed red line is an upper limit derived from the Via Lactea II simulation when zero 3FGL subhalos are adopted (Schoonenberg et al. 2016). The blue line corresponds to the constraint for zero 3FGL subhalo candidates using the Aquarius simulation instead (Bertoni, Hooper, & Linden 2015). The horizontal dotted line marks the canonical thermal relic cross section (Steigman, Dasgupta, & Beacom 2012).


The only thing we can do (beyond waiting for more Fermi-LAT data) is try to identify these sources definitively as pulsars, which requires extensive follow-up observations using other telescopes (in particular radio telescopes, to look for pulses). So stay tuned!

Other reading: See also Chris Karwin’s ParticleBite on the Fermi-LAT analysis.

The Fermi LAT Data Depicting Dark Matter Detection

The center of the galaxy is brighter than astrophysicists expected. Could this be the result of the self-annihilation of dark matter? Chris Karwin, a graduate student from the University of California, Irvine presents the Fermi collaboration’s analysis.

Editor’s note: this is a guest post by one of the students involved in the published result.

Presenting: Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Authors: The Fermi-LAT Collaboration (ParticleBites blogger is a co-author)
Reference: arXiv:1511.02938, Astrophys. J. 819 (2016) no. 1, 44
Artist’s rendition of the Fermi Gamma-ray Space Telescope in orbit. Image from NASA.

Introduction

Like other telescopes, the Fermi Gamma-Ray Space Telescope is a satellite that scans the sky collecting light. Unlike many telescopes, it searches for very high energy light: gamma-rays. The satellite’s main component is the Large Area Telescope (LAT). When this detector is hit by a high-energy gamma-ray, it measures the energy and the direction in the sky from which it originated. The data provided by the LAT is an all-sky photon counts map:

All-sky counts map of gamma-rays. The color scale corresponds to the number of detected photons. Image from NASA.

In 2009, researchers noticed that there appeared to be an excess of gamma-rays coming from the galactic center. This excess is found by making a model of the known astrophysical gamma-ray sources and then comparing it to the data.

What makes the excess so interesting is that its features seem consistent with predictions from models of dark matter annihilation. Dark matter theory and simulations predict:

  1. The distribution of dark matter in space. The gamma rays coming from dark matter annihilation should follow this distribution, or spatial morphology.
  2. The particles to which dark matter directly annihilates. This gives a prediction for the expected energy spectrum of the gamma-rays.
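To illustrate point 1, here is a toy calculation (my own sketch, not from the Fermi analysis) of how a dark matter density profile translates into a gamma-ray morphology: the annihilation flux along a given direction is proportional to the line-of-sight integral of the density squared. The NFW profile and all numbers below are illustrative.

```python
import numpy as np

def rho_nfw(r_kpc, rho_s=0.4, r_s=20.0):
    """NFW dark matter density profile; the normalization and scale
    radius are illustrative Milky-Way-like values."""
    x = r_kpc / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def annihilation_weight(psi_rad, d_sun=8.5, l_max=100.0, n=4000):
    """Line-of-sight integral of rho^2 at angle psi from the Galactic
    center; annihilation flux scales with this (the 'J-factor')."""
    l = np.linspace(1e-3, l_max, n)  # kpc along the line of sight
    r = np.sqrt(d_sun**2 + l**2 - 2.0 * d_sun * l * np.cos(psi_rad))
    return np.sum(rho_nfw(r) ** 2) * (l[1] - l[0])

# The rho^2 weighting makes the inner Galaxy dramatically brighter than
# high latitudes -- the bright central spot in annihilation sky maps.
j_center = annihilation_weight(np.radians(1.0))
j_high = annihilation_weight(np.radians(60.0))
```

The squared density is what distinguishes annihilation morphologies from ordinary astrophysical emission, which is one of the handles the analysis uses.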

Although a dark matter interpretation of the excess is a very exciting scenario that would tell us new things about particle physics, there are also other possible astrophysical explanations. For example, many physicists argue that the excess may be due to an unresolved population of milli-second pulsars. Another possible explanation is that it is simply due to the mis-modeling of the background. Regardless of the physical interpretation, the primary objective of the Fermi analysis is to characterize the excess.

The main systematic uncertainty of the experiment is our limited understanding of the backgrounds: the gamma rays produced by known astrophysical sources. In order to include this uncertainty in the analysis, four different background models are constructed. Although these models are methodically chosen so as to account for our lack of understanding, it should be noted that they do not necessarily span the entire range of possible error. For each of the background models, a gamma-ray excess is found. With the objective of characterizing the excess, additional components are then added to the model. Among the different components tested, it is found that the fit is most improved when dark matter is added. This is an indication that the signal may be coming from dark matter annihilation.

Analysis

This analysis is interested in the gamma rays coming from the galactic center. However, when looking towards the galactic center the telescope detects all of the gamma-rays coming from both the foreground and the background. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources.

Schematic of the experiment. We are interested in gamma-rays coming from the galactic center, represented by the red circle. However, the LAT detects all of the gamma-rays coming from the foreground and background, represented by the blue region. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources. Image adapted from Universe Today.

An overview of the analysis chain is as follows. The model of the observed region comes from performing a likelihood fit of the parameters for the known astrophysical sources. A likelihood fit is a statistical procedure that calculates the probability of observing the data given a set of parameters. In general there are two types of sources:

  1. Point sources such as known pulsars
  2. Diffuse sources due to the interaction of cosmic rays with the interstellar gas and radiation field
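A toy version of such a fit (my own illustration, far simpler than the actual LAT analysis) might look like this: build a model as a point-source template plus a diffuse template with free amplitudes, and find the amplitudes that maximize the binned Poisson likelihood of the observed counts.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy 1-D "sky": a smooth diffuse template plus a PSF-smeared point source.
pix = np.arange(50)
diffuse_tmpl = 1.0 + 0.02 * pix
point_tmpl = np.exp(-0.5 * (pix - 25.0) ** 2 / 2.0**2)

# Simulated observed counts with "true" amplitudes 30 (diffuse), 80 (point).
data = rng.poisson(30.0 * diffuse_tmpl + 80.0 * point_tmpl)

def neg_log_like(params):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    a_diff, a_pt = params
    mu = a_diff * diffuse_tmpl + a_pt * point_tmpl
    mu = np.clip(mu, 1e-9, None)  # guard against unphysical trial amplitudes
    return np.sum(mu - data * np.log(mu))

# "Fitting the parameters" = finding the amplitudes that maximize the
# probability of observing the data.
res = minimize(neg_log_like, x0=[10.0, 10.0], method="Nelder-Mead")
a_diff_fit, a_pt_fit = res.x
```

The real analysis does the same thing with hundreds of point sources, physically motivated diffuse maps, and an energy dimension, but the statistical machinery is this Poisson likelihood maximization.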

Parameters for these two types of sources are fit at the same time. One of the main uncertainties in the background is the cosmic ray source distribution. This is the number of cosmic ray sources as a function of distance from the center of the galaxy. It is believed that cosmic rays come from supernovae. However, the source distribution of supernova remnants is not well determined. Therefore, other tracers must be used. In this context a tracer refers to a measurement that can be made to infer the distribution of supernova remnants. This analysis uses both the distribution of OB stars and the distribution of pulsars as tracers. The former refers to OB associations, which are regions of O-type and B-type stars. These hot massive stars are progenitors of supernovae. In contrast to these progenitors, the distribution of pulsars is also used since pulsars are the end state of supernovae. These two extremes serve to encompass the uncertainty in the cosmic ray source distribution, although, as mentioned earlier, this uncertainty is by no means bracketing. Two of the four background model variants come from these distributions.

An overview of the analysis chain. In general there are two types of sources: point sources and diffuse source. The diffuse sources are due to the interaction of cosmic rays with interstellar gas and radiation fields. Spectral parameters for the diffuse sources are fit concurrently with the point sources using a likelihood fit. The question mark represents the possibility of an additional component possibly missing from the model, such as dark matter.

The information pertaining to the cosmic rays, gas, and radiation fields is input into a propagation code called GALPROP. This produces an all-sky gamma-ray intensity map for each of the physical processes that produce gamma-rays: cosmic ray protons interacting with the interstellar gas to produce neutral pions, which quickly decay into gamma-rays; cosmic ray electrons up-scattering low-energy photons of the radiation field via inverse Compton scattering; and cosmic ray electrons interacting with the gas to produce gamma-rays via bremsstrahlung radiation.

Residual map for one of the background models. Image from arXiv:1511.02938.

The maps of all the processes are then tuned to the data. In general, tuning is a procedure by which the background models are optimized for the particular data set being used. This is done using a likelihood analysis. There are two different tuning procedures used for this analysis. One tunes the normalization of the maps, and the other tunes both the normalization and the extra degrees of freedom related to the gas emission interior to the solar circle. These two tuning procedures, performed for the two cosmic ray source models, make up the four different background models.

Point source models are then determined for each background model, and the spectral parameters for both diffuse sources and point sources are simultaneously fit using a likelihood analysis.

Results and Conclusion

Best fit dark matter spectra for the four different background models. Image: 1511.02938

In the plot of the best fit dark matter spectra for the four background models, the hatching of each curve corresponds to the statistical uncertainty of the fit. The systematic uncertainty can be interpreted as the region enclosed by the four curves. Results from other analyses of the galactic center are overlaid on the plot. This result shows that the galactic center analysis performed by the Fermi collaboration allows a broad range of possible dark matter spectra.

The Fermi analysis has shown that, within systematic uncertainties, a gamma-ray excess coming from the galactic center is detected. In order to try to explain this excess, additional components were added to the model. Among the additional components tested, it was found that the fit is most improved with the addition of a dark matter component. However, this does not establish that a dark matter signal has been detected. There is still a good chance that the excess is due to something else, such as an unresolved population of millisecond pulsars or mis-modeling of the background. Further work must be done to better understand the background and better characterize the excess. Nevertheless, it remains an exciting prospect that the gamma-ray excess could be a signal of dark matter.


Background reading on dark matter and indirect detection:

Respecting your “Elders”

Theoretical physicists have designed a new way to explain how dark matter interactions can account for the observed amount of dark matter in the universe today. This elastically decoupling dark matter framework is a hybrid of conventional and novel dark matter models.

Presenting: Elastically Decoupling Dark Matter
Authors: Eric Kuflik, Maxim Perelstein, Nicolas Rey-Le Lorier, Yu-Dai Tsai
Reference: arXiv:1512.04545, Phys. Rev. Lett. 116, 221302 (2016)

The particle identity of dark matter is one of the biggest open questions in physics. The simplest and most widely assumed explanation is that dark matter is a weakly-interacting massive particle (WIMP). Assuming that dark matter starts out in thermal equilibrium in the hot plasma of the early universe, the present cosmic abundance of WIMPs is set by the balance of two effects:

  1. When two WIMPs find each other, they can annihilate into ordinary matter. This depletes the number of WIMPs in the universe.
  2. The universe is expanding, making it harder for WIMPs to find each other.

This process of “thermal freeze out” leads to an abundance of WIMPs controlled by the dark matter mass and interaction strength. The term “weakly-interacting massive particle” comes from the observation that dark matter particles with roughly the mass of the weak force carriers, interacting through the weak nuclear force, give the experimentally measured dark matter density today.
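Freeze-out can be sketched numerically with the standard textbook Boltzmann equation for the comoving WIMP yield Y as a function of x = mass/temperature. This is a schematic toy, not a full relic abundance calculation; the normalizations are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def y_eq(x):
    """Equilibrium comoving yield of a nonrelativistic particle
    (schematic normalization)."""
    return 0.145 * x**1.5 * np.exp(-x)

lam = 1e9  # dimensionless interaction strength; bigger -> more annihilation

def boltzmann(x, Y):
    # Annihilation (effect 1) pushes Y toward equilibrium; the 1/x^2 from
    # cosmic expansion (effect 2) eventually shuts the process off.
    return [-(lam / x**2) * (Y[0] ** 2 - y_eq(x) ** 2)]

sol = solve_ivp(boltzmann, (1.0, 1000.0), [y_eq(1.0)],
                method="Radau", rtol=1e-8, atol=1e-30)
Y_final = sol.y[0, -1]
# Y tracks y_eq until x ~ 20, then "freezes out" at a constant relic value
# set by lam: stronger interactions leave less dark matter behind.
```

The punchline of the WIMP miracle is that weak-scale masses and couplings plugged into this equation give roughly the observed dark matter density.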

Two ways for a new particle, X, to produce the observed dark matter abundance: (left) WIMP annihilation into Standard Model (SM) particles versus (right) SIMP 3-to-2 interactions that reduce the amount of dark matter.

More recently, physicists noticed that dark matter with very large interactions with itself (though not with ordinary matter) can produce the correct dark matter density in another way. These “strongly interacting massive particle” (SIMP) models regulate the amount of dark matter through 3-to-2 interactions that reduce the total number of dark matter particles, rather than through annihilation into ordinary matter.

The authors of 1512.04545 have proposed an intermediate road that interpolates between these two types of dark matter: the “elastically decoupling dark matter” (ELDER) scenario. ELDERs have both of the above interactions: they can annihilate pairwise into ordinary matter, or sets of three ELDERs can turn into two ELDERs.

Thermal history of ELDERs, adapted from arXiv:1512.04545.

The cosmic history of these ELDERs is as follows:

  1. ELDERs are produced in the thermal bath immediately after the big bang.
  2. Pairs of ELDERs annihilate into ordinary matter. Like WIMPs, they interact weakly with ordinary matter.
  3. As the universe expands, the rate for annihilation into Standard Model particles falls below the rate at which the universe expands.
  4. Assuming that the ELDERs interact strongly amongst themselves, the 3-to-2 number-changing process still occurs. Because this process distributes the energy of 3 ELDERs in the initial state to 2 ELDERs in the final state, the two outgoing ELDERs have more kinetic energy: they’re hotter. This turns out to largely counteract the effect of the expansion of the universe.
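The heating in step 4 is simple kinematics. In a toy calculation, assume three ELDERs of mass m annihilate essentially at rest:

```python
import math

m = 1.0  # ELDER mass (arbitrary units)

# Three slow ELDERs (total energy ~ 3m) annihilate into two, so each
# outgoing ELDER carries E = 1.5 m ...
E_out = 3.0 * m / 2.0

# ... which means a momentum p = sqrt(E^2 - m^2) and a kinetic energy
# of 0.5 m for each outgoing particle:
p_out = math.sqrt(E_out**2 - m**2)
kinetic_out = E_out - m

# That injected kinetic energy "heats" the dark sector, counteracting
# the cooling from cosmic expansion.
```

Half the rest-mass energy of every third participant gets converted into heat, which is why the 3-to-2 phase keeps the dark sector warm.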

The neat effect here is that the abundance of ELDERs is actually set by the interaction with ordinary matter, like WIMPs. However, because they have this 3-to-2 heating period, they are able to produce the observed present-day dark matter density for very different choices of interactions. In this sense, the authors show that this framework opens up a new “phase diagram” in the space of dark matter theories:

A “phase diagram” of dark matter models. The vertical axis represents the dark matter self-coupling strength while the horizontal axis represents the coupling to ordinary matter.

Background reading on dark matter:

QCD, CP, PQ, and the Axion

Figure 1: Axions– exciting new elementary particles, or a detergent? (credit to The Big Blog Theory, [5])

Before we dig into all the physics behind these acronyms (beyond SM physics! dark matter!), let’s start by breaking down the title.

QCD, or quantum chromodynamics, is the study of how quarks and gluons interact. CP is the combined operation of charge conjugation and parity; it swaps a particle for its antiparticle, then switches left and right. CP symmetry states that applying both operations should leave the laws of physics invariant, which is true for electromagnetism. Interestingly, it is violated by the weak force (this relates to the problem of matter-antimatter asymmetry [1]). But more importantly for us, the strong force appears to maintain CP symmetry. In fact, that’s exactly the problem.

CP violation in QCD would give an electric dipole moment to the neutron. Experimentally, physicists have constrained this value pretty tightly around zero. But our QCD Lagrangian has a more complicated vacuum than first thought, giving it a term (Figure 2) with a phase parameter that would break CP [2]. Basically, our issue is that the theory predicts some degree of CP violation, but experimentally we just don’t see it. This is known as the strong CP problem.

Figure 2: QCD Lagrangian term allowing for CP violation. Current experimental constraints place θ_eff ≤ 10^−10.
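For reference, the term shown in Figure 2 is the usual QCD θ-term; in standard textbook notation (not transcribed from this article’s figure) it reads

```latex
\mathcal{L}_{\theta} \;=\; \theta_{\mathrm{eff}}\,
\frac{g_s^{2}}{32\pi^{2}}\, G^{a}_{\mu\nu}\,\tilde{G}^{a\,\mu\nu},
```

where G^a_{μν} is the gluon field strength, G̃ its dual, and θ_eff the observable combination of the bare θ angle and the overall phase of the quark mass matrix.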

Naturally physicists want to find a fix for this problem, bringing us to the rest of the article title. The most recognized solution is Peccei-Quinn (PQ) theory. The idea is to promote the phase parameter from a constant to a dynamical quantity by adding a new symmetry to the Standard Model. This symmetry, called U(1)_PQ, is spontaneously broken, meaning that all states of the theory share the symmetry except for the ground state.

This may sound a bit similar to the Higgs mechanism, because it is. In both cases, we get a non-zero vacuum expectation value and an extra massless boson, called a Goldstone boson. However, the new boson is not exactly massless. Very few things are exact in physics, and because U(1)_PQ is only an approximate symmetry, our “massless” Goldstone boson gains a tiny bit of mass after all. This new particle created from PQ theory is called an axion. The axion effectively steps into the role of the phase parameter, allowing its value to relax to 0.
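The axion’s tiny mass is tied to the scale f_a at which the PQ symmetry breaks. A quick numerical sketch using the commonly quoted chiral-perturbation-theory benchmark (the 5.7 µeV prefactor is the standard approximate value, not a precise prediction):

```python
def axion_mass_eV(f_a_GeV):
    """Standard QCD axion benchmark: m_a ~ 5.7 micro-eV x (1e12 GeV / f_a).
    The prefactor is the commonly quoted chiral-perturbation-theory value;
    treat it as approximate."""
    return 5.7e-6 * (1e12 / f_a_GeV)

# The mass is inversely proportional to the PQ breaking scale f_a, which
# is why model builders can move the axion mass over many orders of
# magnitude:
m_gut = axion_mass_eV(1e15)  # GUT-scale f_a -> nano-eV axion
m_low = axion_mass_eV(1e9)   # lower f_a  -> milli-eV axion
```

This inverse relation is why the axion mass is so weakly constrained a priori: pushing f_a around moves the mass across many decades.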

Is it reasonable to imagine some extra massive Standard Model particle bouncing around that we haven’t detected yet? Sure. Perhaps the axion is so heavy that we haven’t yet probed the necessary energy range at the LHC. Or maybe it interacts so rarely that we’ve been looking in the right places and just haven’t had the statistics. But any undiscovered massive particle floating around should make you think about dark matter. In fact, the axion is one of the few remaining viable candidates for DM, and lots of people are looking pretty hard for it.

One of the largest collaborations is ADMX at the University of Washington, which uses an RF cavity in a superconducting magnet to detect the very rare conversion of a DM axion into a microwave photon [3]. In order to be a good dark matter candidate, the axion would have to be fairly light, and some theories place its mass below 1 eV (for reference, neutrino masses are constrained to be around 0.3 eV or less). ADMX has eliminated possible masses on the micro-eV order. However, theorists are clever, and there’s a lot of model tuning available that can pin the axion mass practically anywhere you want it to be.
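The connection between axion mass and the microwave band ADMX scans is just E = hν: a converted axion produces a photon carrying the axion’s rest energy. A quick back-of-envelope check (my own, not ADMX code):

```python
H_PLANCK = 4.135667696e-15  # Planck constant in eV * s (CODATA value)

def cavity_frequency_Hz(m_a_eV):
    """Resonant frequency for axion-to-photon conversion: the photon
    carries the axion's rest energy, so nu = m_a / h."""
    return m_a_eV / H_PLANCK

nu = cavity_frequency_Hz(1e-6)  # a micro-eV axion -> a few hundred MHz
```

Micro-eV masses land squarely in the microwave regime, which is why a tunable RF cavity is the detector of choice.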

Now is the time to factor in the recent buzz about a diphoton excess at 750 GeV (see the January 2016 ParticleBites post to catch up on this.) Recent papers are trying to place the axion at this mass, since that resonance is yet to be explained by Standard Model processes.

Figure 3: Plot of data for the recent diphoton excess observed at the LHC.

For example, one can consider aligned QCD axion models, in which there are multiple axions with decay constants around the weak scale, in line with the dark matter relic abundance [4]. The models can get pretty diverse from here, suffice it to say that there are many possibilities. Though this excess is still far from confirmed, it is always exciting to speculate about what we don’t know and how we can figure it out. Because of strong CP and these recent model developments, the axion has earned a place pretty high up on this speculation list.


References

  1. The Mystery of CP Violation, Gabriella Sciolla, MIT
  2. TASI Lectures on the Strong CP Problem
  3. Axion Dark Matter Experiment (ADMX)
  4. Quality of the Peccei-Quinn symmetry in the Aligned QCD Axion and Cosmological Implications, arXiv: 1603.0209 [hep-ph].
  5. The Big Blog Theory on axions


Monojet Dark Matter Searches at the LHC

Now is a good time to be a dark matter experiment. The astrophysical evidence for its existence is almost undeniable (such as gravitational lensing and the cosmic microwave background; see the “Further Reading” list if you want to know more.) Physicists are pulling out all the stops trying to pin DM down by any means necessary.

However, by its very nature, it is extremely difficult to detect; dark matter is called dark because it has no known electromagnetic interactions, meaning it doesn’t couple to the photon. It does, however, have very noticeable gravitational effects, and some theories allow for the possibility of weak interactions as well.

While there are a wide variety of experiments searching for dark matter right now, the scope of this post will be a bit narrower, focusing on a common technique used to look for dark matter at the LHC, known as ‘monojets’. We rely on the fact that a quark-quark interaction could actually produce dark matter particle candidates, known as weakly interacting massive particles (WIMPs), through some unknown process. Most likely, the dark matter would then pass through the detector without any interactions, kind of like neutrinos. But if it doesn’t have any interactions, how do we expect to actually see anything? Figure 1 shows the overall Feynman diagram of the interaction; I’ll explain how and why each of these particles comes into the picture.

Figure 1: Feynman diagram for dark matter production process.

The answer is a pretty useful metric used by particle physicists to measure things that don’t interact, known as ‘missing transverse energy’, or MET. When two protons are accelerated down the beam line, their initial momentum in the transverse plane is necessarily zero. Your final state can have all kinds of decay products in that plane, but by conservation of momentum, their magnitudes and directions have to add up to zero in the end. If you add up all your momentum in the transverse plane and get a non-zero value, you know the remaining momentum was taken away by non-interacting particles. In our case, dark matter is going to be the missing piece of the puzzle.
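A minimal illustration of the bookkeeping (the numbers are invented for illustration, not real event data): vector-sum the visible transverse momenta, and whatever is needed to balance them is the missing transverse energy.

```python
import math

# Visible final-state objects as (pT, phi) in the transverse plane;
# the values are invented for illustration.
visible = [
    (120.0, 0.1),   # the hard "monojet"
    (15.0, 2.8),    # a soft extra jet
    (8.0, -1.9),    # a soft lepton
]

# Vector-sum the visible transverse momenta ...
px = sum(pt * math.cos(phi) for pt, phi in visible)
py = sum(pt * math.sin(phi) for pt, phi in visible)

# ... whatever balances the sum is the missing transverse energy,
# carried off by invisible particles (neutrinos -- or dark matter).
met = math.hypot(px, py)
met_phi = math.atan2(-py, -px)  # MET points opposite the visible sum
```

Here the MET recoils almost back-to-back against the hard jet, which is exactly the topology the monojet selection looks for.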

Figure 2: Event display for one of the monojet candidates in the ATLAS 7 TeV data.

Now our search method is to collide protons and look for… well, nothing. That’s not an easy thing to do. So let’s add another particle to our final state: a single jet that was radiated off one of the initial protons. This is a pretty common occurrence in LHC collisions, so we’re not ruining our statistics. But now we have an extra handle on selecting these events, since that radiated single jet is going to recoil off the missing energy in the final state.

An actual event display from the ATLAS detector is shown in Figure 2 (where the single jet is shown in yellow in the transverse plane of the detector).

No results have been released yet from the monojet groups with the 13 and 14 TeV data. However, the same method was used on the 2012–2013 LHC data, and has provided some results that can be compared to current knowledge. Figure 3 shows the WIMP-nucleon cross section as a function of WIMP mass from CMS at the LHC (EPJC 75 (2015) 235), overlaid with exclusions from a variety of other experiments. Anything above/right of these curves is the excluded region.

From here we can see that the LHC can provide better sensitivity to low mass regions with spin dependent couplings to DM. It’s worth giving the brief caveat that these comparisons are extremely model dependent and require a lot of effective field theory; notes on this are also given in the Further Reading list. The current results look pretty thorough, and a large region of the WIMP mass seems to have been excluded. Interestingly, some searches observe slight excesses in regions that other experiments have ruled out; in this way, these ‘exclusions’ are not necessarily as cut and dry as they may seem. The dark matter mystery is still far from a resolution, but the LHC may be able to get us a little bit closer.



With all this incoming data and such a wide variety of searches ongoing, it’s likely that dark matter will remain a hot topic in physics for decades to come, with or without a discovery. In the words of dark matter pioneer Vera Rubin, “We have peered into a new world, and have seen that it is more mysterious and more complex than we had imagined. Still more mysteries of the universe remain hidden. Their discovery awaits the adventurous scientists of the future. I like it this way.”


References & Further Reading:

  • Links to the CMS and ATLAS 8 TeV monojet analyses
  • “Dark Matter: A Primer”, arXiv:1006.2483 [hep-ph]
  • Effective Field Theory notes
  • “Simplified Models for Dark Matter Searches at the LHC”, arXiv:1506.03116 [hep-ph]
  • “Search for dark matter at the LHC using missing transverse energy”, Sarah Malik, CMS Collaboration Moriond talk


Dark Photons from the Center of the Earth

Presenting: Dark Photons from the Center of the Earth
Authors: J. Feng, J. Smolinsky, P. Tanedo (disclosure: blog post is by an author on the paper)
Reference: arXiv:1509.07525

Dark matter may be collecting in the center of the Earth. A recent paper explores ways to detect its decay products here on the surface.

Dark matter may collect in the Earth and annihilate into dark photons, which propagate to the surface before decaying into pairs of particles that can be detected by a large-volume neutrino detector like IceCube. Image from arXiv:1509.07525.

Our entire galaxy is gravitationally held together by a halo of dark matter, whose particle properties remain one of the biggest open questions in high energy physics. One class of theories assumes that the dark matter particles interact through a dark photon, a hypothetical particle which mediates a force analogous to how the ordinary photon mediates electromagnetism.

These theories also permit the ordinary and dark photons to have a small quantum mechanical mixing. This effectively means that the dark photon can interact very weakly with ordinary matter and mediate interactions between ordinary matter and dark matter, giving us a handle for detecting dark matter.

While most methods for detecting dark matter focus on building detectors that are sensitive to the “wind” of dark matter bombarding (and mostly passing through) the Earth as the solar system zooms through the galaxy, the authors of 1509.07525 follow up on an idea initially proposed in the mid-1980s: dark matter hitting the Earth might get stuck in the Earth’s gravitational potential and build up in its core.

These dark matter particles can then find each other and annihilate. If they annihilate into very weakly interacting particles, then these may be detected at the surface of the Earth. A typical example is dark matter annihilation into neutrinos. In 1509.07525, the authors examine the case where the dark matter annihilates into dark photons, which can pass through the Earth as easily as a neutrino and decay into pairs of electrons or muons near the surface.

These decays can be detected in large neutrino detectors, such as the IceCube neutrino observatory (previously featured in ParticleBites). In the case where the dark matter is very heavy (e.g. TeV in mass) and the dark photons are very light (e.g. 200 MeV), these dark photons are very boosted and their decay products point back to the center of the Earth. This is a powerful discriminating feature against background cosmic ray events.  The number of signal events expected is shown in the following contour plot:
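To get a feel for why the decay products point back at the center of the Earth, here is a quick back-of-the-envelope sketch. The TeV and 200 MeV benchmark numbers come from the discussion above; the specific values are illustrative, not the paper’s fit points:

```python
import math

# Benchmark values from the scenario discussed above: a ~1 TeV dark
# matter particle annihilating at rest into ~200 MeV dark photons.
m_dm = 1000.0   # dark matter mass in GeV (illustrative benchmark)
m_ap = 0.2      # dark photon mass in GeV (illustrative benchmark)

# Each dark photon carries away energy ~ m_dm, so its Lorentz boost is:
gamma = m_dm / m_ap
print(f"boost factor gamma ~ {gamma:.0f}")

# The decay products are collimated within an opening angle ~ 1/gamma,
# so they inherit the dark photon's direction almost exactly:
theta = 1.0 / gamma  # radians
print(f"opening angle ~ {math.degrees(theta):.3f} degrees")
```

With a boost of several thousand, the electron or muon pair tracks the dark photon direction to a hundredth of a degree, which is why pointing back to the Earth’s core is such a clean handle against cosmic ray backgrounds.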

Number of signal dark photon decays expected at the IceCube detector, in the plane of dark photon mixing versus dark photon mass. Image from arXiv:1509.07525. The blue region is in tension with direct detection bounds (from arXiv:1507.04007), while the gray regions are in tension with beam dump and supernova bounds; see e.g. arXiv:1311.029.

While similar analyses for dark photon-mediated dark matter capture by celestial bodies and annihilation have been studied—see e.g. Pospelov et al., Delaunay et al., Schuster et al., and Meade et al.—the authors of 1509.07525 focus on the case of dark matter capture in the Earth (rather than, say, the sun) and subsequent annihilation to dark photons (rather than neutrinos). Key features of this scenario:

  1. The annihilation rate at the center of the Earth is greatly increased due to Sommerfeld enhancement: because the captured dark matter has very low velocity, it is much more likely to annihilate with other captured dark matter particles due to mutual attraction from dark photon exchange.
  2. This causes the Earth’s dark matter population to saturate quickly, reaching an equilibrium in which capture and annihilation occur at equal rates. The resulting annihilation rate is larger than one would naively expect if the Earth had not yet saturated.
  3. In addition to using directional information to identify signal events against cosmic ray backgrounds, the authors identified kinematic quantities—the opening angle of the Standard Model decay products and the time delay between them—as ways to further discriminate signal from background. Unfortunately, their analysis implies that these features lie just outside of IceCube’s sensitivity.
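The saturation argument in point 2 can be made concrete with the standard capture/annihilation evolution equation. The rates below are round placeholder numbers chosen purely to illustrate the behavior, not values from the paper:

```python
import math

# Standard capture/annihilation evolution for the trapped population N:
#   dN/dt = C - C_A * N^2
# where C is the capture rate and C_A parametrizes annihilation.
# The solution is N(t) = N_eq * tanh(t / tau), with:
C = 1.0e20     # captures per unit time (illustrative placeholder)
C_A = 1.0e-33  # annihilation coefficient (illustrative placeholder)

N_eq = math.sqrt(C / C_A)       # equilibrium population
tau = 1.0 / math.sqrt(C * C_A)  # equilibration timescale

def annihilation_rate(t):
    """Gamma_ann = (1/2) * C_A * N(t)^2; saturates at C/2 for t >> tau."""
    N = N_eq * math.tanh(t / tau)
    return 0.5 * C_A * N**2

# Once t >> tau, every capture is balanced by an annihilation:
print(annihilation_rate(10 * tau) / (C / 2))  # -> essentially 1.0
```

Sommerfeld enhancement increases C_A, which shortens tau; this is why the Earth can plausibly reach saturation within its lifetime in this scenario, maximizing the annihilation signal.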

Finally, the authors point out the possibility of large enhancements coming from the so-called dark disk, an enhancement in the low velocity phase space density of dark matter. If that is the case, then the estimated reach may increase by an order of magnitude.

 


More Dark Matter in Dwarf Galaxies

Hi Particlebiters,

This is part 2 in my “series” on dark matter in dwarf galaxies. In my previous post, I explained a bit about WIMP-like dark matter and why we look for its signature in a particular type of small galaxy (dsphs) orbiting the Milky Way. About 2 months ago, the Dark Energy Survey (DES) collaboration released its first year of data to the public. Similar to SDSS, DES is surveying the optical-to-near-infrared sky, using the 4 m Victor M. Blanco Telescope at Cerro Tololo Inter-American Observatory in Chile. The important distinction between SDSS and DES is that while SDSS observes northern galactic latitudes, DES observes southern galactic latitudes. Since this is a whole new region of observation, we expected a lot of exciting new things to come out of the data… and sure enough, exciting things came. Eight new dsph candidates orbiting the Milky Way were discovered and published in the first data release. Dwarf galaxies are very old (> 13 billion years) and have little gas, dust, and star formation. I say candidates because, to confirm that they are not something else, follow-up observations with other telescopes have to be done.

[Image: Fermi-LAT counts map of Reticulum II]

However, that doesn’t mean that we (we being everyone since the Fermi data is public) can’t have a look at these potential dark matter targets. On the same day that the new DES candidate dsphs were released, the Fermi-LAT team had a look. Of the eight candidates, most were far away (~100 kpc or ~300k light years). This distance makes looking for dark matter difficult because a signal will be very weak. However, there was one candidate that was only 32 kpc away (DES J0335.6-5403 or Reticulum II), making it the most interesting search target. You can see the counts map of Reticulum II on the right.
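The advantage of a nearby target is easy to quantify: for a fixed intrinsic dark matter luminosity, the expected gamma-ray flux falls off as 1/d². A quick sketch using the distances quoted above:

```python
# Expected annihilation flux scales as 1/d^2 for a fixed intrinsic
# luminosity. Distances are the rough values quoted in the text.
d_ret2 = 32.0   # kpc, Reticulum II
d_far = 100.0   # kpc, typical of the other DES candidates

advantage = (d_far / d_ret2) ** 2
print(f"Reticulum II appears ~{advantage:.0f}x brighter than an "
      "identical dwarf at 100 kpc")
```

An order of magnitude in expected flux, just from proximity, is what made Reticulum II the most interesting of the eight candidates.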

 

[Image: Fermi-LAT dark matter search results for the DES dsph candidates]

 

The results (on the left) showed that there was no clear WIMP-like dark matter signature coming from any of the candidates (shucks!!). However, the closest target wasn’t totally boring. Another team found a small excess (~2 sigma) in Reticulum II. When the Fermi-LAT team compared analysis methods, we found that their results were optimistic, yet not inconsistent with ours. This got New York Times writer Dennis Overbye excited :).

 

 

The good news is that DES is going to continue for at least 4 more years, which means we’ll have many more opportunities to search for dark matter in dsphs. What we need to find are nearby dsphs. And even more exciting, the Large Synoptic Survey Telescope (LSST) will start taking data in the 2020s. This telescope will have access to ~half of the sky (more on the LSST in a future post ;)). This will give us many more targets in the years to come, so stay tuned!

Regina

Dark Matter in Dwarf Galaxies

Hi Particle Biters,

Last week there were some exciting new results from the Fermi-LAT collaboration in dark matter searches! Dark matter is an exciting topic – we believe that 85% of the matter in the universe is stuff that we can’t see. “Dark matter” itself is a very broad idea. I’ll need to start by making some assumptions on the kind of dark matter we’re looking for. We assume a dark matter candidate exists as a particle that is both massive and interacts on the scale of the “weak” force (specifically, a Weakly Interacting Massive Particle – WIMP). There are lots of reasons that motivate this type of candidate (cosmic microwave background, baryon acoustic oscillations, large scale structure, gravitational lensing, and galactic rotation curves to name a few), and I can elaborate more in another post if readers are interested. For now, just trust me when I say WIMPs are a well motivated dark matter candidate.

Based on things like galactic rotation curves we can guess the distribution of dark matter in our galaxy – and the highest concentration is in the very center. The problem with looking for dark matter in the center of the galaxy is that there are lots of backgrounds. First, there are ~8 kiloparsecs worth of spiral arms in between us and the center. That’s a lot of stuff to try to understand. Plus there is a supermassive black hole – Sagittarius A* – and an unknown number of pulsars (for example) also in the region of the galactic center. Other good, less chaotic places to search for WIMPs are small, old satellite galaxies orbiting the Milky Way. One class of these galaxies is called dwarf spheroidal galaxies (dsphs) (not to be confused with Disney-type dwarfs). They don’t have enough visible matter to explain how they stay gravitationally bound (even though they are!), so we know that there must be a high concentration of dark matter in these systems. Below is a picture of the location of some known dsphs. In total, about 25 are known, which Fermi-LAT will analyze. (You can see Flip’s post for more info too!)

[Image: map of the locations of known dwarf spheroidal galaxies around the Milky Way]

Many of these dsphs were discovered by the Sloan Digital Sky Survey (SDSS). SDSS is a 2.5 m optical telescope located at Apache Point Observatory in New Mexico. It has been surveying the northern sky since 2000. The Fermi-LAT collaboration then points at the location of these dsphs and looks for gamma-rays (high energy photons) which would indicate dark matter annihilation. Since these are old systems, there shouldn’t be any gamma-ray emission from these targets that isn’t from dark matter.

[Image: Fermi-LAT Pass 8 dark matter limits from dwarf spheroidal galaxies]
The Fermi-LAT collaboration submitted its newest search for dark matter in these known dsphs in a paper on the arXiv on March 9th. They used 6 years of data and the newest, best event analysis and reconstruction (called Pass 8). The results are on the left: the y-axis shows the thermally averaged annihilation cross section times velocity (⟨σv⟩) that we are sensitive to, and the x-axis shows different dark matter masses. The blue line shows the previous results obtained by Fermi-LAT. The dashed black line shows what we would expect to see given a specific model of dark matter; in this case we assume that the dark matter annihilates into b quarks and anti-b quarks, which then produce gamma rays. The solid black line shows the limit from what we actually observed. (Unfortunately no discovery!! That would look like a sharp divergence from the expectation.)
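For readers who want the quantitative connection between ⟨σv⟩ and what Fermi-LAT actually measures: the expected gamma-ray flux from annihilation in a dwarf factorizes into a particle-physics piece and an astrophysical “J-factor”. A minimal sketch, where the mass, photon yield, and J-factor are round illustrative numbers (not values from the paper):

```python
import math

# Gamma-ray flux from dark matter annihilation in a dwarf galaxy:
#   Phi = (<sigma v> / (8 * pi * m_chi^2)) * N_gamma * J
# N_gamma is the number of photons per annihilation above threshold
# (it depends on the channel, e.g. b bbar), and J integrates the
# squared dark matter density along the line of sight.
# All inputs below are illustrative round numbers.
sigma_v = 3e-26   # cm^3/s, the canonical "thermal relic" cross section
m_chi = 100.0     # GeV, dark matter mass
n_gamma = 10.0    # photons per annihilation (illustrative)
J = 1e19          # GeV^2 / cm^5, typical order for a nearby dwarf

phi = sigma_v / (8 * math.pi * m_chi**2) * n_gamma * J
print(f"expected flux ~ {phi:.1e} photons / cm^2 / s")
```

Fluxes of this order (~10⁻¹¹ photons/cm²/s) are why it takes years of LAT data on many dwarfs, stacked together, to reach the thermal relic cross section.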

Although we haven’t found an indication of dark matter annihilation, we are just becoming sensitive to the “thermal relic” cross section (the value expected for the amount of dark matter left over after the big bang, shown as the dashed gray line). So the next few years of these searches are going to be very exciting. I’ll also hint at a future post… there is currently another collaboration (the Dark Energy Survey, or DES) which, similarly to SDSS, will find more dsphs for us to use as targets, which will only improve our sensitivity. In the next post I’ll talk about those results.

I hope you’ve enjoyed this post! Please post any questions/comments.

Regina

First post, New results!

Hello Particlebites community!

This is my first post as a new contributor to Particlebites. Here’s a little bite about me: I’m an experimental particle physicist working at the University of Mainz in Germany as a postdoctoral researcher. I’ve worked on the ATLAS experiment at CERN for the past 6 years or so. (I’ll fill out my bio soon.) This has been a really exciting time to be a particle physicist! Only a year ago the Higgs was discovered at the LHC, and just a few weeks ago Francois Englert and Peter Higgs won the Nobel prize for their work predicting its existence almost 50 years ago. These exciting results further strengthen the Standard Model.

However, my post today is regarding brand new results hot off the press from the Large Underground Xenon (LUX) experiment. The purpose of LUX is to search for WIMP-like dark matter. After 85 days of running, they achieved the world’s highest sensitivity. Unfortunately they didn’t find any excess signal, which means that the earlier CDMS-Si potential signal is now also ruled out. But the potential for discovery is really exciting. The next run will be over 300 days and should further increase the sensitivity by ~5x, depending on the cross section. Dark matter is out there… it’s just really good at hiding.

Everyone should check out their paper: http://luxdarkmatter.org/papers/LUX_First_Results_2013.pdf

And attached is a picture of their exclusion plot (unfortunately they didn’t include a legend on this version – it’s described in the caption in the paper and I’ve included it here).

The LUX 90% confidence limit on the spin-independent elastic WIMP-nucleon cross section (blue), together with the ±1σ variation from repeated trials, where trials fluctuating below the expected number of events for zero background are forced to 2.3 (blue shaded). Also shown are Edelweiss II (dark yellow line), CDMS II (green line), ZEPLIN-III (magenta line), and XENON100 100 live-day (orange line) and 225 live-day (red line) results. The inset (same axis units) also shows the regions measured from annual modulation in CoGeNT (light red, shaded), along with exclusion limits from a low-threshold re-analysis of CDMS II data (upper green line), the 95% allowed region from CDMS II silicon detectors (green shaded) with centroid (green x), the 90% allowed region from CRESST II (yellow shaded), and the DAMA/LIBRA allowed region (grey shaded).

Happy reading! And I hope you enjoy my future posts.