The title of this paper sounds like a standard astrophysics analysis, but dig a little deeper and you’ll find what I think is an incredibly interesting and unexpected observation.
Last year, the Dragonfly Telephoto Array identified a population of large, very low surface brightness (i.e., star-poor), spheroidal galaxies in the Coma cluster (a large cluster of galaxies in the constellation Coma – I’ve included a Hubble image to the left). Follow-up observations with the W. M. Keck Observatory and the Gemini North telescope on Maunakea, Hawaii targeted one of these Ultra Diffuse Galaxies (UDGs), Dragonfly 44 (shown below). The team determined that Dragonfly 44 has so few stars that their gravity alone could not hold it together – so some other matter had to be involved – namely DARK MATTER (my favorite kind of unknown matter).
The team used the DEIMOS instrument on Keck II to measure stellar velocities for 33.5 hours over a period of six nights in order to determine the galaxy’s mass. The measured spread of stellar velocities suggests that Dragonfly 44 has a mass of about one trillion solar masses, about the same as the Milky Way. However, the galaxy emits only 1% of the light emitted by the Milky Way; in other words, the Milky Way has more than a hundred times more stars than Dragonfly 44. I’ve also included the plot of mass-to-light ratio vs. dynamical mass, which illustrates how unusual Dragonfly 44 is compared to other dark-matter-dominated galaxies like dwarf spheroidals.
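As a quick sanity check, the mass-to-light comparison follows directly from the two ratios quoted above (this uses only the rough numbers in the text, not the paper’s fitted values):

```python
# Dragonfly 44 vs. the Milky Way, using the rough numbers quoted above:
# comparable dynamical mass, but only ~1% of the light.
mass_ratio = 1.0     # M(DF44) / M(Milky Way), roughly equal
light_ratio = 0.01   # L(DF44) / L(Milky Way), ~1%

# Mass-to-light ratio of Dragonfly 44 relative to the Milky Way:
ml_relative = mass_ratio / light_ratio
print(ml_relative)  # 100.0 -> DF44's mass-to-light ratio is ~100x higher
```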
What is particularly exciting is that we don’t understand how galaxies like this form.
Their research indicates that these UDGs could be failed galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. But we’ll need to discover more to fully understand them.
Further reading (works by the same authors)
Forty-Seven Milky Way-Sized, Extremely Diffuse Galaxies in the Coma Cluster, arXiv:1410.8141
Spectroscopic Confirmation of the Existence of Large, Diffuse Galaxies in the Coma Cluster, arXiv:1504.03320
Title: Estimating the GeV Emission of Millisecond Pulsars in Dwarf Spheroidal Galaxies
Publication: arXiv:1607.06390, submitted to ApJL
Howdy, ParticleBites enthusiasts! I’m blogging this week from ICHEP. Over the next week there will be a lot of exciting updates from the particle physics community… What happened to that 750 GeV bump? Are there any new bumps for us to be excited about? Have we broken the Standard Model yet? All of these will come later in the week – today is registration. In the meantime, there have been a lot of interesting papers circulating about disentangling dark matter from our favorite astrophysical background – pulsars.
The paper that is the focus of this post delves deeper into understanding the potential gamma-ray emission from pulsars in dwarf spheroidal galaxies (dsphs). The density of millisecond pulsars (MSPs) is related to the density of stars in a cluster. In low-density stellar environments, such as dsphs, the abundance of MSPs is expected to be proportional to stellar mass (it’s much higher for globular clusters and the Galactic center). Remember, the advantage of dsphs for dark matter searches, compared with, for example, the Galactic center, is that they host many fewer detectable gamma-ray-emitting sources – like MSPs (see arXiv:1503.02641 for a recent Fermi-LAT paper). However, as our instruments get more and more sensitive, the probability of detecting gamma rays from astrophysical sources in dsphs goes up as well.
This work estimates the distribution of gamma-ray luminosities (known as the luminosity function) for MSPs in dsphs. The authors assume that the number of MSPs is proportional to the stellar mass and that their spectra are similar to those of the 90 known MSPs in the Galactic disk (see the figure on the right), which they fit with a broken power law. This result can then be scaled by the number of predicted MSPs in each dsph and the distance to that dsph, yielding a prediction of the gamma-ray spectrum we would expect from MSPs in an individual dsph.
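The scaling step amounts to a few lines of arithmetic. The inputs below are illustrative placeholders, not the paper’s fitted values; the point is just the N × ⟨L⟩ / 4πd² scaling:

```python
import math

KPC_TO_CM = 3.086e21  # kiloparsecs to centimeters

def msp_flux(n_msp, mean_photon_lum, distance_kpc):
    """Expected gamma-ray photon flux (ph/cm^2/s) from n_msp millisecond
    pulsars with average photon luminosity mean_photon_lum (ph/s),
    at a distance of distance_kpc."""
    d_cm = distance_kpc * KPC_TO_CM
    return n_msp * mean_photon_lum / (4.0 * math.pi * d_cm**2)

# Illustrative only: a hypothetical dsph with 10 MSPs at ~140 kpc.
flux = msp_flux(n_msp=10, mean_photon_lum=1e34, distance_kpc=140)
```

Doubling the distance cuts the expected flux by a factor of four, which is why the nearest dsphs dominate both signal and background expectations.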
They found that the highest stellar mass dsphs (the classical ones, such as Fornax and Draco) should host a modest MSP population. However, even for the largest classical dsph, Fornax, the predicted MSP flux above 500 MeV is ~10⁻¹² ph cm⁻² s⁻¹, about an order of magnitude below the typical flux upper limits obtained at high Galactic latitudes after six years of the LAT survey, ~10⁻¹⁰ ph cm⁻² s⁻¹ (see arXiv:1503.02641 again). The predicted flux and sensitivity are shown below.
So all in all, this is good news for dsphs as dark matter targets. Understanding the backgrounds is imperative for having confidence in an analysis if a signal is found, and this gives us more confidence that we understand one of the dominant backgrounds in the hunt for dark matter.
Title: 3FGL Demographics Outside the Galactic Plane using Supervised Machine Learning: Pulsar and Dark Matter Subhalo Interpretations
Publication: arXiv:1605.00711, accepted by ApJ
The universe has a way of keeping scientists guessing. For over 70 years, scientists have been trying to understand the particle nature of dark matter. We’ve buried detectors deep underground to shield them from backgrounds, smashed particles together at inconceivably high energies, and pointed dedicated instruments at regions where dark matter is measured to be a dominant component. Like any good mystery, this has yielded more questions than answers.
There are a lot of ideas as to what the distribution of dark matter looks like in the universe. One example comes from a paper by L. Pieri et al. (PRD 83, 023518 (2011), arXiv:0908.0195). They simulated what the gamma-ray sky would look like from dark matter annihilation into b-quarks. The results of their simulation are shown below. The plot is an Aitoff projection in galactic coordinates (meaning that the center of the galaxy is at the center of the map).
The obvious bright spot is the galactic center. This is because the center of the Milky Way has the highest density of dark matter nearby (F. Iocco, Pato, Bertone, Nature Physics 11, 245–248 (2015)). Just for some context, the center of the Milky Way is ~8.5 kiloparsecs, or about 27,700 light years, away from us… so it’s a big neighborhood. However, the center of the galaxy is particularly hard to observe because the Galaxy itself obstructs our view. As it turns out, there are lots of stars, gas, and dust in our Galaxy 🙂
This map also shows us that there are other regions of high dark matter density away from the Galactic center. These could be dark matter halos, dwarf spheroidal galaxies, galaxy clusters, or anything else with a high density of dark matter. The paper I’m discussing uses this simulation in combination with the Fermi-LAT 3rd source catalog (3FGL) (Fermi-LAT Collaboration, Astrophys. J. Suppl 218 (2015) arXiv:1501.02003).
Over 1/3 of the sources in the 3FGL are unassociated with a known astrophysical source (meaning we don’t know what kind of object is producing the gamma rays). The paper analyzes these sources to see if their gamma-ray flux is consistent with dark matter annihilation or if it’s more consistent with the spectral shape of pulsars: rapidly rotating neutron stars with strong magnetic fields that emit radio waves (and gamma rays) in very regular pulses. These are a fascinating class of astrophysical objects, and I’d highly recommend reading up on them (see NASA’s site). The challenge is that the gamma-ray flux from dark matter annihilation into b-quarks is surprisingly similar to that from pulsars (see below).
Using two different machine learning techniques, they found 34 candidate sources away from the Galactic plane whose spectra are consistent with both dark matter annihilation and pulsars. Generally, if a source can be explained by something other than dark matter, that’s the accepted interpretation, so the currently favored astrophysical interpretation for these objects is pulsars. Yet these objects could also be interpreted as dark matter annihilation taking place in ultra-faint dwarf galaxies or dark matter subhalos. Unfortunately, Fermi-LAT spectra are not sufficient to break the degeneracy between the two scenarios. The distribution of the 34 dark matter subhalo candidates found in this work is shown below.
The paper presents scenarios consistent with both the pulsar interpretation and the dark matter interpretation. If the sources are pulsars, the 34 candidates are in excellent agreement with predictions from a population model that predicts many more pulsars than are currently known. If instead they are dark matter substructures, the authors place upper limits on the number of Galactic subhalos surviving today and on the dark matter annihilation cross section. The cross section limit is shown below.
The only thing we can do (beyond waiting for more Fermi-LAT data) is try to identify these sources definitively as pulsars, which requires extensive follow-up observations using other telescopes (in particular, radio telescopes to look for pulses). So stay tuned!
The center of the galaxy is brighter than astrophysicists expected. Could this be the result of the self-annihilation of dark matter? Chris Karwin, a graduate student from the University of California, Irvine presents the Fermi collaboration’s analysis.
Editor’s note: this is a guest post by one of the students involved in the published result.
Presenting: Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Authors: The Fermi-LAT Collaboration (a ParticleBites blogger is a co-author)
Reference: arXiv:1511.02938, Astrophys. J. 819 (2016) no. 1, 44
Like other telescopes, the Fermi Gamma-Ray Space Telescope is a satellite that scans the sky collecting light. Unlike many telescopes, it searches for very high energy light: gamma rays. The satellite’s main component is the Large Area Telescope (LAT). When this detector is hit by a high-energy gamma ray, it measures the energy and the direction in the sky from which it originated. The data provided by the LAT is an all-sky photon counts map:
In 2009, researchers noticed that there appeared to be an excess of gamma-rays coming from the galactic center. This excess is found by making a model of the known astrophysical gamma-ray sources and then comparing it to the data.
What makes the excess so interesting is that its features seem consistent with predictions from models of dark matter annihilation. Dark matter theory and simulations predict:
The distribution of dark matter in space. The gamma rays coming from dark matter annihilation should follow this distribution, or spatial morphology.
The particles to which dark matter directly annihilates. This gives a prediction for the expected energy spectrum of the gamma-rays.
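The spatial morphology prediction above is often modeled with a cusped density profile such as Navarro-Frenk-White (NFW), with the annihilation emissivity scaling as density squared. A toy sketch (the profile parameters here are illustrative, not values from the paper):

```python
def nfw_density(r_kpc, rho_s=0.4, r_s=20.0):
    """Toy NFW dark matter density profile: rho(r) = rho_s / [x (1+x)^2],
    with x = r/r_s. Illustrative parameters: rho_s in GeV/cm^3, r_s in kpc."""
    x = r_kpc / r_s
    return rho_s / (x * (1.0 + x) ** 2)

# Annihilation emissivity scales as density squared, so the expected
# signal is strongly peaked toward the galactic center:
inner = nfw_density(1.0) ** 2   # 1 kpc from the center
local = nfw_density(8.5) ** 2   # roughly our distance from the center
print(inner / local)  # the inner region is orders of magnitude brighter
```

This density-squared weighting is why the simulated map below is dominated by the galactic center, even before considering any astrophysical foregrounds.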
Although a dark matter interpretation of the excess is a very exciting scenario that would tell us new things about particle physics, there are also other possible astrophysical explanations. For example, many physicists argue that the excess may be due to an unresolved population of millisecond pulsars. Another possible explanation is that it is simply due to mis-modeling of the background. Regardless of the physical interpretation, the primary objective of the Fermi analysis is to characterize the excess.
The main systematic uncertainty of the experiment is our limited understanding of the backgrounds: the gamma rays produced by known astrophysical sources. In order to include this uncertainty in the analysis, four different background models are constructed. Although these models are methodically chosen so as to account for our lack of understanding, it should be noted that they do not necessarily span the entire range of possible error. For each of the background models, a gamma-ray excess is found. With the objective of characterizing the excess, additional components are then added to the model. Among the different components tested, it is found that the fit is most improved when dark matter is added. This is an indication that the signal may be coming from dark matter annihilation.
This analysis is interested in the gamma rays coming from the galactic center. However, when looking towards the galactic center the telescope detects all of the gamma-rays coming from both the foreground and the background. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources.
An overview of the analysis chain is as follows. The model of the observed region comes from performing a likelihood fit of the parameters for the known astrophysical sources. A likelihood fit is a statistical procedure that calculates the probability of observing the data given a set of parameters, and then finds the parameter values that maximize that probability. In general there are two types of sources:
Point sources such as known pulsars
Diffuse sources due to the interaction of cosmic rays with the interstellar gas and radiation field
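A minimal sketch of what such a likelihood fit does, here with a single normalization parameter and Poisson-distributed counts (a toy version; the real analysis fits many point-source and diffuse parameters simultaneously):

```python
import math

def neg_log_likelihood(norm, model_counts, observed_counts):
    """Poisson negative log-likelihood (up to a constant) for a model
    scaled by a single normalization parameter."""
    nll = 0.0
    for mu, n in zip(model_counts, observed_counts):
        mu_scaled = norm * mu
        nll += mu_scaled - n * math.log(mu_scaled)
    return nll

# Toy data: predicted counts per sky pixel, and observed counts.
model = [10.0, 20.0, 15.0]
data = [12, 25, 16]

# Scan the normalization on a grid and keep the best-fit value.
best_nll, best_norm = min(
    (neg_log_likelihood(a / 100.0, model, data), a / 100.0)
    for a in range(50, 200)
)
print(best_norm)  # close to sum(data)/sum(model) = 53/45 ~ 1.18
```

For a single overall normalization the maximum-likelihood value is just the ratio of total observed to total predicted counts, which the grid scan recovers.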
Parameters for these two types of sources are fit at the same time. One of the main uncertainties in the background is the cosmic ray source distribution. This is the number of cosmic ray sources as a function of distance from the center of the galaxy. It is believed that cosmic rays come from supernovae. However, the source distribution of supernova remnants is not well determined. Therefore, other tracers must be used. In this context a tracer refers to a measurement that can be made to infer the distribution of supernova remnants. This analysis uses both the distribution of OB stars and the distribution of pulsars as tracers. The former refers to OB associations, which are regions of O-type and B-type stars. These hot massive stars are progenitors of supernovae. In contrast to these progenitors, the distribution of pulsars is also used since pulsars are the end state of supernovae. These two extremes serve to encompass the uncertainty in the cosmic ray source distribution, although, as mentioned earlier, this uncertainty is by no means bracketing. Two of the four background model variants come from these distributions.
The information pertaining to the cosmic rays, gas, and radiation fields is input into a propagation code called GALPROP. This produces an all-sky gamma-ray intensity map for each of the physical processes that produce gamma rays:
Production of neutral pions in interactions of cosmic-ray protons with the interstellar gas; the pions quickly decay into gamma rays
Cosmic-ray electrons up-scattering low-energy photons of the radiation field via inverse Compton scattering
Cosmic-ray electrons interacting with the gas, producing gamma rays via bremsstrahlung radiation
The maps of all the processes are then tuned to the data. In general, tuning is a procedure by which the background models are optimized for the particular data set being used. This is done using a likelihood analysis. There are two different tuning procedures used for this analysis. One tunes the normalization of the maps, and the other tunes both the normalization and the extra degrees of freedom related to the gas emission interior to the solar circle. These two tuning procedures, performed for the two cosmic ray source models, make up the four different background models.
Point source models are then determined for each background model, and the spectral parameters for both diffuse sources and point sources are simultaneously fit using a likelihood analysis.
Results and Conclusion
In the plot of the best fit dark matter spectra for the four background models, the hatching of each curve corresponds to the statistical uncertainty of the fit. The systematic uncertainty can be interpreted as the region enclosed by the four curves. Results from other analyses of the galactic center are overlaid on the plot. This result shows that the galactic center analysis performed by the Fermi collaboration allows a broad range of possible dark matter spectra.
The Fermi analysis has shown that, within systematic uncertainties, a gamma-ray excess coming from the galactic center is detected. In order to try to explain this excess, additional components were added to the model. Among the additional components tested, it was found that the fit is most improved with the addition of a dark matter component. However, this does not establish that a dark matter signal has been detected. There is still a good chance that the excess could be due to something else, such as an unresolved population of millisecond pulsars or mis-modeling of the background. Further work must be done to better understand the background and better characterize the excess. Nevertheless, it remains an exciting prospect that the gamma-ray excess could be a signal of dark matter.
Background reading on dark matter and indirect detection:
Theoretical physicists have designed a new way in which dark matter interactions can explain the observed amount of dark matter in the universe today. This elastically decoupling dark matter framework is a hybrid of conventional and novel dark matter models.
The particle identity of dark matter is one of the biggest open questions in physics. The simplest and most widely assumed explanation is that dark matter is a weakly-interacting massive particle (WIMP). Assuming that dark matter starts out in thermal equilibrium in the hot plasma of the early universe, the present cosmic abundance of WIMPs is set by the balance of two effects:
When two WIMPs find each other, they can annihilate into ordinary matter. This depletes the number of WIMPs in the universe.
The universe is expanding, making it harder for WIMPs to find each other.
This process of “thermal freeze out” leads to an abundance of WIMPs controlled by the dark matter mass and interaction strength. The term ‘weakly-interacting massive particle’ comes from the observation that dark matter with roughly the mass of the weak force bosons, interacting through the weak nuclear force, gives the experimentally measured dark matter density today.
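This “WIMP miracle” can be seen in a one-line estimate: the standard back-of-the-envelope result is that the relic density scales inversely with the annihilation cross section, Ωh² ≈ (3×10⁻²⁷ cm³ s⁻¹) / ⟨σv⟩:

```python
# Back-of-the-envelope thermal relic abundance estimate:
# Omega h^2 ~ (3e-27 cm^3/s) / <sigma v>.
WEAK_SCALE_SIGMA_V = 3e-26  # cm^3/s, a typical weak-scale annihilation rate

omega_h2 = 3e-27 / WEAK_SCALE_SIGMA_V
print(omega_h2)  # 0.1 -- remarkably close to the measured value of ~0.12
```

Plugging in a weak-scale cross section lands right on the observed dark matter density, which is the main reason WIMPs have been the default assumption for decades.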
More recently, physicists noticed that dark matter with very large interactions with itself (but not with ordinary matter) can produce the correct dark matter density in another way. These “strongly interacting massive particle” models regulate the amount of dark matter through 3-to-2 interactions that reduce the total number of dark matter particles, rather than through annihilation into ordinary matter.
The authors of 1512.04545 have proposed an intermediate road that interpolates between these two types of dark matter: the “elastically decoupling dark matter” (ELDER) scenario. ELDERs have both of the above interactions: they can annihilate pairwise into ordinary matter, or sets of three ELDERs can turn into two ELDERs.
The cosmic history of these ELDERs is as follows:
ELDERs are produced in the thermal bath immediately after the big bang.
Pairs of ELDERs annihilate into ordinary matter. Like WIMPs, they interact weakly with ordinary matter.
As the universe expands, the rate for annihilation into Standard Model particles falls below the rate at which the universe expands.
Assuming that the ELDERs interact strongly amongst themselves, the 3-to-2 number-changing process still occurs. Because this process distributes the energy of 3 ELDERs in the initial state to 2 ELDERs in the final state, the two outgoing ELDERs have more kinetic energy: they’re hotter. This turns out to largely counteract the effect of the expansion of the universe.
The neat effect here is that the abundance of ELDERs is actually set by the interaction with ordinary matter, like WIMPs. However, because of this 3-to-2 heating period, they are able to produce the observed present-day dark matter density for very different choices of interaction strengths. In this sense, the authors show that this framework opens up a new “phase diagram” in the space of dark matter theories:
Before we dig into all the physics behind these acronyms (beyond SM physics! dark matter!), let’s start by breaking down the title.
QCD, or quantum chromodynamics, is the study of how quarks and gluons interact. CP is the combined operation of charge conjugation and parity; it swaps a particle for its antiparticle, then switches left and right. CP symmetry states that applying both operations should leave the laws of physics invariant, which is true for electromagnetism. Interestingly, it is violated by the weak force (this feeds into the problem of matter-antimatter asymmetry). But more importantly, the strong force appears to maintain CP symmetry. In fact, that’s exactly the problem.
CP violation in QCD would give an electric dipole moment to the neutron. Experimentally, physicists have constrained this value pretty tightly around zero. But the QCD Lagrangian has a more complicated vacuum than first thought, giving it a term (Figure 2) with a phase parameter that would break CP. Basically, our issue is that the theory allows some degree of CP violation, but experimentally we just don’t see it. This is known as the strong CP problem.
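To get a feel for how severe the problem is: dimensional analysis gives a neutron electric dipole moment of roughly d_n ~ θ × 10⁻¹⁶ e·cm, while experiments bound d_n below roughly 10⁻²⁶ e·cm (order-of-magnitude values; exact coefficients vary between calculations):

```python
# Order-of-magnitude strong CP estimate (coefficients vary by calculation):
D_N_PER_THETA = 1e-16    # e*cm, rough neutron EDM induced per unit theta
D_N_UPPER_LIMIT = 1e-26  # e*cm, rough experimental upper bound

theta_bound = D_N_UPPER_LIMIT / D_N_PER_THETA
print(theta_bound)  # ~1e-10: theta must be tuned absurdly close to zero
```

A dimensionless parameter that could naturally be of order one, forced to be smaller than 10⁻¹⁰, is exactly the kind of fine-tuning that makes physicists reach for a dynamical explanation.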
Naturally, physicists want a fix for this problem, which brings us to the rest of the article title. The most recognized solution is Peccei-Quinn (PQ) theory. The idea is that the phase parameter is not a constant but a dynamical field, associated with a new symmetry added to the Standard Model. This symmetry, called U(1)_PQ, is spontaneously broken, meaning that all states of the theory share the symmetry except for the ground state.
This may sound a bit similar to the Higgs mechanism, because it is. In both cases, we get a non-zero vacuum expectation value and, from spontaneous symmetry breaking, a massless particle called a Goldstone boson. Here, though, the new boson is not exactly massless: very few things are exact in physics, and because the symmetry is only approximate, our Goldstone boson gains a tiny bit of mass after all. This new particle of PQ theory is called the axion. The axion effectively steps into the role of the phase parameter, allowing its value to relax to 0.
Is it reasonable to imagine some extra massive particle bouncing around that we haven’t detected yet? Sure. Perhaps the axion is so heavy that we haven’t yet probed the necessary energy range at the LHC. Or maybe it interacts so rarely that we’ve been looking in the right places and just haven’t had the statistics. But any undiscovered massive particle floating around should make you think about dark matter. In fact, the axion is one of the few remaining viable candidates for DM, and lots of people are looking pretty hard for it.
One of the largest collaborations is ADMX at the University of Washington, which uses an RF cavity in a superconducting magnet to detect the very rare conversion of a DM axion into a microwave photon. In order to be a good dark matter candidate, the axion would have to be fairly light; some theories place its mass below 1 eV (for reference, neutrino masses are below ~0.3 eV). ADMX has eliminated possible masses in the micro-eV range. However, theorists are clever, and there’s a lot of model tuning available that can pin the axion mass practically anywhere you want it to be.
Now is the time to factor in the recent buzz about a diphoton excess at 750 GeV (see the January 2016 ParticleBites post to catch up on this.) Recent papers are trying to place the axion at this mass, since that resonance is yet to be explained by Standard Model processes.
For example, one can consider aligned QCD axion models, in which there are multiple axions with decay constants around the weak scale, in line with the dark matter relic abundance. The models can get pretty diverse from here; suffice it to say that there are many possibilities. Though this excess is still far from confirmed, it is always exciting to speculate about what we don’t know and how we can figure it out. Because of strong CP and these recent model developments, the axion has earned a place pretty high up on this speculation list.
Now is a good time to be a dark matter experiment. The astrophysical evidence for its existence is almost undeniable (such as gravitational lensing and the cosmic microwave background; see the “Further Reading” list if you want to know more.) Physicists are pulling out all the stops trying to pin DM down by any means necessary.
However, by its very nature, it is extremely difficult to detect; dark matter is called dark because it has no known electromagnetic interactions, meaning it doesn’t couple to the photon. It does, however, have very noticeable gravitational effects, and some theories allow for the possibility of weak interactions as well.
While there are a wide variety of experiments searching for dark matter right now, the scope of this post will be a bit narrower, focusing on a common technique used to look for dark matter at the LHC, known as ‘monojet’ searches. We rely on the fact that an interaction between colliding quarks could produce dark matter particle candidates, known as weakly interacting massive particles (WIMPs), through some unknown process. Most likely, the dark matter would then pass through the detector without any interactions, kind of like neutrinos. But if it doesn’t have any interactions, how do we expect to actually see anything? Figure 1 shows the overall Feynman diagram of the interaction; I’ll explain how and why each of these particles comes into the picture.
The answer is a pretty useful metric that particle physicists use to account for things that don’t interact, known as ‘missing transverse energy’, or MET. When two protons are accelerated down the beam line, their initial momentum in the transverse plane is necessarily zero. Your final state can have all kinds of decay products in that plane, but by conservation of momentum, their magnitudes and directions have to add up to zero in the end. If you add up all the momentum in the transverse plane and get a non-zero value, you know the remaining momentum was carried away by non-interacting particles. In our case, dark matter is going to be the missing piece of the puzzle.
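The bookkeeping behind MET is simple enough to sketch directly: it is the magnitude of the negative vector sum of all visible transverse momenta (a simplified sketch; real reconstructions also handle detector effects, calibrations, and soft terms):

```python
import math

def missing_et(visible_objects):
    """MET: magnitude of the negative vector sum of transverse momenta.
    Each visible object is given as (pt_in_GeV, phi_in_radians)."""
    sum_px = sum(pt * math.cos(phi) for pt, phi in visible_objects)
    sum_py = sum(pt * math.sin(phi) for pt, phi in visible_objects)
    return math.hypot(sum_px, sum_py)  # |-(sum)| == |sum|

# A balanced back-to-back dijet event has essentially no MET...
print(round(missing_et([(100.0, 0.0), (100.0, math.pi)]), 6))  # 0.0
# ...while a monojet-like event, with one hard jet recoiling against an
# invisible system, shows MET equal to the jet pt:
print(round(missing_et([(120.0, 0.3)]), 6))  # 120.0
```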
Now our search method is to collide protons and look for… well, nothing. That’s not an easy thing to do. So let’s add another particle to our final state: a single jet that was radiated off one of the initial protons. This is a pretty common occurrence in LHC collisions, so we’re not ruining our statistics. But now we have an extra handle on selecting these events, since that radiated single jet is going to recoil off the missing energy in the final state.
An actual event display from the ATLAS detector is shown in Figure 2 (where the single jet is shown in yellow in the transverse plane of the detector).
No results have been released yet from the monojet groups with the 13 and 14 TeV data. However, the same method was used on the 2012-2013 LHC data, and has provided some results that can be compared to current knowledge. Figure 3 shows the WIMP-nucleon cross section as a function of WIMP mass from CMS at the LHC (EPJC 75 (2015) 235), overlaid with exclusions from a variety of other experiments. Anything above/right of these curves is the excluded region.
From here we can see that the LHC can provide better sensitivity in low mass regions with spin-dependent couplings to DM. It’s worth giving the brief caveat that these comparisons are extremely model dependent and require a lot of effective field theory; notes on this are also given in the Further Reading list. The current results look pretty thorough, and a large region of the WIMP mass range seems to have been excluded. Interestingly, some searches observe slight excesses in regions that other experiments have ruled out; in this way, these ‘exclusions’ are not necessarily as cut and dried as they may seem. The dark matter mystery is still far from a resolution, but the LHC may be able to get us a little bit closer.
With all this incoming data and such a wide variety of searches ongoing, it’s likely that dark matter will remain a hot topic in physics for decades to come, with or without a discovery. In the words of dark matter pioneer Vera Rubin, “We have peered into a new world, and have seen that it is more mysterious and more complex than we had imagined. Still more mysteries of the universe remain hidden. Their discovery awaits the adventurous scientists of the future. I like it this way.”
Presenting: Dark Photons from the Center of the Earth
Authors: J. Feng, J. Smolinsky, P. Tanedo (disclosure: this blog post is by an author on the paper)
Reference: arXiv:1509.07525
Dark matter may be collecting in the center of the Earth. A recent paper explores ways to detect its decay products here on the surface.
Our entire galaxy is gravitationally held together by a halo of dark matter, whose particle properties remain one of the biggest open questions in high energy physics. One class of theories assumes that the dark matter particles interact through a dark photon, a hypothetical particle which mediates a force analogous to how the ordinary photon mediates electromagnetism.
These theories also permit the ordinary and dark photons to have a small quantum mechanical mixing. This effectively means that the dark photon can interact very weakly with ordinary matter and mediate interactions between ordinary matter and dark matter—this gives a handle for ways to detect dark matter.
While most methods for detecting dark matter focus on building detectors that are sensitive to the “wind” of dark matter bombarding (and mostly passing through) the Earth as the solar system zooms through the galaxy, the authors of 1509.07525 follow up on an idea initially proposed in the mid-1980s: dark matter hitting the Earth might get stuck in the Earth’s gravitational potential and build up in its core.
These dark matter particles can then find each other and annihilate. If they annihilate into very weakly interacting particles, then these may be detected at the surface of the Earth. A typical example is dark matter annihilation into neutrinos. In 1509.07525, the authors examine the case where the dark matter annihilates into dark photons, which can pass through the Earth as easily as a neutrino and decay into pairs of electrons or muons near the surface.
These decays can be detected in large neutrino detectors, such as the IceCube neutrino observatory (previously featured in ParticleBites). In the case where the dark matter is very heavy (e.g. TeV in mass) and the dark photons are very light (e.g. 200 MeV), these dark photons are very boosted and their decay products point back to the center of the Earth. This is a powerful discriminating feature against background cosmic ray events. The number of signal events expected is shown in the following contour plot:
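The collimation claim follows from simple kinematics, using the illustrative masses quoted above: each dark photon from annihilation of dark matter nearly at rest carries energy ≈ m_DM, giving a Lorentz boost γ ≈ m_DM/m_A′ and a typical decay opening angle ~ 1/γ:

```python
# Illustrative masses from the text: TeV dark matter, 200 MeV dark photon.
m_dm_gev = 1000.0        # dark matter mass
m_dark_photon_gev = 0.2  # dark photon mass

# Each dark photon from (nearly at rest) pair annihilation carries E ~ m_DM,
# so its Lorentz boost factor is:
gamma = m_dm_gev / m_dark_photon_gev
# and the typical opening angle of its decay products is ~ 1/gamma:
opening_angle_rad = 1.0 / gamma
print(gamma, opening_angle_rad)  # 5000.0 0.0002
```

A fraction-of-a-milliradian cone pointing back at the center of the Earth is a very distinctive signature compared with the nearly isotropic cosmic ray background.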
While similar analyses for dark photon-mediated dark matter capture by celestial bodies and annihilation have been studied—see e.g. Pospelov et al., Delaunay et al., Schuster et al., and Meade et al.—the authors of 1509.07525 focus on the case of dark matter capture in the Earth (rather than, say, the sun) and subsequent annihilation to dark photons (rather than neutrinos).
The annihilation rate at the center of the Earth is greatly increased due to Sommerfeld enhancement: because the captured dark matter has very low velocity, it is much more likely to annihilate with other captured dark matter particles due to mutual attraction from dark photon exchange.
This causes the Earth to quickly saturate with dark matter, reaching an equilibrium in which capture and annihilation occur at equal rates. This leads to larger annihilation rates than one would naively expect if the Earth had not yet saturated.
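The saturation condition can be written as a simple rate equation, dN/dt = C − A·N², with capture rate C and effective annihilation coefficient A (both illustrative values below): at equilibrium A·N² = C, so the annihilation rate is fixed entirely by the capture rate.

```python
import math

# Toy capture/annihilation balance: dN/dt = C - A * N^2.
C = 1e15   # illustrative capture rate (particles per second)
A = 1e-45  # illustrative effective annihilation coefficient (per second)

# Equilibrium population: dN/dt = 0  =>  N_eq = sqrt(C / A)
n_eq = math.sqrt(C / A)

# At saturation the annihilation rate equals the capture rate:
annihilation_rate = A * n_eq**2
print(round(annihilation_rate / C, 6))  # 1.0
```

This is why saturation matters: once equilibrium is reached, predicting the signal reduces to computing how fast the Earth captures dark matter.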
In addition to using directional information to discriminate signal events from cosmic ray backgrounds, the authors identified kinematic quantities – the opening angle of the Standard Model decay products and the time delay between them – as further handles to discriminate signal from background. Unfortunately, their analysis implies that these features lie just outside of IceCube’s sensitivity.
Finally, the authors point out the possibility of large enhancements coming from the so-called dark disk, an enhancement in the low velocity phase space density of dark matter. If that is the case, then the estimated reach may increase by an order of magnitude.
This is part 2 in my “series” on dark matter in dwarf galaxies. In my previous post, I explained a bit about WIMP-like dark matter and why we look for its signature in a particular type of small galaxy (dsphs) orbiting the Milky Way. About 2 months ago, the Dark Energy Survey (DES) collaboration released its first year of data to the public. Similar to SDSS, DES is surveying the optical-near infrared sky, using the 4 m Victor M. Blanco Telescope at Cerro Tololo Inter-American Observatory in Chile. The important distinction between SDSS and DES is that while SDSS observes northern galactic latitudes, DES observes southern galactic latitudes. Since this is a whole new region of observation, we expect a lot of exciting new things to come out of the data… And sure enough, exciting things came. Eight new dsph candidates orbiting the Milky Way were discovered and published in the first data release. Dwarf galaxies are very old (> 13 billion years) and have little gas, dust and star formation. I say candidates because, to confirm that these dsph candidates are not something else, follow-up observations with other telescopes have to be done.
However, that doesn’t mean that we (we being everyone since the Fermi data is public) can’t have a look at these potential dark matter targets. On the same day that the new DES candidate dsphs were released, the Fermi-LAT team had a look. Of the eight candidates, most were far away (~100 kpc or ~300k light years). This distance makes looking for dark matter difficult because a signal will be very weak. However, there was one candidate that was only 32 kpc away (DES J0335.6-5403 or Reticulum II), making it the most interesting search target. You can see the counts map of Reticulum II on the right.
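The reason distance matters so much is the inverse-square scaling of the expected flux. A quick comparison (assuming, for illustration, similar dark matter content in each galaxy) shows why Reticulum II stands out:

```python
def relative_signal(d_ref_kpc, d_kpc):
    """For similar dark matter content, the expected annihilation flux
    scales roughly as 1/d^2 (the J-factor falls with distance squared)."""
    return (d_ref_kpc / d_kpc) ** 2

# Reticulum II at ~32 kpc vs a typical new candidate at ~100 kpc
print(relative_signal(100.0, 32.0))  # ~10x more flux from the closer target
```

An order of magnitude more expected flux from the same amount of dark matter is the difference between a promising target and a hopeless one.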
The results (on the left) showed that there was no clear WIMP-like dark matter signature coming from any of the candidates (shucks!!). However, the closest target wasn’t totally boring. Another team found a small excess (~2 sigma) in Reticulum II. When the Fermi-LAT team compared analysis methods, we found that their results were optimistic, yet not inconsistent with ours. This got New York Times writer Dennis Overbye excited :).
The good news is that DES is going to continue for at least 4 more years, which means we’ll have many more opportunities to search for dark matter in dsphs. What we need to find are nearby dsphs. And even more exciting, the Large Synoptic Survey Telescope (LSST) will start taking data in the 2020s. This telescope will have access to ~half of the sky (more on the LSST in a future post ;)). This will give us many more targets in the years to come, so stay tuned!
Last week there were some exciting new results from the Fermi-LAT collaboration in dark matter searches! Dark matter is an exciting topic – we believe that 85% of the matter in the universe is stuff that we can’t see. “Dark matter” itself is a very broad idea. I’ll need to start by making some assumptions about the kind of dark matter we’re looking for. We assume a dark matter candidate exists as a particle that is both massive and interacts on the scale of the “weak” force (specifically a Weakly Interacting Massive Particle – WIMP). There are lots of reasons that motivate this type of candidate (the cosmic microwave background, baryon acoustic oscillations, large scale structure, gravitational lensing, and galactic rotation curves to name a few), and I can elaborate more in another post if readers are interested. For now, just trust me when I say WIMPs are a well-motivated dark matter candidate.
Based on things like galactic rotation curves we can guess the distribution of dark matter in our galaxy – and the highest concentration is in the very center. The problem with looking for dark matter in the center of the galaxy is that there are lots of backgrounds. First, there are ~8 kilo-parsecs worth of spiral arms between us and the center. That’s a lot of stuff to try to understand. Plus there is a supermassive black hole – Sagittarius A* – and an unknown number of pulsars (for example) also in the region of the galactic center. Other good, less chaotic, places to search for WIMPs are small, old satellite galaxies orbiting the Milky Way. One class of these galaxies is called dwarf spheroidal galaxies (dsphs) (not to be confused with Disney-type dwarfs). They don’t have enough visible matter to be gravitationally bound (even though they are!), so we know that there must be a high concentration of dark matter in these systems. Below is a picture of the locations of some known dsphs. In total, about 25 are known, which Fermi-LAT will analyze. (You can see Flip’s post for more info too!)
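The claim that dsphs must be dark matter dominated comes from comparing their dynamical mass (inferred from stellar velocity dispersions) with their starlight. Here is a rough sketch using a common dispersion-based mass estimator, with illustrative numbers (not measurements of any particular dsph):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

def dynamical_mass_msun(sigma_kms, r_half_pc):
    """Rough dynamical mass within the half-light radius,
    M ~ 3 * sigma_los^2 * r_1/2 / G (a common dispersion-based estimator)."""
    sigma = sigma_kms * 1e3   # km/s -> m/s
    r = r_half_pc * PC        # pc -> m
    return 3.0 * sigma**2 * r / G / M_SUN

# Illustrative dsph numbers: sigma ~ 3 km/s, r_1/2 ~ 100 pc
m_dyn = dynamical_mass_msun(3.0, 100.0)
print(f"{m_dyn:.1e} solar masses")  # ~6e5
```

With only thousands of solar luminosities of starlight, a mass of ~10^5–10^6 solar masses means a mass-to-light ratio of hundreds: far more mass than the visible stars can account for, hence dark matter.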
Many of these dsphs were discovered by the Sloan Digital Sky Survey (SDSS). SDSS is a 2.5 m optical telescope located at Apache Point Observatory in New Mexico. It has been surveying the northern sky since 2000. The Fermi-LAT collaboration then points at the location of these dsphs and looks for gamma-rays (high energy photons) which would indicate dark matter annihilation. Since these are old systems, there shouldn’t be any gamma-ray emission from these targets that isn’t from dark matter.
The Fermi-LAT collaboration submitted its newest search for these known dsphs in a paper posted to arXiv on March 9th. They used 6 years of data and the newest, best event analysis and reconstruction (called Pass 8). The results are on the left: the y-axis shows the thermally averaged annihilation cross section times velocity, ⟨σv⟩, that we are sensitive to, and the x-axis shows different dark matter masses. The blue line shows the previous results obtained by Fermi-LAT. The dashed black line shows what we would expect to see given a specific model of dark matter. In this case we assume that the dark matter annihilates into b quarks and anti-b quarks, which then produce gamma rays. The solid black line shows the limit from what we actually observed. (Unfortunately no discovery!! That would look like a sharp divergence from the expectation.)
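To connect the y-axis of that plot to what the telescope actually measures, here is the standard annihilation flux formula with illustrative inputs (the J-factor and photon yield below are assumed round numbers, not values from the paper):

```python
import math

def gamma_ray_flux(sigma_v_cm3s, m_gev, j_gev2cm5, n_gamma):
    """Expected annihilation photon flux (photons cm^-2 s^-1):
    Phi = <sigma v> / (8 pi m^2) * N_gamma * J
    (the 8 pi assumes self-conjugate dark matter)."""
    return sigma_v_cm3s / (8.0 * math.pi * m_gev**2) * n_gamma * j_gev2cm5

# Illustrative inputs: thermal-relic <sigma v>, a 100 GeV WIMP,
# a typical dsph J-factor ~1e19 GeV^2 cm^-5, ~10 photons per annihilation
phi = gamma_ray_flux(3e-26, 100.0, 1e19, 10.0)
print(f"{phi:.1e} photons cm^-2 s^-1")  # roughly 1e-11
```

Fluxes this tiny are why years of exposure and careful event reconstruction (like Pass 8) matter so much for these limits.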
Although we haven’t found an indication of dark matter annihilation, we are just becoming sensitive to the “thermal relic” cross section (the value that would produce the amount of dark matter expected after the big bang, shown as the dashed gray line). So the next few years of these searches are going to be very exciting. I’ll also hint at a future post… there is currently another collaboration (the Dark Energy Survey, or DES) which, similarly to SDSS, will find more dsphs for us to use as targets – which will only improve our sensitivity. In the next post I’ll talk about those results.
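For reference, the “thermal relic” target comes from a back-of-the-envelope freeze-out relation between the relic abundance and the annihilation cross section (order-of-magnitude only; the 3e-27 coefficient is the commonly quoted rough value):

```python
def thermal_sigma_v(omega_h2):
    """Back-of-the-envelope freeze-out relation:
    Omega h^2 ~ (3e-27 cm^3/s) / <sigma v>, solved for <sigma v>."""
    return 3e-27 / omega_h2

# Observed dark matter abundance: Omega h^2 ~ 0.12
print(f"{thermal_sigma_v(0.12):.1e} cm^3/s")  # ~2.5e-26, the 'thermal relic' value
```

That ~3 x 10^-26 cm^3/s benchmark is the dashed gray line in the plot, and crossing below it is what makes the coming years of dsph searches so decisive.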
I hope you’ve enjoyed this post! Please post any questions/comments.