If dark matter actually consists of a new kind of particle, then the most promising candidate is the axion. The axion is a consequence of the Peccei-Quinn mechanism, a plausible solution to the "strong CP problem": the puzzle of why the strong nuclear force conserves CP symmetry although nothing forces it to. It is a very light neutral boson, named by Frank Wilczek after a detergent brand (a move that clearly dates its introduction to the '70s).
Most experiments that try to directly detect dark matter have looked for WIMPs (weakly interacting massive particles). However, as those searches have not borne fruit, the focus started turning to axions, which make for good candidates given their properties and the fact that if they exist, then they exist in multitudes throughout the galaxies. Axions “speak” to the QCD part of the Standard Model, so they can appear in interaction vertices with hadronic loops. The end result is that axions passing through a magnetic field will convert to photons.
In practical terms, their detection boils down to having strong magnets, sensitive electronics and an electromagnetically very quiet place at one’s disposal. One can then sit back and wait for the hypothesized axions to pass through the detector as earth moves through the dark matter halo surrounding the Milky Way. Which is precisely why such experiments are known as “haloscopes.”
Now, the most veteran haloscope of all has published significant new results. Alas, it is still empty-handed, but we can look at why its update is important and how it was reached.
ADMX (Axion Dark Matter eXperiment) of the University of Washington has been around for a quarter-century. By listening for signals from axions, it progressively gnaws away at the space of allowed values for their mass and coupling to photons, focusing on an area of interest:
Unlike higher values, this area is not excluded by astrophysical considerations (e.g. stars cooling off through axion emission) and other types of experiments (such as looking for axions from the sun). In addition, the bands above the lines denoted “KSVZ” and “DFSZ” are special. They correspond to the predictions of two models with favorable theoretical properties. So, ADMX is dedicated to scanning this parameter space. And the new analysis added one more year of data-taking, making a significant dent in this ballpark.
As mentioned, the presence of axions would be inferred from a stream of photons in the detector. The excluded mass range was scanned by “tuning” the experiment to different frequencies, while at each frequency step longer observation times probed smaller values for the axion-photon coupling.
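The "tuning" maps directly onto the axion mass: an axion converting in the magnetic field produces a photon of frequency f = m_a c²/h. A minimal sketch of that conversion (the micro-eV masses chosen here are illustrative, not ADMX's exact scan range):

```python
# Convert a hypothetical axion mass to the photon frequency a haloscope
# must tune to: f = m_a * c^2 / h. With the mass expressed in eV, this is
# simply m_a divided by Planck's constant in eV*s.
H_EV_S = 4.135667696e-15  # Planck constant, eV*s

def axion_frequency_hz(mass_ev):
    """Photon frequency (Hz) produced by an axion of the given mass (eV)."""
    return mass_ev / H_EV_S

# illustrative micro-eV masses in the haloscope-friendly range
for m_uev in (2.0, 3.0, 4.0):
    f_ghz = axion_frequency_hz(m_uev * 1e-6) / 1e9
    print(f"m_a = {m_uev} ueV  ->  f = {f_ghz:.3f} GHz")
```

A few micro-eV thus corresponds to sub-GHz to GHz microwave frequencies, which is why haloscopes are built around tunable microwave cavities.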
Two things that this search needs are a lot of quiet and some good amplification, as the signal from a typical axion is expected to be as weak as the signal from a mobile phone left on the surface of Mars (around 10⁻²³ W). The setup is indeed stripped of noise by being placed in a dilution refrigerator, which keeps its temperature at a few tenths of a degree above absolute zero. This is practically the domain governed by quantum noise, so advantage can be taken of the finesse of quantum technology: for the first time ADMX used SQUIDs, superconducting quantum interference devices, for the amplification of the signal.
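The trade-off between noise temperature and observation time is captured by the Dicke radiometer equation: the achievable signal-to-noise is SNR = (P/(k_B·T_sys))·sqrt(t/Δν). A rough sketch, with placeholder numbers (not ADMX's actual operating parameters), shows why sub-kelvin amplifiers matter:

```python
# Dicke radiometer equation: how long must a haloscope integrate at one
# frequency step to reach a target signal-to-noise ratio?
# Solving SNR = (P / (kB * T_sys)) * sqrt(t / dnu) for t gives
# t = dnu * (SNR * kB * T_sys / P)^2. All inputs below are illustrative.
KB = 1.380649e-23  # Boltzmann constant, J/K

def integration_time_s(p_signal_w, t_sys_k, linewidth_hz, snr=5.0):
    """Seconds of integration needed to reach the given SNR."""
    return linewidth_hz * (snr * KB * t_sys_k / p_signal_w) ** 2

# ~1e-23 W signal, 0.5 K system temperature, ~725 Hz assumed axion linewidth
t = integration_time_s(1e-23, 0.5, 725.0)
print(f"~{t / 3600:.1f} hours per frequency step")
```

Since the required time grows with the square of the system temperature, halving T_sys quarters the scan time per step, which is exactly the kind of gain quantum-limited SQUID amplifiers buy.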
In the end, a good chunk of the parameter space which is favored by the theory might have been excluded, but the haloscope is ready to look at the rest of it. Just think of how, one day, a pulse inside a small device in a university lab might be a messenger of the mysteries unfolding across the cosmos.
Information, gold and chicken. What do they all have in common? They can all come in the form of nuggets. Naturally one would then be compelled to ask: “what about fundamental particles? Could they come in nugget form? Could that hold the key to dark matter?” Lucky for you this has become the topic of some ongoing research.
A ‘nugget’ in this context refers to large macroscopic ‘clumps’ of matter formed in the early universe that could possibly survive up until the present day to serve as a dark matter candidate. Much like nuggets of the edible variety, one must be careful to combine just the right ingredients in just the right way. In fact, there are generally three requirements to forming such an exotic state of matter:
(At least) two different vacuum states separated by a potential ‘barrier’ where a phase transition occurs (known as a first-order phase transition).
A charge which is conserved globally which can accumulate in a small part of space.
An excess of matter over antimatter on the cosmological scale, or in other words, a large non-zero macroscopic number density of global charge.
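Requirement 1 can be made concrete with a toy scalar potential: V(φ) = (φ² − 1)² + ε·φ has two unequal vacua separated by a barrier for small ε, which is the hallmark of a first-order transition. This is purely illustrative, not any specific dark-sector model:

```python
# Toy illustration of requirement 1: a potential with two vacuum states
# separated by a barrier. V(phi) = (phi^2 - 1)^2 + eps*phi tilts a
# double-well potential so the two minima have different energies.

def v(phi, eps=0.1):
    return (phi**2 - 1.0) ** 2 + eps * phi

def local_minima(f, lo=-2.0, hi=2.0, n=4001):
    """Grid search for local minima of f on [lo, hi]."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    return [xs[i] for i in range(1, n - 1) if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]

minima = local_minima(v)
# two distinct vacua near phi = -1 and phi = +1, with a barrier in between
print(f"vacua near phi = {[round(m, 2) for m in minima]}")
```

Finding exactly two minima confirms the barrier between them; in a cosmological setting the field tunnels or nucleates bubbles from the higher vacuum to the lower one rather than rolling smoothly.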
Back in the 1980s, before much work was done in the field of lattice quantum chromodynamics (lQCD), Edward Witten put forward the idea that the Standard Model QCD sector could in fact accommodate such an exotic form of matter. Quite simply, this would occur in the early phase of the universe when the quarks undergo color confinement to form hadrons. In particular, Witten's nuggets were realized as large macroscopic clumps of 'quark matter' carrying a very large baryon number. However, with the advancement of lQCD techniques, the phase transition in which the quarks become confined looks more like a smooth, continuous 'crossover' rather than a first-order transition, making the idea somewhat unfeasible within the Standard Model.
Theorists, particularly those interested in dark matter, are not confined (for lack of a better term) to the strict details of the Standard Model and most often look to the formation of sometimes complicated ‘dark sectors’ invisible to us but readily able to provide the much needed dark matter candidate.
The problem of obtaining a first-order phase transition to form our quark nuggets need not be a problem if we consider a QCD-type theory that does not interact with the Standard Model particles. More specifically, we can consider a set of dark quarks and dark gluons with arbitrary characteristics such as masses, couplings, and numbers of flavors and colors (all of which are of course quite settled in the Standard Model QCD case). In fact, looking at the numbers of flavors and colors of dark QCD in Figure 1, we can see that the white unshaded region contains a number of models with a first-order phase transition, as required to form these dark quark nuggets.
As with normal quarks, the distinction between the two phases actually refers to a process known as chiral symmetry breaking. When the temperature of the universe cools to the dark confinement scale, color confinement of the quarks occurs around the same time, such that no individual colored quark can be observed on its own – only colorless bound states.
Forming a nugget
As we have briefly mentioned so far, the dark nuggets are formed as the universe undergoes a 'dark' phase transition from a phase where the dark color is unconfined to a phase where it is confined. At some critical temperature, due to the nature of first-order phase transitions, bubbles of the new confined phase (full of dark hadrons) begin to nucleate out of the dark quark-gluon plasma. The growth of these bubbles is driven by a difference in pressure, reflecting the fact that the unconfined and confined vacuum states have different energies. As the bubble walls sweep outward, the almost massless particles of the dark plasma pass through, while the heavy dark (anti)baryons are reflected, and hence a large amount of dark baryon number accumulates in the unconfined phase. Eventually, as these bubbles merge and coalesce, we would expect local regions of remaining dark quark-gluon plasma, unconfined and stable against collapse due to the Fermi degeneracy pressure (see reference below for more on this). An illustration is shown in Figure 2. Calculations with varying confinement energy scales estimate masses and radii spanning many orders of magnitude, so these can truly be classed as macroscopic dark objects!
How do we know they could be there?
There are a number of ways to infer the existence of dark quark nuggets, but two of the main ones are: (i) as a dark matter candidate and (ii) through probes of the dark QCD model that provides them. Cosmologically, the latter can imply the existence of a dark form of radiation, which ultimately can lead to effects on the Cosmic Microwave Background Radiation (CMB). In a similar vein, one recent avenue of study is the production of a steady background of gravitational waves emerging from the existence of a first-order phase transition – one of the key requirements for dark quark nugget formation. More importantly, they can be probed through astrophysical means if they share some coupling (albeit small) with the Standard Model particles. The standard technique of direct detection with Earth-based experiments could be the way to go – but furthermore, there may be the possibility of cosmic ray production from collisions of multiple dark quark nuggets. These sit alongside a number of other possible observations over the massive range of nugget sizes and masses shown in Figure 3.
To conclude, note that in such a generic framework, a number of well-motivated theories may predict (or in fact unavoidably contain) quark nuggets that may serve as interesting dark matter candidates with a lot of fun phenomenology to play with. It is only up to the theorist's imagination where to go from here!
Direct detection strategies for dark matter (DM) have grown significantly beyond the dominant narrative of looking for these ghostly particles scattering off large, heavy nuclei. Such experiments search for Weakly-Interacting Massive Particles (WIMPs) in the many-GeV (gigaelectronvolt) mass range. These DM candidates are predicted by many beyond-Standard-Model (SM) theories, one of the most popular being supersymmetry. In what was dubbed the "WIMP miracle", these particles were found to possess just the right properties to be suitable as dark matter. However, as these experiments become more and more sensitive, the null results put a lot of stress on the paradigm's feasibility.
Typical detectors, like those of LUX, XENON, PandaX and ZEPLIN, detect flashes of light (scintillation) resulting from particle collisions in noble liquids like argon or xenon. Other cryogenic-type detectors, used in experiments like CDMS, cool semiconductor arrays down to very low temperatures to search for ionization and phonon (quantized lattice vibration) production in crystals. These techniques have been incredibly successful at deriving direct detection limits for heavy dark matter, but new ideas are emerging to look into the lighter side.
Recently, DM below the GeV range has become the new target of a huge range of detection methods, utilizing new techniques and functional materials – semiconductors, superconductors and even superfluid helium. In this regime, recoils off the much lighter electrons in fact become a much more sensitive probe than recoils off large, heavy nuclear targets.
There are several ways that one can consider light dark matter interacting with electrons. One popular consideration is to introduce a new gauge boson that has a very small ‘kinetic’ mixing with the ordinary photon of the Standard Model. If massive, these ‘dark photons’ could also be potentially dark matter candidates themselves and an interesting avenue for new physics. The specifics of their interaction with the electron are then determined by the mass of the dark photon and the strength of its mixing with the SM photon.
Typically the gap between the valence and conduction bands in semiconductors like silicon and germanium is around an electronvolt (eV). When the energy of the dark matter particle exceeds the band gap, electron excitations in the material can usually be detected through a complicated secondary cascade of electron-hole pair generation. Below the band gap, however, there is not enough energy to excite the electron to the conduction band, and so detection proceeds through low-energy multi-phonon excitations, the dominant channel being the emission of two back-to-back phonons.
In both these regimes, the absorption rate of dark matter in the material is directly related to the properties of the material, namely its optical properties. In particular, the absorption rate for ordinary SM photons is determined by the polarization tensor in the medium, and in turn the complex conductivity, through what is known as the optical theorem. Ultimately this describes the response of the material to an electromagnetic field, which has been measured in several energy ranges. This ties together the astrophysical properties of how the dark matter moves through space and the fundamental description of DM-electron interactions at the particle level.
In a more technical sense, the rate of DM absorption, in events per unit time per unit target mass, is given by the following equation:

R = (1/ρ_T) (ρ_DM / m_A′) κ² Γ_SM

ρ_T – mass density of the target material
ρ_DM – local dark matter mass density (0.3 GeV/cm³) in the galactic halo
m_A′ – mass of the dark photon particle
κ – kinetic mixing parameter (in-medium)
Γ_SM – absorption rate of ordinary SM photons
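A minimal sketch of this rate in terms of the quantities listed above: the DM number density (ρ_DM/m_A′) times the mixing-suppressed photon absorption rate, per unit target mass. The silicon density is real, but the dark photon mass, mixing, and SM photon absorption rate below are illustrative placeholders, not measured inputs:

```python
# Sketch of R = (rho_DM / m_A') * kappa^2 * Gamma_SM / rho_T, converted to
# events per kilogram-year. All particle-physics inputs are illustrative.
SECONDS_PER_YEAR = 3.154e7

def events_per_kg_year(m_aprime_ev, kappa, gamma_sm_per_s,
                       rho_t_g_cm3=2.33, rho_dm_gev_cm3=0.3):
    """Absorption events per kg-year for a dark photon of mass m_A' (eV)."""
    n_dm = rho_dm_gev_cm3 * 1e9 / m_aprime_ev        # DM number density, 1/cm^3
    rate_per_cm3 = n_dm * kappa**2 * gamma_sm_per_s  # absorptions / cm^3 / s
    per_kg_s = rate_per_cm3 / (rho_t_g_cm3 / 1000.0) # divide by target kg/cm^3
    return per_kg_s * SECONDS_PER_YEAR

# e.g. a 1 eV dark photon with kappa = 1e-15 in a silicon-density target,
# assuming an eV-scale photon absorption rate of ~1e15 / s
print(f"{events_per_kg_year(1.0, 1e-15, 1e15):.0f} events / kg-year")
```

The κ² suppression is what makes the tiny mixings in Figure 1 reachable at all: even at κ ~ 10⁻¹⁵, the enormous DM number density at sub-eV masses keeps the rate at a handful of events per kg-year.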
Shown in Figure 1, the projected sensitivity at 90% confidence level (C.L.) for a 1 kg-year exposure of a semiconductor target to dark photons can be almost an order of magnitude better than that of existing nuclear recoil experiments. The sensitivity is shown as a function of the kinetic mixing parameter and the mass of the dark photon. Limits are also shown for the existing semiconductor experiments DAMIC and CDMSLite, with 0.6 and 70 kg-day exposures, respectively.
Furthermore, in the millielectronvolt-kiloelectronvolt range, these could provide much stronger constraints than any of those that currently exist from sources in astrophysics, even at this exposure. These materials also provide a novel way of detecting DM in a single experiment, so long as improvements are made in phonon detection.
These possibilities, amongst a plethora of other detection materials and strategies, can open up a significant area of parameter space for finally closing in on the identity of the ever-elusive dark matter!
Over the past decade, a new trend has been emerging in physics, one that is motivated by several key questions: what do we know about the origin of our universe? What do we know about its composition? And how will the universe evolve from here? To delve into these questions naturally requires a thorough examination of the universe via the astrophysics lens. But studying the universe on a large scale alone does not provide a complete picture. In fact, it is just as important to see the universe on the smallest possible scales, necessitating the trendy and (fairly) new hybrid field of particle astrophysics. In this post, we will look specifically at the cosmic microwave background (CMB), classically known as a pillar of astrophysics, within the context of particle physics, providing a better understanding of the broader questions that encompass both fields.
Essentially, the CMB is just what we see when we look into the sky and we aren’t looking at anything else. Okay, fine. But if we’re not looking at something in particular, why do we see anything at all? The answer requires us to jump back a few billion years to the very early universe.
Immediately after the Big Bang, it was impossible for particles to form atoms without immediately being broken apart by constant bombardment from stray photons. About 380,000 years after the Big Bang, the Universe expanded and cooled to a temperature of about 3,000 K, allowing the first formation of stable hydrogen atoms. Since hydrogen is electrically neutral, the leftover photons no longer scattered, meaning that from that point on their paths would remain unaltered indefinitely. These are the photons that we observe as the CMB; Figure 1 shows this idea diagrammatically below. From our present observation point, we measure the CMB to have a temperature of about 2.7 K.
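The two temperatures fix the redshift of recombination, since the CMB temperature scales as 1 + z. A quick check (using the commonly quoted present-day value of 2.725 K):

```python
# The CMB was released at ~3000 K and is observed at ~2.7 K today; since
# temperature scales with redshift as T(z) = T_now * (1 + z), the ratio
# gives the redshift of recombination.
T_RECOMBINATION_K = 3000.0
T_TODAY_K = 2.725  # commonly quoted present-day CMB temperature

z = T_RECOMBINATION_K / T_TODAY_K - 1.0
print(f"recombination redshift z ~ {z:.0f}")  # ~1100
```

So the CMB snapshot comes to us from a redshift of roughly 1100, by far the oldest light we can observe directly.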
Since this radiation has been unimpeded since that specific point (known as the point of ‘recombination’), we can think of the CMB as a snapshot of the very early universe. It is interesting, then, to examine the regularity of the spectrum; the CMB is naturally not perfectly uniform, and the slight temperature variations can provide a lot of information about how the universe formed. In the early primordial soup universe, slight random density fluctuations exerted a greater gravitational pull on their surroundings, since they had slightly more mass. This process continues, and very large dense patches occur in an otherwise uniform space, heating up the photons in that area accordingly. The Planck satellite, launched in 2009, provides some beautiful images of the temperature anisotropies of the universe, as seen in Figure 2. Some of these variations can be quite severe, as in the recently released results about a supervoid aligned with an especially cold spot in the CMB (see Further Reading, item 4).
So what does this all have to do with particles? We’ve talked about a lot of astrophysics so far, so let’s tie it all together. The big correlation here is dark matter. The CMB has given us strong evidence that our universe has a flat geometry, and from general relativity, this provides restrictions on the mass, energy, and density of the universe. In this way, we know that atomic matter can constitute only 5% of the universe, and analysis of the peaks in the CMB gives an estimate of 26% for the total dark matter presence. The rest of the universe is believed to be dark energy (see Figure 3).
Both dark matter and dark energy are huge questions in particle physics that could be the subject of a whole other post. But the CMB plays a big role in making our questions a bit more precise. The CMB is one of several pieces of strong evidence that require the existence of dark matter and dark energy to justify what we observe in the universe. Some potential dark matter candidates include weakly interacting massive particles (WIMPs), sterile neutrinos, or the lightest supersymmetric particle, all of which bring us back to particle physics for experimentation. Dark energy is not as well understood, and there are still a wide variety of disparate theories to explain its true identity. But it is clear that the future of particle physics will likely be closely tied to astrophysics, so as a particle physicist it’s wise to keep an eye out for new developments in both fields!
The Large Hadron Collider is the world's largest proton collider, and in a mere five years of active data acquisition, it has already achieved fame for the discovery of the elusive Higgs boson in 2012. Though the LHC is currently off to allow for a series of repairs and upgrades, it is scheduled to begin running again within the month, this time with a proton collision energy of 13 TeV. This is nearly double the previous run energy of 8 TeV, opening the door to a host of new particle productions and processes. Many physicists are keeping their fingers crossed that another big discovery is right around the corner. Here are a few specific things that will be important in Run II.
1. Luminosity scaling
Though this is a very general category, it is a huge component of the Run II excitement. This is simply due to the scaling of effective parton luminosity with collision energy, which gives a remarkable increase in discovery potential for a modest energy increase.
If you're not familiar, luminosity is the number of events per unit time per unit cross-sectional area. Integrated luminosity sums this instantaneous value over time, giving a metric with units of 1/area.
In the particle physics world, luminosities are measured in inverse femtobarns, where 1 fb⁻¹ = 1/(10⁻⁴³ m²). Each of the two main detectors at CERN, ATLAS and CMS, collected 30 fb⁻¹ by the end of 2012. The main point is that more luminosity means more events in which to search for new physics.
Figure 1 shows the ratios of LHC luminosities for 7 vs. 8 TeV, and again for 13 vs. 8 TeV. Since the plot has a log scale on the y axis, it's easy to tell that 13 to 8 TeV is a very large ratio. In fact, for producing heavy particles, 100 fb⁻¹ at 8 TeV is the equivalent of 1 fb⁻¹ at 13 TeV. So increasing the energy by a factor of less than 2 boosts the effective reach as much as a hundredfold increase in integrated luminosity would! This means that even in the first few months of running at 13 TeV, there will be a huge amount of data available for analysis, leading to the likely release of many analyses shortly after the beginning of data acquisition.
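The arithmetic connecting luminosity to physics is simple: the expected event count is the cross section times the integrated luminosity, N = σ × L. A toy example (the 10 fb cross section is illustrative, not a real LHC process):

```python
# Why integrated luminosity matters: the expected number of events for a
# process is its cross section times the integrated luminosity, N = sigma * L.
def expected_events(sigma_fb, int_lumi_fb_inv):
    """Expected event count for cross section sigma (fb) and luminosity (fb^-1)."""
    return sigma_fb * int_lumi_fb_inv

# a hypothetical 10 fb process with the ~30 fb^-1 each detector collected by 2012
print(expected_events(10.0, 30.0))  # 300 events, before detector efficiencies
```

The inverse-area units make this work out: femtobarns times inverse femtobarns leaves a pure event count.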
2. Supersymmetry

Supersymmetry theory proposes the existence of a superpartner for every particle in the Standard Model, effectively doubling the number of fundamental particles in the universe. This helps to answer many questions in particle physics, notably the 'hierarchy' problem of why the Higgs mass is so much smaller than the Planck scale (see the further reading list for some good explanations).
Current mass limits on many supersymmetric particles are getting pretty high, leaving some physicists concerned about the feasibility of finding evidence for SUSY. Many of these particles have already been excluded for masses below the order of a TeV, making it very difficult to create them with the LHC as is. While there is talk of another LHC upgrade to achieve energies even higher than 14 TeV, for now the SUSY searches will have to make use of the energy that is available.
Figure 2 shows the cross sections for various supersymmetric particle pair production processes, including the stop (the supersymmetric top quark) and the gluino (the supersymmetric gluon). Given the luminosity scaling described previously, these cross sections tell us that with only 1 fb⁻¹, physicists will be able to surpass the existing sensitivity for these supersymmetric processes. As a result, there will be a rush of searches being performed in a very short time after the run begins.
3. Dark Matter
Dark matter is one of the greatest mysteries in particle physics to date (see past particlebites posts for more information). It is also one of the most difficult mysteries to solve, since dark matter candidate particles are by definition very weakly interacting. In the LHC, potential dark matter creation is detected as missing transverse energy (MET) in the detector, since the particles do not leave tracks or deposit energy.
One of the best ways to 'see' dark matter at the LHC is via mono-jet or mono-photon signatures: jets or photons that do not occur in pairs, but rather singly, as a result of radiation. Typically these signatures have very high transverse momentum (pT) jets, giving a good primary vertex, and large amounts of MET, making them easier to observe. Figure 3 shows a Feynman diagram of such a decay, with the MET recoiling off a jet or a photon.
Though the topics in this post will certainly be popular in the next few years at the LHC, they do not even begin to span the huge volume of physics analyses that we can expect to see emerging from Run II data. The next year alone has the potential to be a groundbreaking one, so stay tuned!
Last Thursday, Nobel Laureate Sam Ting presented the latest results (CERN press release) from the Alpha Magnetic Spectrometer (AMS-02) experiment, a particle detector attached to the International Space Station—think "ATLAS/CMS in space." Instead of beams of protons, the AMS detector examines cosmic rays in search of signatures of new physics such as the products of dark matter annihilation in our galaxy.
In fact, this is just the latest chapter in an ongoing mystery involving the energy spectrum of cosmic positrons. Recall that positrons are the antimatter versions of electrons with identical properties except having opposite charge. They're produced from known astrophysical processes when high-energy cosmic rays (mostly protons) crash into interstellar gas—in this case they're known as 'secondaries' because they're a product of the 'primary' cosmic rays.
The dynamics of charged particles in the galaxy are difficult to simulate due to the presence of intense and complicated magnetic fields. However, the diffusion models generically predict that the positron fraction—the number of positrons divided by the total number of positrons and electrons—decreases with energy. (This ratio of fluxes is a nice quantity because some astrophysical uncertainties cancel.)
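The quantity at the center of the story is just this ratio of fluxes. A trivial sketch, with toy flux values (purely illustrative, not AMS data), shows how a fraction can rise when the positron flux falls more slowly with energy than the electron flux:

```python
# The positron fraction: positron flux divided by the combined
# electron-plus-positron flux, as defined in the text.
def positron_fraction(flux_positron, flux_electron):
    return flux_positron / (flux_positron + flux_electron)

# toy fluxes: the positron flux here falls more slowly than the electron
# flux, so the fraction rises with energy
for e_gev, fp, fe in [(10, 1.0, 15.0), (100, 0.1, 0.9)]:
    print(f"E = {e_gev:>3} GeV  ->  fraction = {positron_fraction(fp, fe):.2f}")
```

Standard diffusion models predict the opposite trend, a fraction falling with energy, which is why the observed rise demands an explanation.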
This prediction, however, is in stark contrast with the observed positron fraction from recent satellite experiments:
The rising fraction had been hinted at in balloon-based experiments for several decades, but the satellite experiments have been able to demonstrate this behavior conclusively because they can access higher energies. In their first set of results last year (shown above), AMS gave the most precise measurements of the positron fraction up to 350 GeV. Yesterday's announcement extended these results to 500 GeV and added the following observations:
First, they claim to have measured the maximum of the positron fraction at 275 GeV. This is close to the edge of the data they're releasing, but the plot of the positron fraction slope is slightly more convincing:
The observation of a maximum in what was otherwise a fairly featureless rising curve is key for interpretations of the excess, as we discuss below. A second observation is a bit more curious: while neither the electron nor the positron spectra follow a simple power law, Φ ∝ E^(−γ), the total electron-plus-positron flux does follow such a power law over a range of energies.
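A power law is fully characterized by its spectral index γ, which can be read off from any two flux measurements: γ = ln(Φ₁/Φ₂)/ln(E₂/E₁). A quick sketch with toy numbers (not AMS values):

```python
# For a power-law flux Phi ~ E^-gamma, two measurements fix the index:
# gamma = ln(Phi1/Phi2) / ln(E2/E1). Toy inputs only.
import math

def spectral_index(e1, phi1, e2, phi2):
    """Power-law index gamma from two (energy, flux) measurements."""
    return math.log(phi1 / phi2) / math.log(e2 / e1)

# a flux falling by a factor of 1000 per decade in energy has gamma = 3
print(round(spectral_index(10.0, 1.0, 100.0, 1e-3), 3))  # 3.0
```

Deviations of the measured points from a single straight line on a log-log plot are exactly what "does not follow a simple power law" means for the individual electron and positron spectra.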
This is a little harder to interpret since the flux from electrons also, in principle, includes different sources of background. Note that this plot reaches higher energies than the positron fraction—part of the reason for this is that it is more difficult to distinguish between electrons and positrons at high energies. This is because the identification depends on how the particle bends in the AMS magnetic field, and higher energy particles bend less. This, incidentally, is also why the FERMI data has much larger error bars in the first plot above—FERMI doesn't have its own magnetic field and must rely on that of the Earth for charge discrimination.
So what should one make of the latest results?
The most optimistic hope is that this is a signal of dark matter, though at this point this is more of a 'wish' than a deduction. Independently of AMS, we know that dark matter exists in a halo that surrounds our galaxy. The simplest dark matter models also assume that when two dark matter particles find each other in this halo, they can annihilate into Standard Model particle–anti-particle pairs, such as electrons and positrons—the latter potentially yielding the rising positron fraction signal seen by AMS.
From a particle physics perspective, this would be the most exciting possibility. The ‘smoking gun’ signature of such a scenario would be a steep drop in the positron fraction at the mass of the dark matter particle. This is because the annihilation occurs at low velocities so that the energy of the annihilation products is set by the dark matter mass. This is why the observation of a maximum in the positron fraction is interesting: the dark matter interpretation of this excess hinges on how steeply the fraction drops off.
There are, however, reasons to be skeptical.
One attractive feature of dark matter annihilations is thermal freeze out: the observation that the annihilation rate determines how much dark matter exists today after being in thermal equilibrium in the early universe. The AMS excess is suggestive of heavy (~TeV scale) dark matter with an annihilation rate three orders of magnitude larger than the rate required for thermal freeze out.
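The tension can be seen with the standard back-of-the-envelope freeze-out relation, Ωh² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩: boosting the annihilation rate by ~10³ over the thermal value would dilute the relic abundance by the same factor. A rough sketch (this is the textbook approximation, not a full Boltzmann-equation calculation):

```python
# Crude freeze-out estimate: the relic abundance scales inversely with the
# annihilation cross section, Omega h^2 ~ 3e-27 cm^3/s / <sigma v>.
THERMAL_XSEC = 3e-26  # cm^3/s, the canonical "thermal relic" cross section

def relic_density_h2(sigma_v_cm3_s):
    """Textbook estimate of the relic abundance Omega h^2."""
    return 3e-27 / sigma_v_cm3_s

print(f"thermal cross section:  Omega h^2 ~ {relic_density_h2(THERMAL_XSEC):.2f}")
print(f"1000x boosted:          Omega h^2 ~ {relic_density_h2(1e3 * THERMAL_XSEC):.5f}")
```

A thermal relic with the boosted cross section would make up only ~0.1% of the required dark matter, which is why model-builders must invoke extra mechanisms (such as velocity-dependent enhancements) to reconcile the AMS excess with freeze-out.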
A study of the types of spectra one expects from dark matter annihilation shows fits that are somewhat in conflict with the combined observations of the positron fraction, total electron/positron flux, and the anti-proton flux (see 0809.2409). The anti-proton flux, in particular, does not have any known excess that would otherwise be predicted by dark matter annihilation into quarks.
There are ways around these issues, such as invoking mechanisms to enhance the present day annihilation rate, perhaps with the annihilation only creating leptons and not quarks. However, these are additional bells and whistles that model-builders must impose on the dark matter sector. It is also important to consider alternate explanations of the Pamela/FERMI/AMS positron fraction excess due to astrophysical phenomena. There are at least two very plausible candidates:
Pulsars are neutron stars that are known to emit “primary” electron/positron pairs. A nearby pulsar may be responsible for the observed rising positron fraction. See 1304.1791 for a recent discussion.
Alternately, supernova remnants may also generate a “secondary” spectrum of positrons from acceleration along shock waves (0909.4060, 0903.2794, 1402.0855).
Both of these scenarios are plausible and should temper the optimism that the rising positron fraction represents a measurement of dark matter. One useful handle to disfavor the astrophysical interpretations is to note that they would be anisotropic (not constant over all directions) whereas the dark matter signal would be isotropic. See 1405.4884 for a recent discussion. At the moment, the AMS measurements do not measure any anisotropy but are not yet sensitive enough to rule out astrophysical interpretations.
Finally, let us also point out an alternate approach to understanding the positron fraction. The reason why it's so difficult to study cosmic rays is that the complex magnetic fields in the galaxy are intractable to measure and, hence, make the trajectory of charged particles hopeless to trace backwards to their sources. Instead, the authors of 0907.1686 and 1305.1324 take an alternate approach: while we can't determine the cosmic ray origins, we can look at the behavior of heavier cosmic ray particles and compare them to the positrons. This is because, as mentioned above, the bending of a charged particle in a magnetic field is determined by its mass and charge—quantities that are known for the various cosmic ray particles. Based on this, the authors are able to predict an upper bound for the positron fraction when one assumes that the positrons are secondaries (e.g. in the case of supernova remnant acceleration):
We see that the AMS-02 spectrum is just under the authors' upper bound, and that the reported downturn is consistent with (even predicted from) the upper bound. The authors' analysis then suggests a non-dark-matter explanation for the positron excess. See this post from Resonaances for a discussion of this point and an updated version of the above plot from the authors.
With that in mind, there are at least three things to look forward to in the future from AMS:
A corresponding upturn in the anti-proton flux is predicted in many types of dark matter annihilation models for the rising positron fraction. Thus far AMS-02 has not released anti-proton data due to the lower numbers of anti-protons.
Further sensitivity to the (an)isotropy of the excess is a critical test of the dark matter interpretation.
The shape of the drop-off with energy is also critical: a gradual drop-off is unlikely to come from dark matter whereas a steep drop off is considered to be a smoking gun for dark matter.
Only time will tell, though Ting suggested that new results would be presented at the upcoming AMS meeting at CERN in two months.
The recent Sackler Symposium on the Nature of Dark matter included three talks on various aspects of the Pamela/FERMI/AMS-02 rising positron fraction. You can view the videos here: Linden (pulsars), Galli (dark matter), Blum (upper bound on secondaries).
Title: Results on low mass WIMPs using an upgraded CRESST-II detector
Author: G. Angloher, A. Bento, C. Bucci, L. Canonica, A. Erb, F. v. Feilitzsch, N. Ferreiro Iachellini, P. Gorla, A. Gütlein, D. Hauff, P. Huff, J. Jochum, M. Kiefer, C. Kister, H. Kluck, H. Kraus, J.-C. Lanfranchi, J. Loebell, A. Münster, F. Petricca, W. Potzel, F. Pröbst, F. Reindl, S. Roth, K. Rottler, C. Sailer, K. Schäffner, J. Schieck, J. Schmaler, S. Scholl, S. Schönert, W. Seidel, M. v. Sivers, L. Stodolsky, C. Strandhagen, R. Strauss, A. Tanzke, M. Uffinger, A. Ulrich, I. Usherov, M. Wüstrich, S. Wawoczny, M. Willers, and A. Zöller
CRESST-II (Cryogenic Rare Event Search with Superconducting Thermometers) is a dark matter search experiment located at the Laboratori Nazionali del Gran Sasso in Italy. It is primarily devoted to the search for WIMPs, or Weakly Interacting Massive Particles, which play a key role in both particle physics and astrophysics as a potential candidate for dark matter. If you are not yet intrigued enough by dark matter, see the list of references at the bottom of this post for more information. As dark matter candidates, WIMPs interact only via the gravitational and weak forces, making them extremely difficult to detect.
CRESST-II attempts to detect WIMPs via elastic scattering off nuclei in scintillating CaWO4 crystals. This is a process known as direct detection, where scientists search for evidence of the WIMP itself; indirect detection requires searching for WIMP decay products. There are many challenges to direct detection, including the relatively low amount of recoil energy present in such scattering. An additional issue is the extremely high background, which is dominated by beta and gamma radiation of the nuclei. Overall, the experiment expects to obtain a few tens of events per kilogram-year.
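To get a feel for why the recoil energies are so low, one can estimate the maximum energy a WIMP can deposit in an elastic collision. The sketch below uses the non-relativistic kinematic limit E_max = 2μ²v²/m_N, where μ is the WIMP-nucleus reduced mass and v ~ 220 km/s is a typical galactic WIMP velocity; the numbers are illustrative and not taken from the CRESST-II paper:

```python
# Back-of-envelope maximum nuclear recoil energy for elastic WIMP scattering,
# E_max = 2 mu^2 v^2 / m_N (head-on, non-relativistic collision),
# where mu is the WIMP-nucleus reduced mass.

def max_recoil_keV(m_wimp_GeV, m_nucleus_GeV, v_over_c=7.3e-4):
    """Maximum recoil energy in keV for WIMP velocity v (~220 km/s)."""
    mu = m_wimp_GeV * m_nucleus_GeV / (m_wimp_GeV + m_nucleus_GeV)
    e_max_GeV = 2 * mu**2 * v_over_c**2 / m_nucleus_GeV
    return e_max_GeV * 1e6  # convert GeV to keV

# A 20 GeV WIMP scattering off tungsten (A ~ 184, m_N ~ 171 GeV):
print(round(max_recoil_keV(20, 171), 2))
```

A 20 GeV WIMP scattering off a tungsten nucleus deposits at most a couple of keV, which is why cryogenic calorimetry is needed to see anything at all.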
In 2011, CRESST-II reported a small excess of events above the predicted background levels. The statistical analysis makes use of a maximum likelihood function, which parameterizes each primary background to compute a total number of expected events. The results of this likelihood fit can be seen in Figure 1, where M1 and M2 are different mass hypotheses. From these values, CRESST-II reported a statistical significance of 4.7σ for M1 and 4.2σ for M2. Since a discovery is generally accepted to require a significance of 5σ, these numbers presented a pretty big cause for excitement.
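As a rough illustration of how such significances are quoted: by Wilks’ theorem, twice the log-likelihood improvement of the signal-plus-background fit over background alone behaves asymptotically like a χ² variable, so the significance is approximately √(2 Δln L). The Δln L value below is made up for illustration and is not from the CRESST-II fit:

```python
import math

# Approximate Gaussian significance from a log-likelihood improvement,
# sigma ~ sqrt(2 * delta_lnL), via Wilks' theorem (one signal parameter).

def significance_sigma(delta_lnL):
    """Gaussian-equivalent significance of a likelihood-ratio fit."""
    return math.sqrt(2 * delta_lnL)

# An (illustrative) log-likelihood improvement of ~11 is about 4.7 sigma:
print(round(significance_sigma(11.045), 1))
```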
In July of 2014, CRESST-II released a follow-up paper: after some detector upgrades and further background reduction, these tantalizingly high significances have been revised, ruling out both mass hypotheses. The event excess was likely due to unidentified e–/γ background, which was reduced by a factor of 2-10 via the improved CaWO4 crystals used in this run. The elimination of these high signal significances is in agreement with other dark matter searches, which have also ruled out WIMP masses on the order of 20 GeV.
Figure 2 shows the most recent exclusion curve for the WIMP, which gives the upper limit on the scattering cross section as a function of possible WIMP mass. The contour reported in the 2011 paper is shown in light blue. The 90% confidence limit from the 2014 paper is given in solid red, alongside the expected sensitivity from the background model in light red. All other curves are due to data from other experiments; see the paper cited for more information.
Though this particular excess was ultimately not confirmed, these results overall paint an optimistic picture for the dark matter search. Comparison of the limits from 2011 and 2014 shows a much greater sensitivity for WIMP masses below 3 GeV, which were previously unprobed by other experiments. Additional detector improvements may result in even more stringent limits, shaping the dark matter search for future experiments.
Title: Effect of Black Holes in Local Dwarf Spheroidal Galaxies on Gamma-Ray Constraints on Dark Matter Annihilation
Author: Alma X. Gonzalez-Morales, Stefano Profumo, Farinaldo S. Queiroz
Published: arXiv:1406.2424 [astro-ph.HE]
In a previous ParticleBite we showed how dwarf spheroidal galaxies can tell us about dark matter interactions. As a short summary, these are dark matter-rich “satellite [sub-]galaxies” of the Milky Way that are ideal places to look for photons coming from dark matter annihilation into Standard Model particles. In this post we highlight a recent update to that analysis.
The rate at which a pair of dark matter particles annihilate in a galaxy is proportional to the square of the dark matter density. The authors point out that if the dwarf spheroidal galaxies contain intermediate mass black holes (heavier than stellar-mass black holes but far lighter than the supermassive black holes at galactic centers), then it’s possible that the dark matter in the dwarf is more densely packed near the black hole. The authors redo the FERMI analysis for DM annihilation in dwarf spheroidals with 4 years of data (see our previous ParticleBite) under the assumption that these dwarfs contain a black hole consistent with their observed properties.
While the dwarf galaxies have little stellar content, one can use the visible stars to measure the stellar velocity dispersion, σ. As a benchmark, the authors use the Tremaine relation to determine the black hole mass as a function of the observed velocity dispersion,

M_BH ≈ 10^8.13 (σ / 200 km/s)^4.02 M_⊙
Here M_⊙ is the mass of the sun. Given this mass and its effect on the dark matter density, they can then calculate the J factor that encodes the `astrophysical’ line-of-sight integral of the squared dark matter density as seen by observers on Earth. Following the FERMI analysis, the authors then set bounds on the dark matter annihilation cross section as a function of the dark matter mass for 15 dwarf spheroidals:
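As a quick numerical sketch of the Tremaine relation, using the Tremaine et al. (2002) best-fit coefficients, M_BH ≈ 10^8.13 (σ/200 km/s)^4.02 M_⊙; treat this as an illustration of the benchmark rather than the authors’ exact fit:

```python
# Black hole mass from the stellar velocity dispersion via the Tremaine
# (M-sigma) relation. Coefficients are the Tremaine et al. (2002) fit;
# this is a sketch of the benchmark, not the paper's implementation.

def tremaine_mass_Msun(sigma_km_s):
    """Black hole mass in solar masses for dispersion sigma in km/s."""
    return 10**8.13 * (sigma_km_s / 200.0)**4.02

# Dwarf spheroidals have dispersions of order 10 km/s, which puts the
# inferred black holes far below the supermassive scale:
print(f"{tremaine_mass_Msun(10):.2e}")
```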
Observe that the bounds are significantly stronger than those in the original FERMI analysis. In particular, the strongest bounds thoroughly rule out the “40 GeV DM annihilating into a pair of b quarks” interpretation of a reported excess in gamma rays coming from the galactic center. These bounds, however, come with several caveats that are described in the paper. The largest caveat is that the existence of a black hole in any of these systems is only assumed. The authors note that numerical simulations suggest that there should be black holes in these systems, but to date there has been no verification of their existence.
See this blog post at io9 for a public-level exposition and video of observational evidence for the supermassive black hole (much heavier than the intermediate mass black holes posited in the dwarf spheroidals) at the center of the Milky Way.
See Ullio et al. (astro-ph/0101481) for an early paper describing the effect of black holes on the dark matter distribution.
Title: Dark Matter Constraints from Observations of 25 Milky Way Satellite Galaxies with the Fermi Large Area Telescope
Author: FERMI-LAT Collaboration
Published: Phys.Rev. D89 (2014) 042001 [arXiv:1310.0828]
Dark matter (DM) is `dark’ because it does not directly interact with light. We suspect, however, that dark matter does interact with other Standard Model (SM) particles such as quarks and leptons. Since these SM particles do typically interact with photons, dark matter is indirectly luminous. More specifically, when two dark matter particles find each other and annihilate, their products include a spectrum of photons that can be detected by telescopes. For typical `weakly-interacting massive particle’ DM candidates, these photons are in the GeV (γ-ray) range.
This type of indirect detection is a powerful handle to search for dark matter in the galaxy. The most promising place to search for these annihilation products are places where we expect a high density of dark matter, such as the galactic center. In fact, there have been recent hints for precisely this signal (see, e.g. this astrobite). Unfortunately, the galactic center is a very complicated environment with lots of other sources of GeV-scale photons that can make a DM interpretation tricky without additional checks.
Fortunately, there are other galactic objects that are dense with dark matter and have relatively little stellar (visible) matter: dwarf spheroidals. These satellite galaxies of the Milky Way are ideal laboratories for dark matter annihilation. While they have less dark matter density than the galactic center, they also have far fewer background photons from ordinary matter. Our tool of choice is the space-based Fermi Large Area Telescope, which is sensitive to photons between 0.03 and 300 GeV and surveys the entire sky every three hours.
The photon flux from dark matter annihilation is a product of three factors:

Φ = [⟨σv⟩ / (8π m_DM²)] × [∫ (dN_γ/dE_γ) dE_γ] × [∫ dΩ ∫ ρ²(ℓ) dℓ]
The “particle physics factor” describes the dark matter properties: its mass and annihilation rate. The dN_γ/dE_γ factor describes the spectrum of photons coming from the DM annihilation products. The “astrophysics” factor is a line-of-sight integral of the squared dark matter density ρ². Note that the ρ in this factor divided by the m_DM in the particle physics factor is simply the dark matter number density; the photon flux depends on how likely it is for DM particles to find each other. The astrophysics factor is sometimes called a J factor. For some of the dwarfs, astronomers can determine the J factor based on the kinematics of the [few] stellar objects in the dwarf spheroidal.
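The factorization above can be sketched numerically. Below, a toy NFW profile stands in for the dark matter density, the J factor is computed as a line-of-sight integral of ρ², and the three factors are multiplied together. All parameters (profile scales, masses, cross sections) are illustrative placeholders, not values from the Fermi-LAT analysis:

```python
import math
import numpy as np

def nfw_density(r, rho_s=1.0, r_s=1.0):
    """NFW profile: rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1 + x)**2)

def j_factor(b=0.1, l_max=5.0, n=100_000):
    """Line-of-sight integral of rho^2 along a ray with impact
    parameter b (in units of r_s) from the dwarf center."""
    l = np.linspace(-l_max, l_max, n)
    r = np.sqrt(b**2 + l**2)
    rho2 = nfw_density(r)**2
    # trapezoid rule on a uniform grid
    return float(np.sum(0.5 * (rho2[:-1] + rho2[1:])) * (l[1] - l[0]))

def photon_flux(sigma_v, m_dm, n_gamma, J):
    """Phi = [sigma_v / (8 pi m^2)] * N_gamma * J (self-conjugate DM)."""
    return sigma_v / (8 * math.pi * m_dm**2) * n_gamma * J

# Rays closer to the dense center pick up a larger J factor:
print(j_factor(b=0.1) > j_factor(b=0.5))
```

The flux then scales linearly with J and with ⟨σv⟩/m², which is why a denser inner profile (such as a black-hole-induced spike) translates directly into stronger annihilation bounds.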
One may use the morphology—or spatial distribution of dark matter—to help subtract background photons and fit data. For this ParticleBite we won’t discuss this step further except to emphasize that these fits are where all the astrophysics “muscle” enters. Each dwarf individually sets bounds on the dark matter profile, but one can combine (or “stack”) these results into a combined bound for each DM annihilation final state. The bounds differ depending on these annihilation products because each type of particle produces a different spectrum of photons that must be re-fit relative to the background. The dark matter mass controls the energy with which the ‘primary’ annihilation products are produced so that heavier dark matter masses yield more energetic photons.
In the above plots, the green and yellow bands represent the approximate expected 1σ and 2σ sensitivity while the solid black line is the observed bound. There is a slight excess at lower masses, though the most optimistic excess, in the b-quark channel, has a significance of TS ~ 8.7, where TS is a ‘test statistic’ measure introduced in the paper. The relevant comparison is that TS ~ 25 is the standard Fermi uses for a discovery, so this excess should be understood to be fairly modest. (The paper also notes that the assumptions behind converting TS into p-values or σ are not strictly satisfied here, so a naive conversion would overestimate the significance.)
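For intuition, the TS values can be translated into approximate Gaussian significances: for one degree of freedom, TS behaves asymptotically like a χ² variable, so σ ≈ √TS. As the caveat above warns, this naive conversion overstates the true significance; the numbers below are just for orientation:

```python
import math

# Naive Gaussian-sigma equivalent of Fermi's test statistic: for one
# degree of freedom, TS ~ chi^2, so sigma ~ sqrt(TS). This is an
# approximation that overstates the significance in this analysis.

def naive_sigma(ts):
    return math.sqrt(ts)

print(round(naive_sigma(8.7), 1))   # the b-quark channel excess
print(round(naive_sigma(25.0), 1))  # Fermi's discovery threshold
```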
Note further that the “stacked” analysis is most sensitive to those dwarfs with the largest J factors. Of these, half showed an excess while the other half were consistent with no excess.
The most important feature of the above plots is the horizontal dashed line. This line represents the dark matter annihilation cross section (“annihilation rate”) that one predicts based on the requirement that the observed dark matter density is set by this annihilation process. (There are ways around this, but it remains the simplest and most natural possibility.) The relevant bounds on the dark matter models, then, comes from looking at the point where the solid line and the dashed horizontal line meet. Dark matter masses to the left (i.e. less than) this value are disfavored in the simplest models.
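Reading off the mass bound amounts to finding where the observed limit curve crosses the thermal relic cross section, roughly 3×10⁻²⁶ cm³/s. Here is a sketch with an invented limit curve; the points below are not Fermi-LAT data:

```python
import numpy as np

# Find where a hypothetical upper-limit curve on <sigma v> crosses the
# thermal relic value; DM masses to the left of the crossing are disfavored.
masses = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])        # GeV
limits = np.array([1e-26, 2e-26, 4e-26, 8e-26, 2e-25, 5e-25])  # cm^3/s
thermal = 3e-26

# Interpolate in log-log space; the limits rise monotonically with mass
# here, so np.interp (which needs increasing x-values) applies directly.
log_m_bound = np.interp(np.log(thermal), np.log(limits), np.log(masses))
m_bound = float(np.exp(log_m_bound))
print(f"masses below ~{m_bound:.0f} GeV disfavored")
```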
For example, for dark matter that annihilates to b-quarks, one finds that the dwarf spheroidals set a lower limit on the dark matter mass of around 10 GeV. We note that this bound, based on 4 years of Fermi data, is weaker than the previously published 2-year results due, in part, to a revised analysis.
The future? A gamma ray excess in the galactic center (see, e.g. this astrobite) may possibly be interpreted as a signal of dark matter with a mass of around 40 GeV annihilating into b quarks. At the moment the dwarf spheroidal bounds are too weak to probe this region. Will they ever? Since Fermi samples the entire sky, any newly identified dwarf spheroidal (e.g. from the Sloan Digital Sky Survey) automatically makes the full 4-year dataset for that dwarf available. Since the bounds scale like the square root of the accumulated data (in the DM mass range below 200 GeV), one may roughly estimate the future sensitivity to the 40 GeV mass range as requiring 16 times more data. If we consider the next 4 years (doubling the observation time), this would require roughly 4 times more dwarfs to be identified. (See, e.g. this talk for a discussion.)
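The scaling estimate in the last paragraph can be made explicit: if the cross-section bound tightens like the square root of the accumulated data (a common background-limited scaling, used here as an assumption), then a factor-of-4 stronger bound requires 16 times the data:

```python
import math

# Improvement in the cross-section bound as a function of the data ratio,
# assuming bound ~ 1/sqrt(data). Illustrative scaling only.

def bound_improvement(data_ratio):
    return math.sqrt(data_ratio)

print(bound_improvement(16))  # 16x more data -> 4x stronger bound
```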
Further reading: some useful references for indirect detection of dark matter