Crystals are dark matter’s best friends

Article title: “Development of ultra-pure NaI(Tl) detector for COSINE-200 experiment”

Authors: B.J. Park et al.

Reference: arXiv:2004.06287

The landscape of direct detection of dark matter is a perplexing one; all experiments have so far come up with deafening silence, except for a single one which promises a symphony. This is the DAMA/LIBRA experiment in Gran Sasso, Italy, which has been seeing an annual modulation in its signal for two decades now.

Such an annual modulation is as dark-matter-like as it gets. First proposed by Katherine Freese in 1987, it would result from the Earth's orbital velocity adding to the Sun's motion through the galactic dark matter halo for half of the year and subtracting from it during the other half. However, DAMA/LIBRA's results are in conflict with other experiments – with the catch that none of those used the same setup. The obvious way to settle this is to build more experiments with the DAMA/LIBRA setup, an ongoing effort that ultimately focuses on the crystals at its heart.
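To get a feel for the kind of signal being chased, here is a minimal sketch of the standard parametrization of an annually modulating rate, R(t) = R0 + Sm cos(2π(t - t0)/T). The numbers in it are illustrative placeholders, not DAMA/LIBRA's measured values; the phase is set near June 2nd, when Earth's velocity adds most to the Sun's.

```python
import numpy as np

def modulated_rate(t_days, r0=1.0, s_m=0.02, t0=152.5, period=365.25):
    """Toy annually modulated event rate, in counts/kg/day/keV.

    r0     : average (unmodulated) rate
    s_m    : modulation amplitude (a few percent of r0 for DAMA-like signals)
    t0     : phase in days since January 1st (~June 2nd)
    period : one year
    All numbers are illustrative placeholders, not fitted values.
    """
    return r0 + s_m * np.cos(2 * np.pi * (t_days - t0) / period)

if __name__ == "__main__":
    for day in (0, 91, 152, 243, 334):  # winter, spring, early June, fall, early December
        print(f"day {day:3d}: {modulated_rate(day):.4f} counts/kg/day/keV")
```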

Cylindrical crystals wrapped in reflector, bounded by photomultipliers (PMTs) and surrounded by scintillators. (COSINE-100)

The specific crystals are made of the scintillating material thallium-doped sodium iodide, NaI(Tl). Dark matter particles, and particularly WIMPs, would collide elastically with atomic nuclei and the recoil would give off photons, which would eventually be captured by photomultiplier tubes at the ends of each crystal.

Right now a number of NaI(Tl)-based experiments are at various stages of preparation around the world, with COSINE-100, located under Yangyang mountain in South Korea, already producing negative results. However, these are still not on an equal footing with DAMA/LIBRA's because of COSINE-100's higher backgrounds. What is the collaboration to do, then? The answer is to focus even more on the crystals and how they are prepared.

Setup of the COSINE-100 experiment. (COSINE-100)

Over the last couple of years some serious R&D went into growing better crystals for COSINE-200, the planned upgrade of COSINE-100. Yes, a crystal is something that can and does grow. A seed placed inside the raw material, in this case NaI(Tl) powder, leads it to organize itself around the seed’s structure over the next hours or days.

In COSINE-100 the most annoying backgrounds came from within the crystals themselves: from the production process, from natural radioactivity, and from cosmogenically induced isotopes. Let's see how each of these was tackled during the experiment's mission towards a radiopure upgrade.

Improved techniques of growing and preparing the crystals reduced contamination from the materials of the grower device and from the ambient environment. At the same time different raw materials were tried out to put the inherent contamination under control.

Among a handful of naturally present radioactive isotopes, particular care was given to 40K. 40K can decay characteristically to a 3.2 keV X-ray together with a 1,460 keV γ-ray, a combination convenient for tagging it to a large extent. The tagging is done with the help of 2,000 liters of liquid scintillator surrounding the crystals. However, if the γ-ray escapes the crystal, then the left-behind X-ray will mimic the expected signal from WIMPs… Eventually the dangerous 40K was brought down to levels comparable to those in DAMA/LIBRA through the investigation of various growing techniques and raw materials.

But the main source of radioactive background in COSINE-100 was isotopes such as 3H or 22Na, created inside the crystals by cosmic rays after the crystals had been grown. Their abundance was reduced significantly by two simple moves: the crystals were grown locally at very low altitude and installed underground within a few weeks (instead of being shipped from a lab at 1,400 meters above sea level in Colorado). Moreover, most of the remaining cosmogenic background is expected to decay away within a couple of years.
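To see why simply waiting underground helps, here is a back-of-the-envelope sketch of how an activated isotope dies away. The 2.6-year half-life is the textbook value for 22Na; the snippet is purely illustrative and not taken from the paper.

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of an unstable isotope remaining after t_years."""
    return 0.5 ** (t_years / half_life_years)

if __name__ == "__main__":
    na22_half_life = 2.6  # years; 3H is much longer-lived, ~12.3 years
    for t in (0.5, 1.0, 2.0, 3.0):
        frac = remaining_fraction(t, na22_half_life)
        print(f"after {t:.1f} years underground: {frac:.0%} of the initial 22Na remains")
```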

Components of the background, and temporal evolution of the cosmogenic radioactivity. (Source)

Where do these efforts stand? The energy range of interest for testing the DAMA/LIBRA signal is 1-6 keV, and the corresponding background target is 1 count/kg/day/keV. After the crystal R&D, the achieved background was below about 0.34 counts/kg/day/keV. In short, everything is ready for COSINE-100 to upgrade to COSINE-200 and test the annual modulation without the ambiguities that previously stood in the way.

Learn more:

More on DAMA/LIBRA in ParticleBites.

Cross-checking the modulation.

The COSINE-100 experiment.

First COSINE-100 results.

The XENON1T Excess: The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world's most sensitive dark matter experiments. The experiment consists of a huge tank of xenon placed deep underground in the Gran Sasso laboratory in Italy. It is a 'direct-detection' experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMPs, Weakly Interacting Massive Particles, which used to be everyone's favorite candidate for dark matter. However, given recent null results by WIMP-hunting direct-detection experiments and collider experiments at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, which were designed to look for heavy WIMPs colliding with xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles that are much lighter than traditional WIMPs would not leave much of an impact on large xenon nuclei, but they can leave a signal in the detector if they instead scatter off the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity. This extraordinary sensitivity is due to the experiment's exquisite control over backgrounds and its extremely low energy threshold for detection. What has gotten many physicists excited, beyond the sensitivity itself, is that the latest data show an excess of events above the expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim 'evidence' of an anomaly but short of the typical 5 sigma required to claim a discovery.
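For readers who want to translate sigmas into probabilities, here is a minimal sketch using SciPy; the one-sided Gaussian convention used below is the one typically quoted for excesses like this.

```python
from scipy.stats import norm

def one_sided_p_value(n_sigma):
    """One-sided Gaussian tail probability corresponding to an n-sigma excess."""
    return norm.sf(n_sigma)

if __name__ == "__main__":
    for n in (3.0, 3.5, 5.0):
        print(f"{n:.1f} sigma -> p = {one_sided_p_value(n):.2e}")
```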

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is: nothing. 3.5 sigma is not enough evidence to claim a discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment's energy detection threshold. The so-called 'efficiency turn-on', the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe this is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note that the background line is flat below the excess), and the excess rises above this flat background. So to explain the excess, the efficiency would have to somehow be better at low energies than at high energies, which seems very unlikely. Or there would have to be a very strange, unaccounted-for bias in which some higher-energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration performed an electron reconstruction calibration using the radioactive decays of radon-220 over exactly this energy range and was able to model the turn-on and detection efficiency very well.

Results of a calibration using radioactive decays of radon-220. One can see that the data in the efficiency turn-on (right around 2 keV) are modeled quite well and no excesses are seen.

However, the possibility of a novel Standard Model background is much more plausible. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to helium-3, an electron, and a neutrino with a half-life of around 12 years. The energy released in this decay is 18.6 keV, giving the electron an average energy of a few keV. The expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small: around 100 parts-per-billion of H2 would lead to enough tritium to explain the signal, which translates to just 3 tritium atoms per kilogram of liquid xenon. The collaboration tries its best to investigate this possibility, but it can neither rule out nor confirm such a small amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by two orders of magnitude, so it is not impossible that they were able to avoid this small amount of contamination.
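To get a feel for how small this contamination is, here is a back-of-the-envelope sketch converting a handful of tritium atoms per kilogram into a decay rate. The half-life is the standard value for tritium, and the 3 atoms/kg figure is the one quoted above; nothing here reproduces the collaboration's actual spectral modeling.

```python
import math

TRITIUM_HALF_LIFE_Y = 12.3  # years

def decays_per_kg_per_year(atoms_per_kg, half_life_years):
    """Activity of a trace isotope: A = N * ln(2) / T_half, in decays/kg/year."""
    return atoms_per_kg * math.log(2) / half_life_years

if __name__ == "__main__":
    rate = decays_per_kg_per_year(3.0, TRITIUM_HALF_LIFE_Y)
    print(f"3 tritium atoms per kg -> ~{rate:.2f} decays/kg/year")
    # i.e. of order a hundred decays per tonne-year, before any spectral
    # shape or detection-efficiency effects are folded in.
```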

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down based on the data alone. The analysis specifically searched for two signals that would show up in exactly this energy range: axions produced in the sun, and solar neutrinos interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation being slightly preferred. However, since this result was released, many have pointed out that these models would actually be in conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would have given stars an additional way to release energy, causing them to cool at a faster rate than in the Standard Model. The strength of interaction between axions and electrons needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. There are similar astrophysical constraints on neutrino magnetic moments that also make that explanation unlikely.

This has left the door open for theorists to try to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing this opportunity! New explanations appear on the arXiv every day, with no sign of stopping. In the roughly two weeks between the XENON1T announcement and the writing of this post, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or being boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun with this, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing the XENONnT experiment, the follow-up to XENON1T. XENONnT will feature a larger active volume of xenon and a lower background level, potentially allowing it to confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow a better determination of the shape of the signal, possibly distinguishing the tritium spectrum from a new-physics explanation. If the signal is real, other liquid xenon experiments like LUX and PandaX should also be able to confirm it independently in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

Listening for axions

If dark matter actually consists of a new kind of particle, then the most up-and-coming candidate is the axion. The axion is a consequence of the Peccei-Quinn mechanism, a plausible solution to the “strong CP problem”: why the strong nuclear force conserves CP symmetry even though there is no reason for it to. It is a very light neutral boson, named by Frank Wilczek after a detergent brand (in a move that obviously dates its introduction to the '70s).

Axion decay in a magnetic field: the result is a photon. (Source.)

Most experiments that try to directly detect dark matter have looked for WIMPs (weakly interacting massive particles). However, as those searches have not borne fruit, the focus started turning to axions, which make for good candidates given their properties and the fact that if they exist, then they exist in multitudes throughout the galaxies. Axions “speak” to the QCD part of the Standard Model, so they can appear in interaction vertices with hadronic loops. The end result is that axions passing through a magnetic field will convert to photons.

In practical terms, their detection boils down to having strong magnets, sensitive electronics and an electromagnetically very quiet place at one’s disposal. One can then sit back and wait for the hypothesized axions to pass through the detector as earth moves through the dark matter halo surrounding the Milky Way. Which is precisely why such experiments are known as “haloscopes.”

Now, the most veteran haloscope of all has published significant new results. Alas, it is still empty-handed, but we can look at why its update is important and how it was reached.

ADMX (Axion Dark Matter eXperiment) of the University of Washington has been around for a quarter-century. By listening for signals from axions, it progressively gnaws away at the space of allowed values for their mass and coupling to photons, focusing on an area of interest:

Latest exclusion limits on the axion mass and coupling to photons.

Unlike higher values, this area is not excluded by astrophysical considerations (e.g. stars cooling off through axion emission) and other types of experiments (such as looking for axions from the sun). In addition, the bands above the lines denoted “KSVZ” and “DFSZ” are special. They correspond to the predictions of two models with favorable theoretical properties. So, ADMX is dedicated to scanning this parameter space. And the new analysis added one more year of data-taking, making a significant dent in this ballpark.

As mentioned, the presence of axions would be inferred from a stream of photons in the detector. The excluded mass range was scanned by “tuning” the experiment to different frequencies, while at each frequency step longer observation times probed smaller values for the axion-photon coupling.
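The “tuning” works because the photon produced by a converted axion carries essentially all of the axion's rest energy, so the cavity must resonate at f = m_a c² / h. A minimal sketch of that conversion is below; the constants are standard, and the example masses are merely illustrative values in the micro-eV ballpark that haloscopes probe.

```python
PLANCK_eV_s = 4.135667696e-15  # Planck constant in eV*s

def axion_mass_to_frequency_GHz(mass_ueV):
    """Cavity frequency (GHz) resonant with an axion of mass mass_ueV (micro-eV):
    E = h*f, so f = m_a c^2 / h."""
    energy_eV = mass_ueV * 1e-6
    return energy_eV / PLANCK_eV_s / 1e9

if __name__ == "__main__":
    for m in (2.7, 3.3, 4.2):  # micro-eV, illustrative values only
        print(f"m_a = {m} micro-eV  ->  f = {axion_mass_to_frequency_GHz(m):.2f} GHz")
```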

Two things that this search needs are a lot of quiet and some good amplification, as the signal from a typical axion is expected to be as weak as that of a mobile phone left on the surface of Mars (around 10^{-23} W). The setup is indeed stripped of noise by being placed in a dilution refrigerator, which keeps its temperature at a few tenths of a degree above absolute zero. This is practically the domain governed by quantum noise, so advantage can be taken of the finesse of quantum technology: for the first time ADMX used SQUIDs, superconducting quantum interference devices, to amplify the signal.
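To appreciate why the cold and the quantum-limited amplification matter, here is a rough Dicke-radiometer estimate of how long one must integrate to pull a 10^{-23} W tone out of thermal noise. The system temperature, bandwidth, and target significance below are assumed round numbers chosen for illustration, not ADMX's actual noise budget.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def integration_time_s(p_signal_W, t_sys_K, bandwidth_Hz, snr_target=5.0):
    """Dicke radiometer equation, SNR = (P / k_B T_sys) * sqrt(t / B),
    solved for the integration time t."""
    return (snr_target * K_B * t_sys_K / p_signal_W) ** 2 * bandwidth_Hz

if __name__ == "__main__":
    # Assumed illustrative inputs: a 1e-23 W signal, a ~0.5 K system noise
    # temperature, and a ~kHz-wide axion line.
    t = integration_time_s(1e-23, 0.5, 1e3)
    print(f"time to reach SNR=5 at one frequency setting: ~{t/3600:.1f} hours")
```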

The heart of the experiment inside the refrigerator. The resonant frequency of the cavity is tuned to match the photons – hopefully – given off by axions. (Source.)

In the end, a good chunk of the parameter space which is favored by the theory might have been excluded, but the haloscope is ready to look at the rest of it. Just think of how, one day, a pulse inside a small device in a university lab might be a messenger of the mysteries unfolding across the cosmos.

References:

Publication by the ADMX collaboration. (arXiv)

Learn more:

  1. The theory behind axions.
  2. The hitchhiker’s guide to the dilution refrigerator.
  3. Intro to KSVZ and DFSZ axions (and more).
  4. Resonant cavities.

Three Birds with One Particle: The Possibilities of Axions

Title: “Axiogenesis”

Authors: Raymond T. Co and Keisuke Harigaya

Reference: https://arxiv.org/pdf/1910.02080.pdf

On the laundry list of problems in particle physics, a rare three-for-one solution could come in the form of a theorized light scalar particle fittingly named after a detergent: the axion. Frank Wilczek coined this term in reference to its potential to “clean up” the Standard Model once he realized its applicability to multiple unsolved mysteries. Although Axion the dish soap has been somewhat phased out of our everyday consumer life (being now primarily sold in Latin America), axion particles remain as a key component of a physicist’s toolbox. While axions get a lot of hype as a promising dark matter candidate, and are now being considered as a solution to matter-antimatter asymmetry, they were originally proposed as a solution for a different Standard Model puzzle: the strong CP problem. 

The strong CP problem refers to a peculiarity of quantum chromodynamics (QCD), our theory of quarks, gluons, and the strong force that mediates them: while the theory permits charge-parity (CP) symmetry violation, the ardent experimental search for CP-violating processes in QCD has so far come up empty-handed. What does this mean from a physical standpoint? Consider the neutron electric dipole moment (eDM), which roughly describes the distribution of the three quarks comprising a neutron. Naively, we might expect this orientation to be a triangular one. However, measurements of the neutron eDM, carried out by tracking changes in neutron spin precession, return a value orders of magnitude smaller than classically expected. In fact, the incredibly small value of this parameter corresponds to a neutron where the three quarks are found nearly in a line. 

The classical picture of the neutron (left) looks markedly different from the picture necessitated by CP symmetry (right). The strong CP problem is essentially a question of why our mental image should look like the right picture instead of the left. Source: https://arxiv.org/pdf/1812.02669.pdf

This would not initially appear to be a problem. In fact, in the context of CP, this makes sense: a simultaneous charge conjugation (exchanging positive charges for negative ones and vice versa) and parity inversion (flipping the sign of spatial directions) when the quark arrangement is linear results in a symmetry. Yet there are a few subtleties that point to the existence of further physics. First, this tiny value requires an adjustment of parameters within the mathematics of QCD, carefully fitting some coefficients to cancel out others in order to arrive at the desired conclusion. Second, we do observe violation of CP symmetry in particle physics processes mediated by the weak interaction, such as kaon decay, which also involves quarks. 

These arguments rest upon the idea of naturalness, a principle that has been invoked successfully several times throughout the development of particle theory as a hint toward the existence of a deeper, more underlying theory. Naturalness (in one of its forms) states that such minuscule values are only allowed if they increase the overall symmetry of the theory, something that cannot be true if weak processes exhibit CP-violation where strong processes do not. This puts the strong CP problem squarely within the realm of “fine-tuning” problems in physics; although there is no known reason for CP symmetry conservation to occur, the theory must be modified to fit this observation. We then seek one of two things: either an observation of CP-violation in QCD or a solution that sets the neutron eDM, and by extension any CP-violating phase within our theory, to zero.

This term in the QCD Lagrangian allows for CP symmetry violation. Current measurements place the value of \theta at no greater than 10^{-10}. In Peccei-Quinn symmetry, \theta is promoted to a field.
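Since the equation image is not reproduced here, the term being referred to has the standard textbook form (reconstructed, not copied from the paper):

\mathcal{L}_{\theta} = \theta \, \frac{g_s^2}{32\pi^2} \, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu}, \qquad \tilde{G}^{a\,\mu\nu} = \tfrac{1}{2} \epsilon^{\mu\nu\rho\sigma} G^{a}_{\rho\sigma}, \qquad |\theta| \lesssim 10^{-10}.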

When such an expected symmetry violation is nowhere to be found, where is a theoretician to look for such a solution? The most straightforward answer is to turn to a new symmetry. This is exactly what Roberto Peccei and Helen Quinn did in 1977, birthing the Peccei-Quinn symmetry, an extension of QCD which incorporates a CP-violating phase known as the \theta term. The main idea behind this theory is to promote \theta to a dynamical field, rather than keeping it a constant. Since quantum fields have associated particles, this also yields the particle we dub the axion. Looking back briefly to the neutron eDM picture of the strong CP problem, this means that the angular separation should also be dynamical, and hence be relegated to the minimum energy configuration: the quarks again all in a straight line. In the language of symmetries, the U(1) Peccei-Quinn symmetry is approximately spontaneously broken, giving us a non-zero vacuum expectation value and a nearly-massless Goldstone boson: our axion.

This is all great, but what does it have to do with dark matter? As it turns out, axions make for an especially intriguing dark matter candidate due to their low mass and potential to be produced in large quantities. For decades, this prowess was overshadowed by the leading WIMP candidate (weakly-interacting massive particles), whose parameter space has been slowly whittled down to the point where physicists are more seriously turning to alternatives. As there are several production mechanisms for axions in early-universe cosmology, and the full dark matter abundance could be explained through this generation, the axion is now stepping into the spotlight.

This increased focus is causing some theorists to turn to further avenues of physics as possible applications for the axion. In a recent paper, Co and Harigaya examined the connection between this versatile particle and matter-antimatter asymmetry (also called baryon asymmetry). This latter term refers to the simple observation that there appears to be more matter than antimatter in our universe, since we are predominantly composed of matter, yet matter and antimatter also seem to be produced in colliders in equal proportions. In order to explain this asymmetry, without which matter and antimatter would have annihilated and we would not exist, physicists look for any mechanism to trigger an imbalance in these two quantities in the early universe. This theorized process is known as baryogenesis.

Here’s where the axion might play a part. The \theta term, which settles to zero in its possible solution to the strong CP problem, could also have taken on any value from 0 to 360 degrees very early on in the universe. Analyzing the axion field through the conjectures of quantum gravity, if there are no global symmetries then the initial axion potential cannot be symmetric [4]. By falling from some initial value through an uneven potential, which the authors describe as a wine bottle potential with a wiggly top, \theta would cycle several times through the allowed values before settling at its minimum energy value of zero. This causes the axion field to rotate, an asymmetry which could generate a disproportionality between the amounts of produced matter and antimatter. If the field were to rotate in one direction, we would see more matter than antimatter, while a rotation in the opposite direction would result instead in excess antimatter.

The team’s findings can be summarized in the plot above. Regions in purple, red, and above the orange lines (dependent upon a particular constant \xi which is proportional to weak scale quantities) signify excluded portions of the parameter space. The remaining white space shows values of the axion decay constant and mass where the currently measured amount of baryon asymmetry could be generated. Source: https://arxiv.org/pdf/1910.02080.pdf

Introducing a third fundamental mystery into the realm of axions begets the question of whether all three problems (strong CP, dark matter, and matter-antimatter asymmetry) can be solved simultaneously with axions. And, of course, there are nuances that could make alternative solutions to the strong CP problem more favorable or other dark matter candidates more likely. Like most theorized particles, the axion comes in several formulations. It is then necessary to turn our attention to experiment to narrow down the possibilities for how axions could interact with other particles, determine what their mass could be, and answer the all-important question of whether they exist at all. Consequently, there are a plethora of axion-focused experiments up and running, with more on the horizon, that use a variety of methods spanning several subfields of physics. While these results begin to roll in, we can continue to investigate just how many problems we might be able to solve with one adaptable, soapy particle.

Learn More:

  1. A comprehensive introduction to the strong CP problem, the axion solution, and other potential solutions: https://arxiv.org/pdf/1812.02669.pdf 
  2. Axions as a dark matter candidate: https://www.symmetrymagazine.org/article/the-other-dark-matter-candidate
  3. More information on matter-antimatter asymmetry and baryogenesis: https://www.quantumdiaries.org/2015/02/04/where-do-i-come-from/
  4. The quantum gravity conjectures that axiogenesis builds upon: https://arxiv.org/abs/1810.05338
  5. An overview of current axion-focused experiments: https://www.annualreviews.org/doi/full/10.1146/annurev-nucl-102014-022120

Quark nuggets of wisdom

Article title: “Dark Quark Nuggets”

Authors: Yang Bai, Andrew J. Long, and Sida Lu

Reference: arXiv:1810.04360

Information, gold and chicken. What do they all have in common? They can all come in the form of nuggets. Naturally one would then be compelled to ask: “what about fundamental particles? Could they come in nugget form? Could that hold the key to dark matter?” Lucky for you this has become the topic of some ongoing research.

A ‘nugget’ in this context refers to large macroscopic ‘clumps’ of matter formed in the early universe that could possibly survive up until the present day to serve as a dark matter candidate. Much like nuggets of the edible variety, one must be careful to combine just the right ingredients in just the right way. In fact, there are generally three requirements to forming such an exotic state of matter:

  1. (At least) two different vacuum states separated by a potential ‘barrier’ where a phase transition occurs (known as a first-order phase transition).
  2. A charge which is conserved globally which can accumulate in a small part of space.
  3. An excess of matter over antimatter on the cosmological scale, or in other words, a large non-zero macroscopic number density of global charge.

Back in the 1980s, before much work had been done in the field of lattice quantum chromodynamics (lQCD), Edward Witten put forward the idea that the Standard Model QCD sector could in fact accommodate such an exotic form of matter. Quite simply, this would occur in the early phase of the universe when the quarks undergo color confinement to form hadrons. In particular, Witten's nuggets were realized as large macroscopic clumps of 'quark matter' with a very large concentration of baryon number, N_B > 10^{30}. However, with the advancement of lQCD techniques, the transition in which the quarks become confined now looks more like a smooth, continuous 'crossover' rather than a sharp first-order transition, making the idea somewhat unfeasible within the Standard Model.

Theorists, particularly those interested in dark matter, are not confined (for lack of a better term) to the strict details of the Standard Model and most often look to the formation of sometimes complicated ‘dark sectors’ invisible to us but readily able to provide the much needed dark matter candidate.

Dark QCD?

Obtaining a first-order phase transition to form our quark nuggets need not be a problem if we consider a QCD-type theory that does not interact with the Standard Model particles. More specifically, we can consider a set of dark quarks and dark gluons with arbitrary characteristics like masses, couplings, numbers of flavors, or numbers of colors (all of which are of course quite settled for Standard Model QCD). In fact, looking at the numbers of flavors and colors of dark QCD in Figure 1, we can see in the white unshaded region a number of models that can exist with a first-order phase transition, as required to form these dark quark nuggets.

Figure 1: The white unshaded region corresponds to dark QCD models which may permit a first-order phase transition and thus the existence of ‘dark quark nuggets’.

As with ordinary quarks, the distinction between the two phases is tied to a process known as chiral symmetry breaking. When the temperature of the universe cools to this particular scale, color confinement of the quarks occurs around the same time, such that no single colored quark can be observed on its own – only colorless bound states.

Forming a nugget

As we have briefly mentioned, the dark nuggets are formed as the universe undergoes a 'dark' phase transition from a phase where the dark color is unconfined to one where it is confined. At some critical temperature, due to the nature of first-order phase transitions, bubbles of the new confined phase (full of dark hadrons) begin to nucleate out of the dark quark-gluon plasma. The growth of these bubbles is driven by a difference in pressure, reflecting the fact that the unconfined and confined vacuum states have different energies. The almost massless particles of the dark plasma tend to scatter off the advancing bubble walls rather than enter the confined phase as heavy dark (anti)baryons, and hence a large amount of dark baryon number accumulates in the unconfined phase. Eventually, as these bubbles merge and coalesce, we would expect local regions of remaining dark quark-gluon plasma, unconfined and stable against collapse thanks to Fermi degeneracy pressure (see the reference below for more on this). An illustration is shown in Figure 2. Calculations with varying confinement scales estimate masses anywhere between 10^{-7} and 10^{23} grams, with radii from 10^{-15} to 10^{8} cm, so these can truly be classed as macroscopic dark objects!

Figure 2: Dark Quark Nuggets are a phase of unconfined dark quark-gluon plasma kept stable by the balance between Fermi degeneracy pressure and vacuum pressure from the separation between the unconfined and confined phases.

How do we know they could be there? 

There are a number of ways to infer the existence of dark quark nuggets, but two of the main ones are: (i) as a dark matter candidate and (ii) through probes of the dark QCD model that provides them. Cosmologically, the latter can imply the existence of a dark form of radiation, which ultimately can leave an imprint on the Cosmic Microwave Background (CMB). In a similar vein, one recent avenue of study is the production of a steady background of gravitational waves emerging from the existence of a first-order phase transition – one of the key requirements for dark quark nugget formation. More importantly, they can be probed through astrophysical means if they share some coupling (albeit small) with the Standard Model particles. The standard technique of direct detection with Earth-based experiments could be the way to go – and furthermore, there may be the possibility of cosmic ray production from collisions of multiple dark quark nuggets. These, along with a number of other possible observations, span the enormous range of nugget sizes and masses shown in Figure 3.

Figure 3: Range of dark quark nugget masses and sizes and their possible detection methods.

To conclude, note that within such a generic framework a number of well-motivated theories may predict (or in fact unavoidably contain) quark nuggets that can serve as interesting dark matter candidates with a lot of fun phenomenology to play with. It is only up to the theorist's imagination where to go from here!

References and further reading:

Dragonfly 44: A potential Dark Matter Galaxy

Title: A High Stellar Velocity Dispersion and ~100 Globular Clusters for the Ultra Diffuse Galaxy Dragonfly 44

Publication: ApJ, Volume 828, Number 1; arXiv:1606.06291

The title of this paper sounds like a standard astrophysics analysis; but dig a little deeper and you'll find what I think is an incredibly interesting, surprising, and unexpected observation.

The Coma Cluster: NASA, ESA, and the Hubble Heritage Team (STScI/AURA)

Last year, the team behind the Dragonfly Telephoto Array used the W.M. Keck Observatory and the Gemini North Telescope on Maunakea, Hawaii, to study the Coma cluster (a large cluster of galaxies in the constellation Coma – I've included a Hubble image to the left). The team identified a population of large, very low surface brightness (i.e., not a lot of stars), spheroidal galaxies, among them an Ultra Diffuse Galaxy (UDG) called Dragonfly 44 (shown below). They determined that Dragonfly 44 has so few stars that their gravity could not hold it together – so some other matter had to be involved – namely DARK MATTER (my favorite kind of unknown matter).

 

The ultra-diffuse galaxy Dragonfly 44. The galaxy consists almost entirely of dark matter. It is surrounded by faint, compact sources. Image credit: Pieter van Dokkum / Roberto Abraham / Gemini Observatory / SDSS / AURA.

The team used the DEIMOS instrument installed on Keck II to measure the velocities of stars for 33.5 hours over a period of six nights so they could determine the galaxy's mass. The measured stellar velocity dispersion suggests that Dragonfly 44 has a mass of about one trillion solar masses, about the same as the Milky Way. However, the galaxy emits only 1% of the light emitted by the Milky Way. In other words, the Milky Way has more than a hundred times more stars than Dragonfly 44. I've also included the plot of mass-to-light ratio vs. dynamical mass below, which illustrates how unique Dragonfly 44 is compared to other dark-matter-dominated galaxies like dwarf spheroidal galaxies.
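A quick sanity check of that comparison, using only the round numbers quoted in this paragraph plus one assumed input (the Milky Way's luminosity, taken here as a commonly quoted ~2×10^{10} solar luminosities; it is not a number from the paper):

```python
# Back-of-the-envelope mass-to-light comparison using the round numbers in the text.
M_MW   = 1e12          # solar masses; the text says Dragonfly 44's mass is about the same
M_DF44 = 1e12          # solar masses
L_MW   = 2e10          # solar luminosities, an assumed, commonly quoted ballpark value
L_DF44 = 0.01 * L_MW   # Dragonfly 44 emits only ~1% of the Milky Way's light

if __name__ == "__main__":
    print(f"Milky Way:    M/L ~ {M_MW / L_MW:.0f} (solar units)")
    print(f"Dragonfly 44: M/L ~ {M_DF44 / L_DF44:.0f} (solar units)")
    # Same total mass, one hundredth of the light: Dragonfly 44's mass-to-light
    # ratio comes out ~100x larger than the Milky Way's, whatever value of L_MW
    # we assume.
```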

Relation between dynamical mass-to-light ratio and dynamical mass. Open symbols are dispersion-dominated objects from Zaritsky, Gonzalez, & Zabludoff (2006) and Wolf et al. (2010). The UDGs VCC 1287 (Beasley et al. 2016) and Dragonfly 44 fall outside of the band defined by the other galaxies, having a very high M/L ratio for their mass.

What is particularly exciting is that we don’t understand how galaxies like this form.

Their research indicates that these UDGs could be failed galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. But we’ll need to discover more to fully understand them.

Further reading (works by the same authors)
Forty-Seven Milky Way-Sized, Extremely Diffuse Galaxies in the Coma Cluster, arXiv:1410.8141
Spectroscopic Confirmation of the Existence of Large, Diffuse Galaxies in the Coma Cluster, arXiv:1504.03320

Dark matter or Pulsars?? cont…

Title: Estimating the GeV Emission of Millisecond Pulsars in Dwarf Spheroidal Galaxies
Publication: arXiv:1607.06390, submitted to ApJL

Howdy, particlebite enthusiasts! I'm blogging this week from ICHEP. Over the next week there will be a lot of exciting updates from the particle physics community… What happened to that 750 GeV bump? Are there any new bumps for us to be excited about? Have we broken the Standard Model yet? All of these will come later in the week – today is registration. In the meantime, there have been a lot of interesting papers circulating about disentangling dark matter from our favorite astrophysical background: pulsars.

The paper that is the focus of this post delves deeper into understanding the potential gamma-ray emission from pulsars in dwarf spheroidal galaxies (dsphs). The density of millisecond pulsars (MSPs) is related to the density of stars in a cluster. In low-density stellar environments, such as dsphs, the abundance of MSPs is expected to be proportional to the stellar mass (it is much higher for globular clusters and the Galactic center). Remember, the advantage of dsphs in looking for a dark matter signal, when compared with, for example, the Galactic center, is that they have many fewer detectable gamma-ray emitting sources – like MSPs (see arXiv:1503.02641 for a recent Fermi-LAT paper). However, as our instruments get more and more sensitive, the probability of detecting gamma rays from astrophysical sources in dsphs goes up as well.

MSP gamma-ray luminosity function (0.1–100 GeV) normalized to the stellar mass of the Milky Way. Error bars correspond to statistical uncertainty associated with the finite number of LAT-detected MSPs. The shaded gray band represents the 1σ statistical uncertainty on the broken power-law fit to these data and dashed gray lines represent the systematic uncertainty envelope (distances to LAT-detected MSPs, spatial distribution of Galactic MSPs, and effective selection function of LAT pulsar catalog).

This work estimates the gamma-ray emission (the luminosity function) expected from MSPs in dsphs. The authors assume that the number of MSPs is proportional to the stellar mass and that their spectrum is similar to that of the 90 known MSPs in the Galactic disk (see the figure on the right), which they fit with a broken power law. This result can then be scaled by the number of MSPs predicted for each dsph and by the dsph's distance, yielding a prediction of the gamma-ray flux we would expect from MSPs in an individual dsph.
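Schematically, the scaling works like the toy sketch below. The per-stellar-mass normalization is a made-up placeholder of roughly the right order of magnitude, not the paper's fitted luminosity function, and the Fornax-like inputs are rough literature values used only for illustration.

```python
import numpy as np

CM_PER_KPC = 3.086e21

def expected_msp_flux(stellar_mass_msun, distance_kpc, lum_per_msun=1e29):
    """Toy estimate of the MSP gamma-ray flux from a dwarf spheroidal.

    Assumes the total MSP photon luminosity scales linearly with stellar mass,
    L = lum_per_msun * M_star [ph/s], and dilutes as 1/(4 pi d^2).
    The normalization is a placeholder, NOT the paper's fitted value.
    """
    luminosity = lum_per_msun * stellar_mass_msun        # photons / s
    distance_cm = distance_kpc * CM_PER_KPC
    return luminosity / (4.0 * np.pi * distance_cm**2)   # photons / cm^2 / s

if __name__ == "__main__":
    # Rough, illustrative inputs for a Fornax-like dsph: ~2e7 Msun of stars, ~140 kpc away.
    print(f"Fornax-like dsph: ~{expected_msp_flux(2e7, 140):.1e} ph/cm^2/s")
```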

 

They found that for the highest-stellar-mass dsphs (the classical ones, such as Fornax and Draco), there is a modest MSP population. However, even for the largest classical dsph, Fornax, the predicted MSP flux above 500 MeV is ~10^{-12} ph cm^{-2} s^{-1}, roughly two orders of magnitude below the typical flux upper limits obtained at high Galactic latitudes after six years of the LAT survey, ~10^{-10} ph cm^{-2} s^{-1} (see arXiv:1503.02641 again). The predicted flux and sensitivity are shown below.

 

Expected flux versus J-factor (remember, the J-factor scales with distance). Blue points indicate the predicted MSP contribution in 30 Milky Way dsphs and dsph candidates; J-factor uncertainties are shown for kinematically confirmed dsphs only. The gray shaded band represents the typical upper limit derived in high-Galactic-latitude blank fields after 6 years of the LAT survey. The red curve represents a DM annihilation model that is consistent with both DM interpretations of the Galactic Center Excess and the characteristic spectral shape of MSPs.

So all in all, this is good news for dsphs as dark matter targets. Understanding the backgrounds is imperative for having confidence in an analysis if a signal is found, and this gives us more confidence that we understand one of the dominant backgrounds in the hunt for dark matter.

Dark matter or Pulsars?

Title: 3FGL Demographics Outside the Galactic Plane using Supervised Machine Learning: Pulsar and Dark Matter Subhalo Interpretations
Publication: arXiv:1605.00711, accepted to ApJ

The universe has a way of keeping scientists guessing. For over 70 years, scientists have been trying to understand the particle nature of dark matter. We’ve buried detectors deep underground to shield them from backgrounds, smashed particles together at inconceivably high energies, and dedicated instruments to observing where we have measured dark matter to be a dominant component. Like any good mystery, this has yielded more questions than answers.

There are a lot of ideas as to what the distribution of dark matter looks like in the universe. One example is from a paper by L. Pieri et al., (PRD 83 023518 (2011), arXiv:0908.0195). They simulated what the gamma-ray sky would look like from dark matter annihilation into b-quarks. The results of their simulation are shown below. The plot is an Aitoff projection in galactic coordinates (meaning that the center of the galaxy is at the center of the map).

Gamma-ray sky map from dark matter annihilation into bb between 3-40 GeV, using the Via Lactea simulation. L. Pieri et al. (PRD 83, 023518 (2011), arXiv:0908.0195)

The obvious bright spot is the galactic center. This is because the center of the Milky Way has the highest density of dark matter nearby (F. Iocco, Pato, Bertone, Nature Physics 11, 245–248 (2015)). Just for some context, the center of the Milky Way is ~8.5 kiloparsecs, or about 27,700 light years, away from us… so it's a big neighborhood. However, the center of the galaxy is particularly hard to observe because the Galaxy itself is obstructing our view. As it turns out, there are lots of stars, gas and dust in our Galaxy 🙂

This map also shows us that there are other regions of high dark matter density away from the Galactic center. These could be dark matter halos, dwarf spheroidal galaxies, galaxy clusters, or anything else with a high density of dark matter. The paper I’m discussing uses this simulation in combination with the Fermi-LAT 3rd source catalog (3FGL) (Fermi-LAT Collaboration, Astrophys. J. Suppl 218 (2015) arXiv:1501.02003).

Over 1/3 of the sources in the 3FGL are unassociated with a known astrophysical source (this means we don't know what kind of source is producing the gamma rays). The paper analyzes these sources to see if their gamma-ray emission is consistent with dark matter annihilation or if it is more consistent with the spectral shape from pulsars, rapidly rotating neutron stars with strong magnetic fields that emit radio waves (and gamma-rays) in very regular pulses. These are a fascinating class of astrophysical objects and I'd highly recommend reading up on them (see NASA's site). The challenge is that the gamma-ray spectrum from dark matter annihilation into b-quarks is surprisingly similar to that from pulsars (see below).

Gamma-ray spectra of dark matter annihilating into b-quarks and taus. (Image produced by me using DMFit)

Gamma-ray spectrum from pulsars from the globular cluster 47 Tuc (Fermi-LAT Collaboration)

Using two different machine learning techniques, they found 34 candidate sources away from the Galactic plane whose spectra are consistent with both dark matter annihilation and pulsars. Generally, if a source can be explained by something other than dark matter, that's the accepted interpretation. So, the currently favored astrophysical interpretation for these objects is that they are pulsars. Yet, these objects could also be interpreted as dark matter annihilation taking place in ultra-faint dwarf galaxies or dark matter subhalos. Unfortunately, Fermi-LAT spectra are not sufficient to break the degeneracies between the different scenarios. The distribution of the 34 dark matter subhalo candidates found in this work is shown below.

Galactic distribution of 34 high-latitude Galactic candidates (red circles) superimposed on a smoothed Fermi LAT all-sky map for energies E ≥ 1 GeV based on events collected during the period 2008 August 4–2015 August 4 (Credit: Fermi LAT Collaboration). High-latitude 3FGL pulsars (blue crosses) are also plotted for comparison.

The paper presents scenarios that support the pulsar interpretation as well as scenarios consistent with the dark matter interpretation. If they are pulsars, the 34 candidates are in excellent agreement with predictions from a new population model that predicts many more pulsars than are currently known. However, if they are dark matter substructures, the authors can also place upper limits on the number of Galactic subhalos surviving today and on the dark matter annihilation cross section. The cross-section limit is shown below.

 

Upper limits on the dark matter annihilation cross section for the b-quark channel using the 14 subhalo candidates very far from the galactic plane (>20 degrees) (black solid line). The dashed red line is an upper limit derived from the Via Lactea II simulation when zero 3FGL subhalos are adopted (Schoonenberg et al. 2016). The blue line corresponds to the constraint for zero 3FGL subhalo candidates using the Aquarius simulation instead (Bertoni, Hooper, & Linden 2015). The horizontal dotted line marks the canonical thermal relic cross section (Steigman, Dasgupta, & Beacom 2012).

 

The only thing we can do (beyond waiting for more Fermi-LAT data) is to try to identify these sources definitively as pulsars, which requires extensive follow-up observations using other telescopes (in particular, radio telescopes looking for pulses). So stay tuned!

Other reading: See also Chris Karwin’s ParticleBite on the Fermi-LAT analysis.

The Fermi LAT Data Depicting Dark Matter Detection

The center of the galaxy is brighter than astrophysicists expected. Could this be the result of the self-annihilation of dark matter? Chris Karwin, a graduate student from the University of California, Irvine presents the Fermi collaboration’s analysis.

Editor’s note: this is a guest post by one of the students involved in the published result.

Presenting: Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Authors: The Fermi-LAT Collaboration (ParticleBites blogger is a co-author)
Reference: arXiv:1511.02938, Astrophys. J. 819 (2016) no. 1, 44

Artist's rendition of the Fermi Gamma-ray Space Telescope in orbit. Image from NASA.

Introduction

Like other telescopes, the Fermi Gamma-Ray Space Telescope is a satellite that scans the sky collecting light. Unlike many telescopes, it searches for very high energy light: gamma-rays. The satellite's main component is the Large Area Telescope (LAT). When this detector is hit with a high-energy gamma-ray, it measures its energy and the direction in the sky from which it originated. The data provided by the LAT is an all-sky photon counts map:

All-sky counts map of gamma-rays as seen by the Fermi-LAT. The color scale corresponds to the number of detected photons. Image from NASA.

In 2009, researchers noticed that there appeared to be an excess of gamma-rays coming from the galactic center. This excess is found by making a model of the known astrophysical gamma-ray sources and then comparing it to the data.

What makes the excess so interesting is that its features seem consistent with predictions from models of dark matter annihilation. Dark matter theory and simulations predict:

  1. The distribution of dark matter in space. The gamma rays coming from dark matter annihilation should follow this distribution, or spatial morphology.
  2. The particles to which dark matter directly annihilates. This gives a prediction for the expected energy spectrum of the gamma-rays.

Although a dark matter interpretation of the excess is a very exciting scenario that would tell us new things about particle physics, there are also other possible astrophysical explanations. For example, many physicists argue that the excess may be due to an unresolved population of milli-second pulsars. Another possible explanation is that it is simply due to the mis-modeling of the background. Regardless of the physical interpretation, the primary objective of the Fermi analysis is to characterize the excess.

The main systematic uncertainty of the experiment is our limited understanding of the backgrounds: the gamma rays produced by known astrophysical sources. In order to include this uncertainty in the analysis, four different background models are constructed. Although these models are methodically chosen so as to account for our lack of understanding, it should be noted that they do not necessarily span the entire range of possible error. For each of the background models, a gamma-ray excess is found. With the objective of characterizing the excess, additional components are then added to the model. Among the different components tested, it is found that the fit is most improved when dark matter is added. This is an indication that the signal may be coming from dark matter annihilation.

Analysis

This analysis is interested in the gamma rays coming from the galactic center. However, when looking towards the galactic center the telescope detects all of the gamma-rays coming from both the foreground and the background. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources.

Schematic of the experiment. We are interested in gamma-rays coming from the galactic center, represented by the red circle. However, the LAT detects all of the gamma-rays coming from the foreground and background, represented by the blue region. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources. Image adapted from Universe Today.

An overview of the analysis chain is as follows. The model of the observed region comes from performing a likelihood fit of the parameters for the known astrophysical sources. A likelihood fit is a statistical procedure that calculates the probability of observing the data given a set of parameters. In general there are two types of sources:

  1. Point sources such as known pulsars
  2. Diffuse sources due to the interaction of cosmic rays with the interstellar gas and radiation field
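As a toy illustration of what a likelihood fit means in this context, the sketch below fits the normalizations of two invented templates to fake binned counts by maximizing a Poisson likelihood. It is a minimal cartoon of the procedure, not the collaboration's actual analysis chain, and every number in it is made up.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Invented ingredients: a diffuse-background template and a point-source template,
# here just short arrays of expected counts per energy bin.
diffuse_template = np.array([50.0, 55.0, 60.0, 58.0, 52.0])
point_src_template = np.array([0.0, 2.0, 20.0, 2.0, 0.0])
observed = rng.poisson(1.1 * diffuse_template + 0.8 * point_src_template)

def neg_log_likelihood(norms):
    """-log L for Poisson-distributed bin counts, given template normalizations."""
    n_diffuse, n_point = norms
    expected = n_diffuse * diffuse_template + n_point * point_src_template
    return -np.sum(poisson.logpmf(observed, expected))

if __name__ == "__main__":
    result = minimize(neg_log_likelihood, x0=[1.0, 1.0],
                      bounds=[(1e-3, None), (1e-3, None)])
    print("best-fit normalizations (diffuse, point source):", result.x)
```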

Parameters for these two types of sources are fit at the same time. One of the main uncertainties in the background is the cosmic ray source distribution. This is the number of cosmic ray sources as a function of distance from the center of the galaxy. It is believed that cosmic rays come from supernovae. However, the source distribution of supernova remnants is not well determined. Therefore, other tracers must be used. In this context a tracer refers to a measurement that can be made to infer the distribution of supernova remnants. This analysis uses both the distribution of OB stars and the distribution of pulsars as tracers. The former refers to OB associations, which are regions of O-type and B-type stars. These hot massive stars are progenitors of supernovae. In contrast to these progenitors, the distribution of pulsars is also used since pulsars are the end state of supernovae. These two extremes serve to encompass the uncertainty in the cosmic ray source distribution, although, as mentioned earlier, this uncertainty is by no means bracketing. Two of the four background model variants come from these distributions.

An overview of the analysis chain. In general there are two types of sources: point sources and diffuse source. The diffuse sources are due to the interaction of cosmic rays with interstellar gas and radiation fields. Spectral parameters for the diffuse sources are fit concurrently with the point sources using a likelihood fit. The question mark represents the possibility of an additional component possibly missing from the model, such as dark matter.

The information pertaining to the cosmic rays, gas, and radiation fields is input into a propagation code called GALPROP. This produces an all-sky gamma-ray intensity map for each of the physical processes that produce gamma-rays. These processes include: neutral pions, produced when cosmic-ray protons interact with the interstellar gas, which quickly decay into gamma-rays; cosmic-ray electrons up-scattering low-energy photons of the radiation field via inverse Compton scattering; and cosmic-ray electrons interacting with the gas, producing gamma-rays via bremsstrahlung radiation.

Residual map for one of the background models. Image from 1511.02938

The maps of all the processes are then tuned to the data. In general, tuning is a procedure by which the background models are optimized for the particular data set being used. This is done using a likelihood analysis. There are two different tuning procedures used for this analysis. One tunes the normalization of the maps, and the other tunes both the normalization and the extra degrees of freedom related to the gas emission interior to the solar circle. These two tuning procedures, performed for the two cosmic ray source models, make up the four different background models.

Point source models are then determined for each background model, and the spectral parameters for both diffuse sources and point sources are simultaneously fit using a likelihood analysis.

Results and Conclusion

Best fit dark matter spectra for the four different background models. Image: 1511.02938

In the plot of the best fit dark matter spectra for the four background models, the hatching of each curve corresponds to the statistical uncertainty of the fit. The systematic uncertainty can be interpreted as the region enclosed by the four curves. Results from other analyses of the galactic center are overlaid on the plot. This result shows that the galactic center analysis performed by the Fermi collaboration allows a broad range of possible dark matter spectra.

The Fermi analysis has shown that, within systematic uncertainties, a gamma-ray excess coming from the galactic center is detected. In order to try to explain this excess, additional components were added to the model. Among the additional components tested, it was found that the fit is most improved with the addition of a dark matter component. However, this does not establish that a dark matter signal has been detected. There is still a good chance that the excess is due to something else, such as an unresolved population of millisecond pulsars or mis-modeling of the background. Further work must be done to better understand the background and better characterize the excess. Nevertheless, it remains an exciting prospect that the gamma-ray excess could be a signal of dark matter.

 

Background reading on dark matter and indirect detection:

Respecting your “Elders”

Theoretical physicists have designed a new way to explain how dark matter interactions can produce the observed amount of dark matter in the universe today. This elastically decoupling dark matter framework is a hybrid of conventional and novel dark matter models.

Presenting: Elastically Decoupling Dark Matter
Authors: Eric Kuflik, Maxim Perelstein, Nicolas Rey-Le Lorier, Yu-Dai Tsai
Reference: arXiv:1512.04545, Phys. Rev. Lett. 116, 221302 (2016)

The particle identity of dark matter is one of the biggest open questions in physics. The simplest and most widely assumed explanation is that dark matter is a weakly-interacting massive particle (WIMP). Assuming that dark matter starts out in thermal equilibrium in the hot plasma of the early universe, the present cosmic abundance of WIMPs is set by the balance of two effects:

  1. When two WIMPs find each other, they can annihilate into ordinary matter. This depletes the number of WIMPs in the universe.
  2. The universe is expanding, making it harder for WIMPs to find each other.

This process of “thermal freeze out” leads to an abundance of WIMPs controlled by the dark matter mass and interaction strength. The term ‘weakly-interacting massive particle’ comes from the observation that a dark matter particle with roughly the mass of the weak force carriers, interacting through the weak nuclear force, gives the experimentally measured dark matter density today.
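That coincidence, often called the "WIMP miracle", can be captured by the standard freeze-out rule of thumb, Omega h^2 ≈ 3×10^{-27} cm^3 s^{-1} / <σv>. The sketch below is that textbook approximation (good only to factors of a few), not the calculation done in the paper.

```python
def relic_density_h2(sigma_v_cm3_per_s):
    """Rule-of-thumb WIMP relic abundance from thermal freeze-out:
    Omega h^2 ~ 3e-27 cm^3/s / <sigma v>  (a standard back-of-the-envelope
    approximation, good to factors of a few)."""
    return 3e-27 / sigma_v_cm3_per_s

if __name__ == "__main__":
    # A weak-scale annihilation cross section, <sigma v> ~ 3e-26 cm^3/s, lands
    # close to the measured dark matter density, Omega_DM h^2 ~ 0.12.
    for sv in (3e-27, 3e-26, 3e-25):
        print(f"<sigma v> = {sv:.0e} cm^3/s  ->  Omega h^2 ~ {relic_density_h2(sv):.2f}")
```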

Two ways for a new particle, X, to produce the observed dark matter abundance: (left) WIMP annihilation into Standard Model (SM) particles versus (right) SIMP 3-to-2 interactions that reduce the amount of dark matter.

More recently, physicists noticed that dark matter with very strong interactions with itself (not with ordinary matter) can produce the correct dark matter density in another way. These “strongly interacting massive particle” (SIMP) models regulate the amount of dark matter through 3-to-2 interactions that reduce the total number of dark matter particles, rather than through annihilation into ordinary matter.

The authors of 1512.04545 have proposed an intermediate road that interpolates between these two types of dark matter: the “elastically decoupling dark matter” (ELDER) scenario. ELDERs have both of the above interactions: they can annihilate pairwise into ordinary matter, or sets of three ELDERs can turn into two ELDERs.

Thermal history of ELDERs, adapted from 1512.04545.

The cosmic history of these ELDERs is as follows:

  1. ELDERs are produced in the thermal bath immediately after the big bang.
  2. Pairs of ELDERs annihilate into ordinary matter. Like WIMPs, they interact weakly with ordinary matter.
  3. As the universe expands, the rate of annihilation into Standard Model particles falls below the rate at which the universe expands.
  4. Assuming that the ELDERs interact strongly amongst themselves, the 3-to-2 number-changing process still occurs. Because this process distributes the energy of 3 ELDERs in the initial state to 2 ELDERs in the final state, the two outgoing ELDERs have more kinetic energy: they’re hotter. This turns out to largely counteract the effect of the expansion of the universe.

The neat effect here is that the abundance of ELDERs is actually set by their interaction with ordinary matter, like WIMPs. However, because of this 3-to-2 heating period, they are able to produce the observed present-day dark matter density for very different choices of interaction strengths. In this sense, the authors show that this framework opens up a new “phase diagram” in the space of dark matter theories:

A "phase diagram" of dark matter models. The vertical axis represents the dark matter self-coupling strength while the horizontal axis represents the coupling to ordinary matter.

Background reading on dark matter: