You might have heard that one of the big things we are looking for in collider experiments is the ever-elusive dark matter particle. But given that dark matter particles are expected to interact only very rarely with regular matter, how would you know if you happened to make some in a collision? The so-called ‘direct detection’ experiments have to operate giant multi-ton detectors in extremely low-background environments in order to be sensitive to an occasional dark matter interaction. In the noisy environment of a particle collider like the LHC, where collisions producing sprays of particles happen every 25 nanoseconds, the extremely rare interaction of dark matter with our detector is likely to be missed. But instead of finding dark matter by seeing it in our detector, we can instead find it by not seeing it. That may sound paradoxical, but it’s how most collider-based searches for dark matter work.
The trick is based on every physicist’s favorite principle: the conservation of energy and momentum. We know that energy and momentum will be conserved in a collision, so if we know the initial momentum of the incoming particles, and measure everything that comes out, then any invisible particles produced will show up as an imbalance between the two. In a proton-proton collider like the LHC we don’t know the initial momentum of the particles along the beam axis, but we do know that they were traveling along that axis. That means that the net momentum in the direction away from the beam axis (the ‘transverse’ direction) should be zero. So if we see a momentum imbalance going away from the beam axis, we know that there is some ‘invisible’ particle traveling in the opposite direction.
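To make the bookkeeping concrete, here is a toy sketch (illustrative numbers, not real detector code) of how a transverse momentum imbalance is computed from the visible particles:

```python
import math

# Each visible particle is represented by its (px, py) in GeV,
# transverse to the beam (z) axis. These values are made up.
visible_particles = [(40.0, 10.0), (-15.0, 25.0), (-5.0, -20.0)]

# The visible transverse momenta should sum to zero; any imbalance
# is attributed to invisible particles (neutrinos, or perhaps dark matter).
sum_px = sum(px for px, py in visible_particles)
sum_py = sum(py for px, py in visible_particles)

# The missing transverse momentum vector points opposite the visible sum.
met_x, met_y = -sum_px, -sum_py
met = math.hypot(met_x, met_y)
print(f"Missing transverse momentum: {met:.1f} GeV")  # 25.0 GeV here
```

In a real analysis the sum runs over thousands of reconstructed objects and calorimeter deposits, but the principle is exactly this vector sum.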
We normally refer to the amount of transverse momentum imbalance in an event as its ‘missing momentum’. Any collision in which an invisible particle was produced will have missing momentum as a tell-tale sign. But while it is a very interesting signature, missing momentum can actually be very difficult to measure. That’s because in order to tell if there is anything missing, you have to accurately measure the momentum of every particle in the collision. Our detectors aren’t perfect: any particles we miss, or whose momenta we mis-measure, will show up as a ‘fake’ missing energy signature.
Even if you can measure the missing energy well, dark matter particles are not the only ones invisible to our detector. Neutrinos are notoriously difficult to detect and will not get picked up by our detectors, producing a ‘missing energy’ signature. This means that any search for new invisible particles, like dark matter, has to understand the background of neutrino production (often from the decay of a Z or W boson) very well. No one ever said finding the invisible would be easy!
However, particle physicists have been studying these processes for a long time, so we have gotten pretty good at measuring missing energy in our events and modeling the standard model backgrounds. Missing energy is a key tool that we use to search for dark matter, supersymmetry and other physics beyond the standard model.
We might not have gotten here without dark matter. It was the gravitational pull of dark matter, which makes up most of the mass of galactic structures, that kept heavy elements — the raw material of Earth-like rocky planets — from flying away after the first round of supernovae at the advent of the stelliferous era. Without this invisible pull, all structures would have been much smaller than seen today, and stars much more rare.
Thus with knowledge of dark matter comes existential gratitude. But the microscopic identity of dark matter is one of the biggest scientific enigmas of our times, and what we don’t know could yet kill us. This two-part series is about the dangerous end of our ignorance, reviewing some inconvenient prospects sketched out in the dark matter literature. Reader discretion is advised.
[Note: The scenarios outlined here are based on theoretical speculations of dark matter’s identity. Such as they are, these are unlikely to occur, and even if they do, extremely unlikely within the lifetime of our species, let alone that of an individual. In other words, nobody’s sleep or actuarial tables need be disturbed.]
Carcinogenic dark matter
Maurice Goldhaber quipped that “you could feel it in your bones” that protons are cosmologically long-lived, as otherwise our bodies would have self-administered a lethal dose of ionizing radiation. (This observation sets a lower limit on the proton lifetime at a comfortable multiple of the age of the universe.) Could we laugh similarly about dark matter? The Earth is probably amid a wind of particle dark matter, a wind that could trigger fatal ionization in our cells if encountered too frequently. The good news is that if dark matter is made of weakly interacting massive particles (WIMPs), K. Freese and C. Savage report safety: “Though WIMP interactions are a source of radiation in the body, the annual exposure is negligible compared to that from other natural sources (including radon and cosmic rays), and the WIMP collisions are harmless to humans.”
The bad news is that the above statement assumes dark matter is distributed smoothly in the Galactic halo. There are interesting cosmologies in which dark matter collects in high-density “clumps” (a.k.a. “subhalos”, “mini-halos”, or “mini-clusters”). According to J. I. Collar, the Earth encountering these clumps every 30–100 million years could explain why mass extinctions of life occur periodically on that timescale. During transits through the clumps, dark matter particles could undergo high rates of elastic collisions with nuclei in life forms, injecting 100–200 keV of energy per micrometer of transit, just right to “induce a non-negligible amount of radiation damage to all living tissue”. We are in no hurry for the next dark clump.
Eruptive dark matter
If your dark matter clump doesn’t wipe out life efficiently via cancer, A. Abbas and S. Abbas recommend waiting another five million years. It takes that long for the clump dark matter to gravitationally capture in Earth, settle in its core, self-annihilate, and heat the mantle, setting off planet-wide volcanic fireworks. The resulting chain of events would end, as the authors rattle off enthusiastically, in “the depletion of the ozone layer, global temperature changes, acid rain, and a decrease in surface ocean alkalinity.”
Armageddon dark matter
If cancer and volcanoes are not dark matter’s preferred methods of prompting mass extinction, it could get the job done with old-fashioned meteorite impacts.
It is usually supposed that dark matter occupies a spherical halo that surrounds the visible, star-and-gas-crammed disk of the Milky Way. This baryonic pancake was formed when matter, starting out in a spinning sphere, cooled down by radiating photons and shrank along the axis of rotation; thanks to conservation of angular momentum, its radial extent was preserved. No such dissipative process is known to govern dark matter, so it retains its spherical shape. However, a small component of dark matter might still have cooled by emitting some unseen radiation such as “dark photons”. That would result in a “dark disk” sub-structure co-existing in the Galactic midplane with the visible disk. Every 35 million years the Solar System crosses the Galactic midplane, and when that happens, a dark disk with a surface density of about 10 M☉/pc² could tidally perturb the Oort Cloud and send comets shooting toward the inner planets, causing periodic mass extinctions. So suggest L. Randall and M. Reece, whose arXiv comment “4 figures, no dinosaurs” is as much part of the particle physics lore as Randall’s book that followed the paper, Dark Matter and the Dinosaurs.
We note in passing that SNOLAB, the underground laboratory in Sudbury, ON that houses the dark matter experiments DAMIC, DEAP, and PICO, and future home of NEWS-G, SENSEI, Super-CDMS and ARGO, is located in the Creighton Mine — where ore deposits were formed by a two billion year-old comet impact. Perhaps the dark disk nudges us to detect its parent halo.
—————— In the second part of the series we will look — if we’re still here — at more surprises that dark matter could have planned for us. Stay tuned.
Dark Matter collisions with the Human Body, K. Freese & C. Savage, Phys.Lett.B 717 (2012) 25-28.
Clumpy cold dark matter and biological extinctions, J. I. Collar, Phys.Lett.B 368 (1996) 266-269.
Volcanogenic dark matter and mass extinctions, S. Abbas & A. Abbas, Astropart.Phys. 8 (1998) 317-320.
Dark Matter as a Trigger for Periodic Comet Impacts, L. Randall & M. Reece, Phys.Rev.Lett. 112 (2014) 161301.
Dark Matter and the Dinosaurs, L. Randall, Harper Collins: Ecco Press (2015).
How would you describe dark matter in one word? Mysterious? Ubiquitous? Massive? Theorists often try to settle the question in the title of a paper — with a single adjective. Thanks to this penchant we now have the possibility of cannibal dark matter. To be sure, in the dark world eating one’s own kind is not considered forbidden, not even repulsive. Just a touch selfish, maybe. And it could still make you puffy — if you’re inflatable and not particularly inelastic. Otherwise it makes you plain superheavy.
Below are more uni-verbal dark matter candidates in the literature. Some do make you wonder if the title preceded the premise. But they all remind you of how much fun it is, this quest for dark matter’s identity. Keep an eye out on arXiv for these gems!
The landscape of direct detection of dark matter is a perplexing one; all experiments have so far come up with deafening silence, except for a single one which promises a symphony. This is the DAMA/LIBRA experiment in Gran Sasso, Italy, which has been seeing an annual modulation in its signal for two decades now.
Such an annual modulation is as dark-matter-like as it gets. First proposed by Katherine Freese in 1987, it would result from the Earth’s orbital motion adding to the Sun’s velocity through the galactic dark matter halo for half of the year and subtracting from it during the other half. However, DAMA/LIBRA’s results are in conflict with other experiments – with the catch that none of those used the same setup. The way to settle this is obviously to build more experiments with the DAMA/LIBRA setup. This is an ongoing effort which ultimately focuses on the crystals at its heart.
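The expected signature takes a simple cosine form peaking in early June, when the Earth’s orbital velocity adds most to the Sun’s motion through the halo. A minimal sketch (the rate values below are placeholders for illustration, not DAMA/LIBRA numbers):

```python
import math

def rate(t_days, R0=1.0, Rm=0.02, t0=152.5, T=365.25):
    """Annually modulating event rate R(t) = R0 + Rm*cos(2*pi*(t - t0)/T).

    R0: mean rate, Rm: modulation amplitude (both illustrative),
    t0: phase in days (~June 2), T: one year in days.
    """
    return R0 + Rm * math.cos(2 * math.pi * (t_days - t0) / T)

print(rate(152.5))                 # maximum of the cycle: R0 + Rm = 1.02
print(rate(152.5 + 365.25 / 2))    # minimum, half a year later: R0 - Rm = 0.98
```

Experiments fit exactly this functional form to their time-binned counts to extract the modulation amplitude and phase.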
The specific crystals are made of the scintillating material thallium-doped sodium iodide, NaI(Tl). Dark matter particles, and particularly WIMPs, would collide elastically with atomic nuclei and the recoil would give off photons, which would eventually be captured by photomultiplier tubes at the ends of each crystal.
Right now a number of NaI(Tl)-based experiments are at various stages of preparation around the world, with COSINE-100 at the Yangyang mountain, S.Korea, already producing negative results. However, these are still not on equal footing with DAMA/LIBRA’s because of higher backgrounds at COSINE-100. What is the collaboration to do, then? The answer is focus even more on the crystals and how they are prepared.
Over the last couple of years some serious R&D went into growing better crystals for COSINE-200, the planned upgrade of COSINE-100. Yes, a crystal is something that can and does grow. A seed placed inside the raw material, in this case NaI(Tl) powder, leads it to organize itself around the seed’s structure over the next hours or days.
In COSINE-100 the most annoying backgrounds came from within the crystals themselves because of the production process, because of natural radioactivity, and because of cosmogenically induced isotopes. Let’s see how each of these was tackled during the experiment’s mission towards a radiopure upgrade.
Improved techniques of growing and preparing the crystals reduced contamination from the materials of the grower device and from the ambient environment. At the same time different raw materials were tried out to put the inherent contamination under control.
Among a handful of naturally present radioactive isotopes, particular care was given to 40K. 40K can decay characteristically to an X-ray of 3.2 keV and a γ-ray of 1,460 keV, a combination convenient for tagging it to a large extent. The tagging is done with the help of 2,000 liters of liquid scintillator surrounding the crystals. However, if the γ-ray escapes the crystal, then the left-behind X-ray will mimic the expected signal from WIMPs… Eventually the dangerous 40K was brought down to levels comparable to those in DAMA/LIBRA through the investigation of various growing techniques and starting materials.
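The tagging logic amounts to a simple coincidence requirement between the crystal and the surrounding veto. A toy sketch (the energy windows are illustrative, not the experiment’s actual cuts):

```python
def is_k40_tagged(crystal_keV, veto_keV):
    """Veto an event if the ~3.2 keV X-ray in the crystal coincides with
    the 1,460 keV gamma in the surrounding liquid scintillator.
    Window widths here are made-up illustrative tolerances."""
    xray_hit = abs(crystal_keV - 3.2) < 0.5        # X-ray window in crystal
    gamma_hit = abs(veto_keV - 1460.0) < 100.0     # gamma window in veto
    return xray_hit and gamma_hit

assert is_k40_tagged(3.2, 1460.0)       # both quanta seen: tagged as 40K
assert not is_k40_tagged(3.2, 0.0)      # gamma escaped undetected: mimics a WIMP
```

The second case is precisely the dangerous one described above, which is why reducing the 40K content itself matters so much.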
But the main source of radioactive background in COSINE-100 was isotopes such as 3H or 22Na, created inside the crystals by cosmic-ray muons after their production. Now, their abundance was reduced significantly by two simple moves: the crystals were grown locally at a very low altitude and installed underground within a few weeks (instead of being transported from a lab at 1,400 meters above sea level in Colorado). Moreover, most of the remaining cosmogenic background should decay away within a couple of years.
Where do these efforts stand? The energy range of interest for testing the DAMA/LIBRA signal is 1–6 keV. This corresponds to a background target of 1 count/kg/day/keV. After the crystal R&D, the achieved contamination was less than about 0.34 counts/kg/day/keV. In short, everything is ready for COSINE-100 to upgrade to COSINE-200 and test the annual modulation without the previous ambiguities that stood in the way.
If dark matter actually consists of a new kind of particle, then the most up-and-coming candidate is the axion. The axion is a consequence of the Peccei-Quinn mechanism, a plausible solution to the “strong CP problem,” i.e. why the strong nuclear force conserves CP symmetry although there is no reason for it to. It is a very light neutral boson, named by Frank Wilczek after a detergent brand (in a move that obviously dates its introduction to the ’70s).
Most experiments that try to directly detect dark matter have looked for WIMPs (weakly interacting massive particles). However, as those searches have not borne fruit, the focus started turning to axions, which make for good candidates given their properties and the fact that if they exist, then they exist in multitudes throughout the galaxies. Axions “speak” to the QCD part of the Standard Model, so they can appear in interaction vertices with hadronic loops. The end result is that axions passing through a magnetic field will convert to photons.
In practical terms, their detection boils down to having strong magnets, sensitive electronics and an electromagnetically very quiet place at one’s disposal. One can then sit back and wait for the hypothesized axions to pass through the detector as earth moves through the dark matter halo surrounding the Milky Way. Which is precisely why such experiments are known as “haloscopes.”
Now, the most veteran haloscope of all published significant new results. Alas, it is still empty-handed, but we can look at why its update is important and how it was reached.
ADMX (Axion Dark Matter eXperiment) of the University of Washington has been around for a quarter-century. By listening for signals from axions, it progressively gnaws away at the space of allowed values for their mass and coupling to photons, focusing on an area of interest.
Unlike higher values, this area is not excluded by astrophysical considerations (e.g. stars cooling off through axion emission) and other types of experiments (such as looking for axions from the sun). In addition, the bands above the lines denoted “KSVZ” and “DFSZ” are special. They correspond to the predictions of two models with favorable theoretical properties. So, ADMX is dedicated to scanning this parameter space. And the new analysis added one more year of data-taking, making a significant dent in this ballpark.
As mentioned, the presence of axions would be inferred from a stream of photons in the detector. The excluded mass range was scanned by “tuning” the experiment to different frequencies, while at each frequency step longer observation times probed smaller values for the axion-photon coupling.
Two things that this search needs are a lot of quiet and some good amplification, as the signal from a typical axion is expected to be as weak as the signal from a mobile phone left on the surface of Mars (around 10⁻²³ W). The setup is indeed stripped of noise by being placed in a dilution refrigerator, which keeps its temperature at a few tenths of a degree above absolute zero. This is practically the domain governed by quantum noise, so advantage can be taken of the finesse of quantum technology: for the first time ADMX used SQUIDs, superconducting quantum interference devices, for the amplification of the signal.
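The trade-off between noise temperature and observation time is captured by the standard Dicke radiometer equation, SNR = (P_signal / k_B T_sys) × √(t / Δν). A rough numerical sketch, with round illustrative values rather than ADMX specifications:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

P_signal = 1e-23    # W, the "phone on Mars" signal scale quoted above
T_sys = 0.5         # K, system noise temperature (illustrative, near quantum-limited)
delta_nu = 500.0    # Hz, assumed axion signal bandwidth (illustrative)

def integration_time(snr_target):
    """Seconds of averaging needed at one frequency step to reach snr_target,
    by inverting the Dicke radiometer equation."""
    return delta_nu * (snr_target * k_B * T_sys / P_signal) ** 2

t = integration_time(5.0)
print(f"~{t:.0f} s per frequency step")
```

The quadratic dependence on T_sys is why quantum-limited amplifiers like SQUIDs matter: halving the noise temperature cuts the required scan time by a factor of four.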
In the end, a good chunk of the parameter space which is favored by the theory might have been excluded, but the haloscope is ready to look at the rest of it. Just think of how, one day, a pulse inside a small device in a university lab might be a messenger of the mysteries unfolding across the cosmos.
Information, gold and chicken. What do they all have in common? They can all come in the form of nuggets. Naturally one would then be compelled to ask: “what about fundamental particles? Could they come in nugget form? Could that hold the key to dark matter?” Lucky for you this has become the topic of some ongoing research.
A ‘nugget’ in this context refers to large macroscopic ‘clumps’ of matter formed in the early universe that could possibly survive up until the present day to serve as a dark matter candidate. Much like nuggets of the edible variety, one must be careful to combine just the right ingredients in just the right way. In fact, there are generally three requirements to forming such an exotic state of matter:
(At least) two different vacuum states separated by a potential ‘barrier’ where a phase transition occurs (known as a first-order phase transition).
A charge which is conserved globally which can accumulate in a small part of space.
An excess of matter over antimatter on the cosmological scale, or in other words, a large non-zero macroscopic number density of global charge.
Back in the 1980s, before much work was done in the field of lattice quantum chromodynamics (lQCD), Edward Witten put forward the idea that the Standard Model QCD sector could in fact accommodate such an exotic form of matter. Quite simply, this would occur in the early phase of the universe, when the quarks undergo color confinement to form hadrons. In particular, Witten’s nuggets were realized as large macroscopic clumps of ‘quark matter’ carrying a very large concentration of baryon number. However, with the advancement of lQCD techniques, the transition in which the quarks become confined looks more like a continuous ‘crossover’ rather than a true first-order phase transition, making the idea somewhat unfeasible within the Standard Model.
Theorists, particularly those interested in dark matter, are not confined (for lack of a better term) to the strict details of the Standard Model and most often look to the formation of sometimes complicated ‘dark sectors’ invisible to us but readily able to provide the much needed dark matter candidate.
The problem of obtaining a first-order phase transition to form our quark nuggets need not be a problem if we consider a QCD-type theory that does not interact with the Standard Model particles. More specifically, we can consider a set of dark quarks and dark gluons with arbitrary characteristics such as masses, couplings, numbers of flavors or numbers of colors (which of course are quite settled for the Standard Model QCD case). In fact, looking at the numbers of flavors and colors of dark QCD in Figure 1, we can see in the white unshaded region a number of models that can exist with a first-order phase transition, as required to form these dark quark nuggets.
As with normal quarks, the distinction between the two phases actually refers to a process known as chiral symmetry breaking. When the temperature of the universe cools to this particular scale, color confinement of quarks occurs around the same time, such that no single-color quark can be observed on its own – only in colorless bound states.
Forming a nugget
As we have briefly mentioned, the dark nuggets are formed as the universe undergoes a ‘dark’ phase transition from a phase where the dark color is unconfined to a phase where it is confined. At some critical temperature, due to the nature of first-order phase transitions, bubbles of the new confined phase (full of dark hadrons) begin to nucleate out of the dark quark-gluon plasma. The growth of these bubbles is driven by a difference in pressure, reflecting the fact that the unconfined and confined vacuum states have different energies. As a bubble wall sweeps by, the almost massless particles of the dark plasma scatter off the wall while the heavy dark (anti)baryons remain behind it, and hence a large amount of dark baryon number accumulates in the shrinking unconfined phase. Eventually, as these bubbles merge and coalesce, we would expect local regions of remaining dark quark-gluon plasma, unconfined and stable against collapse thanks to Fermi degeneracy pressure (see reference below for more on this). An illustration is shown in Figure 2. Calculations with varying confinement energy scales estimate masses and radii spanning many orders of magnitude, so dark quark nuggets can truly be classed as macroscopic dark objects!
How do we know they could be there?
There are a number of ways to infer the existence of dark quark nuggets, but two of the main ones are: (i) as a dark matter candidate and (ii) through probes of the dark QCD model that provides them. Cosmologically, the latter can imply the existence of a dark form of radiation which ultimately can lead to effects on the Cosmic Microwave Background Radiation (CMB). In a similar vein, one recent avenue of study today is the production of a steady background of gravitational waves emerging from the existence of a first-order phase transition – one of the key requirements for dark quark nugget formation. More importantly, they can be probed through astrophysical means if they share some coupling (albeit small) with the Standard Model particles. The standard technique of direct detection with Earth-based experiments could be the way to go – but furthermore, there may be the possibility of cosmic ray production from collisions of multiple dark quark nuggets. Among these are a number of other observations over the massive range of nugget sizes and masses shown in Figure 3.
To conclude, note that in such a generic framework, a number of well-motivated theories may predict (or in fact have unavoidable) instances of quark nuggets that may serve as interesting dark matter candidates with a lot of fun phenomenology to play with. It is only up to the theorist’s imagination where to go from here!
Direct detection strategies for dark matter (DM) have grown significantly beyond the dominant narrative of looking for scattering of these ghostly particles off of large and heavy nuclei. Such experiments search for Weakly Interacting Massive Particles (WIMPs) in the many-GeV (gigaelectronvolt) mass range. Such candidates for DM are predicted by many beyond-the-Standard-Model (SM) theories, one of the most popular being a very special and unique extension called supersymmetry. In what was once dubbed the “WIMP miracle”, these types of particles were found to possess just the right properties to account for dark matter. However, as these experiments become more and more sensitive, the null results put a lot of stress on their feasibility.
Typical detectors, like those of LUX, XENON, PandaX and ZEPLIN, look for flashes of light (scintillation) resulting from particle collisions in noble liquids like argon or xenon. Other, cryogenic-type detectors, used in experiments like CDMS, cool semiconductor arrays down to very low temperatures to search for ionization and phonon (quantized lattice vibration) production in crystals. While these approaches have been incredibly successful at deriving direct detection limits for heavy dark matter, new ideas are emerging to look into the lighter side.
Recently, DM below the GeV range has become the new target of a huge range of detection methods, utilizing new techniques and functional materials – semiconductors, superconductors and even superfluid helium. In this regime, recoils off the much lighter electrons in fact make for a much more sensitive probe than recoils off large and heavy nuclear targets.
There are several ways that one can consider light dark matter interacting with electrons. One popular consideration is to introduce a new gauge boson that has a very small ‘kinetic’ mixing with the ordinary photon of the Standard Model. If massive, these ‘dark photons’ could also potentially be dark matter candidates themselves and an interesting avenue for new physics. The specifics of their interaction with the electron are then determined by the mass of the dark photon and the strength of its mixing with the SM photon.
Typically the gap between the valence and conduction bands in semiconductors like silicon and germanium is around an electronvolt (eV). When the energy deposited by the dark matter particle exceeds the band gap, electron excitations in the material can usually be detected through a complicated secondary cascade of electron-hole pair generation. Below the band gap, however, there is not enough energy to excite an electron to the conduction band, so detection proceeds through low-energy multi-phonon excitations, the dominant process being the emission of two back-to-back phonons.
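Since absorption of a dark photon deposits (roughly) its rest-mass energy, the detection channel splits neatly at the band gap. A minimal sketch of that branching logic, using approximate textbook band-gap values:

```python
# Approximate band gaps in eV (textbook values, at room temperature).
BAND_GAP_eV = {"Si": 1.12, "Ge": 0.67}

def detection_channel(deposited_eV, material="Si"):
    """Which signal a given energy deposit produces in the semiconductor.
    Toy classification, not a real detector simulation."""
    if deposited_eV >= BAND_GAP_eV[material]:
        return "electron-hole pair cascade"
    return "multi-phonon emission"

assert detection_channel(5.0, "Si") == "electron-hole pair cascade"
assert detection_channel(0.1, "Si") == "multi-phonon emission"
```

Germanium’s smaller gap is one reason different target materials cover different slices of the light-DM mass range.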
In both these regimes, the absorption rate of dark matter is directly related to the properties of the material, namely its optical properties. In particular, the absorption rate for ordinary SM photons is determined by the polarization tensor in the medium, and in turn by the complex conductivity σ(ω), through what is known as the optical theorem. Ultimately this describes the response of the material to an electromagnetic field, which has been measured in several energy ranges. This ties together the astrophysical description of how dark matter moves through space and the fundamental description of DM-electron interactions at the particle level.
In a more technical sense, the rate of DM absorption, in events per unit time per unit target mass, is given by

R = (1/ρ_T) × (ρ_DM / m_A′) × κ_eff² × Γ_abs

where
ρ_T – mass density of the target material
ρ_DM – local dark matter mass density (0.3 GeV/cm³) in the galactic halo
m_A′ – mass of the dark photon particle
κ_eff – kinetic mixing parameter (in-medium)
Γ_abs – absorption rate of ordinary SM photons
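As a toy illustration of how the absorption rate scales with these quantities (placeholder numbers, no real units or material data): the rate grows with the square of the mixing and falls inversely with the dark photon mass.

```python
def relative_rate(kappa_eff, m_Ap_eV, gamma_abs):
    """Absorption rate up to overall constants: R ∝ (1/m_A') * κ_eff² * Γ_abs.
    Inputs are dimensionless placeholders for illustration only."""
    return kappa_eff ** 2 * gamma_abs / m_Ap_eV

r0 = relative_rate(1e-12, 1.0, 1.0)

# Doubling the mixing quadruples the rate; doubling the mass halves it.
assert abs(relative_rate(2e-12, 1.0, 1.0) / r0 - 4.0) < 1e-9
assert abs(relative_rate(1e-12, 2.0, 1.0) / r0 - 0.5) < 1e-9
```

This quadratic sensitivity to κ_eff is why exclusion plots in the (m_A′, κ) plane tighten so quickly with improved backgrounds.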
As shown in Figure 1, the projected sensitivity at 90% confidence level (C.L.) for a 1 kg-year exposure of a semiconductor target to dark photons can be almost an order of magnitude better than that of existing nuclear recoil experiments. The dependence is shown on the kinetic mixing parameter and the mass of the dark photon. Limits are also shown for the existing semiconductor experiments DAMIC and CDMSLite, with 0.6 and 70 kg-day exposures, respectively.
Furthermore, in the millielectronvolt-to-kiloelectronvolt range, these could provide much stronger constraints than any that currently exist from astrophysical sources, even at this exposure. These materials also provide a novel way of detecting DM in a single experiment, so long as improvements are made in phonon detection.
These possibilities, amongst a plethora of other detection materials and strategies, can open up a significant area of parameter space for finally closing in on the identity of the ever-elusive dark matter!
Over the past decade, a new trend has been emerging in physics, one that is motivated by several key questions: what do we know about the origin of our universe? What do we know about its composition? And how will the universe evolve from here? To delve into these questions naturally requires a thorough examination of the universe via the astrophysics lens. But studying the universe on a large scale alone does not provide a complete picture. In fact, it is just as important to see the universe on the smallest possible scales, necessitating the trendy and (fairly) new hybrid field of particle astrophysics. In this post, we will look specifically at the cosmic microwave background (CMB), classically known as a pillar of astrophysics, within the context of particle physics, providing a better understanding of the broader questions that encompass both fields.
Essentially, the CMB is just what we see when we look into the sky and we aren’t looking at anything else. Okay, fine. But if we’re not looking at something in particular, why do we see anything at all? The answer requires us to jump back a few billion years to the very early universe.
Immediately after the Big Bang, it was impossible for particles to form atoms without immediately being broken apart by constant bombardment from stray photons. About 380,000 years after the Big Bang, the universe had expanded and cooled to a temperature of about 3,000 K, allowing the first formation of stable hydrogen atoms. Since hydrogen is electrically neutral, the leftover photons could no longer interact with it, meaning that from that point on their paths would remain unaltered indefinitely. These are the photons that we observe as the CMB; Figure 1 shows this idea diagrammatically below. From our present observation point, we measure the CMB to have a temperature of about 2.73 K.
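A quick sanity check in code: the CMB temperature simply redshifts with cosmic expansion, T(z) = T_now × (1 + z), so the two temperatures above recover the familiar recombination redshift of roughly z ≈ 1100.

```python
# The CMB cools as the universe stretches: T(z) = T_now * (1 + z).
# Inverting that relation for the recombination temperature gives z_rec.
T_now = 2.73     # K, approximate temperature measured today
T_rec = 3000.0   # K, approximate temperature at recombination

z_rec = T_rec / T_now - 1
print(f"recombination at z ≈ {z_rec:.0f}")  # roughly 1100
```

In other words, the universe has stretched by about a factor of 1,100 since the snapshot the CMB records.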
Since this radiation has been unimpeded since that specific point (known as the point of ‘recombination’), we can think of the CMB as a snapshot of the very early universe. It is interesting, then, to examine the regularity of the spectrum; the CMB is naturally not perfectly uniform, and the slight temperature variations can provide a lot of information about how the universe formed. In the early primordial soup universe, slight random density fluctuations exerted a greater gravitational pull on their surroundings, since they had slightly more mass. This process continues, and very large dense patches occur in an otherwise uniform space, heating up the photons in that area accordingly. The Planck satellite, launched in 2009, provides some beautiful images of the temperature anisotropies of the universe, as seen in Figure 2. Some of these variations can be quite severe, as in the recently released results about a supervoid aligned with an especially cold spot in the CMB (see Further Reading, item 4).
So what does this all have to do with particles? We’ve talked about a lot of astrophysics so far, so let’s tie it all together. The big correlation here is dark matter. The CMB has given us strong evidence that our universe has a flat geometry, and from general relativity, this provides restrictions on the mass, energy, and density of the universe. In this way, we know that atomic matter can constitute only 5% of the universe, and analysis of the peaks in the CMB gives an estimate of 26% for the total dark matter presence. The rest of the universe is believed to be dark energy (see Figure 3).
Both dark matter and dark energy are huge questions in particle physics that could be the subject of a whole other post. But the CMB plays a big role in making our questions a bit more precise. The CMB is one of several pieces of strong evidence that require the existence of dark matter and dark energy to justify what we observe in the universe. Some potential dark matter candidates include weakly interacting massive particles (WIMPs), sterile neutrinos, or the lightest supersymmetric particle, all of which bring us back to particle physics for experimentation. Dark energy is not as well understood, and there are still a wide variety of disparate theories to explain its true identity. But it is clear that the future of particle physics will likely be closely tied to astrophysics, so as a particle physicist it’s wise to keep an eye out for new developments in both fields!
The Large Hadron Collider is the world’s largest proton collider, and in a mere five years of active data acquisition, it has already achieved fame for the discovery of the elusive Higgs boson in 2012. Though the LHC is currently shut down for a series of repairs and upgrades, it is scheduled to begin running again within the month, this time with a proton collision energy of 13 TeV. This is nearly double the previous run energy of 8 TeV, opening the door to a host of new production processes. Many physicists are keeping their fingers crossed that another big discovery is right around the corner. Here are a few specific things that will be important in Run II.
1. Luminosity scaling
Though this is a very general category, it is a huge component of the Run II excitement. This is simply due to the scaling of the effective (parton) luminosity with collision energy, which gives a remarkable increase in discovery potential for even a modest energy increase.
If you’re not familiar, luminosity is the number of collisions per unit time and per unit cross-sectional area; multiplying it by a process’s cross section gives that process’s event rate. Integrated luminosity sums this instantaneous value over time, giving a metric in units of 1/area.
In the particle physics world, integrated luminosities are measured in inverse femtobarns, where 1 fb⁻¹ = 1/(10⁻⁴³ m²). Each of the two main detectors at CERN, ATLAS and CMS, collected about 30 fb⁻¹ by the end of 2012. The main point is that more luminosity means more events in which to search for new physics.
Figure 1 shows the ratios of LHC parton luminosities for 7 vs. 8 TeV, and again for 13 vs. 8 TeV. Since the y axis is on a log scale, it’s easy to see that the 13 vs. 8 TeV ratio is very large. In fact, for high-mass processes, 100 fb⁻¹ at 8 TeV is equivalent to just 1 fb⁻¹ at 13 TeV. So increasing the collision energy by a factor of less than 2 effectively increases the reach of a given integrated luminosity by a factor of 100! This means that even in the first few months of running at 13 TeV, there will be a huge amount of useful data available for analysis, and many results will likely be released shortly after data taking begins.
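To make the luminosity arithmetic concrete, here is a minimal Python sketch. The 5 fb cross section is a made-up number for illustration, not a real process:

```python
# Minimal sketch: the expected event count for a process is
#   N = (cross section) x (integrated luminosity).

def expected_events(sigma_fb, lumi_fb_inv):
    """Expected events for a cross section in femtobarns (fb) and an
    integrated luminosity in inverse femtobarns (fb^-1)."""
    return sigma_fb * lumi_fb_inv

# A hypothetical heavy-particle process with a 5 fb cross section:
print(expected_events(5.0, 1.0))    # 5.0 events in 1 fb^-1 of data

# The rule of thumb quoted above: for high-mass processes the production
# cross section grows ~100x from 8 to 13 TeV, so 1 fb^-1 at 13 TeV gives
# roughly the same signal yield as 100 fb^-1 at 8 TeV:
print(expected_events(5.0, 100.0) == expected_events(5.0 * 100, 1.0))  # True
```

The unit bookkeeping is the whole game here: cross sections in fb times luminosities in fb⁻¹ give a dimensionless event count.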
2. Supersymmetry

Supersymmetry proposes the existence of a superpartner for every particle in the Standard Model, effectively doubling the number of fundamental particles in the universe. This helps to answer many open questions in particle physics, most notably why the Higgs mass is so much smaller than naive expectations, known as the ‘hierarchy’ problem (see the further reading list for some good explanations).
Current mass limits on many supersymmetric particles are getting pretty high, leading some physicists to worry about the feasibility of finding evidence for SUSY. Many of these particles have already been excluded for masses below the TeV scale, making it very difficult to create them at the LHC as is. While there is talk of another LHC upgrade to achieve energies even higher than 14 TeV, for now the SUSY searches will have to make do with the energy that is available.
Figure 2 shows the pair-production cross sections for various supersymmetric particles, including squarks (superpartners of the quarks) and gluinos (superpartners of the gluon). Given the luminosity scaling described previously, these cross sections tell us that with only 1 fb⁻¹, physicists will be able to surpass the existing sensitivity to these supersymmetric processes. As a result, there will be a rush of searches performed in a very short time after the run begins.
3. Dark Matter
Dark matter is one of the greatest mysteries in particle physics to date (see past particlebites posts for more information). It is also one of the most difficult mysteries to solve, since dark matter candidate particles are by definition very weakly interacting. At the LHC, potential dark matter production shows up as missing transverse energy (MET) in the detector, since the dark matter particles do not leave tracks or deposit energy.
One of the best ways to ‘see’ dark matter at the LHC is in mono-jet or mono-photon signatures: jets or photons that do not occur in pairs, but rather occur singly as a result of initial-state radiation. Typically these events have a very high transverse momentum (pT) jet or photon, giving a well-defined primary vertex, and large amounts of MET, making them easier to pick out. Figure 3 shows a Feynman diagram of such a process, with the invisible particles (the MET) recoiling against a jet or a photon.
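As a rough illustration of what MET means operationally, here is a toy sketch in Python. This is not the actual ATLAS/CMS reconstruction (which combines calorimeter deposits and tracks); it just shows the vector-sum idea:

```python
import math

def missing_et(visible_particles):
    """Magnitude of the negative vector sum of the transverse momenta of
    all visible particles; each particle is (pt, phi) with pt in GeV and
    phi the azimuthal angle around the beam axis."""
    px = sum(pt * math.cos(phi) for pt, phi in visible_particles)
    py = sum(pt * math.sin(phi) for pt, phi in visible_particles)
    return math.hypot(px, py)  # |(-px, -py)| == |(px, py)|

# Toy mono-jet event: a single hard jet and nothing else visible.
# Whatever recoiled against it is invisible, so the imbalance equals
# the jet pT:
print(missing_et([(400.0, 0.0)]))  # 400.0 GeV of MET

# A back-to-back dijet event is balanced, so its MET is (numerically) zero:
print(round(missing_et([(200.0, 0.0), (200.0, math.pi)]), 6))  # 0.0
```

The point of the second example is the flip side of the signature: any mis-measured jet can fake MET, which is why these searches demand very clean, high-pT events.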
Though the topics in this post will certainly be popular in the next few years at the LHC, they do not even begin to span the huge volume of physics analyses that we can expect to see emerging from Run II data. The next year alone has the potential to be a groundbreaking one, so stay tuned!
Last Thursday, Nobel Laureate Sam Ting presented the latest results (CERN press release) from the Alpha Magnetic Spectrometer (AMS-02) experiment, a particle detector attached to the International Space Station—think “ATLAS/CMS in space.” Instead of beams of protons, the AMS detector examines cosmic rays in search of signatures of new physics such as the products of dark matter annihilation in our galaxy.
In fact, this is just the latest chapter in an ongoing mystery involving the energy spectrum of cosmic positrons. Recall that positrons are the antimatter versions of electrons, with identical properties except for their opposite charge. They’re produced by known astrophysical processes when high-energy cosmic rays (mostly protons) crash into interstellar gas—in this case they’re known as ‘secondaries’ because they’re a product of the ‘primary’ cosmic rays.
The dynamics of charged particles in the galaxy are difficult to simulate due to the presence of intense and complicated magnetic fields. However, the diffusion models generically predict that the positron fraction—the number of positrons divided by the total number of positrons and electrons—decreases with energy. (This ratio of fluxes is a nice quantity because some astrophysical uncertainties cancel.)
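To see the shape of that generic prediction, here is a toy Python sketch of a secondary-positron scenario. The spectral indices are invented for illustration, not fits to data:

```python
# Toy model of 'secondary' positrons: both fluxes fall as power laws in
# energy, with the secondary positrons falling faster, so the positron
# fraction decreases with energy. All indices are made up for illustration.

def positron_fraction(e_gev):
    flux_electrons = e_gev ** -3.1  # toy primary-electron spectrum
    flux_positrons = e_gev ** -3.5  # toy secondary-positron spectrum (steeper)
    return flux_positrons / (flux_positrons + flux_electrons)

# Any overall normalization shared by the two fluxes cancels in the ratio,
# which is part of why the fraction is a robust observable:
for e in (10.0, 100.0, 500.0):
    print(e, round(positron_fraction(e), 3))  # the fraction falls with energy
```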
This prediction, however, is in stark contrast with the observed positron fraction from recent satellite experiments:
The rising fraction had been hinted at in balloon-based experiments for several decades, but the satellite experiments have been able to demonstrate this behavior conclusively because they can access higher energies. In their first set of results last year (shown above), AMS gave the most precise measurements of the positron fraction up to 350 GeV. Thursday’s announcement extended these results to 500 GeV and added the following observations:
First, they claim to have measured the maximum of the positron fraction to occur at 275 GeV. This is close to the edge of the data they’re releasing, but the plot of the positron fraction slope is slightly more convincing:
The observation of a maximum in what was otherwise a fairly featureless rising curve is key for interpretations of the excess, as we discuss below. A second observation is a bit more curious: while neither the electron nor the positron spectrum follows a simple power law, Φ ∝ E^(−γ), the combined electron and positron flux does follow such a power law over a range of energies.
This is a little harder to interpret, since the flux from electrons also, in principle, includes different sources of background. Note that this plot reaches higher energies than the positron fraction—part of the reason for this is that it is more difficult to distinguish between electrons and positrons at high energies. This is because the identification depends on how the particle bends in the AMS magnetic field, and higher-energy particles bend less. This, incidentally, is also why the FERMI data has much larger error bars in the first plot above—FERMI doesn’t have its own magnetic field and must rely on that of the Earth for charge discrimination.
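The bending argument can be made concrete with the standard rule of thumb r [m] ≈ p [GeV] / (0.3 |q| B [T]) for a relativistic charged particle. The field strength below is chosen purely for illustration, not as the actual AMS-02 field:

```python
def bending_radius_m(p_gev, b_tesla, charge=1.0):
    """Rule-of-thumb radius of curvature for a relativistic charged
    particle: r [m] ~ p [GeV] / (0.3 * |q| [e] * B [T])."""
    return p_gev / (0.3 * abs(charge) * b_tesla)

B = 0.15  # tesla; illustrative field strength, not the real detector value

# Higher-energy particles bend far less, so resolving the sign of the
# tiny curvature (positron vs. electron) gets harder with energy:
print(round(bending_radius_m(10.0, B)))   # ~222 m
print(round(bending_radius_m(500.0, B)))  # ~11111 m
```

Against a detector only a meter or so across, a kilometer-scale radius of curvature means the track is nearly straight, and the charge-sign measurement degrades accordingly.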
So what should one make of the latest results?
The most optimistic hope is that this is a signal of dark matter, though at this point this is more of a ‘wish’ than a deduction. Independently of AMS, we know that dark matter exists in a halo that surrounds our galaxy. The simplest dark matter models also assume that when two dark matter particles find each other in this halo, they can annihilate into Standard Model particle–anti-particle pairs, such as electrons and positrons—the latter potentially yielding the rising positron fraction signal seen by AMS.
From a particle physics perspective, this would be the most exciting possibility. The ‘smoking gun’ signature of such a scenario would be a steep drop in the positron fraction at the mass of the dark matter particle. This is because the annihilation occurs at low velocities so that the energy of the annihilation products is set by the dark matter mass. This is why the observation of a maximum in the positron fraction is interesting: the dark matter interpretation of this excess hinges on how steeply the fraction drops off.
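As a cartoon of that smoking-gun edge, here is a toy spectrum in Python. The flat shape and the 300 GeV mass are invented for illustration; realistic annihilation spectra are not flat:

```python
# Toy injected positron spectrum from dark matter annihilating at rest:
# the products' energies cannot exceed the dark matter mass, so the
# spectrum has a sharp edge there. All numbers are made up.

M_DM_GEV = 300.0  # hypothetical dark matter mass

def injected_spectrum(e_gev):
    """Flat toy spectrum below the dark matter mass, zero above it."""
    return 1.0 if e_gev <= M_DM_GEV else 0.0

print([injected_spectrum(e) for e in (100, 250, 300, 310, 400)])
# [1.0, 1.0, 1.0, 0.0, 0.0]
```

In this cartoon, a positron fraction fed by such a source would drop steeply just above the dark matter mass, which is exactly the feature the measured turnover may or may not turn out to be.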
There are, however, reasons to be skeptical.
One attractive feature of dark matter annihilation is thermal freeze-out: the annihilation rate determines how much dark matter remains today after having been in thermal equilibrium in the early universe. The AMS excess is suggestive of heavy (~TeV scale) dark matter with an annihilation rate three orders of magnitude larger than the rate required for thermal freeze-out.
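For scale, the standard benchmark thermal cross section is ⟨σv⟩ ≈ 3×10⁻²⁶ cm³/s; a quick sketch of the comparison implied above:

```python
# Benchmark 'thermal relic' annihilation cross section (standard value
# quoted in the dark matter literature):
SIGMA_V_THERMAL = 3e-26  # cm^3 / s

# The AMS excess, read as TeV-scale dark matter annihilation, would need
# a present-day rate roughly three orders of magnitude larger:
sigma_v_needed = 1e3 * SIGMA_V_THERMAL
print(sigma_v_needed)  # roughly 3e-23 cm^3/s
```

That factor of ~1000 is the tension: the same cross section that sets the relic abundance in the early universe should, in the simplest models, set today's annihilation rate too.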
Studies of the types of spectra one expects from dark matter annihilation yield fits that are somewhat in conflict with the combined observations of the positron fraction, the total electron/positron flux, and the anti-proton flux (see 0809.2409). The anti-proton flux, in particular, shows no excess, which would otherwise be predicted by dark matter annihilating into quarks.
There are ways around these issues, such as invoking mechanisms to enhance the present day annihilation rate, perhaps with the annihilation only creating leptons and not quarks. However, these are additional bells and whistles that model-builders must impose on the dark matter sector. It is also important to consider alternate explanations of the Pamela/FERMI/AMS positron fraction excess due to astrophysical phenomena. There are at least two very plausible candidates:
Pulsars are neutron stars that are known to emit “primary” electron/positron pairs. A nearby pulsar may be responsible for the observed rising positron fraction. See 1304.1791 for a recent discussion.
Alternately, supernova remnants may also generate a “secondary” spectrum of positrons from acceleration along shock waves (0909.4060, 0903.2794, 1402.0855).
Both of these scenarios are plausible and should temper the optimism that the rising positron fraction represents a measurement of dark matter. One useful handle to disfavor the astrophysical interpretations is to note that they would be anisotropic (not constant over all directions) whereas the dark matter signal would be isotropic. See 1405.4884 for a recent discussion. At the moment, the AMS measurements do not show any anisotropy but are not yet sensitive enough to rule out astrophysical interpretations.
Finally, let us also point out an alternate approach to understanding the positron fraction. The reason it’s so difficult to study cosmic rays is that the complex magnetic fields in the galaxy are intractable to measure and hence make the trajectories of charged particles hopeless to trace back to their sources. Instead, the authors of 0907.1686 and 1305.1324 take a different tack: while we can’t determine the cosmic ray origins, we can look at the behavior of heavier cosmic ray particles and compare them to the positrons. This is because, as mentioned above, the bending of a charged particle in a magnetic field is determined by its mass and charge—quantities that are known for the various cosmic ray species. Based on this, the authors are able to predict an upper bound on the positron fraction under the assumption that the positrons are secondaries (e.g., in the case of supernova remnant acceleration):
We see that the AMS-02 spectrum is just under the authors’ upper bound, and that the reported downturn is consistent with (even predicted by) the upper bound. The authors’ analysis then suggests a non-dark-matter explanation for the positron excess. See this post from Resonaances for a discussion of this point and an updated version of the above plot from the authors.
With that in mind, there are at least three things to look forward to in the future from AMS:
Many dark matter annihilation models for the rising positron fraction also predict a corresponding upturn in the anti-proton flux. Thus far AMS-02 has not released anti-proton data, due to the lower numbers of anti-protons.
Further sensitivity to the (an)isotropy of the excess is a critical test of the dark matter interpretation.
The shape of the drop-off with energy is also critical: a gradual drop-off is unlikely to come from dark matter whereas a steep drop off is considered to be a smoking gun for dark matter.
Only time will tell, though Ting suggested that new results will be presented at the upcoming AMS meeting at CERN in two months.
The recent Sackler Symposium on the Nature of Dark Matter included three talks on various aspects of the Pamela/FERMI/AMS-02 rising positron fraction. You can view the videos here: Linden (pulsars), Galli (dark matter), Blum (upper bound on secondaries).