The P5 Report & The Future of Particle Physics (Part 1)

Particle physics is the epitome of ‘big science’. Answering our most fundamental questions about physics requires world-class experiments that push the limits of what's technologically possible. Such incredibly sophisticated experiments, like those at the LHC, require big facilities to make them possible, big collaborations to run them, big project planning to make dreams of new facilities a reality, and committees with big acronyms to decide what to build.

Enter the Particle Physics Project Prioritization Panel (aka P5), which is tasked with assessing the landscape of future projects and laying out a roadmap for the future of the field in the US. And because these large projects are inevitably an international endeavor, the report they released last week has a large impact on the global direction of the field. The report lays out a vision for the next decade of neutrino physics, cosmology, dark matter searches and future colliders.

P5 follows the community-wide brainstorming effort known as the Snowmass Process, in which researchers from all areas of particle physics laid out a vision for the future. The Snowmass process led to a particle physics ‘wish list’, consisting of all the projects and research particle physicists would be excited to work on. The P5 process is the hard part, when this incredibly exciting and diverse research program has to be made to fit within realistic budget scenarios. Advocates for different projects and research areas had to make a case for what science their project could achieve and give a detailed estimate of the costs. The panel then takes in all this input and makes a set of recommendations for how the budget should be allocated: which projects should be realized and which hopes are dashed. Though the panel only produces a set of recommendations, they are used quite extensively by the Department of Energy, which actually allocates funding. If your favorite project is not endorsed by the report, it's very unlikely to be funded.

Particle physics is an incredibly diverse field, covering sub-atomic to cosmic scales, so the recommendations are divided up into several different areas. In this post I’ll cover the panel’s recommendations for neutrino physics and the cosmic frontier. Future colliders, perhaps the spiciest topic, will be covered in a follow-up post.

The Future of Neutrino Physics

For those in the neutrino physics community, all eyes were on the panel's recommendations regarding the Deep Underground Neutrino Experiment (DUNE). DUNE is the US's flagship particle physics experiment for the coming decade and aims to be the definitive worldwide neutrino experiment in the years to come. A high-powered beam of neutrinos will be produced at Fermilab and sent 800 miles through the earth's crust towards several large detectors placed in a mine in South Dakota. It's a much bigger project than previous neutrino experiments, unifying essentially the entire US community into a single collaboration.

DUNE is set up to produce world-leading measurements of neutrino oscillations, the phenomenon by which neutrinos produced in one ‘flavor state’ (e.g. an electron neutrino) gradually change their state with sinusoidal probability (e.g. into a muon neutrino) as they propagate through space. This oscillation is made possible by a simple quantum mechanical weirdness: a neutrino's flavor state, i.e. whether it couples to electrons, muons or taus, is not the same as its mass state. Neutrinos of a definite mass are therefore a mixture of the different flavors and vice versa.
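
For intuition, in the simplified two-flavor picture the oscillation probability takes the standard textbook form (a generic formula, not a DUNE-specific result):

P(\nu_\mu \to \nu_e) \approx \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)

where \theta is the mixing angle, \Delta m^2 the difference of squared masses, L the distance traveled and E the neutrino energy. Choosing the baseline L and beam energy E so that this argument sits near a maximum is what drives the design of long-baseline experiments like DUNE.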

Detailed measurements of this oscillation are the best way we know to determine several key neutrino properties. DUNE aims to finally pin down two crucial ones: the ‘mass ordering’, which will solidify how the different neutrino flavors and measured mass differences all fit together, and the amount of ‘CP violation’, which specifies whether neutrinos and their anti-matter counterparts behave the same or not. DUNE’s main competitor is the Hyper-Kamiokande experiment in Japan, another next-generation neutrino experiment with similar goals.

A depiction of the DUNE experiment. A high intensity proton beam at Fermilab is used to create a concentrated beam of neutrinos which are then sent through 800 miles of the Earth’s crust towards detectors placed deep underground in South Dakota. Source

Construction of the DUNE experiment has been ongoing for several years and unfortunately has not been going quite as well as hoped. It has faced significant schedule delays and cost overruns. DUNE is now not expected to start taking data until 2031, significantly behind Hyper-Kamiokande’s projected 2027 start. These delays may lead to Hyper-K making these definitive neutrino measurements years before DUNE, which would be a significant blow to the experiment’s impact. This left many DUNE collaborators worried about whether it would retain broad support from the community.

It came as a relief, then, when the P5 report re-affirmed the strong science case for DUNE, calling it the “ultimate long baseline” neutrino experiment. The report strongly endorsed the completion of the first phase of DUNE. However, it recommended a pared-down version of its upgrade, advocating for an earlier beam upgrade in lieu of additional detectors. This re-imagined upgrade will still achieve the core physics goals of the original proposal at a significant cost savings. This report, along with news that the beleaguered underground cavern construction in South Dakota is now 90% complete, was certainly welcome holiday news to the neutrino community. It also sets up a decade-long race between DUNE and Hyper-K to be the first to measure these key neutrino properties.

Cosmic Implications

While we normally think of particle physics as focused on the behavior of sub-atomic particles, it's really about the study of fundamental forces and laws, no matter the method. This means that telescopes to study the oldest light in the universe, the Cosmic Microwave Background (CMB), fall into the same budget category as giant accelerators studying sub-atomic particles. Though the experiments in these two areas look very different, the questions they seek to answer are cross-cutting. Understanding how particles interact at very high energies helps us understand the earliest moments of the universe, when such particles were all interacting in a hot dense plasma. Likewise, studying these early moments of the universe and its large-scale evolution can tell us what kinds of particles and forces are influencing its dynamics. When asking fundamental questions about the universe, one needs both the sharpest microscopes and the grandest panoramas possible.

The most prominent example of this blending of the smallest and largest scales in particle physics is dark matter. Some of our best evidence for dark matter comes from analyzing the cosmic microwave background to determine how the primordial plasma behaved. These studies showed that some type of ‘cold’ matter that doesn’t interact with light, aka dark matter, was necessary to form the first clumps that eventually seeded the formation of galaxies. Without it, the universe would be much more soup-y and structureless than what we see today.

The “cosmic web” of galaxy clusters from the Millennium simulation. Measuring and understanding this web can tell us a lot about the fundamental constituents of the universe. Source

Determining what dark matter is therefore requires an attack from two fronts: designing experiments here on earth attempting to directly detect it, and further studying its cosmic implications to look for more clues as to its properties.

The panel recommended next-generation telescopes to study the CMB as a top priority. The so-called ‘Stage 4’ CMB experiment would deploy telescopes at both the South Pole and Chile’s Atacama Desert to better characterize sources of atmospheric noise. The CMB has been studied extensively before, but the increased precision of CMB-S4 could shed light on mysteries like dark energy, dark matter, inflation, and the recent Hubble Tension. Given the past fruitfulness of these efforts, I think few doubted the science case for such a next-generation experiment.

A mockup of one of the CMB-S4 telescopes which will be based in the Chilean desert. Note the person for scale on the right (source)

The P5 report recommended a suite of new dark matter experiments in the next decade, including the ‘ultimate’ liquid Xenon based dark matter search. Such an experiment would follow in the footsteps of massive noble gas experiments like LZ and XENONnT, which have been hunting for a favored type of dark matter called WIMPs for the last few decades. These experiments essentially build giant vats of liquid Xenon, carefully shielded from any sources of external radiation, and look for signs of dark matter particles bumping into any of the Xenon atoms. The larger the vat of Xenon, the higher the chance a dark matter particle will bump into something. Current generation experiments have ~7 tons of Xenon, and the next generation experiment would be even larger. The next generation aims to reach the so-called ‘neutrino floor’, the point at which the experiments would be sensitive enough to observe astrophysical neutrinos bumping into the Xenon. Such neutrino interactions would look extremely similar to those of dark matter, and thus represent an unavoidable background which would signal the ultimate sensitivity of this type of experiment. WIMPs could still be hiding in a basement below this neutrino floor, but finding them would be exceedingly difficult.

A photo of the current XENONnT experiment. This pristine cavity is then filled with liquid Xenon and closely monitored for signs of dark matter particles bumping into one of the Xenon atoms. Credit: XENON Collaboration

WIMPs are not the only dark matter candidates in town, and recent years have also seen an explosion of interest in the broad range of dark matter possibilities, with axions being a prominent example. Other kinds of dark matter could have very different properties than WIMPs and have had far fewer dedicated experiments searching for them. There is ‘low hanging fruit’ to pluck in the way of relatively cheap experiments which can achieve world-leading sensitivity. Previously, these ‘table top’ sized experiments had a notoriously difficult time obtaining funding, as they were often crowded out of the budgets by the massive flagship projects. However, small experiments can be crucial to ensuring our best chance of dark matter discovery, as they fill in the blind spots missed by the big projects.

The panel therefore recommended creating a new pool of funding set aside for these smaller scale projects. Allowing these smaller scale projects to flourish is important for the vibrancy and scientific diversity of the field, as the centralization of ‘big science’ projects can sometimes lead to unhealthy side effects. This specific recommendation also mirrors a broader trend of the report: to attempt to rebalance the budget portfolio to be spread more evenly and less dominated by the large projects.

A pie chart comparing the budget portfolio in 2023 (left) versus the projected budget in 2033 (right). Currently most of the budget is being taken up by the accelerator upgrades and cavern construction of DUNE, with some amount for the LHC upgrades. But by 2033 the panel recommends a much more equitable balance between the different research areas.

What Didn’t Make It

Any report like this comes with some tough choices. Budget realities mean not all projects can be funded. Besides the paring down of some of DUNE’s upgrades, one of the biggest areas recommended against was ‘accessory experiments at the LHC’. In particular, MATHUSLA and the Forward Physics Facility were two experiments that proposed to build additional detectors near already existing LHC collision points to look for particles that may be missed by the current experiments. By building new detectors hundreds of meters away from the collision point, shielded by concrete and the earth, they could obtain unique sensitivity to ‘long lived’ particles capable of traversing such distances. These experiments would follow in the footsteps of the current FASER experiment, which is already producing impressive results.

While FASER found success as a relatively ‘cheap’ experiment, reusing detector components from other experiments and situating itself in an existing beam tunnel, these new proposals were asking for quite a bit more. The scale of these detectors would have required new caverns to be built, significantly increasing the cost. Given the cost and specialized purpose of these detectors, the panel recommended against their construction. These collaborations may now try to find ways to pare down their proposals so they can apply to the new small-project portfolio.

Another major decision by the panel was to recommend against hosting a new Higgs factory collider in the US. But that will be discussed more in a future post.

Conclusions

The P5 panel was faced with a difficult task: the total cost of all the projects they were presented with was three times the budget. But they were able to craft a plan that continues the work of the previous decade, addresses current shortcomings and lays out an inspiring vision for the future. So far the community seems to be strongly rallying behind it. At the time of writing, over 2700 community members from undergraduates to senior researchers have signed a petition endorsing the panel's recommendations. This strong show of support will be key for turning these recommendations into actual funding, and hopefully for lobbying Congress to increase funding so that even more of this vision can be realized.

For those interested, the full report as well as executive summaries of the different areas can be found on the P5 website. Members of the US particle physics community are also encouraged to sign the petition endorsing the recommendations here.

And stay tuned for part 2 of our coverage, which will discuss the implications of the report on future colliders!

The Mini and Micro BooNE Mystery, Part 1: The Experiment

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: The MicroBooNE Collaboration

Reference: https://arxiv.org/abs/2110.14054

This is the first post in a series on the latest MicroBooNE results, covering the experimental side. Click here to read about the theory side. 

The new results from the MicroBooNE experiment received a lot of excitement last week, being covered by several major news outlets. But unlike most physics news stories that make the press, it was a null result; they did not see any evidence for new particles or interactions. So why is it so interesting? Particle physics experiments produce null results every week, but what made this one newsworthy is that MicroBooNE was trying to check the results of two previous experiments, LSND and MiniBooNE, that did see something anomalous with very high statistical significance. If the LSND/MiniBooNE result had been confirmed, it would have been a huge breakthrough in particle physics, but now that it wasn't, many physicists are scratching their heads trying to make sense of these seemingly conflicting results. However, the MicroBooNE experiment is not exactly the same as MiniBooNE/LSND, and understanding the differences between the two sets of experiments may play an important role in unraveling this mystery.

Accelerator Neutrino Basics

All of these experiments are ‘accelerator neutrino experiments’, so let's first review what that means. Neutrinos are ‘ghostly’ particles that are difficult to study (check out this post for more background on neutrinos). Because they only couple through the weak force, neutrinos don't like to interact with anything very much. So in order to detect them you need both a big detector with a lot of active material and a source with a lot of neutrinos. These experiments are designed to detect neutrinos produced in a human-made beam. To make the beam, a high energy beam of protons is directed at a target. These collisions produce a lot of particles, including unstable bound states of quarks like pions and kaons. These unstable particles have charge, so we can use magnets to focus them into a well-behaved beam. When the pions and kaons decay they usually produce a muon and a muon neutrino. The beam of pions and kaons is pointed at an underground detector located a few hundred meters (or kilometers!) away, and given time to decay. After they decay there will be a nice beam of muons and muon neutrinos. The muons can be stopped by some kind of shielding (like the earth's crust), but the neutrinos will sail right through to the detector.
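
As a rough rule of thumb (standard two-body decay kinematics, not a number quoted from any of these experiments), a neutrino emitted along the beam direction in the decay \pi^+ \to \mu^+ \nu_\mu carries a fixed fraction of the parent pion's energy:

E_\nu \approx \left(1 - \frac{m_\mu^2}{m_\pi^2}\right) E_\pi \approx 0.43\, E_\pi

so the energy of the proton beam and the momenta selected by the focusing magnets largely set the energy spectrum of the resulting neutrino beam.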

A diagram showing the basics of how a neutrino beam is made. Source

Nearly all of the neutrinos from the beam will still pass right through your detector, but a few of them will interact, allowing you to learn about their properties.

All of these experiments are considered ‘short-baseline’ because the distance between the neutrino source and the detector is only a few hundred meters (unlike the hundreds of kilometers in other such experiments). These experiments were designed to look for oscillation of the beam’s muon neutrinos into electron neutrinos which then interact with their detector (check out this post for some background on neutrino oscillations). Given the types of neutrinos we know about and their properties, this should be too short of a distance for neutrinos to oscillate, so any observed oscillation would be an indication something new (beyond the Standard Model) was going on.
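
To make this concrete, here is a minimal numerical sketch (my own illustration with representative numbers, not values taken from the experiments' papers) of why a few-hundred-meter baseline should show essentially no oscillation for the known mass splittings:

```python
import numpy as np

def appearance_prob(delta_m2_ev2, L_km, E_gev, sin2_2theta=1.0):
    """Two-flavor oscillation probability: sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * np.sin(1.27 * delta_m2_ev2 * L_km / E_gev) ** 2

L, E = 0.5, 0.8  # ~500 m baseline and ~0.8 GeV neutrinos, roughly the MiniBooNE scale

# Known 'atmospheric' splitting: the oscillation has barely begun -> probability ~ 4e-6
print(appearance_prob(2.5e-3, L, E))

# An eV^2-scale splitting (e.g. from a hypothetical sterile neutrino) would put an
# oscillation maximum near this L/E, so even a small mixing gives a visible effect.
print(appearance_prob(1.0, L, E, sin2_2theta=0.01))
```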

The LSND + MiniBooNE Anomaly

So the LSND and MiniBooNE ‘anomaly’ was an excess of events above backgrounds that looked like electron neutrinos interacting with their detectors. Both detectors were based on similar technology and were a similar distance from their neutrino source. Their detectors were essentially big tanks of mineral oil lined with light-detecting sensors.

An engineer styling inside the LSND detector. Source

At these energies the most common way neutrinos interact is to scatter against a neutron to produce a proton and a charged lepton (called a ‘charged current’ interaction). Electron neutrinos will produce outgoing electrons and muon neutrinos will produce outgoing muons.

A diagram of a ‘charged current’ interaction. A muon neutrino comes in and scatters against a neutron, producing a muon and a proton. Source

When traveling through the mineral oil these charged leptons will produce a ring of Cherenkov light which is detected by the sensors on the edge of the detector. Muons and electrons can be differentiated based on the characteristics of the Cherenkov light they emit. Electrons will undergo multiple scatterings off of the detector material while muons will not. This makes the Cherenkov rings of electrons ‘fuzzier’ than those of muons. High energy photons can produce electron-positron pairs which look very similar to a regular electron signal and are thus a source of background.

A comparison of muon and electron Cherenkov rings from the Super-Kamiokande experiment. Electrons produce fuzzier rings than muons. Source

Even with a good beam and a big detector, the feebleness of neutrino interactions means that it takes a while to get a decent number of potential events. The MiniBooNE experiment ran for 17 years looking for electron neutrinos scattering in its detector. In MiniBooNE's most recent analysis, they saw around 600 more events than would be expected if there were no anomalous electron neutrinos reaching their detector. The statistical significance of this excess, 4.8 sigma, was very high. Combined with LSND, which saw a similar excess, the significance was above 6 sigma. This means it's very unlikely this is a statistical fluctuation. So either there is some new physics going on or one of their backgrounds has been seriously under-estimated. This excess of events is what has been dubbed the ‘MiniBooNE anomaly’.

The number of events seen in the MiniBooNE experiment as a function of the energy seen in the interaction. The predicted numbers of events from various known background sources are shown in the colored histograms. The best fit to the data including the signal of anomalous oscillations is shown by the dashed line. One can see that at low energies the black data points lie significantly above these backgrounds and strongly favor the oscillation hypothesis.

The MicroBooNE Result

The MicroBooNE experiment was commissioned to verify the MiniBooNE anomaly as well as to test out a new type of neutrino detector technology. MicroBooNE is the first major neutrino experiment to use a ‘Liquid Argon Time Projection Chamber’ detector. This new detector technology allows more detailed reconstruction of what is happening when a neutrino scatters in the detector. The active volume of the detector is liquid Argon, which allows both light and charge to propagate through it. When a neutrino scatters in the liquid Argon, scintillation light is produced that is collected in sensors. As charged particles created in the collision pass through the liquid Argon they ionize the atoms they pass by. An electric field applied to the detector causes this produced charge to drift towards a mesh of wires where it can be collected. By measuring the difference in arrival time between the light and the charge, as well as the amount of charge collected at different positions and times, the precise location and trajectory of the particles produced in the collision can be determined.
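
As a cartoon of the drift-coordinate part of this reconstruction (a toy sketch with a representative drift velocity, not MicroBooNE's actual reconstruction software):

```python
# Toy liquid-argon TPC reconstruction: the scintillation light arrives essentially
# instantly and provides the interaction time, while the ionization charge drifts
# slowly toward the wire planes in the applied electric field.

DRIFT_VELOCITY_MM_PER_US = 1.6  # representative value for liquid argon at ~500 V/cm; depends on the field

def drift_distance_mm(t_light_us, t_charge_us):
    """Distance of an energy deposit from the wire planes, from the light/charge time difference."""
    return DRIFT_VELOCITY_MM_PER_US * (t_charge_us - t_light_us)

# A charge signal arriving 800 microseconds after the flash came from ~1.3 m away from the wires.
print(drift_distance_mm(0.0, 800.0))
```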

A beautiful reconstructed event in the MicroBooNE detector. The colored lines show the tracks of different particles produced in the collision, all coming from a single point where the neutrino interaction took place. One can also see that one of the tracks produced a shower of particles away from the interaction vertex.

This means that unlike MiniBooNE and LSND, MicroBooNE can see not just the lepton, but also the hadronic particles (protons, pions, etc.) produced when a neutrino scatters in its detector. The same type of neutrino interaction therefore looks very different in its detector, so when they went to test the MiniBooNE anomaly they adopted multiple strategies for what exactly to look for. In the first case they looked for the type of interaction that an electron neutrino would have most likely produced: an outgoing electron and proton whose kinematics match those of a charged current interaction. Their second set of analyses, designed to mimic the MiniBooNE selection, is slightly more general. They require one electron and any number of protons, but no pions. Their third analysis is the most general and requires an electron along with anything else.

These different analyses have different levels of sensitivity to the MiniBooNE anomaly, but all of them are found to be consistent with a background-only hypothesis: there is no sign of any excess events. Three out of four of them even see slightly fewer events than the expected background.

A summary of the different MicroBooNE analyses. The Y-axis shows the ratio of the observed number of events to the number expected if only background were present. The red lines show the excess predicted in each channel if the MiniBooNE anomaly produced a signal. One can see that the black data points are much more consistent with the grey bands showing the background-only prediction than with the amount predicted if the MiniBooNE anomaly were present.

Overall the MicroBooNE data rejects the hypothesis that the MiniBooNE anomaly is due to electron neutrino charged current interactions at quite high significance (>3 sigma). So if it's not electron neutrinos causing the MiniBooNE anomaly, what is it?

What’s Going On?

Given that MicroBooNE did not see any signal, many would guess that MiniBooNE's claim of an excess must be flawed and that they have underestimated one of their backgrounds. Unfortunately it is not very clear what that could be. If you look at the low-energy region where MiniBooNE has an excess, there are three major background sources: decays of the Delta baryon that produce a photon (shown in tan), neutral pions decaying to pairs of photons (shown in red), and backgrounds from true electron neutrinos (shown in various shades of green). However, all of these sources of background seem quite unlikely to be the source of the MiniBooNE anomaly.

Before releasing these results, MicroBooNE performed a dedicated search for Delta baryons decaying into photons, and saw a rate in agreement with the theoretical prediction MiniBooNE used, and well below the amount needed to explain the MiniBooNE excess.

Backgrounds from true electron neutrinos produced in the beam, as well as from the decays of muons, should not concentrate only at low energies like the excess does, and their rate has also been measured within MiniBooNE data by looking at other signatures.

The decay of a neutral pion can produce two photons, and if one of them escapes detection, the single remaining photon will mimic their signal. However, one would expect photons to be more likely to escape the detector near its edges, whereas the excess events are distributed uniformly in the detector volume.

So now the mystery of what could be causing this excess is even greater. If it is a background, it seems most likely to come from an unknown source not previously considered. As will be discussed in our part 2 post, it's possible that the MiniBooNE anomaly was caused by a more exotic form of new physics; possibly the excess events in MiniBooNE were not really coming from the scattering of electron neutrinos but from something else that produced a similar signature in their detector. Some of these explanations include particles that decay into pairs of electrons or photons. These sorts of explanations should be testable with MicroBooNE data but will require dedicated analyses for their different signatures.

So on the experimental side, we are now left to scratch our heads and wait for new results from MicroBooNE that may help get to the bottom of this.

Click here for part 2 of our MicroBooNE coverage that goes over the theory side of the story!

Read More

“Is the Great Neutrino Puzzle Pointing to Multiple Missing Particles?” – Quanta Magazine article on the new MicroBooNE result

“Can MiniBooNE be Right?” – Resonaances blog post summarizing the MiniBooNE anomaly prior to the MicroBooNE results

A review of different types of neutrino detectors – from the T2K experiment

How to find invisible particles in a collider

You might have heard that one of the big things we are looking for in collider experiments is the ever-elusive dark matter particle. But given that dark matter particles are expected to interact very rarely with regular matter, how would you know if you happened to make some in a collision? The so-called ‘direct detection’ experiments have to operate giant multi-ton detectors in extremely low-background environments in order to be sensitive to an occasional dark matter interaction. In the noisy environment of a particle collider like the LHC, in which collisions producing sprays of particles happen every 25 nanoseconds, the extremely rare interaction of dark matter with our detector is likely to be missed. But instead of finding dark matter by seeing it in our detector, we can instead find it by not seeing it. That may sound paradoxical, but it's how most collider-based searches for dark matter work.

The trick is based on every physicist's favorite principle: the conservation of energy and momentum. We know that energy and momentum will be conserved in a collision, so if we know the initial momentum of the incoming particles, and measure everything that comes out, then any invisible particles produced will show up as an imbalance between the two. In a proton-proton collider like the LHC we don't know the initial momentum of the particles along the beam axis, but we do know that they were traveling along that axis. That means that the net momentum in the direction away from the beam axis (the ‘transverse’ direction) should be zero. So if we see a momentum imbalance going away from the beam axis, we know that there is some ‘invisible’ particle traveling in the opposite direction.
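
In code, the bookkeeping is just a negative vector sum over the visible particles in the transverse plane (a minimal sketch of the idea, not any experiment's actual reconstruction):

```python
import numpy as np

def missing_transverse_momentum(particles):
    """particles: list of (pt, phi) for every reconstructed visible particle.
    Returns the magnitude and azimuthal direction of the transverse momentum imbalance."""
    px = sum(pt * np.cos(phi) for pt, phi in particles)
    py = sum(pt * np.sin(phi) for pt, phi in particles)
    met = np.hypot(px, py)          # magnitude of the 'missing' transverse momentum
    met_phi = np.arctan2(-py, -px)  # it points opposite to the visible vector sum
    return met, met_phi

# Two jets roughly back-to-back: the visible momenta nearly balance, so little is 'missing'.
print(missing_transverse_momentum([(100.0, 0.0), (95.0, 3.1)]))

# One hard jet recoiling against something the detector never saw: large missing momentum.
print(missing_transverse_momentum([(100.0, 0.0)]))
```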

A sketch of what the signature of an invisible particle would look like in a detector. Note this is a 2D cross section of the detector, with the beam axis traveling through the center of the diagram. There are two signals measured in the detector moving ‘up’ away from the beam pipe. Momentum conservation means there must have been some particle produced which is traveling ‘down’ and was not measured by the detector. Figure borrowed from here

We normally refer to the amount of transverse momentum imbalance in an event as its ‘missing momentum’. Any collision in which an invisible particle was produced will have missing momentum as a tell-tale sign. But while it is a very interesting signature, missing momentum can actually be very difficult to measure. That's because in order to tell if there is anything missing, you have to accurately measure the momentum of every particle in the collision. Our detectors aren't perfect; any particles we miss, or whose momentum we mis-measure, will show up as a ‘fake’ missing energy signature.

A picture of a particularly noisy LHC collision, with a large number of tracks
Can you tell if there is any missing energy in this collision? It's not so easy… Figure borrowed from here

Even if you can measure the missing energy well, dark matter particles are not the only ones invisible to our detector. Neutrinos are notoriously difficult to detect and will not get picked up by our detectors, producing a ‘missing energy’ signature. This means that any search for new invisible particles, like dark matter, has to understand the background of neutrino production (often from the decay of a Z or W boson) very well. No one ever said finding the invisible would be easy!

However, particle physicists have been studying these processes for a long time, so we have gotten pretty good at measuring missing energy in our events and modeling the standard model backgrounds. Missing energy is a key tool that we use to search for dark matter, supersymmetry and other physics beyond the standard model.

Read More:

“What happens when energy goes missing?” – ATLAS blog post by Julia Gonski

“How to look for supersymmetry at the LHC” – blog post by Matt Strassler

“Performance of missing transverse momentum reconstruction with the ATLAS detector using proton-proton collisions at √s = 13 TeV” Technical Paper by the ATLAS Collaboration

“Search for new physics in final states with an energetic jet or a hadronically decaying W or Z boson and transverse momentum imbalance at √s= 13 TeV” Search for dark matter by the CMS Collaboration

Crystals are dark matter’s best friends

Article title: “Development of ultra-pure NaI(Tl) detector for COSINE-200 experiment”

Authors: B.J. Park et al.

Reference: arXiv:2004.06287

The landscape of direct detection of dark matter is a perplexing one; all experiments have so far come up with deafening silence, except for a single one which promises a symphony. This is the DAMA/LIBRA experiment in Gran Sasso, Italy, which has been seeing an annual modulation in its signal for two decades now.

Such an annual modulation is as dark-matter-like as it gets. First proposed by Katherine Freese in 1987, it would be the result of earth’s motion inside the galactic halo of dark matter in the same direction as the sun for half of the year and in the opposite direction during the other half. However, DAMA/LIBRA’s results are in conflict with other experiments – but with the catch that none of those used the same setup. The way to settle this is obviously to build more experiments with the DAMA/LIBRA setup. This is an ongoing effort which ultimately focuses on the crystals at its heart.
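
The expected signature is simple enough to write down explicitly (a schematic rate model; the constant and modulated amplitudes here are placeholders rather than DAMA/LIBRA's measured values, though the phase is fixed by the Earth's orbit):

```python
import numpy as np

def modulated_rate(t_days, R0=1.0, Sm=0.02, t0_days=152.5, period_days=365.25):
    """Event rate vs time: a constant piece R0 plus an annual modulation of amplitude Sm.
    The peak sits near day ~152 (early June), when the Earth's orbital velocity adds
    maximally to the Sun's motion through the galactic dark matter halo."""
    return R0 + Sm * np.cos(2 * np.pi * (t_days - t0_days) / period_days)

print(modulated_rate(152.5))  # maximum rate, early June
print(modulated_rate(335.0))  # minimum rate, early December
```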

Cylindrical crystals wrapped in reflector, bounded by photomultipliers (PMTs) and surrounded by scintillators. (COSINE-100)

The specific crystals are made of the scintillating material thallium-doped sodium iodide, NaI(Tl). Dark matter particles, and particularly WIMPs, would collide elastically with atomic nuclei and the recoil would give off photons, which would eventually be captured by photomultiplier tubes at the ends of each crystal.

Right now a number of NaI(Tl)-based experiments are at various stages of preparation around the world, with COSINE-100 at the Yangyang mountain, South Korea, already producing negative results. However, these are still not on an equal footing with DAMA/LIBRA's because of higher backgrounds at COSINE-100. What is the collaboration to do, then? The answer is to focus even more on the crystals and how they are prepared.

Setup of the COSINE-100 experiment. (COSINE-100)

Over the last couple of years some serious R&D went into growing better crystals for COSINE-200, the planned upgrade of COSINE-100. Yes, a crystal is something that can and does grow. A seed placed inside the raw material, in this case NaI(Tl) powder, leads it to organize itself around the seed’s structure over the next hours or days.

In COSINE-100 the most annoying backgrounds came from within the crystals themselves because of the production process, because of natural radioactivity, and because of cosmogenically induced isotopes. Let’s see how each of these was tackled during the experiment’s mission towards a radiopure upgrade.

Improved techniques of growing and preparing the crystals reduced contamination from the materials of the grower device and from the ambient environment. At the same time different raw materials were tried out to put the inherent contamination under control.

Among a handful of naturally present radioactive isotopes, particular care was given to 40K. 40K can decay characteristically to an X-ray of 3.2 keV and a γ-ray of 1,460 keV, a combination convenient for tagging it to a large extent. The tagging is done with the help of 2,000 liters of liquid scintillator surrounding the crystals. However, if the γ-ray escapes the crystal then the left-behind X-ray will mimic the expected signal from WIMPs… Eventually the dangerous 40K was brought down to levels comparable to those in DAMA/LIBRA through the investigation of various techniques and raw materials.
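
The tagging itself is simple coincidence logic (a toy sketch of the idea, with illustrative energy windows, not the collaboration's analysis code):

```python
def is_tagged_k40(crystal_energy_kev, veto_energy_kev,
                  xray_window=(2.5, 4.0), gamma_window=(1400.0, 1520.0)):
    """Flag a crystal hit as potassium-40 when a ~3.2 keV X-ray-like deposit in the crystal
    coincides with a ~1,460 keV gamma caught by the surrounding liquid scintillator.
    If the gamma escapes undetected, the low-energy deposit survives as a WIMP-like background."""
    xray_like = xray_window[0] <= crystal_energy_kev <= xray_window[1]
    gamma_seen = gamma_window[0] <= veto_energy_kev <= gamma_window[1]
    return xray_like and gamma_seen

print(is_tagged_k40(3.2, 1460.0))  # True: vetoed as 40K
print(is_tagged_k40(3.2, 0.0))     # False: the gamma escaped, the event stays in the signal region
```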

But the main source of radioactive background in COSINE-100 was isotopes such as 3H or 22Na, created inside the crystals by cosmic ray muons after their production. Their abundance was reduced significantly by two simple moves: the crystals were grown locally at a very low altitude and installed underground within a few weeks (instead of being transported from a lab at 1,400 meters above sea level in Colorado). Moreover, most of the remaining cosmogenic background is expected to decay away within a couple of years.

Components of the background, and temporal evolution of the cosmogenic radioactivity. (Source)

Where do these efforts stand? The energy range of interest for testing the DAMA/LIBRA signal is 1-6 keV. This corresponds to a background target of 1 count/kg/day/keV. After the crystals R&D, the achieved contamination was less than about 0.34 counts/kg/day/keV. In short, everything is ready for COSINE-100 to upgrade to COSINE-200 and test the annual modulation without the previous ambiguities that stood in the way.

Learn more:

More on DAMA/LIBRA in ParticleBites.

Cross-checking the modulation.

The COSINE-100 experiment.

First COSINE-100 results.

The XENON1T Excess : The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world's most sensitive dark matter experiments. The experiment consists of a huge tank of Xenon placed deep underground in the Gran Sasso mine in Italy. It is a ‘direct-detection’ experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMPs, Weakly Interacting Massive Particles, which used to be everyone's favorite candidate for dark matter. However, given recent null results from WIMP-hunting direct-detection experiments and collider experiments at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, which were designed to look for heavy WIMPs colliding off of Xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles that are much lighter than traditional WIMPs would not leave much of an impact on large Xenon nuclei, but they can leave a signal in the detector if they instead scatter off of the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity. The experiment's extraordinary sensitivity is due to its exquisite control over backgrounds and extremely low energy threshold for detection. Rather than just being impressed, what has gotten many physicists excited is that the latest data shows an excess of events above expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim ‘evidence’ of an anomaly but short of the typical 5 sigma required to claim discovery.

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is nothing. 3.5 sigma is not enough evidence to claim discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal, and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment's energy detection threshold. The so-called ‘efficiency turn on’, the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe this is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note the background line is flat below the excess), and the excess rises above this flat background. So to explain this excess the efficiency would have to somehow be better at low energies than at high energies, which seems very unlikely. Or there would have to be a very strange unaccounted-for bias where some higher energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration performed an electron reconstruction calibration using the radioactive decays of Radon-220 over exactly this energy range and were able to model the turn on and detection efficiency very well.

Results of a calibration using radioactive decays of Radon-220. One can see that data in the efficiency turn on (right around 2 keV) is modeled quite well and no excesses are seen.

However, the possibility of a novel Standard Model background is much more plausible. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to Helium-3, an electron and a neutrino with a half-life of around 12 years. The energy released in this decay is 18.6 keV, giving the electron an average energy of a few keV. The expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small. Around 100 parts-per-billion of H2 would lead to enough tritium to explain the signal, which translates to just 3 tritium atoms per kilogram of liquid Xenon. The collaboration tries its best to investigate this possibility, but it can neither rule out nor confirm such a small amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by 2 orders of magnitude, so it is not impossible that they were able to avoid this small amount of contamination.
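
To see why such a minuscule contamination matters, here is the back-of-the-envelope decay rate implied by the numbers above (my own arithmetic, assuming for illustration a roughly tonne-scale fiducial mass and a year of exposure):

```python
import numpy as np

SECONDS_PER_YEAR = 3.15e7
half_life_s = 12.3 * SECONDS_PER_YEAR  # tritium half-life, roughly 12 years
atoms_per_kg = 3.0                     # ~3 tritium atoms per kilogram of xenon (quoted above)

decays_per_kg_per_s = atoms_per_kg * np.log(2) / half_life_s
decays_per_tonne_year = decays_per_kg_per_s * 1000.0 * SECONDS_PER_YEAR

# Roughly 170 decays per tonne-year of exposure: even a handful of atoms per kilogram
# is enough to produce an effect on the scale of the observed low-energy excess.
print(decays_per_tonne_year)
```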

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down exactly based on the data. The analysis was specifically searching for two signals that would have shown up in exactly this energy range: axions produced in the sun, and neutrinos produced in the sun interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation being slightly preferred. However, since this result was released, many have pointed out that these models would actually be in conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would have given stars an additional way to release energy, causing them to cool at a faster rate than in the Standard Model. The strength of interaction between axions and electrons needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. There are similar astrophysical constraints on neutrino magnetic moments that also make that explanation unlikely.

This has left the door open for theorists to try to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing this opportunity! There are new explanations appearing on the arXiv every day, with no sign of stopping. In the roughly 2 weeks between the XENON1T collaboration announcing their result and this post being written, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or being boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun with this, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing the XENONnT experiment that will serve as a follow-up to XENON1T. XENONnT will feature a larger active volume of Xenon and a lower background level, allowing them to potentially confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow them to better determine the shape of the signal, allowing them to possibly distinguish between the tritium shape and a potential new physics explanation. If real, other liquid Xenon experiments like LUX and PandaX should also be able to independently confirm the signal in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

Listening for axions

If dark matter actually consists of a new kind of particle, then the most up-and-coming candidate is the axion. The axion is a consequence of the Peccei-Quinn mechanism, a plausible solution to the “strong CP problem,” or why the strong nuclear force conserves the CP-symmetry although there are no reasons for it to. It is a very light neutral boson, named by Frank Wilczek after a detergent brand (in a move that obviously dates its introduction in the ’70s).

Axion conversion in a magnetic field: the result is a photon. (Source.)

Most experiments that try to directly detect dark matter have looked for WIMPs (weakly interacting massive particles). However, as those searches have not borne fruit, the focus started turning to axions, which make for good candidates given their properties and the fact that if they exist, then they exist in multitudes throughout the galaxies. Axions “speak” to the QCD part of the Standard Model, so they can appear in interaction vertices with hadronic loops. The end result is that axions passing through a magnetic field will convert to photons.

In practical terms, their detection boils down to having strong magnets, sensitive electronics and an electromagnetically very quiet place at one’s disposal. One can then sit back and wait for the hypothesized axions to pass through the detector as earth moves through the dark matter halo surrounding the Milky Way. Which is precisely why such experiments are known as “haloscopes.”

Now, the most veteran haloscope of all published significant new results. Alas, it is still empty-handed, but we can look at why its update is important and how it was reached.

ADMX (Axion Dark Matter eXperiment) of the University of Washington has been around for a quarter-century. By listening for signals from axions, it progressively gnaws away at the space of allowed values for their mass and coupling to photons, focusing on an area of interest:

Latest exclusion limits on the axion mass and coupling to photons.

Unlike higher values, this area is not excluded by astrophysical considerations (e.g. stars cooling off through axion emission) and other types of experiments (such as looking for axions from the sun). In addition, the bands above the lines denoted “KSVZ” and “DFSZ” are special. They correspond to the predictions of two models with favorable theoretical properties. So, ADMX is dedicated to scanning this parameter space. And the new analysis added one more year of data-taking, making a significant dent in this ballpark.

As mentioned, the presence of axions would be inferred from a stream of photons in the detector. The excluded mass range was scanned by “tuning” the experiment to different frequencies, while at each frequency step longer observation times probed smaller values for the axion-photon coupling.
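
That ‘tuning’ is a direct mass-to-frequency conversion: the photon from an axion conversion carries essentially the axion's rest energy. A quick sketch of the conversion (with an illustrative micro-eV-scale mass, not a value singled out by ADMX):

```python
PLANCK_CONSTANT_EV_S = 4.136e-15  # Planck constant in eV*s

def axion_frequency_ghz(mass_ev):
    """Microwave frequency corresponding to an axion of the given mass: f = m c^2 / h."""
    return mass_ev / PLANCK_CONSTANT_EV_S / 1e9

# A ~2.7 micro-eV axion would show up as a narrow line near 0.65 GHz,
# which sets the resonant frequency the cavity must be tuned to.
print(axion_frequency_ghz(2.7e-6))
```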

Two things that this search needs are a lot of quiet and some good amplification, as the signal from a typical axion is expected to be as weak as the signal from a mobile phone left on the surface of Mars (around 10^{-23} W). The setup is indeed stripped of noise by being placed in a dilution refrigerator, which keeps its temperature at a few tenths of a degree above absolute zero. This is practically the domain governed by quantum noise, so advantage can be taken of the finesse of quantum technology: for the first time ADMX used SQUIDs, superconducting quantum interference devices, for the amplification of the signal.

The heart of the experiment inside the refrigerator. The resonant frequency of the cavity is tuned to match the photons – hopefully – given off by axions. (Source.)

In the end, a good chunk of the parameter space which is favored by the theory might have been excluded, but the haloscope is ready to look at the rest of it. Just think of how, one day, a pulse inside a small device in a university lab might be a messenger of the mysteries unfolding across the cosmos.

References:

Publication by the ADMX collaboration. (arXiv)

Learn more:

  1. The theory behind axions.
  2. The hitchhiker’s guide to the dilution refrigerator.
  3. Intro to KSVZ and DFSZ axions (and more).
  4. Resonant cavities.

Three Birds with One Particle: The Possibilities of Axions

Title: “Axiogenesis”

Authors: Raymond T. Co and Keisuke Harigaya

Reference: https://arxiv.org/pdf/1910.02080.pdf

On the laundry list of problems in particle physics, a rare three-for-one solution could come in the form of a theorized light scalar particle fittingly named after a detergent: the axion. Frank Wilczek coined this term in reference to its potential to “clean up” the Standard Model once he realized its applicability to multiple unsolved mysteries. Although Axion the dish soap has been somewhat phased out of our everyday consumer life (being now primarily sold in Latin America), axion particles remain as a key component of a physicist’s toolbox. While axions get a lot of hype as a promising dark matter candidate, and are now being considered as a solution to matter-antimatter asymmetry, they were originally proposed as a solution for a different Standard Model puzzle: the strong CP problem. 

The strong CP problem refers to a peculiarity of quantum chromodynamics (QCD), our theory of quarks, gluons, and the strong force that mediates them: while the theory permits charge-parity (CP) symmetry violation, the ardent experimental search for CP-violating processes in QCD has so far come up empty-handed. What does this mean from a physical standpoint? Consider the neutron electric dipole moment (eDM), which roughly describes the distribution of the three quarks comprising a neutron. Naively, we might expect this orientation to be a triangular one. However, measurements of the neutron eDM, carried out by tracking changes in neutron spin precession, return a value orders of magnitude smaller than classically expected. In fact, the incredibly small value of this parameter corresponds to a neutron where the three quarks are found nearly in a line. 

The classical picture of the neutron (left) looks markedly different from the picture necessitated by CP symmetry (right). The strong CP problem is essentially a question of why our mental image should look like the right picture instead of the left. Source: https://arxiv.org/pdf/1812.02669.pdf

This would not initially appear to be a problem. In fact, in the context of CP, this makes sense: a simultaneous charge conjugation (exchanging positive charges for negative ones and vice versa) and parity inversion (flipping the sign of spatial directions) when the quark arrangement is linear results in a symmetry. Yet there are a few subtleties that point to the existence of further physics. First, this tiny value requires an adjustment of parameters within the mathematics of QCD, carefully fitting some coefficients to cancel out others in order to arrive at the desired conclusion. Second, we do observe violation of CP symmetry in particle physics processes mediated by the weak interaction, such as kaon decay, which also involves quarks. 

These arguments rest upon the idea of naturalness, a principle that has been invoked successfully several times throughout the development of particle theory as a hint toward the existence of a deeper, more underlying theory. Naturalness (in one of its forms) states that such minuscule values are only allowed if they increase the overall symmetry of the theory, something that cannot be true if weak processes exhibit CP-violation where strong processes do not. This puts the strong CP problem squarely within the realm of “fine-tuning” problems in physics; although there is no known reason for CP symmetry conservation to occur, the theory must be modified to fit this observation. We then seek one of two things: either an observation of CP-violation in QCD or a solution that sets the neutron eDM, and by extension any CP-violating phase within our theory, to zero.

This term in the QCD Lagrangian allows for CP symmetry violation. Current measurements place the value of \theta at no greater than 10^{-10}. In Peccei-Quinn symmetry, \theta is promoted to a field.
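
For reference, the term being described usually takes the following standard form (reproduced here from textbook conventions, since the equation image itself is not shown):

\mathcal{L}_\theta = \theta\,\frac{g_s^2}{32\pi^2}\, G^a_{\mu\nu}\,\tilde{G}^{a\,\mu\nu}

where G^a_{\mu\nu} is the gluon field strength, \tilde{G}^{a\,\mu\nu} its dual, and g_s the strong coupling; a non-zero \theta would feed directly into observables like the neutron eDM.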

When such an expected symmetry violation is nowhere to be found, where is a theoretician to look for such a solution? The most straightforward answer is to turn to a new symmetry. This is exactly what Roberto Peccei and Helen Quinn did in 1977, birthing the Peccei-Quinn symmetry, an extension of QCD which incorporates a CP-violating phase known as the \theta term. The main idea behind this theory is to promote \theta to a dynamical field, rather than keeping it a constant. Since quantum fields have associated particles, this also yields the particle we dub the axion. Looking back briefly to the neutron eDM picture of the strong CP problem, this means that the angular separation should also be dynamical, and hence be relegated to the minimum energy configuration: the quarks again all in a straight line. In the language of symmetries, the U(1) Peccei-Quinn symmetry is approximately spontaneously broken, giving us a non-zero vacuum expectation value and a nearly-massless Goldstone boson: our axion.

This is all great, but what does it have to do with dark matter? As it turns out, axions make for an especially intriguing dark matter candidate due to their low mass and potential to be produced in large quantities. For decades, this prowess was overshadowed by the leading WIMP candidate (weakly-interacting massive particles), whose parameter space has been slowly whittled down to the point where physicists are more seriously turning to alternatives. As there are several production-mechanisms in early universe cosmology for axions, and 100% of dark matter abundance could be explained through this generation, the axion is now stepping into the spotlight. 

This increased focus is causing some theorists to turn to further avenues of physics as possible applications for the axion. In a recent paper, Co and Harigaya examined the connection between this versatile particle and matter-antimatter asymmetry (also called baryon asymmetry). This latter term refers to the simple observation that there appears to be more matter than antimatter in our universe, since we are predominantly composed of matter, yet matter and antimatter also seem to be produced in colliders in equal proportions. In order to explain this asymmetry, without which matter and antimatter would have annihilated and we would not exist, physicists look for any mechanism to trigger an imbalance in these two quantities in the early universe. This theorized process is known as baryogenesis.

Here’s where the axion might play a part. The \theta term, which settles to zero in its possible solution to the strong CP problem, could also have taken on any value from 0 to 360 degrees very early on in the universe. Analyzing the axion field through the conjectures of quantum gravity, if there are no global symmetries then the initial axion potential cannot be symmetric [4]. By falling from some initial value through an uneven potential, which the authors describe as a wine bottle potential with a wiggly top, \theta would cycle several times through the allowed values before settling at its minimum energy value of zero. This causes the axion field to rotate, an asymmetry which could generate a disproportionality between the amounts of produced matter and antimatter. If the field were to rotate in one direction, we would see more matter than antimatter, while a rotation in the opposite direction would result instead in excess antimatter.

The team’s findings can be summarized in the plot above. Regions in purple, red, and above the orange lines (dependent upon a particular constant \xi which is proportional to weak scale quantities) signify excluded portions of the parameter space. The remaining white space shows values of the axion decay constant and mass where the currently measured amount of baryon asymmetry could be generated. Source: https://arxiv.org/pdf/1910.02080.pdf

Introducing a third fundamental mystery into the realm of axions begets the question of whether all three problems (strong CP, dark matter, and matter-antimatter asymmetry) can be solved simultaneously with axions. And, of course, there are nuances that could make alternative solutions to the strong CP problem more favorable or other dark matter candidates more likely. Like most theorized particles, there are several formulations of axion in the works. It is then necessary to turn our attention to experiment to narrow down the possibilities for how axions could interact with other particles, determine what their mass could be, and answer the all-important question: if they exist at all. Consequently, there are a plethora of axion-focused experiments up and running, with more on the horizon, that use a variety of methods spanning several subfields of physics. While these results begin to roll in, we can continue to investigate just how many problems we might be able to solve with one adaptable, soapy particle.

Learn More:

  1. A comprehensive introduction to the strong CP problem, the axion solution, and other potential solutions: https://arxiv.org/pdf/1812.02669.pdf 
  2. Axions as a dark matter candidate: https://www.symmetrymagazine.org/article/the-other-dark-matter-candidate
  3. More information on matter-antimatter asymmetry and baryogenesis: https://www.quantumdiaries.org/2015/02/04/where-do-i-come-from/
  4. The quantum gravity conjectures that axiogenesis builds upon: https://arxiv.org/abs/1810.05338
  5. An overview of current axion-focused experiments: https://www.annualreviews.org/doi/full/10.1146/annurev-nucl-102014-022120

Quark nuggets of wisdom

Article title: “Dark Quark Nuggets”

Authors: Yang Bai, Andrew J. Long, and Sida Lu

Reference: arXiv:1810.04360

Information, gold and chicken. What do they all have in common? They can all come in the form of nuggets. Naturally one would then be compelled to ask: “what about fundamental particles? Could they come in nugget form? Could that hold the key to dark matter?” Lucky for you this has become the topic of some ongoing research.

A ‘nugget’ in this context refers to large macroscopic ‘clumps’ of matter formed in the early universe that could possibly survive up until the present day to serve as a dark matter candidate. Much like nuggets of the edible variety, one must be careful to combine just the right ingredients in just the right way. In fact, there are generally three requirements to forming such an exotic state of matter:

  1. (At least) two different vacuum states separated by a potential ‘barrier’ where a phase transition occurs (known as a first-order phase transition).
  2. A charge which is conserved globally which can accumulate in a small part of space.
  3. An excess of matter over antimatter on the cosmological scale, or in other words, a large non-zero macroscopic number density of global charge.

Back in the 1980s, before much work had been done in the field of lattice quantum chromodynamics (lQCD), Edward Witten put forward the idea that the Standard Model QCD sector could in fact accommodate such an exotic form of matter. Quite simply, this would occur in the early universe when the quarks undergo color confinement to form hadrons. In particular, Witten’s nuggets were realized as large macroscopic clumps of ‘quark matter’ with a very large concentration of baryon number, N_B > 10^{30}. However, with the advancement of lQCD techniques, the transition in which the quarks become confined looks more like a smooth, continuous ‘crossover’ rather than a first-order phase transition, making the idea somewhat unfeasible within the Standard Model.

Theorists, particularly those interested in dark matter, are not confined (for lack of a better term) to the strict details of the Standard Model, and often look to construct ‘dark sectors’ – sometimes complicated, invisible to us, but readily able to provide the much-needed dark matter candidate.

Dark QCD?

The difficulty of obtaining a first-order phase transition to form our quark nuggets need not be a problem if we consider a QCD-type theory that does not interact with the Standard Model particles. More specifically, we can consider a set of dark quarks and dark gluons with arbitrary characteristics such as masses, couplings, numbers of flavors, and numbers of colors (all of which are, of course, quite settled for the Standard Model QCD case). In fact, looking at the numbers of flavors and colors of dark QCD in Figure 1, we can see in the white unshaded region a number of models that admit a first-order phase transition, as required to form these dark quark nuggets.

Figure 1: The white unshaded region corresponds to dark QCD models which may permit a first-order phase transition and thus the existence of ‘dark quark nuggets’.

As with ordinary quarks, the distinction between the two phases actually refers to a process known as chiral symmetry breaking. When the temperature of the universe cools to the dark confinement scale, color confinement of the dark quarks occurs around the same time, such that no individual colored quark can be observed on its own – only colorless bound states.

Forming a nugget

As we have briefly mentioned so far, the dark nuggets are formed as the universe undergoes a ‘dark’ phase transition from a phase where the dark color is unconfined to a phase where it is confined. At some critical temperature, due to the nature of first-order phase transitions, bubbles of the new confined phase (full of dark hadrons) begin to nucleate out of the dark quark-gluon plasma. The growth of these bubbles is driven by a difference in pressure, a consequence of the fact that the unconfined and confined vacuum states have different energies. The almost massless particles of the dark plasma scatter off the advancing bubble walls rather than crossing into the phase of heavy dark (anti)baryons, and hence a large amount of dark baryon number accumulates in the unconfined phase. Eventually, as these bubbles merge and coalesce, we would expect local regions of remaining dark quark-gluon plasma, unconfined and stable against collapse thanks to the Fermi degeneracy pressure (see the reference below for more on this). An illustration is shown in Figure 2. Calculations with varying confinement energy scales estimate their masses anywhere between 10^{-7} and 10^{23} grams, with radii from 10^{-15} to 10^8 cm, so they can truly be classed as macroscopic dark objects!
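As a back-of-the-envelope illustration (not the paper’s detailed calculation), the stability condition balances the Fermi degeneracy pressure of the trapped, nearly massless dark quarks against the vacuum pressure difference between the two phases:

```latex
% Rough stability condition for a dark quark nugget (a sketch, not the authors' full treatment):
% g is the number of dark fermion degrees of freedom, \mu their chemical potential inside the nugget,
% and B ~ \Lambda_{\rm dQCD}^4 is the vacuum pressure difference between confined and unconfined phases.
P_{\rm Fermi} \approx \frac{g\,\mu^{4}}{24\pi^{2}} \approx B \sim \Lambda_{\rm dQCD}^{4}
\quad\Longrightarrow\quad
\mu \sim \left(\frac{24\pi^{2}}{g}\right)^{1/4}\Lambda_{\rm dQCD}.
```

Since the trapped dark baryon number sets \mu and the confinement scale sets B, varying \Lambda_{\rm dQCD} over many orders of magnitude is what produces the enormous range of nugget masses and radii quoted above.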

Figure 2: Dark Quark Nuggets are pockets of unconfined dark quark-gluon plasma kept stable by the balance between Fermi degeneracy pressure and the vacuum pressure arising from the energy difference between the unconfined and confined phases.

How do we know they could be there? 

There are a number of ways to infer the existence of dark quark nuggets, but two of the main ones are: (i) as a dark matter candidate and (ii) through probes of the dark QCD model that provides them. Cosmologically, the latter can imply the existence of a dark form of radiation, which ultimately can leave imprints on the Cosmic Microwave Background (CMB). In a similar vein, one avenue of current study is the production of a stochastic background of gravitational waves from the first-order phase transition – one of the key requirements for dark quark nugget formation. More directly, the nuggets can be probed through astrophysical means if they share some (albeit small) coupling with the Standard Model particles. The standard technique of direct detection with Earth-based experiments could be the way to go, and there may also be the possibility of cosmic-ray production from collisions between dark quark nuggets. Beyond these, a number of other observations are possible over the enormous range of nugget sizes and masses shown in Figure 3.

Figure 3: Range of dark quark nugget masses and sizes and their possible detection methods.

To conclude, note that within such a generic framework, a number of well-motivated theories may predict (or in fact unavoidably contain) quark nuggets that could serve as interesting dark matter candidates with a lot of fun phenomenology to play with. Where to go from here is limited only by the theorist’s imagination!

References and further reading:

Dragonfly 44: A potential Dark Matter Galaxy

Title: A High Stellar Velocity Dispersion and ~100 Globular Clusters for the Ultra Diffuse Galaxy Dragonfly 44

Publication: ApJ, v828, Number 1, arXiv: 1606.06291

The title of this paper sounds like a standard astrophysics analysis; but dig a little deeper and you’ll find what I think is an incredibly interesting, surprising and unexpected observation.

The Coma Cluster: NASA, ESA, and the Hubble Heritage Team (STScI/AURA)

Last year, using the W. M. Keck Observatory and the Gemini North Telescope on Mauna Kea, Hawaii, the team behind the Dragonfly Telephoto Array followed up on its observations of the Coma cluster (a large cluster of galaxies in the constellation Coma – I’ve included a Hubble image to the left). The team had identified a population of large, very low surface brightness (i.e., not a lot of stars) spheroidal galaxies, including an Ultra Diffuse Galaxy (UDG) called Dragonfly 44 (shown below). They determined that Dragonfly 44 has so few stars that their gravity alone could not hold it together – so some other matter had to be involved – namely DARK MATTER (my favorite kind of unknown matter).

 

The ultra-diffuse galaxy Dragonfly 44. The galaxy consists almost entirely of dark matter. It is surrounded by faint, compact sources. Image credit: Pieter van Dokkum / Roberto Abraham / Gemini Observatory / SDSS / AURA.

The team used the DEIMOS instrument installed on Keck II to measure the velocities of stars for 33.5 hours over a period of six nights so they could determine the galaxy’s mass. The measured stellar velocity dispersion suggests that Dragonfly 44 has a mass of about one trillion solar masses, about the same as the Milky Way. However, the galaxy emits only 1% of the light emitted by the Milky Way. In other words, the Milky Way has more than a hundred times more stars than Dragonfly 44. I’ve also included the plot of mass-to-light ratio vs. dynamical mass, which illustrates how unusual Dragonfly 44 is compared to other dark-matter-dominated galaxies like dwarf spheroidals.
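To get a feel for what those numbers imply, here is a rough back-of-the-envelope calculation (an illustration only – not the paper’s formal dynamical mass-to-light measurement, which is made within the half-light radius). The Milky Way luminosity used for the absolute estimate is an assumed round number.

```python
# Rough arithmetic based on the figures quoted above (illustrative, not the paper's measurement).

M_DF44 = 1e12          # dynamical mass of Dragonfly 44 quoted above [solar masses]
M_MW   = 1e12          # Milky Way mass, for comparison [solar masses]
LIGHT_FRACTION = 0.01  # Dragonfly 44 emits only ~1% of the Milky Way's light

# Same mass, 1% of the light  ->  a mass-to-light ratio ~100x the Milky Way's.
relative_ml = (M_DF44 / M_MW) / LIGHT_FRACTION
print(f"(M/L) of Dragonfly 44 ~ {relative_ml:.0f}x the Milky Way's")

# Assuming a Milky Way luminosity of ~2e10 L_sun (an assumed, approximate value),
# the implied halo-scale M/L is of order a few thousand in solar units.
L_MW = 2e10
print(f"Implied M/L ~ {M_DF44 / (LIGHT_FRACTION * L_MW):.0f} M_sun / L_sun (order of magnitude)")
```

Either way you slice it, almost all of the mass has to be something that doesn’t shine.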

 

 

Relation between dynamical mass-to-light ratio and dynamical mass. Open symbols are dispersion-dominated objects from Zaritsky, Gonzalez, & Zabludoff (2006) and Wolf et al. (2010). The UDGs VCC 1287 (Beasley et al. 2016) and Dragonfly 44 fall outside of the band defined by the other galaxies, having a very high M/L ratio for their mass.

What is particularly exciting is that we don’t understand how galaxies like this form.

Their research indicates that these UDGs could be failed galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. But we’ll need to discover more to fully understand them.

Further reading (works by the same authors)
Forty-Seven Milky Way-Sized, Extremely Diffuse Galaxies in the Coma Cluster: arXiv: 1410.8141
Spectroscopic Confirmation of the Existence of Large, Diffuse Galaxies in the Coma Cluster: arXiv: 1504.03320

Dark matter or Pulsars?? cont…

Title: Estimating the GeV Emission of Millisecond Pulsars in Dwarf Spheroidal Galaxies
Publication: arXiv: 1607.06390, submitted to ApJL

Howdy, ParticleBites enthusiasts! I’m blogging this week from ICHEP. Over the next week there will be a lot of exciting updates from the particle physics community… like what happened to that 750 GeV bump? Are there any new bumps for us to be excited about? Have we broken the Standard Model yet? But all of these will come later in the week – today is registration. In the meantime, there have been a lot of interesting papers circulating about disentangling dark matter from our favorite astrophysical background – pulsars.

The paper that is the focus of this post delves deeper into understanding potential gamma-ray emission from pulsars in dwarf spheroidal galaxies (dsphs). The density of millisecond pulsars (MSPs) is related to the density of stars in a cluster. In low-density stellar environments, such as dsphs, the abundance of MSPs is expected to be proportional to stellar mass (it is much higher for globular clusters and the Galactic center). Remember, the advantage of dsphs in looking for a dark matter signal, when compared with, for example, the Galactic center, is that they have many fewer detectable gamma-ray emitting sources – like MSPs (see arXiv: 1503.02641 for a recent Fermi-LAT paper). However, as our instruments become more and more sensitive, the probability of detecting gamma rays from astrophysical sources in dsphs goes up as well.

MSP gamma-ray luminosity function (0.1–100 GeV) normalized to the stellar mass of the Milky Way. Error bars correspond to the statistical uncertainty associated with the finite number of LAT-detected MSPs. The shaded gray band represents the 1σ statistical uncertainty on the broken power-law fit to these data, and the dashed gray lines represent the systematic uncertainty envelope (distances to LAT-detected MSPs, spatial distribution of Galactic MSPs, and effective selection function of the LAT pulsar catalog).

This work estimates the expected gamma-ray emission (via the luminosity function) of MSPs in dsphs. The authors assume that the number of MSPs is proportional to the stellar mass and that the spectrum is similar to that of the 90 known MSPs in the Galactic disk (see the figure on the right); the luminosity function is fit with a broken power law. This result can then be scaled by the predicted number of MSPs in each dsph and by the dsph’s distance, yielding a prediction of the gamma-ray emission we would expect from MSPs in an individual dsph (a rough sketch of this scaling is given below).
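Here is a minimal sketch of that scaling argument, with deliberately round, assumed numbers (the per-stellar-mass MSP abundance and the mean photon rate per MSP below are illustrative placeholders, not the paper’s fitted values):

```python
import math

# A minimal sketch of the scaling described above: N_MSP scales with stellar mass,
# and the flux falls off with distance squared. All normalizations are assumed,
# illustrative values -- not the paper's fitted luminosity function.

KPC_TO_CM = 3.086e21

MSP_PER_SOLAR_MASS = 5e-7   # [MSPs / M_sun] -- hypothetical normalization to the Milky Way disk
MEAN_PHOTON_RATE   = 5e35   # [ph / s] -- assumed average photon rate (>500 MeV) per MSP

def predicted_msp_flux(stellar_mass_msun: float, distance_kpc: float) -> float:
    """Expected MSP gamma-ray flux [ph / cm^2 / s] from a dwarf spheroidal galaxy."""
    n_msp = MSP_PER_SOLAR_MASS * stellar_mass_msun
    d_cm = distance_kpc * KPC_TO_CM
    return n_msp * MEAN_PHOTON_RATE / (4.0 * math.pi * d_cm ** 2)

# Example: a Fornax-like dsph (round, assumed stellar mass and distance).
flux = predicted_msp_flux(stellar_mass_msun=2e7, distance_kpc=140)
print(f"Predicted MSP flux ~ {flux:.1e} ph cm^-2 s^-1")   # of order 10^-12
```

With numbers in this ballpark the predicted flux comes out around 10^{-12} ph cm^{-2} s^{-1}, the same order of magnitude as the paper’s estimate for Fornax quoted below.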

 

They found that for the highest stellar mass dsphs (the classical ones, for example Fornax and Draco), there is a modest MSP population. However, even for the largest classical dsph, Fornax, the predicted MSP flux above 500 MeV is ~10^{-12} ph cm^{-2} s^{-1}, which is about an order of magnitude below the typical flux upper limits obtained at high Galactic latitudes after six years of the LAT survey, ~10^{-10} ph cm^{-2} s^{-1} (see arXiv: 1503.02641 again). The predicted flux and sensitivity are shown below.

 

Expected flux versus J-factor (remember, the J-factor scales with distance). Blue points indicate the predicted MSP contribution in 30 Milky Way dsphs and dsph candidates. J-factor uncertainties are shown for kinematically confirmed dsphs only. The gray shaded band represents the typical upper limit derived in high Galactic latitude blank fields after 6 years of the LAT survey. The red curve represents a DM annihilation model that is consistent with both DM interpretations of the Galactic Center Excess and the characteristic spectral shape of MSPs.

So all in all, this is good news for dsphs as dark matter targets. Understanding the backgrounds is imperative for having confidence in an analysis if a signal is found, and this gives us more confidence that we understand one of the dominant backgrounds in the hunt for dark matter.