The P5 Report & The Future of Particle Physics (Part 1)

Particle physics is the epitome of ‘big science’. Answering our most fundamental questions about physics requires world-class experiments that push the limits of what’s technologically possible. Such incredibly sophisticated experiments, like those at the LHC, require big facilities to make them possible, big collaborations to run them, big project planning to make dreams of new facilities a reality, and committees with big acronyms to decide what to build.

Enter the Particle Physics Project Prioritization Panel (aka P5), which is tasked with assessing the landscape of future projects and laying out a roadmap for the future of the field in the US. And because these large projects are inevitably international endeavors, the report the panel released last week has a large impact on the global direction of the field. The report lays out a vision for the next decade of neutrino physics, cosmology, dark matter searches, and future colliders.

P5 follows the community-wide brainstorming effort known as the Snowmass Process, in which researchers from all areas of particle physics laid out a vision for the future. The Snowmass Process produced a particle physics ‘wish list’ consisting of all the projects and research particle physicists would be excited to work on. The P5 process is the hard part, when this incredibly exciting and diverse research program has to be made to fit within realistic budget scenarios. Advocates for different projects and research areas had to make a case for the science their project could achieve and give a detailed estimate of its costs. The panel then takes in all this input and makes a set of recommendations for how the budget should be allocated: which projects should be realized and which hopes are dashed. Though the panel only produces recommendations, they are followed quite closely by the Department of Energy, which actually allocates funding. If your favorite project is not endorsed by the report, it’s very unlikely to be funded.

Particle physics is an incredibly diverse field, covering sub-atomic to cosmic scales, so the recommendations are divided into several different areas. In this post I’ll cover the panel’s recommendations for neutrino physics and the cosmic frontier. Future colliders, perhaps the spiciest topic, will be covered in a follow-up post.

The Future of Neutrino Physics

For those in the neutrino physics community, all eyes were on the panel’s recommendations regarding the Deep Underground Neutrino Experiment (DUNE). DUNE is the US’s flagship particle physics experiment for the coming decade and aims to be the definitive worldwide neutrino experiment in the years to come. A high-powered beam of neutrinos will be produced at Fermilab and sent 800 miles through the earth’s crust towards several large detectors placed in a mine in South Dakota. It’s a much bigger project than previous neutrino experiments, unifying essentially the entire US community into a single collaboration.

DUNE is set up to produce world-leading measurements of neutrino oscillations, the phenomenon by which neutrinos produced in one ‘flavor state’ (e.g. an electron neutrino) gradually change into another (e.g. a muon neutrino) with sinusoidal probability as they propagate through space. This oscillation is made possible by a simple piece of quantum mechanical weirdness: a neutrino’s flavor state, i.e. whether it couples to electrons, muons, or taus, is not the same as its mass state. Neutrinos of definite mass are therefore mixtures of the different flavors, and vice versa.
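To get a feel for that ‘sinusoidal probability’, here is a minimal Python sketch of the textbook two-flavor oscillation formula, P = sin²(2θ) sin²(1.27 Δm² L / E), with roughly DUNE-like numbers. The real analysis is three-flavor and includes matter effects and a CP-violating phase, so treat this as illustrative only:

    import numpy as np

    def oscillation_probability(theta, delta_m2_ev2, L_km, E_GeV):
        """Two-flavor appearance probability P(nu_a -> nu_b).
        theta: mixing angle (rad), delta_m2_ev2: mass splitting (eV^2),
        L_km: baseline (km), E_GeV: neutrino energy (GeV).
        The 1.27 absorbs hbar, c, and the unit conversions."""
        return np.sin(2 * theta) ** 2 * np.sin(1.27 * delta_m2_ev2 * L_km / E_GeV) ** 2

    # ~1300 km baseline (800 miles), ~2.5 GeV beam energy, and the
    # atmospheric splitting ~2.5e-3 eV^2; the mixing angle is illustrative.
    print(oscillation_probability(theta=0.15, delta_m2_ev2=2.5e-3, L_km=1300, E_GeV=2.5))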

Detailed measurements of this oscillation are the best way we know to determine several key neutrino properties. DUNE aims to finally pin down two crucial ones: the ‘mass ordering’, which will solidify how the different neutrino flavors and measured mass differences all fit together, and ‘CP violation’, which specifies whether neutrinos and their anti-matter counterparts behave the same or not. DUNE’s main competitor is the Hyper-Kamiokande experiment in Japan, another next-generation neutrino experiment with similar goals.

A depiction of the DUNE experiment. A high-intensity proton beam at Fermilab is used to create a concentrated beam of neutrinos, which is then sent through 800 miles of the Earth’s crust towards detectors placed deep underground in South Dakota. Source

Construction of the DUNE experiment has been ongoing for several years and unfortunately has not been going quite as well as hoped. It has faced significant schedule delays and cost overruns. DUNE is now not expected to start taking data until 2031, significantly behind Hyper-Kamiokande’s projected 2027 start. These delays may let Hyper-K make these definitive neutrino measurements years before DUNE, which would be a significant blow to the experiment’s impact. This left many DUNE collaborators worried about whether it still had broad support from the community.

It came as a relief, then, when the P5 report re-affirmed the strong science case for DUNE, calling it the “ultimate long baseline” neutrino experiment. The report strongly endorsed the completion of the first phase of DUNE. However, it recommended a pared-down version of its upgrade, advocating for an earlier beam upgrade in lieu of additional detectors. This re-imagined upgrade should still achieve the core physics goals of the original proposal at a significant cost savings. This report, together with news that the beleaguered underground cavern construction in South Dakota is now 90% complete, was certainly welcome holiday news for the neutrino community. It also sets up a decade-long race between DUNE and Hyper-K to be the first to measure these key neutrino properties.

Cosmic Implications

While we normally think of particle physics as focused on the behavior of sub-atomic particles, it’s really the study of fundamental forces and laws, no matter the method. This means that telescopes to study the oldest light in the universe, the Cosmic Microwave Background (CMB), fall into the same budget category as giant accelerators studying sub-atomic particles. Though the experiments in these two areas look very different, the questions they seek to answer are cross-cutting. Understanding how particles interact at very high energies helps us understand the earliest moments of the universe, when such particles were all interacting in a hot dense plasma. Likewise, studying these early moments of the universe and its large-scale evolution can tell us what kinds of particles and forces influence its dynamics. When asking fundamental questions about the universe, one needs both the sharpest microscopes and the grandest panoramas possible.

The most prominent example of this blending of the smallest and largest scales in particle physics is dark matter. Some of our best evidence for dark matter comes from analyzing the cosmic microwave background to determine how the primordial plasma behaved. These studies showed that some type of ‘cold’ matter that doesn’t interact with light, aka dark matter, was necessary to form the first clumps that eventually seeded the formation of galaxies. Without it, the universe would be much more soup-y and structureless than what we see today.

The “cosmic web” of galaxy clusters from the Millennium simulation. Measuring and understanding this web can tell us a lot about the fundamental constituents of the universe. Source

Determining what dark matter is therefore requires an attack on two fronts: designing experiments here on earth that attempt to directly detect it, and further studying its cosmic implications to look for more clues as to its properties.

The panel recommended next-generation telescopes to study the CMB as a top priority. The so-called ‘Stage 4’ CMB experiment, CMB-S4, would deploy telescopes at both the South Pole and in Chile’s Atacama Desert to better characterize sources of atmospheric noise. The CMB has been studied extensively before, but the increased precision of CMB-S4 could shed light on mysteries like dark energy, dark matter, inflation, and the recent Hubble tension. Given the past fruitfulness of these efforts, I think few doubted the science case for such a next-generation experiment.

A mockup of one of the CMB-S4 telescopes, which will be based in the Chilean desert. Note the person for scale on the right (source)

The P5 report recommended a suite of new dark matter experiments in the next decade, including the ‘ultimate’ liquid-xenon-based dark matter search. Such an experiment would follow in the footsteps of massive noble gas experiments like LZ and XENONnT, which have been hunting for a favored type of dark matter called WIMPs for the last few decades. These experiments essentially build giant vats of liquid xenon, carefully shielded from any sources of external radiation, and look for signs of dark matter particles bumping into the xenon atoms. The larger the vat of xenon, the higher the chance a dark matter particle will bump into something. Current-generation experiments hold ~7 tons of xenon, and the next-generation experiment would be even larger. It aims to reach the so-called ‘neutrino floor’, the point at which the experiment becomes sensitive enough to observe astrophysical neutrinos bumping into the xenon. Such neutrino interactions look extremely similar to those of dark matter, and thus represent an unavoidable background that signals the ultimate sensitivity of this type of experiment. WIMPs could still be hiding in a basement below this neutrino floor, but finding them would be exceedingly difficult.
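The ‘larger vat’ logic is simple linear scaling: the expected event rate grows with the number of target atoms. A trivial sketch (the ~60 ton figure below is hypothetical, chosen just to illustrate the scaling):

    # Number of xenon atoms in a target of a given mass; the expected
    # dark matter scattering rate scales linearly with this number.
    N_AVOGADRO = 6.022e23
    XE_MOLAR_MASS_G = 131.3   # grams per mole of xenon

    def n_xenon_atoms(mass_tonnes):
        return mass_tonnes * 1e6 * N_AVOGADRO / XE_MOLAR_MASS_G

    print(f"{n_xenon_atoms(7.0):.1e} atoms in a ~7 ton target")
    # A hypothetical ~60 ton next-generation target would hold ~8.6x the
    # atoms, and so ~8.6x the expected event rate per year of running.
    print(n_xenon_atoms(60.0) / n_xenon_atoms(7.0))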

A photo of the current XENONnT experiment. This pristine cavity is filled with liquid xenon and closely monitored for signs of dark matter particles bumping into one of the xenon atoms. Credit: XENON Collaboration

WIMPs are not the only dark matter candidates in town, and recent years have seen an explosion of interest in the broad range of dark matter possibilities, with axions a prominent example. Other kinds of dark matter could have very different properties than WIMPs, and far fewer dedicated experiments have searched for them. There is ‘low hanging fruit’ to pluck in the form of relatively cheap experiments that can achieve world-leading sensitivity. Previously, these ‘table top’-sized experiments had a notoriously difficult time obtaining funding, as they were often crowded out of the budgets by the massive flagship projects. However, small experiments can be crucial to ensuring our best chance of dark matter discovery, as they fill in the blind spots missed by the big projects.

The panel therefore recommended creating a new pool of funding set aside for these smaller-scale projects. Allowing smaller-scale projects to flourish is important for the vibrancy and scientific diversity of the field, as the centralization of ‘big science’ projects can sometimes lead to unhealthy side effects. This specific recommendation also mirrors a broader trend in the report: an attempt to rebalance the budget portfolio so it is spread more evenly and less dominated by the large projects.

A pie chart comparing the budget portfolio in 2023 (left) versus the projected budget in 2033 (right). Currently most of the budget is taken up by the accelerator upgrades and cavern construction for DUNE, with some amount for the LHC upgrades. By 2033 the panel recommends a much more equitable balance between the different research areas.

What Didn’t Make It

Any report like this comes with some tough choices. Budget realities mean not all projects can be funded. Besides the paring down of some of DUNE’s upgrades, one of the biggest areas recommended against was ‘accessory experiments at the LHC’. In particular, MATHUSLA and the Forward Physics Facility were two proposals to build additional detectors near existing LHC collision points to look for particles that may be missed by the current experiments. By placing new detectors hundreds of meters away from the collision point, shielded by concrete and the earth, they can obtain unique sensitivity to ‘long-lived’ particles capable of traversing such distances. These experiments would follow in the footsteps of the current FASER experiment, which is already producing impressive results.

While FASER found success as a relatively ‘cheap’ experiment, reusing spare detector components from other experiments and situating itself in an existing tunnel, these new proposals were asking for quite a bit more. The scale of these detectors would have required new caverns to be built, significantly increasing the cost. Given the cost and specialized purpose of these detectors, the panel recommended against their construction. These collaborations may now try to pare down their proposals so they can apply to the new small-project portfolio.

Another major decision by the panel was to recommend against hosting a new Higgs factory collider in the US. But that will be discussed more in a future post.

Conclusions

The P5 panel was faced with a difficult task: the total cost of all the projects they were presented with was three times the budget. But they were able to craft a plan that continues the work of the previous decade, addresses current shortcomings, and lays out an inspiring vision for the future. So far the community seems to be strongly rallying behind it. At the time of writing, over 2700 community members, from undergraduates to senior researchers, have signed a petition endorsing the panel’s recommendations. This strong show of support will be key to turning these recommendations into actual funding, and hopefully to lobbying Congress to increase funding so that even more of this vision can be realized.

For those interested, the full report as well as executive summaries of the different areas can be found on the P5 website. Members of the US particle physics community are also encouraged to sign the petition endorsing the recommendations here.

And stay tuned for part 2 of our coverage, which will discuss the implications of the report for future colliders!

Hullabaloo Over The Hubble Constant

Title: The Expansion of the Universe is Faster than Expected

Author: Adam Riess

Reference: Nature, arXiv

There is a current crisis in the field of cosmology, and it may lead to our next breakthrough in understanding the universe. In the late 1990s, measurements of distant supernovae showed that, contrary to expectations at the time, the universe’s expansion was accelerating rather than slowing down. This implied the existence of a mysterious “dark energy” throughout the universe propelling the accelerated expansion. Today, some people once again think that our measurements of the current expansion rate, the Hubble constant, indicate that there is something about the universe we don’t understand.

The current cosmological standard model, called ΛCDM, is a phenomenological model describing all the contents of the universe. It includes regular visible matter, Cold Dark Matter (CDM), and dark energy. It is an extremely bare-bones model, assuming dark matter interacts only gravitationally and that dark energy is just a simple cosmological constant (Λ) that gives a constant energy density to space itself. For the last 20 years this model has been rigorously tested, but new measurements may be beginning to show that it has some holes. Measurements of the early universe, interpreted with ΛCDM and extrapolated to today, predict a different rate of expansion than what is currently being measured, and cosmologists are taking this war over the Hubble constant very seriously.
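The ‘extrapolated to today’ step is just the Friedmann equation: once the data fix the model parameters, the same model predicts the expansion rate at any epoch. A minimal sketch with illustrative, roughly Planck-like values:

    import numpy as np

    def hubble_rate(z, H0=67.4, omega_m=0.315):
        """Expansion rate H(z) in km/s/Mpc for a flat LambdaCDM model,
        keeping only matter and a cosmological constant (radiation is
        negligible at late times). Parameter values are illustrative."""
        return H0 * np.sqrt(omega_m * (1 + z) ** 3 + (1 - omega_m))

    # The CMB constrains the model at z ~ 1100; 'today' is z = 0.
    print(hubble_rate(0))      # the predicted Hubble constant
    print(hubble_rate(1100))   # the much larger early-universe rate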

The Measurements

On one side of this Hubble controversy are measurements from the early universe. The most important of these is based on the Cosmic Microwave Background (CMB), light from the hot plasma of the Big Bang that has been traveling for billions of years directly to our telescopes. This light from the early universe is nearly uniform in temperature, but by analyzing the pattern of slightly hotter and colder spots, cosmologists can extract the six free parameters of ΛCDM. These parameters encode the relative amount of energy contained in regular matter, dark matter, and dark energy. Based on these parameters, they can then infer what the current expansion rate of the universe should be. The current best measurements of the CMB come from the Planck collaboration, which can infer the Hubble constant with a precision of less than 1%.

The Cosmic Microwave Background (CMB). Blue spots are slightly colder than average and red spots are slightly hotter. By fitting a model to this data, one can determine the energy contents of the early universe.

On the other side of the debate are the late-universe (or local) measurements of the expansion. The most famous of these is based on a ‘distance ladder’, in which several stages of measurements are used to calibrate the distances of astronomical objects. First, geometric properties are used to calibrate the brightness of pulsating stars (Cepheids). Cepheids are then used to calibrate the absolute brightness of exploding supernovae. The expansion rate of the universe can then be measured by relating the redshift (the amount the light from these objects has been stretched by the universe’s expansion) to the distance of these supernovae. This is the method that was used to discover dark energy in the 1990s and earned its pioneers a Nobel prize. As more data has been collected and techniques have been refined, the measurement’s precision has improved dramatically.
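At the top of the ladder the measurement reduces to Hubble’s law, v ≈ H0 d. A toy sketch with invented calibrated supernovae (the real analysis fits the full ladder with covariances):

    import numpy as np

    # Hypothetical calibrated supernovae: Cepheid-anchored distances (Mpc)
    # and recession velocities from redshift (km/s). Numbers are invented.
    d = np.array([50.0, 120.0, 200.0, 310.0, 450.0])
    v = np.array([3600.0, 8800.0, 14500.0, 22700.0, 33000.0])

    # v = H0 * d is a one-parameter fit through the origin, whose
    # least-squares solution is sum(v*d) / sum(d*d).
    H0 = np.sum(v * d) / np.sum(d ** 2)
    print(f"H0 ~ {H0:.1f} km/s/Mpc")   # ~73 for these toy numbers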

In the last few years the tension between the two values of the Hubble constant has steadily grown. This has led cosmologists to scrutinize both sets of measurements very closely, but so far no flaws have been found. Both of these measurements are incredibly complex, and many cosmologists still assumed that some unknown systematic error in one of them was the culprit. But recently, other measurements of both the early and late universe have started to weigh in, and they seem to agree with the Planck and distance ladder results. Currently the tension between the early and late measurements of the Hubble constant sits between 4 and 6 sigma, depending on which set of measurements you combine. While there are still many who believe there is something wrong with the measurements, others have started to take seriously the idea that this is pointing to a real issue with ΛCDM, and that there is something in the universe we don’t understand. In other words, New Physics!
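The ‘4 to 6 sigma’ figure comes from comparing two roughly Gaussian measurements: the tension is the difference divided by the combined uncertainty. With representative published values (illustrative; the exact numbers depend on which analyses you combine):

    import numpy as np

    H0_early, sig_early = 67.4, 0.5   # Planck-like early-universe value
    H0_late, sig_late = 73.2, 1.3     # distance-ladder-like local value

    # Tension in 'sigmas' for two independent Gaussian measurements.
    tension = abs(H0_late - H0_early) / np.hypot(sig_early, sig_late)
    print(f"tension ~ {tension:.1f} sigma")   # ~4 sigma for these inputs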

A comparison of the early-universe and late-universe measurements of the Hubble constant. Different combinations of measurements are shown for each. The tension is between 4 and 6 sigma, depending on which set of measurements you combine.

The Models

So what ideas have theorists put forward to explain the disagreement? In general, theorists have had a hard time coming up with models that explain the disagreement without running afoul of the multitude of other cosmological data we have, but some solutions have been found. Two of the most promising approaches involve changing the composition of the universe just before the time the CMB was emitted.

The first of these is called Early Dark Energy. It is a phenomenological model that posits the existence of another type of dark energy, one that behaves similarly to a cosmological constant early in the universe but then fades away relatively quickly as the universe expands. This model is able to slightly improve Planck’s fit to the CMB data while changing the contents of the early universe enough to alter the predicted Hubble constant to be consistent with the local value. Critics of the model feel that its parameters had to be finely tuned for the solution to work. However, there has been some work on mimicking its success with a particle-physics-based model.

The other notable attempt at resolving the tension involves adding additional types of neutrinos and positing that neutrinos interact with each other much more strongly than in the Standard Model. This similarly changes the interpretation of the CMB measurements to predict a larger expansion rate. The authors also posit that this new physics in the neutrino sector may be related to current anomalies seen in neutrino physics experiments that are also lacking an explanation. However, follow-up work has shown that it is hard to reconcile such strongly self-interacting neutrinos with laboratory experiments and other cosmological probes.

The Future

At present the situation remains very unclear. Some cosmologists believe this is the end of ΛCDM, and others still believe there is an issue with one of the measurements. Among those who believe new physics is the solution, there is no consensus about the best model. However, the next few years should start to clarify things. Other late-universe measurements of the Hubble constant, using gravitational lensing or even gravitational waves, should continue to improve in precision and could give skeptics greater confidence in the distance ladder result. Next-generation CMB experiments will eventually come online as well, and will offer greater precision than the Planck measurement. Theorists will probably come up with more possible resolutions, and point out additional measurements that can confirm or refute their models. For those hoping for a breakthrough in our understanding of the universe, this is definitely something to keep an eye on!

Read More

Quanta Magazine Article on the controversy 

Astrobites Article on Hubble Tension

Astrobites Article on using gravitational lensing to measure the Hubble Constant

The Hubble Hunters Guide

Dragonfly 44: A potential Dark Matter Galaxy

Title: A High Stellar Velocity Dispersion and ~100 Globular Clusters for the Ultra Diffuse Galaxy Dragonfly 44

Publication: ApJ, v828, Number 1; arXiv:1606.06291

The title of this paper sounds like a standard astrophysics analysis; but dig a little deeper and you’ll find what I think is an incredibly interesting, surprising, and unexpected observation.

The Coma Cluster: NASA, ESA, and the Hubble Heritage Team (STScI/AURA)

Last year, using the W. M. Keck Observatory and the Gemini North Telescope on Maunakea, Hawaii, the team behind the Dragonfly Telephoto Array observed the Coma cluster (a large cluster of galaxies in the constellation Coma; I’ve included a Hubble image to the left). The team identified a population of large, very low surface brightness (i.e. not a lot of stars), spheroidal galaxies around an Ultra Diffuse Galaxy (UDG) called Dragonfly 44 (shown below). They determined that Dragonfly 44 has so few stars that their gravity alone could not hold it together, so some other matter had to be involved: namely DARK MATTER (my favorite kind of unknown matter).

The ultra-diffuse galaxy Dragonfly 44. The galaxy consists almost entirely of dark matter. It is surrounded by faint, compact sources. Image credit: Pieter van Dokkum / Roberto Abraham / Gemini Observatory / SDSS / AURA.

The team used the DEIMOS instrument installed on Keck II to measure the velocities of stars for 33.5 hours over a period of six nights so they could determine the galaxy’s mass. Observations of the motions of Dragonfly 44’s stars suggest that it has a mass of about one trillion solar masses, about the same as the Milky Way. However, the galaxy emits only 1% of the light emitted by the Milky Way. In other words, the Milky Way has more than a hundred times more stars than Dragonfly 44. I’ve also included the plot of mass-to-light ratio vs. dynamical mass, which illustrates how unique Dragonfly 44 is compared to other dark-matter-dominated galaxies like dwarf spheroidals.
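The logic behind the mass estimate is a back-of-the-envelope dynamical argument: stars moving with velocity dispersion σ inside radius R require an enclosed mass of order σ²R/G. A sketch with numbers roughly like those reported for Dragonfly 44 (the order-one prefactor of the real mass estimator is dropped):

    # Back-of-the-envelope enclosed mass from stellar motions, M ~ sigma^2 * R / G.
    G = 4.3e-6        # gravitational constant, kpc (km/s)^2 / M_sun
    sigma = 47.0      # stellar velocity dispersion, km/s (roughly as reported)
    R = 4.0           # half-light radius, kpc (roughly as reported)

    M_enclosed = sigma ** 2 * R / G
    print(f"M ~ {M_enclosed:.1e} M_sun")  # ~2e9 M_sun within R; the quoted
    # ~1e12 M_sun total comes from extrapolating the halo far beyond the stars.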

Relation between dynamical mass-to-light ratio and dynamical mass. Open symbols are dispersion-dominated objects from Zaritsky, Gonzalez, & Zabludoff (2006) and Wolf et al. (2010). The UDGs VCC 1287 (Beasley et al. 2016) and Dragonfly 44 fall outside of the band defined by the other galaxies, having a very high M/L ratio for their mass.

What is particularly exciting is that we don’t understand how galaxies like this form.

Their research indicates that these UDGs could be failed galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. But we’ll need to discover more to fully understand them.

Further reading (works by the same authors)
Forty-Seven Milky Way-Sized, Extremely Diffuse Galaxies in the Coma Cluster, arXiv:1410.8141
Spectroscopic Confirmation of the Existence of Large, Diffuse Galaxies in the Coma Cluster, arXiv:1504.03320

Searching for Magnetic Monopoles with MoEDAL

Article: Search for magnetic monopoles with the MoEDAL prototype trapping detector in 8 TeV proton-proton collisions at the LHC
Authors: The MoEDAL Collaboration
Reference: arXiv:1604.06645 [hep-ex]

Somewhere in a tiny corner of the massive LHC cavern, nestled next to the veteran LHCb detector, a new experiment is coming to life.

The Monopole & Exotics Detector at the LHC, nicknamed the MoEDAL experiment, recently published its first ever results on the search for magnetic monopoles and other highly ionizing new particles. The data collected for this result is from the 2012 run of the LHC, when the MoEDAL detector was still a prototype. But it’s still enough to achieve the best limit to date on the magnetic monopole mass.

Figure 1: Breaking a magnet.

Magnetic monopoles are a very appealing idea. From basic electromagnetism, we expect Maxwell’s equations to be unchanged when electric and magnetic fields are swapped under duality, a symmetry that would be exact if magnetic charges existed. Furthermore, Dirac showed that a magnetic monopole is not inconsistent with quantum electrodynamics (although one does not appear naturally). The only problem is that in the history of scientific experimentation, we’ve never actually seen one. We know that if we break a magnet in half, we get two new magnets, each with its own north and south pole (see Figure 1).
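Dirac’s argument also fixes how big the smallest magnetic charge must be: consistency of quantum mechanics requires e·g = n·ħc/2, so the minimum magnetic charge is g_D = e/(2α) ≈ 68.5e. That enormous charge is why monopoles would be so heavily ionizing, as a quick sketch shows:

    # Dirac quantization: e*g = n*(hbar*c)/2, so the smallest magnetic
    # charge (the Dirac charge g_D) is e/(2*alpha) in units of e.
    alpha = 1 / 137.036                 # fine-structure constant
    g_D = 1 / (2 * alpha)
    print(f"g_D ~ {g_D:.1f} e")         # ~68.5 e: a monopole ionizes far more
    # strongly than any known charged particle, which MoEDAL exploits.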

This is proving to be a thorn in the side of many physicists. Finding a magnetic monopole would be great from a theoretical standpoint. Many Grand Unified Theories predict monopoles as a natural byproduct of symmetry breaking in the early universe. In fact, these theories predict monopoles so confidently that their absence is known as the “monopole problem”, one that cosmological inflation was in part devised to solve. There have been occasional blips of evidence for monopoles in the past (such as a single event in a detector), but nothing has been reproducible to date.

Enter MoEDAL (Figure 2). It is the seventh addition to the LHC family, having been approved in 2010. If the monopole is a fundamental particle, it will be produced in proton-proton collisions. It is also expected to be very massive and long-lived. MoEDAL is designed to search for such a particle with a three-subdetector system.

Figure 2: The MoEDAL detector.

The Nuclear Track Detector is composed of plastics that are damaged when a charged particle passes through them. The size and shape of the damage can then be observed with an optical microscope. Next is the TimePix Radiation Monitor system, a pixel detector which absorbs charge deposits induced by ionizing radiation. The newest addition is the Trapping Detector system, which is simply a large aluminum volume that will trap a monopole with its large nuclear magnetic moment.

The collaboration collected data using these distinct technologies in 2012 and studied the resulting materials and signals. The ultimate limit in the paper excludes spin-0 and spin-1/2 monopoles with masses between 100 GeV and 3500 GeV and a magnetic charge > 0.5gD (the Dirac magnetic charge). See Figures 3 and 4 for the exclusion curves. It’s worth noting that this mass range extends well beyond the mass of any known fundamental particle, so this is a pretty stringent result.

Figure 3: Cross-section upper limits at 95% confidence level for DY spin-1/2 monopole production as a function of mass, with different charge models.

Figure 4: Cross-section upper limits at 95% confidence level for DY spin-1/2 monopole production as a function of charge, with different mass models.

As for moving forward: we’ve only talked about monopoles here, but the physics programme for MoEDAL is vast. Since the detector technology is fairly broad-based, it is possible to find anything from SUSY to Universal Extra Dimensions to doubly charged particles. Furthermore, this paper only uses LHC data from September to December of 2012, which is not a whole lot. In fact, over 25 times that much data has been collected in this year’s run alone (although the detector was not in use this year). More data means better statistics and more extensive limits, so this is definitely a measurement that will be greatly improved in future runs. A new version of the detector was installed in 2015, and we can expect to see new results within the next few years.


Further Reading:

  1. CERN press release
  2. The MoEDAL collaboration website
  3. “The Physics Programme of the MoEDAL experiment at the LHC”, arXiv:1405.7662 [hep-ph]
  4. “Introduction to Magnetic Monopoles”, arXiv:1204.3077 [hep-th]
  5. Condensed matter physics has recently made strides in the study of a different sort of monopole; see “Observation of Magnetic Monopoles in Spin Ice”, arXiv:0908.3568 [cond-mat.dis-nn]


The CMB sheds light on galaxy clusters: Observing the kSZ signal with ACT and BOSS

Article: Detection of the pairwise kinematic Sunyaev-Zel’dovich effect with BOSS DR11 and the Atacama Cosmology Telescope
Authors: F. De Bernardis, S. Aiola, E. M. Vavagiakis, M. D. Niemack, N. Battaglia, and the ACT Collaboration
Reference: arXiv:1607.02139

Editor’s note: this post is written by one of the students involved in the published result.

Just as X-rays shining through your body can inform you about your health, the cosmic microwave background (CMB) shining through galaxy clusters can tell us about the universe we live in. When light from the CMB is distorted by the high-energy electrons present in galaxy clusters, it’s called the Sunyaev-Zel’dovich effect. A new 4.1σ measurement of the kinematic Sunyaev-Zel’dovich (kSZ) signal has been made using the most recent Atacama Cosmology Telescope (ACT) CMB maps and galaxy data from the Baryon Oscillation Spectroscopic Survey (BOSS). With steps forward like this one, the kinematic Sunyaev-Zel’dovich signal could become a probe of cosmology, astrophysics, and particle physics alike.

The Kinematic Sunyaev-Zel’dovich Effect

It rolls right off the tongue, but what exactly is the kinematic Sunyaev-Zel’dovich signal? Galaxy clusters distort the cosmic microwave background before it reaches Earth, so we can learn about these clusters by looking at these CMB distortions. In our X-ray metaphor, the map of the CMB is the image of the X-ray of your arm, and the galaxy clusters are the bones. Galaxy clusters are the largest gravitationally bound structures we can observe, so they serve as important tools to learn more about our universe. In its essence, the Sunyaev-Zel’dovich effect is inverse-Compton scattering of cosmic microwave background photons off of the gas in these galaxy clusters, whereby the photons gain a “kick” in energy by interacting with the high energy electrons present in the clusters.

The Sunyaev-Zel’dovich effect can be divided up into two categories: thermal and kinematic. The thermal Sunyaev-Zel’dovich (tSZ) effect is the spectral distortion of the cosmic microwave background in a characteristic manner due to the photons gaining, on average, energy from the hot (~10⁷–10⁸ K) gas of the galaxy clusters. The kinematic (or kinetic) Sunyaev-Zel’dovich (kSZ) effect is a second-order effect—about a factor of 10 smaller than the tSZ effect—that is caused by the motion of galaxy clusters with respect to the cosmic microwave background rest frame. If the CMB photons pass through galaxy clusters that are moving, they are Doppler shifted due to the cluster’s peculiar velocity (the velocity that cannot be explained by Hubble’s law, which states that objects recede from us at a speed proportional to their distance). The kinematic Sunyaev-Zel’dovich effect is the only known way to directly measure the peculiar velocities of objects at cosmological distances, and is thus a valuable source of information for cosmology. It allows us to probe megaparsec and gigaparsec scales – that’s around 30,000 times the diameter of the Milky Way!
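An order-of-magnitude sketch of why the kSZ signal is so small: the fractional temperature shift is ΔT/T ~ (v_los/c)·τ, where τ is the cluster’s optical depth to Thomson scattering (the values below are illustrative):

    # Size of the kSZ temperature shift, dT ~ T_CMB * (v_los / c) * tau.
    T_CMB_uK = 2.725e6   # CMB temperature in micro-Kelvin
    c_kms = 3.0e5        # speed of light, km/s
    v_los = 300.0        # line-of-sight peculiar velocity, km/s
    tau = 3e-3           # typical cluster optical depth (illustrative)

    dT = T_CMB_uK * (v_los / c_kms) * tau
    print(f"|dT| ~ {dT:.0f} uK")  # a few micro-Kelvin, buried under the
    # ~100 uK primary CMB fluctuations; hence the pairwise statistic below.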

A schematic of the Sunyaev-Zel’dovich effect resulting in higher energy (or blue shifted) photons of the cosmic microwave background (CMB) when viewed through the hot gas present in galaxy clusters. Source: UChicago Astronomy.


Measuring the kSZ Effect

To make the measurement of the kinematic Sunyaev-Zel’dovich signal, the Atacama Cosmology Telescope (ACT) collaboration used a combination of cosmic microwave background maps from two years of observations by ACT. The CMB map used for the analysis overlapped with ~68000 galaxy sources from the Large Scale Structure (LSS) DR11 catalog of the Baryon Oscillation Spectroscopic Survey (BOSS). The catalog lists the coordinate positions of galaxies along with some of their properties. The most luminous of these galaxies were assumed to be located at the centers of galaxy clusters, so temperature signals from the CMB map were taken at the coordinates of these galaxy sources in order to extract the Sunyaev-Zel’dovich signal.

While the smallness of the kSZ signal with respect to the tSZ signal and the noise level in current CMB maps poses an analysis challenge, there exist several approaches to extracting the kSZ signal. To make their measurement, the ACT collaboration employed a pairwise statistic. “Pairwise” refers to the momentum between pairs of galaxy clusters, and “statistic” indicates that a large sample is used to rule out the influence of unwanted effects.

Here’s the approach: nearby galaxy clusters move towards each other on average, due to gravity. We can’t easily measure the three-dimensional momentum of clusters, but the average pairwise momentum can be estimated by using the line of sight component of the momentum, along with other information such as redshift and angular separations between clusters. The line of sight momentum is directly proportional to the measured kSZ signal: the microwave temperature fluctuation which is measured from the CMB map. We want to know if we’re measuring the kSZ signal when we look in the direction of galaxy clusters in the CMB map. Using the observed CMB temperature to find the line of sight momenta of galaxy clusters, we can estimate the mean pairwise momentum as a function of cluster separation distance, and check to see if we find that nearby galaxies are indeed falling towards each other. If so, we know that we’re observing the kSZ effect in action in the CMB map.
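Here is a simplified sketch of that pairwise estimator, after the form used in this literature: each pair is weighted by a geometric factor c_ij that projects the pair’s mutual approach onto the lines of sight, and the weighted temperature differences are averaged in bins of separation. The published analysis adds redshift-dependent corrections and covariance estimation, so this is illustrative only:

    import numpy as np

    def pairwise_estimator(pos, dT, r_max=300.0, n_bins=10):
        """Simplified pairwise kSZ estimator.
        pos: (N, 3) comoving cluster positions in Mpc (observer at origin);
        dT: (N,) CMB temperature fluctuations at the cluster positions.
        Returns bin centers and p_hat(r) = -sum dT_ij c_ij / sum c_ij^2."""
        edges = np.linspace(0.0, r_max, n_bins + 1)
        num, den = np.zeros(n_bins), np.zeros(n_bins)
        rhat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
        for i in range(len(dT)):
            for j in range(i + 1, len(dT)):
                sep = pos[i] - pos[j]
                r = np.linalg.norm(sep)
                if r == 0.0 or r >= r_max:
                    continue
                # Geometric weight: projects the pair separation direction
                # onto the mean line of sight of the two clusters.
                c = np.dot(sep / r, 0.5 * (rhat[i] + rhat[j]))
                k = int(r / r_max * n_bins)
                num[k] += (dT[i] - dT[j]) * c
                den[k] += c ** 2
        centers = 0.5 * (edges[1:] + edges[:-1])
        return centers, -num / np.where(den > 0.0, den, np.inf)

With this sign convention, mutually approaching pairs yield a nonzero mean signal at small separations, which is exactly what the analysis looks for.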

For the measurement quoted in their paper, the ACT collaboration finds the average pairwise momentum as a function of galaxy cluster separation, and explores a variety of error determinations and sources of systematic error. The most conservative errors based on simulations give signal-to-noise estimates that vary between 3.6 and 4.1.

The mean pairwise momentum estimator and best fit model for a selection of 20000 objects from the DR11 Large Scale Structure catalog, plotted as a function of comoving separation. The dashed line is the linear model, and the solid line is the model prediction including nonlinear redshift space corrections. The best fit provides 4.1σ evidence of the kSZ signal in the ACTPol-ACT CMB map. Source: arXiv:1607.02139.

The ACT and BOSS results are an improvement on the 2012 ACT detection, and are comparable with results from the South Pole Telescope (SPT) collaboration that use galaxies from the Dark Energy Survey. The ACT and BOSS measurement represents a step forward towards improved extraction of kSZ signals from CMB maps. Future surveys such as Advanced ACTPol, SPT-3G, the Simons Observatory, and next-generation CMB experiments will be able to apply the methods discussed here to improved CMB maps in order to achieve strong detections of the kSZ effect. With new data that will enable better measurements of galaxy cluster peculiar velocities, the pairwise kSZ signal will become a powerful probe of our universe in the years to come.

Implications and Future Experiments

One interesting consequence for particle physics will be more stringent constraints on the sum of the neutrino masses from the pairwise kinematic Sunyaev-Zel’dovich effect. Upper bounds on the neutrino mass sum from cosmological measurements of large scale structure and the CMB have the potential to determine the neutrino mass hierarchy, one of the next major unknowns of the Standard Model to be resolved, if the mass hierarchy is indeed a “normal hierarchy” with ν3 being the heaviest mass state. If the upper bound of the neutrino mass sum is measured to be less than 0.1 eV, the inverted hierarchy scenario would be ruled out, due to there being a lower limit on the mass sum of ~0.095 eV for an inverted hierarchy and ~0.056 eV for a normal hierarchy.
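The quoted lower limits follow directly from the measured oscillation splittings, by setting the lightest neutrino mass to zero. A quick check with approximate splitting values (Δm²₂₁ ~ 7.5e-5 eV², |Δm²₃₁| ~ 2.5e-3 eV²):

    import numpy as np

    dm21_sq, dm31_sq = 7.5e-5, 2.5e-3   # approximate splittings, eV^2

    # Normal hierarchy: m1 = 0, m2 = sqrt(dm21^2), m3 = sqrt(dm31^2).
    sum_normal = np.sqrt(dm21_sq) + np.sqrt(dm31_sq)
    # Inverted hierarchy: m3 = 0, m1 and m2 both near sqrt(|dm31^2|).
    sum_inverted = np.sqrt(dm31_sq) + np.sqrt(dm31_sq + dm21_sq)

    print(f"normal:   sum > ~{sum_normal:.3f} eV")    # ~0.06 eV
    print(f"inverted: sum > ~{sum_inverted:.3f} eV")  # ~0.10 eV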

Forecasts for kSZ measurements in combination with input from Planck predict possible constraints on the neutrino mass sum with a precision of 0.29 eV, 0.22 eV and 0.096 eV for Stage II (ACTPol + BOSS), Stage III (Advanced ACTPol + BOSS) and Stage IV (next generation CMB experiment + DESI) surveys respectively, with the possibility of much improved constraints with optimal conditions. As cosmic microwave background maps are improved and Sunyaev-Zel’dovich analysis methods are developed, we have a lot to look forward to.


Background reading: