The P5 Report & The Future of Particle Physics (Part 1)

Particle physics is the epitome of ‘big science’. Answering our most fundamental questions about physics requires world-class experiments that push the limits of what’s technologically possible. Such incredibly sophisticated experiments, like those at the LHC, require big facilities to make them possible, big collaborations to run them, big project planning to make dreams of new facilities a reality, and committees with big acronyms to decide what to build.

Enter the Particle Physics Project Prioritization Panel (aka P5), which is tasked with assessing the landscape of future projects and laying out a roadmap for the future of the field in the US. And because these large projects are inevitably international endeavors, the report they released last week has a large impact on the global direction of the field. The report lays out a vision for the next decade of neutrino physics, cosmology, dark matter searches and future colliders.

P5 follows the community-wide brainstorming effort known as the Snowmass Process, in which researchers from all areas of particle physics laid out a vision for the future. The Snowmass process led to a particle physics ‘wish list’ consisting of all the projects and research particle physicists would be excited to work on. The P5 process is the hard part, when this incredibly exciting and diverse research program has to be made to fit within realistic budget scenarios. Advocates for different projects and research areas had to make a case for the science their project could achieve and provide a detailed estimate of the costs. The panel then takes in all this input and makes a set of recommendations for how the budget should be allocated: which projects should be realized and which hopes are dashed. Though the panel only produces a set of recommendations, they are used quite extensively by the Department of Energy, which actually allocates funding. If your favorite project is not endorsed by the report, it’s very unlikely to be funded.

Particle physics is an incredibly diverse field, covering sub-atomic to cosmic scales, so recommendations are divided up into several different areas. In this post I’ll cover the panel’s recommendations for neutrino physics and the cosmic frontier. Future colliders, perhaps the spiciest topic, will be covered in a follow up post.

The Future of Neutrino Physics

For those in the neutrino physics community, all eyes were on the panel’s recommendations regarding the Deep Underground Neutrino Experiment (DUNE). DUNE is the US’s flagship particle physics experiment for the coming decade and aims to be the definitive worldwide neutrino experiment in the years to come. A high-powered beam of neutrinos will be produced at Fermilab and sent 800 miles through the earth’s crust towards several large detectors placed in a mine in South Dakota. It’s a much bigger project than previous neutrino experiments, unifying essentially the entire US community into a single collaboration.

DUNE is set up to produce world-leading measurements of neutrino oscillations, the phenomenon by which neutrinos produced in one ‘flavor state’ (e.g. an electron neutrino) gradually change their state with sinusoidal probability (e.g. into a muon neutrino) as they propagate through space. This oscillation is made possible by a simple piece of quantum mechanical weirdness: a neutrino’s flavor state, whether it couples to electrons, muons, or taus, is not the same as its mass state. Neutrinos of a definite mass are therefore a mixture of the different flavors and vice versa.
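
To make that ‘sinusoidal probability’ concrete, here is a minimal two-flavor sketch in Python. It uses the standard textbook approximation P = sin^2(2θ) · sin^2(1.27 Δm² L / E), with Δm² in eV², L in km, and E in GeV. The mixing parameters plugged in below are illustrative stand-ins rather than DUNE’s actual design values; the ~1300 km baseline is just the article’s “800 miles” converted to kilometers.

    import numpy as np

    def osc_prob(L_km, E_GeV, sin2_2theta=0.95, dm2_eV2=2.5e-3):
        """Two-flavor oscillation probability: sin^2(2*theta) * sin^2(1.27 * dm2 * L / E).

        L in km, E in GeV, dm2 in eV^2. Parameter defaults are illustrative only.
        """
        return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # Scan a DUNE-like baseline of ~1300 km (roughly 800 miles) over neutrino energy.
    for E in np.linspace(0.5, 5.0, 10):
        print(f"E = {E:4.2f} GeV  ->  P(flavor change) = {osc_prob(1300.0, E):.3f}")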

Detailed measurements of this oscillation are the best way we know to determine several key neutrino properties. DUNE aims to finally pin down two crucial ones: the ‘mass ordering’, which will solidify how the different neutrino flavors and measured mass differences all fit together, and the amount of ‘CP violation’, which specifies whether neutrinos and their anti-matter counterparts behave the same or not. DUNE’s main competitor is the Hyper-Kamiokande experiment in Japan, another next-generation neutrino experiment with similar goals.

A depiction of the DUNE experiment. A high-intensity proton beam at Fermilab is used to create a concentrated beam of neutrinos, which are then sent through 800 miles of the Earth’s crust towards detectors placed deep underground in South Dakota. Source

Construction of the DUNE experiment has been ongoing for several years and unfortunately has not gone quite as well as hoped. It has faced significant schedule delays and cost overruns. DUNE is now not expected to start taking data until 2031, significantly behind Hyper-Kamiokande’s projected 2027 start. These delays may lead to Hyper-K making these definitive neutrino measurements years before DUNE, which would be a significant blow to the experiment’s impact. This left many DUNE collaborators worried about whether the experiment still had broad support from the community.

It came as a relief, then, when the P5 report re-affirmed the strong science case for DUNE, calling it the “ultimate long baseline” neutrino experiment. The report strongly endorsed the completion of the first phase of DUNE. However, it recommended a pared-down version of its upgrade, advocating for an earlier beam upgrade in lieu of additional detectors. This re-imagined upgrade should still achieve the core physics goals of the original proposal at a significant cost savings. This report, along with news that the beleaguered underground cavern construction in South Dakota is now 90% complete, was certainly welcome holiday news for the neutrino community. It also sets up a decade-long race between DUNE and Hyper-K to be the first to measure these key neutrino properties.

Cosmic Implications

While we normally think of particle physics as focused on the behavior of sub-atomic particles, it’s really the study of fundamental forces and laws, no matter the method. This means that telescopes to study the oldest light in the universe, the Cosmic Microwave Background (CMB), fall into the same budget category as giant accelerators studying sub-atomic particles. Though the experiments in these two areas look very different, the questions they seek to answer are cross-cutting. Understanding how particles interact at very high energies helps us understand the earliest moments of the universe, when such particles were all interacting in a hot dense plasma. Likewise, studying these early moments of the universe and its large-scale evolution can tell us what kinds of particles and forces are influencing its dynamics. When asking fundamental questions about the universe, one needs both the sharpest microscopes and the grandest panoramas possible.

The most prominent example of this blending of the smallest and largest scales in particle physics is dark matter. Some of our best evidence for dark matter comes from analyzing the cosmic microwave background to determine how the primordial plasma behaved. These studies showed that some type of ‘cold’ matter that doesn’t interact with light, aka dark matter, was necessary to form the first clumps that eventually seeded the formation of galaxies. Without it, the universe would be much more soup-y and structureless than what we see today.

The “cosmic web” of galaxy clusters from the Millennium simulation. Measuring and understanding this web can tell us a lot about the fundamental constituents of the universe. Source

Determining what dark matter is therefore requires an attack from two fronts: designing experiments here on earth that attempt to directly detect it, and further studying its cosmic implications to look for more clues as to its properties.

The panel recommended next-generation telescopes to study the CMB as a top priority. The so-called ‘Stage 4’ CMB experiment (CMB-S4) would deploy telescopes at both the South Pole and Chile’s Atacama desert to better characterize sources of atmospheric noise. The CMB has been studied extensively before, but the increased precision of CMB-S4 could shed light on mysteries like dark energy, dark matter, inflation, and the recent Hubble tension. Given the past fruitfulness of these efforts, I think few doubted the science case for such a next-generation experiment.

A mockup of one of the CMB-S4 telescopes, which will be based in the Chilean desert. Note the person for scale on the right (source)

The P5 report recommended a suite of new dark matter experiments in the next decade, including the ‘ultimate’ liquid xenon based dark matter search. Such an experiment would follow in the footsteps of massive noble gas experiments like LZ and XENONnT, which have been hunting for a favored type of dark matter called WIMPs for the last few decades. These experiments essentially build giant vats of liquid xenon, carefully shielded from any sources of external radiation, and look for signs of dark matter particles bumping into any of the xenon atoms. The larger the vat of xenon, the higher the chance a dark matter particle will bump into something. Current-generation experiments hold ~7 tons of xenon, and the next-generation experiment would be even larger. It aims to reach the so-called ‘neutrino floor’, the point at which the experiment would be sensitive enough to observe astrophysical neutrinos bumping into the xenon. Such neutrino interactions look extremely similar to those of dark matter, and thus represent an unavoidable background which signals the ultimate sensitivity of this type of experiment. WIMPs could still be hiding in a basement below this neutrino floor, but finding them would be exceedingly difficult.
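
A back-of-the-envelope sketch of the scaling argument above: in the simplest picture, the expected number of events grows linearly with the exposure (target mass times running time), for signal and for the neutrino background alike. The rates below are invented placeholder numbers, not real LZ or XENONnT sensitivities; the snippet only illustrates why a bigger vat helps and why the neutrino floor eventually becomes the limiting background.

    # Toy exposure scaling for a liquid-xenon search.
    # All rates are hypothetical placeholders, not real experimental values.

    def expected_events(rate_per_tonne_year, tonnes, years):
        """Expected counts for a process whose rate scales with exposure (mass x time)."""
        return rate_per_tonne_year * tonnes * years

    signal_rate = 0.10    # hypothetical WIMP scatters per tonne per year
    nu_floor_rate = 0.05  # hypothetical astrophysical-neutrino scatters per tonne per year

    for tonnes in (7, 40, 80):  # ~current-generation scale vs. larger future detectors
        s = expected_events(signal_rate, tonnes, years=5)
        b = expected_events(nu_floor_rate, tonnes, years=5)
        print(f"{tonnes:3d} t x 5 yr: ~{s:.1f} signal-like events vs ~{b:.1f} neutrino-floor events")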

A photo of the current XENONnT experiment. This pristine cavity is then filled with liquid xenon and closely monitored for signs of dark matter particles bumping into one of the xenon atoms. Credit: XENON Collaboration

WIMPs are not the only dark matter candidates in town, and recent years have seen an explosion of interest in the broad range of dark matter possibilities, with axions being a prominent example. Other kinds of dark matter could have very different properties than WIMPs and have had far fewer dedicated experiments searching for them. There is ‘low hanging fruit’ to pluck in the form of relatively cheap experiments that can achieve world-leading sensitivity. Previously, these ‘table top’ sized experiments had a notoriously difficult time obtaining funding, as they were often crowded out of the budget by the massive flagship projects. However, small experiments can be crucial to ensuring our best chance of dark matter discovery, as they fill in the blind spots missed by the big projects.

The panel therefore recommended creating a new pool of funding set aside for these smaller-scale projects. Allowing these smaller projects to flourish is important for the vibrancy and scientific diversity of the field, as the centralization of ‘big science’ projects can sometimes lead to unhealthy side effects. This specific recommendation also mirrors a broader trend of the report: an attempt to rebalance the budget portfolio so that it is spread more evenly and less dominated by the large projects.

A pie chart comparing the budget portfolio in 2023 (left) versus the projected budget in 2033 (right). Currently most of the budget is taken up by the accelerator upgrades and cavern construction for DUNE, with some amount for the LHC upgrades. But by 2033 the panel recommends a much more equitable balance between the different research areas.

What Didn’t Make It

Any report like this comes with some tough choices; budget realities mean not all projects can be funded. Besides the paring down of some of DUNE’s upgrades, one of the biggest areas recommended against was ‘accessory experiments at the LHC’. In particular, MATHUSLA and the Forward Physics Facility were two proposals to build additional detectors near existing LHC collision points to look for particles that may be missed by the current experiments. By building new detectors hundreds of meters away from the collision point, shielded by concrete and the earth, they could obtain unique sensitivity to ‘long-lived’ particles capable of traversing such distances. These experiments would follow in the footsteps of the current FASER experiment, which is already producing impressive results.

While FASER found success as a relatively ‘cheap’ experiment, reusing spare detector components from other experiments and situating itself in an existing tunnel, these new proposals were asking for quite a bit more. The scale of these detectors would have required new caverns to be built, significantly increasing the cost. Given the cost and specialized purpose of these detectors, the panel recommended against their construction. These collaborations may now try to find ways to pare down their proposals so they can apply to the new small-project portfolio.

Another major decision by the panel was to recommend against hosting a new Higgs factory collider in the US. But that will be discussed more in a future post.

Conclusions

The P5 panel was faced with a difficult task: the total cost of all the projects they were presented with was three times the available budget. But they were able to craft a plan that continues the work of the previous decade, addresses current shortcomings, and lays out an inspiring vision for the future. So far the community seems to be strongly rallying behind it. At the time of writing, over 2700 community members, from undergraduates to senior researchers, have signed a petition endorsing the panel’s recommendations. This strong show of support will be key for turning these recommendations into actual funding, and hopefully for lobbying Congress to increase funding so that even more of this vision can be realized.

For those interested, the full report as well as executive summaries of the different areas can be found on the P5 website. Members of the US particle physics community are also encouraged to sign the petition endorsing the recommendations here.

And stay tuned for part 2 of our coverage, which will discuss the implications of the report for future colliders!

High Energy Physics: What Is It Really Good For?

Article: Forecasting the Socio-Economic Impact of the Large Hadron Collider: a Cost-Benefit Analysis to 2025 and Beyond
Authors: Massimo Florio, Stefano Forte, Emanuela Sirtori
Reference: arXiv:1603.00886v1 [physics.soc-ph]

Imagine this. You’re at a party talking to a non-physicist about your research.

If this scenario already has you cringing, imagine you’re actually feeling pretty encouraged this time. Your everyday analogy for the Higgs mechanism landed flawlessly and you’re even getting some interested questions in return. Right when you’re feeling like Neil DeGrasse Tyson himself, your flow grinds to a halt and you have to stammer an awkward answer to the question every particle physicist has nightmares about.

“Why are we spending so much money to discover these fundamental particles? Don’t they seem sort of… useless?”

Well, fair question. While we physicists simply get by on a passion for the field, a team of Italian economists actually did the legwork on this one. And they came up with a really encouraging answer.

The paper summarized here performs a cost-benefit analysis of the LHC from 1993 to 2025, in order to estimate its eventual impact on the world at large. That includes not only the benefit to future scientific endeavors, but to industry and even the general public as well. To do this, the authors called upon some classic non-physics notions, so let’s start with a quick economics primer.

  • A cost-benefit analysis (CBA) is a common thing to do before launching a large-scale investment project. The LHC is a particularly tough thing to analyze; it is massive, international, complicated, and has a life span of several decades.
  • In general, basic research is notoriously difficult to justify to funding agencies, since there are no immediate applications. (A similar problem is encountered with environmental CBAs, so there are some overlapping ideas between the two.) Something that taxpayers fund without getting any direct use of the end product is referred to as a non-use value.
  • When predictions about the future get fuzzy, economists define something called a quasi-option value. For the LHC, this includes aspects of timing and resource allocation (for example, what potential quality-of-life benefits come from discovering supersymmetry, and how bad would it have been if we pushed these off another 100 years?)
  • One can also make a general umbrella term for the benefit of pure knowledge, called an existence value. This involves a sort of social optimization; basically what taxpayers are willing to pay to get more knowledge.

The actual equation used to represent the different costs and benefits at play here is below.

NPVu = PVBu − PVCu + (QOV0 + EXV0)

Let’s break this down by terms.

PVCu is the sum of operating costs and capital associated with getting the project off the ground and continuing its operation.

PVBu is the economic value of the benefits. Here is where we have to break things down even further, into who is benefitting and what they get out of it:

  1. Scientists, obviously. They get to publish new research and keep having jobs. Same goes for students and post-docs.
  2. Technological industry. Not only do they get wrapped up in the supply chain of building these machines, but basic research can quickly turn into very profitable new ideas for private companies.
  3. Everyone else. Because it’s fun to tour the facilities or go to public lectures. Plus CERN even has an Instagram now.

Just to give you an idea of how much overlap there really is between all these sources of benefit,  Figure 1 shows the monetary amount of goods procured from industry for the LHC. Figure 2 shows the number of ROOT software downloads, which, if you are at all familiar with ROOT, may surprise you (yes, it really is very useful outside of HEP!)

 

Figure 1: Amount of money (thousands of Euros) spent on industry for the LHC. pCp is past procurement, tHp1 is the total high tech procurement, and tHp2 is the high tech procurement for orders > 50 kCHF.

Figure 2: Number of ROOT software downloads over time.

The rightmost term encompasses the non-use value, which is the sum of the quasi-option value QOV0 and the existence value EXV0. If it sounds hard to measure a quasi-option value, it really is. In fact, the authors of this paper simply set it to 0 as a worst-case value.

The other values come from in-depth interviews of over 1500 people, including all different types of physicists and industry representatives, as well as from previous research papers. This data is then funneled into a computable matrix model, with a cell for each cost/benefit variable for each year of the LHC’s lifetime. One can then create a conditional probability distribution function for the NPV using Monte Carlo simulations to handle the stochastic variables.
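
As a cartoon of that procedure, here is a minimal Monte Carlo sketch in Python. The yearly cost and benefit distributions, the discount rate, and the existence-value term are all invented placeholders rather than the paper’s interview-derived inputs; the snippet only shows the mechanics of sampling uncertain yearly flows, discounting them to present value, and building up a distribution for the NPV.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_npv(n_trials=100_000, years=30, discount_rate=0.03):
        """Toy version of NPVu = PVBu - PVCu + (QOV0 + EXV0) with stochastic inputs.

        All distributions are illustrative placeholders, not the paper's data.
        """
        t = np.arange(years)
        discount = (1.0 + discount_rate) ** t  # per-year discount factors

        benefits = rng.normal(600.0, 150.0, size=(n_trials, years))  # M EUR / year (made up)
        costs = rng.normal(500.0, 100.0, size=(n_trials, years))     # M EUR / year (made up)

        exv = rng.normal(500.0, 200.0, size=n_trials)  # existence value (made up)
        qov = 0.0  # quasi-option value set to zero, mirroring the paper's conservative choice

        return ((benefits - costs) / discount).sum(axis=1) + exv + qov

    npv = simulate_npv()
    print(f"mean NPV ~ {npv.mean():.0f} M EUR, P(NPV > 0) ~ {(npv > 0).mean():.2f}")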

The resulting PDF is shown in Figure 3, with an expected NPV of 2.9 billion Euro! This also gives an expected benefit/cost ratio of 1.2; a project is generally considered justifiable if this ratio is greater than 1. If this all seems terribly exciting (it is), it never hurts to contact your Congressman and tell them just how much you love physics. It may not seem like much, but it will help ensure that the scientific community continues to get projects on the level of the LHC, even during tough federal budget times.

Figure 3: Net present value PDF (left) and cumulative distribution (right).

Here’s hoping this article helped you avoid at least one common source of awkwardness at a party. Unfortunately we can’t help you field concerns about the LHC destroying the world. You’re on your own with that one.

 

Further Reading:

  1. Another supercollider that didn’t get so lucky: The SSC story
  2. More on cost-benefit analysis

 

What Scientists Should Know About Science Hack Day

One of the failures of conventional science outreach is that it’s easy to say what our science is about, but it’s very difficult to convey what it’s like to do science. And on top of that, how can we do this in a way that:

  • scales and can be ported to different places
  • generates and nurtures a continued interest in science
  • can patch on to practical and useful citizen science
  • requires only a modest input from specialists?

Well, now there’s a killer-app for that.

 

Science Hack Day: San Francisco

I recently had the distinct privilege of participating in this year’s Science Hack Day (SHD) in San Francisco as a Science Ambassador. On the surface, SHD is a science-themed hackathon: a weekend where people get together to collaboratively develop neat ideas. More than this, though, Science Hack Day encapsulates precisely the joy of collaborative discovery and problem solving that drew me into a career in research.

Massively Multiplayer Science
Ariel Waldman, ‘global instigator’ for Science Hack Day, gives the opening remarks at Science Hack Day: SF.

I cannot overstate how much this resonated with me as a scientist: over the course of about 30 hours, SHD created a microcosm of how we do science, and it did so in a way that brought together people of very different age groups, genders, ethnicities, and professional backgrounds to hack and learn and create. Many of these projects made use of open data sets, and many of them ended up open source: either in the form of GitHub repositories for software or instructables for more physical creations.

The hacks ranged from the fun (a board game based on the immune system) to the practical (a Chrome app that overlays CO2 emissions onto Google Maps). They were marketable (a 3D candy pen) and mesmerizing (an animation of 15 years of hand-drawn solar records). Some were simply inspiring, such as coordinating a day to view the rings of Saturn to spark interest in science.

The Best Parts of Grad School in 30 Hours

One thing that especially rang true to me was that Science Hack Day—like the actual day-to-day science done by researchers—is not about deliverables. The science isn’t the poster that you glued together at your 4th grade science fair (or the journal article that is similarly glued together decades later), it’s all of the action before that. It’s about casual brainstorming, “literature reviews” looking for existing off-the-shelf tools, bumping into experts and getting their feedback, the many times things break—and the breakthroughs from understanding why, and then the devil-may-care kludges to get a prototype up and running with your teammates.

All that is what I want to convey when people tell me that particle physics sounds neat, but what exactly is it that we do all day long in the ivory tower? Now I know the answer: we’re science hacking—and you can try it out, too.

DSC02999
Ariel’s tips for Science Hack Day are also useful reminders for academic researchers… and really, probably for everyone.

And this, if you ask me, is precisely what needs to be injected into science outreach. It’s always fantastic when people are wowed by inspiring talks from charismatic scientists, but nothing can replace the pure joy of actually putting on the proverbial lab coat and losing yourself in curiosity-based tinkering and problem solving.

Boots on the ground outreach

I owe a lot to Matt Bellis, a physicist and SHD veteran, for preparing me for SHD. He describes the event from the point of view of a scientist as “boots on the ground outreach.” Science Hack Day is a way to “engage” with the science-minded public in a meaningful way.

And by “engage” I mean “make cool things.” I also mean “interact with as colleagues rather than as a teacher.”

And by “science-minded public,” I really mean a slice of the public that is already interested in science, but is also interested in participating as citizen scientists, continuing to tinker with code on GitHub, or even just spreading the joy of science-themed hacking to their respective communities. This is science wanting to go viral, and the SHD participants want to be patient zeroes.

A shot of the crowd before project presentations at SHD:SF 2015. Image courtesy of Matt Biddulph.

SHD is free, volunteer driven (an Avogadro’s number of thank-yous to the SHD organizers and volunteers), and open to the community. The demographics of the crowd at SHD:SF were a lot closer to the actual population of San Francisco, and thus a lot closer to the demographics that we academics also want to see reflected in the academy. Events like SHD aren’t just preaching to the choir; they’re a real opportunity to promote STEM fields broadly to underrepresented groups.

In fact, think about the moment you were hooked on science. For many of us, that moment was a combination of serendipity and opportunity. What would it take to make that spark accessible to more people? SHD is one such event. Indeed, it even generated a science hack for precisely that.

Open Science

There was also a valuable message to glean from the crowd at the event: people want to play with data. And the general public is even happier when academics provide tools to play with that data.

Data doesn’t even have to be what you conventionally think of as data. Alex Parker’s “solar archive” won the “best use of data” award for a dataset of 15 years of daily hand-drawn images of the sun made by astronomers at the Mount Wilson Observatory. Alex’s team used image processing techniques to clean, organize, and animate the images. The result is hypnotic to watch, but is also a gateway to actual science education: what are these sun spots that they’re annotating? How did they draw these images? What can we learn from this record?

opensciencehack
Giving a “lightning talk” on data sets in particle physics. Slides available.

Open data sets are a little more difficult in particle physics: collider data is notoriously subtle to analyze, mostly because background subtraction typically requires an advanced physics background. Nevertheless, our field is slowly evolving and there are now some options available. See my lightning talk slides for a brief discussion of these with links.

The point, though, is that there is demand. For the public, the more people demand open data sets, even just for “playing,” the more scientists will understand the potential for productive partnerships with citizen scientists. And for scientists: make your tools available. This holds true even for technical tools; GitHub is a great way to get your colleagues to pick up the research directions you find exciting by sharing Mathematica or Jupyter notebooks!

 

The Science Hack Day Movement

SHD San Francisco participants were able to video chat with participants from parallel SHD events going on in Berlin and Madagascar. Photo courtesy of Matt Biddulph.

A quick look at the SHD main page shows that Science Hack Days are popping up all over the world. In true open-source spirit, SHD even has a set of resources for putting together your own Science Hack Day event. In other words, Science Hack Day scales: you can build upon the experiences of past events to create your own. I suspect there is untapped potential to seed Science Hack Day into universities, where many computer science departments have experience with hackathons and many physics departments have a large set of lecture demonstrations that may be amenable to hacking.

Needless to say, the weekend turned me into a Science Hack Day believer. I strongly encourage anyone of any scientific background to try out one of these events: it’s a weekend that doesn’t require any advance planning (though it helps to brainstorm), and you’ll be surprised at the neat things you can develop and the neat new friends you make along the way. And that, to me, is a summary of what’s great about doing science.

See you at the next Science Hack Day!

 


Many thanks to the people who made SHD:SF so magical for me: Jun Axup and Rose Broome for delightful conversations and their enthusiasm, Matt Biddulph for taking photos, Mayank Kedia, Kris Kooi, and Chrisantha Perera for hacking with me, all of the volunteers and sponsors (especially the Sloan and Moore foundations for supporting the ambassador program), Matt Bellis for passing on his past projects and data sets, and all of the wonderful hackers who I got to learn from and chat with. Most importantly, though, huge thanks and gratitude to Ariel Waldman, who is the driving force of the Science Hack Day movement and has brought so much joy and science to so many people while simultaneously being incredibly modest about her contributions.