The P5 Report & The Future of Particle Physics (Part 1)

Particle physics is the epitome of ‘big science’. Answering our most fundamental questions about physics requires world-class experiments that push the limits of what’s technologically possible. Such incredibly sophisticated experiments, like those at the LHC, require big facilities to make them possible, big collaborations to run them, big project planning to make dreams of new facilities a reality, and committees with big acronyms to decide what to build.

Enter the Particle Physics Project Prioritization Panel (aka P5) which is tasked with assessing the landscape of future projects and laying out a roadmap for the future of the field in the US. And because these large projects are inevitably an international endeavor, the report they released last week has a large impact on the global direction of the field. The report lays out a vision for the next decade of neutrino physics, cosmology, dark matter searches and future colliders. 

P5 follows the community-wide brainstorming effort known as the Snowmass Process, in which researchers from all areas of particle physics laid out a vision for the future. The Snowmass process led to a particle physics ‘wish list’, consisting of all the projects and research particle physicists would be excited to work on. The P5 process is the hard part, when this incredibly exciting and diverse research program has to be made to fit within realistic budget scenarios. Advocates for different projects and research areas had to make a case for what science their project could achieve and provide a detailed estimate of the costs. The panel then takes in all this input and makes a set of recommendations for how the budget should be allocated: which projects should be realized and which hopes are dashed. Though the panel only produces a set of recommendations, they are used quite extensively by the Department of Energy, which actually allocates funding. If your favorite project is not endorsed by the report, it’s very unlikely to be funded.

Particle physics is an incredibly diverse field, covering sub-atomic to cosmic scales, so recommendations are divided up into several different areas. In this post I’ll cover the panel’s recommendations for neutrino physics and the cosmic frontier. Future colliders, perhaps the spiciest topic, will be covered in a follow up post.

The Future of Neutrino Physics

For those in the neutrino physics community, all eyes were on the panel’s recommendations regarding the Deep Underground Neutrino Experiment (DUNE). DUNE is the US’s flagship particle physics experiment for the coming decade and aims to be the definitive worldwide neutrino experiment in the years to come. A high-powered beam of neutrinos will be produced at Fermilab and sent 800 miles through the earth’s crust towards several large detectors placed in a mine in South Dakota. It’s a much bigger project than previous neutrino experiments, unifying essentially the entire US community into a single collaboration.

DUNE is set up to produce world-leading measurements of neutrino oscillations, the phenomenon by which a neutrino produced in one ‘flavor state’ (e.g. an electron neutrino) gradually changes its state with sinusoidal probability (e.g. into a muon neutrino) as it propagates through space. This oscillation is made possible by a simple piece of quantum mechanical weirdness: a neutrino’s flavor state, i.e. whether it couples to electrons, muons, or taus, is not the same as its mass state. Neutrinos of a definite mass are therefore a mixture of the different flavors, and vice versa.
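
To make the “sinusoidal probability” concrete, here is a minimal two-flavor sketch of how an oscillation probability depends on baseline and energy. The ~1300 km baseline is just the 800 miles above converted to kilometers, and the mixing and mass-splitting values are rough textbook numbers, not DUNE’s official inputs (the real experiment uses a full three-flavor analysis).

import numpy as np

def oscillation_probability(L_km, E_GeV, sin2_2theta=0.085, dm2_eV2=2.5e-3):
    """Two-flavor oscillation probability.

    The factor 1.267 converts (eV^2 * km / GeV) into the dimensionless
    oscillation phase. Parameter values here are approximate.
    """
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return sin2_2theta * np.sin(phase) ** 2

# A DUNE-like setup: ~1300 km baseline, beam energies of a few GeV
for E in (0.5, 1.0, 2.0, 3.0, 5.0):
    print(f"E = {E:3.1f} GeV -> P(oscillation) ~ {oscillation_probability(1300, E):.3f}")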

Detailed measurements of this oscillation are the best way we know to determine several key neutrino properties. DUNE aims to finally pin down two crucial ones: the ‘mass ordering’, which will solidify how the different neutrino flavors and measured mass differences all fit together, and the amount of ‘CP violation’, which specifies whether neutrinos and their anti-matter counterparts behave the same or not. DUNE’s main competitor is the Hyper-Kamiokande experiment in Japan, another next-generation neutrino experiment with similar goals.

A depiction of the DUNE experiment. A high-intensity proton beam at Fermilab is used to create a concentrated beam of neutrinos, which are then sent through 800 miles of the Earth’s crust towards detectors placed deep underground in South Dakota. Source

Construction of the DUNE experiment has been ongoing for several years and unfortunately has not been going quite as well as hoped. It has faced significant schedule delays and cost overruns. DUNE is now not expected to start taking data until 2031, significantly behind Hyper-Kamiokande’s projected 2027 start. These delays may lead to Hyper-K making these definitive neutrino measurements years before DUNE, which would be a significant blow to the experiment’s impact. This left many DUNE collaborators worried about whether it would retain broad support from the community.

It came as a relief, then, when the P5 report re-affirmed the strong science case for DUNE, calling it the “ultimate long baseline” neutrino experiment. The report strongly endorsed the completion of the first phase of DUNE. However, it recommended a pared-down version of its upgrade, advocating for an earlier beam upgrade in lieu of additional detectors. This re-imagined upgrade should still achieve the core physics goals of the original proposal at a significant cost savings. This report, along with news that the beleaguered underground cavern construction in South Dakota is now 90% complete, was certainly welcome holiday news for the neutrino community. It also sets up a decade-long race between DUNE and Hyper-K to be the first to measure these key neutrino properties.

Cosmic Implications

While we normally think of particle physics as focused on the behavior of sub-atomic particles, it’s really about the study of fundamental forces and laws, no matter the method. This means that telescopes to study the oldest light in the universe, the Cosmic Microwave Background (CMB), fall into the same budget category as giant accelerators studying sub-atomic particles. Though the experiments in these two areas look very different, the questions they seek to answer are cross-cutting. Understanding how particles interact at very high energies helps us understand the earliest moments of the universe, when such particles were all interacting in a hot dense plasma. Likewise, studying these early moments of the universe and its large-scale evolution can tell us what kinds of particles and forces are influencing its dynamics. When asking fundamental questions about the universe, one needs both the sharpest microscopes and the grandest panoramas possible.

The most prominent example of this blending of the smallest and largest scales in particle physics is dark matter. Some of our best evidence for dark matter comes from analyzing the cosmic microwave background to determine how the primordial plasma behaved. These studies showed that some type of ‘cold’ matter that doesn’t interact with light, aka dark matter, was necessary to form the first clumps that eventually seeded the formation of galaxies. Without it, the universe would be much more soup-y and structureless than what we see today.

The “cosmic web” of galaxy clusters from the Millennium simulation. Measuring and understanding this web can tell us a lot about the fundamental constituents of the universe. Source

Determining what dark matter is therefore requires an attack from two fronts: designing experiments here on Earth that attempt to directly detect it, and further studying its cosmic implications to look for more clues as to its properties.

The panel recommended next-generation telescopes to study the CMB as a top priority. The so-called ‘Stage 4’ CMB experiment (CMB-S4) would deploy telescopes at both the South Pole and Chile’s Atacama Desert to better characterize sources of atmospheric noise. The CMB has been studied extensively before, but the increased precision of CMB-S4 could shed light on mysteries like dark energy, dark matter, inflation, and the recent Hubble tension. Given the past fruitfulness of these efforts, I think few doubted the science case for such a next-generation experiment.

A mockup of one of the CMB-S4 telescopes which will be based in the Chilean desert. Note the person for scale on the right (source)

The P5 report recommended a suite of new dark matter experiments in the next decade, including the ‘ultimate’ liquid-xenon-based dark matter search. Such an experiment would follow in the footsteps of massive noble gas experiments like LZ and XENONnT, which have been hunting for a favored type of dark matter called WIMPs for the last few decades. These experiments essentially build giant vats of liquid xenon, carefully shielded from any sources of external radiation, and look for signs of dark matter particles bumping into any of the xenon atoms. The larger the vat of xenon, the higher the chance a dark matter particle will bump into something. Current-generation experiments hold ~7 tons of xenon, and the next-generation experiment would be even larger. It aims to reach the so-called ‘neutrino floor’, the point at which the experiment would be sensitive enough to observe astrophysical neutrinos bumping into the xenon. Such neutrino interactions look extremely similar to those of dark matter, and thus represent an unavoidable background which signals the ultimate sensitivity of this type of experiment. WIMPs could still be hiding in a basement below this neutrino floor, but finding them would be exceedingly difficult.

A photo of the current XENONnT experiment. This pristine cavity is then filled with liquid Xenon and closely monitored for signs of dark matter particles bumping into one of the Xenon atoms. Credit: XENON Collaboration

WIMPs are not the only dark matter candidate in town, and recent years have seen an explosion of interest in the broad range of other dark matter possibilities, with axions being a prominent example. Other kinds of dark matter could have very different properties than WIMPs and have had far fewer dedicated experiments searching for them. There is ‘low-hanging fruit’ to pluck in the form of relatively cheap experiments that can achieve world-leading sensitivity. Previously, these ‘table top’ sized experiments had a notoriously difficult time obtaining funding, as they were often crowded out of the budgets by the massive flagship projects. However, small experiments can be crucial to giving us the best chance of a dark matter discovery, as they fill in the blind spots missed by the big projects.

The panel therefore recommended creating a new pool of funding set aside for these smaller scale projects. Allowing these smaller scale projects to flourish is important for the vibrancy and scientific diversity of the field, as the centralization of ‘big science’ projects can sometimes lead to unhealthy side effects. This specific recommendation also mirrors a broader trend of the report: to attempt to rebalance the budget portfolio to be spread more evenly and less dominated by the large projects.

A pie chart comparing the budget portfolio in 2023 (left) versus the projected budget in 2033 (right). Currently most of the budget is taken up by the accelerator upgrades and cavern construction for DUNE, with some amount for the LHC upgrades. But by 2033 the panel recommends a much more equitable balance between the different research areas.

What Didn’t Make It

Any report like this comes with some tough choices. Budget realities mean not all projects can be funded. Besides the paring down of some of DUNE’s upgrades, one of the biggest areas recommended against was ‘accessory experiments at the LHC’. In particular, MATHUSLA and the Forward Physics Facility were two proposals to build additional detectors near existing LHC collision points to look for particles that may be missed by the current experiments. By building new detectors hundreds of meters away from the collision point, shielded by concrete and the earth, they could obtain unique sensitivity to ‘long lived’ particles capable of traversing such distances. These experiments would follow in the footsteps of the current FASER experiment, which is already producing impressive results.

While FASER found success as a relatively ‘cheap’ experiment, reusing spare detector components from other experiments and situating itself in an existing service tunnel, these new proposals were asking for quite a bit more. The scale of these detectors would have required new caverns to be built, significantly increasing the cost. Given the cost and specialized purpose of these detectors, the panel recommended against their construction. These collaborations may now try to find ways to pare down their proposals so they can apply to the new small-project portfolio.

Another major decision by the panel was to recommend against hosting a new Higgs factory collider in the US. But that will be discussed more in a future post.

Conclusions

The P5 panel was faced with a difficult task: the total cost of all the projects they were presented with was three times the available budget. But they were able to craft a plan that continues the work of the previous decade, addresses current shortcomings, and lays out an inspiring vision for the future. So far the community seems to be rallying strongly behind it. At the time of writing, over 2700 community members, from undergraduates to senior researchers, have signed a petition endorsing the panel’s recommendations. This strong show of support will be key for turning these recommendations into actual funding, and hopefully for lobbying Congress to increase funding so that even more of this vision can be realized.

For those interested, the full report as well as executive summaries of the different areas can be found on the P5 website. Members of the US particle physics community are also encouraged to sign the petition endorsing the recommendations here.

And stay tuned for part 2 of our coverage, which will discuss the implications of the report for future colliders!

Moriond 2023 Recap

Every year since 1966, particle physicists have gathered in the Alps to unveil and discuss their most important results of the year (and to ski). This year I had the privilege to attend the Moriond QCD session, so I thought I would post a recap here. It was a packed agenda spanning 6 days of talks, and featured a lot of great results across many different areas of particle physics, so I’ll have to stick to the highlights here.

FASER Observes First Collider Neutrinos

Perhaps the most exciting result of Moriond came from the FASER experiment, a small detector recently installed in the LHC tunnel downstream from the ATLAS collision point. They announced the first ever observation of neutrinos produced at a collider. Neutrinos are produced all the time in LHC collisions, but because they very rarely interact, and current experiments were not designed to look for them, no one had ever actually observed them in a detector until now. Based on data collected during last year’s collisions, FASER observed 153 candidate neutrino events with a negligible amount of predicted background: an unmistakable observation.

A neutrino candidate in the FASER emulsion detector. Source

This first observation opens the door for studying the copious high energy neutrinos produced in colliders, which sit in an energy range currently unprobed by other neutrino experiments. The FASER experiment is still very new, so expect more exciting results from them as they continue to analyze their data. A first search for dark photons was also released which should continue to improve with more luminosity. On the neutrino side, they have yet to release full results based on data from their emulsion detector which will allow them to study electron and tau neutrinos in addition to the muon neutrinos this first result is based on.

New ATLAS and CMS Results

The biggest result from the general-purpose LHC experiments was ATLAS and CMS both announcing that they have observed the simultaneous production of 4 top quarks. This is one of the rarest Standard Model processes ever observed, occurring a thousand times less frequently than the production of a Higgs boson. Now that it has been observed, the two experiments will use Run-3 data to study the process in more detail and look for signs of new physics.

Candidate 4 top events from ATLAS (left) and CMS (right).

ATLAS also unveiled an updated measurement of the mass of the W boson. Since CDF announced its measurement last year, finding a value in tension with the Standard Model at ~7-sigma, further W mass measurements have become very important. This ATLAS result was actually a reanalysis of their previous measurement, with improved PDFs and statistical methods. Though still not as precise as the CDF measurement, these improvements shrank their uncertainty slightly (from 19 to 16 MeV). The ATLAS measurement reports a value of the W mass in very good agreement with the Standard Model, and in roughly 4-sigma tension with the CDF value. These measurements are very complex, and more work is going to be needed to clarify the situation.

CMS had an intriguing excess (2.8-sigma global) in a search for a Higgs-like particle decaying into an electron and muon. This kind of ‘flavor violating’ decay would be a clear indication of physics beyond the Standard Model. Unfortunately it does not seem like ATLAS has any similar excess in their data.

Status of Flavor Anomalies

At the end of 2022, LHCb announced that the golden channel of the flavor anomalies, the R(K) anomaly, had gone away upon further analysis. Many of the flavor physics talks at Moriond seemed to be dealing with this aftermath.

Of the remaining flavor anomalies, R(D), a ratio describing the decay rates of B mesons into final states with D mesons and taus versus D mesons plus muons or electrons, has still been attracting interest. LHCb unveiled a new measurement that focused on hadronically decaying taus and found a value that agreed with the Standard Model prediction. However, this new measurement had larger error bars than previous ones, so it only brought the world average down slightly. The deviation currently sits at around 3-sigma.
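
For reference, the ratio described above is conventionally defined as

\[
R(D^{(*)}) \;=\; \frac{\mathcal{B}(B \to D^{(*)} \tau \nu_\tau)}{\mathcal{B}(B \to D^{(*)} \ell \nu_\ell)}, \qquad \ell = e, \mu,
\]

so a value above the Standard Model prediction would mean taus are being produced more often than lepton universality predicts.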

A summary plot showing all the measurements of R(D) and R(D*). The newest LHCb measurement is shown as the red band / error bar on the left. The world average still shows a 3-sigma deviation from the SM prediction.

An interesting theory talk pointed out that essentially any new physics which would produce a deviation in R(D) should also produce a deviation in another lepton flavor ratio, R(Λc), because it features the same b → c l ν transition. However, LHCb’s recent measurement of R(Λc) actually found a small deviation in the opposite direction from R(D). The two results are only incompatible at the ~1.5-sigma level for now, but it’s something to keep an eye on if you are following the flavor anomaly saga.

It was nice to see that the newish Belle II experiment is now producing some very nice physics results, the highlight of which was a world-best measurement of the mass of the tau lepton. Look out for more nice Belle II results as they ramp up their luminosity, and hopefully they can weigh in on the R(D) anomaly soon.

A fit to the invariant mass of the visible decay products of the tau lepton, used to determine its intrinsic mass. An impressive show of precision from Belle II.

Theory Pushes for Precision

Much of the theory program focused on trying to advance the precision of Standard Model predictions. This ‘bread and butter’ physics is sometimes overlooked in the scientific press, but it is an absolutely crucial part of the particle physics ecosystem. As experiments reach better and better precision, improved theory calculations are required to accurately model backgrounds, predict signals, and provide precise Standard Model predictions to compare to so that deviations can be spotted. Nice results in this area included evidence for an intrinsic charm component inside the proton from the NNPDF collaboration, very precise extractions of CKM matrix elements using lattice QCD, and two different proposals for dealing with tricky aspects regarding the ‘flavor’ of QCD jets.

Final Thoughts

Those were all the results that stuck out to me. But this is of course a very biased sampling! I am not qualified enough to point out the highlights of the heavy ion sessions or much of the theory presentations. For a more comprehensive overview, I recommend checking out the slides for the excellent experimental and theoretical summary talks. Additionally there was the Moriond Electroweak conference that happened the week before the QCD one, which covers many of the same topics but includes neutrino physics results and dark matter direct detection. Overall it was a very enjoyable conference and really showcased the vibrancy of the field!

The Search for Simplicity : The Higgs Boson’s Self Coupling

When students first learn quantum field theory, the mathematical language that underpins the behavior of elementary particles, they start with the simplest possible interaction you can write down: a particle with no spin and no charge scattering off another copy of itself. One then eventually moves on to the more complicated interactions that describe the behavior of the fundamental particles of the Standard Model, and may quickly forget this simplified interaction as an unrealistic toy example, greatly simplified compared to the complexity of the real world. Though most interactions that underpin particle physics are indeed quite a bit more complicated, nature does hold a special place for simplicity. This barebones interaction is predicted to occur in exactly one scenario: a Higgs boson scattering off itself. And one of the next big targets for particle physics is to try and observe it.

A Feynman diagram of the simplest possible interaction in quantum field theory, a spin-zero particle interacting with itself.

The Higgs is the only particle without spin in the Standard Model, and the only one that doesn’t carry any type of charge. So even though particles such as gluons can interact with other gluons, it’s never two of the same kind of gluon (the two interacting gluons will always carry different color charges). The Higgs is the only particle that can have this ‘simplest’ form of self-interaction. Prominent theorist Nima Arkani-Hamed has said that the thought of observing this “simplest possible interaction in nature gives [him] goosebumps”.

But more than being interesting for its simplicity, this self-interaction of the Higgs underlies a crucial piece of the Standard Model: the story of how particles got their mass. The Standard Model tells us that the reason all fundamental particles have mass is their interaction with the Higgs field, with each particle’s mass proportional to how strongly it couples to the Higgs. The fact that particles have any mass at all is tied to the fact that the lowest energy state of the Higgs field is at a non-zero value. According to the Standard Model, early in the universe’s history, when temperatures were much higher, the Higgs potential had a different shape, with its lowest energy state at a field value of zero. At that point all the particles we know about were massless. As the universe cooled, the shape of the Higgs potential morphed into a ‘wine bottle’ shape, and the Higgs field moved into the new minimum at a non-zero value, where it sits today. The symmetry of the initial state, in which the Higgs sat at the center of its potential, was ‘spontaneously broken’: the new minimum, at a location away from the center, breaks the rotational symmetry of the potential. Spontaneous symmetry breaking is a very deep theoretical idea that shows up not just in particle physics but in exotic phases of matter as well (e.g. superconductors).
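
To make the ‘wine bottle’ story a bit more concrete, here is a schematic, tree-level sketch of the Standard Model Higgs potential and the self-interactions it generates (conventions for the couplings vary between references):

\[
V(H) = -\mu^2 |H|^2 + \lambda |H|^4, \qquad v = \sqrt{\mu^2/\lambda} \approx 246~\mathrm{GeV}.
\]

Expanding around the minimum, \(H \to (v + h)/\sqrt{2}\), gives a mass term plus the self-interactions discussed in this post:

\[
V(h) \supset \tfrac{1}{2} m_h^2 h^2 + \lambda_3\, v\, h^3 + \tfrac{1}{4} \lambda_4\, h^4, \qquad \lambda_3 = \lambda_4 = \frac{m_h^2}{2 v^2} \ \text{(at tree level)},
\]

where the \(h^3\) term is exactly the three-point self-coupling shown in the Feynman diagram above; measuring it is how we would map out the shape of the potential.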

A diagram showing the ‘unbroken’ Higgs potential in the very early universe (left) and the ‘wine bottle’ shape it has today (right). When the Higgs field sits at the center of its potential, the system has a rotational symmetry: there are no preferred directions. But once the field settles into its new minimum, that symmetry is broken. The Higgs now sits at a particular field value away from the center, and a preferred direction exists in the system.

This fantastical story of how particles gained their masses, one of the crown jewels of the Standard Model, has not yet been confirmed experimentally. So far we have studied the Higgs’s interactions with other particles, and started to confirm that it couples to particles in proportion to their mass. But to confirm this story of symmetry breaking we will need to study the shape of the Higgs’s potential, which we can probe only through its self-interactions. Many theories of physics beyond the Standard Model, particularly those that attempt to explain how the universe ended up with so much matter and so little anti-matter, predict modifications to the shape of this potential, further strengthening the importance of this measurement.

Unfortunately, observing the Higgs interacting with itself, and thus measuring the shape of its potential, will be no easy feat. The key way to observe the Higgs’s self-interaction is to look for a single Higgs boson splitting into two. Unfortunately, in the Standard Model other processes that produce two Higgs bosons interfere destructively with the self-interaction process, leading to a reduced production rate. It is expected that a Higgs boson scattering off itself occurs around 1000 times less often than the already rare processes which produce a single Higgs boson. A few years ago it was projected that by the end of the LHC’s run (with 20 times more data collected than is available today), we might barely be able to observe the Higgs’s self-interaction by combining data from both of the major experiments at the LHC (ATLAS and CMS).

Fortunately, thanks to sophisticated new data analysis techniques, LHC experimentalists are currently significantly outpacing the projected sensitivity. In particular, powerful new machine learning methods have allowed physicists to cut away background events mimicking the di-Higgs signal much more than was previously thought possible. Because each of the two Higgs bosons can decay in a variety of ways, the best sensitivity will be obtained by combining multiple different ‘channels’ targeting different decay modes. It is therefore going to take a village of experimentalists each working hard to improve the sensitivity in various different channels to produce the final measurement. However with the current data set, the sensitivity is still a factor of a few away from the Standard Model prediction. Any signs of this process are only expected to come after the LHC gets an upgrade to its collision rate a few years from now.

Current experimental limits on the simultaneous production of two Higgs bosons, a process sensitive to the Higgs’s self-interaction, from ATLAS (left) and CMS (right). The predicted rate from the Standard Model is shown in red in each plot while the current sensitivity is shown with the black lines. This process is searched for in a variety of different decay modes of the Higgs (various rows on each plot). The combined sensitivity across all decay modes for each experiment allows them currently to rule out the production of two Higgs bosons at 3-4 times the rate predicted by the Standard Model. With more data collected both experiments will gain sensitivity to the range predicted by the Standard Model.

While experimentalists will work as hard as they can to study this process at the LHC, performing a precision measurement of it, and really confirming the ‘wine bottle’ shape of the potential, will likely require a new collider. Studying this process in detail is one of the main motivations to build a new high energy collider, with the current leading candidates being an even bigger proton-proton collider to succeed the LHC or a new type of high energy muon collider.

A depiction of our current uncertainty on the shape of the Higgs potential (center), our expected uncertainty at the end of the LHC (top right), and the projected uncertainty a new muon collider could achieve (bottom right). The Standard Model expectation is the tan line and the brown band shows the experimental uncertainty. Adapted from Nathaniel Craig’s talk here.

The quest to study nature’s simplest interaction will likely span several decades. But this long journey gives particle physicists a roadmap for the future, and a treasure worth traveling great lengths for.

Read More:

CERN Courier interview with Nima Arkani-Hamed on the future of particle physics and the importance of the Higgs’s self-coupling

Wikipedia Article and Lecture Notes on Spontaneous symmetry breaking

Recent ATLAS Measurements of the Higgs Self Coupling

Making Smarter Snap Judgments at the LHC

Collisions at the Large Hadron Collider happen fast. 40 million times a second, bunches of ~10¹¹ protons are smashed together. The rate of these collisions is so fast that the computing infrastructure of the experiments can’t keep up with all of them. We are not able to read out and store the result of every collision that happens, so we have to ‘throw out’ nearly all of them. Luckily most of these collisions are not very interesting anyway. Most of them are low-energy interactions of quarks and gluons via the strong force that have already been studied at previous colliders. In fact, the interesting processes, like ones that create a Higgs boson, can happen billions of times less often than the uninteresting ones.

The LHC experiments are thus faced with a very interesting challenge: how do you decide extremely quickly whether an event is interesting and worth keeping or not? This is what the ‘trigger’ system, the Marie Kondo of LHC experiments, is designed to do. CMS, for example, has a two-tiered trigger system. The first level has 4 microseconds to make a decision and must reduce the event rate from 40 million events per second to 100,000. This speed requirement means the decision has to be made at the hardware level, requiring the use of specialized electronics to quickly synthesize the raw information from the detector into a rough idea of what happened in the event. Selected events are then passed to the High Level Trigger (HLT), which has 150 milliseconds to run versions of the CMS reconstruction algorithms to further reduce the event rate to a thousand per second.
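
To see why such an aggressive reduction is necessary, here is a rough back-of-the-envelope estimate; the ~1 MB raw event size is an assumed round number for illustration, not an official figure.

# Back-of-the-envelope trigger arithmetic (illustrative numbers only)
collision_rate_hz = 40e6    # bunch crossings per second
l1_output_hz = 100e3        # events/s surviving the Level-1 (hardware) trigger
hlt_output_hz = 1e3         # events/s surviving the High Level Trigger
raw_event_size_mb = 1.0     # assumed raw event size, very roughly O(1 MB)

print(f"No trigger at all: ~{collision_rate_hz * raw_event_size_mb / 1e6:.0f} TB/s of raw data")
print(f"After Level-1 only: ~{l1_output_hz * raw_event_size_mb / 1e3:.0f} GB/s")
print(f"After both trigger levels: ~{hlt_output_hz * raw_event_size_mb / 1e3:.0f} GB/s written out")
print(f"Fraction of collisions kept: {hlt_output_hz / collision_rate_hz:.1e}")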

While this system works very well for most uses of the data, like measuring the decay of Higgs bosons, sometimes it can be a significant obstacle. If you want to look through the data for evidence of a new particle that is relatively light, it can be difficult to prevent the trigger from throwing out possible signal events. This is because one of the most basic criteria the trigger uses to select ‘interesting’ events is that they leave a significant amount of energy in the detector. But the decay products of a new particle that is relatively light won’t have a substantial amount of energy and thus may look ‘uninteresting’ to the trigger.

In order to get the most out of their collisions, experimenters are thinking hard about these problems and devising new ways to look for signals the triggers might be missing. One idea is to save additional events from the HLT in a substantially reduced size. Rather than saving the raw information from the event, which can be fully processed at a later time, only the output of the quick reconstruction done by the trigger is saved. At the cost of some precision, this can reduce the size of each event by roughly two orders of magnitude, allowing events with significantly lower energy to be stored. CMS and ATLAS have used this technique to look for new particles decaying to two jets, and LHCb has used it to look for dark photons. The use of these fast reconstruction techniques allows them to search for, and rule out the existence of, particles with much lower masses than otherwise possible. As experiments explore new computing infrastructures (like GPUs) to speed up their high level triggers, they may try to do even more sophisticated analyses using these techniques.

But experimenters aren’t just satisfied with getting more out of their high level triggers; they want to revamp the low-level ones as well. In order to get these hardware-level triggers to make smarter decisions, experimenters are trying to get them to run machine learning models. Machine learning has become a very popular tool for looking for rare signals in LHC data. One of the advantages of machine learning models is that once they have been trained, they can make complex inferences in a very short amount of time. Perfect for a trigger! Now a group of experimentalists have developed a library that can translate the most popular types of machine learning models into a format that can be run on the Field Programmable Gate Arrays used in the lowest level triggers. This would allow experiments to quickly identify events from rare signals with complex signatures that the current low-level triggers don’t have time to look for.
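
As a cartoon of what ‘running a machine learning model in the trigger’ means, the sketch below evaluates a tiny fully-connected network with plain matrix arithmetic; tools like the library mentioned above translate networks of roughly this shape into FPGA firmware with latencies of order microseconds or below. The input features, weights, and threshold here are all made up for illustration.

import numpy as np

# A toy 2-layer network: 4 trigger-level inputs -> 8 hidden units -> 1 score.
# On an FPGA, the same multiply-accumulate pattern is unrolled into
# fixed-point arithmetic with a deterministic latency.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # stand-ins for trained weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def trigger_score(features):
    """features: e.g. [jet pT, missing ET, number of muons, leading muon pT] (made up)."""
    hidden = np.maximum(features @ W1 + b1, 0.0)  # ReLU
    return float(hidden @ W2 + b2)                # higher score = more signal-like

event = np.array([120.0, 85.0, 1.0, 30.0])
keep_event = trigger_score(event) > 0.5           # threshold tuned to an acceptable output rate
print(keep_event)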

The LHC experiments are working hard to get the most out of their collisions. There could be particles already being produced in LHC collisions that we haven’t been able to see because of our current triggers; these new techniques aim to cover those blind spots. Look out for new ideas on how to quickly search for interesting signatures, especially as we get closer to the high luminosity upgrade of the LHC.

Read More:

CERN Courier article on programming FPGAs

IRIS HEP Article on a recent workshop on Fast ML techniques

CERN Courier article on older CMS search for low mass dijet resonances

ATLAS Search using ‘trigger-level’ jets

LHCb Search for Dark Photons using fast reconstruction based on a high level trigger

Paper demonstrating the feasibility of running ML models for jet tagging on FPGAs

The CMB sheds light on galaxy clusters: Observing the kSZ signal with ACT and BOSS

Article: Detection of the pairwise kinematic Sunyaev-Zel’dovich effect with BOSS DR11 and the Atacama Cosmology Telescope
Authors: F. De Bernardis, S. Aiola, E. M. Vavagiakis, M. D. Niemack, N. Battaglia, and the ACT Collaboration
Reference: arXiv:1607.02139

Editor’s note: this post is written by one of the students involved in the published result.

Like X-rays shining through your body can inform you about your health, the cosmic microwave background (CMB) shining through galaxy clusters can tell us about the universe we live in. When light from the CMB is distorted by the high energy electrons present in galaxy clusters, it’s called the Sunyaev-Zel’dovich effect. A new 4.1σ measurement of the kinematic Sunyaev-Zel’dovich (kSZ) signal has been made from the most recent Atacama Cosmology Telescope (ACT) cosmic microwave background (CMB) maps and galaxy data from the Baryon Oscillation Spectroscopic Survey (BOSS). With steps forward like this one, the kinematic Sunyaev-Zel’dovich signal could become a probe of cosmology, astrophysics and particle physics alike.

The Kinematic Sunyaev-Zel’dovich Effect

It rolls right off the tongue, but what exactly is the kinematic Sunyaev-Zel’dovich signal? Galaxy clusters distort the cosmic microwave background before it reaches Earth, so we can learn about these clusters by looking at these CMB distortions. In our X-ray metaphor, the map of the CMB is the image of the X-ray of your arm, and the galaxy clusters are the bones. Galaxy clusters are the largest gravitationally bound structures we can observe, so they serve as important tools to learn more about our universe. In its essence, the Sunyaev-Zel’dovich effect is inverse-Compton scattering of cosmic microwave background photons off of the gas in these galaxy clusters, whereby the photons gain a “kick” in energy by interacting with the high energy electrons present in the clusters.

The Sunyaev-Zel’dovich effect can be divided up into two categories: thermal and kinematic. The thermal Sunyaev-Zel’dovich (tSZ) effect is the spectral distortion of the cosmic microwave background in a characteristic manner due to the photons gaining, on average, energy from the hot (~10⁷ – 10⁸ K) gas of the galaxy clusters. The kinematic (or kinetic) Sunyaev-Zel’dovich (kSZ) effect is a second-order effect, about a factor of 10 smaller than the tSZ effect, that is caused by the motion of galaxy clusters with respect to the cosmic microwave background rest frame. If the CMB photons pass through galaxy clusters that are moving, they are Doppler shifted due to the cluster’s peculiar velocity (the velocity that cannot be explained by Hubble’s law, which states that objects recede from us at a speed proportional to their distance). The kinematic Sunyaev-Zel’dovich effect is the only known way to directly measure the peculiar velocities of objects at cosmological distances, and is thus a valuable source of information for cosmology. It allows us to probe megaparsec and gigaparsec scales – that’s around 30,000 times the diameter of the Milky Way!
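
Schematically, the kSZ temperature shift along a line of sight through a cluster is, to leading order in v/c,

\[
\frac{\Delta T_{\mathrm{kSZ}}}{T_{\mathrm{CMB}}} \;\simeq\; -\,\tau\,\frac{v_{\mathrm{los}}}{c}, \qquad \tau = \sigma_T \int n_e\, \mathrm{d}l,
\]

where τ is the optical depth to Thomson scattering through the cluster’s electron gas and v_los is the cluster’s line-of-sight peculiar velocity (positive for a receding cluster, which gives a small decrement).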

A schematic of the Sunyaev-Zel’dovich effect resulting in higher energy (or blue shifted) photons of the cosmic microwave background (CMB) when viewed through the hot gas present in galaxy clusters. Source: UChicago Astronomy.

 

Measuring the kSZ Effect

To make the measurement of the kinematic Sunyaev-Zel’dovich signal, the Atacama Cosmology Telescope (ACT) collaboration used a combination of cosmic microwave background maps from two years of observations by ACT. The CMB map used for the analysis overlapped with ~68000 galaxy sources from the Large Scale Structure (LSS) DR11 catalog of the Baryon Oscillation Spectroscopic Survey (BOSS). The catalog lists the coordinate positions of galaxies along with some of their properties. The most luminous of these galaxies were assumed to be located at the centers of galaxy clusters, so temperature signals from the CMB map were taken at the coordinates of these galaxy sources in order to extract the Sunyaev-Zel’dovich signal.

While the smallness of the kSZ signal with respect to the tSZ signal and the noise level in current CMB maps poses an analysis challenge, there exist several approaches to extracting the kSZ signal. To make their measurement, the ACT collaboration employed a pairwise statistic. “Pairwise” refers to the momentum between pairs of galaxy clusters, and “statistic” indicates that a large sample is used to rule out the influence of unwanted effects.

Here’s the approach: nearby galaxy clusters move towards each other on average, due to gravity. We can’t easily measure the three-dimensional momentum of clusters, but the average pairwise momentum can be estimated by using the line of sight component of the momentum, along with other information such as redshift and angular separations between clusters. The line of sight momentum is directly proportional to the measured kSZ signal: the microwave temperature fluctuation which is measured from the CMB map. We want to know if we’re measuring the kSZ signal when we look in the direction of galaxy clusters in the CMB map. Using the observed CMB temperature to find the line of sight momenta of galaxy clusters, we can estimate the mean pairwise momentum as a function of cluster separation distance, and check to see if we find that nearby galaxies are indeed falling towards each other. If so, we know that we’re observing the kSZ effect in action in the CMB map.
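
A schematic NumPy version of a pairwise estimator of this general type is sketched below; the exact weighting, binning, and corrections used in the paper differ in detail, so treat this only as an illustration of the structure of the statistic.

import numpy as np

def pairwise_momentum(positions, dT, r_bins):
    """Schematic pairwise kSZ estimator.

    positions : (N, 3) comoving cluster positions [Mpc]
    dT        : (N,) CMB temperature fluctuations measured at the cluster locations
    r_bins    : bin edges in pair separation [Mpc]

    Returns an estimate of the mean pairwise momentum amplitude per separation bin,
    up to overall normalization (not the paper's exact weighting).
    """
    rhat = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    i, j = np.triu_indices(len(dT), k=1)               # all unique pairs
    sep_vec = positions[i] - positions[j]
    sep = np.linalg.norm(sep_vec, axis=1)
    # Geometric weight: projection of the pair axis onto the mean line of sight
    c_ij = np.einsum("pk,pk->p", sep_vec / sep[:, None], 0.5 * (rhat[i] + rhat[j]))
    p_hat = np.zeros(len(r_bins) - 1)
    for k in range(len(p_hat)):
        in_bin = (sep >= r_bins[k]) & (sep < r_bins[k + 1])
        if in_bin.any():
            p_hat[k] = -np.sum((dT[i][in_bin] - dT[j][in_bin]) * c_ij[in_bin]) / np.sum(c_ij[in_bin] ** 2)
    return p_hat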

For the measurement quoted in their paper, the ACT collaboration finds the average pairwise momentum as a function of galaxy cluster separation, and explores a variety of error determinations and sources of systematic error. The most conservative errors based on simulations give signal-to-noise estimates that vary between 3.6 and 4.1.

The mean pairwise momentum estimator and best fit model for a selection of 20000 objects from the DR11 Large Scale Structure catalog, plotted as a function of comoving separation. The dashed line is the linear model, and the solid line is the model prediction including nonlinear redshift space corrections. The best fit provides a 4.1σ evidence of the kSZ signal in the ACTPol-ACT CMB map. Source: arXiv:1607.02139.

The ACT and BOSS results are an improvement on the 2012 ACT detection, and are comparable with results from the South Pole Telescope (SPT) collaboration that use galaxies from the Dark Energy Survey. The ACT and BOSS measurement represents a step forward towards improved extraction of kSZ signals from CMB maps. Future surveys such as Advanced ACTPol, SPT-3G, the Simons Observatory, and next-generation CMB experiments will be able to apply the methods discussed here to improved CMB maps in order to achieve strong detections of the kSZ effect. With new data that will enable better measurements of galaxy cluster peculiar velocities, the pairwise kSZ signal will become a powerful probe of our universe in the years to come.

Implications and Future Experiments

One interesting consequence for particle physics will be more stringent constraints on the sum of the neutrino masses from the pairwise kinematic Sunyaev-Zel’dovich effect. Upper bounds on the neutrino mass sum from cosmological measurements of large scale structure and the CMB have the potential to determine the neutrino mass hierarchy, one of the next major unknowns of the Standard Model to be resolved, if the mass hierarchy is indeed a “normal hierarchy” with ν3 being the heaviest mass state. If the upper bound of the neutrino mass sum is measured to be less than 0.1 eV, the inverted hierarchy scenario would be ruled out, due to there being a lower limit on the mass sum of ~0.095 eV for an inverted hierarchy and ~0.056 eV for a normal hierarchy.
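
The quoted lower limits follow directly from the measured mass-squared splittings. A rough version of the arithmetic, using approximate splitting values rather than the latest global-fit numbers, looks like this (which is why the sums come out slightly different from the ~0.056 eV and ~0.095 eV quoted above):

import numpy as np

# Approximate neutrino mass-squared splittings (eV^2); rounded values, not a global fit
dm2_sol = 7.5e-5   # "solar" splitting, m2^2 - m1^2
dm2_atm = 2.5e-3   # magnitude of the "atmospheric" splitting

# Normal hierarchy: lightest state m1 -> 0
m1, m2, m3 = 0.0, np.sqrt(dm2_sol), np.sqrt(dm2_atm)
print(f"Normal hierarchy minimum sum:   {m1 + m2 + m3:.3f} eV")   # ~0.06 eV

# Inverted hierarchy: lightest state m3 -> 0
m3, m2, m1 = 0.0, np.sqrt(dm2_atm), np.sqrt(dm2_atm - dm2_sol)
print(f"Inverted hierarchy minimum sum: {m1 + m2 + m3:.3f} eV")   # ~0.10 eV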

Forecasts for kSZ measurements in combination with input from Planck predict possible constraints on the neutrino mass sum with a precision of 0.29 eV, 0.22 eV and 0.096 eV for Stage II (ACTPol + BOSS), Stage III (Advanced ACTPol + BOSS) and Stage IV (next generation CMB experiment + DESI) surveys respectively, with the possibility of much improved constraints with optimal conditions. As cosmic microwave background maps are improved and Sunyaev-Zel’dovich analysis methods are developed, we have a lot to look forward to.

 


Monojet Dark Matter Searches at the LHC

Now is a good time to be a dark matter experiment. The astrophysical evidence for its existence is almost undeniable (such as gravitational lensing and the cosmic microwave background; see the “Further Reading” list if you want to know more.) Physicists are pulling out all the stops trying to pin DM down by any means necessary.

However, by its very nature, it is extremely difficult to detect; dark matter is called dark because it has no known electromagnetic interactions, meaning it doesn’t couple to the photon. It does, however, have very noticeable gravitational effects, and some theories allow for the possibility of weak interactions as well.

While there are a wide variety of experiments searching for dark matter right now, the scope of this post will be a bit narrower, focusing on a common technique used to look for dark matter at the LHC, known as ‘monojets’. We rely on the fact that a quark-quark interaction could actually produce dark matter particle candidates, known as weakly interacting massive particles (WIMPs), through some unknown process. Most likely, the dark matter would then pass through the detector without any interactions, kind of like neutrinos. But if it doesn’t have any interactions, how do we expect to actually see anything? Figure 1 shows the overall Feynman diagram of the interaction; I’ll explain how and why each of these particles comes into the picture.

Figure 1: Feynman diagram for dark matter production process.

The answer is a pretty useful metric used by particle physicists to measure things that don’t interact, known as ‘missing transverse energy’, or MET. When two protons are accelerated down the beam line, their initial momentum in the transverse plane is necessarily zero. Your final state can have all kinds of decay products in that plane, but by conservation of momentum, their magnitudes and directions have to add up to zero in the end. If you add up all the momentum in the transverse plane and get a non-zero value, you know the remaining momentum was carried away by non-interacting particles. In our case, dark matter is going to be the missing piece of the puzzle.
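
As a minimal illustration of that bookkeeping, the missing transverse momentum is just the negative of the vector sum of everything that was seen; the event contents below are made up.

import numpy as np

def missing_et(visible_objects):
    """visible_objects: list of (pt [GeV], phi [rad]) for reconstructed jets, leptons, etc."""
    px = sum(pt * np.cos(phi) for pt, phi in visible_objects)
    py = sum(pt * np.sin(phi) for pt, phi in visible_objects)
    # Whatever is needed to balance the visible sum is attributed to invisible particles
    return np.hypot(px, py), np.arctan2(-py, -px)

# A toy monojet-like event: one hard jet plus a little soft activity
event = [(350.0, 0.1), (20.0, 2.5), (15.0, -2.0)]
met, met_phi = missing_et(event)
print(f"MET = {met:.0f} GeV at phi = {met_phi:.2f}")  # roughly back-to-back with the hard jet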

Figure 2: Event display for one of the monojet candidates in the ATLAS 7 TeV data.

Now our search method is to collide protons and look for… well, nothing. That’s not an easy thing to do. So let’s add another particle to our final state: a single jet that was radiated off one of the initial protons. This is a pretty common occurrence in LHC collisions, so we’re not ruining our statistics. But now we have an extra handle for selecting these events, since that single radiated jet is going to recoil against the missing energy in the final state.

An actual event display from the ATLAS detector is shown in Figure 2 (where the single jet is shown in yellow in the transverse plane of the detector).

No results have been released yet from the monojet groups with the 13 and 14 TeV data. However, the same method was used on the 2012-2013 LHC data, and has provided some results that can be compared to current knowledge. Figure 3 shows the WIMP-nucleon cross section as a function of WIMP mass from CMS at the LHC (EPJC 75 (2015) 235), overlaid with exclusions from a variety of other experiments. Anything above/right of these curves is the excluded region.

From here we can see that the LHC can provide better sensitivity to low mass regions with spin dependent couplings to DM. It’s worth giving the brief caveat that these comparisons are extremely model dependent and require a lot of effective field theory; notes on this are also given in the Further Reading list. The current results look pretty thorough, and a large region of the WIMP mass seems to have been excluded. Interestingly, some searches observe slight excesses in regions that other experiments have ruled out; in this way, these ‘exclusions’ are not necessarily as cut and dry as they may seem. The dark matter mystery is still far from a resolution, but the LHC may be able to get us a little bit closer.

Figure 3: WIMP-nucleon cross section limits as a function of WIMP mass from the CMS monojet analysis (EPJC 75 (2015) 235), overlaid with exclusions from other experiments.

With all this incoming data and such a wide variety of searches ongoing, it’s likely that dark matter will remain a hot topic in physics for decades to come, with or without a discovery. In the words of dark matter pioneer Vera Rubin, “We have peered into a new world, and have seen that it is more mysterious and more complex than we had imagined. Still more mysteries of the universe remain hidden. Their discovery awaits the adventurous scientists of the future. I like it this way.”

 

References & Further Reading:

  • Links to the CMS and ATLAS 8 TeV monojet analyses
  • “Dark Matter: A Primer”, arXiv hep-ph 1006.2483
  • Effective Field Theory notes
  • “Simplified Models for Dark Matter Searches at the LHC”, arXiv hep-ph 1506.03116
  • “Search for dark matter at the LHC using missing transverse energy”, Sarah Malik, CMS Collaboration Moriond talk

 

Dark Photons from the Center of the Earth

Presenting: Dark Photons from the Center of the Earth
Authors: J. Feng, J. Smolinsky, P. Tanedo (disclosure: this blog post is by an author on the paper)
Reference: arXiv:1509.07525

Dark matter may be collecting in the center of the Earth. A recent paper explores ways to detect its decay products here on the surface.

Dark matter may collect in the Earth and annihilate into dark photons, which propagate to the surface before decaying into pairs of particles that can be detected by a large-volume neutrino detector like IceCube. Image from arXiv:1509.07525.

Our entire galaxy is gravitationally held together by a halo of dark matter, whose particle properties remain one of the biggest open questions in high energy physics. One class of theories assumes that the dark matter particles interact through a dark photon, a hypothetical particle which mediates a force analogous to how the ordinary photon mediates electromagnetism.

These theories also permit the ordinary and dark photons to have a small quantum mechanical mixing. This effectively means that the dark photon can interact very weakly with ordinary matter and mediate interactions between ordinary matter and dark matter, which gives a handle for ways to detect dark matter.

While most methods for detecting dark matter focus on building detectors that are sensitive to the “wind” of dark matter bombarding (and mostly passing through) the Earth as the solar system zooms through the galaxy, the authors of 1509.07525 follow up on an idea initially proposed in the mid-80’s: dark matter hitting the Earth might get stuck in the Earth’s gravitational potential and build up in its core.

These dark matter particles can then find each other and annihilate. If they annihilate into very weakly interacting particles, then these may be detected at the surface of the Earth. A typical example is dark matter annihilation into neutrinos. In 1509.07525, the authors examine the case where the dark matter annihilates into dark photons, which can pass through the Earth as easily as a neutrino and decay into pairs of electrons or muons near the surface.

These decays can be detected in large neutrino detectors, such as the IceCube neutrino observatory (previously featured in ParticleBites). In the case where the dark matter is very heavy (e.g. TeV in mass) and the dark photons are very light (e.g. 200 MeV), these dark photons are very boosted and their decay products point back to the center of the Earth. This is a powerful discriminating feature against background cosmic ray events.  The number of signal events expected is shown in the following contour plot:

Number of signal dark photon decays expected at the IceCube detector, in the plane of dark photon mixing versus dark photon mass. Image from arXiv:1509.07525. The blue region is in tension with direct detection bounds (from arXiv:1507.04007), while the gray regions are in tension with beam dump and supernova bounds; see e.g. arXiv:1311.029.
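
To see why the decay products point back at the Earth’s core so sharply in this benchmark, a quick boost estimate helps. The masses below are the benchmark values from the paragraph above; the decay length itself depends on the unknown mixing, so it is not estimated here.

# Benchmark from above: heavy dark matter annihilating nearly at rest into light dark photons
m_dm_gev = 1000.0   # ~TeV-scale dark matter mass
m_ap_gev = 0.2      # ~200 MeV dark photon mass

# Each dark photon carries away roughly the dark matter mass in energy
gamma = m_dm_gev / m_ap_gev
opening_angle_mrad = 1e3 / gamma    # decay products collimated within ~1/gamma radians

print(f"Boost factor gamma ~ {gamma:.0f}")                       # ~5000
print(f"Typical opening angle ~ {opening_angle_mrad:.1f} mrad")  # ~0.2 mrad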

While similar analyses of dark photon-mediated dark matter capture by celestial bodies and subsequent annihilation have been studied before (see e.g. Pospelov et al., Delaunay et al., Schuster et al., and Meade et al.), the authors of 1509.07525 focus on the case of dark matter capture in the Earth (rather than, say, the Sun) and annihilation to dark photons (rather than neutrinos). A few highlights of their analysis:

  1. The annihilation rate at the center of the Earth is greatly increased due to Sommerfeld enhancement (see the formula sketched after this list): because the captured dark matter has very low velocity, it is much more likely to annihilate with other captured dark matter particles due to the mutual attraction from dark photon exchange.
  2. This causes the Earth to quickly saturate with dark matter: capture and annihilation come into equilibrium, leading to larger annihilation rates than one would naively expect if the Earth were not yet saturated.
  3. In addition to using directional information to identify signal events against cosmic ray backgrounds, the authors identified kinematic quantities (the opening angle of the Standard Model decay products and the time delay between them) as ways to further discriminate signal from background. Unfortunately, their analysis implies that these features lie just outside of IceCube’s sensitivity.
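
For reference, the Sommerfeld factor mentioned in the first point has a familiar closed form in the Coulomb limit (a light mediator, dark coupling strength α_X, and relative velocity v):

\[
S(v) \;=\; \frac{\pi \alpha_X / v}{1 - e^{-\pi \alpha_X / v}} \;\approx\; \frac{\pi \alpha_X}{v} \quad \text{for } v \ll \alpha_X,
\]

so the very low velocities of gravitationally captured dark matter can enhance the annihilation rate by a large factor.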

Finally, the authors point out the possibility of large enhancements coming from the so-called dark disk, an enhancement in the low velocity phase space density of dark matter. If that is the case, then the estimated reach may increase by an order of magnitude.

 


How to Turn On a Supercollider

Figure 1: CERN Control Centre excitement on June 5. Image from home.web.cern.ch.

After two years of slumber, the world’s biggest particle accelerator has come back to life. This marks the official beginning of Run 2 of the LHC, which will collide protons at nearly twice the energies achieved in Run 1. Results from this data were already presented at the recently concluded European Physical Society (EPS) Conference on High Energy Physics. And after achieving fame in 2012 through the observation of the Higgs boson, it’s no surprise that the scientific community is waiting with bated breath to see what the LHC will do next.

The first official 13 TeV stable beam physics data arrived on June 5th. One of the first events recorded by the CMS detector is shown in Figure 2. But as it turns out, you can’t just walk up to the LHC, plug it back into the wall, and press the on switch (crazy, I know.) It takes an immense amount of work, planning, and coordination to even get the thing running.

Figure 2: Event display from one of the first Run 2 collisions.

The machine testing begins with the magnets. Since the LHC dipole magnets are superconducting, they need to be cooled to about 1.9 K in order to function, which can take weeks. Each dipole circuit then must be tested to ensure the functionality of the quench protection circuit, which protects the magnet by safely extracting its stored energy in the event of a sudden loss of superconductivity. This process occurred between July and December of 2014.

Once the magnets are set, it’s time to start actually making beam. Immediately before entering the LHC, protons circle around the Super Proton Synchrotron (SPS), which acts as a pre-accelerator. Getting beam from the SPS to the LHC requires synchronization, a functional injection system, a beam dump procedure, and a whole lot of other processes that are re-awoken and carefully tested. By April, beam commissioning was officially underway, meaning that protons were injected and circulating, and a mere 8 weeks later there were successful collisions at the safe beam energy of 6.5 TeV. As of right now, the CMS detector is reporting 84 pb⁻¹ of total integrated luminosity; a day-by-day breakdown can be seen in Figure 3.

Figure 3: CMS total integrated luminosity per day, from Ref 4.

But just having collisions does not mean that the LHC is up and fully functional. Sometimes things go wrong right when you least expect it. For example, the CMS magnet has been off to a bit of a rough start—there was an issue with its cooling system that kept the magnetic field off, meaning that charged particles would not bend. The LHC has also been taking the occasional week off for “scrubbing”, in which lots of protons are circulated to burn off electron clouds in the beam pipes.

This is all leading up to the next technical stop, when the CERN engineers get to go fix things that have broken and improve things that don’t work perfectly. So it’s a slow process, sure. But all the caution and extra steps and procedures are what make the LHC a one-of-a-kind experiment that has big sights set for the rest of Run 2. More posts to follow when more physics results arrive!

 

References:

  1. LHC Commissioning site
  2. Cryogenics & Magnets at the LHC
  3. CERN collisions announcement
  4. CMS Public Luminosity results

Prospects for the International Linear Collider

Title: “Physics Case for the International Linear Collider”
Author: Linear Collider Collaboration (LCC) Physics Working Group
Published: arXiv:1506.05992 [hep-ex]

For several years, rumors have been flying around the particle physics community about an entirely new accelerator facility, one that could take over for the LHC during its more extensive upgrades and give physicists a different window into the complex world of the Standard Model and beyond. Despite a few setbacks and moments of indecision, the project seems to have more momentum now than ever, so let’s go ahead and talk about the International Linear Collider: what it is, why we want it, and whether or not it will ever actually get off the ground.

The ILC is a proposed linear accelerator that would collide electrons and positrons, in contrast to the circular Large Hadron Collider ring that collides protons. So why make these design differences? Hasn’t the LHC done a lot for us? In two words: precision measurements!

Of course, the LHC got us the Higgs, and that’s great. But the processes physicists most want to study now make up a much larger fraction of the total cross section in electron-positron collisions. In addition, the messiness associated with strong interactions is entirely gone with a lepton collider, leaving only a very well-defined initial state and easily calculable backgrounds. Let’s look specifically at the physical processes motivating this design.

Figure 1: Higgs to fermion couplings, from CMS experiment (left) and projected for ILC (right).

1. The Higgs. Everything always comes back to the Higgs, doesn’t it? We know that it’s out there, but beyond that, there are still many questions left unanswered. Physicists still want to determine whether the Higgs is composite, or whether it perhaps fits into a supersymmetric model of some kind. Additionally, we’re still uncertain about the couplings of the Higgs, both to the massive fermions and to itself. Figure 1 shows the current best estimates of the Higgs couplings, which we expect to be proportional to the fermion masses (see the short sketch after this list), in comparison with how the precision of these measurements should improve with the ILC.

2. The Top Quark. Another particle that we’ve already discovered, but whose characteristics and behaviors we still want to know more about. We know that the Higgs field takes on a symmetry-breaking value throughout all of space, which is what splits the electromagnetic and weak forces. As the heaviest fermion, the top couples most strongly to the Higgs, making it a key player in the Standard Model game.

3. New Physics. And of course there’s always the discovery potential. Since electron and positron beams can be polarized, backgrounds could be measured with a whole new level of precision, providing a better picture of possible decay chains that include dark matter or other beyond-the-SM particles.
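Here is the short sketch promised in point 1, a back-of-the-envelope illustration (my own, not from the ILC paper) of the ‘coupling proportional to mass’ expectation: at tree level the SM predicts a Yukawa coupling y_f = sqrt(2) m_f / v with v ≈ 246 GeV, so every fermion should land on a single straight line when coupling is plotted against mass. The masses below are approximate and for illustration only.

import math

v = 246.0  # Higgs vacuum expectation value in GeV
masses_gev = {"tau": 1.78, "bottom": 4.18, "top": 173.0}  # approximate fermion masses

for name, m in masses_gev.items():
    y = math.sqrt(2) * m / v          # tree-level SM Yukawa coupling
    print(f"{name:>6}: m = {m:7.2f} GeV  ->  y ~ {y:.3f}")
# tau ~ 0.01, bottom ~ 0.02, and close to 1 for the top: the ILC aims to test
# whether the measured couplings really fall on this line, at the percent level.

Measuring deviations from this straight line is exactly the kind of precision test where a clean e+e- environment shines.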

Figure 2: Blueprint of the proposed ILC accelerator (image: ILC home page / Form One).

Let’s move on to the actual design prospects for the ILC. Figure 2 shows the most recent blueprint of what such an accelerator would look like. The ILC would have 2 separate detectors and would collide electrons and positrons at a center-of-mass energy of 500 GeV, with an option to upgrade to 1 TeV at a later point. The entire tunnel would be 31 km long, with two damping rings shown at the center. At extremely high energies a linear machine is needed, because the energy an electron radiates away in a ring grows steeply with its relativistic gamma factor. For example, the Large Electron-Positron Collider synchrotron at CERN accelerated electrons to 50 GeV, giving them a relativistic gamma factor of about 98,000. Compare that to a proton of 50 GeV in the same ring, which has a gamma of about 54. That huge gamma means an electron requires an enormous amount of power to offset its synchrotron radiation losses, making a linear collider the more reasonable and cost-effective choice.
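To put some numbers on that comparison (a back-of-the-envelope sketch of my own, not from the ILC documents): gamma = E/m, and for a ring of fixed radius the energy radiated per turn grows like gamma^4, so at the same beam energy an electron loses vastly more energy to synchrotron radiation than a proton does.

m_e = 0.000511   # electron mass in GeV
m_p = 0.938      # proton mass in GeV
E   = 50.0       # beam energy in GeV (the LEP-era example above)

gamma_e = E / m_e    # ~ 98,000
gamma_p = E / m_p    # ~ 53

print(f"gamma(e) ~ {gamma_e:,.0f}, gamma(p) ~ {gamma_p:,.0f}")
print(f"synchrotron loss per turn, e vs p: ~{(gamma_e / gamma_p) ** 4:.1e}")
# ~1e13: the electron radiates roughly ten trillion times more energy per turn
# in the same ring, which is why a high-energy e+e- machine goes linear.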

 

Figure 3: Possible sites for the ILC in Japan.

In any large (read: expensive) experiment such as this, a lot of politics come into play. The current highest bidder for the accelerator seems to be Japan, with possible construction sites in its mountain ranges (see Figure 3). The Japanese government is quite eager to contribute a lot of funding to the project, something that other contenders have been reluctant to do (but such funding promises can very easily go awry, as the poor SSC shows us). The Reference Design Reports estimate the cost at $6.7 billion, though U.S. Department of Energy officials have placed it closer to $20 billion. But the benefits of such a collaboration are immense. The infrastructure of such an accelerator could lead to the creation of a “new CERN”, one that could have as far-reaching an influence in the future as CERN has enjoyed in the past few decades. Bringing together about 1000 scientists from more than 20 countries, the ILC truly has the potential to do great things for future international scientific collaboration, making it one of the most exciting prospects on the horizon of particle physics.

 

Further Reading:

  1. The International Linear Collider site: all things ILC
  2. ILC Reference Design Reports (RDR), for the very ambitious reader

A New Solution to the Hierarchy Problem?

Hello particle Chompers,

Today I want to discuss a slightly more advanced topic which I will not be able to explain in much detail, but which goes by the name of the gauge hierarchy problem, or just ‘the Hierarchy Problem’. My main motivation is simply to make you curious enough that you will feel inspired to investigate it further for yourself, since it is one of the outstanding problems in particle physics and one of the main motivations for the construction of the LHC. A second motivation is to bring to your attention a recent and exciting paper which proposes a potentially new solution to the hierarchy problem.

The hierarchy problem can roughly be stated as the problem of why the vacuum expectation value (VEV) of the Higgs boson, which determines the masses of the electroweak W and Z bosons, is so small compared to the highest energy scales thought to exist in the Universe. More specifically, the masses of the W and Z bosons (which define the weak scale) are roughly \sim 10^{2} GeV (see Figure 1) in particle physics units (remember in these units mass = energy!).

Figure 1: The W boson discovers, to its astonishment, that it has a mass of only about 100 GeV instead of the expected 10^{19} GeV.

On the other hand, the highest energy scale thought to exist in the Universe is the Planck scale at \sim 10^{19} GeV, which is associated with the physics of gravity. Quantum field theory tells us that the Higgs VEV should get contributions from all energy scales (see Figure 2), so the question is: why is the Higgs VEV, and thus the W and Z boson masses, a factor of roughly \sim 10^{17} smaller than it should be?

Figure 2: The Higgs vacuum expectation value receives contributions from all energy scales.

In the Standard Model (SM) there is no solution to this problem. Instead one must rely on a spectacularly miraculous numerical cancellation among the parameters of the SM Lagrangian. Miraculous numerical ‘coincidences’ like this make us physicists uncomfortable, to the point that we give them the special name of ‘fine tuning’. The hierarchy problem is thus also known as the fine-tuning problem.
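To see just how severe this cancellation has to be, here is a schematic version of the standard back-of-the-envelope argument (my own illustration, written in terms of the Higgs mass, which is tied to the VEV):

m_h^2(\text{observed}) = m_{h,\,\text{bare}}^2 + c\,\Lambda^2, \qquad \Lambda \sim M_{\text{Planck}} \sim 10^{19}\ \text{GeV},

so ending up with m_h \sim 10^{2} GeV requires the two terms on the right to cancel against each other to roughly one part in (10^{19}/10^{2})^2 \sim 10^{34}.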

A search for a solution to this problem has been at the forefront of particle physics for close to 40 years. It is the aversion to fine tuning which leads most physicists to believe there must be new physics beyond the SM whose dynamics are responsible for keeping the Higgs VEV small. Proposals include supersymmetry, composite Higgs models, and extra dimensions, as well as invoking the anthropic principle in the context of a multiverse. In many cases, these solutions require a variety of new particles at energies close to the weak scale (\sim 100-1000 GeV) and thus should be observable at the LHC. However, the lack of evidence at the LHC for any physics beyond the SM is already putting tension on many of these solutions. A solution which does not require new particles at the weak scale would thus be very attractive.

Recently a novel mechanism, which goes by the name of cosmological relaxation of the electroweak scale, has been proposed which potentially offers such a solution. The details (which physicists are currently still digesting) are well beyond the scope of this blog. I will just mention that the mechanism incorporates two previously proposed ingredients, inflation^1 and the QCD axion^2, which were originally invented to solve other known problems. These are combined with the SM in a novel way such that the weak scale can arise naturally in our universe, without any fine tuning and without new particles at the weak scale (or multiple universes)! And as a bonus, the axion in this mechanism (referred to as the ‘relaxion’) makes a good dark matter candidate!

Whether or not this mechanism turns out to be a solution to the hierarchy problem will of course require experimental tests and further theoretical scrutiny, but it’s a fascinating idea which combines aspects of quantum field theory and general relativity, so I hope it will serve as motivation for you to begin learning more about these subjects!

Footnotes:

1. Inflation is a theorized period of exponential accelerated expansion of our Universe in the moments just after the big bang. It was proposed as a solution to the problems of why our Universe is so flat and (mostly) homogeneous, while also explaining the structure we see throughout the Universe and in the cosmic microwave background.

2. Axions are particles proposed to explain why the amount of CP violation in the QCD sector of the SM is so small, which is known as the ‘strong CP problem’.