What comes after the LHC? – The P5 Report & Future Colliders

This is the second part of our coverage of the P5 report and its implications for particle physics. To read the first part, click here

One of the thorniest questions in particle physics is ‘What comes after the LHC?’, and it was one of the areas where people were most uncertain what the P5 report would say. Globally, the field is trying to decide what to do once the LHC winds down in ~2040. While the LHC is scheduled to get an upgrade in the latter half of this decade and run until the end of the 2030s, the field must start planning now for what comes next. For better or worse, big smash-y things seem to capture a lot of public interest, so the debate over what large collider project to build has gotten heated. Even Elon Musk is tweeting (X-ing?) memes about it.

Famously, the US’s last large accelerator project, the Superconducting Super Collider (SSC), was cancelled in the ’90s partway through its construction. The LHC’s construction itself often faced perilous funding situations, and required CERN to make the unprecedented move of taking a loan to pay for its construction. So no one takes for granted that future large collider projects will ultimately come to fruition.

Desert or Discovery?

When debating what comes next, dashed hopes of LHC discoveries are top of mind. The LHC experiments were primarily designed to search for the Higgs boson, which they successfully found in 2012. However, many had predicted (perhaps over-confidently) that it would also discover a slew of other particles, like those from supersymmetry or those heralding extra dimensions of spacetime. These predictions stemmed from a favored principle of nature called ‘naturalness’, which argued that additional particles close in energy to the Higgs were needed to keep its mass at a reasonable value. While there is still much LHC data to analyze, many searches for these particles have already been performed and no signs of them have been seen.

These null results led to some soul-searching within particle physics. The motivations behind the ‘naturalness’ principle that said the Higgs had to be accompanied by other particles have been questioned, both within the field and in New York Times op-eds.

No one questions that deep mysteries like the origins of dark matter, the matter anti-matter asymmetry, and neutrino masses remain. But with the Higgs filling in the last piece of the Standard Model, some worry that answers to these questions in the form of new particles may only exist at energy scales entirely out of the reach of human technology. If true, future colliders would have no hope of discovering them directly.

A diagram of the particles of the Standard Model laid out as a function of energy. The LHC and other experiments have probed up to around 10^3 GeV, and found all the particles of the Standard Model. Some worry new particles may only exist at the extremely high energies of the Planck or GUT energy scales. This would imply a large ‘desert’ in energy: many orders of magnitude in which no new particles exist. Figure adapted from here

The situation being faced now is qualitatively different from the pre-LHC era. Prior to the LHC turning on, ‘no-lose theorems’ based on the mathematical consistency of the Standard Model guaranteed that the LHC had to discover the Higgs or some other new particle like it. This made the justification for its construction as bullet-proof as one can get in science: a guaranteed Nobel-prize discovery. But now, with the last piece of the Standard Model filled in, there are no more free wins; guarantees of the Standard Model’s breakdown don’t occur until energy scales we would need solar-system-sized colliders to probe. Now, like all other fields of science, we cannot predict what discoveries we may find with future collider experiments.

Still, optimists hope, and have their reasons to believe, that nature may not be so unkind as to hide its secrets behind walls so far outside our ability to climb. There are compelling models of dark matter that live just outside the energy reach of the LHC and predict rates too low for direct detection experiments, but which would be definitively discovered or ruled out by a higher energy collider. The nature of the ‘phase transition’ that occurred in the very early universe, which may explain the prevalence of matter over anti-matter, could also be pinned down. There are also a slew of experimental ‘hints’, all of which come with significant question marks, but which could point to new particles within the reach of a future collider.

Many also just advocate for building a future machine to study nature itself, with less emphasis on discovering new particles. They argue that even if we only further confirm the Standard Model, it is a worthwhile endeavor. Though we can calculate Standard Model predictions for high energies, we will not ‘know’ whether nature actually works like this until we test it in those regimes. They argue this is a fundamental part of the scientific process, and should not be abandoned so easily. Chief among the untested predictions are those surrounding the Higgs boson. The Higgs is a central, somewhat mysterious piece of the Standard Model, but it is difficult to measure precisely in the noisy environment of the LHC. Future colliders would allow us to study it with much better precision, and verify whether it behaves as the Standard Model predicts or not.

Projects

These theoretical debates directly inform what colliders are being proposed and what their scientific case is.

Many are advocating for a “Higgs factory”, a collider based on clean electron-positron collisions that could be used to study the Higgs in much more detail than the messy proton collisions of the LHC. Such a machine would be sensitive to subtle deviations of Higgs behavior from Standard Model predictions. Such deviations could come from the quantum effects of heavy, yet-undiscovered particles interacting with the Higgs. However, to determine what particles are causing those deviations, it’s likely one would need a new ‘discovery’ machine with high enough energy to produce them.
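To get a rough sense of the precision required, coupling deviations induced by heavy new particles are often expected to scale like (v/Λ)², where v ≈ 246 GeV is the Higgs field's vacuum value and Λ is the mass scale of the new physics. A minimal sketch of this scaling (the order-one coefficient is model-dependent and simply set to one here):

```python
# Rough scaling of Higgs coupling deviations induced by heavy new physics:
# deviation ~ (v / Lambda)^2, with a model-dependent order-one coefficient set to 1.
v = 246.0  # Higgs vacuum expectation value in GeV

for lam_tev in [1, 2, 5, 10]:
    lam_gev = lam_tev * 1000.0
    deviation = (v / lam_gev) ** 2
    print(f"Lambda = {lam_tev:>2} TeV -> coupling deviation ~ {deviation:.2%}")
# ~6% at 1 TeV but only ~0.2% at 5 TeV: probing multi-TeV scales indirectly
# requires the sub-percent precision a Higgs factory is designed to deliver.
```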

Among the Higgs factory options is the International Linear Collider (ILC), a proposed 20km linear machine that would be hosted in Japan. ILC designs have been ‘ready to go’ for the last 10 years, but the Japanese government has repeatedly waffled on whether to approve the project. Sitting in limbo for this long has led many to be pessimistic about the project’s future, but certainly many in the global community would be ecstatic to work on such a machine if it were approved.

Designs for the ILC have been ready for nearly a decade, but it’s unclear if it will receive the green light from the Japanese government. Image source

Alternatively, some in the US have proposed building a linear collider based on ‘cool copper’ cavities (C3) rather than the standard superconducting ones. These copper cavities can achieve more acceleration per meter, meaning a linear Higgs factory could be constructed with a reduced 8km footprint. A more compact design can significantly cut down on the infrastructure costs that governments usually don’t like to spend their science funding on. Advocates have proposed it as a cost-effective Higgs factory option whose small footprint means it could potentially be hosted in the US.

The Future-Circular-Collider (FCC), CERN’s successor to the LHC, would kill both birds with one extremely long stone. Similar to the progression from LEP to the LHC, this new proposed 90km collider would run as a Higgs factory using electron-positron collisions starting in 2045 before eventually switching to a ~90 TeV proton-proton collider starting in ~2075.

An image of the proposed FCC overlayed on a map of the French/Swiss border
Designs for the massive 90km FCC ring surrounding Geneva

Such a machine would undoubtedly answer many of the important questions in particle physics. However, many have concerns about the huge infrastructure costs needed to dig such a massive tunnel and the extremely long timescale before direct discoveries could be made; most of the current field would not be around 50 years from now to see what such a machine finds. The FCC is also facing competition, as Chinese physicists have proposed a very similar design (CEPC) which could potentially start construction much earlier.

During the Snowmass process, many in the US started pushing for an ambitious alternative. They advocated for a new type of machine that collides muons, the heavier cousins of electrons. A muon collider could reach the high energies of a discovery machine while also maintaining a clean environment in which Higgs measurements can be performed. However, muons are unstable, and collecting enough of them into a beam before they decay is a difficult task which has not been done before. A group of dedicated enthusiasts designed t-shirts and Twitter memes to capture the excitement of the community. While everyone agrees such a machine would be amazing, the key technologies necessary for such a collider are less developed than those of electron-positron and proton colliders. However, if the necessary technological hurdles could be overcome, such a machine could turn on decades before the planned proton-proton run of the FCC. It also presents a much more compact design, at only 10km in circumference, roughly three times smaller than the LHC. Advocates are particularly excited that this would allow it to be built within the site of Fermilab, the US’s flagship particle physics lab, which would represent a return to collider prominence for the US.

A proposed design for a muon collider. It relies on ambitious new technologies, but could potentially deliver similar physics to the FCC decades sooner and with a ten times smaller footprint. Source

Deliberation & Decision

This plethora of collider options, each coming with a very different vision of the field in 25 years’ time, led to many contentious debates in the community. The extremely long timescales of these projects meant discussions of human lifespans, mortality and legacy were much more prominent than in usual scientific discourse.

Ultimately the P5 recommendation walked a fine line through these issues. Their most definitive decision was to recommend against a Higgs factory being hosted in the US, a significant blow to C3 advocates. The panel did recommend US support for any international Higgs factory which comes to fruition, at a level ‘commensurate’ with US support for the LHC. What exactly ‘commensurate’ means in this context will, I’m sure, be debated in the coming years.

However, the big story to many was the panel’s endorsement of the muon collider vision. While recognizing the scientific hurdles that would need to be overcome, they called the possibility of a muon collider hosted in the US a scientific ‘muon shot’ that would reap huge gains. They therefore recommended funding for R&D towards the key technological hurdles that need to be addressed.

Because the situation is unclear on both the muon front and international Higgs factory plans, they recommended a follow up panel to convene later this decade when key aspects have clarified. While nothing was decided, many in the muon collider community took the report as a huge positive sign. While just a few years ago many dismissed talk of such a collider as fantastical, now a real path towards its construction has been laid down.

Hitoshi Murayama, chair of the P5 committee, cuts into a ‘Shoot for the Muon’ cake next to a smiling Lia Merminga, the director of Fermilab. Source

While the P5 report is only one step along the path to a future collider, it was an important one. Eyes will now turn towards reports from the different collider advocates. CERN’s FCC ‘feasibility study’, updates around the CEPC, and the International Muon Collider Collaboration’s detailed design report are all expected in the next few years. These reports will set up the showdown later this decade where concrete funding decisions will be made.

For those interested, the full report as well as executive summaries of different areas can be found on the P5 website. Members of the US particle physics community are also encouraged to sign the petition endorsing the recommendations here.

The LHC is on turning on again! What does that mean?

Deep underground, on the border between Switzerland and France, the Large Hadron Collider (LHC) is starting back up again after a 4 year hiatus. Today, July 5th, the LHC had its first full energy collisions since 2018. The LHC running at all is exciting enough on its own, but this new run of data taking will also feature several upgrades, both to the LHC itself and to the several different experiments that make use of its collisions. The physics world will be watching to see if the data from this new run confirms any of the interesting anomalies seen in previous datasets or reveals any other unexpected discoveries.

New and Improved

During the multi-year shutdown the LHC itself has been upgraded. Notably, the energy of the colliding beams has been increased from 13 TeV to 13.6 TeV. Besides breaking its own record for the highest energy collisions ever produced, this ~5% increase to the LHC’s energy will give a boost to searches looking for very rare high energy phenomena. The rate of collisions the LHC produces is also expected to be roughly 50% higher than the maximum achieved in previous runs. At the end of this three-year run it is expected that the experiments will have collected twice as much data as the previous two runs combined.

The experiments have also been busy upgrading their detectors to take full advantage of this new round of collisions.

The ALICE experiment had the most substantial upgrade. It features a new silicon inner tracker, an upgraded time projection chamber, a new forward muon detector, a new triggering system and an improved data processing system. These upgrades will help in its study of an exotic phase of matter called the quark-gluon plasma, a hot dense soup of nuclear material present in the early universe.

 

A diagram showing the various upgrades to the ALICE detector (source)

ATLAS and CMS, the two ‘general purpose’ experiments at the LHC, had a few upgrades as well. ATLAS replaced their ‘small wheel’ detector used to measure the momentum of muons. CMS replaced the innermost part of its inner tracker, and installed a new GEM detector to measure muons close to the beamline. Both experiments also upgraded their software and data collection systems (triggers) in order to be more sensitive to the signatures of potential exotic particles that may have been missed in previous runs.

The new ATLAS ‘small wheel’ being lowered into place. (source)

The LHCb experiment, which specializes in studying the properties of the bottom quark, also had major upgrades during the shutdown. LHCb installed a new Vertex Locator closer to the beam line and upgraded their tracking and particle identification system. It also fully revamped its trigger system to run entirely on GPUs. These upgrades should allow them to collect 5 times the amount of data over the next two runs as they did over the first two.

Run 3 will also feature a new smaller scale experiment, FASER, which will study neutrinos produced at the LHC and search for long-lived new particles.

What will we learn?

One of the main goals in particle physics now is to find direct experimental evidence of phenomena unexplained by the Standard Model. While very successful in many respects, the Standard Model leaves several mysteries unexplained, such as the nature of dark matter, the imbalance of matter over anti-matter, and the origin of neutrino masses. All of these are questions many hope that the LHC can help answer.

Much of the excitement for Run-3 of the LHC will be on whether the additional data can confirm some of the deviations from the Standard Model which have been seen in previous runs.

One very hot topic in particle physics right now is a series of ‘flavor anomalies‘ seen by the LHCb experiment in previous LHC runs. These anomalies are deviations from the Standard Model predictions of how often certain rare decays of b quarks should occur. With their dataset so far, LHCb has not yet had enough data to pass the high statistical threshold required in particle physics to claim a discovery. But if these anomalies are real, Run-3 should provide enough data to claim a discovery.

A summary of the various measurements making up the ‘flavor anomalies’. The blue lines and error bars indicate the measurements and their uncertainties. The yellow line and error bars indicate the Standard Model predictions and their uncertainties. Source
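To get a rough feel for why more data could settle the question, a purely statistical rule of thumb is that significance grows like the square root of the dataset size. A minimal sketch with illustrative numbers that are not LHCb's actual values:

```python
import math

# Toy projection: statistical significance scales roughly as sqrt(integrated luminosity).
# The starting significance and luminosities are placeholders, not LHCb's real numbers.
current_significance = 3.0   # hypothetical significance with the existing dataset
current_lumi_fb = 9.0        # hypothetical integrated luminosity collected so far
future_lumi_fb = 23.0        # hypothetical total after Run 3

projected = current_significance * math.sqrt(future_lumi_fb / current_lumi_fb)
print(f"Projected significance: {projected:.1f} sigma")
# ~4.8 sigma in this toy; in practice systematic uncertainties also enter the picture.
```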

There are also a decent number of ‘excesses’, potential signals of new particles being produced in LHC collisions, that have been seen by the ATLAS and CMS collaborations. The statistical significances of these excesses are all still quite low, and many such excesses have gone away with more data. But if one or more of these excesses were confirmed in the Run-3 dataset it would be a massive discovery.

While all of these anomalies are a gamble, this new dataset will certainly also be used to measure various known quantities with better precision, improving our understanding of nature no matter what. Our understanding of the Higgs boson, the top quark, rare decays of the bottom quark, rare Standard Model processes, the dynamics of the quark-gluon plasma and many other areas will no doubt improve from this additional data.

In addition to these ‘known’ anomalies and measurements, whenever an experiment starts up again there is also the possibility of something entirely unexpected showing up. Perhaps one of the upgrades performed will allow the detection of something entirely new, unseen in previous runs. Perhaps FASER will see signals of long-lived particles missed by the other experiments. Or perhaps the data from the main experiments will be analyzed in a new way, revealing evidence of a new particle which had been missed up until now.

No matter what happens, the world of particle physics is a more exciting place when the LHC is running. So let’s all cheer to that!

Read More:

CERN Run-3 Press Event / Livestream Recording “Join us for the first collisions for physics at 13.6 TeV!”

Symmetry Magazine “What’s new for LHC Run 3?”

CERN Courier “New data strengthens RK flavour anomaly”

The Mini and Micro Boone Mystery, Part 1: Experiment

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: The MicroBooNE Collaboration

Reference: https://arxiv.org/abs/2110.14054

This is the first post in a series on the latest MicroBooNE results, covering the experimental side. Click here to read about the theory side. 

The new results from the MicroBoone experiment generated a lot of excitement last week, being covered by several major news outlets. But unlike most physics news stories that make the press, it was a null result; they did not see any evidence for new particles or interactions. So why is it so interesting? Particle physics experiments produce null results every week, but what made this one newsworthy is that MicroBoone was trying to check the results of two previous experiments, LSND and MiniBoone, that did see something anomalous with very high statistical significance. If the LSND/MiniBoone result had been confirmed, it would have been a huge breakthrough in particle physics, but now that it wasn’t, many physicists are scratching their heads trying to make sense of these seemingly conflicting results. However, the MicroBoone experiment is not exactly the same as MiniBoone/LSND, and understanding the differences between the two sets of experiments may play an important role in unraveling this mystery.

Accelerator Neutrino Basics

All of these experiments are ‘accelerator neutrino experiments’, so let’s first review what that means. Neutrinos are ‘ghostly’ particles that are difficult to study (check out this post for more background on neutrinos). Because they only couple through the weak force, neutrinos don’t like to interact with anything very much. So in order to detect them you need both a big detector with a lot of active material and a source with a lot of neutrinos. These experiments are designed to detect neutrinos produced in a human-made beam. To make the beam, a high energy beam of protons is directed at a target. These collisions produce a lot of particles, including unstable bound states of quarks like pions and kaons. These unstable particles have charge, so we can use magnets to focus them into a well-behaved beam. When the pions and kaons decay they usually produce a muon and a muon neutrino. The beam of pions and kaons is pointed at an underground detector located a few hundred meters (or kilometers!) away, and given time to decay. After they decay there will be a nice beam of muons and muon neutrinos. The muons can be stopped by some kind of shielding (like the earth’s crust), but the neutrinos will sail right through to the detector.

A diagram showing the basics of how a neutrino beam is made. Source
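To see why the decay region needs to be so long, one can estimate how far a pion travels before decaying using relativistic time dilation. A minimal sketch with an illustrative beam energy:

```python
import math

# Rough decay length of a charged pion in a neutrino beamline: L = gamma * beta * c * tau.
# The beam energy chosen here is just for illustration.
m_pi = 0.1396      # charged pion mass in GeV
tau_pi = 2.6e-8    # charged pion lifetime in seconds
c = 3.0e8          # speed of light in m/s

E_pi = 5.0                          # pion energy in GeV (illustrative)
gamma = E_pi / m_pi                 # Lorentz factor
beta = math.sqrt(1.0 - 1.0 / gamma**2)
decay_length = gamma * beta * c * tau_pi
print(f"Mean decay length of a {E_pi} GeV pion: {decay_length:.0f} m")
# ~280 m for a 5 GeV pion, which is why decay pipes are hundreds of meters long.
```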

Nearly all of the neutrinos from the beam will still pass right through your detector, but a few of them will interact, allowing you to learn about their properties.

All of these experiments are considered ‘short-baseline’ because the distance between the neutrino source and the detector is only a few hundred meters (unlike the hundreds of kilometers in other such experiments). These experiments were designed to look for oscillation of the beam’s muon neutrinos into electron neutrinos which then interact with their detector (check out this post for some background on neutrino oscillations). Given the types of neutrinos we know about and their properties, this should be too short of a distance for neutrinos to oscillate, so any observed oscillation would be an indication something new (beyond the Standard Model) was going on.
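For reference, the standard two-flavor oscillation probability is P = sin²(2θ)·sin²(1.27 Δm²[eV²]·L[km]/E[GeV]). A minimal sketch showing why a few hundred meters is ‘too short’ for the known mass splittings, while a hypothetical ~1 eV² sterile-neutrino splitting (the kind invoked to explain these anomalies) oscillates much faster; the parameter values below are purely illustrative:

```python
import math

def osc_prob(delta_m2_ev2, L_km, E_GeV, sin2_2theta=1.0):
    """Two-flavor appearance probability P(nu_mu -> nu_e)."""
    return sin2_2theta * math.sin(1.27 * delta_m2_ev2 * L_km / E_GeV) ** 2

L = 0.5    # baseline in km, typical of a short-baseline experiment
E = 0.8    # neutrino energy in GeV

# Known atmospheric mass splitting: the oscillation has barely developed after 500 m.
print(osc_prob(2.5e-3, L, E))                   # ~4e-6
# A hypothetical ~1 eV^2 sterile-neutrino splitting oscillates much faster.
print(osc_prob(1.0, L, E, sin2_2theta=0.01))    # ~0.005
```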

The LSND + MiniBoone Anomaly

So the LSND and MiniBoone ‘anomaly’ was an excess of events above backgrounds that looked like electron neutrinos interacting with their detector. Both detectors were based on similar technology and were a similar distance from their neutrino source. Their detectors were essentially big tanks of mineral oil lined with light-detecting sensors.

An engineer styling inside the LSND detector. Source

At these energies the most common way neutrinos interact is to scatter against a neutron to produce a proton and a charged lepton (called a ‘charged current’ interaction). Electron neutrinos will produce outgoing electrons and muon neutrinos will produce outgoing muons.

A diagram of a ‘charged current’ interaction. A muon neutrino comes in and scatters against a neutron, producing a muon and a proton. Source

When traveling through the mineral oil these charged leptons will produce a ring of Cherenkov light which is detected by the sensors on the edge of the detector. Muons and electrons can be differentiated based on the characteristics of the Cherenkov light they emit. Electrons will undergo multiple scatterings off of the detector material while muons will not. This makes the Cherenkov rings of electrons ‘fuzzier’ than those of muons. High energy photons can produce electron-positron pairs, which look very similar to a regular electron signal and are thus a source of background.

A comparison of muon and electron Cherenkov rings from the Super-Kamiokande experiment. Electrons produce fuzzier rings than muons. Source
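For a rough sense of the geometry, the Cherenkov angle is set by cos θ_c = 1/(nβ). A quick sketch, using an approximate refractive index for mineral oil:

```python
import math

# Cherenkov angle for a relativistic particle in a medium: cos(theta_c) = 1 / (n * beta).
# The refractive index of mineral oil is roughly 1.47 (approximate value for illustration).
n = 1.47

def cherenkov_angle_deg(beta):
    return math.degrees(math.acos(1.0 / (n * beta)))

# A highly relativistic electron or muon (beta ~ 1) radiates at the maximum angle.
print(f"beta ~ 1   : {cherenkov_angle_deg(0.9999):.1f} degrees")
# Slower particles radiate at smaller angles, and below beta = 1/n not at all.
print(f"beta = 0.75: {cherenkov_angle_deg(0.75):.1f} degrees")
```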

Even with a good beam and a big detector, the feebleness of neutrino interactions means that it takes a while to get a decent number of potential events. The MiniBoone experiment ran for 17 years looking for electron neutrinos scattering in their detector. In MiniBoone’s most recent analysis, they saw around 600 more events than would be expected if there were no anomalous electron neutrinos reaching their detector. The statistical significance of this excess, 4.8 sigma, was very high. Combining with LSND, which saw a similar excess, the significance was above 6 sigma. This means it’s very unlikely this is a statistical fluctuation. So either there is some new physics going on or one of their backgrounds has been seriously under-estimated. This excess of events is what has been dubbed the ‘MiniBoone anomaly’.

The number of events seen in the MiniBoone experiment as a function of the energy seen in the interaction. The predicted number of events from various known background sources are shown in the colored histograms. The best fit to the data including the signal of anomalous oscillations is shown by the dashed line. One can see that at low energies the black data points lie significantly above these backgrounds and strongly favor the oscillation hypothesis.
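As a cartoon of how an excess becomes a significance (ignoring systematic uncertainties, and using made-up event counts rather than MiniBoone's actual yields):

```python
import math

# Toy significance for a counting experiment: Z ~ excess / sqrt(expected background).
# The event counts below are invented and are not MiniBoone's actual yields.
observed = 1680.0
expected_background = 1500.0

excess = observed - expected_background
significance = excess / math.sqrt(expected_background)
print(f"Excess of {excess:.0f} events -> ~{significance:.1f} sigma (statistics only)")
# Real analyses also fold in systematic uncertainties on the background prediction,
# which generally reduces the significance.
```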

The MicroBoone Result

The MicroBoone experiment was commissioned to verify the MiniBoone anomaly as well as to test out a new type of neutrino detector technology. MicroBoone is the first major neutrino experiment to use a ‘Liquid Argon Time Projection Chamber’ detector. This new detector technology allows more detailed reconstruction of what is happening when a neutrino scatters in the detector. The active volume of the detector is liquid Argon, which allows both light and charge to propagate through it. When a neutrino scatters in the liquid Argon, scintillation light is produced that is collected in sensors. As charged particles created in the collision pass through the liquid Argon they ionize atoms they pass by. An electric field applied to the detector causes this produced charge to drift towards a mesh of wires where it can be collected. By measuring the difference in arrival time between the light and the charge, as well as the amount of charge collected at different positions and times, the precise location and trajectory of the particles produced in the collision can be determined.

A beautiful reconstructed event in the MicroBoone detector. The colored lines show the tracks of different particles produced in the collision, all coming from a single point where the neutrino interaction took place. One can also see that one of the tracks produced a shower of particles away from the interaction vertex.
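As a toy illustration of the time-projection idea, the drift coordinate of each ionization deposit comes from its arrival time at the wires; the drift velocity below is a typical order of magnitude rather than MicroBoone's exact operating value:

```python
# Toy reconstruction of the drift coordinate in a liquid-argon TPC.
# drift position = drift velocity * (charge arrival time - interaction time from the light flash)
drift_velocity_mm_per_us = 1.6   # typical order of magnitude at ~500 V/cm (illustrative)

t0_us = 3.0                             # time of the scintillation flash, marking the interaction
hit_times_us = [103.0, 153.0, 253.0]    # arrival times of charge on the wire planes

for t in hit_times_us:
    x_mm = drift_velocity_mm_per_us * (t - t0_us)
    print(f"hit at {t:6.1f} us -> drift distance {x_mm:7.1f} mm")
# Combined with which wires collected the charge, this gives full 3D positions.
```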

This means that, unlike MiniBoone and LSND, MicroBoone can see not just the lepton but also the hadronic particles (protons, pions, etc.) produced when a neutrino scatters in its detector, so the same type of neutrino interaction actually looks very different in their detector. So when they went to test the MiniBoone anomaly they adopted multiple different strategies of what exactly to look for. In the first case they looked for the type of interaction that an electron neutrino would have most likely produced: an outgoing electron and proton whose kinematics match those of a charged current interaction. Their second set of analyses, designed to mimic the MiniBoone selection, is slightly more general. They require one electron and any number of protons, but no pions. Their third analysis is the most general and requires an electron along with anything else.

These different analyses have different levels of sensitivity to the MiniBoone anomaly, but all of them are found to be consistent with a background-only hypothesis: there is no sign of any excess events. Three out of four of them even see slightly fewer events than the expected background.

A summary of the different MicroBoone analyses. The Y-axis shows the ratio of the observed number of events to the number expected if there was only background present. The red lines show the excess predicted to be seen if the MiniBoone anomaly produced a signal in each channel. One can see that the black data points are much more consistent with the grey bands showing the background-only prediction than with the amount predicted if the MiniBoone anomaly were present.

Overall the MicroBoone data rejects the hypothesis that the MiniBoone anomaly is due to electron neutrino charged current interactions at quite high significance (>3 sigma). So if it’s not electron neutrinos causing the MiniBoone anomaly, what is it?

What’s Going On?

Given that MicroBoone did not see any signal, many would guess that MiniBoone’s claim of an excess must be flawed and that they have underestimated one of their backgrounds. Unfortunately it is not very clear what that could be. If you look at the low-energy region where MiniBoone has an excess, there are three major background sources: decays of the Delta baryon that produce a photon (shown in tan), neutral pions decaying to pairs of photons (shown in red), and backgrounds from true electron neutrinos (shown in various shades of green). However, all of these sources of background seem quite unlikely to be the source of the MiniBoone anomaly.

Before releasing these results, MicroBoone performed a dedicated search for Delta baryons decaying into photons, and saw a rate in agreement with the theoretical prediction MiniBoone used, and well below the amount needed to explain the MiniBoone excess.

Backgrounds from true electron neutrinos produced in the beam, as well as from the decays of muons, should not concentrate only at low energies like the excess does, and their rate has also been measured within MiniBoone data by looking at other signatures.

The decay of a neutral pion produces two photons, and if one of them escapes detection, the remaining single photon can mimic the signal. However, one would expect photons to escape more often near the detector’s edges, while the excess events are distributed uniformly in the detector volume.

So now the mystery of what could be causing this excess is even greater. If it is a background, it seems most likely it is from an unknown source not previously considered. As will be discussed in our part 2 post, it’s possible that the MiniBoone anomaly was caused by a more exotic form of new physics: possibly the excess events in MiniBoone were not really coming from the scattering of electron neutrinos but from something else that produced a similar signature in their detector. Some of these explanations include particles that decay into pairs of electrons or photons. These sorts of explanations should be testable with MicroBoone data but will require dedicated analyses for their different signatures.

So on the experimental side, we are now left to scratch our heads and wait for new results from MicroBoone that may help get to the bottom of this.

Click here for part 2 of our MicroBoone coverage that goes over the theory side of the story!

Read More

“Is the Great Neutrino Puzzle Pointing to Multiple Missing Particles?” – Quanta Magazine article on the new MicroBoone result

“Can MiniBoone be Right?” – Resonaances blog post summarizing the MiniBoone anomaly prior to the MicroBoone results

A review of different types of neutrino detectors – from the T2K experiment

Making Smarter Snap Judgments at the LHC

Collisions at the Large Hadron Collider happen fast. 40 million times a second, bunches of 10^11 protons are smashed together. The rate of these collisions is so fast that the computing infrastructure of the experiments can’t keep up with all of them. We are not able to read out and store the result of every collision that happens, so we have to ‘throw out’ nearly all of them. Luckily most of these collisions are not very interesting anyways. Most of them are low energy interactions of quarks and gluons via the strong force that have already been studied at previous colliders. In fact, the interesting processes, like ones that create a Higgs boson, can happen billions of times less often than the uninteresting ones.

The LHC experiments are thus faced with a very interesting challenge: how do you decide extremely quickly whether an event is interesting and worth keeping or not? This is what the ‘trigger’ systems, the Marie Kondo of LHC experiments, are designed to do. CMS for example has a two-tiered trigger system. The first level has 4 microseconds to make a decision and must reduce the event rate from 40 million events per second to 100,000. This speed requirement means the decision has to be made at the hardware level, requiring the use of specialized electronics to quickly synthesize the raw information from the detector into a rough idea of what happened in the event. Selected events are then passed to the High Level Trigger (HLT), which has 150 milliseconds to run versions of the CMS reconstruction algorithms to further reduce the event rate to a thousand per second.

While this system works very well for most uses of the data, like measuring the decay of Higgs bosons, sometimes it can be a significant obstacle. If you want to look through the data for evidence of a new particle that is relatively light, it can be difficult to prevent the trigger from throwing out possible signal events. This is because one of the most basic criteria the trigger uses to select ‘interesting’ events is that they leave a significant amount of energy in the detector. But the decay products of a new particle that is relatively light won’t have a substantial amount of energy and thus may look ‘uninteresting’ to the trigger.
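As a cartoon of the problem, here is a minimal sketch of a single-threshold trigger; the threshold and events are invented, but they show how a light particle's soft decay products get discarded along with the routine collisions:

```python
# Cartoon of a simple energy-threshold trigger decision.
# Events are summarized by the total transverse energy they deposit (made-up numbers).
trigger_threshold_gev = 200.0

events = [
    {"name": "high-energy dijet",  "sum_et_gev": 850.0},
    {"name": "Higgs candidate",    "sum_et_gev": 320.0},
    {"name": "light new particle", "sum_et_gev": 60.0},   # soft decay products
    {"name": "typical QCD event",  "sum_et_gev": 45.0},
]

for event in events:
    keep = event["sum_et_gev"] > trigger_threshold_gev
    print(f"{event['name']:18s}: {'KEEP' if keep else 'discard'}")
# The light new particle is discarded along with the uninteresting QCD events,
# which is exactly the blind spot the techniques below try to cover.
```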

In order to get the most out of their collisions, experimenters are thinking hard about these problems and devising new ways to look for signals the triggers might be missing. One idea is to save additional events from the HLT in a substantially reduced size. Rather than saving the raw information from the event, which can be fully processed at a later time, only the output of the quick reconstruction done by the trigger is saved. At the cost of some precision, this can reduce the size of each event by roughly two orders of magnitude, allowing events with significantly lower energy to be stored. CMS and ATLAS have used this technique to look for new particles decaying to two jets, and LHCb has used it to look for dark photons. The use of these fast reconstruction techniques allows them to search for, and rule out the existence of, particles with much lower masses than otherwise possible. As experiments explore new computing infrastructures (like GPUs) to speed up their high level triggers, they may try to do even more sophisticated analyses using these techniques.

But experimenters aren’t just satisfied with getting more out of their high level triggers, they want to revamp the low-level ones as well. In order to get these hardware-level triggers to make smarter decisions, experimenters are trying to get them to run machine learning models. Machine learning has become a very popular tool to look for rare signals in LHC data. One of the advantages of machine learning models is that once they have been trained, they can make complex inferences in a very short amount of time. Perfect for a trigger! Now a group of experimentalists have developed a library that can translate the most popular types of machine learning models into a format that can be run on the Field Programmable Gate Arrays (FPGAs) used in the lowest level triggers. This would allow experiments to quickly identify events from rare signals that have complex signatures that the current low-level triggers don’t have time to look for.
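To make the "once trained, inference is fast" point concrete, the forward pass of a small neural network is just a short, fixed sequence of multiply-accumulates, which is what such tools unroll into FPGA logic. A minimal sketch with random weights standing in for a trained model:

```python
import numpy as np

# Forward pass of a tiny fully connected network: a fixed sequence of
# multiply-accumulates that can be mapped onto FPGA logic with deterministic latency.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=8)  # stand-ins for trained weights
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

def relu(x):
    return np.maximum(x, 0.0)

def trigger_score(features):
    """Score an event from 16 coarse detector inputs; higher = more interesting."""
    hidden = relu(features @ W1 + b1)
    out = hidden @ W2 + b2                  # shape (1,)
    return 1.0 / (1.0 + np.exp(-out[0]))    # sigmoid output as a plain number

event_features = rng.normal(size=16)        # e.g. coarse energy sums from the detector
print(f"trigger score: {trigger_score(event_features):.3f}")
# In a real system the trained weights are quantized to fixed-point precision and the
# whole computation runs in well under a microsecond on the FPGA.
```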

The LHC experiments are working hard to get the most out of their collisions. There could be particles being produced in LHC collisions already that we haven’t been able to see because of our current triggers; these new techniques are trying to cover those blind spots. Look out for new ideas on how to quickly search for interesting signatures, especially as we get closer to the high luminosity upgrade of the LHC.

Read More:

CERN Courier article on programming FPGA’s

IRIS HEP Article on a recent workshop on Fast ML techniques

CERN Courier article on older CMS search for low mass dijet resonances

ATLAS Search using ‘trigger-level’ jets

LHCb Search for Dark Photons using fast reconstruction based on a high level trigger

Paper demonstrating the feasibility of running ML models for jet tagging on FPGA’s

Jets: From Energy Deposits to Physics Objects

Title: “Jet energy scale and resolution in the CMS experiment in pp collisions at 8 TeV”
Author: The CMS Collaboration
Reference: arXiv:1607.03663 [hep-ex]

As a collider physicist, I care a lot about jets. They are fascinating objects that cover the ATLAS and CMS detectors during LHC operation and make event displays look really cool (see Figure 1.) Unfortunately, as interesting as jets are, they’re also somewhat complicated and difficult to measure. A recent paper from the CMS Collaboration details exactly how we reconstruct, simulate, and calibrate these objects.

Figure 1: This event was collected in August 2015. The two high-pT jets have an invariant mass of 6.9 TeV and the leading and subleading jet have a pT of 1.3 and 1.2 TeV respectively. (Image credit: ATLAS public results)

For the uninitiated, a jet is the experimental signature of quarks or gluons that emerge from a high energy particle collision. Since these colored Standard Model particles cannot exist on their own due to confinement, they cluster or ‘hadronize’ as they move through a detector. The result is a spray of particles coming from the interaction point. This spray can contain mesons, charged and neutral hadrons, basically anything that is colorless as per the rules of QCD.

So what does this mess actually look like in a detector? ATLAS and CMS are designed to absorb most of a jet’s energy by the end of the calorimeters. If the jet has charged constituents, there will also be an associated signal in the tracker. It is then the job of the reconstruction algorithm to combine these various signals into a single object that makes sense. This paper discusses two different reconstructed jet types: calo jets and particle-flow (PF) jets. Calo jets are built only from energy deposits in the calorimeter; since the calorimeter alone provides relatively poor energy and position resolution, especially at low energies, this method can get bad quickly. PF jets, on the other hand, are reconstructed by linking energy clusters in the calorimeters with signals in the trackers to create a complete picture of the object at the individual particle level. PF jets generally enjoy better momentum and spatial resolutions, especially at low energies (see Figure 2).

Jet-energy resolution for calorimeter and particle-flow jets as a function of the jet transverse momentum. The improvement in resolution, of almost a factor of two at low transverse momentum, remains sizable even for jets with very high transverse momentum.
(Image credit: CMS Collaboration)
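A toy version of the "linking" step in particle flow is to match each track to the nearest calorimeter cluster in eta-phi within a small cone. The objects and cone size below are invented for illustration:

```python
import math

# Toy particle-flow "linking": match each track to the nearest calorimeter cluster
# in eta-phi, within a cone of Delta R < 0.1. All values are illustrative.
tracks   = [{"pt": 25.0, "eta": 0.52, "phi": 1.10},
            {"pt":  8.0, "eta": 0.48, "phi": 1.15}]
clusters = [{"energy": 30.0, "eta": 0.50, "phi": 1.12},
            {"energy": 12.0, "eta": -1.3, "phi": 2.70}]

def delta_r(a, b):
    dphi = math.remainder(a["phi"] - b["phi"], 2 * math.pi)  # wrap phi into [-pi, pi]
    deta = a["eta"] - b["eta"]
    return math.hypot(deta, dphi)

for track in tracks:
    best = min(clusters, key=lambda c: delta_r(track, c))
    if delta_r(track, best) < 0.1:
        print(f"track pt={track['pt']:.0f} GeV linked to cluster E={best['energy']:.0f} GeV")
    else:
        print(f"track pt={track['pt']:.0f} GeV left unlinked")
```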

Once reconstruction is done, we have a set of objects that we can now call jets. But we don’t want to keep all of them for real physics. Any given event will have a large number of pile up jets, which come from softer collisions between other protons in a bunch (in time), or leftover calorimeter signals from the previous bunch crossing (out of time). Being able to identify and subtract pile up considerably enhances our ability to calibrate the deposits that we know came from good physics objects. In this paper CMS reports a pile up reconstruction and identification efficiency of nearly 100% for hard scattering events, and they estimate that each jet energy is enhanced by about 10 GeV due to pileup alone.

Once the pile up is corrected, the overall jet energy correction (JEC) is determined via detector response simulation. The simulation is needed to model how the initial quarks and gluons fragment, and how the resulting particles shower in the calorimeters. This correction is dependent on jet momentum (since the calorimeter resolution is as well), and jet pseudorapidity (different areas of the detector are made of different materials or have different total thickness.) Figure 3 shows the overall correction factors for several different jet radius R values.

Figure 3: Jet energy correction factors for a jet with pT = 30 GeV, as a function of eta (left). Note the spikes around 1.7 (TileGap3, very little absorber material) and 3 (beginning of endcaps.) Simulated jet energy response after JEC as a function of pT (right).
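Schematically, the calibration is applied as a chain of multiplicative factors: an offset correction removing the average pileup contribution, followed by a simulation-derived response correction that depends on the jet's pT and eta. A minimal sketch with invented correction values:

```python
# Toy jet energy correction chain: raw pT -> pileup-subtracted pT -> response-corrected pT.
# The pileup offset and the correction table below are invented for illustration.
def correct_jet_pt(raw_pt_gev, eta, pileup_offset_gev=10.0):
    pt = raw_pt_gev - pileup_offset_gev           # step 1: remove average pileup contribution

    # step 2: response correction from simulation, binned coarsely in |eta|
    response_correction = 1.12 if abs(eta) < 1.3 else 1.25
    return pt * response_correction

print(correct_jet_pt(40.0, eta=0.4))   # low-pT central jet: corrections matter a lot
print(correct_jet_pt(500.0, eta=2.1))  # high-pT forward jet
```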

Finally, we turn to data as a check on how well these calibrations went. An example of such a check is the tag and probe method with dijet events. Here, we take a clean event with two back-to-back jets, and require one low-eta jet to serve as the ‘tag’ jet. The other ‘probe’ jet, at arbitrary eta, is then measured using the previously derived corrections. If the resulting pT is close to the pT of the tag jet, we know the calibration was solid (this also gives us info on how the calibrations perform as a function of eta). A similar method known as pT balancing can be done with a single jet back-to-back with an easily reconstructed object, such as a Z boson or a photon.
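A toy version of the balance check: for each clean dijet event, compare the corrected probe pT to the tag pT; with good corrections the ratio clusters around one. The event values below are made up:

```python
# Toy tag-and-probe check with back-to-back dijet events (made-up jet pTs).
# If the corrections are good, the probe/tag pT ratio should cluster around 1.
events = [
    {"tag_pt": 102.0, "probe_pt": 98.5,  "probe_eta": 2.1},
    {"tag_pt": 250.0, "probe_pt": 243.0, "probe_eta": 3.0},
    {"tag_pt": 75.0,  "probe_pt": 80.2,  "probe_eta": 1.8},
]

ratios = [e["probe_pt"] / e["tag_pt"] for e in events]
mean_balance = sum(ratios) / len(ratios)
print(f"mean probe/tag pT balance: {mean_balance:.3f}")
# Binning this ratio in probe eta reveals regions where the calibration is off.
```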

This is really a bare bones outline of how jet calibration is done. In real life, there are systematic uncertainties, jet flavor dependence, correlations; the list goes on. But the entire procedure works remarkably well given the complexity of the task. Ultimately CMS reports a jet energy uncertainty of 3% for most physics analysis jets, and as low as 0.32% for some jets—a new benchmark for hadron colliders!

 

Further Reading:

  1. “Jets: The Manifestation of Quarks and Gluons.” Of Particular Significance, Matt Strassler.
  2. “Commissioning of the Particle-flow Event Reconstruction with the first LHC collisions recorded in the CMS detector.” The CMS Collaboration, CMS PAS PFT-10-001.
  3. “Determination of jet energy calibrations and transverse momentum resolution in CMS.” The CMS Collaboration, 2011 JINST 6 P11002.

How to Turn On a Supercollider

Figure 1: CERN Control Centre excitement on June 5. Image from home.web.cern.ch.

After two years of slumber, the world’s biggest particle accelerator has come back to life. This marks the official beginning of Run 2 of the LHC, which will collide protons at nearly twice the energies achieved in Run 1. Results from this data were already presented at the recently concluded European Physical Society (EPS) Conference on High Energy Physics. And after achieving fame in 2012 through observation of the Higgs boson, it’s no surprise that the scientific community is waiting with bated breath to see what the LHC will do next.

The first official 13 TeV stable beam physics data arrived on June 5th. One of the first events recorded by the CMS detector is shown in Figure 2. But as it turns out, you can’t just walk up to the LHC, plug it back into the wall, and press the on switch (crazy, I know.) It takes an immense amount of work, planning, and coordination to even get the thing running.

Figure 2: Event display from one of the first Run 2 collisions.

The machine testing begins with the magnets. Since the LHC dipole magnets are superconducting, they need to be cooled to about 1.9K in order to function, which can take weeks. Each dipole circuit then must be tested to ensure functionality of the quench protection circuit, which safely extracts the magnets’ stored energy in the event of a sudden loss of superconductivity. This process occurred between July and December of 2014.

Once the magnets are set, it’s time to start actually making beam. Immediately before entering the LHC, protons are circling around the Super Proton Synchrotron (SPS), which acts as a pre-accelerator. Getting beam from the SPS to the LHC requires synchronization, a functional injection system, a beam dump procedure, and a whole lot of other processes that are re-awoken and carefully tested. By April, beam commissioning was officially underway, meaning that protons were injected and circulating, and a mere 8 weeks later there were successful collisions at 6.5 TeV per beam. As of right now, the CMS detector is reporting 84 pb⁻¹ of total integrated luminosity; a day-by-day breakdown can be seen in Figure 3.

Figure 3: CMS total integrated luminosity per day, from Ref 4.
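For a sense of what that dataset buys you, the expected number of events of a given process is just its cross section times the integrated luminosity. A quick illustrative estimate, using a round number for the Higgs production cross section rather than an official value:

```python
# Expected event count: N = cross section x integrated luminosity.
# The Higgs cross section used here is a round illustrative number for 13 TeV collisions.
integrated_luminosity_pb = 84.0      # pb^-1, the early Run 2 dataset quoted above
higgs_cross_section_pb = 50.0        # approximate total Higgs production cross section

n_higgs = higgs_cross_section_pb * integrated_luminosity_pb
print(f"~{n_higgs:.0f} Higgs bosons produced")
# ~4000 produced, but only a small fraction decay to clean, easily identified final states.
```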

But just having collisions does not mean that the LHC is up and fully functional. Sometimes things go wrong right when you least expect it. For example, the CMS magnet has been off to a bit of a rough start—there was an issue with its cooling system that kept the magnetic field off, meaning that charged particles would not bend. The LHC has also been taking the occasional week off for “scrubbing”, in which lots of protons are circulated to burn off electron clouds in the beam pipes.

This is all leading up to the next technical stop, when the CERN engineers get to go fix things that have broken and improve things that don’t work perfectly. So it’s a slow process, sure. But all the caution and extra steps and procedures are what make the LHC a one-of-a-kind experiment that has big sights set for the rest of Run 2. More posts to follow when more physics results arrive!

 

References:

  1. LHC Commissioning site
  2. Cryogenics & Magnets at the LHC
  3. CERN collisions announcement
  4. CMS Public Luminosity results

Prospects for the International Linear Collider

Title: “Physics Case for the International Linear Collider”
Author: Linear Collider Collaboration (LCC) Physics Working Group
Published: arXiv:1506.05992 [hep-ex]

For several years, rumors have been flying around the particle physics community about an entirely new accelerator facility, one that can take over for the LHC during its more extensive upgrades and can give physicists a different window into the complex world of the Standard Model and beyond. Through a few setbacks and moments of indecision, the project seems to have more momentum now than ever, so let’s go ahead and talk about the International Linear Collider: what it is, why we want it, and whether or not it will ever actually get off the ground.

The ILC is a proposed linear accelerator that will collide electrons and positrons, in comparison to the circular Large Hadron Collider ring that collides protons. So why make these design differences? Hasn’t the LHC done a lot for us? In two words: precision measurements!

Of course, the LHC got us the Higgs, and that’s great. But there are certain processes that physicists really want to look at now that occupy much higher fractions of the electron-positron cross section. In addition, the messiness associated with strong interactions is entirely gone with a lepton collider, leaving only a very well-defined initial state and easily calculable backgrounds. Let’s look specifically at what particular physical processes are motivating this design.

Figure 1: Higgs to fermion couplings, from CMS experiment (left) and projected for ILC (right).

1. The Higgs. Everything always comes back to the Higgs, doesn’t it? We know that it’s out there, but beyond that, there are still many questions left unanswered. Physicists still want to determine whether the Higgs is composite, or whether it perhaps fits into a supersymmetric model of some kind. Additionally, we’re still uncertain about the couplings of the Higgs, both to the massive fermions and to itself. Figure 1 shows the current best estimate of Higgs couplings, which we expect to be proportional to the fermion mass, in comparison to how the precision of these measurements should improve with the ILC. (A quick numerical sketch of these expected couplings follows after this list.)

2. The Top Quark. Another particle that we’ve already discovered, but still want to know more about. We know that the Higgs field takes on a symmetry-breaking value in all of space, due to the observed split of the electromagnetic and weak forces. As it turns out, the coupling of the Higgs to the top is the largest of all, and it dominates the quantum corrections to this value, making the top a key player in the Standard Model game.

3. New Physics. And of course there’s always the discovery potential. Since electron and positron beams can be polarized, we would be able to measure backgrounds with a whole new level of precision, providing a better image of possible decay chains that include dark matter or other beyond the SM particles.
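Coming back to point 1: in the Standard Model the proportionality between Higgs couplings and fermion masses is explicit, with Yukawa coupling y_f = √2·m_f/v and v ≈ 246 GeV. A quick sketch of the expected values:

```python
import math

# Standard Model Yukawa couplings: y_f = sqrt(2) * m_f / v.
v = 246.0  # Higgs vacuum expectation value in GeV
fermion_masses_gev = {"top": 173.0, "bottom": 4.18, "tau": 1.777, "muon": 0.106}

for name, mass in fermion_masses_gev.items():
    y = math.sqrt(2) * mass / v
    print(f"{name:6s}: y = {y:.4f}")
# Precision measurements test whether the observed couplings track the masses this way;
# deviations would point to physics beyond the Standard Model.
```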

Figure 2: ILC home page/Form One

Let’s move on to the actual design prospects for the ILC. Figure 2 shows the most recent blueprint of what such an accelerator would look like. The ILC would have 2 separate detectors, and would be able to accelerate electrons/positrons to an energy of 500 GeV, with an option to upgrade to 1 TeV at a later point. The entire tunnel would be 31km long, with two damping rings shown at the center. When accelerating electrons to extremely high energies, a linear collider is needed to avoid the enormous synchrotron radiation losses that come with bending ultra-relativistic particles. For example, the Large Electron-Positron Collider synchrotron at CERN accelerated electrons to 50 GeV, giving them a relativistic gamma factor of 98,000. Compare that to a proton of 50 GeV in the same ring, which has a gamma of 54. That high gamma means that an electron requires an insane amount of energy to offset its synchrotron radiation, making a linear collider a more reasonable and cost-effective choice.
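To put numbers on this, the energy an electron radiates per turn grows like the fourth power of its energy divided by the bending radius; for electrons, U0[GeV] ≈ 8.85×10⁻⁵ · E⁴[GeV⁴] / ρ[m]. A quick sketch using an approximate LEP-like bending radius:

```python
# Energy radiated per turn by an electron on a circular orbit:
# U0 [GeV] ~ 8.85e-5 * E^4 [GeV^4] / rho [m].  The bending radius used is approximate.
def energy_loss_per_turn_gev(beam_energy_gev, bending_radius_m):
    return 8.85e-5 * beam_energy_gev**4 / bending_radius_m

rho_lep = 3100.0   # approximate LEP bending radius in meters

for energy in [50.0, 100.0, 250.0]:
    loss = energy_loss_per_turn_gev(energy, rho_lep)
    print(f"E = {energy:5.0f} GeV -> ~{loss:8.3f} GeV lost per turn")
# At 50 GeV the loss per turn is modest, at 100 GeV it is already a few GeV,
# and at ILC-like energies a circular electron machine becomes hopeless.
```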

 

Figure 3: Possible sites for the ILC in Japan.

In any large (read: expensive) experiment such as this, a lot of politics are going to come into play. The current highest bidder for the accelerator seems to be Japan, with possible construction sites in the mountain ranges (see Figure 3). The Japanese government is pretty eager to contribute a lot of funding to the project, something that other contenders have been reluctant to do (but such funding promises can very easily go awry, as the poor SSC shows us.) The Reference Design Reports report the estimated cost to be $6.7 billion, though U.S. Department of Energy officials have placed the cost closer to $20 billion. But the benefits of such a collaboration are immense. The infrastructure of such an accelerator could lead to the creation of a “new CERN”, one that could have as far-reaching influence in the future as CERN has enjoyed in the past few decades. Bringing together about 1000 scientists from more than 20 countries, the ILC truly has the potential to do great things for future international scientific collaboration, making it one of the most exciting prospects on the horizon of particle physics.

 

Further Reading:

  1. The International Linear Collider site: all things ILC
  2. ILC Reference Design Reports (RDR), for the very ambitious reader