Making Smarter Snap Judgments at the LHC

Collisions at the Large Hadron Collider happen fast. Forty million times a second, bunches of ~10^11 protons are smashed together. The rate of these collisions is so high that the computing infrastructure of the experiments can’t keep up with all of them. We are not able to read out and store the result of every collision that happens, so we have to ‘throw out’ nearly all of them. Luckily, most of these collisions are not very interesting anyway. Most are low-energy interactions of quarks and gluons via the strong force that have already been studied at previous colliders. In fact, the interesting processes, like ones that create a Higgs boson, can happen billions of times less often than the uninteresting ones.

The LHC experiments are thus faced with a very interesting challenge: how do you decide extremely quickly whether an event is interesting and worth keeping or not? This is what the ‘trigger’ system, the Marie Kondo of LHC experiments, is designed to do. CMS, for example, has a two-tiered trigger system. The first level has 4 microseconds to make a decision and must reduce the event rate from 40 million events per second to 100,000. This speed requirement means the decision has to be made at the hardware level, using specialized electronics to quickly synthesize the raw information from the detector into a rough idea of what happened in the event. Selected events are then passed to the High Level Trigger (HLT), which has about 150 milliseconds to run versions of the CMS reconstruction algorithms and further reduce the event rate to about a thousand per second.
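To get a feel for the scale of the problem, here is a back-of-the-envelope sketch of the rejection factors implied by those rates (the numbers are the approximate ones quoted above and vary with running conditions):

```python
# Rough trigger arithmetic for the CMS-like rates quoted above (illustrative only).
collision_rate  = 40e6   # bunch crossings per second delivered by the LHC
l1_output_rate  = 100e3  # events per second the Level-1 (hardware) trigger may keep
hlt_output_rate = 1e3    # events per second the High Level Trigger may keep

l1_rejection    = collision_rate / l1_output_rate    # ~400: L1 keeps roughly 1 in 400
hlt_rejection   = l1_output_rate / hlt_output_rate   # ~100: HLT keeps roughly 1 in 100 of those
total_rejection = l1_rejection * hlt_rejection       # ~40,000 overall

print(f"L1 keeps ~1 in {l1_rejection:,.0f} events")
print(f"HLT keeps ~1 in {hlt_rejection:,.0f} of those")
print(f"Overall, ~1 in {total_rejection:,.0f} collisions is stored")
```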

While this system works very well for most uses of the data, like measuring the decay of Higgs bosons, sometimes it can be a significant obstacle. If you want to look through the data for evidence of a new particle that is relatively light, it can be difficult to prevent the trigger from throwing out possible signal events. This is because one of the most basic criteria the trigger uses to select ‘interesting’ events is that they leave a significant amount of energy in the detector. But the decay products of a relatively light new particle won’t carry much energy, and thus may look ‘uninteresting’ to the trigger.

In order to get the most out of their collisions, experimenters are thinking hard about these problems and devising new ways to look for signals the triggers might be missing. One idea is to save additional events from the HLT in a substantially reduced format. Rather than saving the raw information from the event, which can be fully processed at a later time, only the output of the quick reconstruction done by the trigger is saved. At the cost of some precision, this can reduce the size of each event by roughly two orders of magnitude, allowing events with significantly lower energy to be stored. CMS and ATLAS have used this technique to look for new particles decaying to two jets, and LHCb has used it to look for dark photons. The use of these fast reconstruction techniques allows them to search for, and rule out the existence of, particles with much lower masses than otherwise possible. As experiments explore new computing infrastructures (like GPUs) to speed up their high level triggers, they may try to do even more sophisticated analyses using these techniques.
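The payoff of shrinking events is easy to estimate: at a fixed output bandwidth, a hundred-fold smaller event means roughly a hundred times more events can be written out. A quick sketch (the event size and bandwidth below are illustrative assumptions, not official numbers):

```python
# Illustrative trade-off between event size and recordable event rate.
output_bandwidth_mb_s = 1000.0  # assumed fixed storage bandwidth in MB/s
raw_event_size_mb     = 1.0     # assumed size of a fully read-out event in MB
reduced_event_size_mb = 0.01    # ~two orders of magnitude smaller, as in the text

full_rate    = output_bandwidth_mb_s / raw_event_size_mb      # events/s at full size
reduced_rate = output_bandwidth_mb_s / reduced_event_size_mb  # events/s at reduced size

print(f"Full events:    ~{full_rate:,.0f} per second")
print(f"Reduced events: ~{reduced_rate:,.0f} per second")
```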

But experimenters aren’t just satisfied with getting more out of their high level triggers; they want to revamp the low-level ones as well. In order to get these hardware-level triggers to make smarter decisions, experimenters are trying to get them to run machine learning models. Machine learning has become a very popular tool for finding rare signals in LHC data. One of the advantages of machine learning models is that once they have been trained, they can make complex inferences in a very short amount of time. Perfect for a trigger! Now a group of experimentalists have developed a library that can translate the most popular types of machine learning models into a format that can be run on the Field Programmable Gate Arrays used in the lowest-level triggers. This would allow experiments to quickly identify events from rare signals with complex signatures that the current low-level triggers don’t have time to look for.
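To give a flavor of the workflow, here is a minimal sketch of converting a trained neural network into an FPGA-friendly form, assuming the library in question is hls4ml (my inference from the description and the paper linked below); the tiny model, FPGA part number, and output directory are placeholders:

```python
# Sketch of converting a small neural network for FPGA deployment with hls4ml.
# Assumption: the library described in the text is hls4ml; the model, part number,
# and output directory below are illustrative placeholders only.
import hls4ml
from tensorflow import keras

# A tiny classifier standing in for a real trigger model (e.g. a jet tagger).
model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),
])

# Generate an HLS conversion config from the Keras model, then build the HLS project.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls4ml_trigger_model",  # placeholder output directory
    part="xcu250-figd2104-2L-e",        # placeholder FPGA part number
)
hls_model.compile()  # emulate the fixed-point firmware implementation in software
```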

The LHC experiments are working hard to get the most out of their collisions. New particles could already be produced in LHC collisions that we simply haven’t been able to see because of our current triggers; these new techniques aim to cover those blind spots. Look out for new ideas on how to quickly search for interesting signatures, especially as we get closer to the high-luminosity upgrade of the LHC.

Read More:

CERN Courier article on programming FPGAs

IRIS-HEP article on a recent workshop on Fast ML techniques

CERN Courier article on an older CMS search for low-mass dijet resonances

ATLAS Search using ‘trigger-level’ jets

LHCb Search for Dark Photons using fast reconstruction based on a high level trigger

Paper demonstrating the feasibility of running ML models for jet tagging on FPGAs

Jets: From Energy Deposits to Physics Objects

Title: “Jet energy scale and resolution in the CMS experiment in pp collisions at 8 TeV”
Author: The CMS Collaboration
Reference: arXiv:1607.03663 [hep-ex]

As a collider physicist, I care a lot about jets. They are fascinating objects that cover the ATLAS and CMS detectors during LHC operation and make event displays look really cool (see Figure 1.) Unfortunately, as interesting as jets are, they’re also somewhat complicated and difficult to measure. A recent paper from the CMS Collaboration details exactly how we reconstruct, simulate, and calibrate these objects.

Figure 1: This event was collected in August 2015. The two high-pT jets have an invariant mass of 6.9 TeV and the leading and subleading jet have a pT of 1.3 and 1.2 TeV respectively. (Image credit: ATLAS public results)

For the uninitiated, a jet is the experimental signature of quarks or gluons that emerge from a high energy particle collision. Since these colored Standard Model particles cannot exist on their own due to confinement, they cluster or ‘hadronize’ as they move through a detector. The result is a spray of particles coming from the interaction point. This spray can contain mesons, charged and neutral hadrons, basically anything that is colorless as per the rules of QCD.

So what does this mess actually look like in a detector? ATLAS and CMS are designed to absorb most of a jet’s energy by the end of the calorimeters. If the jet has charged constituents, there will also be an associated signal in the tracker. It is then the job of the reconstruction algorithm to combine these various signals into a single object that makes sense. This paper discusses two different reconstructed jet types: calo jets and particle-flow (PF) jets. Calo jets are built only from energy deposits in the calorimeter; since the calorimeter’s energy resolution is relatively coarse, especially for low-energy deposits, this method can get bad quickly. PF jets, on the other hand, are reconstructed by linking energy clusters in the calorimeters with signals in the trackers to create a complete picture of the object at the individual particle level. PF jets generally enjoy better momentum and spatial resolutions, especially at low energies (see Figure 2).

Figure 2: Jet-energy resolution for calorimeter and particle-flow jets as a function of the jet transverse momentum. The improvement in resolution, of almost a factor of two at low transverse momentum, remains sizable even for jets with very high transverse momentum. (Image credit: CMS Collaboration)
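For reference, jet transverse-momentum resolution is commonly parameterized with a noise, stochastic, and constant term (this is the standard functional form; the actual coefficients are fit in the paper):

```latex
\frac{\sigma(p_{\mathrm{T}})}{p_{\mathrm{T}}}
  = \sqrt{\frac{N^2}{p_{\mathrm{T}}^2} + \frac{S^2}{p_{\mathrm{T}}} + C^2}
```

The noise term N (which includes pileup) dominates at low pT, which is exactly where particle flow helps the most, while the constant term C sets the floor at very high pT.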

Once reconstruction is done, we have a set of objects that we can now call jets. But we don’t want to keep all of them for real physics. Any given event will have a large number of pileup jets, which come from softer collisions between other protons in a bunch (in time), or leftover calorimeter signals from the previous bunch crossing (out of time). Being able to identify and subtract pileup considerably enhances our ability to calibrate the deposits that we know came from good physics objects. In this paper CMS reports a pileup reconstruction and identification efficiency of nearly 100% for hard scattering events, and they estimate that each jet’s energy is enhanced by about 10 GeV due to pileup alone.
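One common way to remove this offset is the jet-area method, in which the median pileup pT density of the event is multiplied by the jet’s catchment area and subtracted. A minimal sketch with made-up numbers (the paper’s actual procedure combines such an offset with simulation-based corrections):

```python
# Minimal sketch of area-based pileup subtraction (illustrative numbers only).
rho      = 20.0   # assumed median pileup pT density in the event, GeV per unit area
jet_area = 0.5    # assumed catchment area of a small-radius jet (~pi * 0.4**2)
raw_pt   = 55.0   # assumed raw jet pT in GeV

offset       = rho * jet_area    # estimated pileup contribution, ~10 GeV here
corrected_pt = raw_pt - offset   # jet pT after the pileup offset correction

print(f"Pileup offset: {offset:.1f} GeV -> corrected pT: {corrected_pt:.1f} GeV")
```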

Once the pileup is corrected, the overall jet energy correction (JEC) is determined via detector response simulation. The simulation is needed to model how the initial quarks and gluons fragment, and how the resulting particles shower in the calorimeters. This correction depends on jet momentum (since the calorimeter response does as well) and on jet pseudorapidity (different areas of the detector are made of different materials or have different total thickness). Figure 3 shows the overall correction factors for several different jet radius R values.

Figure 3: Jet energy correction factors for a jet with pT = 30 GeV, as a function of eta (left). Note the spikes around 1.7 (TileGap3, very little absorber material) and 3 (beginning of endcaps.) Simulated jet energy response after JEC as a function of pT (right).
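In practice these corrections are applied to each jet as a chain of multiplicative factors: a pileup offset, a simulation-derived response correction, and a small data-driven residual. A schematic sketch (the function and numbers below are made-up placeholders, not the published corrections):

```python
# Schematic application of factorized jet energy corrections (all numbers are made up).
def toy_jet_energy_correction(pt, eta):
    """Toy stand-in for the real corrections, which are pT- and eta-dependent
    lookup tables derived from simulation and refined with data (pt/eta are
    ignored in this toy)."""
    offset_correction   = 0.90  # removes the average pileup contribution (hypothetical)
    response_correction = 1.15  # corrects the simulated detector response (hypothetical)
    residual_correction = 1.02  # small data-driven residual correction (hypothetical)
    return offset_correction * response_correction * residual_correction

raw_pt, eta = 30.0, 1.7
corrected_pt = raw_pt * toy_jet_energy_correction(raw_pt, eta)
print(f"Raw pT {raw_pt:.1f} GeV -> corrected pT {corrected_pt:.1f} GeV")
```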

Finally, we turn to data as a final check on how well these calibrations went. An example of such a check is the tag and probe method with dijet events. Here, we take a good clean event with two back-to-back jets, and ask for one jet at low eta to serve as the ‘tag’. The other ‘probe’ jet, at arbitrary eta, is then measured using the previously derived corrections. If the resulting pT is close to the pT of the tag jet, we know the calibration was solid (this also gives us info on how calibrations perform as a function of eta.) A similar method known as pT balancing can be done with a single jet back to back with an easily reconstructed object, such as a Z boson or a photon.
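A minimal numerical sketch of the idea (the momenta are made up; in the real analysis the balance is averaged over many events and binned in pT and eta):

```python
# Toy dijet pT-balance check (illustrative values only).
tag_pt   = 200.0   # well-calibrated, low-eta 'tag' jet pT in GeV
probe_pt = 188.0   # corrected pT of the 'probe' jet at some other eta

balance  = probe_pt / tag_pt          # close to 1 if the calibration is good
residual = 1.0 - balance              # ~6% here would hint at an eta-dependent miscalibration

print(f"Balance: {balance:.3f} (residual {residual:.1%})")
```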

This is really a bare-bones outline of how jet calibration is done. In real life, there are systematic uncertainties, jet flavor dependence, correlations; the list goes on. But the entire procedure works remarkably well given the complexity of the task. Ultimately CMS reports a jet energy scale uncertainty below 3% for most physics analysis jets, and as low as 0.32% for some jets, a new benchmark for hadron colliders!

 

Further Reading:

  1. “Jets: The Manifestation of Quarks and Gluons.” Of Particular Significance, Matt Strassler.
  2. “Commissioning of the Particle-flow Event Reconstruction with the first LHC collisions recorded in the CMS detector.” The CMS Collaboration, CMS PAS PFT-10-001.
  3. “Determination of jet energy calibrations and transverse momentum resolution in CMS.” The CMS Collaboration, 2011 JINST 6 P11002.

How to Turn On a Supercollider

Figure 1: CERN Control Centre excitement on June 5. Image from home.web.cern.ch.

After two years of slumber, the world’s biggest particle accelerator has come back to life. This marks the official beginning of Run 2 of the LHC, which will collide protons at nearly twice the energies achieved in Run 1. Results from this data were already presented at the recently concluded European Physical Society (EPS) Conference on High Energy Physics. And after achieving fame in 2012 through the observation of the Higgs boson, it’s no surprise that the scientific community is waiting with bated breath to see what the LHC will do next.

The first official 13 TeV stable beam physics data arrived on June 5th. One of the first events recorded by the CMS detector is shown in Figure 2. But as it turns out, you can’t just walk up to the LHC, plug it back into the wall, and press the on switch (crazy, I know.) It takes an immense amount of work, planning, and coordination to even get the thing running.

Figure 2: Event display from one of the first Run 2 collisions.

The machine testing begins with the magnets. Since the LHC dipole magnets are superconducting, they need to be cooled to about 1.9 K in order to function, which can take weeks. Each dipole circuit then must be tested to ensure functionality of the quench protection circuit, which safely extracts the magnet’s stored energy (and triggers a beam dump) in the event of a sudden loss of superconductivity. This process occurred between July and December of 2014.

Once the magnets are set, it’s time to start actually making beam. Immediately before entering the LHC, protons circle around the Super Proton Synchrotron (SPS), which acts as a pre-accelerator. Getting beam from the SPS to the LHC requires synchronization, a functioning injection system, a beam dump procedure, and a whole lot of other processes that are re-awoken and carefully tested. By April, beam commissioning was officially underway, meaning that protons were injected and circulating, and a mere 8 weeks later there were successful collisions at 6.5 TeV per beam. As of right now, the CMS detector is reporting 84 pb-1 of total integrated luminosity; a day-by-day breakdown can be seen in Figure 3.
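To put that number in perspective, the expected yield of any process is its cross section times the integrated luminosity. Taking roughly 50 pb for total Higgs production at 13 TeV (an approximate figure, not from this article):

```latex
N = \sigma \times \int \mathcal{L}\,dt
  \;\approx\; 50~\mathrm{pb} \times 84~\mathrm{pb}^{-1}
  \;\approx\; 4200
```

So even this very early dataset should already contain a few thousand Higgs bosons, although most of them decay to final states that are difficult to pick out of the background.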

Figure 3: CMS total integrated luminosity per day, from Ref 4.

But just having collisions does not mean that the LHC is up and fully functional. Sometimes things go wrong right when you least expect it. For example, the CMS magnet has been off to a bit of a rough start—there was an issue with its cooling system that kept the magnetic field off, meaning that charged particles would not bend. The LHC has also been taking the occasional week off for “scrubbing”, in which lots of protons are circulated to burn off electron clouds in the beam pipes.

This is all leading up to the next technical stop, when the CERN engineers get to go fix things that have broken and improve things that don’t work perfectly. So it’s a slow process, sure. But all the caution and extra steps and procedures are what make the LHC a one-of-a-kind experiment, one with its sights set high for the rest of Run 2. More posts to follow when more physics results arrive!

 

References:

  1. LHC Commissioning site
  2. Cryogenics & Magnets at the LHC
  3. CERN collisions announcement
  4. CMS Public Luminosity results

Prospects for the International Linear Collider

Title: “Physics Case for the International Linear Collider”
Author: Linear Collider Collaboration (LCC) Physics Working Group
Published: arXiv:1506.05992 [hep-ex]

For several years, rumors have been flying around the particle physics community about an entirely new accelerator facility, one that can take over for the LHC during its more extensive upgrades and can give physicists a different window into the complex world of the Standard Model and beyond. Through a few setbacks and moments of indecision, the project seems to have more momentum now than ever, so let’s go ahead and talk about the International Linear Collider: what it is, why we want it, and whether or not it will ever actually get off the ground.

The ILC is a proposed linear accelerator that would collide electrons and positrons, in contrast to the circular Large Hadron Collider ring that collides protons. So why make these design differences? Hasn’t the LHC done a lot for us? In two words: precision measurements!

Of course, the LHC got us the Higgs, and that’s great. But there are certain processes physicists really want to study that make up a much larger fraction of the total cross section at an electron-positron collider. In addition, the messiness associated with strong interactions is entirely gone with a lepton collider, leaving only a very well-defined initial state and easily calculable backgrounds. Let’s look specifically at what particular physical processes are motivating this design.

Figure 1: Higgs to fermion couplings, from CMS experiment (left) and projected for ILC (right).

1. The Higgs. Everything always comes back to the Higgs, doesn’t it? We know that it’s out there, but beyond that, there are still many questions left unanswered. Physicists still want to determine whether the Higgs is composite, or whether it perhaps fits into a supersymmetric model of some kind. Additionally, we’re still uncertain about the couplings of the Higgs, both to the massive fermions and to itself. Figure 1 shows the current best estimates of the Higgs couplings, which we expect to be proportional to the fermion masses (see the formula after this list), in comparison to how the precision of these measurements should improve with the ILC.

2. The Top Quark. Another particle that we’ve already discovered, but whose characteristics and behavior we still want to pin down. We know that the Higgs field takes on a symmetry-breaking value everywhere in space, reflected in the observed split between the electromagnetic and weak forces. As it turns out, the coupling of the Higgs to the top quark provides the largest quantum corrections to this value, making the top a key player in the Standard Model game.

3. New Physics. And of course there’s always the discovery potential. Since electron and positron beams can be polarized, we would be able to measure backgrounds with a whole new level of precision, providing a clearer picture of possible decay chains that include dark matter or other beyond-the-Standard-Model particles.
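The proportionality mentioned in point 1 comes directly from the Standard Model Yukawa interaction: after electroweak symmetry breaking, the Higgs coupling to a fermion f is fixed by its mass,

```latex
y_f = \frac{\sqrt{2}\, m_f}{v}, \qquad v \simeq 246~\mathrm{GeV},
```

so any measured deviation from a straight-line dependence on mass (as probed in Figure 1) would be a direct sign of physics beyond the Standard Model.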

Figure 2: Blueprint of the proposed ILC accelerator. (Image credit: ILC home page / Form One)

Let’s move on to the actual design prospects for the ILC. Figure 2 shows the most recent blueprint of what such an accelerator would look like. The ILC would have two separate detectors, and would collide electrons and positrons at a center-of-mass energy of 500 GeV, with an option to upgrade to 1 TeV at a later point. The entire tunnel would be 31 km long, with two damping rings shown at the center. When accelerating electrons to extremely high energies, a linear collider is needed because of relativistic effects. For example, the Large Electron-Positron Collider synchrotron at CERN accelerated electrons to 50 GeV, giving them a relativistic gamma factor of about 98,000. Compare that to a proton of 50 GeV in the same ring, which has a gamma of only about 53. That huge gamma means an electron in a circular machine loses an enormous amount of energy to synchrotron radiation, making a linear collider the more reasonable and cost-effective choice.
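Those numbers follow from the relativistic gamma factor, and the case for a linear machine follows from how synchrotron radiation scales with it (a quick check using the standard formulas):

```latex
\gamma = \frac{E}{mc^2}, \qquad
\gamma_e = \frac{50~\mathrm{GeV}}{0.511~\mathrm{MeV}} \approx 9.8\times 10^{4}, \qquad
\gamma_p = \frac{50~\mathrm{GeV}}{938~\mathrm{MeV}} \approx 53
```

The energy radiated per turn in a ring of bending radius rho scales as Delta E proportional to gamma^4 / rho, so at the same beam energy an electron radiates roughly (m_p / m_e)^4, about 10^13, times more than a proton, which is why a ring becomes impractical for electrons at these energies.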

 

Figure 3: Possible sites for the ILC in Japan.

In any large (read: expensive) experiment such as this, a lot of politics come into play. The current highest bidder for the accelerator seems to be Japan, with possible construction sites in its mountain ranges (see Figure 3). The Japanese government is pretty eager to contribute a lot of funding to the project, something that other contenders have been reluctant to do (but such funding promises can very easily go awry, as the poor SSC shows us.) The Reference Design Reports put the estimated cost at $6.7 billion, though U.S. Department of Energy officials have placed it closer to $20 billion. But the benefits of such a collaboration are immense. The infrastructure of such an accelerator could lead to the creation of a “new CERN”, one that could have as far-reaching an influence in the future as CERN has enjoyed in the past few decades. Bringing together about 1000 scientists from more than 20 countries, the ILC truly has the potential to do great things for future international scientific collaboration, making it one of the most exciting prospects on the horizon of particle physics.

 

Further Reading:

  1. The International Linear Collider site: all things ILC
  2. ILC Reference Design Reports (RDR), for the very ambitious reader