A new boson at 151 GeV?! Not quite yet

Title: “Accumulating Evidence for the Associate Production of a Neutral Scalar with Mass around 151 GeV”

Authors: Andreas Crivellin et al.

Reference: https://arxiv.org/abs/2109.02650

Everyone in particle physics is hungry for the discovery of a new particle outside the standard model, one that would point the way toward a better understanding of nature. Recent anomalies, the potential violation of Lepton Flavor Universality in B meson decays and the experimental confirmation of the muon g-2 anomaly, have renewed people’s hopes that new particles may be lurking within our experimental reach. While these anomalies are exciting, even if confirmed they would only be ‘indirect’ evidence for new physics, revealing a concrete hole in the standard model but not definitively saying what fills it. We would then really like to ‘directly’ observe whatever is causing the anomaly, so we can know exactly what the new particle is and study it in detail. A direct observation usually means producing the particle in a collider, which is exactly what the high-momentum experiments at the LHC (ATLAS and CMS) are designed to look for.

By now these experiments have performed hundreds of different analyses of their data searching for potential signals of new particles being produced in their collisions, and so far they haven’t found anything. But in this recent paper, a group of physicists outside these collaborations argue that the experiments may have missed such a signal in their own data. What’s more, they claim statistical evidence for this new particle at the level of around 5-sigma, which is the threshold usually corresponding to a ‘discovery’ in particle physics. If true, this would of course be huge, but there are definitely reasons to be a bit skeptical.

This group took data from various ATLAS and CMS papers that were looking for something else (mostly studying the Higgs) and noticed that several of them had an excess of events at a particular mass, 151 GeV. To see how significant these excesses were in combination, they constructed a statistical model that combines the evidence from the many different channels simultaneously. They then evaluate the probability of an excess appearing at the same mass in all of these channels without a new particle, find it to be extremely low, and thus claim evidence for this new particle at 5.1-sigma (local).
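For readers who want a feel for what ‘5.1-sigma (local)’ means, here is a minimal Python sketch (not the paper’s statistical model, just the standard one-sided convention used in particle physics) for converting between a local p-value and a significance in sigma:

```python
from scipy.stats import norm

def p_to_sigma(p):
    """One-sided conversion from a local p-value to a significance in sigma."""
    return norm.isf(p)

def sigma_to_p(z):
    """Inverse conversion: a significance in sigma to a one-sided p-value."""
    return norm.sf(z)

print(sigma_to_p(3.0))      # ~1.3e-3, the usual 'evidence' threshold
print(sigma_to_p(5.1))      # ~1.7e-7, the claimed local significance
print(p_to_sigma(2.87e-7))  # ~5.0, the conventional 'discovery' threshold
```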

 
Figure 1 from the paper, showing the invariant mass spectrum of the hypothetical new boson in the different channels the authors consider. The authors have combined CMS and ATLAS data from different analyses and normalized everything consistently in order to make such a plot. The pink line shows the purported signal at 151 GeV. The largest significance comes from the channel in which the new boson decays into two photons and is produced in association with something that decays invisibly (which produces missing energy).

A plot of the significance (p-value) as a function of the mass of the new particle. Combining all the channels, the significance reaches the level of 5-sigma. One can see that the significance is dominated by the diphoton channels.

This is of course a big claim, and one reason to be skeptical is that, without a definitive model, they cannot predict exactly how much signal you would expect to see in each of these different channels. This means that when combining the different channels, they have to let the relative strength of the signal in each channel be a free parameter. They are also combining data from a multitude of different CMS and ATLAS papers, essentially selected because they show some sort of fluctuation around 151 GeV. This sort of cherry-picking of data, together with the lack of constraints on the relative signal strengths, means that their final significance should be taken with several huge grains of salt.

The authors further attempt to quantify a global significance, which would account for the look-elsewhere effect, but due to the way they have selected their datasets this is not really possible in this case (in this humble experimenter’s opinion).

Still, with all of those caveats, it is clear that there are some excesses in the data around 151 GeV, and it should be worth the experimental collaborations’ time to investigate them further. Most of the data the authors use comes from control regions of analyses that were focused solely on the Higgs, so this motivates the experiments to expand their focus a bit to cover these potential signals. The authors also propose a new search that would be sensitive to their purported signal, which would look for a new scalar decaying to two new particles that decay to pairs of photons and bottom quarks respectively (H->SS*-> γγ bb).

 

In an informal poll on Twitter, most respondents were not convinced that a new particle has been found, but the ball is now in ATLAS and CMS’s court to analyze the data themselves and see what they find.

 

 

Read More:

“An Anomalous Anomaly: The New Fermilab Muon g-2 Results”, a Particle Bites post about one recent exciting anomaly

“The flavour of new physics”, a Cern Courier article about the recent anomalies relating to lepton flavor violation

“Unveiling Hidden Physics at the LHC”, a recent whitepaper that contains a good review of the recent anomalies relevant for LHC physics

For a good discussion of this paper claiming a new boson, see this Twitter thread

How to find invisible particles in a collider

You might have heard that one of the big things we are looking for in collider experiments is the ever-elusive dark matter particle. But given that dark matter particles are expected to interact very rarely with regular matter, how would you know if you happened to make some in a collision? The so-called ‘direct detection’ experiments have to operate giant multi-ton detectors in extremely low-background environments in order to be sensitive to an occasional dark matter interaction. In the noisy environment of a particle collider like the LHC, in which collisions producing sprays of particles happen every 25 nanoseconds, the extremely rare interaction of dark matter with our detector is likely to be missed. But instead of finding dark matter by seeing it in our detector, we can instead find it by not seeing it. That may sound paradoxical, but it’s how most collider-based searches for dark matter work.

The trick is based on every physicist’s favorite principle: the conservation of energy and momentum. We know that energy and momentum will be conserved in a collision, so if we know the initial momentum of the incoming particles, and measure everything that comes out, then any invisible particles produced will show up as an imbalance between the two. In a proton-proton collider like the LHC we don’t know the initial momentum of the particles along the beam axis, but we do know that they were traveling along that axis. That means that the net momentum in the direction away from the beam axis (the ‘transverse’ direction) should be zero. So if we see a momentum imbalance away from the beam axis, we know that there must be some ‘invisible’ particle traveling in the opposite direction.
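As a rough illustration of the bookkeeping (a toy sketch, not any experiment’s actual reconstruction code), the missing transverse momentum is just the negative vector sum of the transverse momenta of everything visible in the event:

```python
import math

def missing_transverse_momentum(visible):
    """visible: list of (px, py) transverse momentum components, in GeV,
    for every reconstructed visible object in the event."""
    sum_px = sum(px for px, _ in visible)
    sum_py = sum(py for _, py in visible)
    # Whatever is needed to balance the visible momenta is attributed to
    # invisible particles (or to mis-measurement).
    met_x, met_y = -sum_px, -sum_py
    met = math.hypot(met_x, met_y)      # magnitude of the imbalance
    phi = math.atan2(met_y, met_x)      # direction of the imbalance
    return met, phi

# Two visible objects both recoiling 'up': the imbalance points 'down'.
print(missing_transverse_momentum([(30.0, 40.0), (10.0, 55.0)]))
```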

A sketch of what the signature of an invisible particle would look like in a detector. Note this is a 2D cross section of the detector, with the beam axis traveling through the center of the diagram. There are two signals measured in the detector moving ‘up’ away from the beam pipe. Momentum conservation means there must have been some particle produced which is traveling ‘down’ and was not measured by the detector. Figure borrowed from here

We normally refer to the amount of transverse momentum imbalance in an event as its ‘missing momentum’. Any collision in which an invisible particle was produced will have missing momentum as a tell-tale sign. But while it is a very interesting signature, missing momentum can actually be very difficult to measure. That’s because in order to tell if there is anything missing, you have to accurately measure the momentum of every particle in the collision. Our detectors aren’t perfect: any particles we miss, or whose momentum we mis-measure, will show up as a ‘fake’ missing energy signature.

A picture of a particularly noisy LHC collision, with a large number of tracks
Can you tell if there is any missing energy in this collision? It’s not so easy… Figure borrowed from here

Even if you can measure the missing energy well, dark matter particles are not the only ones invisible to our detector. Neutrinos are notoriously difficult to detect and will not get picked up by our detectors, producing a ‘missing energy’ signature. This means that any search for new invisible particles, like dark matter, has to understand the background of neutrino production (often from the decay of a Z or W boson) very well. No one ever said finding the invisible would be easy!

However particle physicists have been studying these processes for a long time so we have gotten pretty good at measuring missing energy in our events and modeling the standard model backgrounds. Missing energy is a key tool that we use to search for dark matter, supersymmetry and other physics beyond the standard model.

Read More:

“What happens when energy goes missing?”, ATLAS blog post by Julia Gonski

“How to look for supersymmetry at the LHC”, blog post by Matt Strassler

“Performance of missing transverse momentum reconstruction with the ATLAS detector using proton-proton collisions at √s = 13 TeV” Technical Paper by the ATLAS Collaboration

“Search for new physics in final states with an energetic jet or a hadronically decaying W or Z boson and transverse momentum imbalance at √s= 13 TeV” Search for dark matter by the CMS Collaboration

Measuring the Tau’s g-2 Too

Title: New physics and tau g-2 using LHC heavy ion collisions

Authors: Lydia Beresford and Jesse Liu

Reference: https://arxiv.org/abs/1908.05180

Since April, particle physics has been going crazy with excitement over the recent announcement of the muon g-2 measurement, which may be our first laboratory hint of physics beyond the Standard Model. The paper with the new measurement has racked up over 100 citations in the last month. Most of these papers are theorists proposing various models to try and explain the (controversial) discrepancy between the measured value of the muon’s magnetic moment and the Standard Model prediction. The sheer number of papers shows there are many, many models that can explain the anomaly. So if the discrepancy is real, we are going to need new measurements to whittle down the possibilities.

Given that the current deviation is in the magnetic moment of the muon, one very natural place to look next would be the magnetic moment of the tau lepton. The tau, like the muon, is a heavier cousin of the electron. It is the heaviest lepton, coming in at 1.78 GeV, around 17 times heavier than the muon. In many models of new physics that explain the muon anomaly, the shift in the magnetic moment of a lepton is proportional to the square of the lepton’s mass. This would explain why we are seeing a discrepancy in the muon’s magnetic moment and not the electron’s (though there is actually currently a small hint of a deviation for the electron too). It also means the tau should be around 280 times more sensitive than the muon to the new particles in these models. The trouble is that the tau has a much shorter lifetime than the muon, decaying away in just 10⁻¹³ seconds. This means that the techniques used to measure the muon’s magnetic moment, based on magnetic storage rings, won’t work for taus.
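The factor of roughly 280 is just the squared mass ratio. A quick back-of-the-envelope check (the lepton masses below are standard values, not numbers quoted in the paper):

```python
m_tau = 1.777    # GeV, tau lepton mass
m_mu  = 0.1057   # GeV, muon mass

# In models where the shift in the anomalous magnetic moment scales with
# the lepton mass squared, the tau/muon sensitivity ratio is:
print((m_tau / m_mu) ** 2)   # ~283, i.e. 'around 280 times more sensitive'
```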

That’s where this new paper comes in. It details a new technique to try and measure the tau’s magnetic moment using heavy ion collisions at the LHC. The technique is based on light-light collisions (previously covered on Particle Bites) where two nuclei emit photons that then interact to produce new particles. Though in classical electromagnetism light doesn’t interact with itself (the beams from two spotlights pass right through each other), at very high energies each photon can split into new particles, like a pair of tau leptons, and those particles can then interact. Though the LHC normally collides protons, it also has runs colliding heavier nuclei like lead. Lead nuclei have more charge than protons, so they emit high energy photons more often and produce more light-light collisions.

Light-light collisions which produce tau leptons provide a nice environment to study the interaction of the tau with the photon. A particle’s magnetic properties are determined by its interaction with photons, so by studying these collisions you can measure the tau’s magnetic moment.

However, studying this process is easier said than done. These light-light collisions are “Ultra Peripheral” because the lead nuclei are not colliding head on, so the taus produced generally don’t have a large amount of momentum away from the beamline. This can make them hard to reconstruct in detectors which have been designed to measure particles from head-on collisions, which typically have much more momentum. Taus can decay in several different ways, but they always produce at least one neutrino, which will not be detected by the LHC experiments, further reducing the amount of detectable momentum and meaning some information about the collision will be lost.

However, one nice thing about these events is that they should be quite clean in the detector. Because the lead nuclei remain intact after emitting the photon, the taus won’t come along with the bunch of additional particles you often get in head-on collisions. The level of background processes that could mimic this signal also seems to be relatively minimal. So if the experimental collaborations spend some effort optimizing their reconstruction of low momentum taus, it seems very possible to perform a measurement like this in the near future at the LHC.

The authors of this paper estimate that such a measurement with the currently available amount of lead-lead collision data would already supersede the previous best measurement of the tau’s anomalous magnetic moment, and further improvements could go much further. Though the measurement of the tau’s magnetic moment would still be far less precise than that of the muon and electron, it could still reveal deviations from the Standard Model in realistic models of new physics. So given the recent discrepancy with the muon, the tau will be an exciting place to look next!

Read More:

An Anomalous Anomaly: The New Fermilab Muon g-2 Results

When light and light collide

Another Intriguing Hint of New Physics Involving Leptons

The LHC’s Newest Experiment

Article Title: “FASER: ForwArd Search ExpeRiment at the LHC”

Authors: The FASER Collaboration 

Reference: https://arxiv.org/abs/1901.04468

When the LHC starts up again for its 3rd run of data taking, there will be a new experiment on the racetrack. FASER, the ForwArd Search ExpeRiment at the LHC, is an innovative new experiment that, just like its acronym, will stretch LHC collisions to get the most out of them.

While the current LHC detectors are great, they have a (literal) hole. General purpose detectors (like ATLAS and CMS) are essentially giant cylinders with the incoming particle beams passing through the central axis of the cylinder before colliding. Because they have to leave room for the incoming beam of particles, they can’t detect anything too close to the beam axis. This typically isn’t a problem, because when a heavy new particle, like a Higgs boson, is produced, its decay products fly off in all directions, so it is very unlikely that all of the particles produced would end up moving along the beam axis. However, if you are looking for very light particles, they will often be produced in ‘imbalanced’ collisions, where one of the protons contributes a lot more energy than the other, and the resulting particles therefore mostly carry on in the direction of that proton, along the beam axis. Because these general purpose detectors have to have a gap in them for the beams to enter, they have no hope of detecting the particles from such collisions.

That’s where FASER comes in.

A diagram of the FASER detector.

FASER is specifically looking for new light “long-lived” particles (LLP’s) that could be produced in LHC collisions and then carry on in the direction of the beam. Long-lived means that once produced they can travel for a while before decaying back into Standard Model particles. Many popular models of dark matter have particles that could fit this bill, including axion-like particles, dark photons, and heavy neutral leptons. To search for these particles FASER will be placed approximately 500 meters down the line from the ATLAS interaction point, in a former service tunnel. They will be looking for the signatures of LLP’s that were produced in collisions at the ATLAS interaction point, traveled through the ground, and eventually decayed within the volume of their detector.
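To get a feel for why ‘long-lived’ and ‘far away’ go together, here is a hedged little sketch of the exponential decay law: the chance that a particle survives the trip through the rock and then decays inside the detector’s decay volume. The lifetime, boost, and detector length below are made-up illustrative numbers, not FASER’s actual benchmark points.

```python
import math

C = 299792458.0   # speed of light, m/s

def decay_in_volume_probability(tau, beta_gamma, distance, length):
    """Probability that a particle with proper lifetime tau (seconds) and
    boost beta*gamma survives 'distance' meters and then decays within the
    next 'length' meters (the detector's decay volume)."""
    d = beta_gamma * C * tau   # mean decay length in the lab frame, meters
    return math.exp(-distance / d) * (1.0 - math.exp(-length / d))

# Illustrative numbers only: a 10 ns proper lifetime, a boost of ~1000,
# a detector ~500 m downstream with a ~1.5 m long decay volume.
print(decay_in_volume_probability(1e-8, 1000.0, 500.0, 1.5))
```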

A map showing where FASER will be located, around 500 meters downstream of the ATLAS interaction point.

Any particles reaching FASER will have traveled through hundreds of meters of rock and concrete, filtering out a large fraction of the Standard Model particles produced in the LHC collisions. But the LLP’s FASER is looking for interact very feebly with the Standard Model, so they should sail right through. FASER also has dedicated detector elements to veto any remaining muons that might make it through the ground, allowing FASER to almost entirely eliminate any backgrounds that would mimic an LLP signal. This low background and their unique design will allow them to break new ground in the search for LLP’s in the coming LHC run.

A diagram showing how particles reach FASER. Starting at the ATLAS interaction point, protons and other charged particles get deflected away by the LHC, but the long-lived particles (LLP’s) that FASER is searching for would continue straight through the ground to the FASER detector.

In addition to their program searching for new particles, FASER will also feature a neutrino detector. This will allow them to detect the copious and highly energetic neutrinos produced in LHC collisions, which actually haven’t been studied yet. In fact, this will be the first direct detection of neutrinos produced at a particle collider, and will enable tests of neutrino properties at energies much higher than those of any previous human-made source.

FASER is a great example of physicists thinking up clever ways to get more out of our beloved LHC collisions. Currently being installed, it will be one of the most exciting new developments of the LHC Run III, so look out for their first results in a few years!

 

Read More: 

The FASER Collaboration’s Detector Design Page

Press Release for CERN’s Approval of FASER

Announcement and Description of FASER’s Neutrino program

Machine Learning The LHC ABC’s

Article Title: ABCDisCo: Automating the ABCD Method with Machine Learning

Authors: Gregor Kasieczka, Benjamin Nachman, Matthew D. Schwartz, David Shih

Reference: arxiv:2007.14400

When LHC experiments look for the signatures of new particles in their data, they always apply a series of selection criteria to the recorded collisions. The selections pick out events that look similar to the sought-after signal. Often they then compare the observed number of events passing these criteria to the number they would expect from ‘background’ processes. If they see many more events in real data than the predicted background, that is evidence of the sought-after signal. Crucial to the whole endeavor is being able to accurately estimate the number of events the background processes would produce. Underestimate it and you may incorrectly claim evidence of a signal; overestimate it and you may miss the chance to find a highly sought-after signal.

However, it is not always so easy to estimate the expected number of background events. While LHC experiments do have high quality simulations of the Standard Model processes that produce these backgrounds, the simulations aren’t perfect. In particular, processes involving the strong force (aka Quantum Chromodynamics, QCD) are very difficult to simulate, and refining these simulations is an active area of research. Because of these deficiencies we don’t always trust background estimates based solely on simulation, especially when applying very specific selection criteria.

Therefore experiments often employ ‘data-driven’ methods, where they estimate the amount of background events using control regions in the data. One of the most widely used techniques is called the ABCD method.

An illustration of the ABCD method. The signal region, A, is defined as the region in which f and g are both greater than some value. The amount of background in region A is estimated using regions B, C, and D, which are dominated by background.

The ABCD method can be applied if the selection of signal-like events involves two independent variables f and g. If one defines the ‘signal region’, A (the part of the data in which we are looking for a signal), as having f and g each greater than some amount, then one can use the neighboring regions B, C, and D to estimate the amount of background in region A. If the number of signal events outside region A is small, the number of background events in region A can be estimated as N_A = N_B * (N_C/N_D).
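A minimal sketch of that arithmetic, with made-up event counts (this is the whole trick; the hard part in practice is choosing f and g so the independence assumption actually holds):

```python
def abcd_background_estimate(n_b, n_c, n_d):
    """Predicted background count in signal region A, assuming f and g are
    independent for background events and that signal contamination in the
    control regions B, C, and D is negligible."""
    return n_b * n_c / n_d

# Made-up counts in the three background-dominated control regions
print(abcd_background_estimate(200, 150, 600))   # -> 50.0 expected background events in A
```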

In modern analyses, one of these selection requirements often involves the score of a neural network trained to identify the sought-after signal. Because neural networks are powerful learners, one has to be careful that they don’t accidentally learn about the other variable that will be used in the ABCD method, such as the mass of the signal particle. If the two variables become correlated, a background estimate with the ABCD method will not be possible. This often means augmenting the neural network, either during training or after the fact, so that it is intentionally ‘de-correlated’ with respect to the other variable. While there are several known techniques for doing this, it is still a tricky process, and good background estimates often come at the cost of reduced classification performance.

In this latest work the authors devise a way to have the neural networks help with the background estimate rather than hinder it. The idea is that rather than training a single network to classify signal-like events, they simultaneously train two networks, both trying to identify the signal. During this training they use a groovy technique called ‘DisCo’ (short for Distance Correlation) to ensure that the outputs of the two networks are independent of each other. This forces the networks to learn to use independent information to identify the signal, which then allows the two outputs to be used in an ABCD background estimate quite easily.
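For the curious, here is a minimal numpy sketch of the distance correlation statistic that gives DisCo its name, evaluated on the outputs of two classifiers for a batch of events. The actual ABCDisCo method uses this quantity as a penalty term in the training loss; the sketch below only computes the statistic itself, under that caveat.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1D arrays (e.g. the outputs of
    the two networks on a batch of background events). Values near 0 indicate
    independence; 1 means one output determines the other."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(0)
u, v = rng.uniform(size=1000), rng.uniform(size=1000)
print(distance_correlation(u, v))   # small: nearly independent outputs
print(distance_correlation(u, u))   # 1.0: perfectly correlated outputs
```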

The authors try out this new technique, dubbed ‘Double DisCo’, on several examples. They demonstrate that they are able to obtain quality background estimates using the ABCD method while achieving great classification performance. They show that this method improves upon the previous state-of-the-art technique of decorrelating a single network from a fixed variable like mass and using cuts on the mass and the classifier to define the ABCD regions (called ‘Single DisCo’ here).

Using the task of identifying jets containing boosted top quarks, they compare the classification performance (x-axis) and quality of the ABCD background estimate (y-axis) achievable with the new Double DisCo technique (yellow points) and previously state of the art Single DisCo (blue points). One can see the Double DisCo method is able to achieve higher background rejection with a similar or better amount of ABCD closure.

While there have been many papers over the last few years about applying neural networks to classification tasks in high energy physics, not many have thought about how to use them to improve background estimates as well. Because of their importance, background estimates are often the most time consuming part of a search for new physics. So this technique is both interesting and immediately practical to searches done with LHC data. Hopefully it will be put to use in the near future!

Further Reading:

Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”

Recent ATLAS Summary on New Machine Learning Techniques “Machine learning qualitatively changes the search for new particles”

CERN Tutorial on “Background Estimation with the ABCD Method”

Summary of Previous Decorrelation Techniques used in ATLAS “Performance of mass-decorrelated jet substructure observables for hadronic two-body decay tagging in ATLAS”

Catching The Higgs Speeding

Article Title: Inclusive search for highly boosted Higgs bosons decaying to bottom quark-antiquark pairs in proton-proton collisions at √s= 13 TeV

Authors: The CMS Collaboration

Reference: arxiv:2006.13251

Since the discovery of the Higgs boson, one of the main tasks of the LHC experiments has been to study all of its properties and see if they match the Standard Model predictions. Most of this effort has gone into characterizing the different ways the Higgs can be produced (‘production modes’) and how often it decays into its different channels (‘branching ratios’). However, if you are a fan of Sonic the Hedgehog, you might have also wondered ‘How often does the Higgs go really fast?’. While that might sound like a very silly question, it is actually a very interesting one to study, and it is what has been done in this recent CMS analysis.

But what does it mean for the Higgs to ‘go fast’? You might have thought that the Higgs moves quite slowly because it is the 2nd heaviest fundamental particle we know of, with a mass around 125 times that of a proton. But sometimes very energetic LHC collisions have enough energy not only to make a Higgs boson but also to give it a ‘kick’. If the Higgs is produced with enough momentum that it moves away from the beamline at a speed relatively close to the speed of light, we call it ‘boosted’.

Not only are these boosted Higgs a very cool thing to study, they can also be crucial to seeing the effects of new particles interacting with the Higgs. If there were a new heavy particle interacting with the Higgs during its production, you would expect to see the largest effect on the rate of Higgs production at high momentum. So if you don’t look specifically at the rate of boosted Higgs production, you might miss this clue of new physics.

Another benefit is that when the Higgs is produced with a boost, its experimental signature changes significantly, often making it easier to spot. The Higgs’s favorite decay channel, its decay into a pair of bottom quarks, is notoriously difficult to study. A bottom quark, like any quark produced in an LHC collision, does not reach the detector directly, but creates a huge shower of particles known as a jet. Because bottom quarks live long enough to travel a little bit away from the beam interaction point before decaying, their jets start a little bit displaced compared to other jets. This allows experimenters to ‘tag’ jets likely to have come from bottom quarks. In short, the experimental signature of this Higgs decay is two jets that look bottom-like. This signal is very hard to find amidst a background of jets produced via the strong force, which occur at rates orders of magnitude higher than Higgs production.

 

 But when a particle with high momentum decays, its decay products will be closer together in the reference frame of the detector. When the Higgs is produced with a boost, the two bottom quarks form a single large jet rather than two separated jets. This single jet should have the signature of two b quarks inside of it rather than just one. What’s more, the distribution of particles within the jet should form 2 distinct ‘prongs’, one coming from each of the bottom quarks, rather than a single core that would be characteristic of a jet produced by a single quark or gluon. These distinct characteristics help analyzers pick out events more likely to be boosted Higgs from regular QCD events. 
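A rough rule of thumb (a standard approximation, not a number taken from the paper) is that the angular separation of the two decay products scales like twice the parent mass divided by its transverse momentum, which is why a fast-moving Higgs collapses into a single large-radius jet:

```python
def delta_r_approx(mass, pt):
    """Approximate angular separation of the two decay products of a boosted
    particle: dR ~ 2 * m / pT (mass and pT in the same units, e.g. GeV)."""
    return 2.0 * mass / pt

m_higgs = 125.0
for pt in (250.0, 450.0, 800.0):
    print(pt, delta_r_approx(m_higgs, pt))
# Around pT ~ 450 GeV the separation (~0.6) already fits comfortably inside
# a typical large-radius jet of size R = 0.8.
```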

The end goal is to select events with these characteristics and then look for an excess of events with an invariant mass of 125 GeV, which would be the tell-tale sign of the Higgs. When this search was performed they did see such a bump, an excess over the estimated background with a significance of 2.5 standard deviations. This is actually a stronger signal than they were expecting to see in the Standard Model. They measure the strength of the signal they see to be 3.7 ± 1.6 times the strength predicted by the Standard Model.

The result of the search for ‘boosted’ Higgs bosons decaying to b quarks. One can see an excess of events at 125 GeV, shown in pink, corresponding to the observed Higgs signal.

The analyzers then study this excess more closely by checking the signal strength in different regions of Higgs momentum. What they see is that the excess is coming from the events with the highest momentum Higgs’s.  The significance of the excess of high momentum Higgs’s above the Standard Model prediction is about 2 standard deviations.

A plot showing the measured Higgs signal strength in different bins of the Higgs momentum. The signal strengths are normalized so that the Standard Model prediction is always given by ‘1’ (shown by the gray dashed line). The measurements in each momentum bin are shown as the black points with red error bars. The overall measurement across all the bins is shown by the thick black line, with the green region as its error bar.

So what should we make of these extra speedy Higgs’s? Well first of all, the deviation from the Standard Model is not very statistically significant yet, so it may disappear with further study. ATLAS is likely working on a similar measurement with their current dataset, so we will wait to see if they confirm this excess. Another possibility is that the current predictions for the Standard Model, which are based on difficult perturbative QCD calculations, may be slightly off. Theorists will probably continue to make improvements to these predictions in the coming years. But if we continue to see the same effect in future measurements, and the Standard Model prediction doesn’t budge, these speedy Higgs’s may turn out to be our first hint of physics beyond the Standard Model!

Further Reading:

“First Evidence the Higgs Talks to Other Generations”: previous ParticleBites post on recent Higgs boson news

“A decade of advances in jet substructure”: Cern Courier article on techniques to identify boosted particles (like the Higgs) decaying into jets

“Jets: From Energy Deposits to Physics Objects”: previous ParticleBites post on how jets are measured

First Evidence the Higgs Talks to Other Generations

Article Titles: “Measurement of Higgs boson decay to a pair of muons in proton-proton collisions at √s = 13 TeV” and “A search for the dimuon decay of the Standard Model Higgs boson with the ATLAS detector”

Authors: The CMS Collaboration and The ATLAS Collaboration, respectively

References: CDS: CMS-PAS-HIG-19-006 and arxiv:2007.07830, respectively

Like parents who wonder if millennials have ever read a book by someone outside their generation, physicists have been wondering if the Higgs communicates with matter particles outside the 3rd generation. Since its discovery in 2012, physicists at the LHC experiments have been studying the Higgs in a variety of ways. However, despite the fact that matter seems to be structured into 3 distinct ‘generations’, we have so far only seen the Higgs talking to the 3rd generation. In the Standard Model, the different generations of matter are 3 identical copies of the same kinds of particles, just with each generation having heavier masses. Because the Higgs interacts with particles in proportion to their mass, it has been much easier to measure the Higgs talking to the third and heaviest generation of matter particles. But in order to test whether the Higgs boson really behaves exactly like the Standard Model predicts, or has slight deviations (indicating new physics), it is important to measure its interactions with particles from the other generations too. The 2nd generation particle the Higgs decays to most often is the charm quark, but the experimental difficulty of identifying charm quarks makes this an extremely difficult channel to probe (though it is being tried).

The best candidate for spotting the Higgs talking to the 2nd generation is by looking for the Higgs decaying to two muons which is exactly what ATLAS and CMS both did in their recent publications. However this is no easy task. Besides being notoriously difficult to produce, the Higgs only decays to dimuons two out of every 10,000 times it is produced. Additionally, there is a much larger background of Z bosons decaying to dimuon pairs that further hides the signal.

The branching ratio (fraction of decays to a given final state) of the Higgs boson as a function of its mass (the measured Higgs mass is around 125 GeV). The decay to a pair of muons is shown in gold, much below the other decays that have been observed.

CMS and ATLAS try to make the most of their data by splitting up events into multiple categories, applying cuts that target the different ways Higgs bosons are produced: the fusion of two gluons, of two vector bosons, or of two top quarks, or radiation from a vector boson. Some of these categories are then further sub-divided to try and squeeze out as much signal as possible. Gluon fusion produces the most Higgs bosons, but it is also the hardest to distinguish from the Z boson production background. The vector boson fusion process produces the 2nd most Higgs bosons and has a more distinctive signature, so it contributes the most to the overall measurement. In each of these sub-categories a separate machine learning classifier is trained to distinguish Higgs boson decays from background events. All together CMS uses 14 different categories of events and ATLAS uses 20. Backgrounds are estimated using both simulation and data-driven techniques, with slightly different methods in each category. To extract the overall amount of signal present, both CMS and ATLAS fit all of their respective categories at once, with a single parameter controlling the strength of the Higgs boson signal.
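A toy sketch of what such a combined fit looks like, with made-up numbers (real analyses fit full mass spectra with many nuisance parameters; here each category is reduced to a single Poisson count, just to show the role of the shared signal-strength parameter mu):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Made-up categories: expected background b, expected SM-strength signal s,
# and observed counts n. One parameter mu scales the signal everywhere.
b = np.array([120.0, 45.0, 10.0])
s = np.array([4.0, 3.0, 1.5])
n = np.array([128, 46, 12])

def nll(mu):
    """Poisson negative log-likelihood summed over categories (constants dropped)."""
    lam = b + mu * s
    return np.sum(lam - n * np.log(lam))

best = minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded")
print(best.x)   # best-fit signal strength relative to the SM (~1.1 for these toy counts)
```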

At the end of the day, CMS and ATLAS are able to report evidence of Higgs decay to dimuons with a significance of 3-sigma and 2-sigma respectively (chalk up 1 point for CMS in their eternal rivalry!). Both of them find an amount of signal in agreement with the Standard Model prediction.

Combination of all the events used in the CMS (left) and ATLAS (right) searches for a Higgs decaying to dimuons. Events are weighted by the amount of expected signal in that bin. Despite this trick, the small evidence for a signal can only be seen in the bottom panels, which show the number of data events minus the predicted amount of background around 125 GeV.

CMS’s first evidence of this decay allows them to measure the strength of the Higgs coupling to muons as compared to the Standard Model prediction. One can see that this latest muon measurement sits right on the Standard Model prediction, and probes the Higgs’ coupling to a particle with a much smaller mass than any of the other measurements.

CMS’s latest summary of Higgs couplings as a function of particle mass. The newest addition, the coupling to muons, is shown in green. One can see that so far there is impressive agreement with the Standard Model across a mass range spanning 3 orders of magnitude!

As CMS and ATLAS collect more data and refine their techniques, they will certainly try to push their precision up to the 5-sigma level needed to claim discovery of the Higgs’s interaction with the 2nd generation. They will be on the lookout for any deviations from the expected behavior of the SM Higgs, which could indicate new physics!

Further Reading:

Older ATLAS Press Release “ATLAS searches for rare Higgs boson decays into muon pairs”

Cern Courier Article “The Higgs adventure: five years in”

Particle Bites Post “Studying the Higgs via Top Quark Couplings”

Blog Post from Matt Strassler on “How the Higgs Field Works”

The XENON1T Excess : The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world’s most sensitive dark matter experiments. The experiment consists of a huge tank of Xenon placed deep underground in the Gran Sasso laboratory in Italy. It is a ‘direct-detection’ experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMP’s, Weakly Interacting Massive Particles, which used to be everyone’s favorite candidate for dark matter. However, given recent null results by WIMP-hunting direct-detection experiments, and by collider experiments at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, which were designed to look for heavy WIMP’s colliding with Xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles that are much lighter than traditional WIMP’s would not leave much of an impact on the large Xenon nuclei, but they can leave a signal in the detector if they instead scatter off of the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity. This extraordinary sensitivity is due to the experiment’s exquisite control over backgrounds and its extremely low energy threshold for detection. Rather than just being impressed, what has gotten many physicists excited is that the latest data shows an excess of events above the expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim ‘evidence’ of an anomaly but short of the typical 5 sigma required to claim discovery.

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is nothing. 3.5 sigma is not enough evidence to claim discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal, and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment’s energy detection threshold. The so-called ‘efficiency turn on’, the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe this is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note the background line is flat below the excess), and the excess rises above this flat background. So to explain this excess their efficiency would have to somehow be better at low energies than at high energies, which seems very unlikely. Or there would have to be a very strange unaccounted-for bias where some higher energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration performed an electron reconstruction calibration using the radioactive decays of Radon-220 over exactly this energy range and was able to model the turn on and detection efficiency very well.

Results of a calibration done to radioactive decays of Radon-220. One can see that data in the efficiency turn on (right around 2 keV) is modeled quite well and no excesses are seen.

However, the possibility of a novel Standard Model background is much more plausible. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to Helium-3, an electron, and a neutrino with a half-life of around 12 years. The energy released in this decay is 18.6 keV, giving the electron an average energy of a few keV. The expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small. Around 100 parts-per-billion of H2 would lead to enough tritium to explain the signal, which translates to just 3 tritium atoms per kilogram of liquid Xenon. The collaboration tries their best to investigate this possibility, but they can neither rule out nor confirm such a small amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by 2 orders of magnitude, so it is not impossible that they were able to avoid this small amount of contamination.
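For a feel for why the tritium spectrum lines up with a low-energy excess, here is a simplified sketch of the allowed beta-decay spectrum shape (it neglects the nuclear Fermi function and any detector effects, so it is only a rough illustration, not the collaboration’s background model):

```python
import numpy as np

M_E = 511.0   # electron mass, keV
Q   = 18.6    # tritium endpoint energy, keV

def tritium_spectrum(T):
    """Unnormalized dN/dT for electron kinetic energy T (keV), 0 < T < Q,
    using the allowed-decay shape p * E_total * (Q - T)^2."""
    p = np.sqrt(T**2 + 2.0 * T * M_E)   # electron momentum
    return p * (T + M_E) * (Q - T) ** 2

T = np.linspace(1e-3, Q, 2000)
w = tritium_spectrum(T)
print(np.sum(T * w) / np.sum(w))   # mean kinetic energy ~6 keV, i.e. 'a few keV'
```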

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down exactly, based on the data alone. The analysis was specifically searching for two signals that would have shown up in exactly this energy range: axions produced in the sun, and neutrinos produced in the sun interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation being slightly preferred. However, since this result was released, many have pointed out that these models would actually be in conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would have given stars an additional way to release energy, causing them to cool at a faster rate than in the Standard Model. The strength of the interaction between axions and electrons needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. There are similar astrophysical constraints on neutrino magnetic moments that also make that explanation unlikely.

This has left the door open for theorists to try to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing this opportunity! There are new explanations appearing on the arXiv every day, with no sign of stopping. In the roughly 2 weeks between XENON1T announcing their result and this post being written, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or being boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun with this, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing for the XENONnT experiment, which will serve as a follow-up to XENON1T. XENONnT will feature a larger active volume of Xenon and a lower background level, allowing them to potentially confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow them to better determine the shape of the signal, allowing them to possibly distinguish between the tritium shape and a potential new physics explanation. If real, other liquid Xenon experiments like LUX and PandaX should also be able to independently confirm the signal in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

LHCb’s Flavor Mystery Deepens

Title: Measurement of CP-averaged observables in the B0 → K*0 µ+ µ− decay

Authors: LHCb Collaboration

Reference: https://arxiv.org/abs/2003.04831

In the Standard Model, matter is organized into 3 generations: 3 copies of the same family of particles, each copy sequentially heavier than the last. Though the Standard Model can successfully describe this structure, it offers no insight into why nature should be this way. Many believe that a more fundamental theory of nature would better explain where this structure comes from. A natural way to look for clues to this deeper origin is to check whether these different ‘flavors’ of particles really behave in exactly the same ways, or if there are subtle differences that may hint at their origin.

The LHCb experiment is designed to probe these types of questions. And in recent years, it has seen a series of anomalies, tensions between data and Standard Model predictions, that may be indicating the presence of new particles which talk to the different generations. In the Standard Model, the different generations can only interact with each other through the W boson, which means that quarks with the same charge can only transition into one another through more complicated processes like those described by ‘penguin diagrams’.

The so called ‘penguin diagrams’ describe how rare decays like bottom quark → strange quark can happen in the Standard Model. The name comes from both their shape and a famous bar bet. Who says physicists don’t have a sense of humor?

These processes typically have quite small rates in the Standard Model, meaning they can be quite sensitive to new particles, even if those particles are very heavy or interact very weakly with the SM ones. This makes studying these sorts of flavor decays a promising avenue to search for new physics.

In a press conference last month, LHCb unveiled a new measurement of the angular distribution of the rare B0→K*0μ+μ– decay. The interesting part of this process involves a b → s transition (a bottom quark decaying into a strange quark), where a number of anomalies have been seen in recent years.

Feynman diagrams of the decay being studied. A B meson (composed of a bottom and a down quark) decays into a Kaon (composed of a strange quark and a down quark) and a pair of muons. Because this decay is very rare in the Standard Model (left diagram), it could be a good place to look for the effects of new particles (right diagram). Diagrams taken from here

Rather than just measuring the total rate of this decay, this analysis focuses on measuring the angular distribution of the decay products. They also perform this measurement in different bins of q^2, the square of the dimuon pair’s invariant mass. These choices make the measurement less sensitive to uncertainties in the Standard Model prediction due to difficult-to-compute hadronic effects. They also allow the possibility of better characterizing the nature of whatever particle may be causing a deviation.

The kinematics of the decay are fully described by q^2 together with 3 angles between the final state particles. Based on the known spins and polarizations of each of the particles, the angular distribution can be fully described in terms of 8 parameters. They also have to account for the angular distribution of background events, and for distortions of the true angular distribution caused by the detector. Once all such effects are accounted for, they are able to fit the full angular distribution in each q^2 bin to extract the angular coefficients in that bin.

This measurement is an update to their 2015 result, now with twice as much data. The previous result saw an intriguing tension with the SM at the level of roughly 3 standard deviations. The new result agrees well with the previous one, and mildly increases the tension to the level of 3.4 standard deviations.

LHCb’s measurement of P’5, an observable describing one part of the angular distribution of the decay. The orange boxes show the SM prediction of this value, and the red, blue, and black points show LHCb’s most recent measurement (a combination of its ‘Run 1’ measurement and the more recent 2016 data). The grey regions are excluded from the measurement because they have large backgrounds from the decays of other mesons.

This latest result is even more interesting given that LHCb has seen an anomaly in another measurement (the R_K anomaly) involving the same b → s transition. This has led some to speculate that both effects could be caused by a single new particle. The most popular idea is a so-called ‘leptoquark’ that only interacts with some of the flavors.

LHCb is already hard at work updating this measurement with more recent data from 2017 and 2018, which should once again double the number of events. Updates to the R_K measurement with new data are also hotly anticipated. The Belle II experiment has also recently started taking data and should be able to perform similar measurements. So we will have to wait and see if this anomaly is just a statistical fluke, or our first window into physics beyond the Standard Model!

Read More:

Symmetry Magazine “The mystery of particle generations”

Cern Courier “Anomalies persist in flavour-changing B decays”

Lecture Notes “Introduction to Flavor Physics”

Making Smarter Snap Judgments at the LHC

Collisions at the Large Hadron Collider happen fast. 40 million times a second, bunches of 10¹¹ protons are smashed together. The rate of these collisions is so high that the computing infrastructure of the experiments can’t keep up with all of them. We are not able to read out and store the result of every collision that happens, so we have to ‘throw out’ nearly all of them. Luckily, most of these collisions are not very interesting anyways. Most of them are low energy interactions of quarks and gluons via the strong force that have already been studied at previous colliders. In fact, the interesting processes, like the ones that create a Higgs boson, can happen billions of times less often than the uninteresting ones.

The LHC experiments are thus faced with a very interesting challenge: how do you decide extremely quickly whether an event is interesting and worth keeping or not? This is what the ‘trigger’ systems, the Marie Kondo of LHC experiments, are designed to do. CMS, for example, has a two-tiered trigger system. The first level has 4 microseconds to make a decision and must reduce the event rate from 40 million events per second to 100,000. This speed requirement means the decision has to be made at the hardware level, requiring specialized electronics to quickly synthesize the raw information from the detector into a rough idea of what happened in the event. Selected events are then passed to the High Level Trigger (HLT), which has 150 milliseconds to run versions of the CMS reconstruction algorithms to further reduce the event rate to a thousand per second.
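To put those rejection factors in perspective, a quick tally using the round numbers quoted above (illustrative arithmetic only):

```python
collision_rate = 40_000_000   # bunch crossings per second delivered by the LHC
after_level1   = 100_000      # events per second kept by the hardware trigger
after_hlt      = 1_000        # events per second written out by the HLT

print(collision_rate / after_level1)   # ~400x rejection, decided in ~4 microseconds
print(after_level1 / after_hlt)        # another ~100x, with ~150 ms per event
print(collision_rate / after_hlt)      # overall: only 1 in ~40,000 collisions is kept
```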

While this system works very well for most uses of the data, like measuring the decay of Higgs bosons, sometimes it can be a significant obstacle. If you want to look through the data for evidence of a new particle that is relatively light, it can be difficult to prevent the trigger from throwing out possible signal events. This is because one of the most basic criteria the trigger uses to select ‘interesting’ events is that they leave a significant amount of energy in the detector. But the decay products of a new particle that is relatively light won’t have a substantial amount of energy and thus may look ‘uninteresting’ to the trigger.

In order to get the most out of their collisions, experimenters are thinking hard about these problems and devising new ways to look for signals the triggers might be missing. One idea is to save additional events from the HLT in a substantially reduced format. Rather than saving the raw information from the event, which can be fully processed at a later time, only the output of the quick reconstruction done by the trigger is saved. At the cost of some precision, this can reduce the size of each event by roughly two orders of magnitude, allowing events with significantly lower energy to be stored. CMS and ATLAS have used this technique to look for new particles decaying to two jets, and LHCb has used it to look for dark photons. The use of these fast reconstruction techniques allows them to search for, and rule out the existence of, particles with much lower masses than otherwise possible. As experiments explore new computing infrastructures (like GPU’s) to speed up their high level triggers, they may try to do even more sophisticated analyses using these techniques.

But experimenters aren’t just satisfied with getting more out of their high level triggers, they want to revamp the low-level ones as well. In order to get these hardware-level triggers to make smarter decisions, experimenters are trying to get them to run machine learning models. Machine learning has become a very popular tool for looking for rare signals in LHC data. One of the advantages of machine learning models is that once they have been trained, they can make complex inferences in a very short amount of time. Perfect for a trigger! Now a group of experimentalists have developed a library that can translate the most popular types of machine learning models into a format that can be run on the Field Programmable Gate Arrays used in the lowest level triggers. This would allow experiments to quickly identify events from rare signals with complex signatures that the current low-level triggers don’t have time to look for.

The LHC experiments are working hard to get the most out of their collisions. There could be particles being produced in LHC collisions already that we haven’t been able to see because of our current triggers; these new techniques are trying to cover those blind spots. Look out for new ideas on how to quickly search for interesting signatures, especially as we get closer to the high luminosity upgrade of the LHC.

Read More:

CERN Courier article on programming FPGA’s

IRIS HEP Article on a recent workshop on Fast ML techniques

CERN Courier article on older CMS search for low mass dijet resonances

ATLAS Search using ‘trigger-level’ jets

LHCb Search for Dark Photons using fast reconstruction based on a high level trigger

Paper demonstrating the feasibility of running ML models for jet tagging on FPGA’s