Machine Learning The LHC ABC’s

Article Title: ABCDisCo: Automating the ABCD Method with Machine Learning

Authors: Gregor Kasieczka, Benjamin Nachman, Matthew D. Schwartz, David Shih

Reference: arXiv:2007.14400

When LHC experiments look for the signatures of new particles in their data, they always apply a series of selection criteria to the recorded collisions. The selections pick out events that look similar to the sought-after signal. Often they then compare the observed number of events passing these criteria to the number they would expect from ‘background’ processes. If they see many more events in real data than the predicted background, that is evidence of the sought-after signal. Crucial to the whole endeavor is being able to accurately estimate the number of events background processes would produce. Underestimate it and you may incorrectly claim evidence of a signal; overestimate it and you may miss the chance to find a highly sought-after signal.

However, it is not always so easy to estimate the expected number of background events. While LHC experiments do have high-quality simulations of the Standard Model processes that produce these backgrounds, they aren’t perfect. In particular, processes involving the strong force (aka Quantum Chromodynamics, QCD) are very difficult to simulate, and refining these simulations is an active area of research. Because of these deficiencies we don’t always trust background estimates based solely on these simulations, especially when applying very specific selection criteria.

Therefore experiments often employ ‘data-driven’ methods, where they estimate the number of background events using control regions in the data. One of the most widely used techniques is called the ABCD method.

An illustration of the ABCD method. The signal region, A, is defined as the region in which f and g are both greater than some value. The amount of background in region A is estimated using regions B, C, and D, which are dominated by background.

The ABCD method can be applied if the selection of signal-like events involves two independent variables f and g. If one defines the ‘signal region’, A (the part of the data in which we are looking for a signal), as having f and g each greater than some threshold, then one can use the neighboring regions B, C, and D to estimate the amount of background in region A. If the number of signal events outside region A is small, the number of background events in region A can be estimated as N_A = N_B * (N_C/N_D).
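
As a concrete (and entirely hypothetical) illustration, here is a minimal sketch of that arithmetic. The event counts below are made up just to show how the estimate and its simple Poisson uncertainty come out:

```python
import numpy as np

# Hypothetical event counts in the control regions (made-up numbers,
# purely to illustrate the arithmetic of the ABCD method).
n_b, n_c, n_d = 400.0, 250.0, 1000.0

# ABCD prediction for the background in the signal region A,
# valid when f and g are independent for background events.
n_a_pred = n_b * (n_c / n_d)

# Rough statistical uncertainty from propagating Poisson errors.
rel_err = np.sqrt(1.0 / n_b + 1.0 / n_c + 1.0 / n_d)
print(f"Predicted background in A: {n_a_pred:.1f} +/- {n_a_pred * rel_err:.1f}")
```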

In modern analyses, often one of these selection requirements involves the score of a neural network trained to identify the sought-after signal. Because neural networks are powerful learners, one has to be careful that they don’t accidentally learn about the other variable that will be used in the ABCD method, such as the mass of the signal particle. If the two variables become correlated, a background estimate with the ABCD method will not be possible. This often means augmenting the neural network, either during training or after the fact, so that it is intentionally ‘de-correlated’ with respect to the other variable. While there are several known techniques to do this, it is still a tricky process, and good background estimates often come with a trade-off of reduced classification performance.

In this latest work the authors devise a way to have the neural networks help with the background estimate rather than hinder it. The idea is that rather than training a single network to classify signal-like events, they simultaneously train two networks, both trying to identify the signal. During this training they use a groovy technique called ‘DisCo’ (short for Distance Correlation) to ensure that the outputs of the two networks are independent of each other. This forces the networks to learn to use independent information to identify the signal, which then allows the two network outputs to be used in an ABCD background estimate quite easily.
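
The full training setup is in the paper, but the key ingredient can be sketched: a differentiable distance-correlation penalty between the two classifier outputs, added to the usual classification loss. The snippet below is a minimal sketch in PyTorch (an assumption; the choice of `disco_weight` and the detail of applying the penalty on background events only follow the spirit of the paper rather than its exact recipe):

```python
import torch
import torch.nn.functional as F

def distance_correlation(x, y):
    """Squared sample distance correlation between two 1D batches of scores."""
    a = torch.abs(x.unsqueeze(0) - x.unsqueeze(1))  # pairwise |x_i - x_j|
    b = torch.abs(y.unsqueeze(0) - y.unsqueeze(1))
    # Double-center the distance matrices.
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
    dcov2_xy = (A * B).mean()
    dcov2_xx = (A * A).mean()
    dcov2_yy = (B * B).mean()
    return dcov2_xy / torch.sqrt(dcov2_xx * dcov2_yy + 1e-12)

def double_disco_loss(out1, out2, labels, disco_weight=10.0):
    """Classification loss for both networks plus a decorrelation penalty.
    out1, out2: sigmoid outputs of the two networks, shape (batch,).
    labels: 1 for signal, 0 for background, shape (batch,)."""
    bce = F.binary_cross_entropy(out1, labels) + F.binary_cross_entropy(out2, labels)
    bkg = labels < 0.5  # decorrelate on background events, as needed for ABCD
    return bce + disco_weight * distance_correlation(out1[bkg], out2[bkg])
```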

The authors try out this new technique, dubbed ‘Double DisCo’, on several examples. They demonstrate that they are able to obtain quality background estimates using the ABCD method while achieving great classification performance. They show that this method improves upon the previous state-of-the-art technique (called ‘Single DisCo’ here) of decorrelating a single network from a fixed variable like mass and using cuts on the mass and classifier score to define the ABCD regions.

Using the task of identifying jets containing boosted top quarks, they compare the classification performance (x-axis) and the quality of the ABCD background estimate (y-axis) achievable with the new Double DisCo technique (yellow points) and the previous state of the art, Single DisCo (blue points). One can see that the Double DisCo method is able to achieve higher background rejection with a similar or better amount of ABCD closure.

While there have been many papers over the last few years about applying neural networks to classification tasks in high energy physics, not many have thought about how to use them to improve background estimates as well. Because of their importance, background estimates are often the most time-consuming part of a search for new physics. So this technique is both interesting and immediately practical for searches done with LHC data. Hopefully it will be put to use in the near future!

Further Reading:

Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”

Recent ATLAS Summary on New Machine Learning Techniques “Machine learning qualitatively changes the search for new particles”

CERN Tutorial on “Background Estimation with the ABCD Method”

Summary Paper of Previous Decorrelation Techniques used in ATLAS “Performance of mass-decorrelated jet substructure observables for hadronic two-body decay tagging in ATLAS”

The XENON1T Excess: The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world’s most sensitive dark matter experiments. The experiment consists of a huge tank of xenon placed deep underground in the Gran Sasso laboratory in Italy. It is a ‘direct-detection’ experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMPs, Weakly Interacting Massive Particles, which used to be everyone’s favorite candidate for dark matter. However, given recent null results from WIMP-hunting direct-detection experiments and from collider experiments at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, which were designed to look for heavy WIMPs colliding with xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles that are much lighter than traditional WIMPs would not leave much of an impact on the large xenon nuclei, but they can leave a signal in the detector if they instead scatter off the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity. This extraordinary sensitivity is due to the experiment’s exquisite control over backgrounds and its extremely low energy threshold for detection. What has gotten many physicists excited is that the latest data show an excess of events above the expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim ‘evidence’ of an anomaly but short of the typical 5 sigma required to claim a discovery.

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is nothing. 3.5 sigma is not enough evidence to claim a discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal, and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment’s energy detection threshold. The so-called ‘efficiency turn-on’, the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe this is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note that the background line is flat below the excess), and the excess rises above this flat background. So to explain this excess, the efficiency would somehow have to be better at low energies than at high energies, which seems very unlikely. Or there would have to be a very strange, unaccounted-for bias where some higher-energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration performed an electron reconstruction calibration using the radioactive decays of radon-220 over exactly this energy range and were able to model the turn-on and detection efficiency very well.

Results of a calibration using radioactive decays of radon-220. One can see that the data in the efficiency turn-on region (right around 2 keV) are modeled quite well and no excesses are seen.

However, the possibility of a novel Standard Model background is much more plausible. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to helium-3, an electron, and an antineutrino with a half-life of around 12 years. The energy released in this decay is 18.6 keV, giving the electron an average energy of a few keV. The expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small. Around 100 parts per billion of H2 would lead to enough tritium to explain the signal, which translates to just 3 tritium atoms per kilogram of liquid xenon. The collaboration tries their best to investigate this possibility, but they can neither rule out nor confirm such a small amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by two orders of magnitude, so it is not impossible that they were able to avoid this small amount of contamination.

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down exactly based on the data. The analysis was specifically searching for two signals that would have shown up in exactly this energy range: axions produced in the sun, and solar neutrinos interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation being slightly preferred. However, since this result was released, many have pointed out that these models would actually be in conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would have given stars an additional way to release energy, causing them to cool at a faster rate than in the Standard Model. The strength of interaction between axions and electrons needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. There are similar astrophysical constraints on neutrino magnetic moments that also make that explanation unlikely.

This has left the door open for theorists to try to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing this opportunity! There are new explanations appearing on the arXiv every day, with no sign of stopping. In the roughly two weeks between XENON1T’s announcement and the writing of this post, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or being boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun with this, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing the XENONnT experiment, which will serve as a follow-up to XENON1T. XENONnT will feature a larger active volume of xenon and a lower background level, allowing it to potentially confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow them to better determine the shape of the signal, possibly allowing them to distinguish between the tritium shape and a potential new physics explanation. If real, other liquid xenon experiments like LZ and PandaX should also be able to independently confirm the signal in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

Listening for axions

If dark matter actually consists of a new kind of particle, then the most up-and-coming candidate is the axion. The axion is a consequence of the Peccei-Quinn mechanism, a plausible solution to the “strong CP problem,” i.e. why the strong nuclear force conserves CP symmetry even though there is no reason for it to. It is a very light neutral boson, named by Frank Wilczek after a detergent brand (in a move that obviously dates its introduction to the ’70s).

Axion conversion in a magnetic field: the result is a photon. (Source.)

Most experiments that try to directly detect dark matter have looked for WIMPs (weakly interacting massive particles). However, as those searches have not borne fruit, the focus started turning to axions, which make for good candidates given their properties and the fact that if they exist, then they exist in multitudes throughout the galaxies. Axions “speak” to the QCD part of the Standard Model, so they can appear in interaction vertices with hadronic loops. The end result is that axions passing through a magnetic field will convert to photons.

In practical terms, their detection boils down to having strong magnets, sensitive electronics and an electromagnetically very quiet place at one’s disposal. One can then sit back and wait for the hypothesized axions to pass through the detector as the Earth moves through the dark matter halo surrounding the Milky Way. Which is precisely why such experiments are known as “haloscopes.”

Now, the most veteran haloscope of all has published significant new results. Alas, it is still empty-handed, but we can look at why its update is important and how it was reached.

ADMX (Axion Dark Matter eXperiment) of the University of Washington has been around for a quarter-century. By listening for signals from axions, it progressively gnaws away at the space of allowed values for their mass and coupling to photons, focusing on an area of interest:

Latest exclusion limits on the axion mass and coupling to photons.

Unlike higher values, this area is not excluded by astrophysical considerations (e.g. stars cooling off through axion emission) and other types of experiments (such as looking for axions from the sun). In addition, the bands above the lines denoted “KSVZ” and “DFSZ” are special. They correspond to the predictions of two models with favorable theoretical properties. So, ADMX is dedicated to scanning this parameter space. And the new analysis added one more year of data-taking, making a significant dent in this ballpark.

As mentioned, the presence of axions would be inferred from a stream of photons in the detector. The excluded mass range was scanned by “tuning” the experiment to different frequencies, while at each frequency step longer observation times probed smaller values for the axion-photon coupling.
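
The mapping between the scanned frequency and the axion mass is just the photon's energy, f = m c^2 / h. A quick back-of-the-envelope check (round numbers, with the axion mass assumed to sit in the micro-eV ballpark that haloscopes target):

```python
# Convert an assumed axion mass to the corresponding photon / cavity frequency.
eV_to_Hz = 2.418e14            # 1 eV of photon energy corresponds to ~2.418e14 Hz (E = h f)

m_axion_eV = 3e-6              # assume a 3 micro-eV axion (illustrative value)
frequency_Hz = m_axion_eV * eV_to_Hz
print(f"Cavity frequency: {frequency_Hz / 1e6:.0f} MHz")   # roughly 725 MHz
```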

Two things that this search needs are a lot of quiet and some good amplification, as the signal from a typical axion is expected to be as weak as the signal from a mobile phone left on the surface of Mars (around 10⁻²³ W). The setup is indeed stripped of noise by being placed in a dilution refrigerator, which keeps its temperature at a few tenths of a degree above absolute zero. This is practically the domain governed by quantum noise, so advantage can be taken of the finesse of quantum technology: for the first time ADMX used SQUIDs, superconducting quantum interference devices, for the amplification of the signal.

The heart of the experiment inside the refrigerator. The resonant frequency of the cavity is tuned to match the photons (hopefully) given off by axions. (Source.)

In the end, another good chunk of the parameter space favored by theory has been excluded, but the haloscope is ready to look at the rest of it. Just think of how, one day, a pulse inside a small device in a university lab might be a messenger of the mysteries unfolding across the cosmos.

References:

Publication by the ADMX collaboration. (arXiv)

Learn more:

  1. The theory behind axions.
  2. The hitchhiker’s guide to the dilution refrigerator.
  3. Intro to KSVZ and DFSZ axions (and more).
  4. Resonant cavities.

Three Birds with One Particle: The Possibilities of Axions

Title: “Axiogenesis”

Authors: Raymond T. Co and Keisuke Harigaya

Reference: https://arxiv.org/pdf/1910.02080.pdf

On the laundry list of problems in particle physics, a rare three-for-one solution could come in the form of a theorized light pseudoscalar particle fittingly named after a detergent: the axion. Frank Wilczek coined this term in reference to its potential to “clean up” the Standard Model once he realized its applicability to multiple unsolved mysteries. Although Axion the dish soap has been somewhat phased out of our everyday consumer life (being now primarily sold in Latin America), axion particles remain a key component of a physicist’s toolbox. While axions get a lot of hype as a promising dark matter candidate, and are now being considered as a solution to the matter-antimatter asymmetry, they were originally proposed as a solution for a different Standard Model puzzle: the strong CP problem.

The strong CP problem refers to a peculiarity of quantum chromodynamics (QCD), our theory of quarks, gluons, and the strong force that binds them: while the theory permits charge-parity (CP) symmetry violation, the ardent experimental search for CP-violating processes in QCD has so far come up empty-handed. What does this mean from a physical standpoint? Consider the neutron electric dipole moment (eDM), which roughly describes the distribution of the three quarks comprising a neutron. Naively, we might expect this arrangement to be a triangular one. However, measurements of the neutron eDM, carried out by tracking changes in neutron spin precession, return a value orders of magnitude smaller than classically expected. In fact, the incredibly small value of this parameter corresponds to a neutron in which the three quarks are found nearly in a line.

The classical picture of the neutron (left) looks markedly different from the picture necessitated by CP symmetry (right). The strong CP problem is essentially a question of why our mental image should look like the right picture instead of the left. Source: https://arxiv.org/pdf/1812.02669.pdf

This would not initially appear to be a problem. In fact, in the context of CP, this makes sense: a simultaneous charge conjugation (exchanging positive charges for negative ones and vice versa) and parity inversion (flipping the sign of spatial directions) when the quark arrangement is linear results in a symmetry. Yet there are a few subtleties that point to the existence of further physics. First, this tiny value requires an adjustment of parameters within the mathematics of QCD, carefully fitting some coefficients to cancel out others in order to arrive at the desired conclusion. Second, we do observe violation of CP symmetry in particle physics processes mediated by the weak interaction, such as kaon decay, which also involves quarks. 

These arguments rest upon the idea of naturalness, a principle that has been invoked successfully several times throughout the development of particle theory as a hint toward the existence of a deeper, more underlying theory. Naturalness (in one of its forms) states that such minuscule values are only allowed if they increase the overall symmetry of the theory, something that cannot be true if weak processes exhibit CP-violation where strong processes do not. This puts the strong CP problem squarely within the realm of “fine-tuning” problems in physics; although there is no known reason for CP symmetry conservation to occur, the theory must be modified to fit this observation. We then seek one of two things: either an observation of CP-violation in QCD or a solution that sets the neutron eDM, and by extension any CP-violating phase within our theory, to zero.

The θ term in the QCD Lagrangian allows for CP symmetry violation. Current measurements place the value of θ at no greater than 10⁻¹⁰. In the Peccei-Quinn theory, θ is promoted to a field.
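
For reference, the term in question is the topological gluon term, which in standard notation (conventions for the prefactor vary) reads

\mathcal{L}_{\theta} \;=\; \theta \, \frac{g_s^{2}}{32\pi^{2}} \, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu},

where G^{a}_{\mu\nu} is the gluon field strength and \tilde{G} its dual; any nonzero θ violates CP.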

When such an expected symmetry violation is nowhere to be found, where is a theoretician to look for a solution? The most straightforward answer is to turn to a new symmetry. This is exactly what Roberto Peccei and Helen Quinn did in 1977, birthing the Peccei-Quinn symmetry, an extension of QCD built around the CP-violating phase known as the θ term. The main idea behind this theory is to promote θ to a dynamical field, rather than keeping it a constant. Since quantum fields have associated particles, this also yields the particle we dub the axion. Looking back briefly to the neutron eDM picture of the strong CP problem, this means that the angular separation should also be dynamical, and hence relax to the minimum energy configuration: the quarks again all in a straight line. In the language of symmetries, the U(1) Peccei-Quinn symmetry is approximately spontaneously broken, giving us a non-zero vacuum expectation value and a nearly massless Goldstone boson: our axion.

This is all great, but what does it have to do with dark matter? As it turns out, axions make for an especially intriguing dark matter candidate due to their low mass and potential to be produced in large quantities. For decades, this prowess was overshadowed by the leading WIMP candidate (weakly-interacting massive particles), whose parameter space has been slowly whittled down to the point where physicists are more seriously turning to alternatives. As there are several production mechanisms for axions in early-universe cosmology, and 100% of the dark matter abundance could be explained through this generation, the axion is now stepping into the spotlight.

This increased focus is causing some theorists to turn to further avenues of physics as possible applications for the axion. In a recent paper, Co and Harigaya examined the connection between this versatile particle and matter-antimatter asymmetry (also called baryon asymmetry). This latter term refers to the simple observation that there appears to be more matter than antimatter in our universe, since we are predominantly composed of matter, yet matter and antimatter also seem to be produced in colliders in equal proportions. In order to explain this asymmetry, without which matter and antimatter would have annihilated and we would not exist, physicists look for any mechanism to trigger an imbalance in these two quantities in the early universe. This theorized process is known as baryogenesis.

Here’s where the axion might play a part. The θ term, which settles to zero in its possible solution to the strong CP problem, could also have taken on any value from 0 to 360 degrees very early on in the universe. Analyzing the axion field through the conjectures of quantum gravity, if there are no global symmetries then the initial axion potential cannot be symmetric [4]. By falling from some initial value through an uneven potential, which the authors describe as a wine bottle potential with a wiggly top, θ would cycle several times through the allowed values before settling at its minimum energy value of zero. This causes the axion field to rotate, an asymmetry which could generate a disproportionality between the amounts of produced matter and antimatter. If the field were to rotate in one direction, we would see more matter than antimatter, while a rotation in the opposite direction would result instead in excess antimatter.

The team’s findings can be summarized in the plot above. Regions in purple, red, and above the orange lines (dependent upon a particular constant ξ which is proportional to weak-scale quantities) signify excluded portions of the parameter space. The remaining white space shows values of the axion decay constant and mass where the currently measured amount of baryon asymmetry could be generated. Source: https://arxiv.org/pdf/1910.02080.pdf

Introducing a third fundamental mystery into the realm of axions raises the question of whether all three problems (strong CP, dark matter, and matter-antimatter asymmetry) can be solved simultaneously with axions. And, of course, there are nuances that could make alternative solutions to the strong CP problem more favorable or other dark matter candidates more likely. Like most theorized particles, there are several formulations of the axion in the works. It is then necessary to turn our attention to experiment to narrow down the possibilities for how axions could interact with other particles, determine what their mass could be, and answer the all-important question of whether they exist at all. Consequently, there are a plethora of axion-focused experiments up and running, with more on the horizon, that use a variety of methods spanning several subfields of physics. While these results begin to roll in, we can continue to investigate just how many problems we might be able to solve with one adaptable, soapy particle.

Learn More:

  1. A comprehensive introduction to the strong CP problem, the axion solution, and other potential solutions: https://arxiv.org/pdf/1812.02669.pdf 
  2. Axions as a dark matter candidate: https://www.symmetrymagazine.org/article/the-other-dark-matter-candidate
  3. More information on matter-antimatter asymmetry and baryogenesis: https://www.quantumdiaries.org/2015/02/04/where-do-i-come-from/
  4. The quantum gravity conjectures that axiogenesis builds upon: https://arxiv.org/abs/1810.05338
  5. An overview of current axion-focused experiments: https://www.annualreviews.org/doi/full/10.1146/annurev-nucl-102014-022120

LHCb’s Flavor Mystery Deepens

Title: Measurement of CP-averaged observables in the B⁰ → K*⁰ μ⁺μ⁻ decay

Authors: LHCb Collaboration

Reference: https://arxiv.org/abs/2003.04831

In the Standard Model, matter is organized into 3 generations: 3 copies of the same family of particles but with sequentially heavier masses. Though the Standard Model can successfully describe this structure, it offers no insight into why nature should be this way. Many believe that a more fundamental theory of nature would better explain where this structure comes from. A natural way to look for clues to this deeper origin is to check whether these different ‘flavors’ of particles really behave in exactly the same ways, or if there are subtle differences that may hint at their origin.

The LHCb experiment is designed to probe these types of questions. And in recent years, it has seen a series of anomalies, tensions between data and Standard Model predictions, that may be indicating the presence of new particles which talk to the different generations. In the Standard Model, the different generations can only interact with each other through the W boson, which means that transitions between quarks with the same charge can only occur through more complicated processes like those described by ‘penguin diagrams’.

The so called ‘penguin diagrams’ describe how rare decays like bottom quark → strange quark can happen in the Standard Model. The name comes from both their shape and a famous bar bet. Who says physicists don’t have a sense of humor?

These interactions typically have quite small rates in the Standard Model, meaning that they can be quite sensitive to new particles, even if those particles are very heavy or interact very weakly with the SM ones. This makes studying these sorts of flavor decays a promising avenue to search for new physics.

In a press conference last month, LHCb unveiled a new measurement of the angular distribution of the rare B⁰→K*⁰μ⁺μ⁻ decay. The interesting part of this process involves a b → s transition (a bottom quark decaying into a strange quark), where a number of anomalies have been seen in recent years.

Feynman diagrams of the decay being studied. A B meson (composed of a bottom quark and a down quark) decays into a kaon (composed of a strange quark and a down quark) and a pair of muons. Because this decay is very rare in the Standard Model (left diagram), it could be a good place to look for the effects of new particles (right diagram). Diagrams taken from here.

Rather than just measuring the total rate of this decay, this analysis focuses on measuring the angular distribution of the decay products. They also perform this measurement in different bins of q², the square of the dimuon pair’s invariant mass. These choices make the measurement less sensitive to uncertainties in the Standard Model prediction due to difficult-to-compute hadronic effects. This also allows the possibility of better characterizing the nature of whatever particle may be causing a deviation.

The kinematics of the decay are fully described by 3 angles between the final-state particles and q². Based on the known spins and polarizations of each of the particles, the angular distribution can be fully described in terms of 8 parameters. They also have to account for the angular distribution of background events, and for distortions of the true angular distribution caused by the detector. Once all such effects are accounted for, they are able to fit the full angular distribution in each q² bin to extract the angular coefficients in that bin.

This measurement is an update to their 2015 result, now with twice as much data. The previous result saw an intriguing tension with the SM at the level of roughly 3 standard deviations. The new result agrees well with the previous one, and mildly increases the tension to the level of 3.4 standard deviations.

LHCb’s measurement of P5′, an observable describing one part of the angular distribution of the decay. The orange boxes show the SM prediction of this value and the red, blue, and black points show LHCb’s most recent measurement (a combination of its ‘Run 1’ measurement and the more recent 2016 data). The grey regions are excluded from the measurement because they have large backgrounds from the decays of other mesons.

This latest result is even more interesting given that LHCb has seen an anomaly in another measurement (the R_K anomaly) involving the same b → s transition. This has led some to speculate that both effects could be caused by a single new particle. The most popular idea is a so-called ‘leptoquark’ that only interacts with some of the flavors.

LHCb is already hard at work on updating this measurement with more recent data from 2017 and 2018, which should once again double the number of events. Updates to the R_K measurement with new data are also hotly anticipated. The Belle II experiment has also recently started taking data and should be able to perform similar measurements. So we will have to wait and see if this anomaly is just a statistical fluke, or our first window into physics beyond the Standard Model!

Read More:

Symmetry Magazine “The mystery of particle generations”

Cern Courier “Anomalies persist in flavour-changing B decays”

Lecture Notes “Introduction to Flavor Physics”

Making Smarter Snap Judgments at the LHC

Collisions at the Large Hadron Collider happen fast. 40 million times a second, bunches of 10¹¹ protons are smashed together. The rate of these collisions is so high that the computing infrastructure of the experiments can’t keep up with all of them. We are not able to read out and store the result of every collision that happens, so we have to ‘throw out’ nearly all of them. Luckily most of these collisions are not very interesting anyway. Most of them are low-energy interactions of quarks and gluons via the strong force that have already been studied at previous colliders. In fact, the interesting processes, like ones that create a Higgs boson, can happen billions of times less often than the uninteresting ones.

The LHC experiments are thus faced with a very interesting challenge: how do you decide extremely quickly whether an event is interesting and worth keeping or not? This is what the ‘trigger’ system, the Marie Kondo of LHC experiments, is designed to do. CMS, for example, has a two-tiered trigger system. The first level has 4 microseconds to make a decision and must reduce the event rate from 40 million events per second to 100,000. This speed requirement means the decision has to be made at the hardware level, requiring the use of specialized electronics to quickly synthesize the raw information from the detector into a rough idea of what happened in the event. Selected events are then passed to the High Level Trigger (HLT), which has 150 milliseconds to run versions of the CMS reconstruction algorithms to further reduce the event rate to a thousand per second.
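
Just to put the numbers in the paragraph above together (these are the approximate rates quoted there, not official specifications):

```python
# Rough numbers from the text: bunch-crossing rate and trigger output rates.
bunch_crossing_rate = 40e6    # per second
level1_output_rate = 100e3    # per second, decided in ~4 microseconds per event
hlt_output_rate = 1e3         # per second, decided in ~150 milliseconds per event

print(f"L1 keeps roughly 1 in {bunch_crossing_rate / level1_output_rate:.0f} events")
print(f"HLT keeps roughly 1 in {level1_output_rate / hlt_output_rate:.0f} of those")
print(f"Overall, about {hlt_output_rate / bunch_crossing_rate:.4%} of collisions are stored")
```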

While this system works very well for most uses of the data, like measuring the decay of Higgs bosons, sometimes it can be a significant obstacle. If you want to look through the data for evidence of a new particle that is relatively light, it can be difficult to prevent the trigger from throwing out possible signal events. This is because one of the most basic criteria the trigger uses to select ‘interesting’ events is that they leave a significant amount of energy in the detector. But the decay products of a new particle that is relatively light won’t have a substantial amount of energy and thus may look ‘uninteresting’ to the trigger.

In order to get the most out of their collisions, experimenters are thinking hard about these problems and devising new ways to look for signals the triggers might be missing. One idea is to save additional events from the HLT in a substantially reduced format. Rather than saving the raw detector information, which can be fully processed at a later time, only the output of the quick reconstruction done by the trigger is saved. At the cost of some precision, this can reduce the size of each event by roughly two orders of magnitude, allowing events with significantly lower energy to be stored. CMS and ATLAS have used this technique to look for new particles decaying to two jets, and LHCb has used it to look for dark photons. The use of these fast reconstruction techniques allows them to search for, and rule out the existence of, particles with much lower masses than otherwise possible. As experiments explore new computing infrastructures (like GPUs) to speed up their high level triggers, they may try to do even more sophisticated analyses using these techniques.

But experimenters aren’t just satisfied with getting more out of their high level triggers, they want to revamp the low-level ones as well. In order to get these hardware-level triggers to make smarter decisions, experimenters are trying to get them to run machine learning models. Machine learning has become a very popular tool for looking for rare signals in LHC data. One of the advantages of machine learning models is that once they have been trained, they can make complex inferences in a very short amount of time. Perfect for a trigger! Now a group of experimentalists has developed a library that can translate the most popular types of machine learning models into a format that can be run on the Field Programmable Gate Arrays (FPGAs) used in the lowest-level triggers. This would allow experiments to quickly identify events from rare signals that have complex signatures that the current low-level triggers don’t have time to look for.
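
The library in question is hls4ml. Very roughly, and assuming its Keras-facing interface (the exact function names and options below are quoted from memory and may differ between versions), translating a small trained network into a synthesizable firmware project looks something like this; the network architecture shown is purely illustrative:

```python
import hls4ml
from tensorflow import keras

# A small dense network of the kind targeted for level-1 trigger FPGAs.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),
])

# ... train the model on labeled trigger-level features here ...

# Translate the trained model into an HLS project that can be synthesized
# into FPGA firmware using fixed-point arithmetic.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir="hls_trigger_model"
)
hls_model.compile()   # builds a bit-accurate software emulation of the firmware
```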

The LHC experiments are working hard to get the most out of their collisions. There could be new particles being produced in LHC collisions already that we haven’t been able to see because of our current triggers; these new techniques are trying to cover those blind spots. Look out for new ideas on how to quickly search for interesting signatures, especially as we get closer to the high-luminosity upgrade of the LHC.

Read More:

CERN Courier article on programming FPGAs

IRIS HEP Article on a recent workshop on Fast ML techniques

CERN Courier article on older CMS search for low mass dijet resonances

ATLAS Search using ‘trigger-level’ jets

LHCb Search for Dark Photons using fast reconstruction based on a high level trigger

Paper demonstrating the feasibility of running ML models for jet tagging on FPGAs

Hullabaloo Over The Hubble Constant

Title: The Expansion of the Universe is Faster than Expected

Author: Adam Riess

Reference: Nature, arXiv

There is a current crisis in the field of cosmology, and it may lead to our next breakthrough in understanding the universe. In the late 1990s, measurements of distant supernovae showed that, contrary to expectations at the time, the universe’s expansion was accelerating rather than slowing down. This implied the existence of a mysterious “dark energy” throughout the universe, propelling this accelerated expansion. Today, some people once again think that our measurements of the current expansion rate, the Hubble constant, are indicating that there is something about the universe we don’t understand.

The current cosmological standard model, called ΛCDM, is a phenomenological model describing all the contents of the universe. It includes regular visible matter, Cold Dark Matter (CDM), and dark energy. It is an extremely bare-bones model, assuming that dark matter interacts only gravitationally and that dark energy is just a simple cosmological constant (Λ) which gives a constant energy density to space itself. For the last 20 years this model has been rigorously tested, but new measurements might be beginning to show that it has some holes. Measurements of the early universe based on ΛCDM and extrapolated to today predict a different rate of expansion than what is currently being measured, and cosmologists are taking this war over the Hubble constant very seriously.

The Measurements

On one side of this Hubble controversy are measurements from the early universe. The most important of these is based on the Cosmic Microwave Background (CMB), light from the hot plasma of the Big Bang that has been traveling for billions of years directly to our telescopes. This light from the early universe is nearly uniform in temperature, but by analyzing the pattern of slightly hotter and colder spots, cosmologists can extract the 6 free parameters of ΛCDM. These parameters encode the relative amount of energy contained in regular matter, dark matter, and dark energy. Then, based on these parameters, they can infer what the current expansion rate of the universe should be. The current best measurements of the CMB come from the Planck collaboration, which can infer the Hubble constant with a precision of less than 1%.

The Cosmic Microwave Background (CMB). Blue spots are slightly colder than average and red spots are slightly hotter. By fitting a model to this data, one can determine the energy contents of the early universe.

On the other side of the debate are the late-universe (or local) measurements of the expansion. The most famous of these is based on a ‘distance ladder’, where several stages of measurements are used to calibrate the distances of astronomical objects. First, geometric properties are used to calibrate the brightness of pulsating stars (Cepheids). Cepheids are then used to calibrate the absolute brightness of exploding supernovae. The expansion rate of the universe can then be measured by relating the redshift (the amount the light from these objects has been stretched by the universe’s expansion) and the distance of these supernovae. This is the method that was used to discover dark energy in the 1990s and earned its pioneers a Nobel prize. As more data has been collected and techniques have been refined, the measurement’s precision has improved dramatically.
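
Once the supernova distances are calibrated, the last rung of the ladder is essentially Hubble's law, v = H0 × d. A toy illustration with made-up numbers:

```python
# Toy illustration of the last rung of the distance ladder: Hubble's law v = H0 * d.
# The numbers below are invented for illustration, not actual supernova data.
recession_velocity_km_s = 7300.0   # from the measured redshift of a host galaxy
distance_Mpc = 100.0               # from the calibrated supernova brightness

H0 = recession_velocity_km_s / distance_Mpc
print(f"H0 = {H0:.0f} km/s/Mpc")   # about 73 km/s/Mpc, close to the 'local' value
```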

In the last few years the tension between the two values of the Hubble constant has steadily grown. This has led cosmologists to scrutinize both sets of measurements very closely, but so far no flaws have been found. Both of these measurements are incredibly complex, and many cosmologists still assumed that there was some unknown systematic error in one of them that was the culprit. But recently, other measurements of both the early and late universe have started to weigh in, and they seem to agree with the Planck and distance ladder results. Currently the tension between the early and late measurements of the Hubble constant sits between 4 and 6 sigma, depending on which set of measurements you combine. While there are still many who believe there is something wrong with the measurements, others have started to take seriously the idea that this is pointing to a real issue with ΛCDM, and that there is something in the universe we don’t understand. In other words, new physics!

A comparison of the early-universe and late-universe measurements of the Hubble constant. Different combinations of measurements are shown for each. The tension is between 4 and 6 sigma, depending on which set of measurements you combine.

The Models

So what ideas have theorists put forward that can explain the disagreement? In general, theorists have actually had a hard time coming up with models that can explain this disagreement while not running afoul of the multitude of other cosmological data we have, but some solutions have been found. Two of the most promising approaches involve changing the composition of the universe just before the time the CMB was emitted.

The first of these is called Early Dark Energy. It is a phenomenological model that posits the existence of another type of dark energy, one that behaves similarly to a cosmological constant early in the universe but then fades away relatively quickly as the universe expands. This model is able to slightly improve Planck’s fit to the CMB data while changing the contents of the early universe enough to alter the predicted Hubble constant to be consistent with the local value. Critics of the model feel that its parameters had to be finely tuned for the solution to work. However, there has been some work on mimicking its success with a particle-physics-based model.

The other notable attempt at resolving the tension involves adding additional types of neutrinos and positing that neutrinos interact with each other much more strongly than in the Standard Model. This similarly changes the interpretation of the CMB measurements to predict a larger expansion rate. The authors also posit that this new physics in the neutrino sector may be related to current anomalies seen in neutrino physics experiments that are also lacking an explanation. However, follow-up work has shown that it is hard to reconcile such strongly self-interacting neutrinos with laboratory experiments and other cosmological probes.

The Future

At present the situation remains very unclear. Some cosmologists believe this is the end of ΛCDM, and others still believe there is an issue with one of the measurements. For those who believe new physics is the solution, there is no consensus about what the best model is. However, the next few years should start to clarify things. Other late-universe measurements of the Hubble constant, using gravitational lensing or even gravitational waves, should continue to improve their precision and could give skeptics greater confidence in the distance ladder result. Next-generation CMB experiments will eventually come online as well, and will offer greater precision than the Planck measurement. Theorists will probably come up with more possible resolutions, and point out additional measurements that can confirm or refute their models. For those hoping for a breakthrough in our understanding of the universe, this is definitely something to keep an eye on!

Read More

Quanta Magazine Article on the controversy 

Astrobites Article on Hubble Tension

Astrobites Article on using gravitational lensing to measure the Hubble Constant

The Hubble Hunters Guide

Letting the Machines Search for New Physics

Article: “Anomaly Detection for Resonant New Physics with Machine Learning”

Authors: Jack H. Collins, Kiel Howe, Benjamin Nachman

Reference: https://arxiv.org/abs/1805.02664

One of the main goals of the LHC experiments is to look for signals of physics beyond the Standard Model: new particles that may explain some of the mysteries the Standard Model doesn’t answer. The typical way this works is that theorists come up with a new particle that would solve some mystery and spell out how it interacts with the particles we already know about. Then experimentalists design a strategy for how to search for evidence of that particle in the mountains of data that the LHC produces. So far none of the searches performed in this way have seen any definitive evidence of new particles, leading experimentalists to rule out a lot of the parameter space of theorists’ favorite models.

A summary of searches the ATLAS collaboration has performed. The left columns show the model being searched for, the experimental signature looked at, and how much data has been analyzed so far. The colored bars show the regions that have been ruled out based on the null result of the search. As you can see, we have already covered a lot of territory.

Despite this extensive program of searches, one might wonder if we are still missing something. What if there was a new particle in the data, waiting to be discovered, but theorists haven’t thought of it yet so it hasn’t been looked for? This gives experimentalists a very interesting challenge: how do you look for something new when you don’t know what you are looking for? One approach, which Particle Bites has talked about before, is to look at as many final states as possible, compare what you see in data to simulation, and look for any large deviations. This is a good approach, but may be limited in its sensitivity to small signals. When a normal search for a specific model is performed, one usually makes a series of selection requirements on the data that are chosen to remove background events and keep signal events. Nowadays these selection requirements are getting more complex, often using neural networks, a common type of machine learning model, trained to discriminate signal from background. Without some sort of selection like this you may miss a smaller signal within the large number of background events.

This new approach lets the neural network itself decide what signal to  look for. It uses part of the data itself to train a neural network to find a signal, and then uses the rest of the data to actually look for that signal. This lets you search for many different kinds of models at the same time!

If that sounds like magic, let’s try to break it down. You have to assume something about the new particle you are looking for, and the technique here assumes it forms a resonant peak. This is a common assumption in searches. If a new particle were being produced in LHC collisions and then decaying, you would get an excess of events where the invariant mass of its decay products has a particular value. So if you plotted the number of events in bins of invariant mass, you would expect a new particle to show up as a nice peak on top of a relatively smooth background distribution. This is a very common search strategy, often colloquially referred to as a ‘bump hunt’. This strategy was how the Higgs boson was discovered in 2012.
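
A cartoon version of a bump hunt (not any experiment's actual analysis) is easy to sketch: fit a smooth function to the sidebands of the invariant-mass histogram and ask whether the counts in the search window exceed the fit's prediction. All numbers below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy invariant-mass spectrum (GeV): a smoothly falling background
# plus a small injected 'signal' peak at 125 GeV.
masses = np.concatenate([
    rng.exponential(scale=60.0, size=200_000) + 100.0,   # background
    rng.normal(loc=125.0, scale=2.0, size=600),          # signal
])

bins = np.arange(100.0, 180.0, 2.0)
counts, edges = np.histogram(masses, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit an exponential (a straight line in log-counts) to the sidebands only,
# excluding the 120-130 GeV search window.
sideband = (centers < 120.0) | (centers > 130.0)
slope, intercept = np.polyfit(centers[sideband], np.log(counts[sideband]), deg=1)
predicted = np.exp(slope * centers + intercept)

# Compare observed and predicted counts inside the search window.
window = ~sideband
excess = counts[window].sum() - predicted[window].sum()
print(f"Excess of {excess:.0f} events over an expected "
      f"{predicted[window].sum():.0f}, roughly "
      f"{excess / np.sqrt(predicted[window].sum()):.1f} sigma")
```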

A histogram showing the invariant mass of photon pairs. The Higgs boson shows up as a bump at 125 GeV. Plot from here

The other secret ingredient we need is the idea of Classification Without Labels (abbreviated CWoLa, pronounced like koala). The way neural networks are usually trained in high energy physics is with fully labeled simulated examples. The network is shown a set of examples and guesses which are signal and which are background. Using the true label of each event, the network is told which of the examples it got wrong, its parameters are updated accordingly, and it slowly improves. The crucial challenge when trying to train on real data is that we don’t know the true label of any of the data, so it’s hard to tell the network how to improve. Rather than trying to use the true labels of any of the events, the CWoLa technique uses mixtures of events. Let’s say you have two mixed samples of events, sample A and sample B, but you know that sample A has more signal events in it than sample B. Then, instead of trying to classify signal versus background directly, you can train a classifier to distinguish between events from sample A and events from sample B, and what that network will learn to do is distinguish between signal and background. You can actually show that the optimal classifier for distinguishing the two mixed samples is the same as the optimal classifier of signal versus background. Even more amazing, this technique works quite well in practice, achieving good results even when there is only a few percent of signal in one of the samples.

An illustration of the CWoLa method. A classifier trained to distinguish between two mixed samples of signal and background events can learn to classify signal versus background. Taken from here.

The technique described in the paper combines these two ideas in a clever way. Because we expect the new particle to show up in a narrow region of invariant mass, you can use some of your data to train a classifier to distinguish events in a given slice of invariant mass from other events. If there is no signal with a mass in that region, the classifier should essentially learn nothing, but if there is a signal in that region, the classifier should learn to separate signal and background. Then one can apply that classifier to select events in the rest of the data (which hasn’t been used in the training) and look for a peak that would indicate a new particle. Because you don’t know ahead of time what mass any new particle should have, you scan over the whole range you have sufficient data for, looking for a new particle in each slice.
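
A stripped-down sketch of one step of that scan (using scikit-learn and generic feature arrays; this is not the paper's actual network, dataset, or mass values): label events inside the mass window as ‘1’, events in the neighboring sidebands as ‘0’, and train on those noisy labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_cwola_classifier(features, mass, window=(3300.0, 3700.0), sideband_width=400.0):
    """Train a classifier to separate events inside a dijet-mass window from
    events in the neighboring sidebands, using only substructure-like features.
    If a resonance sits in the window, the network implicitly learns to
    separate signal from background, without any truth labels."""
    lo, hi = window
    in_window = (mass > lo) & (mass < hi)
    in_sideband = ((mass > lo - sideband_width) & (mass < lo)) | \
                  ((mass > hi) & (mass < hi + sideband_width))

    X = np.concatenate([features[in_window], features[in_sideband]])
    y = np.concatenate([np.ones(in_window.sum()), np.zeros(in_sideband.sum())])

    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
    clf.fit(X, y)
    return clf

# Usage: score a statistically independent set of events, keep only the
# highest-scoring ones, and bump hunt in their invariant-mass distribution
# as in the toy bump-hunt sketch above; then repeat for the next mass window.
```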

The specific case that they use to demonstrate the power of this technique is new particles decaying to pairs of jets. On the surface, jets, the large sprays of particles produced when a quark or gluon is made in an LHC collision, all look the same. But the insides of jets, their sub-structure, can contain very useful information about what kind of particle produced them. If a new particle is produced and decays into other particles, like top quarks, W bosons or some new BSM particle, before decaying into quarks, then there will be a lot of interesting sub-structure in the resulting jet, which can be used to distinguish it from regular jets. In this paper the neural network uses information about the sub-structure of both of the jets in an event to determine if the event is signal-like or background-like.

The authors test out their new technique on a simulated dataset, containing some events where a new particle is produced and a large number of QCD background events. They train a neural network to distinguish events in a window of invariant mass of the jet pair from other events. With no selection applied there is no visible bump in the dijet invariant mass spectrum. With their technique they are able to train a classifier that can reject enough background such that a clear mass peak of the new particle shows up. This shows that you can find a new particle without relying on searching for a particular model, allowing you to be sensitive to particles overlooked by existing searches.

Demonstration of the bump hunt search. The shaded histogram is the amount of signal in the dataset. The different levels of blue points show the data remaining after applying tighter and tighter selection based on the neural network classifier score. The red line is the predicted amount of background events based on fitting the sideband regions. One can see that for the tightest selection (bottom set of points), the data forms a clear bump over the background estimate, indicating the presence of a new particle

This paper was one of the first to really demonstrate the power of machine-learning-based searches. There is actually a competition being held to inspire researchers to try out other techniques on a mock dataset. So expect to see more new search strategies utilizing machine learning being released soon. Of course the real excitement will come when a search like this is applied to real data and we can see if machines can find new physics that we humans have overlooked!

Read More:

  1. Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”
  2. Blog Post on the CWoLa Method “Training Collider Classifiers on Real Data”
  3. Particle Bites Post “Going Rogue: The Search for Anything (and Everything) with ATLAS”
  4. Blog Post on applying ML to top quark decays “What does Bidirectional LSTM Neural Networks has to do with Top Quarks?”
  5. Extended Version of Original Paper “Extending the Bump Hunt with Machine Learning”

CMS catches the top quark running

Article: “Running of the top quark mass from proton-proton collisions at √s = 13 TeV”

Authors: The CMS Collaboration

Reference: https://arxiv.org/abs/1909.09193

When theorists were first developing quantum field theory in the 1940s, they quickly ran into a problem. Some of their calculations kept producing infinities that didn’t make physical sense. After scratching their heads for a while, they eventually came up with a procedure known as renormalization to solve the problem. Renormalization neatly hid away the infinities that were plaguing their calculations by absorbing them into the constants (like masses and couplings) in the theory, but it also produced some surprising predictions. Renormalization said that all these ‘constants’ weren’t actually constant at all! The value of these ‘constants’ depends on the energy scale at which you probe the theory.

One of the most famous realizations of this phenomenon is the ‘running’ of the strong coupling constant. The value of a coupling encodes the strength of a force. The strong nuclear force, responsible for holding protons and neutrons together, is actually so strong at low energies that our normal techniques for calculation don’t work. But in 1973, Gross, Wilczek and Politzer realized that in quantum chromodynamics (QCD), the quantum field theory describing the strong force, renormalization would make the strong coupling constant ‘run’ smaller at high energies. This meant that at higher energies one could use normal perturbative techniques to do calculations. This behavior of the strong force is called ‘asymptotic freedom’ and earned them a Nobel prize. Thanks to asymptotic freedom, it is actually much easier for us to understand what QCD predicts for high-energy LHC collisions than for the properties of bound states like the proton.
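
For the curious, the leading-order (‘one-loop’) form of this running is compact enough to quote. With n_f quark flavors light enough to participate at the scale Q,

\alpha_s(Q^2) \;=\; \frac{\alpha_s(\mu^2)}{1 + \dfrac{33 - 2 n_f}{12\pi}\,\alpha_s(\mu^2)\,\ln\!\left(\dfrac{Q^2}{\mu^2}\right)},

so as long as n_f is less than 17 the denominator grows with Q and the coupling shrinks at high energies, which is asymptotic freedom.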

Figure 1: The value of the strong coupling constant (α_s) plotted as a function of the energy scale. Data from multiple experiments at different energies are compared to the QCD prediction of how it should run. From [5].

Now for the first time, CMS has measured the running of a new fundamental parameter, the mass of the top quark. More than just being a cool thing to see, measuring how the top quark mass runs tests our understanding of QCD and can also be sensitive to physics beyond the Standard Model. The top quark is the heaviest fundamental particle we know about, and many think that it has a key role to play in solving some puzzles of the Standard Model. In order to measure the top quark mass at different energies, CMS used the fact that the rate of producing a top quark-antiquark pair depends on the mass of the top quark. So by measuring this rate at different energies they can extract the top quark mass at different scales. 
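Concretely, 'running' means the value of the (MS-bar) top mass depends on the scale μ at which it is defined. At leading order the standard textbook expression ties it directly to the running of α_s (the actual measurement uses higher-order versions of this relation):

\[
m_t(\mu) \;=\; m_t(\mu_0)\left[\frac{\alpha_s(\mu)}{\alpha_s(\mu_0)}\right]^{\gamma_0/(2\beta_0)},
\qquad \gamma_0 = 8,\;\; \beta_0 = 11 - \tfrac{2}{3}\,n_f ,
\]

so with six active flavors the exponent is 4/7, and the mass slowly decreases as the scale increases.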

Top quarks nearly always decay into W-bosons and b quarks. Like all quarks, the b quarks then shower into a large spray of particles, called a jet, before reaching the detector. The W-bosons can decay either into a lepton and a neutrino or into two quarks. The CMS detector is very good at reconstructing leptons and jets, but neutrinos escape undetected. However, one can infer the presence of neutrinos in an event because momentum in the plane transverse to the beams must be conserved in the collision, so if neutrinos are produced we will see 'missing' momentum in that plane (often called missing energy). The CMS analyzers looked for top anti-top pairs where one W-boson decayed to an electron and a neutrino and the other decayed to a muon and a neutrino. By using information about the electron, muon, missing momentum, and jets in an event, the kinematics of the top and anti-top pair can be reconstructed.
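As an illustration of the 'missing energy' trick (a toy sketch, not CMS reconstruction code): since the transverse momenta of everything produced in the collision must sum to zero, the missing transverse momentum is just minus the vector sum of all the visible objects.

```python
# Toy missing-transverse-momentum calculation (illustrative, not CMS code)
import numpy as np

def missing_transverse_momentum(pt, phi):
    """pt, phi: transverse momenta [GeV] and azimuthal angles of all
    reconstructed visible objects (leptons, jets, ...)."""
    px = np.sum(pt * np.cos(phi))
    py = np.sum(pt * np.sin(phi))
    met = np.hypot(px, py)            # size of the transverse imbalance
    met_phi = np.arctan2(-py, -px)    # direction the neutrinos went
    return met, met_phi

# Made-up event: an electron, a muon and two b-jets
pt  = np.array([45.0, 38.0, 80.0, 65.0])
phi = np.array([0.3, 2.8, -1.1, 1.9])
print(missing_transverse_momentum(pt, phi))
```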

The measured running of the top quark mass is shown in Figure 2. The data agree with the running predicted by QCD at the level of 1.1 sigma, and the no-running hypothesis is excluded at above 95% confidence level. Rather than being limited by the amount of data, the main uncertainties in this result come from the theoretical understanding of top quark production and decay, which the analyzers need to model very precisely in order to extract the top quark mass. So CMS will need some help from theorists if they want to improve this result in the future.

Figure 2: The ratio of the top quark mass at a given scale to its mass at a reference scale (476 GeV), plotted as a function of energy. The red line is the theoretical prediction for how the mass should run in QCD.

Read More:

  1. “The Strengths of Known Forces” https://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-known-forces-of-nature/the-strength-of-the-known-forces/
  2. “Renormalization Made Easy” http://math.ucr.edu/home/baez/renormalization.html
  3. “Studying the Higgs via Top Quark Couplings” https://particlebites.com/?p=4718
  4. “The QCD Running Coupling” https://arxiv.org/abs/1604.08082
  5. CMS Measurement of QCD Running Coupling https://arxiv.org/abs/1609.05331

Going Rogue: The Search for Anything (and Everything) with ATLAS

Title: “A model-independent general search for new phenomena with the ATLAS detector at √s=13 TeV”

Author: The ATLAS Collaboration

Reference: ATLAS-PHYS-CONF-2017-001


When a single experimental collaboration has a few thousand contributors (and even more opinions), there are a lot of rules. These rules dictate everything from how you get authorship rights to how you get chosen to give a conference talk. In fact, this rulebook is so thorough that it could be the topic of a whole other post. But for now, I want to focus on one rule in particular, a rule that has only been around for a few decades in particle physics but is considered one of the most important practices of good science: blinding.

In brief, blinding is the notion that it’s experimentally compromising for a scientist to look at the data before finalizing the analysis. As much as we like to think of ourselves as perfectly objective observers, the truth is, when we really really want a particular result (let’s say a SUSY discovery), that desire can bias our work. For instance, imagine you were looking at actual collision data while you were designing a signal region. You might unconsciously craft your selection in such a way to force an excess of data over background prediction. To avoid such human influences, particle physics experiments “blind” their analyses while they are under construction, and only look at the data once everything else is in place and validated.

Figure 1: "Blind analysis: Hide results to seek the truth", R. MacCoun & S. Perlmutter for Nature.com

This technique has kept the field of particle physics in rigorous shape for quite a while. But there’s always been a subtle downside to this practice. If we only ever look at the data after we finalize an analysis, we are trapped within the confines of theoretically motivated signatures. In this blinding paradigm, we’ll look at all the places that theory has shone a spotlight on, but we won’t look everywhere. Our whole game is to search for new physics. But what if amongst all our signal regions and hypothesis testing and neural net classifications… we’ve simply missed something?

It is this nagging question that motivates a specific method of combing the LHC datasets for new physics, one that the authors of this paper call a “structured, global and automated way to search for new physics.” With this proposal, we can let the data itself tell us where to look and throw unblinding caution to the winds.

The idea is simple: scan the whole ATLAS dataset for discrepancies, setting a threshold for what defines a feature as “interesting”. If this preliminary scan stumbles upon a mysterious excess of data over Standard Model background, don’t just run straight to Stockholm proclaiming a discovery. Instead, simply remember to look at this area again once more data is collected. If your feature of interest is a fluctuation, it will wash out and go away. If not, you can keep watching it until you collect enough statistics to do the running to Stockholm bit. Essentially, you let a first scan of the data rather than theory define your signal regions of interest. In fact, all the cool kids are doing it: H1, CDF, D0, and even ATLAS and CMS have performed earlier versions of this general search.

The nuts and bolts of this particular paper include 3.2 fb⁻¹ of 13 TeV LHC data collected in 2015. Since the whole goal of this strategy is to be as general as possible, we might as well go big or go home with potential topologies. To that end, the authors comb through all the data and select any event "involving high pT isolated leptons (electrons and muons), photons, jets, b-tagged jets and missing transverse momentum". All of the backgrounds are simply modeled with Monte Carlo simulation.

Once we have all these events, we need to sort them. Here, "the classification includes all possible final state configurations and object multiplicities, e.g. if a data event with seven reconstructed muons is found it is classified in a '7-muon' event class (7μ)." When you add up all the possible permutations of objects and multiplicities, you come up with a cool 639 event classes with at least 1 data event and a Standard Model expectation of at least 0.1 events.
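To see what this bookkeeping looks like in practice, here is a toy version of the sorting step (my own shorthand for the labels, modeled on the examples quoted from the note):

```python
# Toy event-class labeling by object type and multiplicity (illustrative only)
from collections import Counter

def event_class(objects, has_met):
    """objects: list of object labels for one event, e.g. ['mu', 'mu', 'j'];
    has_met: whether the event has significant missing transverse momentum."""
    order = ['e', 'mu', 'gamma', 'b', 'j']   # fixed order so each class gets one label
    counts = Counter(objects)
    label = ''.join(f"{counts[o]}{o}" for o in order if counts[o] > 0)
    return ('ETmiss' + label) if has_met else label

print(event_class(['mu'] * 7, has_met=False))                # '7mu'
print(event_class(['mu', 'mu', 'gamma'] + ['j'] * 4, True))  # 'ETmiss2mu1gamma4j'
```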

From here, it's just a matter of checking data vs. MC agreement and the pulls for each event class. The authors also apply some measures to weed out the low-statistics or otherwise sketchy regions; for instance, 1 electron + many jets is more likely to be multijet background faking a lepton and shouldn't necessarily be considered a good event category. Once this logic is applied, you can plot all of your SRs together grouped by category; Figure 2 shows an example for the multijet events. The paper includes 10 of these plots in total, with regions ranging in complexity from nothing but 1μ1j to more complicated final states like ETmiss2μ1γ4j (say that five times fast).

Figure 2: The number of events in data and for the different SM background predictions considered. The classes are labeled according to the multiplicity and type (e, μ, γ, j, b, ETmiss) of the reconstructed objects for this event class. The hatched bands indicate the total uncertainty of the SM prediction.


Once we can see data next to the Standard Model prediction for all these categories, it's necessary to have a way to measure just how unusual an excess may be. The authors of this paper implement an algorithm that searches for the region of largest deviation in the distributions of two variables that are good at discriminating background from new physics: the effective mass (the sum of all jet and missing transverse momenta) and the invariant mass (computed with all visible objects and no missing energy).
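In spirit (and ignoring systematic uncertainties), such a scan can be as simple as sliding over every contiguous window of a binned distribution and keeping the most discrepant one. The sketch below is a simplified stand-in for the algorithm in the note, using a plain Poisson probability per window:

```python
# Simplified deviation scan over a binned distribution (no systematics)
import numpy as np
from scipy.stats import poisson

def most_discrepant_window(observed, expected):
    best_p, best_window = 1.0, None
    for lo in range(len(observed)):
        for hi in range(lo + 1, len(observed) + 1):
            n = observed[lo:hi].sum()
            b = expected[lo:hi].sum()
            # local probability of a deviation at least this large
            p = poisson.sf(n - 1, b) if n >= b else poisson.cdf(n, b)
            if p < best_p:
                best_p, best_window = p, (lo, hi)
    return best_p, best_window

obs = np.array([12, 9, 14, 25, 30, 11, 8, 5])                  # made-up yields
exp = np.array([11.0, 10.0, 12.0, 13.0, 12.5, 10.0, 8.0, 6.0])
print(most_discrepant_window(obs, exp))
```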

For each deviation found, a simple likelihood function is built as the convolution of probability density functions (pdfs): one Poissonian pdf to describe the event yields, and Gaussian pdfs for each systematic uncertainty. The integral of this function, p0, is the probability that the Standard Model expectation fluctuated up to at least the observed yield. This p0 value is an industry standard in particle physics: a value of p0 < 3e-7 is our threshold for discovery.
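For a rough sense of how such a p0 could be computed for a single event class (made-up numbers, one background, one systematic; not the actual ATLAS likelihood machinery), one can marginalize the Poisson probability over a Gaussian nuisance parameter:

```python
# Toy p0: Poisson yield with one Gaussian systematic on the background
import numpy as np
from scipy.stats import norm, poisson
from scipy.integrate import quad

def p0(n_obs, b_nominal, rel_syst):
    """Probability that a background of b_nominal (with relative Gaussian
    uncertainty rel_syst) fluctuates up to at least n_obs events."""
    def integrand(theta):
        b = max(b_nominal * (1.0 + rel_syst * theta), 1e-9)
        return norm.pdf(theta) * poisson.sf(n_obs - 1, b)   # P(N >= n_obs | b)
    value, _ = quad(integrand, -5.0, 5.0)
    return value

print(p0(n_obs=12, b_nominal=4.2, rel_syst=0.3))   # made-up example numbers
```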

Sadly (or reassuringly), the smallest p0 value found in this scan is 3e-04 (in the 1μ1e4b2j event class). To figure out precisely how significant this value is, the authors ran a series of pseudoexperiments for each event class and applied the same scanning algorithm to them, to determine how often such a deviation would occur in a wholly different fake dataset. In fact, a p0 of 3e-04 or smaller was expected in 70% of the pseudoexperiments.
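The logic of that trials correction looks something like the toy below (made-up expectations and a simplified per-class p0, just to illustrate the idea): throw many fake datasets from the Standard Model predictions, record the smallest p0 found in each, and ask how often it is at least as extreme as the one seen in data.

```python
# Toy pseudoexperiments for the look-elsewhere correction (illustrative only)
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
sm_expectations = rng.uniform(0.5, 50.0, size=639)   # stand-in for the event classes

def smallest_p0(yields, expectations):
    return poisson.sf(yields - 1, expectations).min()   # per-class P(N >= observed)

p0_data = 3e-4            # smallest p0 reported in the data scan
n_pseudo, hits = 2000, 0
for _ in range(n_pseudo):
    fake_yields = rng.poisson(sm_expectations)           # one fake "dataset"
    if smallest_p0(fake_yields, sm_expectations) <= p0_data:
        hits += 1
print(f"fraction of pseudoexperiments at least this discrepant: {hits / n_pseudo:.2f}")
```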

So the excesses that were observed are not (so far) significant enough to focus on. But the beauty of this analysis strategy is that this deviation can be easily followed up with the addition of a newer dataset. Think of these general searches as the sidekicks of the superheroes that are our flagship SUSY, exotics, and dark matter searches. They can help us dot i's and cross t's, make sure nothing falls through the cracks, and eventually, just maybe, make a discovery.