## A symphony of data

Article title: “MUSiC: a model unspecific search for new physics in proton-proton collisions at √s = 13 TeV”

Authors: The CMS Collaboration

Reference: https://arxiv.org/abs/2010.02984

First of all, let us take care of the spoilers: no new particles or phenomena have been found… With that concern out of the way, let us focus on the important concept behind MUSiC.

ATLAS and CMS, the two largest experiments using collisions at the LHC, are known as “general purpose experiments” for a good reason. They were built to look at a wide variety of physical processes and, up to now, each has checked dozens of proposed theoretical extensions of the Standard Model, in addition to checking the Model itself. However, in almost all cases their searches rely on definite theory predictions and focus on very specific combinations of particles and their kinematic properties. In this way, the experiments may still be far from utilizing their full potential. But now an algorithm named MUSiC is here to help.

MUSiC takes all events recorded by CMS that contain clean-cut particles and compares them against the expectations from the Standard Model, untethering itself from narrow definitions of the search conditions.

We should clarify here that an “event” is the result of an individual proton-proton collision (among the many happening each time the proton bunches cross), consisting of a bouquet of particles. First of all, MUSiC needs to work with events whose particles are well recognized by the experiment’s detectors, to cut down on uncertainty. It must also use particles that are well modeled, because it relies on the comparison of data to simulation and so needs to be sure about the accuracy of the latter.

All this boils down to working with events containing combinations of several specific particles: electrons, muons, photons, hadronic jets from light-flavour (= up, down, strange) quarks or gluons, jets from bottom quarks, and deficits in the total transverse momentum (typically the signature of the uncatchable neutrinos, or perhaps of unknown exotic particles). And to make things even more clean-cut, it keeps only events that include either an electron or a muon, both being well-understood characters.

These particle combinations result in hundreds of different “final states” caught by the detectors. However, they all correspond to only a dozen combos of particles created in the collisions according to the Standard Model, before some of them decay to lighter ones. For these, we know and simulate pretty well what we expect the experiment to measure.

MUSiC proceeded by comparing three kinematic quantities of these final states, as measured by CMS during 2016, to their simulated values. The three quantities of interest are the combined mass, the combined transverse momentum and the missing transverse momentum. It’s in their distributions that new particles would most probably show up, regardless of which theoretical model they follow. The range of values covered is pretty wide. All in all, the method extends both the kinematic reach of usual searches and the collection of final states examined.
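To make the three quantities concrete, here is a minimal sketch in plain Python, operating on illustrative (E, px, py, pz) four-vectors; the paper’s exact definitions (e.g. how the missing transverse momentum enters the sums) may differ:

```python
import math

def invariant_mass(particles):
    """Combined (invariant) mass of a list of (E, px, py, pz) four-vectors."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def combined_pt(particles):
    """Scalar sum of the particles' transverse momenta."""
    return sum(math.hypot(p[1], p[2]) for p in particles)

def missing_pt(particles):
    """Magnitude of the (negative) vector sum of transverse momenta."""
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    return math.hypot(px, py)
```

For two back-to-back particles the combined mass is just their total energy, the scalar pT sum adds up, and the transverse momentum balance leaves no deficit.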

So the kinematic distributions are checked against the simulated expectations in an automated way, with MUSiC looking for every physicist’s dream: deviations. Any deviation from the simulation, meaning either fewer or more recorded events, is quantified by assigning it a probability value. This probability also takes into account the much-dreaded “look-elsewhere effect”, which arises because, statistically, among a large number of distributions a random fluctuation that mimics a genuine deviation is bound to appear sooner or later.
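MUSiC’s actual global significance is computed with pseudo-experiments that account for correlated uncertainties; but under the simplifying assumption of independent tests, the look-elsewhere (“trials factor”) idea reduces to one line:

```python
def global_p_value(p_local, n_tests):
    """Trials-factor correction assuming n_tests independent background-only
    tests: the probability that at least one of them fluctuates to a local
    p-value <= p_local. A simplification of the real pseudo-experiment method."""
    return 1.0 - (1.0 - p_local) ** n_tests

# A seemingly impressive local p-value of 0.1% becomes unremarkable
# once ~1000 distributions are scanned:
p_global = global_p_value(0.001, 1069)
```

With 1,069 final states in play, a 0.1% local fluctuation somewhere is actually more likely than not (p_global comes out to about 0.66).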

When all’s said and done, the collection of probabilities is surveyed. The MUSiC protocol says that any significant deviation is to be scrutinized with more traditional methods – only that this need never actually arose in the 2016 data: the data played along with the Standard Model in all 1,069 examined final states and their kinematic ranges.

For the record, the largest deviation was spotted in the final state comprising three electrons, two generic hadronic jets and one jet coming from a bottom quark. Seven events were counted, whereas the simulation gave 2.7±1.8 events (mostly coming from the production of a top quark, an anti-top quark and an intermediate vector boson; the fractional values are due to extrapolating to the amount of collected data). This excess was not seen in related final states – “related” in that they either include the same particles or have one fewer. Everything pointed to a fluctuation, and the case was closed.
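As a sanity check on why seven events over 2.7±1.8 expected is unremarkable, one can compute the one-sided Poisson tail probability, and – as a rough stand-in (not the paper’s actual statistical treatment) – fold in the background uncertainty by smearing the mean with a truncated Gaussian:

```python
import math
import random

def poisson_tail(obs, mu):
    """One-sided p-value P(X >= obs) for a Poisson-distributed count."""
    term, cum = math.exp(-mu), 0.0
    for k in range(obs):
        cum += term
        term *= mu / (k + 1)
    return 1.0 - cum

def smeared_tail(obs, mean, sigma, n_toys=20000, seed=7):
    """Average the Poisson tail over a Gaussian-smeared background mean
    (truncated at zero) -- a toy version of marginalising the uncertainty;
    the real analysis is more involved."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_toys):
        mu = max(rng.gauss(mean, sigma), 1e-9)
        total += poisson_tail(obs, mu)
    return total / n_toys
```

Even before the look-elsewhere correction, poisson_tail(7, 2.7) comes out at only about 2%, and smearing by the ±1.8 uncertainty washes the excess out further.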

However, the goal of MUSiC was not strictly to find something new, but rather to demonstrate a method for model-unspecific searches with collision data. The mission seems to be accomplished, with CMS becoming even more general-purpose.

Another generic search method in ATLAS: Going Rogue: The Search for Anything (and Everything) with ATLAS

And a take with machine learning: Letting the Machines Search for New Physics

Fancy checking a good old model-specific search? Uncovering a Higgs Hiding Behind Backgrounds

## A shortcut to truth

Article title: “Automated detector simulation and reconstruction
parametrization using machine learning”

Authors: D. Benjamin, S.V. Chekanov, W. Hopkins, Y. Li, J.R. Love

The simulation of particle collisions at the LHC is a pharaonic task. The messy chromodynamics of the protons must be modeled; the statistics of the collision products must reflect the Standard Model; each particle has to travel through the detectors and interact with all the elements in its path. Its presence will eventually be reduced to electronic measurements, which, after all, are all we know about it.

The work of the simulation ends somewhere here, and that of the reconstruction starts; namely to go from electronic signals to particles. Reconstruction is a process common to simulation and to the real world. Starting from the tangle of statistical and detector effects that the actual measurements include, the goal is to divine the properties of the initial collision products.

Now, researchers at Argonne National Laboratory have looked into going from the simulated particles as produced in the collisions (aka “truth objects”) directly to the reconstructed ones (aka “reco objects”): bypassing the detailed interaction with the detectors and the reconstruction algorithm could make studies that use simulations much speedier and more efficient.

The team used a neural network trained on fully simulated events. The goal was to have the network learn to produce the properties of the reco objects when given only the truth objects. The method succeeded in reproducing the transverse momenta of hadronic jets, and looks suitable for any kind of particle and for other kinematic quantities.

More specifically, the researchers began with two million simulated jet events, fully passed through the ATLAS detector simulation and the reconstruction algorithm. For each of them, the network took the kinematic properties of the truth jet as input and was trained to reproduce the reconstructed transverse momentum.

The network was trained to perform multi-categorization: its output wasn’t a single node giving the momentum value, but 400 nodes, each corresponding to a different range of values. The output of each node was the probability for that particular range. In other words, the result was a probability density function for the reconstructed momentum of a given jet.
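In code, turning a reconstructed momentum into such a classification target might look like the sketch below; the bin range and width are illustrative assumptions, not the paper’s actual choices:

```python
N_BINS = 400
PT_MIN, PT_MAX = 20.0, 2020.0            # assumed pT window, in GeV
BIN_WIDTH = (PT_MAX - PT_MIN) / N_BINS   # 5 GeV per bin with these numbers

def to_target(reco_pt):
    """One-hot training target: mark the bin that contains reco_pt."""
    index = min(int((reco_pt - PT_MIN) / BIN_WIDTH), N_BINS - 1)
    target = [0.0] * N_BINS
    target[index] = 1.0
    return target
```

A jet reconstructed at 72 GeV, for instance, lands in bin 10 of the assumed binning, and the network is trained to put probability mass there.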

The final step was to select the momentum randomly from this distribution. For half a million test jets, all this resulted in good agreement with the actual reconstructed momenta, specifically within 5% for values above 20 GeV. In addition, the training appears sensitive to the effects of quantities other than the target one (e.g. the position in the detector), as the neural network was able to pick up on the dependencies between the input variables. Also, hadronic jets are complicated animals, so the method is expected to work at least as well on other objects.
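The sampling step can be sketched the same way: pick a bin by inverse-CDF sampling of the 400 probabilities, then draw uniformly within it (the 5 GeV binning over an assumed [20, 2020) GeV window is illustrative, not the paper’s):

```python
import random

N_BINS = 400
PT_MIN = 20.0
BIN_WIDTH = 5.0   # assumed 5 GeV bins over [20, 2020) GeV

def sample_reco_pt(probs, rng=random):
    """Draw a reconstructed pT from the network's per-bin probabilities:
    choose a bin via inverse-CDF sampling, then a uniform value inside it."""
    total = sum(probs)
    u = rng.random() * total
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if u <= cum:
            lo = PT_MIN + i * BIN_WIDTH
            return rng.uniform(lo, lo + BIN_WIDTH)
    return PT_MIN + N_BINS * BIN_WIDTH  # guard against rounding
```

Repeating this draw over many jets reproduces the full reconstructed-momentum distribution rather than a single best guess.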

All in all, this work demonstrated the potential of neural networks to successfully imitate the effects of the detector and the reconstruction. Simulations in large experiments typically take up loads of time and resources due to their size, their intricacy and the frequent need to update the detector conditions. Such a shortcut, needing only a small number of fully processed events, would speed up studies such as the optimization of the reconstruction and of detector upgrades.

Intro to neural networks: https://physicsworld.com/a/neural-networks-explained/

## LIGO and Gravitational Waves: A Hep-ex perspective

The exciting Twitter rumors have been confirmed! On Thursday, LIGO finally announced the first direct observation of gravitational waves, a prediction 100 years in the making. The media storm has been insane, with physicists referring to the discovery as “more significant than the discovery of the Higgs boson… the biggest scientific breakthrough of the century.” Watching Thursday’s press conference from CERN, it was hard not to make comparisons between the discovery of the Higgs and LIGO’s announcement.

Long-standing searches for well-known phenomena

The Higgs boson was billed as the last piece of the Standard Model puzzle. Its existence was predicted in the 1960s in order to explain the mass of the vector bosons of the Standard Model and to avoid non-unitary amplitudes in W boson scattering. Even if the Higgs didn’t exist, particle physicists expected new physics to come into play at the TeV scale, and experiments at the LHC were designed to find it.

Similarly, gravitational waves were the last untested fundamental prediction of General Relativity. At first, physicists remained skeptical of their existence, but the search began in earnest with Joseph Weber in the 1950s (Forbes). Indirect evidence arrived a few decades later: a binary system consisting of a pulsar and a neutron star was observed to lose energy over time, presumably in the form of gravitational waves. Taking inspiration from Weber’s pioneering efforts, LIGO developed two detectors of unprecedented precision in order to finally make a direct observation.

Unlike the Higgs, General Relativity makes clear predictions about the properties of gravitational waves. Waves should travel at the speed of light, have two polarizations, and interact weakly with matter. Scientists at LIGO were even searching for a very particular signal, described as a characteristic “chirp”. With the upgrade to the LIGO detectors, physicists were certain they’d be capable of observing gravitational waves. The only outstanding question was how often these observations would happen.

The search for the Higgs involved more uncertainties. The one parameter essential for describing the Higgs, its mass, is not predicted by the Standard Model. While previous collider experiments at LEP and Fermilab were able to set limits on the Higgs mass, the observed properties of the Higgs were ultimately unknown before the discovery. No one knew whether or not the Higgs would be a Standard Model Higgs, or part of a more complicated theory like Supersymmetry or technicolor.

Monumental scientific endeavors

Answering the most difficult questions posed by the universe isn’t easy, or cheap. In terms of cost, both LIGO and the LHC represent billion-dollar investments. Including the most recent upgrade, LIGO cost a total of $1.1 billion, and when it was originally approved in 1992, “it represented the biggest investment the NSF had ever made” according to France Córdova, NSF director. The discovery of the Higgs was estimated by Forbes to cost a total of $13 billion, a hefty price paid by CERN’s member and observer states. Even the electricity bill comes to more than $200 million per year.

The large investment is necessitated by the sheer monstrosity of the experiments. LIGO consists of two identical detectors roughly 4 km long, built 3000 km apart. Because of its large size, LIGO is capable of measuring ripples in space 10,000 times smaller than an atomic nucleus, the smallest scale ever measured by scientists (LIGO Fact Page). The size of the LIGO vacuum tubes is surpassed only by those at the LHC. At 27 km in circumference, the LHC is the single largest machine in the world, and the most powerful particle accelerator to date. It took only a handful of people to predict the existence of gravitational waves and the Higgs, but it took thousands of physicists and engineers to find them.

Life after Discovery

Even the language surrounding both announcements is strikingly similar. Rumors were circulating for months before the official press conferences, and the expectations from each respective community were very high. Both discoveries have been touted as the discoveries of the century, with many experts claiming that results would usher in a “new era” of particle physics or observational astronomy.

With a few years of hindsight, it is clear that the “new era” of particle physics has begun. Before Run I of the LHC, particle physicists knew they needed to search for the Higgs. Now that the Higgs has been discovered, there is much more uncertainty surrounding the field, and the list of questions to try to answer is enormous. Physicists want to understand the source of the dark matter that makes up roughly 25% of the universe, where neutrinos derive their mass from, and how to quantize gravity. There are several ad hoc features of the Standard Model that merit additional explanation, and physicists are still searching for evidence of supersymmetry and grand unified theories. While the to-do list is long, and well understood, how to solve these problems is not. Measuring the properties of the Higgs does allow particle physicists to set limits on physics beyond the Standard Model, but it’s unclear at which scale new physics will come into play, and there’s no real consensus about which experiments deserve the most support. For some in the field, this uncertainty can cause a great deal of anxiety and skepticism about the future. For others, the long to-do list is an absolutely thrilling call to action.

With regards to the LIGO experiment, the future is much more clear. LIGO has so far published only one event from 16 days of data taking. There is much more data already in the pipeline, and more interferometers like VIRGO and (e)LISA are planned to come online in the near future. Now that gravitational waves have been proven to exist, they can be used to observe the universe in a whole new way. The first event already contains an interesting surprise. LIGO observed two inspiraling black holes of 36 and 29 solar masses merging into a final black hole of 62 solar masses. The data thus confirmed the existence of heavy stellar black holes, with masses more than 25 times greater than the sun’s, and that binary black hole systems form in nature (Astrophysical Journal). When VIRGO comes online, it will also be possible to triangulate the source of these gravitational waves. Now LIGO’s job is to watch, and see what other secrets the universe has in store.
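A quick back-of-the-envelope using the quoted (rounded) masses shows just how violent the merger was: roughly three solar masses were converted into gravitational-wave energy via E = mc²:

```python
M_SUN = 1.989e30   # solar mass in kg
C = 2.998e8        # speed of light in m/s

# 36 + 29 solar masses in, 62 out: ~3 solar masses radiated away
# (the quoted masses are rounded, so this is an order-of-magnitude figure)
m_radiated_kg = (36 + 29 - 62) * M_SUN
energy_joules = m_radiated_kg * C ** 2   # E = m c^2
```

That works out to roughly 5×10⁴⁷ joules, briefly outshining the combined light output of the observable universe’s stars.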