New detectors on the block

Article title: “Toward Machine Learning Optimization of Experimental Design”

Authors: MODE Collaboration

Reference: https://inspirehep.net/literature/1850892 (pdf)

In a previous post we wondered if (machine learning) algorithms can replace the entire simulation of detectors and reconstruction of particles. But meanwhile some experimentalists have gone one step further – and wondered if algorithms can design detectors.

Indeed, the MODE collaboration stands for Machine-learning Optimized Design of Experiments and in its first paper promises nothing less than that.

The idea here is that the range of characteristics that an experiment can have is vast (think number of units, materials, geometry, dimensions and so on), but its ultimate goal can still be described by a single “utility function”. For instance, the precision of the measurement on specific data can be thought of as a utility function.

Then, the whole process that leads to obtaining that function can be decomposed into a number of conceptual blocks: normally there are incoming particles, which move through and interact with detectors, resulting in measurements; from them, the characteristics of the particles are reconstructed; these are eventually analyzed to get relevant useful quantities, the utility function among them. Ultimately, chaining together these blocks creates a pipeline that models the experiment from one end to the other.

Now, another central notion is differentiation or, rather, the ability to be differentiated; if all the components of this model are differentiable, then the gradient of the utility function can be calculated. This leads to the holy grail: finding its extreme values, i.e. optimizing the experiment’s design as a function of its numerous parameters.
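
To make the idea concrete, here is a minimal sketch (in Python, using PyTorch’s automatic differentiation, which we will come back to below) of such a chain: a single made-up design parameter flows through stand-in “simulation”, “reconstruction” and “analysis” blocks, and the gradient of a toy utility function with respect to that parameter comes out at the end. All names and functional forms are invented for illustration; the real MODE pipeline is of course far more elaborate.

```python
import torch

# One made-up design parameter: an effective detector "thickness".
thickness = torch.tensor(2.0, requires_grad=True)

def simulate(thickness, true_momentum):
    # Toy detector response: thicker -> more signal collected, but more smearing.
    signal = true_momentum * (1.0 - torch.exp(-thickness))
    smearing = 0.05 * thickness
    return signal, smearing

def reconstruct(signal, smearing):
    # Stand-in for a reconstruction step.
    return signal * (1.0 + smearing)

def utility(reco, true_momentum):
    # Negative mean squared error: the larger, the better the measurement.
    return -((reco - true_momentum) ** 2).mean()

true_p = torch.linspace(10.0, 100.0, 50)              # a batch of incoming particles
u = utility(reconstruct(*simulate(thickness, true_p)), true_p)
u.backward()                                          # differentiate through the whole chain
print(float(u), float(thickness.grad))                # utility and d(utility)/d(thickness)
```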

Before we see whether the components are indeed differentiable and how the gradient gets calculated, here is an example of this pipeline concept for a muon radiography detector.

Discovering a hidden space in the Great Pyramid by using muons. (Financial Times)

Muons are not just the trendy star of particle physics (as of April 2021), but they also find application in scanning closed volumes and revealing details about the objects in them. And yes, the Great Pyramid has been muographed successfully.

In terms of the pipeline described above, a muon radiography device could be modeled in the following way: muons from cosmic rays are generated in the form of 4-vectors. Those are fed to a fast simulation of the scanned volume and the detector, where the interactions of the particles with the materials and the resulting signals in the electronics are simulated. This output goes into a reconstruction module, which recreates muon tracks. From them, an information-extraction module calculates the density of the scanned material. It can also produce a loss function for the measurement, which here is the quantity to be optimized.

Conceptual layout of the optimization pipeline. (MODE collaboration)
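
For the curious, the muon radiography pipeline above can be caricatured in a few lines of code. The modules and numbers below are invented stand-ins for the real thing; the point is only to show how generation, simulation, reconstruction and information extraction chain into a single loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_muons(n):
    """Cosmic muons as toy 4-vectors (E, px, py, pz)."""
    energy = rng.exponential(scale=10.0, size=n) + 1.0        # GeV, crude spectrum
    theta = np.abs(rng.normal(0.0, 0.3, size=n))              # zenith angle
    return np.stack([energy,
                     energy * np.sin(theta),
                     np.zeros(n),
                     -energy * np.cos(theta)], axis=1)

def simulate_passage(muons, density, path_length=10.0):
    """Toy absorption: survival probability falls with column density."""
    survival = np.exp(-density * path_length / muons[:, 0])
    return rng.random(len(muons)) < survival                  # which muons make it through

def reconstruct_tracks(hits):
    return int(hits.sum())                                    # here: just count surviving tracks

def extract_density(n_detected, n_generated, path_length=10.0, avg_energy=10.0):
    frac = max(n_detected / n_generated, 1e-6)
    return -np.log(frac) * avg_energy / path_length           # invert the toy absorption model

true_density = 2.5
muons = generate_muons(100_000)
hits = simulate_passage(muons, true_density)
density_hat = extract_density(reconstruct_tracks(hits), len(muons))
loss = (density_hat - true_density) ** 2                      # the measurement's loss function
print(density_hat, loss)
```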

This whole ritual is a standard process in experimental work, although the steps are usually quite separate from one another. In the MODE concept, however, not only are they linked together, they also run iteratively. The optimization of the detector design proceeds in steps, and in each of them the parameters of the device are changed in the simulation. This directly affects the detector module and, indirectly, the downstream modules of the pipeline. The loop of modification and validation can be constrained appropriately to keep everything within realistic values, and also to bring the most important consideration of all into the game: cost, of course, and the constraints that it brings along.

Descending towards the minimum. (Dezhi Yu)

As mentioned above, the proposed optimization proceeds in steps by adjusting the parameters along the gradient of the utility function. The most famous incarnation of gradient-based optimization is gradient descent, which is customarily used to train neural networks. Gradient descent guides the network towards the minimum value of the error it produces by moving through the space of its parameters.
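
For readers who have never met it, gradient descent fits in a handful of lines. The toy error function below (with its obvious minimum at w = 3) is only there to show the mechanics: repeatedly step against the gradient and the parameter slides down to the minimum.

```python
# A toy "error" function with its minimum at w = 3, and its analytic derivative.
def error(w):
    return (w - 3.0) ** 2

def gradient(w):
    return 2.0 * (w - 3.0)

w = 10.0                    # arbitrary starting point
learning_rate = 0.1
for step in range(100):
    w -= learning_rate * gradient(w)    # step against the gradient, downhill

print(w, error(w))          # w has descended to ~3, where the error is minimal
```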

In the MODE proposal the optimization is achieved through automatic differentiation (AD), the latest word in the calculation of derivatives in computer programs. To shamefully paraphrase Wikipedia, AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations and functions. By applying the chain rule repeatedly to these operations, derivatives can be computed automatically, accurately and efficiently.
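
A bare-bones illustration of the idea, assuming nothing beyond standard Python: a “dual number” carries a value together with its derivative, and every elementary operation updates both via the chain rule. This is forward-mode AD in miniature; real AD frameworks are vastly more capable, but the principle is the same.

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation: carry a value and its
    derivative through every elementary operation via the chain rule."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

def sin(x):
    # chain rule for an elementary function: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# f(x) = x * sin(x) + x, differentiated "automatically" at x = 2
x = Dual(2.0, deriv=1.0)          # seed: dx/dx = 1
f = x * sin(x) + x
print(f.value, f.deriv)            # value and exact derivative at x = 2
```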

Also, something was mentioned above about whether the components of the pipeline are “indeed differentiable”. It turns out that one isn’t. This is the simulation of the processes during the passage of particles through the detector, which is stochastic by nature. However, machine learning can learn how to mimic it, take its place, and provide perfectly fine and differentiable modules. (The brave of heart can follow the link at the end to find out about local generative surrogates.)
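
As a crude illustration of the surrogate idea (not the local generative surrogates of the paper, which are far more sophisticated): run a noisy, non-differentiable toy “simulation” over a grid of design values, fit a smooth function to its average behaviour, and differentiate that instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_simulation(thickness):
    """Stand-in for a particle-passage simulation: the detector response
    fluctuates event by event, so this function is not differentiable."""
    mean_response = 1.0 - np.exp(-thickness)
    return mean_response + rng.normal(0.0, 0.02, size=np.shape(thickness))

# Run the "simulation" on a grid of design parameters...
thicknesses = np.linspace(0.1, 5.0, 200)
responses = stochastic_simulation(thicknesses)

# ...and fit a smooth, differentiable surrogate to its average behaviour.
coeffs = np.polyfit(thicknesses, responses, deg=5)
surrogate = np.poly1d(coeffs)
surrogate_grad = surrogate.deriv()     # this we can differentiate exactly

print(surrogate(2.0), surrogate_grad(2.0))
```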

This method of designing detectors might sound like a thought experiment on steroids. But the point of MODE is that it’s the realistic way to take full advantage of the current developments in computation. And maybe to feel like we have really entered the third century of particle experiments.

Further reading:

The MODE website: https://mode-collaboration.github.io/

A Beginner’s Guide to Differentiable Programming: https://wiki.pathmind.com/differentiableprogramming

Black-Box Optimization with Local Generative Surrogates: https://arxiv.org/abs/2002.04632

A shortcut to truth

Article title: “Automated detector simulation and reconstruction parametrization using machine learning”

Authors: D. Benjamin, S.V. Chekanov, W. Hopkins, Y. Li, J.R. Love

Reference: https://arxiv.org/abs/2002.11516 (https://iopscience.iop.org/article/10.1088/1748-0221/15/05/P05025)

Demonstration of a probability density function as the output of a neural network. (Source: paper)

The simulation of particle collisions at the LHC is a pharaonic task. The messy chromodynamics of protons must be modeled; the statistics of the collision products must reflect the Standard Model; each particle has to travel through the detectors and interact with all the elements in its path. Its presence will eventually be reduced to electronic measurements, which, after all, is all we know about it.

The work of the simulation ends somewhere here, and that of the reconstruction starts; namely to go from electronic signals to particles. Reconstruction is a process common to simulation and to the real world. Starting from the tangle of statistical and detector effects that the actual measurements include, the goal is to divine the properties of the initial collision products.

Now, researchers at the Argonne National Laboratory looked into going from the simulated particles as produced in the collisions (aka “truth objects”) directly to the reconstructed ones (aka “reco objects”): bypassing the steps of the detailed interaction with the detectors and of the reconstruction algorithm could make studies that use simulations much speedier and more efficient.

Display of a collision event involving hadronic jets at ATLAS. Each colored block corresponds to interaction with a detector element. (Source: ATLAS experiment)

The team used a neural network which it trained on a set of events put through the full simulation and reconstruction chain. The goal was to have the network learn to produce the properties of the reco objects when given only the truth objects. The method succeeded in reproducing the transverse momenta of hadronic jets, and looks suitable for any kind of particle and for other kinematic quantities.

More specifically, the researchers began with two million simulated jet events, fully passed through the ATLAS detector simulation and the reconstruction algorithm. For each of them, the network took the kinematic properties of the truth jet as input and was trained to reproduce the reconstructed transverse momentum.

The network was taught to perform multi-categorization: its output didn’t consist of a single node giving the momentum value, but of 400 nodes, each corresponding to a different range of values. The output of each node was the probability for that particular range. In other words, the result was a probability density function for the reconstructed momentum of a given jet.
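
A sketch of what such a multi-categorization network could look like, written here in PyTorch. The input features (four truth-jet kinematic variables), the network size and the pT binning below are assumptions made for illustration, not the paper’s exact configuration.

```python
import torch
import torch.nn as nn

N_BINS = 400
# Hypothetical binning of the reconstructed jet pT in GeV.
bin_edges = torch.linspace(0.0, 2000.0, N_BINS + 1)

model = nn.Sequential(
    nn.Linear(4, 128),            # inputs: truth-jet kinematics (assumed: pT, eta, phi, E)
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, N_BINS),       # one logit per reconstructed-pT bin
)
loss_fn = nn.CrossEntropyLoss()   # multi-categorization: which bin holds the reco pT?
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def target_bin(reco_pt):
    """Index of the bin containing each reconstructed pT value."""
    return torch.clamp(torch.bucketize(reco_pt, bin_edges) - 1, 0, N_BINS - 1)

def train_step(truth_features, reco_pt):
    """One update on a batch of (truth kinematics, reconstructed pT) pairs."""
    logits = model(truth_features)
    loss = loss_fn(logits, target_bin(reco_pt))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```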

The final step was to select the momentum randomly from this distribution. For half a million test jets, all this resulted in good agreement with the actual reconstructed momenta, specifically within 5% for values above 20 GeV. In addition, the training turned out to be sensitive to the effects of quantities other than the target one (e.g. the effects of the position in the detector), as the neural network was able to pick up on the dependencies between the input variables. Also, hadronic jets are complicated animals, so the method is expected to work at least as well on other, simpler objects.
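
Continuing the sketch above (reusing its hypothetical model and bin_edges), the sampling step could look like this: turn the 400 logits into probabilities, draw a bin, and then draw a value uniformly within that bin (a simplifying assumption for how to treat the inside of a bin).

```python
def sample_reco_pt(truth_features):
    """Draw a reconstructed pT from the network's predicted probability density."""
    with torch.no_grad():
        probs = torch.softmax(model(truth_features), dim=-1)    # one PDF per jet
    bins = torch.multinomial(probs, num_samples=1).squeeze(-1)  # pick a bin per jet
    low, high = bin_edges[bins], bin_edges[bins + 1]
    return low + (high - low) * torch.rand_like(low)            # uniform within the bin
```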

Comparison of the reconstructed transverse momentum between the full simulation and reconstruction (“Delphes”) and the neural net output. (Source: paper)

All in all, this work showed the potential of neural networks to successfully imitate the effects of the detector and of the reconstruction. Simulations in large experiments typically take up loads of time and resources due to their size, their intricacy and the frequent need to update them for changing hardware conditions. Such a shortcut, needing only a small number of fully processed events, would speed up studies such as the optimization of the reconstruction and of detector upgrades.

More reading:

Argonne Lab press release: https://www.anl.gov/article/learning-more-about-particle-collisions-with-machine-learning

Intro to neural networks: https://physicsworld.com/a/neural-networks-explained/

Letting the Machines Search for New Physics

Article: “Anomaly Detection for Resonant New Physics with Machine Learning”

Authors: Jack H. Collins, Kiel Howe, Benjamin Nachman

Reference: https://arxiv.org/abs/1805.02664

One of the main goals of LHC experiments is to look for signals of physics beyond the Standard Model: new particles that may explain some of the mysteries the Standard Model doesn’t answer. The typical way this works is that theorists come up with a new particle that would solve some mystery and spell out how it interacts with the particles we already know about. Then experimentalists design a strategy for how to search for evidence of that particle in the mountains of data that the LHC produces. So far none of the searches performed in this way have seen any definitive evidence of new particles, leading experimentalists to rule out a lot of the parameter space of theorists’ favorite models.

A summary of searches the ATLAS collaboration has performed. The left columns show the model being searched for, the experimental signature that was looked at, and how much data has been analyzed so far. The colored bars show the regions that have been ruled out based on the null result of each search. As you can see, we have already covered a lot of territory.

Despite this extensive program of searches, one might wonder if we are still missing something. What if there were a new particle in the data, waiting to be discovered, but theorists haven’t thought of it yet so it hasn’t been looked for? This gives experimentalists a very interesting challenge: how do you look for something new when you don’t know what you are looking for? One approach, which Particle Bites has talked about before, is to look at as many final states as possible, compare what you see in data to simulation, and look for any large deviations. This is a good approach, but it may be limited in its sensitivity to small signals. When a normal search for a specific model is performed, one usually makes a series of selection requirements on the data, chosen to remove background events and keep signal events. Nowadays these selection requirements are getting more complex, often using neural networks, a common type of machine learning model, trained to discriminate signal from background. Without some sort of selection like this you may miss a smaller signal within the large amount of background events.

This new approach lets the neural network itself decide what signal to look for. It uses part of the data itself to train a neural network to find a signal, and then uses the rest of the data to actually look for that signal. This lets you search for many different kinds of models at the same time!

If that sounds like magic, let’s try to break it down. You have to assume something about the new particle you are looking for, and the technique here assumes it forms a resonant peak. This is a common assumption in searches. If a new particle were being produced in LHC collisions and then decaying, you would get an excess of events where the invariant mass of its decay products has a particular value. So if you plotted the number of events in bins of invariant mass, you would expect a new particle to show up as a nice peak on top of a relatively smooth background distribution. This is a very common search strategy, often colloquially referred to as a ‘bump hunt’. This strategy was how the Higgs boson was discovered in 2012.

A histogram showing the invariant mass of photon pairs. The Higgs boson shows up as a bump at 125 GeV. Plot from here
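
A toy version of a bump hunt, with made-up numbers loosely inspired by the Higgs-to-diphoton peak: a smooth falling background plus a narrow resonance at 125 GeV shows up as an excess in the invariant mass histogram.

```python
import numpy as np

rng = np.random.default_rng(2)

# Smoothly falling background plus a narrow resonance at 125 GeV (toy numbers).
background = 100.0 + rng.exponential(scale=50.0, size=200_000)   # invariant mass, GeV
signal = rng.normal(loc=125.0, scale=1.5, size=5_000)
masses = np.concatenate([background, signal])

counts, edges = np.histogram(masses, bins=np.arange(100.0, 180.0, 2.0))
centers = 0.5 * (edges[:-1] + edges[1:])
for c, n in zip(centers, counts):
    print(f"{c:6.1f} GeV  {'#' * int(n // 500)}")   # crude text histogram: bump near 125 GeV
```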

The other secret ingredient we need is the idea of Classification Without Labels (abbreviated CWoLa, pronounced like koala). The way neural networks are usually trained in high energy physics is on fully labeled simulated examples. The network is shown a set of examples and guesses which are signal and which are background. Using the true label of each event, the network is told which examples it got wrong, its parameters are updated accordingly, and it slowly improves. The crucial challenge when trying to train on real data is that we don’t know the true label of any of the data, so it’s hard to tell the network how to improve. Rather than trying to use the true labels of any of the events, the CWoLa technique uses mixtures of events. Let’s say you have two mixed samples of events, sample A and sample B, but you know that sample A has more signal events in it than sample B. Then, instead of trying to classify signal versus background directly, you can train a classifier to distinguish between events from sample A and events from sample B, and what that network will learn to do is distinguish between signal and background. You can actually show that the optimal classifier for distinguishing the two mixed samples is the same as the optimal classifier of signal versus background. Even more amazing, the technique works quite well in practice, achieving good results even when only a few percent of one of the samples is signal.

An illustration of the CWoLa method. A classifier trained to distinguish between two mixed samples of signal and background events can learn to classify signal versus background. Taken from here
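
The CWoLa trick is simple enough to demonstrate on toy data. Below, a classifier is trained only on which mixed sample (A or B) each event came from, yet its score ends up separating signal from background; the single discriminating feature and all the numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_mixture(n, signal_fraction):
    """Toy events with one feature: signal ~ N(1.5, 1), background ~ N(0, 1)."""
    is_signal = rng.random(n) < signal_fraction
    x = np.where(is_signal, rng.normal(1.5, 1.0, n), rng.normal(0.0, 1.0, n))
    return x.reshape(-1, 1), is_signal

# Two mixed samples: A is signal-enriched, B is signal-depleted.
x_a, truth_a = make_mixture(50_000, signal_fraction=0.20)
x_b, truth_b = make_mixture(50_000, signal_fraction=0.02)

# Train on the *sample* labels only -- no event-by-event truth is used.
X = np.concatenate([x_a, x_b])
sample_label = np.concatenate([np.ones(len(x_a)), np.zeros(len(x_b))])
clf = LogisticRegression().fit(X, sample_label)

# The learned "A-like" score nevertheless separates signal from background.
scores = clf.predict_proba(X)[:, 1]
truth = np.concatenate([truth_a, truth_b])
print("mean score, signal:    ", scores[truth].mean())
print("mean score, background:", scores[~truth].mean())
```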

The technique described in the paper combines these two ideas in a clever way. Because we expect the new particle to show up in a narrow region of invariant mass, you can use some of your data to train a classifier to distinguish events in a given slice of invariant mass from other events. If there is no signal with a mass in that region, the classifier should essentially learn nothing, but if there is a signal in that region, the classifier should learn to separate signal and background. Then one can apply that classifier to select events in the rest of the data (which hasn’t been used in the training) and look for a peak that would indicate a new particle. Because you don’t know ahead of time what mass any new particle should have, you scan over the whole range you have sufficient data for, looking for a new particle in each slice.
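
Schematically, the scan could be organized as sketched below. The arrays mjj (dijet invariant mass) and X (substructure features) are assumed inputs, and the real analysis is more careful (cross-validation folds, sideband definitions and a background fit), but the structure is the same: slide a window, train window-versus-rest, select on the score in held-out data, and histogram the result.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def scan_for_bumps(mjj, X, centers, window=200.0, keep_fraction=0.01):
    """For each mass window: train a window-vs-rest classifier on half the
    events (the CWoLa step), score the other half, keep the most signal-like
    events and histogram their mass for a bump hunt."""
    half = len(mjj) // 2
    train, test = slice(0, half), slice(half, None)
    results = []
    for center in centers:
        in_window = np.abs(mjj - center) < window / 2
        clf = GradientBoostingClassifier()
        clf.fit(X[train], in_window[train])              # labels: in-window vs outside
        scores = clf.predict_proba(X[test])[:, 1]        # "window-like" score, held-out events
        cut = np.quantile(scores, 1.0 - keep_fraction)
        selected = mjj[test][scores > cut]
        results.append((center, np.histogram(selected, bins=50)))
    return results

# e.g. scan_for_bumps(mjj, X, centers=np.arange(2500.0, 5000.0, 200.0))
```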

The specific case they use to demonstrate the power of this technique is new particles decaying to pairs of jets. On the surface, jets, the large sprays of particles produced when a quark or gluon is made in an LHC collision, all look the same. But the insides of jets, their sub-structure, can contain very useful information about what kind of particle produced them. If a new particle is produced and decays into other particles, like top quarks, W bosons or another new BSM particle, before decaying into quarks, then there will be a lot of interesting sub-structure in the resulting jet, which can be used to distinguish it from regular jets. In this paper the neural network uses information about the sub-structure of both jets in the event to determine if the event is signal-like or background-like.

The authors test out their new technique on a simulated dataset, containing some events where a new particle is produced and a large number of QCD background events. They train a neural network to distinguish events in a window of invariant mass of the jet pair from other events. With no selection applied there is no visible bump in the dijet invariant mass spectrum. With their technique they are able to train a classifier that can reject enough background such that a clear mass peak of the new particle shows up. This shows that you can find a new particle without relying on searching for a particular model, allowing you to be sensitive to particles overlooked by existing searches.

Demonstration of the bump hunt search. The shaded histogram is the amount of signal in the dataset. The different levels of blue points show the data remaining after applying tighter and tighter selections based on the neural network classifier score. The red line is the predicted amount of background events based on fitting the sideband regions. One can see that for the tightest selection (bottom set of points), the data forms a clear bump over the background estimate, indicating the presence of a new particle.

This paper was one of the first to really demonstrate the power of machine-learning-based searches. There is actually a competition being held to inspire researchers to try out other techniques on a mock dataset, so expect to see more new search strategies utilizing machine learning released soon. Of course the real excitement will come when a search like this is applied to real data and we can see whether machines can find new physics that we humans have overlooked!

Read More:

  1. Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”
  2. Blog Post on the CWoLa Method “Training Collider Classifiers on Real Data”
  3. Particle Bites Post “Going Rogue: The Search for Anything (and Everything) with ATLAS”
  4. Blog Post on applying ML to top quark decays “What does Bidirectional LSTM Neural Networks has to do with Top Quarks?”
  5. Extended Version of Original Paper “Extending the Bump Hunt with Machine Learning”