## A shortcut to truth

Article title: “Automated detector simulation and reconstruction
parametrization using machine learning”

Authors: D. Benjamin, S.V. Chekanov, W. Hopkins, Y. Li, J.R. Love

The simulation of particle collisions at the LHC is a pharaonic task. The messy chromodynamics of protons must be modeled; the statistics of the collision products must reflect the Standard Model; each particle has to travel through the detectors and interact with all the elements in its path. Its presence will eventually be reduced to electronic measurements, which, after all, is all we know about it.

Here the work of the simulation ends and that of reconstruction begins: going from electronic signals back to particles. Reconstruction is a process common to simulation and to the real world. Starting from the tangle of statistical and detector effects embedded in the actual measurements, the goal is to divine the properties of the initial collision products.

Now, researchers at Argonne National Laboratory looked into going from the simulated particles as produced in the collisions (aka “truth objects”) directly to the reconstructed ones (aka “reco objects”): bypassing the detailed interaction with the detectors and the reconstruction algorithm could make simulation-based studies much faster and more efficient.

The team used a neural network trained on fully simulated events. The goal was for the network to learn to produce the properties of the reco objects when given only the truth objects. The method successfully reproduced the transverse momenta of hadronic jets, and looks suitable for other kinds of particles and for other kinematic quantities.

More specifically, the researchers began with two million simulated jet events, passed in full through the ATLAS detector simulation and the reconstruction algorithm. For each event, the network took the kinematic properties of the truth jet as input and was trained to output the reconstructed transverse momentum.

The network was taught to perform multi-categorization: its output didn’t consist of a single node giving the momentum value, but of 400 nodes, each corresponding to a different range of values. The output of each node was the probability for that particular range. In other words, the result was a probability density function for the reconstructed momentum of a given jet.

The final step was to select the momentum randomly from this distribution. For half a million test jets, all this resulted in good agreement with the actual reconstructed momenta, specifically within 5% for values above 20 GeV. In addition, the training appears sensitive to the effects of quantities other than the target one (e.g. the effects of the position in the detector), as the neural network was able to pick up on the dependencies between the input variables. Also, hadronic jets are complicated animals, so the method is expected to work at least as well on other, simpler objects.
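The two steps just described, turning the 400 output nodes into a probability density and then drawing the reconstructed momentum from it, can be sketched as follows. This is a minimal illustration with a hypothetical bin range and stand-in network outputs; the real binning and trained network are specific to the ATLAS study:

```python
import numpy as np

rng = np.random.default_rng(0)

N_BINS = 400                                   # one output node per pT range
edges = np.linspace(20.0, 420.0, N_BINS + 1)   # hypothetical pT bins, GeV

def softmax(logits):
    """Turn raw output-node scores into bin probabilities (a discrete pdf)."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Stand-in for the trained network's 400 raw outputs for one truth jet
logits = rng.normal(size=N_BINS)
probs = softmax(logits)          # sums to 1: a pdf over the 400 pT ranges

# Final step: draw the reconstructed pT from this distribution --
# pick a bin with its predicted probability, then a value within the bin
i = rng.choice(N_BINS, p=probs)
pt_reco = rng.uniform(edges[i], edges[i + 1])
```

Sampling rather than taking, say, the most probable bin is what lets the method reproduce the statistical spread of the full simulation, not just its central values.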

All in all, this work demonstrated the potential of neural networks to successfully imitate the effects of the detector and the reconstruction. Simulations in large experiments typically consume huge amounts of time and resources due to their size, their intricacy and the frequent need to update detector conditions. Such a shortcut, needing only a small number of fully processed events, would speed up studies such as the optimization of reconstruction algorithms and of detector upgrades.

Intro to neural networks: https://physicsworld.com/a/neural-networks-explained/

## Crystals are dark matter’s best friends

Article title: “Development of ultra-pure NaI(Tl) detector for COSINE-200 experiment”

Authors: B.J. Park et al.

Reference: arXiv:2004.06287

The landscape of direct detection of dark matter is a perplexing one; all experiments have so far come up with deafening silence, except for a single one which promises a symphony. This is the DAMA/LIBRA experiment in Gran Sasso, Italy, which has been seeing an annual modulation in its signal for two decades now.

Such an annual modulation is as dark-matter-like as it gets. First proposed by Katherine Freese in 1987, it would be the result of Earth’s motion inside the galactic halo of dark matter: along with the Sun’s motion for half of the year and against it during the other half. However, DAMA/LIBRA’s results are in conflict with other experiments – but with the catch that none of those used the same setup. The way to settle this is obviously to build more experiments with the DAMA/LIBRA setup. This is an ongoing effort which ultimately focuses on the crystals at its heart.
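The modulation follows from simple kinematics: Earth’s speed through the halo is the Sun’s speed plus the projection of Earth’s orbital velocity, which flips sign over the year. A rough sketch with the standard halo-model numbers (the values and the early-June phase are textbook approximations, not taken from the paper):

```python
import math

V_SUN = 232.0          # Sun's speed through the dark-matter halo, km/s
V_ORBIT = 30.0         # Earth's orbital speed around the Sun, km/s
COS_INCLINATION = 0.5  # projection of the orbit onto the Sun's direction

def earth_halo_speed(day_of_year):
    """Earth's speed through the halo; maximal around June 2 (day ~152)."""
    phase = 2 * math.pi * (day_of_year - 152) / 365.25
    return V_SUN + V_ORBIT * COS_INCLINATION * math.cos(phase)

v_max = earth_halo_speed(152)               # early June: ~247 km/s
v_min = earth_halo_speed(152 + 365.25 / 2)  # early December: ~217 km/s
# This few-percent speed difference modulates the expected event rate annually
```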

The specific crystals are made of the scintillating material thallium-doped sodium iodide, NaI(Tl). Dark matter particles, and particularly WIMPs, would collide elastically with atomic nuclei and the recoil would give off photons, which would eventually be captured by photomultiplier tubes at the ends of each crystal.

Right now a number of NaI(Tl)-based experiments are at various stages of preparation around the world, with COSINE-100 at Yangyang mountain, South Korea, already producing negative results. However, these are still not on an equal footing with DAMA/LIBRA’s because of higher backgrounds at COSINE-100. What is the collaboration to do, then? The answer is to focus even more on the crystals and how they are prepared.

Over the last couple of years some serious R&D went into growing better crystals for COSINE-200, the planned upgrade of COSINE-100. Yes, a crystal is something that can and does grow. A seed placed inside the raw material, in this case NaI(Tl) powder, leads it to organize itself around the seed’s structure over the next hours or days.

In COSINE-100 the most troublesome backgrounds came from within the crystals themselves: from the production process, from natural radioactivity, and from cosmogenically induced isotopes. Let’s see how each of these was tackled on the experiment’s mission towards a radiopure upgrade.

Improved techniques of growing and preparing the crystals reduced contamination from the materials of the grower device and from the ambient environment. At the same time different raw materials were tried out to put the inherent contamination under control.

Among a handful of naturally present radioactive isotopes, particular care was given to 40K. 40K decays characteristically to an X-ray of 3.2 keV and a γ-ray of 1,460 keV, a combination convenient for tagging it to a large extent. The tagging is done with the help of 2,000 liters of liquid scintillator surrounding the crystals. However, if the γ-ray escapes the crystal, then the left-behind X-ray will mimic the expected signal from WIMPs… Eventually the dangerous 40K was brought down to levels comparable to those in DAMA/LIBRA through the investigation of various growing techniques and raw materials.

But the main source of radioactive background in COSINE-100 was isotopes such as 3H or 22Na, created inside the crystals by cosmic rays after their production. Their abundance was reduced significantly by two simple moves: the crystals were grown locally at a very low altitude and installed underground within a few weeks (instead of being transported from a lab at 1,400 meters above sea level in Colorado). Moreover, most of the remaining cosmogenic background is expected to decay away within a couple of years.

Where do these efforts stand? The energy range of interest for testing the DAMA/LIBRA signal is 1–6 keV. This corresponds to a background target of 1 count/kg/day/keV. After the crystal R&D, the achieved contamination was less than about 0.34 counts/kg/day/keV. In short, everything is ready for COSINE-100 to upgrade to COSINE-200 and test the annual modulation without the ambiguities that previously stood in the way.

More on DAMA/LIBRA in ParticleBites:

- Cross-checking the modulation.
- The COSINE-100 experiment.
- First COSINE-100 results.

## Muon to electron conversion

Presenting: Section 3.2 of “Charged Lepton Flavor Violation: An Experimenter’s Guide”

Authors: R. Bernstein, P. Cooper

Reference: arXiv:1307.5787 (Phys. Rept. 532 (2013) 27)

Not all searches for new physics involve colliding protons at the highest human-made energies. An alternative approach is to look for deviations in ultra-rare events at low energies. These deviations may be the quantum footprints of new, much heavier particles. In this bite, we’ll focus on the conversion of a muon to an electron in the presence of a heavy atom.

The muon is a heavy version of the electron. There are a few properties that make muons nice systems for precision measurements:

1. They’re easy to produce. When you smash protons into a dense target, like tungsten, you get lots of light hadrons—among them, the charged pions. These charged pions decay into muons, which one can then collect by bending their trajectories with magnetic fields. (Puzzle: why don’t pions decay into electrons? Answer below.)
2. They can replace electrons in atoms. If you point this beam of muons at a target, then some of the muons will replace electrons in the target’s atoms. This is very nice because these “muonic atoms” are described by non-relativistic quantum mechanics with the electron mass replaced by ~100 MeV. (Muonic hydrogen was previously mentioned in this bite on the proton radius problem.)
3. They decay, and the decay products always include an electron that can be detected.  In vacuum it will decay into an electron and two neutrinos through the weak force, analogous to beta decay.
4. These decays are sensitive to virtual effects. You don’t need to directly create a new particle in order to see its effects. Potential new particles are constrained to be very heavy to explain their non-observation at the LHC. However, even these heavy particles can leave an imprint on muon decay through ‘virtual effects,’ according (roughly) to the Heisenberg uncertainty principle: you can quantum mechanically violate energy conservation, but only for very short times.

One should be surprised that muon conversion is even possible. The process $\mu \to e$ cannot occur in vacuum because it cannot simultaneously conserve energy and momentum. (Puzzle: why is this true? Answer below.) However, this process is allowed in the presence of a heavy nucleus that can absorb the additional momentum, as shown in the comic at the top of this post.

Muon conversion experiments exploit this by forming muonic atoms in the 1s state and waiting for the muon to convert into an electron, which can then be detected. The upside is that all electrons from conversion have a fixed energy because they all come from the same initial state: 1s muonic aluminum at rest in the lab frame. This is in contrast with more common muon decay modes, which involve two neutrinos and an electron; because this is a multibody final state, there is a smooth distribution of electron energies. This feature allows physicists to distinguish $\mu \to e$ conversion from the more frequent muon decay in orbit, $\mu \to e \nu_\mu \bar \nu_e$, and from muon capture by the nucleus (similar to electron capture).
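That fixed energy can be estimated on the back of an envelope: it is the muon’s rest energy minus its 1s binding energy in the muonic atom and the small nuclear recoil. A rough sketch for aluminum (the binding-energy value is an approximation, not taken from the paper):

```python
M_MU = 105.66      # muon mass, MeV
B_1S_AL = 0.46     # approximate 1s binding energy in muonic aluminum, MeV
M_AL = 27 * 931.5  # aluminum nuclear mass, MeV (A times one atomic mass unit)

# The conversion electron carries essentially the whole muon rest energy,
# minus binding and recoil: a sharp line near 105 MeV, unlike the smooth
# spectrum from ordinary muon decay in orbit
recoil = M_MU**2 / (2 * M_AL)            # nuclear recoil, ~0.2 MeV
e_conversion = M_MU - B_1S_AL - recoil   # ~105 MeV
```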

The Standard Model prediction for this rate is minuscule—it’s weighted by powers of the neutrino to W boson mass ratio. (Puzzle: how does one see this? Answer below.) In fact, the current experimental bound on muon conversion comes from the SINDRUM II experiment, which looked at muonic gold and constrained the relative rate of muon conversion to muon capture by the gold nucleus to be less than $7 \times 10^{-13}$. This, in turn, constrains models of new physics that predict some level of charged lepton flavor violation—that is, processes that change the flavor of a charged lepton, say going from muons to electrons.

The plot on the right shows the energy scales that are indirectly probed by upcoming muonic aluminum experiments: the Mu2e experiment at Fermilab and the COMET experiment at J-PARC. The blue lines show bounds from another rare muon decay: muons decaying into an electron and a photon. The black solid lines show the reach for muon conversion in muonic aluminum. The dashed lines correspond to different experimental sensitivities (capture rates for conversion, branching ratios for the decay with a photon). Note that the energy scales probed can reach 1–10 PeV—that’s 1,000–10,000 TeV—much higher than the energy scales directly probed by the LHC! In this way, flavor experiments and high-energy experiments are complementary searches for new physics.

These “next generation” muon conversion experiments are currently under construction and promise to push the intensity frontier in conjunction with the LHC’s energy frontier.

### Solutions to exercises:

1. Why do pions decay into muons and not electrons? [Note: this requires some background in undergraduate-level particle physics.] One might expect that if a charged pion can decay into a muon and a neutrino, then it should also go into an electron and a neutrino. In fact, the latter should dominate since there’s much more phase space. However, the matrix element requires a virtual W boson exchange and thus depends on an [axial] vector current. The only vector available from the pion system is its 4-momentum. By momentum conservation this is $p_\pi = p_\mu + p_\nu$. The lepton momenta then contract with Dirac matrices on the leptonic current to give a dominant piece proportional to the lepton mass. Thus the amplitude for charged pion decay into a muon is much larger than the amplitude for decay into an electron.
2. Why can’t a muon decay into an electron in vacuum? The process $\mu \to e$ cannot simultaneously conserve energy and momentum. This is simplest to see in the reference frame where the muon is at rest. Momentum conservation requires the electron to also be at rest. However, a particle at rest has energy equal to its mass, and the electron’s mass is smaller than the muon’s, so there is no way a muon at rest can pass all of its energy to an electron at rest.
3. Why is muon conversion in the Standard Model suppressed by the ratio of the neutrino to W masses? This can be seen by drawing the Feynman diagram (figure below, from arXiv:1401.6077). Flavor violation in the Standard Model requires a W boson. Because the W is much heavier than the muon, it must be virtual and appear only as an internal leg. Further, W‘s couple charged leptons to neutrinos, so there must also be a virtual neutrino. Evaluating this diagram into an amplitude gives factors of the neutrino mass in the numerator (required for the fermion chirality flip) and the W mass in the denominator. For some details, see this post.
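Solution 1 can also be checked numerically. The standard helicity-suppressed result is $\Gamma(\pi \to e\nu)/\Gamma(\pi \to \mu\nu) = (m_e/m_\mu)^2 \left[(m_\pi^2 - m_e^2)/(m_\pi^2 - m_\mu^2)\right]^2$; a quick evaluation with rounded PDG masses:

```python
# Masses in MeV (PDG values, rounded)
M_PI = 139.57   # charged pion
M_MU = 105.66   # muon
M_E = 0.511     # electron

def rate(m_l):
    """Rate for pi -> l nu, up to factors common to both lepton channels."""
    return m_l**2 * (M_PI**2 - m_l**2) ** 2

ratio = rate(M_E) / rate(M_MU)  # ~1.28e-4
# Despite the larger phase space, the electron channel is tiny: the
# amplitude carries a factor of the lepton mass (helicity suppression)
```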