Proton Momentum: Hiding in Plain Sight?

Protons and neutrons at first glance seem like simple objects. They have well-defined spin and electric charge, and we even know their quark compositions: a proton is composed of two up quarks and one down quark, and a neutron of two down quarks and one up quark. Further, if a proton is moving, it carries momentum, but how is this momentum distributed among its constituent quarks? In this post, we will see that most of the momentum of the proton is in fact not carried by its constituent quarks.

Before we start, we need to have a small discussion about isospin. This will let us immediately write down the results we need later. Isospin is a quantum number that, in practice, allows us to package particles together. Protons and neutrons form an isospin doublet, which means they come in the same mathematical package. The proton is the isospin +1/2 component of this package, and the neutron is the isospin -1/2 component. Similarly, up quarks and down quarks form their own isospin doublet and come in their own package. If we are careful about which particles we scatter off each other, our calculations will permit us to exchange components of isospin packages everywhere instead of redoing calculations from scratch. This exchange is what I will call the “isospin trick.” It turns out that comparing electron-proton scattering to electron-neutron scattering allows us to use this trick:

\text{Proton} \leftrightarrow \text{Neutron} \\ u \leftrightarrow d

Back to protons and neutrons. We know that protons and neutrons are composite particles: they themselves are made up of more fundamental objects. We need a way to “zoom into” these composite particles, to look inside them, and we do this with the help of structure functions F(x). Structure functions for the proton and neutron encode how electric charge and momentum are distributed among the constituents. We assign u(x) and d(x) to be the probability of finding an up or down quark carrying a fraction x of the proton's momentum. Explicitly, these structure functions look like:

F(x) \equiv \sum_q e_q^2 q(x) \\ F_P(x) = \frac{4}{9}u(x) + \frac{1}{9}d(x) \\ F_N(x) = \frac{4}{9}d(x) + \frac{1}{9}u(x)

where the first line is the definition of a structure function. In this line, q denotes quarks, and e_q is the electric charge of quark q. In the second line, we have written out explicitly the structure function for the proton F_P(x), and invoked the isospin trick to immediately write down the structure function for the neutron F_N(x) in the third line. Observe that if we had attempted to write down F_N(x) following the definition in line 1, we would have gotten the same thing as the proton.

At this point we must turn to experiment to determine u(x) and d(x). The plot we will examine [1] is figure 17.6 taken from section 17.4 of Peskin and Schroeder, An Introduction to Quantum Field Theory. Some data is omitted to illustrate a point.

The momentum distribution of the quarks inside the proton. Some data has been omitted for the purposes of this discussion. The full plot is provided at the end of this post.

This plot shows the momentum distribution of the up and down quarks inside a proton. On the horizontal axis is the momentum fraction x and on the vertical axis is probability. The two curves represent the probability distributions of the up (u) and down (d) quarks inside the proton. Integrating these curves gives the total fraction of the proton's momentum stored in the up and down quarks, which I will call U and D. We want to know both U and D, so we need another equation to close the system. Luckily we can repeat this experiment using neutrons instead of protons, obtain a similar set of curves, and integrate them to obtain the following system of equations:

\int_0^1 F_P(x)\,dx = \frac{4}{9}U + \frac{1}{9}D = 0.18 \\ \int_0^1 F_N(x)\,dx = \frac{4}{9}D + \frac{1}{9}U = 0.12

Solving this system for U and D yields U = 0.36 and D = 0.18. We immediately see that the total momentum carried by the up and down quarks is only \sim 54% of the momentum of the proton. Said a different way, the three quarks that make up the proton carry only about half of its momentum. One possible conclusion is that the proton has more “stuff” inside of it that stores the remaining momentum. It turns out that this additional “stuff” is gluons, the mediators of the strong force. If we include gluons (and anti-quarks) in the momentum distribution, we can see that at low momentum fraction x, most of the proton momentum is stored in gluons. Throughout this discussion, we have neglected anti-quarks because even at low momentum fractions, they are sub-dominant to gluons. The full plot as seen in Peskin and Schroeder is provided below for completeness.
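As a quick check of that arithmetic, here is a minimal Python sketch that solves the two-equation system above (the 0.18 and 0.12 values are the measured integrals quoted from the plot):

import numpy as np

# (4/9) U + (1/9) D = 0.18   (proton)
# (1/9) U + (4/9) D = 0.12   (neutron)
A = np.array([[4/9, 1/9],
              [1/9, 4/9]])
b = np.array([0.18, 0.12])

U, D = np.linalg.solve(A, b)
print(f"U = {U:.2f}, D = {D:.2f}, U + D = {U + D:.2f}")
# U = 0.36, D = 0.18, U + D = 0.54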

References

[1] – Peskin and Schroeder, An Introduction to Quantum Field Theory, Section 17.4, figure 17.6.

Further Reading

[A] – The uses of isospin in early nuclear and particle physics. This article follows the historical development of isospin and does a good job of motivating why physicists used it in the first place.

[B] – Fundamentals in Nuclear Theory, Ch3. This is a more technical treatment of isospin, roughly at the level of undergraduate advanced quantum mechanics.

[C] – Symmetries in Physics: Isospin and the Eightfold Way. This provides a slightly more group-theoretic perspective of isospin and connects it to the SU(3) symmetry group.

[D] – Structure Functions. This is the Particle Data Group treatment of structure functions.

[E] – Introduction to Parton Distribution Functions. Not a paper, but an excellent overview of Parton Distribution Functions.

The neutrino from below

Article title: The ANITA Anomalous Events as Signatures of a Beyond Standard Model Particle and Supporting Observations from IceCube

Authors: Derek B. Fox, Steinn Sigurdsson, Sarah Shandera, Peter Mészáros, Kohta Murase, Miguel Mostafá, and Stephane Coutu 

Reference: arXiv:1809.09615v1

Neutrinos have arguably made history for being nothing other than controversial. From their very inception, Wolfgang Pauli described his own proposal of the neutrino as “something no theorist should ever do”. Years later, once the role that neutrinos play in the processes of our sun had been established, it was discovered that the sun simply wasn't providing enough of them. In the end the only option was to concede that neutrinos were more complicated than we ever thought, opening up a new area of study of ‘flavor oscillations’ with the consequence that they may in fact possess a small, non-zero mass – to this day yet to be explained.

On a more recent note, neutrinos have sneakily raised eyebrows with a number of other interesting anomalies. The OPERA experiment, which detects neutrinos sent from CERN in Geneva, Switzerland to Gran Sasso, Italy, made international news with reports that neutrinos had been measured traveling faster than the speed of light. Such an observation would certainly shatter the very foundations of modern physics and so was met with plenty of healthy skepticism. Alas, it was eventually traced back to a faulty timing cable and all was right with the world again. However, this was not the last time that neutrinos would be involved in a controversial anomaly.

The NASA-involved Antarctic Impulsive Transient Antenna (ANITA) is an experiment designed to search for very, very energetic neutrinos originating from outer space. As the name suggests, the experiment consists of a series of radio antennae carried by a balloon floating high above the Antarctic ice. Very energetic neutrinos can in fact produce intense radio-wave signals when they pass through the Earth and scatter off atoms in the Antarctic ice. This may sound strange, as neutrinos are typically referred to as ‘elusive’; however, at incredibly high energies their probability of scattering increases dramatically – to the point where the Earth is ‘opaque’ to these neutrinos.

ANITA typically searches for radio signals from the electromagnetic component of cosmic-ray air showers in the atmosphere, which reflect off the ice surface and thereby invert the phase of the radio wave. Alternatively, a small number of events can arrive from the direction of the horizon, without reflecting off the ice and hence without an inverted waveform. However, physicists were surprised to find signals originating from below the ice, without phase inversion, in a direction much too steep to originate from the horizon.

Why is this a surprise, you may ask? Well, any Standard Model (SM) particle at these energies would have trouble traversing such a long distance through the Earth, measured in one of the observations as a chord length of 5700 km, whereas a neutrino would be expected to survive only a few hundred km. Such events would be expected to mainly involve \nu_{\tau} (tau neutrinos), since these have the potential to convert to a charged tau lepton shortly before arriving, which then hadronizes into an air shower; this is simply not possible for electrons or muons, which are absorbed over a much shorter distance. But even in the case of tau neutrinos, the probability of such an event occurring with the observed trajectory is very small (below one in a million), leading physicists to explore more exotic (and exciting) options.
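To get a rough feel for why a Standard Model neutrino is so unlikely to make the trip, here is a naive attenuation estimate in Python (it ignores tau regeneration and energy loss, and the interaction length is an assumed stand-in for the “few hundred km” quoted above):

import numpy as np

chord_length_km = 5700.0        # chord length quoted for one of the ANITA events
interaction_length_km = 300.0   # assumed survival length for a neutrino at these energies

# Simple exponential attenuation: probability of crossing the chord without interacting
survival_probability = np.exp(-chord_length_km / interaction_length_km)
print(f"Naive survival probability: {survival_probability:.1e}")
# Roughly 6e-9 with these inputs, far below even the one-in-a-million level quoted above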

A simple possibility is that the ultra-high energy neutrinos coming from space could interact within the Earth to produce a BSM (Beyond Standard Model) particle that passes through the Earth until it exits, decays back to an SM lepton, and then hadronizes in a shower of particles. Such a situation is shown in Figure 1, where the BSM particle is the stau slepton \tilde{\tau} from a well-known and popular supersymmetric extension of the SM.

Figure 1: An ultra-high energy neutrino interacting within the Earth to produce a beyond Standard Model particle before decaying back to a charged lepton and hadronizing, leaving a radio signal for ANITA. From “Tests of new physics scenarios at neutrino telescopes” talk by Bhavesh Chauhan.

In some popular supersymmetric extensions of the Standard Model, the stau slepton is typically the next-to-lightest supersymmetric particle (or NLSP) and can in fact be quite long-lived. In the presence of a nucleus, the stau may convert to the tau lepton and the LSP, which is typically the neutralino. In the paper titled above, the stau NLSP \tilde{\tau}_R can exist within the Gauge-Mediated Supersymmetry Breaking (GMSB) model and can be produced through ultra-high energy neutrino interactions with nucleons with a not-so-tiny branching ratio of BR \lesssim 10^{-4}. Of course, tension remains with direct searches for staus that could fit reasonably within this scenario, but the prospect of observing effects of BSM physics without the expense of colliders remains appealing.

But the attempts at new physics explanations don't end there. There are some ideas that involve the decays of very heavy dark matter candidates in the center of the Milky Way. In a similar vein, another possibility comes from the well-motivated sterile neutrino – a BSM candidate to explain the small, non-zero mass of the neutrino. There are a number of explanations for a large flux of sterile neutrinos through the Earth, and the rate at which they interact with the Earth is much more suppressed than for the light “active” neutrinos. One could then hope that they make their passage through the Earth and convert back before reaching the ANITA detector, ultimately producing a tau lepton.

Anomalies like these come and go, however in any case, physicists remain interested in alternate pathways to new physics – or even a motivation to search in a more specific region with collider technology. But collecting more data first always helps!


Neutrinos: What Do They Know? Do They Know Things?

Title: “Upper Bound of Neutrino Masses from Combined Cosmological Observations and Particle Physics Experiments”

Author: Loureiro et al. 

Reference: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.123.081301

Neutrinos are almost a lot of things. They are almost massless, yet the fact that they have any mass at all goes against the predictions of the Standard Model. Possessing this non-zero mass, they should travel at almost the speed of light, but not quite, in order to be consistent with the principles of special relativity. Yet each measurement of neutrino propagation speed returns a value that is, within experimental error, exactly the speed of light. Coupled only to the weak force, they are almost non-interacting, with 65 billion of them streaming from the sun through each square centimeter of Earth each second, almost undetected.

How do all of these pieces fit together? The story of the neutrino begins in 1930, when Wolfgang Pauli proposed an as-yet-undetected particle emitted during beta decay in order to explain an apparent violation of energy and momentum conservation. In 1956, antineutrinos were detected through inverse beta decay, with the resulting positron annihilating into two gamma rays, confirming the existence of neutrinos. Yet with this confirmation came an assortment of growing mysteries. In the decades that followed, a series of experiments found that there are three distinct flavors of neutrino, one corresponding to each type of lepton: electron, muon, and tau. Subsequent measurements of propagating neutrinos then revealed a curious fact: these three flavors are anything but distinct. When the flavor of a neutrino is initially measured to be, say, an electron neutrino, a second measurement of flavor after it has traveled some distance could return the answer of muon neutrino. Measure yet again, and you could find yourself a tau neutrino. This process, in which the probability of measuring a neutrino in one of the three flavor states varies as it propagates, is known as neutrino oscillation.

A representation of neutrino oscillation: three flavors of neutrino form a superposed wave. As a result, a measurement of neutrino flavor as the neutrino propagates switches between the three possible flavors. This mechanism implies that neutrinos are not massless, as previously thought. From: http://www.hyper-k.org/en/neutrino.html.

Neutrino oscillation threw a wrench into the Standard Model in terms of mass; it implies that the masses of the three neutrinos cannot all be equal to each other, and hence cannot all be zero. Specifically, at most one of them is allowed to be zero, with the remaining two non-zero and non-equal. While at first glance an oddity, oscillation arises naturally from the underlying mathematics, and we can arrive at this conclusion via a simple analysis. To think about a neutrino, we consider two sets of eigenstates (the states a particle is in when it is measured to have a certain observable quantity), one corresponding to flavor and one corresponding to mass. Because neutrinos are created in weak interactions, which conserve flavor, they are initially in a flavor eigenstate. Flavor and mass eigenstates cannot be simultaneously determined, and so each flavor eigenstate is a linear combination of mass eigenstates, and vice versa. Now, consider the case of three flavors of neutrino. As a neutrino propagates, each mass eigenstate in the superposition evolves with its own phase: states with different masses travel at slightly different speeds, in accordance with special relativity, so the superposition changes with distance. If the three masses were all identical, the relative phases would never change and the flavor content would stay fixed. Since we experimentally observe oscillation between neutrino flavors, we can conclude that their masses cannot all be the same.
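To make that last step concrete, the textbook two-flavor vacuum oscillation probability (a standard result quoted here for illustration, not taken from the paper) for a neutrino of energy E after traveling a distance L is, in natural units,

P(\nu_\alpha \rightarrow \nu_\beta) = \sin^2(2\theta) \, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)

which vanishes identically if \Delta m^2 = m_2^2 - m_1^2 = 0: without a mass splitting, there is no oscillation.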

Although this result was unexpected and provides the first known departure from the Standard Model, it is worth noting that it also neatly resolves a few outstanding experimental mysteries, such as the solar neutrino problem. Neutrinos in the sun are produced as electron neutrinos and are likely to interact with unbound electrons as they travel outward, transitioning them into a second mass state which can interact as any of the three flavors. By observing a solar neutrino flux roughly a third of its predicted value, physicists not only provided a potential answer to a previously unexplained phenomenon but also deduced that this second mass state must be larger than the state initially produced. Related flux measurements of neutrinos produced during charged particle interactions in the Earth’s upper atmosphere, which are primarily muon neutrinos, reveal that the third mass state is quite different from the first two mass states. This gives rise to two potential mass hierarchies: the normal (m_1 < m_2 \ll m_3) and inverted (m_3 \ll m_1 < m_2) ordering.

The PMNS matrix parametrizes the transformation between the neutrino mass eigenbasis and its flavor eigenbasis. The left vector represents a neutrino in the flavor basis, while the right represents the same neutrino in the mass basis. When an individual component of the transformation matrix is squared, it gives the probability to measure the specified mass for the corresponding flavor.
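For reference, the matrix equation in the figure has the schematic form (standard PMNS notation, written out here for convenience rather than reproduced from the paper):

\begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix} = \begin{pmatrix} U_{e1} & U_{e2} & U_{e3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau 1} & U_{\tau 2} & U_{\tau 3} \end{pmatrix} \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}

so that |U_{\alpha i}|^2 is the probability referred to in the caption.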

However, this oscillation also means that it is difficult to discuss neutrino masses individually, as measuring the sum of neutrino masses is currently easier from a technical standpoint. With current precision in cosmology, we cannot distinguish the three neutrinos at the epoch in which they become free-traveling, although this could change with increased precision. Future experiments in beta decay could also lead to progress in pinpointing individual masses, although current oscillation experiments are only sensitive to mass-squared differences \Delta m_{ij}^2 = m_i^2 - m_j^2. Hence, we frame our models in terms of these mass splittings and the mass sum, which also makes it easier to incorporate cosmological data. Current models of neutrinos are phenomenological: not directly derived from theory, but consistent with both theoretical principles and experimental data. The mixing between states is mathematically described by the PMNS (Pontecorvo-Maki-Nakagawa-Sakata) matrix, which is parametrized by three mixing angles and a phase related to CP violation. These parameters, as in most phenomenological models, have to be inserted into the theory. There is usually a wide space of parameters in such models, and constraining this space requires input from a variety of sources. In the case of neutrinos, both particle physics experiments and cosmological data provide key avenues for exploration into these parameters. In a recent paper, Loureiro et al. used such a strategy, incorporating data from the large scale structure of galaxies and the cosmic microwave background to provide new upper bounds on the sum of neutrino masses.

The group investigated two main classes of neutrino mass models: exact models and cosmological approximations. The former concerns models that integrate results from neutrino oscillation experiments and are parametrized by the smallest neutrino mass, while the latter class uses a model scheme in which the neutrino mass sum is related to an effective number of neutrino species N_{\nu} times an effective mass m_{eff} which is equal for each flavor. In exact models, Gaussian priors (an initial best-guess) were used with data sampling from a number of experimental results and error bars, depending on the specifics of the model in question. This includes possibilities such as fixing the mass splittings to their central values or assuming either a normal or inverted mass hierarchy. In cosmological approximations, N_{\nu} was fixed to a specific value depending on the particular cosmological model being studied, with the total mass sum sampled from data.

The end result of the group’s analysis, which shows the calculated neutrino mass bounds from 7 studied models, where the first 4 models are exact and the last 3 are cosmological approximations. The left column gives the probability distribution for the sum of neutrino masses, while the right column gives the probability distribution for the lightest neutrino in the model (not used in the cosmological approximation scheme). From: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.123.081301

The group ultimately demonstrated that cosmologically-based models result in upper bounds for the mass sum that are much lower than those generated from physically-motivated exact models, as we can see in the figure above. One of the models studied resulted in an upper bound that is not only different from those determined from neutrino oscillation experiments, but is inconsistent with known lower bounds. This puts us into the exciting territory that neutrinos have pushed us to again and again: a potential finding that goes against what we presently know. The calculated upper bound is also significantly different if the assumption is made that one of the neutrino masses is zero, with the mass sum contained in the remaining two neutrinos, setting the stage for future differentiation between neutrino masses. Although the group did not find any statistically preferable model, they provide a framework for studying neutrinos with a considerable amount of cosmological data, using results of the Planck, BOSS, and SDSS collaborations, among many others. Ultimately, the only way to arrive at a robust answer to the question of neutrino mass is to consider all of these possible sources of information for verification. 

With increased sensitivity in upcoming telescopes and a plethora of intriguing beta decay experiments on the horizon, we should be moving away from studies that update bounds and toward ones which make direct estimations. In these future experiments, previous analyses will prove vital in working toward an understanding of the underlying mass models and put us one step closer to unraveling the enigma of the neutrino. While there are still many open questions concerning their properties (Why are their masses so small? Is the neutrino its own antiparticle? What governs the mass mechanism?), studies like these help to grow intuition and prepare for the next phases of discovery. I'm excited to see what unexpected results come next for this almost elusive particle.

Further Reading:

  1. A thorough introduction to neutrino oscillation: https://arxiv.org/pdf/1802.05781.pdf
  2. Details on mass ordering: https://arxiv.org/pdf/1806.11051.pdf
  3. More information about solar neutrinos (from a fellow ParticleBites writer!): https://particlebites.com/?p=6778 
  4. A summary of current neutrino experiments and their expected results: https://www.symmetrymagazine.org/article/game-changing-neutrino-experiments 

Gamma Rays from Andromeda: Evidence of Dark Matter?

In a recent paper [1], Chris Karwin et al. report an excess of gamma rays that seems to originate from our nearest galaxy, Andromeda. The size of the excess is modest, roughly 3% – 5%. However, the spatial location of this excess is what is interesting. The luminous region of Andromeda is roughly 70 kpc in diameter [2], but the gamma ray excess is located roughly 120 – 200 kpc from the center. Below is an approximately-to-scale diagram that illustrates the spatial extent of the excess:

Pictorial representation of the spatial extent of the gamma ray excess from Andromeda.

There may be many possible explanations and interpretations for this excess; one of the more exotic possibilities is dark matter annihilation. The dark matter paradigm says that galaxies form within “dark matter halos,” a cloud of dark matter that attracts luminous matter that will eventually form a galaxy [3]. The dark matter halo encompassing Andromeda has a virial radius of about 200 kpc [4], well beyond the location of the gamma ray excess. This means that there is dark matter at the location of the excess, but how do we start with dark matter and end up with photons?

Within the “light mediator” paradigm of dark matter particle physics, dark matter can annihilate with itself into massive dark photons, the dark matter equivalent of the photon. These dark photons, by ansatz, can interact with the Standard Model and ultimately decay into photons. A schematic diagram of this process is provided below:

Dark matter (X) annihilates into dark photons (A) which couple to Standard Model particles that eventually decay into photons.

To recap,

  1. There is an excess of high energy, 1-100 GeV photons from Andromeda.
  2. The location of this excess is displaced from the luminous matter within Andromeda.
  3. This spatial displacement means that the cause of this excess is probably not the luminous matter within Andromeda.
  4. We do know however, that there is dark matter at the location of the excess.
  5. It is possible for dark matter to yield high energy photons so this excess may be evidence of dark matter.

References:

[1] – FERMI-LAT Observations Of Gamma Ray Emission Towards the Outer Halo of M31. This is the paper that reports the gamma ray excess from Andromeda.

[2] – A kinematically selected, metal-poor stellar halo in the outskirts of M31. This work estimates the size or extent of Andromeda.

[3] – The Connection between Galaxies and their Dark Matter Halos. This paper details how dark matter halos relate and impact galaxy formation.

[4] – Stellar mass map and dark matter distribution in M31. This paper determines various properties of the Andromeda dark matter halo.

Solar Neutrino Problem

Why should we even care about neutrinos coming from the sun in the first place? In the 1960s, the processes governing the interior of the sun were not well understood. There was a strong suspicion that the sun's main energy source was the fusion of Hydrogen into Helium, but there was no direct evidence for this hypothesis. This is because the photons produced in fusion processes have a mean free path of about 10^{-10} times the radius of the sun [1]: they scatter constantly and random-walk their way outward, so it takes thousands of years for the light produced inside the core of the sun to escape and be detected at Earth. Photons, then, are not a good experimental observable to use if we want to understand the interior of the sun.
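A back-of-the-envelope random walk gives a feel for this timescale (order of magnitude only; the numbers below are my own illustrative inputs, and the result is very sensitive to the exact mean free path):

import numpy as np

c = 3.0e8                        # speed of light, m/s
R_sun = 7.0e8                    # solar radius, m
mean_free_path = 1e-10 * R_sun   # mean free path quoted above, ~7 cm

# A random walk needs ~(R / l)^2 steps to cover a net distance R,
# so the total path length is ~R^2 / l and the escape time is ~R^2 / (l c).
escape_time_s = R_sun**2 / (mean_free_path * c)
print(f"Photon escape time: ~{escape_time_s / 3.15e7:.0f} years")
# Roughly 700 years with these inputs; a shorter mean free path in the dense
# core pushes the estimate into the thousands of years quoted above.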

Additionally, these fusion processes also produce neutrinos, which are essentially non-interacting. On one hand, this means that neutrinos can escape the interior of the sun unimpeded; they thus give us a direct probe into the core of the sun without the wait that photons require. On the other hand, these same properties mean that detecting them is extremely difficult.

The undertaking to understand and measure these neutrinos was led by John Bahcall, who headed the theoretical effort, and Ray Davis Jr., who headed the experimental effort.

In 1963, John Bahcall gave the first prediction of the neutrino flux coming from the sun [1]. Five years later in 1968, Ray Davis provided the first measurement of the solar neutrino flux [2]. They found that the predicted value was about 2.5 times higher than the measured value. This discrepancy is what became known as the solar neutrino problem.

This plot shows the discrepancy between the measured (blue) and predicted (other colors) amounts of electron neutrinos from various experiments. Blue corresponds to experimental measurements; the other colors correspond to the predicted numbers of neutrinos from various sources. This figure was first presented in a 2004 paper by Bahcall [3].

Broadly speaking, there were three causes for this discrepancy:

  1. The prediction was incorrect. This was Bahcall's domain. At lowest order, this could involve some combination of two things: first, incorrect modeling of the sun, resulting in inaccurate neutrino fluxes; second, inaccurate calculation of the observable signal resulting from the neutrino interactions with the detector. Bahcall and his collaborators spent 20 years refining this work, but the discrepancy persisted.
  2. The experimental measurement was incorrect. During those same 20 years, until the late 1980’s, Ray Davis’ experiment was the only active neutrino experiment [4]. He continued to improve the experimental sensitivity, but the discrepancy still persisted.
  3. New Physics. In 1968, B. Pontecorvo and V. Gribov formulated neutrino oscillations as we know them today. They proposed that neutrino flavor eigenstates are linear combinations of mass eigenstates [5]. At a hand-wavy level, this ansatz sounds reasonable: it is the mass eigenstates that have well-defined time evolution in quantum mechanics, so a neutrino produced in one flavor can change its identity as it propagates from the Sun to the Earth.

It turns out that Pontecorvo and Gribov had found the resolution to the Solar Neutrino Problem. It would take an additional 30 years for experimental verification of neutrino oscillations by Super-K in 1998 [6] and the Sudbury Neutrino Observatory (SNO) in 1999 [7].


References:

[1] – Solar Neutrinos I: Theoretical This paper lays out the motivation for why we should care about solar neutrinos at all.

[2] – Search for Neutrinos from the Sun The first announcement of the measurement of the solar neutrino flux.

[3] – Solar Models and Solar Neutrinos This is a summary of the Solar Neutrino Problem as presented by Bahcall in 2004.

[4] – The Evolution of Neutrino Astronomy A recounting of their journey in neutrino oscillations written by Bahcall and Davis.

[5] – Neutrino Astronomy and Lepton Charge This is the paper that laid down the mathematical groundwork for neutrino oscillations.

[6] – Evidence for Oscillation of Atmospheric Neutrinos The Super-K collaboration reporting their findings in support of neutrino flavor oscillations.

[7] – The Sudbury Neutrino Observatory The SNO collaboration announcing that they had strong experimental evidence for neutrino oscillations.

Additional References

[A] – Formalism of Neutrino Oscillations: An Introduction. An accessible introduction to neutrino oscillations, this is useful for anyone who wants a primer on this topic.

[B] – Neutrino Masses, Mixing, and Oscillations. This is the Particle Data Group (PDG) treatment of neutrino mixing and oscillation.

[C] – Solving the mystery of the missing neutrinos. Written by John Bahcall, this is a comprehensive discussion of the “missing neutrino” or “solar neutrino” problem.

Riding the wave to new physics

Article title: “Particle physics applications of the AWAKE acceleration scheme”

Authors: A. Caldwell, J. Chappell, P. Crivelli, E. Depero, J. Gall, S. Gninenko, E. Gschwendtner, A. Hartin, F. Keeble, J. Osborne, A. Pardons, A. Petrenko, A. Scaachi , and M. Wing

Reference: arXiv:1812.11164

On the energy frontier, the search for new physics remains a contentious issue – do we continue to build bigger, more powerful colliders? Or is this too costly (or too impractical) an endeavor? The standard method of accelerating charged particles remains in the realm of radio-frequency (RF) cavities, with electric field gradients of about 100 megavolts per meter (MV/m), such as those proposed for the future Compact Linear Collider (CLIC) at CERN, aiming for center-of-mass energies in the multi-TeV regime. Linear acceleration with such technology is nothing new, having been a key part of the SLAC National Accelerator Laboratory (California, USA) for decades before its shutdown around the turn of the millennium. However, a device such as CLIC would still require more than ten times the space of SLAC, predicted to come in at around 10-50 km. Not only that, the cavity walls are made of normal conducting material, so they tend to heat up very quickly and are typically run in short pulses. And we haven't even mentioned the costs yet!

Physicists are a smart bunch, however, and they're always on the lookout for new technologies, new techniques, and unique ways of looking at the same problem. As you may have guessed already, the limiting factor determining the length required for sufficient linear acceleration is the field gradient. But what if there were a way to achieve hundreds of times that of a standard RF cavity? The answer has been found in plasma wakefields – waves in a plasma, driven here by dense bunches of protons, with the potential to accelerate electrons to gigaelectronvolt energies in a matter of meters!

Plasma wakefields are by no means a new idea, having first been proposed at least four decades ago. However, most demonstrations of the idea have used electrons or lasers to ‘drive’ the wakefield in the plasma. More specifically, this is known as the ‘drive beam’, which does not actually participate in the acceleration but provides the large electric field gradient for the ‘witness beam’ – the electrons. Protons, which can penetrate much further into the plasma, had never been used as the drive beam – until now.

In fact, very recently CERN demonstrated proton-driven wakefield technology for the first time during the 2016-2018 run of AWAKE (which stands for Advanced Proton Driven Plasma Wakefield Acceleration Experiment, naturally), accelerating electrons to 2 GeV in only 10 m. The protons that drive the electrons are injected from the Super Proton Synchrotron (SPS) into a Rubidium gas, ionizing the atoms and altering their uniform electron distribution into an oscillating, wavelike state. The electrons that ‘witness’ the wakefield then ‘ride the wave’ much like a surfer at the forefront of a water wave. Right now, AWAKE is just a proof of concept, but plans to scale up to 10 GeV electrons in the coming years could pave the way toward using LHC-level proton energies to shoot electrons up to TeV energies!
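For a sense of scale (simple arithmetic on the numbers above, not a figure from the paper), the demonstrated result already corresponds to an average gradient well above the ~100 MV/m RF figure mentioned earlier:

# Average accelerating gradient demonstrated by AWAKE: 2 GeV over 10 m
energy_gain_GeV = 2.0
length_m = 10.0
gradient_MV_per_m = energy_gain_GeV * 1000.0 / length_m
print(f"Average gradient: {gradient_MV_per_m:.0f} MV/m")       # 200 MV/m

# Length that would be needed for 1 TeV electrons at this same average gradient
length_for_1TeV_km = 1.0e6 / gradient_MV_per_m / 1000.0
print(f"Length for 1 TeV: {length_for_1TeV_km:.0f} km")        # ~5 km; higher gradients would shorten this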

Figure 1: A layout of AWAKE (Advanced Proton Driven Plasma Wakefield Acceleration Experiment).

In this article, we focus instead on the interesting physics applications of such a device. Bunches of electrons with energies up to the TeV scale are so far unprecedented. The most obvious application would of course be a high-energy linear electron-positron collider. However, let's focus on some of the more novel experimental applications being discussed today, particularly those that could benefit from such a strong electromagnetic presence in almost a ‘tabletop physics’ configuration.

Awake in the dark

One of the most popular considerations when it comes to dark matter is the existence of dark photons, mediating interactions between dark and visible sector physics (see “The lighter side of Dark Matter” for more details). Finding them has been the subject of recent experimental and theoretical effort, including existing high-energy electron fixed-target experiments. Figure 2 shows such an interaction, where A^\prime represents the dark photon. One experiment based at CERN, known as NA64, already searches for dark photons with electrons incident on a target, utilizing a beam derived from the SPS proton beam. In the standard picture, the dark photon is searched for through a missing-energy signature: it leaves the detector without interacting, escaping with a portion of the energy. The energy of the electrons is of course not the issue when the SPS is used; the number of electrons, however, is.

Figure 2: Dark photon production from a fixed-target experiment with an electron-positron final state.

Assuming one could work with the AWAKE scheme, one could achieve numbers of electrons on target orders of magnitude larger – clearly enhancing the reach for masses and mixing of the dark photon. The idea would be to introduce a high number of energetic electron bunches to a tungsten target with a following 10 m long volume for the dark photon to decay (in accordance with Figure 2). Because of the opposite charges of the electron and positron, the final decay products can then be separated with magnetic fields and hence one can ultimately determine the dark photon invariant mass.

Figure 3 shows how much of an impact a larger number of on-target electrons would make for the discovery reach in the plane of kinetic mixing \epsilon vs mass of the dark photon m_{A^\prime} (again we refer the reader to “The lighter side of Dark Matter” for explanations of these parameters). With the existing NA64 setup, one can already see new areas of the parameter space being explored for 10^{10} – 10^{13} electrons. However, a significant difference can be seen with the electron bunches provided by the AWAKE configuration, with an ambitious limit shown by the 10^{16} electrons at 1 TeV.

Figure 3: Exclusion limits in the \epsilon - m_{A^\prime} plane for the dark photon decaying to an electron-positron final state. The NA64 experiment using larger numbers of electrons is shown in the colored non-solid curves from 10^{10} to 10^{13} total on-target electrons. The solid colored lines show the AWAKE-provided electron bunches with 10^{15} and 10^{16} at 50 GeV and 10^{16} at 1 TeV.

Light, but strong

Quantum Electrodynamics (or QED, for short), describing the interaction between fundamental electrons and photons, is perhaps the most precisely measured and well-studied theory out there, showing agreement with experiment in a huge range of situations. However, there are some extreme phenomena out in the universe where the strength of certain fields becomes so great that our current understanding starts to break down. For the electromagnetic field this can in fact be quantified as the Schwinger limit, above which it is expected that nonlinear field effects start to become significant. At a strength of around 10^{18} V/m, the nonlinear corrections to the equations of QED predict the appearance of electron-positron pairs spontaneously created from such an enormous field.
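The quoted number follows from the standard expression for the Schwinger critical field, E_S = m_e^2 c^3 / (e \hbar); a quick check in Python (textbook formula, evaluated here for illustration):

# Schwinger critical field E_S = m_e^2 c^3 / (e * hbar)
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J*s

E_schwinger = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_schwinger:.2e} V/m")   # ~1.3e18 V/m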

One of the predictions is the multiphoton interaction with electrons in the initial state. In linear QED, only the standard 2 \rightarrow 2 scattering, for example e^- + \gamma \rightarrow e^- + \gamma, is possible. In the strong-field regime, however, the initial state can open up to n photons. Given a strong enough laser pulse, multiple laser photons can interact with electrons, letting us investigate this incredible region of physics. We show this in Figure 4.

Figure 4: Multiphoton interaction with an electron (left) and electron-positron production from photon absorption (right). n here is the number of photons absorbed in the initial state.

The good and bad news is that this had already been performed as far back as the 90s in the E144 experiment at SLAC, using 50 GeV electron bunches – however, it was unable to reach the critical field value in the electrons' rest frame. AWAKE could certainly provide highly energetic electrons and allow for a different kinematic experimental reach. Could this provide the first experimental measurement of the Schwinger critical field?

Of course, these are just a few considerations amongst a plethora of uses for the production of energetic electrons over such short distances. However, as physicists continue their search for new physics, it may be time to consider the use of new acceleration technologies on a larger scale, as AWAKE has already shown its scalability. Wakefield acceleration may even establish itself with a fully-developed new-physics search plan of its own.


The Delirium over Helium

Title: “New evidence supporting the existence of the hypothetic X17 particle”

Authors: A.J. Krasznahorkay, M. Csatlós, L. Csige, J. Gulyás, M. Koszta, B. Szihalmi, and J. Timár; D.S. Firak, A. Nagy, and N.J. Sas; A. Krasznahorkay

Reference: https://arxiv.org/pdf/1910.10459.pdf

This is an update to the excellent “Delirium over Beryllium” bite written by Flip Tanedo back in 2016 introducing the Beryllium anomaly (I highly recommend starting there first if you just opened this page). At the time, the Atomki collaboration in Debrecen, Hungary, had just found an unexpected excess in the angular correlation distribution of electron-positron pairs from internal pair conversion in transitions of excited states of Beryllium. According to them, this excess is consistent with a new boson of mass 17 MeV/c^2, nicknamed the “X17” particle. (Note: for reference, 1 GeV/c^2 is roughly the mass of a proton; for simplicity, from now on I'll omit the “c^2” term by setting c, the speed of light, to 1 and just refer to masses in MeV or GeV. Here's a nice explanation of this procedure.)

A few weeks ago, the Atomki group released a new set of results that uses an updated spectrometer and measures the same observable (positron-electron angular correlation) but from transitions of Helium excited states instead of Beryllium. Interestingly, they again find a similar excess on this distribution, which could similarly be explained by a boson with mass ~17 MeV. There are still many questions surrounding this result, and lots of skeptical voices, but the replication of this anomaly in a different system (albeit not yet performed by independent teams) certainly raises interesting questions that seem to warrant further investigation by other researchers worldwide.

Nuclear physics and spectroscopy

The paper reports the production of excited states of Helium nuclei from the bombardment of tritium atoms with protons. To a non-nuclear physicist this may not be immediately obvious, but nuclei can be in excited states just as the electrons in atoms can. The entire quantum wavefunction of the nucleus is usually found in the ground state, but it can be excited by various mechanisms such as the proton bombardment used in this case. Protons with a specific energy (0.9 MeV) were targeted at tritium atoms to initiate the reaction ³H(p, γ)⁴He, in nuclear physics notation. The equivalent particle physics notation is p + ³H → ⁴He* → ⁴He + γ (→ e⁺e⁻), where ‘*’ denotes an excited state.

This particular proton energy serves to excite the newly-produced Helium nuclei into a state with an energy of 20.49 MeV. This energy is sufficiently close to the Jπ = 0⁻ state (i.e. negative parity and quantum number J = 0), which is the second excited state in the ladder of states of Helium. This state has a centroid energy of 21.01 MeV and a wide “sigma” (or decay width) of 0.84 MeV. Note that the energies of the first two excited states of Helium overlap quite a bit, so sometimes nuclei will actually be found in the first excited state instead, which is not phenomenologically interesting in this case.

Figure 1. Sketch of the energy distributions for the first two excited quantum states of Helium nuclei. The second excited state (with centroid energy of 21.01 MeV) exhibits an anomaly in the electron-positron angular correlation distribution in transitions to the ground state. Proton bombardment with 0.9 MeV protons yields Helium nuclei at 20.49 MeV, therefore producing both first and second excited states, which are overlapping.

With this reaction, experimentalists can obtain transitions from the Jπ = 0⁻ excited state back to the ground state with Jπ = 0⁺. These transitions typically produce a gamma ray (photon) with 21.01 MeV energy, but occasionally the photon will internally convert into an electron-positron pair, which is the experimental signature of interest here. A sketch of the experimental concept is shown below. In particular, the two main observables measured by the researchers are the invariant mass of the electron-positron pair, and the angular separation (or angular correlation) between them, in the lab frame.

Figure 2. Schematic representation of the production of excited Helium states from proton bombardment, followed by their decay back to the ground state with the emission of an “X” particle. X here can refer to a photon converting into a positron-electron pair, in which case this is an internal pair creation (IPC) event, or to the hypothetical “X17” particle, which is the process of interest in this experiment. Adapted from 1608.03591.

The measurement

For this latest measurement, the researchers upgraded the spectrometer apparatus to include 6 arms instead of the previous 5. Below is a picture of the setup with the 6 arms shown and labeled. The arms are at azimuthal positions of 0, 60, 120, 180, 240, and 300 degrees, and oriented perpendicularly to the proton beam.

Figure 3. The Atomki nuclear spectrometer. This is an upgraded detector from the previous one used to detect the Beryllium anomaly, featuring 6 arms instead of 5. Each arm has both plastic scintillators for measuring electrons’ and positrons’ energies, as well as a silicon strip-based detector to measure their hit impact positions. Image credit: A. Krasznahorkay.

The arms consist of plastic scintillators to detect the scintillation light produced by the electrons and positrons striking the plastic material. The amount of light collected is proportional to the energy of the particles. In addition, silicon strip detectors are used to measure the hit position of these particles, so that the correlation angle can be determined with better precision.

With this setup, the experimenters can measure the energy of each particle in the pair and also their incident positions (and, from these, construct the main observables: invariant mass and separation angle). They can also look at the scalar sum of energies of the electron and positron (Etot), and use it to zoom in on regions where they expect more events due to the new “X17” boson: since the second excited state lives around 21.01 MeV, the signal-enriched region is defined as 19.5 MeV < Etot < 22.0 MeV. They can then use the orthogonal region, 5 MeV < Etot < 19 MeV (where signal is not expected to be present), to study background processes that could potentially contaminate the signal region as well.

The figure below shows the angular separation (or correlation) between electron-positron pairs. The red asterisks are the main data points, and consist of events with Etot in the signal region (19.5 MeV < Etot < 22.0 MeV). We can clearly see the bump occurring around angular separations of 115 degrees. The black asterisks consist of events in the orthogonal region, 5 MeV < Etot < 19 MeV. Clearly there is no bump around 115 degrees here. The researchers then assume that the distribution of background events in the orthogonal region (black asterisks) has the same shape inside the signal region (red asterisks), so they fit the black asterisks to a smooth curve (blue line), and rescale this curve to match the number of events in the signal region in the 40 to 90 degrees sub-range (the first few red asterisks). Finally, the re-scaled blue curve is used in the 90 to 135 degrees sub-range (the last few red asterisks) as the expected distribution.
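To make the procedure concrete, here is a minimal sketch of that rescaling logic in Python (the shapes, binning, and counts below are made-up placeholders for illustration only, not Atomki data, and the actual analysis uses different functional forms):

import numpy as np

angles = np.linspace(42.5, 132.5, 19)                    # bin centers in degrees (placeholder binning)
signal_region_counts = 1200.0 * np.exp(-angles / 45.0)   # stand-in for the red asterisks
orthogonal_counts = 6000.0 * np.exp(-angles / 45.0)      # stand-in for the black asterisks

# Step 1: fit a smooth curve to the orthogonal-region (background-dominated) data.
log_fit = np.polyfit(angles, np.log(orthogonal_counts), deg=2)
smooth_background = np.exp(np.polyval(log_fit, angles))

# Step 2: rescale the curve to match the signal-region counts in the 40-90 degree sub-range.
low_angle = angles < 90.0
scale = signal_region_counts[low_angle].sum() / smooth_background[low_angle].sum()

# Step 3: use the rescaled curve as the background expectation in the 90-135 degree
# sub-range, where the anomalous bump is observed.
expected_background = scale * smooth_background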

Figure 4. Angular correlation between positrons and electrons emitted in Helium nuclear transitions to the ground state. Red dots are data in the signal region (sum of positron and electron energies between 19.5 and 22 MeV), and black dots are data in the orthogonal region (sum of energies between 5 and 19 MeV). The smooth blue curve is a fit to the orthogonal-region data, which is then re-scaled to be used as the background estimate in the signal region. The blue, black, and magenta histograms are Monte Carlo simulations of expected backgrounds. The green curve is a fit to the data under the hypothesis of a new “X17” particle.

In addition to the data points and fitted curves mentioned above, the figure also reports the researchers’ estimates of the physics processes that cause the observed background. These are the black and magenta histograms, and their sum is the blue histogram. Finally, there is also a green curve on top of the red data, which is the best fit to a signal hypothesis, that is, assuming that a new particle with mass 16.84 ± 0.16 MeV is responsible for the bump in the high-angle region of the angular correlation plot.

The other main observable, the invariant mass of the electron-positron pair, is shown below.

Figure 5. Invariant mass distribution of emitted electrons and positrons in the transitions of Helium nuclei to the ground state. Red asterisks are data in the signal region (sum of electron and positron energies between 19.5 and 22 MeV), and black asterisks are data in the orthogonal region (sum of energies between 5 and 19 MeV). The green smooth curve is the best fit to the data assuming the existence of a 17 MeV particle.

The invariant mass is constructed from the equation

m_{e^+e^-} = \sqrt{(1 - y^2)\, E_{\textrm{tot}}^2 \, \textrm{sin}^2(\theta/2) + 2m_e^2 \left(1 + \frac{1+y^2}{1-y^2}\, \textrm{cos} \, \theta \right)}

where all relevant quantities refer to electron and positron observables: Etot is as before the sum of their energies, y is the ratio of their energy difference over their sum (y \equiv (E_{e^+} - E_{e^-})/E_{\textrm{tot}}), θ is the angular separation between them, and m_e is the electron (and positron) mass. This is just one of the standard ways to calculate the invariant mass of two daughter particles in a reaction, when the known quantities are the angular separation between them and their individual energies in the lab frame.
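As a quick numerical check of the scale involved (the inputs below are representative values I chose for illustration, not an event from the paper's data):

import numpy as np

def invariant_mass_MeV(E_tot, y, theta_deg, m_e=0.511):
    """Invariant mass of an e+e- pair from the total energy, energy asymmetry y,
    and opening angle, using the formula above (all energies in MeV)."""
    theta = np.radians(theta_deg)
    return np.sqrt((1 - y**2) * E_tot**2 * np.sin(theta / 2)**2
                   + 2 * m_e**2 * (1 + (1 + y**2) / (1 - y**2) * np.cos(theta)))

# A symmetric pair (y ~ 0) carrying roughly the transition energy (~20.5 MeV),
# emitted with the ~115 degree opening angle where the bump is seen:
print(f"{invariant_mass_MeV(20.5, 0.0, 115.0):.1f} MeV")   # ~17 MeV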

The red asterisks are again the data in the signal region (19.5 MeV < Etot < 22 MeV), and the black asterisks are the data in the orthogonal region (5 MeV < Etot < 19 MeV). The green curve is a new best fit to a signal hypothesis, and in this case the best-fit scenario is a new particle with mass 17.00 ± 0.13 MeV, which is statistically compatible with the fit in the angular correlation plot. The significance of this fit is 7.2 sigma, which means the probability of the background hypothesis (i.e. no new particle) producing such large fluctuations in data is less than 1 in 390,682,215,445! It is remarkable and undeniable that a peak shows up in the data — the only question is whether it really is due to a new particle, or whether perhaps the authors failed to consider all possible backgrounds, or even whether there may have been an unexpected instrumental anomaly of some sort.

According to the authors, the same particle that could explain the anomaly in the Beryllium case could also explain the anomaly here. I think this claim needs independent validation by the theory community. In any case, it is very interesting that similar excesses show up in two “independent” systems such as the Beryllium and the Helium transitions.

Some possible theoretical interpretations

There are a few particle interpretations of this result that can be made compatible with current experimental constraints. Here I’ll just briefly summarize some of the possibilities. For a more in-depth view from a theoretical perspective, check out Flip’s “Delirium over Beryllium” bite.

The new X17 particle could be the vector gauge boson (or mediator) of a protophobic force, i.e. a force that interacts preferentially with neutrons but not so much with protons. This would certainly be an unusual and new force, but not necessarily impossible. Theorists have to work hard to make this idea work, as you can see here.

Another possibility is that the X17 is a vector boson with axial couplings to quarks, which could explain, in the case of the original Beryllium anomaly, why the excess appears in only some transitions but not others. There are complete theories proposed with such vector bosons that could fit within current experimental constraints and explain the Beryllium anomaly, but they also include new additional particles in a dark sector to make the whole story work. If this is the case, then there might be new accessible experimental observables to confirm the existence of this dark sector and the vector boson showing up in the nuclear transitions seen by the Atomki group. This model is proposed here.

However, an important caveat about these explanations is in order: so far, they only apply to the Beryllium anomaly. I believe the theory community needs to validate the authors’ assumption that the same particle could explain this new anomaly in Helium, and that there aren’t any additional experimental constraints associated with the Helium signature. As far as I can tell, this has not been shown yet. In fact, the similar invariant mass is the only evidence so far that this could be due to the same particle. An independent and thorough theoretical confirmation is needed with high-stake claims such as this one.

Questions and criticisms

In the years since the first Beryllium anomaly result, a few criticisms about the paper and about the experimental team’s history have been laid out. I want to mention some of those to point out that this is still a contentious result.

First, there is the group’s history of repeated claims of new particle discoveries every so often since the early 2000s. After experimental refutation of these claims by more precise measurements, there isn’t a proper and thorough discussion of why the original excesses were seen in the first place, and why they have subsequently disappeared. Especially for such groundbreaking claims, a consistent history of solid experimental attitude towards one’s own research is very valuable when making future claims.

Second, others have mentioned that some fit curves seem to pass very close to most data points (n.b. I can’t seem to find the blog post where I originally read this or remember its author – if you know where it is, please let me know so I can give proper credit!). Take a look at the plot below, which shows the observed Etot distribution. In experimental plots, there is usually a statistical fluctuation of data points around the “mean” behavior, which is natural and expected. Below, in contrast, the data points are remarkably close to the fit. This doesn’t in itself mean there is anything wrong here, but it does raise an interesting question of how the plot and the fit were produced. It could be that this is not a fit to some prior expected behavior, but just an “interpolation”. Still, if that’s the case, then it’s not clear (to me, at least) what role the interpolation curve plays.

Figure 6. Sum of electron and positron energies distribution produced in the decay of Helium nuclei to the ground state. Black dots are data and the red curve is a fit.

Third, there is also the background fit to data in Figure 4 (black asterisks and blue line). As Ethan Siegel has pointed out, you can see how well the background fit matches data, but only in the 40 to 90 degrees sub-range. In the 90 to 135 degrees sub-range, the background fit is actually quite poorer. In a less favorable interpretation of the results, this may indicate that whatever effect is causing the anomalous peak in the red asterisks is also causing the less-than-ideal fit in the black asterisks, where no signal due to a new boson is expected. If the excess is caused by some instrumental error instead, you’d expect to see effects in both curves. In any case, the background fit (blue curve) constructed from the black asterisks does not actually model the bump region very well, which weakens the argument for using it throughout all of the data. A more careful analysis of the background is warranted here.

Fourth, another criticism comes from the simplistic statistical treatment the authors employ on the data. They fit the red asterisks in Figure 4 with the “PDF”:

\textrm{PDF}(e^+ e^-) = N_{Bg} \times \textrm{PDF}(\textrm{data}) + N_{Sig} \times \textrm{PDF}(\textrm{sig})

where PDF stands for “Probability Density Function”, and in this case they are combining two PDFs: one derived from data, and one assumed from the signal hypothesis. The two PDFs are then “re-scaled” by the expected number of background events (N_{Bg}) and signal events (N_{Sig}), according to Monte Carlo simulations. However, as others have pointed out, when you multiply a PDF by a yield such as N_{Bg}, you no longer have a PDF! A variable that incorporates yields is no longer a probability. This may just sound like a semantics game, but it does actually point to the simplicity of the treatment, and makes one wonder if there could be additional (and perhaps more serious) statistical blunders made in the course of data analysis.
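For contrast, one standard way to keep the fitted object an actual probability density (a generic normalized mixture, sketched here for illustration rather than as a claim about what the authors should have done) is

\textrm{PDF}(e^+e^-) = \frac{N_{Bg}\,\textrm{PDF}(\textrm{data}) + N_{Sig}\,\textrm{PDF}(\textrm{sig})}{N_{Bg} + N_{Sig}}

with the yields themselves entering through a separate Poisson term in an extended likelihood. The distinction is mostly bookkeeping, but it is exactly the kind of bookkeeping that a careful statistical treatment makes explicit.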

Fifth, there is also of course the fact that no other experiments have seen this particle so far. This doesn’t mean that it’s not there, but particle physics is in general a field with very few “low-hanging fruits”. Most of the “easy” discoveries have already been made, and so every claim of a new particle must be compatible with dozens of previous experimental and theoretical constraints. It can be a tough business. Another example of this is the DAMA experiment, which has made claims of dark matter detection for almost 2 decades now, but no other experiments were able to provide independent verification (and in fact, several have provided independent refutations) of their claims.

I’d like to add my own thoughts to the previous list of questions and considerations.

The authors mention they correct the calibration of the detector efficiency with a small energy-dependent term based on a GEANT3 simulation. The updated version of the GEANT library, GEANT4, has been available for at least 20 years. I haven’t actually seen any results that use GEANT3 code since I’ve started in physics. Is it possible that the authors are missing a rather large effect in their physics expectations by using an older simulation library? I’m not sure, but just like the simplistic PDF treatment and the troubling background fit to the signal region, it doesn’t inspire as much confidence. It would be nice to at least have a more detailed and thorough explanation of what the simulation is actually doing (which maybe already exists but I haven’t been able to find?). This could also be due to a mismatch in the nuclear physics and high-energy physics communities that I’m not aware of, and perhaps nuclear physicists tend to use GEANT3 a lot more than high-energy physicists.

Also, it’s generally tricky to use Monte Carlo simulation to estimate efficiencies in data. One needs to make sure the experimental apparatus is well understood and be confident that the simulation reproduces all the expected features of the setup, which is often difficult to do in practice, as collider experimentalists know all too well. I’d really like to see a more in-depth discussion of this point.

Finally, a more technical issue: from the paper, it’s not clear to me how the best fit to the data (red asterisks) was actually constructed. The authors claim:

Using the composite PDF described in Equation 1 we first performed a list of fits by fixing the simulated particle mass in the signal PDF to a certain value, and letting RooFit estimate the best values for NSig and NBg. Letting the particle mass lose in the fit, the best fitted mass is calculated for the best fit […]

When they let the particle mass float in the fit, do they keep the “NSig” and “NBg” found with a fixed-mass hypothesis? If so, which fixed-mass NSig and which NBg do they use? And if not, what exactly was the purpose of performing the fixed-mass fits in the first place? I don’t think I fully got the point here.

Where to go from here

Despite the many questions surrounding the experimental approach, it’s still an interesting result that deserves further exploration. If it holds up with independent verification from other experiments, it would be an undeniable breakthrough, one that particle physicists have been craving for a long time now.

And independent verification is key here. Ideally, other experiments need to confirm that they also see this new boson before this result gains wider acceptance. Many upcoming experiments will be sensitive to a new X17 boson, as the original paper points out. In the next few years, we will actually have the possibility to probe this claim from multiple angles. Dedicated standalone experiments at the LHC such as FASER and CODEX-b will be able to probe long-lived particles produced at the proton-proton interaction point, and so should be sensitive to new states such as axion-like particles (ALPs).

Another experiment that could have sensitivity to X17, and that came online this year, is PADME (disclaimer: I am a collaborator on this experiment). PADME stands for Positron Annihilation into Dark Matter Experiment, and its main goal is to look for dark photons produced in the annihilation between positrons and electrons. You can find more information about PADME here, and I will write a more detailed post about the experiment in the future, but the gist is that PADME strikes a beam of positrons (beam energy: 550 MeV) against a fixed target made of diamond (carbon atoms). The annihilation between positrons in the beam and electrons in the carbon atoms could give rise to a photon plus a new dark photon via kinetic mixing. By measuring the incoming positron and outgoing photon momenta, we can infer the missing mass carried away by the (invisible) dark photon.
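
For concreteness, the missing mass is just the invariant mass of whatever recoils against the detected photon. Treating the target electron as approximately at rest, it reads

M_{miss}^2 = \left( P_{e^+} + P_{e^-} - P_{\gamma} \right)^2, \qquad P_{e^-} \approx (m_e, \vec{0})

so an excess of events clustering at a single value of M_{miss} would point to a new invisible particle of that mass.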

If the dark photon is the X17 particle (a big if), PADME might be able to see it as well. Our dark photon mass sensitivity is roughly between 1 and 22 MeV, so a 17 MeV boson would be within our reach. More interestingly, knowing where the hypothesized particle mass lies, we might actually be able to set our beam energy to produce the X17 resonantly (using a beam energy of roughly 282 MeV). Running at the resonant beam energy increases the number of X17s produced and could give us even higher sensitivity to investigate the claim.
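
The quoted beam energy follows from simple kinematics: requiring the positron-electron center-of-mass energy to sit right at the X17 mass, again with the atomic electron approximately at rest, gives

s = \left( P_{e^+} + P_{e^-} \right)^2 \approx 2 m_e E_{beam} + 2 m_e^2 = m_X^2 \quad \Rightarrow \quad E_{beam} \approx \frac{m_X^2}{2 m_e} \approx \frac{(17\ \textrm{MeV})^2}{2 \times 0.511\ \textrm{MeV}} \approx 283\ \textrm{MeV}

consistent with the roughly 282 MeV mentioned above for a mass just below 17 MeV.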

An important caveat is that PADME can provide independent confirmation of X17, but cannot refute it. If the coupling between the new particle and our ordinary particles is too feeble, PADME might not see evidence for it. That wouldn’t necessarily reject the Atomki claim; it would just mean that a more sensitive apparatus is needed to detect it. Such sensitivity might be achievable with the next generation of PADME, or with the new experiments mentioned above coming online in a few years.

Finally, in parallel with the experimental probes of the X17 hypothesis, it’s critical to continue gaining a better theoretical understanding of this anomaly. In particular, an important check is whether the proposed theoretical models that could explain the Beryllium excess also work for the new Helium excess. Furthermore, theorists have to work very hard to make these models compatible with all current experimental constraints, so they can look a bit contrived. Perhaps a thorough exploration of the theory landscape could lead to more models capable of explaining the observed anomalies as well as evading current constraints.

Conclusions

The recent results from the Atomki group raise the stakes in the search for Physics Beyond the Standard Model. The reported excesses in the angular correlation between electron-positron pairs in two different systems certainly seem intriguing. However, there are still a lot of questions surrounding the experimental methods, and given the nature of the claims made, a crystal-clear understanding of the results and the setup needs to be achieved. Experimental verification by at least one independent group is also required if the X17 hypothesis is to be confirmed. Finally, parallel theoretical investigations that can explain both excesses are highly desirable.

As Flip mentioned after the first excess was reported, even if this excess turns out to have an explanation other than a new particle, it’s a nice reminder that there could be interesting new physics in the light (e.g. MeV-scale) mass parameter space, and a new boson in this range could also account for the dark matter abundance we see left over from the early universe. But as Carl Sagan once said, extraordinary claims require extraordinary evidence.

In any case, this new excess gives us a chance to witness the scientific process in action in real time. The next few years should be very interesting, and hopefully will see the independent confirmation of the new X17 particle, or a refutation of the claim and an explanation of the anomalies seen by the Atomki group. So, stay tuned!

Further reading

CERN news

Ethan Siegel’s Forbes post

Flip Tanedo’s “Delirium over Beryllium” bite

Matt Strassler’s blog

Quanta magazine article on the original Beryllium anomaly

Protophobic force interpretation

Vector boson with axial couplings to quarks interpretation

Dark Photons in Light Places

Title: “Searching for dark photon dark matter in LIGO O1 data”

Authors: Huai-Ke Guo, Keith Riles, Feng-Wei Yang, & Yue Zhao

Reference: https://www.nature.com/articles/s42005-019-0255-0

There is very little we know about dark matter save for its existence. Its mass(es), its interactions, even the proposition that it consists of particles at all are mostly up to the creativity of the theorist. For those who don’t turn to modified theories of gravity to explain the gravitational effects on galaxy rotation and clustering that suggest a massive concentration of unseen matter in the universe (among other compelling evidence), there are a few more widely accepted explanations for what dark matter might be. These include weakly-interacting massive particles (WIMPs), primordial black holes, or new particles altogether, such as axions or dark photons.

In particle physics, this latter category is what’s known as the “hidden sector,” a hypothetical collection of quantum fields and their corresponding particles that are utilized in theorists’ toolboxes to help explain phenomena such as dark matter. In order to test the validity of the hidden sector, several experimental techniques have been concocted to narrow down the vast parameter space of possibilities, which generally consist of three strategies:

  1. Direct detection: Detector experiments look for low-energy recoils of dark matter particle collisions with nuclei, often involving emitted light or phonons. 
  2. Indirect detection: These searches focus on potential decay products of dark matter particles, which depend on the theory in question.
  3. Collider production: As the name implies, colliders seek to produce dark matter in order to study its properties. This is reliant on the other two methods for verification.

The first detection of gravitational waves from a black hole merger in 2015 ushered in a new era of physics, in which the cosmological range of theory-testing is no longer limited to the electromagnetic spectrum. Bringing LIGO (the Laser Interferometer Gravitational-Wave Observatory) to the table, proposals for the indirect detection of dark matter via gravitational waves began to spring up in the literature, with implications for primordial black hole detection or dark matter ensconced in neutron stars. Yet a new proposal, in a paper by Guo et al., suggests that direct dark matter detection with gravitational waves may be possible, specifically in the case of dark photons.

Dark photons are hidden sector particles in the ultralight regime of dark matter candidates. Theorized as the gauge boson of a new U(1) gauge group, meaning the particle is a force-carrier akin to the photon of quantum electrodynamics, dark photons either do not couple to Standard Model particles at all or couple only very weakly, depending on the formulation. Unlike a regular photon, dark photons can acquire a mass via the Higgs mechanism. Since dark photons need to be non-relativistic in order to meet cosmological dark matter constraints, we can model them as a coherently oscillating background field: a plane wave with amplitude determined by the dark matter energy density and oscillation frequency determined by the mass. If dark photons interact weakly with ordinary matter, this oscillating field imparts an oscillating force. This sets LIGO up as a means of direct detection, via the mirror displacements dark photons could induce in the LIGO detectors.
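
Schematically, and ignoring polarization and factors of order one, the local dark photon background looks like a classical field whose amplitude is set by the local dark matter density and whose oscillation frequency is set by the dark photon mass (the amplitude relation is in natural units):

A'(t) \approx A'_0 \cos(m_{A'} t), \qquad \tfrac{1}{2} m_{A'}^2 (A'_0)^2 \approx \rho_{DM}, \qquad f = \frac{m_{A'} c^2}{2 \pi \hbar}

so a signal in LIGO’s band around 100 Hz corresponds to a dark photon mass of order 10^{-13} eV.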

Figure 1: The experimental setup of the Advanced LIGO interferometer. We can see that light leaves the laser and is reflected between a few power recycling mirrors (PR), split by a beam splitter (BS), and bounced between input and end test masses (ITM and ETM). The entire system is mounted on seismically-isolated platforms to reduce noise as much as possible. Source: https://arxiv.org/pdf/1411.4547.pdf

LIGO consists of a Michelson interferometer, in which a laser shines upon a beam splitter which in turn creates two perpendicular beams. The light from each beam then hits a mirror, is reflected back, and the two beams combine, producing an interference pattern. In the actual LIGO detectors, the beams are reflected back some 280 times (down a 4 km arm length) and are split to be initially out of phase so that the photodiode detector should not detect any light in the absence of a gravitational wave. A key feature of gravitational waves is their polarization, which stretches spacetime in one direction and compresses it in the perpendicular direction in an alternating fashion. This means that when a gravitational wave passes through the detector, the effective length of one of the interferometer arms is reduced while the other is increased, and the photodiode will detect an interference pattern as a result. 
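
To see why this produces a signal, note that for a gravitational wave of strain amplitude h whose plus polarization is aligned with the arms, the two arm lengths change in opposite directions,

\delta L_x \approx + \tfrac{1}{2} h L, \qquad \delta L_y \approx - \tfrac{1}{2} h L

and it is this differential change that shifts the relative phase of the recombined beams and lets light reach the photodiode.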

LIGO has been able to reach an incredible sensitivity of one part in 10^{23} in its detectors over a 100 Hz bandwidth, meaning that its instruments can detect mirror displacements down to about 1/10,000th the size of a proton. Taking advantage of this, Guo et al. demonstrated that the differential strain (the ratio of the relative displacement of the mirrors to the interferometer’s arm length, or h = \Delta L/L) is also sensitive to ultralight dark matter via the modeling process described above. The acceleration induced by dark photon dark matter on the LIGO mirrors is ultimately proportional to the dark electric field and to the charge-to-mass ratio of the mirrors themselves.
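
Written out schematically (a sketch of the scaling rather than the full treatment in the paper), a mirror of mass M carrying total dark charge q_D in an oscillating dark electric field of amplitude E_D, with coupling strength \epsilon, feels

a(t) \approx \epsilon e \, \frac{q_D}{M} \, E_D \cos(m_{A'} t), \qquad h(t) = \frac{\Delta L(t)}{L}

where, for a dark photon coupled to baryon number, q_D is essentially the number of nucleons in the mirror.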

Once this signal is approximated, next comes the task of estimating the background. Since the coherence length is of order 10^9 m for a dark photon field oscillating at order 100 Hz, a distance much larger than the separation between the LIGO detectors at Hanford and Livingston (in Washington and Louisiana, respectively), the signals from dark photons at both detectors should be highly correlated. This has the effect of reducing the noise in the overall signal, since the noise in each of the detectors should be statistically independent. The signal-to-noise ratio can then be computed directly using discrete Fourier transforms from segments of data along the total observation time. However, this process of breaking up the data, known as “binning,” means that some signal power is lost and must be corrected for.
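
The quoted coherence length can be checked at the order-of-magnitude level: it is essentially the dark matter de Broglie wavelength, which for a field oscillating at frequency f and a typical halo velocity v \sim 10^{-3} c is

\lambda_{coh} \sim \frac{2\pi\hbar}{m_{A'} v} = \frac{c}{f} \cdot \frac{c}{v} \approx \frac{3 \times 10^{8}\ \textrm{m/s}}{100\ \textrm{Hz}} \times 10^{3} \approx 3 \times 10^{9}\ \textrm{m}

comfortably larger than the few-thousand-kilometer separation between the two detectors.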

Figure 2: The end result of the Guo et al. analysis of dark photon-induced mirror displacement in LIGO. Above we can see a plot of the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. We can see that over further Advanced LIGO runs, up to O4-O5, these limits are expected to improve by several orders of magnitude. Source: https://www.nature.com/articles/s42005-019-0255-0

In applying this analysis to the strain data from the first run of Advanced LIGO, Guo et al. generated a plot which sets new limits on the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. There are a few key subtleties in this analysis, primarily that there are many potential dark photon models which rely on different gauge groups, but the framework allows a similar analysis to be carried out for other dark photon models. With plans for future iterations of gravitational wave detectors, further improved sensitivities, and many more data runs, there seems to be great potential to apply LIGO to direct dark matter detection. It’s exciting to see these instruments in action for discoveries that were not in mind when LIGO was first designed, and I’m looking forward to seeing what we can come up with next!

Learn More:

  1. An overview of gravitational waves and dark matter: https://www.symmetrymagazine.org/article/what-gravitational-waves-can-say-about-dark-matter
  2. A summary of dark photon experiments and results: https://physics.aps.org/articles/v7/115 
  3. Details on the hardware of Advanced LIGO: https://arxiv.org/pdf/1411.4547.pdf
  4. A similar analysis done by Pierce et al.: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.121.061102

Letting the Machines Search for New Physics

Article: “Anomaly Detection for Resonant New Physics with Machine Learning”

Authors: Jack H. Collins, Kiel Howe, Benjamin Nachman

Reference: https://arxiv.org/abs/1805.02664

One of the main goals of LHC experiments is to look for signals of physics beyond the Standard Model: new particles that may explain some of the mysteries the Standard Model doesn’t answer. The typical way this works is that theorists come up with a new particle that would solve some mystery and spell out how it interacts with the particles we already know about. Then experimentalists design a strategy for how to search for evidence of that particle in the mountains of data that the LHC produces. So far none of the searches performed in this way have seen any definitive evidence of new particles, leading experimentalists to rule out a lot of the parameter space of theorists’ favorite models.

A summary of searches the ATLAS collaboration has performed. The left columns show the model being searched for, the experimental signature looked at, and how much data has been analyzed so far. The colored bars show the regions that have been ruled out based on the null result of each search. As you can see, we have already covered a lot of territory.

Despite this extensive program of searches, one might wonder if we are still missing something. What if there is a new particle in the data, waiting to be discovered, but theorists haven’t thought of it yet so it hasn’t been looked for? This gives experimentalists a very interesting challenge: how do you look for something new when you don’t know what you are looking for? One approach, which Particle Bites has talked about before, is to look at as many final states as possible, compare what you see in data to simulation, and look for any large deviations. This is a good approach, but may be limited in its sensitivity to small signals. When a normal search for a specific model is performed, one usually makes a series of selection requirements on the data, chosen to remove background events and keep signal events. Nowadays these selection requirements are getting more complex, often using neural networks, a common type of machine learning model, trained to discriminate signal from background. Without some sort of selection like this you may miss a small signal buried under the large number of background events.

This new approach lets the neural network itself decide what signal to look for. It uses part of the data itself to train a neural network to find a signal, and then uses the rest of the data to actually look for that signal. This lets you search for many different kinds of models at the same time!

If that sounds like magic, let’s try to break it down. You have to assume something about the new particle you are looking for, and the technique here assumes it forms a resonant peak. This is a common assumption in searches: if a new particle were being produced in LHC collisions and then decaying, you would get an excess of events where the invariant mass of its decay products has a particular value. So if you plotted the number of events in bins of invariant mass, you would expect a new particle to show up as a nice peak on top of a relatively smooth background distribution. This is a very common search strategy, often colloquially referred to as a ‘bump hunt’. This strategy is how the Higgs boson was discovered in 2012.
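
Concretely, for a set of decay products with energies E_i and momenta \vec{p}_i, the invariant mass is

m_{inv}^2 = \left( \sum_i E_i \right)^2 - \left| \sum_i \vec{p}_i \right|^2

and if those products all came from the decay of a single new particle of mass m_X, the events pile up near m_{inv} \approx m_X regardless of how fast the particle was moving when it decayed.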

A histogram showing the invariant mass of photon pairs. The Higgs boson shows up as a bump at 125 GeV. Plot from here

The other secret ingredient we need is the idea of Classification Without Labels (abbreviated CWoLa, pronounced like koala). The way neural networks are usually trained in high energy physics is with fully labeled simulated examples. The network is shown a set of examples and guesses which are signal and which are background. Using the true label of each event, the network is told which of the examples it got wrong, its parameters are updated accordingly, and it slowly improves. The crucial challenge when trying to train on real data is that we don’t know the true label of any of the data, so it’s hard to tell the network how to improve. Rather than trying to use the true labels of any of the events, the CWoLa technique uses mixtures of events. Let’s say you have two mixed samples of events, sample A and sample B, but you know that sample A has more signal events in it than sample B. Then, instead of trying to classify signal versus background directly, you can train a classifier to distinguish between events from sample A and events from sample B, and what that network will learn to do is distinguish between signal and background. You can actually show that the optimal classifier for distinguishing the two mixed samples is the same as the optimal classifier of signal versus background. Even more amazingly, this technique works quite well in practice, achieving good results even when there are only a few percent of signal events in one of the samples.
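
The key observation can be written in one line. If sample A contains a signal fraction f_A and sample B a smaller signal fraction f_B, the likelihood ratio that an optimal A-versus-B classifier learns is

\frac{p_A(x)}{p_B(x)} = \frac{f_A \, p_{sig}(x) + (1 - f_A) \, p_{bg}(x)}{f_B \, p_{sig}(x) + (1 - f_B) \, p_{bg}(x)}

which is a monotonically increasing function of the signal-to-background likelihood ratio p_{sig}(x)/p_{bg}(x) whenever f_A > f_B. Since any monotonic function of the optimal classifier is itself optimal, the A-versus-B classifier doubles as a signal-versus-background classifier, without ever needing true labels.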

An illustration of the CWoLa method. A classifier trained to distinguish between two mixed samples of signal and background events can learn to classify signal versus background. Taken from here

The technique described in the paper combines these two ideas in a clever way. Because we expect the new particle to show up in a narrow region of invariant mass, you can use some of your data to train a classifier to distinguish events in a given slice of invariant mass from other events. If there is no signal with a mass in that region, the classifier should essentially learn nothing, but if there is a signal in that region, the classifier should learn to separate signal and background. Then one can apply that classifier to select events in the rest of the data (which hasn’t been used in the training) and look for a peak that would indicate a new particle. Because you don’t know ahead of time what mass any new particle should have, you scan over the whole range you have sufficient data for, looking for a new particle in each slice.
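
To make the workflow concrete, here is a minimal toy sketch in Python with scikit-learn. Everything in it, the feature distributions, the mass window, the selection threshold, is made up purely for illustration; the actual analysis uses jet substructure observables, a sliding mass window, and a more careful cross-validated training scheme.

```python
# Toy sketch of a CWoLa-style bump hunt (illustrative numbers only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Toy events: an invariant mass plus two "substructure" features per event.
n_bkg, n_sig = 200_000, 1_000
mass = np.concatenate([
    1000.0 + rng.exponential(scale=600.0, size=n_bkg),  # smooth falling background
    rng.normal(loc=3000.0, scale=50.0, size=n_sig),      # narrow resonance (arbitrary units)
])
feats = np.vstack([
    rng.normal(0.0, 1.0, size=(n_bkg, 2)),               # featureless background substructure
    rng.normal(1.5, 1.0, size=(n_sig, 2)),               # signal-like substructure
])

# Use one half of the data to train and keep the other half for the search.
idx = rng.permutation(len(mass))
train_idx, search_idx = idx[: len(idx) // 2], idx[len(idx) // 2:]

# CWoLa labels come from *where* an event lands in mass, not from truth labels:
# a "signal region" slice versus its neighbouring sidebands (one step of the scan).
in_sr = (mass > 2900) & (mass < 3100)
in_sb = ((mass > 2600) & (mass < 2900)) | ((mass > 3100) & (mass < 3400))
use = train_idx[in_sr[train_idx] | in_sb[train_idx]]
clf = GradientBoostingClassifier().fit(feats[use], in_sr[use].astype(int))

# Apply a tight cut on the classifier score in the held-out half and histogram
# the surviving masses: a real signal shows up as a bump over a smooth background.
scores = clf.predict_proba(feats[search_idx])[:, 1]
keep = scores > np.quantile(scores, 0.99)
counts, edges = np.histogram(mass[search_idx][keep], bins=40, range=(2000.0, 4000.0))
print(counts)
```

In the real analysis the mass window is slid across the whole spectrum, and the background under each window is estimated by fitting the sidebands of the selected events, as in the figure below.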

The specific case they use to demonstrate the power of this technique is new particles decaying to pairs of jets. On the surface, jets, the large sprays of particles produced when a quark or gluon is made in an LHC collision, all look the same. But the insides of jets, their sub-structure, can contain very useful information about what kind of particle produced them. If a newly produced particle decays into other particles, like top quarks, W bosons or some new BSM particle, before those decay into quarks, there will be a lot of interesting sub-structure in the resulting jets, which can be used to distinguish them from regular jets. In this paper the neural network uses information about the sub-structure of both jets in the event to determine if the event is signal-like or background-like.

The authors test out their new technique on a simulated dataset, containing some events where a new particle is produced and a large number of QCD background events. They train a neural network to distinguish events in a window of invariant mass of the jet pair from other events. With no selection applied there is no visible bump in the dijet invariant mass spectrum. With their technique they are able to train a classifier that can reject enough background such that a clear mass peak of the new particle shows up. This shows that you can find a new particle without relying on searching for a particular model, allowing you to be sensitive to particles overlooked by existing searches.

Demonstration of the bump hunt search. The shaded histogram is the amount of signal in the dataset. The different levels of blue points show the data remaining after applying tighter and tighter selections based on the neural network classifier score. The red line is the predicted amount of background events based on fitting the sideband regions. One can see that for the tightest selection (bottom set of points), the data forms a clear bump over the background estimate, indicating the presence of a new particle.

This paper was one of the first to really demonstrate the power of machine-learning-based searches. There is actually a competition being held to inspire researchers to try out other techniques on a mock dataset, so expect to see more new search strategies utilizing machine learning released soon. Of course, the real excitement will come when a search like this is applied to real data and we can see whether machines can find new physics that we humans have overlooked!

Read More:

  1. Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”
  2. Blog Post on the CWoLa Method “Training Collider Classifiers on Real Data”
  3. Particle Bites Post “Going Rogue: The Search for Anything (and Everything) with ATLAS”
  4. Blog Post on applying ML to top quark decays “What does Bidirectional LSTM Neural Networks has to do with Top Quarks?”
  5. Extended Version of Original Paper “Extending the Bump Hunt with Machine Learning”

Discovering the Top Quark

This post is about the discovery of the most massive quark in the Standard Model, the Top quark. Below is a “discovery plot” [1] from the Collider Detector at Fermilab collaboration (CDF). Here is the original paper.

This plot confirms the existence of the Top quark. Let’s understand how.

For each proton collision event that passes certain selection conditions, the horizontal axis shows the best estimate of the Top quark mass. These selection conditions encode the particle “fingerprint” of the Top quark. Out of all possible proton collision events, we only want to look at ones that could have come from Top quark decays. Each event in this subgroup gives us a best guess at the mass of the Top quark, and this is what is plotted on the horizontal axis.

On the vertical axis is the number of these events.

The dashed distribution is the expected number of these events originating from the Top quark, assuming the Top quark exists and decays this way, which could very well not be the case.

The dotted distribution is the background for these events: events that did not come from Top quark decays.

The solid distribution is the measured data.

To claim a discovery, the background (dotted) plus the signal (dashed) should match the measured data (solid). We can run simulations for different Top quark masses to generate signal distributions until we find one that matches the data. The inset at the top right shows that a Top quark mass of 175 GeV best reproduces the measured data.

Taking a step back from the technicalities, the Top quark is special because it is the heaviest of all the fundamental particles. In the Standard Model, particles acquire their mass by interacting with the Higgs field, and the more strongly a particle interacts with the Higgs, the heavier it is. The fact that the Top is so heavy is an indicator that any new physics involving the Higgs may be closely linked to the Top quark.
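
To put a number on that statement: in the Standard Model a fermion gets its mass from its Yukawa coupling to the Higgs field, m_f = y_f v / \sqrt{2}, where v \approx 246 GeV is the Higgs vacuum expectation value. For the Top quark mass of about 175 GeV quoted above, this gives

y_t = \frac{\sqrt{2}\, m_t}{v} \approx \frac{\sqrt{2} \times 175\ \textrm{GeV}}{246\ \textrm{GeV}} \approx 1

a coupling remarkably close to one, while every other fermion’s coupling is far smaller. This is one reason the Top quark is singled out in searches for new physics connected to the Higgs.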


References / Further Reading

[1] – Observation of Top Quark Production in pp Collisions with the Collider Detector at Fermilab – This is the “discovery paper” announcing experimental evidence of the Top.

[2] – Observation of tt(bar)H Production – Who is to say that the Top and the Higgs even have significant interactions to lowest order? The CMS collaboration finds evidence that they do in fact interact at “tree-level.”

[3] – The Perfect Couple: Higgs and top quark spotted together – This article further describes the interconnection between the Higgs and the Top.