Dark matter from down under

It isn't often I get to plug an important experiment in high-energy physics located within my own vast country, so I thought I would take this opportunity to do just that. That's right – the land of the kangaroo, the meat pie and infamously slow internet (68th in the world, if I recall) has joined the hunt for the ever-elusive dark matter particle.

By now you have probably heard that about 85% of the matter in the universe is made of stuff that is dark. Searching for this invisible matter has not been an easy task; the main strategy involves looking for the faint signals produced when dark matter particles, which constantly stream through the Earth essentially unimpeded, scatter off nuclei in a detector. Up until now, the main contenders in dark matter direct detection have all operated north of the equator.

The SABRE (Sodium-iodide with Active Background REjection) collaboration plans to operate two detectors – one in my home state of Victoria, Australia at SUPL (Stawell Underground Physics Laboratory) and another in the northern hemisphere at LNGS, Italy. The choice to run two experiments in separate hemispheres is aimed at removing systematic effects tied to the Earth's seasons: any seasonal effect should be opposite in phase between the two sites, whilst the dark matter signal should remain the same. This brings us to a novel dark matter direct detection search method known as annual modulation, brought into the spotlight by the DAMA/LIBRA scintillation detector underground at the Laboratori Nazionali del Gran Sasso in Italy.

Around the world, around the world

Figure 1: As the Earth orbits the sun, it experiences a larger interaction rate with the Milky Way's DM halo when it moves "head-on" into the dark matter wind. Taken from arXiv:1209.3339.

The DAMA/LIBRA experiment superseded the DAMA/NaI experiment, which observed the dark matter halo over a period of 7 annual cycles from 1995 to 2002. The idea is quite simple really. Current theory suggests that the Milky Way galaxy is surrounded by a halo of dark matter, with our solar system casually floating by and experiencing some "flux" of particles that pass through us all year round. However, current and past theory (up to a point) also suggests the Earth does a full revolution around the sun in a year's time. In fact, with respect to this dark matter "wind", the Earth's orbital velocity adds to our motion through the halo when it moves head-on into the wind, around the start of June, and subtracts from it half a year later, in December. When studying detector interactions with the DM particles, one would then expect the rates to be higher in the middle of the year and lower at the end – hence a modulation, annually. Up to here, the annual modulation signature is largely model-independent: it doesn't rely on your particular choice of DM particle, so long as it has some interaction with the detector.
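
To make this expected signature concrete, here is a minimal sketch of an annually modulating event rate of the form R(t) = R_0 + R_m \cos(2\pi (t - t_0)/T). The mean rate and modulation amplitude below are illustrative placeholders rather than DAMA values, with the phase t_0 set near the start of June:

```python
import numpy as np

def annual_modulation_rate(t_days, R0=1.0, Rm=0.02, t0=152.5, period=365.25):
    """Toy event rate R(t) = R0 + Rm * cos(2*pi*(t - t0)/T).

    R0 : time-averaged rate (arbitrary units)
    Rm : modulation amplitude (illustrative placeholder, not a measured value)
    t0 : phase in days -- roughly June 2nd, when the Earth moves "head-on"
         into the dark matter wind
    """
    return R0 + Rm * np.cos(2 * np.pi * (t_days - t0) / period)

days = np.arange(365)
rates = annual_modulation_rate(days)
print(f"rate peaks near day {days[np.argmax(rates)]} (early June) "
      f"and dips near day {days[np.argmin(rates)]} (early December)")
```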

The DAMA collaboration, having reported almost 14 years of annual modulation results in total, claims evidence for a picture quite consistent with what would be expected for a range of dark matter scenarios in the energy range of 2-6 keV. This, however, has long been in tension with the wider WIMP dark matter detection community. Experiments such as XENON (which incidentally is also located in the Gran Sasso mountains) and CDMS have reported no sign of dark matter in the same ranges where the DAMA collaboration claims to have seen it, although they employ quite different detector materials: (you guessed it) liquid xenon in the case of XENON and cryogenically cooled semiconductors at CDMS.

Figure 2: Annual modulation results from DAMA. Could this be the presence of WIMP dark matter or some other seasonal effect? From the DAMA Collaboration.

Yes, there is also the COSINE-100 experiment, based in South Korea and using the same target material as DAMA (that is, sodium iodide). And yes, they also published a letter to Nature claiming their results to be in "severe tension" with the DAMA annual modulation signal – under the assumption of spin-independent WIMP interactions with the detector material. However, this does not totally rule out the observation of dark matter by DAMA – just that it is very unlikely to correspond to the gold-standard WIMP in a standard halo scenario. According to the collaboration, it will take several more years of data collection to know for sure. But that's where SABRE comes in!

As above, so below

Before the arrival of SABRE's twin detectors in the northern and southern hemispheres, a first phase known as the PoP (Proof of Principle) must be performed to validate the entire search strategy and evaluate the backgrounds present in the crystals. A key feature of SABRE is a crystal background rate well below that of DAMA/LIBRA, achieved with ultra-radiopure sodium iodide crystals. With the estimated current background and 50 kg of detector material, the DAMA/LIBRA signal should be independently verified (or refuted) within about 3 years.

If you ask me, there is something a little special about an experiment operating on the frontier of fundamental physics in a small regional Victorian town with a population just over 6000, known for an active gold mining community and the oldest running foot race in Australia. Of course, Stawell provides just the right environment to shield the detector from the relentless bombardment of cosmic rays on the Earth's surface – and that's why it is located 1 km underground. In fact, radiation contamination is such a prevalent issue for these sensitive detectors that everything from the concrete to the metal bolts that go into them must first be tested – all while the mine is still being operated.

Now, not only will SABRE run detectors in both Australia and Italy, but the collaboration also comprises physicists from the UK and the USA. Most importantly (for me, anyway), this will be the southern hemisphere's very first dark matter detector – a great milestone and a fantastic opportunity to put Aussies in the pilot's seat to uncover one of nature's biggest mysteries. But for now, crack open a cold one – footy's almost on!

Figure 3: The SABRE collaboration operates internationally with detectors in the northern and southern hemispheres. Taken from GSSI.

References and Further Reading

  1. The SABRE dark matter experiment: https://sabre.lngs.infn.it/.
  2. The COSINE-100 experiment summarizing the annual modulation technique: https://cosine.yale.edu/about-us/annual-modulation-dark-matter.
  3. The COSINE-100 Experiment search for dark matter in tension with that of the DAMA signal: arXiv:1906.01791.
  4. An overview of the SABRE experiment and its Proof of Principle (PoP) deployment: arXiv:1807.08073.

Dark Matter Cookbook: Freeze-In

In my previous post, we discussed the features of dark matter freeze-out. The freeze-out scenario is the standard production mechanism for dark matter. There is another closely related mechanism though, the freeze-in scenario. This mechanism achieves the same results as freeze-out, but in a different way. Here are the ingredients we need, and the steps to make dark matter according to the freeze-in recipe [1].

Ingredients

  • Standard Model particles that will serve as a thermal bath; we will call these "bath particles."
  • Dark matter (DM).
  • A bath-DM coupling term in your Lagrangian.

Steps

  1. Pre-heat your early universe to temperature T. This temperature should be much greater than the dark matter mass.
  2. Add in your bath particles and allow them to reach thermal equilibrium. This will ensure that the bath has enough energy to produce DM once we begin the next step.
  3. Starting at zero, increase the bath-DM coupling such that DM production is very slow. The goal is to produce the correct amount of dark matter after letting the universe cool. If the coupling is too small, we won’t produce enough. If the coupling is too high, we will end up making too much dark matter. We want to make just enough to match the observed amount today.
  4. Slowly decrease the temperature of the universe while monitoring the DM production rate. This step is analogous to allowing the universe to expand. At temperatures lower than the dark matter mass, the bath no longer has enough energy to produce dark matter. At this point, the amount of dark matter has “frozen-in,” there are no other ways to produce more dark matter.
  5. Allow your universe to cool to 3 Kelvin and enjoy. If all went well, we should have a universe at the present-day temperature, 3 Kelvin, with the correct density of dark matter, (0.2-0.6) GeV/cm^3 [2].

This process is schematically outlined in the figure below, adapted from [1].

Schematic comparison of the freeze-in (dashed) and freeze-out (solid) scenarios.

On the horizontal axis we have the ratio of dark matter mass to temperature. Earlier times are to the left and later times are to the right. On the vertical axis is the dark matter number density per entropy density. This quantity automatically scales the number density to account for the dilution that comes with the expansion of the universe. The solid black line is the amount of dark matter that remains in thermal equilibrium with the bath. In the freeze-out recipe, the universe starts with a large population of dark matter in thermal equilibrium with the bath. In the freeze-in recipe, the universe starts with little to no dark matter, and it never reaches thermal equilibrium with the bath. The dashed (solid) colored lines are the dark matter abundances in the freeze-in (freeze-out) scenarios. Observe that in the freeze-in scenario, the amount of dark matter increases as the temperature decreases, while in the freeze-out scenario it decreases as the temperature decreases. Finally, the arrows indicate the effect of increasing the DM-bath coupling: for freeze-in, increasing this coupling leads to more dark matter, but in freeze-out, increasing it leads to less dark matter.
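
For the quantitatively inclined, the behaviour in the figure can be reproduced qualitatively by integrating a toy (dimensionless) Boltzmann equation for the yield Y = n/s. The sketch below is illustrative only: the equilibrium yield, the coupling values, and the starting point x = 1 are arbitrary choices, not numbers taken from [1].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy Boltzmann equation for the comoving yield Y = n/s as a function of
# x = m/T:  dY/dx = (lam / x^2) * (Yeq(x)^2 - Y^2).
# The same equation covers both recipes; only the coupling strength lam
# and the initial condition differ.
def Yeq(x):
    # crude equilibrium yield: roughly constant while relativistic,
    # Boltzmann suppressed (~ x^(3/2) e^(-x)) once x > 1
    return 0.1 * (1.0 + x) ** 1.5 * np.exp(-x)

def rhs(x, Y, lam):
    return (lam / x**2) * (Yeq(x) ** 2 - Y[0] ** 2)

xs = np.geomspace(1.0, 100.0, 300)

# Freeze-out: starts in equilibrium with a large coupling.
fo = solve_ivp(rhs, (xs[0], xs[-1]), [Yeq(xs[0])], args=(1e5,),
               t_eval=xs, method="Radau", rtol=1e-6, atol=1e-12)
# Freeze-in: starts with (essentially) no dark matter and a feeble coupling,
# so the yield grows until the bath can no longer produce it.
fi = solve_ivp(rhs, (xs[0], xs[-1]), [1e-12], args=(1e-3,),
               t_eval=xs, method="Radau", rtol=1e-6, atol=1e-12)

print("freeze-out relic yield:", fo.y[0, -1])
print("freeze-in  relic yield:", fi.y[0, -1])
```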

References

[1] – Freeze-In Production of FIMP Dark Matter. This is the paper outlining the freeze-in mechanism.

[2] – Using Gaia DR2 to Constrain Local Dark Matter Density and Thin Dark Disk. This is the most recent measurement of the local dark matter density according to the Particle Data Group.

[3] – Dark Matter. This is the Particle Data Group review of dark matter.

[A] – Cake Recipe in strange science units. This SixtySymbols video provided the inspiration for the format of this post.

Does antihydrogen really matter?

Article title: Investigation of the fine structure of antihydrogen

Authors: The ALPHA Collaboration

Reference: https://doi.org/10.1038/s41586-020-2006-5 (Open Access)

Physics rarely delays our introduction to one of the most important concepts in its history – symmetries (as I am sure many fellow physicists will agree). From the idea that "for every action there is an equal and opposite reaction" to the vacuum solutions for the electric and magnetic fields from Maxwell's equations, we often take such astounding universal principles for granted. For example, how many years after you first calculated the speed of a billiard ball using conservation of momentum did you realise that what you were doing was only valid because of the fundamental symmetric structure of the laws of nature? And so goes our life through physics education – we first begin from what we 'see' before understanding the real mechanisms that operate under the hood.

These days, our understanding of symmetries and how they relate to the phenomena we observe has developed so comprehensively throughout the 20th century that physicists are now often concerned with the opposite approach – applying the fundamental mechanisms to determine where the gaps lie between what they predict and what we observe.

So far, one of these important symmetries has stood up to the test of time, with no observable violation reported to date. This is the simultaneous transformation of charge conjugation (C), parity (P) and time reversal (T), or CPT for short. A 'CPT-transformed' universe would be like a mirror image of our own, with all matter replaced by antimatter and all momenta reversed. The amazing thing is that under all these transformations, the laws of physics behave the exact same way. With such an exceptional result, we would want to be absolutely sure that all our experiments say the same thing, and that brings us to our current topic of discussion – antihydrogen.

Matter, but anti.

Figure 1: The Hydrogen atom and its nemesis – antihydrogen. Together they are: Light. Source: Berkeley Science Review

The trick with antimatter is to keep it as far away from normal matter as possible. Antimatter-matter pairs readily annihilate, releasing vast amounts of energy proportional to the mass of the particles involved. Hence it goes without saying that we can't just keep it sealed up in Tupperware containers and store it next to aunty's lasagne. But what if we start simple – bring together an antiproton and a single positron and voila, we have antihydrogen, the antimatter sibling of the most abundant element in nature. This is precisely what the international ALPHA collaboration at CERN has been concerned with, combining "slowed-down" antiprotons with positrons in a device known as a Penning trap. Just like in hydrogen, the orbit of a positron around an antiproton behaves like a tiny magnet, a property known as the magnetic moment. The difficulty, however, lies in the complexity of the external magnetic field required to 'trap' the neutral antihydrogen in space. Not surprisingly, then, only atoms of very low kinetic energy (i.e. cold ones), which cannot overcome the weak pull of the external magnetic field, can be held in place.

There are plenty more details of how the ALPHA collaboration acquires antihydrogen for study; I'll leave those to a reference at the end. What I'll focus on is what we can do with it and what it means for fundamental physics. In particular, one of the most intriguing predictions of the invariance of the laws of physics under charge, parity and time transformations is that antihydrogen should share many of the same properties as hydrogen. And not just the mass and magnetic moment, but also the fine structure (the atomic transition frequencies). In fact, the most successful theory of the 20th century, quantum electrodynamics (QED), properly accommodating anti-electronic interactions, also predicts a foundational test for both matter and antimatter hydrogen: the splitting of the 2S_{1/2} and 2P_{1/2} energy levels (I'll leave a reference to a refresher on this notation). This is of course the Nobel-Prize-winning Lamb shift in hydrogen, a consequence of the interaction between quantum fluctuations of the electromagnetic field and the orbiting electron.

I’m feelin’ hyperfine

Of course, it is only very recently that atomic antimatter has been able to be created and trapped, allowing researchers to uniquely study the foundations of QED (and hence modern physics itself) from the perspective of this mirror-reflected anti-world. The ALPHA collaboration has now been able to report the fine structure of antihydrogen up to the n=2 state, using laser-induced optical excitations from the ground state in a strong external magnetic field. Undergraduates by now will have seen, at least qualitatively, that increasing the strength of an external magnetic field applied to an atom also increases the gaps between its energy levels, and hence the frequencies of the corresponding transitions. Maybe a little less well known is the splitting due to the interaction between the spin angular momentum of the electron and that of the nucleus. This additional structure is known as the hyperfine structure, and it is readily calculable in hydrogen using the spin-1/2 electron and proton.
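
As a rough illustration of how the hyperfine scale in hydrogen is set by known constants (and, if CPT holds, identically in antihydrogen), here is a back-of-the-envelope sketch using the leading-order Fermi contact formula; it neglects reduced-mass and higher-order QED corrections:

```python
# Leading-order (Fermi contact) estimate of the hydrogen ground-state
# hyperfine splitting: Delta E = (4/3) g_p (m_e/m_p) alpha^4 m_e c^2.
alpha      = 1 / 137.035999     # fine-structure constant
me_c2      = 0.51099895e6       # electron rest energy [eV]
me_over_mp = 1 / 1836.15267     # electron-to-proton mass ratio
g_p        = 5.5856946          # proton g-factor
h          = 4.135667696e-15    # Planck constant [eV s]

dE = (4 / 3) * g_p * me_over_mp * alpha**4 * me_c2   # splitting in eV
print(f"hyperfine splitting ~ {dE * 1e6:.2f} micro-eV "
      f"~ {dE / h / 1e6:.0f} MHz (measured: 1420.4 MHz, the 21 cm line)")
```

This simple estimate lands within about 0.1% of the measured 21 cm line, and CPT invariance demands that antihydrogen show exactly the same splitting.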

Figure 2: The expected energy levels in the antimatter version of hydrogen, an antiproton with an orbiting positron. The increasing splitting is shown as a function of the external magnetic field strength (x-axis), a phenomenon well known in hydrogen (and thus predicted in antihydrogen) as the Zeeman effect. The hyperfine splitting, due to the relative alignment of the positron and antiproton spins, is indicated by the arrows in the kets.

From the predictions of QED, one would expect antihydrogen to show precisely this same structure. Amazingly (or perhaps exactly as one would expect?), the average measured antihydrogen transition frequencies agree with those in hydrogen to 16 ppb (parts per billion) – an observation that keeps CPT invariance solidly in place and also opens up a new world of precision measurement in foundational physics. Similarly, taking the Zeeman and hyperfine interactions into account, the 2P_{1/2} - 2P_{3/2} splitting is found to be consistent with the CPT invariance of QED at the level of 2 percent, and the identity of the Lamb shift (2S_{1/2} - 2P_{1/2}) at the level of 11 percent. With advancements in antiproton production and laser-driven transitions, such tests provide unprecedented insight into the structure of antihydrogen. The presence of an antiproton and more accurate spectroscopy may even help answer an unsolved question in physics: the size of the proton!

Figure 3: Transition frequencies observed in antihydrogen for the 1S-2P states (with various spin polarizations) compared with the theoretical expectation in hydrogen. The error bars are shown to 1 standard deviation.

References

  1. A Youtube link to how the ALPHA experiment acquires antihydrogen and measures excitations of anti-atoms: http://alpha.web.cern.ch/howalphaworks
  2. A picture of my aunty’s lasagne: https://imgur.com/a/2ffR4C3
  3. A reminder of what that fancy notation for labeling spin states means: https://quantummechanics.ucsd.edu/ph130a/130_notes/node315.html
  4. Details of the 1) Zeeman effect in atomic structure and 2) Lamb shift, discovery and calculation: 1) https://en.wikipedia.org/wiki/Zeeman_effect 2) https://en.wikipedia.org/wiki/Lamb_shift
  5. Hyperfine structure (great to be familiar with, and even more interesting to calculate in senior physics years): https://en.wikipedia.org/wiki/Hyperfine_structure
  6. Interested about why the size of the proton seems like such a challenge to figure out? See how the structure of hydrogen can be used to calculate it: https://en.wikipedia.org/wiki/Proton_radius_puzzle

Dark Matter Freeze Out: An Origin Story

In the universe today, there exists some non-zero amount of dark matter. How did it get here? Has this same amount always been here? Was there more or less of it earlier in the universe? The so-called "freeze out" scenario is one explanation for how the amount of dark matter we see today came to be.

The freeze out scenario essentially says that there is some large amount of dark matter in the early universe that decreases to the amount we observe today. This early-universe dark matter (\chi) is in thermal equilibrium with the particle bath (f), meaning that whatever particle processes create and destroy dark matter happen at equal rates, \chi \chi \rightleftharpoons f f, so that the net amount of dark matter is unchanged. We will take this as our "initial condition" and evolve it by letting the universe expand. For pedagogical reasons, we will name processes that create dark matter (f f \rightharpoonup \chi \chi) "production" processes, and processes that destroy dark matter (\chi \chi \rightharpoonup f f) "annihilation" processes.

Now that we’ve established our initial condition, a large amount of dark matter in thermal equilibrium with the particle bath, let us evolve it by letting the universe expand. As the universe expands, two things happen:

  1. The energy scale of the particle bath (f) decreases. The expansion of the universe also cools down the particle bath. At energy scales (temperatures) less than the dark matter mass, the production reaction becomes kinematically forbidden. This is because the initial bath particles simply don’t have enough energy to produce dark matter. The annihilation process though is unaffected, it only requires that dark matter find itself to annihilate. The net effect is that as the universe cools, dark matter production slows down and eventually stops.
  2. Dark matter annihilations cease. Due to the expansion of the universe, dark matter particles become increasingly separated in space which makes it harder for them to find each other and annihilate. The result is that as the universe expands, dark matter annihilations eventually cease.

Putting all of this together, we obtain the following plot, adapted from  The Early Universe by Kolb and Turner and color-coded by me.

Fig 1: Color-coded freeze-out scenario. The solid line is the density of dark matter that remains in thermal equilibrium as the universe expands. The dashed lines represent the freeze-out density. The red region corresponds to a time in the universe when the production and annihilation rates are equal; the purple region, to a time when the production rate is smaller than the annihilation rate; and the blue region, to a time when the annihilation rate is overwhelmed by the expansion of the universe.

  • On the horizontal axis is the dark matter mass divided by temperature T. It is often more useful to parametrize the evolution of the universe as a function of temperature rather than time, though the two are directly related.
  • On the vertical axis is the co-moving dark matter number density, which is the number of dark matter particles inside an expanding volume as opposed to a stationary volume. The comoving number density is useful because it accounts for the expansion of the universe.
  • The quantity \langle \sigma_A v \rangle is the rate at which dark matter annihilates. If the annihilation rate is small, then dark matter does not annihilate very often, and we are left with more. If we increase the annihilation rate, then dark matter annihilates more frequently, and we are ultimately left with less of it.
  • The solid black line is the comoving dark matter density that remains in thermal equilibrium, where the production and annihilation rates are equal. This line falls because as the universe cools, the production rate decreases.
  • The dashed lines are the "frozen out" dark matter densities that result from the cooling and expansion of the universe. The comoving density flattens off because the universe is expanding faster than dark matter can annihilate with itself.

The red region represents the hot, early universe where the production and annihilation rates are equal. Recall that the net effect is that the amount of dark matter remains constant, so the comoving density remains constant. As the universe begins to expand and cool, we transition into the purple region. This region is dominated by temperature effects: as the universe cools, the production rate begins to fall, and so the amount of dark matter that can remain in thermal equilibrium also falls. Finally, we transition to the blue region, where expansion dominates. In this region, dark matter particles can no longer find each other and annihilations cease. The comoving density is said to have "frozen out" because i) the universe is not energetic enough to produce new dark matter and ii) the universe is expanding faster than dark matter can annihilate with itself. Thus, we are left with a non-zero amount of dark matter that persists as the universe continues to evolve in time.
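
To see how the frozen-out abundance depends on the annihilation rate, here is a toy numerical sketch of the equation dY/dx = -(\lambda/x^2)(Y^2 - Y_{eq}^2) for a few values of \lambda (standing in for \langle \sigma_A v \rangle). The equilibrium yield and coupling values are illustrative choices, not numbers from Kolb and Turner:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy freeze-out: dY/dx = -(lam/x^2) * (Y^2 - Yeq^2), with x = m/T and
# lam playing the role of the annihilation rate <sigma_A v>.
def Yeq(x):
    return 0.1 * (1.0 + x) ** 1.5 * np.exp(-x)   # crude equilibrium yield

def rhs(x, Y, lam):
    return -(lam / x**2) * (Y[0] ** 2 - Yeq(x) ** 2)

xs = np.geomspace(1.0, 1000.0, 400)
for lam in (1e4, 1e5, 1e6):
    sol = solve_ivp(rhs, (xs[0], xs[-1]), [Yeq(xs[0])], args=(lam,),
                    t_eval=xs, method="Radau", rtol=1e-6, atol=1e-12)
    print(f"lam = {lam:.0e}  ->  frozen-out yield ~ {sol.y[0, -1]:.2e}")
# Larger annihilation rate -> dark matter tracks equilibrium longer
# -> smaller relic abundance (roughly Y ~ x_freeze-out / lam).
```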

References

[1] – This plot is figure 5.1 of Kolb and Turner's book The Early Universe (ISBN: 978-0201626742). There are many other plots that communicate essentially the same information, but are much more cluttered.

[2] – Dark Matter Genesis. This is a PhD thesis that does a good job of summarizing the history of dark matter and explaining how the freeze out mechanism works.

[3] – Dark Matter Candidates from Particle Physics and Methods of Detection. This is a review article written by a very prominent member of the field, J. Feng of the University of California, Irvine.

[4] – Dark Matter: A Primer. Have any more questions about dark matter? They are probably addressed in this primer.

Proton Momentum: Hiding in Plain Sight?

Protons and neutrons at first glance seem like simple objects. They have well defined spin and electric charge, and we even know their quark compositions. Protons are composed of two up quarks and one down quark and for neutrons, two downs and one up. Further, if a proton is moving, it carries momentum, but how is this momentum distributed between its constituent quarks? In this post, we will see that most of the momentum of the proton is in fact not carried by its constituent quarks.

Before we start, we need to have a small discussion about isospin. This will let us immediately write down the results we need later. Isospin is a quantum number that, in practice, allows us to package particles together. Protons and neutrons form an isospin doublet, which means they come in the same mathematical package. The proton is the isospin +1/2 component of this package, and the neutron is the isospin -1/2 component. Similarly, up quarks and down quarks form their own isospin doublet and come in their own package. In our experiment, if we are careful about which particles we scatter off each other, our calculations will permit us to exchange components of isospin packages everywhere instead of redoing calculations from scratch. This exchange is what I will call the "isospin trick." It turns out that comparing electron-proton scattering to electron-neutron scattering allows us to use this trick:

\text{Proton} \leftrightarrow \text{Neutron} \\ u \leftrightarrow d

Back to protons and neutrons. We know that protons and neutrons are composite particles; they themselves are made up of more fundamental objects. We need a way to "zoom into" these composite particles, to look inside them, and we do this with the help of structure functions F(x). Structure functions for the proton and neutron encode how electric charge and momentum are distributed among the constituents. We take u(x) and d(x) to be the probabilities of finding an up or down quark carrying a fraction x of the proton's momentum. Explicitly, these structure functions look like:

F(x) \equiv \sum_q e_q^2 q(x) \\ \ F_P(x) = \frac{4}{9}u(x) + \frac{1}{9}d(x) \\ \ F_N(x) = \frac{4}{9}d(x) + \frac{1}{9}u(x)

where the first line is the definition of a structure function. In this line, q denotes quarks, and e_q is the electric charge of quark q. In the second line, we have written out explicitly the structure function for the proton F_P(x), and invoked the isospin trick to immediately write down the structure function for the neutron F_N(x) in the third line. Observe that if we had attempted to write down F_N(x) following the definition in line 1, we would have gotten the same thing as the proton.

At this point we must turn to experiment to determine u(x) and d(x). The plot we will examine [1] is figure 17.6 taken from section 17.4 of Peskin and Schroeder, An Introduction to Quantum Field Theory. Some data is omitted to illustrate a point.

The momentum distribution of the quarks inside the proton. Some data has been omitted for the purposes of this discussion. The full plot is provided at the end of this post.

This plot shows the momentum distribution of the up and down quarks inside a proton. On the horizontal axis is the momentum fraction x and on the vertical axis is the probability. The two curves represent the probability distributions of the up (u) and down (d) quarks inside the proton. Integrating these curves gives us the total fraction of momentum stored in the up and down quarks, which I will call uppercase U and uppercase D. We want to know both U and D, so we need another equation to solve this system. Luckily, we can repeat this experiment using neutrons instead of protons, obtain a similar set of curves, and integrate them to obtain the following system of equations:

\int_0^1 F_P(x) \, dx = \frac{4}{9}U + \frac{1}{9}D = 0.18 \\ \int_0^1 F_N(x) \, dx = \frac{4}{9}D + \frac{1}{9}U = 0.12

Solving this system for U and D yields U = 0.36 and D = 0.18. We immediately see that the total momentum carried by the up and down quarks is \sim 54% of the momentum of the proton. Said a different way, the three quarks that make up the proton carry only about half of its momentum. One possible conclusion is that the proton has more "stuff" inside of it that is storing the remaining momentum. It turns out that this additional "stuff" is gluons, the mediators of the strong force. If we include gluons (and anti-quarks) in the momentum distribution, we can see that at low momentum fraction x, most of the proton momentum is stored in gluons. Throughout this discussion, we have neglected anti-quarks because even at low momentum fractions, they are sub-dominant to gluons. The full plot as seen in Peskin and Schroeder is provided below for completeness.
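
The little linear system above is easy to check numerically; here is a minimal sketch (the 0.18 and 0.12 are the measured integrals quoted above):

```python
import numpy as np

# Momentum-fraction sum rules from the measured structure-function integrals:
#   4/9 U + 1/9 D = 0.18   (electron-proton scattering)
#   4/9 D + 1/9 U = 0.12   (electron-neutron scattering)
A = np.array([[4 / 9, 1 / 9],
              [1 / 9, 4 / 9]])
b = np.array([0.18, 0.12])

U, D = np.linalg.solve(A, b)
print(f"U = {U:.2f}, D = {D:.2f}, U + D = {U + D:.2f}")
# -> U = 0.36, D = 0.18: the quarks carry only ~54% of the proton's momentum.
```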

References

[1] – Peskin and Schroeder, An Introduction to Quantum Field Theory, Section 17.4, figure 17.6.

Further Reading

[A] – The uses of isospin in early nuclear and particle physics. This article follows the historical development of isospin and does a good job of motivating why physicists used it in the first place.

[B] – Fundamentals in Nuclear Theory, Ch3. This is a more technical treatment of isospin, roughly at the level of undergraduate advanced quantum mechanics.

[C] – Symmetries in Physics: Isospin and the Eightfold Way. This provides a slightly more group-theoretic perspective of isospin and connects it to the SU(3) symmetry group.

[D] – Structure Functions. This is the Particle Data Group treatment of structure functions.

[E] – Introduction to Parton Distribution Functions. Not a paper, but an excellent overview of Parton Distribution Functions.

The neutrino from below

Article title: The ANITA Anomalous Events as Signatures of a Beyond Standard Model Particle and Supporting Observations from IceCube

Authors: Derek B. Fox, Steinn Sigurdsson, Sarah Shandera, Peter Mészáros, Kohta Murase, Miguel Mostafá, and Stephane Coutu 

Reference: arXiv:1809.09615v1

Neutrinos have arguably made history for being nothing other than controversial. From their very inception, their proposal by Wolfgang Pauli was described in his own words as "something no theorist should ever do". Years on, as we established the role that neutrinos play in the processes of our sun, it was discovered that the sun simply wasn't providing enough of them. In the end the only option was to concede that neutrinos were more complicated than we ever thought, opening up a new area of study of 'flavor oscillations' with the consequence that they may in fact possess a small, non-zero mass – to this day yet to be explained.

On a more recent note, neutrinos have sneakily raised eyebrows with a number of other interesting anomalies. The OPERA experiment, which detected neutrinos sent from CERN in Geneva, Switzerland to Gran Sasso, Italy, made international news with reports that their speeds exceeded the speed of light. Such an observation would have shattered the very foundations of modern physics and so was met with plenty of healthy skepticism. Alas, it was eventually traced back to a faulty timing cable and all was right with the world again. However, this was not the last time that neutrinos would be involved in a controversial anomaly.

The NASA-involved Antarctic Impulsive Transient Antenna (ANITA) is an experiment designed to search for very, very energetic neutrinos originating from outer space. As the name suggests, the experiment consists of an array of radio antennae carried by a balloon floating high above the Antarctic ice. Very energetic neutrinos can in fact produce intense radio signals when they pass through the Earth and scatter off atoms in the Antarctic ice. This may sound strange, as neutrinos are typically referred to as 'elusive'; however, at incredibly high energies their probability of scattering increases dramatically – to the point where the Earth becomes 'opaque' to them.

ANITA typically searches for the radio signature of cosmic-ray air showers in the atmosphere, which reflects off the ice surface, inverting the phase of the radio wave in the process. Alternatively, a small number of events can arrive from the direction of the horizon, without reflecting off the ice and hence without an inverted waveform. However, physicists were surprised to find signals originating from below the ice, without phase inversion, in a direction much too steep to have come from the horizon.

Why is this a surprise, you may ask? Well, any Standard Model particle at these energies would have trouble traversing such a long distance through the Earth (one of the observations corresponds to a chord length of 5700 km), whereas a neutrino at these energies would be expected to survive only a few hundred km. Such events would be expected to mainly involve \nu_{\tau} (tau neutrinos), since these can convert to a charged tau lepton shortly before exiting and then produce an air shower, something that is simply not possible for electrons or muons, which are absorbed over a much smaller distance. But even in the case of tau neutrinos, the probability of such an event occurring with the observed trajectory is very small (below one in a million), leading physicists to explore more exotic (and exciting) options.
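
For a sense of scale, here is a rough order-of-magnitude estimate of the neutrino interaction length in the Earth, taking a benchmark charged-current cross section of order 10^{-32} cm^2 for EeV-scale neutrinos (an assumed ballpark figure, not a number from the paper) and the Earth's mean density:

```python
# Rough interaction length of an EeV-scale neutrino in the Earth:
#   L ~ 1 / (n * sigma), with n the nucleon number density.
N_A   = 6.022e23   # nucleons per gram (Avogadro's number, A ~ 1 g/mol)
rho   = 5.5        # mean density of the Earth [g/cm^3]
sigma = 1e-32      # benchmark nu-nucleon cross section at ~EeV energies [cm^2]

n = N_A * rho            # nucleons per cm^3
L = 1.0 / (n * sigma)    # interaction length [cm]
print(f"interaction length ~ {L / 1e5:.0f} km (vs. the ~5700 km chord observed)")
```

This lands in the few-hundred-km ballpark quoted above, which is exactly what makes a 5700 km chord so hard to explain with Standard Model particles alone.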

A simple possibility is that ultra-high-energy neutrinos coming from space could interact within the Earth to produce a BSM (Beyond the Standard Model) particle that passes through the Earth until it exits, decays back to an SM lepton and then produces a shower of particles. Such a situation is shown in Figure 1, where the BSM particle comes from a well-known and popular supersymmetric extension of the SM: the stau slepton \tilde{\tau}.

Figure 1: An ultra-high energy neutrino interacting within the Earth to produce a beyond Standard Model particle before decaying back to a charged lepton and hadronizing, leaving a radio signal for ANITA. From “Tests of new physics scenarios at neutrino telescopes” talk by Bhavesh Chauhan.

In some popular supersymmetric extensions of the Standard Model, the stau slepton is typically the next-to-lightest supersymmetric particle (or NLSP) and can in fact be quite long-lived. In the presence of a nucleus, the stau may convert to a tau lepton and the LSP, which is typically the neutralino. In the paper titled above, the stau NLSP \tilde{\tau}_R arises within the Gauge-Mediated Supersymmetry Breaking (GMSB) scenario and can be produced through ultra-high-energy neutrino interactions with nucleons with a not-so-tiny branching ratio of BR \lesssim 10^{-4}. Of course, tension remains with direct searches for staus that could fit reasonably within this scenario, but it offers the prospect of observing effects of BSM physics without the effort of expensive colliders.

But the attempts at new physics explanations don't end there. There are some ideas that involve the decays of very heavy dark matter candidates in the center of the Milky Way. In a similar vein, another possibility comes from the well-motivated sterile neutrino – a BSM candidate introduced to explain the small, non-zero mass of the neutrino. There are a number of explanations for a large flux of sterile neutrinos through the Earth, and the rate at which they interact with the Earth is much more suppressed than for the light "active" neutrinos. It could then be hoped that they make their passage through the Earth, converting back before reaching the ANITA detector.

Anomalies like these come and go, however in any case, physicists remain interested in alternate pathways to new physics – or even a motivation to search in a more specific region with collider technology. But collecting more data first always helps!

References and further reading:

Neutrinos: What Do They Know? Do They Know Things?

Title: “Upper Bound of Neutrino Masses from Combined Cosmological Observations and Particle Physics Experiments”

Author: Loureiro et al. 

Reference: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.123.081301

Neutrinos are almost a lot of things. They are almost massless, a property that goes against the predictions of the Standard Model. Possessing this non-zero mass, they should travel at almost the speed of light, but not quite, in order to be consistent with the principles of special relativity. Yet each measurement of neutrino propagation speed returns a value that is, within experimental error, exactly the speed of light. Only coupled to the weak force, they are almost non-interacting, with 65 billion of them streaming from the sun through each square centimeter of Earth each second, almost undetected. 

How do all of these pieces fit together? The story of the neutrino begins in 1930, when Wolfgang Pauli proposed an as-yet-undetected particle emitted during beta decay in order to explain an apparent lack of energy and momentum conservation. In 1956, the neutrino's existence was confirmed when antineutrinos from a reactor were observed striking protons, producing neutrons and positrons whose annihilation yielded two gamma rays. Yet with this confirmation came an assortment of growing mysteries. In the decades that followed, a series of experiments found that there are three distinct flavors of neutrino, one corresponding to each type of lepton: electron, muon, and tau. Subsequent measurements of propagating neutrinos then revealed a curious fact: these three flavors are anything but distinct. When the flavor of a neutrino is initially measured to be, say, electron, a second measurement of flavor after it has traveled some distance could return the answer muon. Measure yet again, and you could find yourself a tau neutrino. This process, in which the probability of measuring a neutrino in one of the three flavor states varies as it propagates, is known as neutrino oscillation.

A representation of neutrino oscillation: three flavors of neutrino form a superposed wave. As a result, a measurement of neutrino flavor as the neutrino propagates switches between the three possible flavors. This mechanism implies that neutrinos are not massless, as previously thought. From: http://www.hyper-k.org/en/neutrino.html.

Neutrino oscillation threw a wrench into the Standard Model picture of neutrino mass; it implies that the masses of the three neutrinos cannot all be equal to each other, and hence cannot all be zero. Specifically, at most one of them can be zero, with the remaining two non-zero and non-equal. While at first glance an oddity, oscillation arises naturally from the underlying mathematics, and we can arrive at this conclusion via a simple analysis. To think about a neutrino, we consider two kinds of eigenstates (the state a particle is in when it is measured to have a certain observable quantity): one corresponding to flavor and one corresponding to mass. Because neutrinos are created in weak interactions, which conserve flavor, they are born in a flavor eigenstate. Flavor and mass cannot be simultaneously determined, so each flavor eigenstate is a linear combination of mass eigenstates, and vice versa. Now consider the case of three flavors of neutrino. As a superposition of mass eigenstates propagates, the components accumulate different phases according to their masses (a consequence of special relativity), so the flavor content of the state changes along the way; if all the masses were identical, those relative phases would never develop and the flavor would stay fixed. Since we experimentally observe an oscillation between neutrino flavors, we can conclude that their masses cannot all be the same.
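
To make the varying superposition concrete, here is a minimal two-flavor sketch of the vacuum oscillation probability, P = \sin^2(2\theta) \sin^2(1.27 \, \Delta m^2 [\text{eV}^2] \, L [\text{km}] / E [\text{GeV}]); the mixing angle and mass splitting below are representative atmospheric-scale values chosen purely for illustration:

```python
import numpy as np

def osc_prob(L_km, E_GeV, sin2_2theta=0.95, dm2_eV2=2.5e-3):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Probability vs. baseline for a 1 GeV neutrino: the flavor content
# genuinely oscillates as the neutrino propagates.
for L in (10, 100, 500, 1000):
    print(f"L = {L:5d} km  ->  P(flavor change) ~ {osc_prob(L, 1.0):.3f}")
```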

Although this result was unexpected and provides the first known departure from the Standard Model, it is worth noting that it also neatly resolves a few outstanding experimental mysteries, such as the solar neutrino problem. Neutrinos in the sun are produced as electron neutrinos and are likely to interact with unbound electrons as they travel outward, transitioning them into a second mass state which can interact as any of the three flavors. By observing a solar neutrino flux roughly a third of its predicted value, physicists not only provided a potential answer to a previously unexplained phenomenon but also deduced that this second mass state must be larger than the state initially produced. Related flux measurements of neutrinos produced during charged particle interactions in the Earth’s upper atmosphere, which are primarily muon neutrinos, reveal that the third mass state is quite different from the first two mass states. This gives rise to two potential mass hierarchies: the normal (m_1 < m_2 \ll m_3) and inverted (m_3 \ll m_1 < m_2) ordering.

The PMNS matrix parametrizes the transformation between the neutrino mass eigenbasis and its flavor eigenbasis. The left vector represents a neutrino in the flavor basis, while the right represents the same neutrino in the mass basis. When an individual component of the transformation matrix is squared, it gives the probability to measure the specified mass state for the corresponding flavor.

However, this oscillation also means that it is difficult to discuss neutrino masses individually, as measuring the sum of neutrino masses is currently easier from a technical standpoint. With current precision in cosmology, we cannot distinguish the three neutrinos at the epoch in which they become free-streaming, although this could change with increased precision. Future experiments in beta decay could also lead to progress in pinpointing individual masses, although current oscillation experiments are only sensitive to mass-squared differences \Delta m_{ij}^2 = m_i^2 - m_j^2. Hence, we frame our models in terms of these mass splittings and the mass sum, which also makes it easier to incorporate cosmological data. Current models of neutrinos are phenomenological: not directly derived from theory, but consistent with both theoretical principles and experimental data. The mixing between states is mathematically described by the PMNS (Pontecorvo-Maki-Nakagawa-Sakata) matrix, which is parametrized by three mixing angles and a phase related to CP violation. These parameters, as in most phenomenological models, have to be inserted into the theory by hand. There is usually a wide space of parameters in such models, and constraining this space requires input from a variety of sources. In the case of neutrinos, both particle physics experiments and cosmological data provide key avenues for exploration of these parameters. In a recent paper, Loureiro et al. used such a strategy, incorporating data from the large scale structure of galaxies and the cosmic microwave background to provide new upper bounds on the sum of neutrino masses.
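
For concreteness, here is a small sketch that builds the PMNS matrix in its standard three-angles-plus-phase parametrization and squares its first row, giving the mass-state content of an electron neutrino; the angle values are representative global-fit numbers used purely for illustration:

```python
import numpy as np

# Standard parametrization of the PMNS matrix from three mixing angles and
# one CP phase. The numerical values below are representative global-fit
# numbers, quoted here only for illustration.
th12, th23, th13 = np.radians([33.4, 49.0, 8.6])
delta = np.radians(195.0)

s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)
ep = np.exp(1j * delta)    # e^{+i delta}

U = np.array([
    [c12 * c13,                         s12 * c13,                        s13 / ep],
    [-s12 * c23 - c12 * s23 * s13 * ep, c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * ep, -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
])

# |U_ei|^2 is the probability that an electron neutrino is found in mass
# state i; unitarity guarantees each row (and column) sums to one.
print("|U_ei|^2 :", np.round(np.abs(U[0]) ** 2, 3))
print("unitary? :", np.allclose(U @ U.conj().T, np.eye(3)))
```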

The group investigated two main classes of neutrino mass models: exact models and cosmological approximations. The former concerns models that integrate results from neutrino oscillation experiments and are parametrized by the smallest neutrino mass, while the latter class uses a model scheme in which the neutrino mass sum is related to an effective number of neutrino species N_{\nu} times an effective mass m_{eff} which is equal for each flavor. In exact models, Gaussian priors (an initial best-guess) were used with data sampling from a number of experimental results and error bars, depending on the specifics of the model in question. This includes possibilities such as fixing the mass splittings to their central values or assuming either a normal or inverted mass hierarchy. In cosmological approximations, N_{\nu} was fixed to a specific value depending on the particular cosmological model being studied, with the total mass sum sampled from data.

The end result of the group’s analysis, which shows the calculated neutrino mass bounds from 7 studied models, where the first 4 models are exact and the last 3 are cosmological approximations. The left column gives the probability distribution for the sum of neutrino masses, while the right column gives the probability distribution for the lightest neutrino in the model (not used in the cosmological approximation scheme). From: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.123.081301

The group ultimately demonstrated that cosmologically-based models result in upper bounds for the mass sum that are much lower than those generated from physically-motivated exact models, as we can see in the figure above. One of the models studied resulted in an upper bound that is not only different from those determined from neutrino oscillation experiments, but is inconsistent with known lower bounds. This puts us into the exciting territory that neutrinos have pushed us to again and again: a potential finding that goes against what we presently know. The calculated upper bound is also significantly different if the assumption is made that one of the neutrino masses is zero, with the mass sum contained in the remaining two neutrinos, setting the stage for future differentiation between neutrino masses. Although the group did not find any statistically preferable model, they provide a framework for studying neutrinos with a considerable amount of cosmological data, using results of the Planck, BOSS, and SDSS collaborations, among many others. Ultimately, the only way to arrive at a robust answer to the question of neutrino mass is to consider all of these possible sources of information for verification. 

With increased sensitivity in upcoming telescopes and a plethora of intriguing beta decay experiments on the horizon, we should be moving away from studies that update bounds and toward ones that make direct estimations. In these future experiments, previous analyses will prove vital in working toward an understanding of the underlying mass models and put us one step closer to unraveling the enigma of the neutrino. While there are still many open questions concerning their properties (Why are their masses so small? Is the neutrino its own antiparticle? What governs the mass mechanism?), studies like these help to grow intuition and prepare for the next phases of discovery. I'm excited to see what unexpected results come next for this almost elusive particle.

Further Reading:

  1. A thorough introduction to neutrino oscillation: https://arxiv.org/pdf/1802.05781.pdf
  2. Details on mass ordering: https://arxiv.org/pdf/1806.11051.pdf
  3. More information about solar neutrinos (from a fellow ParticleBites writer!): https://particlebites.com/?p=6778 
  4. A summary of current neutrino experiments and their expected results: https://www.symmetrymagazine.org/article/game-changing-neutrino-experiments 

Gamma Rays from Andromeda: Evidence of Dark Matter?

In a recent paper [1], Chris Karwin et al. report an excess of gamma rays that seems to originate from our neighboring galaxy, Andromeda. The size of the excess is arguably insignificant, roughly 3% – 5%. However, its spatial location is what is interesting. The luminous region of Andromeda is roughly 70 kpc in diameter [2], but the gamma ray excess is located roughly 120 – 200 kpc from the center. Below is an approximately-to-scale diagram that illustrates the spatial extent of the excess:

Pictorial representation of the spatial extent of the gamma ray excess from Andromeda.

There may be many possible explanations and interpretations for this excess; one of the more exotic possibilities is dark matter annihilation. The dark matter paradigm says that galaxies form within "dark matter halos," clouds of dark matter that attract the luminous matter that will eventually form a galaxy [3]. The dark matter halo encompassing Andromeda has a virial radius of about 200 kpc [4], comfortably containing the location of the gamma ray excess. This means that there is dark matter at the location of the excess, but how do we start with dark matter and end up with photons?

Within the "light mediator" paradigm of dark matter particle physics, dark matter can annihilate with itself into massive dark photons, the dark-sector equivalent of the photon. These dark photons, by ansatz, can interact with Standard Model particles and ultimately produce photons. A schematic diagram of this process is provided below:

Dark matter (X) annihilates into dark photons (A) which couple to Standard Model particles that eventually decay into photons.

To recap,

  1. There is an excess of high energy, 1-100 GeV photons from Andromeda.
  2. The location of this excess is displaced from the luminous matter within Andromeda.
  3. This spatial displacement means that the cause of this excess is probably not the luminous matter within Andromeda.
  4. We do know however, that there is dark matter at the location of the excess.
  5. It is possible for dark matter to yield high energy photons so this excess may be evidence of dark matter.

References:

[1] – FERMI-LAT Observations Of Gamma Ray Emission Towards the Outer Halo of M31. This is the paper that reports the gamma ray excess from Andromeda.

[2] – A kinematically selected, metal-poor stellar halo in the outskirts of M31. This work estimates the size or extent of Andromeda.

[3] – The Connection between Galaxies and their Dark Matter Halos. This paper details how dark matter halos relate and impact galaxy formation.

[4] – Stellar mass map and dark matter distribution in M31. This paper determines various properties of the Andromeda dark matter halo.

Solar Neutrino Problem

Why should we even care about neutrinos coming from the sun in the first place? In the 1960s, the processes governing the interior of the sun were not well understood. There was a strong suspicion that the sun's main energy source was the fusion of hydrogen into helium, but there was no direct evidence for this hypothesis. This is because the photons produced in fusion processes have a mean free path of about 10^(-10) times the radius of the sun [1]. That is to say, it takes thousands of years for the light produced inside the core of the sun to escape and be detected at Earth. Photons, then, are not a good experimental observable to use if we want to understand the interior of the sun.

Additionally, these fusion processes produce neutrinos, which are essentially non-interacting. On one hand, their non-interacting nature means that they can escape the interior of the sun unimpeded. Neutrinos thus give us a direct probe into the core of the sun without the wait that photons require. On the other hand, these same properties mean that detecting them is extremely difficult.

The undertaking to understand and measure these neutrinos was led by John Bahcall, who headed the theoretical development, and Ray Davis Jr., who headed the experimental effort.

In 1963, John Bahcall gave the first prediction of the neutrino flux coming from the sun [1]. Five years later in 1968, Ray Davis provided the first measurement of the solar neutrino flux [2]. They found that the predicted value was about 2.5 times higher than the measured value. This discrepancy is what became known as the solar neutrino problem.

This plot shows the discrepancy between the measured (blue) and predicted (not blue) amounts of electron neutrinos from various experiments. Blue corresponds to experimental measurements. The other colors correspond to the predicted amount of neutrinos from various sources. This figure was first presented in a 2004 paper by Bahcall [3].

Broadly speaking, there were three causes for this discrepancy:

  1. The prediction was incorrect. This was Bahcall's domain. At lowest order, this could involve some combination of two things: first, incorrect modeling of the sun, resulting in inaccurate neutrino fluxes; second, inaccurate calculation of the observable signal resulting from the neutrino interactions with the detector. Bahcall and his collaborators spent 20 years refining this work and much more, but the discrepancy persisted.
  2. The experimental measurement was incorrect. During those same 20 years, until the late 1980s, Ray Davis' experiment was the only active solar neutrino experiment [4]. He continued to improve the experimental sensitivity, but the discrepancy still persisted.
  3. New Physics. In 1968, B. Pontecorvo and V. Gribov formulated neutrino oscillations as we know them today. They proposed that neutrino flavor eigenstates are linear combinations of mass eigenstates [5]. At a very hand-wavy level, this ansatz sounds reasonable because a neutrino of one flavor at production can change its identity while it propagates from the Sun to the Earth. This is because it is the mass eigenstates that have well-defined time evolution in quantum mechanics.

It turns out that Pontecorvo and Gribov had found the resolution to the solar neutrino problem. It would take an additional 30 years for experimental verification of neutrino oscillations by Super-K in 1998 [6] and the Sudbury Neutrino Observatory (SNO) in 1999 [7].


References:

[1] – Solar Neutrinos I: Theoretical This paper lays out the motivation for why we should care about solar neutrinos at all.

[2] – Search for Neutrinos from the Sun The first announcement of the measurement of the solar neutrino flux.

[3] – Solar Models and Solar Neutrinos This is a summary of the Solar Neutrino Problem as presented by Bahcall in 2004.

[4] – The Evolution of Neutrino Astronomy A recounting of their journey in neutrino oscillations written by Bahcall and Davis.

[5] – Neutrino Astronomy and Lepton Charge This is the paper that laid down the mathematical groundwork for neutrino oscillations.

[6] – Evidence for Oscillation of Atmospheric Neutrinos The Super-K collaboration reporting their findings in support of neutrino flavor oscillations.

[7] – The Sudbury Neutrino Observatory The SNO collaboration announcing that they had strong experimental evidence for neutrino oscillations.

Additional References

[A] – Formalism of Neutrino Oscillations: An Introduction. An accessible introduction to neutrino oscillations, this is useful for anyone who wants a primer on this topic.

[B] – Neutrino Masses, Mixing, and Oscillations. This is the Particle Data Group (PDG) treatment of neutrino mixing and oscillation.

[C] – Solving the mystery of the missing neutrinos. Written by John Bahcall, this is a comprehensive discussion of the "missing neutrino" or "solar neutrino" problem.

Riding the wave to new physics

Article title: “Particle physics applications of the AWAKE acceleration scheme”

Authors: A. Caldwell, J. Chappell, P. Crivelli, E. Depero, J. Gall, S. Gninenko, E. Gschwendtner, A. Hartin, F. Keeble, J. Osborne, A. Pardons, A. Petrenko, A. Scaachi, and M. Wing

Reference: arXiv:1812.11164

On the energy frontier, the search for new physics remains a contentious issue – do we continue to build bigger, more powerful colliders? Or is this too costly (or too impractical) an endeavor? The standard method of accelerating charged particles remains radio-frequency (RF) cavities, with electric field strengths of about 100 megavolts per meter, such as those proposed for the future Compact Linear Collider (CLIC) at CERN, aiming for center-of-mass energies in the multi-TeV regime. Such technology in linear form is nothing new, having been a key part of the SLAC National Accelerator Laboratory (California, USA) for decades before its shutdown around the turn of the millennium. However, a device such as CLIC would still require more than ten times the space of SLAC, predicted to come in at around 10-50 km. Not only that, the walls of the cavities are made of normal conducting material, so they heat up very quickly and must typically be run in short pulses. And we haven't even mentioned the costs yet!

Physicists are a smart bunch, however, and they're always on the lookout for new technologies, new techniques and unique ways of looking at the same problem. As you may have guessed already, the limiting factor determining the length required for sufficient linear acceleration is the field gradient. But what if there were a way to achieve hundreds of times that of a standard RF cavity? The answer has been found in plasma wakefields: oscillations driven in a plasma, here by dense bunches of protons, with the potential to accelerate electrons to gigaelectronvolt energies in a matter of meters!

Plasma wakefields are by no means a new idea, having first been proposed at least four decades ago. However, most demonstrations have used electrons or lasers to 'drive' the wakefield in the plasma. More specifically, this is known as the 'drive beam', which does not actually participate in the acceleration but provides the large electric field gradient for the 'witness beam' – the electrons. Using protons as the drive beam, which can penetrate much further into the plasma, had not been demonstrated – until now.

In fact, CERN has very recently demonstrated proton-driven wakefield technology for the first time during the 2016-2018 run of AWAKE (which stands for Advanced Proton Driven Plasma Wakefield Acceleration Experiment, naturally), accelerating electrons to 2 GeV in only 10 m. The protons that drive the electrons are injected from the Super Proton Synchrotron (SPS) into a rubidium gas, ionizing the atoms and perturbing their uniform electron distribution into an oscillating, wave-like state. The electrons that 'witness' the wakefield then 'ride the wave' much like a surfer at the forefront of a water wave. Right now, AWAKE is just a proof of concept, but plans to scale up to 10 GeV electrons in the coming years could hopefully pave the way to using LHC-level proton drivers to shoot electrons up to TeV energies!
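
To put those numbers in perspective, here is a quick back-of-the-envelope comparison of the average gradient implied by the proof-of-concept run (2 GeV over roughly 10 m) with the ~100 MV/m RF-cavity figure quoted earlier:

```python
# Average accelerating gradient implied by the AWAKE proof-of-concept run.
energy_gain_GeV      = 2.0     # electrons accelerated to ~2 GeV ...
plasma_length_m      = 10.0    # ... over roughly 10 m of plasma
rf_gradient_MV_per_m = 100.0   # typical RF-cavity gradient quoted above

awake_gradient = energy_gain_GeV * 1e3 / plasma_length_m   # in MV/m
print(f"AWAKE average gradient ~ {awake_gradient:.0f} MV/m, "
      f"about {awake_gradient / rf_gradient_MV_per_m:.0f}x the RF benchmark")
```

Even this first proof-of-principle run exceeds the RF benchmark, with the planned scale-up aiming much higher.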

Figure 1: A layout of AWAKE (Advanced Proton Driven Plasma Wakefield Acceleration Experiment).

In this article, we focus instead on the interesting physics applications of such a device. Bunches of electrons with energies up to the TeV scale would be so far unprecedented. The most obvious application would of course be a high-energy linear electron-positron collider. However, let's focus on some of the more novel experimental applications being discussed today, particularly those that could benefit from such a strong electromagnetic presence in an almost 'tabletop physics' configuration.

Awake in the dark

One of the most popular considerations when it comes to dark matter is the existence of dark photons, mediating interactions between the dark and visible sectors (see "The lighter side of Dark Matter" for more details). Finding them has been the subject of recent experimental and theoretical effort, including high-energy electron fixed-target experiments. Figure 2 shows such an interaction, where A^\prime represents the dark photon. One experiment based at CERN, known as NA64, already searches for dark photons by directing electrons, obtained from interactions of the SPS proton beam, onto a fixed target. In the standard picture, the dark photon is searched for through its missing-energy signature: it leaves the detector without interacting, escaping with a portion of the energy. The energy of the electrons is of course not the issue when the SPS is used; the number of electrons is.

Figure 2: Dark photon production from a fixed-target experiment with an electron-positron final state.

Assuming one could work with the AWAKE scheme, one could achieve numbers of electrons on target orders of magnitude larger – clearly enhancing the reach for masses and mixing of the dark photon. The idea would be to introduce a high number of energetic electron bunches to a tungsten target with a following 10 m long volume for the dark photon to decay (in accordance with Figure 2). Because of the opposite charges of the electron and positron, the final decay products can then be separated with magnetic fields and hence one can ultimately determine the dark photon invariant mass.

Figure 3 shows how much of an impact a larger number of on-target electrons would make on the discovery reach in the plane of kinetic mixing \epsilon vs the dark photon mass m_{A^\prime} (again we refer the reader to "The lighter side of Dark Matter" for explanations of these parameters). With the existing NA64 setup, one can already see new areas of the parameter space being explored for 10^{10} – 10^{13} electrons. However, a significant difference can be seen with the electron bunches provided by the AWAKE configuration, with an ambitious limit shown for 10^{16} electrons at 1 TeV.

Figure 3: Exclusion limits in the \epsilon - m_{A^\prime} plane for the dark photon decaying to an electron-positron final state. The NA64 experiment using larger numbers of electrons is shown by the colored non-solid curves, from 10^{10} to 10^{13} total on-target electrons. The solid colored lines show the AWAKE-provided electron bunches with 10^{15} and 10^{16} electrons at 50 GeV and 10^{16} electrons at 1 TeV.

Light, but strong

Quantum Electrodynamics (or QED for short), describing the interaction between fundamental electrons and photons, is perhaps the most precisely measured and well-studied theory out there, showing agreement with experiment in a huge range of situations. However, there are some extreme phenomena out in the universe where the strength of certain fields becomes so great that our current understanding starts to break down. For the electromagnetic field this can in fact be quantified by the Schwinger limit, above which nonlinear field effects are expected to become significant. At a strength of around 10^{18} V/m, the nonlinear corrections to the equations of QED predict the spontaneous appearance of electron-positron pairs created from such an enormous field.
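
The quoted scale is easy to reproduce from the defining formula E_S = m_e^2 c^3 / (e \hbar); a minimal sketch:

```python
# Schwinger critical field E_S = m_e^2 c^3 / (e * hbar), above which
# nonlinear QED effects (spontaneous e+ e- pair creation) become important.
m_e  = 9.1093837e-31    # electron mass [kg]
c    = 2.99792458e8     # speed of light [m/s]
e    = 1.60217663e-19   # elementary charge [C]
hbar = 1.05457182e-34   # reduced Planck constant [J s]

E_S = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger limit ~ {E_S:.2e} V/m")   # ~1.3e18 V/m
```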

One of the predictions of this regime is multiphoton interactions with electrons in the initial state. In linear QED, only the standard 2 \rightarrow 2 scattering, e^- + \gamma \rightarrow e^- + \gamma, is possible. In a strong-field regime, however, the initial state can open up to n photons. Given a strong enough laser pulse, multiple laser photons can interact with a single electron, giving access to this incredible region of physics. We show this in Figure 4.

Figure 4: Multiphoton interaction with an electron (left) and electron-positron production from photon absorption (right). Here n is the number of photons absorbed in the initial state.

The good and bad news is that this had already been performed as far back as the 1990s in the E144 experiment at SLAC, using 50 GeV electron bunches – which were, however, unable to reach the critical field value in the electron's rest frame. AWAKE could certainly provide more energetic electrons and allow for a different kinematic reach. Could this provide the first experimental measurement of the Schwinger critical field?

Of course, these are just a few considerations amongst a plethora of uses for the production of energetic electrons over such short distances. However, as physicists desperately continue their search for new physics, it may be time to consider the use of new acceleration technologies on a larger scale, as AWAKE has already shown its scalability. Wakefield acceleration may even establish itself with a fully developed new-physics search plan of its own.

References and further reading: