Too Massive? New measurement of the W boson’s mass sparks intrigue

This is part one of our coverage of the CDF W mass result, focusing on its implications. Read about the details of the measurement in a sister post here!

Last week the physics world was abuzz with the latest results from an experiment that stopped running a decade ago. Some heralded this as the beginning of a breakthrough in fundamental physics, with headlines reading “Shock result in particle experiment could spark physics revolution” (BBC). So what exactly is all the fuss about?

The result itself is an ultra-precise measurement of the mass of the W boson. The W boson is one of the carriers of the weak force, and this measurement pegged its mass at 80,433 MeV with an uncertainty of 9 MeV. The excitement comes from the fact that this value disagrees with the prediction from our current best theory of particle physics, the Standard Model. In the theoretical structure of the Standard Model, the masses of the gauge bosons are all interrelated. In the Standard Model the mass of the W boson can be computed based on the mass of the Z boson as well as a few other parameters in the theory (like the weak mixing angle). In a first approximation (i.e. to lowest order in perturbation theory), the mass of the W boson is equal to the mass of the Z boson times the cosine of the weak mixing angle. Based on other measurements that have been performed, including the Z mass, the Higgs mass, the lifetime of muons and others, the Standard Model predicts that the mass of the W boson should be 80,357 MeV (with an uncertainty of 6 MeV). So the two numbers disagree quite strongly, at the level of 7 standard deviations.
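
As a quick sanity check of where the quoted 7 standard deviations comes from, here is the back-of-the-envelope arithmetic (a sketch assuming the two uncertainties are independent and Gaussian, not part of the CDF analysis itself):

import math

# CDF measurement and Standard Model prediction of the W mass, in MeV
m_w_cdf, sigma_cdf = 80433.0, 9.0
m_w_sm, sigma_sm = 80357.0, 6.0

# Combine the (assumed independent, Gaussian) uncertainties in quadrature
sigma_total = math.hypot(sigma_cdf, sigma_sm)

# Number of standard deviations separating measurement and prediction
n_sigma = (m_w_cdf - m_w_sm) / sigma_total
print(f"difference: {m_w_cdf - m_w_sm:.0f} MeV, significance: {n_sigma:.1f} sigma")
# prints a difference of 76 MeV and roughly 7 sigma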

If the measurement and the Standard Model prediction are both correct, this would imply that there is some deficiency in the Standard Model: some new particle interacting with the W boson whose effects haven’t been accounted for. This would be welcome news to particle physicists, as we know that the Standard Model is an incomplete theory but have been lacking direct experimental confirmation of its deficiencies. The size of the discrepancy would also mean that whatever new particle is causing the deviation may also be directly detectable at our current or near-future colliders.

If this discrepancy is real, exactly what new particles would this entail? Judging based on the 30+ (and counting) papers released on the subject in the last week, there are a good number of possibilities. Some examples include extra Higgs bosons, extra Z-like bosons, and vector-like fermions. It would take additional measurements and direct searches to pick out exactly what the culprit was. But it would hopefully give experimenters definite targets of particles to look for, which would go a long way in advancing the field.

But before everyone starts proclaiming the Standard Model dead and popping champagne bottles, it’s important to take stock of this new CDF measurement in the larger context. Measurements of the W mass are hard; that’s why it has taken the CDF collaboration over 10 years to publish this result since they stopped taking data. And although this measurement is the most precise one to date, several other W mass measurements have been performed by other experiments.

The Other Measurements

A summary of all the W mass measurements performed to date (black dots) with their uncertainties (blue bars) as compared to the Standard Model prediction (yellow band). One can see that this new CDF result is in tension with previous measurements. (source)

Previous measurements of the W mass have come from experiments at the Large Electron-Positron collider (LEP), another experiment at the Tevatron (D0), and experiments at the LHC (ATLAS and LHCb). Though none of these were as precise as this new CDF result, they had been painting a consistent picture of a value in agreement with the Standard Model prediction. If you take the average of these other measurements, their value differs from the CDF measurement at the level of about 4 standard deviations, which is quite significant. This discrepancy seems large enough that it is unlikely to arise from a purely random fluctuation, and likely means that either some uncertainties have been underestimated or something has been overlooked in either the previous measurements or this new one.

What one would like are additional, independent, high precision measurements that could either confirm the CDF value or the average value of the previous measurements. Unfortunately it is unlikely that such a measurement will come in the near future. The only currently running facility capable of such a measurement is the LHC, but it will be difficult for experiments at the LHC to rival the precision of this CDF one.

W mass measurements are somewhat harder at the LHC than at the Tevatron for a few reasons. First of all, the LHC is a proton-proton collider, while the Tevatron was a proton-antiproton collider, and the LHC also operates at a higher collision energy than the Tevatron did. Both differences cause W bosons produced at the LHC to have more momentum than those produced at the Tevatron. Modeling of the W boson’s momentum distribution can be a significant source of uncertainty in its mass measurement, and the extra momentum of W’s at the LHC makes this a larger effect. Additionally, the LHC has a higher collision rate, meaning that each time a W boson is produced there are actually tens of other collisions laid on top (rather than only a few other collisions, as at the Tevatron). These extra collisions are called pileup and can make it harder to perform precision measurements like these. In particular, for the W mass measurement the neutrino’s momentum has to be inferred from the momentum imbalance in the event, and this becomes harder when there are many collisions on top of each other. Of course W mass measurements are possible at the LHC, as evidenced by ATLAS and LHCb’s already published results. And we can look forward to improved results from ATLAS and LHCb as well as a first result from CMS. But it may be very difficult for them to reach the precision of this CDF result.
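
To make the last point concrete, here is a minimal sketch of how the neutrino’s transverse momentum is inferred from the momentum imbalance and turned into the transverse mass used in these measurements. This is illustrative only, not the experiments’ reconstruction software, and the event contents are invented numbers:

import math

# Toy (px, py) momenta, in GeV, of the other reconstructed particles in an event
# (in a real event, pileup particles enter this sum too, degrading the resolution).
recoil_particles = [(12.0, 5.0), (-3.5, 4.0)]
lepton = (30.0, 15.0)  # the charged lepton from the W decay

# The neutrino escapes undetected, so its transverse momentum is estimated as
# minus the vector sum of everything that was seen ("missing transverse momentum").
met_x = -(sum(px for px, _ in recoil_particles) + lepton[0])
met_y = -(sum(py for _, py in recoil_particles) + lepton[1])

# Transverse mass: m_T = sqrt(2 * pT(lepton) * pT(miss) * (1 - cos(delta phi)))
pt_lep = math.hypot(*lepton)
pt_met = math.hypot(met_x, met_y)
dphi = math.atan2(lepton[1], lepton[0]) - math.atan2(met_y, met_x)
m_t = math.sqrt(2 * pt_lep * pt_met * (1 - math.cos(dphi)))
print(f"missing pT = {pt_met:.1f} GeV, transverse mass = {m_t:.1f} GeV")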

A plot of the transverse mass (one of the variables used in the measurement) of the W from the ATLAS measurement. The red and yellow lines show how little the distribution changes if the W mass changes by 50 MeV, which is around two and a half times the uncertainty of the ATLAS result. These shifts change the distribution by only a few tenths of a percent, illustrating the difficulty involved. (source)

The Future

A future electron positron collider would be able to measure the W mass extremely precisely by using an alternate method. Instead of looking at the W’s decay, the mass could be measured through its production, by scanning the energy of the electron beams very close to the threshold to produce two W bosons. This method should offer precision significantly better than even this CDF result. However any measurement from a possible future electron positron collider won’t come for at least a decade.

In the coming months, expect this new CDF measurement to receive a lot of buzz. Experimentalists will be poring over the details, trying to figure out why it is in tension with previous measurements, and working hard to produce new measurements from LHC data. Meanwhile theorists will write a bunch of papers detailing what new particles could explain the discrepancy and whether there is a connection to other outstanding anomalies (like the muon g-2). But the big question of whether we are seeing the first real crack in the Standard Model, or there is some mistake in one or more of the measurements, is unlikely to be answered for a while.

If you want to learn about how the measurement actually works, check out this sister post!

Read More:

CERN Courier “CDF sets W mass against the Standard Model”

Blog post on the CDF result from an (ATLAS) expert on W mass measurements “[Have we] finally found new physics with the latest W boson mass measurement?”

PDG Review “Electroweak Model and Constraints on New Physics”

Moriond 2022: Return of the Excesses?!

Rencontres de Moriond is probably the biggest ski-vacation conference of the year in particle physics, and is one of the places where big particle physics experiments often unveil their new results. For the last few years the buzz in particle physics has surrounded ‘indirect’ probes of new physics, specifically the latest measurement of the muon’s anomalous magnetic moment (g-2) and hints from LHCb of lepton flavor universality violation. If either of these anomalies were confirmed it would of course be huge: definitive laboratory evidence for physics beyond the standard model. But it would not answer the question of what exactly that new physics was. As evidenced by the 500+ papers written in the last year offering explanations of the g-2 anomaly, there are a lot of different potential explanations.

A definitive answer would come in the form of a ‘direct’ observation of whatever particle is causing the anomaly, which traditionally means producing and observing said particle in a collider. But so far the largest experiments performing these direct searches, ATLAS and CMS, have not shown any hints of new particles. But this Moriond, as the LHC experiments are getting ready for the start of a new data taking run later this year, both collaborations unveiled ‘excesses’ in their Run-2 data. These excesses, extra events above a background prediction that resemble the signature of a new particle, don’t have enough statistical significance to claim discoveries yet, and may disappear as more data is collected, as many an excess has done before. But they are intriguing and some have connections to anomalies seen in other experiments. 

So while there have been many great talks at Moriond (covering cosmology, astro-particle searches for dark matter, neutrino physics, flavor physics measurements, and more), and the conference is still ongoing, it’s worth reviewing these new excesses in particular and what they might mean.

Excess 1: ATLAS Heavy Stable Charged Particles

Talk (paper forthcoming): https://agenda.infn.it/event/28365/contributions/161449/attachments/89009/119418/LeThuile-dEdx.pdf

Most searches for new particles at the LHC assume that said new particles decay very quickly once they are produced, so that their signatures can be pieced together by measuring all the particles they decay to. However, in the last few years there has been increasing interest in searching for particles that don’t decay quickly and therefore leave striking signatures in the detectors that can be distinguished from regular Standard Model particles. This particular ATLAS analysis looks for particles that are long-lived, heavy and charged. Due to their heavy masses (and/or large charges), such particles will produce greater ionization signals as they pass through the detector than Standard Model particles would. The analysis selects tracks with high momentum and unusually high ionization signals. They find an excess of events with high mass and high ionization, with a significance of 3.3-sigma.

The ATLAS excess of heavy stable charged particles. The black data points lie above the purple background prediction and match well with the signature of a new particle (yellow line). 

If their background has been estimated properly, this seems to be quite a clear signature, and it might be time to get excited. ATLAS has checked that these events are not due to any known instrumental defect, but they do offer one caveat. For a heavy particle like this (with a mass of around a TeV) one would expect it to be moving noticeably slower than the speed of light. But when ATLAS compares the ‘time of flight’ of the particle, how long it takes to reach their detectors, its velocity appears indistinguishable from the speed of light, which is exactly what one would expect of background Standard Model particles.
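
As a rough illustration of why the time of flight is such a telling cross-check: a particle’s speed depends only on its momentum and mass, so a ~TeV-mass particle with typical LHC track momenta should lag well behind light speed. The numbers below are hypothetical and not taken from the ATLAS analysis:

import math

def beta(p_gev, m_gev):
    """Speed (as a fraction of c) of a particle with momentum p and mass m."""
    return p_gev / math.sqrt(p_gev**2 + m_gev**2)

# A light Standard Model particle with high momentum is ultra-relativistic...
print(f"m = 0.1 GeV, p = 1000 GeV: beta = {beta(1000, 0.1):.9f}")
# ...while a hypothetical ~1.4 TeV particle with the same momentum lags noticeably.
print(f"m = 1400 GeV, p = 1000 GeV: beta = {beta(1000, 1400):.2f}")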

So what exactly to make of this excess is somewhat unclear. Hopefully CMS can weigh in soon!

Excesses 2-4: CMS’s Taus; Vector-Like-Leptons and TauTau Resonance(s)

Paper 1 : https://cds.cern.ch/record/2803736

Paper 2: https://cds.cern.ch/record/2803739

Many of the models seeking to explain the flavor anomalies seen by LHCb predict new particles that couple preferentially to taus and b-quarks. These two separate CMS analyses look for particles that decay specifically to tau leptons.

In the first analysis they look for pairs of vector-like leptons (VLLs), the lightest particles predicted in one of the favored models to explain the flavor anomalies. The VLLs are predicted to decay into tau leptons and b-quarks, so the analysis targets events which have at least four b-tagged jets and reconstructed tau leptons. They trained a machine learning classifier to separate VLLs from their backgrounds. They see an excess of events at high VLL classification probability in the categories with 1 or 2 reconstructed taus, with a significance of 2.8 standard deviations.

The CMS vector-like lepton excess. The gray filled histogram shows the best-fit amount of VLL signal. The histograms of other colors show the contributions of various backgrounds, and the hatched band their uncertainty.

In the second analysis they look for new resonances that decay into two tau leptons. They employ a sophisticated ’embedding’ technique to estimate the large background of Z bosons decaying to tau pairs by using the decays of Z bosons to muons. They see two excesses, one at 100 GeV and one at 1200 GeV, each with a significance of around 3-sigma. The excess at ~100 GeV could also be related to another CMS analysis that saw an excess of diphoton events at ~95 GeV, especially given that if there were an additional Higgs-like boson at 95 GeV, diphoton and ditau would be the two channels in which it would likely first appear.

CMS TauTau excesses. The excess at ~100 GeV is shown in the left plot and the one at 1200 GeV on the right; the best-fit signal is shown with the red line in the bottom ratio panels.

While the statistical significances of these excesses are not quite as high as that of the first one, meaning it is more likely they are fluctuations that will disappear with more data, their connection to other anomalies is quite intriguing.

Excess 5: CMS Paired Dijet Resonances

Paper: https://cds.cern.ch/record/2803669

Often statistical significance doesn’t tell the full story of an excess. When CMS first performed its standard dijet search on Run-2 LHC data, in which one looks for a resonance decaying to two jets by looking for bumps in the dijet invariant mass spectrum, they did not find any significant excesses. But they did note one particularly striking event, in which 4 jets formed two ‘wide jets’, each with a mass of 1.9 TeV, and the 4-jet invariant mass was 8 TeV.

An event display for the striking CMS 4-jet event. The 4 jets combine to form two back-to-back dijet pairs, each with a mass of 1.9 TeV.

On its own, this single event could well have occurred via normal Standard Model QCD, even though QCD typically produces a simpler 2-jet topology. However, a new 8 TeV resonance that decayed to two intermediate particles with masses of 1.9 TeV, each of which then decayed to a pair of jets, would lead to exactly such a signature. This motivated them to design this analysis, a new search specifically targeting this paired dijet resonance topology. In this new search they have now found a second event with very similar characteristics. The local statistical significance of this excess is 3.9-sigma, but when one accounts for the many different potential dijet and 4-jet mass combinations considered in the analysis, that drops to 1.6-sigma.
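
The drop from 3.9-sigma locally to 1.6-sigma globally is a “look-elsewhere” (trials factor) correction: the more mass combinations you search, the more likely a fluctuation of a given size appears somewhere. A toy version of the correction is sketched below; the number of effective trials is a made-up placeholder, not the number CMS actually accounted for:

from scipy.stats import norm

def global_significance(local_sigma, n_trials):
    """Convert a local significance into a global one for n_trials independent searches."""
    p_local = norm.sf(local_sigma)            # one-sided local p-value
    p_global = 1 - (1 - p_local) ** n_trials  # chance of such a fluctuation anywhere
    return norm.isf(p_global)                 # convert back to a significance

# Hypothetical: a 3.9-sigma local excess searched for in ~1000 mass combinations
print(f"global significance: {global_significance(3.9, 1000):.1f} sigma")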

Though 1.6-sigma is relatively low, the striking nature of these events is certainly intriguing and warrants follow-up. Run-3 will also bring a slight increase in the LHC’s energy (13 -> 13.6 TeV), which will give the production rate of any new 8 TeV particles a not-insignificant boost.

Conclusions

The safe bet on any of these excesses would probably be that it will disappear with more data, as many excesses have done in the past. And many particle physicists are probably wary of getting too excited after the infamous 750 GeV diphoton fiasco, in which many people got very excited (and wrote hundreds of papers) about a few-sigma excess in CMS and ATLAS data that disappeared as more data was collected. All of these excesses are from analyses performed by only a single experiment (ATLAS or CMS) for now, but both experiments have similar capabilities, so it will be interesting to see what the counterpart has to say for each excess once it performs a similar analysis on its own Run-2 data. At the very least these results add some excitement for the upcoming LHC Run-3: collisions are starting up again this year after being on hiatus since 2018.


Read more:

CERN Courier Article “Dijet excess intrigues at CMS” 

Background on the infamous 750 GeV diphoton excess, Physics World article “And so to bed for the 750 GeV bump”

Background on the LHCb flavor anomalies, CERN Courier “New data strengthens RK flavour anomaly”


A hint of CEvNS heaven at a nuclear reactor

Title : “Suggestive evidence for Coherent Elastic Neutrino-Nucleus Scattering from reactor antineutrinos”

Authors : J. Colaresi et al.

Link : https://arxiv.org/abs/2202.09672

Neutrinos are the ghosts of particle physics, passing right through matter as if it isn’t there. Their head-on collisions with atoms are so rare that it takes a many-ton detector to see them. Far more often, though, a neutrino gives a tiny push to an atom’s nucleus, like a golf ball glancing off a bowling ball. Even a small detector can catch these frequent scrapes, but only if it can pick up the bowling ball’s tiny budge. Today’s paper may mark the opening of a new window into these events, called “coherent elastic neutrino-nucleus scattering” or CEvNS (pronounced “sevens”), which can teach us about neutrinos, their astrophysical origins, and the even more elusive dark matter.

A scrape with a ghost in a sea of noise

CEvNS was first measured in 2017 by COHERENT at a neutron beam facility, but much more data is needed to fully understand it. Nuclear reactors produce far more neutrinos than other sources, but reactor neutrinos are even less energetic and thus harder to detect. To find these abundant but evasive events, the authors used a detector called “NCC-1701” that can count the electrons knocked off a germanium atom when a neutrino from the reactor collides with its nucleus.

Unfortunately, a nuclear reactor produces lots of neutrons as well, which glance off atoms just like neutrinos, and the detector was further swamped with electronic noise due to its hot, buzzing surroundings. To pick out CEvNS from this mess, the researchers found creative ways to reduce these effects: shielding the detector from as many neutrons as possible, cooling its vicinity, and controlling for environmental variables.

An intriguing bump with a promising future

After all this work, a clear bump was visible in the data when the reactor was running, and disappeared when it was turned off. You can see this difference in the top and bottom of Fig. 1, which shows the number of events observed after subtracting the backgrounds, as a function of the energy they deposited (number of electrons released from germanium atoms).

Fig. 1: The number of events observed minus the expected background, as a function of the energy the events deposited. In the top panel, when the nuclear reactor was running, a clear bump is visible at low energy. The bump is moderately to very strongly suggestive of CEvNS, depending on which germanium model is used (solid vs. dashed line). When the reactor’s operation was interrupted (bottom), the bump disappeared – an encouraging sign.

But measuring CEvNS is such a new enterprise that it isn’t clear exactly what to look for – the number of electrons a neutrino tends to knock off a germanium atom is still uncertain. This can be seen in the top of Fig. 1, where the model used for this number changes the amount of CEvNS expected (solid vs dashed line).

Still, for a range of these models, statistical tests “moderately” to “very strongly” confirmed CEvNS as the likely explanation of the excess events. When more data accumulates and the bump becomes clearer, NCC-1701 can determine which model is correct. CEvNS may then become the easiest way to measure neutrinos, since detectors only need to be a couple feet in size.

Understanding CEvNS is also critical for finding dark matter. With dark matter detectors coming up empty, it now seems that dark matter hits atoms even less often than neutrinos, making CEvNS an important background for dark matter hunters. If experiments like NCC-1701 can determine CEvNS models, then dark matter searches can stop worrying about this rain of neutrinos from the sky and instead start looking for them. These “astrophysical” neutrinos are cosmic messengers carrying information about their origins, from our sun’s core to supernovae.

This suggestive bump in the data of a tiny detector near the roiling furnace of a nuclear reactor shows just how far neutrino physics has come – the sneakiest ghosts in the Standard Model can now be captured with a germanium crystal that could fit in your palm. Who knows what this new window will reveal?

Read More

“Ever-Elusive Neutrinos Spotted Bouncing Off Nuclei for the First Time” – Scientific American article from the first COHERENT detection in 2017

“Hitting the neutrino floor” – Symmetry Magazine article on the importance of CEvNS to dark matter searches

“Local nuclear reactor helps scientists catch and study neutrinos” – Phys.org story about these results

Exciting headway into mining black holes for energy!

Based on the paper Penrose process for a charged black hole in a uniform magnetic field

It has been over half a century since Roger Penrose first theorized that spinning black holes could be used as energy powerhouses by masterfully exploiting the principles of special and general relativity [1, 2]. Although we might not be able to harness energy from a black hole to reheat that cup of lukewarm coffee just yet, a slew of amazing breakthroughs [4, 5, 6] suggests that we may be closer than ever before to making the transition from pure thought experiment to finally identifying a realistic powering mechanism for several high-energy astrophysical phenomena. Not only can the energies of radiated particles be dramatically increased by using charged, spinning black holes as energy reservoirs via the electromagnetic Penrose process, rather than neutral, spinning black holes via the original mechanical Penrose process; the authors of this paper also demonstrate that the region outside the event horizon (see below) from which energy can be extracted is much larger in the former case than in the latter. In fact, the enhanced power of this process is so great that it is one of the most suitable candidates for explaining various high-energy astrophysical phenomena such as ultrahigh-energy cosmic rays and particles [7, 8, 9] and relativistic jets [10, 11].

Stellar black holes are the final stage in the life cycle of stars so massive that they collapse upon themselves, unable to withstand their own gravitational pull. They are characterized by a point-like singularity at the centre, where a complete breakdown of Einstein’s equations of general relativity occurs, and are surrounded by an outer event horizon, within which the gravitational force is so strong that not even light can escape it. Just outside the event horizon of a rotating black hole is a region called the ergosphere, bounded by an outer stationary surface, within which space-time is dragged along inexorably with the black hole via a process called frame-dragging. This effect, predicted by Einstein’s theory of general relativity, makes it impossible for an object to stand still with respect to an outside observer.

The ergosphere has a rather curious property that makes the world-line (the path traced in 4-dimensional space-time) of a particle or observer change from being time-like outside the static surface to being space-like inside it. In other words, the time and angular coordinates of the metric swap places! This leads to the existence of negative energy states of particles orbiting the black hole with respect to an observer at infinity [2, 12, 13]. It is this very property that enables the extraction of rotational energy from the ergosphere, as explained below.

According to Penrose’s calculations, if a massive particle that falls into the ergosphere were to split into two, the daughter who gets a kick from the black hole would be accelerated out with a much higher positive energy (up to 20.7 percent higher, to be exact) than the in-falling parent, as long as her sister is left with a negative energy. While it may seem counter-intuitive to imagine a particle with negative energy, note that no laws of relativity or thermodynamics are actually broken. This is because the observed energy of any particle is relative, and depends upon the momentum measured in the rest frame of the observer. Thus, the kinetic energy of the daughter particle left behind, positive in its local frame, would be measured as negative by an observer at infinity [3].
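
For reference, the 20.7 percent figure is the textbook maximum efficiency of the mechanical Penrose process, reached for a maximally spinning (extremal Kerr) black hole when the split happens right at the horizon:

\eta_{\mathrm{max}} = \frac{E_{\mathrm{out}} - E_{\mathrm{in}}}{E_{\mathrm{in}}} = \frac{\sqrt{2} - 1}{2} \approx 0.207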

In contrast to the purely geometric mechanical Penrose process, if one now considers black holes that possess charge as well as spin, a tremendous amount of energy stored in the electromagnetic fields can be tapped into, leading to ultra high energy extraction efficiencies. While there is a common misconception that a charged black hole tends to neutralize itself swiftly by attracting oppositely charged particles from the ambient medium, this is not quite true for a spinning black hole in a magnetic field (due to the dynamics of the hot plasma soup in which it is embedded). In fact in this case, Wald [14] showed that black holes tend to charge up till they reach a certain energetically favourable value. This value plays a crucial role in the amount of energy that can be delivered to the outgoing particle through the electromagnetic Penrose process. The authors of this paper explicitly locate the regions from which energy can be extracted and show that these are no longer restricted to the ergosphere, as there are a whole bunch of previously inaccessible negative energy states that can now be mined. They also find novel disconnected, toroidal regions not coincident with the ergosphere that can trap the negative energy particles forever (refer to Fig.1)! The authors calculate the effective coupling strength between the black hole and charged particles, a certain combination of the mass and charge parameters of the black hole and charged particle, and the external magnetic field. This simple coupling formula enables them to estimate the efficiency of the process as the magnitude of the energy boost that can be delivered to the outgoing particle is directly dependent on it. They also find that the coupling strength decreases as energy is extracted, much the same way as the spin of a black hole decreases as it loses energy to the super-radiant particles in the mechanical analogue.

While the electromagnetic Penrose process is the most favoured astrophysically viable mechanism for high-energy sources and phenomena such as quasars, fast radio bursts, and relativistic jets, the authors caution that “Just because a particle can decay into a trapped negative-energy daughter and a significantly boosted positive-energy radiator, does not mean it will do so…” However, in this era of precision black hole astrophysics, with state-of-the-art observatories such as the Event Horizon Telescope capable of capturing detailed observations of emission mechanisms in real time, and enhanced numerical and scientific methods at our disposal, it appears that we might be on the verge of detecting observable imprints left by the Penrose process on black holes, and perhaps of tapping into a source of energy for advanced civilisations!

References

  1. Gravitational collapse: The role of general relativity
  2. Extraction of Rotational Energy from a Black Hole
  3. Penrose process for a charged black hole in a uniform magnetic field
  4. First-Principles Plasma Simulations of Black-Hole Jet Launching
  5. Fifty years of energy extraction from rotating black hole: revisiting magnetic Penrose process
  6. Magnetic Reconnection as a Mechanism for Energy Extraction from Rotating Black Holes
  7. Near-horizon structure of escape zones of electrically charged particles around weakly magnetized rotating black hole: case of oblique magnetosphere
  8. GeV emission and the Kerr black hole energy extraction in the BdHN I GRB 130427A
  9. Supermassive Black Holes as Possible Sources of Ultrahigh-energy Cosmic Rays
  10. Acceleration of the charged particles due to chaotic scattering in the combined black hole gravitational field and asymptotically uniform magnetic field
  11. Acceleration of the high energy protons in an active galactic nuclei
  12. Energy-extraction processes from a Kerr black hole immersed in a magnetic field. I. Negative-energy states
  13. Revival of the Penrose Process for Astrophysical Applications
  14. Black hole in a uniform magnetic field


The Higgs Comes Out of its Shell

Title : “First evidence for off-shell production of the Higgs boson and measurement of its width”

Authors : The CMS Collaboration

Link : https://arxiv.org/abs/2202.06923

CMS Analysis Summary : https://cds.cern.ch/record/2784590?ln=en

If you’ve met a particle physicist in the past decade, they’ve almost certainly told you about the Higgs boson. Since its discovery in 2012, physicists have been busy measuring as many of its properties as the ATLAS and CMS datasets will allow, including its couplings to other particles (e.g. bottom quarks or muons) and how it gets produced at the LHC. Any deviations from the standard model (SM) predictions might signal new physics, so people are understandably very eager to learn as much as possible about the Higgs.

Amidst all the talk of Yukawa couplings and decay modes, it might occur to you to ask a seemingly simpler question: what is the Higgs boson’s lifetime? This turns out to be very difficult to measure, and it was only recently — nearly 10 years after the Higgs discovery — that the CMS experiment released the first measurement of its lifetime.

The difficulty lies in the Higgs’ extremely short lifetime, predicted by the standard model to be around 10⁻²² seconds. This is far shorter than anything we could hope to measure directly, so physicists instead measured a related quantity: its width. According to the Heisenberg uncertainty principle, short-lived particles can have significant uncertainty in their energy. This means that whenever we produce a Higgs boson at the LHC and reconstruct its mass from its decay products, we’ll measure a slightly different mass each time. If you make a histogram of these measurements, its shape looks like a Breit-Wigner distribution (Fig. 1) peaked at the nominal mass and with a characteristic width Γ.

Fig. 1: A Breit-Wigner curve, which describes the distribution of masses that a particle takes on when it’s produced at the LHC. The peak sits at the particle’s nominal mass, and production within the width is most common (“on-shell”). The long tails allow for rare production far from the peak (“off-shell”).

So, the measurement should be easy, right? Just measure a bunch of Higgs decays, make a histogram of the mass, and run a fit! Unfortunately, things don’t work out this way. A particle’s width and lifetime are inversely proportional, meaning an extremely short-lived particle will have a large width and vice-versa. For particles like the Z boson — which lives for about 10⁻²⁵ seconds — we can simply extract its width from its mass spectrum. The Higgs, however, sits in a sweet spot of experimental evasion: its lifetime is too short to measure, and the corresponding width (about 4 MeV) cannot be resolved by our detectors, whose resolution is limited to roughly 1 GeV.
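
As a rough cross-check of these scales (back-of-the-envelope only, using the uncertainty-principle relation Γτ = ħ and the known widths, nothing specific to this analysis):

HBAR_MEV_S = 6.582e-22  # reduced Planck constant in MeV * seconds

def lifetime_seconds(width_mev):
    """Lifetime corresponding to a decay width, via Gamma * tau = hbar."""
    return HBAR_MEV_S / width_mev

print(f"Higgs, width ~4.1 MeV:    tau ~ {lifetime_seconds(4.1):.1e} s")   # ~1.6e-22 s
print(f"Z boson, width ~2500 MeV: tau ~ {lifetime_seconds(2500):.1e} s")  # ~2.6e-25 s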

To overcome this difficulty, physicists relied on another quantum mechanical quirk: “off-shell” Higgs production. Most of the time, a Higgs is produced on-shell, meaning its reconstructed mass will be close to the Breit-Wigner peak. In rare cases, however, it can be produced with a mass very far away from its nominal mass (off-shell) and undergo decays that are otherwise energetically forbidden. Off-shell production is incredibly rare, but if you can manage to measure the ratio of off-shell to on-shell production rates, you can deduce the Higgs width!
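
Schematically, the trick works because (in the simplest picture, ignoring interference and detector effects) the on-shell rate is suppressed by the width while the off-shell rate is not, so their ratio isolates the width:

\sigma_{\text{on-shell}} \propto \frac{g_{\text{prod}}^2 \, g_{\text{decay}}^2}{\Gamma_H}, \qquad \sigma_{\text{off-shell}} \propto g_{\text{prod}}^2 \, g_{\text{decay}}^2 \quad \Longrightarrow \quad \Gamma_H \propto \frac{\sigma_{\text{off-shell}}}{\sigma_{\text{on-shell}}}

Here g_prod and g_decay stand for the Higgs couplings in production and decay; any change in the couplings cancels in the ratio, leaving only the width.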

Have we just replaced one problem (a too-short lifetime) with another one (rare off-shell production)? Thankfully, the Breit-Wigner distribution saves the day once again. The CMS analysis focused on a Higgs decaying to a pair of Z bosons (Fig. 2, left), one of which must be produced off-shell (the Higgs mass is 125 GeV, whereas each Z is 91 GeV). The Z bosons have a Breit-Wigner peak of their own, however, which enhances the production rate of very off-shell Higgs bosons that can decay to a pair of on-shell Zs. The enhancement means that roughly 10% of H → ZZ decays are expected to involve an off-shell Higgs, which is a large enough fraction to measure with the present-day CMS dataset!

Fig. 2: The signal process involving a Higgs decay to Z bosons (left), and background ZZ production without the Higgs (right)

To measure the off-shell H → ZZ rate, physicists looked at events where one Z boson decays to a pair of leptons and the other to a pair of neutrinos. The neutrinos escape the detector without depositing any energy, generating a large missing transverse momentum which helps identify candidate Higgs events. Using the missing momentum as a proxy for the neutrinos’ momentum, they reconstruct a “transverse mass” for the off-shell Higgs boson. By comparing the observed transverse mass spectrum to the expected “continuum background” (Z boson pairs produced via other mechanisms, e.g. Fig. 2, right) and signal rate, they are able to extract the off-shell production rate.

After a heavy load of sophisticated statistical analysis, the authors found that off-shell Higgs production happened at a rate consistent with SM predictions (Fig. 3). Using these off-shell events, they measured the Higgs width to be 3.2 (+2.4, -1.7) MeV, again consistent with the expectation of 4.1 MeV and a marked improvement upon the previously measured limit of 9.2 MeV.

Fig. 3: The best-fit “signal strength” parameters for off-shell Higgs production in two different modes: gluon fusion (x-axis, shown also in the leftmost Feynman diagram above) and associated production with a vector boson (y-axis). Signal strength measures how often a process occurs relative to the SM expectation, and a value of 1 means that it occurs at the rate predicted by the SM. In this case, the SM prediction (X) is within one standard deviation of the best fit signal strength (diamond).

Unfortunately, this result doesn’t hint at any new physics in the Higgs sector. It does, however, mark a significant step forward into the era of precision Higgs physics at ATLAS and CMS. With a mountain of data at our fingertips — and much more data to come in the next decade — we’ll soon find out what else the Higgs has to teach us.

Read More

“Life of the Higgs Boson” – Coverage of this result from the CMS Collaboration

“Most Particles Decay — But Why?” – An interesting article by Matt Strassler explaining why (some) particles decay

“The Physics Still Hiding in the Higgs Boson” – A Quanta article on what we can learn about new physics by measuring Higgs properties

How to find a ‘beautiful’ valentine at the LHC

References:  https://arxiv.org/abs/1712.07158 (CMS)  and https://arxiv.org/abs/1907.05120 (ATLAS)

If you are looking for love at the Large Hadron Collider this Valentine’s Day, you won’t find a better eligible bachelor than the b-quark. The b-quark (also called the ‘beauty’ quark if you are feeling romantic, the ‘bottom’ quark if you are feeling crass, or a ‘beautiful bottom quark’ if you are trying to weird people out) is the 2nd heaviest quark, behind the top quark. It hangs out with a cool crowd, as it is the Higgs’s favorite decay and the top quark’s BFF: two particles we would all like to learn a bit more about.

Choose beauty this Valentine’s Day

No one wants a romantic partner who is boring, and can’t stand out from the crowd. Unfortunately when most quarks or gluons are produced at the LHC, they produce big sprays of particles called ‘jets’ that all look the same. That means even if the up quark was giving you butterflies, you wouldn’t be able to pick its jets out from those of strange quarks or down quarks, and no one wants to be pressured into dating a whole friend group. But beauty quarks can set themselves apart in a few ways. So if you are swiping through LHC data looking for love, try using these tips to find your b(ae).

Look for a partner who’s not afraid of commitment and loves to travel. Beauty quarks live longer than all the other quarks (a full 1.5 picoseconds; sub-atomic love is unfortunately very fleeting), letting them explore their love of traveling (up to a centimeter from the beamline, a great honeymoon spot I’ve heard) before decaying.

You want a lover who will bring you gifts, which you can hold on to even after they are gone. And when beauty quarks decay, you won’t be left in despair, but rather charmed with your new c-quark companion. And sometimes, if they are really feeling the magic, they leave behind charged leptons when they go, so you will have something to remember them by.

The ‘profile photo’ of a beauty quark. You can see it has traveled away from the crowd (the Primary Vertex, PV) and has started a cool new Secondary Vertex (SV) to hang out in.

But even with these standout characteristics, beauty can still be hard to find, as there are a lot of un-beautiful quarks in the sea you don’t want to get hung up on. There is more to beauty than meets the eye, and as you get to know them you will find that beauty quarks have even more subtle features that make them stick out from the rest. So if you are serious about finding love in 2022, it may be time to turn to the romantic innovation sweeping the nation: modern machine learning. Even if we would all love to spend many sleepless nights learning all about them, unfortunately these days it feels like the-scientist-she-tells-you-not-to-worry-about, neural networks, will always understand them a bit better. So join the great romantics of our time (CMS and ATLAS) in embracing the modern dating scene, and let the algorithms find the most beautiful quarks for you.

So if you are looking for love this Valentine’s Day, look no further than the beauty quark. And if you are feeling hopeless, you can take inspiration from this decades-in-the-making love story from a few years ago: “Higgs Decay into Bottom Beauty Quarks Seen at Last”

A beautiful wedding photo that took decades to uncover: the Higgs decay into beauty quarks (red) was finally seen in 2018. Other, boring couples (dibosons) are shown in gray.

Towards resolving the black hole information paradox!

Based on the paper The black hole information puzzle and the quantum de Finetti theorem

Black holes are some of the most fascinating objects in the universe. They are extreme deformations of space and time, formed from the collapse of massive stars, with a gravitational pull so strong that nothing, not even light, can escape it. Apart from the astrophysical aspects of black holes (which are bizarre enough to warrant their own study), they provide the ideal theoretical laboratory for exploring various aspects of quantum gravity, the theory that seeks to unify the principles of general relativity with those of quantum mechanics.

One definitive way of making progress in this endeavor is to resolve the infamous black hole information paradox [1], and through a series of recent exciting developments, it appears that we might be closer to achieving this than we have ever been before [5, 6]! Paradoxes in physics paradoxically tend to be quite useful, in that they clarify what we don’t know about what we know. Stephen Hawking’s semi-classical calculations of black hole radiation treat the matter in and around black holes as quantum fields but describe the black holes themselves within the framework of classical general relativity. The corresponding results turn out to be in disagreement with the results obtained from a purely quantum theoretical viewpoint. The information paradox encapsulates this particular discrepancy. According to the calculations of Hawking and Bekenstein in the 1970s [3], black hole evaporation via Hawking radiation is completely thermal. This simply means that the von Neumann entropy S(R) of radiation (a measure of its thermality or our ignorance of the system) keeps growing with the number of radiation quanta, reaching a maximum when the black hole has evaporated completely. This corresponds to a complete loss of information, whereby even a pure quantum state entering a black hole would be transformed into a mixed state of Hawking radiation, and all previous information about it would be destroyed. This conclusion is in stark contrast to what one would expect when regarding the black hole from the outside, as a quantum system that must obey the laws of quantum mechanics. The fundamental tenets of quantum mechanics are determinism and reversibility, the combination of which asserts that all information must be conserved. Thus if a black hole is formed by collapsing matter in a pure state, the state of the total system including the radiation R must remain pure. This can only happen if the entropy S(R), which first increases during the radiation process, ultimately decreases to zero when the black hole has disappeared completely, corresponding to a final pure state [4]. This quantum mechanical result is depicted in the famous Page curve (Fig. 1).

Certain significant discoveries in the recent past, showing that the Page curve is indeed the correct curve and can in fact be reproduced by semi-classical approximations of gravitational path integrals [7, 8], may finally hold the key to the resolution of this paradox. These calculations rely on the replica trick [9, 10] and take into account contributions from space-time geometries comprising wormholes that connect the various replica black holes. This simple geometric amendment to the gravitational path integral is the only factor different from Hawking’s calculations and yet leads to diametrically different results! The replica trick is a neat method that enables the computation of the entropy of the radiation field by first considering n identical copies of a black hole, calculating their Rényi entropies, and using the fact that this equals the desired von Neumann entropy in the limit of n \rightarrow 1 (written out explicitly after the list below). These in turn are calculated using the semi-classical gravitational path integral under the assumption that the dominant contributions come from the geometries that are classical solutions to the gravity action, obeying Z_n symmetry. This leads to two distinct solutions:

  • The Hawking saddle consisting of disconnected geometries (corresponding to identical copies of a black hole).
  • The replica wormholes geometry consisting of connections between the different replicas.
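
Written out explicitly, the replica trick referenced above computes the von Neumann entropy of the radiation as the n \rightarrow 1 limit of the Rényi entropies (the standard textbook form, not anything specific to this paper):

S(R) = \lim_{n \rightarrow 1} \frac{1}{1-n} \log \mathrm{Tr}\left( \rho_R^{\,n} \right)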

Upon taking the n \rightarrow 1 limit, one finds that all that was missing from Hawking’s calculation, relative to the quantum-compatible Page curve, was the inclusion of the replica wormhole solution.
In this paper, the authors attempt to find the reason for this discrepancy, where this extra information is stored, and the physical relevance of the replica wormholes, using insights from quantum information theory. Invoking the quantum de Finetti theorem [11, 12], they find that there exists a particular piece of reference information, W, and that the entropy one assigns to the black hole radiation field depends on whether or not one has access to this reference. While Hawking’s calculations correspond to measuring the unconditional von Neumann entropy S(R), ignoring W, the novel calculations using the replica trick compute the conditional von Neumann entropy S(R|W), which takes W into account. The former yields the entropy of the ensemble average of all possible radiation states, while the latter yields the ensemble average of the entropy of those same states. They also show that the replica wormholes are a geometric representation of the correlation that appears between the n black holes, mediated by W.

The precise interpretation of W, and what it might appear to be to an observer falling into a black hole, remains an open question. Exploring what it could represent in holographic theories, string theories, and loop quantum gravity could open up a dizzying array of insights into black hole physics and the nature of space-time itself. It appears that the different pieces of the quantum gravity puzzle are slowly but surely coming together into what will hopefully soon be the entire picture.

References

  1. The Entropy of Hawking Radiation
  2. Black holes as mirrors: Quantum information in random subsystems
  3. Particle creation by black holes
  4. Average entropy of a subsystem
  5. The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole
  6. Entanglement Wedge Reconstruction and the Information Paradox
  7. Replica wormholes and the black hole interior
  8. Replica Wormholes and the Entropy of Hawking Radiation
  9. Entanglement entropy and quantum field theory
  10. Generalized gravitational entropy
  11. Locally normal symmetric states and an analogue of de Finetti’s theorem
  12. Unknown quantum states: The quantum de Finetti representation

The Mini and Micro BooNE Mystery, Part 2: Theory

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: MicroBooNE Collaboration

References: https://arxiv.org/pdf/2110.14054.pdf

This is the second post in a series on the latest MicroBooNE results, covering the theory side. Click here to read about the experimental side. 

Few stories in physics are as convoluted as the one written by neutrinos. These ghost-like particles, a notoriously slippery experimental target and one of the least-understood components of the Standard Model, are making their latest splash in the scientific community through MicroBooNE, an experiment at FermiLab that unveiled its first round of data earlier this month. While MicroBooNE’s predecessors have provided hints of a possible anomaly within the neutrino sector, its own detectors have yet to uncover a similar signal. Physicists were hopeful that MicroBooNE would validate this discrepancy, yet the tale is turning out to be much more nuanced than previously thought.

Unexpected Foundations

Originally proposed by Wolfgang Pauli in 1930 as an explanation for missing momentum in certain particle collisions, the neutrino was added to the Standard Model as a massless particle that can come in one of three possible flavors: electron, muon, and tau. At first, it was thought that these flavors are completely distinct from one another. Yet when experiments aimed to detect a particular neutrino type, they consistently measured a discrepancy from their prediction. A peculiar idea known as neutrino oscillation presented a possible explanation: perhaps, instead of propagating as a singular flavor, a neutrino switches between flavors as it travels through space. 

This interpretation emerges fortuitously if the model is modified to give the neutrinos mass. In quantum mechanics, a particle’s mass eigenstate — the possible masses a particle can be found to have upon measurement — can be thought of as a traveling wave with a certain frequency. If the three possible mass eigenstates of the neutrino are different, meaning that at most one of the mass values could be zero, this creates a phase shift between the waves as they travel. It turns out that the flavor eigenstates — describing which of the electron, muon, or tau flavors the neutrino is measured to possess — are then superpositions of these mass eigenstates. As the neutrino propagates, the relative phase between the mass waves varies such that when the flavor is measured, the final superposition could be different from the initial one, explaining how the flavor can change. In this way, the mass eigenstates and the flavor eigenstates of neutrinos are said to “mix,” and we can mathematically characterize this model via mixing parameters that encode the mass content of each flavor eigenstate.
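
In the simplest two-flavor picture (a standard textbook simplification, rather than the full three-flavor treatment the experiments use), the probability that a neutrino produced as a muon neutrino with energy E is detected as an electron neutrino after traveling a distance L is

P(\nu_\mu \rightarrow \nu_e) = \sin^2(2\theta) \, \sin^2\!\left( \frac{\Delta m^2 L}{4E} \right)

(in natural units), so the oscillation is controlled by a mixing angle \theta and a difference of squared masses \Delta m^2.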

A visual representation of how neutrino oscillation works. From: http://www.hyper-k.org/en/neutrino.html.

These massive oscillating neutrinos represent a radical departure from the picture originally painted by the Standard Model, requiring a revision in our theoretical understanding. The oscillation phenomenon also poses a unique experimental challenge, as it is especially difficult to unravel the relationships between neutrino flavors and masses. Thus far, physicists have only been able to measure differences between the squared neutrino masses and to place an upper bound on their sum, which is constrained to be exceedingly small, posing yet another mystery. The neutrino experiments of the past three decades have set their sights on measuring the mixing parameters in order to determine the probabilities of the possible flavor switches.

A Series of Perplexing Experiments

In 1993, scientists in Los Alamos peered at the data gathered by the Liquid Scintillator Neutrino Detector (LSND) to find something rather strange. The group had set out to measure the number of electron neutrino events produced via decays in their detector, and found that this number exceeded what had been predicted by the three-neutrino oscillation model. In 2002, experimentalists turned on the complementary MiniBooNE detector at FermiLab (BooNE is an acronym for Booster Neutrino Experiment), which searched for oscillations of muon neutrinos into electron neutrinos, and again found excess electron neutrino events. For a more detailed account of the setup of these experiments, check out Oz Amram’s latest piece.

While two experiments are notable for detecting excess signal, they stand as outliers when we consider all neutrino experiments that have collected oscillation data. Collaborations that were taking data at the same time as LSND and MiniBooNE include: MINOS (Main Injector Neutrino Oscillation Search), KamLAND (Kamioka Liquid Scintillator Antineutrino Detector), and IceCube (surprisingly, not a fancy acronym, but deriving its name from the fact that it’s located under ice in Antarctica), to name just a few prominent ones. Their detectors targeted neutrinos from a beamline, nearby nuclear reactors, and astrophysical sources, respectively. Not one found a mismatch between predicted and measured events. 

The results of these other experiments, however, do not negate the findings of LSND and MiniBooNE. This extensive experimental range — probing several sources of neutrinos, and detecting with different hardware specifications — is necessary in order to consider the full range of possible neutrino mixing parameters and masses. Each model or experiment is endowed with a parameter space: a set of allowed values that its parameters can take. In this case, the neutrino mass and mixing parameters form a two-dimensional grid of possibilities. The job of a theorist is to find a solution that both resolves the discrepancy and has a parameter space that overlaps with allowed experimental parameters. Since LSND and MiniBooNE had shared regions of parameter space, the resolution of this mystery should be able to explain not only the origins of the excess, but why no similar excess was uncovered by other detectors.

A simple explanation to the anomaly emerged and quickly gained traction: perhaps the data hinted at a fourth type of neutrino. Following the logic of the three-neutrino oscillation model, this interpretation considers the possibility that the three known flavors have some probability of oscillating into an additional fourth flavor. For this theory to remain consistent with previous experiments, the fourth neutrino would have to provide the LSND and MiniBooNE excess signals, while at the same time sidestepping prior detection by coupling to only the force of gravity. Due to its evasive behavior, this potential fourth neutrino has come to be known as the sterile neutrino. 

The Rise of the Sterile Neutrino

The sterile neutrino is a well-motivated and especially attractive candidate for beyond the Standard Model physics. It differs from ordinary neutrinos, also called active neutrinos, by having the opposite “handedness”. To illustrate this property, imagine a spinning particle. If the particle is spinning with a leftward orientation, we say it is “left-handed”, and if it is spinning with a rightward orientation, we say it is “right-handed”. Mathematically, this quantity is called helicity, which is formally the projection of a particle’s spin along its direction of momentum. However, this helicity depends implicitly on the reference frame from which we make the observation. Because massive particles move slower than the speed of light, we can choose a frame of reference such that the particle appears to have momentum going in the opposite direction, and as a result, the opposite helicity. Conversely, because massless particles move at the speed of light, they will have the same helicity in every reference frame. 

An illustration of chirality. We define, by convention, a “right-handed” particle as one whose spin and momentum directions align, and a “left-handed” particle as one whose spin and momentum directions are anti-aligned. Source: Wikipedia.

This frame-dependence unnecessarily complicates calculations, but luckily we can instead employ a related quantity that encapsulates the same properties while bypassing the reference frame issue: chirality. Much like helicity, while massless particles can only display one chirality, massive particles can be either left- or right-chiral. Neutrinos interact via the weak force, which is famously parity-violating, meaning that it has been observed to preferentially interact only with particles of one particular chirality. Yet massive neutrinos could presumably also be right-handed — there’s no compelling reason to think they shouldn’t exist. Sterile neutrinos could fill this gap.

They would also lend themselves nicely to addressing questions of dark matter and baryon asymmetry. The former — the observed excess of gravitationally-interacting matter over light-emitting matter by a factor of 20 — could be neatly explained away by the detection of a particle that interacts only gravitationally, much like the sterile neutrino. The latter, in which our patch of the universe appears to contain considerably more matter than antimatter, could also be addressed by the sterile neutrino via a proposed model of neutrino mass acquisition known as the seesaw mechanism. 

In this scheme, active neutrinos are represented as Dirac fermions: spin-½ particles that have a unique anti-particle, the oppositely-charged particle with otherwise the same properties. In contrast, sterile neutrinos are considered to be Majorana fermions: spin-½ particles that are their own antiparticle. The masses of the active and sterile neutrinos are fundamentally linked such that as the value of one goes up, the value of the other goes down, much like a seesaw. If sterile neutrinos are sufficiently heavy, this mechanism could explain the origin of neutrino masses and possibly even why the masses of the active neutrinos are so small. 
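
In its simplest (type-I) form, the seesaw relation ties the light active-neutrino mass to a Dirac mass m_D and a heavy Majorana mass M roughly as

m_\nu \approx \frac{m_D^2}{M}

so the heavier the sterile state, the lighter the active neutrino, hence the name.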

These considerations position the sterile neutrino as an especially promising contender to address a host of Standard Model puzzles. Yet it is not the only possible solution to the LSND/MiniBooNE anomaly — a variety of alternative theoretical interpretations invoke dark matter, variations on the Higgs boson, and even more complicated iterations of the sterile neutrino. MicroBooNE was constructed to traverse this range of scenarios and their corresponding signatures. 

Open Questions

After taking data for three years, the collaboration has compiled two dedicated analyses: one that searches for single-electron final states, and another that searches for single-photon final states. Each of these products can result from electron neutrino interactions — yet neither analysis detected an excess, pointing to no obvious signs of new physics via these channels.

Above, we can see that the expected number of electron neutrino events agrees well with the number of measured events, disfavoring the MiniBooNE excess. Source: https://microboone.fnal.gov/wp-content/uploads/paper_electron_analysis_2021.pdf

Although confounding, this does not spell death for the sterile neutrino. A significant difference between the MiniBooNE and MicroBooNE detectors is the ability to discern between events with a single electron and events with multiple electrons; MiniBooNE lacked the resolution that MicroBooNE was designed to achieve. MiniBooNE was also unable to distinguish electrons from photons as cleanly as MicroBooNE can. The possibility remains that there exist processes involving new physics that were captured by LSND and MiniBooNE, perhaps decays resulting in two electrons, for instance.

The idea of a right-handed neutrino remains a promising avenue for beyond the Standard Model physics, and it could turn out to have a mass much larger than our current detection mechanisms can probe. The MicroBooNE collaboration has not yet done a targeted study of the sterile neutrino, which is necessary in order to fully assess how their data connects to its possible signatures. There still exist regions of parameter space where the sterile neutrino could theoretically live, but with every excluded region of parameter space, it becomes harder to construct a theory with a sterile neutrino that is consistent with experimental constraints. 

While the list of neutrino-based mysteries only seems to grow with MicroBooNE’s latest findings, there are plenty of results on the horizon that could add clarity to the picture. Researchers are anticipating more data from MicroBooNE as well as more detailed theoretical studies of the results and their relationship to the LSND/MiniBooNE anomaly, the sterile neutrino, and other beyond the Standard Model scenarios. MicroBooNE is also just one in a series of planned neutrino experiments, and will operate alongside the upcoming SBND (Short-Baseline Near Detector) and ICARUS (Imaging Cosmic And Rare Underground Signals), further expanding the parameter space we are able to probe.

The neutrino sector has proven to be fertile ground for physics beyond the Standard Model, and it is likely that this story will continue to produce more twists and turns. While we have some promising theoretical explanations, nothing theorists have concocted thus far has fit seamlessly with our available data. More data from MicroBooNE and near-future detectors is necessary to expand our understanding of these puzzling pieces of particle physics. The neutrino story is pivotal to the tome of the Standard Model, and may be the key to writing the next chapter in our understanding of the fundamental ingredients of our world.

Further Reading

  1. A review of neutrino oscillation and mass properties: https://pdg.lbl.gov/2020/reviews/rpp2020-rev-neutrino-mixing.pdf
  2. An in-depth review of the LSND and MiniBooNE results: https://arxiv.org/pdf/1306.6494.pdf

The Mini and Micro BooNE Mystery, Part 1: The Experiment

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: The MicroBooNE Collaboration

Reference: https://arxiv.org/abs/2110.14054

This is the first post in a series on the latest MicroBooNE results, covering the experimental side. Click here to read about the theory side. 

The new results from the MicroBooNE experiment generated a lot of excitement last week and were covered by several major news outlets. But unlike most physics news stories that make the press, this was a null result: they did not see any evidence for new particles or interactions. So why is it so interesting? Particle physics experiments produce null results every week, but what made this one newsworthy is that MicroBooNE was trying to check the results of two previous experiments, LSND and MiniBooNE, which did see something anomalous with very high statistical significance. If the LSND/MiniBooNE result had been confirmed, it would have been a huge breakthrough in particle physics, but now that it hasn’t been, many physicists are scratching their heads trying to make sense of these seemingly conflicting results. However, the MicroBooNE experiment is not exactly the same as MiniBooNE/LSND, and understanding the differences between the two sets of experiments may play an important role in unraveling this mystery.

Accelerator Neutrino Basics

All of these experiments are ‘accelerator neutrino experiments’, so let’s first review what that means. Neutrinos are ‘ghostly’ particles that are difficult to study (check out this post for more background on neutrinos). Because they couple only through the weak force, neutrinos don’t like to interact with anything very much. So in order to detect them you need both a big detector with a lot of active material and a source with a lot of neutrinos. These experiments are designed to detect neutrinos produced in a human-made beam. To make the beam, a high-energy beam of protons is directed at a target. These collisions produce a lot of particles, including unstable bound states of quarks like pions and kaons. These unstable particles are electrically charged, so we can use magnets to focus them into a well-behaved beam. When the pions and kaons decay they usually produce a muon and a muon neutrino. The beam of pions and kaons is pointed at an underground detector located a few hundred meters (or kilometers!) away and given time to decay, leaving a beam of muons and muon neutrinos. The muons can be stopped by some kind of shielding (like the earth’s crust), but the neutrinos will sail right through to the detector.
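For a feel for the energies involved, here is a minimal back-of-the-envelope sketch of the two-body decay pi+ -> mu+ + nu_mu, using standard PDG masses (the pion beam energy in the example is made up purely for illustration):

```python
# Two-body decay kinematics for pi+ -> mu+ + nu_mu (PDG masses, in MeV).
M_PI = 139.57
M_MU = 105.66

# Neutrino energy if the pion decays at rest:
e_nu_at_rest = (M_PI**2 - M_MU**2) / (2 * M_PI)
print(f"E_nu for a pion decaying at rest: {e_nu_at_rest:.1f} MeV")   # ~29.8 MeV

# For a relativistic pion decaying in flight, the most energetic (forward-going)
# neutrinos carry a fixed fraction of the pion's energy:
e_pi = 2000.0   # hypothetical pion energy in MeV, for illustration
print(f"Max E_nu for a {e_pi:.0f} MeV pion: {e_pi * (1 - (M_MU / M_PI)**2):.0f} MeV")  # ~0.43 * E_pi
```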

A diagram showing the basics of how a neutrino beam is made. Source

Nearly all of the neutrinos from the beam will still pass right through your detector, but a few of them will interact, allowing you to learn about their properties.

All of these experiments are considered ‘short-baseline’ because the distance between the neutrino source and the detector is only a few hundred meters (unlike the hundreds of kilometers in other such experiments). These experiments were designed to look for oscillation of the beam’s muon neutrinos into electron neutrinos which then interact with their detector (check out this post for some background on neutrino oscillations). Given the types of neutrinos we know about and their properties, this should be too short of a distance for neutrinos to oscillate, so any observed oscillation would be an indication something new (beyond the Standard Model) was going on.
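To see why a few-hundred-meter baseline is “too short” for the neutrinos we know about, here is a minimal two-flavor sketch of the appearance probability (the mixing and mass-splitting values below are illustrative, not a fit to any data):

```python
import numpy as np

def prob_mu_to_e(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavor appearance probability P(nu_mu -> nu_e).

    Standard shorthand formula: P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with L in km, E in GeV and dm^2 in eV^2.
    """
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Even with maximal mixing, the known ~2.5e-3 eV^2 splitting gives essentially
# nothing at a 0.5 km baseline with a ~0.8 GeV beam:
print(prob_mu_to_e(0.5, 0.8, 1.0, 2.5e-3))   # ~4e-6
# A hypothetical eV^2-scale splitting with a small mixing, of the kind a sterile
# neutrino could provide, gives a small but visible appearance signal:
print(prob_mu_to_e(0.5, 0.8, 0.01, 1.0))     # ~5e-3
```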

The LSND + MiniBooNE Anomaly

The LSND and MiniBooNE ‘anomaly’ was an excess of events above backgrounds that looked like electron neutrinos interacting with their detectors. Both experiments were based on similar technology and were located a similar distance from their neutrino sources. Their detectors were essentially big tanks of mineral oil lined with light-detecting sensors.

An engineer styling inside the LSND detector. Source

At these energies the most common way neutrinos interact is to scatter against a neutron to produce a proton and a charged lepton (called a ‘charged current’ interaction). Electron neutrinos will produce outgoing electrons and muon neutrinos will produce outgoing muons.

A diagram of a ‘charged current’ interaction. A muon neutrino comes in and scatters against a neutron, producing a muon and a proton. Source

When traveling through the mineral oil these charged leptons will produce a ring of Cherenkov light which is detected by the sensors on the edge of the detector. Muons and electrons can be differentiated based on the characteristics of the Cherenkov light they emit. Electrons will undergo multiple scatterings off of the detector material while muons will not. This makes the Cherenkov rings of electrons ‘fuzzier’ than those of muons. High-energy photons can convert into electron-positron pairs, which look very similar to a regular electron signal and are thus a source of background.

A comparison of muon and electron Cherenkov rings from the Super-Kamiokande experiment. Electrons produce fuzzier rings than muons. Source
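For a rough sense of the energies needed to produce these rings at all, one can estimate the Cherenkov threshold in mineral oil; this is a small sketch, and the refractive index used is an assumed, typical value rather than a detector specification:

```python
import math

def cherenkov_threshold_mev(mass_mev, n):
    """Total energy above which a charged particle emits Cherenkov light
    in a medium with refractive index n (threshold condition: beta > 1/n)."""
    gamma_threshold = n / math.sqrt(n**2 - 1)
    return gamma_threshold * mass_mev

N_OIL = 1.47  # assumed refractive index for mineral oil
print(f"electron threshold: {cherenkov_threshold_mev(0.511, N_OIL):.2f} MeV")  # ~0.7 MeV
print(f"muon threshold:     {cherenkov_threshold_mev(105.7, N_OIL):.0f} MeV")  # ~144 MeV
```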

Even with a good beam and a big detector, the feebleness of neutrino interactions means that it takes a while to collect a decent number of events. The MiniBooNE experiment ran for 17 years looking for electron neutrinos scattering in its detector. In MiniBooNE’s most recent analysis, they saw around 600 more events than would be expected if there were no anomalous electron neutrinos reaching the detector. The statistical significance of this excess, 4.8 sigma, was very high. Combining with LSND, which saw a similar excess, the significance was above 6 sigma. This means it is very unlikely to be a statistical fluctuation. So either there is some new physics going on or one of their backgrounds has been seriously underestimated. This excess of events is what has been dubbed the ‘MiniBooNE anomaly’.

The number of events seen in the MiniBooNE experiment as a function of the energy reconstructed in the interaction. The predicted numbers of events from various known background sources are shown in the colored histograms. The best fit to the data including the signal of anomalous oscillations is shown by the dashed line. One can see that at low energies the black data points lie significantly above these backgrounds and strongly favor the oscillation hypothesis.

The MicroBooNE Result

The MicroBooNE experiment was commissioned to verify the MiniBooNE anomaly as well as to test out a new type of neutrino detector technology. MicroBooNE is one of the first major neutrino experiments to use a ‘Liquid Argon Time Projection Chamber’ (LArTPC) detector. This new detector technology allows more detailed reconstruction of what is happening when a neutrino scatters in the detector. The active volume of the detector is liquid argon, which allows both light and charge to propagate through it. When a neutrino scatters in the liquid argon, scintillation light is produced that is collected by sensors. As charged particles created in the collision pass through the liquid argon they ionize the atoms they pass by. An electric field applied to the detector causes this ionization charge to drift towards a mesh of wires where it can be collected. By measuring the difference in arrival time between the light and the charge, as well as the amount of charge collected at different positions and times, the precise location and trajectory of the particles produced in the collision can be determined.

A beautiful reconstructed event in the MicroBoone detector. The colored lines show the tracks of different particles produced in the collision, all coming from a single point where the neutrino interaction took place. One can also see that one of the tracks produced a shower of particles away from the interaction vertex.
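Underneath all of this imaging capability, the position reconstruction along the drift direction is essentially timing multiplied by a known drift velocity. A minimal sketch (the drift velocity quoted is an approximate, field-dependent number, not MicroBooNE’s exact operating value):

```python
# Sketch of how a liquid argon TPC turns timing into a drift coordinate.
DRIFT_VELOCITY_CM_PER_US = 0.16   # roughly 1.6 mm/us near 500 V/cm; approximate value

def drift_distance_cm(t_charge_us, t_flash_us):
    """Distance of an ionization deposit from the wire planes.

    t_flash_us:  arrival time of the prompt scintillation light (sets the interaction time)
    t_charge_us: arrival time of the drifted ionization charge on the wires
    """
    return DRIFT_VELOCITY_CM_PER_US * (t_charge_us - t_flash_us)

# Charge arriving 625 microseconds after the flash drifted from about a metre away:
print(f"{drift_distance_cm(625.0, 0.0):.0f} cm")
```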

Unlike MiniBooNE and LSND, MicroBooNE can therefore see not just the lepton, but also the hadronic particles (protons, pions, etc.) produced when a neutrino scatters in the detector. As a result, the same type of neutrino interaction actually looks very different in their detector. So when they went to test the MiniBooNE anomaly they adopted several different strategies for exactly what to look for. In the first case they looked for the type of interaction an electron neutrino would most likely produce: an outgoing electron and proton whose kinematics match those of a charged current interaction. Their second set of analyses, designed to mimic the MiniBooNE selection, is slightly more general: they require one electron and any number of protons, but no pions. Their third analysis is the most general and requires an electron along with anything else.
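As a toy illustration of how the three strategies differ, one could imagine classifying reconstructed events by simple particle counts. The real analyses use full event reconstruction and are far more sophisticated; the event records here are hypothetical:

```python
# Toy classifier mirroring the three selection strategies described above.
def classify(event):
    e = event.get("electrons", 0)
    p = event.get("protons", 0)
    pi = event.get("pions", 0)
    if e != 1:
        return "rejected: no single electron"
    if p == 1 and pi == 0:
        return "exclusive channel: 1 electron + 1 proton"
    if pi == 0:
        return "semi-inclusive channel: 1 electron + protons, no pions"
    return "inclusive channel: 1 electron + anything"

print(classify({"electrons": 1, "protons": 1}))              # exclusive
print(classify({"electrons": 1, "protons": 2, "pions": 1}))  # inclusive
```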

These different analyses have different levels of sensitivity to the MiniBooNE anomaly, but all of them are found to be consistent with a background-only hypothesis: there is no sign of any excess events. Three out of four of them even see slightly fewer events than the expected background.

A summary of the different MicroBooNE analyses. The y-axis shows the ratio of the observed number of events to the number expected if only background were present. The red lines show the excess predicted in each channel if the MiniBooNE anomaly produced a signal there. One can see that the black data points are much more consistent with the grey bands showing the background-only prediction than with the amount predicted if the MiniBooNE anomaly were present.

Overall the MicroBooNE data rejects the hypothesis that the MiniBooNE anomaly is due to electron neutrino charged current interactions at quite high significance (>3 sigma). So if it’s not electron neutrinos causing the MiniBooNE anomaly, what is it?

What’s Going On?

Given that MicroBooNE did not see any signal, many would guess that MiniBooNE’s claim of an excess must be flawed and that one of their backgrounds was underestimated. Unfortunately, it is not very clear what that could be. If you look at the low-energy region where MiniBooNE has an excess, there are three major background sources: decays of the Delta baryon that produce a photon (shown in tan), neutral pions decaying to pairs of photons (shown in red), and backgrounds from true electron neutrinos (shown in various shades of green). However, all of these background sources seem quite unlikely to be the origin of the MiniBooNE anomaly.

Before releasing these results, MicroBooNE performed a dedicated search for Delta baryons decaying into photons, and saw a rate in agreement with the theoretical prediction MiniBooNE used, well below the amount needed to explain the MiniBooNE excess.

Backgrounds from true electron neutrinos produced in the beam, as well as from the decays of muons, should not concentrate only at low energies the way the excess does, and their rate has also been measured within MiniBooNE data by looking at other signatures.

The decay of a neutral pion produces two photons, and if one of them escapes detection, the remaining single photon will mimic the signal. However, photons should be more likely to escape the detector near its edges, whereas the excess events are distributed uniformly throughout the detector volume.

So now the mystery of what could be causing this excess is even greater. If it is a background, it seems most likely to be from an unknown source not previously considered. As will be discussed in our part 2 post, it’s possible that the MiniBooNE anomaly was caused by a more exotic form of new physics: perhaps the excess events in MiniBooNE were not really coming from the scattering of electron neutrinos but from something else that produced a similar signature in their detector. Some of these explanations involve particles that decay into pairs of electrons or photons. These sorts of explanations should be testable with MicroBooNE data but will require dedicated analyses for their different signatures.

So on the experimental side, we are now left to scratch our heads and wait for new results from MicroBooNE that may help get to the bottom of this.

Click here for part 2 of our MicroBooNE coverage that goes over the theory side of the story!

Read More

“Is the Great Neutrino Puzzle Pointing to Multiple Missing Particles?” – Quanta Magazine article on the new MicroBooNE result

“Can MiniBooNE be Right?” – Resonaances blog post summarizing the MiniBooNE anomaly prior to the MicroBooNE results

A review of different types of neutrino detectors – from the T2K experiment

Planckian dark matter: DEAP edition

Title: First direct detection constraints on Planck-scale mass dark matter with multiple-scatter signatures using the DEAP-3600 detector.

Reference: https://arxiv.org/abs/2108.09405

Here is a broad explainer of the paper, organized by breaking down its title.

Direct detection.

This is the term for a kind of astronomy, ‘dark matter astronomy’, that has been in action since the 1980s. The word “astronomy” usually evokes telescopes pointing at something in the sky and catching its light. But one could also catch other things, e.g., neutrinos, cosmic rays and gravitational waves, to learn about what’s out there: that counts as astronomy too! As touched upon elsewhere in these pages, we think dark matter is flying into Earth at about 300 km/s, making its astronomy a possibility. But we are yet to conclusively catch dark particles. The unique challenge, unlike astronomy with light or neutrinos or gravitational waves, is that we do not quite know the character of dark matter. So we must first imagine what it could be, and design a telescope/detector accordingly. That is challenging, too. We only really know that dark matter exists on the size scale of small galaxies, about 10^{19} metres, whereas our detectors are at best a metre across. This vast gulf in scales can only be bridged by theoretical models.

Multiple-scatter signatures.

How heavy all the dark matter is in the neighbourhood of the Sun has been ballparked, but that does not tell us how far apart the dark particles are from each other, i.e. whether they are lightweights huddled close together or anvils socially distanced. Usually dark matter experiments (there are dozens around the world!) look for dark particles bunched a few centimetres apart, called WIMPs. This experiment looked, for the first time, for dark particles that may be 30 kilometres apart. In particular, they looked for “MIMPs” (multiply interacting massive particles): dark matter that leaves a “track” in the detector as opposed to the single “burst” characteristic of a WIMP. As explained here, to discover very dilute dark particles like the ones DEAP-3600 went after, one must necessarily look for tracks. So they carefully analyzed the waveforms of energy deposits in the detector (e.g., from radioactive material, cosmic muons, etc.) to pick out telltale tracks of dark matter.

Figure above: Simulated waveforms for two benchmark parameters.
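To get a feel for the “centimetres versus kilometres” numbers above, one can estimate the typical spacing between local dark particles. This is a back-of-the-envelope sketch only, assuming the commonly quoted local dark matter density of about 0.3 GeV/cm^3:

```python
# Back-of-the-envelope spacing between dark matter particles near Earth.
RHO_LOCAL_GEV_PER_CM3 = 0.3   # assumed local dark matter density

def mean_spacing_cm(mass_gev):
    number_density = RHO_LOCAL_GEV_PER_CM3 / mass_gev   # particles per cm^3
    return number_density ** (-1.0 / 3.0)               # typical distance between particles

print(f"100 GeV WIMP:     ~{mean_spacing_cm(1e2):.0f} cm apart")             # a few centimetres
print(f"Planck-mass MIMP: ~{mean_spacing_cm(1.2e19) / 1e5:.0f} km apart")    # tens of kilometres
```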

DEAP-3600 detector.

The largest dark matter detector built so far: the 130 cm-diameter, 3.3 tonne, liquid argon-based DEAP (“Dark matter Experiment using Argon Pulse-shape discrimination”) detector in SNOLAB, Canada. Three years of data recording whatever passed through the detector were used. That amounts to the greatest integrated flux of dark particles through a detector in any dark matter experiment so far, enabling DEAP to probe the frontier of “diluteness” in dark matter.

Planck-scale mass.

By looking for the dilutest dark particles, DEAP-3600 is the first laboratory experiment to say something about dark matter that may weigh a “Planck mass” — about 22 micrograms, or 1.2 \times 10^{19} GeV/c^2 — the greatest mass an elementary particle could have. That’s like breaking the sound barrier. Nothing prevents you from moving faster than sound, but you’d transition to a realm of new physical effects. Similarly nothing prevents an experiment from probing dark matter particles beyond the Planck mass. But novel intriguing theoretical possibilities for dark matter’s unknown identity are now impacted by this result, e.g., large composite particles, solitonic balls, and charged mini-black holes.
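As a quick numerical check of those figures, the Planck mass is built purely out of \hbar, c and Newton’s constant G; the sketch below uses standard CODATA-style values:

```python
import math

# Planck mass: m_P = sqrt(hbar * c / G)
HBAR = 1.054571817e-34    # J s
C = 299792458.0           # m/s
G = 6.67430e-11           # m^3 kg^-1 s^-2
GEV_IN_JOULES = 1.602176634e-10

m_planck_kg = math.sqrt(HBAR * C / G)
print(f"{m_planck_kg * 1e9:.1f} micrograms")                    # ~21.8 micrograms
print(f"{m_planck_kg * C**2 / GEV_IN_JOULES:.2e} GeV/c^2")      # ~1.22e19 GeV/c^2
```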

Constraints.

The experiment did not discover dark matter, but it has mapped out the dark matter masses and nucleon scattering cross sections that are now ruled out thanks to its extensive search.


Figure above: For two classes of composite dark matter models, DEAP-3600 limits on the dark matter-nucleon scattering cross section versus the unknown dark matter mass. Also displayed are previously placed limits from various other searches.

[Full disclosure: the author was part of the experimental search, which was based on proposals in [a] [b]. It is hoped that this search leads the way for other collaborations to, using their own fun tricks, cast an even wider net than DEAP did.]

Further reading.

[1] Proposals for detection of (super-)Planckian dark matter via purely gravitational interactions:

Laser interferometers as dark matter detectors,

Gravitational direct detection of dark matter.

[2] Constraints on (super-)Planckian dark matter from recasting searches in etched plastic and ancient underground mica.

[3] A recent multi-scatter search for dark matter reaching masses of 10^{12} GeV/c^2.

[4] Look out for Benjamin Broerman‘s PhD thesis featuring results from a multi-scatter search in the bubble chamber-based PICO-60.