A Massive W for CDF

This is part two of our coverage of the CDF W mass measurement, discussing how the measurement was done. Read about the implications of this result in our sister post here!

Last week, the CDF collaboration announced the most precise measurement of the W boson’s mass to date. After nearly ten years of careful analysis, the W weighed in at 80,433.5 ± 9.4 MeV: a whopping seven standard deviations away from the Standard Model expectation! This result quickly became the talk of the town among particle physicists, and there are already dozens of arXiv papers speculating about what it means for the Standard Model. One of the most impressive and hotly debated aspects of this measurement is its high precision, which came from an extremely careful characterization of the CDF detector and recent theoretical developments in modeling proton structure. In this post, I’ll describe how they made the measurement and the clever techniques they used to push down the uncertainties.

The new CDF measurement of the W boson mass. The center of the red ellipse corresponds to the central values of the measured W mass (y-coordinate) and top quark mass (x-coordinate, from other experiments). The purple line shows the Standard Model constraint on the W mass as a function of the top mass, and the border of the red ellipse is the one standard deviation boundary around the measurement.

The imaginatively titled “Collider Detector at Fermilab” (CDF) collected proton-antiproton collision data at Fermilab’s Tevatron accelerator for over 20 years, until the Tevatron shut down in 2011. Much like ATLAS and CMS, CDF is made of cylindrical detector layers, with the innermost charged particle tracker and adjacent electromagnetic calorimeter (ECAL) being most important for the W mass measurement. The Tevatron ran at a center of mass energy of 1.96 TeV — much lower than the LHC’s 13 TeV — which enabled a large reduction in the “theoretical uncertainties” on the measurement. Physicists use models called “parton distribution functions” (PDFs) to calculate how a proton’s momentum is distributed among its constituent quarks, and modern PDFs make very good predictions at the Tevatron’s energy scale. Additionally, W boson production in proton-antiproton collisions doesn’t involve any gluons, which are a major source of uncertainty in PDFs (LHC collisions are full of gluons, making for larger theory uncertainty in LHC W mass measurements).
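
For readers curious what "using a PDF" looks like in practice, here is a minimal sketch with the LHAPDF Python bindings. The set name is just an example of a recent global fit, not necessarily one used in the CDF analysis, and the kinematic values are purely illustrative.

```python
# Minimal sketch of evaluating a parton distribution function with LHAPDF
# (set name and kinematics are illustrative, not those of the CDF analysis).
import lhapdf

pdf = lhapdf.mkPDF("CT18NNLO", 0)   # central member of an example modern PDF set
x, Q = 0.1, 80.4                    # momentum fraction and scale in GeV, roughly the W-mass scale
print(pdf.xfxQ(2, x, Q))            # x * f(x, Q) for the up quark (PDG ID 2)
print(pdf.xfxQ(21, x, Q))           # the same for the gluon (PDG ID 21)
```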

A cutaway view of the CDF detector. The innermost tracking detector (yellow) reconstructs the trajectories of charged particles, and the nearby electromagnetic calorimeter (red) collects energy deposits from photons and charged particles (e.g. electrons). The tracker and ECAL were both central to the W mass measurement.

Armed with their fancy PDFs, physicists set out to measure the W mass in the same way as always: by looking at its decay products! They focused on the leptonic channel, where the W decays to a lepton (electron or muon) and its associated neutrino. This clean final state is easy to identify in the detector and allows for a high-purity, low-background signal selection. The only sticking point is the neutrino, which flies out of the detector completely undetected. Thankfully, momentum conservation allowed them to reconstruct the neutrino’s transverse momentum (pT) from the rest of the visible particles produced in the collision. Combining this with the lepton’s measured momentum, they reconstructed the “transverse mass” of the W — an important observable for estimating its true mass.
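
The transverse mass is built only from momenta in the plane perpendicular to the beam, since the neutrino's longitudinal momentum cannot be inferred. A minimal sketch of the standard definition (not CDF's actual analysis code):

```python
import math

def transverse_mass(pt_lep, phi_lep, pt_nu, phi_nu):
    """Standard W transverse mass from the lepton pT and the inferred neutrino pT."""
    dphi = phi_lep - phi_nu
    return math.sqrt(2.0 * pt_lep * pt_nu * (1.0 - math.cos(dphi)))

# Toy example: a 40 GeV lepton roughly back-to-back with 40 GeV of missing transverse momentum
print(transverse_mass(40.0, 0.0, 40.0, math.pi))  # ~80 GeV, i.e. near the W mass
```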

A leptonic decay of the W boson, where it decays to an electron and an electron antineutrino. This channel, along with the muon + muon antineutrino channel, formed the basis of CDF’s W mass measurement.

Many of the key observables for this measurement flow from the lepton’s momentum, which means it needs to be measured very carefully! The analysis team calibrated their energy and momentum measurements by using the decays of other Standard Model particles: the ϒ(1S) and J/ψ mesons, and the Z boson. These particles’ masses are very precisely known from other experiments, and constraints from these measurements helped physicists understand how accurately CDF reconstructs a particle’s energy. For momentum measurements in the tracker, they reconstructed the ϒ(1S) and J/ψ masses from their decays to muon-antimuon pairs inside CDF, and compared the CDF-measured masses to their known values from other experiments. This allowed them to calculate a correction factor to apply to track momenta. For ECAL energy measurements, they looked at samples of Z and W bosons decaying to electrons, and measured the ratio of energy deposited in the ECAL (E) to the momentum measured in the tracker (p). The shape of the E/p distribution then allowed them to calculate an energy calibration for the ECAL.
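
The logic of the momentum-scale calibration can be sketched very roughly as follows; the real CDF procedure fits full mass lineshapes and tracks many systematic effects, and the numbers below are purely hypothetical.

```python
# Toy illustration of extracting a momentum-scale correction from a known resonance mass.
m_jpsi_world = 3096.9   # MeV, world-average J/psi mass
m_jpsi_meas  = 3094.5   # MeV, hypothetical reconstructed peak position (illustrative only)

scale = m_jpsi_world / m_jpsi_meas   # the dimuon mass scales linearly with a common momentum scale
print(f"momentum-scale correction ~ {scale:.5f}")

def corrected_pt(measured_pt):
    """Apply the scale factor to a measured track pT (toy version of the calibration)."""
    return scale * measured_pt
```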

Left: the fractional deviation of the measured muon momentum relative to its true momentum (y-axis), as a function of the muon’s average inverse transverse momentum. Data from ϒ(1S), J/ψ, and Z decays are shown, and the fit line (in black) has a slope consistent with zero. This indicates that there is no significant mismodeling of the energy lost by a particle flying through the detector. Right: the distribution of the ratio of energy measured in the ECAL to momentum measured in the tracker (E/p). The shapes of the peak and tail are used to calibrate the ECAL energy measurements.

To make sure their tracker and ECAL calibrations worked correctly, they applied them in measurements of the Z boson mass in the electron and muon decay channels. Thankfully, their measurements were consistent with the world average in both channels, providing an important cross-check of their calibration strategy.

Having done everything humanly possible to minimize uncertainties and calibrate their measurements, the analysis team was finally ready to measure the W mass. To do this, they simulated W boson events with many different settings for the W mass (an additional mountain of effort went into ensuring that the simulations were as accurate as possible!). At each mass setting, they extracted “template” distributions of the lepton pT, neutrino pT, and W boson transverse mass, and fit each template to the distribution measured in real CDF data. The templates that best fit the measured data correspond to CDF’s measured value of the W mass (plus some additional legwork to calculate uncertainties).
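
Conceptually, the template method boils down to a scan over mass hypotheses. A toy sketch of that idea (CDF actually performs binned likelihood fits with far more detailed simulation, so treat this purely as an illustration):

```python
import numpy as np

def chi2(observed, template, errors):
    """Simple chi-square between an observed histogram and a simulated template."""
    return np.sum(((observed - template) / errors) ** 2)

def best_fit_mass(observed, errors, templates):
    """templates: dict mapping a W-mass hypothesis (MeV) -> expected histogram (same binning)."""
    scan = {m: chi2(observed, np.asarray(t), errors) for m, t in templates.items()}
    return min(scan, key=scan.get), scan

# Usage sketch with hypothetical inputs:
# templates = {80350: hist_80350, 80400: hist_80400, 80450: hist_80450}
# m_best, scan = best_fit_mass(data_hist, data_err, templates)
```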

The reconstructed W boson transverse mass distribution in the muon + muon antineutrino decay channel. The best-fit template (red) is plotted along with the background distribution (gray) and the measured data (black points).

After years of careful analysis, CDF’s measurement of mW = 80,433.5 ± 9.4 MeV sticks out like a sore thumb. If it stands up to the close scrutiny of the particle physics community, it’s further evidence that something new and mysterious lies beyond the Standard Model. The only way to know for sure is to make additional measurements, but in the meantime we’ll all be happily puzzling over what this might mean.

CDF’s W mass measurement (bottom), shown alongside results from other experiments and the SM expectation (gray).

Read More

Quanta Magazine’s coverage of the measurement

A recorded talk from the Fermilab Wine & Cheese seminar covering the result in great detail

Too Massive? New measurement of the W boson’s mass sparks intrigue

This is part one of our coverage of the CDF W mass result covering its implications. Read about the details of the measurement in a sister post here!

Last week the physics world was abuzz with the latest results from an experiment that stopped running a decade ago. Some heralded it as the beginning of a breakthrough in fundamental physics, with headlines like “Shock result in particle experiment could spark physics revolution” (BBC). So what exactly is all the fuss about?

The result itself is an ultra-precise measurement of the mass of the W boson. The W boson is one of the carriers of the weak force, and this measurement pegged its mass at 80,433 MeV with an uncertainty of 9 MeV. The excitement comes from the fact that this value disagrees with the prediction from our current best theory of particle physics, the Standard Model. In the theoretical structure of the Standard Model, the masses of the gauge bosons are all interrelated. In the Standard Model the mass of the W boson can be computed based on the mass of the Z as well as a few other parameters in the theory (like the weak mixing angle). In a first approximation (i.e. to lowest order in perturbation theory), the mass of the W boson is equal to the mass of the Z boson times the cosine of the weak mixing angle. Based on other measurements that have been performed, including the Z mass, the Higgs mass, the lifetime of the muon and others, the Standard Model predicts that the mass of the W boson should be 80,357 MeV (with an uncertainty of 6 MeV). So the two numbers disagree quite strongly, at the level of 7 standard deviations.
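
To make the two key numbers in this paragraph concrete, here is a small back-of-the-envelope check. The mixing-angle value is an approximate effective value used only for illustration; the full Standard Model prediction of 80,357 MeV includes sizeable loop corrections involving, among other inputs, the top quark and Higgs masses.

```python
import math

# Lowest-order relation quoted above: m_W = m_Z * cos(theta_W)
m_Z = 91.1876           # GeV
sin2_theta_W = 0.2315   # approximate effective weak mixing angle (illustrative)
m_W_tree = m_Z * math.sqrt(1.0 - sin2_theta_W)
print(f"tree-level m_W ~ {m_W_tree:.1f} GeV")   # ~79.9 GeV before loop corrections

# Size of the disagreement between the CDF measurement and the full SM prediction
m_cdf, err_cdf = 80433.5, 9.4   # MeV
m_sm,  err_sm  = 80357.0, 6.0   # MeV
print(f"tension ~ {(m_cdf - m_sm) / math.hypot(err_cdf, err_sm):.1f} sigma")  # ~7 standard deviations
```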

If the measurement and the Standard Model prediction are both correct, this would imply that there is some deficiency in the Standard Model; some new particle interacting with the W boson whose effects haven’t been accounted for. This would be welcome news to particle physicists, as we know that the Standard Model is an incomplete theory but have been lacking direct experimental confirmation of its deficiencies. The size of the discrepancy would also mean that whatever new particle was causing the deviation may also be directly detectable within our current or near-future colliders.

If this discrepancy is real, exactly what new particles would this entail? Judging based on the 30+ (and counting) papers released on the subject in the last week, there are a good number of possibilities. Some examples include extra Higgs bosons, extra Z-like bosons, and vector-like fermions. It would take additional measurements and direct searches to pick out exactly what the culprit was. But it would hopefully give experimenters definite targets of particles to look for, which would go a long way in advancing the field.

But before everyone starts proclaiming the Standard Model dead and popping champagne bottles, it’s important to take stock of this new CDF measurement in the larger context. Measurements of the W mass are hard; that’s why it has taken the CDF collaboration over 10 years to publish this result since they stopped taking data. And although this measurement is the most precise one to date, several other W mass measurements have been performed by other experiments.

The Other Measurements

A summary of all the W mass measurements performed to date (black dots) with their uncertainties (blue bars) as compared to the Standard Model prediction (yellow band). One can see that this new CDF result is in tension with previous measurements. (source)

Previous measurements of the W mass have come from experiments at the Large Electron-Positron collider (LEP), another experiment at the Tevatron (D0) and experiments at the LHC (ATLAS and LHCb). Though none of these were as precise as this new CDF result, they had been painting a consistent picture of a value in agreement with the Standard Model prediction. If you take the average of these other measurements, their value differs from the CDF measurement at the level of about 4 standard deviations, which is quite significant. This discrepancy seems large enough that it is unlikely to arise from a purely random fluctuation, and likely means that some uncertainties have been underestimated or something has been overlooked, in either the previous measurements or this new one.
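
As a rough illustration of where a "4 standard deviations" figure like this comes from, one can combine the earlier measurements with an inverse-variance weighted average and compare to the CDF value. The numbers below are approximate stand-ins for a few of the previous results, and the combination ignores correlated uncertainties, so this is only a sketch of the logic, not the official comparison.

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted average, ignoring correlations between measurements."""
    weights = [1.0 / e**2 for e in errors]
    avg = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return avg, math.sqrt(1.0 / sum(weights))

# Approximate stand-ins for a few earlier measurements (MeV): ATLAS, D0, LHCb
prev, prev_err = [80370.0, 80376.0, 80354.0], [19.0, 23.0, 32.0]
avg, avg_err = weighted_average(prev, prev_err)

cdf, cdf_err = 80433.5, 9.4
print(f"average of others ~ {avg:.0f} +/- {avg_err:.0f} MeV")
print(f"tension with CDF ~ {(cdf - avg) / math.hypot(cdf_err, avg_err):.1f} sigma")  # ~4
```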

What one would like are additional, independent, high precision measurements that could either confirm the CDF value or the average value of the previous measurements. Unfortunately it is unlikely that such a measurement will come in the near future. The only currently running facility capable of such a measurement is the LHC, but it will be difficult for experiments at the LHC to rival the precision of this CDF one.

W mass measurements are somewhat harder at the LHC than the Tevatron for a few reasons. First of all, the LHC is a proton-proton collider, while the Tevatron was a proton-antiproton collider, and the LHC also operates at a higher collision energy than the Tevatron. Both differences cause W bosons produced at the LHC to have more momentum than those produced at the Tevatron. Modeling of the W boson’s momentum distribution can be a significant uncertainty in its mass measurement, and the extra momentum of W’s at the LHC makes this a larger effect. Additionally, the LHC has a higher collision rate, meaning that each time a W boson is produced there are actually tens of other collisions laid on top (rather than only a few other collisions like at the Tevatron). These extra collisions are called pileup and can make it harder to perform precision measurements like these. In particular for the W mass measurement, the neutrino’s momentum has to be inferred from the momentum imbalance in the event, and this becomes harder when there are many collisions on top of each other. Of course W mass measurements are possible at the LHC, as evidenced by ATLAS and LHCb’s already published results. And we can look forward to improved results from ATLAS and LHCb as well as a first result from CMS. But it may be very difficult for them to reach the precision of this CDF result.

A plot of the transverse mass (one of the variables used in the measurement) of the W from the ATLAS measurement. The red and yellow lines show how little the distribution changes if the W mass changes by 50 MeV, which is around two and a half times the uncertainty of the ATLAS result. These shifts change the distribution by only a few tenths of a percent, illustrating the difficulty involved. (source)

The Future

A future electron-positron collider would be able to measure the W mass extremely precisely by using an alternate method. Instead of looking at the W’s decay, the mass could be measured through its production, by scanning the energy of the electron beams very close to the threshold to produce two W bosons. This method should offer precision significantly better than even this CDF result. However, any measurement from a possible future electron-positron collider won’t come for at least a decade.

In the coming months, expect this new CDF measurement to receive a lot of buzz. Experimentalists will be poring over the details, trying to figure out why it is in tension with previous measurements and working hard to produce new measurements from LHC data. Meanwhile theorists will write a bunch of papers detailing the possibilities of what new particles could explain the discrepancy and whether there is a connection to other outstanding anomalies (like the muon g-2). But the big question, whether we are seeing the first real crack in the Standard Model or there is some mistake in one or more of the measurements, is unlikely to be answered for a while.

If you want to learn about how the measurement actually works, check out this sister post!

Read More:

CERN Courier “CDF sets W mass against the Standard Model”

Blog post on the CDF result from an (ATLAS) expert on W mass measurements “[Have we] finally found new physics with the latest W boson mass measurement?”

PDG Review “Electroweak Model and Constraints on New Physics”

Moriond 2022: Return of the Excesses?!

Rencontres de Moriond is probably the biggest ski-vacation conference of the year in particle physics, and is one of the places big particle physics experiments often unveil their new results. For the last few years the buzz in particle physics has been surrounding ‘indirect’ probes of new physics, specifically the latest measurement of the muon’s anomalous magnetic moment (g-2) and hints from LHCb about lepton flavor universality violation. If either of these anomalies were confirmed, this would of course be huge, definitive laboratory evidence for physics beyond the Standard Model, but they would not answer the question of what exactly that new physics was. As evidenced by the 500+ papers written in the last year offering explanations of the g-2 anomaly, there are a lot of different potential explanations.

A definitive answer would come in the form of a ‘direct’ observation of whatever particle is causing the anomaly, which traditionally means producing and observing said particle in a collider. But so far the largest experiments performing these direct searches, ATLAS and CMS, have not shown any hints of new particles. But this Moriond, as the LHC experiments are getting ready for the start of a new data taking run later this year, both collaborations unveiled ‘excesses’ in their Run-2 data. These excesses, extra events above a background prediction that resemble the signature of a new particle, don’t have enough statistical significance to claim discoveries yet, and may disappear as more data is collected, as many an excess has done before. But they are intriguing and some have connections to anomalies seen in other experiments. 

So while there have been many great talks at Moriond (covering cosmology, astro-particle searches for dark matter, neutrino physics, more flavor physics measurements, and more) and the conference is still ongoing, it’s worth reviewing these new excesses in particular and what they might mean.

Excess 1: ATLAS Heavy Stable Charged Particles

Talk (paper forthcoming): https://agenda.infn.it/event/28365/contributions/161449/attachments/89009/119418/LeThuile-dEdx.pdf

Most searches for new particles at the LHC assume that said new particles decay very quickly once they are produced, and their signatures can then be pieced together by measuring all the particles they decay to. However, in the last few years there has been increasing interest in searching for particles that don’t decay quickly and therefore leave striking signatures in the detectors that can be distinguished from regular Standard Model particles. This particular ATLAS analysis looks for particles that are long-lived, heavy, and charged. Due to their heavy masses (and/or large charges), particles such as these will produce greater ionization signals as they pass through the detector than Standard Model particles would. The analysis selects tracks with high momentum and unusually high ionization signals. They find an excess of events with high mass and high ionization, with a significance of 3.3-sigma.

The ATLAS excess of heavy stable charged particles. The black data points lie above the purple background prediction and match well with the signature of a new particle (yellow line). 

If their background has been estimated properly, this seems to be quite a clear signature, and it might be time to get excited. ATLAS has checked that these events are not due to any known instrumental defect, but they do offer one caveat. For a heavy particle like this (with a mass of ~TeV) one would expect it to be moving noticeably slower than the speed of light. But when ATLAS compares the ‘time of flight’ of the particle, how long it takes to reach their detectors, its velocity appears indistinguishable from the speed of light. Background Standard Model particles, on the other hand, are expected to travel close to the speed of light, so the measured velocity looks more like what one would expect from background.
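
For a sense of why the time-of-flight check matters, the expected speed of a particle follows directly from its momentum and mass. A quick illustration with made-up but representative numbers (not the actual ATLAS candidate tracks):

```python
import math

def beta(p_gev, m_gev):
    """Speed (in units of c) of a particle with momentum p and mass m: beta = p / sqrt(p^2 + m^2)."""
    return p_gev / math.sqrt(p_gev**2 + m_gev**2)

print(beta(1000.0, 1400.0))  # ~0.58 for a 1.4 TeV particle carrying 1 TeV of momentum
print(beta(1000.0, 0.106))   # ~1.00 for a muon with the same momentum
```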

So what exactly to make of this excess is somewhat unclear. Hopefully CMS can weigh in soon!

Excesses 2-4: CMS’s Taus; Vector-Like-Leptons and TauTau Resonance(s)

Paper 1 : https://cds.cern.ch/record/2803736

Paper 2: https://cds.cern.ch/record/2803739

Many of the models seeking to explain the flavor anomalies seen by LHCb predict new particles that couple preferentially to taus and b-quarks. These two separate CMS analyses look for particles that decay specifically to tau leptons.

In the first analysis they look for pairs of vector-like leptons (VLLs), the lightest new particles predicted in one of the favored models for explaining the flavor anomalies. The VLLs are predicted to decay into tau leptons and b-quarks, so the analysis targets events which have at least four b-tagged jets and reconstructed tau leptons. They trained a machine learning classifier to separate VLLs from their backgrounds. They see an excess of events at high VLL classification probability in the categories with 1 or 2 reconstructed taus, with a significance of 2.8 standard deviations.

The CMS Vector-Like-Lepton excess. The gray filled histogram shows the best-fit amount of VLL signal. The histograms of other colors show the contributions of various backgrounds, and the hatched band shows their uncertainty.

In the second analysis they look for new resonances that decay into two tau leptons. They employ a sophisticated ‘embedding’ technique to estimate the large background of Z bosons decaying to tau pairs by using the decays of Z bosons to muons. They see two excesses, one at 100 GeV and one at 1200 GeV, each with a significance of around 3-sigma. The excess at ~100 GeV could also be related to another CMS analysis that saw an excess of diphoton events at ~95 GeV, especially given that, if there were an additional Higgs-like boson at 95 GeV, diphoton and ditau would be the two channels in which it would likely first appear.

CMS TauTau excesses. The excess at ~100 GeV is shown in the left plot and the one at 1200 GeV is shown on the right; the best-fit signal is shown with the red line in the bottom ratio panels.

While the statistical significances of these excesses are not quite as high as that of the first one, meaning it is more likely they are fluctuations that will disappear with more data, their connection to other anomalies is quite intriguing.

Excess 5: CMS Paired Dijet Resonances

Paper: https://cds.cern.ch/record/2803669

Often statistical significance doesn’t tell the full story of an excess. When CMS first performed its standard dijet search on Run-2 LHC data, where one looks for a resonance decaying to two jets by looking for bumps in the dijet invariant mass spectrum, they did not find any significant excesses. But they did note one particularly striking event, in which 4 jets form two ‘wide jets’, each with a mass of 1.9 TeV, while the 4-jet mass is 8 TeV.

An event display for the striking CMS 4-jet event. The 4 jets combine to form two back-to-back dijet pairs, each with a mass of 1.9 TeV.

A single event like this could well have come from ordinary Standard Model QCD, which normally produces a regular 2-jet topology. However, a new 8 TeV resonance decaying to two intermediate particles with masses of 1.9 TeV, each of which then decays to a pair of jets, would lead to exactly such a signature. This motivated them to design this analysis, a new search specifically targeting this paired dijet resonance topology. In this new search they have now found a second event with very similar characteristics. The local statistical significance of this excess is 3.9-sigma, but when one accounts for the many different potential dijet and 4-jet mass combinations considered in the analysis, that drops to 1.6-sigma.
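
The drop from a local to a global significance reflects the "look-elsewhere effect": many mass combinations were searched, so the chance of a random fluctuation somewhere is much higher than at one fixed mass. A naive illustration of the conversion is below; CMS's published global significance comes from its own dedicated calculation, and the trials factor here is just a round number chosen to land near the quoted shift.

```python
from scipy.stats import norm

def p_value(sigma):
    """One-sided tail probability corresponding to a given significance."""
    return norm.sf(sigma)

def global_sigma(local_sigma, n_trials):
    """Naive trials-factor correction: inflate the local p-value by the number of places searched."""
    p_global = min(1.0, n_trials * p_value(local_sigma))
    return norm.isf(p_global)

print(p_value(3.9))             # local p-value of roughly 5e-5
print(global_sigma(3.9, 1000))  # an O(1000) trials factor brings ~3.9 sigma down to roughly the quoted 1.6 sigma
```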

Though 1.6-sigma is relatively low, the striking nature of these events is certainly intriguing and warrants follow-up. Run-3 will also bring a slight increase in the LHC’s energy (13 → 13.6 TeV), which will give the production rate of any new 8 TeV particles a not-insignificant boost.

Conclusions

The safe bet on any of these excesses would probably be that it will disappear with more data, as many excesses have done in the past. And many particle physicists are probably wary of getting too excited after the infamous 750 GeV diphoton fiasco, in which many people got very excited (and wrote hundreds of papers) about a few-sigma excess in CMS + ATLAS data that disappeared as more data was collected. All of these excesses are for analyses performed by only a single experiment (ATLAS or CMS) for now, but both experiments have similar capabilities, so it will be interesting to see what the counterpart has to say for each excess once they perform a similar analysis on their Run-2 data. At the very least these results add some excitement for the upcoming LHC Run-3: the LHC collisions are starting up again this year after being on hiatus since 2018.

 

Read more:

CERN Courier Article “Dijet excess intrigues at CMS” 

Background on the infamous 750 GeV diphoton excess, Physics World Article “And so to bed for the 750 GeV bump”

Background on the LHCb flavor anomalies, CERN Courier “New data strengthens RK flavour anomaly”

 

A hint of CEvNS heaven at a nuclear reactor

Title : “Suggestive evidence for Coherent Elastic Neutrino-Nucleus Scattering from reactor antineutrinos”

Authors : J. Colaresi et al.

Link : https://arxiv.org/abs/2202.09672

Neutrinos are the ghosts of particle physics, passing right through matter as if it isn’t there. Their head-on collisions with atoms are so rare that it takes a many-ton detector to see them. Far more often though, a neutrino gives a tiny push to an atom’s nucleus, like a golf ball glancing off a bowling ball. Even a small detector can catch these frequent scrapes, but only if it can pick up the bowling ball’s tiny budge. Today’s paper may mark the opening of a new window into these events, called “coherent neutrino-nucleus scattering” or CEvNS (pronounced “sevens”), which can teach us about neutrinos, their astrophysical origins, and the even more elusive dark matter.

A scrape with a ghost in a sea of noise

CEvNS was first measured in 2017 by the COHERENT experiment at a spallation neutron source, but much more data is needed to fully understand it. Nuclear reactors produce far more neutrinos than other sources, but reactor neutrinos are even less energetic and thus harder to detect. To find these abundant but evasive events, the authors used a detector called “NCC-1701” that can count the electrons knocked off a germanium atom when a neutrino from the reactor collides with its nucleus.
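
To get a feel for how small the "budge" is, two-body kinematics sets the maximum recoil energy a neutrino can give a nucleus. A quick estimate with illustrative numbers for a few-MeV reactor antineutrino hitting germanium:

```python
# Maximum nuclear recoil energy in elastic scattering: E_R_max = 2*E_nu^2 / (M + 2*E_nu)
E_nu = 4.0e6    # eV, a typical reactor antineutrino energy (illustrative)
M_Ge = 67.7e9   # eV, roughly the mass of an average germanium nucleus
E_R_max = 2 * E_nu**2 / (M_Ge + 2 * E_nu)
print(f"maximum recoil ~ {E_R_max:.0f} eV")  # a few hundred eV -- the bowling ball barely budges
```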

Unfortunately, a nuclear reactor produces lots of neutrons as well, which glance off atoms just like neutrinos, and the detector was further swamped with electronic noise due to its hot, buzzing surroundings. To pick out CEvNS from this mess, the researchers found creative ways to reduce these effects: shielding the detector from as many neutrons as possible, cooling its vicinity, and controlling for environmental variables.

An intriguing bump with a promising future

After all this work, a clear bump was visible in the data when the reactor was running, and disappeared when it was turned off. You can see this difference in the top and bottom of Fig. 1, which shows the number of events observed after subtracting the backgrounds, as a function of the energy they deposited (number of electrons released from germanium atoms).

Fig. 1: The number of events observed minus the expected background, as a function of the energy the events deposited. In the top panel, when the nuclear reactor was running, a clear bump is visible at low energy. The bump is moderately to very strongly suggestive of CEvNS, depending on which germanium model is used (solid vs. dashed line). When the reactor’s operation was interrupted (bottom), the bump disappeared – an encouraging sign.

But measuring CEvNS is such a new enterprise that it isn’t clear exactly what to look for – the number of electrons a neutrino tends to knock off a germanium atom is still uncertain. This can be seen in the top of Fig. 1, where the model used for this number changes the amount of CEvNS expected (solid vs dashed line).

Still, for a range of these models, statistical tests “moderately” to “very strongly” confirmed CEvNS as the likely explanation of the excess events. When more data accumulates and the bump becomes clearer, NCC-1701 can determine which model is correct. CEvNS may then become the easiest way to measure neutrinos, since detectors only need to be a couple feet in size.

Understanding CEvNS is also critical for finding dark matter. With dark matter detectors coming up empty, it now seems that dark matter hits atoms even less often than neutrinos, making CEvNS an important background for dark matter hunters. If experiments like NCC-1701 can determine CEvNS models, then dark matter searches can stop worrying about this rain of neutrinos from the sky and instead start looking for them. These “astrophysical” neutrinos are cosmic messengers carrying information about their origins, from our sun’s core to supernovae.

This suggestive bump in the data of a tiny detector near the roiling furnace of a nuclear reactor shows just how far neutrino physics has come – the sneakiest ghosts in the Standard Model can now be captured with a germanium crystal that could fit in your palm. Who knows what this new window will reveal?

Read More

Ever-Elusive Neutrinos Spotted Bouncing Off Nuclei for the First Time” – Scientific American article from the first COHERENT detection in 2017

Hitting the neutrino floor” – Symmetry Magazine article on the importance of CEvNS to dark matter searches

Local nuclear reactor helps scientists catch and study neutrinos” – Phys.org story about these results

Exciting headways into mining black holes for energy!

Based on the paper Penrose process for a charged black hole in a uniform magnetic field

It has been over half a century since Roger Penrose first theorized that spinning black holes could be used as energy powerhouses by masterfully exploiting the principles of special and general relativity [1, 2]. Although we might not be able to harness energy from a black hole to reheat that cup of lukewarm coffee just yet, with a slew of amazing breakthroughs [4, 5, 6] it seems that we may be closer than ever before to making the transition from pure thought experiment to finally figuring out a realistic powering mechanism for several high-energy astrophysical phenomena. Not only can there be dramatic increases in the energies of radiated particles when charged, spinning black holes are used as energy reservoirs via the electromagnetic Penrose process rather than neutral, spinning black holes via the original mechanical Penrose process, but the authors of this paper also demonstrate that the region outside the event horizon (see below) from which energy can be extracted is much larger in the former than the latter. In fact, the enhanced power of this process is so great that it is one of the most suitable candidates for explaining various high-energy astrophysical phenomena such as ultrahigh-energy cosmic rays and particles [7, 8, 9] and relativistic jets [10, 11].

Stellar black holes are the final stages in the life cycle of stars so massive that they collapse upon themselves, unable to withstand their own gravitational pull. They are characterized by a point-like singularity at the centre, where a complete breakdown of Einstein’s equations of general relativity occurs, and are surrounded by an outer event horizon, within which the gravitational force is so strong that not even light can escape it. Just outside the event horizon of a rotating black hole is a region called the ergosphere, bounded by an outer stationary surface, within which space-time is dragged along inexorably with the black hole via a process called frame-dragging. This effect, predicted by Einstein’s theory of general relativity, makes it impossible for an object to stand still with respect to an outside observer.

The ergosphere has a rather curious property that makes the world-line (the path traced in four-dimensional space-time) of a particle or observer change from being time-like outside the stationary surface to being space-like inside it. In other words, the time and angular coordinates of the metric swap places! This leads to the existence of negative energy states of particles orbiting the black hole with respect to an observer at infinity [2, 12, 13]. It is this very property that enables the extraction of rotational energy from the ergosphere, as explained below.

According to Penrose’s calculations, if a massive particle that falls into the ergosphere were to split into two, the daughter that gets a kick from the black hole would be accelerated out with a much higher positive energy (up to 20.7 percent higher, to be exact) than the in-falling parent, as long as her sister is left with a negative energy. While it may seem counter-intuitive to imagine a particle with negative energy, note that no laws of relativity or thermodynamics are actually broken. This is because the observed energy of any particle is relative, and depends upon the momentum measured in the rest frame of the observer. Thus, the kinetic energy of the daughter particle left behind, though positive locally, would be measured as negative by an observer at infinity [3].
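
For reference, the often-quoted 20.7 percent figure corresponds to the idealized textbook setup: a particle that falls from rest at infinity and splits right at the horizon of a maximally spinning (extremal) black hole. In that limit,

```latex
E_{\mathrm{out}}^{\max} = \frac{1+\sqrt{2}}{2}\,E_{\mathrm{in}} \approx 1.207\,E_{\mathrm{in}},
\qquad \text{i.e. a maximum gain of } \frac{\sqrt{2}-1}{2} \approx 20.7\%.
```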

In contrast to the purely geometric mechanical Penrose process, if one now considers black holes that possess charge as well as spin, a tremendous amount of energy stored in the electromagnetic fields can be tapped into, leading to ultra-high energy extraction efficiencies. While there is a common misconception that a charged black hole tends to neutralize itself swiftly by attracting oppositely charged particles from the ambient medium, this is not quite true for a spinning black hole in a magnetic field (due to the dynamics of the hot plasma soup in which it is embedded). In fact, in this case Wald [14] showed that black holes tend to charge up until they reach a certain energetically favourable value. This value plays a crucial role in the amount of energy that can be delivered to the outgoing particle through the electromagnetic Penrose process. The authors of this paper explicitly locate the regions from which energy can be extracted and show that these are no longer restricted to the ergosphere, as there are a whole bunch of previously inaccessible negative energy states that can now be mined. They also find novel disconnected, toroidal regions not coincident with the ergosphere that can trap the negative energy particles forever (refer to Fig. 1)! The authors calculate the effective coupling strength between the black hole and charged particles, a certain combination of the mass and charge parameters of the black hole and charged particle, and the external magnetic field. This simple coupling formula enables them to estimate the efficiency of the process, as the magnitude of the energy boost that can be delivered to the outgoing particle is directly dependent on it. They also find that the coupling strength decreases as energy is extracted, much the same way as the spin of a black hole decreases as it loses energy to the super-radiant particles in the mechanical analogue.

While the electromagnetic Penrose process is the most favoured astrophysically viable mechanism for high-energy sources and phenomena such as quasars, fast radio bursts, relativistic jets etc., as the authors mention, “Just because a particle can decay into a trapped negative-energy daughter and a significantly boosted positive-energy radiator, does not mean it will do so.” However, in this era of precision black hole astrophysics, with state-of-the-art observatories such as the Event Horizon Telescope capable of capturing detailed observations of emission mechanisms in real time, and enhanced numerical and scientific methods at our disposal, it appears that we might be on the verge of detecting observable imprints left by the Penrose process on black holes, and perhaps tapping into a source of energy for advanced civilisations!

References

  1. Gravitational collapse: The role of general relativity
  2. Extraction of Rotational Energy from a Black Hole
  3. Penrose process for a charged black hole in a uniform magnetic field
  4. First-Principles Plasma Simulations of Black-Hole Jet Launching
  5. Fifty years of energy extraction from rotating black hole: revisiting magnetic Penrose process
  6. Magnetic Reconnection as a Mechanism for Energy Extraction from Rotating Black Holes
  7. Near-horizon structure of escape zones of electrically charged particles around weakly magnetized rotating black hole: case of oblique magnetosphere
  8. GeV emission and the Kerr black hole energy extraction in the BdHN I GRB 130427A
  9. Supermassive Black Holes as Possible Sources of Ultrahigh-energy Cosmic Rays
  10. Acceleration of the charged particles due to chaotic scattering in the combined black hole gravitational field and asymptotically uniform magnetic field
  11. Acceleration of the high energy protons in an active galactic nuclei
  12. Energy-extraction processes from a Kerr black hole immersed in a magnetic field. I. Negative-energy states
  13. Revival of the Penrose Process for Astrophysical Applications
  14. Black hole in a uniform magnetic field

 

 

How to find a ‘beautiful’ valentine at the LHC

References:  https://arxiv.org/abs/1712.07158 (CMS)  and https://arxiv.org/abs/1907.05120 (ATLAS)

If you are looking for love at the Large Hadron Collider this Valentine’s Day, you won’t find a better eligible bachelor than the b-quark. The b-quark (also called the ‘beauty’ quark if you are feeling romantic, the ‘bottom’ quark if you are feeling crass, or a ‘beautiful bottom quark’ if you are trying to weird people out) is the 2nd heaviest quark behind the top quark. It hangs out with a cool crowd, as it is the Higgs’s favorite decay and the top quark’s BFF: two particles we would all like to learn a bit more about.

Choose beauty this Valentine’s Day

No one wants a romantic partner who is boring, and can’t stand out from the crowd. Unfortunately when most quarks or gluons are produced at the LHC, they produce big sprays of particles called ‘jets’ that all look the same. That means even if the up quark was giving you butterflies, you wouldn’t be able to pick its jets out from those of strange quarks or down quarks, and no one wants to be pressured into dating a whole friend group. But beauty quarks can set themselves apart in a few ways. So if you are swiping through LHC data looking for love, try using these tips to find your b(ae).

Look for a partner who’s not afraid of commitment and loves to travel. Beauty quarks live longer than all the other quarks (a full 1.5 picoseconds, sub-atomic love is unfortunately very fleeting), letting them explore their love of traveling (up to a centimeter from the beamline, a great honeymoon spot I’ve heard) before decaying.
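
That centimeter of travel is just the lifetime multiplied by a typical relativistic boost. A quick back-of-the-envelope, with the boost factor chosen purely for illustration:

```python
# Flight distance of a b hadron: L = (beta * gamma) * c * tau
c_tau = 3.0e8 * 1.5e-12    # metres: speed of light times the ~1.5 ps lifetime, about 0.45 mm
beta_gamma = 20.0          # a representative boost for a b hadron inside an energetic LHC jet
print(f"typical flight distance ~ {beta_gamma * c_tau * 1e3:.0f} mm")  # ~9 mm, i.e. up to about a centimeter
```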

You want a lover who will bring you gifts, which you can hold on to even after they are gone. And when beauty quarks decay, you won’t be left in despair, but rather charmed by your new c-quark companion. And sometimes, if they are really feeling the magic, they leave behind charged leptons when they go, so you will have something to remember them by.

The ‘profile photo’ of a beauty quark. You can see it’s traveled away from the crowd (the Primary Vertex, PV) and has started a cool new Secondary Vertex (SV) to hang out in.

But even with these standout characteristics, beauty can still be hard to find, as there are a lot of un-beautiful quarks in the sea you don’t want to get hung up on. There is more to beauty than meets the eye, and as you get to know them you will find that beauty quarks have even more subtle features that make them stick out from the rest. So if you are serious about finding love in 2022, it may be time to turn to the romantic innovation sweeping the nation: modern machine learning. Even if we would all love to spend many sleepless nights learning all about them, unfortunately these days it feels like the-scientist-she-tells-you-not-to-worry-about, neural networks, will always understand them a bit better. So join the great romantics of our time (CMS and ATLAS) in embracing the modern dating scene, and let the algorithms find the most beautiful quarks for you.

So if you are looking for love this Valentine’s Day, look no further than the beauty quark. And if you are feeling hopeless, you can take inspiration from this decades-in-the-making love story from a few years ago: “Higgs Decay into Bottom Beauty Quarks Seen at Last”

A beautiful wedding photo that took decades to uncover: the Higgs decay into beauty quarks (red) was finally seen in 2018. Other, boring couples (dibosons) are shown in gray.

Towards resolving the black hole information paradox!

Based on the paper The black hole information puzzle and the quantum de Finetti theorem

Black holes are some of the most fascinating objects in the universe. They are extreme deformations of space and time, formed from the collapse of massive stars, with a gravitational pull so strong that nothing, not even light, can escape it. Apart from the astrophysical aspects of black holes (which are bizarre enough to warrant their own study), they provide the ideal theoretical laboratory for exploring various aspects of quantum gravity, the theory that seeks to unify the principles of general relativity with those of quantum mechanics.

One definitive way of making progress in this endeavor is to resolve the infamous black hole information paradox [1], and through a series of recent exciting developments, it appears that we might be closer to achieving this than we have ever been before [5, 6]! Paradoxes in physics paradoxically tend to be quite useful, in that they clarify what we don’t know about what we know. Stephen Hawking’s semi-classical calculations of black hole radiation treat the matter in and around black holes as quantum fields but describe gravity within the framework of classical general relativity. The corresponding results turn out to be in disagreement with the results obtained from a purely quantum theoretical viewpoint. The information paradox encapsulates this particular discrepancy. According to the calculations of Hawking and Bekenstein in the 1970s [3], black hole evaporation via Hawking radiation is completely thermal. This simply means that the von Neumann entropy S(R) of the radiation (a measure of its thermality or our ignorance of the system) keeps growing with the number of radiation quanta, reaching a maximum when the black hole has evaporated completely. This corresponds to a complete loss of information, whereby even a pure quantum state entering a black hole would be transformed into a mixed state of Hawking radiation, and all previous information about it would be destroyed. This conclusion is in stark contrast to what one would expect when regarding the black hole from the outside, as a quantum system that must obey the laws of quantum mechanics. The fundamental tenets of quantum mechanics are determinism and reversibility, the combination of which asserts that all information must be conserved. Thus if a black hole is formed by collapsing matter in a pure state, the state of the total system including the radiation R must remain pure. This can only happen if the entropy S(R), which first increases during the radiation process, ultimately decreases to zero when the black hole has disappeared completely, corresponding to a final pure state [4]. This quantum mechanical result is depicted in the famous Page curve (Fig. 1).

Certain significant discoveries in the recent past, showing that the Page curve is indeed the correct curve and can in fact be reproduced by semi-classical approximations of gravitational path integrals [7, 8], may finally hold the key to the resolution of this paradox. These calculations rely on the replica trick [9, 10] and take into account contributions from space-time geometries comprising wormholes that connect various replica black holes. This simple geometric amendment to the gravitational path integral is the only factor different from Hawking’s calculations and yet leads to diametrically different results! The replica trick is a neat method that enables the computation of the entropy of the radiation field by first considering n identical copies of a black hole, calculating their Rényi entropies, and using the fact that this equals the desired von Neumann entropy in the limit of n → 1. These in turn are calculated using the semi-classical gravitational path integral under the assumption that the dominant contributions come from the geometries that are classical solutions to the gravity action, obeying Zₙ symmetry. This leads to two distinct solutions:

  • The Hawking saddle consisting of disconnected geometries (corresponding to identical copies of a black hole).
  • The replica wormholes geometry consisting of connections between the different replicas.
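
For reference, the identity underlying the replica trick described above is that the von Neumann entropy is the n → 1 limit of the Rényi entropies, each of which can be computed from n copies (replicas) of the system:

```latex
S(R) = -\,\mathrm{Tr}\,\rho_R \log \rho_R
     = \lim_{n \to 1} \frac{1}{1-n} \log \mathrm{Tr}\,\rho_R^{\,n}
```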

Upon taking the n → 1 limit, one finds that the only ingredient missing from Hawking’s calculation, the one needed to recover the quantum-compatible Page curve, was the inclusion of the replica wormhole solution.

In this paper, the authors use insights from quantum information theory to pin down the reason for this discrepancy, where the extra information is stored, and the physical relevance of the replica wormholes. Invoking the quantum de Finetti theorem [11, 12], they find that there exists a particular piece of reference information, W, and that the entropy one assigns to the black hole radiation field depends on whether or not one has access to it. While Hawking’s calculations correspond to measuring the unconditional von Neumann entropy S(R), ignoring W, the novel calculations using the replica trick compute the conditional von Neumann entropy S(R|W), which takes W into account. The former yields the entropy of the ensemble average of all possible radiation states, while the latter yields the ensemble average of the entropy of those same states. They also show that the replica wormholes are a geometric representation of the correlation that appears between the n black holes, mediated by W.

The precise interpretation of W and what it might appear to be to an observer falling into a black hole remains an open question. Exploring what it could represent in holographic theories, string theories and loop quantum gravity could open up a dizzying array of insights into black hole physics and the nature of space-time itself. It appears that the different pieces of the quantum gravity puzzle are slowly but surely coming together, and will hopefully soon give us the entire picture.

References

  1. The Entropy of Hawking Radiation
  2. Black holes as mirrors: Quantum information in random subsystems
  3. Particle creation by black holes
  4. Average entropy of a subsystem
  5. The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole
  6. Entanglement Wedge Reconstruction and the Information Paradox
  7. Replica wormholes and the black hole interior
  8. Replica Wormholes and the Entropy of Hawking Radiation
  9. Entanglement entropy and quantum field theory
  10. Generalized gravitational entropy
  11. Locally normal symmetric states and an analogue of de Finetti’s theorem
  12. Unknown quantum states: The quantum de Finetti representation

The Mini and Micro BooNE Mystery, Part 2: Theory

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: MicroBooNE Collaboration

References: https://arxiv.org/pdf/2110.14054.pdf

This is the second post in a series on the latest MicroBooNE results, covering the theory side. Click here to read about the experimental side. 

Few stories in physics are as convoluted as the one written by neutrinos. These ghost-like particles, a notoriously slippery experimental target and one of the least-understood components of the Standard Model, are making their latest splash in the scientific community through MicroBooNE, an experiment at FermiLab that unveiled its first round of data earlier this month. While MicroBooNE’s predecessors have provided hints of a possible anomaly within the neutrino sector, its own detectors have yet to uncover a similar signal. Physicists were hopeful that MicroBooNE would validate this discrepancy, yet the tale is turning out to be much more nuanced than previously thought.

Unexpected Foundations

Originally proposed by Wolfgang Pauli in 1930 as an explanation for missing momentum in certain particle collisions, the neutrino was added to the Standard Model as a massless particle that can come in one of three possible flavors: electron, muon, and tau. At first, it was thought that these flavors are completely distinct from one another. Yet when experiments aimed to detect a particular neutrino type, they consistently measured a discrepancy from their prediction. A peculiar idea known as neutrino oscillation presented a possible explanation: perhaps, instead of propagating as a singular flavor, a neutrino switches between flavors as it travels through space. 

This interpretation emerges fortuitously if the model is modified to give the neutrinos mass. In quantum mechanics, a particle’s mass eigenstate — the possible masses a particle can be found to have upon measurement — can be thought of as a traveling wave with a certain frequency. If the three possible mass eigenstates of the neutrino are different, meaning that at most one of the mass values could be zero, this creates a phase shift between the waves as they travel. It turns out that the flavor eigenstates — describing which of the electron, muon, or tau flavors the neutrino is measured to possess — are then superpositions of these mass eigenstates. As the neutrino propagates, the relative phase between the mass waves varies such that when the flavor is measured, the final superposition could be different from the initial one, explaining how the flavor can change. In this way, the mass eigenstates and the flavor eigenstates of neutrinos are said to “mix,” and we can mathematically characterize this model via mixing parameters that encode the mass content of each flavor eigenstate.

A visual representation of how neutrino oscillation works. From: http://www.hyper-k.org/en/neutrino.html.
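
As a rough numerical illustration, the textbook two-flavor oscillation probability depends on a mixing angle, a mass-squared splitting, and the ratio of the travel distance to the neutrino energy. The parameter values below are purely illustrative (a ~1 eV² splitting is the ballpark relevant to the short-baseline anomalies discussed later), and this is not the full three-flavor treatment experiments actually use.

```python
import math

def osc_prob(theta, dm2_ev2, L_km, E_gev):
    """Two-flavor appearance probability: sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return math.sin(2.0 * theta)**2 * math.sin(1.27 * dm2_ev2 * L_km / E_gev)**2

# Illustrative parameters: small mixing, ~1 eV^2 splitting, a 0.5 km baseline, a 1 GeV beam
print(osc_prob(theta=0.1, dm2_ev2=1.0, L_km=0.5, E_gev=1.0))
```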

These massive oscillating neutrinos represent a radical departure from the picture originally painted by the Standard Model, requiring a revision in our theoretical understanding. The oscillation phenomenon also poses a unique experimental challenge, as it is especially difficult to unravel the relationships between neutrino flavors and masses. Thus far, physicists have only been able to constrain the sum of the neutrino masses, finding that it must be exceedingly small, posing yet another mystery. The neutrino experiments of the past three decades have set their sights on measuring the mixing parameters in order to determine the probabilities of the possible flavor switches.

A Series of Perplexing Experiments

In 1993, scientists in Los Alamos peered at the data gathered by the Liquid Scintillator Neutrino Detector (LSND) to find something rather strange. The group had set out to measure the number of electron neutrino events produced via decays in their detector, and found that this number exceeded what had been predicted by the three-neutrino oscillation model. In 2002, experimentalists turned on the complementary MiniBooNE detector at FermiLab (BooNE is an acronym for Booster Neutrino Experiment), which searched for oscillations of muon neutrinos into electron neutrinos, and again found excess electron neutrino events. For a more detailed account of the setup of these experiments, check out Oz Amram’s latest piece.

While two experiments are notable for detecting excess signal, they stand as outliers when we consider all neutrino experiments that have collected oscillation data. Collaborations that were taking data at the same time as LSND and MiniBooNE include: MINOS (Main Injector Neutrino Oscillation Search), KamLAND (Kamioka Liquid Scintillator Antineutrino Detector), and IceCube (surprisingly, not a fancy acronym, but deriving its name from the fact that it’s located under ice in Antarctica), to name just a few prominent ones. Their detectors targeted neutrinos from a beamline, nearby nuclear reactors, and astrophysical sources, respectively. Not one found a mismatch between predicted and measured events. 

The results of these other experiments, however, do not negate the findings of LSND and MiniBooNE. This extensive experimental range — probing several sources of neutrinos, and detecting with different hardware specifications — is necessary in order to consider the full range of possible neutrino mixing parameters and masses. Each model or experiment is endowed with a parameter space: a set of allowed values that its parameters can take. In this case, the neutrino mass and mixing parameters form a two-dimensional grid of possibilities. The job of a theorist is to find a solution that both resolves the discrepancy and has a parameter space that overlaps with allowed experimental parameters. Since LSND and MiniBooNE had shared regions of parameter space, the resolution of this mystery should be able to explain not only the origins of the excess, but why no similar excess was uncovered by other detectors.

A simple explanation for the anomaly emerged and quickly gained traction: perhaps the data hinted at a fourth type of neutrino. Following the logic of the three-neutrino oscillation model, this interpretation considers the possibility that the three known flavors have some probability of oscillating into an additional fourth flavor. For this theory to remain consistent with previous experiments, the fourth neutrino would have to provide the LSND and MiniBooNE excess signals, while at the same time sidestepping prior detection by coupling to only the force of gravity. Due to its evasive behavior, this potential fourth neutrino has come to be known as the sterile neutrino.

The Rise of the Sterile Neutrino

The sterile neutrino is a well-motivated and especially attractive candidate for beyond the Standard Model physics. It differs from ordinary neutrinos, also called active neutrinos, by having the opposite “handedness”. To illustrate this property, imagine a spinning particle. If the particle is spinning with a leftward orientation, we say it is “left-handed”, and if it is spinning with a rightward orientation, we say it is “right-handed”. Mathematically, this quantity is called helicity, which is formally the projection of a particle’s spin along its direction of momentum. However, this helicity depends implicitly on the reference frame from which we make the observation. Because massive particles move slower than the speed of light, we can choose a frame of reference such that the particle appears to have momentum going in the opposite direction, and as a result, the opposite helicity. Conversely, because massless particles move at the speed of light, they will have the same helicity in every reference frame. 

An illustration of chirality. We define, by convention, a “right-handed” particle as one whose spin and momentum directions align, and a “left-handed” particle as one whose spin and momentum directions are anti-aligned. Source: Wikipedia.

This frame-dependence unnecessarily complicates calculations, but luckily we can instead employ a related quantity that encapsulates the same properties while bypassing the reference frame issue: chirality. Much like helicity, while massless particles can only display one chirality, massive particles can be either left- or right-chiral. Neutrinos interact via the weak force, which is famously parity-violating, meaning that it has been observed to preferentially interact only with particles of one particular chirality. Yet massive neutrinos could presumably also be right-handed — there’s no compelling reason to think they shouldn’t exist. Sterile neutrinos could fill this gap.

They would also lend themselves nicely to addressing questions of dark matter and baryon asymmetry. The former — the observed excess of gravitationally-interacting matter over light-emitting matter by a factor of 20 — could be neatly explained away by the detection of a particle that interacts only gravitationally, much like the sterile neutrino. The latter, in which our patch of the universe appears to contain considerably more matter than antimatter, could also be addressed by the sterile neutrino via a proposed model of neutrino mass acquisition known as the seesaw mechanism. 

In this scheme, active neutrinos are represented as Dirac fermions: spin-½ particles that have a unique anti-particle, the oppositely-charged particle with otherwise the same properties. In contrast, sterile neutrinos are considered to be Majorana fermions: spin-½ particles that are their own antiparticle. The masses of the active and sterile neutrinos are fundamentally linked such that as the value of one goes up, the value of the other goes down, much like a seesaw. If sterile neutrinos are sufficiently heavy, this mechanism could explain the origin of neutrino masses and possibly even why the masses of the active neutrinos are so small. 
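
The "seesaw" is easy to state quantitatively: the light neutrino mass is suppressed by the heavy Majorana mass. The numbers below are a standard illustrative choice, not values determined by experiment.

```latex
m_\nu \;\approx\; \frac{m_D^2}{M_R}, \qquad
\text{e.g. } m_D \sim 100\ \text{GeV},\; M_R \sim 10^{14}\ \text{GeV}
\;\Rightarrow\; m_\nu \sim 0.1\ \text{eV}.
```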

These considerations position the sterile neutrino as an especially promising contender to address a host of Standard Model puzzles. Yet it is not the only possible solution to the LSND/MiniBooNE anomaly — a variety of alternative theoretical interpretations invoke dark matter, variations on the Higgs boson, and even more complicated iterations of the sterile neutrino. MicroBooNE was constructed to explore this range of scenarios and test their corresponding signatures.

Open Questions

After taking data for three years, the collaboration has compiled two dedicated analyses: one that searches for single-electron final states, and another that searches for single-photon final states. Each of these final states can result from electron neutrino interactions — yet neither analysis detected an excess, pointing to no obvious signs of new physics in these channels.

Above, we can see that the expected number of electron neutrino events agrees well with the number of measured events, disfavoring the MiniBooNE excess. Source: https://microboone.fnal.gov/wp-content/uploads/paper_electron_analysis_2021.pdf

Although confounding, this does not spell death for the sterile neutrino. A significant difference between the MiniBooNE and MicroBooNE detectors is the ability to discern between single- and multiple-electron events: MiniBooNE lacked the resolution that MicroBooNE achieves. MiniBooNE was also unable to fully distinguish between electron and photon events in the way that MicroBooNE can. The possibility remains that there exist processes involving new physics that were captured by LSND and MiniBooNE — perhaps decays resulting in two electrons, for instance.

The idea of a right-handed neutrino remains a promising avenue for beyond-the-Standard-Model physics, and it could turn out to have a mass much larger than our current detection mechanisms can probe. The MicroBooNE collaboration has not yet done a targeted study of the sterile neutrino, which is necessary in order to fully assess how their data connect to its possible signatures. There still exist regions of parameter space where the sterile neutrino could theoretically live, but with every newly excluded region it becomes harder to construct a sterile-neutrino theory that is consistent with experimental constraints.

While the list of neutrino-based mysteries only seems to grow with MicroBooNE’s latest findings, there are plenty of results on the horizon that could add clarity to the picture. Researchers are anticipating more data from MicroBooNE as well as more specific theoretical studies of the results and their relationship to the LSND/MiniBooNE anomaly, the sterile neutrino, and other beyond-the-Standard-Model scenarios. MicroBooNE is also just one in a series of planned neutrino experiments, and will operate alongside the upcoming SBND (Short-Baseline Near Detector) and ICARUS (Imaging Cosmic And Rare Underground Signals), further expanding the parameter space we are able to probe.

The neutrino sector has proven to be fertile ground for physics beyond the Standard Model, and it is likely that this story will continue to produce more twists and turns. While we have some promising theoretical explanations, nothing theorists have concocted thus far has fit seamlessly with our available data. More data from MicroBooNE and near-future detectors is necessary to expand our understanding of these puzzling pieces of particle physics. The neutrino story is pivotal to the tome of the Standard Model, and may be the key to writing the next chapter in our understanding of the fundamental ingredients of our world.

Further Reading

  1. A review of neutrino oscillation and mass properties: https://pdg.lbl.gov/2020/reviews/rpp2020-rev-neutrino-mixing.pdf
  2. An in-depth review of the LSND and MiniBooNE results: https://arxiv.org/pdf/1306.6494.pdf

Planckian dark matter: DEAP edition

Title: First direct detection constraints on Planck-scale mass dark matter with multiple-scatter signatures using the DEAP-3600 detector.

Reference: https://arxiv.org/abs/2108.09405.

Here is a broad explainer of the paper, built by breaking down its title.

Direct detection.

This is the term for a kind of astronomy, ‘dark matter astronomy’, that has been in action since the 1980s. The word “astronomy” usually evokes telescopes pointing at something in the sky and catching its light. But one could also catch other things, e.g., neutrinos, cosmic rays and gravitational waves, to learn about what’s out there: that counts as astronomy too! As touched upon elsewhere in these pages, we think dark matter is flying into Earth at about 300 km/s, making its astronomy a possibility. But we are yet to conclusively catch dark particles. The unique challenge, unlike astronomy with light or neutrinos or gravitational waves, is that we do not quite know the character of dark matter. So we must first imagine what it could be, and design a telescope/detector accordingly. That is challenging, too. We only really know that dark matter exists on the size scale of small galaxies, about 10^{19} metres, whereas our detectors are at best a metre across. This vast gulf in scales can only be bridged by theoretical models.

Multiple-scatter signatures.

The total mass of dark matter in the neighbourhood of the Sun has been ballparked, but that does not tell us how far apart the dark particles are from each other, i.e. whether they are lightweights huddled close or anvils socially distanced. Usually dark matter experiments (there are dozens around the world!) look for dark particles bunched a few centimetres apart, called WIMPs. This experiment looked, for the first time, for dark particles that may be 30 kilometres apart. In particular they looked for “MIMPs” — multiply interacting massive particles — dark matter that leaves a “track” in the detector, as opposed to the single “burst” characteristic of a WIMP. As explained here, to discover very dilute dark particles like those DEAP-3600 sought, one must necessarily look for tracks. So they carefully analyzed the waveforms of energy dumps in the detector (e.g., from radioactive material, cosmic muons, etc.) to pick out telltale tracks of dark matter.
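
To put rough numbers on “a few centimetres” versus “30 kilometres”, here is a back-of-the-envelope sketch in Python, assuming the commonly quoted local dark matter density of about 0.3 GeV/cm^3; the benchmark masses are just illustrative:

# Typical spacing between dark matter particles near Earth, assuming a
# local mass density of ~0.3 GeV/cm^3 shared among particles of one mass.
local_density_gev_per_cm3 = 0.3

def mean_spacing_cm(mass_gev):
    """Rough distance between neighbouring particles of the given mass, in cm."""
    number_density = local_density_gev_per_cm3 / mass_gev  # particles per cm^3
    return number_density ** (-1.0 / 3.0)

for label, mass_gev in [("100 GeV WIMP", 1e2), ("Planck-mass MIMP", 1.2e19)]:
    print(f"{label}: ~{mean_spacing_cm(mass_gev):.0e} cm apart")
# Prints roughly 7 cm for the WIMP benchmark and ~3e6 cm (a few tens of km)
# for the Planck-mass case, matching the scales quoted above.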

Figure above: Simulated waveforms for two benchmark parameters.

DEAP-3600 detector.

The largest dark matter detector built so far: the 130 cm-diameter, 3.3 tonne liquid argon-based DEAP (“Dark matter Experiment using Argon Pulse-shape discrimination”) at SNOLAB, Canada. Three years of data, recording whatever passed through the detector, were used. That amounts to the greatest integrated flux of dark particles through a detector in a dark matter experiment so far, enabling them to probe the frontier of “diluteness” in dark matter.

Planck-scale mass.

By looking for the most dilute dark particles, DEAP-3600 is the first laboratory experiment to say something about dark matter that may weigh a “Planck mass” — about 22 micrograms, or 1.2 \times 10^{19} GeV/c^2 — the greatest mass an elementary particle could have. That’s like breaking the sound barrier: nothing prevents you from moving faster than sound, but you’d transition into a realm of new physical effects. Similarly, nothing prevents an experiment from probing dark matter particles beyond the Planck mass. But novel, intriguing theoretical possibilities for dark matter’s unknown identity — e.g., large composite particles, solitonic balls, and charged mini-black holes — are now impacted by this result.
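
As a quick sanity check of those numbers, the Planck mass follows from \hbar, c, and G (standard CODATA values hard-coded in the sketch below):

# Planck mass from fundamental constants, converted to micrograms and GeV/c^2.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 / (kg s^2)
gev_in_joules = 1.602176634e-10

m_planck_kg = (hbar * c / G) ** 0.5
print(f"{m_planck_kg * 1e9:.0f} micrograms")                 # ~22 micrograms
print(f"{m_planck_kg * c**2 / gev_in_joules:.2e} GeV/c^2")   # ~1.22e19 GeV/c^2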

Constraints.

The experiment did not discover dark matter, but it has mapped out the dark matter masses and nucleon-scattering cross sections that are now ruled out thanks to its extensive search.

Figure above: For two classes of models of composite dark matter, DEAP-3600 limits on the cross section for dark matter scattering on nucleons versus the unknown dark matter mass. Also displayed are previously placed limits from various other searches.

[Full disclosure: the author was part of the experimental search, which was based on proposals in [a] [b]. It is hoped that this search leads the way for other collaborations to cast an even wider net than DEAP did, using their own fun tricks.]

Further reading.

[1] Proposals for detection of (super-)Planckian dark matter via purely gravitational interactions:

Laser interferometers as dark matter detectors,

Gravitational direct detection of dark matter.

[2] Constraints on (super-)Planckian dark matter from recasting searches in etched plastic and ancient underground mica.

[3] A recent multi-scatter search for dark matter reaching masses of 10^{12} GeV/c^2.

[4] Look out for Benjamin Broerman’s PhD thesis featuring results from a multi-scatter search in the bubble chamber-based PICO-60.

A new boson at 151 GeV?! Not quite yet

Title: “Accumulating Evidence for the Associate Production of a Neutral Scalar with Mass around 151 GeV”

Authors: Andreas Crivellin et al.

Reference: https://arxiv.org/abs/2109.02650

Everyone in particle physics is hungry for the discovery of a new particle outside the standard model, one that will point the way forward to a better understanding of nature. Recent anomalies, such as potential lepton flavor universality violation in B meson decays and the recent experimental confirmation of the muon g-2 anomaly, have renewed people’s hopes that there may be new particles lurking nearby, within our experimental reach. While these anomalies are exciting, if confirmed they would be ‘indirect’ evidence for new physics, revealing a concrete hole in the standard model but not definitively saying what fills that hole. We would then really like to ‘directly’ observe whatever is causing the anomaly, so we can know exactly what the new particle is and study it in detail. A direct observation usually involves producing the particle in a collider, which is what the high-momentum experiments at the LHC (ATLAS and CMS) are designed to look for.

By now these experiments have performed hundreds of different analyses of their data searching for potential signals of new particles being produced in their collisions, and so far they haven’t found anything. But in this recent paper, a group of physicists outside these collaborations argue that they may have missed such a signal in their own data. What’s more, they claim statistical evidence for this new particle at the level of around 5 sigma, which is the threshold usually corresponding to a ‘discovery’ in particle physics. If true, this would of course be huge, but there are definitely reasons to be a bit skeptical.

This group took data from various ATLAS and CMS papers that were looking for something else (mostly studying the Higgs) and noticed that several of them had an excess of events at a particular energy, 151 GeV. To see how significant these excesses were in combination, they constructed a statistical model that combines the evidence from the many different channels simultaneously. They then find that the probability of an excess appearing at the same energy in all of these channels, absent a new particle, is extremely low, and thus claim evidence for this new particle at 5.1 sigma (local).
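
To get a feel for how independent channels can be pooled, here is a toy Python sketch using Fisher’s method. This is not the authors’ procedure (they perform a full likelihood combination with floating per-channel signal strengths), and the per-channel significances below are made-up placeholders:

import numpy as np
from scipy.stats import chi2, norm

channel_sigmas = [2.5, 2.0, 1.5, 3.0]            # hypothetical local significances, in sigma
p_values = [norm.sf(z) for z in channel_sigmas]  # one-sided p-value per channel

# Fisher's method: -2 * sum(ln p) follows a chi^2 with 2N degrees of freedom
fisher_stat = -2.0 * np.sum(np.log(p_values))
p_combined = chi2.sf(fisher_stat, df=2 * len(p_values))

print(f"combined local significance ~ {norm.isf(p_combined):.1f} sigma")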

 
Figure 1 from the paper. This shows the invariant mass spectrum of the hypothetical new boson in the different channels the authors consider. The authors have combined CMS and ATLAS data from different analyses and normalized everything consistently in order to make this plot. The pink line shows the purported signal at 151 GeV. The largest significance comes from the channel where the new boson decays into two photons and is produced in association with something that decays invisibly (which produces missing energy).
A plot of the significance (p-value) as a function of the mass of the new particle. Combining all the channels, the significance reaches the level of 5 sigma. One can see that the significance is dominated by the diphoton channels.

This is of course a big claim, and one reason to be skeptical is that, without a definitive model, they cannot predict exactly how much signal you would expect to see in each of these different channels. This means that when combining the different channels, they have to let the relative strength of the signal in each channel be a free parameter. They are also combining data from a multitude of different CMS and ATLAS papers, essentially selected because they show some sort of fluctuation around 151 GeV. This sort of cherry-picking of data, together with the lack of constraints on the relative signal strengths, means that their final significance should be taken with several huge grains of salt.

The authors further attempt to quantify a global significance, which would account for the look-elsewhere effect, but due to the way they have selected their datasets this is not really possible here (in this humble experimenter’s opinion).
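
For intuition on why the look-elsewhere effect matters, here is a toy Monte Carlo in Python; the number of scanned mass points, the assumption that they fluctuate independently, and the observed local significance are all invented for illustration:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_mass_points = 50      # pretend the scan covers ~50 roughly independent mass bins
n_pseudo = 100_000      # background-only pseudo-experiments
z_local = 3.0           # a made-up local significance at one mass point

# Largest background fluctuation anywhere in the scan, per pseudo-experiment
max_z = rng.standard_normal((n_pseudo, n_mass_points)).max(axis=1)

p_local = norm.sf(z_local)
p_global = np.mean(max_z >= z_local)
print(f"local p = {p_local:.1e}, global p = {p_global:.1e}")
# The global p-value comes out much larger than the local one,
# because many mass points get a chance to fluctuate upward.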

Still, with all of those caveats, it is clear that there are some excesses in the data around 151 GeV, and it should be worth the experimental collaborations’ time to investigate them further. Most of the data the authors use comes from control regions of analyses that were focused solely on the Higgs, so this motivates the experiments to expand their focus a bit to cover these potential signals. The authors also propose a new search that would be sensitive to their purported signal, which would look for a new scalar decaying to two new particles that decay to pairs of photons and bottom quarks respectively (H → SS* → γγ bb).

 

In an informal poll on Twitter, most respondents were not convinced that a new particle has been found, but the ball is now in ATLAS and CMS’s court to analyze the data themselves and see what they find.

 

 

Read More:

“An Anomalous Anomaly: The New Fermilab Muon g-2 Results”, a Particle Bites post about one recent exciting anomaly

“The flavour of new physics”, a CERN Courier article about the recent anomalies relating to lepton flavor violation

“Unveiling Hidden Physics at the LHC”, a recent whitepaper that contains a good review of the recent anomalies relevant for LHC physics

For a good discussion of this paper claiming a new boson, see this Twitter thread