Just before the 2022 holiday season LHCb announced it was giving the particle physics community a highly anticipated holiday present: an updated measurement of the lepton flavor universality ratio R(K). Unfortunately, when the wrapping paper was removed and the measurement revealed, the entire particle physics community let out a collective groan. It was not the shiny new-physics toy we had all hoped for, but another pair of standard-model socks.
The particle physics community is by now very used to standard-model socks, receiving hundreds of pairs each year from experiments all over the world. But this time there had been reason to hope for more. Previous measurements of R(K) from LHCb had shown evidence of a violation of one of the standard model’s predictions (lepton flavor universality), making this triumph of the standard model sting much worse than most.
R(K) is the ratio of how often a B-meson (a bound state containing a b-quark) decays into final states with a kaon (a bound state containing an s-quark) plus two muons versus final states with a kaon plus two electrons. In the standard model there is a (somewhat mysterious) principle called lepton flavor universality which says that muons are just heavier versions of electrons. This principle implies that B-meson decays should produce electrons and muons equally, and that R(K) should be one.
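At its core the measurement boils down to counting decays in each channel and correcting for how efficiently each is reconstructed (LHCb's convention puts the muon channel in the numerator). Here is a toy sketch of that idea in Python; the counts and efficiencies are made-up illustrative numbers, not LHCb's actual yields, and the real analysis uses a more elaborate double-ratio technique to cancel systematic uncertainties:

```python
from math import sqrt

# Illustrative-only event counts and reconstruction efficiencies
# (hypothetical numbers, not LHCb's actual yields).
n_mumu, eff_mumu = 1500.0, 0.90   # B -> K mu mu candidates, efficiency
n_ee,   eff_ee   = 1200.0, 0.75   # B -> K e e candidates, efficiency

# Efficiency-corrected ratio of branching fractions
r_k = (n_mumu / eff_mumu) / (n_ee / eff_ee)

# Naive statistical uncertainty from Poisson counting
r_k_err = r_k * sqrt(1.0 / n_mumu + 1.0 / n_ee)

print(f"R(K) = {r_k:.3f} +/- {r_k_err:.3f}")
```

Note that in this convention an unmodeled background inflating the apparent electron yield pushes R(K) *below* one, which is exactly the direction of the old anomaly.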
But previous measurements from LHCb had found R(K) to be less than one, with around 3σ of statistical evidence. Other LHCb measurements of B-meson decays had also been showing similar hints of lepton flavor universality violation. This consistent pattern of deviations had not yet reached the significance required to claim a discovery. But it had led a good number of physicists to become #cautiouslyexcited that there might be a new particle around, possibly interacting preferentially with muons and b-quarks, causing the deviation. Several hundred papers were written outlining possibilities for what particles could cause these deviations, checking whether their existence was constrained by other measurements, and suggesting additional measurements and experiments that could rule out or discover the various possibilities.
All of this led to a considerable amount of anticipation for these updated results from LHCb, slated to be the collaboration’s final word on the anomaly using the full dataset collected during the LHC’s 2nd running period of 2016-2018. Unfortunately, what LHCb discovered in this latest analysis was that it had made a mistake in its previous measurements.
There were additional backgrounds in their electron signal region which had not previously been accounted for. These backgrounds came from decays of B-mesons into pions or kaons which can be mistakenly identified as electrons. Backgrounds from mis-identification are always difficult to model with simulation, and because these also come from decays of B-mesons they produce peaks in the data similar to the sought-after signal. Both factors combined to make them hard to spot. Without accounting for these backgrounds, it seemed as if more electron signal was being produced than expected, pulling R(K) below one. In this latest measurement LHCb found a way to estimate these backgrounds using other parts of their data. Once they were accounted for, the measurements of R(K) no longer showed any deviations; all agreed with one within uncertainties.
It is important to mention here that data analysis in particle physics is hard. As we attempt to test the limits of the standard model we are often stretching the limits of our experimental capabilities, and mistakes do happen. It is commendable that the LHCb collaboration was able to find this issue and correct the record for the rest of the community. Still, some may be a tad frustrated that the checks which uncovered these missing backgrounds were not done earlier, given the high-profile nature of these measurements (the previous result claimed ‘evidence’ of new physics and was published in Nature).
Though the R(K) anomaly has faded away, the related set of anomalies that were thought to be part of a coherent picture (including another leptonic branching ratio, R(D), and an angular analysis of the same B-meson decay into muons) remain for now. Most of these additional anomalies, however, involve significantly larger uncertainties on the Standard Model predictions than R(K) did, and are therefore less ‘clean’ indications of new physics.
So as we begin 2023, with a great deal of fresh LHC data expected to be delivered, particle physicists once again take up our seemingly Sisyphean task: to find evidence of physics beyond the standard model. We know it’s out there, but nature is under no obligation to make it easy for us.
Paper: Test of lepton universality in b→sℓ+ℓ− decays (arXiv link)
Deep underground, on the border between Switzerland and France, the Large Hadron Collider (LHC) is starting back up again after a 4 year hiatus. Today, July 5th, the LHC had its first full-energy collisions since 2018. Any time the LHC is running is exciting enough on its own, but this new run of data taking will also feature several upgrades to the LHC itself as well as to the several experiments that make use of its collisions. The physics world will be watching to see if the data from this new run confirms any of the interesting anomalies seen in previous datasets or reveals any other unexpected discoveries.
New and Improved
During the multi-year shutdown the LHC itself has been upgraded. Most notably, the energy of the colliding beams has been increased from 13 TeV to 13.6 TeV. Besides breaking its own record for the highest energy collisions ever produced, this 5% increase to the LHC’s energy will give a boost to searches looking for very rare, high-energy phenomena. The rate of collisions the LHC produces is also expected to be roughly 50% higher than the maximum achieved in previous runs. At the end of this three-year run it is expected that the experiments will have collected twice as much data as the previous two runs combined.
The experiments have also been busy upgrading their detectors to take full advantage of this new round of collisions.
The ALICE experiment had the most substantial upgrade. It features a new silicon inner tracker, an upgraded time projection chamber, a new forward muon detector, a new triggering system and an improved data processing system. These upgrades will help in its study of an exotic phase of matter called the quark-gluon plasma, a hot, dense soup of nuclear material present in the early universe.
ATLAS and CMS, the two ‘general purpose’ experiments at the LHC, had a few upgrades as well. ATLAS replaced their ‘small wheel’ detector used to measure the momentum of muons. CMS replaced the innermost part of its inner tracker, and installed a new GEM detector to measure muons close to the beamline. Both experiments also upgraded their software and data collection systems (triggers) in order to be more sensitive to the signatures of potential exotic particles that may have been missed in previous runs.
The LHCb experiment, which specializes in studying the properties of the bottom quark, also had major upgrades during the shutdown. LHCb installed a new Vertex Locator closer to the beam line and upgraded their tracking and particle identification system. It also fully revamped its trigger system to run entirely on GPUs. These upgrades should allow it to collect 5 times as much data over the next two runs as it did over the first two.
One of the main goals in particle physics now is to find direct experimental evidence of phenomena unexplained by the Standard Model. While very successful in many respects, the Standard Model leaves several mysteries unexplained, such as the nature of dark matter, the imbalance of matter over anti-matter, and the origin of neutrino masses. These are all questions many hope the LHC can help answer.
Much of the excitement for Run-3 of the LHC will center on whether the additional data can confirm some of the deviations from the Standard Model which have been seen in previous runs.
One very hot topic in particle physics right now is a series of ‘flavor anomalies’ seen by the LHCb experiment in previous LHC runs. These anomalies are deviations from the Standard Model predictions of how often certain rare decays of b quarks should occur. With its dataset so far, LHCb has not had enough data to pass the high statistical threshold required in particle physics to claim a discovery. But if these anomalies are real, Run-3 should provide enough data to do so.
There is also a decent number of ‘excesses’, potential signals of new particles being produced in LHC collisions, that have been seen by the ATLAS and CMS collaborations. The statistical significance of these excesses is all still quite low, and many such excesses have gone away with more data. But if one or more of them were confirmed in the Run-3 dataset it would be a massive discovery.
While all of these anomalies are a gamble, this new dataset will certainly also be used to measure various known quantities with better precision, improving our understanding of nature no matter what. Our understanding of the Higgs boson, the top quark, rare decays of the bottom quark, rare standard model processes, the dynamics of the quark-gluon plasma and many other areas will no doubt improve from this additional data.
In addition to these ‘known’ anomalies and measurements, whenever an experiment starts up again there is also the possibility of something entirely unexpected showing up. Perhaps one of the upgrades performed will allow the detection of something entirely new, unseen in previous runs. Perhaps FASER will see signals of long-lived particles missed by the other experiments. Or perhaps the data from the main experiments will be analyzed in a new way, revealing evidence of a new particle which had been missed up until now.
No matter what happens, the world of particle physics is a more exciting place when the LHC is running. So let’s all cheers to that!
This is part one of our coverage of the CDF W mass result covering its implications. Read about the details of the measurement in a sister post here!
Last week the physics world was abuzz with the latest results from an experiment that stopped running a decade ago. Some were heralding this as the beginning of a breakthrough in fundamental physics, headlines read “Shock result in particle experiment could spark physics revolution” (BBC). So what exactly is all the fuss about?
The result itself is an ultra-precise measurement of the mass of the W boson. The W boson is one of the carriers of the weak force, and this measurement pegged its mass at 80,433 MeV with an uncertainty of 9 MeV. The excitement comes because this value disagrees with the prediction from our current best theory of particle physics, the Standard Model. In the theoretical structure of the Standard Model the masses of the gauge bosons are all interrelated. In the Standard Model the mass of the W boson can be computed from the mass of the Z as well as a few other parameters in the theory (like the weak mixing angle). To a first approximation (i.e. to lowest order in perturbation theory), the mass of the W boson is equal to the mass of the Z boson times the cosine of the weak mixing angle. Based on other measurements that have been performed, including the Z mass, the Higgs mass, the lifetime of the muon and others, the Standard Model predicts that the mass of the W boson should be 80,357 MeV (with an uncertainty of 6 MeV). So the two numbers disagree quite strongly, at the level of 7 standard deviations.
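As a rough back-of-the-envelope check of the numbers above, one can evaluate the lowest-order relation and the quoted tension in a few lines of Python. The Z mass and weak mixing angle here are approximate PDG values; the lowest-order result lands near 80 GeV, with loop corrections responsible for closing the gap to the full prediction of 80,357 MeV:

```python
from math import sqrt

# Lowest-order relation: m_W = m_Z * cos(theta_W)
m_z = 91_187.6          # Z boson mass in MeV (approximate PDG value)
sin2_theta_w = 0.2312   # weak mixing angle, sin^2(theta_W), approximate

m_w_tree = m_z * sqrt(1.0 - sin2_theta_w)   # cos = sqrt(1 - sin^2)
print(f"tree-level m_W ~ {m_w_tree:.0f} MeV")

# Tension between CDF and the SM prediction, uncertainties in quadrature
cdf, cdf_err = 80_433.0, 9.0
sm,  sm_err  = 80_357.0, 6.0
n_sigma = (cdf - sm) / sqrt(cdf_err**2 + sm_err**2)
print(f"discrepancy ~ {n_sigma:.1f} sigma")
```

The quadrature combination reproduces the quoted 7-standard-deviation disagreement.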
If the measurement and the Standard Model prediction are both correct, this would imply that there is some deficiency in the Standard Model; some new particle interacting with the W boson whose effects haven’t been accounted for. This would be welcome news to particle physicists, as we know that the Standard Model is an incomplete theory but have been lacking direct experimental confirmation of its deficiencies. The size of the discrepancy would also mean that whatever new particle was causing the deviation may be directly detectable within our current or near-future colliders.
If this discrepancy is real, exactly what new particles would this entail? Judging based on the 30+ (and counting) papers released on the subject in the last week, there are a good number of possibilities. Some examples include extra Higgs bosons, extra Z-like bosons, and vector-like fermions. It would take additional measurements and direct searches to pick out exactly what the culprit was. But it would hopefully give experimenters definite targets of particles to look for, which would go a long way in advancing the field.
But before everyone starts proclaiming the Standard Model dead and popping champagne bottles, it’s important to take stock of this new CDF measurement in the larger context. Measurements of the W mass are hard; that’s why it has taken the CDF collaboration over 10 years to publish this result since they stopped taking data. And although this measurement is the most precise one to date, several other W mass measurements have been performed by other experiments.
The Other Measurements
Previous measurements of the W mass have come from experiments at the Large Electron-Positron collider (LEP), another experiment at the Tevatron (D0) and experiments at the LHC (ATLAS and LHCb). Though none of these were as precise as this new CDF result, they had been painting a consistent picture of a value in agreement with the Standard Model prediction. If you take the average of these other measurements, it differs from the CDF measurement at the level of about 4 standard deviations, which is quite significant. This discrepancy seems large enough that it is unlikely to arise from purely random fluctuation, and likely means that either some uncertainties have been underestimated or something has been overlooked in either the previous measurements or this new one.
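The ‘average of the other measurements’ here is the standard inverse-variance weighted average. A sketch of that combination in Python, using values that only approximate the published LEP, D0, ATLAS and LHCb results (treat them as illustrative, not official):

```python
from math import sqrt

# Approximate stand-ins for the non-CDF W mass results (MeV):
# roughly LEP, D0, ATLAS, LHCb -- illustrative, not official numbers.
measurements = [(80_376.0, 33.0), (80_375.0, 23.0),
                (80_370.0, 19.0), (80_354.0, 32.0)]

# Inverse-variance weighted average: weight each result by 1/sigma^2
weights = [1.0 / err**2 for _, err in measurements]
avg = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
avg_err = 1.0 / sqrt(sum(weights))

cdf, cdf_err = 80_433.0, 9.0
tension = (cdf - avg) / sqrt(avg_err**2 + cdf_err**2)
print(f"average = {avg:.0f} +/- {avg_err:.0f} MeV, "
      f"tension with CDF ~ {tension:.1f} sigma")
```

With numbers of this size the combination sits around 80,370 MeV with an uncertainty near 12 MeV, giving roughly the 4-sigma tension quoted above.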
What one would like are additional, independent, high precision measurements that could either confirm the CDF value or the average value of the previous measurements. Unfortunately it is unlikely that such a measurement will come in the near future. The only currently running facility capable of such a measurement is the LHC, but it will be difficult for experiments at the LHC to rival the precision of this CDF one.
W mass measurements are somewhat harder at the LHC than at the Tevatron for a few reasons. First of all, the LHC is a proton-proton collider, while the Tevatron was a proton-antiproton collider, and the LHC also operates at a higher collision energy. Both differences cause W bosons produced at the LHC to have more momentum than those produced at the Tevatron. Modeling of the W boson’s momentum distribution can be a significant uncertainty in its mass measurement, and the extra momentum of W’s at the LHC makes this a larger effect. Additionally, the LHC has a higher collision rate, meaning that each time a W boson is produced there are tens of other collisions laid on top of it (rather than only a few, as at the Tevatron). These extra collisions are called pileup and can make precision measurements like this one harder. In particular, the neutrino’s momentum has to be inferred from the momentum imbalance in the event, and this becomes harder when there are many collisions on top of each other. Of course W mass measurements are possible at the LHC, as evidenced by ATLAS and LHCb’s already published results, and we can look forward to improved results from ATLAS and LHCb as well as a first result from CMS. But it may be very difficult for them to reach the precision of this CDF result.
A future electron positron collider would be able to measure the W mass extremely precisely by using an alternate method. Instead of looking at the W’s decay, the mass could be measured through its production, by scanning the energy of the electron beams very close to the threshold to produce two W bosons. This method should offer precision significantly better than even this CDF result. However any measurement from a possible future electron positron collider won’t come for at least a decade.
In the coming months, expect this new CDF measurement to receive a lot of buzz. Experimentalists will be poring over the details trying to figure out why it is in tension with previous measurements and working hard to produce new measurements from LHC data. Meanwhile theorists will write a bunch of papers detailing the possibilities of what new particles could explain the discrepancy and whether there is a connection to other outstanding anomalies (like the muon g-2). But the big question of whether we are seeing the first real crack in the Standard Model or there is some mistake in one or more of the measurements is unlikely to be answered for a while.
If you want to learn about how the measurement actually works, check out this sister post!
Rencontres de Moriond is probably the biggest ski-vacation conference of the year in particle physics, and is one of the places big particle physics experiments often unveil their new results. For the last few years the buzz in particle physics has surrounded ‘indirect’ probes of new physics, specifically the latest measurement of the muon’s anomalous magnetic moment (g-2) and hints from LHCb of lepton flavor universality violation. If either of these anomalies were confirmed it would of course be huge, definitive laboratory evidence for physics beyond the standard model, but it would not answer the question of what exactly that new physics was. As evidenced by the 500+ papers written in the last year offering explanations of the g-2 anomaly, there are a lot of different potential explanations.
A definitive answer would come in the form of a ‘direct’ observation of whatever particle is causing the anomaly, which traditionally means producing and observing said particle in a collider. So far, though, the largest experiments performing these direct searches, ATLAS and CMS, have not shown any hints of new particles. But this Moriond, as the LHC experiments get ready for the start of a new data-taking run later this year, both collaborations unveiled ‘excesses’ in their Run-2 data. These excesses, extra events above a background prediction that resemble the signature of a new particle, don’t have enough statistical significance to claim discoveries yet, and may disappear as more data is collected, as many an excess has done before. But they are intriguing, and some have connections to anomalies seen in other experiments.
So while there have been many great talks at Moriond (covering cosmology, astro-particle searches for dark matter, neutrino physics, flavor physics measurements, and more) and the conference is still ongoing, it’s worth reviewing these new excesses in particular and what they might mean.
Most searches for new particles at the LHC assume that said new particles decay very quickly once they are produced, so that their signatures can be pieced together by measuring all the particles they decay to. However, in the last few years there has been increasing interest in searching for particles that don’t decay quickly and therefore leave striking signatures in the detectors that can be distinguished from regular Standard Model particles. This particular ATLAS analysis searches for particles that are long-lived, heavy, and charged. Due to their heavy masses (and/or large charges), such particles will produce greater ionization signals as they pass through the detector than Standard Model particles would. The analysis selects tracks with high momentum and unusually high ionization signals, and finds an excess of events with high mass and high ionization, with a significance of 3.3-sigma.
If the background has been estimated properly, this seems to be quite a clear signature and it might be time to get excited. ATLAS has checked that these events are not due to any known instrumental defect, but they do offer one caveat. A heavy particle like this (with a mass of ~1 TeV) would be expected to move noticeably slower than the speed of light. But when ATLAS compares the ‘time of flight’ of the particle, how long it takes to reach their detectors, its velocity appears indistinguishable from the speed of light, exactly what one would expect for background Standard Model particles.
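To see why the light-speed time of flight is such a puzzle, one can compute the velocity (as a fraction of the speed of light, β = p/E) expected for a ~TeV particle at typical LHC momenta. The mass and momenta below are hypothetical, illustrative numbers in natural units:

```python
from math import sqrt

def beta(p_gev, m_gev):
    # velocity (as a fraction of c) of a particle with momentum p, mass m
    # (natural units: E = sqrt(p^2 + m^2), beta = p/E)
    return p_gev / sqrt(p_gev**2 + m_gev**2)

# A hypothetical ~1 TeV long-lived particle at illustrative LHC momenta:
for p in (500.0, 1000.0, 2000.0):
    print(f"p = {p:6.0f} GeV -> beta = {beta(p, 1000.0):.3f}")

# For comparison, a light Standard Model particle (a 0.106 GeV muon)
# at even modest momentum is indistinguishable from the speed of light:
print(f"muon at 50 GeV -> beta = {beta(50.0, 0.106):.6f}")
```

Unless produced with momentum far above its mass, a TeV-scale particle should be measurably slower than light, so a light-speed time of flight sits uneasily with the heavy-particle interpretation.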
So what exactly to make of this excess is somewhat unclear. Hopefully CMS can weigh in soon!
Excesses 2-4: CMS’s Taus; Vector-Like-Leptons and TauTau Resonance(s)
Paper 1 : https://cds.cern.ch/record/2803736
Paper 2: https://cds.cern.ch/record/2803739
Many of the models seeking to explain the flavor anomalies seen by LHCb predict new particles that couple preferentially to taus and b-quarks. These two separate CMS analyses look for particles that decay specifically to tau leptons.
In the first analysis they look for pairs of vector-like-leptons (VLLs), the lightest particles predicted in one of the favored models to explain the flavor anomalies. The VLLs are predicted to decay into tau leptons and b-quarks, so the analysis targets events which have at least four b-tagged jets and reconstructed tau leptons. They trained a machine learning classifier to separate VLLs from their backgrounds, and see an excess of events at high VLL classification probability in the categories with 1 or 2 reconstructed taus, with a significance of 2.8 standard deviations.
In the second analysis they look for new resonances that decay into two tau leptons. They employ a sophisticated ’embedding’ technique to estimate the large background of Z bosons decaying to tau pairs, using recorded decays of Z bosons to muons. They see two excesses, one at 100 GeV and one at 1200 GeV, each with a significance of around 3-sigma. The excess at ~100 GeV could also be related to another CMS analysis that saw an excess of diphoton events at ~95 GeV, especially since, if there were an additional Higgs-like boson at 95 GeV, diphoton and ditau would likely be the two channels in which it would first appear.
While the statistical significances of these excesses are not quite as high as the first one’s, meaning it is more likely they are fluctuations that will disappear with more data, their connection to other anomalies is quite intriguing.
Excess 5: CMS Paired Dijet Resonances
Often statistical significance doesn’t tell the full story of an excess. When CMS first performed its standard dijet search on Run-2 LHC data, in which one looks for a resonance decaying to two jets by looking for bumps in the dijet invariant mass spectrum, they did not find any significant excesses. But they did note one particularly striking event, with 4 jets forming two ‘wide jets’, each with a mass of 1.9 TeV, and a 4-jet mass of 8 TeV.
This single event seems very unlikely to have been produced via normal Standard Model QCD, which normally yields a regular 2-jet topology. However, a new 8 TeV resonance which decayed to two intermediate particles with masses of 1.9 TeV, each of which then decayed to a pair of jets, would lead to exactly such a signature. This motivated them to design this analysis, a new search specifically targeting this paired dijet resonance topology. In this new search they have now found a second event with very similar characteristics. The local statistical significance of this excess is 3.9-sigma, but when one accounts for the many different potential dijet and 4-jet mass combinations considered in the analysis, that drops to 1.6-sigma.
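The drop from 3.9-sigma local to 1.6-sigma global is the ‘look-elsewhere effect’: scanning many mass combinations gives random fluctuations many chances to look like a signal. A stdlib-only Python sketch of the conversion; the trials factor below is a made-up round number chosen purely for illustration, not CMS’s actual accounting:

```python
from math import erfc, sqrt

def p_from_sigma(z):
    # one-sided Gaussian tail probability for a z-sigma excess
    return 0.5 * erfc(z / sqrt(2.0))

def sigma_from_p(p):
    # invert p_from_sigma by bisection (p_from_sigma is decreasing in z)
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if p_from_sigma(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_local = p_from_sigma(3.9)

# Hypothetical trials factor: the effective number of independent
# mass combinations scanned (illustrative round number)
n_trials = 1200
p_global = 1.0 - (1.0 - p_local) ** n_trials

print(f"local 3.9 sigma (p={p_local:.2g}) "
      f"-> global ~{sigma_from_p(p_global):.1f} sigma")
```

With a trials factor of this order, a 3.9-sigma local fluctuation deflates to roughly the 1.6-sigma global figure quoted in the paper.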
Though 1.6-sigma is relatively low, the striking nature of these events is certainly intriguing and warrants follow-up. Run-3 will also bring a slight increase to the LHC’s energy (13 TeV to 13.6 TeV), which will give the production rate of any new 8 TeV particles a not-insignificant boost.
The safe bet on any of these excesses would probably be that it will disappear with more data, as many excesses have done in the past. And many particle physicists are probably wary of getting too excited after the infamous 750 GeV diphoton fiasco, in which many people got very excited about (and wrote hundreds of papers on) a few-sigma excess in CMS and ATLAS data that disappeared as more data was collected. All of these excesses are, for now, from analyses performed by a single experiment (ATLAS or CMS), but both experiments have similar capabilities, so it will be interesting to see what the counterpart has to say once it performs a similar analysis on its Run-2 data. At the very least these results add some excitement for the upcoming LHC Run-3: the LHC collisions are starting up again this year after being on hiatus since 2018.
References: https://arxiv.org/abs/1712.07158 (CMS) and https://arxiv.org/abs/1907.05120 (ATLAS)
If you are looking for love at the Large Hadron Collider this Valentine’s Day, you won’t find a better eligible bachelor than the b-quark. The b-quark (also called the ‘beauty’ quark if you are feeling romantic, the ‘bottom’ quark if you are feeling crass, or a ‘beautiful bottom quark’ if you are trying to weird people out) is the 2nd heaviest quark, behind the top quark. It hangs out with a cool crowd, as it is the Higgs’s favorite decay and the top quark’s BFF; two particles we would all like to learn a bit more about.
No one wants a romantic partner who is boring, and can’t stand out from the crowd. Unfortunately when most quarks or gluons are produced at the LHC, they produce big sprays of particles called ‘jets’ that all look the same. That means even if the up quark was giving you butterflies, you wouldn’t be able to pick its jets out from those of strange quarks or down quarks, and no one wants to be pressured into dating a whole friend group. But beauty quarks can set themselves apart in a few ways. So if you are swiping through LHC data looking for love, try using these tips to find your b(ae).
Look for a partner who’s not afraid of commitment and loves to travel. Beauty quarks live longer than all the other quarks (a full 1.5 picoseconds; sub-atomic love is unfortunately very fleeting), letting them explore their love of traveling (up to a centimeter from the beamline, a great honeymoon spot I’ve heard) before decaying.
You want a lover who will bring you gifts, which you can hold on to even after they are gone. And when beauty quarks decay, you won’t be in despair, but rather charmed with your new c-quark companion. And sometimes, if they are really feeling the magic, they leave behind charged leptons when they go, so you will have something to remember them by.
But even with these standout characteristics, beauty can still be hard to find, as there are a lot of un-beautiful quarks in the sea you don’t want to get hung up on. There is more to beauty than meets the eye, and as you get to know them you will find that beauty quarks have even more subtle features that make them stick out from the rest. So if you are serious about finding love in 2022, it may be time to turn to the romantic innovation sweeping the nation: modern machine learning. Even if we would all love to spend many sleepless nights learning all about them, unfortunately these days it feels like the-scientist-she-tells-you-not-to-worry-about, neural networks, will always understand them a bit better. So join the great romantics of our time (CMS and ATLAS) in embracing the modern dating scene, and let the algorithms find the most beautiful quarks for you.
So if you are looking for love this Valentine’s Day, look no further than the beauty quark. And if you are feeling hopeless, you can take inspiration from this decades-in-the-making love story from a few years ago: “Higgs Decay into Bottom Beauty Quarks Seen at Last”
Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”
Authors: The MicroBooNE Collaboration
This is the first post in a series on the latest MicroBooNE results, covering the experimental side. Click here to read about the theory side.
The new results from the MicroBooNE experiment generated a lot of excitement last week, being covered by several major news outlets. But unlike most physics news stories that make the press, this was a null result; they did not see any evidence for new particles or interactions. So why is it so interesting? Particle physics experiments produce null results every week, but what made this one newsworthy is that MicroBooNE was trying to check the results of two previous experiments, LSND and MiniBooNE, that did see something anomalous with very high statistical significance. If the LSND/MiniBooNE result had been confirmed, it would have been a huge breakthrough in particle physics, but now that it wasn’t, many physicists are scratching their heads trying to make sense of these seemingly conflicting results. However, the MicroBooNE experiment is not exactly the same as MiniBooNE/LSND, and understanding the differences between the two sets of experiments may play an important role in unraveling this mystery.
Accelerator Neutrino Basics
All of these experiments are ‘accelerator neutrino experiments’, so let’s first review what that means. Neutrinos are ‘ghostly’ particles that are difficult to study (check out this post for more background on neutrinos). Because they only couple through the weak force, neutrinos don’t like to interact with anything very much. So in order to detect them you need both a big detector with a lot of active material and a source that produces a lot of neutrinos. These experiments are designed to detect neutrinos produced in a human-made beam. To make the beam, a high energy beam of protons is directed at a target. These collisions produce a lot of particles, including unstable bound states of quarks like pions and kaons. These unstable particles have charge, so we can use magnets to focus them into a well-behaved beam. When the pions and kaons decay they usually produce a muon and a muon neutrino. The beam of pions and kaons is pointed at an underground detector located a few hundred meters (or kilometers!) away, and given time to decay. After they decay there will be a nice beam of muons and muon neutrinos. The muons can be stopped by some kind of shielding (like the earth’s crust), but the neutrinos will sail right through to the detector.
Nearly all of the neutrinos from the beam will still pass right through your detector, but a few of them will interact, allowing you to learn about their properties.
All of these experiments are considered ‘short-baseline’ because the distance between the neutrino source and the detector is only a few hundred meters (unlike the hundreds of kilometers in other such experiments). These experiments were designed to look for oscillation of the beam’s muon neutrinos into electron neutrinos which then interact with their detector (check out this post for some background on neutrino oscillations). Given the types of neutrinos we know about and their properties, this should be too short a distance for neutrinos to oscillate, so any observed oscillation would be an indication that something new (beyond the Standard Model) was going on.
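Quantitatively, the standard two-flavor oscillation formula shows why the baseline is ‘too short’: P = sin²(2θ) · sin²(1.27 · Δm²[eV²] · L[km] / E[GeV]). A quick sketch in Python with illustrative short-baseline numbers, comparing the known atmospheric mass splitting against a hypothetical ~1 eV² ‘sterile neutrino’ splitting of the kind invoked to explain the anomaly:

```python
from math import sin

def p_osc(sin2_2theta, dm2_ev2, length_km, energy_gev):
    # Two-flavor appearance probability:
    # P = sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])
    return sin2_2theta * sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2

L, E = 0.5, 0.8  # illustrative short-baseline distance (km) and energy (GeV)

# Known atmospheric splitting (~2.5e-3 eV^2), even with maximal mixing:
# essentially zero oscillation this close to the source
print(p_osc(1.0, 2.5e-3, L, E))

# Hypothetical ~1 eV^2 'sterile' splitting with small mixing:
# a percent-level appearance probability, i.e. an observable excess
print(p_osc(0.01, 1.0, L, E))
```

The oscillation phase grows with Δm² · L/E, so only a new, much larger mass splitting can produce a visible effect over a few hundred meters.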
The LSND + MiniBooNE Anomaly
So the LSND and MiniBooNE ‘anomaly’ was an excess of events above backgrounds that looked like electron neutrinos interacting with their detectors. Both detectors were based on similar technology and sat a similar distance from their neutrino source. The detectors were essentially big tanks of mineral oil lined with light-detecting sensors.
At these energies the most common way neutrinos interact is to scatter against a neutron to produce a proton and a charged lepton (called a ‘charged current’ interaction). Electron neutrinos will produce outgoing electrons and muon neutrinos will produce outgoing muons.
When traveling through the mineral oil these charged leptons will produce a ring of Cherenkov light which is detected by the sensors on the edge of the detector. Muons and electrons can be differentiated based on the characteristics of the Cherenkov light they emit. Electrons will undergo multiple scatterings off the detector material while muons will not. This makes the Cherenkov rings of electrons ‘fuzzier’ than those of muons. High energy photons can produce electron-positron pairs which look very similar to a regular electron signal and are thus a source of background.
Even with a good beam and a big detector, the feebleness of neutrino interactions means that it takes a while to get a decent number of potential events. The MiniBoone experiment ran for 17 years looking for electron neutrinos scattering in their detector. In MiniBoone’s most recent analysis, they saw around 600 more events than would be expected if there were no anomalous electron neutrinos reaching their detector. The statistical significance of this excess, 4.8-sigma, was very high. Combining with LSND, which saw a similar excess, the significance was above 6-sigma. This means it’s very unlikely this is a statistical fluctuation. So either there is some new physics going on or one of their backgrounds has been seriously underestimated. This excess of events is what has been dubbed the ‘MiniBoone anomaly’.
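To get a feel for what a number like 4.8-sigma means, here is a back-of-the-envelope significance estimate for an excess over an expected background. The counts and uncertainty below are made up for illustration; the real MiniBoone result comes from a full likelihood fit with systematic covariance matrices, not this simple formula.

```python
import math

def naive_significance(n_observed, n_background, bkg_uncertainty=0.0):
    """Toy Gaussian significance of a counting excess.

    z = (observed - expected background) / total uncertainty, where the
    total uncertainty combines Poisson fluctuations of the background
    with a systematic uncertainty on the background estimate.
    """
    excess = n_observed - n_background
    sigma = math.sqrt(n_background + bkg_uncertainty ** 2)
    return excess / sigma

# Purely illustrative numbers (not the experiment's actual event counts):
# a ~600-event excess over a background known to within ~80 events.
z = naive_significance(n_observed=2400, n_background=1800, bkg_uncertainty=80)
```

Even this crude estimate makes the key point: an excess of hundreds of events over a well-understood background is far too large to wave away as a statistical fluke.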
The MicroBoone Result
The MicroBoone experiment was commissioned to verify the MiniBoone anomaly as well as test out a new type of neutrino detector technology. MicroBoone is the first major neutrino experiment to use a ‘Liquid Argon Time Projection Chamber’ detector. This new detector technology allows more detailed reconstruction of what is happening when a neutrino scatters in the detector. The active volume of the detector is liquid argon, which allows both light and charge to propagate through it. When a neutrino scatters in the liquid argon, scintillation light is produced that is collected in sensors. As charged particles created in the collision pass through the liquid argon they ionize atoms they pass by. An electric field applied to the detector causes this produced charge to drift towards a mesh of wires where it can be collected. By measuring the difference in arrival time between the light and the charge, as well as the amount of charge collected at different positions and times, the precise location and trajectory of the particles produced in the collision can be determined.
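The timing trick at the heart of a time projection chamber can be sketched in a few lines. The scintillation light arrives essentially instantly, so the delay before the drifting charge reaches the wires gives the distance it drifted. The drift velocity below (~1.6 mm/µs, a typical value for liquid argon at fields of a few hundred V/cm) is an assumed illustrative number, not MicroBoone's calibrated value.

```python
def drift_coordinate(t_light_us, t_charge_us, v_drift_mm_per_us=1.6):
    """Reconstruct the drift-direction coordinate of an ionization deposit.

    The light flash marks the interaction time; the charge arrival time
    at the wire planes is delayed by how far the electrons had to drift.
    Multiplying the delay by the (assumed) drift velocity gives the
    position along the drift direction.
    """
    return v_drift_mm_per_us * (t_charge_us - t_light_us)

# A deposit whose charge arrives 500 microseconds after the flash
# drifted roughly 800 mm before reaching the wires.
x_mm = drift_coordinate(t_light_us=0.0, t_charge_us=500.0)
```

The wire positions give the other two coordinates, which is how the detector builds up full 3D images of each interaction.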
This means that, unlike MiniBoone and LSND, MicroBoone can see not just the lepton but also the hadronic particles (protons, pions, etc.) produced when a neutrino scatters in their detector. Because of this extra detail, the same type of neutrino interaction can look quite different in their detector depending on what else was produced. So when they went to test the MiniBoone anomaly they adopted multiple different strategies of what exactly to look for. In the first case they looked for the type of interaction that an electron neutrino would have most likely produced: an outgoing electron and proton whose kinematics match those of a charged current interaction. Their second set of analyses, designed to mimic the MiniBoone selection, is slightly more general: they require one electron and any number of protons, but no pions. Their third analysis is the most general and requires an electron along with anything else.
These different analyses have different levels of sensitivity to the MiniBoone anomaly, but all of them are found to be consistent with a background-only hypothesis: there is no sign of any excess events. Three out of four of them even see slightly fewer events than the expected background.
Overall the MicroBoone data rejects the hypothesis that the MiniBoone anomaly is due to electron neutrino charged current interactions at quite high significance (>3-sigma). So if it’s not electron neutrinos causing the MiniBoone anomaly, what is it?
What’s Going On?
Given that MicroBoone did not see any signal, many would guess that MiniBoone’s claim of an excess must be flawed and that they underestimated one of their backgrounds. Unfortunately it is not very clear what that could be. If you look at the low-energy region where MiniBoone has an excess, there are three major background sources: decays of the Delta baryon that produce a photon (shown in tan), neutral pions decaying to pairs of photons (shown in red), and backgrounds from true electron neutrinos (shown in various shades of green). However, all of these sources of background seem quite unlikely to be the source of the MiniBoone anomaly.
Before releasing these results, MicroBoone performed a dedicated search for Delta baryons decaying into photons, and saw a rate in agreement with the theoretical prediction MiniBoone used, and well below the amount needed to explain the MiniBoone excess.
Backgrounds from true electron neutrinos produced in the beam, as well as from the decays of muons, should not concentrate only at low energies like the excess does, and their rate has also been measured within MiniBoone data by looking at other signatures.
The decay of a neutral pion can produce two photons, and if one of them escapes detection, the single remaining photon will mimic their signal. However, one would expect photons to be more likely to escape near the edges of the detector, while the excess events are distributed uniformly throughout the detector volume.
So now the mystery of what could be causing this excess is even greater. If it is a background, it seems most likely it is from an unknown source not previously considered. As will be discussed in our part 2 post, it’s possible that the MiniBoone anomaly was caused by a more exotic form of new physics; possibly the excess events in MiniBoone were not really coming from the scattering of electron neutrinos but from something else that produced a similar signature in their detector. Some of these explanations include particles that decay into pairs of electrons or photons. These sorts of explanations should be testable with MicroBoone data but will require dedicated analyses for their different signatures.
So on the experimental side, we are now left to scratch our heads and wait for new results from MicroBoone that may help get to the bottom of this.
Click here for part 2 of our MicroBoone coverage that goes over the theory side of the story!
Title: “Accumulating Evidence for the Associate Production of a Neutral Scalar with Mass around 151 GeV”
Authors: Andreas Crivellin et al.
Everyone in particle physics is hungry for the discovery of a new particle not in the standard model, one that will point the way forward to a better understanding of nature. And recent anomalies (potential lepton flavor universality violation in B meson decays and the recent experimental confirmation of the muon g-2 anomaly) have renewed people’s hopes that there may be new particles lurking nearby, within our experimental reach. While these anomalies are exciting, if they are confirmed they would be ‘indirect’ evidence for new physics, revealing a concrete hole in the standard model but not definitively saying what it is that fills that hole. We would then really like to ‘directly’ observe what was causing the anomaly, so we can know exactly what the new particle is and study it in detail. A direct observation usually involves being able to produce the particle in a collider, which is what the high momentum experiments at the LHC (ATLAS and CMS) are designed to look for.
By now these experiments have done hundreds of different analyses of their data searching for potential signals of new particles being produced in their collisions, and so far they haven’t found anything. But in this recent paper, a group of physicists outside these collaborations argue that they may have missed such a signal in their own data. What’s more, they claim statistical evidence for this new particle at the level of around 5-sigma, which is the threshold usually corresponding to a ‘discovery’ in particle physics. If true, this would of course be huge, but there are definitely reasons to be a bit skeptical.
This group took data from various ATLAS and CMS papers that were looking for something else (mostly studying the Higgs) and noticed that multiple of them had an excess of events at a particular energy, 151 GeV. In order to see how significant these excesses were in combination, they constructed a statistical model that combined evidence from the many different channels simultaneously. They then evaluated the probability of there being an excess at the same energy in all of these channels without a new particle, found it to be extremely low, and thus claim evidence for this new particle at 5.1-sigma (local).
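The intuition behind such a combination is that several individually unremarkable excesses, if they all line up at the same energy, can add up to something striking. The sketch below uses Stouffer's method, a simple textbook way to combine independent significances; the paper's actual combination is a full joint fit with free per-channel signal strengths, and the per-channel values here are made up for illustration.

```python
import math

def stouffer_combination(z_values):
    """Combine independent per-channel significances via Stouffer's method.

    A simplified stand-in for a proper joint likelihood fit: the combined
    significance is the sum of the individual z-values divided by the
    square root of the number of channels.
    """
    return sum(z_values) / math.sqrt(len(z_values))

def z_to_pvalue(z):
    """One-sided Gaussian p-value corresponding to a significance z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical per-channel excesses, each too mild to claim anything alone:
channels = [2.0, 1.5, 2.5, 1.8, 2.2]
z_comb = stouffer_combination(channels)
p_comb = z_to_pvalue(z_comb)
```

Five mild ~2-sigma bumps combine here to well above 4-sigma, which illustrates both the power of combinations and their danger: if the channels were picked *because* they fluctuate, the combination inherits that bias.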
This is of course a big claim, and one reason to be skeptical is that, without a definitive model, they cannot predict exactly how much signal you would expect to see in each of these different channels. This means that when combining the different channels, they have to let the relative strength of the signal in each channel be a free parameter. They are also combining data from a multitude of different CMS and ATLAS papers, essentially selected because they show some sort of fluctuation around 151 GeV. This cherry-picking of data, together with the unconstrained relative signal strengths, means that their final significance should be taken with several huge grains of salt.
The authors further attempt to quantify a global significance, which would account for the look-elsewhere effect, but due to the way they have selected their datasets this is not really possible in this case (in this humble experimenter’s opinion).
Still, with all of those caveats, it is clear that there are some excesses in the data around 151 GeV, and it should be worth the experimental collaborations’ time to investigate them further. Most of the data the authors use comes from control regions of analyses that were focused solely on the Higgs, so this motivates the experiments expanding their focus a bit to cover these potential signals. The authors also propose a new search that would be sensitive to their purported signal, which would look for a new scalar decaying to two new particles that decay to pairs of photons and bottom quarks respectively (H->SS*-> γγ bb).
In an informal poll on Twitter, most were not convinced a new particle has been found, but the ball is now in ATLAS and CMS’s courts to analyze the data themselves and see what they find.
You might have heard that one of the big things we are looking for in collider experiments are the ever elusive dark matter particles. But given that dark matter particles are expected to interact very rarely with regular matter, how would you know if you happened to make some in a collision? The so-called ‘direct detection’ experiments have to operate giant multi-ton detectors in extremely low-background environments in order to be sensitive to an occasional dark matter interaction. In the noisy environment of a particle collider like the LHC, in which collisions producing sprays of particles happen every 25 nanoseconds, the extremely rare interaction of dark matter with our detector is likely to be missed. But instead of finding dark matter by seeing it in our detector, we can instead find it by not seeing it. That may sound paradoxical, but it’s how most collider-based searches for dark matter work.
The trick is based on every physicist’s favorite principle: the conservation of energy and momentum. We know that energy and momentum will be conserved in a collision, so if we know the initial momentum of the incoming particles, and measure everything that comes out, then any invisible particles produced will show up as an imbalance between the two. In a proton-proton collider like the LHC we don’t know the initial momentum of the particles along the beam axis, but we do know that they were traveling along that axis. That means that the net momentum in the direction away from the beam axis (the ‘transverse’ direction) should be zero. So if we see a momentum imbalance going away from the beam axis, we know that there is some ‘invisible’ particle traveling in the opposite direction.
We normally refer to the amount of transverse momentum imbalance in an event as its ‘missing momentum’. Any collision in which an invisible particle was produced will have missing momentum as a tell-tale sign. But while it is a very interesting signature, missing momentum can actually be very difficult to measure. That’s because in order to tell if there is anything missing, you have to accurately measure the momentum of every particle in the collision. Our detectors aren’t perfect; any particles we miss, or whose momentum we mis-measure, will show up as a ‘fake’ missing energy signature.
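The bookkeeping behind missing transverse momentum is simple vector arithmetic: sum the transverse momenta of everything you *did* see, and whatever is needed to balance the event is the missing momentum. The toy event below is invented for illustration.

```python
import math

def missing_transverse_momentum(visible_particles):
    """Magnitude of the missing transverse momentum in an event.

    Each visible particle is given as its (px, py) transverse momentum
    components. Momentum conservation in the transverse plane means the
    visible particles should sum to zero; any nonzero vector sum must be
    balanced by something invisible, and its magnitude is the 'missing'
    momentum.
    """
    sum_px = sum(px for px, py in visible_particles)
    sum_py = sum(py for px, py in visible_particles)
    return math.hypot(sum_px, sum_py)

# Toy event (units: GeV): two nearly back-to-back jets plus a lepton,
# leaving a 40 GeV imbalance that points at an invisible particle.
event = [(100.0, 0.0), (-100.0, 5.0), (0.0, -45.0)]
met = missing_transverse_momentum(event)
```

This also makes clear why mis-measurement is so dangerous: shift any one of those visible momenta and the 'missing' vector changes by exactly the same amount.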
Even if you can measure the missing energy well, dark matter particles are not the only ones invisible to our detector. Neutrinos are notoriously difficult to detect and will not get picked up by our detectors, producing a ‘missing energy’ signature. This means that any search for new invisible particles, like dark matter, has to understand the background of neutrino production (often from the decay of a Z or W boson) very well. No one ever said finding the invisible would be easy!
However particle physicists have been studying these processes for a long time so we have gotten pretty good at measuring missing energy in our events and modeling the standard model backgrounds. Missing energy is a key tool that we use to search for dark matter, supersymmetry and other physics beyond the standard model.
Title : New physics and tau g−2 using LHC heavy ion collisions
Authors: Lydia Beresford and Jesse Liu
Since April, particle physics has been going crazy with excitement over the recent announcement of the muon g-2 measurement, which may be our first laboratory hint of physics beyond the Standard Model. The paper with the new measurement has racked up over 100 citations in the last month. Most of these papers are theorists proposing various models to try and explain the (controversial) discrepancy between the measured value of the muon’s magnetic moment and the Standard Model prediction. The sheer number of papers shows there are many, many models that can explain the anomaly. So if the discrepancy is real, we are going to need new measurements to whittle down the possibilities.
Given that the current deviation is in the magnetic moment of the muon, one very natural place to look next would be the magnetic moment of the tau lepton. The tau, like the muon, is a heavier cousin of the electron. It is the heaviest lepton, coming in at 1.78 GeV, around 17 times heavier than the muon. In many models of new physics that explain the muon anomaly, the shift in the magnetic moment of a lepton is proportional to the mass of the lepton squared. This would explain why we are seeing a discrepancy in the muon’s magnetic moment and not the electron’s (though there is actually currently a small hint of a deviation for the electron too). This means the tau should be around 280 times more sensitive than the muon to the new particles in these models. The trouble is that the tau has a much shorter lifetime than the muon, decaying away in just 10⁻¹³ seconds. This means that the techniques used to measure the muon’s magnetic moment, based on magnetic storage rings, won’t work for taus.
That’s where this new paper comes in. It details a new technique to try and measure the tau’s magnetic moment using heavy ion collisions at the LHC. The technique is based on light-light collisions (previously covered on Particle Bites) where two nuclei emit photons that then interact to produce new particles. Though in classical electromagnetism light doesn’t interact with itself (the beams from two spotlights pass right through each other), at very high energies each photon can split into new particles, like a pair of tau leptons, and then those particles can interact. Though the LHC normally collides protons, it also has runs colliding heavier nuclei like lead. Lead nuclei have more charge than protons, so they emit high energy photons more often and produce more light-light collisions.
Light-light collisions which produce tau leptons provide a nice environment to study the interaction of the tau with the photon. A particle’s magnetic properties are determined by its interaction with photons, so by studying these collisions you can measure the tau’s magnetic moment.
However, studying this process is easier said than done. These light-light collisions are “ultra-peripheral” because the lead nuclei are not colliding head on, so the taus produced generally don’t have a large amount of momentum away from the beamline. This can make them hard to reconstruct in detectors which have been designed to measure particles from head-on collisions, which typically have much more momentum. Taus can decay in several different ways, but always produce at least one neutrino which will not be detected by the LHC experiments, further reducing the amount of detectable momentum and meaning some information about the collision will be lost.
However one nice thing about these events is that they should be quite clean in the detector. Because the lead nuclei remain intact after emitting the photon, the taus won’t come along with the bunch of additional particles you often get in head on collisions. The level of background processes that could mimic this signal also seems to be relatively minimal. So if the experimental collaborations spend some effort in trying to optimize their reconstruction of low momentum taus, it seems very possible to perform a measurement like this in the near future at the LHC.
The authors of this paper estimate that such a measurement with the currently available amount of lead-lead collision data would already supersede the previous best measurement of the tau’s anomalous magnetic moment, and further improvements could go much farther. Though the measurement of the tau’s magnetic moment would still be far less precise than that of the muon and electron, it could still reveal deviations from the Standard Model in realistic models of new physics. So given the recent discrepancy with the muon, the tau will be an exciting place to look next!
Article Title: “FASER: ForwArd Search ExpeRiment at the LHC”
Authors: The FASER Collaboration
When the LHC starts up again for its 3rd run of data taking, there will be a new experiment on the racetrack. FASER, the ForwArd Search ExpeRiment at the LHC is an innovative new experiment that just like its acronym, will stretch LHC collisions to get the most out of them we can.
While the current LHC detectors are great, they have a (literal) hole. General purpose detectors (like ATLAS and CMS) are essentially giant cylinders with the incoming particle beams passing through the central axis of the cylinder before colliding. Because they have to leave room for the incoming beam of particles, they can’t detect anything too close to the beam axis. This typically isn’t a problem, because when a heavy new particle, like a Higgs boson, is produced, its decay products fly off in all directions, so it is very unlikely that all of the particles produced would end up moving along the beam axis. However if you are looking for very light particles, they will often be produced in ‘imbalanced’ collisions, where one of the protons contributes a lot more energy than the other one, and the resulting particles therefore mostly carry on in the direction of that proton, along the beam axis. Because these general purpose detectors have to have a gap in them for the beams to enter, they have no hope of detecting such collisions.
That’s where FASER comes in.
FASER is specifically looking for new light “long-lived” particles (LLP’s) that could be produced in LHC collisions and then carry on in the direction of the beam. Long-lived means that once produced they can travel for a while before decaying back into Standard Model particles. Many popular models of dark matter have particles that could fit this bill, including axion-like particles, dark photons, and heavy neutral leptons. To search for these particles FASER will be placed approximately 500 meters down the line from the ATLAS interaction point, in a former service tunnel. They will be looking for the signatures of LLP’s that were produced in collisions at the ATLAS interaction point, traveled through the ground, and eventually decayed in the volume of their detector.
Any particles reaching FASER will travel through hundreds of meters of rock and concrete, filtering out a large amount of the Standard Model particles produced in the LHC collisions. But the LLP’s FASER is looking for interact very feebly with the Standard Model so they should sail right through. FASER also has dedicated detector elements to veto any remaining muons that might make it through the ground, allowing FASER be able to almost entirely eliminate any backgrounds that would mimic an LLP signal. This low background and their unique design will allow them to break new ground in the search for LLP’s in the coming LHC run.
In addition to their program searching for new particles, FASER will also feature a neutrino detector. This will allow them to detect the copious and highly energetic neutrinos produced in LHC collisions which actually haven’t been studied yet. In fact, this will be the first direct detection of neutrinos produced in a particle collider, and will enable them to test neutrino properties at energies much higher than any previous human-made source.
FASER is a great example of physicists thinking up clever ways to get more out of our beloved LHC collisions. Currently being installed, it will be one of the most exciting new developments of the LHC Run III, so look out for their first results in a few years!