A world with no weak forces

Gravity, electromagnetism, strong, and weak — these are the beating hearts of the universe, the four fundamental forces. But do we really need the last one for us to exist?

Harnik, Kribs and Perez went about building a world without weak interactions and showed that, indeed, life as we know it could emerge there. This was a counterexample to a famous anthropic argument by Agrawal, Barr, Donoghue and Seckel for the puzzling tininess of the weak scale, i.e. the electroweak hierarchy problem.

Summary of the argument in hep-ph/9707380 that a tiny Higgs mass (in Planck mass units) is necessary for life to develop.

Let’s ask first: would the Sun be there in a weakless universe? Sunshine is the product of proton fusion, and that’s the strong force. However, the reaction chain is ignited by the weak force!

image: Eric G. Blackman

So would no stars shine in a weakless world? Amazingly, there’s another route to trigger stellar burning: deuteron-proton fusion via the strong force! In our world, gas clouds collapsing into stars do not take this option because deuterons are very rare, with protons outnumbering them by about 50,000 to one. But we need not carry this, er, weakness into our gedanken universe. We can tune the baryon-to-photon ratio — whose origin is unknown — so that we end up with roughly as many deuterons as protons from the primordial synthesis of nuclei. Harnik et al. go on to show that, as in our universe, elements up to iron can be cooked in weakless stars, that such stars live for billions of years, and that they may explode in supernovae that disperse heavy elements into the interstellar medium.

source: hep-ph/0604027
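For concreteness, the two ignition routes can be written side by side (standard nuclear-physics notation; the deuteron-burning reaction is the route highlighted by Harnik et al.). In our universe the proton-proton chain starts with the weak-interaction step

\[ p + p \to d + e^{+} + \nu_{e} \quad \text{(weak)}, \]

whereas a deuteron-rich weakless universe can ignite directly through radiative capture,

\[ p + d \to {}^{3}\mathrm{He} + \gamma \quad \text{(strong/electromagnetic)}, \]

which needs no weak interaction at all.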

A “weakless” universe is arranged by elevating the electroweak scale, or the Higgs vacuum expectation value (≈ 246 GeV), to, say, the Planck scale (≈ 10^{19} GeV). To get the desired nucleosynthesis, care must be taken to keep the u, d, s quarks and the electron at their usual masses by tuning the Yukawa couplings, which are technically natural.
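To see what keeping the light fermion masses fixed entails, recall the tree-level Standard Model relation between a fermion mass, its Yukawa coupling, and the Higgs vacuum expectation value,

\[ m_{f} = \frac{y_{f}\, v}{\sqrt{2}} . \]

Raising v from 246 GeV to the Planck scale while holding, say, the electron at 0.511 MeV means shrinking y_{e} from about 3 × 10^{-6} to roughly 10^{-22}, an extreme but technically natural tuning, since small Yukawa couplings are protected by chiral symmetry.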

And let’s not forget dark matter. To make stars, one needs galaxy-like structures. And to make those, density perturbations must condense under gravity, which requires a large population of matter. In the weakless world of Harnik et al., hyperons make up some of the dark matter, but you would also need plenty of other dark stuff, such as your favourite non-WIMP.

If you believe in the string landscape, a weakless world isn’t just a hypothetical. Someone somewhere might be speculating about a habitable universe with a fourth fundamental force, explaining to their bemused colleagues: “It’s kinda like the strong force, only weak…”

xkcd.com/1489

Bibliography

Viable range of the mass scale of the standard model
V. Agrawal, S. M. Barr, J. F. Donoghue, D. Seckel, Phys.Rev.D 57 (1998) 5480-5492.

A Universe without weak interactions
R. Harnik, G. D. Kribs, G. Perez, Phys.Rev.D 74 (2006) 035006

Further reading

Gedanken Worlds without Higgs: QCD-Induced Electroweak Symmetry Breaking
C. Quigg, R. Shrock, Phys.Rev.D 79 (2009) 096002

The Multiverse and Particle Physics
J. F. Donoghue, Ann.Rev.Nucl.Part.Sci. 66 (2016)

The eighteen arbitrary parameters of the standard model in your everyday life
R. N. Cahn, Rev. Mod. Phys. 68, 951 (1996)

LHCb’s Xmas Letdown: The R(K) Anomaly Fades Away

Just before the 2022 holiday season, LHCb announced it was giving the particle physics community a highly anticipated holiday present: an updated measurement of the lepton flavor universality ratio R(K). Unfortunately, when the wrapping paper was removed and the measurement was revealed, the entire particle physics community let out a collective groan. It was not the shiny new-physics-toy we had all hoped for, but another pair of standard-model-socks.

The particle physics community is by now very used to standard-model-socks, receiving hundreds of pairs each year from various experiments all over the world. But this time there had been reason to hope for more. Previous measurements of R(K) from LHCb had been showing evidence of a violation of one of the standard model’s predictions (lepton flavor universality), making this triumph of the standard model sting much worse than most.

R(K) is the ratio of how often a B-meson (a bound state containing a b-quark) decays into final states with a kaon (a bound state containing an s-quark) plus two muons versus final states with a kaon plus two electrons. In the standard model there is a (somewhat mysterious) principle called lepton flavor universality, which means that muons are just heavier versions of electrons. This principle implies that B-meson decays should produce electrons and muons equally, and that R(K) should be one.
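Written out (schematically, since the real measurement is performed in specific ranges of dilepton invariant mass), the quantity is

\[ R_{K} = \frac{\mathrm{BR}(B^{+} \to K^{+} \mu^{+}\mu^{-})}{\mathrm{BR}(B^{+} \to K^{+} e^{+}e^{-})} , \]

which lepton flavor universality predicts to be very close to one, up to tiny corrections from the muon-electron mass difference.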

But previous measurements from LHCb had found R(K) to be less than one, with around 3σ of statistical evidence. Other LHCb measurements of B-meson decays had also been showing similar hints of lepton flavor universality violation. This consistent pattern of deviations had not yet reached the significance required to claim a discovery. But it had led a good number of physicists to become #cautiouslyexcited that there may be a new particle around, possibly interacting preferentially with muons and b-quarks, that was causing the deviation. Several hundred papers were written outlining possibilities of what particles could cause these deviations, checking whether their existence was constrained by other measurements, and suggesting additional measurements and experiments that could rule out or discover the various possibilities.

This had all led to a considerable amount of anticipation for these updated results from LHCb. They were slated to be LHCb’s final word on the anomaly, using the full dataset collected through the end of the LHC’s second running period in 2018. Unfortunately, what LHCb discovered in this latest analysis was that they had made a mistake in their previous measurements.

There were additional backgrounds in their electron signal region which had not previously been accounted for. These backgrounds came from decays of B-mesons into pions or kaons which can be mistakenly identified as electrons. Backgrounds from mis-identification are always difficult to model with simulation, and because they also come from decays of B-mesons, they produce peaks in the data similar to the sought-after signal. Both factors combined to make them hard to spot. Without accounting for these backgrounds, it seemed as though more electron signal was being produced than expected, pushing R(K) below one. In this latest measurement LHCb found a way to estimate these backgrounds using other parts of their data. Once they were accounted for, the measurements of R(K) no longer showed any deviations; all agreed with one within uncertainties.

Plots showing two of the signal regions for the electron channel measurements. The previously unaccounted-for backgrounds are shown in lime green and the measured signal contribution is shown in red. These backgrounds have a peak overlapping with that of the signal, making it hard to spot that they were missing.

It is important to mention here that data analysis in particle physics is hard. As we attempt to test the limits of the standard model we are often stretching the limits of our experimental capabilities and mistakes do happen. It is commendable that the LHCb collaboration was able to find this issue and correct the record for the rest of the community. Still, some may be a tad frustrated that the checks which were used to find these missing backgrounds were not done earlier given the high profile nature of these measurements (their previous result claimed ‘evidence’ of new physics and was published in Nature).

Though the R(K) anomaly has faded away, the related set of anomalies that were thought to be part of a coherent picture (including another leptonic branching ratio, R(D), and an angular analysis of the same B meson decay into muons) still remain for now. Most of these additional anomalies, however, involve significantly larger uncertainties on the Standard Model predictions than R(K) did, and are therefore less ‘clean’ indications of new physics.

Besides these ‘flavor anomalies’, other hints of new physics remain, including measurements of the muon’s magnetic moment, the measured mass of the W boson, and others. Certainly none of these is a slam dunk, as each comes with its own causes for skepticism.

So as we begin 2023, with a great deal of fresh LHC data expected to be delivered, particle physicists once again begin our seemingly Sisyphean task: to find evidence of physics beyond the standard model. We know it’s out there, but nature is under no obligation to make it easy for us.

Paper: Test of lepton universality in b→sℓ+ℓ− decays (arXiv link)

Authors: LHCb Collaboration

Read More:

Excellent twitter thread summarizing the history of the R(K) saga

A related, still discrepant, flavor anomaly from LHCb

The W Mass Anomaly

What’s Next for Theoretical Particle Physics?

2022 saw the pandemic-delayed Snowmass process confront the past, present, and future of particle physics. As the last papers trickle in for the year, we review Snowmass’s major developments and takeaways for particle theory.

A team of scientists wanders through the landscape of questions. Generated by DALL·E 2.

It’s February 2022, and I am in an auditorium next to the beach in sunny Santa Barbara, listening to particle theory experts discuss their specialty. Each talk begins with roughly the same starting point: the Standard Model (SM) is incomplete. We know it is incomplete because, while its predictive capability is astonishingly impressive, it does not address a multitude of puzzles. These are the questions most familiar to any reader of popular physics: What is dark matter? What is dark energy? How can gravity be incorporated into the SM, which describes only 3 of the 4 known fundamental forces? How can we understand the origin of the SM’s structure — the values of its parameters, the hierarchy of its scales, and its “unnatural” constants that are calculated to be mysteriously small or far too large to be compatible with observation? 

This compilation of questions is the reason that I, and all others in the room, are here. In the 80s, the business of particle discovery was booming. Eight new particles had been confirmed in the past two decades alone, cosmology was pivoting toward the recently developed inflationary paradigm, and supersymmetry (SUSY) was — as the lore goes — just around the corner. This flourish of progress and activity in the field had become too extensive for any collaboration or laboratory to address on its own. Meanwhile, links between theoretical developments, experimental proposals, and the flurry of results ballooned. The transition from the solitary 18th century tinkerer to the CERN summer student running an esoteric simulation for a research group was now complete: particle physics, as a field and community, had emerged. 

It was only natural that the field sought a collective vision, or at minimum a notion of promising directions to pursue. In 1982, the American Physical Society’s Division of Particles and Fields organized Snowmass, a conference of a mere hundred participants that took place in a single room on a mountain in its namesake town of Snowmass, Colorado. Now, too large to be contained by its original location (although enthusiasm for organizing physics meetings at prominent ski locations abounds), Snowmass is both a conference and a multi-year process. 

The depth and breadth of particle physics knowledge acquired in the last half-century is remarkable, yet a snapshot of the field today appears starkly different. The Higgs boson just celebrated its tenth “discovery birthday”, and while the completion of the Standard Model as we know it is no doubt a momentous achievement, no new fundamental particles have been found since, despite overwhelming evidence of the inevitability of new physics. Supersymmetry may still prove to be just around the corner at a next-generation collider…or orders of magnitude beyond our current experimental reach. Despite attention-grabbing headlines that herald the “death” of particle physics, there remains an abundance of questions ripe for exploration. 

In light of this shift, the field is up against a challenge: how do we reconcile the disappointments of supersymmetry? Moreover, how can we make the case for the importance of fundamental physics research in an increasingly uncertain landscape?

The researchers are here at the Kavli Institute for Theoretical Physics (KITP) at UC Santa Barbara to map out the “Theory Frontier” of the Snowmass process. The “frontiers” — subsections of the field, each focusing on a different approach to particle physics — have met over the past year to weave their own stories of the last decade’s progress, open questions, and promising future trajectories. This past summer, thousands of particle physicists across the frontiers convened in Seattle, Washington to share, debate, and ponder questions and new directions. Now, these frontiers are collating their stories into an anthology. 

Below are a few (theory) focal points in this astoundingly expansive picture.

Scattering Amplitudes

A 4-point amplitude can be constructed from two 3-point amplitudes. By Henriette Elvang.

Quantum field theory (QFT) is the common language of particle physics. QFT provides a description of a particle system based on two fundamental tools: the Lagrangian and the path integral, which can both be wrapped up in the familiar diagrams of Richard Feynman. This approach, built on a visualization of incoming and outgoing scattering or decaying particles, has provided relief to many Ph.D. students over the past few generations due to its comparative ease of use. The diagrams are roughly divided into three parts: propagators (which describe the motion of a free particle), vertices (in which three or more particles interact), and loops (in which virtual particles trace out a closed path). They contain both real, external particles, which are known as on-shell, and virtual, intermediate particles that cannot be measured, which are known as off-shell. Calculating a scattering amplitude — the probability of one or more particles interacting to form some specified final state — in this paradigm requires summing over all possibilities for what these virtual particles may be. This can prove not only cumbersome, but can also introduce redundancies into our calculations.

Particle theory, however, is undergoing a paradigm shift. If we instead focus on the physical observable itself, the scattering amplitude, we can build more complicated diagrams from simpler ones in a recursive fashion. For example, we can imagine creating a 4-particle amplitude by gluing together two 3-particle amplitudes, as shown above. The process bypasses the intermediate, virtual particles and focuses only on computing on-shell states. This is not only a nice feature, but it can significantly reduce the problem at hand: calculating the scattering amplitude of 8 gluons with the Feynman approach  requires computing more than a million Feynman diagrams, whereas the amplitudes method reduces the problem to a mere half-line. 
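The “half-line” here refers to the celebrated Parke-Taylor formula. For the maximally-helicity-violating (MHV) configuration, in which gluons i and j carry negative helicity and the rest positive, the color-ordered tree amplitude collapses (up to coupling constants and an overall momentum-conserving delta function) to

\[ A_{n}^{\text{tree}}\!\left(1^{+} \cdots i^{-} \cdots j^{-} \cdots n^{+}\right) = \frac{\langle i\, j\rangle^{4}}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle} , \]

a single compact expression standing in for an enormous sum of Feynman diagrams.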

In recent years, this program has seen renewed efforts, not only for its practical simplicity but also for its insights into the underlying concepts that shape particle interactions. The Lagrangian formalism organizes a theory based on the fields it contains and the symmetries those fields obey, with the rule of thumb that any term respecting the theory’s symmetries can be included in the Lagrangian. Further, these terms satisfy several general principles: unitarity (the probabilities of all possible processes in the theory sum to one, and continue to do so under time evolution), causality (an effect originates only from a cause contained in the effect’s backward light cone), and locality (observables localized in spacelike-separated regions of spacetime cannot affect one another). These are all reasonable axioms, but they must be explicitly baked into a theory represented in the Lagrangian formalism. Scattering amplitudes, in contrast, can reveal these principles without prior assumptions, signaling the unveiling of a more fundamental structure.

Recent research surrounding amplitudes concerns both diving deeper into this structure, as well as applying the results of the amplitudes program toward precision theory predictions. The past decade has seen a flurry of results from an idea known as bootstrapping, which takes the relationship between physics and mathematics and flips it on its head.

QFTs are typically built “bottom-up” by including terms in the Lagrangian based on which fields are present and which symmetries they obey. The bootstrapping methodology instead asks which observable quantities result from a theory, and which underlying properties those observables must obey in order to be mathematically consistent. This process of elimination rules out a large swath of possibilities, significantly constraining the system and allowing us to, in some cases, guess our way to the answer. 

This rich research program has plenty of directions to pursue. We can compute the scattering amplitudes of multi-loop diagrams in order to arrive at extremely precise SM predictions. We can probe their structure in the classical regime with the gravitational waves resulting from inspiraling stellar and black hole binaries. We can apply them to less understood regimes; for example, cosmological scattering amplitudes pose a unique challenge because they proceed in curved, rather than flat, space. Are there curved space analogues to the flat space amplitude structures? If so, what are they? What can we compute with them? Amplitudes are pushing forward our notion of what a QFT is. With them, we may be able to uncover the more fundamental frameworks that must underlie particle physics.

Computational Advances

The overlap between machine learning, deep learning, artificial intelligence, and physics. By Jesse Thaler.

Making theoretical predictions in the modern era has become incredibly computationally expensive. The Large Hadron Collider (LHC) and other accelerators produce over 100 terabytes of data per day while running, requiring not only intensive data filtering systems, but efficient computational methods to categorize and search for the signatures of particle collisions. Performing calculations in the quark sector — which relies on lattice gauge theory, in which spacetime is broken down into a discrete grid — also requires vast computational resources. And as simulations in astrophysics and cosmology balloon, so too does the supercomputing power needed to handle them. 

Over the past decade, this challenge has received a significant helping hand from the advent of machine learning — deep learning in particular. On the collider front, these techniques have been applied to the detection of anomalies — deviations in the data from the SM “background” that may signal new physics — as well as the analysis of jets. These protocols can be trained on previously analyzed collider data and synthetic data to establish benchmarks and push computational efficiency much further. As the LHC enters its third operational run, it will be particularly focused on precision measurements, as the increasing quantity of data allows for higher statistical certainty in our results. The growing list of anomalies — including the W mass measurement and the muon g-2 anomaly — will confront these increased statistics, allowing for possible confirmation or rejection of previous results. Our analyses have also grown more sophisticated: the collimated showers of particles produced by quarks and gluons in hadron collisions, known as jets, have proved to reveal substructure that opens up another avenue for comparing data with SM theory predictions.
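As a cartoon of what anomaly detection means in this context, here is a minimal sketch using scikit-learn’s IsolationForest on synthetic “event features”. The feature values and the injected signal are invented purely for illustration; real analyses use far richer inputs and dedicated architectures.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "Standard Model background": two made-up event features,
# loosely imagined as a jet mass and a missing-energy proxy (illustrative only).
background = rng.normal(loc=[90.0, 30.0], scale=[10.0, 8.0], size=(10000, 2))

# A handful of injected "anomalous" events living in a different region.
signal = rng.normal(loc=[150.0, 80.0], scale=[5.0, 5.0], size=(20, 2))

data = np.vstack([background, signal])

# Train an unsupervised outlier detector on the full dataset; events that
# look unlike the bulk receive lower (more negative) anomaly scores.
model = IsolationForest(contamination=0.01, random_state=0).fit(data)
scores = model.decision_function(data)

# Flag the most outlier-like events for further inspection.
most_anomalous = np.argsort(scores)[:20]
print("Indices of the 20 most anomalous events:", most_anomalous)

The point of the toy is only to show the workflow: train on data dominated by background, score every event, and inspect the tail of the score distribution.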

The quark sector especially will benefit from the growing adoption of machine learning in particle physics. Analytical calculations in this sector are intractable due to strong coupling, so in practice calculations are built upon lattice gauge theory. Increasingly precise calculations depend on making the grid spacing smaller and smaller and on including more and more particles. 
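A hedged sketch of what “the grid” means: in the standard Wilson formulation, the gluon action is built from the smallest closed loops (plaquettes) of link variables U_p on the lattice,

\[ S_{G} = \beta \sum_{p} \left[ 1 - \frac{1}{N_{c}}\, \mathrm{Re}\, \mathrm{Tr}\, U_{p} \right] , \]

and the continuum theory is recovered as the lattice spacing goes to zero, which is exactly the limit that drives the computational cost up.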

As physics continually benefits from the rapid development of machine learning and artificial intelligence, the field is up against a unique challenge. Machine learning algorithms can often be applied blindly, resulting in misunderstood outputs via a black box. The key in utilizing these techniques effectively is in asking the right questions, understanding what questions we are asking, and translating the physics appropriately to a machine learning context. This has its practical uses — in confidently identifying tasks for which automation is appropriate — but also opens up the possibility to formulate theoretical particle physics in a computational language. As we look toward the future, we can dream of the possibilities that could result from such a language: to what extent can we train a machine to learn physics itself? There is much work to be done before such questions can be answered, but the prospects are exciting nonetheless.

Cosmological Approaches

Shapes of possible non-Gaussian correlations in the distribution of galaxies. By Dan Green.

As the promise of SUSY remains so far unfulfilled, the space of possible models is expansive, and anomalies pop up and disappear in our experiments, the field is yearning for a new source of data. While colliders have fueled significant progress in past decades, a new horizon has formed with the launch of ultra-precise telescopes and gravitational wave detectors: probing the universe via cosmological data. 

The use of observations of astrophysical and cosmological sources to tell us about particle physics is not new — we’ve long hunted for supernovae and mapped the cosmic microwave background (CMB) — but nascent theory developments hold incredible potential for discovery. Since 2015, observations of gravitational waves have guided insights into stellar and black hole binaries, with an eye toward detection of a stochastic gravitational wave background originating from the period of exponential expansion known as inflation, which occurred shortly after the big bang. The observation of black hole binaries in particular can provide valuable insights into the workings of gravity in its most extreme regime. The possibility of a stochastic gravitational wave background raises the promise of “seeing” the workings of the universe at earlier stages in its history than we’ve ever before been able to access, potentially even to the start of the universe itself. 

Inflation also lends itself to other applications within particle physics. Quantum fields in the early universe, in accordance with the uncertainty principle, undergo tiny statistical fluctuations. These initial spacetime curvature perturbations beget density perturbations in the distribution of matter, which beget the temperature fluctuations visible in the cosmic microwave background (CMB). As of the latest CMB datasets, these fluctuations are observed to be distributed according to a Gaussian normal distribution. But tiny, primordial non-Gaussianities — processes that lead to a correlated pattern of fluctuations in the CMB and other datasets — are predicted for certain particle interactions during inflation. In particular, if particles interacting with the fields responsible for inflation acquire heavy masses during inflation, they could imprint a distinct, oscillating signature within these datasets. This would show up in our observables, such as the large-scale distribution of galaxies shown above, in the form of triangular (or higher-point polygonal) shapes signaling a non-Gaussian correlation. Currently, our probes of these non-Gaussianities are not precise enough to unveil such signatures, but planned and upcoming experiments may establish this new window into the workings of the early universe. 
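The simplest and most commonly quoted parameterization of such a signal is the “local” template, in which the primordial curvature perturbation ζ is a Gaussian field ζ_g plus a small quadratic correction,

\[ \zeta(\mathbf{x}) = \zeta_{g}(\mathbf{x}) + \tfrac{3}{5} f_{\mathrm{NL}} \left[ \zeta_{g}^{2}(\mathbf{x}) - \langle \zeta_{g}^{2} \rangle \right] , \]

where the amplitude f_NL sets the size of the three-point (triangle) correlations; current CMB constraints are consistent with f_NL = 0.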

Finally, a section on the intersections of cosmology and particle physics would not be complete without mention of everyone’s favorite mystery: dark matter. A decade ago, the prime candidate for dark matter was the WIMP — the Weakly Interacting Massive Particle. This model was fairly simple, able to account for the roughly 25% dark matter content of the universe we observe today while remaining in harmony with all other known cosmology. However, we’ve now probed a large swath of possible masses and cross-sections for the WIMP and come up short. The field’s focus has shifted to a different candidate for dark matter, the axion, which addresses both the dark matter mystery and a puzzle known as the strong CP problem simultaneously. While experiments to probe the axion parameter space are being built, theorists are tasked with identifying well-motivated regions of this space — that is, possibilities for the mass and other parameters describing the axion that are plausible. The prospects include theoretical motivation from calculations in string theory, considerations of the Peccei-Quinn symmetry underlying the notion of an axion, and various possible modes of production, including extreme astrophysical environments such as neutron stars and black holes. 

Cosmological data has thus far been an important source of insight not only into the history and evolution of the universe, but also into particle physics at high energy scales. As new telescopes and gravitational wave observatories are slated to come online within the next decade, expect this prolific field to continue to deliver alluring prospects for physics beyond the SM.

Neutrinos

A visual representation of how neutrino oscillation works. From: http://www.hyper-k.org/en/neutrino.html.

While the previous sections have highlighted new approaches to uncovering physics beyond the SM, there is a particular collection of particles that stands out in the spotlight. In the SM formulation, the three flavors of neutrinos are massless, just like the photon. Yet we know unequivocally from experiment that this is false. Neutrinos display a phenomenon known as neutrino oscillation, in which one flavor of neutrino can turn into another flavor as it propagates. This implies that at least two of the three neutrino flavors are in fact massive.
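In the simplest two-flavor picture, the probability for a neutrino produced with flavor α to be detected as flavor β after traveling a distance L with energy E is (in natural units)

\[ P(\nu_{\alpha} \to \nu_{\beta}) = \sin^{2}(2\theta)\, \sin^{2}\!\left( \frac{\Delta m^{2} L}{4E} \right) , \]

which vanishes identically if the mass-squared splitting Δm² is zero; this is why the observation of oscillations forces at least two of the neutrino masses to be nonzero.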

Investigating why neutrinos have mass — and where that mass comes from — is a central question in particle physics. Neutrinos are especially captivating because any observation of a neutrino mass mechanism is guaranteed to be a window to new physics. Further, neutrinos could be of central importance to several puzzles within the SM, including the MicroBooNE anomaly, the question of why there is more matter than antimatter in the observable universe, and the flavor puzzle, among others. The latter refers to an overall lack of understanding of the origin of flavor in the SM. Why do quarks come in six flavors, organized into three generations each consisting of one “up-type” quark with a +⅔ charge and one “down-type” quark with a -⅓ charge? Why do leptons come in six flavors, with three pairs of one electron-like particle and one neutrino? What is the origin of the hierarchy of masses for both quarks and leptons? Of the SM’s 19 free parameters — which include particle masses, coupling strengths, and others — 14 are associated with flavor. 

The unequivocal evidence for neutrino oscillation was the crowning achievement of the last few decades of neutrino physics research. Modern experiments are charged with detecting more subtle signs of new physics, through measurements of neutrino energies at colliders, ever more precise oscillation data, and the possibility of a heavy neutrino belonging to a fourth generation. 

Experiment has a clear role to play; the upcoming Deep Underground Neutrino Experiment (DUNE) will produce neutrinos at Fermilab and observe them at Sanford Lab, South Dakota, in order to accumulate data on long-baseline neutrino oscillation. DUNE and other detectors will also turn their eye toward the sky in observations of neutrinos sourced by supernovae. There is also much room for theorists, both in developing models of neutrino mass generation and in influencing the future of neutrino experiments — short-baseline neutrino oscillation experiments are a key proposal in the quest to address the MicroBooNE anomaly. 

The field of neutrino physics is only growing. It is likely we’ll learn much more about the SM and beyond through these ghostly, mysterious particles in the coming decades.

Which Collider(s)?

The proposed location, right over the LHC, of the Future Circular Collider (FCC), one of the many options for a next-generation collider. From: CERN.

One looming question has formed an undercurrent through the entirety of the Snowmass process: What’s next after the LHC? In the past decade, propositions have been fleshed out in various stages, with the goal of satisfying some part of the lengthy wish list of questions a future collider would hope to probe. 

The most well-known possible successor to the LHC is the Future Circular Collider (FCC), which is roughly a plan for a larger LHC, able to reach collision energies of roughly 100 TeV, some seven times that of its modern-day counterpart. An FCC that collides hadrons, as the LHC does, would extend the reach of our studies of the Higgs boson, other force-carrying gauge bosons, and dark matter searches. Its higher collision rate would enable studies of rare hadron decays and continue the trek into the realm of flavor physics searches. It would also enable the discovery of gauge bosons of new interactions — if they exist at those energies. This proposal, while captivating, has also met its fair share of skepticism, particularly because there is no singular particle physics goal it would be guaranteed to achieve. When the LHC was built, physicists were nearly certain that the Higgs boson would be found there — and it was. However, physicists were also somewhat confident in the prospect of finding SUSY at the LHC. Could supersymmetric particles be discovered at the FCC? Maybe, or maybe not. 

A second plan exists for the FCC, in which it collides electrons and positrons instead of hadrons. This targets the electroweak sector of the SM, covering the properties of the Higgs, the W and Z bosons, and the heaviest quark (the top quark). Whereas hadrons are composite particles, and produce particle showers and jets upon collision, leptons are fundamental particles, and so have well-defined initial states. This allows for greater precision in measurements compared to hadron colliders, particularly in questions of the Higgs. Is the Higgs boson the only Higgs-like particle? Is it a composite particle? How does the origin of mass influence other key questions, such as the nature of dark matter? While unable to reach energies as high as a hadron collider’s, an electron-positron collider is appealing due to its precision. This dichotomy epitomizes the choice between these two proposals for the FCC.

The options go beyond circular colliders. Linear colliders such as the International Linear Collider (ILC) and Compact Linear Collider (CLIC) are also on the table. While circular colliders are advantageous for their ability to accelerate particles over long distances and to keep un-collided particles in circulation for other experiments, they come with a particular disadvantage due to their shape. The acceleration of charged particles along a curved path results in synchrotron radiation — electromagnetic radiation that significantly reduces the energy available for each collision. For this reason, a circular accelerator is more suited to the collision of heavy particles — like the protons used in the LHC — than much lighter leptons. The lepton collisions within a linear accelerator would produce Higgs bosons at a high rate, allowing for deeper insight into the multitude of Higgs-related questions.

In the past few years, interest has grown for a different kind of lepton collider: a muon collider. Muons are, like electrons, fundamental particles, and therefore much cleaner in collisions than composite hadrons. They are also much more massive than electrons, which leads to a smaller proportion of energy being lost to synchrotron radiation in comparison to electron-positron colliders. This would allow for both high-precision measurements as well as high energies, making a muon collider an incredibly attractive candidate. The heavier mass of the muon, however, does bring with it a new set of technical challenges, particularly because the muon is not a stable particle and decays within a short timeframe. 
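The quantitative reason muons help: the energy radiated per turn in a circular machine scales very steeply with the mass of the accelerated particle,

\[ \Delta E_{\text{per turn}} \propto \frac{1}{\rho} \left( \frac{E}{m} \right)^{4} , \]

where ρ is the bending radius, so at equal beam energy a muon radiates a factor (m_μ/m_e)^4 ≈ 2 × 10^9 less than an electron.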

As a multi-billion dollar project requiring the cooperation of numerous countries, getting a collider funded, constructed, and running is no easy feat. As collider proposals are put forth and debated, there is much at stake — a future collider will also determine the research programs and careers of many future students and professors. With that in mind, considerable care is necessary. Only one thing is certain: there will be something after the LHC.

Toward the Next Snowmass

The path forward in the quest to understand particle physics. By Raman Sundrum.

The above snapshots are only a few of the myriad subtopics within particle theory; other notable ones include string theory, quantum information science, lattice gauge theory, and effective field theory. The full list of contributed papers can be found here. 

As the Snowmass process wraps up, the voice of particle theory has played and continues to play an influential role. Overall, progress in theory remains more accessible than in experiment — the number of possible models we’ve developed far outpaces the detectors we are able to build to investigate them. The theoretical physics community both guides the direction and targets of future experiments and has plenty of room to make progress on the model-building front, including understanding quantum field theories at the deepest level and further uncovering the structures of amplitudes. A decade ago, SUSY at LHC scales was the prime objective in the hunt for an ultimate theory of physics. Now, new physics could be anywhere and everywhere; Snowmass is crucial to charting our path in an endless valley of questions. I look forward to the trails of the next decade.

The LHC is turning on again! What does that mean?

Deep underground, on the border between Switzerland and France, the Large Hadron Collider (LHC) is starting back up again after a 4-year hiatus. Today, July 5th, the LHC had its first full-energy collisions since 2018. The LHC running at all is exciting enough on its own, but this new run of data taking will also feature several upgrades to the LHC itself as well as to the several different experiments that make use of its collisions. The physics world will be watching to see if the data from this new run confirms any of the interesting anomalies seen in previous datasets or reveals any other unexpected discoveries. 

New and Improved

During the multi-year shutdown the LHC itself has been upgraded. Most notably, the energy of the colliding beams has been increased from 13 TeV to 13.6 TeV. Besides breaking its own record for the highest-energy collisions ever produced, this 5% increase to the LHC’s energy will give a boost to searches looking for very rare, high-energy phenomena. The rate of collisions the LHC produces is also expected to be roughly 50% higher than the maximum achieved in previous runs. By the end of this three-year run, it is expected that the experiments will have collected twice as much data as the previous two runs combined. 

The experiments have also been busy upgrading their detectors to take full advantage of this new round of collisions.

The ALICE experiment had the most substantial upgrade. It features a new silicon inner tracker, an upgraded time projection chamber, a new forward muon detector, a new triggering system and an improved data processing system. These upgrades will help in its study of an exotic phase of matter called the quark-gluon plasma, a hot, dense soup of nuclear material present in the early universe. 

 

A diagram showing the various upgrades to the ALICE detector (source)

ATLAS and CMS, the two ‘general purpose’ experiments at the LHC, had a few upgrades as well. ATLAS replaced their ‘small wheel’ detector used to measure the momentum of muons. CMS replaced the innermost part of its inner tracker, and installed a new GEM detector to measure muons close to the beamline. Both experiments also upgraded their software and data collection systems (triggers) in order to be more sensitive to the signatures of potential exotic particles that may have been missed in previous runs. 

The new ATLAS ‘small wheel’ being lowered into place. (source)

The LHCb experiment, which specializes in studying the properties of the bottom quark, also had major upgrades during the shutdown. LHCb installed a new Vertex Locator closer to the beam line and upgraded their tracking and particle identification system. It also fully revamped its trigger system to run entirely on GPUs. These upgrades should allow them to collect 5 times the amount of data over the next two runs as they did over the first two. 

Run 3 will also feature a new smaller-scale experiment, FASER, which will study neutrinos produced in the LHC and search for long-lived new particles.

What will we learn?

One of the main goals in particle physics now is to find direct experimental evidence of phenomena unexplained by the Standard Model. While very successful in many respects, the Standard Model leaves several mysteries unexplained, such as the nature of dark matter, the imbalance of matter over anti-matter, and the origin of the neutrinos’ masses. All of these are questions many hope that the LHC can help answer.

Much of the excitement for Run-3 of the LHC will be on whether the additional data can confirm some of the deviations from the Standard Model which have been seen in previous runs.

One very hot topic in particle physics right now is a series of ‘flavor anomalies‘ seen by the LHCb experiment in previous LHC runs. These anomalies are deviations from the Standard Model predictions of how often certain rare decays of the b quark should occur. With their dataset so far, LHCb has not yet had enough data to pass the high statistical threshold required in particle physics to claim a discovery. But if these anomalies are real, Run-3 should provide enough data to claim a discovery.

A summary of the various measurements making up the ‘flavor anomalies’. The blue lines and error bars indicate the measurements and their uncertainties. The yellow line and error bars indicate the Standard Model predictions and their uncertainties. Source

There are also a decent number of ‘excesses’, potential signals of new particles being produced in LHC collisions, that have been seen by the ATLAS and CMS collaborations. The statistical significance of these excesses is still quite low, and many such excesses have gone away with more data. But if one or more of these excesses were confirmed in the Run-3 dataset, it would be a massive discovery.

While all of these anomalies are a gamble, this new dataset will also certainly be used to measure various known quantities with better precision, improving our understanding of nature no matter what. Our understanding of the Higgs boson, the top quark, rare decays of the bottom quark, rare Standard Model processes, the dynamics of the quark-gluon plasma, and many other areas will no doubt improve with this additional data.

In addition to these ‘known’ anomalies and measurements, whenever an experiment starts up again there is also the possibility of something entirely unexpected showing up. Perhaps one of the upgrades performed will allow the detection of something entirely new, unseen in previous runs. Perhaps FASER will see signals of long-lived particles missed by the other experiments. Or perhaps the data from the main experiments will be analyzed in a new way, revealing evidence of a new particle which had been missed up until now.

No matter what happens, the world of particle physics is a more exciting place when the LHC is running. So let’s all say cheers to that!

Read More:

CERN Run-3 Press Event / Livestream Recording “Join us for the first collisions for physics at 13.6 TeV!”

Symmetry Magazine “What’s new for LHC Run 3?”

CERN Courier “New data strengthens RK flavour anomaly”

A Massive W for CDF

This is part two of our coverage of the CDF W mass measurement, discussing how the measurement was done. Read about the implications of this result in our sister post here.

Last week, the CDF collaboration announced the most precise measurement of the W boson’s mass to date. After nearly ten years of careful analysis, the W weighed in at 80,433.5 ± 9.4 MeV: a whopping seven standard deviations away from the Standard Model expectation! This result quickly became the talk of the town among particle physicists, and there are already dozens of arXiv papers speculating about what it means for the Standard Model. One of the most impressive and hotly debated aspects of this measurement is its high precision, which came from an extremely careful characterization of the CDF detector and recent theoretical developments in modeling proton structure. In this post, I’ll describe how they made the measurement and the clever techniques they used to push down the uncertainties.

The new CDF measurement of the W boson mass. The center of the red ellipse corresponds to the central values of the measured W mass (y-coordinate) and top quark mass (x-coordinate, from other experiments). The purple line shows the Standard Model constraint on the W mass as a function of the top mass, and the border of the red ellipse is the one standard deviation boundary around the measurement.

The imaginatively titled “Collider Detector at Fermilab” (CDF) collected proton-antiproton collision data at Fermilab’s Tevatron accelerator for over 20 years, until the Tevatron shut down in 2011. Much like ATLAS and CMS, CDF is made of cylindrical detector layers, with the innermost charged particle tracker and adjacent electromagnetic calorimeter (ECAL) being most important for the W mass measurement. The Tevatron ran at a center of mass energy of 1.96 TeV — much lower than the LHC’s 13 TeV — which enabled a large reduction in the “theoretical uncertainties” on the measurement. Physicists use models called “parton distribution functions” (PDFs) to calculate how a proton’s momentum is distributed among its constituent quarks, and modern PDFs make very good predictions at the Tevatron’s energy scale. Additionally, W boson production in proton-antiproton collisions doesn’t involve any gluons, which are a major source of uncertainty in PDFs (LHC collisions are full of gluons, making for larger theory uncertainty in LHC W mass measurements).

A cutaway view of the CDF detector. The innermost tracking detector (yellow) reconstructs the trajectories of charged particles, and the nearby electromagnetic calorimeter (red) collects energy deposits from photons and charged particles (e.g. electrons). The tracker and EM Cal were both central in the W mass measurement.

Armed with their fancy PDFs, physicists set out to measure the W mass in the same way as always: by looking at its decay products! They focused on the leptonic channel, where the W decays to a lepton (electron or muon) and its associated neutrino. This clean final state is easy to identify in the detector and allows for a high-purity, low-background signal selection. The only sticking point is the neutrino, which flies out of the detector completely undetected. Thankfully, momentum conservation allowed them to reconstruct the neutrino’s transverse momentum (pT) from the rest of the visible particles produced in the collision. Combining this with the lepton’s measured momentum, they reconstructed the “transverse mass” of the W — an important observable for estimating its true mass.
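Concretely, the transverse mass is built only from quantities measured in the plane perpendicular to the beam,

\[ m_{T} = \sqrt{ 2\, p_{T}^{\ell}\, p_{T}^{\nu} \left( 1 - \cos\Delta\phi_{\ell\nu} \right) } , \]

where Δφ is the azimuthal angle between the lepton and the inferred neutrino momentum; its distribution has a characteristic edge near the true W mass, which is what the template fits described below exploit.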

A leptonic decay of the W boson, where it decays to an electron and an electron antineutrino. This channel, along with the muon + muon antineutrino channel, formed the basis of CDF’s W mass measurement.

Many of the key observables for this measurement flow from the lepton’s momentum, which means it needs to be measured very carefully! The analysis team calibrated their energy and momentum measurements using the decays of other Standard Model particles: the ϒ(1S) and J/ψ mesons, and the Z boson. These particles’ masses are very precisely known from other experiments, and constraints from these measurements helped physicists understand how accurately CDF reconstructs a particle’s energy. For momentum measurements in the tracker, they reconstructed the ϒ(1S) and J/ψ masses from their decays to muon-antimuon pairs inside CDF, and compared the CDF-measured masses to their known values from other experiments. This allowed them to calculate a correction factor to apply to track momenta. For ECAL energy measurements, they looked at samples of Z and W bosons decaying to electrons, and measured the ratio of energy deposited in the ECAL (E) to the momentum measured in the tracker (p). The shape of the E/p distribution then allowed them to calculate an energy calibration for the ECAL.

Left: the fractional deviation of the measured muon momentum relative to its true momentum (y-axis), as a function of the muon’s average inverse transverse momentum. Data from ϒ(1S), J/ψ, and Z decays are shown, and the fit line (in black) has a slope consistent with zero. This indicates that there is no significant mismodeling of the energy lost by a particle flying through the detector. Right: the distribution of the ratio of energy measured in the ECAL to momentum measured in the tracker. The shape of the peak and tail are used to calibrate the ECAL energy measurements.

To make sure their tracker and ECAL calibrations worked correctly, they applied them in measurements of the Z boson mass in the electron and muon decay channels. Thankfully, their measurements were consistent with the world average in both channels, providing an important cross-check of their calibration strategy.

Having done everything humanly possible to minimize uncertainties and calibrate their measurements, the analysis team was finally ready to measure the W mass. To do this, they simulated W boson events with many different settings for the W mass (an additional mountain of effort went into ensuring that the simulations were as accurate as possible!). At each mass setting, they extracted “template” distributions of the lepton pT, neutrino pT, and W boson transverse mass, and fit each template to the distribution measured in real CDF data. The templates that best fit the measured data correspond to CDF’s measured value of the W mass (plus some additional legwork to calculate uncertainties).
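As a toy illustration of the template-fit idea (emphatically not CDF’s actual machinery, which involves detailed detector simulation and many nuisance parameters), one can imagine comparing a measured histogram against a family of simulated templates and keeping the mass whose template matches best:

import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(60.0, 100.0, 41)  # GeV; illustrative transverse-mass binning

def make_template(mass, n_events=200000):
    """Toy 'simulation': a smeared kinematic edge near the hypothesized W mass."""
    samples = mass * np.sqrt(rng.uniform(0.0, 1.0, n_events))  # crude rising shape ending at `mass`
    samples += rng.normal(0.0, 3.0, n_events)                  # detector smearing
    counts, _ = np.histogram(samples, bins=bins)
    return counts / counts.sum()

# Pretend the true mass is 80.4 GeV and we recorded 50,000 events.
data, _ = np.histogram(
    80.4 * np.sqrt(rng.uniform(0.0, 1.0, 50000)) + rng.normal(0.0, 3.0, 50000),
    bins=bins,
)

# Scan mass hypotheses, compute a chi-square for each template, keep the best.
masses = np.arange(79.0, 82.0, 0.05)
chi2 = []
for m in masses:
    expected = make_template(m) * data.sum()
    valid = expected > 0
    chi2.append(np.sum((data[valid] - expected[valid]) ** 2 / expected[valid]))

best = masses[int(np.argmin(chi2))]
print(f"Best-fit toy W mass: {best:.2f} GeV")

The real analysis does the analogue of this with exquisitely detailed templates for three observables at once, plus a careful statistical treatment of the uncertainties.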

The reconstructed W boson transverse mass distribution in the muon + muon antineutrino decay channel. The best-fit template (red) is plotted along with the background distribution (gray) and the measured data (black points).

After years of careful analysis, CDF’s measurement of mW = 80,433.5 ± 9.4 MeV sticks out like a sore thumb. If it stands up to the close scrutiny of the particle physics community, it’s further evidence that something new and mysterious lies beyond the Standard Model. The only way to know for sure is to make additional measurements, but in the meantime we’ll all be happily puzzling over what this might mean.

CDF’s W mass measurement (bottom), shown alongside results from other experiments and the SM expectation (gray).

Read More

Quanta Magazine’s coverage of the measurement

A recorded talk from the Fermilab Wine & Cheese seminar covering the result in great detail

Too Massive? New measurement of the W boson’s mass sparks intrigue

This is part one of our coverage of the CDF W mass result covering its implications. Read about the details of the measurement in a sister post here!

Last week the physics world was abuzz with the latest results from an experiment that stopped running a decade ago. Some were heralding this as the beginning of a breakthrough in fundamental physics, with headlines reading “Shock result in particle experiment could spark physics revolution” (BBC). So what exactly is all the fuss about?

The result itself is an ultra-precise measurement of the mass of the W boson. The W boson is one of the carriers of the weak force, and this measurement pegged its mass at 80,433 MeV with an uncertainty of 9 MeV. The excitement is coming because this value disagrees with the prediction from our current best theory of particle physics, the Standard Model. In the theoretical structure of the Standard Model the masses of the gauge bosons are all interrelated. In the Standard Model the mass of the W boson can be computed based on the mass of the Z as well as a few other parameters in the theory (like the weak mixing angle). In a first approximation (i.e. to the lowest order in perturbation theory), the mass of the W boson is equal to the mass of the Z boson times the cosine of the weak mixing angle. Based on other measurements that have been performed, including the Z mass, the Higgs mass, the lifetime of muons and others, the Standard Model predicts that the mass of the W boson should be 80,357 MeV (with an uncertainty of 6 MeV). So the two numbers disagree quite strongly, at the level of 7 standard deviations.
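The lowest-order relation mentioned above can be checked on the back of an envelope,

\[ m_{W} \simeq m_{Z} \cos\theta_{W} \approx 91.2\ \text{GeV} \times \sqrt{1 - 0.223} \approx 80.4\ \text{GeV} , \]

with higher-order (loop) corrections, which depend on the top quark and Higgs masses among other inputs, responsible for pinning the Standard Model prediction down to the quoted 6 MeV precision.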

If the measurement and the Standard Model prediction are both correct, this would imply that there is some deficiency in the Standard Model; some new particle interacting with the W boson whose effects haven’t been accounted for. This would be welcome news to particle physicists, as we know that the Standard Model is an incomplete theory but have been lacking direct experimental confirmation of its deficiencies. The size of the discrepancy would also mean that whatever new particle is causing the deviation may be directly detectable within our current or near-future colliders.

If this discrepancy is real, exactly what new particles would this entail? Judging based on the 30+ (and counting) papers released on the subject in the last week, there are a good number of possibilities. Some examples include extra Higgs bosons, extra Z-like bosons, and vector-like fermions. It would take additional measurements and direct searches to pick out exactly what the culprit was. But it would hopefully give experimenters definite targets of particles to look for, which would go a long way in advancing the field.

But before everyone starts proclaiming the Standard Model dead and popping champagne bottles, it’s important to take stock of this new CDF measurement in the larger context. Measurements of the W mass are hard; that’s why it has taken the CDF collaboration over 10 years to publish this result since they stopped taking data. And although this measurement is the most precise one to date, several other W mass measurements have been performed by other experiments.

The Other Measurements

A summary of all the W mass measurements performed to date (black dots) with their uncertainties (blue bars) as compared to the Standard Model prediction (yellow band). One can see that this new CDF result is in tension with previous measurements. (source)

Previous measurements of the W mass have come from experiments at the Large Electron-Positron collider (LEP), another experiment at the Tevatron (D0), and experiments at the LHC (ATLAS and LHCb). Though none of these were as precise as this new CDF result, they had been painting a consistent picture of a value in agreement with the Standard Model prediction. If you take the average of these other measurements, their value differs from the CDF measurement at the level of about 4 standard deviations, which is quite significant. This discrepancy seems large enough that it is unlikely to arise from a purely random fluctuation, and likely means that either some uncertainties have been underestimated or something has been overlooked in either the previous measurements or this new one.

What one would like are additional, independent, high precision measurements that could either confirm the CDF value or the average value of the previous measurements. Unfortunately it is unlikely that such a measurement will come in the near future. The only currently running facility capable of such a measurement is the LHC, but it will be difficult for experiments at the LHC to rival the precision of this CDF one.

W mass measurements are somewhat harder at the LHC than at the Tevatron for a few reasons. First of all, the LHC is a proton-proton collider, while the Tevatron was a proton-antiproton collider, and the LHC also operates at a higher collision energy than the Tevatron. Both differences cause W bosons produced at the LHC to have more momentum than those produced at the Tevatron. Modeling of the W boson’s momentum distribution can be a significant uncertainty in its mass measurement, and the extra momentum of W’s at the LHC makes this a larger effect. Additionally, the LHC has a higher collision rate, meaning that each time a W boson is produced there are actually tens of other collisions laid on top (rather than only a few other collisions like at the Tevatron). These extra collisions are called pileup and can make it harder to perform precision measurements like these. In particular for the W mass measurement, the neutrino’s momentum has to be inferred from the momentum imbalance in the event, and this becomes harder when there are many collisions on top of each other. Of course W mass measurements are possible at the LHC, as evidenced by ATLAS and LHCb’s already published results. And we can look forward to improved results from ATLAS and LHCb as well as a first result from CMS. But it may be very difficult for them to reach the precision of this CDF result.

A plot of the transverse mass (one of the variables used in the measurement) of the W from the ATLAS measurement. The red and yellow lines show how little the distribution changes if the W mass changes by 50 MeV, which is around two and a half times the uncertainty of the ATLAS result. These shifts change the distribution by only a few tenths of a percent, illustrating the difficulty involved. (source)

The Future

A future electron-positron collider would be able to measure the W mass extremely precisely using an alternate method. Instead of looking at the W’s decay, the mass could be measured through its production, by scanning the energy of the beams very close to the threshold for producing two W bosons. This method should offer precision significantly better than even this CDF result. However, any measurement from a possible future electron-positron collider won’t come for at least a decade.
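The threshold in question is simply the center-of-mass energy at which producing a pair of W bosons first becomes kinematically possible,

\[ \sqrt{s} \gtrsim 2\, m_{W} \approx 161\ \text{GeV} , \]

and the way the e⁺e⁻ → W⁺W⁻ rate turns on just above this point is extremely sensitive to the W mass, which is what a threshold scan exploits.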

In the coming months, expect this new CDF measurement to receive a lot of buzz. Experimentalists will be poring over the details, trying to figure out why it is in tension with previous measurements and working hard to produce new measurements from LHC data. Meanwhile, theorists will write a bunch of papers detailing the possibilities of what new particles could explain the discrepancy and whether there is a connection to other outstanding anomalies (like the muon g-2). But the big question of whether we are seeing the first real crack in the Standard Model or there is some mistake in one or more of the measurements is unlikely to be answered for a while.

If you want to learn about how the measurement actually works, check out this sister post!

Read More:

CERN Courier “CDF sets W mass against the Standard Model”

Blog post on the CDF result from an (ATLAS) expert on W mass measurements “[Have we] finally found new physics with the latest W boson mass measurement?”

PDG Review “Electroweak Model and Constraints on New Physics”

Moriond 2022: Return of the Excesses?!

Rencontres de Moriond is probably the biggest ski-vacation conference of the year in particle physics, and is one of the places where big particle physics experiments often unveil their new results. For the last few years the buzz in particle physics has surrounded ‘indirect’ probes of new physics, specifically the latest measurement of the muon’s anomalous magnetic moment (g-2) and hints from LHCb about lepton flavor universality violation. If either of these anomalies were confirmed, it would of course be huge: definitive laboratory evidence for physics beyond the standard model. But it would not answer the question of what exactly that new physics is. As evidenced by the 500+ papers written in the last year offering explanations of the g-2 anomaly, there are a lot of different potential explanations.

A definitive answer would come in the form of a ‘direct’ observation of whatever particle is causing the anomaly, which traditionally means producing and observing said particle in a collider. But so far the largest experiments performing these direct searches, ATLAS and CMS, have not shown any hints of new particles. But this Moriond, as the LHC experiments are getting ready for the start of a new data taking run later this year, both collaborations unveiled ‘excesses’ in their Run-2 data. These excesses, extra events above a background prediction that resemble the signature of a new particle, don’t have enough statistical significance to claim discoveries yet, and may disappear as more data is collected, as many an excess has done before. But they are intriguing and some have connections to anomalies seen in other experiments. 

So while there have been many great talks at Moriond (covering cosmology, astro-particle searches for dark matter, neutrino physics, more flavor physics measurements, and more) and the conference is still ongoing, it's worth reviewing these new excesses in particular and what they might mean.

Excess 1: ATLAS Heavy Stable Charged Particles

Talk (paper forthcoming): https://agenda.infn.it/event/28365/contributions/161449/attachments/89009/119418/LeThuile-dEdx.pdf

Most searches for new particles at the LHC assume that said new particles decay very quickly once they are produced, and their signatures can then be pieced together by measuring all the particles they decay to. However, in the last few years there has been increasing interest in searching for particles that don't decay quickly and therefore leave striking signatures in the detectors that can be distinguished from regular Standard Model particles. This particular ATLAS analysis searches for particles that are long-lived, heavy, and charged. Due to their heavy masses (and/or large charges), such particles will produce larger ionization signals as they pass through the detector than Standard Model particles would. The analysis selects tracks with high momentum and unusually high ionization signals, and finds an excess of events with high mass and high ionization, with a significance of 3.3-sigma.

The ATLAS excess of heavy stable charged particles. The black data points lie above the purple background prediction and match well with the signature of a new particle (yellow line). 

If the background has been estimated properly, this seems to be quite a clear signature, and it might be time to get excited. ATLAS has checked that these events are not due to any known instrumental defect, but they do offer one caveat. A heavy particle like this (with a mass of ~TeV) would be expected to move noticeably slower than the speed of light. But when ATLAS measures the 'time of flight' of these candidates, how long they take to reach the outer detectors, their velocity appears indistinguishable from the speed of light, which is exactly how background Standard Model particles would behave. So the ionization and the timing are telling somewhat conflicting stories.
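To see why the timing is puzzling, here is a back-of-the-envelope check with purely illustrative numbers (not taken from the ATLAS analysis):

import math

def beta(p_gev, m_gev):
    """Velocity (in units of c) of a particle with momentum p and mass m."""
    e = math.hypot(p_gev, m_gev)  # E^2 = p^2 + m^2 in natural units
    return p_gev / e

# Illustrative values: a ~1 TeV particle carrying ~1 TeV of momentum...
print(f"Heavy candidate: beta ~ {beta(1000.0, 1000.0):.2f}")   # noticeably below 1
# ...versus a Standard Model proton carrying the same momentum.
print(f"SM proton:       beta ~ {beta(1000.0, 0.938):.6f}")    # indistinguishable from 1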

So what exactly to make of this excess is somewhat unclear. Hopefully CMS can weigh in soon!

Excesses 2-4: CMS’s Taus; Vector-Like-Leptons and TauTau Resonance(s)

Paper 1 : https://cds.cern.ch/record/2803736

Paper 2: https://cds.cern.ch/record/2803739

Many of the models seeking to explain the flavor anomalies seen by LHCb predict new particles that couple preferentially to taus and b-quarks. These two separate CMS analyses look for new particles that decay specifically to tau leptons.

In the first analysis they look for pairs of vector-like leptons (VLLs), the lightest new particles predicted in one of the favored models to explain the flavor anomalies. The VLLs are predicted to decay into tau leptons and b-quarks, so the analysis targets events which have at least four b-tagged jets and reconstructed tau leptons. They trained a machine learning classifier to separate VLLs from their backgrounds, and see an excess of events at high VLL classification probability in the categories with 1 or 2 reconstructed taus, with a significance of 2.8 standard deviations.

The CMS vector-like-lepton excess. The gray filled histogram shows the best-fit amount of VLL signal. The histograms of other colors show the contributions of various backgrounds, and the hatched band shows their uncertainty.
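As a rough illustration of the strategy, and emphatically not the CMS analysis itself (the features, numbers, and model below are all invented), one can picture training a classifier on a few per-event features and then checking whether data piles up at high score beyond the background-only expectation:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy per-event features: [number of b-tagged jets, number of taus, scalar pT sum (GeV)].
background = np.column_stack([rng.poisson(2.0, size=5000),
                              rng.poisson(0.5, size=5000),
                              rng.normal(300.0, 80.0, size=5000)])
signal = np.column_stack([rng.poisson(4.0, size=500),
                          rng.poisson(1.5, size=500),
                          rng.normal(600.0, 100.0, size=500)])

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(len(background)), np.ones(len(signal))])

clf = GradientBoostingClassifier().fit(X, y)

# Events with a high score look "VLL-like"; the real search compares how many
# data events land at high score against the background-only expectation.
scores = clf.predict_proba(X)[:, 1]
print("Fraction of signal events scoring above 0.9:",
      round(float(np.mean(scores[y == 1] > 0.9)), 2))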

In the second analysis they look for new resonances that decay into two tau leptons. They employ a sophisticated 'embedding' technique to estimate the large background of Z bosons decaying to tau pairs by using the decays of Z bosons to muons. They see two excesses, one at ~100 GeV and one at ~1200 GeV, each with a significance of around 3-sigma. The excess at ~100 GeV could also be related to another CMS analysis that saw an excess of diphoton events at ~95 GeV, especially since, if there were an additional Higgs-like boson at 95 GeV, diphoton and ditau would be the two channels it would likely first appear in.

CMS TauTau excesses. The excess at ~100 GeV is shown in the left plot and the one at ~1200 GeV on the right; the best-fit signal is shown with the red line in the bottom ratio panels.

While the statistical significances of these excesses are not quite as high as the first one, meaning it is more likely they are fluctuations that will disappear with more data, their connection to other anomalies is quite intriguing.

Excess 5: CMS Paired Dijet Resonances

Paper: https://cds.cern.ch/record/2803669

Often statistical significance doesn’t tell the full story of an excess. When CMS first performed its standard dijet search on Run-2 LHC data, in which one looks for a resonance decaying to two jets by looking for bumps in the dijet invariant mass spectrum, they did not find any significant excesses. But they did note one particularly striking event, in which 4 jets form two 'wide jets', each with a mass of 1.9 TeV, while the combined 4-jet mass is 8 TeV.

An event display for the striking CMS 4-jet event. The 4 jets combine to form two back-to-back dijet pairs, each with a mass of 1.9 TeV.

A single event like this seems very unlikely to occur via normal Standard Model QCD, which usually produces a regular 2-jet topology. However, a new 8 TeV resonance which decayed to two intermediate particles with masses of 1.9 TeV, each of which then decayed to a pair of jets, would lead to exactly such a signature. This motivated a new search specifically targeting this paired dijet resonance topology, and in it they have now found a second event with very similar characteristics. The local statistical significance of this excess is 3.9-sigma, but when one accounts for the many different potential dijet and 4-jet mass combinations considered in the analysis, that drops to 1.6-sigma.
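That drop from local to global significance is the 'look-elsewhere effect': scanning many dijet and 4-jet mass combinations gives random fluctuations many chances to look impressive somewhere. A toy illustration of the trials penalty, with made-up numbers rather than the CMS calculation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 200           # stand-in for the number of mass combinations scanned
n_experiments = 20_000   # pseudo-experiments with no real signal

# For each pseudo-experiment, record the largest upward fluctuation seen
# anywhere in the scan, treating each combination as a unit Gaussian.
max_local_z = rng.standard_normal((n_experiments, n_trials)).max(axis=1)

local_z = 3.9
global_p = np.mean(max_local_z >= local_z)
global_z = stats.norm.isf(global_p)
print(f"A local {local_z}-sigma excess becomes ~{global_z:.1f} sigma globally "
      f"after {n_trials} trials")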

Though 1.6-sigma is relatively low, the striking nature of these events is certainly intriguing and warrants follow-up. Run-3 will also bring a slight increase in the LHC's energy (13 -> 13.6 TeV), which will give the production rate of any new 8 TeV particles a not-insignificant boost.

Conclusions

The safe bet on any of these excesses would probably be that it will disappear with more data, as many excesses have done in the past. And many particle physicists are probably wary of getting too excited after the infamous 750 GeV diphoton fiasco, in which many people got very excited (and wrote hundreds of papers) about a few-sigma excess in CMS and ATLAS data that disappeared as more data was collected. All of these excesses come from analyses performed by a single experiment (ATLAS or CMS) for now, but both experiments have similar capabilities, so it will be interesting to see what the counterpart has to say once it performs a similar analysis on its Run-2 data. At the very least these results add some excitement for the upcoming LHC Run-3: collisions are starting up again this year after being on hiatus since 2018.

 

Read more:

CERN Courier Article “Dijet excess intrigues at CMS” 

Background on the infamous 750 GeV diphoton excess, Physics World Article "And so to bed for the 750 GeV bump"

Background on the LHCb flavor anomalies, CERN Courier "New data strengthens RK flavour anomaly"

 

A hint of CEvNS heaven at a nuclear reactor

Title : “Suggestive evidence for Coherent Elastic Neutrino-Nucleus Scattering from reactor antineutrinos”

Authors : J. Colaresi et al.

Link : https://arxiv.org/abs/2202.09672

Neutrinos are the ghosts of particle physics, passing right through matter as if it isn't there. Their head-on collisions with atoms are so rare that it takes a many-ton detector to see them. Far more often, though, a neutrino gives a tiny push to an atom's nucleus, like a golf ball glancing off a bowling ball. Even a small detector can catch these frequent scrapes, but only if it can pick up the bowling ball's tiny budge. Today's paper may mark the opening of a new window into these events, called "coherent elastic neutrino-nucleus scattering" or CEvNS (pronounced "sevens"), which can teach us about neutrinos, their astrophysical origins, and the even more elusive dark matter.

A scrape with a ghost in a sea of noise

CEvNS was first measured in 2017 by the COHERENT experiment at a neutron beam facility, but much more data is needed to fully understand it. Nuclear reactors produce far more neutrinos than other sources, but reactor neutrinos are even less energetic and thus harder to detect. To find these abundant but evasive events, the authors used a detector called "NCC-1701" that can count the electrons knocked off a germanium atom when a neutrino from the reactor collides with its nucleus.
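A quick kinematics estimate (with representative numbers, not the paper's) shows just how tiny these nuclear budges are: the largest kinetic energy a neutrino of energy E can transfer to a nucleus of mass M is about 2E^2/(M + 2E), which for MeV-scale reactor antineutrinos hitting germanium is well below a keV.

# Maximum nuclear recoil energy in elastic neutrino-nucleus scattering,
# T_max = 2 E^2 / (M + 2 E), with all energies in MeV (natural units).
def max_recoil_mev(e_nu_mev, m_nucleus_mev):
    return 2.0 * e_nu_mev**2 / (m_nucleus_mev + 2.0 * e_nu_mev)

m_ge = 72 * 931.5             # rough mass of a germanium nucleus in MeV
for e_nu in (2.0, 4.0, 8.0):  # representative reactor antineutrino energies
    t_max_kev = 1e3 * max_recoil_mev(e_nu, m_ge)
    print(f"E_nu = {e_nu:.0f} MeV  ->  max Ge recoil ~ {t_max_kev:.2f} keV")
# Only a fraction of this recoil shows up as ionization (the germanium
# response discussed below), so the signal is just a handful of electrons.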

Unfortunately, a nuclear reactor also produces lots of neutrons, which glance off atoms just like neutrinos do, and the detector was further swamped with electronic noise from its hot, buzzing surroundings. To pick out CEvNS from this mess, the researchers found creative ways to reduce these effects: shielding the detector from as many neutrons as possible, cooling its vicinity, and controlling for environmental variables.

An intriguing bump with a promising future

After all this work, a clear bump was visible in the data when the reactor was running, and disappeared when it was turned off. You can see this difference in the top and bottom of Fig. 1, which shows the number of events observed after subtracting the backgrounds, as a function of the energy they deposited (number of electrons released from germanium atoms).

Fig. 1: The number of events observed minus the expected background, as a function of the energy the events deposited. In the top panel, when the nuclear reactor was running, a clear bump is visible at low energy. The bump is moderately to very strongly suggestive of CEvNS, depending on which germanium model is used (solid vs. dashed line). When the reactor’s operation was interrupted (bottom), the bump disappeared – an encouraging sign.

But measuring CEvNS is such a new enterprise that it isn’t clear exactly what to look for – the number of electrons a neutrino tends to knock off a germanium atom is still uncertain. This can be seen in the top of Fig. 1, where the model used for this number changes the amount of CEvNS expected (solid vs dashed line).

Still, for a range of these models, statistical tests “moderately” to “very strongly” confirmed CEvNS as the likely explanation of the excess events. When more data accumulates and the bump becomes clearer, NCC-1701 can determine which model is correct. CEvNS may then become the easiest way to measure neutrinos, since detectors only need to be a couple feet in size.

Understanding CEvNS is also critical for finding dark matter. With dark matter detectors coming up empty, it now seems that dark matter hits atoms even less often than neutrinos, making CEvNS an important background for dark matter hunters. If experiments like NCC-1701 can determine CEvNS models, then dark matter searches can stop worrying about this rain of neutrinos from the sky and instead start looking for them. These “astrophysical” neutrinos are cosmic messengers carrying information about their origins, from our sun’s core to supernovae.

This suggestive bump in the data of a tiny detector near the roiling furnace of a nuclear reactor shows just how far neutrino physics has come – the sneakiest ghosts in the Standard Model can now be captured with a germanium crystal that could fit in your palm. Who knows what this new window will reveal?

Read More

"Ever-Elusive Neutrinos Spotted Bouncing Off Nuclei for the First Time" – Scientific American article from the first COHERENT detection in 2017

"Hitting the neutrino floor" – Symmetry Magazine article on the importance of CEvNS to dark matter searches

"Local nuclear reactor helps scientists catch and study neutrinos" – Phys.org story about these results

Exciting headway into mining black holes for energy!

Based on the paper Penrose process for a charged black hole in a uniform magnetic field

It has been over half a century since Roger Penrose first theorized that spinning black holes could be used as energy powerhouses by masterfully exploiting the principles of special and general relativity [1, 2]. Although we might not be able to harness energy from a black hole to reheat that cup of lukewarm coffee just yet, with a slew of amazing breakthroughs [4, 5, 6] it seems that we may be closer than ever to making the transition from pure thought experiment to finally figuring out a realistic powering mechanism for several high-energy astrophysical phenomena. Not only can the energies of radiated particles be dramatically increased by using charged, spinning black holes as energy reservoirs via the electromagnetic Penrose process, rather than neutral, spinning black holes via the original mechanical Penrose process, but the authors of this paper also demonstrate that the region outside the event horizon (see below) from which energy can be extracted is much larger in the former case than in the latter. In fact, the enhanced power of this process is so great that it is one of the most suitable candidates for explaining various high-energy astrophysical phenomena such as ultrahigh-energy cosmic rays and particles [7, 8, 9] and relativistic jets [10, 11].

Stellar black holes are the final stage in the life cycle of stars so massive that they collapse upon themselves, unable to withstand their own gravitational pull. They are characterized by a point-like singularity at the centre, where Einstein's equations of general relativity break down completely, surrounded by an event horizon, within which the gravitational pull is so strong that not even light can escape. Just outside the event horizon of a rotating black hole is a region called the ergosphere, bounded by an outer stationary surface, within which space-time is dragged along inexorably with the black hole via a process called frame-dragging. This effect, predicted by Einstein's theory of general relativity, makes it impossible for an object inside the ergosphere to stand still with respect to an outside observer.

The ergosphere has a rather curious property: the world-line (the path traced in 4-dimensional space-time) of a particle or observer changes from being time-like outside the stationary surface to being space-like inside it. In other words, the time and angular coordinates of the metric swap roles! This leads to the existence of negative-energy states for particles orbiting the black hole, as measured by an observer at infinity [2, 12, 13]. It is this very property that enables the extraction of rotational energy from the ergosphere, as explained below.

According to Penrose’s calculations, if a massive particle falling into the ergosphere were to split into two, the daughter that gets a kick from the black hole would be accelerated out with a much higher positive energy (up to 20.7 percent higher, to be exact) than the in-falling parent, as long as her sister is left in a negative-energy state. While it may seem counter-intuitive to imagine a particle with negative energy, note that no laws of relativity or thermodynamics are actually broken. This is because the observed energy of any particle is relative and depends on the frame of the observer: the left-behind daughter has perfectly ordinary, positive energy as measured locally, but her conserved energy as measured by an observer at infinity is negative [3].
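The 20.7 percent figure is the textbook limit for a maximally spinning (extremal) Kerr black hole: the largest fractional energy gain available to the escaping daughter in the mechanical Penrose process is

\eta_{\rm max} = 1 - \frac{1}{\sqrt{2}} \approx 0.207,

set by the geometry of the ergosphere at the horizon of an extremal black hole.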

In contrast to the purely geometric mechanical Penrose process, if one now considers black holes that possess charge as well as spin, the tremendous amount of energy stored in the electromagnetic fields can be tapped into, leading to ultra-high energy-extraction efficiencies. While there is a common misconception that a charged black hole tends to neutralize itself swiftly by attracting oppositely charged particles from the ambient medium, this is not quite true for a spinning black hole in a magnetic field (due to the dynamics of the hot plasma soup in which it is embedded). In fact, in this case Wald [14] showed that black holes tend to charge up until they reach a certain energetically favourable value. This value plays a crucial role in the amount of energy that can be delivered to the outgoing particle through the electromagnetic Penrose process.

The authors of this paper explicitly locate the regions from which energy can be extracted and show that these are no longer restricted to the ergosphere, as there is a whole set of previously inaccessible negative-energy states that can now be mined. They also find novel disconnected, toroidal regions, not coincident with the ergosphere, that can trap the negative-energy particles forever (refer to Fig. 1)! The authors calculate the effective coupling strength between the black hole and charged particles, a certain combination of the mass and charge parameters of the black hole and the charged particle, and the external magnetic field. This simple coupling formula enables them to estimate the efficiency of the process, as the magnitude of the energy boost that can be delivered to the outgoing particle depends directly on it. They also find that the coupling strength decreases as energy is extracted, in much the same way as the spin of a black hole decreases as it loses energy to super-radiant particles in the mechanical analogue.
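For reference, the energetically favourable charge found by Wald [14], for a black hole of angular momentum J immersed in a uniform magnetic field B aligned with its spin axis, is (in geometrized units with G = c = 1)

Q_W = 2 B J,

beyond which selectively accreting further charge is no longer favoured; it is this Wald charge that feeds into the effective coupling strength the authors compute.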

While the electromagnetic Penrose process is the most favoured astrophysically viable mechanism for high-energy sources and phenomena such as quasars, fast radio bursts, and relativistic jets, the authors themselves caution: "Just because a particle can decay into a trapped negative-energy daughter and a significantly boosted positive-energy radiator, does not mean it will do so." However, in this era of precision black hole astrophysics, with state-of-the-art observatories such as the Event Horizon Telescope capable of capturing detailed observations of emission mechanisms in real time, and enhanced numerical and scientific methods at our disposal, it appears that we might be on the verge of detecting observable imprints left by the Penrose process on black holes, and perhaps of tapping into a source of energy for advanced civilisations!

References

  1. Gravitational collapse: The role of general relativity
  2. Extraction of Rotational Energy from a Black Hole
  3. Penrose process for a charged black hole in a uniform magnetic field
  4. First-Principles Plasma Simulations of Black-Hole Jet Launching
  5. Fifty years of energy extraction from rotating black hole: revisiting magnetic Penrose process
  6. Magnetic Reconnection as a Mechanism for Energy Extraction from Rotating Black Holes
  7. Near-horizon structure of escape zones of electrically charged particles around weakly magnetized rotating black hole: case of oblique magnetosphere
  8. GeV emission and the Kerr black hole energy extraction in the BdHN I GRB 130427A
  9. Supermassive Black Holes as Possible Sources of Ultrahigh-energy Cosmic Rays
  10. Acceleration of the charged particles due to chaotic scattering in the combined black hole gravitational field and asymptotically uniform magnetic field
  11. Acceleration of the high energy protons in an active galactic nuclei
  12. Energy-extraction processes from a Kerr black hole immersed in a magnetic field. I. Negative-energy states
  13. Revival of the Penrose Process for Astrophysical Applications
  14. Black hole in a uniform magnetic field

 

 

How to find a ‘beautiful’ valentine at the LHC

References:  https://arxiv.org/abs/1712.07158 (CMS)  and https://arxiv.org/abs/1907.05120 (ATLAS)

If you are looking for love at the Large Hadron Collider this Valentine's Day, you won't find a better eligible bachelor than the b-quark. The b-quark (also called the 'beauty' quark if you are feeling romantic, the 'bottom' quark if you are feeling crass, or a 'beautiful bottom quark' if you are trying to weird people out) is the 2nd heaviest quark, behind only the top quark. It hangs out with a cool crowd, as it is the Higgs's favorite decay and the top quark's BFF; two particles we would all like to learn a bit more about.

Choose beauty this Valentine's Day

No one wants a romantic partner who is boring and can't stand out from the crowd. Unfortunately, when most quarks or gluons are produced at the LHC, they produce big sprays of particles called 'jets' that all look the same. That means even if the up quark was giving you butterflies, you wouldn't be able to pick its jets out from those of strange quarks or down quarks, and no one wants to be pressured into dating a whole friend group. But beauty quarks can set themselves apart in a few ways. So if you are swiping through LHC data looking for love, try using these tips to find your b(ae).

Look for a partner who's not afraid of commitment and loves to travel. Beauty quarks live longer than all the other quarks (a full 1.5 picoseconds; sub-atomic love is unfortunately very fleeting), letting them explore their love of traveling (up to a centimeter from the beamline, a great honeymoon spot I've heard) before decaying.
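(That centimeter of travel is easy to check with a back-of-the-envelope boost calculation; the energies below are just illustrative of b hadrons inside typical LHC jets.)

import math

C_MM_PER_PS = 0.2998   # speed of light in millimeters per picosecond
TAU_B_PS = 1.5         # the b-hadron lifetime quoted above, in picoseconds
M_B_GEV = 5.3          # approximate b-hadron mass in GeV

def flight_distance_mm(energy_gev):
    """Average flight distance L = gamma * beta * c * tau for a b hadron."""
    gamma = energy_gev / M_B_GEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * beta * C_MM_PER_PS * TAU_B_PS

for e in (20, 50, 100):  # illustrative b-hadron energies (GeV)
    print(f"E = {e:3d} GeV  ->  flies ~ {flight_distance_mm(e):.1f} mm before decaying")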

You want a lover who will bring you gifts you can hold on to even after they are gone. And when beauty quarks decay, you won't be left in despair, but rather charmed by your new c-quark companion. And sometimes, if they are really feeling the magic, they leave behind charged leptons when they go, so you will have something to remember them by.

The ‘profile photo’ of a beauty quark. You can see it has traveled away from the crowd (the Primary Vertex, PV) and has started a cool new Secondary Vertex (SV) to hang out in.

But even with these standout characteristics, beauty can still be hard to find, as there are a lot of un-beautiful quarks in the sea you don't want to get hung up on. There is more to beauty than meets the eye, and as you get to know them you will find that beauty quarks have even more subtle features that make them stick out from the rest. So if you are serious about finding love in 2022, it may be time to turn to the romantic innovation sweeping the nation: modern machine learning. Even if we would all love to spend many sleepless nights learning all about them, unfortunately these days it feels like the-scientist-she-tells-you-not-to-worry-about, neural networks, will always understand them a bit better. So join the great romantics of our time (CMS and ATLAS) in embracing the modern dating scene, and let the algorithms find the most beautiful quarks for you.

So if you are looking for love this Valentine's Day, look no further than the beauty quark. And if you are feeling hopeless, you can take inspiration from this decades-in-the-making love story from a few years ago: "Higgs Decay into Bottom Beauty Quarks Seen at Last"

A beautiful wedding photo that took decades to uncover: the Higgs decay into beauty quarks (red) was finally seen in 2018. Other, boring couples (dibosons) are shown in gray.