Exciting headway into mining black holes for energy

Based on the paper Penrose process for a charged black hole in a uniform magnetic field

It has been over half a century since Roger Penrose first theorized that spinning black holes could be used as energy powerhouses by masterfully exploiting the principles of special and general relativity [1, 2]. Although we might not be able to harness energy from a black hole to reheat that cup of lukewarm coffee just yet, a slew of amazing breakthroughs [4, 5, 6] suggests that we may be closer than ever to moving from pure thought experiment to a realistic powering mechanism for several high-energy astrophysical phenomena. Not only can charged, spinning black holes serving as energy reservoirs via the electromagnetic Penrose process yield dramatically higher energies for radiated particles than neutral, spinning black holes via the original mechanical Penrose process, but the authors of this paper also demonstrate that the region outside the event horizon (see below) from which energy can be extracted is much larger in the former case than in the latter. In fact, the enhanced power of this process is so great that it is one of the most suitable candidates for explaining various high-energy astrophysical phenomena such as ultrahigh-energy cosmic rays and particles [7, 8, 9] and relativistic jets [10, 11].

Stellar black holes are the final stage in the life cycle of stars so massive that they collapse upon themselves, unable to withstand their own gravitational pull. They are characterized by a point-like singularity at the centre, where Einstein’s equations of general relativity break down completely, surrounded by an event horizon, within which gravity is so strong that not even light can escape. Just outside the event horizon of a rotating black hole is a region called the ergosphere, bounded by an outer stationary limit surface, within which space-time is dragged along inexorably with the black hole via a process called frame-dragging. This effect, predicted by Einstein’s theory of general relativity, makes it impossible for an object inside the ergosphere to stand still with respect to an outside observer.

The ergosphere has a rather curious property: the world-line (the path traced in four-dimensional space-time) of a particle or observer changes from being time-like outside the stationary limit surface to being space-like inside it. In other words, the time and angular coordinates of the metric swap places! This leads to the existence of negative energy states for particles orbiting the black hole, as measured by an observer at infinity [2, 12, 13]. It is this very property that enables the extraction of rotational energy from the ergosphere, as explained below.

According to Penrose’s calculations, if a massive particle that falls into the ergosphere were to split into two, the daughter who gets a kick from the black hole would be accelerated out with a much higher positive energy (up to 20.7 percent higher, to be exact) than the in-falling parent, as long as her sister is left with a negative energy. While it may seem counter-intuitive to imagine a particle with negative energy, note that no laws of relativity or thermodynamics are actually broken. This is because the observed energy of any particle is relative, and depends upon the momentum measured in the rest frame of the observer. Thus, the kinetic energy of the daughter particle left behind, though positive in her local frame, would be measured as negative by an observer at infinity [3].
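The 20.7 percent figure comes from the geometry of an extremal (maximally spinning) Kerr black hole, for which the maximum fractional energy gain of the mechanical Penrose process works out to (\sqrt{2}-1)/2. A quick sanity check:

```python
import math

# Maximum fractional energy gain of the mechanical Penrose process
# for an extremal (maximally spinning) Kerr black hole: the escaping
# daughter carries at most E_out = E_in * (sqrt(2) + 1) / 2, i.e. a
# gain of (sqrt(2) - 1) / 2 over the in-falling parent.
max_gain = (math.sqrt(2) - 1) / 2
print(f"Maximum energy gain: {max_gain:.1%}")  # -> 20.7%
```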

In contrast to the purely geometric mechanical Penrose process, if one now considers black holes that possess charge as well as spin, the tremendous amount of energy stored in the electromagnetic fields can be tapped, leading to ultra-high energy extraction efficiencies. While there is a common misconception that a charged black hole tends to neutralize itself swiftly by attracting oppositely charged particles from the ambient medium, this is not quite true for a spinning black hole in a magnetic field (due to the dynamics of the hot plasma soup in which it is embedded). In fact, in this case Wald [14] showed that black holes tend to charge up until they reach a certain energetically favourable value. This value plays a crucial role in the amount of energy that can be delivered to the outgoing particle through the electromagnetic Penrose process. The authors of this paper explicitly locate the regions from which energy can be extracted and show that these are no longer restricted to the ergosphere, as there is a host of previously inaccessible negative energy states that can now be mined. They also find novel disconnected, toroidal regions not coincident with the ergosphere that can trap the negative energy particles forever (see Fig. 1)! The authors calculate the effective coupling strength between the black hole and charged particles: a certain combination of the mass and charge parameters of the black hole and charged particle, and the external magnetic field. This simple coupling formula enables them to estimate the efficiency of the process, as the magnitude of the energy boost that can be delivered to the outgoing particle depends directly on it. They also find that the coupling strength decreases as energy is extracted, much the same way as the spin of a black hole decreases as it loses energy to the super-radiant particles in the mechanical analogue.

While the electromagnetic Penrose process is the most favoured astrophysically viable mechanism for high-energy sources and phenomena such as quasars, fast radio bursts, and relativistic jets, as the authors mention, “Just because a particle can decay into a trapped negative-energy daughter and a significantly boosted positive-energy radiator, does not mean it will do so.” However, in this era of precision black hole astrophysics, with state-of-the-art observatories such as the Event Horizon Telescope capable of capturing detailed observations of emission mechanisms in real time, and enhanced numerical and scientific methods at our disposal, it appears that we might be on the verge of detecting observable imprints left by the Penrose process on black holes, and perhaps of tapping into a source of energy for advanced civilisations!

References

  1. Gravitational collapse: The role of general relativity
  2. Extraction of Rotational Energy from a Black Hole
  3. Penrose process for a charged black hole in a uniform magnetic field
  4. First-Principles Plasma Simulations of Black-Hole Jet Launching
  5. Fifty years of energy extraction from rotating black hole: revisiting magnetic Penrose process
  6. Magnetic Reconnection as a Mechanism for Energy Extraction from Rotating Black Holes
  7. Near-horizon structure of escape zones of electrically charged particles around weakly magnetized rotating black hole: case of oblique magnetosphere
  8. GeV emission and the Kerr black hole energy extraction in the BdHN I GRB 130427A
  9. Supermassive Black Holes as Possible Sources of Ultrahigh-energy Cosmic Rays
  10. Acceleration of the charged particles due to chaotic scattering in the combined black hole gravitational field and asymptotically uniform magnetic field
  11. Acceleration of the high energy protons in an active galactic nuclei
  12. Energy-extraction processes from a Kerr black hole immersed in a magnetic field. I. Negative-energy states
  13. Revival of the Penrose Process for Astrophysical Applications
  14. Black hole in a uniform magnetic field

Towards resolving the black hole information paradox!

Based on the paper The black hole information puzzle and the quantum de Finetti theorem

Black holes are some of the most fascinating objects in the universe. They are extreme deformations of space and time, formed from the collapse of massive stars, with a gravitational pull so strong that nothing, not even light, can escape it. Apart from the astrophysical aspects of black holes (which are bizarre enough to warrant their own study), they provide the ideal theoretical laboratory for exploring various aspects of quantum gravity, the theory that seeks to unify the principles of general relativity with those of quantum mechanics.

One definitive way of making progress in this endeavor is to resolve the infamous black hole information paradox [1], and through a series of recent exciting developments, it appears that we might be closer to achieving this than ever before [5, 6]! Paradoxes in physics paradoxically tend to be quite useful, in that they clarify what we don’t know about what we know. This particular puzzle encapsulates the disagreement between Hawking’s semi-classical calculations of black hole radiation, which treat the matter in and around black holes as quantum fields but describe gravity within the framework of classical general relativity, and the results stemming from a purely quantum theoretical viewpoint. According to the calculations of Hawking and Bekenstein in the 1970s [3], black hole evaporation via Hawking radiation is completely thermal. This means that the von Neumann entropy S(R) of the radiation (a measure of its thermality, or of our ignorance of the system) keeps growing with the number of radiation quanta, reaching a maximum when the black hole has evaporated completely. This corresponds to a complete loss of information, whereby even a pure quantum state entering a black hole would be transformed into a mixed state of Hawking radiation, and all previous information about it would be destroyed. This conclusion is in stark contrast to what one would expect when regarding the black hole from the outside as a quantum system that must obey the laws of quantum mechanics. The fundamental tenets of quantum mechanics are determinism and reversibility, the combination of which asserts that all information must be conserved. Thus, if a black hole is formed by collapsing matter in a pure state, the state of the total system including the radiation R must remain pure.
This can only happen if the entropy S(R), which first increases during the radiation process, ultimately decreases to zero when the black hole has disappeared completely, corresponding to a final pure state [4]. This quantum mechanical result is depicted in the famous Page curve (Fig. 1).

Certain significant discoveries in the recent past, showing that the Page curve is indeed the correct curve and can in fact be reproduced by semi-classical approximations of gravitational path integrals [7, 8], may finally hold the key to the resolution of this paradox. These calculations rely on the replica trick [9, 10] and take into account contributions from space-time geometries comprising wormholes that connect the various replica black holes. This simple geometric amendment to the gravitational path integral is the only factor different from Hawking’s calculations, and yet it leads to diametrically different results! The replica trick is a neat method that enables the computation of the entropy of the radiation field by first considering n identical copies of a black hole, calculating their Rényi entropies, and using the fact that these equal the desired von Neumann entropy in the limit n \rightarrow 1. The Rényi entropies are in turn calculated using the semi-classical gravitational path integral under the assumption that the dominant contributions come from geometries that are classical solutions of the gravity action obeying Z_n symmetry. This leads to two distinct solutions:

  • The Hawking saddle consisting of disconnected geometries (corresponding to identical copies of a black hole).
  • The replica wormholes geometry consisting of connections between the different replicas.

Upon taking the n \rightarrow 1 limit, one finds that all that was missing from Hawking’s calculation was the replica wormhole solution: including it reproduces the quantum-compatible Page curve.
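The n \rightarrow 1 limit at the heart of the replica trick can be illustrated with a toy density matrix (a numerical sketch only; the gravitational calculation does not, of course, reduce to this). The Rényi entropy S_n = \log(\mathrm{Tr}\,\rho^n)/(1-n) smoothly approaches the von Neumann entropy as n approaches 1:

```python
import numpy as np

def renyi_entropy(rho, n):
    """Rényi entropy S_n = log(Tr rho^n) / (1 - n) of a density matrix."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]          # drop numerical zeros
    return np.log(np.sum(eigs ** n)) / (1 - n)

def von_neumann_entropy(rho):
    """S = -Tr(rho log rho), computed from the eigenvalues."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]
    return -np.sum(eigs * np.log(eigs))

# A toy mixed state (an illustrative two-level density matrix)
rho = np.diag([0.7, 0.3])

for n in [2.0, 1.5, 1.1, 1.01]:
    print(f"S_{n} = {renyi_entropy(rho, n):.4f}")
print(f"von Neumann entropy = {von_neumann_entropy(rho):.4f}")
```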
In this paper, the authors attempt to pinpoint the reason for this discrepancy, where the extra information is stored, and the physical relevance of the replica wormholes, using insights from quantum information theory. Invoking the quantum de Finetti theorem [11, 12], they find that there exists a particular piece of reference information, W, and that the entropy one assigns to the black hole radiation field depends on whether or not one has access to this reference. While Hawking’s calculations correspond to measuring the unconditional von Neumann entropy S(R), ignoring W, the novel calculations using the replica trick compute the conditional von Neumann entropy S(R|W), which takes W into account. The former yields the entropy of the ensemble average of all possible radiation states, while the latter yields the ensemble average of the entropies of the same states. They also show that the replica wormholes are a geometric representation of the correlation that appears between the n black holes, mediated by W.
The precise interpretation of W, and what it might appear to be to an observer falling into a black hole, remains an open question. Exploring what it could represent in holographic theories, string theories, and loop quantum gravity could open up a dizzying array of insights into black hole physics and the nature of space-time itself. It appears that the different pieces of the quantum gravity puzzle are slowly but surely coming together into what will hopefully soon be the entire picture.

References

  1. The Entropy of Hawking Radiation
  2. Black holes as mirrors: Quantum information in random subsystems
  3. Particle creation by black holes
  4. Average entropy of a subsystem
  5. The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole
  6. Entanglement Wedge Reconstruction and the Information Paradox
  7. Replica wormholes and the black hole interior
  8. Replica Wormholes and the Entropy of Hawking Radiation
  9. Entanglement entropy and quantum field theory
  10. Generalized gravitational entropy
  11. Locally normal symmetric states and an analogue of de Finetti’s theorem
  12. Unknown quantum states: The quantum de Finetti representation

The Mini and Micro BooNE Mystery, Part 2: Theory

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: MicroBooNE Collaboration

References: https://arxiv.org/pdf/2110.14054.pdf

This is the second post in a series on the latest MicroBooNE results, covering the theory side. Click here to read about the experimental side. 

Few stories in physics are as convoluted as the one written by neutrinos. These ghost-like particles, a notoriously slippery experimental target and one of the least-understood components of the Standard Model, are making their latest splash in the scientific community through MicroBooNE, an experiment at Fermilab that unveiled its first round of data earlier this month. While MicroBooNE’s predecessors provided hints of a possible anomaly within the neutrino sector, its own detectors have yet to uncover a similar signal. Physicists were hopeful that MicroBooNE would validate this discrepancy, yet the tale is turning out to be much more nuanced than previously thought.

Unexpected Foundations

Originally proposed by Wolfgang Pauli in 1930 as an explanation for missing momentum in certain particle collisions, the neutrino was added to the Standard Model as a massless particle that can come in one of three possible flavors: electron, muon, and tau. At first, it was thought that these flavors are completely distinct from one another. Yet when experiments aimed to detect a particular neutrino type, they consistently measured a discrepancy from their prediction. A peculiar idea known as neutrino oscillation presented a possible explanation: perhaps, instead of propagating as a singular flavor, a neutrino switches between flavors as it travels through space. 

This interpretation emerges fortuitously if the model is modified to give the neutrinos mass. In quantum mechanics, a particle’s mass eigenstate — the possible masses a particle can be found to have upon measurement — can be thought of as a traveling wave with a certain frequency. If the three possible mass eigenstates of the neutrino are different, meaning that at most one of the mass values could be zero, this creates a phase shift between the waves as they travel. It turns out that the flavor eigenstates — describing which of the electron, muon, or tau flavors the neutrino is measured to possess — are then superpositions of these mass eigenstates. As the neutrino propagates, the relative phase between the mass waves varies such that when the flavor is measured, the final superposition could be different from the initial one, explaining how the flavor can change. In this way, the mass eigenstates and the flavor eigenstates of neutrinos are said to “mix,” and we can mathematically characterize this model via mixing parameters that encode the mass content of each flavor eigenstate.
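In the simplest two-flavor picture, this mixing yields the textbook appearance probability P = \sin^2(2\theta) \sin^2(1.27\,\Delta m^2 L/E). The sketch below evaluates it for illustrative parameter values (not a fit to any experiment):

```python
import numpy as np

def oscillation_probability(theta, delta_m2, L, E):
    """Two-flavor neutrino appearance probability P(nu_a -> nu_b).

    theta    : mixing angle in radians
    delta_m2 : mass-squared difference in eV^2
    L        : baseline in km
    E        : neutrino energy in GeV
    The 1.27 factor collects the unit conversions (hbar, c).
    """
    return np.sin(2 * theta) ** 2 * np.sin(1.27 * delta_m2 * L / E) ** 2

# Illustrative numbers only: a ~500 km baseline, 1 GeV beam
p = oscillation_probability(theta=0.6, delta_m2=2.5e-3, L=500.0, E=1.0)
print(f"P(appearance) = {p:.3f}")
```

Note how the probability depends only on the difference of squared masses, not the masses themselves, which is why oscillation experiments alone cannot pin down the absolute mass scale.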

A visual representation of how neutrino oscillation works. From: http://www.hyper-k.org/en/neutrino.html.

These massive oscillating neutrinos represent a radical departure from the picture originally painted by the Standard Model, requiring a revision in our theoretical understanding. The oscillation phenomenon also poses a unique experimental challenge, as it is especially difficult to unravel the relationships between neutrino flavors and masses. Thus far, physicists have only been able to constrain the sum of the neutrino masses, and have found that this value is exceedingly small, posing yet another mystery. The neutrino experiments of the past three decades have set their sights on measuring the mixing parameters in order to determine the probabilities of the possible flavor switches.

A Series of Perplexing Experiments

In 1993, scientists at Los Alamos peered at the data gathered by the Liquid Scintillator Neutrino Detector (LSND) and found something rather strange. The group had set out to measure the number of electron neutrino events produced via decays in their detector, and found that this number exceeded what had been predicted by the three-neutrino oscillation model. In 2002, experimentalists turned on the complementary MiniBooNE detector at Fermilab (BooNE is an acronym for Booster Neutrino Experiment), which searched for oscillations of muon neutrinos into electron neutrinos, and again found excess electron neutrino events. For a more detailed account of the setup of these experiments, check out Oz Amram’s latest piece.

While these two experiments are notable for detecting an excess signal, they stand as outliers when we consider all neutrino experiments that have collected oscillation data. Collaborations that were taking data at the same time as LSND and MiniBooNE include MINOS (Main Injector Neutrino Oscillation Search), KamLAND (Kamioka Liquid Scintillator Antineutrino Detector), and IceCube (surprisingly, not a fancy acronym, but named for the fact that it is located under the ice in Antarctica), to name just a few prominent ones. Their detectors targeted neutrinos from a beamline, nearby nuclear reactors, and astrophysical sources, respectively. Not one found a mismatch between predicted and measured events.

The results of these other experiments, however, do not negate the findings of LSND and MiniBooNE. This extensive experimental range — probing several sources of neutrinos, and detecting with different hardware specifications — is necessary in order to consider the full range of possible neutrino mixing parameters and masses. Each model or experiment is endowed with a parameter space: a set of allowed values that its parameters can take. In this case, the neutrino mass and mixing parameters form a two-dimensional grid of possibilities. The job of a theorist is to find a solution that both resolves the discrepancy and has a parameter space that overlaps with allowed experimental parameters. Since LSND and MiniBooNE had shared regions of parameter space, the resolution of this mystery should be able to explain not only the origins of the excess, but why no similar excess was uncovered by other detectors.

A simple explanation for the anomaly emerged and quickly gained traction: perhaps the data hinted at a fourth type of neutrino. Following the logic of the three-neutrino oscillation model, this interpretation considers the possibility that the three known flavors have some probability of oscillating into an additional fourth flavor. For this theory to remain consistent with previous experiments, the fourth neutrino would have to produce the LSND and MiniBooNE excess signals while sidestepping prior detection by coupling only to the force of gravity. Due to this evasive behavior, the potential fourth neutrino has come to be known as the sterile neutrino.

The Rise of the Sterile Neutrino

The sterile neutrino is a well-motivated and especially attractive candidate for beyond the Standard Model physics. It differs from ordinary neutrinos, also called active neutrinos, by having the opposite “handedness”. To illustrate this property, imagine a spinning particle. If the particle is spinning with a leftward orientation, we say it is “left-handed”, and if it is spinning with a rightward orientation, we say it is “right-handed”. Mathematically, this quantity is called helicity, which is formally the projection of a particle’s spin along its direction of momentum. However, this helicity depends implicitly on the reference frame from which we make the observation. Because massive particles move slower than the speed of light, we can choose a frame of reference such that the particle appears to have momentum going in the opposite direction, and as a result, the opposite helicity. Conversely, because massless particles move at the speed of light, they will have the same helicity in every reference frame. 

An illustration of chirality. We define, by convention, a “right-handed” particle as one whose spin and momentum directions align, and a “left-handed” particle as one whose spin and momentum directions are anti-aligned. Source: Wikipedia.

This frame-dependence unnecessarily complicates calculations, but luckily we can instead employ a related quantity that encapsulates the same properties while bypassing the reference frame issue: chirality. Much like helicity, chirality comes in two varieties for massive particles, left and right, while massless particles can display only one. Neutrinos interact via the weak force, which is famously parity-violating, meaning that it has been observed to interact only with particles of one particular chirality. Yet right-chiral neutrinos could presumably also exist; there is no compelling reason to think they shouldn’t. Sterile neutrinos could fill this gap.

They would also lend themselves nicely to addressing questions of dark matter and baryon asymmetry. The former — the observed excess of gravitationally-interacting matter over light-emitting matter by a factor of 20 — could be neatly explained away by the detection of a particle that interacts only gravitationally, much like the sterile neutrino. The latter, in which our patch of the universe appears to contain considerably more matter than antimatter, could also be addressed by the sterile neutrino via a proposed model of neutrino mass acquisition known as the seesaw mechanism. 

In this scheme, active neutrinos are represented as Dirac fermions: spin-½ particles that have a unique anti-particle, the oppositely-charged particle with otherwise the same properties. In contrast, sterile neutrinos are considered to be Majorana fermions: spin-½ particles that are their own antiparticle. The masses of the active and sterile neutrinos are fundamentally linked such that as the value of one goes up, the value of the other goes down, much like a seesaw. If sterile neutrinos are sufficiently heavy, this mechanism could explain the origin of neutrino masses and possibly even why the masses of the active neutrinos are so small. 
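The seesaw can be sketched in a few lines of linear algebra (a toy model with illustrative, not physical, mass values): diagonalizing a 2x2 mass matrix with a small Dirac mass m_D and a large Majorana mass M yields one light eigenvalue near m_D^2/M and one heavy eigenvalue near M, so pushing M up drives the light mass down.

```python
import numpy as np

# Type-I seesaw toy model: one active and one sterile neutrino.
# Mass matrix in the (active, sterile) basis; values are illustrative.
m_D = 100.0   # GeV, Dirac mass (roughly electroweak scale)
M = 1e7       # GeV, heavy Majorana mass

mass_matrix = np.array([[0.0, m_D],
                        [m_D, M]])

# Physical masses are the magnitudes of the eigenvalues.
light, heavy = np.sort(np.abs(np.linalg.eigvalsh(mass_matrix)))
print(f"light eigenvalue = {light:.3e} GeV (~ m_D^2/M = {m_D**2 / M:.3e} GeV)")
print(f"heavy eigenvalue = {heavy:.3e} GeV (~ M = {M:.3e} GeV)")
```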

These considerations position the sterile neutrino as an especially promising contender to address a host of Standard Model puzzles. Yet it is not the only possible solution to the LSND/MiniBooNE anomaly — a variety of alternative theoretical interpretations invoke dark matter, variations on the Higgs boson, and even more complicated iterations of the sterile neutrino. MicroBooNE was constructed to traverse this range of scenarios and their corresponding signatures. 

Open Questions

After taking data for three years, the collaboration has compiled two dedicated analyses: one that searches for single-electron final states, and another that searches for single-photon final states. Each of these products can result from electron neutrino interactions — yet neither analysis detected an excess, pointing to no obvious signs of new physics via these channels.

Above, we can see that the expected number of electron neutrino events agrees well with the number of measured events, disfavoring the MiniBooNE excess. Source: https://microboone.fnal.gov/wp-content/uploads/paper_electron_analysis_2021.pdf

Although confounding, this does not spell death for the sterile neutrino. A significant disparity between the MiniBooNE and MicroBooNE detectors is the ability to discern between single- and multiple-electron events — MiniBooNE lacked the resolution that MicroBooNE was upgraded to achieve. MiniBooNE was also unable to fully distinguish between electron and photon events in the way MicroBooNE can. The possibility remains that there exist processes involving new physics that were captured by LSND and MiniBooNE — perhaps decays resulting in two electrons, for instance.

The idea of a right-handed neutrino remains a promising avenue for beyond the Standard Model physics, and it could turn out to have a mass much larger than our current detection mechanisms can probe. The MicroBooNE collaboration has not yet done a targeted study of the sterile neutrino, which is necessary in order to fully assess how their data connects to its possible signatures. There still exist regions of parameter space where the sterile neutrino could theoretically live, but with every excluded region of parameter space, it becomes harder to construct a theory with a sterile neutrino that is consistent with experimental constraints. 

While the list of neutrino-based mysteries only seems to grow with MicroBooNE’s latest findings, there are plenty of results on the horizon that could add clarity to the picture. Researchers are anticipating the output of more data from MicroBooNE as well as more specific theoretical studies of the results and their relationship to the LSND/MiniBooNE anomaly, the sterile neutrino, and other beyond the Standard Model scenarios. MicroBooNE is also just one in a series of planned neutrino experiments, and will operate alongside the upcoming SBND (Short-Baseline Neutrino Detector) and ICARUS (Imaging Cosmic And Rare Underground Signals), further expanding the parameter space we are able to probe.

The neutrino sector has proven to be fertile ground for physics beyond the Standard Model, and it is likely that this story will continue to produce more twists and turns. While we have some promising theoretical explanations, nothing theorists have concocted thus far has fit seamlessly with our available data. More data from MicroBooNE and near-future detectors is necessary to expand our understanding of these puzzling pieces of particle physics. The neutrino story is pivotal to the tome of the Standard Model, and may be the key to writing the next chapter in our understanding of the fundamental ingredients of our world.

Further Reading

  1. A review of neutrino oscillation and mass properties: https://pdg.lbl.gov/2020/reviews/rpp2020-rev-neutrino-mixing.pdf
  2. An in-depth review of the LSND and MiniBooNE results: https://arxiv.org/pdf/1306.6494.pdf

Planckian dark matter: DEAP edition

Title: First direct detection constraints on Planck-scale mass dark matter with multiple-scatter signatures using the DEAP-3600 detector.

Reference: https://arxiv.org/abs/2108.09405.

Here is a broad explainer of the paper, via a breakdown of its title.

Direct detection.

This term refers to a kind of astronomy, ‘dark matter astronomy’, that has been in action since the 1980s. The word “astronomy” usually evokes telescopes pointing at something in the sky and catching its light. But one could also catch other things, e.g., neutrinos, cosmic rays and gravitational waves, to learn about what’s out there: that counts as astronomy too! As touched upon elsewhere in these pages, we think dark matter is flying into Earth at about 300 km/s, making its astronomy a possibility. But we are yet to conclusively catch dark particles. The unique challenge, unlike astronomy with light or neutrinos or gravitational waves, is that we do not quite know the character of dark matter. So we must first imagine what it could be, and accordingly design a telescope/detector. That, too, is challenging. We only really know that dark matter exists on the size scale of small galaxies: 10^{19} metres. Our detectors, meanwhile, are at best a metre across. This vast gulf in scales can only be bridged by theoretical models.

Multiple-scatter signatures.

The total mass of dark matter in the neighbourhood of the Sun has been ballparked, but that does not tell us how far apart the dark particles are from each other, i.e., whether they are lightweights huddled close or anvils socially distanced. Usually dark matter experiments (there are dozens around the world!) look for dark particles bunched a few centimetres apart, called WIMPs. This experiment looked, for the first time, for dark particles that may be 30 kilometres apart. In particular they looked for “MIMPs” — multiply interacting massive particles — dark matter that leaves a “track” in the detector as opposed to the single “burst” characteristic of a WIMP. As explained here, to discover very dilute dark particles like DEAP-3600 wanted to, one must necessarily look for tracks. So they carefully analyzed the waveforms of energy dumps in the detector (e.g., from radioactive material, cosmic muons, etc.) to pick out telltale tracks of dark matter.
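The 30-kilometre figure can be reproduced with a back-of-the-envelope estimate, assuming the commonly used local density benchmark of about 0.3 GeV/cm^3:

```python
# Average spacing between dark matter particles near the Sun,
# assuming each one weighs a Planck mass.
rho_dm = 0.3        # GeV per cm^3, local dark matter density benchmark
m_planck = 1.2e19   # GeV, Planck mass

number_density = rho_dm / m_planck           # particles per cm^3
spacing_cm = number_density ** (-1.0 / 3.0)  # mean inter-particle spacing
print(f"~{spacing_cm / 1e5:.0f} km between Planck-mass dark matter particles")
```

The result is a few tens of kilometres, in line with the estimate quoted above.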

Figure above: Simulated waveforms for two benchmark parameters.

DEAP-3600 detector.

The largest dark matter detector built so far: the 130 cm-diameter, 3.3 tonne liquid argon-based DEAP (“Dark matter Experiment using Argon Pulse-shape discrimination”) in SNOLAB, Canada. Three years of data recorded on whatever passed through the detector were used. That amounts to the greatest integrated flux of dark particles through a detector in a dark matter experiment so far, enabling the collaboration to probe the frontier of “diluteness” in dark matter.

Planck-scale mass.

By looking for the most dilute dark particles, DEAP-3600 is the first laboratory experiment to say something about dark matter that may weigh a “Planck mass” — about 22 micrograms, or 1.2 \times 10^{19} GeV/c^2 — the greatest mass an elementary particle could have. That’s like breaking the sound barrier. Nothing prevents you from moving faster than sound, but you’d transition to a realm of new physical effects. Similarly, nothing prevents an experiment from probing dark matter particles beyond the Planck mass. But novel, intriguing theoretical possibilities for dark matter’s unknown identity are now impacted by this result, e.g., large composite particles, solitonic balls, and charged mini-black holes.
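The two quoted values of the Planck mass are consistent, as a one-line unit conversion shows:

```python
# Convert the Planck mass from natural units to micrograms.
GEV_TO_KG = 1.783e-27   # 1 GeV/c^2 expressed in kilograms
m_planck_gev = 1.2e19   # GeV/c^2
m_planck_kg = m_planck_gev * GEV_TO_KG
print(f"{m_planck_kg * 1e9:.0f} micrograms")  # roughly the quoted 22 micrograms
```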

Constraints.

The experiment did not discover dark matter, but it has mapped out the dark matter masses and nucleon scattering cross sections that are now ruled out thanks to its extensive search.


Figure above: For two classes of models of composite dark matter, DEAP-3600 limits on its cross sections for scattering on nucleons versus its unknown mass. Also displayed are previously placed limits from various other searches.

[Full disclosure: the author was part of the experimental search, which was based on proposals in [a] [b]. It is hoped that this search leads the way for other collaborations to cast an even wider net than DEAP did, using their own fun tricks.]

Further reading.

[1] Proposals for detection of (super-)Planckian dark matter via purely gravitational interactions:

Laser interferometers as dark matter detectors,

Gravitational direct detection of dark matter.

[2] Constraints on (super-)Planckian dark matter from recasting searches in etched plastic and ancient underground mica.

[3] A recent multi-scatter search for dark matter reaching masses of 10^{12} GeV/c^2.

[4] Look out for Benjamin Broerman‘s PhD thesis featuring results from a multi-scatter search in the bubble chamber-based PICO-60.

A new boson at 151 GeV?! Not quite yet

Title: “Accumulating Evidence for the Associate Production of a Neutral Scalar with Mass around 151 GeV”

Authors: Andreas Crivellin et al.

Reference: https://arxiv.org/abs/2109.02650

Everyone in particle physics is hungry for the discovery of a new particle not in the standard model, one that will point the way forward to a better understanding of nature. Recent anomalies, such as potential lepton flavor universality violation in B meson decays and the experimental confirmation of the muon g-2 anomaly, have renewed people’s hopes that there may be new particles lurking nearby within our experimental reach. While these anomalies are exciting, if confirmed they would be ‘indirect’ evidence for new physics, revealing a concrete hole in the standard model but not definitively saying what it is that fills that hole. We would then really like to ‘directly’ observe whatever is causing the anomaly, so we can know exactly what the new particle is and study it in detail. A direct observation usually means being able to produce the particle in a collider, which is what the high-momentum experiments at the LHC (ATLAS and CMS) are designed to look for.

By now these experiments have performed hundreds of different analyses of their data, searching for potential signals of new particles produced in their collisions, and so far haven’t found anything. But in this recent paper, a group of physicists outside these collaborations argues that the experiments may have missed such a signal in their own data. What’s more, they claim statistical evidence for this new particle at the level of around 5-sigma, the threshold usually corresponding to a ‘discovery’ in particle physics. If true, this would of course be huge, but there are definitely reasons to be a bit skeptical.

This group took data from various ATLAS and CMS papers that were looking for something else (mostly studying the Higgs) and noticed that several of them had an excess of events at a particular energy, 151 GeV. To see how significant these excesses were in combination, they constructed a statistical model that combined evidence from the many different channels simultaneously. They then evaluated the probability of an excess appearing at the same energy in all of these channels without a new particle, found it to be extremely low, and thus claim evidence for this new particle at 5.1-sigma (local).
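To illustrate the flavor of such a combination (this is not the authors’ actual likelihood fit, which floats per-channel signal strengths; the channel labels and z-values below are invented for illustration), one can combine independent local significances with Stouffer’s method:

```python
import math

# Stouffer's method: z_comb = sum(z_i) / sqrt(k) for k independent channels.
# Channel names and significances are made up for illustration only.
channels = {"gamma-gamma + MET": 3.9, "gamma-gamma + lepton": 2.0,
            "Zh-like": 1.6, "WW-like": 1.4}

z = list(channels.values())
z_comb = sum(z) / math.sqrt(len(z))
p_comb = 0.5 * math.erfc(z_comb / math.sqrt(2))  # one-sided p-value

print(f"combined z = {z_comb:.2f} sigma, p = {p_comb:.1e}")
```

The point is the qualitative one: several mild excesses, if genuinely independent and all at the same mass, can combine into a large apparent significance.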

 
Figure 1 from the paper: the invariant mass spectrum of the hypothetical new boson in the different channels the authors consider. The authors combined CMS and ATLAS data from different analyses and normalized everything consistently in order to make such a plot. The pink line shows the purported signal at 151 GeV. The largest significance comes from the channel where the new boson decays into two photons and is produced in association with something that decays invisibly (which produces missing energy).
A plot of the significance (p-value) as a function of the mass of the new particle: combining all the channels, the significance reaches the level of 5-sigma. One can see that the significance is dominated by the diphoton channels.

This is of course a big claim, and one reason to be skeptical is that without a definitive model, they cannot predict exactly how much signal to expect in each of these different channels. This means that when combining the channels, they have to let the relative strength of the signal in each channel be a free parameter. They are also combining data from a multitude of different CMS and ATLAS papers, essentially selected because they show some sort of fluctuation around 151 GeV. This sort of cherry-picking of data, with no constraints on the relative signal strengths, means that their final significance should be taken with several huge grains of salt.

The authors further attempt to quantify a global significance, which would account for the look-elsewhere effect, but due to the way they have selected their datasets this is not really possible in this case (in this humble experimenter’s opinion).

Still, with all of those caveats, it is clear that there are some excesses in the data around 151 GeV, and it should be worth the experimental collaborations’ time to investigate further. Most of the data the authors use comes from control regions of analyses that were focused solely on the Higgs, which motivates the experiments expanding their focus a bit to cover these potential signals. The authors also propose a new search that would be sensitive to their purported signal: a new scalar decaying to two new particles that decay to pairs of photons and bottom quarks respectively (H->SS*-> γγ bb).

 

In an informal poll on Twitter, most were not convinced a new particle had been found, but the ball is now in ATLAS’s and CMS’s courts to analyze the data themselves and see what they find.

 

 

Read More:

“An Anomalous Anomaly: The New Fermilab Muon g-2 Results”, a Particle Bites post about one recent exciting anomaly

“The flavour of new physics”, a CERN Courier article about the recent anomalies relating to lepton flavor violation

“Unveiling Hidden Physics at the LHC”, a recent whitepaper that contains a good review of the recent anomalies relevant for LHC physics

For a good discussion of this paper claiming a new boson, see this Twitter thread

Universality of Black Hole Entropy

A range of supermassive black holes lights up this new image from NASA’s NuSTAR. All of the dots are active black holes tucked inside the hearts of galaxies, with colors representing different energies of X-ray light.

Only in the past few decades have physicists made remarkable experimental advances in the study of black holes, such as with the Event Horizon Telescope and the Laser Interferometer Gravitational-Wave Observatory.

On the theoretical side, there are still lingering questions regarding the thermodynamics of these objects. It is well known that black holes have a remarkably simple formula for their entropy: as first postulated by Jacob Bekenstein and Stephen Hawking, the entropy is proportional to the area of the event horizon. The universality of this formula is quite impressive and has stood the test of time.
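For scale, the Bekenstein–Hawking formula S = k_B c^3 A / (4 G ħ) can be evaluated numerically. A minimal sketch for a one-solar-mass Schwarzschild black hole (standard textbook formula and constants, not numbers from the paper discussed here):

```python
import math

# Bekenstein-Hawking entropy S = k_B * c^3 * A / (4 * G * hbar),
# evaluated for a Schwarzschild black hole of one solar mass.
G, hbar, c, k_B = 6.674e-11, 1.0546e-34, 2.998e8, 1.381e-23
M_sun = 1.989e30  # kg

r_s = 2 * G * M_sun / c**2           # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2             # horizon area
S = k_B * c**3 * A / (4 * G * hbar)  # entropy in J/K

print(f"S ~ {S:.2e} J/K  (~{S / k_B:.1e} in units of k_B)")
```

The result, about 10^77 in units of k_B, dwarfs the entropy of an ordinary star of the same mass, which is part of what makes the formula so striking.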

However, there is more to the story of black hole thermodynamics. Even though the entropy is proportional to the area, there are sub-leading terms that also contribute. Theoretical physicists like to focus on the logarithmic corrections to this formula and investigate whether they are just as universal as the leading term.

Examining a certain class of black holes in four dimensions, Hristov and Reys have shown that such a universal result may exist. They focused on a set of spacetimes that asymptote, at large radial distance, to a negatively curved spacetime called Anti-de Sitter space. These Anti-de Sitter spacetimes have been at the forefront of high energy theory due to the AdS/CFT correspondence.

Moreover, they found that the logarithmic term is proportional to the Euler characteristic, a topologically invariant quantity, and to a single dynamical coefficient that depends on the spacetime background. Their work is a stepping stone in understanding the structure of the entropy for these asymptotically AdS black holes.

How to find invisible particles in a collider

You might have heard that one of the big things we are looking for in collider experiments is the ever-elusive dark matter particle. But given that dark matter particles are expected to interact only very rarely with regular matter, how would you know if you happened to make some in a collision? So-called ‘direct detection’ experiments have to operate giant multi-ton detectors in extremely low-background environments in order to be sensitive to an occasional dark matter interaction. In the noisy environment of a particle collider like the LHC, in which collisions producing sprays of particles happen every 25 nanoseconds, the extremely rare interaction of dark matter with our detector is likely to be missed. But instead of finding dark matter by seeing it in our detector, we can find it by not seeing it. That may sound paradoxical, but it’s how most collider-based searches for dark matter work.

The trick is based on every physicist’s favorite principle: the conservation of energy and momentum. We know that energy and momentum are conserved in a collision, so if we know the initial momentum of the incoming particles and measure everything that comes out, then any invisible particles produced will show up as an imbalance between the two. In a proton-proton collider like the LHC we don’t know the initial momentum of the particles along the beam axis, but we do know that they were traveling along that axis. That means the net momentum in the directions perpendicular to the beam axis (the ‘transverse’ directions) should be zero. So if we see a momentum imbalance away from the beam axis, we know there is some ‘invisible’ particle traveling in the opposite direction.
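A minimal sketch of this bookkeeping (the particle list is invented for illustration and is not a real event or the experiments’ actual reconstruction):

```python
import math

# Toy missing-transverse-momentum calculation: sum the transverse momentum
# vectors of every reconstructed particle; "MET" is minus that vector sum.
# Each entry is (pt in GeV, phi in radians); numbers are invented.
particles = [(55.0, 0.3), (38.0, 0.5), (22.0, 2.9)]

px = sum(pt * math.cos(phi) for pt, phi in particles)
py = sum(pt * math.sin(phi) for pt, phi in particles)
met = math.hypot(px, py)        # magnitude of the transverse imbalance
met_phi = math.atan2(-py, -px)  # direction of the unseen recoil

print(f"MET = {met:.1f} GeV at phi = {met_phi:.2f}")
```

If the visible particles balanced perfectly, `met` would be zero; anything sizable points back along the direction an invisible particle escaped.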

A sketch of what the signature of an invisible particle would look like in a detector. Note this is a 2D cross section of the detector, with the beam axis traveling through the center of the diagram. There are two signals measured in the detector moving ‘up’ away from the beam pipe. Momentum conservation means there must have been some particle produced which is traveling ‘down’ and was not measured by the detector. Figure borrowed from here.

We normally refer to the amount of transverse momentum imbalance in an event as its ‘missing momentum’. Any collision in which an invisible particle was produced will have missing momentum as a tell-tale sign. But while it is a very interesting signature, missing momentum can actually be very difficult to measure. That’s because in order to tell if anything is missing, you have to accurately measure the momentum of every particle in the collision. Our detectors aren’t perfect: any particles we miss, or whose momentum we mis-measure, will show up as a ‘fake’ missing energy signature.

A picture of a particularly noisy LHC collision, with a large number of tracks
Can you tell if there is any missing energy in this collision? It’s not so easy… Figure borrowed from here

Even if you can measure the missing energy well, dark matter particles are not the only ones invisible to our detector. Neutrinos are notoriously difficult to detect and will not get picked up by our detectors, producing a ‘missing energy’ signature. This means that any search for new invisible particles, like dark matter, has to understand the background of neutrino production (often from the decay of a Z or W boson) very well. No one ever said finding the invisible would be easy!

However, particle physicists have been studying these processes for a long time, so we have gotten pretty good at measuring missing energy in our events and modeling the standard model backgrounds. Missing energy is a key tool that we use to search for dark matter, supersymmetry and other physics beyond the standard model.

Read More:

“What happens when energy goes missing?”, an ATLAS blog post by Julia Gonski

“How to look for supersymmetry at the LHC”, a blog post by Matt Strassler

“Performance of missing transverse momentum reconstruction with the ATLAS detector using proton-proton collisions at √s = 13 TeV” Technical Paper by the ATLAS Collaboration

“Search for new physics in final states with an energetic jet or a hadronically decaying W or Z boson and transverse momentum imbalance at √s= 13 TeV” Search for dark matter by the CMS Collaboration

Strings 2021 – an overview

Strings 2021 Flyer

It was that time of year again when the entire string theory community comes together to discuss current research programs, the status of string theory and, more recently, the social issues common in the field. This annual conference has been held in various countries, but for the first time in its 35-year history it was hosted in Latin America, at the ICTP South American Institute for Fundamental Research (ICTP-SAIFR).

One positive aspect of its virtual platform has been the increase in the number of participants attending the conference. Similar to Strings 2020 held in South Africa, more than two thousand participants were registered for the conference. In addition to research talks on technical subtopics, participants were involved in daily informal discussions on topics such as the black hole information paradox, ensemble averaging, and cosmology and string theory. More junior participants were involved in the poster sessions and gong shows, held in the first week of the conference.

One discussion session I would like to point out was a panel on four generations of women in string theory, featuring women from different age groups discussing how they have dealt with issues of gender and implicit bias in their current or previous roles in academia.

To say the very least, the conference was a major success and has shown the effectiveness of virtual platforms for upcoming years, possibly including Strings 2022, to be held in Vienna.

For the string theory enthusiasts reading this, recordings of the conference can be found here.

Might I inquire?

“Is N=2 large?” queried Kitano, Yamada and Yamazaki in their paper title. Exactly five months later, they co-wrote with Matsudo a paper titled “N=2 is large”, proving that their question was, after all, rhetorical.

Papers ask the darndest things. Collected below are titular posers from the field’s literature that keep us up at night. 

Image: Rahmat Dwi Kahyo, Noun Project.

Who?

Who you gonna call?

Who is afraid of quadratic divergences?
Who is afraid of CPT violations?

How?

How big are penguins?

How bright is the proton?
How brightly does the glasma shine?
How bright can the brightest neutrino source be?

How stable is the photon?
(Abstract: “Yes, the photon.”)

How heavy is the cold photon?

How good is the Villain approximation?
How light is dark?

How many solar neutrino experiments are wrong?
How many thoughts are there?

How would a kilonova look on camera?
How does a collapsing star look?

How much information is in a jet?

How fast can a black hole eat?

How straight is the Linac and how much does it vibrate?
How do we build Bjorken’s 1000 TeV machine?

How black is a constituent quark?

How neutral are atoms?

How long does hydrogen live?

How does a pseudoscalar glueball come unglued?

How warm is too warm?

How degenerate can we be?

How the heck is it possible that a system emitting only a dozen particles can be described by fluid dynamics?

Bonus

How I spent my summer vacation

Why?

Why is F^2_\pi \gamma^2_{\rho\pi\pi}/m^2_\rho \cong 0?

Why trust a theory?

Why be natural?

Why do things fall?

Why do we flush gas in gaseous detectors?

Why continue with nuclear physics?

What and why are Siberian snakes?

Why does the proton beam have a hole?

Why are physicists going underground?

Why unify?

Why do nucleons behave like nucleons inside nuclei and not like peas in a meson soup?

Why i?

Bonus

The best why

Why I would be very sad if a Higgs boson were discovered

Why the proton is getting bigger

Why I think that dark matter has large self-interactions

Why Nature appears to ‘read books on free field theory’

String Dualities and Corrections to Gravity

Based on arXiv:2012.15677 [hep-th]

Figure 1: a torus is an example of a geometry that has T-duality

Physicists have been searching for ways to describe the interplay between gravity and quantum mechanics – quantum gravity – for the last century. The problem of finding a consistent theory of quantum gravity still looms over physicists to this day. String theory is among the most promising candidates for such a task.

One of the strengths of string theory is that at low energies, the equations arising from string theory reduce precisely to Einstein’s theory of general relativity. Let’s break down what this means. First, we must make sure we know the definition of a coupling constant. Theories of physics are typically described by some parameter that signifies the strength of the interaction; this parameter is called the coupling constant of that theory. According to quantum field theory, the value of the coupling constant depends on the energy. We often plot the coupling constant against the logarithm of the energy to understand how the theory behaves at a certain energy scale. The slope of this plot is called the beta function, and when this function is zero, that point is called a fixed point. These fixed points are interesting since they imply that the quantum theory has no notion of scale.
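In standard textbook notation (not specific to the paper discussed here), the beta function for a coupling g at energy scale μ, and its fixed points, are:

```latex
\beta(g) \;\equiv\; \mu \frac{dg}{d\mu} \;=\; \frac{dg}{d\ln\mu},
\qquad
\beta(g_*) = 0 \;\;\Longrightarrow\;\; g_* \text{ is a fixed point.}
```

At a fixed point the coupling stops running with energy, which is the precise sense in which the theory loses its notion of scale.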

Back to string theory: its coupling constant is called α′ (read “alpha-prime”). At weak coupling, when α′ is small, we can similarly find the beta function for string theory. At the quantum level, string theory must have a vanishing beta function, and at the corresponding fixed point we find that Einstein’s equations of motion emerge. This is quite remarkable!

We can go even further. Due to the smallness of α′, we can expand the beta function perturbatively. All the subleading terms in α′, which are infinite in number, are considered to be corrections to general relativity. Therefore, we can understand how general relativity is modified via string theory. It becomes technically challenging to compute these corrections and little is known about what the full expansion looks like.

Fortunately for physicists, string theories are interesting in other ways that could help us figure out these corrections to gravity. In particular, the energy spectrum of a string on a circle of radius R looks exactly the same as that of a string on a circle of radius α′/R. This relation is called T-duality. An example of a geometry with this duality is the torus; see Figure 1. Because certain dualities for strings must hold, we can use them to constrain what the higher-order corrections must look like. Codina, Hohm and Marques took advantage of this idea to find the corrections at third order in α′. In a simple scenario where the graviton is the only field in the theory, they were able to predict what the corrections must be.
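The duality can be made concrete with the textbook mass spectrum of a closed bosonic string on a circle of radius R (a standard result, not specific to this paper):

```latex
M^2 \;=\; \frac{n^2}{R^2} \;+\; \frac{w^2 R^2}{\alpha'^2}
\;+\; \frac{2}{\alpha'}\left(N + \bar{N} - 2\right),
```

where n is the momentum number, w the winding number, and N, \bar{N} the oscillator levels. The spectrum is manifestly invariant under R \to \alpha'/R combined with swapping n \leftrightarrow w, which is the T-duality exploited above.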

This method can be applied at higher orders in α′, as well as to theories with more fields than the graviton, although technical challenges remain. Due to the structure of how T-duality was used, the authors can also apply their results to cosmological models. Finally, the result confirms that string theory should be T-duality invariant at all orders in α′.