Can we measure black hole kicks using gravitational waves?

Article: Black hole kicks as new gravitational wave observables
Authors: Davide Gerosa, Christopher J. Moore
Reference: arXiv:1606.04226; Phys. Rev. Lett. 117, 011101 (2016)

On September 14, 2015, something really huge happened in physics: the first direct detection of gravitational waves. But measuring a single gravitational wave was never the goal (though freaking cool in and of itself, of course!). So what is the purpose of gravitational wave astronomy?

The idea is that gravitational waves can be used as another tool to learn more about our Universe and its components. Until the discovery of gravitational waves, observations in astrophysics and astronomy were limited to telescopes and thus to electromagnetic radiation. Now a new era has started: the era of gravitational wave astronomy. And when the space-based eLISA observatory comes online, it will begin an era of gravitational wave cosmology. So what can we learn about our Universe from gravitational waves?

First of all, the first detection, aka GW150914, was already super interesting:

  1. It was the first observation of a binary black hole system (with unexpected masses!).
  2. It put some strong constraints on the allowed deviations from Einstein’s theory of general relativity.

What is next? We hope to detect a neutron star orbiting a black hole or another neutron star. This will allow us to learn more about the equation of state of neutron stars and thus their composition. But the authors of this paper suggest another exciting prospect: observing so-called black hole kicks using gravitational wave astronomy.

So, what is a black hole kick? When two black holes rotate around each other, they emit gravitational waves. In this process, they lose energy and therefore they get closer and closer together before finally merging to form a single black hole. However, generically the radiation is not the same in all directions and thus there is also a net emission of linear momentum. By conservation of momentum, when the black holes merge, the final remnant experiences a recoil in the opposite direction. Previous numerical studies have shown that non-spinning black holes ‘only’ have kicks of ∼ 170 km per second, but you can also have “superkicks” as high as ∼5000 km per second! These speeds can exceed the escape velocity of even the most massive galaxies and may thus eject black holes from their hosts. These dramatic events have some electromagnetic signatures, but also leave an imprint in the gravitational waveform that we detect.

Fig. 1: Two black holes rotate around each other during the long inspiral phase and finally merge during the very short merger and ringdown phases. The curve below is the gravitational waveform. [Figure from 1602.03837]
The idea is rather simple: as the system experiences a kick, its gravitational wave is Doppler shifted. This Doppler shift affects the frequency f in the way you would expect:

f_kick = f_no kick / (1 + v · n / c)

Doppler shift from black hole kick.

with v the kick velocity and n the unit vector in the direction from the observer to the black hole system (and c the speed of light). The black hole dynamics are entirely captured by the dimensionless number G f M/c³, with M the mass of the binary (and G Newton's constant). So you can also model this shift in frequency by using the unkicked frequency f_no kick and absorbing the Doppler shift into the mass. This is very convenient, because it means that you can use all the current knowledge and results for gravitational waveforms and just change the mass. Now the tricky part is that the velocity changes over time, and this needs to be modelled more carefully.

A crude model would be to say that during the inspiral of the black holes (which is the long phase during which the two black holes rotate around each other – see figure 1), the emitted linear momentum is negligible and the mass is unaffected. During the final stages the black holes merge, and the final remnant emits a gravitational wave with decreasing amplitude; this is called the ringdown phase. During this latter phase the velocity kick is important, and one can relate the mass during inspiral M_i to the mass during the ringdown phase M_r simply by

M_r = M_i (1 + v · n / c)

Mass during ringdown related to mass during inspiral.

The results of doing this for a black hole kick moving away (or towards) us are shown in fig. 2: the wave gets redshifted (or blueshifted).

Fig. 2: If a black hole binary radiates isotropically, it does not experience any kick and the gravitational wave has the black waveform. However, if it experiences a kick along the line of sight, the waveform can get redshifted (when the system moves away from us), as shown on the left, or blueshifted (when the system moves toward us), as shown on the right. The top and lower panels correspond to the two independent polarizations of the gravitational wave. [Figure from 1606.04226]
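To make this concrete, here is a minimal numerical sketch (my own illustration, not the authors' code) of the crude model above: rescale the ringdown mass by the Doppler factor and see how much a superkick shifts a characteristic frequency. The mass and kick values below are assumptions chosen for illustration.

```python
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30    # SI units

M_i    = 65 * Msun   # inspiral-phase total mass, GW150914-like (assumption)
v_kick = 2.0e6       # 2000 km/s, a superkick-scale velocity (assumption)
cos_th = 1.0         # kick pointing directly away from us, along n

# Crude model from above: the ringdown mass appears Doppler-shifted.
M_r = M_i * (1 + v_kick * cos_th / c)

# Any characteristic frequency scales as c^3/(G M), so the ringdown
# frequency is redshifted by the same factor.
f_i = c**3 / (G * M_i)
f_r = c**3 / (G * M_r)
print(f"fractional frequency shift: {(f_r - f_i) / f_i:.2e}")  # ~ -0.7%
```

A shift of order v/c ≈ 0.7% even for a superkick is tiny, which is part of why only loud, massive systems offer realistic prospects of measuring it.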
This model is refined in various ways, and the results show that it is unlikely that kicks will be measured by LIGO: LIGO is optimized for detecting black holes with relatively low masses, and black hole systems with low masses have velocity kicks that are too low to be detected. However, the prospects for eLISA are better for two reasons: (1) eLISA is designed to measure supermassive black hole binaries with masses in the range of 10^5 to 10^10 solar masses, which can have much larger kicks (and thus are more easily detectable), and (2) the signal-to-noise ratio for eLISA is much higher, giving better data. This study estimates about 6 detectable kicks per year. Thus, black hole (super)kicks might be detected in the next decade using gravitational wave astronomy. The future is bright 🙂

Jets: More than Riff, Tony, and a rumble

Review Bite: Jet Physics
(This is the first in a series of posts on jet physics by Reggie Bain.)

Ubiquitous in the LHC's ultra-high energy collisions are collimated sprays of particles called jets. The study of jet physics is a rapidly growing field where experimentalists and theorists work together to unravel the complex geometry of the final state particles at LHC experiments. If you're totally new to the idea of jets, this bite from July 18th, 2016 by Julia Gonski is a nice experimental introduction to their importance. In this bite, we'll look at the basic ideas of jet physics from a more theoretical perspective. Let's address a few basic questions:

  1. What is a jet? Jets are highly collimated collections of particles that are frequently observed in detectors. In visualizations of collisions in the ATLAS detector, one can often identify jets by eye.
A nicely colored visualization of a multi-jet event in the ATLAS detector. Reason #172 that I'm not an experimentalist… actually sifting out useful information from the detector (or even making a graphic like this) is insanely hard.

Jets are formed in the final state of a collision when a particle showers off radiation in such a way as to form a focused cone of particles. The most commonly studied jets are formed by quarks and gluons that fragment into hadrons like pions, kaons, and sometimes more exotic particles like the J/ψ, Υ, χ_c, and many others. This process is often referred to as hadronization.

  2. Why do jets exist? Jets are a fundamental prediction of Quantum Field Theories like Quantum Chromodynamics (QCD). One common process studied in field theory textbooks is electron–positron annihilation into a pair of quarks, e⁺e⁻ → qq̄. In order to calculate the cross-section of this process, it turns out that one has to consider the possibility that additional gluons are produced along with the qq̄. Since no detector has infinite resolution, it's always possible that there are gluons that go unobserved by your detector. This could be because they are incredibly soft (low energy) or because they travel almost exactly collinear to the q or q̄ itself. In this region of momenta, the cross-section gets very large and the process favors the creation of this extra radiation. Since these gluons carry color/anti-color, they begin to hadronize and decay so as to become stable, colorless states. When the q and q̄ have high momenta, the zoo of particles formed from the hadronization all have momenta clustered around the direction of the original q or q̄ and form a cone shape in the detector…thus a jet is born! The details of exactly how hadronization works are where theory can get a little hazy. At the energy and distance scales where quarks/gluons start to hadronize, perturbation theory breaks down, making many of our usual calculational tools useless. This, of course, makes the realm of hadronization—often referred to as parton fragmentation in the literature—a hot topic in QCD research.

 

  3. How do we measure/study jets? Now comes the tricky part. As experimentalists will tell you, actually measuring jets can be a messy business. By taking the signatures of the final state particles in an event (i.e. a collision), one can reconstruct a jet using a jet algorithm. One of the first such jet definitions was introduced by George Sterman and Steven Weinberg in 1977. They defined a jet using two parameters, θ and E, which restricted the angle and energy of particles that are in or out of a jet. Today, we have a variety of jet algorithms that fall into two categories:
  • Cone Algorithms — These algorithms identify stable cones of a given angular size. The cones are defined in such a way that adding or removing one or two nearby particles won't drastically change the cone's location and energy.
  • Recombination Algorithms — These look pairwise at the 4-momenta of all particles in an event and combine them, according to a certain distance metric (there's a different one for each algorithm), in such a way as to be left with distinct, well-separated jets (a minimal sketch of this pairwise approach appears below Figure 2).
Figure 2: From Cacciari, Salam, and Soyez's original paper on the "Anti-kT" jet algorithm (see arXiv:0802.1189). The picture shows the application of 4 different jet algorithms: the kT, Cambridge/Aachen, Seedless-Infrared-Safe Cone, and anti-kT algorithms to a single set of final state particles in an event. You can see how each algorithm reconstructs a slightly different jet structure. These are among the most commonly used clustering algorithms on the market (the anti-kT being, at least in my experience, the most popular).
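To make the recombination idea concrete, here is a brute-force sketch of anti-kT-style clustering in Python. This is my own illustrative toy, not FastJet (the production implementation): real code recombines full four-momenta and uses clever O(N ln N) geometry, while this version just applies the anti-kT distance metric d_ij = min(1/pT_i², 1/pT_j²) ΔR_ij²/R² directly.

```python
import math

def delta_r2(p1, p2):
    """Squared angular distance in the (eta, phi) plane, with phi wrapping."""
    dphi = math.remainder(p1["phi"] - p2["phi"], 2 * math.pi)
    return (p1["eta"] - p2["eta"]) ** 2 + dphi ** 2

def antikt_cluster(particles, R=0.4):
    """particles: list of dicts with keys pt, eta, phi. Returns final jets."""
    objs = [dict(p) for p in particles]
    jets = []
    while objs:
        kind, idx, jdx, dmin = "beam", 0, None, float("inf")
        for i, pi in enumerate(objs):
            if 1.0 / pi["pt"] ** 2 < dmin:                 # beam distance d_iB
                kind, idx, jdx, dmin = "beam", i, None, 1.0 / pi["pt"] ** 2
            for j in range(i + 1, len(objs)):
                dij = (min(1.0 / pi["pt"] ** 2, 1.0 / objs[j]["pt"] ** 2)
                       * delta_r2(pi, objs[j]) / R ** 2)   # anti-kT d_ij
                if dij < dmin:
                    kind, idx, jdx, dmin = "pair", i, j, dij
        if kind == "beam":
            jets.append(objs.pop(idx))                     # declare a final jet
        else:
            pi, pj = objs[idx], objs[jdx]
            pt = pi["pt"] + pj["pt"]
            merged = {  # crude pt-weighted merge; real tools add 4-momenta
                "pt": pt,
                "eta": (pi["pt"] * pi["eta"] + pj["pt"] * pj["eta"]) / pt,
                "phi": (pi["pt"] * pi["phi"] + pj["pt"] * pj["phi"]) / pt,
            }
            for k in sorted((idx, jdx), reverse=True):
                objs.pop(k)
            objs.append(merged)
    return jets
```

Because the metric weights hard particles most heavily, soft radiation clusters around the hard cores first, which is what gives anti-kT jets their characteristically regular, cone-like shapes.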
  4. Why are jets important? On the frontier of high energy particle physics, CERN leads the world's charge in the search for new physics. From deepening our understanding of the Higgs to observing never-before-seen particles, projects like ATLAS, CMS, and LHCb promise to uncover interesting physics for years to come. As it turns out, a large amount of Standard Model background to these new physics discoveries comes in the form of jets. Understanding the origin and workings of these jets can thus help us in the search for physics beyond the Standard Model.

An illustration of an interesting type of jet substructure observable called "N-subjettiness" from the original paper by Jesse Thaler and Ken Van Tilburg (see arXiv:1011.2268). N-subjettiness aims to study how momenta within a jet are distributed by dividing them up into N sub-jets. The diagram on the left shows an example of 2-subjettiness, where a jet contains two sub-jets. The diagram on the right shows a jet with 0 sub-jets.

Additionally, there are a number of interesting questions that remain about the Standard Model itself. From studying heavy hadron production and decay in pp and heavy-ion collisions to providing precision measurements of the strong coupling, jet physics has a wide range of applicability and relevance to Standard Model problems. In recent years, the physics of jet substructure, which studies the distributions of particle momenta within a jet, has also seen increased interest. By studying the geometry of jets, a number of clever observables have been developed that can help us understand what particles they come from and how they are formed. Jet substructure studies will be the subject of many future bites!
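As a flavor of such observables, here is a minimal sketch of the N-subjettiness τ_N of Thaler and Van Tilburg (arXiv:1011.2268). For simplicity the candidate subjet axes are passed in by hand, whereas real implementations choose them by minimization or exclusive-kT clustering; treat this as a toy.

```python
import math

def tau_n(constituents, axes, R0=1.0):
    """constituents and axes: lists of (pt, eta, phi) tuples.
    Returns tau_N for N = len(axes); smaller values mean the jet's
    radiation is better described by N subjets."""
    num = den = 0.0
    for pt, eta, phi in constituents:
        # distance from this constituent to the nearest candidate axis
        dr_min = min(
            math.hypot(eta - a_eta,
                       math.remainder(phi - a_phi, 2 * math.pi))
            for _, a_eta, a_phi in axes
        )
        num += pt * dr_min
        den += pt * R0
    return num / den
```

A jet with genuinely two-pronged substructure (say, from a boosted W or Z) will have a small τ₂/τ₁ ratio, while an ordinary quark/gluon jet will not.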

Going forward… With any luck, this should serve as a brief outline to the uninitiated on the basics of jet physics. In a world increasingly filled with bigger, faster, and stronger colliders, jets will continue to play a major role in particle phenomenology. In upcoming bites, I'll discuss the wealth of new and exciting results coming from jet physics research. We'll examine questions like:

  1. How do theoretical physicists tackle problems in jet physics?
  2. How does the process of hadronization/fragmentation of quarks and gluons really work?
  3. Can jets be used to answer long outstanding problems in the Standard Model?

I’ll also bite about how physicists use theoretical smart bombs called “effective field theories” to approach these often nasty theoretical calculations. But more on that later…

 

Further Reading…

  1. "QCD and Collider Physics" (a.k.a. The Pink Book) by Ellis, Stirling, and Webber — This is a fantastic reference for a variety of important topics in QCD. Even if many of the derivations are beyond you at this point, it still contains great explanations of the underlying physics concepts.
  2. “Quantum Field Theory and the Standard Model” by Matthew Schwartz — A relatively new QFT textbook written by a prominent figure in the jet physics world. Chapter 20 has an engaging introduction to the concept of jets. Warning: It will take a bit of familiarity with QFT/Particle physics to really get into the details.

750 GeV Bump Update

Article: Search for resonant production of high-mass photon pairs in proton-proton collisions at sqrt(s) = 8 and 13 TeV
Authors: CMS Collaboration
Reference: arXiv:1606.04093 (submitted to Phys. Rev. Lett.)

Following the discovery of the Higgs boson at the LHC in 2012, high-energy physicists asked the same question that they have asked for years: "what's next?" This time, however, the answer to that question was not nearly as obvious as it had been in the past. When the top quark was discovered at Fermilab in 1995, the answer was clear: "the Higgs is next." And when the W and Z bosons were discovered at CERN in 1983, physicists were saying "the top quark is right around the corner." However, because the Higgs is the last piece of the puzzle that is the Standard Model, there is no clear answer to the question "what's next?" At the moment, the honest answer to this question is "we aren't quite sure."

The Higgs completes the Standard Model, which would be fantastic news were it not for the fact that there remain unambiguous indications of physics beyond the Standard Model. Among these is dark matter, which makes up roughly one-quarter of the energy content of the universe. Neutrino mass, the Hierarchy Problem, and the matter-antimatter asymmetry in the universe are among other favorite arguments in favor of new physics. The salient point is clear: the Standard Model, though newly-completed, is not a complete description of nature, so we must press on.

Background-only p-values for a new scalar particle in the CMS diphoton data. The dip at 750 GeV may be early evidence for a new particle.

Near the end of Run I of the LHC (2013) and the beginning of Run II (2015), the focus was on searches for new physics. While searches for supersymmetry and the direct production of dark matter drew considerable focus, towards the end of 2015 a small excess – or, as physicists commonly refer to them, a bump – began to materialize in decays to two photons seen by the CMS Collaboration. This observation was made all the more exciting by the fact that ATLAS observed an analogous bump in the same channel with roughly the same significance. The paper in question here, published June 2016, presents a combination of the 2012 (8 TeV) and 2015 (13 TeV) CMS data; it represents the most recent public CMS result on the so-called "di-photon resonance". (See also Roberto's recent ParticleBite.)

This analysis searches for events with two photons, a relatively clean signal. If there is a heavy particle which decays into two photons, then we expect to see an excess of events near the mass of this particle. In this case, CMS and ATLAS have observed an excess of events near 750 GeV in the di-photon channel. While some searches for new physics rely upon hard kinematic requirements or tailor their search to a certain signal model, the signal here is simple: look for an event with two photons and nothing else. However, because this is a model-independent search with loose selection requirements, great care must be taken to understand the background (events that mimic the signal) in order to observe an excess, should one exist. In this case, the background processes are direct production of two photons and events where one or more photons are actually misidentified jets. For example, a neutral pion may be mistaken for a photon.

Part of the excitement from this excess is due to the fact that ATLAS and CMS both observed corresponding bumps in their datasets, a useful cross-check that the bump has a chance of being real. A bigger part of the excitement, however, is the physics implication of a new, heavy particle that decays into two photons. A particle decaying to two photons would likely be either spin-0 or spin-2 (in principle, it could be of spin-N, where N is an integer and N ≥ 2). Models exist in which the aforementioned Higgs boson, h(125), is one of a family of Higgs particles, and these so-called "expanded Higgs sectors" predict heavy, spin-0 particles which would decay to two photons. Moreover, in models in which there are extra spatial dimensions, we would expect to find a spin-2 resonance – a graviton – decaying to two photons. Both of these scenarios would be extremely exciting, if realized by experiment, which contributed to the buzz surrounding this signal.

So, where do we stand today? After considering the data from 2015 (at 13 TeV center-of-mass energy) and 2012 (at 8 TeV center-of-mass energy) together, CMS reports an excess with a local significance of 3.4-sigma. However, the global significance – which takes into account the “look-elsewhere effect” and is the figure of merit here – is a mere 1.6-sigma. While the outlook is not extremely encouraging, more data is needed to definitively rule on the status of the di-photon resonance. CMS and ATLAS should have just that, more data, in time for the International Conference on High Energy Physics (ICHEP) 2016 in early August. At that point, we should have sufficient data to determine the fate of the di-photon excess. For now, the di-photon bump serves as a reminder of the unpredictability of new physics signatures, and it might suggest the need for more model-independent searches for new physics, especially as the LHC continues to chip away at the available supersymmetry phase space without any discoveries.
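To make the local-versus-global distinction concrete, here is a quick sketch converting one-sided significances to p-values (the 3.4σ and 1.6σ figures are from the paper; reading their ratio as a rough "trials factor" is my own loose interpretation).

```python
from scipy.stats import norm

p_local = norm.sf(3.4)    # one-sided p-value for 3.4 sigma, ~3.4e-4
p_global = norm.sf(1.6)   # one-sided p-value for 1.6 sigma, ~5.5e-2
print(f"local p ~ {p_local:.1e}, global p ~ {p_global:.2f}")

# The look-elsewhere effect inflates the p-value roughly in proportion to
# the number of independent places a bump could have appeared:
print(f"implied trials factor ~ {p_global / p_local:.0f}")
```

In other words, once you account for how many mass bins could have fluctuated, a locally impressive bump becomes far less compelling.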

Probing the Standard Model with muons: new results from MEG

Article: Search for the lepton flavor violating decay μ+ → e+γ with the full dataset of the MEG experiment
Authors: MEG Collaboration
Reference: arXiv:1605.05081

I work on the Muon g-2 experiment, which is housed inside a brand new building at Fermilab.  Next door, another experiment hall is under construction. It will be the home of the Mu2e experiment, which is slated to use Fermilab’s muon beam as soon as Muon g-2 wraps up in a few years. Mu2e will search for evidence of an extremely rare process — namely, the conversion of a muon to an electron in the vicinity of a nucleus. You can read more about muon-to-electron conversion in a previous post by Flip.

Today, though, I bring you news of a different muon experiment, located at the Paul Scherrer Institute in Switzerland. The MEG experiment was operational from 2008-2013, and they recently released their final result.

Context of the MEG experiment

Figure 1: Almost 100% of the time, a muon will decay into an electron and two neutrinos.

MEG (short for “mu to e gamma”) and Mu2e are part of the same family of experiments. They each focus on a particular example of charged lepton flavor violation (CLFV). Normally, a muon decays into an electron and two neutrinos. The neutrinos ensure that lepton flavor is conserved; the overall amounts of “muon-ness” and “electron-ness” do not change.

Figure 2 lists some possible CLFV muon processes. In each case, the muon transforms into an electron without producing any neutrinos — so lepton flavor is not conserved! These processes are allowed by the Standard Model, but with such minuscule probabilities that we couldn't possibly measure them. If that were the end of the story, no one would bother doing experiments like MEG and Mu2e — but of course that's not the end of the story. It turns out that many new physics models predict CLFV at levels that are within range of the next generation of experiments. If an experiment finds evidence for one of these CLFV processes, it will be a clear indication of beyond-the-Standard-Model physics.

Figure 2: Some examples of muon processes that do not conserve lepton flavor. Also listed are the current/upcoming experiments that aim to measure the probabilities of these never-before-observed processes.

Results from MEG

The goal of the MEG experiment was to do one of two things:

  1. Measure the branching ratio of the μ+ → e+γ decay, or
  2. Establish a new upper limit

Outcome #1 is only possible if the branching ratio is high enough to produce a clear signal. Otherwise, all the experimenters can do is say “the branching ratio must be smaller than such-and-such, because otherwise we would have seen a signal” (i.e., outcome #2).

MEG saw no evidence of μ+ → e+γ decays. Instead, they determined that the branching ratio is less than 4.2 × 10^-13 (90% confidence level). Roughly speaking, that means if you had a pair of magic goggles that let you peer directly into the subatomic world, you could stand around and watch 2 × 10^12 muons decay without seeing anything unusual. Because real experiments are messier and less direct than magic goggles, the MEG result is actually based on data from 7.5 × 10^14 muons.
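As a back-of-envelope check (my own, with idealized assumptions, not the collaboration's statistical analysis): with zero observed events and negligible background, a 90% CL Poisson upper limit corresponds to about 2.3 expected signal events, so a perfect detector would do even better than the published limit.

```python
N_muons = 7.5e14     # muons in the MEG dataset
n90 = 2.303          # 90% CL Poisson upper limit for 0 observed events,
                     # no background: solves exp(-mu) = 0.10

ideal_limit = n90 / N_muons
print(f"ideal-detector 90% CL limit: {ideal_limit:.1e}")   # ~3.1e-15
# MEG's published limit, 4.2e-13, is weaker because the detector's
# acceptance and efficiencies are well below 100%.
```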

Before MEG, the previous experiment to search for μ+ → e+γ was the MEGA experiment at Los Alamos; they collected data from 1993-1995, and published their final result in 1999. They found an upper limit for the branching ratio of 1.2 × 10^-11. Thus, MEG achieved a factor of 30 improvement in sensitivity over the previous result.

How the experiment works

Figure 3: The MEG signal consists of a back-to-back positron and gamma, each carrying half the rest energy of the parent muon.

A continuous beam of positive muons enters a large magnet and hits a thin plastic target. By interacting with the material, about 80% of the muons lose their kinetic energy and come to rest inside the target. Because the muons decay from rest, the MEG signal is simple. Energy and momentum must be conserved, so the positron and gamma emerge from the target in opposite directions, each with an energy of 52.83 MeV (half the rest energy of the muon).¹ The experiment is specifically designed to catch and measure these events. It consists of three detectors: a drift chamber to measure the positron trajectory and momentum, a timing counter to measure the positron time, and a liquid xenon detector to measure the photon time, position, and energy. Data from all three detectors must be combined to get a complete picture of each muon decay, and determine whether it fits the profile of a MEG signal event.

Figure 4: Layout of the MEG experiment. Source: arXiv:1605.05081.

In principle, it sounds pretty simple… to search for MEG events, you look at each chunk of data and go through a checklist:

  • Is there a photon with the correct energy?
  • Is there a positron at the same time?
  • Did the photon and positron emerge from the target in opposite directions?
  • Does the positron have the correct energy?

Four yeses and you might be looking at a rare CLFV muon decay! However, the key word here is might. Unfortunately, it is possible for a normal muon decay to masquerade as a CLFV decay. For MEG, one source of background is “radiative muon decay,” in which a muon decays into a positron, two neutrinos and a photon; if the neutrinos happen to have very low energy, this will look exactly like a MEG event. In order to get a meaningful result, MEG scientists first had to account for all possible sources of background and figure out the expected number of background events for their data sample. In general, experimental particle physicists spend a great deal of time reducing and understanding backgrounds!
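Purely as a cartoon of that checklist (the real analysis is a likelihood fit with carefully measured resolutions, and every threshold below is invented for illustration):

```python
import math

def looks_like_signal(e_gamma, e_positron, dt, opening_angle,
                      e0=52.83, de=0.5, dt_max=0.25e-9, dtheta=0.01):
    """Energies in MeV, dt in seconds, opening_angle in radians.
    All cut values here are made up for the sake of the cartoon."""
    return (abs(e_gamma - e0) < de and          # photon at the right energy?
            abs(e_positron - e0) < de and       # positron too?
            abs(dt) < dt_max and                # coincident in time?
            abs(opening_angle - math.pi) < dtheta)  # back to back?
```

The radiative-decay background described above is dangerous precisely because it can pass all four of these cuts when the neutrinos carry almost no energy.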

What’s next for MEG?

The MEG collaboration is planning an upgrade to their detector which will produce an order of magnitude improvement in sensitivity. MEG-II is expected to begin three years of data-taking late in 2017. Perhaps at the new level of sensitivity, a μ+ → e+γ signal will emerge from the background!

 

¹ Because photons are massless and positrons are not, their energies are not quite identical, but it turns out that they both round to 52.83 MeV. You can work it out yourself if you're skeptical (that's what I did).
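If you'd rather let a computer be skeptical for you, the two-body kinematics is a three-line calculation:

```python
m_mu, m_e = 105.658, 0.511   # masses in MeV/c^2 (PDG values)

E_gamma = (m_mu**2 - m_e**2) / (2 * m_mu)   # photon energy
E_e     = (m_mu**2 + m_e**2) / (2 * m_mu)   # positron total energy

print(f"E_gamma = {E_gamma:.2f} MeV, E_e = {E_e:.2f} MeV")
# Both print as 52.83 MeV; they differ by m_e^2/m_mu, only ~2.5 keV.
```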

Further Reading

  • Robert H. Bernstein and Peter S. Cooper, “Charged Lepton Flavor Violation: An Experimenter’s Guide.” (arXiv:1307.5787)
  • S. Mihara, J.P. Miller, P. Paradisi and G. Piredda, “Charged Lepton Flavor–Violation Experiments.” (DOI: 10.1146/annurev-nucl-102912-144530)
  • André de Gouvêa and Petr Vogel, “Lepton Flavor and Number Conservation, and Physics Beyond the Standard Model.” (arXiv:1303.4097)

Jets: From Energy Deposits to Physics Objects

Title: “Jet energy scale and resolution in the CMS experiment in pp collisions at 8 TeV”
Author: The CMS Collaboration
Reference: arXiv:1607.03663 [hep-ex]

As a collider physicist, I care a lot about jets. They are fascinating objects that cover the ATLAS and CMS detectors during LHC operation and make event displays look really cool (see Figure 1). Unfortunately, as interesting as jets are, they're also somewhat complicated and difficult to measure. A recent paper from the CMS Collaboration details exactly how we reconstruct, simulate, and calibrate these objects.

Figure 1: This event was collected in August 2015. The two high-pT jets have an invariant mass of 6.9 TeV, and the leading and subleading jets have a pT of 1.3 and 1.2 TeV respectively. (Image credit: ATLAS public results)

For the uninitiated, a jet is the experimental signature of quarks or gluons that emerge from a high energy particle collision. Since these colored Standard Model particles cannot exist on their own due to confinement, they cluster or ‘hadronize’ as they move through a detector. The result is a spray of particles coming from the interaction point. This spray can contain mesons, charged and neutral hadrons, basically anything that is colorless as per the rules of QCD.

So what does this mess actually look like in a detector? ATLAS and CMS are designed to absorb most of a jet’s energy by the end of the calorimeters. If the jet has charged constituents, there will also be an associated signal in the tracker. It is then the job of the reconstruction algorithm to combine these various signals into a single object that makes sense. This paper discusses two different reconstructed jet types: calo jets and particle-flow (PF) jets. Calo jets are built only from energy deposits in the calorimeter; since the resolution of the calorimeter gets worse with higher energies, this method can get bad quickly. PF jets, on the other hand, are reconstructed by linking energy clusters in the calorimeters with signals in the trackers to create a complete picture of the object at the individual particle level. PF jets generally enjoy better momentum and spatial resolutions, especially at low energies (see Figure 2).

Figure 2: Jet-energy resolution for calorimeter and particle-flow jets as a function of the jet transverse momentum. The improvement in resolution, of almost a factor of two at low transverse momentum, remains sizable even for jets with very high transverse momentum. (Image credit: CMS Collaboration)

Once reconstruction is done, we have a set of objects that we can now call jets. But we don't want to keep all of them for real physics. Any given event will have a large number of pileup jets, which come from softer collisions between other protons in a bunch (in time), or leftover calorimeter signals from the previous bunch crossing (out of time). Being able to identify and subtract pileup considerably enhances our ability to calibrate the deposits that we know came from good physics objects. In this paper CMS reports a pileup reconstruction and identification efficiency of nearly 100% for hard scattering events, and they estimate that each jet energy is enhanced by about 10 GeV due to pileup alone.

Once the pileup is corrected, the overall jet energy correction (JEC) is determined via detector response simulation. The simulation models how the initial quarks and gluons fragment, and the way in which the subsequent partons shower in the calorimeters. This correction depends on jet momentum (since the calorimeter resolution does as well) and on jet pseudorapidity (different areas of the detector are made of different materials or have different total thickness). Figure 3 shows the overall correction factors for several different jet radius R values.

Figure 3: Jet energy correction factors for a jet with pT = 30 GeV, as a function of eta (left). Note the spikes around 1.7 (TileGap3, very little absorber material) and 3 (beginning of endcaps). Simulated jet energy response after JEC as a function of pT (right).
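Schematically, applying a JEC is just a multiplicative rescaling that depends on the raw pT and eta. Here is a toy sketch; the functional form and numbers are invented for illustration, since real corrections are derived from the simulation and data as described above.

```python
def apply_jec(raw_pt, eta, correction):
    """correction: callable C(pt, eta) returning a multiplicative factor."""
    return raw_pt * correction(raw_pt, eta)

def toy_correction(pt, eta):
    factor = 1.0 + 0.5 / pt ** 0.5       # larger correction at low pT
    if 1.5 < abs(eta) < 2.0:             # extra bump in a poorly
        factor += 0.05                   # instrumented eta region (toy)
    return factor

print(apply_jec(30.0, 1.7, toy_correction))   # corrected pT in GeV
```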

Finally, we turn to data as a final check on how well these calibrations went. An example of such a check is the tag and probe method with dijet events. Here, we take a good clean event with two back-to-back jets, and ask for one low-eta jet to serve as the 'tag' jet. The other 'probe' jet, at arbitrary eta, is then measured using the previously derived corrections. If the resulting pT is close to the pT of the tag jet, we know the calibration was solid (this also gives us info on how calibrations perform as a function of eta). A similar method known as pT balancing can be done with a single jet back to back with an easily reconstructed object, such as a Z boson or a photon.
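A minimal sketch of the balance check (my own illustration, assuming you already have arrays of selected back-to-back dijet events):

```python
import numpy as np

def balance_vs_eta(tag_pt, probe_pt, probe_eta, eta_edges):
    """tag_pt, probe_pt, probe_eta: arrays over selected dijet events.
    Returns the mean probe/tag pT ratio in each |eta| bin; values near 1
    indicate the corrections close properly in that region."""
    ratio = probe_pt / tag_pt
    bins = np.digitize(np.abs(probe_eta), eta_edges)
    return [ratio[bins == i].mean() if np.any(bins == i) else np.nan
            for i in range(1, len(eta_edges))]
```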

This is really a bare bones outline of how jet calibration is done. In real life, there are systematic uncertainties, jet flavor dependence, correlations; the list goes on. But the entire procedure works remarkably well given the complexity of the task. Ultimately CMS reports a jet energy uncertainty of 3% for most physics analysis jets, and as low as 0.32% for some jets—a new benchmark for hadron colliders!

 

Further Reading:

  1. “Jets: The Manifestation of Quarks and Gluons.” Of Particular Significance, Matt Strassler.
  2. “Commissioning of the Particle-flow Event Reconstruction with the first LHC collisions recorded in the CMS detector.” The CMS Collaboration, CMS PAS PFT-10-001.
  3. “Determination of jet energy calibrations and transverse momentum resolution in CMS.” The CMS Collaboration, 2011 JINST 6 P11002.

The dawn of multi-messenger astronomy: using KamLAND to study gravitational wave events GW150914 and GW151226

Article: Search for electron antineutrinos associated with gravitational wave events GW150914 and GW151226 using KamLAND
Authors: KamLAND Collaboration
Reference: arXiv:1606.07155

 

After the chirp heard ‘round the world, the search is on for coincident astrophysical particle events to provide insight into the source and nature of the era-defining gravitational wave events detected by the LIGO Scientific Collaboration in late 2015.

By combining information from gravitational wave (GW) events with the detection of astrophysical neutrinos and electromagnetic signatures such as gamma-ray bursts, physicists and astronomers are poised to draw back the curtain on the dynamics of astrophysical phenomena, and we’re surely in for some surprises.

The first recorded gravitational wave event, GW150914, was likely a merger of two black holes which took place more than one billion light years from the Earth. The event's name marks the day it was observed by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO), September 14th, 2015. LIGO detections are named "GW" for "gravitational wave," followed by the observation date in YYMMDD format. The second event, GW151226 (December 26th, 2015), was likely another merger of two black holes, with 8 and 14 times the mass of the sun, taking place 1.4 billion light years away from Earth. A third gravitational wave event candidate, LVT151012, a possible black hole merger which occurred on October 12th, 2015, did not reach the same detection significance as the aforementioned events, but still has a >50% chance of astrophysical origin. LIGO candidates are named differently than detections: the names start with "LVT" for "LIGO-Virgo Trigger," followed by the observation date in the same YYMMDD format. The different prefix indicates that the event was not significant enough to be called a gravitational wave.

 

Two black holes spiral in towards one another and merge to emit a burst of gravitational waves that Advanced LIGO can detect. Source: APS Physics.

The following computer simulation, created by the multi-university SXS (Simulating eXtreme Spacetimes) project, depicts what the collision of two black holes would look like if we could get close enough to the merger. It was created by solving equations from Albert Einstein's general theory of relativity using the LIGO data. (Source: LIGO Lab Caltech/MIT)

Observations from other scientific collaborations can search for particles associated with these gravitational waves. The combined information from the gravitational wave and particle detections could identify the origin of these gravitational wave events. For example, some violent astrophysical phenomena emit not only gravitational waves, but also high-energy neutrinos. Conversely, there is currently no known mechanism for the production of either neutrinos or electromagnetic waves in a black hole merger.

Black holes with rapidly accreting disks can be the origin of gamma-ray bursts and neutrino signals, but these disks are not expected to be present during mergers like the ones detected by LIGO. For this reason, it was surprising when the Fermi Gamma-ray Space Telescope reported a coincident gamma-ray burst occurring 0.4 seconds after the September GW event with a false alarm probability of 1 in 455. Although there is some debate in the community about whether or not this observation is to be believed, the observation motivates a multi-messenger analysis including the hunt for associated astrophysical neutrinos at all energies.

Could a neutrino experiment like KamLAND find low energy antineutrino events coincident with the GW events, even when higher energy searches by IceCube and ANTARES did not?

 

Schematic diagram of the KamLAND detector. Source: arXiv:hep-ex/0212021

KamLAND, the Kamioka Liquid scintillator Anti-Neutrino Detector, is located under Mt. Ikenoyama, Japan, buried beneath the equivalent of 2,700 meters of water. It consists of an 18 meter diameter stainless steel sphere, the inside of which is covered with photomultiplier tubes, surrounding an EVOH/nylon balloon enclosed by pure mineral oil. Inside the balloon resides 1 kton of highly purified liquid scintillator. Outside the stainless steel sphere is a cylindrical 3.2 kton water-Cherenkov detector that provides shielding and enables cosmic ray muon identification.

KamLAND is optimized to search for ~MeV neutrinos and antineutrinos. The detection of the gamma ray burst by the Fermi telescope suggests that the detected black hole merger might have retained its accretion disk, and the spectrum of accretion disk neutrinos around a single black hole is expected to peak around 10 MeV. KamLAND therefore searched for correlations between the LIGO GW events and ~10 MeV electron antineutrino events occurring within a 500 second window around the merger events, focusing on the detection of electron antineutrinos through the inverse beta decay reaction.

No events were found within the target window of any gravitational wave event, and any adjacent event was consistent with background. KamLAND researchers used this information to determine a monochromatic fluence (time integrated flux) upper limit, as well as an upper limit on source luminosity for each gravitational wave event, which places a bound on the total energy released as low energy neutrinos during the merger events and candidate event. The lack of detected concurrent inverse beta decay events supports the conclusion that GW150914 was a black hole merger, and not another astrophysical event such as a core-collapse supernova.
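The logic of a fluence limit is simple enough to sketch with rough numbers. All inputs below are my own order-of-magnitude assumptions, not the collaboration's exact values.

```python
# With zero coincident events observed (and negligible background), the
# 90% CL limit on the expected signal is ~2.3 events, so roughly:
#   fluence < 2.3 / (N_targets * sigma_IBD * efficiency)
n90        = 2.3       # 90% CL Poisson upper limit for zero observed events
n_protons  = 5.9e31    # free-proton targets in ~1 kton of scintillator (rough)
sigma_ibd  = 7e-42     # cm^2, rough IBD cross section near E_nu ~ 10 MeV
efficiency = 0.9       # assumed detection efficiency

fluence_limit = n90 / (n_protons * sigma_ibd * efficiency)
print(f"fluence < {fluence_limit:.1e} antineutrinos per cm^2")   # ~6e9
```

Converting such a fluence limit into a luminosity limit then just requires the assumed neutrino energy and the distance to the source.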

More information would need to be obtained to explain the gamma ray burst observed by the Fermi telescope, and work to improve future measurements is ongoing. Large uncertainties in the origin region of gamma ray bursts observed by the Fermi telescope will be reduced, and the localization of GW events will be improved, most drastically so by the addition of a third LIGO detector (LIGO India).

As Advanced LIGO continues its operation, there will likely be many more chances for KamLAND and other neutrino experiments to search for coincidence neutrinos. Multi-messenger astronomy has only just begun to shed light on the nature of black holes, supernovae, mergers, and other exciting astrophysical phenomena — and the future looks bright.

Discovery of a New Particle or a Sick and Twisted Santa?

Good day particle nibblers,

The last time I was here I wrote about the potentially exciting "bump" which was observed by both the ATLAS and CMS experiments at the LHC. As you'll recall, the "bump" I'm referring to here is the excess of events seen at around 750 GeV in data containing pairs of high energy photons, what you may have heard referred to as "the diphoton excess". The announcement was made by the experimental collaborations just before Christmas last year, ensuring that theorists around the world would not enjoy a Christmas break, as we instead plunged head first into model building and speculation about what this "bump" could be. Combined with too much holiday wine, this led to an explosion of papers in the following weeks and months (see here for a Game of Thrones themed accounting of the papers written).

The excitement was further fueled in March at the Moriond conference when both ATLAS and CMS announced results from re-analyzed data taken at 13 TeV during 2015 (and some 8 TeV data taken in 2012). They found, after optimizing their analysis for both a spin-0 and a spin-2 particle, that the statistical significance of the excess increased slightly in both experiments (see Figure 1 for ATLAS results and here for a more in-depth discussion).

Figure 1: ATLAS 13 TeV diphoton spectrum with cuts optimized for a spin-0 heavy resonance (left) and for a spin-2 resonance (right).

In the end both experiments reported a (local) statistical significance (see Footnote 1) of more than 3 standard deviations (or 3σ for short). Normally 3σ’s don’t cause such a frenzy, but the fact that two separate experiments observed this made the probability that it was just a statistical fluctuation much lower (something on the order of 1 in a few thousand chance). If this excess really is just a statistical fluctuation it is a pretty nasty one indeed and may suggest a sick and twisted Santa has been messing with the fragile emotional state of particle theorists ever since Christmas (see Figure 2).

Figure 2: Last known photo of the sick and twisted Santa suspected of perpetuating the false hope of a 750 GeV diphoton excess.

Since the update at the Moriond conference in March (based primarily on 2015 data), particle physicists have been eagerly awaiting the first results based on data taken at the LHC in 2016. With the rate at which the LHC has been accumulating data this year, already there is more than enough collected by ATLAS and CMS to definitively pin down whether the excess is real or if we are indeed dealing with a demented Santa. The first official results will be presented later this summer at ICHEP, but we particle physicists are impatient, so the rumor chasing is already in full swing.

Sadly, the latest rumors circulating in the twitter/blogosphere (see also here, here, and here for further rumor mongering) seem to indicate that the excess has disappeared with the new data collected in 2016. While we have to wait for the experimental collaborations to make an official public announcement before shedding tears, judging by the sudden slow down of ‘diphoton excess’ papers appearing on the arXiv, it seems much of the theory community is already accepting this pessimistic scenario.

If the diphoton excess is indeed dead it will be a sad day for the particle physics community. The possibilities for what it could have been were vast and mind-boggling. Even more exciting however was the fact that if the diphoton excess were real and associated with a new resonance, the discovery of additional new particles would almost certainly have been just around the corner, thus setting off a new era of experimental particle physics. While a dead diphoton excess would indeed be sad, I urge you young nibblers to not be discouraged. One thing this whole ordeal has taught us is that the LHC is an amazing machine and working fantastically. Second, there are still many interesting theoretical ideas out there to be explored, some of which came to light in attempting to explain the excess. And remember it only takes one discovery to set off a revolution of physics beyond the Standard Model so don’t give up hope yet!

I also urge you to not pay much attention to the inevitable negative backlash that will occur (and is already beginning in the blogosphere) both within the particle physics community and the popular media. There was a legitimate excess in the 2015 diphoton data and that got theorists excited (reasonably so IMO), including yours truly. If the excitement of the excess brought in a few more particle nibblers then even better still! So while we mourn the (potential) loss of this excess, let us not give up just yet on the amazing machine that is the LHC possibly discovering new physics. And then we can tell that sick and twisted Santa to go back to the north pole for good!

OK nibblers, that's all the thoughts I wanted to share on the social phenomenon that is (was?) the diphoton excess. While we wait for official announcements, let us in the meantime hope the rumors are wrong and that Santa really is warm and fuzzy and cares about us like they told us as children.

Footnote 1: The global significance was between 1 and 2σ, but I won't get into these details here.

Disclaimer 1: I promise next post I will get back to discussing actual physics instead of just social commentary =).

Disclaimer 2: Since I am way too low on the physics totem pole to have any official information, please take anything written here about rumors of the diphoton excess with a grain of salt. Stay tuned here for more credible sources.

Dark matter or Pulsars?

Title: 3FGL Demographics Outside the Galactic Plane using Supervised Machine Learning: Pulsar and Dark Matter Subhalo Interpretations
Publication: arXiv:1605.00711, accepted by ApJ

The universe has a way of keeping scientists guessing. For over 70 years, scientists have been trying to understand the particle nature of dark matter. We’ve buried detectors deep underground to shield them from backgrounds, smashed particles together at inconceivably high energies, and dedicated instruments to observing where we have measured dark matter to be a dominant component. Like any good mystery, this has yielded more questions than answers.

There are a lot of ideas as to what the distribution of dark matter looks like in the universe. One example is from a paper by L. Pieri et al. (PRD 83, 023518 (2011), arXiv:0908.0195). They simulated what the gamma-ray sky would look like from dark matter annihilation into b-quarks. The results of their simulation are shown below. The plot is an Aitoff projection in galactic coordinates (meaning that the center of the galaxy is at the center of the map).

Gamma-ray sky map from dark matter annihilation into b-quarks using the Via Lactea simulation, between 3-40 GeV. L. Pieri et al. (PRD 83, 023518 (2011), arXiv:0908.0195)

The obvious bright spot is the galactic center. This is because the center of the Milky Way has the highest density of dark matter nearby (F. Iocco, Pato, Bertone, Nature Physics 11, 245–248 (2015)). Just for some context, the center of the Milky Way is ~8.5 kiloparsecs, or 27,700 light years, away from us… so it's a big neighborhood. However, the center of the galaxy is particularly hard to observe because the Galaxy itself is obstructing our view. As it turns out, there are lots of stars, gas, and dust in our Galaxy 🙂

This map also shows us that there are other regions of high dark matter density away from the Galactic center. These could be dark matter halos, dwarf spheroidal galaxies, galaxy clusters, or anything else with a high density of dark matter. The paper I’m discussing uses this simulation in combination with the Fermi-LAT 3rd source catalog (3FGL) (Fermi-LAT Collaboration, Astrophys. J. Suppl 218 (2015) arXiv:1501.02003).

Over 1/3 of the sources in the 3FGL are unassociated with a known astrophysical source (this means we don’t know what source is yielding gamma rays). The paper analyzes these sources to see if their gamma-ray flux is consistent with dark matter annihilation or if it’s more consistent with the spectral shape from pulsars, rapidly rotating neutron stars with strong magnetic fields that emit radio waves (and gamma-rays) in very regular pulses. These are a fascinating class of astrophysical objects and I’d highly recommend reading up on them (See NASA’s site). The challenge is that the gamma-ray flux from dark matter annihilation into b-quarks is surprisingly similar to that from pulsars (See below).
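To see why the two hypotheses are so degenerate, consider the standard pulsar parametrization, a power law with an exponential cutoff: with typical parameters its spectral energy distribution peaks at a few GeV, right around where the b-quark annihilation spectrum of a weak-scale WIMP also peaks. A minimal sketch (the parameter values are typical, not fits to any source):

```python
import numpy as np

def pulsar_spectrum(E, norm=1.0, gamma=1.5, E_cut=3.0):
    """dN/dE as a power law with an exponential cutoff; E, E_cut in GeV."""
    return norm * E ** (-gamma) * np.exp(-E / E_cut)

E = np.logspace(-1, 2, 500)           # 0.1 to 100 GeV
sed = E**2 * pulsar_spectrum(E)       # E^2 dN/dE, the usual plotted SED
print(f"SED peaks near {E[np.argmax(sed)]:.1f} GeV")   # ~1.5 GeV
```

Both hypotheses therefore produce a similarly curved, few-GeV-peaked spectrum in the Fermi-LAT band, as the figures below show.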

Gamma-ray spectra of dark matter annihilating into b-quarks and taus. (Image produced by me using DMFit)
Gamma-ray spectrum from pulsars in the globular cluster 47 Tuc. (Fermi-LAT Collaboration)

They found 34 candidate sources away from the Galactic plane that are consistent with both dark matter annihilation and pulsar spectra, using two different machine learning techniques. Generally, if a source can be explained by something other than dark matter, that's the accepted interpretation. So, the currently favored astrophysical interpretations for these objects are pulsars. Yet, these objects could also be interpreted as dark matter annihilation taking place in ultra-faint dwarf galaxies or dark matter subhalos. Unfortunately, Fermi-LAT spectra are not sufficient to break the degeneracies between the different scenarios. The distribution of the 34 dark matter subhalo candidates found in this work is shown below.

Galactic distribution of 34 high-latitude Galactic candidates (red circles) superimposed on a smoothed Fermi LAT all-sky map for energies E ≥ 1 GeV, based on events collected during the period 2008 August 4 to 2015 August 4 (Credit: Fermi LAT Collaboration). High-latitude 3FGL pulsars (blue crosses) are also plotted for comparison.

The paper presents scenarios that support the pulsar interpretation as well as the dark matter interpretation. If they are pulsars, the 34 candidates are in excellent agreement with predictions from a population model that predicts many more pulsars than have currently been found. However, if they are dark matter substructures, the authors can place upper limits on the number of Galactic subhalos surviving today and on the dark matter annihilation cross section. The cross section limits are shown below.

 

Upper limits on the dark matter annihilation cross section for the b-quark channel using the 14 subhalo candidates very far from the Galactic plane (>20 degrees) (black solid line). The dashed red line is an upper limit derived from the Via Lactea II simulation when zero 3FGL subhalos are adopted (Schoonenberg et al. 2016). The blue line corresponds to the constraint for zero 3FGL subhalo candidates using the Aquarius simulation instead (Bertoni, Hooper, & Linden 2015). The horizontal dotted line marks the canonical thermal relic cross section (Steigman, Dasgupta, & Beacom 2012).

The only thing we can do (beyond waiting for more Fermi-LAT data) is try to identify these sources definitively as pulsars, which requires extensive follow-up observations using other telescopes (in particular radio telescopes to look for pulses). So stay tuned!

Other reading: See also Chris Karwin’s ParticleBite on the Fermi-LAT analysis.