The lighter side of Dark Matter

Article title: “Absorption of light dark matter in semiconductors”

Authors: Yonit Hochberg, Tongyan Lin, and Kathryn M. Zurek

Reference: arXiv:1608.01994

Direct detection strategies for dark matter (DM) have grown significantly beyond the dominant narrative of looking for these ghostly particles scattering off of large, heavy nuclei. Such experiments search for Weakly-Interacting Massive Particles (WIMPs) in the many-GeV (gigaelectronvolt) mass range. WIMPs are predicted by many beyond-the-Standard-Model (SM) theories, one of the most popular involving a very special and unique extension called supersymmetry. These particles happen to possess just the right properties to be suitable as dark matter, a coincidence once dubbed the "WIMP Miracle". However, as these experiments become more and more sensitive, the null results put increasing stress on the WIMP paradigm's feasibility.

Typical detectors, like those of LUX, XENON, PandaX and ZEPLIN, look for flashes of light (scintillation) produced by particle collisions in noble liquids like argon or xenon. Other cryogenic detectors, used in experiments like CDMS, cool semiconductor arrays down to very low temperatures to search for ionization and phonon (quantized lattice vibration) production in crystals. With these techniques already incredibly successful at setting direct detection limits for heavy dark matter, new ideas are emerging to look into the lighter side.

Recently, DM below the GeV scale has become the new target of a huge range of detection methods, utilizing new techniques and functional materials: semiconductors, superconductors and even superfluid helium. For such light particles, recoils off of the much lighter electrons in fact provide far better sensitivity than recoils off of large, heavy nuclear targets.

There are several ways that one can consider light dark matter interacting with electrons. One popular scenario introduces a new gauge boson that has a very small 'kinetic' mixing with the ordinary photon of the Standard Model. If massive, these 'dark photons' could potentially be dark matter candidates themselves and an interesting avenue for new physics. The specifics of their interaction with the electron are then determined by the mass of the dark photon and the strength of its mixing with the SM photon.

Typically the gap between the valence and conduction bands in semiconductors like silicon and germanium is around an electronvolt (eV). When the energy deposited by the dark matter particle exceeds the band gap, electron excitations in the material can be detected through a complicated secondary cascade of electron-hole pair generation. Below the band gap, however, there is not enough energy to excite an electron into the conduction band, so detection proceeds through low-energy multi-phonon excitations, the dominant channel being the emission of two back-to-back phonons.

In both these regimes, the absorption rate of dark matter is directly related to the optical properties of the target material. In particular, the absorption rate for ordinary SM photons is determined by the polarization tensor in the medium, and in turn by the complex conductivity, \hat{\sigma}(\omega)=\sigma_{1}+i \sigma_{2}, through what is known as the optical theorem. Ultimately this describes the response of the material to an electromagnetic field, which has been measured over several energy ranges. This ties together the astrophysical description of how the dark matter moves through space and the fundamental description of DM-electron interactions at the particle level.

In a more technical sense, the rate of DM absorption, in events per unit time per unit target mass, is given by the following equation (a numerical sketch in code follows the symbol definitions below):

R=\frac{1}{\rho} \frac{\rho_{D M}}{m_{A^{\prime}}} \kappa_{e f f}^{2} \sigma_{1}

  • \rho – mass density of the target material
  • \rho_{DM} – local dark matter mass density (0.3 GeV/cm³) in the galactic halo
  • m_{A'} – mass of the dark photon particle
  • \kappa_{eff} – kinetic mixing parameter (in-medium)
  • \sigma_1 – real part of the complex conductivity, which sets the absorption rate of ordinary SM photons
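
To make the units concrete, here is a minimal numerical sketch of the formula above. Everything except the standard \rho_{DM} = 0.3 GeV/cm³ is an illustrative placeholder: in the paper, \sigma_1 comes from measured optical data for the target, and the interesting values of \kappa_{eff} are what Figure 1 maps out.

```python
# Toy evaluation of R = (1/rho) * (rho_DM / m_A') * kappa_eff^2 * sigma_1,
# with the unit conversions made explicit. Inputs other than rho_DM are
# illustrative placeholders, not values from the paper.

HBAR_EV_S = 6.582e-16      # hbar in eV*s: converts sigma_1 from eV to 1/s
SECONDS_PER_YEAR = 3.154e7
GRAMS_PER_KG = 1.0e3

def events_per_kg_year(rho_target_g_cm3, m_dark_photon_eV,
                       kappa_eff, sigma1_eV, rho_dm_GeV_cm3=0.3):
    """Dark-photon absorption rate per unit target mass, in events/kg/year."""
    n_dm = rho_dm_GeV_cm3 * 1.0e9 / m_dark_photon_eV        # DM number density, 1/cm^3
    rate_per_dm = kappa_eff**2 * sigma1_eV / HBAR_EV_S      # absorptions per second
    per_gram_per_second = n_dm * rate_per_dm / rho_target_g_cm3
    return per_gram_per_second * GRAMS_PER_KG * SECONDS_PER_YEAR

# Silicon target (rho ~ 2.33 g/cm^3), 1 eV dark photon, placeholder couplings:
print(f"{events_per_kg_year(2.33, 1.0, 1e-16, 0.1):.1f} events per kg-year")  # ~6
```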

As shown in Figure 1, the projected sensitivity at 90% confidence level (C.L.) for a 1 kg-year exposure of a semiconductor target to dark photon absorption can be almost an order of magnitude greater than that of existing nuclear recoil experiments. The reach is shown as a function of the kinetic mixing parameter and the mass of the dark photon. Limits are also shown for the existing semiconductor experiments DAMIC and CDMSLite, with 0.6 and 70 kg-day exposures, respectively.

Figure 1. Projected reach of a silicon (blue, solid) and germanium (green, solid) semiconductor target at 90% C.L. for a 1 kg-year exposure through the absorption of dark photon DM, kinetically mixed with SM photons. Multi-phonon excitations dominate in the sub-eV range, and electron excitations take over above approximately 0.6 and 1 eV (the sizes of the band gaps for germanium and silicon, respectively).

Furthermore, in the millielectronvolt-to-kiloelectronvolt mass range, these targets could provide much stronger constraints than any that currently exist from astrophysical sources, even at this exposure. These materials would also provide a novel way of probing this entire mass range within a single experiment, so long as improvements are made in phonon detection.

These possibilities, amongst a plethora of other detection materials and strategies, can open up a significant area of parameter space for finally closing in on the identity of the ever-elusive dark matter!


Discovering the Tau

This plot [1] is the first experimental evidence for the particle that would eventually be named the tau.

On the horizontal axis is the center-of-mass energy of the collision; this particular experiment collided electron and positron beams. On the vertical axis is the cross section for a specific event resulting from those collisions. A cross section is like a probability: when two particles collide, many things can happen, each with its own likelihood, and the cross section for an event encodes the probability of that particular event occurring. Events with larger probability have larger cross sections, and vice versa.

The collaboration found events that could not be explained by the Standard Model at the time. The events in question look like e^{+} e^{-} \rightarrow e^{\pm} + \mu^{\mp} + \text{undetected particles}.

This signature is peculiar because the final state contains both an electron and a muon with opposite charges. In 1975, when this paper was published, there was no way to obtain this final state from any known particles or interactions.

In order to explain this anomaly, particle physicists proposed the following explanations:

  1. Pair production of a heavy lepton. With some insight from the future, we will call this heavy lepton the “tau.”

  2. Pair production of charged bosons. These charged bosons actually end up being the W bosons that mediate the weak nuclear force.

The production of taus and of these bosons is not equally likely, though. Depending on the initial energy of the beams, we are more likely to produce one than the other. It turns out that at the energies of this experiment (a few GeV), it is much more likely to produce taus than to produce the bosons; we would say that the taus have a larger cross section than the bosons. From the plot, we can read off that the cross section for producing taus is largest at around 5 GeV of energy. Since the taus are pair-produced, that 5 GeV must supply the mass of two taus, so this plot predicts the tau to have a mass of about 2.5 GeV.
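
In equation form, this is just the pair-production threshold condition (a rough reading of the bump, in natural units):

E_{\text{cm}} \geq 2 m_{\tau} \quad \Rightarrow \quad m_{\tau} \approx \frac{5\ \text{GeV}}{2} = 2.5\ \text{GeV}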

References

[1] – Evidence for Anomalous Lepton Production in e^{+}e^{-} Annihilation. This is the original paper that announced the anomaly that would become the tau.

[2] – The Discovery of the Tau Lepton. This is a comprehensive story of the discovery of the tau, written by Martin Perl, who would go on to win the 1995 Nobel Prize in Physics for its discovery.

[3] – Lepton Review. HyperPhysics provides an accessible review of the leptonic sector of the Standard Model.

The Early Universe in a Detector: Investigations with Heavy-Ion Experiments

Title: “Probing dense baryon-rich matter with virtual photons”

Author: HADES Collaboration

Reference: https://www.nature.com/articles/s41567-019-0583-8

The quark-gluon plasma, a sea of unbound quarks and gluons moving at relativistic speeds, is thought to exist at extraordinarily high temperature and density, and it is a phase of matter critical to our understanding of the early universe and of extreme stellar interiors. Within the first microseconds after the Big Bang, the matter in the universe is postulated to have been in a quark-gluon plasma phase, before the universe expanded, cooled, and formed the hadrons we observe today from their constituent quarks and gluons. The study of quark matter, the range of phases formed from quarks and gluons, can provide us with insight into this evanescent early era, making it an intriguing focus for experimentation. Astrophysical objects with extremely dense interiors, such as neutron stars, are also thought to house the necessary conditions for the formation of quark-gluon plasma at their cores. With the accumulation of new data from neutron star mergers, studies of quark matter are becoming increasingly productive and ripe for new discovery.

Quantum chromodynamics (QCD) is the theory of quarks and the strong interaction between them. In this theory, quarks and the force-carrying gluons, the aptly-named particles that "glue" quarks together, carry a "color" charge analogous to electric charge in quantum electrodynamics (QED). In QCD, the gluon field between two color charges is often modeled as a narrow tube carrying a constant strong force, in contrast with the inverse-square dependence on distance for fields in QED. The potential energy between a quark pair therefore increases linearly with separation, eventually surpassing the creation energy for a new quark-antiquark pair. Hence, quarks cannot exist unbound at low energies, a property known as color confinement: when separation is attempted between quarks, new quarks are instead produced. In particle accelerators, physicists see "jets" of new color-neutral particles (mesons and baryons) formed in this process of hadronization. At high energies, the story changes and hinges on an idea known as asymptotic freedom, in which the strength of particle interactions decreases with increased energy scale in certain gauge theories such as QCD.
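
As a rough illustration of the linear-potential picture (not a calculation from the paper), one can ask at what separation the energy stored in the flux tube pays for a new light quark-antiquark pair. The string tension and pair-creation energy below are commonly quoted ballpark values, used only for illustration.

```python
# Back-of-the-envelope string-breaking estimate in the linear confinement
# picture sketched above. Both constants are rough, illustrative values.

STRING_TENSION = 0.9e9   # eV per femtometer (~0.9 GeV/fm, a typical quoted value)
PAIR_ENERGY = 0.6e9      # eV to create a light quark-antiquark pair (~2 x 300 MeV)

def stored_energy(separation_fm: float) -> float:
    """Energy (eV) stored in the color flux tube at a given quark separation."""
    return STRING_TENSION * separation_fm

# Separation at which the tube "snaps" into a new quark-antiquark pair:
separation = PAIR_ENERGY / STRING_TENSION
print(f"flux tube breaks at roughly {separation:.2f} fm")  # ~0.7 fm
```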

A Feynman-diagram sketch of the production of new hadrons from an electron-positron collision. We observe an electron-positron pair annihilating to a virtual photon, which then produces a quark-antiquark pair that turns into many hadrons via hadronization. Source: https://cds.cern.ch/record/317673

QCD matter is commonly probed with heavy-ion collision experiments, and quark-gluon plasma has been produced before in minute quantities at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Lab as well as at the Large Hadron Collider (LHC) at CERN. The goal of these experiments is to create conditions similar to those of the early universe or the centers of dense stars, which requires immense temperatures and an abundance of quarks. Heavy ions, such as gold or lead nuclei, fit this bill when smashed together at relativistic speeds. When these collisions occur, the resulting "fireball" of quarks and gluons is unstable and quickly decays into a barrage of new, stable hadrons via the hadronization process discussed above.

There are several main goals of heavy-ion collision experiments around the world, revolving around the study of the phase diagram for quark matter. The first component of this is the search for the critical point: the endpoint of the line of first-order phase transitions. The phase transition between hadronic matter, in which quarks and gluons are confined, and partonic matter, in which they are dissociated in a quark-gluon plasma, is also an active area of investigation. There is additionally an ongoing search for chiral symmetry restoration at finite temperature and finite density. Chiral symmetry concerns the handedness of particles, their behavior under a parity transformation, in which the sign of a spatial coordinate is flipped. In QCD, this symmetry of the underlying theory is broken in ordinary matter, a process known as spontaneous symmetry breaking, and several experiments are designed to investigate evidence of its restoration.

The phase diagram for quark matter, a plot of chemical potential vs. temperature, has many unknown points of interest.  Source: https://www.sciencedirect.com/science/article/pii/S055032131630219X

The HADES (High-Acceptance DiElectron Spectrometer) collaboration is a group attempting to address such questions. In a recent experiment, HADES focused on the creation of quark matter via collisions of a beam of Au (gold) ions with a stack of Au foils. Dileptons, lepton-antilepton pairs that emerge from the decay of virtual particles, are a key element of HADES' findings. In quantum field theory (QFT), in which particles are modeled as excitations in an underlying field, virtual particles can be thought of as transient excitations whose lifetimes are limited by the uncertainty principle. Virtual particles are represented by internal lines in Feynman diagrams, are used as tools in calculations, and are never isolated or measured on their own; they appear only as intermediaries between ordinary particles. In the HADES experiment, virtual photons produce dileptons, which immediately decouple from the strong force. Produced at all stages of the QCD interaction, they are ideal messengers of any modification of hadron properties, and they are also thought to carry information about the thermal properties of the underlying medium.

To actually extract this information, the HADES detector utilizes a time-of-flight chamber and a ring-imaging Cherenkov (RICH) chamber, which identifies particles using the characteristics of Cherenkov radiation: electromagnetic radiation emitted when a particle travels through a dielectric medium at a velocity greater than the phase velocity of light in that medium. The detector is then able to measure the invariant mass, rapidity (a commonly used substitute measure for relativistic velocity), and transverse momentum of emitted electron-positron pairs, the dilepton of choice. In accelerator experiments, there are typically a number of selection criteria in place to ensure that the machinery is detecting the desired particles and recording the corresponding data. When a collision event occurs within HADES, a number of checks ensure that only electron-positron events are kept, factoring in both the number of detected events and detector inefficiency, while excess and background data are thrown out. The end point of this data collection is the four-momentum of each lepton pair, a description of its relativistic energy and momentum components. This allows for the construction of a dilepton spectrum: the distribution of the invariant masses of the detected dileptons.
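
As a minimal sketch of that last step, here is how two measured four-momenta combine into a dilepton invariant mass. The momenta below are made-up examples, not HADES data; histogramming this quantity over many pairs is what builds the dilepton spectrum.

```python
# Combine the four-momenta of an electron and a positron into the invariant
# mass of the dilepton. Example momenta are illustrative only.
import math

M_E = 0.000511  # electron mass in GeV

def four_momentum(px, py, pz, mass=M_E):
    """Build (E, px, py, pz) from a 3-momentum, assuming a known mass."""
    E = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return (E, px, py, pz)

def invariant_mass(p1, p2):
    """Invariant mass (GeV) of two particles given (E, px, py, pz) tuples."""
    E = p1[0] + p2[0]
    px, py, pz = p1[1] + p2[1], p1[2] + p2[2], p1[3] + p2[3]
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

electron = four_momentum(0.10, 0.02, 0.30)   # GeV, illustrative
positron = four_momentum(-0.05, 0.01, 0.25)  # GeV, illustrative
print(f"dilepton invariant mass: {invariant_mass(electron, positron):.3f} GeV")
```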

The main takeaway from this experiment was the observation of an excess of dilepton events with an exponentially falling invariant-mass spectrum, contrasting with the expected number of dileptons from ordinary particle collisions. This suggests a shift in the properties of the underlying matter, with a reconstructed temperature above 70 MeV (note that particle physicists tend to quote temperatures in the more convenient units of electronvolts). The kicker comes when the group compares these results to simulated neutron star mergers, which have expected core temperatures of about 75 MeV. This means that the bulk matter created within HADES is similar to the highly dense matter formed in such mergers, a comparison which has recently become accessible thanks to multi-messenger signals incorporating both electromagnetic and gravitational-wave data.
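
To see how a temperature can be read off an exponentially falling spectrum, here is a toy version of the slope extraction: for dN/dM \propto e^{-M/T}, the slope of \log(dN/dM) versus M is -1/T. The "data" below are synthetic, not the HADES measurement.

```python
# Extract a temperature from an idealized exponential dilepton spectrum.
import numpy as np

T_TRUE = 0.072  # GeV, toy input temperature (~72 MeV)
masses = np.linspace(0.3, 0.7, 9)        # GeV, invariant-mass bin centers
counts = 1e4 * np.exp(-masses / T_TRUE)  # idealized exponential yield

# Fit a straight line to log(counts); its slope is -1/T.
slope, _ = np.polyfit(masses, np.log(counts), 1)
print(f"extracted temperature: {-1.0 / slope * 1e3:.1f} MeV")  # ~72 MeV
```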

Practically, we see that HADES’ approach is quite promising for future studies of matter under extreme conditions, with the potential to reveal much about the state of the universe early on in its history as well as probe certain astrophysical objects — an exciting realization! 

Learn More:

  1. https://home.cern/science/physics/heavy-ions-and-quark-gluon-plasma
  2. https://www-hades.gsi.de/
  3. https://profmattstrassler.com/articles-and-posts/particle-physics-basics/virtual-particles-what-are-they/

When light and light collide

Title: “Evidence for light-by-light scattering in heavy-ion collisions with the ATLAS detector at the LHC”

Author: ATLAS Collaboration

Reference: doi:10.1038/nphys4208

According to classical wave theory, two electromagnetic waves that happen to cross each other in space will not interact. In fact, this is a crucial feature of the conventional definition of a wave, in contrast to a corpuscle or particle: when two waves meet, they briefly occupy the same space at the same time, each without "knowing" about the other's existence. Particles, on the other hand, do interact (or scatter) when they get close to each other, and the results of this encounter can be measured.

The mathematical backbone for this idea is the so-called superposition principle. It arises from the fact that the equations describing wave propagation are all linear in the fields and sources, meaning that those quantities never appear squared or cubed. When two such waves happen to be nearby, the linearity of the equations implies that we can treat the overall wave as just a linear superposition of two separate waves, traveling in different directions. The equations do not distinguish between the two scenarios.

This story gets more interesting after the insight of Albert Einstein, who predicted the quantization of light, and the subsequent formal development of Quantum Electrodynamics in the 1940s by Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman. In the quantum theory, light is no longer "just" a wave, but rather a dual entity that can be treated as both a wave and as a particle called the photon.

The new interpretation of light as a particle opens up the possibility that two electromagnetic waves may indeed interact when crossing each other, just as particles would. The equivalent mathematical statement is that the quantum theory of light yields wave equations that are no longer entirely linear, but instead contain a small non-linear part. The non-linear contributions have to be tiny, otherwise we would already have detected their effects, but they are nonetheless predicted to occur by quantum theory. In this context, the effect is called light-by-light scattering.

Detection

In 2017, the ATLAS experiment at the LHC observed the first direct evidence of light-by-light scattering, using collisions of relativistic heavy ions. Since this is such a rare phenomenon, it took a long time to become directly experimentally sensitive to it.

The experiment is based on so-called Ultra-Peripheral Collisions (UPC) of lead ions (Z=82) at a center-of-mass energy of 5.02 TeV (or about 5,000 times the mass of the proton). In UPC events, the two oncoming beams are far enough apart that the beam particles are unlikely to undergo hard scattering, and instead just "graze" each other. This means that the strong force is not usually involved in such interactions, since its range is tiny, essentially confined within a nucleus. Instead, the electromagnetic interaction dominates in UPC events.

The figure below shows how UPCs proceed. The grazing of two lead ions leads to large electromagnetic fields in the space between the ions, and this interaction can be interpreted as an exchange of photons between them, since photons are the mediator of electromagnetism. Then, by also looking for two final-state photons in the ATLAS detector (the ‘X’ on the right in the figure), the light-by-light process, \gamma \gamma \rightarrow \gamma \gamma, can be probed (\gamma stands for photons).

Left: Feynman diagrams of light-by-light scattering. Two incoming photons interact and two outgoing photons are produced. Right: how to measure light-by-light scattering using Ultra-Peripheral Collisions (UPC) with lead ions in the LHC. Source: https://doi.org/10.1038/nphys4208

In order to isolate this particular process from all the other physics happening at the LHC, a series of increasingly tight selections (or cuts) is applied to the acquired data. The final cuts are optimized to obtain the maximum possible sensitivity to the light-by-light scattering process. This sensitivity depends on how likely it is to select a candidate signal event, and similarly on how unlikely it is to select a background event. The main backgrounds that could mimic the signature of light-by-light scattering (two photons in the ATLAS calorimeter in a UPC lead-ion data run) include \gamma \gamma \rightarrow e^+ e^-, where the final-state electrons and positrons are misidentified by the detector as photons; central exclusive production (CEP) of photons in g g \rightarrow \gamma \gamma (where g is a gluon); and hadronic fakes from \pi^0 production in low-p_T dijet events, where the \pi^0's (neutral pions) decay to pairs of photons.

The applied selections begin with a dedicated trigger which selects events with moderate activity in the calorimeter and very little activity elsewhere. This is what is expected in a lead-ion UPC collision, since the ions just escape down the LHC beam pipe and are not detected, leaving the two photons as the only visible products. Then a set of selections is applied to ensure that the recorded activity in the calorimeter is compatible with two photons. These selections rely mostly on the shape of the electromagnetic showers deposited in the calorimeter cells, which varies for different types of incident particles.

Finally, a series of extra selections is applied to minimize the number of possible background events, such as vetoing any event containing charged-particle tracks in the ATLAS tracker, which effectively removes \gamma \gamma \rightarrow e^+ e^- events with electrons and positrons mis-tagged as photons. Note that this also removes some of the real light-by-light signal events (about 10%), where final-state photons convert into electron-positron pairs after interacting with tracker material, but in this case the trade-off is certainly worth it. Another such selection is the requirement that the transverse momentum of the diphoton system be less than 2 GeV. This removes contributions from other fake-photon backgrounds (such as cosmic-ray muons), because it ensures that the net transverse momentum of the system is small, and thus likely to originate in the ion-ion interaction.
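
Here is a schematic version of this selection chain, applied to toy event records. The structure and thresholds mirror the text (exactly two photons, no charged-particle tracks, diphoton transverse momentum below 2 GeV); this is an illustration, not the actual ATLAS analysis code.

```python
# Toy event selection mirroring the cuts described in the text.
from dataclasses import dataclass
from typing import List

@dataclass
class Photon:
    px: float  # transverse momentum components, GeV
    py: float

@dataclass
class Event:
    photons: List[Photon]
    n_tracks: int  # reconstructed charged-particle tracks

def passes_selection(event: Event) -> bool:
    if len(event.photons) != 2:   # require exactly two photon candidates
        return False
    if event.n_tracks > 0:        # track veto: rejects gamma gamma -> e+ e-
        return False
    px = sum(p.px for p in event.photons)
    py = sum(p.py for p in event.photons)
    diphoton_pt = (px**2 + py**2) ** 0.5
    return diphoton_pt < 2.0      # system nearly at rest in the transverse plane

# Back-to-back photons with no tracks pass; the same event with tracks fails.
clean = Event([Photon(5.0, 0.0), Photon(-5.0, -0.3)], n_tracks=0)
dirty = Event([Photon(5.0, 0.0), Photon(-5.0, -0.3)], n_tracks=2)
print(passes_selection(clean), passes_selection(dirty))  # True False
```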

The same exact set of selections is applied to both data and Monte Carlo (MC) simulations of the experiment. The MC simulations yield an estimate of how many background and signal events should be expected in data. The table below shows the results.

Selection cuts and number of events left after each cut, applied to both data and MC samples, including backgrounds and signal. Source: https://doi.org/10.1038/nphys4208

The penultimate row contains the sum of all cuts and a comparison between the total number of expected background events (2.6), expected light-by-light scattering events (7.3), and observed data events (13). The sum of 2.6 + 7.3 = 9.9 events certainly seems compatible with the observed data, given the quoted uncertainties in the last row. In fact, it is possible to estimate the significance of this result by asking how likely it would be for the background-only hypothesis (that is, pretending light-by-light scattering doesn't exist and only including the backgrounds) to yield 13 observed events. This probability is tiny, 5\times10^{-6}, which corresponds to a significance of 4.4 sigma!
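
A rough cross-check of that number: the probability for a background of 2.6 expected events to fluctuate up to 13 or more can be estimated with a plain Poisson tail. This simplification ignores the background uncertainties that the paper's statistical treatment folds in, so it gives a slightly different answer than the quoted 5\times10^{-6}.

```python
# Poisson-tail estimate of the background-only p-value and significance.
from scipy.stats import norm, poisson

n_expected_bkg = 2.6
n_observed = 13

p_value = poisson.sf(n_observed - 1, n_expected_bkg)  # P(N >= 13 | mu = 2.6)
significance = norm.isf(p_value)                      # one-sided Gaussian sigmas

print(f"p-value ~ {p_value:.1e}, significance ~ {significance:.1f} sigma")
```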

In addition to the number of events, in the figure below the paper also plots the diphoton invariant mass distribution for the 13 observed data events (black points), and for the MC simulations (signal in red and backgrounds in blue and gray). This comparison provides further evidence that we do indeed see light-by-light scattering in the ATLAS data.

Left: distribution of diphoton acoplanarity. Right: distribution of diphoton invariant mass with final selection cuts, both on data and MC backgrounds and signal. Source: https://doi.org/10.1038/nphys4208

Finally, given the observed number of events in data and the expected number of MC background events, it is possible to measure the cross-section of light-by-light scattering (as a reminder, the cross-section of a process measures how likely it is to occur in collisions). The ATLAS collaboration calculates the cross-section of light-by-light scattering with the formula:

\sigma_{\text{fid}} = \frac{N_{\text{data}} - N_{\text{bkg}}}{C \times \int L dt}

where N_{\text{data}} is the number of observed events in data, N_{\text{bkg}} is the number of expected background events from MC, \int L dt is the integrated luminosity (the total amount of data collected), and C is a correction factor which folds all of the detector inefficiencies into a single number. You can think of the entire denominator as the "effective" amount of data that was analyzed, and the numerator as the "effective" number of signal events that was seen. The ratio of the two quantities yields the cross-section of light-by-light scattering. The ATLAS collaboration found this value to be 70 \pm 24 \; (\text{statistical}) \pm 17 \;(\text{systematic}) nb, which agrees with the theorized values of 45 \pm 9 nb and 49 \pm 10 nb within uncertainties.
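
Plugging numbers in: N_data = 13 and N_bkg = 2.6 come from the table above, while the correction factor and integrated luminosity below are assumed, illustrative values of roughly the right size, chosen so the output lands near the published result; they are not quoted from the paper.

```python
# Evaluate the fiducial cross-section formula with the event counts from the
# text. C and the luminosity are illustrative assumptions, not paper values.
n_data = 13
n_bkg = 2.6
C = 0.31             # assumed overall detector efficiency factor
lumi_nb_inv = 0.48   # assumed integrated luminosity in nb^-1 (i.e., 480 ub^-1)

sigma_fid = (n_data - n_bkg) / (C * lumi_nb_inv)
print(f"sigma_fid ~ {sigma_fid:.0f} nb")  # ~70 nb
```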

To conclude, the measurement of light-by-light scattering by ATLAS is an exciting result which offers us a direct glimpse into the stark differences between classical and quantum physics in an accessible (and dare I say) amusing way!

Grad students can apply now for ComSciCon’18!

Applications are now open for the Communicating Science 2018 workshop, to be held in Boston, MA on June 14-16, 2018!

Graduate students at U.S. and Canadian institutions in all fields of science, technology, engineering, health, mathematics, and related fields are encouraged to apply. The application deadline is March 1st.

Graduate student attendees of ComSciCon’17

As with past ComSciCon national workshops, acceptance to the workshop is competitive; attendance is free, and travel support and lodging will be provided to accepted applicants.

Attendees will be selected on the basis of their achievement in and capacity for leadership in science communication. Graduate students who have engaged in entrepreneurship and created opportunities for other students to practice science communication are especially encouraged to apply.

Participants will network with other leaders in science communication and build the communication skills that scientists and other technical professionals need to express complex ideas to the general public, experts in other fields, and their peers. In addition to panel discussions on topics like Science Journalism, Creative & Digital Storytelling, and Diversity and Inclusion in Science, ample time is allotted for networking with science communication experts and developing science outreach collaborations with fellow graduate students.

You can follow this link to submit an application or learn more about the workshop programs and participants. You can also follow ComSciCon on Twitter (@comscicon) and use #comscicon18 !

Our group photo at ComSciCon’17

Longtime Astrobites readers may remember that ComSciCon was founded in 2012 by Astrobites and Chembites authors. Since then, we’ve led five national workshops on leadership in science communication and established a franchising model through which graduate student alums of our program have led sixteen regional and specialized workshops on training in science communication.

We are so excited about the impact ComSciCon has had on the science communication community. You can read more about our programs, the outcomes we have documented, find vignettes about our amazing graduate student attendees, and more in ComSciCon’s 2017 annual report.

The organizers of ComSciCon’17. Can you spot the astronomers?

None of ComSciCon’s impacts would be possible without the generous support of our sponsors. These contributions make it possible for us to offer this programming free of charge to graduate students, and to support their travel to our events. In particular, we wish to thank the American Astronomical Society for being a supporter of both Particlebites and ComSciCon.

If you believe in the mission of ComSciCon, you can support our program too! Visit our webpage to learn more and donate today!

Attendees react to a challenging section of a student’s Pop Talk at ComSciCon’17

It’s A Wrap! Summary of ATLAS Updates from 2016

Article: Latest ATLAS results from Run 2
Authors: Claudia Gemme on behalf of the ATLAS Collaboration
Reference: arXiv:1612.01987 [hep-ex]

2016 certainly has been an…interesting year. There's been a lot of division, so as the year winds down, this seems like a perfect time to reflect on something we can all get excited about: particle physics! This has been an incredible year for the ATLAS experiment and the LHC in general. Run 2 began in 2015, but really hit its stride this year. The LHC reached a peak luminosity of 1.37 × 10³⁴ cm⁻²s⁻¹, which exceeds the design luminosity by almost 40%! Additionally, the pile-up (number of interactions per bunch crossing) nearly doubled with respect to 2015. From these delivered events, ATLAS was able to record 30 fb⁻¹ of data at 13 TeV center-of-mass energy, with a DAQ (data acquisition) efficiency above 90%.

Before discussing the many analyses being done with this amazing new data set, I'd like to highlight some of the upgrades performed during the long shutdown (2013-2014) that allowed ATLAS to perform so well during Run 2. One of the most important improvements was the addition of the Insertable B-Layer (IBL). The IBL is now the innermost part of the inner detector, located only 3.3 cm from the interaction point. The IBL was designed to combat radiation damage to the inner detector; due to its insertable nature, it can eventually be replaced, and it shields the other three layers from the bulk of the radiation damage. Of course, it also provides more tracking points, which further improves track reconstruction. There were also extensive improvements to the magnet and cryogenic systems to provide more powerful cooling and to repair damage from Run 1. To deal with the increased data load, the ATLAS DAQ and trigger systems were also upgraded (reminder: triggers are the data-collection elements that decide which events get stored for analysis and which get thrown out). The Level-1 trigger, ATLAS's only hardware-level trigger, was updated to output 100 kHz of data (previously 75 kHz), and the two software-level triggers were merged into a single High Level Trigger. This allowed ATLAS DAQ to reach an average physics output rate of 1 kHz, allowing more of the Run 2 events to be processed and stored.

Now, let's talk about ATLAS's physics program in 2016. A precise understanding of the Standard Model is incredibly important for all particle physics programs: it allows physicists to test theoretical mass and coupling predictions, and to make accurate predictions for background processes in BSM (beyond-the-Standard-Model) searches. ATLAS improved the precision of many SM measurements in 2016, and a summary of some important cross-section measurements is shown in Figure 1. In particular, the measurements of the Z+jets and diboson cross sections were compatible with new next-to-next-to-leading-order calculations from theorists, confirming theoretical predictions.

Overview of cross-section measurements for a variety of SM processes compared to theoretical expectations

Another major goal of the Run 2 ATLAS physics program was to confirm the discovery of the Higgs boson and study it further. During Run 1, both ATLAS and CMS found evidence of a Higgs boson with a combined measured mass of 125.09 GeV using the H→γγ and H→ZZ*→4l channels. Furthermore, nearly all measured couplings were consistent with SM predictions within 2σ. In Run 2, ATLAS 'rediscovered' the Higgs at a compatible mass with a local significance of 10σ (compared to 8.6σ expected) using the same channels. The Run 2 cross section is slightly higher than the SM prediction, but still compatible within uncertainty. A summary of the cross-section measurements at various energies can be seen in Figure 2. Additionally, ATLAS physicists sought to show conclusive evidence of the H→bb decay channel in Run 2. This decay mode of the Higgs has the largest predicted branching fraction (58%) according to the SM, but has been difficult to measure due to large multi-jet backgrounds. The increased luminosity in Run 2 made studying the associated-production channel X+H→bb, where X is a W or Z boson, a promising alternative, despite its cross section being much lower. This process is easier to isolate because leptonic decays of the W or Z create a clean signature. However, the measured significance of this decay was only 0.42σ, compared to an SM expectation of 1.94σ. The analysis procedure was well validated by measuring the SM (W/Z)Z yield, so this large discrepancy is certainly an area for more study in 2017.

Total pp→H cross-sections measured at different center-of-mass energies, compared to SM predictions

The final component of the ATLAS physics program is, of course, the many searches for BSM physics. The discovery potential for many of these processes is increased in Run 2 due to the enhanced cross sections at the larger collision energy. There are three important channels for general BSM searches: diboson, diphoton, and dilepton. Many extensions of the SM predict new heavy particles, including heavy neutral Higgs bosons, Heavy Vector Triplet W′ bosons, and some gravitons, that could decay into vector-boson pairs. General searches in this channel look for hadronic decays of boosted W and Z bosons contained within a single large-radius jet, using jet substructure properties. Unfortunately, no significant excess is observed. The diphoton channel became the hot topic of 2016, due to a deviation of at least 3σ from the SM-only hypothesis seen by both ATLAS and CMS early this year in the 2015 data set. However, this excess was not confirmed with the full 2016 data set, which has four times the statistics. The largest excess seen by ATLAS in the diphoton channel is now 2.3σ, for a mass near 710 GeV. The dilepton final states have excellent sensitivity to a variety of new phenomena and have high signal selection efficiencies. Again, however, the 2016 measurements are consistent with the SM prediction. Lower limits on resonance masses have been set, extending the Run 1 exclusions by up to 1 TeV.

ATLAS physicists also study the infamous SUSY (supersymmetry) model by looking for signatures of gluinos and squarks, the strongly interacting superpartners expected to be produced most copiously at the LHC. These particles could be observed in events with high jet multiplicity and lots of missing energy. Once again, no significant excess was found. These results were interpreted within two SUSY models, and gluinos with masses up to 1600 GeV were excluded.

So, to conclude: no evidence of BSM physics has been found at ATLAS in Run 2, but the limits on some SUSY models and various other BSM phenomena have been improved, as can be seen in Figures 3 and 4. Precision measurements of the SM were improved, the detector equipment was updated, and the Higgs was rediscovered, but ATLAS physicists and particle enthusiasts are still hoping for something more exciting. Lest we leave 2016 on a low note, it's important to remember that only 50% of ATLAS searches have been updated to the Run 2 energy, so there are still many more channels to be explored. Plus, there are already some modest excesses, as well as the discrepancy in the H→bb measurement. There is certainly more to be discovered as Run 2 continues, so here's to 2017!

Mass reach of selected ATLAS SUSY searches
Mass reaches of selected non-SUSY BSM searches at ATLAS


 

Horton Hears a Sterile Neutrino?

Article: Limits on Active to Sterile Neutrino Oscillations from Disappearance Searches in the MINOS, Daya Bay, and Bugey-3 Experiments
Authors:  Daya Bay and MINOS collaborations
Reference: arXiv:1607.01177v4

So far, the hunt for sterile neutrinos has come up empty. Could a joint analysis between MINOS, Daya Bay and Bugey-3 data hint at their existence?

Neutrinos, like the beloved Whos in Dr. Seuss’ “Horton Hears a Who!,” are light and elusive, yet have a large impact on the universe we live in. While neutrinos only interact with matter through the weak nuclear force and gravity, they played a critical role in the formation of the early universe. Neutrino physics is now an exciting line of research pursued by the Hortons of particle physics, cosmology, and astrophysics alike. While most of what we currently know about neutrinos is well described by a three-flavor neutrino model, a few inconsistent experimental results such as those from the Liquid Scintillator Neutrino Detector (LSND) and the Mini Booster Neutrino Experiment (MiniBooNE) hint at the presence of a new kind of neutrino that only interacts with matter through gravity. If this “sterile” kind of neutrino does in fact exist, it might also have played an important role in the evolution of our universe.

Horton hears a sterile neutrino? Source: imdb.com

Neutrinos come in three known flavors: electron, muon, and tau. The discovery of neutrino oscillation by the Sudbury Neutrino Observatory and the Super-Kamiokande Observatory, which won the 2015 Nobel Prize, proved that one flavor of neutrino can transform into another. This led to the realization that each neutrino mass state is a superposition of the three different neutrino flavor states. From neutrino oscillation measurements, most of the parameters that define the mixing between neutrino states are well known for the three standard neutrinos.

The relationship between the three known neutrino flavor states and mass states is usually expressed as a 3×3 matrix known as the PMNS matrix, named for Bruno Pontecorvo, Ziro Maki, Masami Nakagawa and Shoichi Sakata. The PMNS matrix includes three mixing angles, whose values determine "how much" of each neutrino flavor state is in each mass state. The distance required for one neutrino flavor to become another, the neutrino oscillation wavelength, is determined by the difference between the squared masses of the two mass states involved. The values of the mass splittings m_2^2-m_1^2 and m_3^2-m_2^2 are known to good precision.
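
For concreteness, here is a minimal construction of a PMNS-like mixing matrix from three mixing angles (ignoring the CP-violating phase for simplicity). The angles are rough global-fit values, used here only for illustration.

```python
# Build a 3x3 mixing matrix from three rotation angles and read off
# "how much" of each mass state sits in each flavor state.
import numpy as np

def rotation(i, j, theta):
    """Rotation by theta in the (i, j) plane of three-state space."""
    R = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

theta12, theta23, theta13 = np.radians([33.4, 49.0, 8.6])  # approximate values
U = rotation(1, 2, theta23) @ rotation(0, 2, theta13) @ rotation(0, 1, theta12)

# |U[flavor, mass]|^2 gives the fraction of each mass state in each flavor.
print(np.round(np.abs(U) ** 2, 2))
print(np.allclose(U @ U.T, np.eye(3)))  # unitarity check: True
```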

A fourth flavor? Adding a sterile neutrino to the mix

A “sterile” neutrino is referred to as such because it would not interact weakly: it would only interact through the gravitational force. Neutrino oscillations involving the hypothetical sterile neutrino can be understood using a “four-flavor model,” which introduces a fourth neutrino mass state, m_4, heavier than the three known “active” mass states. This fourth neutrino state would be mostly sterile, with only a small contribution from a mixture of the three known neutrino flavors. If the sterile neutrino exists, it should be possible to experimentally observe neutrino oscillations with a wavelength set by the difference between m_4^2 and the square of the mass of another known neutrino mass state. Current observations suggest a squared mass difference in the range of 0.1-10 eV^2.

Oscillations between active and sterile states would result in the disappearance of muon (anti)neutrinos and electron (anti)neutrinos. In a disappearance experiment, you know how many neutrinos of a specific type you produce, you count the number of that type a distance away, and you find that some of the neutrinos have "disappeared": in other words, they have oscillated into a different type of neutrino that you are not detecting.
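
In the simplest two-flavor approximation, the survival probability measured by such a disappearance experiment has a standard closed form. The sketch below uses illustrative parameters from the eV²-scale range discussed above, with Bugey-like reactor baselines.

```python
# Two-flavor survival probability:
#   P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km and E in GeV. Parameter values are illustrative.
import numpy as np

def survival_probability(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Probability that a neutrino keeps its original flavor after L_km."""
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Reactor-style example: 3 MeV antineutrinos at 15, 40 and 95 m baselines.
L = np.array([0.015, 0.040, 0.095])  # km
print(survival_probability(L, E_GeV=0.003, dm2_eV2=1.0, sin2_2theta=0.1))
```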

A joint analysis by the MINOS and Daya Bay collaborations

The MINOS and Daya Bay collaborations have conducted a joint analysis to combine independent measurements of muon (anti)neutrino disappearance by MINOS and electron antineutrino disappearance by Daya Bay and Bugey-3. Here’s a breakdown of the involved experiments:

  • MINOS, the Main Injector Neutrino Oscillation Search: A long-baseline neutrino experiment with detectors at Fermilab and northern Minnesota that use an accelerator at Fermilab as the neutrino source
  • The Daya Bay Reactor Neutrino Experiment: Uses antineutrinos produced by the reactors of China’s Daya Bay Nuclear Power Plant and the Ling Ao Nuclear Power Plant
  • The Bugey-3 experiment: Performed in the early 1990s, used antineutrinos from the Bugey Nuclear Power Plant in France for its neutrino oscillation observations
MINOS and Daya Bay/Bugey-3 combined 90% confidence level limits (in red) compared to the LSND and MiniBooNE 90% confidence level allowed regions (in green/purple). The plot shows the mass splitting between mass states 1 and 4 (corresponding to the sterile neutrino) against a function of the \mu-e mixing angle, which is equivalent to a function involving the 1-4 and 2-4 mixing angles. Regions of parameter space to the right of the red contour are excluded, ruling out the majority of the LSND/MiniBooNE allowed regions. Source: arXiv:1607.01177v4.

Assuming a four-flavor model, the MINOS and Daya Bay collaborations put new constraints on the value of the mixing angle \theta_{\mu e}, the parameter controlling electron (anti)neutrino appearance in experiments with short neutrino travel distances. As for the hypothetical sterile neutrino? The analysis excluded the parameter space allowed by the LSND and MiniBooNE appearance-based indications for the existence of light sterile neutrinos for \Delta m_{41}^2 < 0.8 eV^2 at a 95% confidence level. In other words, the MINOS and Daya Bay analysis essentially rules out the sterile-neutrino interpretation of the LSND and MiniBooNE anomalies that motivated the search in the first place. These results illustrate just how at odds disappearance searches and appearance searches are when it comes to providing insight into the existence of light sterile neutrinos. If the Whos exist, they will need to be a little louder in order for the world to hear them.

 


Dragonfly 44: A potential Dark Matter Galaxy

Title: A High Stellar Velocity Dispersion and ~100 Globular Clusters for the Ultra Diffuse Galaxy Dragonfly 44

Publication: ApJ, v828, Number 1, arXiv:1606.06291

The title of this paper sounds like a standard astrophysics analysis; but dig a little deeper and you'll find what I think is an incredibly interesting, surprising and unexpected observation.

The Coma Cluster: NASA, ESA, and the Hubble Heritage Team (STScI/AURA)

Last year, the Dragonfly Telephoto Array observed the Coma cluster (a large cluster of galaxies in the constellation Coma Berenices; I've included a Hubble image to the left). The team identified a population of large, spheroidal galaxies with very low surface brightness (i.e., not a lot of stars), among them an Ultra Diffuse Galaxy (UDG) called Dragonfly 44 (shown below). Follow-up observations with the W. M. Keck Observatory and the Gemini North telescope on Maunakea, Hawaii, determined that Dragonfly 44 has so few stars that their gravity alone could not hold it together; some other matter had to be involved, namely DARK MATTER (my favorite kind of unknown matter).

 

The ultra-diffuse galaxy Dragonfly 44. The galaxy consists almost entirely of dark matter. It is surrounded by faint, compact sources. Image credit: Pieter van Dokkum / Roberto Abraham / Gemini Observatory / SDSS / AURA.

The team used the DEIMOS instrument installed on Keck II to measure the velocities of stars within the galaxy for 33.5 hours over a period of six nights, so they could determine the galaxy's mass. The observed spread in stellar velocities suggests that Dragonfly 44 has a mass of about one trillion solar masses, about the same as the Milky Way. However, the galaxy emits only 1% of the light emitted by the Milky Way; in other words, the Milky Way has more than a hundred times more stars than Dragonfly 44. I've also included the plot of mass-to-light ratio vs. dynamical mass. This illustrates how unique Dragonfly 44 is compared to other dark-matter-dominated galaxies like dwarf spheroidal galaxies.
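
For a sense of scale, here is the standard back-of-the-envelope conversion from a stellar velocity dispersion to a dynamical mass, using a commonly used estimator (Wolf et al. 2010, cited in the figure below). The inputs are round numbers of the size reported for Dragonfly 44, quoted here only for illustration.

```python
# Dynamical mass within the half-light radius via the estimator
# M_half ~ 930 * sigma^2 * R_e (solar masses, with sigma in km/s, R_e in pc).
def half_light_mass(sigma_km_s: float, r_e_pc: float) -> float:
    """Dynamical mass within the half-light radius, in solar masses."""
    return 930.0 * sigma_km_s**2 * r_e_pc

sigma = 47.0   # km/s, roughly the measured stellar velocity dispersion
r_e = 4600.0   # pc, roughly the half-light radius
print(f"M_half ~ {half_light_mass(sigma, r_e):.1e} solar masses")  # ~9e9
```

This gives the mass enclosed within the half-light radius (around 10^10 solar masses); the roughly one-trillion-solar-mass figure quoted above comes from extrapolating such measurements out to the galaxy's full dark matter halo.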

 

 

Relation between dynamical mass-to-light ratio and dynamical mass. Open symbols are dispersion-dominated objects from Zaritsky, Gonzalez, & Zabludoff (2006) and Wolf et al. (2010). The UDGs VCC 1287 (Beasley et al. 2016) and Dragonfly 44 fall outside of the band defined by the other galaxies, having a very high M/L ratio for their mass.

What is particularly exciting is that we don’t understand how galaxies like this form.

Their research indicates that these UDGs could be failed galaxies, with the sizes, dark matter contents, and globular cluster systems of much more luminous objects. But we'll need to discover more of them to fully understand them.
Further reading (works by the same authors):
Forty-Seven Milky Way-Sized, Extremely Diffuse Galaxies in the Coma Cluster, arXiv:1410.8141
Spectroscopic Confirmation of the Existence of Large, Diffuse Galaxies in the Coma Cluster, arXiv:1504.03320