First Evidence the Higgs Talks to Other Generations

Article Titles: “Measurement of Higgs boson decay to a pair of muons in proton-proton collisions at sqrt(s) = 13 TeV” and “A search for the dimuon decay of the Standard Model Higgs boson with the ATLAS detector”

Authors: The CMS Collaboration and The ATLAS Collaboration, respectively

References: CDS: CMS-PAS-HIG-19-006 and arxiv:2007.07830, respectively

Like parents who wonder if millennials have ever read a book by someone outside their generation, physicists have been wondering if the Higgs communicates with matter particles outside the 3rd generation. Since its discovery in 2012, physicists at the LHC experiments have been studying the Higgs in a variety of ways. However, despite the fact that matter seems to be structured into 3 distinct ‘generations’, we have so far only seen the Higgs talking to the 3rd generation. In the Standard Model, the different generations of matter are 3 identical copies of the same kinds of particles, with each successive generation being heavier than the last. Because the Higgs interacts with particles in proportion to their mass, it has been much easier to measure the Higgs talking to the third and heaviest generation of matter particles. But in order to test whether the Higgs boson really behaves exactly as the Standard Model predicts, or has slight deviations (indicating new physics), it is important to measure its interactions with particles from the other generations too. The 2nd-generation particle the Higgs decays to most often is the charm quark, but the experimental difficulty of identifying charm quarks makes this an extremely difficult channel to probe (though it is being tried).

The best candidate for spotting the Higgs talking to the 2nd generation is to look for the Higgs decaying to two muons, which is exactly what ATLAS and CMS both did in their recent publications. However, this is no easy task. Besides the Higgs being notoriously difficult to produce, it decays to dimuons only about two out of every 10,000 times. Additionally, there is a much larger background of Z bosons decaying to dimuon pairs that further hides the signal.
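To get a feel for those odds, here is a quick back-of-envelope estimate of how many dimuon decays the full Run 2 dataset could contain. The cross section, luminosity, and branching ratio below are round illustrative values of my choosing, not the numbers used in either analysis:

```python
# Back-of-envelope: how many H -> mu mu decays were produced in Run 2?
# All inputs are approximate round numbers for illustration only.

def expected_dimuon_decays(lumi_fb=139.0, xsec_pb=55.0, br=2.2e-4):
    """Number of produced Higgs bosons times the dimuon branching fraction.

    lumi_fb : integrated luminosity in fb^-1 (full Run 2 dataset ~139 fb^-1)
    xsec_pb : total Higgs production cross section at 13 TeV (~55 pb)
    br      : H -> mu mu branching ratio (~2 in 10,000)
    """
    n_higgs = lumi_fb * 1000.0 * xsec_pb  # 1 fb^-1 = 1000 pb^-1
    return n_higgs * br

n = expected_dimuon_decays()  # roughly a couple thousand decays, before any selection
```

A couple thousand decays sounds like a lot, until you remember they are buried under a far larger pile of Z → μμ events.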

The branching ratio (fraction of decays to a given final state) of the Higgs boson as a function of its mass (the measured Higgs mass is around 125 GeV). The decay to a pair of muons is shown in gold, well below the other decays that have been observed.

CMS and ATLAS try to make the most of their data by splitting events into multiple categories, applying cuts that target the different ways Higgs bosons are produced: the fusion of two gluons, of two vector bosons, or of two top quarks, or radiation off a vector boson. Some of these categories are then further sub-divided to squeeze out as much signal as possible. Gluon fusion produces the most Higgs bosons, but it is also the hardest to distinguish from the Z boson background. The vector boson fusion process produces the second-most Higgs bosons and has a more distinctive signature, so it contributes the most to the overall measurement. In each of these sub-categories a separate machine learning classifier is trained to distinguish Higgs boson decays from background events. All together CMS uses 14 different categories of events and ATLAS uses 20. Backgrounds are estimated using both simulation and data-driven techniques, with slightly different methods in each category. To extract the overall amount of signal present, both CMS and ATLAS fit all of their respective categories at once, with a single parameter controlling the strength of a Higgs boson signal.

At the end of the day, CMS and ATLAS are able to report evidence of Higgs decays to dimuons with significances of 3 sigma and 2 sigma, respectively (chalk up 1 point for CMS in their eternal rivalry!). Both find an amount of signal in agreement with the Standard Model prediction.

Combination of all the events used in the CMS (left) and ATLAS (right) searches for a Higgs decaying to dimuons. Events are weighted by the amount of expected signal in each bin. Despite this trick, the small evidence for a signal can only be seen in the bottom panels, which show the number of data events minus the predicted amount of background around 125 GeV.

CMS’s first evidence of this decay allows them to measure the strength of the Higgs coupling to muons as compared to the Standard Model prediction. One can see that this latest muon measurement sits right on the Standard Model prediction, and probes the Higgs’ coupling to a particle with much smaller mass than any of the other measurements.

CMS’s latest summary of Higgs couplings as a function of particle mass. The newest addition, the coupling to muons, is shown in green. One can see that so far there is impressive agreement with the Standard Model across a mass range spanning 3 orders of magnitude!

As CMS and ATLAS collect more data and refine their techniques, they will certainly try to push their precision up to the 5-sigma level needed to claim discovery of the Higgs’s interaction with the 2nd generation. They will be on the lookout for any deviations from the expected behavior of the SM Higgs, which could indicate new physics!

Further Reading:

Older ATLAS Press Release “ATLAS searches for rare Higgs boson decays into muon pairs”

Cern Courier Article “The Higgs adventure: five years in”

Particle Bites Post “Studying the Higgs via Top Quark Couplings”

Blog Post from Matt Strassler on “How the Higgs Field Works”

A simple matter

Article title: Evidence of A Simple Dark Sector from XENON1T Anomaly

Authors: Cheng-Wei Chiang, Bo-Qiang Lu

Reference: arXiv:2007.06401

As with many anomalies in the high-energy universe, particle physicists are rushed off their feet to come up with new, and often somewhat complicated, models to explain them. With the recent detection of an excess of electron recoil events in the 1-7 keV region from the XENON1T experiment (see Oz’s post in case you missed it), one can ask whether even the simplest of models can still fit the bill. Although the excess stands at only 3.5 sigma evidence – not quite yet in the ‘discovery’ realm – there is still great opportunity to test the predictability and robustness of our most rudimentary dark matter ideas.

The paper in question considers what could be one of the simplest dark sectors, with the introduction of only two more fundamental particles – a dark photon and a dark fermion. The dark fermion plays the role of the dark matter (or part of it), which communicates with our familiar Standard Model particles, namely the electron, through the dark photon. In the language of particle physics, the dark sector particles actually carry a kind of ‘dark charge’, much like the electron carries what we know as the electric charge. The (almost massless) dark photon is special in the sense that it can interact with both the visible and dark sectors – and, as opposed to visible photons, it has a very long mean free path, able to reach detectors on Earth. An important parameter describing how much the ordinary and dark photons ‘mix’ together is usually denoted \varepsilon. But how does this fit into the context of the XENON1T excess?

Fig 1: Annihilation of dark fermions into dark photon pairs

The idea is that the dark fermions annihilate into pairs of dark photons (seen in Fig. 1) which excite electrons when they hit the detector material, much like a dark version of the photoelectric effect – only much more difficult to observe. These processes remain exclusive, without annihilation straight into Standard Model particles, as long as the dark matter mass remains less than that of the lightest charged particle, the electron. With the electron mass at a few hundred keV, we should be fine in the range of the XENON1T excess.

What we are ultimately interested in is the rate at which the dark matter interacts with the detector, which in high-energy physics is highly calculable:

\frac{d R}{d \Delta E}= 1.737 \times 10^{40}\left(f_{\chi} \alpha^{\prime}\right)^{2} \epsilon(E)\left(\frac{\mathrm{keV}}{m_{\chi}}\right)^{4}\left(\frac{\sigma_{\gamma}\left(m_{\chi}\right)}{\mathrm{barns}}\right) \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{\left(E-m_{\chi}\right)^{2}}{2 \sigma^{2}}}

where f_{\chi} is the fraction of dark matter represented by \chi, \alpha'=\varepsilon e^2_{X} / (4\pi), \epsilon(E) is the efficiency factor for the XENON 1T experiment and \sigma_{\gamma} is the photoelectric cross section.
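Just to show the structure of this formula numerically, here is a minimal sketch in Python. The efficiency, coupling, and photoelectric cross section values are placeholders of my own choosing (the real analysis takes \epsilon(E) and \sigma_{\gamma} from experimental inputs), so only the shape, a Gaussian peaked at the dark fermion mass, should be taken seriously:

```python
import math

# Sketch of the recoil-rate formula from the text: a Gaussian of detector
# resolution sigma_res centered at E = m_chi, scaled by coupling and
# cross-section factors. Default parameter values are placeholders.

def dR_dE(E_keV, m_chi_keV, f_chi=1.0, alpha_prime=1e-28,
          efficiency=0.9, sigma_gamma_barn=1.0e6, sigma_res_keV=0.4):
    """Differential event rate (arbitrary placeholder normalization)."""
    prefactor = (1.737e40 * (f_chi * alpha_prime) ** 2 * efficiency
                 * (1.0 / m_chi_keV) ** 4 * sigma_gamma_barn)
    gauss = (math.exp(-((E_keV - m_chi_keV) ** 2) / (2 * sigma_res_keV ** 2))
             / (math.sqrt(2 * math.pi) * sigma_res_keV))
    return prefactor * gauss

# The spectrum peaks right at the dark fermion mass, e.g. the best-fit ~3.17 keV:
peak = dR_dE(3.17, 3.17)
off_peak = dR_dE(5.0, 3.17)
```

The key physics is in the Gaussian: absorption of a dark photon deposits essentially all of the dark fermion's mass-energy, so the excess should sit in a narrow bump at E ≈ m_χ.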

Figure 2 shows the favoured regions for the dark fermion explanation for the XENON1T excess. The dashed green lines represent only a 1% fraction of the universe’s dark matter in dark fermions, whilst the solid lines explain the entire dark matter content. Upper limits from the XENON1T data are shown in blue, with a bunch of other astrophysical constraints (namely from red giants, red dwarfs and horizontal branch stars) far above the preferred regions.

Fig 2: The green bands represent the 1 and 2 sigma parameter regions in the \alpha' - m_{\chi} plane favoured by the dark fermion model in explaining the XENON excess. The solid lines cover the entire DM component, whilst the dashed lines are only a 1% fraction.

This plot actually raises another important question: how sensitive are these results to the fraction of dark matter represented by this model? For that we need to specify how the dark matter is actually created in the first place – the two most well-known mechanisms being ‘freeze-out’ and ‘freeze-in’ (follow the links to previous posts!).

Fig 3: Freeze-out and freeze-in mechanisms for producing the dark matter relic density. The measured density (from PLANCK) is \Omega h^2 = 0.12, shown on the red solid curve. The best fit values are also shown by the dashed lines, with their 1 sigma band. The mass of the dark fermion is fixed to its best-fit value of 3.17 keV, from Figure 2.

The first important point to note from the above figures is that the freeze-out mechanism doesn’t even depend on the mixing between the visible and dark sectors, i.e. the vertical axes. However, recall that the relic density in freeze-out is determined by the rate of annihilation into SM fermions – which is of course forbidden here, since the dark fermion is lighter than the electron. Freeze-in works a little differently, since there are two processes that can contribute to populating the relic density of DM: SM charged fermion annihilations and dark photon annihilations. It turns out that the charged fermion channel dominates for larger values of e_X, and in that case the result becomes insensitive to the mixing parameter \varepsilon and hence to dark photon annihilations.

Of course it has been emphasized in previous posts that the only way to really get a good test of these models is with more data. But the advantage of simple models like these is that they are readily available in the physicist’s arsenal when anomalies like these pop up (and they do!).

Charmonium-onium: A fully charmed tetraquark

Paper Title: Observation of structure in the J/\psi-pair mass spectrum

Authors: LHCb Collaboration


My (artistic) rendition of a tetraquark. The blue and orange balls represent charm and anticharm quarks with gluons connecting all of them.

The Announcement

The LHCb collaboration reports a 5-sigma resonance at 6.9 GeV, consistent with predictions of a fully-charmed tetraquark state.

The Background

One of the ways quarks interact with each other is through the strong nuclear force. This force is unlike the electroweak or gravitational forces in that the interaction strength increases with the separation between quarks, until it sharply falls off at roughly 10^{-15} m. We say that the strong force is “confined” due to this sharp drop-off. It is also dissimilar to the other forces in that the strong force is non-perturbative. For perturbation theory to work well, the more complex a Feynman diagram becomes, the less it should contribute to the process. In the strong interaction, though, each successive diagram can contribute as much as the previous one. Despite these challenges, physicists have still made sense of the zoo of quarks and bound states that come from particle collisions.

The quark (q) model [1,2] classifies hadrons into mesons (q \bar{q}) and baryons (qqq or \bar{q}\bar{q}\bar{q}). It also allows for the existence of exotic hadrons like the tetraquark (qq\bar{q}\bar{q}) or pentaquark (qqqq\bar{q}). The first evidence for an exotic hadron of this nature came in 2003 from the Belle Collaboration [3]. According to the LHCb collaboration, “all hadrons observed to date, including those of exotic nature, contain at most two heavy charm (c) or bottom (b) quarks, whereas many QCD-motivated phenomenological models also predict the existence of states consisting of four heavy quarks.” In this paper, LHCb reports evidence of a cc\bar{c}\bar{c} state, the first fully charmed tetraquark state.

The Method

Perhaps the simplest way to form a fully charmed tetraquark state, T_{ cc \bar{c}\bar{c}} from now on, is to form two charmonium states ( J/\psi) which then themselves form a bound state. This search focuses on pairs of charmonium that are produced from two separate interactions, as opposed to resonant production through a single interaction. This is advantageous because “the distribution of any di-J/\psi observable can be constructed using the kinematics from single J/\psi production.” In other words, independent J/\psi production reduces the amount of work it takes to construct observables.

Once a J/\psi is formed, its most useful decay is into a pair of muons, with about a 6% branching ratio [2]. To form J/\psi candidates, the di-muon invariant mass must be between 3.0 and 3.2 GeV. To form a di-J/\psi candidate, the T_{cc\bar{c}\bar{c}}, all four muons are required to originate from the same proton-proton collision point. This eliminates the possibility of associating two J/\psi mesons from two different proton collisions.
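All of this candidate-building boils down to computing invariant masses from muon four-momenta. Here is a minimal sketch; the muon momenta in the example are invented purely for illustration:

```python
import math

# Invariant-mass computation for J/psi (and di-J/psi) candidates.
# Four-vectors are tuples (E, px, py, pz) in GeV.

M_MU = 0.10566  # muon mass in GeV

def four_vector(px, py, pz, mass=M_MU):
    """Build an on-shell four-vector from three-momentum and mass."""
    E = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return (E, px, py, pz)

def invariant_mass(vectors):
    """m^2 = (sum E)^2 - |sum p|^2, summed over any number of particles."""
    E = sum(v[0] for v in vectors)
    px = sum(v[1] for v in vectors)
    py = sum(v[2] for v in vectors)
    pz = sum(v[3] for v in vectors)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Two back-to-back 1.55 GeV muons reconstruct to roughly the J/psi mass,
# landing inside the 3.0-3.2 GeV candidate window:
mu_plus = four_vector(1.55, 0.0, 0.0)
mu_minus = four_vector(-1.55, 0.0, 0.0)
m = invariant_mass([mu_plus, mu_minus])
```

The same `invariant_mass` function applied to all four muons gives the di-J/\psi mass spectrum in which the resonance is sought.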

The Findings

When the dust settles, the LHCb finds a 5-\sigma resonance at m_{\text{di}- J/\psi} = 6905 \pm 11 \pm 7 MeV with a width of \Gamma = 80 \pm 19 \pm 33 MeV. This resonance is just above twice the J/\psi mass.


[1] – An SU3 model for strong interaction symmetry and its breaking.

[2] – A schematic model of baryons and mesons.

[3] – Observation of a narrow charmonium-like state in exclusive B^+ \rightarrow K^+ \pi^+ \pi^- J/\psi decays.

[4] –

The XENON1T Excess : The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world’s most sensitive dark matter experiments. The experiment consists of a huge tank of xenon placed deep underground in the Gran Sasso mine in Italy. It is a ‘direct-detection’ experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMPs, Weakly Interacting Massive Particles, which used to be everyone’s favorite candidate for dark matter. However, given recent null results from WIMP-hunting direct-detection experiments and collider experiments at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, designed to look for heavy WIMPs colliding with xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles that are much lighter than traditional WIMPs would not leave much of an impact on large xenon nuclei, but they can leave a signal in the detector if they instead scatter off the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity. This extraordinary sensitivity is due to their exquisite control over backgrounds and extremely low energy threshold for detection. But what has gotten many physicists excited is that the latest data show an excess of events above expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim ‘evidence’ of an anomaly but short of the typical 5 sigma required to claim discovery.
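If you are wondering what a number of sigmas actually means, it translates into a probability through the Gaussian tail. A small sketch of the conversion (using the one-sided convention that is standard in particle physics):

```python
import math

# Convert a significance in "sigma" into the one-sided p-value: the
# probability of the background fluctuating up at least this much.

def p_value(n_sigma):
    """One-sided Gaussian tail probability for an n_sigma excess."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p_evidence = p_value(3.5)   # "evidence"-level excess, roughly a few in 10,000
p_discovery = p_value(5.0)  # "discovery" threshold, roughly 3 in 10 million
```

So "3.5 sigma" means a background fluctuation this large happens a few times in ten thousand tries, impressive, but, as the history of particle physics shows, not rare enough to bet the house on.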

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is: nothing. 3.5 sigma is not enough evidence to claim discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal, and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment’s energy detection threshold. The so-called ‘efficiency turn-on’, the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe this is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note the background line is flat below the excess), and the excess rises above this flat background. So to explain this excess, the efficiency would have to somehow be better at low energies than at high energies, which seems very unlikely. Or there would have to be a very strange, unaccounted-for bias where some higher-energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration performed an electron reconstruction calibration using the radioactive decays of radon-220 over exactly this energy range and was able to model the turn-on and detection efficiency very well.

Results of a calibration done with radioactive decays of radon-220. One can see that the data in the efficiency turn-on (right around 2 keV) are modeled quite well and no excesses are seen.

However, the possibility of a novel Standard Model background is much more plausible. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to helium-3, an electron and a neutrino with a half-life of around 12 years. The energy released in this decay is 18.6 keV, with the electron carrying an average energy of a few keV. The expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small: around 100 parts-per-billion of H2 would lead to enough tritium to explain the signal, which translates to just 3 tritium atoms per kilogram of liquid xenon. The collaboration tries its best to investigate this possibility, but can neither rule out nor confirm such a small amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by 2 orders of magnitude, so it is not impossible that they were able to avoid this small amount of contamination.

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down exactly based on the data. The analysis specifically searched for two signals that would show up in exactly this energy range: axions produced in the sun, and solar neutrinos interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation slightly preferred. However, since this result was released, many have pointed out that these models would actually be in conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would give stars an additional way to release energy, causing them to cool at a faster rate than in the Standard Model. The strength of interaction between axions and electrons needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. There are similar astrophysical constraints on neutrino magnetic moments that also make that explanation unlikely.

This has left the door open for theorists to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing the opportunity! New explanations appear on the arXiv every day, with no sign of stopping. In the roughly 2 weeks between the XENON1T announcement and the writing of this post, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or being boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun with this, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing the XENONnT experiment, a follow-up to XENON1T. XENONnT will feature a larger active volume of xenon and a lower background level, allowing them to potentially confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow them to better determine the shape of the signal, possibly distinguishing between the tritium shape and a potential new-physics explanation. If real, other liquid xenon experiments like LUX and PandaX should also be able to independently confirm the signal in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

Are You Magnetic?

It’s no secret that the face of particle physics lies in the collaboration of scientists all around the world – and for the first time a group of 170 physicists have come to a consensus on one of the most puzzling predictions of the Standard Model: the anomalous magnetic moment of the muon. This quantity concerns the particle’s rotation, or precession, in the presence of a magnetic field. Recall that elementary particles like the electron and muon possess intrinsic angular momentum, called spin, and hence behave like little dipole “bar magnets” – and are consequently affected by an external magnetic field.

The “classical” version of such an effect comes straight from the Dirac equation, a quantum mechanical framework for relativistic spin-1/2 particles like the electron and muon. It is expressed in terms of the g-factor, where g=2 in the Dirac theory. However, more accurate predictions, to compare with experiment, require more extended calculations in the framework of quantum field theory, with “loops” of virtual particles forming the quantum mechanical corrections. In that case we of course find a deviation from the classical value, captured by the anomalous magnetic moment

a = \frac{g-2}{2}

For the electron, the prediction coming from Quantum Electrodynamics (QED) is so accurate, it actually agrees with the experimental result up to 10 significant figures (side note: in fact, this is not the only thing that agrees very well with experiment from QED, see precision tests of QED).

Figure 1: a “one-loop” contribution to the magnetic dipole moment in the theory of Quantum Electrodynamics (QED)

The muon, however, isn’t so simple, and things actually get rather messy. In the Standard Model its anomalous magnetic moment comes in three parts: QED, electroweak and hadronic contributions,

a^{SM}_{\mu} = a^{QED}_{\mu}+a^{EW}_{\mu}+a^{hadron}_{\mu}

Up until now, the accuracy of these calculations has been the subject of a number of collaborations around the world. The largest source (in fact, almost all) of the uncertainty comes from the hadronic part, despite it being a small contribution to the magnetic moment compared to QED. It is so difficult to estimate that it actually requires input from experimental data and lattice QCD methods. This review constitutes the most comprehensive report of both the data-driven and lattice methods for the hadronic contributions to the muon’s magnetic moment.

Their final result, a^{SM}_{\mu} = 116591810(43) \times 10^{-11}, sits 3.7 standard deviations below the current experimental value, measured at Brookhaven National Laboratory. However, the most exciting part of all this is that Fermilab is on the brink of releasing a new measurement, with uncertainties reduced by almost a factor of four compared to the last. And if the two still don’t agree? We could be that much closer to confirming new physics in one of the most interesting of places!
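We can reproduce the quoted tension with one line of arithmetic. The theory number is from the review above; the experimental value and its uncertainty are the commonly quoted Brookhaven E821 result, which I am inserting here as an assumption:

```python
import math

# Reproduce the ~3.7 sigma theory-vs-experiment tension in (g-2)/2.
a_sm, sm_err = 116591810e-11, 43e-11     # Standard Model prediction (from the review)
a_exp, exp_err = 116592089e-11, 63e-11   # Brookhaven E821 measurement (assumed values)

diff = a_exp - a_sm                       # experiment sits above theory
combined_err = math.sqrt(sm_err**2 + exp_err**2)  # uncertainties added in quadrature
tension = diff / combined_err             # number of standard deviations
```

Note that the experimental uncertainty still dominates the combination, which is exactly why a factor-of-four improvement from Fermilab is such a big deal.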

References and Further Reading:

1. The new internationally-collaborated calculation: The anomalous magnetic moment of the muon in the Standard Model


A Charming Story

This post is intended to give an overview of the motivation, purpose, and discovery of the charm quark.

The Problem

The conventional 3-quark (up, down, and strange) model of the weak interaction is inconsistent with weak selection rules. In particular, strangeness-changing (\Delta S = 2) processes, as seen in neutral kaon oscillation (K_0 \leftrightarrow \bar{K_0}) [1], are observed at rates much smaller than the predictions obtained from the conventional 3-quark theory. There are two diagrams that contribute to neutral kaon oscillation [2].

Neutral Kaon Oscillation

In a 3-quark model, the fermion propagators can only be up quark propagators. Both diagrams then give a positive contribution to the process, and it seems as though we are stuck with these \Delta S = 2 oscillations. It would be nice if we could somehow suppress these diagrams.


The Solution

Introduce another up-type quark and one new quantum number called “Charm,” designed to counteract the effects of “Strangeness” carried by the strange quark. With some insight from the future, we will call this new up-type quark the charm quark.

Now, in our 4-quark model (up, down, strange, and charm), we have both up and charm quark propagators, and a cancellation can in principle occur. First proposed by Glashow, Iliopoulos, and Maiani, this mechanism would later become known as the “GIM Mechanism” [3]. The result is a suppression of these \Delta S = 2 processes, which is exactly what we need to make the theory consistent with experiments.
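Schematically, and only as a sketch in the two-generation Cabibbo picture (with \theta_C the Cabibbo angle and F a loop function of the internal quark mass), the cancellation works like this. The up and charm contributions enter with opposite-sign Cabibbo factors:

\mathcal{A}(K_0 \leftrightarrow \bar{K_0}) \propto \sum_{q=u,c} \lambda_q F(m_q), \qquad \lambda_u = \cos\theta_C \sin\theta_C, \quad \lambda_c = -\sin\theta_C \cos\theta_C

so that

\mathcal{A} \propto \sin\theta_C \cos\theta_C \left[ F(m_u) - F(m_c) \right]

which would vanish exactly if the up and charm quarks were degenerate. The leftover piece is suppressed by the smallness of the quark masses relative to the W mass, matching the tiny observed rate.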

Experimental Evidence

Amusingly, two different experiments reported the same resonance at nearly the same time. In 1974, the Stanford Linear Accelerator Center [4] and Brookhaven National Lab [5] both reported a resonance at 3.1 GeV. SLAC named this particle the \psi, Brookhaven named it the J, and thus the J/\psi particle was born. It turns out that the resonance they detected was “charmonium,” a bound state of c\bar{c}.


[1] – Report on Long Lived K0. This paper experimentally confirms neutral Kaons oscillation.

[2] – Kaon Physics. This powerpoint contains the picture of neutral Kaon oscillation that I used.

[3] – Weak Interactions with Lepton-Hadron Symmetry. This is the paper by Glashow, Iliopoulos, and Maiani that outlines the GIM mechanism.

[4] – Discovery of a Narrow Resonance in e^+e^- Annihilation. This is the SLAC discovery of the J/\psi particle.

[5] – Experimental Observation of a Heavy Particle J. This is the Brookhaven discovery of the J/\psi particle.

[A] – History of Charm Quark

Dark matter from down under

It isn’t often I get to plug an important experiment in high-energy physics located within my own vast country so I thought I would take this opportunity to do just that. That’s right – the land of the kangaroo, meat-pie and infamously slow internet (68th in the world if I recall) has joined the hunt for the ever elusive dark matter particle.

By now you have probably heard that about 85% of the matter in the universe is made of stuff that is dark. Searching for this invisible matter has not been an easy task; the main strategy has involved looking for faint signals of dark matter particles, which constantly pass through the Earth unimpeded, scattering off nuclei in a detector. Up until now, the main contenders among these dark matter direct-detection experiments have been located in the northern hemisphere.

The SABRE (Sodium-iodide with Active Background REjection) collaboration plans to operate two detectors – one in my home state of Victoria, Australia at SUPL (Stawell Underground Physics Laboratory) and another in the northern hemisphere at LNGS, Italy. The choice to run two experiments in separate hemispheres has the goal of potentially removing systematic effects inherent in the seasonal cycle of the Earth. In particular, any of these seasonal effects should be opposite in phase between the hemispheres, whilst the dark matter signal should remain the same. This brings us to a novel dark matter direct-detection search method known as annual modulation, brought into the spotlight by the DAMA/LIBRA scintillation detector underground at the Laboratori Nazionali del Gran Sasso in Italy.

Around the world, around the world

Figure 1: When the Earth rotates around the sun, relative to the Milky Way’s DM halo, it experiences a larger interaction when it moves “head-on” with the wind. Taken from

The DAMA/LIBRA experiment superseded the DAMA/NaI experiment, which observed the dark matter halo over a period of 7 annual cycles from 1995 to 2002. The idea is quite simple, really. Current theory suggests that the Milky Way galaxy is surrounded by a halo of dark matter, with our solar system casually floating by, experiencing some “flux” of particles that pass through us all year round. However, current and past theory (up to a point) also suggest the Earth does a full revolution around the sun in a year’s time. With respect to this dark matter “wind”, the Earth’s relative velocity is largest on its approach, around the start of June, and smallest on its recession, in December. When studying detector interactions with the DM particles, one would then expect the rates to be higher in the middle of the year and lower at the end – hence an annual modulation. Up to here, annual modulation results are quite suitably model-independent, not depending on your particular choice of DM particle – so long as it has some interaction with the detector.
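A minimal sketch of what this modulation looks like: a constant rate plus a cosine that peaks when the Earth's velocity adds to the Sun's motion through the halo (around June 2, day ~152 of the year). The constant rate and modulation amplitude below are arbitrary illustrative numbers, not DAMA's measured values:

```python
import math

# Annual-modulation signal shape: R(t) = R0 + Sm * cos(2*pi*(t - t0)/T),
# peaking at t0 ~ day 152 (early June). R0 and Sm are illustrative only.

def rate(day, R0=1.0, Sm=0.02, t0=152.0, period=365.25):
    """Expected counting rate (arbitrary units) on a given day of the year."""
    return R0 + Sm * math.cos(2 * math.pi * (day - t0) / period)

june_rate = rate(152)       # maximum: Earth moving "into the wind"
december_rate = rate(335)   # minimum: half a year later
```

The modulation amplitude is only a few percent of the total rate, which is why backgrounds with their own seasonal cycles (temperature, radon levels, muon flux) are so dangerous – and why running twin detectors in opposite hemispheres, where those seasons flip phase, is such a clean cross-check.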

The DAMA collaboration, having reported almost 14 years of annual modulation results in total, claims evidence for a picture quite consistent with what would be expected for a range of dark matter scenarios in the energy range of 2-6 keV. This, however, has long been in tension with the wider dark matter detection community. Experiments such as XENON (which incidentally is also located in the Gran Sasso mountains) and CDMS have reported no detection of dark matter in the same ranges where the DAMA collaboration claims to have seen it – although these employ quite different materials, such as (you guessed it) liquid xenon in the case of XENON and cryogenically cooled semiconductors at CDMS.

Figure 2: Annual modulation results from DAMA. Could this be the presence of WIMP dark matter or some other seasonal effect? From the DAMA Collaboration.

Yes, there is also the COSINE-100 experiment, using the same material as DAMA (that is, sodium iodide), based in South Korea. And yes, they also published a letter to Nature claiming their results to be in “severe tension” with those of the DAMA annual modulation signal – under the assumption of WIMP interactions that are spin-independent with the detector material. However, this does not totally rule out the observation of dark matter by DAMA – just that it is very unlikely to correspond to the gold-standard WIMP in a standard halo scenario. According to the collaboration, it will certainly take years more data collection to know for sure. But that’s where SABRE comes in!

As above, so below

Before the arrival of SABRE’s twin detectors in both the northern and southern hemispheres, a first phase known as the PoP (Proof of Principle) must be performed to validate the entire search strategy and evaluate the backgrounds present in the crystal structures. Another key feature of SABRE is a crystal background rate well below that of DAMA/LIBRA, achieved using ultra-radiopure sodium iodide crystals. With the current estimated background and 50 kg of detector material, the DAMA/LIBRA signal should be independently verified (or refuted) within about 3 years.

If you ask me, there is something a little special about an experiment operating on the frontier of fundamental physics in a small regional Victorian town with a population of just over 6000, known for its active gold mining community and the oldest running foot race in Australia. Of course, Stawell features just the right environment to shield the detector from the relentless bombardment of cosmic rays on the Earth’s surface – and that’s why it is located 1 km underground. In fact, radiation contamination is such a prevalent issue for these sensitive detectors that everything from the concrete to the metal bolts that go in them must first be tested – and all this while the mine is still being operated.

Now, not only is SABRE running experiments in both Australia and Italy, but it actually comprises a collaboration of physicists from the UK and the USA as well. But most importantly (for me, anyway) – this is the southern hemisphere’s very first dark matter detector – a great milestone and a fantastic opportunity to put Aussies in the pilot’s seat to uncover one of nature’s biggest mysteries. But for now, crack open a cold one – footy’s almost on!

Figure 3: The SABRE collaboration operates internationally with detectors in the northern and southern hemispheres. Taken from GSSI.

References and Further Reading

  1. The SABRE dark matter experiment:
  2. The COSINE-100 experiment summarizing the annual modulation technique:
  3. The COSINE-100 Experiment search for dark matter in tension with that of the DAMA signal: arXiv:1906.01791.
  4. An overview of the SABRE experiment and its Proof of Principle (PoP) deployment: arXiv:1807.08073.

Dark Matter Cookbook: Freeze-In

In my previous post, we discussed the features of dark matter freeze-out. The freeze-out scenario is the standard production mechanism for dark matter. There is another closely related mechanism though, the freeze-in scenario. This mechanism achieves the same results as freeze-out, but in a different way. Here are the ingredients we need, and the steps to make dark matter according to the freeze-in recipe [1].


  • Standard Model particles that will serve as a thermal bath, we will call these “bath particles.”
  • Dark matter (DM).
  • A bath-DM coupling term in your Lagrangian.


  1. Pre-heat your early universe to temperature T. This temperature should be much greater than the dark matter mass.
  2. Add in your bath particles and allow them to reach thermal equilibrium. This will ensure that the bath has enough energy to produce DM once we begin the next step.
  3. Starting at zero, increase the bath-DM coupling such that DM production is very slow. The goal is to produce the correct amount of dark matter after letting the universe cool. If the coupling is too small, we won’t produce enough. If the coupling is too high, we will end up making too much dark matter. We want to make just enough to match the observed amount today.
  4. Slowly decrease the temperature of the universe while monitoring the DM production rate. This step is analogous to allowing the universe to expand. At temperatures lower than the dark matter mass, the bath no longer has enough energy to produce dark matter. At this point, the amount of dark matter has “frozen-in,” there are no other ways to produce more dark matter.
  5. Allow your universe to cool to 3 Kelvin and enjoy. If all went well, we should have a universe at the present-day temperature, 3 Kelvin, with the correct density of dark matter, (0.2-0.6) GeV/cm^3 [2].
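The recipe above can be caricatured numerically. The sketch below is a toy model, not the full Boltzmann treatment of [1]: it uses a made-up production kernel x³e⁻ˣ (with x = m/T) that simply encodes the Boltzmann suppression of production once the bath cools below the dark matter mass, and the quadratic dependence of the yield on the coupling:

```python
import numpy as np

# Toy freeze-in: accumulate comoving dark matter yield Y as the universe
# cools from x = m/T << 1 to x >> 1. Production goes as coupling^2 times
# a kernel that dies off for x >> 1 (here x^3 * exp(-x), a stand-in for
# the true collision term). Illustrative only -- not the equation in [1].
def freeze_in_yield(coupling, x_max=50.0, steps=100_000):
    x = np.linspace(1e-3, x_max, steps)
    dx = x[1] - x[0]
    dYdx = coupling**2 * x**3 * np.exp(-x)  # toy production rate
    return np.sum(dYdx) * dx                # accumulated ("frozen-in") yield

# Step 3 of the recipe: a larger coupling produces more dark matter;
# doubling the coupling quadruples the toy yield (quadratic scaling).
```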

This process is schematically outlined in the figure below, adapted from [1].

Schematic comparison of the freeze-in (dashed) and freeze-out (solid) scenarios.

On the horizontal axis we have the ratio of dark matter mass to temperature. Earlier times are to the left and later times are to the right. On the vertical axis is the dark matter number-density per entropy-density. This quantity automatically scales the number-density to account for cooling effects as the universe expands. The solid black line is the amount of dark matter that remains in thermal equilibrium with the bath. For the freeze-out recipe, the universe started out with a large population of dark matter that was in thermal equilibrium with the bath. In the freeze-in recipe, the universe starts with little to no dark matter, and it never reaches thermal equilibrium with the bath. The dashed (solid) colored lines are dark matter abundances in the freeze-in (out) scenarios. Observe that in the freeze-in scenario, the amount of dark matter increases as temperature decreases. In the freeze-out scenario, the amount of dark matter decreases as temperature decreases. Finally, the arrows indicate the effect of increasing the DM-bath coupling. For freeze-in, increasing this coupling leads to more dark matter, but in freeze-out, increasing this coupling leads to less dark matter.


[1] – Freeze-In Production of FIMP Dark Matter. This is the paper outlining the freeze-in mechanism.

[2] – Using Gaia DR2 to Constrain Local Dark Matter Density and Thin Dark Disk. This is the most recent measurement of the local dark matter density according to the Particle Data Group.

[3] – Dark Matter. This is the Particle Data Group review of dark matter.

[A] – Cake Recipe in strange science units. This SixtySymbols video provided the inspiration for the format of this post.

Does antihydrogen really matter?

Article title: Investigation of the fine structure of antihydrogen

Authors: The ALPHA Collaboration

Reference: (Open Access)

Physics often doesn’t delay our introduction to one of the most important concepts in history – symmetries (as I am sure many fellow physicists will agree). From the idea that “for every action there is an equal and opposite reaction” to the vacuum solutions of electric and magnetic fields from Maxwell’s equations, we often take such astounding universal principles for granted. For example, how many years after you first calculated the speed of a billiard ball using conservation of momentum did you realise that what you were doing was only valid because of the fundamental symmetrical structure of the laws of nature? And so goes our journey through physics education – we begin from what we ‘see’ and work toward understanding the real mechanisms that operate under the hood.

These days our understanding of symmetries, and of how they relate to the phenomena we observe, has developed so comprehensively throughout the 20th century that physicists are now often concerned with the opposite approach – applying the fundamental mechanisms to determine where the gaps are between what they predict and what we observe.

So far one of these important symmetries has stood up to the test of time, with no observable violation reported. This is the simultaneous transformation of charge conjugation (C), parity (P) and time reversal (T), or CPT for short. A ‘CPT-transformed’ universe would be like a mirror-image of our own, with all matter as antimatter and all momenta reversed. The amazing thing is that under all these transformations, the laws of physics behave in the exact same way. With such an exceptional result, we would want to be absolutely sure that all our experiments say the same thing, and that brings us to our current topic of discussion – antihydrogen.

Matter, but anti.

Figure 1: The Hydrogen atom and its nemesis – antihydrogen. Together they are: Light. Source: Berkeley Science Review

The trick with antimatter is to keep it as far away from normal matter as possible. Antimatter-matter pairs readily annihilate, releasing vast amounts of energy proportional to the mass of the particles involved. Hence it goes without saying that we can’t just keep them sealed up in Tupperware containers and store them next to aunty’s lasagne. But what if we start simple – bring together an antiproton and a single positron and voila, we have antihydrogen – the antimatter sibling of the most abundant element in nature. Well, this is precisely what the international ALPHA collaboration at CERN has been concerned with, combining “slowed-down” antiprotons with positrons in a device known as a Penning trap. Just like in hydrogen, the orbit of a positron around an antiproton behaves like a tiny magnet, a property known as the object’s magnetic moment. The difficulty, however, lies in the complexity of the external magnetic field required to ‘trap’ the neutral antihydrogen in space. Not surprisingly, then, only atoms of very low kinetic energy (i.e. cold ones) can be held, since they cannot overcome the weak effect of the external magnetic field.

There are plenty more details of how the ALPHA collaboration acquires antihydrogen for study. I’ll leave this up to a reference at the end. What I’ll focus on is what we can do with it and what it means for fundamental physics. In particular, one of the most intriguing predictions of the invariance of the laws of physics under charge, parity and time transformations is that antihydrogen should share many of the same properties as hydrogen. And not just the mass and magnetic moment, but also the fine structure (atomic transition frequencies). In fact, the most successful theory of the 20th century, quantum electrodynamics (QED), properly accommodating positron interactions, also predicts a foundational test for both matter and antimatter hydrogen – the splitting of the 2S_{1/2} and 2P_{1/2} energy levels (I’ll leave a reference to a refresher on this notation). This is of course the Nobel-Prize-winning Lamb shift in hydrogen, a feature of the interaction between the quantum fluctuations of the electromagnetic field and the orbiting electron.

I’m feelin’ hyperfine

Of course it is only very recently that atomic antimatter has been able to be created and trapped, allowing researchers to uniquely study the foundations of QED (and hence modern physics itself) from the perspective of this mirror-reflected anti-world. Very recently, the ALPHA collaboration has been able to report the fine structure of antihydrogen up to the n=2 state, using laser-induced optical excitations from the ground state and a strong external magnetic field. Undergraduates by now will have seen, at least qualitatively, that increasing the strength of an external magnetic field applied to an atom also increases the gaps between its energy levels, and hence the frequencies of their transitions. Maybe a little less known is the splitting due to the interaction between the electron’s spin angular momentum and that of the nucleus. This additional structure is known as the hyperfine structure, and is readily calculable in hydrogen using the spin-1/2 electron and proton.
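For the weak-field (linear) part of this Zeeman splitting, the level shift is simply ΔE = g_J m_J μ_B B. Here is a minimal numerical sketch of that formula (the full Breit-Rabi treatment needed for the combined hyperfine and Zeeman structure is more involved):

```python
# Weak-field Zeeman shift of an atomic level: Delta_E = g_J * m_J * mu_B * B.
# A minimal sketch only -- the actual ALPHA analysis uses the full
# hyperfine (Breit-Rabi) treatment, not this linear approximation.
MU_B = 5.7883818060e-5  # Bohr magneton in eV/T (CODATA value)

def zeeman_shift_eV(g_J, m_J, B_tesla):
    """Linear Zeeman energy shift in eV for a level with g-factor g_J."""
    return g_J * m_J * MU_B * B_tesla

# Splitting between m_J = +1/2 and m_J = -1/2 for an S_{1/2} state
# (g_J ~ 2) grows linearly with the field strength B.
def s_state_splitting_eV(B_tesla):
    return zeeman_shift_eV(2.0, 0.5, B_tesla) - zeeman_shift_eV(2.0, -0.5, B_tesla)
```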

Figure 2: The expected energy levels in the antimatter version of hydrogen, an antiproton with an orbiting positron. The splitting is shown increasing as a function of external magnetic field strength (x-axis), a phenomenon well known in hydrogen (and thus predicted in antihydrogen) as the Zeeman effect. The hyperfine splitting, due to the interaction between the positron and antiproton spin alignments, is also indicated by the arrows in the kets.

From the predictions of QED, one would expect antihydrogen to show precisely this same structure. Amazingly (or perhaps exactly as one would expect?), the average measurement of the antihydrogen transition frequencies agrees with those in hydrogen to 16 ppb (parts per billion) – an observation that solidly keeps CPT invariance intact but also opens up a new world of precision measurement of modern foundational physics. Similarly, with consideration of the Zeeman and hyperfine interactions, the 2P_{1/2} - 2P_{3/2} splitting is found to be consistent with the CPT invariance of QED at the level of 2 percent, and the Lamb shift itself (2S_{1/2} - 2P_{1/2}) at the level of 11 percent. With advancements in antiproton production and laser inducement of energy transitions, such tests provide unprecedented insight into the structure of antihydrogen. The presence of an antiproton and more accurate spectroscopy may even help in answering an unsolved question in physics: the size of the proton!

Figure 3: Transition frequencies observed in antihydrogen for the 1S-2P states (with various spin polarizations) compared with the theoretical expectation in hydrogen. The error bars are shown to 1 standard deviation.


  1. A Youtube link to how the ALPHA experiment acquires antihydrogen and measures excitations of anti-atoms:
  2. A picture of my aunty’s lasagne:
  3. A reminder of what that fancy notation for labeling spin states means:
  4. Details of the 1) Zeeman effect in atomic structure and 2) Lamb shift, discovery and calculation: 1) 2)
  5. Hyperfine structure (great to be familiar with, and even more interesting to calculate in senior physics years):
  6. Interested about why the size of the proton seems like such a challenge to figure out? See how the structure of hydrogen can be used to calculate it:

Dark Matter Freeze Out: An Origin Story

In the universe today, there exists some non-zero amount of dark matter. How did it get here? Has this same amount always been here? Was there more or less of it earlier in the universe? The so-called “freeze out” scenario is one explanation for how the amount of dark matter we see today came to be.

The freeze out scenario essentially says that there is some large amount of dark matter in the early universe that decreases to the amount we observe today. This early universe dark matter (\chi) is in thermal equilibrium with the particle bath (f), meaning that whatever particle processes create and destroy dark matter, they happen at equal rates, \chi \chi \rightleftharpoons f f, so that the net amount of dark matter is unchanged. We will take this as our “initial condition” and evolve it by letting the universe expand. For pedagogical reasons, we will name processes that create dark matter (f f \rightharpoonup \chi \chi) “production” processes, and processes that destroy dark matter ( \chi \chi \rightharpoonup f f) “annihilation” processes.

Now that we’ve established our initial condition, a large amount of dark matter in thermal equilibrium with the particle bath, let us evolve it by letting the universe expand. As the universe expands, two things happen:

  1. The energy scale of the particle bath (f) decreases. The expansion of the universe cools down the particle bath. At energy scales (temperatures) less than the dark matter mass, the production reaction becomes kinematically forbidden. This is because the initial bath particles simply don’t have enough energy to produce dark matter. The annihilation process, though, is unaffected; it only requires that dark matter particles find each other. The net effect is that as the universe cools, dark matter production slows down and eventually stops.
  2. Dark matter annihilations cease. Due to the expansion of the universe, dark matter particles become increasingly separated in space which makes it harder for them to find each other and annihilate. The result is that as the universe expands, dark matter annihilations eventually cease.
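These two effects can be seen by integrating the standard dimensionless Boltzmann equation dY/dx = -(λ/x²)(Y² - Y_eq²), with Y_eq ∝ x^{3/2} e^{-x}. The sketch below uses illustrative numbers and a simple backward-Euler step (the equation is stiff at early times, when Y tracks equilibrium tightly); it reproduces the qualitative behavior, not the exact curves of Kolb and Turner:

```python
import numpy as np

# Toy freeze-out: integrate dY/dx = -(lam / x^2) * (Y^2 - Yeq^2),
# where x = m/T, Y is the comoving number density, Yeq ~ x^1.5 * exp(-x)
# is the equilibrium density, and lam is proportional to <sigma_A v>.
# All prefactors here are illustrative.
def freeze_out_yield(lam, x_end=100.0, steps=20_000):
    xs = np.linspace(1.0, x_end, steps)
    dx = xs[1] - xs[0]
    Y = 0.145 * xs[0]**1.5 * np.exp(-xs[0])  # start in equilibrium
    for x in xs[1:]:
        Yeq = 0.145 * x**1.5 * np.exp(-x)
        # Backward-Euler step: solve Y' + a*Y'^2 = Y + a*Yeq^2 for Y',
        # which stays stable even when the equation is stiff.
        a = lam * dx / x**2
        Y = (-1.0 + np.sqrt(1.0 + 4.0 * a * (Y + a * Yeq**2))) / (2.0 * a)
    return Y

# A larger <sigma_A v> (larger lam) keeps annihilations efficient for
# longer, so less dark matter survives -- the behavior shown in Fig 1.
```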

Putting all of this together, we obtain the following plot, adapted from The Early Universe by Kolb and Turner and color-coded by me.

Fig 1: Color-coded freeze out scenario. The solid line is the density of dark matter that remains in thermal equilibrium as the universe expands. The dashed lines represent the freeze out density. The red region corresponds to a time in the universe when the production and annihilation rates are equal. The purple region: a time when the production rate is smaller than the annihilation rate. The blue region: a time when the annihilation rate is overwhelmed by the expansion of the universe.

  • On the horizontal axis is the dark matter mass divided by temperature T. It is often more useful to parametrize the evolution of the universe as a function of temperature rather than time, though the two are directly related.
  • On the vertical axis is the co-moving dark matter number density, which is the number of dark matter particles inside an expanding volume as opposed to a stationary volume. The comoving number density is useful because it accounts for the expansion of the universe.
  • The quantity \langle \sigma_A v \rangle is the thermally averaged annihilation cross section times relative velocity, which sets the rate at which dark matter annihilates. If the annihilation rate is small, then dark matter does not annihilate very often, and we are left with more. If we increase the annihilation rate, then dark matter annihilates more frequently, and we are ultimately left with less of it.
  • The solid black line is the comoving dark matter density that remains in thermal equilibrium, where the production and annihilation rates are equal. This line falls because as the universe cools, the production rate decreases.
  • The dashed lines are the “frozen out” dark matter densities that result from the cooling and expansion of the universe. The comoving density flattens off because the universe is expanding faster than dark matter can annihilate with itself.

The red region represents the hot, early universe where the production and annihilation rates are equal. Recall that the net effect is that the amount of dark matter remains constant, so the comoving density remains constant. As the universe begins to expand and cool, we transition into the purple region. This region is dominated by temperature effects, since as the universe cools the production rate begins to fall, and so the amount of dark matter that can remain in thermal equilibrium also falls. Finally, we transition to the blue region, where expansion dominates. In this region, dark matter particles can no longer find each other and annihilations cease. The comoving density is said to have “frozen out” because i) the universe is not energetic enough to produce new dark matter and ii) the universe is expanding faster than dark matter can annihilate with itself. Thus, we are left with a non-zero amount of dark matter that persists as the universe continues to evolve in time.


[1] – This plot is figure 5.1 of Kolb and Turner’s book The Early Universe (ISBN: 978-0201626742). There are many other plots that communicate essentially the same information, but they are much more cluttered.

[2] – Dark Matter Genesis. This is a PhD thesis that does a good job of summarizing the history of dark matter and explaining how the freeze out mechanism works.

[3] – Dark Matter Candidates from Particle Physics and Methods of Detection. This is a review article written by a very prominent member of the field, J. Feng of the University of California, Irvine.

[4] – Dark Matter: A Primer. Have any more questions about dark matter? They are probably addressed in this primer.