LHCb’s Flavor Mystery Deepens

Title: Measurement of CP-averaged observables in the B0→K*0µ+µ− decay

Authors: LHCb Collaboration

Reference: https://arxiv.org/abs/2003.04831

In the Standard Model, matter is organized into three generations: three copies of the same family of particles, but with sequentially heavier masses. Though the Standard Model successfully describes this structure, it offers no insight into why nature should be this way. Many believe that a more fundamental theory of nature would better explain where this structure comes from. A natural way to look for clues to this deeper origin is to check whether these different ‘flavors’ of particles really behave in exactly the same ways, or whether there are subtle differences that may hint at their origin.

The LHCb experiment is designed to probe exactly these types of questions. In recent years, it has seen a series of anomalies – tensions between data and Standard Model predictions – that may be indicating the presence of new particles which talk to the different generations. In the Standard Model, the different generations can only interact with each other through the W boson, which means that quarks with the same charge can only transition into one another through more complicated loop-level processes, like those described by ‘penguin diagrams’.

These interactions typically have quite small rates in the Standard Model, which makes them quite sensitive to new particles, even ones that are very heavy or interact very weakly with the Standard Model ones. This makes studying these sorts of flavor decays a promising avenue to search for new physics.

In a press conference last month, LHCb unveiled a new measurement of the angular distribution of the rare B0→K*0μ+μ– decay. The interesting part of this process involves a b → s transition (a bottom quark decaying into a strange quark), where a number of anomalies have been seen in recent years.

Rather than just measuring the total rate of this decay, this analysis focuses on measuring the angular distribution of the decay products. They also perform this measurement in different bins of ‘q^2’, the invariant mass squared of the dimuon pair. These choices make the measurement less sensitive to uncertainties in the Standard Model prediction from difficult-to-compute hadronic effects, and also allow a better characterization of the nature of whatever particle may be causing a deviation.

The kinematics of the decay are fully described by three angles between the final-state particles, together with q^2. Based on the known spins and polarizations of each of the particles, the angular distribution can be fully described in terms of eight parameters. They also have to account for the angular distribution of background events, and for distortions of the true angular distribution caused by the detector. Once all such effects are accounted for, they are able to fit the full angular distribution in each q^2 bin to extract the angular coefficients in that bin.
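Schematically, the CP-averaged angular distribution takes the standard form used in the literature, with the eight parameters appearing as coefficients of fixed angular functions (coefficient names follow the common convention; the exact normalization should be checked against the paper itself):

```latex
\frac{1}{\mathrm{d}\Gamma/\mathrm{d}q^2}
\frac{\mathrm{d}^4\Gamma}{\mathrm{d}q^2\,\mathrm{d}\cos\theta_\ell\,\mathrm{d}\cos\theta_K\,\mathrm{d}\phi}
= \frac{9}{32\pi}\Big[
  \tfrac{3}{4}(1-F_L)\sin^2\theta_K + F_L\cos^2\theta_K
  + \tfrac{1}{4}(1-F_L)\sin^2\theta_K\cos 2\theta_\ell
  - F_L\cos^2\theta_K\cos 2\theta_\ell
  + S_3\sin^2\theta_K\sin^2\theta_\ell\cos 2\phi
  + S_4\sin 2\theta_K\sin 2\theta_\ell\cos\phi
  + S_5\sin 2\theta_K\sin\theta_\ell\cos\phi
  + \tfrac{4}{3}A_{FB}\sin^2\theta_K\cos\theta_\ell
  + S_7\sin 2\theta_K\sin\theta_\ell\sin\phi
  + S_8\sin 2\theta_K\sin 2\theta_\ell\sin\phi
  + S_9\sin^2\theta_K\sin^2\theta_\ell\sin 2\phi
\Big]
```

Here $\theta_\ell$ and $\theta_K$ are the decay angles of the dimuon and K*0 systems, $\phi$ is the angle between their decay planes, and the eight fit parameters are $F_L$, $A_{FB}$, and $S_{3,4,5,7,8,9}$.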

This measurement is an update to their 2015 result, now with twice as much data. The previous result saw an intriguing tension with the SM at the level of roughly 3 standard deviations. The new result agrees well with the previous one, and mildly increases the tension to the level of 3.4 standard deviations.

This latest result is even more interesting given that LHCb has seen an anomaly in another measurement (the R_K anomaly) involving the same b → s transition. This has led some to speculate that both effects could be caused by a single new particle. The most popular idea is a so-called ‘leptoquark’ that interacts with only some of the flavors.

LHCb is already hard at work on updating this measurement with more recent data from 2017 and 2018, which should once again double the number of events. Updates to the R_K measurement with new data are also hotly anticipated. The Belle II experiment has also recently started taking data and should be able to perform similar measurements. So we will have to wait and see whether this anomaly is just a statistical fluke, or our first window into physics beyond the Standard Model!

Symmetry Magazine “The mystery of particle generations”

Cern Courier “Anomalies persist in flavour-changing B decays”

Lecture Notes “Introduction to Flavor Physics”

Does antihydrogen really matter?

Article title: Investigation of the fine structure of antihydrogen

Authors: The ALPHA Collaboration

Reference: https://doi.org/10.1038/s41586-020-2006-5 (Open Access)

Physics doesn’t delay our introduction to one of the most important concepts in its history – symmetries (as I am sure many fellow physicists will agree). From the idea that “for every action there is an equal and opposite reaction” to the vacuum solutions for the electric and magnetic fields of Maxwell’s equations, we often take such astounding universal principles for granted. For example, how many years after you first calculated the speed of a billiard ball using conservation of momentum did you realise that what you were doing was only valid because of the fundamental symmetrical structure of the laws of nature? And so goes our journey through physics education – we begin with what we ‘see’ and work toward the real mechanisms operating under the hood.

Our understanding of symmetries and how they relate to the phenomena we observe developed so comprehensively throughout the 20th century that physicists are now often concerned with the opposite approach – applying the fundamental mechanisms to determine where the gaps lie between what they predict and what we observe.

So far, one of these important symmetries has stood up to the test of time, with no violation yet observed. This is the simultaneous transformation of charge conjugation (C), parity (P) and time reversal (T), or CPT for short. A ‘CPT-transformed’ universe would be like a mirror image of our own, with all matter as antimatter and all momenta reversed. The amazing thing is that under all these transformations, the laws of physics behave in exactly the same way. With such an exceptional result, we would want to be absolutely sure that all our experiments say the same thing, and that brings us to our current topic of discussion – antihydrogen.

Matter, but anti.

The trick with antimatter is to keep it as far away from normal matter as possible. Matter–antimatter pairs readily annihilate, releasing vast amounts of energy proportional to the mass of the particles involved. Hence it goes without saying that we can’t just keep antimatter sealed up in Tupperware containers and store it next to aunty’s lasagne. But what if we start simple – bring together an antiproton and a single positron and voilà, we have antihydrogen – the antimatter sibling of the most abundant element in nature. This is precisely what the international ALPHA collaboration at CERN has been doing, combining ‘slowed-down’ antiprotons with positrons in a device known as a Penning trap. Just as in hydrogen, a positron orbiting an antiproton behaves like a tiny magnet, a property characterized by its magnetic moment. The difficulty, however, lies in the complexity of the external magnetic field required to ‘trap’ the neutral antihydrogen in space. Not surprisingly, then, only atoms of very low kinetic energy (i.e. cold ones) can be held, since they cannot overcome even the weak effect of the external magnetic field.

There are plenty more details of how the ALPHA collaboration acquires antihydrogen for study; I’ll leave those to a reference at the end. What I’ll focus on is what we can do with it and what it means for fundamental physics. In particular, one of the most intriguing predictions of the invariance of the laws of physics under charge, parity and time transformations is that antihydrogen should share many of the same properties as hydrogen – not just the mass and magnetic moment, but also the fine structure (atomic transition frequencies). In fact, the most successful theory of the 20th century, quantum electrodynamics (QED), properly accommodating anti-electronic interactions, also predicts a foundational test for both hydrogen and antihydrogen – the splitting of the $2S_{1/2}$ and $2P_{1/2}$ energy levels (I’ll leave a reference to a refresher on this notation). This is of course the Nobel-Prize-winning Lamb shift, a feature of the interaction between quantum fluctuations in the electromagnetic field and the orbiting electron.

I’m feelin’ hyperfine

Of course, it is only very recently that atomic forms of antimatter have been created and trapped, allowing researchers to study the foundations of QED (and hence modern physics itself) from the perspective of this mirror-reflected anti-world. The ALPHA collaboration has now been able to report the fine structure of antihydrogen up to the $n=2$ state, using laser-induced optical excitations from the ground state in a strong external magnetic field. Undergraduates by now will have seen, at least qualitatively, that increasing the strength of an external magnetic field applied to an atom also increases the gaps between its energy levels, and hence the frequencies of their transitions. Maybe a little less well known is the splitting due to the interaction between the spin angular momentum of the electron and that of the nucleus. This additional structure is known as the hyperfine structure, and is readily calculable in hydrogen using the spin-1/2 electron and proton.
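For a rough sense of scale of that field-induced splitting: a state whose magnetic moment is of order one Bohr magneton shifts linearly with the field, at about 14 GHz per tesla. A back-of-the-envelope sketch (the actual level structure in ALPHA’s trap field is considerably more involved):

```python
import math

# Back-of-the-envelope Zeeman scale: a magnetic moment of one Bohr
# magneton shifts in frequency by mu_B * B / h in a field B.
MU_B = 9.2740100783e-24    # Bohr magneton, J/T (CODATA)
H_PLANCK = 6.62607015e-34  # Planck constant, J s (exact, SI)

def zeeman_shift_ghz(b_tesla):
    """Linear Zeeman frequency shift, in GHz, for a 1-mu_B moment."""
    return MU_B * b_tesla / H_PLANCK / 1e9

print(zeeman_shift_ghz(1.0))  # about 14 GHz per tesla
```

So even a modest trap-scale field rearranges the level spacings by amounts enormous compared with the precision of the transition measurements, which is why the Zeeman interaction must be modeled carefully before comparing hydrogen and antihydrogen.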

From the predictions of QED, one would expect antihydrogen to show precisely this same structure. Amazingly (or perhaps exactly as one would expect?), the average measured antihydrogen transition frequencies agree with those in hydrogen to 16 ppb (parts per billion) – an observation that solidly keeps CPT invariance in place, but also opens up a new world of precision measurement of foundational modern physics. Similarly, after accounting for the Zeeman and hyperfine interactions, the $2P_{1/2} - 2P_{3/2}$ splitting is found to be consistent with the CPT invariance of QED at the 2 percent level, and the Lamb shift ($2S_{1/2} - 2P_{1/2}$) at the 11 percent level. With advancements in antiproton production and laser-driven excitation of transitions, such tests provide unprecedented insight into the structure of antihydrogen. The presence of an antiproton and more accurate spectroscopy may even help answer an unsolved question in physics: the size of the proton!

References

1. A Youtube link to how the ALPHA experiment acquires antihydrogen and measures excitations of anti-atoms: http://alpha.web.cern.ch/howalphaworks
2. A picture of my aunty’s lasagne: https://imgur.com/a/2ffR4C3
3. A reminder of what that fancy notation for labeling spin states means: https://quantummechanics.ucsd.edu/ph130a/130_notes/node315.html
4. Details of the 1) Zeeman effect in atomic structure and 2) Lamb shift, discovery and calculation: 1) https://en.wikipedia.org/wiki/Zeeman_effect 2) https://en.wikipedia.org/wiki/Lamb_shift
5. Hyperfine structure (great to be familiar with, and even more interesting to calculate in senior physics years): https://en.wikipedia.org/wiki/Hyperfine_structure
6. Interested about why the size of the proton seems like such a challenge to figure out? See how the structure of hydrogen can be used to calculate it: https://en.wikipedia.org/wiki/Proton_radius_puzzle

Dark Matter Freeze Out: An Origin Story

In the universe today, there exists some non-zero amount of dark matter. How did it get here? Has this same amount always been here? Was there more or less of it earlier in the universe? The so-called “freeze out” scenario is one explanation for how the amount of dark matter we see today came to be.

The freeze out scenario essentially says that there was some large amount of dark matter in the early universe that decreased to the amount we observe today. This early-universe dark matter $(\chi)$ is in thermal equilibrium with the particle bath $(f)$, meaning that whatever particle processes create and destroy dark matter happen at equal rates, $\chi \chi \rightleftharpoons f f$, so that the net amount of dark matter is unchanged. We will take this as our “initial condition” and evolve it by letting the universe expand. For pedagogical reasons, we will call processes that create dark matter $(f f \rightharpoonup \chi \chi)$ “production” processes, and processes that destroy dark matter $( \chi \chi \rightharpoonup f f)$ “annihilation” processes.

Now that we’ve established our initial condition, a large amount of dark matter in thermal equilibrium with the particle bath, let us evolve it by letting the universe expand. As the universe expands, two things happen:

1. The energy scale of the particle bath $(f)$ decreases. The expansion of the universe cools down the particle bath. At energy scales (temperatures) less than the dark matter mass, the production reaction becomes kinematically forbidden: the bath particles simply don’t have enough energy to produce dark matter. The annihilation process, though, is unaffected; it only requires that dark matter particles find each other. The net effect is that as the universe cools, dark matter production slows down and eventually stops.
2. Dark matter annihilations cease. Due to the expansion of the universe, dark matter particles become increasingly separated in space which makes it harder for them to find each other and annihilate. The result is that as the universe expands, dark matter annihilations eventually cease.
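The interplay of these two effects can be captured in a toy version of the freeze-out Boltzmann equation, $dY/dx = -(\lambda/x^2)(Y^2 - Y_{eq}^2)$, where $Y$ is the comoving yield, $x = m/T$, and $\lambda$ is proportional to the annihilation rate $\langle \sigma_A v \rangle$. A rough numerical sketch in arbitrary units (the $\lambda$ values here are made up purely for illustration):

```python
import math

# Toy freeze-out: evolve the comoving yield Y in x = m/T using
#   dY/dx = -(lam / x**2) * (Y**2 - Yeq**2).
# A semi-implicit Euler step keeps this stiff equation stable.

def yeq(x):
    # Equilibrium yield of a nonrelativistic species: ~ x^{3/2} e^{-x}
    return x**1.5 * math.exp(-x)

def relic_yield(lam, x_start=1.0, x_end=100.0, steps=100_000):
    dx = (x_end - x_start) / steps
    y = yeq(x_start)  # initial condition: thermal equilibrium
    x = x_start
    for _ in range(steps):
        x += dx
        k = lam / x**2
        # semi-implicit update for dY/dx = -k (Y^2 - Yeq^2)
        y = (y + dx * k * yeq(x)**2) / (1.0 + dx * k * y)
    return y

# A larger annihilation rate <sigma_A v> leaves less dark matter behind.
y_weak = relic_yield(lam=1e3)
y_strong = relic_yield(lam=1e5)
assert y_strong < y_weak
```

The yield tracks equilibrium while annihilations are fast, then flattens out once the expansion wins, reproducing the qualitative shape of the dashed curves in the plot discussed below.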

Putting all of this together, we obtain the following plot, adapted from The Early Universe by Kolb and Turner and color-coded by me.

• On the horizontal axis is the dark matter mass divided by temperature $T$. It is often more useful to parametrize the evolution of the universe as a function of temperature rather than time, though the two are directly related.
• On the vertical axis is the co-moving dark matter number density, which is the number of dark matter particles inside an expanding volume as opposed to a stationary volume. The comoving number density is useful because it accounts for the expansion of the universe.
• The quantity $\langle \sigma_A v \rangle$ is the rate at which dark matter annihilates. If the annihilation rate is small, then dark matter does not annihilate very often, and we are left with more. If we increase the annihilation rate, then dark matter annihilates more frequently, and we are ultimately left with less of it.
• The solid black line is the comoving dark matter density that remains in thermal equilibrium, where the production and annihilation rates are equal. This line falls because as the universe cools, the production rate decreases.
• The dashed lines are the “frozen out” dark matter densities that result from the cooling and expansion of the universe. The comoving density flattens off because the universe is expanding faster than dark matter can annihilate with itself.

The red region represents the hot, early universe, where the production and annihilation rates are equal. Recall that the net effect is that the amount of dark matter remains constant, so the comoving density remains constant. As the universe begins to expand and cool, we transition into the purple region. This region is dominated by temperature effects: as the universe cools, the production rate begins to fall, and so the amount of dark matter that can remain in thermal equilibrium also falls. Finally, we transition to the blue region, where expansion dominates. In this region, dark matter particles can no longer find each other, and annihilations cease. The comoving density is said to have “frozen out” because i) the universe is not energetic enough to produce new dark matter and ii) the universe is expanding faster than dark matter can annihilate with itself. Thus, we are left with a non-zero amount of dark matter that persists as the universe continues to evolve in time.

References

[1] – This plot is figure 5.1 of Kolb and Turner’s book The Early Universe (ISBN: 978-0201626742). There are many other plots that communicate essentially the same information, but they are much more cluttered.

[2] – Dark Matter Genesis. This is a PhD thesis that does a good job of summarizing the history of dark matter and explaining how the freeze out mechanism works.

[3] – Dark Matter Candidates from Particle Physics and Methods of Detection. This is a review article written by a very prominent member of the field, J. Feng of the University of California, Irvine.

[4] – Dark Matter: A Primer. Have any more questions about dark matter? They are probably addressed in this primer.