## Universality classes are not that universal (…and some of their other shortcomings)

Article: Shortcomings of New Parameterizations of Inflation
Authors: J. Martin, C. Ringeval and V. Vennin
Reference: arXiv:1609.04739

The Cosmic Microwave Background radiation (CMB) is an amazingly simple and beautiful picture that is pivotal for observational cosmology. It is the oldest light in our Universe: it originates from when the Universe was only a sprightly 380,000 years old, much younger than its current 13.8 billion years. Cosmologists can measure this radiation to high precision and extract information about the early Universe. Specifically, they can learn about the epoch of inflation, which was a period of accelerated expansion in the first fractions of a second after the Big Bang.

There are hundreds of theoretical inflationary models that attempt to describe the physics of this epoch. All of them make slightly different predictions for two important numbers, $n_s$ and $r$. The number $n_s$ measures the relative strength of cosmological perturbations with different wavelengths: if it is exactly equal to one, all perturbations have the same power; if it is smaller than one, the longer wavelengths have more power; and if it is bigger than one, the shorter wavelengths have more power. $r$ is, loosely speaking, the ratio of primordial gravitational waves to the other primordial cosmological fluctuations. In other words, it measures how many gravitational waves were produced in the early universe compared to what else was around at that time.

Recent data from the Planck satellite have ruled out many models on the basis of these numbers, but a large class of scenarios remains compatible with the data. The hope is that future experiments will further restrict the number of allowed models and help us pinpoint what actually happened during inflation. One way to determine the “correct” model of inflation is to systematically compare the predictions of each model with the data. A more efficient method, however, is a model-independent parametrization of inflation, so that you don’t have to specify your model, i.e. the particles and interactions, and then recompute the values of $n_s$ and $r$ every time you change the model. Several such model-independent descriptions of inflation exist: the truncated horizon-flow formalism, the hydrodynamical description of inflation and “universality classes”. The authors of this article argue that each of these approaches has serious shortcomings. Here I summarize the main drawbacks of the universality classes.

So what are universality classes? One can write $n_s$ and $r$ as functions of how much our universe expanded during the final phase of inflation, $\Delta N_*$. Since $\Delta N_*$ is typically large, $1 / \Delta N_*$ is small, so the expressions for $n_s$ and $r$ can be expanded in the small parameter $1 / \Delta N_*$. It then turns out that the leading-order behavior falls into three classes:

1. Perturbative, for which the leading order term shows power-law behavior;
2. Non-perturbative, for which the leading order term is an exponential;
3. Logarithmic, for which the leading order term is logarithmic in $\Delta N_*$.
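To make the perturbative class concrete, here is a minimal Python sketch (my own illustration, not from the paper) using the textbook leading-order predictions of quadratic ($m^2 \phi^2$) inflation, whose slow-roll parameter falls off as a power law and which therefore sits in the perturbative class:

```python
# Textbook leading-order predictions of quadratic (m^2 phi^2) inflation,
# a model in the "perturbative" universality class: its first slow-roll
# parameter falls off as a power law, epsilon ~ 1/(2 * DeltaN*).

def quadratic_predictions(delta_n):
    """Leading-order n_s and r for m^2 phi^2 inflation in 1/DeltaN*."""
    n_s = 1.0 - 2.0 / delta_n   # spectral index
    r = 8.0 / delta_n           # tensor-to-scalar ratio
    return n_s, r

for delta_n in (50, 60):
    n_s, r = quadratic_predictions(delta_n)
    print(f"DeltaN* = {delta_n}: n_s = {n_s:.4f}, r = {r:.3f}")
```

For comparison, Planck-era data prefer $n_s \approx 0.965$, and the sizeable predicted $r$ is one reason quadratic inflation is under observational pressure.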

When these classes were introduced, the idea was that they would allow a model-independent, systematic way to analyze the data. But we shall see that there are some issues with this approach.

First of all, it turns out that the three universality classes are not enough to describe all inflationary models. At leading order in the $1/\Delta N_*$ expansion, a given model may fall into one of these classes, but at next-to-leading order it can belong to a different class. For instance, the popular model of Starobinsky inflation is non-perturbative at leading order but logarithmic at the next. Thus, the three classes are not enough to classify all models, and you need to increase the number of classes if you consider the next-to-leading order. This is not a problem by itself, but it means that one of the attractive features of these universality classes (“you only need three classes”) is not true. Due to these higher-order terms, you need to introduce new hybrid classes, and the number of universality classes increases even further at higher orders. Now you may argue that this is only a theoretical issue and that the lowest order is more than enough to describe the data. But this is not the case, as you can see in Fig. 1: you need to go at least to the next order to get accurate predictions (especially for future CMB experiments).

Secondly, for many inflationary models, if one applies the $1/\Delta N_*$ expansion, one finds that the terms in the expansion depend on the parameters of the specific inflationary model. This is, in principle, okay, but it can happen that the $1/\Delta N_*$ expansion is only valid in a restricted region of parameter space, which may exclude the realistic parameter values of the model. So there are various inflationary scenarios for which one cannot even perform the $1/\Delta N_*$ expansion! Thus, the “universality classes” are not as universal as the name suggests.

Third, the $1/\Delta N_*$ expansion is only useful if you know a priori the allowed range of $\Delta N_*$. Typically, one assumes $\Delta N_* \in \left[ 50, 60 \right]$ (or, if you are slightly more conservative, $\Delta N_* \in \left[ 40, 70 \right]$). However, these values are only reasonable under specific conditions and depend sensitively on the period of reheating that follows inflation and on the scale of inflation. For instance, one can even have $\Delta N_* = 100$ in single-field inflation, depending on the details of reheating. So for various models, $\Delta N_*$ falls outside the typically chosen range. This calls into question the usefulness of the $1/\Delta N_*$ expansion, given that one cannot predict the value of $\Delta N_*$ accurately.
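A quick numerical illustration of this sensitivity (my own sketch, using the generic leading-order scaling $n_s \simeq 1 - 2/\Delta N_*$ as a stand-in):

```python
# How far does the predicted spectral index move as DeltaN* varies over
# the conservative range [40, 70]?  Sketch using the generic
# leading-order scaling n_s ~ 1 - 2/DeltaN* as a stand-in.

def n_s(delta_n):
    return 1.0 - 2.0 / delta_n

low, high = n_s(40), n_s(70)
spread = high - low
print(f"n_s(40) = {low:.4f}, n_s(70) = {high:.4f}, spread = {spread:.4f}")
```

The resulting spread in $n_s$ of roughly 0.02 is several times larger than Planck's quoted uncertainty on $n_s$, so not knowing $\Delta N_*$ genuinely limits the predictive power of the expansion.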

Finally, and this may be the strongest drawback of this model-independent approach: it is statistically flawed. If you look, for example, at the perturbative class and you want to fit the data to a power law for the slow-roll parameter $\epsilon$:

$\epsilon = \beta (\Delta N_*)^{- \alpha}$

then you need to specify a statistical prior for $\alpha$ and $\beta$; we may take a flat prior for these parameters. Now consider a particular model of inflation and reheating that falls within this universality class and is described by some parameters $\theta_1, \theta_2, \ldots, \theta_n$. For this model, $\alpha$ and $\beta$ are functions of $\theta_1, \theta_2, \ldots, \theta_n$, and assuming a flat prior for $\alpha$ and $\beta$ may induce a very funky prior on the parameters of the inflationary model. Thus, from a statistical point of view these universality classes are not particularly useful either, as they introduce very unnatural priors on the parameters of the inflationary models they try to describe.
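A toy Monte Carlo makes this prior mismatch concrete. Suppose, purely for illustration, that a hypothetical model has a single parameter $\theta$ entering the expansion through $\alpha = \theta^2$; a flat prior on $\alpha$ then induces a prior on $\theta$ that grows linearly with $\theta$:

```python
import random

# Toy illustration (hypothetical model, not from the paper): suppose a
# single model parameter theta in [0, 2] fixes the expansion
# coefficient via alpha = theta**2, so alpha ranges over [0, 4].
# A flat prior on alpha then induces the non-flat prior
# p(theta) = theta / 2 on theta, which strongly favors large theta.

random.seed(0)
N = 200_000
alphas = [random.uniform(0.0, 4.0) for _ in range(N)]  # flat prior on alpha
thetas = [a ** 0.5 for a in alphas]                    # induced samples of theta

# Under p(theta) = theta/2 the mean is int_0^2 theta*(theta/2) dtheta = 4/3,
# whereas a flat prior on [0, 2] would give mean 1.
mean_theta = sum(thetas) / N
print(f"mean theta under induced prior: {mean_theta:.3f} (flat prior: 1.000)")
```

The sampled mean lands near $4/3$ rather than $1$: the "uninformative" flat prior on $\alpha$ is quite informative about $\theta$.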

• Beautiful lecture notes on early universe cosmology and inflation by Daniel Baumann are available on his website
• Another set of great lecture notes by another expert in the field, Leonardo Senatore, is also a good introduction to inflation and the CMB
• For more background on the universality classes, see the original articles referenced in the paper.

## A new anomaly: the electromagnetic duality anomaly

Article: Electromagnetic duality anomaly in curved spacetimes
Authors: I. Agullo, A. del Rio and J. Navarro-Salas
Reference: arXiv:1607.08879

Disclaimer: this blogpost requires some basic knowledge of QFT (or being comfortable with taking my word at face value for some of the claims made :))

Anomalies exist everywhere. Probably the most intriguing ones are medical, but in particle physics they can be pretty fascinating too. In physics, an anomaly refers to the breaking of a classical symmetry by quantum effects. There are basically two types of anomalies:

• The first type, gauge anomalies, are red-flags: if they show up in your theory, they indicate that the theory is mathematically inconsistent.
• The second type of anomaly does not signal any problems with the theory and in fact can have experimentally observable consequences. A prime example is the chiral anomaly. This anomaly nicely explains the decay rate of the neutral pion into two photons.

In this paper, a new anomaly is discussed. This anomaly is related to the polarization of light and is called the electromagnetic duality anomaly.

Chiral anomaly 101
So let’s first brush up on the basics of the chiral anomaly. How does this anomaly explain the decay rate of the neutral pion into two photons? For that we need to start with the QED Lagrangian, which describes the interactions between the electromagnetic field (that is, the photons) and spin-½ fermions (which pions are built from):

$\displaystyle \mathcal L = \bar\psi \left( i \gamma^\mu \partial_\mu - e \gamma^\mu A_\mu - m \right) \psi$

where the important players in the above equation are the $\psi$s that describe the spin-½ particles and the vector potential $A_\mu$ that describes the electromagnetic field. In the massless limit ($m = 0$), this Lagrangian is invariant under the chiral transformation:

$\displaystyle \psi \to e^{i \alpha \gamma_5} \psi .$

Due to this symmetry, the current density $j^\mu = \bar{\psi} \gamma_5 \gamma^\mu \psi$ is conserved: $\nabla_\mu j^\mu = 0$. This immediately tells us that the charge associated with this current density is time-independent. Since the chiral charge is time-independent, it prevents the $\psi$ fields from decaying into the electromagnetic fields: the $\psi$ field has a non-zero chiral charge while the photons have none. Hence, if this were the end of the story, a pion would never be able to decay into two photons.
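The step from conserved current to time-independent charge is a one-line argument: writing the chiral charge as $Q_5 = \int d^3x \, j^0$, current conservation gives

$\displaystyle \frac{d Q_5}{dt} = \int d^3x \, \partial_0 j^0 = - \int d^3x \, \vec{\nabla} \cdot \vec{j} = 0 ,$

where the last equality drops a surface term at spatial infinity, which vanishes for fields that fall off fast enough.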

However, the conservation of the charge is only valid classically! As soon as you go from classical field theory to quantum field theory this is no longer true; hence the name (quantum) anomaly. This can be seen most succinctly using Fujikawa’s observation that even though the field $\psi$ and the Lagrangian are invariant under the chiral symmetry, this is not enough for the quantum theory to also be invariant. In the path integral approach to quantum field theory, it is not just the Lagrangian that needs to be invariant but the entire path integral:

$\displaystyle \int D[A] \, D[\bar\psi]\, D[\psi] \, e^{i\int d^4x \, \mathcal L}$ .

From calculating how the chiral symmetry acts on the measure $D \left[\psi \right] \, D \left[\bar \psi \right]$, one can extract all the relevant physics such as the decay rate.

The electromagnetic duality anomaly
Just like the chiral anomaly, the electromagnetic duality anomaly also breaks a symmetry at the quantum level that exists classically. The symmetry that is broken in this case is – as you might have guessed from its name – the electromagnetic duality. This symmetry is a generalization of a symmetry you are already familiar with from source-free electromagnetism. If you write down source-free Maxwell equations, you can just swap the electric and magnetic field and the equations look the same (you just have to send $\displaystyle \vec{E} \to \vec{B}$ and $\vec{B} \to - \vec{E}$). Now the more general electromagnetic duality referred to here is slightly more difficult to visualize: it is a rotation in the space of the electromagnetic field tensor and its dual. However, its transformation is easy to write down mathematically:

$\displaystyle F_{\mu \nu} \to \cos \theta \, F_{\mu \nu} + \sin \theta \, \, ^\ast F_{\mu \nu} .$

In other words, since this is a symmetry, if you plug this transformation into the Lagrangian of electromagnetism, the Lagrangian will not change: it is invariant. Now following the same steps as for the chiral anomaly, we find that the associated current is conserved and its charge is time-independent due to the symmetry. Here, the charge is simply the difference between the number of photons with left helicity and those with right helicity.
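As a quick sanity check (my own sketch, not from the paper), one can verify numerically that a duality rotation leaves the observable content of the free field untouched: rotating $(\vec{E}, \vec{B})$ by an angle $\theta$ preserves both the energy density $\tfrac{1}{2}(E^2 + B^2)$ and the Poynting vector $\vec{E} \times \vec{B}$:

```python
import math

def duality_rotate(E, B, theta):
    """Duality rotation: E -> cos(t) E + sin(t) B,  B -> cos(t) B - sin(t) E."""
    c, s = math.cos(theta), math.sin(theta)
    E2 = [c * e + s * b for e, b in zip(E, B)]
    B2 = [c * b - s * e for e, b in zip(E, B)]
    return E2, B2

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

# Arbitrary field values at a point (illustrative numbers).
E = [1.0, 2.0, -0.5]
B = [0.3, -1.0, 2.0]
E2, B2 = duality_rotate(E, B, 0.7)

energy = sum(x * x for x in E) + sum(x * x for x in B)
energy2 = sum(x * x for x in E2) + sum(x * x for x in B2)
print(abs(energy - energy2) < 1e-9)                     # energy density invariant

S, S2 = cross(E, B), cross(E2, B2)
print(all(abs(a - b) < 1e-9 for a, b in zip(S, S2)))    # Poynting vector invariant
```

Note the familiar swap $\vec{E} \to \vec{B}$, $\vec{B} \to -\vec{E}$ is just the special case $\theta = \pi/2$.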

Let us continue following the exact same steps as those for the chiral anomaly. The key is to first write electromagnetism in variables analogous to those of the chiral theory. Then you apply Fujikawa’s method and… *drum roll for the anomaly that is approaching*…. Anti-climax: nothing happens, everything seems to be fine. There are no anomalies, nothing!

So why the title of this blog? Well, as soon as you couple the electromagnetic field to a gravitational field, the electromagnetic duality is broken in a deeply quantum way: the number of photons with left helicity and with right helicity is no longer conserved when your spacetime is curved.

Physical consequences
Some potentially really cool consequences have to do with the study of light passing by rotating stars, black holes or even rotating clusters. These astrophysical objects do not only bend the light gravitationally; the optical helicity anomaly tells us that there might also be a difference in polarization between light rays coming from different sides of these objects. This may also have consequences for the cosmic microwave background radiation, which is a ‘picture’ of our universe when it was only 380,000 years old (as compared to the 13.8 billion years it is today!). How big this effect is and whether we will be able to see it in the near future is still an open question.

• An introduction to anomalies using only quantum mechanics instead of quantum field theory is “Anomalies for pedestrians” by Barry Holstein
• The beautiful book “Quantum field theory and the Standard Model” by Michael Schwartz has a nice discussion in the later chapters on the chiral anomaly.
• Lecture notes by Adel Bilal for graduate students on anomalies in general can be found here

## Can we measure black hole kicks using gravitational waves?

Article: Black hole kicks as new gravitational wave observables
Authors: Davide Gerosa, Christopher J. Moore
Reference: arXiv:1606.04226, Phys. Rev. Lett. 117, 011101 (2016)

On September 14, 2015, something really huge happened in physics: the first direct detection of a gravitational wave. But measuring a single gravitational wave was never the goal (though freaking cool in and of itself, of course!). So what is the purpose of gravitational wave astronomy?

The idea is that gravitational waves can be used as another tool to learn more about our Universe and its components. Until the discovery of gravitational waves, observations in astrophysics and astronomy were limited to telescopes and thus to electromagnetic radiation. Now a new era has started: the era of gravitational wave astronomy. And when the space-based eLISA observatory comes online, it will begin an era of gravitational wave cosmology. So what can we learn about our universe from gravitational waves?

First of all, the first detection aka GW150914 was already super interesting:

1. It was the first observation of a binary black hole system (with unexpected masses!).
2. It put some strong constraints on the allowed deviations from Einstein’s theory of general relativity.

What is next? We hope to detect a neutron star orbiting a black hole or another neutron star.  This will allow us to learn more about the equation of state of neutron stars and thus their composition. But the authors in this paper suggest another exciting prospect: observing so-called black hole kicks using gravitational wave astronomy.

So, what is a black hole kick? When two black holes rotate around each other, they emit gravitational waves. In this process, they lose energy and therefore they get closer and closer together before finally merging to form a single black hole. However, generically the radiation is not the same in all directions and thus there is also a net emission of linear momentum. By conservation of momentum, when the black holes merge, the final remnant experiences a recoil in the opposite direction. Previous numerical studies have shown that non-spinning black holes ‘only’ have kicks of ∼ 170 km per second, but you can also have “superkicks” as high as ∼5000 km per second! These speeds can exceed the escape velocity of even the most massive galaxies and may thus eject black holes from their hosts. These dramatic events have some electromagnetic signatures, but also leave an imprint in the gravitational waveform that we detect.

The idea is rather simple: as the system experiences a kick, its gravitational wave is Doppler shifted. This Doppler shift affects the frequency $f$ in the way you would expect; to first order in $v/c$,

$\displaystyle f \to f \left( 1 - \frac{\vec{v} \cdot \vec{n}}{c} \right),$

with $\vec{v}$ the kick velocity and $\vec{n}$ the unit vector in the direction from the observer to the black hole system (and $c$ the speed of light). The black hole dynamics is entirely captured by the dimensionless number $G f M / c^3$, with $M$ the mass of the binary (and $G$ Newton’s constant). So you can also model this shift in frequency by using the unkicked frequency $f_{\rm no\,kick}$ and absorbing the Doppler shift into the mass. This is very convenient, because it means that you can use all the current knowledge and results for the gravitational waveforms and just change the mass. The tricky part is that the velocity changes over time, and this needs to be modelled more carefully.

A crude model would be to say that during the inspiral of the black holes (the long phase during which the two black holes rotate around each other – see figure 1), the emitted linear momentum is too small to matter and the mass is unaffected by the emission of linear momentum. During the final stages the black holes merge, and the final remnant emits a gravitational wave with decreasing amplitude, which is called the ringdown phase. During this latter phase the velocity kick is important, and one can relate the mass during inspiral $M_i$ to the mass during the ringdown phase $M_r$ simply by

$\displaystyle M_r = M_i \left( 1 + \frac{\vec{v} \cdot \vec{n}}{c} \right).$

The results of doing this for a black hole kick moving away from (or towards) us are shown in fig. 2: the wave gets redshifted (or blueshifted).
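To get a feel for the size of the effect, a back-of-the-envelope sketch (mine, not the authors'): the Doppler shift rescales the observed frequency, and hence the apparent mass, by a fractional amount of order $v/c$:

```python
# Order-of-magnitude sketch: the Doppler shift from a kick rescales the
# observed frequency (equivalently, the apparent mass) by a fractional
# amount ~ v/c along the line of sight.

C_KM_S = 299_792.458  # speed of light in km/s

for v_kick in (170.0, 5000.0):        # typical kick vs. "superkick", km/s
    shift = v_kick / C_KM_S           # fractional frequency/mass shift
    print(f"v = {v_kick:6.0f} km/s  ->  fractional shift ~ {shift:.2e}")
```

So even a superkick changes the apparent mass only at the percent level, which sets the measurement precision needed to see these events in the waveform.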

This model is refined in various ways, and the results show that it is unlikely that kicks will be measured by LIGO: LIGO is optimized for detecting black holes with relatively low masses, and low-mass black hole systems have velocity kicks that are too small to be detected. However, the prospects for eLISA are better for two reasons: (1) eLISA is designed to measure supermassive black hole binaries with masses in the range of $10^5$ to $10^{10}$ solar masses, which can have much larger kicks (and thus more easily detectable ones), and (2) the signal-to-noise ratio for eLISA is much higher, giving better data. This study estimates about 6 detectable kicks per year. Thus, black hole (super)kicks might be detected in the next decade using gravitational wave astronomy. The future is bright 🙂