Protons and neutrons at first glance seem like simple objects. They have well-defined spin and electric charge, and we even know their quark compositions: a proton is composed of two up quarks and one down quark, and a neutron of two down quarks and one up quark. Further, if a proton is moving, it carries momentum, but how is this momentum distributed among its constituent quarks? In this post, we will see that most of the momentum of the proton is in fact not carried by its constituent quarks.
Before we start, we need to have a small discussion about isospin. This will let us immediately write down the results we need later. Isospin is a quantum number that, in practice, allows us to package particles together. Protons and neutrons form an isospin doublet, which means they come in the same mathematical package: the proton is the isospin +1/2 component of this package, and the neutron is the isospin -1/2 component. Similarly, up quarks and down quarks form their own isospin doublet, and they come in their own package. If we are careful to choose which particles we scatter off of each other in our experiment, our calculations will permit us to exchange components of isospin packages everywhere instead of redoing the calculations from scratch. This exchange is what I will call the “isospin trick.” It turns out that comparing electron-proton scattering to electron-neutron scattering allows us to use this trick: the up-quark distribution in the proton is the down-quark distribution in the neutron, and vice versa,

$f_u^p(x) = f_d^n(x), \qquad f_d^p(x) = f_u^n(x).$
Back to protons and neutrons. We know that protons and neutrons are composite particles: they themselves are made up of more fundamental objects. We need a way to “zoom into” these composite particles, to look inside them, and we do this with the help of structure functions. Structure functions for the proton and neutron encode how electric charge and momentum are distributed among the constituents. We assign $f_u(x)$ and $f_d(x)$ to be the probabilities of finding an up or down quark carrying a fraction $x$ of the proton's momentum. Explicitly, these structure functions look like:

$F_2(x) = \sum_q Q_q^2 \, x f_q(x)$

$F_2^p(x) = x \left( \frac{4}{9} f_u(x) + \frac{1}{9} f_d(x) \right)$

$F_2^n(x) = x \left( \frac{4}{9} f_d(x) + \frac{1}{9} f_u(x) \right)$
where the first line is the definition of a structure function. In this line, $q$ runs over quarks, and $Q_q$ is the electric charge of quark $q$, so that $Q_u^2 = 4/9$ and $Q_d^2 = 1/9$. In the second line, we have written out explicitly the structure function for the proton, $F_2^p(x)$, and invoked the isospin trick to immediately write down the structure function for the neutron, $F_2^n(x)$, in the third line. Observe that if we had attempted to write down $F_2^n(x)$ directly from the definition in line 1, we would have gotten the same expression as for the proton.
At this point we must turn to experiment to determine $f_u(x)$ and $f_d(x)$. The plot we will examine [1] is Figure 17.6, taken from Section 17.4 of Peskin and Schroeder, An Introduction to Quantum Field Theory. Some data is omitted to illustrate a point.
This plot shows the momentum distribution of the up and down quarks inside a proton. On the horizontal axis is the momentum fraction $x$, and on the vertical axis is probability. The two curves represent the probability distributions of the up (u) and down (d) quarks inside the proton. Integrating these curves gives us the total fraction of the proton's momentum stored in the up and down quarks, which I will call uppercase $U$ and uppercase $D$. We want to know both $U$ and $D$, so we need another equation to solve this system. Luckily, we can repeat this experiment using neutrons instead of protons, obtain a similar set of curves, and integrate them to obtain the following system of equations:

$\int_0^1 F_2^p(x) \, dx = \frac{4}{9} U + \frac{1}{9} D \approx 0.18$

$\int_0^1 F_2^n(x) \, dx = \frac{1}{9} U + \frac{4}{9} D \approx 0.12$
Solving this system for $U$ and $D$ yields $U \approx 0.36$ and $D \approx 0.18$. We immediately see that the total momentum carried by the up and down quarks is only about half of the momentum of the proton. Said a different way, the three quarks that make up the proton carry only about half of its momentum. One possible conclusion is that the proton has more “stuff” inside of it that is storing the remaining momentum. It turns out that this additional “stuff” is gluons, the mediators of the strong force. If we include gluons (and anti-quarks) in the momentum distribution, we can see that at low momentum fraction $x$, most of the proton's momentum is stored in gluons. Throughout this discussion, we have neglected anti-quarks because even at low momentum fractions, they are sub-dominant to gluons. The full plot as seen in Peskin and Schroeder is provided below for completeness.
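As a quick sanity check of this arithmetic, here is a minimal Python sketch that solves the system above (the integral values 0.18 and 0.12 are the textbook inputs quoted above; the variable names are my own):

```python
import numpy as np

# Coefficients come from the squared quark charges:
# Q_u^2 = (2/3)^2 = 4/9 and Q_d^2 = (-1/3)^2 = 1/9
A = np.array([[4/9, 1/9],
              [1/9, 4/9]])

# Measured momentum integrals of F2 for the proton and neutron
b = np.array([0.18, 0.12])

U, D = np.linalg.solve(A, b)
print(f"U = {U:.2f}, D = {D:.2f}, U + D = {U + D:.2f}")
# U = 0.36, D = 0.18, U + D = 0.54 -> only about half the proton's momentum
```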
References
[1] – Peskin and Schroeder, An Introduction to Quantum Field Theory, Section 17.4, figure 17.6.
[B] – Fundamentals in Nuclear Theory, Ch. 3. This is a more technical treatment of isospin, roughly at the level of advanced undergraduate quantum mechanics.
Article title: The ANITA Anomalous Events as Signatures of a Beyond Standard Model Particle and Supporting Observations from IceCube
Authors: Derek B. Fox, Steinn Sigurdsson, Sarah Shandera, Peter Mészáros, Kohta Murase, Miguel Mostafá, and Stephane Coutu
Reference: arXiv:1809.09615v1
Neutrinos have arguably made history for being nothing other than controversial. In fact, from their very inception, their proposal by Wolfgang Pauli was described in his own words as “something no theorist should ever do.” Years on, once we had established the role that neutrinos play in the processes of our sun, it was discovered that the sun simply wasn't providing enough of them. In the end, the only option was to concede that neutrinos were more complicated than we ever thought, opening up a new area of study of ‘flavor oscillations’ with the consequence that they may in fact possess a small, non-zero mass, which to this day remains unexplained.
On a more recent note, neutrinos have sneakily raised eyebrows with a number of other interesting anomalies. The OPERA collaboration, running between CERN in Geneva, Switzerland and Gran Sasso, Italy, made international news with reports that its neutrinos' speeds exceeded the speed of light. Such an observation would shatter the very foundations of modern physics, and so it was met with plenty of healthy skepticism. Alas, it was eventually traced back to a faulty timing cable, and all was right with the world again. However, that was not the last time neutrinos would be involved in a controversial anomaly.
The NASA-involved Antarctic Impulsive Transient Antenna (ANITA) is an experiment designed to search for very, very energetic neutrinos originating from outer space. As the name suggests, the experiment consists of a series of radio antennae carried by a balloon floating above the Antarctic ice. Very energetic neutrinos can in fact produce intense radio-wave signals when they pass through the Earth and scatter off atoms in the Antarctic ice. This may sound strange, as neutrinos are typically referred to as ‘elusive’; however, at incredibly high energies their probability of scattering increases dramatically, to the point where the Earth is ‘opaque’ to these neutrinos.
ANITA typically searches for the radio signals of the electromagnetic components of cosmic-ray showers in the atmosphere; these signals reflect off the ice surface, which inverts the phase of the radio wave. Alternatively, a small number of events can arrive from the direction of the horizon, without reflecting off the ice and hence without an inverted waveform. However, physicists were surprised to find signals originating from below the ice, without phase inversion, in a direction much too steep to originate from the horizon.
Why is this a surprise, you may ask? Any particle present in the SM at these energies would have trouble traversing such a long distance through the Earth: one of the observations corresponds to a chord length of 5700 km, whereas a neutrino would be expected to survive only a few hundred km. Such events would be expected to mainly involve tau neutrinos ($\nu_\tau$), since these have the potential to convert to a charged tau lepton shortly before arriving, which then hadronizes into an air shower; this is simply not possible for electrons or muons, which are absorbed over a much smaller distance. But even in the case of tau neutrinos, the probability of such an event occurring with the observed trajectory is very small (below one in a million), leading physicists to explore more exotic (and exciting) options.
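To get a feel for how improbable this is, consider a rough back-of-the-envelope attenuation estimate (a sketch only: it ignores tau regeneration and the energy dependence of the cross section, and takes the few-hundred-km survival distance quoted above at face value). Treating absorption as exponential over the 5700 km chord,

$P_{\mathrm{surv}} \sim e^{-L/L_\nu} \approx e^{-5700/300} \sim 6 \times 10^{-9},$

which is even smaller than the quoted one-in-a-million figure; effects like tau regeneration soften the naive estimate, but the conclusion that a Standard Model neutrino is very unlikely to survive this trajectory stands.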
A simple possibility is that an ultra-high-energy neutrino coming from space could interact within the Earth to produce a BSM (Beyond the Standard Model) particle that passes through the Earth until it exits, decays back to an SM lepton, and then hadronizes into a shower of particles. Such a situation is shown in Figure 1, where the BSM particle comes from the well-known and popular supersymmetric extension of the SM: the stau slepton.
In some popular supersymmetric extensions of the Standard Model, the stau slepton is typically the next-to-lightest supersymmetric particle (or NLSP) and can in fact be quite long-lived. In the presence of a nucleus, the stau may convert to a tau lepton and the LSP, which is typically the neutralino. In the paper titled above, the stau NLSP arises within the Gauge-Mediated Supersymmetry Breaking (GMSB) model and can be produced through ultra-high-energy neutrino interactions with nucleons with a not-so-tiny branching ratio. Of course, tension remains with the lack of any direct observation of staus that could fit reasonably within this scenario, but the prospect of observing effects of BSM physics without the expense of colliders is an exciting one.
But the attempts at new-physics explanations don't end there. Some ideas involve the decays of very heavy dark matter candidates in the center of the Milky Way. In a similar vein, another possibility comes from the well-motivated sterile neutrino, a BSM candidate to explain the small, non-zero mass of the neutrino. There are a number of explanations for a large flux of sterile neutrinos through the Earth, and since the rate at which they interact with matter is much more suppressed than that of the light “active” neutrinos, it could be hoped that they would make their passage through the Earth before converting back to a tau lepton near the ANITA detector.
Anomalies like these come and go, but in any case, physicists remain interested in alternate pathways to new physics, or even in motivation to search a more specific region with collider technology. And collecting more data first always helps!
Neutrinos are almost a lot of things. They are almost massless, a property that goes against the predictions of the Standard Model. Possessing this non-zero mass, they should travel at almost the speed of light, but not quite, in order to be consistent with the principles of special relativity. Yet each measurement of neutrino propagation speed returns a value that is, within experimental error, exactly the speed of light. Coupled only to the weak force, they are almost non-interacting, with 65 billion of them streaming from the sun through each square centimeter of Earth each second, almost undetected.
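The near-lightspeed claim follows directly from special relativity, and it is easy to see why experiments cannot resolve the difference. For a relativistic particle (using illustrative numbers: a mass below $0.1~\mathrm{eV}$ and a typical energy of $1~\mathrm{MeV}$),

$\frac{v}{c} = \sqrt{1 - \left(\frac{mc^2}{E}\right)^2} \approx 1 - \frac{1}{2}\left(\frac{mc^2}{E}\right)^2 \approx 1 - 5\times 10^{-15},$

a deviation from the speed of light many orders of magnitude below experimental precision.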
How do all of these pieces fit together? The story of the neutrino begins in 1930, when Wolfgang Pauli proposed an as-yet-undetected particle emitted during beta decay in order to explain an apparent violation of energy and momentum conservation. In 1956, neutrinos were confirmed to exist when antineutrinos captured on protons produced positrons, whose annihilation with electrons yielded pairs of gamma rays. Yet with this confirmation came an assortment of growing mysteries. In the decades that followed, a series of experiments found that there are three distinct flavors of neutrino, one corresponding to each type of lepton: the electron, muon, and tau. Subsequent measurements of propagating neutrinos then revealed a curious fact: these three flavors are anything but distinct. When the flavor of a neutrino is initially measured to be, say, electron, a second measurement of flavor after it has traveled some distance could return the answer of muon neutrino. Measure yet again, and you could find yourself a tau neutrino. This process, in which the probability of measuring a neutrino in one of the three flavor states varies as it propagates, is known as neutrino oscillation.
Neutrino oscillation threw a wrench into the Standard Model in terms of mass: it implies that the masses of the three neutrino states cannot all be equal to each other, and hence cannot all be zero. Specifically, at most one of them may be zero, with the remaining two non-zero and non-equal. While at first glance an oddity, oscillation arises naturally from the underlying mathematics, and we can arrive at this conclusion via a simple analysis. To think about a neutrino, we consider two kinds of eigenstates (the state a particle is in when it is measured to have a certain observable quantity), one corresponding to flavor and one corresponding to mass. Because neutrinos are created in weak interactions, which conserve flavor, they are initially in a flavor eigenstate. Flavor and mass eigenstates cannot be simultaneously determined, so each flavor eigenstate is a linear combination of mass eigenstates, and vice versa. Now consider the case of three flavors of neutrino. In accordance with special relativity, mass eigenstates with different masses travel at slightly different speeds, so the components of the superposition drift out of phase as the neutrino propagates, and the flavor content varies. If all of the masses were equal, there would be no relative phase, no varying superposition, and hence no oscillation. Since we experimentally observe oscillation between neutrino flavors, we can conclude that their masses cannot all be the same.
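The standard two-flavor example makes this explicit (a simplification of the full three-flavor case). A neutrino produced as flavor $\alpha$ is detected as flavor $\beta$ with probability

$P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right),$

where $\theta$ is the mixing angle, $L$ is the distance traveled, $E$ is the neutrino energy, and $\Delta m^2 = m_2^2 - m_1^2$. If the two masses were equal, then $\Delta m^2 = 0$ and the oscillation probability would vanish identically, which is exactly the argument above.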
Although this result was unexpected and provides the first known departure from the Standard Model, it is worth noting that it also neatly resolves a few outstanding experimental mysteries, such as the solar neutrino problem. Neutrinos in the sun are produced as electron neutrinos and are likely to interact with unbound electrons as they travel outward, transitioning them into a second mass state which can interact as any of the three flavors. By observing a solar neutrino flux roughly a third of its predicted value, physicists not only provided a potential answer to a previously unexplained phenomenon but also deduced that this second mass state must be heavier than the state initially produced. Related flux measurements of neutrinos produced during charged-particle interactions in the Earth's upper atmosphere, which are primarily muon neutrinos, reveal that the third mass state is quite different from the first two. This gives rise to two potential mass hierarchies: the normal ordering ($m_1 < m_2 < m_3$) and the inverted ordering ($m_3 < m_1 < m_2$).
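For reference, the measured splittings (approximate current global-fit values) are $\Delta m_{21}^2 \approx 7.5 \times 10^{-5}~\mathrm{eV}^2$ from solar neutrinos and $|\Delta m_{31}^2| \approx 2.5 \times 10^{-3}~\mathrm{eV}^2$ from atmospheric neutrinos; it is the unknown sign of $\Delta m_{31}^2$ that distinguishes the normal from the inverted ordering.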
However, this oscillation also means that it is difficult to discuss neutrino masses individually, as measuring the sum of the neutrino masses is currently easier from a technical standpoint. With current precision in cosmology, we cannot distinguish the three neutrinos at the epoch in which they become free-traveling, although this could change with increased precision. Future experiments in beta decay could also lead to progress in pinpointing individual masses, although current oscillation experiments are only sensitive to the mass-squared differences $\Delta m_{ij}^2 = m_i^2 - m_j^2$. Hence, we frame our models in terms of these mass splittings and the mass sum, which also makes it easier to incorporate cosmological data. Current models of neutrinos are phenomenological: not directly derived from theory, but consistent with both theoretical principles and experimental data. The mixing between states is mathematically described by the PMNS (Pontecorvo-Maki-Nakagawa-Sakata) matrix, which is parametrized by three mixing angles and a phase related to CP violation. These parameters, as in most phenomenological models, have to be inserted into the theory. There is usually a wide space of parameters in such models, and constraining this space requires input from a variety of sources. In the case of neutrinos, both particle physics experiments and cosmological data provide key avenues for exploration into these parameters. In a recent paper, Loureiro et al. used such a strategy, incorporating data from the large-scale structure of galaxies and the cosmic microwave background to provide new upper bounds on the sum of neutrino masses.
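Schematically, the PMNS matrix $U$ connects the two bases:

$|\nu_\alpha\rangle = \sum_{i=1}^{3} U^*_{\alpha i}\, |\nu_i\rangle, \qquad \alpha \in \{e, \mu, \tau\},$

with the three mixing angles $\theta_{12}$, $\theta_{23}$, $\theta_{13}$ and the CP-violating phase $\delta_{\mathrm{CP}}$ serving as the parameters of $U$.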
The group investigated two main classes of neutrino mass models: exact models and cosmological approximations. The former class concerns models that integrate results from neutrino oscillation experiments and are parametrized by the smallest neutrino mass, while the latter uses a scheme in which the neutrino mass sum is related to an effective number of neutrino species $N_{\nu,\mathrm{eff}}$ times an effective mass $m_{\nu,\mathrm{eff}}$ that is equal for each flavor. In the exact models, Gaussian priors (an initial best guess) were used, with data sampled from a number of experimental results and error bars, depending on the specifics of the model in question; the possibilities include fixing the mass splittings to their central values or assuming either a normal or inverted mass hierarchy. In the cosmological approximations, $N_{\nu,\mathrm{eff}}$ was fixed to a specific value depending on the particular cosmological model being studied, with the total mass sum sampled from data.
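For concreteness, in the exact models the mass sum can be written in terms of the smallest mass $m_0$ and the measured splittings; for the normal ordering this is the standard parametrization (the notation here is mine, not necessarily the paper's):

$\Sigma m_\nu = m_0 + \sqrt{m_0^2 + \Delta m_{21}^2} + \sqrt{m_0^2 + \Delta m_{31}^2},$

while in the cosmological approximations the sum is simply $\Sigma m_\nu = N_{\nu,\mathrm{eff}} \, m_{\nu,\mathrm{eff}}$, with the same effective mass assigned to every flavor.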
The group ultimately demonstrated that cosmologically-based models result in upper bounds for the mass sum that are much lower than those generated from physically-motivated exact models, as we can see in the figure above. One of the models studied resulted in an upper bound that is not only different from those determined from neutrino oscillation experiments, but inconsistent with known lower bounds. This puts us into the exciting territory that neutrinos have pushed us to again and again: a potential finding that goes against what we presently know. The calculated upper bound is also significantly different if one assumes that one of the neutrino masses is zero, with the mass sum contained in the remaining two neutrinos, setting the stage for future differentiation between the neutrino masses. Although the group did not find any statistically preferred model, they provide a framework for studying neutrinos with a considerable amount of cosmological data, using results of the Planck, BOSS, and SDSS collaborations, among many others. Ultimately, the only way to arrive at a robust answer to the question of neutrino mass is to consider all of these possible sources of information for verification.
With increased sensitivity in upcoming telescopes and a plethora of intriguing beta decay experiments on the horizon, we should be moving away from studies that update bounds and toward ones that make direct estimations. In these future experiments, previous analyses will prove vital in working toward an understanding of the underlying mass models, putting us one step closer to unraveling the enigma of the neutrino. While there are still many open questions concerning their properties (Why are their masses so small? Is the neutrino its own antiparticle? What governs the mass mechanism?), studies like these help to grow intuition and prepare for the next phases of discovery. I'm excited to see what unexpected results come next for this almost elusive particle.
In a recent paper [1], Chris Karwin et al. report an excess in gamma rays that seems to originate from our nearest large neighboring galaxy, Andromeda. The amount of excess is arguably insignificant, roughly 3% – 5%. However, the spatial location of this excess is what is interesting. The luminous region of Andromeda is roughly 70 kpc in diameter [2], but the gamma-ray excess is located roughly 120 – 200 kpc from the center. Below is an approximately-to-scale diagram that illustrates the spatial extent of the excess:
There may be many possible explanations and interpretations for this excess; one of the more exotic possibilities is dark matter annihilation. The dark matter paradigm says that galaxies form within “dark matter halos,” clouds of dark matter that attract the luminous matter that eventually forms a galaxy [3]. The dark matter halo encompassing Andromeda has a virial radius of about 200 kpc [4], well beyond the location of the gamma-ray excess. This means that there is dark matter at the location of the excess, but how do we start with dark matter and end up with photons?
Within the “light mediator” paradigm of dark matter particle physics, dark matter can annihilate with itself into massive dark photons, the dark-sector equivalent of the photon. These dark photons, by ansatz, can interact with the Standard Model and ultimately decay into photons. A schematic diagram of this process is provided below:
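Schematically, with $\chi$ denoting the dark matter particle and $A'$ the dark photon (generic labels, not necessarily the paper's notation), the chain is

$\chi \chi \to A' A', \qquad A' \to \text{SM particles} \to \gamma\text{'s},$

so dark matter annihilating in the outer halo can ultimately light up in gamma rays at exactly the displaced location of the excess.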
To recap,
There is an excess of high-energy (1-100 GeV) photons from Andromeda.
The location of this excess is displaced from the luminous matter within Andromeda.
This spatial displacement means that the cause of this excess is probably not the luminous matter within Andromeda.
We do know, however, that there is dark matter at the location of the excess.
It is possible for dark matter to yield high-energy photons, so this excess may be evidence of dark matter.