When a single experimental collaboration has a few thousand contributors (and even more opinions), there are a lot of rules. These rules dictate everything from how you get authorship rights to how you get chosen to give a conference talk. In fact, this rulebook is so thorough that it could be the topic of a whole other post. But for now, I want to focus on one rule in particular, a rule that has only been around for a few decades in particle physics but is considered one of the most important practices of good science: blinding.
In brief, blinding is the notion that it’s experimentally compromising for a scientist to look at the data before finalizing the analysis. As much as we like to think of ourselves as perfectly objective observers, the truth is, when we really really want a particular result (let’s say a SUSY discovery), that desire can bias our work. For instance, imagine you were looking at actual collision data while you were designing a signal region. You might unconsciously craft your selection in such a way to force an excess of data over background prediction. To avoid such human influences, particle physics experiments “blind” their analyses while they are under construction, and only look at the data once everything else is in place and validated.
This technique has kept the field of particle physics in rigorous shape for quite a while. But there’s always been a subtle downside to this practice. If we only ever look at the data after we finalize an analysis, we are trapped within the confines of theoretically motivated signatures. In this blinding paradigm, we’ll look at all the places that theory has shone a spotlight on, but we won’t look everywhere. Our whole game is to search for new physics. But what if amongst all our signal regions and hypothesis testing and neural net classifications… we’ve simply missed something?
It is this nagging question that motivates a specific method of combing the LHC datasets for new physics, one that the authors of this paper call a “structured, global and automated way to search for new physics.” With this proposal, we can let the data itself tell us where to look and throw unblinding caution to the winds.
The idea is simple: scan the whole ATLAS dataset for discrepancies, setting a threshold for what defines a feature as “interesting”. If this preliminary scan stumbles upon a mysterious excess of data over Standard Model background, don’t just run straight to Stockholm proclaiming a discovery. Instead, simply remember to look at this area again once more data is collected. If your feature of interest is a fluctuation, it will wash out and go away. If not, you can keep watching it until you collect enough statistics to do the running to Stockholm bit. Essentially, you let a first scan of the data rather than theory define your signal regions of interest. In fact, all the cool kids are doing it: H1, CDF, D0, and even ATLAS and CMS have performed earlier versions of this general search.
The nuts and bolts of this particular paper involve 3.2 fb-1 of 13 TeV LHC data collected in 2015. Since the whole goal of this strategy is to be as general as possible, we might as well go big or go home with potential topologies. To that end, the authors comb through all the data and select any event “involving high pT isolated leptons (electrons and muons), photons, jets, b-tagged jets and missing transverse momentum”. All of the backgrounds are simply modeled with Monte Carlo simulation.
Once we have all these events, we need to sort them. Here, “the classification includes all possible final state configurations and object multiplicities, e.g. if a data event with seven reconstructed muons is found it is classified in a ‘7-muon’ event class (7μ).” When you add up all the possible permutations of objects and multiplicities, you come up with a cool 639 event classes with at least 1 data event and a Standard Model expectation of at least 0.1.
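The bookkeeping behind this classification is simple to sketch. Below is a minimal Python illustration of sorting events into class labels by object multiplicity; the object names and label ordering are my own choices for illustration, not the paper’s internal conventions.

```python
from collections import Counter

def event_class(objects):
    """Build an event-class label (e.g. '7mu') from a list of reconstructed
    object types. Object names here (met, e, mu, g, b, j) are illustrative."""
    counts = Counter(objects)
    # Fixed ordering so the same final state always maps to the same label
    order = ["met", "e", "mu", "g", "b", "j"]
    return "".join(f"{counts[o]}{o}" for o in order if counts[o] > 0)

# A seven-muon event falls into the '7mu' class
print(event_class(["mu"] * 7))  # -> 7mu
# A more complex final state
print(event_class(["met", "mu", "mu", "g", "j", "j", "j", "j"]))  # -> 1met2mu1g4j
```

Every distinct combination of objects and multiplicities defines its own class, which is how the count balloons into hundreds of categories.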
From here, it’s just a matter of checking data vs. MC agreement and the pulls for each event class. The authors also apply some measures to weed out the low-statistics or otherwise sketchy regions; for instance, 1 electron + many jets is more likely to be a multijet event faking a lepton and shouldn’t necessarily be considered as a good event category. Once this logic is applied, you can plot all of your SRs together grouped by category; Figure 2 shows an example for the multijet events. The paper includes 10 of these plots in total, with regions ranging in complexity from nothing but 1μ1j to more complicated final states like ETmiss2μ1γ4j (say that five times fast.)
Once we can see data next to Standard Model prediction for all these categories, it’s necessary to have a way to measure just how unusual an excess may be. The authors of this paper implement an algorithm that searches for the region of largest deviation in the distributions of two variables that are good at discriminating background from new physics: the effective mass (the scalar sum of all jet momenta plus the missing transverse momentum) and the invariant mass (computed from all visible objects, with no missing energy).
For each deviation found, a simple likelihood function is built as the convolution of probability density functions (pdfs): one Poissonian pdf to describe the event yields, and Gaussian pdfs for each systematic uncertainty. The integral of this function gives p0, the probability that the Standard Model expectation fluctuated up to (or beyond) the observed yield. This p0 value is an industry standard in particle physics: a value of p0 < 3e-7 is our threshold for discovery.
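A rough sketch of this kind of p0 calculation: compute the Poisson tail probability for the observed yield, with a Gaussian systematic uncertainty on the background marginalized by simple Monte Carlo averaging. This illustrates the idea only, not the paper’s exact likelihood machinery, and all numbers below are made up.

```python
import math
import random

def poisson_tail(n_obs, b):
    """P(N >= n_obs) for a Poisson distribution with mean b."""
    return 1.0 - sum(math.exp(-b) * b**k / math.factorial(k) for k in range(n_obs))

def p0_with_systematic(n_obs, b, sigma_b, n_toys=20000, seed=1):
    """Average the Poisson tail probability over Gaussian-fluctuated
    background means -- a toy version of folding in a systematic."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_toys):
        b_fluct = max(rng.gauss(b, sigma_b), 1e-9)  # truncate at zero
        total += poisson_tail(n_obs, b_fluct)
    return total / n_toys

# With no systematic: 10 events observed on a background of 4 expected
print(round(poisson_tail(10, 4.0), 4))  # -> 0.0081
# Folding in a 25% background uncertainty inflates the p-value
print(p0_with_systematic(10, 4.0, 1.0))
```

The systematic uncertainty always loosens the tail probability here, which is why analyses with large background uncertainties need bigger excesses to claim significance.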
Sadly (or reassuringly), the smallest p0 value found in this scan is 3e-04 (in the 1m1e4b2j event class). To figure out precisely how significant this value is, the authors ran a series of pseudoexperiments for each event class and applied the same scanning algorithm to them, to determine how often such a deviation would occur in a wholly different fake dataset. In fact, a p0 of 3e-04 or smaller was expected in 70% of the pseudoexperiments.
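The pseudoexperiment logic can be sketched as follows: throw Poisson fluctuations around the Standard Model expectation in every event class, record the smallest local p0 found anywhere, and count how often it is at least as extreme as the one observed. The class count and expectations below are invented for illustration, not taken from the paper.

```python
import math
import random

def min_p0_pseudoexperiments(expectations, p0_obs, n_pe=2000, seed=7):
    """Fraction of pseudoexperiments in which SOME event class fluctuates
    to a local p0 <= p0_obs. 'expectations' are illustrative SM yields."""
    rng = random.Random(seed)

    def poisson_tail(n, b):
        return 1.0 - sum(math.exp(-b) * b**k / math.factorial(k) for k in range(n))

    def sample_poisson(b):
        # Knuth's algorithm; fine for the small means used here
        l, k, p = math.exp(-b), 0, 1.0
        while True:
            p *= rng.random()
            if p <= l:
                return k
            k += 1

    hits = 0
    for _ in range(n_pe):
        min_p0 = min(poisson_tail(sample_poisson(b), b) for b in expectations)
        if min_p0 <= p0_obs:
            hits += 1
    return hits / n_pe

# Toy setup: 50 classes, each expecting 2.0 background events. How often
# does some class hit a local p0 of 3e-4 or smaller just by chance?
print(min_p0_pseudoexperiments([2.0] * 50, 3e-4))
```

With hundreds of classes in the real analysis, a "3-sigma-ish" local fluctuation somewhere becomes almost inevitable, which is exactly the look-elsewhere effect the pseudoexperiments quantify.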
So the excesses that were observed are not (so far) significant enough to focus on. But the beauty of this analysis strategy is that this deviation can be easily followed up with the addition of a newer dataset. Think of these general searches as the sidekicks of the superheroes that are our flagship SUSY, exotics, and dark matter searches. They can help us dot i’s and cross t’s, make sure nothing falls through the cracks, and eventually, just maybe, make a discovery.
In lieu of a typical HEP paper summary this month, I’m linking a comprehensive overview of the new results shown at this year’s Moriond conference, originally published in the CERN EP Department Newsletter. Since this includes the latest and greatest from all four experiments on the LHC ring (ATLAS, CMS, ALICE, and LHCb), you can take it as a sort of “state-of-the-field”. Here is a sneak preview:
“Every March, particle physicists around the world take two weeks to promote results, share opinions and do a bit of skiing in between. This is the Moriond tradition and the 52nd iteration of the conference took place this year in La Thuile, Italy. Each of the four main experiments on the LHC ring presented a variety of new and exciting results, providing an overview of the current state of the field, while shaping the discussion for future efforts.”
Read more in my article for the CERN EP Department Newsletter here!
March is an exciting month for high energy physicists. Every year at this time, scientists from all over the world gather for the annual Moriond Conference, where all of the latest results are shown and discussed. Now that this physics Christmas season is over, I, like many other physicists, am sifting through the proceedings, trying to get a hint of what is the new cool physics to be chasing after. My conclusions? The Higgsino search is high on this list.
The search for Higgsinos falls under the broad and complex umbrella of searches for supersymmetry (SUSY). We’ve talked about SUSY on Particlebites in the past; see a recent post on the stop search for reference. Recall that the basic prediction of SUSY is that every boson in the Standard Model has a fermionic supersymmetric partner, and every fermion gets a bosonic partner.
So then what exactly is a Higgsino? The naming convention of SUSY would indicate that the –ino suffix means that a Higgsino is the supersymmetric partner of the Higgs boson. This is partly true, but the whole story is a bit more complicated, and requires some understanding of the Higgs mechanism.
To summarize: in our Standard Model, the photon carries the electromagnetic force, and the W and Z carry the weak force. But before electroweak symmetry breaking, these bosons did not have such distinct tasks. Rather, there were the massless B and W bosons along with the Higgs, which together carried the electroweak force. It is the supersymmetric partners of these bosons (the bino, winos, and higgsinos) that mix to form new mass eigenstates, which we simply call charginos or neutralinos, depending on their charge. When we search for new particles, we are searching for these mass eigenstates, and then interpreting our results in the context of the electroweak-inos.
SUSY searches can be broken into many different analyses, each targeting a particular particle or group of particles in this new sector. Starting with the particles that are suspected to have low mass is a good idea, since we’re more likely to observe these at the current LHC collision energy. If we begin with these light particles, and add in the popular theory of naturalness, we conclude that Higgsinos will be the easiest to find of all the new SUSY particles. More specifically, the theory predicts three Higgsinos that mix into two neutralinos and a chargino, each with a mass around 200-300 GeV, but with a very small mass splitting between the three. See Figure 1 for a sample mass spectrum of all these particles, where N and C indicate neutralino or chargino respectively (keep in mind this is just one possibility; in principle, any bino/wino/higgsino mass hierarchy is allowed).
This is both good news and bad news. The good part is that we have reason to think that there are three Higgsinos with masses that are well within our reach at the LHC. The bad news is that this mass spectrum is very compressed, making the Higgsinos extremely difficult to detect experimentally. This is due to the fact that when C1 or N2 decays to N1 (the lightest neutralino), there is very little mass difference leftover to supply energy to the decay products. As a result, all of the final state objects (two N1s plus a W or a Z as a byproduct, see Figure 2) will have very low momentum and thus are very difficult to detect.
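To see why the compression matters, consider the momentum available to the daughters of a two-body decay. The standard rest-frame formula below shows how shrinking the N2–N1 splitting collapses the daughter momenta; the masses used are illustrative choices, not measured values.

```python
import math

def two_body_momentum(M, m1, m2):
    """Momentum of either daughter in the rest frame of a particle of
    mass M decaying to daughters of masses m1 and m2 (all in GeV)."""
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

# Compressed spectrum: N2 at 230 GeV, N1 at 210 GeV, off-shell Z* taken at 5 GeV
print(round(two_body_momentum(230.0, 210.0, 5.0), 1))  # -> 18.5
# For comparison, an uncompressed case with N1 at 100 GeV
print(round(two_body_momentum(230.0, 100.0, 5.0), 1))  # -> 93.2
```

With only ~20 GeV of splitting, everything downstream of the decay inherits soft momenta, which is exactly why these final states are hard to trigger on and reconstruct.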
The CMS collaboration Higgsino analysis documented here uses a clever analysis strategy for such compressed decay scenarios. Since initial state radiation (ISR) jets occur often in proton-proton collisions, you can ask for your event to have one. This jet radiating from the collision will give the system a kick in the opposite direction, providing enough energy to those soft particles for them to be detectable. At the end of the day, the analysis team looks for events with ISR, missing transverse energy (MET), and two soft opposite sign leptons from the Z decay (to distinguish from hadronic SM-like backgrounds). Figure 3 shows a basic diagram of what these signal events would look like.
In order to conduct this search, several new analysis techniques were employed. Reconstruction of leptons at low pT becomes extremely important in this regime, and the standard cone isolation of the lepton and impact parameter cuts are used to ensure proper lepton identification. New discriminating variables are also added, which exploit kinematic information about the lepton and the soft particles around it, in order to distinguish “prompt” (signal) leptons from those that may have come from a jet and are thus “non prompt” (background.)
In addition, the analysis team paid special attention to the triggers that could be used to select signal events from the immense number of collisions, creating a new “compressed” trigger that uses combined information from both soft muons (pT > 5 GeV) and missing energy ( > 125 GeV).
With all of this effort, the group is able to probe down to a mass splitting between Higgsinos of 20 GeV, excluding N2 masses up to 230 GeV. This is an especially remarkable result because the current strongest limit on Higgsinos comes from the LEP experiment, a result that is over ten years old! Because the Higgsino searches are strongly limited by the low cross section of electroweak SUSY, additional data will certainly mean that these searches will proceed quickly, and more stringent bounds will be placed (or, perhaps, a discovery is in store!)
Article: Improved search for a light sterile neutrino with the full configuration of the Daya Bay Experiment Authors: Daya Bay Collaboration Reference: arXiv:1607.01174
Today I bring you news from the Daya Bay reactor neutrino experiment, which detects neutrinos emitted by three nuclear power plants on the southern coast of China. The results in this paper are based on the first 621 days of data, through November 2013; more data remain to be analyzed, and we can expect a final result after the experiment ends in 2017.
For more on sterile neutrinos, see also this recent post by Eve.
Neutrinos exist in three flavors, each corresponding to one of the charged leptons: electron neutrinos (νe), muon neutrinos (νμ) and tau neutrinos (ντ). When a neutrino is born via the weak interaction, it is created in a particular flavor eigenstate. So, for example, a neutrino born in the sun is always an electron neutrino. However, the electron neutrino does not have a definite mass. Instead, each flavor eigenstate is a linear combination of the three mass eigenstates. This “mixing” of the flavor and mass eigenstates is described by the PMNS matrix, as shown in Figure 2.
The PMNS matrix can be parameterized by 4 numbers: three mixing angles (θ12, θ23 and θ13) and a phase (δ).1 These parameters aren’t known a priori — they must be measured by experiments.
Solar neutrinos stream outward in all directions from their birthplace in the sun. Some intercept Earth, where human-built neutrino observatories can inventory their flavors. After traveling 150 million kilometers, only ⅓ of them register as electron neutrinos — the other ⅔ have transformed along the way into muon or tau neutrinos. These neutrino flavor oscillations are the experimental signature of neutrino mixing, and the means by which we can tease out the values of the PMNS parameters. In any specific situation, the probability of measuring each type of neutrino is described by some experiment-specific parameters (the neutrino energy, distance from the source, and initial neutrino flavor) and some fundamental parameters of the theory (the PMNS mixing parameters and the neutrino mass-squared differences). By doing a variety of measurements with different neutrino sources and different source-to-detector (“baseline”) distances, we can attempt to constrain or measure the individual theory parameters. This has been a major focus of the worldwide experimental neutrino program for the past 15 years.
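In the simplest two-flavor approximation, the survival probability has a compact closed form that already shows how the baseline and energy enter. Here is a quick sketch; the parameter values are only ballpark figures typical of reactor experiments, not exact measured values.

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor (anti)neutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km, and E in GeV."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# At the source (L = 0) nothing has oscillated yet
print(survival_probability(0.085, 2.5e-3, 0.0, 0.004))  # -> 1.0
# At a reactor far hall (~1.7 km, ~4 MeV), a few-percent deficit appears
print(survival_probability(0.085, 2.5e-3, 1.7, 0.004))
```

Choosing the baseline so that the oscillatory term is near its maximum for typical reactor energies is precisely the design consideration mentioned for the far experimental hall.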
1 This assumes the neutrino is a Dirac particle. If the neutrino is a Majorana particle, there are two more phases, for a total of 6 parameters in the PMNS matrix.
Many neutrino experiments have confirmed our model of neutrino oscillations and the existence of three neutrino flavors. However, some experiments have observed anomalous signals which could be explained by the presence of a fourth neutrino. This proposed “sterile” neutrino doesn’t have a charged lepton partner (and therefore doesn’t participate in weak interactions) but does mix with the other neutrino flavors.
The discovery of a new type of particle would be tremendously exciting, and neutrino experiments all over the world (including Daya Bay) have been checking their data for any sign of sterile neutrinos.
Neutrinos from reactors
Nuclear reactors are a powerful source of electron antineutrinos. To see why, take a look at this zoomed out version of the chart of the nuclides. The chart of the nuclides is a nuclear physicist’s version of the periodic table. For a chemist, Hydrogen-1 (a single proton), Hydrogen-2 (one proton and one neutron) and Hydrogen-3 (one proton and two neutrons) are essentially the same thing, because chemical bonds are electromagnetic and every hydrogen nucleus has the same electric charge. In the realm of nuclear physics, however, the number of neutrons is just as important as the number of protons. Thus, while the periodic table has a single box for each chemical element, the chart of the nuclides has a separate entry for every combination of protons and neutrons (“nuclide”) that has ever been observed in nature or created in a laboratory.
The black squares are stable nuclei. You can see that stability only occurs when the ratio of neutrons to protons is just right. Furthermore, unstable nuclides tend to decay in such a way that the daughter nuclide is closer to the line of stability than the parent.
Nuclear power plants generate electricity by harnessing the energy released by the fission of Uranium-235. Each U-235 nucleus contains 143 neutrons and 92 protons (n/p ≈ 1.6). When U-235 undergoes fission, the resulting fragments also have n/p ~ 1.6, because the overall number of neutrons and protons is still the same. Thus, fission products tend to lie along the white dashed line in Figure 3, which falls above the line of stability. These nuclides have too many neutrons to be stable, and therefore undergo beta decay: n → p + e⁻ + ν̄e. A typical power reactor emits 6 × 10^20 ν̄e per second.
The Daya Bay experiment
The Daya Bay nuclear power complex is located on the southern coast of China, 55 km northeast of Hong Kong. With six reactor cores, it is one of the most powerful reactor complexes in the world — and therefore an excellent source of electron antineutrinos. The Daya Bay experiment consists of 8 identical antineutrino detectors in 3 underground halls. One experimental hall is located as close as possible to the Daya Bay nuclear power plant; the second is near the two Ling Ao power plants; the third is located 1.5 – 1.9 km away from all three pairs of reactors, a distance chosen to optimize Daya Bay’s sensitivity to the mixing angle θ13.
The neutrino target at the heart of each detector is a cylindrical vessel filled with 20 tons of Gadolinium-doped liquid scintillator. The vast majority of ν̄e pass through undetected, but occasionally one will undergo inverse beta decay in the target volume, interacting with a proton to produce a positron and a neutron: ν̄e + p → e⁺ + n.
The positron and neutron create signals in the detector with a characteristic time relationship, as shown in Figure 6. The positron immediately deposits its energy in the scintillator and then annihilates with an electron. This all happens within a few nanoseconds and causes a prompt flash of scintillation light. The neutron, meanwhile, spends some tens of microseconds bouncing around (“thermalizing”) until it is slow enough to be captured by a Gadolinium nucleus. When this happens, the nucleus emits a cascade of gamma rays, which in turn interact with the scintillator and produce a second flash of light. This combination of prompt and delayed signals is used to identify ν̄e interaction events.
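The prompt–delayed coincidence selection can be sketched as a simple pairing of hits in time and energy. The windows below are illustrative stand-ins, not Daya Bay’s actual selection cuts.

```python
def find_ibd_candidates(hits, t_min_us=1.0, t_max_us=200.0,
                        prompt_window=(0.7, 12.0), delayed_window=(6.0, 12.0)):
    """Pair a prompt positron-like signal with a delayed neutron-capture-like
    signal. 'hits' is a time-ordered list of (time_us, energy_MeV) tuples.
    Returns the (t_prompt, t_delayed) pairs that pass the coincidence cuts."""
    candidates = []
    for i, (t1, e1) in enumerate(hits):
        if not (prompt_window[0] <= e1 <= prompt_window[1]):
            continue
        for t2, e2 in hits[i + 1:]:
            dt = t2 - t1
            if dt > t_max_us:
                break  # hits are time-ordered, so no later pair can qualify
            if dt >= t_min_us and delayed_window[0] <= e2 <= delayed_window[1]:
                candidates.append((t1, t2))
    return candidates

# One good pair (3 MeV prompt flash, then a ~8 MeV Gd-capture cascade 30 us
# later) plus an uncorrelated low-energy hit that is ignored
hits = [(0.0, 3.0), (30.0, 8.0), (5000.0, 0.2)]
print(find_ibd_candidates(hits))  # -> [(0.0, 30.0)]
```

Requiring both flashes with the right energies and the right time separation is what suppresses the sea of single-flash backgrounds.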
Daya Bay’s search for sterile neutrinos
Daya Bay is a neutrino disappearance experiment. The electron antineutrinos emitted by the reactors can oscillate into muon or tau antineutrinos as they travel, but the detectors are only sensitive to ν̄e, because the antineutrinos have enough energy to produce a positron but not the more massive μ⁺ or τ⁺. Thus, Daya Bay observes neutrino oscillations by measuring fewer ν̄e than would be expected otherwise.
Based on the number of ν̄e detected at one of the Daya Bay experimental halls, the usual three-neutrino oscillation theory can predict the number that will be seen at the other two experimental halls (EH). You can see how this plays out in Figure 7. We are looking at the neutrino energy spectrum measured at EH2 and EH3, divided by the prediction computed from the EH1 data. The gray shaded regions mark the one-standard-deviation uncertainty bounds of the predictions. If the black data points deviated significantly from the shaded region, that would be a sign that the three-neutrino oscillation model is not complete, possibly due to the presence of sterile neutrinos. However, in this case, the black data points are statistically consistent with the prediction. In other words, Daya Bay sees no evidence for sterile neutrinos.
Does that mean sterile neutrinos don’t exist? Not necessarily. For one thing, the effect of a sterile neutrino on the Daya Bay results would depend on the sterile neutrino mass and mixing parameters. The blue and red dashed lines in Figure 7 show the sterile neutrino prediction for two specific choices of the sterile neutrino mass splitting and mixing angle; these two examples look quite different from the three-neutrino prediction and can be ruled out because they don’t match the data. However, there are other parameter choices for which the presence of a sterile neutrino wouldn’t have a discernible effect on the Daya Bay measurements. Thus, Daya Bay can constrain the parameter space, but can’t rule out sterile neutrinos completely. However, as more and more experiments report “no sign of sterile neutrinos here,” it appears less and less likely that they exist.
Article: Search for magnetic monopoles with the MoEDAL prototype trapping detector in 8 TeV proton-proton collisions at the LHC Authors: The MoEDAL Collaboration Reference: arXiv:1604.06645v4 [hep-ex]
Somewhere in a tiny corner of the massive LHC cavern, nestled next to the veteran LHCb detector, a new experiment is coming to life.
The Monopole & Exotics Detector at the LHC, nicknamed the MoEDAL experiment, recently published its first ever results on the search for magnetic monopoles and other highly ionizing new particles. The data collected for this result is from the 2012 run of the LHC, when the MoEDAL detector was still a prototype. But it’s still enough to achieve the best limit to date on the magnetic monopole mass.
Magnetic monopoles are a very appealing idea. From basic electromagnetism, Maxwell’s equations are symmetric under the exchange of electric and magnetic fields (a duality), provided magnetic charges exist. Furthermore, Dirac showed that a magnetic monopole is not inconsistent with quantum electrodynamics (although one does not appear naturally). The only problem is that in the history of scientific experimentation, we’ve never actually seen one. We know that if we break a magnet in half, we will get two new magnets, each with its own North and South pole (see Figure 1).
This is proving to be a thorn in the side of many physicists. Finding a magnetic monopole would be great from a theoretical standpoint. Many Grand Unified Theories predict monopoles as a natural byproduct of symmetry breaking in the early universe; in fact, they predict monopoles so copiously that their apparent absence is known as the “monopole problem” (a puzzle that cosmological inflation was partly invented to solve). There have been occasional blips of evidence for monopoles in the past (such as a single event in a detector), but nothing has been reproducible to date.
Enter MoEDAL (Figure 2). It is the seventh addition to the LHC family, having been approved in 2010. If the monopole is a fundamental particle, it will be produced in proton-proton collisions. It is also expected to be very massive and long-lived. MoEDAL is designed to search for such a particle with a three-subdetector system.
The Nuclear Track Detector is composed of plastics that are damaged when a charged particle passes through them. The size and shape of the damage can then be observed with an optical microscope. Next is the TimePix Radiation Monitor system, a pixel detector which absorbs charge deposits induced by ionizing radiation. The newest addition is the Trapping Detector system, which is simply a large aluminum volume that can trap a passing monopole, thanks to aluminum’s large nuclear magnetic moment.
The collaboration collected data using these distinct technologies in 2012, and studied the resulting materials and signals. The ultimate limit in the paper excludes spin-0 and spin-1/2 monopoles with masses between 100 GeV and 3500 GeV, and a magnetic charge > 0.5gD (the Dirac magnetic charge). See Figures 3 and 4 for the exclusion curves. It’s worth noting that this mass reach extends well beyond the mass of any fundamental particle we know of to date, so this is a pretty stringent result.
As for moving forward, we’ve only talked about monopoles here, but the physics programme for MoEDAL is vast. Since the detector technology is fairly broad-based, it is possible to find anything from SUSY to Universal Extra Dimensions to doubly charged particles. Furthermore, this paper uses only LHC data from September to December of 2012, which is not a whole lot. In fact, we’ve collected over 25x that much data in this year’s run alone (although this detector was not in use this year.) More data means better statistics and more extensive limits, so this is definitely a measurement that will be greatly improved in future runs. A new version of the detector was installed in 2015, and we can expect to see new results within the next few years.
Article: Search for TeV-scale gravity signatures in high-mass final states with leptons and jets with the ATLAS detector at sqrt(s)=13 TeV Authors: The ATLAS Collaboration Reference: arXiv:1606.02265 [hep-ex]
What would gravity look like if we lived in a 6-dimensional space-time? Models of TeV-scale gravity theorize that the fundamental scale of gravity, MD, is much lower than what’s measured here in our normal, 4-dimensional space-time. If true, this could explain the large difference between the scale of electroweak interactions (order of 100 GeV) and gravity (order of 10^16 GeV), an important open question in particle physics. There are several theoretical models to describe these extra dimensions, and they all predict interesting new signatures in the form of non-perturbative gravitational states. One of the coolest examples of such a state is microscopic black holes. Conveniently, this particular signature could be produced and measured at the LHC!
Sounds cool, but how do you actually look for microscopic black holes with a proton-proton collider? Because we don’t have a full theory of quantum gravity (yet), ATLAS researchers made predictions for the production cross-sections of these black holes using semi-classical approximations that are valid when the black hole mass is above MD. This production cross-section is also expected to be dramatically larger when the energy scale of the interactions (pp collisions) surpasses MD. We can’t directly detect black holes with ATLAS, but many of the decay channels of these black holes include leptons in the final state, which IS something that can be measured at ATLAS! This particular ATLAS search looked for final states with at least 3 high transverse momentum (pT) objects, at least one of which must be a lepton (electron or muon); the others can be jets or additional leptons. The scalar sum of the transverse momenta is used as a discriminating variable, since the signal is expected to appear only at high pT.
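The object-counting part of such a selection is easy to sketch. Here is a toy version: require several hard objects, at least one of them a lepton, and return the scalar pT sum used as the discriminating variable. The pT threshold and object labels are illustrative choices, not the analysis cuts.

```python
def passes_selection(objects, min_objects=3, min_pt=100.0):
    """'objects' is a list of (kind, pt_GeV) tuples. Require at least
    'min_objects' objects above the pt threshold, with at least one an
    electron or muon. Returns (passes, sum_pt), where sum_pt is the
    scalar sum of the hard objects' transverse momenta."""
    hard = [(kind, pt) for kind, pt in objects if pt >= min_pt]
    has_lepton = any(kind in ("e", "mu") for kind, _ in hard)
    sum_pt = sum(pt for _, pt in hard)
    return (len(hard) >= min_objects and has_lepton, sum_pt)

# A candidate event: one hard electron plus two hard jets (the 40 GeV jet
# falls below the illustrative threshold and is ignored)
print(passes_selection([("e", 250.0), ("j", 400.0), ("j", 300.0), ("j", 40.0)]))
# -> (True, 950.0)
```

A black hole decaying democratically to many hard objects would pile up at large values of this sum, well above the steeply falling Standard Model backgrounds.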
This search used the full 3.2 fb-1 of 13 TeV data collected by ATLAS in 2015 to search for this signal above relevant Standard Model backgrounds (Z+jets, W+jets, and ttbar, all of which produce similar jet final states). The results are shown in Figure 1 (electron and muon channels are presented separately). The backgrounds are shown as colored histograms, the data as black points, and two microscopic black hole models as green and blue lines. There is a slight excess in the 3 TeV region in the electron channel, which corresponds to a p-value of only 1% when tested against the background-only hypothesis. Unfortunately, this isn’t enough evidence to indicate new physics yet, but it’s an exciting result nonetheless! This analysis was also used to improve exclusion limits on individual extra-dimensional gravity models, as shown in Figure 2. All limits were much stronger than those set in Run 1.
So: no evidence of microscopic black holes or extra-dimensional gravity at the LHC yet, but there is a promising excess and Run 2 has only just begun. Since publication, ATLAS has collected another 10 fb-1 of sqrt(s) = 13 TeV data that has yet to be analyzed. These results could also be used to constrain other Beyond the Standard Model searches at the TeV scale that have similar high-pT lepton-plus-jets final states, which would give us more information about what can and can’t exist outside of the Standard Model. There is certainly more to be learned from this search!
Article: Particle Physics Models for the 17 MeV Anomaly in Beryllium Nuclear Decays Authors: J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait, F. Tanedo Reference: arXiv:1608.03591 (Submitted to Phys. Rev. D)
See also this Latin American Webinar on Physics recorded talk.
Also featuring the results from:
— Gulyás et al., “A pair spectrometer for measuring multipolarities of energetic nuclear transitions” (description of detector; 1504.00489; NIM)
— Krasznahorkay et al., “Observation of Anomalous Internal Pair Creation in 8Be: A Possible Indication of a Light, Neutral Boson” (experimental result; 1504.01527; PRL version; note PRL version differs from arXiv)
— Feng et al., “Protophobic Fifth-Force Interpretation of the Observed Anomaly in 8Be Nuclear Transitions” (phenomenology; 1604.07411; PRL)
Editor’s note: the author is a co-author of the paper being highlighted.
Recently there’s some press (see links below) regarding early hints of a new particle observed in a nuclear physics experiment. In this bite, we’ll summarize the result that has raised the eyebrows of some physicists, and the hackles of others.
A crash course on nuclear physics
Nuclei are bound states of protons and neutrons. They can have excited states analogous to the excited states of atoms, which are bound states of nuclei and electrons. The particular nucleus of interest is beryllium-8, which has four neutrons and four protons, which you may know from the triple alpha process. There are three nuclear states to be aware of: the ground state, the 18.15 MeV excited state, and the 17.64 MeV excited state.
Most of the time the excited states fall apart into a lithium-7 nucleus and a proton. But sometimes, these excited states decay into the beryllium-8 ground state by emitting a photon (γ-ray). Even more rarely, these states can decay to the ground state by emitting an electron–positron pair from a virtual photon: this is called internal pair creation and it is these events that exhibit an anomaly.
The beryllium-8 anomaly
Physicists at the Atomki nuclear physics institute in Hungary were studying the nuclear decays of excited beryllium-8 nuclei. The team, led by Attila J. Krasznahorkay, produced beryllium excited states by bombarding a lithium-7 nucleus with protons.
The proton beam is tuned to very specific energies so that one can ‘tickle’ specific beryllium excited states. When the protons have around 1.03 MeV of kinetic energy, they excite lithium into the 18.15 MeV beryllium state. This has two important features:
Picking the proton energy allows one to only produce a specific excited state so one doesn’t have to worry about contamination from decays of other excited states.
Because the 18.15 MeV beryllium nucleus is produced at resonance, one has a very high yield of these excited states. This is very good when looking for very rare decay processes like internal pair creation.
What one expects is that most of the electron–positron pairs have a small opening angle, with the number of pairs decreasing smoothly at larger opening angles.
Instead, the Atomki team found an excess of events with large electron–positron opening angle. In fact, even more intriguing: the excess occurs around a particular opening angle (140 degrees) and forms a bump.
Here’s why a bump is particularly interesting:
The distribution of ordinary internal pair creation events is smoothly decreasing and so this is very unlikely to produce a bump.
Bumps can be signs of new particles: if there is a new, light particle that can facilitate the decay, one would expect a bump at an opening angle that depends on the new particle mass.
Schematically, the new particle interpretation looks like this:
As an exercise for those with a background in special relativity, one can use the invariant mass of the pair, m_X² = (p₊ + p₋)², to show that in the relativistic limit m_X² ≈ 2E₊E₋(1 − cos θ). This relates the mass of the proposed new particle, X, to the opening angle θ and the energies E₊ and E₋ of the electron and positron. The opening angle bump would then be interpreted as a new particle with mass of roughly 17 MeV. To match the observed number of anomalous events, the rate at which the excited beryllium decays via the X boson must be about 6×10⁻⁶ times the rate at which it goes into a γ-ray.
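As a quick numerical sanity check, one can plug the reported opening angle into the relativistic invariant-mass relation. The sketch below assumes, purely for illustration, that the ~18 MeV transition energy is shared symmetrically between the electron and positron (the real fit uses the full kinematics):

```python
import math

def pair_invariant_mass(e_plus, e_minus, theta_deg):
    """Invariant mass (MeV) of an e+e- pair in the relativistic limit,
    m_X^2 ~= 2 E+ E- (1 - cos theta), neglecting the electron mass."""
    theta = math.radians(theta_deg)
    m_sq = 2.0 * e_plus * e_minus * (1.0 - math.cos(theta))
    return math.sqrt(m_sq)

# Illustrative assumption: split ~18 MeV symmetrically between the leptons.
m_x = pair_invariant_mass(9.0, 9.0, 140.0)
print(f"{m_x:.1f} MeV")  # lands close to the ~17 MeV best-fit mass
```

The fact that a 140-degree bump maps onto roughly 17 MeV under such a crude assumption is what makes the new-particle reading of the data so tidy.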
The anomaly has a significance of 6.8σ. This means that it’s highly unlikely to be a statistical fluctuation, as the 750 GeV diphoton bump appears to have been. Indeed, the conservative bet would be some not-understood systematic effect, akin to the 130 GeV Fermi γ-ray line.
The beryllium that cried wolf?
Some physicists are concerned that beryllium may be the ‘boy that cried wolf,’ and point to papers by the late Fokke de Boer as early as 1996 and all the way to 2001. de Boer made strong claims about evidence for a new 10 MeV particle in the internal pair creation decays of the 17.64 MeV beryllium-8 excited state. These claims didn’t pan out, and in fact the instrumentation paper by the Atomki experiment rules out that original anomaly.
The proposed evidence for “de Boeron” is shown below:
When the Atomki group studied the same 17.64 MeV transition, they found that including a background component neglected in the original de Boer analysis—subdominant E1 decays from nearby excited states—dramatically improved the fit. This is the last nail in the coffin for the proposed 10 MeV “de Boeron.”
However, the Atomki group also highlights how their new anomaly in the 18.15 MeV state behaves differently. Unlike the broad excess in the de Boer result, the new excess is concentrated in a bump. There is no known way for additional internal pair creation backgrounds to produce a bump in the opening angle distribution; as noted above, all of these distributions are smoothly falling.
The Atomki group goes on to suggest that the new particle appears to fit the bill for a dark photon, a reasonably well-motivated copy of the ordinary photon that differs in its overall interaction strength and in having a non-zero (17 MeV?) mass.
Theory part 1: Not a dark photon
Once the Atomki result was published and peer reviewed in Physical Review Letters, the game was afoot for theorists to understand how it would fit into a theoretical framework like the dark photon. A group from UC Irvine, University of Kentucky, and UC Riverside found that actually, dark photons have a hard time fitting the anomaly simultaneously with other experimental constraints. In the visual language of this recent ParticleBite, the situation was this:
The main reason for this is that a dark photon with mass and interaction strength to fit the beryllium anomaly would necessarily have been seen by the NA48/2 experiment. This experiment looks for dark photons in the decay of neutral pions (π0). These pions typically decay into two photons, but if there’s a 17 MeV dark photon around, some fraction of those decays would go into dark-photon — ordinary-photon pairs. The non-observation of these unique decays rules out the dark photon interpretation.
The theorists then decided to “break” the dark photon theory in order to try to make it fit. They generalized the types of interactions that a new photon-like particle, X, could have, allowing protons, for example, to have completely different charges than electrons rather than having exactly opposite charges. Doing this does gross violence to the theoretical consistency of a theory—but the goal was just to see what a new particle interpretation would have to look like. They found that if a new photon-like particle talked to neutrons but not protons—that is, if the new force were protophobic—then a theory might hold together.
Theory appendix: pion-phobia is protophobia
Editor’s note: what follows is for readers with some physics background interested in a technical detail; others may skip this section.
How does a new particle that is allergic to protons avoid the neutral pion decay bounds from NA48/2? Pions decay into pairs of photons through the well-known triangle diagrams of the axial anomaly. The decay into photon–dark-photon pairs proceeds through similar diagrams. The goal is then to make sure that these diagrams cancel.
A cute way to look at this is to assume that at low energies, the relevant particles running in the loop aren’t quarks, but rather nucleons (protons and neutrons). In fact, since only the proton can talk to the photon, one only needs to consider proton loops. Thus if the new photon-like particle, X, doesn’t talk to protons, then there’s no diagram for the pion to decay into γX. This would be great if the story weren’t completely wrong.
The correct way of seeing this is to treat the pion as a quantum superposition of an up–anti-up and down–anti-down bound state, and then make sure that the X charges are such that the contributions of the two states cancel. The resulting charges turn out to be protophobic.
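The charge arithmetic behind “pion-phobia is protophobia” can be spelled out directly. The sketch below (an illustration of the cancellation condition, up to overall normalization factors that drop out of the comparison) treats the π0 → γX anomaly amplitude as proportional to Q_u ε_u − Q_d ε_d, where ε are the hypothetical X charges of the quarks:

```python
from fractions import Fraction

Q_u, Q_d = Fraction(2, 3), Fraction(-1, 3)  # quark electric charges

def pi0_gamma_X_amplitude(eps_u, eps_d):
    # pi0 ~ (u ubar - d dbar)/sqrt(2), so the anomaly amplitude for
    # pi0 -> gamma X is proportional to Q_u*eps_u - Q_d*eps_d.
    return Q_u * eps_u - Q_d * eps_d

def proton_X_charge(eps_u, eps_d):
    return 2 * eps_u + eps_d  # proton = uud

# Algebraically, the amplitude equals (1/3)(2 eps_u + eps_d): it vanishes
# exactly when the proton's X charge does, for any quark-charge choice.
for eps_u, eps_d in [(Fraction(1), Fraction(-2)), (Fraction(1, 3), Fraction(-2, 3))]:
    assert pi0_gamma_X_amplitude(eps_u, eps_d) == Fraction(1, 3) * proton_X_charge(eps_u, eps_d)
    assert pi0_gamma_X_amplitude(eps_u, eps_d) == 0
print("pion-phobia <=> protophobia")
```

This is why the quark-superposition picture and the (naive but lucky) proton-in-the-loop picture land on the same protophobic charge assignment.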
The fact that the “proton-in-the-loop” picture gives the correct charges, however, is no coincidence. Indeed, this was precisely how Jack Steinberger calculated the correct pion decay rate. The key here is whether one treats the quarks/nucleons linearly or non-linearly in chiral perturbation theory. The relation to the Wess-Zumino-Witten term—which is what really encodes the low-energy interaction—is carefully explained in chapter 6a.2 of Georgi’s revised Weak Interactions.
Theory part 2: Not a spin-0 particle
The above considerations focus on a new particle with the same spin and parity as a photon (spin-1, parity odd). Another result of the UCI study was a systematic exploration of other possibilities. They found that the beryllium anomaly could not be consistent with spin-0 particles. For a parity-odd, spin-0 particle, one cannot simultaneously conserve angular momentum and parity in the decay of the excited beryllium-8 state. (Parity violating effects are negligible at these energies.)
For a parity-odd pseudoscalar, the bounds on axion-like particles at 20 MeV suffocate any reasonable coupling. Measured in terms of the pseudoscalar–photon–photon coupling (which has dimensions of inverse GeV), this interaction is ruled out down to the inverse Planck scale.
Additional possibilities include:
Dark Z bosons, cousins of the dark photon with spin-1 but indeterminate parity. This is very constrained by atomic parity violation.
Axial vectors, spin-1 bosons with positive parity. These remain a theoretical possibility, though their unknown nuclear matrix elements make it difficult to write a predictive model. (See section II.D of 1608.03591.)
Theory part 3: Nuclear input
The plot thickens when one also includes results from nuclear theory. Recent results from Saori Pastore, Bob Wiringa, and collaborators point out a very important fact: the 18.15 MeV beryllium-8 state that exhibits the anomaly and the 17.64 MeV state which does not are actually closely related.
Recall (e.g. from the first figure at the top) that the 18.15 MeV and 17.64 MeV states are both spin-1 and parity-even. They differ in mass and in one other key aspect: the 17.64 MeV state carries isospin charge, while the 18.15 MeV state and ground state do not.
Isospin is the nuclear symmetry that relates protons to neutrons and is tied to electroweak symmetry in the full Standard Model. At nuclear energies, isospin charge is approximately conserved. This brings us to the following puzzle:
If the new particle has mass around 17 MeV, why do we see its effects in the 18.15 MeV state but not the 17.64 MeV state?
Naively, if the emitted new particle, X, carries no isospin charge, then isospin conservation prohibits the decay of the 17.64 MeV state through emission of an X boson. However, the Pastore et al. result tells us that actually, the isospin-neutral and isospin-charged states mix quantum mechanically so that the observed 18.15 and 17.64 MeV states are mixtures of iso-neutral and iso-charged states. In fact, this mixing is actually rather large, with a mixing angle of around 10 degrees!
The result of this is that one cannot invoke isospin conservation to explain the non-observation of an anomaly in the 17.64 MeV state. In fact, the only way to avoid this is to assume that the mass of the X particle is on the heavier side of the experimentally allowed range. The rate for X emission goes like the 3-momentum cubed (see section II.E of 1608.03591), so a small increase in the mass can suppress the rate of X emission from the lighter state by a lot.
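The sensitivity of this p³ suppression to the X mass is easy to see numerically. The sketch below assumes an infinitely heavy recoiling nucleus, so that the X momentum is simply √(ΔE² − m_X²); the 0.51 vs 0.25 comparison is purely illustrative of how fast the lighter state's rate falls as m_X creeps up:

```python
import math

def x_momentum(delta_e, m_x):
    """Momentum (MeV) of an X emitted in a nuclear transition of energy
    delta_e, approximating the recoiling nucleus as infinitely heavy."""
    return math.sqrt(delta_e**2 - m_x**2)

def rate_ratio(m_x, e_light=17.64, e_heavy=18.15):
    """Rate(17.64 MeV state) / Rate(18.15 MeV state), using Gamma ~ |p|^3."""
    return (x_momentum(e_light, m_x) / x_momentum(e_heavy, m_x)) ** 3

print(f"m_X = 16.7 MeV: ratio = {rate_ratio(16.7):.2f}")
print(f"m_X = 17.3 MeV: ratio = {rate_ratio(17.3):.2f}")
```

Nudging m_X up by just 0.6 MeV roughly halves the relative rate from the 17.64 MeV state, which is why a heavier X helps hide the anomaly there.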
The UCI collaboration of theorists went further and extended the Pastore et al. analysis to include a phenomenological parameterization of explicit isospin violation. Independent of the Atomki anomaly, they found that including isospin violation improved the fit for the 18.15 MeV and 17.64 MeV electromagnetic decay widths within the Pastore et al. formalism. The results of including all of the isospin effects end up changing the particle physics story of the Atomki anomaly significantly:
The results of the nuclear analysis are thus that:
An interpretation of the Atomki anomaly in terms of a new particle tends to push for a slightly heavier X mass than the reported best fit. (Remark: the Atomki paper does not do a combined fit for the mass and coupling nor does it report the difficult-to-quantify systematic errors associated with the fit. This information is important for understanding the extent to which the X mass can be pushed to be heavier.)
The effects of isospin mixing and violation are important to include; especially as one drifts away from the purely protophobic limit.
Theory part 4: towards a complete theory
The theoretical structure presented above gives a framework to do phenomenology: fitting the observed anomaly to a particle physics model and then comparing that model to other experiments. This, however, doesn’t guarantee that a nice—or even self-consistent—theory exists that can stretch over the scaffolding.
Indeed, a few challenges appear:
The isospin mixing discussed above means the X mass must be pushed to the heavier values allowed by the Atomki observation.
The “protophobic” limit is not obviously anomaly-free: simply asserting that known particles have arbitrary charges does not generically produce a mathematically self-consistent theory.
Atomic parity violation constraints require that the X couple in the same way to left-handed and right-handed matter. The left-handed coupling implies that X must also talk to neutrinos: these open up new experimental constraints.
The Irvine/Kentucky/Riverside collaboration first note the need for a careful experimental analysis of the actual mass ranges allowed by the Atomki observation, treating the new particle mass and coupling as simultaneously free parameters in the fit.
Next, they observe that protophobic couplings can be relatively natural. Indeed: the Standard Model Z boson is approximately protophobic at low energies—a fact well known to those hunting for dark matter with direct detection experiments. For exotic new physics, one can engineer protophobia through a phenomenon called kinetic mixing where two force particles mix into one another. A tuned admixture of electric charge and baryon number, (Q-B), is protophobic.
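The protophobia of the Q−B combination is just charge bookkeeping, which one can verify at the quark level. A minimal sketch:

```python
from fractions import Fraction

# Per-quark electric charge Q and baryon number B
Q = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}
B = {"u": Fraction(1, 3), "d": Fraction(1, 3)}

def qb_charge(quarks):
    """Total Q - B charge of a bound state of the given quarks."""
    return sum(Q[q] - B[q] for q in quarks)

print("proton  (uud):", qb_charge("uud"))  # 0  -> protophobic
print("neutron (udd):", qb_charge("udd"))  # -1 -> still talks to neutrons
```

A boson coupled to Q−B therefore decouples from protons while retaining a full-strength coupling to neutrons, exactly the pattern the beryllium fit wants.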
Baryon number, however, is an anomalous global symmetry—this means that one has to work hard to make a baryon-boson that mixes with the photon (see 1304.0576 and 1409.8165 for examples). Another alternative is if the photon kinetically mixes with not baryon number, but the anomaly-free combination of “baryon-minus-lepton number,” Q-(B-L). This then forces one to apply additional model-building modules to deal with the neutrino interactions that come along with this scenario.
In the language of the ‘model building blocks’ above, the result of this process looks schematically like this:
The theory collaboration presented examples of the two cases and pointed out how the additional ‘bells and whistles’ required may tie to additional experimental handles to test these hypotheses. These are simple existence proofs for how complete models may be constructed.
We have delved rather deeply into the theoretical considerations of the Atomki anomaly. The analysis revealed some unexpected features with the types of new particles that could explain the anomaly (dark photon-like, but not exactly a dark photon), the role of nuclear effects (isospin mixing and breaking), and the kinds of features a complete theory needs to have to fit everything (be careful with anomalies and neutrinos). The single most important next step, however, is and has always been experimental verification of the result.
While the Atomki experiment continues to run with an upgraded detector, what’s really exciting is that a swath of experiments that are either ongoing or in construction will be able to probe the exact interactions required by the new particle interpretation of the anomaly. This means that the result can be independently verified or excluded within a few years. A selection of upcoming experiments is highlighted in section IX of 1608.03591:
We highlight one particularly interesting search: recently a joint team of theorists and experimentalists at MIT proposed a way for the LHCb experiment to search for dark photon-like particles with masses and interaction strengths that were previously unexplored. The proposal makes use of LHCb’s ability to pinpoint the production position of charged particle pairs and the copious number of D mesons produced in Run 3 of the LHC. As seen in the figure above, the LHCb reach with this search thoroughly covers the Atomki anomaly region.
So where we stand is this:
There is an unexpected result in a nuclear experiment that may be interpreted as a sign for new physics.
The next steps in this story are independent experimental cross-checks; the threshold for a ‘discovery’ is if another experiment can verify these results.
Meanwhile, a theoretical framework for understanding the results in terms of a new particle has been built and is ready-and-waiting. Some of the results of this analysis are important for faithful interpretation of the experimental results.
What if it’s nothing?
This is the conservative take—and indeed, we may well find that in a few years, the possibility that Atomki was observing a new particle will be completely dead. Or perhaps a source of systematic error will be identified and the bump will go away. That’s part of doing science.
Meanwhile, there are some important takeaways in this scenario. First is the reminder that the search for light, weakly coupled particles is an important frontier in particle physics. Second, for this particular anomaly, there are some neat takeaways such as a demonstration of how effective field theory can be applied to nuclear physics (see e.g. chapter 3.1.2 of the new book by Petrov and Blechman) and how tweaking our models of new particles can avoid troublesome experimental bounds. Finally, it’s a nice example of how particle physics and nuclear physics are not-too-distant cousins and how progress can be made in particle–nuclear collaborations—one of the Irvine group authors (Susan Gardner) is a bona fide nuclear theorist who was on sabbatical from the University of Kentucky.
What if it’s real?
This is a big “what if.” On the other hand, a 6.8σ effect is extremely unlikely to be a statistical fluctuation, and there is no known nuclear physics that produces a new-particle-like bump given the analysis presented by the Atomki experimentalists.
The threshold for “real” is independent verification. If other experiments can confirm the anomaly, then this could be a huge step in our quest to go beyond the Standard Model. While this type of particle is unlikely to help with the Hierarchy problem of the Higgs mass, it could be a sign for other kinds of new physics. One example is the grand unification of the electroweak and strong forces; some of the ways in which these forces unify imply the existence of an additional force particle that may be light and may even have the types of couplings suggested by the anomaly.
Could it be related to other anomalies?
The Atomki anomaly isn’t the first particle physics curiosity to show up at the MeV scale. While none of these other anomalies are necessarily related to the type of particle required for the Atomki result (they may not even be compatible!), it is helpful to remember that the MeV scale may still have surprises in store for us.
The KTeV anomaly: The rate at which neutral pions decay into electron–positron pairs appears to be off from the expectations based on chiral perturbation theory. In 0712.0007, a group of theorists found that this discrepancy could be fit to a new particle with axial couplings. If one fixes the mass of the proposed particle to be 20 MeV, the resulting couplings happen to be in the same ballpark as those required for the Atomki anomaly. The important caveat here is that parameters for an axial vector to fit the Atomki anomaly are unknown, and mixed vector–axial states are severely constrained by atomic parity violation.
The anomalous magnetic moment of the muon and the cosmic lithium problem: much of the progress in the field of light, weakly coupled forces comes from Maxim Pospelov. The anomalous magnetic moment of the muon, (g-2)μ, has a long-standing discrepancy from the Standard Model (see e.g. this blog post). While this may come from an error in the very, very intricate calculation and the subtle ways in which experimental data feed into it, Pospelov (and also Fayet) noted that the shift may come from a light (in the 10s of MeV range!), weakly coupled new particle like a dark photon. Similarly, Pospelov and collaborators showed that a new light particle in the 1-20 MeV range may help explain another longstanding mystery: the surprising lack of lithium in the universe (APS Physics synopsis).
A lot of recent progress in dark matter has revolved around the possibility that in addition to dark matter, there may be additional light particles that mediate interactions between dark matter and the Standard Model. If these particles are light enough, they can change the way that we expect to find dark matter in sometimes surprising ways. One interesting avenue is called self-interacting dark matter and is based on the observation that these light force carriers can deform the dark matter distribution in galaxies in ways that seem to fit astronomical observations. A 20 MeV dark photon-like particle even fits the profile of what’s required by the self-interacting dark matter paradigm, though it is very difficult to make such a particle consistent with both the Atomki anomaly and the constraints from direct detection.
Should I be excited?
Given all of the caveats listed above, some feel that it is too early to be in “drop everything, this is new physics” mode. Others may take this as a hint that’s worth exploring further—as has been done for many anomalies in the recent past. For researchers, it is prudent to be cautious, and it is paramount to be careful; but so long as one does both, then being excited about a new possibility is part of what makes our job fun.
For the general public, the tentative hopes of new physics that pop up—whether it’s the Atomki anomaly, or the 750 GeV diphoton bump, a GeV bump from the galactic center, γ-ray lines at 3.5 keV and 130 GeV, or penguins at LHCb—these are the signs that we’re making use of all of the data available to search for new physics. Sometimes these hopes fizzle away; often they leave behind useful lessons about physics and directions forward. Maybe one of these days an anomaly will stick and show us the way forward.
Here is some of the popular-level press coverage of the Atomki result. See the references at the top of this ParticleBite for the primary literature.
One thing that makes physics, and especially particle physics, unique in the sciences is the split between theory and experiment. The role of experimentalists is clear: they build and conduct experiments, take data and analyze it using mathematical, statistical, and numerical techniques to separate signal from background. In short, they seem to do all of the real science!
So what is it that theorists do, besides sipping espresso and scribbling on chalk boards? In this post we describe one type of theoretical work called model building. This usually falls under the umbrella of phenomenology, which in physics refers to making connections between mathematically defined theories (or models) of nature and actual experimental observations of nature.
One common scenario is that one experiment observes something unusual: an anomaly. Two things immediately happen:
Other experiments find ways to cross-check to see if they can confirm the anomaly.
Theorists start to figure out the broader implications if the anomaly is real.
#1 is the key step in the scientific method, but in this post we’ll illuminate what #2 actually entails. The scenario looks a little like this:
Theorists, who have spent plenty of time mulling over the open questions in physics, are ready to apply their favorite models of new physics to see if they fit. These are the models that they know lead to elegant mathematical results, like grand unification or a solution to the Hierarchy problem. Sometimes theorists are more utilitarian, and start with “do it all” Swiss army knife theories called effective theories (or simplified models) and see if they can explain the anomaly in the context of existing constraints.
Here’s what usually happens:
Indeed, usually one needs to get creative and modify the nice-and-elegant theory to make sure it can explain the anomaly while avoiding other experimental constraints. This makes the theory a little less elegant, but sometimes nature isn’t elegant.
Now we’re feeling pretty good about ourselves. It can take quite a bit of work to hack the well-motivated original theory in a way that both explains the anomaly and avoids all other known experimental observations. A good theory can do a couple of other things:
It points the way to future experiments that can test it.
It can use the additional structure to explain other anomalies.
The picture for #2 is as follows:
Even at this stage, there can be a lot of really neat physics to be learned. Model-builders can develop a reputation for particularly clever, minimal, or inspired modules. If a module is really successful, then people will start to think about it as part of a pre-packaged deal:
Model-smithing is a craft that blends together a lot of the fun of understanding how physics works—which bits of common wisdom can be bent or broken to accommodate an unexpected experimental result? Is it possible to find a simpler theory that can explain more observations? Are the observations pointing to an even deeper guiding principle?
Of course—we should also say that sometimes, while theorists are having fun developing their favorite models, other experimentalists have gone on to refute the original anomaly.
But here’s the mark of a really, really good model: even if the anomaly goes away and the particular model falls out of favor, a good model will have taught other physicists something really neat about what can be done within a given theoretical framework. Physicists get a feel for the kinds of modules that are out in the market (like an app store) and they develop a library of tricks to attack future anomalies. And if one is really fortunate, these insights can point the way to even bigger connections between physical principles.
I cannot help but end this post without one of my favorite physics jokes, courtesy of T. Tait:
A theorist and an experimentalist are having coffee. The theorist is really excited, she tells the experimentalist, “I’ve got it—it’s a model that’s elegant, explains everything, and it’s completely predictive.”
The experimentalist listens to her colleague’s idea and realizes how to test those predictions. She writes several grant applications, hires a team of postdocs and graduate students, trains them, and builds the new experiment. After years of design, labor, and testing, the machine is ready to take data. They run for several months, and the experimentalist pores over the results.
The experimentalist knocks on the theorist’s door the next day and says, “I’m sorry—the experiment doesn’t find what you were predicting. The theory is dead.”
The theorist frowns a bit: “What a shame. Did you know I spent three whole weeks of my life writing that paper?”
Article: Search for resonant production of high mass photon pairs using 12.9/fb of proton-proton collisions at √s = 13 TeV and combined interpretation of searches at 8 and 13 TeV
Authors: CMS Collaboration
Reference: CERN Document Server (CMS-PAS-EXO-16-027, presented at ICHEP)
In the early morning hours of Friday, August 5th, the particle physics community let out a collective, exasperated sigh. What some knew, others feared, and everyone gossiped about was announced publicly at the 38th International Conference on High Energy Physics: the 750 GeV bump had vanished.
Had it endured, the now-defunct “diphoton bump” would have been the highlight of ICHEP, a biennial meeting of the high energy physics community currently being held in Chicago, Illinois. In light of this, the scheduling of the announcements for a parallel session in the middle of the conference – rather than a specially arranged plenary session – said everything that the rumors had not already: there would be no need for champagne or press releases.
While the exact statistical significance depends upon the width and spin of the resonance in question, meaning that the paper presents multiple p-value plots corresponding to different signal hypotheses, the following plot is a good representative.
We hoped that the 2016 LHC dataset would bring confirmation that the excess seen during 2015 was evidence of a new particle, but instead, the 2016 data has assured us that 2015 was merely a statistical fluctuation. When combining the data currently available from 8 TeV and 13 TeV, the excess at 750 GeV is reduced to <2σ local significance. The channel with the largest excess, which had a local significance of 3.4σ before the addition of 2016 data, has now been reduced to 1.9σ, and other channels have seen analogous drops. As a result, CMS reports that “no significant excess” is observed over the Standard Model predictions.
The excess disappearing was clearly the less-desirable of the two possible outcomes, but is there a silver lining here? This CMS result puts the most stringent limits to date on the production of Randall-Sundrum (RS) gravitons, and the excitement generated by the diphoton bump sparked a flurry of activity within the theory community. A discovery would have been preferred to exclusion limits, and the papers published concerned a signal that has subsequently disappeared, but I would argue that both of these help our field move forward.
However, as we continue to push exclusion limits further across all manner of searches for new physics, particle physicists become understandably antsy. In searches for supersymmetry and exotica alike, the jump in energy from 8 to 13 TeV has allowed us to place more stringent exclusion limits. This is great news, but it is not the flurry of discoveries that some hoped the increase in energy during Run II would bring. It seems that there was no new physics ready to jump out and surprise us at the outset of Run II, so if we are to discover new physics at the LHC in the coming years, we will need to pick it out of the mountains of background. New physics may be lurking nearby, but if we want it, we will have to work harder to get it.
References and Further Reading
CERN Press, “Chicago sees floods of LHC data and new results at the ICHEP 2016 Conference” (link)
Alessandro Strumia, “Interpreting the 750 GeV digamma excess: a review” (arXiv:1605.09401)
Article: Search for resonant production of high-mass photon pairs in proton-proton collisions at √s = 8 and 13 TeV
Authors: CMS Collaboration
Reference: arXiv:1606.04093 (submitted to Phys. Rev. Lett.)
Following the discovery of the Higgs boson at the LHC in 2012, high-energy physicists asked the same question that they have asked for years: “what’s next?” This time, however, the answer to that question was not nearly as obvious as it had been in the past. When the top quark was discovered at Fermilab in 1995, the answer was clear: “the Higgs is next.” And when the W and Z bosons were discovered at CERN in 1983, physicists were saying “the top quark is right around the corner.” However, because the Higgs is the last piece of the puzzle that is the Standard Model, there is no clear answer to the question “what’s next?” At the moment, the honest answer to this question is “we aren’t quite sure.”
The Higgs completes the Standard Model, which would be fantastic news were it not for the fact that there remain unambiguous indications of physics beyond the Standard Model. Among these is dark matter, which makes up roughly one-quarter of the energy content of the universe. Neutrino mass, the Hierarchy Problem, and the matter-antimatter asymmetry in the universe are among other favorite arguments in favor of new physics. The salient point is clear: the Standard Model, though newly-completed, is not a complete description of nature, so we must press on.
Near the end of Run I of the LHC (2013) and the beginning of Run II (2015), the focus was on searches for new physics. While searches for supersymmetry and the direct production of dark matter drew a considerable amount of attention, towards the end of 2015, a small excess – or, as physicists commonly refer to them, a bump – began to materialize in decays to two photons seen by the CMS Collaboration. This observation was made all the more exciting by the fact that ATLAS observed an analogous bump in the same channel with roughly the same significance. The paper in question here, published June 2016, presents a combination of the 2012 (8 TeV) and 2015 (13 TeV) CMS data; it represents the most recent public CMS result on the so-called “di-photon resonance”. (See also Roberto’s recent ParticleBite.)
This analysis searches for events with two photons, a relatively clean signal. If there is a heavy particle which decays into two photons, then we expect to see an excess of events near the mass of this particle. In this case, CMS and ATLAS have observed an excess of events near 750 GeV in the di-photon channel. While some searches for new physics rely upon hard kinematic requirements or tailor their search to a certain signal model, the signal here is simple: look for an event with two photons and nothing else. However, because this is a model-independent search with loose selection requirements, great care must be taken to understand the background (events that mimic the signal) in order to observe an excess, should one exist. In this case, the background processes are direct production of two photons and events where one or more photon is actually a misidentified jet. For example, a neutral pion may be mistaken for a photon.
Part of the excitement from this excess is due to the fact that ATLAS and CMS both observed corresponding bumps in their datasets, a useful cross-check that the bump has a chance of being real. A bigger part of the excitement, however, is the physics implications of a new, heavy particle that decays into two photons. A particle decaying to two photons would likely be either spin-0 or spin-2 (by the Landau–Yang theorem, a spin-1 particle cannot decay to two photons; higher spins are possible in principle). Models exist in which the aforementioned Higgs boson, h(125), is one of a family of Higgs particles, and these so-called “expanded Higgs sectors” predict heavy, spin-0 particles which would decay to two photons. Moreover, in models in which there are extra spatial dimensions, we would expect to find a spin-2 resonance – a graviton – decaying to two photons. Both of these scenarios would be extremely exciting, if realized by experiment, which contributed to the buzz surrounding this signal.
So, where do we stand today? After considering the data from 2015 (at 13 TeV center-of-mass energy) and 2012 (at 8 TeV center-of-mass energy) together, CMS reports an excess with a local significance of 3.4σ. However, the global significance – which takes into account the “look-elsewhere effect” and is the figure of merit here – is a mere 1.6σ. While the outlook is not extremely encouraging, more data is needed to definitively rule on the status of the di-photon resonance. CMS and ATLAS should have just that, more data, in time for the International Conference on High Energy Physics (ICHEP) 2016 in early August. At that point, we should have sufficient data to determine the fate of the di-photon excess. For now, the di-photon bump serves as a reminder of the unpredictability of new physics signatures, and it might suggest the need for more model-independent searches for new physics, especially as the LHC continues to chip away at the available supersymmetry phase space without any discoveries.
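The gap between the local and global significance gives a feel for how large the look-elsewhere effect is here. As a rough sketch, if one models the search as N independent mass hypotheses (a simplification; the real correction accounts for correlated, overlapping signal windows), the reported numbers imply a sizable trials factor:

```python
import math

def p_value(sigma):
    """One-sided tail probability corresponding to a significance in sigma."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# CMS reports 3.4 sigma local but only 1.6 sigma global significance.
# Modeling the look-elsewhere effect as N independent mass hypotheses,
# p_global = 1 - (1 - p_local)^N, the implied trials factor is roughly:
p_local, p_global = p_value(3.4), p_value(1.6)
n_trials = math.log(1 - p_global) / math.log(1 - p_local)
print(f"implied trials factor: N ~ {n_trials:.0f}")
```

In other words, scanning a wide mass range is equivalent to running on the order of a hundred independent bump hunts, which is why a locally impressive fluctuation can be globally unremarkable.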