by Arka Banerjee
While neutrinos were hypothesized by Wolfgang Pauli back in 1930, they remain among the most mysterious particles in the Standard Model of particle physics. We now know that there are three types of neutrinos, and neutrino oscillation experiments have shown that at least two of them have mass. Current experiments have not yet been able to nail down the precise masses of the three neutrinos, but have placed upper bounds on the sum of their masses. These upper bounds tell us that neutrinos have to be the lightest of all massive Standard Model particles, more than six orders of magnitude lighter than the electron! Pinning down the exact masses, and understanding the mechanisms by which neutrino mass is generated, are among the most interesting questions in particle physics today. Many experiments are underway, or are being planned, expressly to answer these questions (see, for example, this recent Symmetry magazine article).
Surprisingly, information about these extremely light particles is also imprinted on the largest cosmological scales of the Universe that we can observe. This is because neutrinos are, by number, the second most abundant particles in the Universe, with a number density only slightly lower than that of photons. These large number densities mean that neutrinos make up a non-negligible fraction of the total energy density of the Universe. In fact, the more massive the neutrinos, the larger the fraction of the total energy in a given volume of the Universe that is contained in these particles.
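To make this concrete, the present-day energy fraction in massive neutrinos follows from the standard relation Ω_ν h² = Σm_ν / 93.14 eV. Here is a minimal sketch in Python (the value of the Hubble parameter h is an illustrative assumption, not taken from the paper):

```python
# Convert a total neutrino mass into its contribution to the cosmic energy
# budget, using the standard relation Omega_nu h^2 = sum(m_nu) / 93.14 eV.
def omega_nu(sum_mnu_eV, h=0.7):
    """Energy-density fraction in massive neutrinos today."""
    return sum_mnu_eV / 93.14 / h**2

# Minimal sum allowed by oscillation experiments (~0.06 eV):
print(omega_nu(0.06))  # ~0.0013, a small but non-negligible fraction
```

Even the minimal allowed mass sum gives neutrinos about a tenth of a percent of the critical density, which is why their imprint on structure formation is measurable at all.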
Until recombination (the point at which the cosmic microwave background, or CMB, becomes free to travel unimpeded through space), neutrinos were relativistic and contributed to the expansion of the Universe as radiation instead of matter. Thus, precise measurements of the CMB can give us information about the number of neutrino species, but not their masses. However, late-time phenomena in the Universe, such as the formation and clustering of dark matter halos and galaxies (generally referred to as the Universe's Large Scale Structure), do contain information about the total energy fraction of neutrinos—and hence, their masses. The best-studied effect of massive neutrinos is the way they affect the shape of the "matter-power spectrum" at low redshifts.
The matter-power spectrum is a measure of how all the matter in the Universe clusters on different scales. It is usually plotted as a function of the wavenumber k, which is roughly the inverse of a given length scale R. So, in the figure above, large physical length scales correspond to small values of k, and small scales correspond to large values of k. As a consequence of their low masses, neutrinos at a given temperature move, on average, much faster than heavier particles like Cold Dark Matter (CDM) or baryons. Therefore, unlike CDM, which clusters down to very small scales thanks to the low velocities and mutual gravitational attraction of its constituent particles, neutrinos barely cluster on small scales. However, on very large scales (larger than the distance an average neutrino can travel given its thermal velocity, known as the free-streaming scale), neutrinos behave just like CDM and baryons. This leads to the behavior shown in the figure, which plots the ratio of the power spectrum of a cosmology with massive neutrinos to that of a cosmology with massless neutrinos. We hold the total amount of matter fixed in both cosmologies, i.e., the cosmology with massive neutrinos has less cold dark matter than the one with massless neutrinos.
On very large scales (small k), removing CDM and replacing it with neutrinos has no effect, as the two behave similarly in terms of clustering, so the ratio is exactly one. On small scales, removing a clustering component like CDM and replacing it with non-clustering neutrinos damps the power spectrum, i.e., the ratio drops below unity. This is seen both in analytic linear perturbation theory calculations (black curve) and in fully nonlinear calculations calibrated from cosmological simulations (red curve). The total amount of damping on small scales, as well as the scale at which the ratio starts to deviate from unity, carries information about the mass of the neutrinos: heavier neutrinos lead to more damping on small scales, but their free-streaming scale, where the damping sets in, becomes smaller.
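The size of the small-scale damping can be estimated with the well-known linear-theory rule of thumb ΔP/P ≈ -8 f_ν, where f_ν = Ω_ν/Ω_m is the neutrino fraction of the matter density. A rough sketch (the cosmological parameter values here are illustrative assumptions; the fully nonlinear suppression is somewhat larger than this linear estimate):

```python
# Rule-of-thumb small-scale suppression of the linear matter power spectrum:
# Delta P / P ~ -8 * f_nu, with f_nu = Omega_nu / Omega_m.
def linear_suppression(sum_mnu_eV, omega_m=0.3, h=0.7):
    f_nu = (sum_mnu_eV / 93.14 / h**2) / omega_m
    return -8.0 * f_nu

print(linear_suppression(0.1))  # ~ -0.058, i.e. ~6% damping for 0.1 eV
```

This is why even sub-eV neutrino masses leave a percent-level imprint on the power spectrum, within reach of upcoming surveys.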
Various current and future cosmological experiments look to measure this characteristic damping effect of neutrinos on the matter-power spectrum using different observables. For example, upper bounds have already been set on the neutrino mass through precise measurements of the lensing of the primary CMB signal, measurements of the Lyman alpha forest power spectrum, galaxy clustering and galaxy-galaxy lensing in photometric surveys, and Baryon Acoustic Oscillation studies. Surveys such as the Dark Energy Survey, DESI, EUCLID, and LSST will also aim to pin down the mass of the neutrino by carefully measuring the power spectrum.
To interpret the measurements from these surveys, one needs very accurate theoretical predictions for the power spectrum on all measurable scales. On large scales, where the evolution of Large Scale Structure is mostly linear, analytic calculations can predict the power spectrum very accurately. On small scales, gravitational collapse introduces non-linearities whose evolution must be modeled with full cosmological simulations. Therefore, cosmological simulations that include the effects of massive neutrinos are extremely important for extracting useful information about neutrino masses down to small scales.
For cosmologies with just CDM, N-body simulations are known to produce very precise and accurate predictions in the non-linear regime. These simulations evolve a set of ‘particles’ moving under the effects of their mutual gravitational interactions. However, massive neutrinos have proven to be somewhat difficult to incorporate into these simulations precisely because they have such large thermal velocities. Since neutrinos are fermions, these thermal velocities are drawn from the quantum distribution for fermions, known as the Fermi-Dirac distribution.
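As an illustration of how thermal velocities can be drawn from the Fermi-Dirac distribution, one common numerical recipe is to tabulate the cumulative distribution of the relativistic momentum distribution, f(p) dp ∝ p² / (e^(p/T) + 1) dp, and invert it. This is a minimal sketch (the grid resolution and the choice to work in units of the neutrino temperature are illustrative, not the paper's actual setup):

```python
import numpy as np

# Draw random neutrino momenta from the relativistic Fermi-Dirac
# distribution f(p) dp ~ p^2 / (exp(p/T_nu) + 1) dp, via inverse-CDF
# sampling on a tabulated grid. Momenta are in units of T_nu.
def sample_fermi_dirac(n, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    p = np.linspace(1e-4, 20.0, 4096)        # momentum grid, units of T_nu
    pdf = p**2 / (np.exp(p) + 1.0)           # unnormalized Fermi-Dirac pdf
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                           # normalize to [0, 1]
    return np.interp(rng.uniform(size=n), cdf, p)  # invert the CDF

p = sample_fermi_dirac(100_000)
print(p.mean())  # close to the analytic mean momentum, ~3.15 T_nu
```

The mean momentum of the sample can be checked against the analytic Fermi-Dirac expectation of about 3.15 T_ν, a useful sanity test for any such sampler.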
One of the standard approaches in the literature has been to sample this underlying distribution randomly at every point on the grid on which the initial conditions are generated; given enough neutrino particles in the simulation, one would eventually recover the true distribution. This procedure needs to be repeated at every point on the grid, so, in principle, one needs multiple neutrino particles starting off at every point in space to get a good approximation of the distribution of neutrinos throughout the simulation volume. Unfortunately, the number of particles needed for the randomly drawn sample to remain a good approximation of the true distribution throughout the evolution of the simulation far exceeds the maximum number of particles that can be handled even in the largest simulations run to date. In fact, one would need about a million times as many particles as used in the largest simulations to reach the required accuracy.
In the absence of sufficient numbers of particles, the random nature of the thermal velocities means that the neutrino particles in the simulation simply zoom about in random directions. As a result, the density of neutrino particles in any given part of the simulation volume is essentially a random number, not a true representation of the expected physical density. This is known as the shot noise problem for fast-moving particles in N-body simulations. While the problem was recognized more than a decade ago, most neutrino simulations have still been run with this method, the hope being that neutrinos form a small enough fraction of the matter density that large errors in the local neutrino density would not affect predictions for cosmological observables at a level that experiments can actually detect.
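The level of this noise floor is easy to estimate: for unclustered, randomly placed particles, the shot noise contributes a constant (white-noise) power of 1/n̄, the inverse of the mean particle number density. A quick sketch (the particle count and box size below are illustrative, not the paper's configuration):

```python
# Shot-noise power for randomly placed particles: P_shot = 1 / n_bar,
# where n_bar is the mean number density. For an unclustered particle
# distribution, this white-noise floor is essentially all the "power" there is.
def shot_noise(n_particles, box_size):
    n_bar = n_particles / box_size**3   # mean number density
    return 1.0 / n_bar                  # same volume units as P(k)

# e.g. 512^3 neutrino particles in a (1000 Mpc/h)^3 box:
print(shot_noise(512**3, 1000.0))  # ~7.45 (Mpc/h)^3 noise floor
```

Because the true neutrino power spectrum is heavily suppressed on small scales, this constant floor can easily dominate the physical signal there.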
However, cosmological surveys like DESI and LSST will be able to measure the power spectrum on small scales with a very high degree of accuracy, so it has become imperative to investigate whether the shot noise problem in neutrino simulations can indeed be ignored, or whether new methods are needed to describe the power spectrum down to small scales.
In the recent paper Reducing Noise in Cosmological N-body Simulations with Neutrinos (Banerjee et al., 2018), we proposed a new method that solves the shot noise problem in a particularly simple way (one of the co-authors of this paper, recent KIPAC graduate Devon Powell, wrote a KIPAC blogpost earlier about related simulation work, parts of which were utilized here). We showed that the shot noise could be completely removed by changing the method for generating the initial thermal velocities of the neutrino particles. Instead of sampling the Fermi-Dirac distribution randomly to assign the initial thermal velocities, we sampled the distribution in a regular and repeatable manner at every point on the initial grid. This means that at the starting point of the simulation, if any grid point has a neutrino particle moving, say, at 1000 km/s along the x-axis, every other grid point in the box will also have a neutrino particle moving in the same direction, with the same speed.
A way to understand why this helps is the following: in the absence of physical perturbations, sampling the distribution in a regular manner ensures equal numbers of neutrino particles moving into and out of any given patch of the simulation volume at any later time. Any deviation from this uniform sea of neutrinos therefore has to be produced by true physical perturbations, rather than by the random motions of neutrinos. The latter is exactly what happens in the random sampling method, where it becomes impossible to separate true fluctuations in the number of neutrino particles in a given patch from neutrinos simply ending up there by chance. This randomness in the particle counts is precisely what sources the shot noise, so the regular sampling of the distribution removes the problem at its root. We still need a minimum number of particles to describe the distribution correctly, but for the same number of particles, regular sampling reduces the noise levels by roughly a factor of 10⁷ compared to random sampling! We showed that sampling the distribution in this regular and repeatable manner eliminates the shot noise while still describing the subsequent evolution of the power spectrum correctly. The method also lends itself very naturally to the "Simplex-in-Cell" (SIC) density deposition scheme described in Abel, Hahn, and Kaehler (2011) and implemented in Powell and Abel (2015), which produces extremely accurate density maps.
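A toy illustration of the idea: every grid point receives the same deterministic set of velocity vectors, built from a few momentum shells times a fixed set of directions. The shell speeds and the minimal six-direction set below are illustrative placeholders, not the paper's actual choices (the paper samples the full Fermi-Dirac distribution far more finely):

```python
import numpy as np

# Regular (non-random) velocity sampling: instead of drawing a random
# Fermi-Dirac velocity per grid point, every grid point gets the SAME
# deterministic set of velocity vectors, so the unperturbed neutrino sea
# stays uniform and carries no shot noise.
def regular_velocities(shell_speeds):
    # six axis-aligned unit vectors as a minimal, symmetric direction set
    dirs = np.array([[ 1, 0, 0], [-1, 0, 0],
                     [ 0, 1, 0], [ 0,-1, 0],
                     [ 0, 0, 1], [ 0, 0,-1]], dtype=float)
    # one velocity vector per (shell, direction) pair
    return np.concatenate([s * dirs for s in shell_speeds])

vels = regular_velocities([300.0, 1000.0])  # two shells x six directions, km/s
print(vels.shape)        # (12, 3): identical set reused at every grid point
print(vels.sum(axis=0))  # [0. 0. 0.]: no net flow, hence no spurious density
```

Because the direction set is symmetric, the velocities sum to zero at every grid point, so any later density contrast must come from gravity acting on real perturbations.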
The figure shows the difference in the neutrino density maps when our new initial conditions are used (right panel) compared to the older method of randomly sampling the initial velocities (left panel). The physical structures stand out much more clearly in the right panel, even after we adjusted the color scale of the left panel to bring out its features as much as possible. As can be clearly seen, the map on the left is mostly noise generated by the random positions of the neutrino particles.
We can also study the clustering of neutrinos as a function of the magnitude and direction of their initial velocity. An example is shown in the figure below, where we look at the density field of neutrinos that all started the simulation moving in a single direction. The cosmic web structure seen in all CDM simulations correlates well with the overdense regions of the map (red patches), but a high initial velocity smears the neutrino structure along the direction of the initial velocity, as can be clearly seen in the bottom right panel.
Because of the low noise and high accuracy of these new simulations, we were able to check and validate a number of results derived from the earlier, noisier simulations, such as the exact effect on the small-scale matter-power spectrum shown in the first figure. We showed that the new method can reduce the noise even on the largest scales in the simulation volume. Another important point we discussed in the paper is that, given the accuracy with which DESI and LSST will measure the power spectrum, we are now in a regime where simulations need to be run not only with the correct total neutrino mass, but also with the realistic individual neutrino mass splittings for a given total mass. This point has been overlooked in a number of papers on simulations as well as in data analysis from previous surveys.
While this article mostly explores the idea of constraining the neutrino mass in cosmology using the shape of the power spectrum, neutrino simulations have also been used to study the effect of neutrinos on the way dark matter halos and voids (empty regions) cluster on large scales relative to the underlying matter distribution (Villaescusa-Navarro et al. 2014; Banerjee and Dalal 2016). These orthogonal measurements will also help improve the bounds on the neutrino mass from cosmological data.
With advances in simulations, as well as in the accuracy of cosmological surveys, weighing neutrinos from their impact on the largest structures in the Universe will soon be possible. In fact, there is a real chance that neutrino mass measurements from cosmology will pin down the exact masses before any terrestrial experiment does. Already, the bounds on the sum of neutrino masses from cosmology are stringent enough to almost rule out the "inverted hierarchy", the scenario in which there are two relatively heavy neutrinos and one lighter one. This contrasts with the normal hierarchy, where there is one relatively heavy neutrino and two light ones. By the era of LSST observations, even if the neutrinos have the lowest total mass allowed by current oscillation experiments, we will achieve a detection with a statistical significance greater than 3σ. Such a measurement will be yet another vindication of our understanding of, and the connection between, fundamental physics on the smallest scales, described by particle physics, and the largest scales, described by cosmology!
Reducing Noise in Cosmological N-body Simulations with Neutrinos (arXiv link) (2018)
Tracing the Dark Matter Sheet in Phase Space (arXiv link) (2012)
Simulating nonlinear cosmological structure formation with massive neutrinos (arXiv link) (2016)