How small-scale cosmology simulations predict observations from large galaxy surveys

Feb 12, 2019

The author, Albert Chuang. (Courtesy of A. Chuang.)

by Chia-Hsun (Albert) Chuang

Since 1998, analyses of supernova data have shown that our Universe has been undergoing accelerated expansion in the latter part of its life. Because ordinary matter can only exert an attractive gravitational force, something else must be providing the repulsive push that accelerates the Universe's expansion. We call the unknown driver of this expansion "dark energy"—and although it was discovered more than 20 years ago, the question of its origin remains a deeply vexing issue in cosmology and fundamental physics today. How is humanity going to sort out this profound mystery?

A Universe full of sound and fury, signifying something

One clue is that dark energy became a dominant component in what we call the "late-time" Universe—i.e., in the last few billion years, which is relatively recent compared to the Big Bang almost 13.8 billion years ago. It is therefore more tractable to detect dark energy using observations of the late-time Universe, including observations of supernovae and of the large-scale distribution of galaxies. In the early Universe (which was much denser and hotter), photons, electrons, and baryons formed a kind of fluid in which sound waves could propagate. Once our Universe cooled down, the interactions between photons and electrons became much less frequent and the sound waves "froze" in place. The characteristic scale of these frozen waves—the sound horizon, imprinted as baryon acoustic oscillations (BAO)—can then be used as a standard ruler, since any subsequent change in its size is due only to the expansion of the Universe. The BAO scale was first detected in data from SDSS in 2005 (and was mentioned in the context of other cosmological probes here in this series in 2015).

Now, with dramatically growing data sets and ambitious galaxy surveys, we expect to measure the BAO scale at various epochs of our Universe to sub-percent precision, giving us a glimpse of the underlying geometry at those times. We can then reconstruct the evolution of dark energy from these measurements. To this end, we are developing robust methodologies to extract dark energy information from galaxy surveys accurately and precisely.
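To make the standard-ruler idea concrete, here is a toy sketch; the sound horizon value and the observed angle below are illustrative assumptions, not survey measurements. If we know the ruler's true size, the angle it subtends on the sky tells us how far away it is.

```python
import math

# Comoving sound horizon (the BAO "ruler"), roughly 147 Mpc in
# standard cosmologies; an assumed value for illustration only.
R_S_MPC = 147.0

def comoving_distance_mpc(theta_deg):
    """Standard-ruler relation: distance = ruler size / subtended angle."""
    return R_S_MPC / math.radians(theta_deg)

# A hypothetical BAO feature subtending about 2.5 degrees on the sky:
d = comoving_distance_mpc(2.5)
print(f"inferred comoving distance ≈ {d:.0f} Mpc")
```

Measuring that angle at many epochs, as DESI will, maps distance as a function of cosmic time, which is exactly the geometric information dark energy leaves its imprint on.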

The Dark Energy Spectroscopic Instrument (DESI), supported by the Department of Energy Office of Science, is a galaxy redshift survey that will run over an approximately five-year period from 2020 to 2024. DESI will measure the positions of about 30 million galaxies and quasars to construct a detailed 3D map of a large portion of the late-time Universe. DESI will be mounted on the four-meter Mayall Telescope at Kitt Peak National Observatory, 56 miles southwest of Tucson, Arizona. The Mayall is a twin of the Blanco telescope used for the Dark Energy Survey (discussed previously in this series in January 2018, September 2017, August 2017, and March 2015).

Figure 1. Kitt Peak, with the Mayall Telescope in the foreground. (Credit: NOAO.)


The telescope will send light through 5000 robotic fiber positioners, which will be programmed to move the fibers to collect the light from the selected target galaxies. Figure 2 shows one of the “pie slices” of DESI into which the 5000 spectroscopic fibers will be inserted. 

Figure 2. The first “petal” machined for the Dark Energy Spectroscopic Instrument (DESI) is shown in these photos. Ten of these petals, which together will hold 5,000 robots (like the one in the lower right photo)—each pointing a thin fiber-optic cable at separate sky objects—will be installed in DESI. (Credit: Joe Silber/Berkeley Lab.)


Simulations + models = insight

We build our theoretical models of the evolution and contents of the Universe by running cosmology simulations (simulations on cosmological scales have been covered previously in this space in November 2017 and in April 2018). These simulations start about 11 billion years ago and evolve according to physical laws such as gravity. The volume of a cosmology simulation determines the uncertainty of the theoretical prediction: the larger the volume, the smaller the uncertainty. To maximize the information extracted from galaxy surveys like DESI, we need to minimize the uncertainty of our theoretical models. This is a challenge for a survey like DESI, however, because its survey volume is huge.
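The volume-uncertainty link can be made concrete with a toy calculation. The statistical error on a clustering measurement (such as a power-spectrum band) scales as one over the square root of the number of independent Fourier modes, and that number grows in proportion to the volume. The wavenumbers and bin width below are assumed example values, not DESI analysis settings.

```python
import math

def n_modes(volume_gpc3, k=0.1, dk=0.01):
    """Independent Fourier modes in a shell of radius k and width dk
    (k, dk in h/Mpc), for a box of the given volume in (Gpc/h)^3."""
    v_mpc3 = volume_gpc3 * 1e9                      # (Gpc/h)^3 -> (Mpc/h)^3
    return v_mpc3 * 4 * math.pi * k**2 * dk / (2 * math.pi) ** 3

def fractional_error(volume_gpc3):
    """Cosmic-variance error on that band scales as 1/sqrt(N_modes)."""
    return 1.0 / math.sqrt(n_modes(volume_gpc3))

print(f"1 (Gpc/h)^3 box: {fractional_error(1.0):.2%} per band")
print(f"2 (Gpc/h)^3 box: {fractional_error(2.0):.2%} per band")
```

Doubling the volume only shrinks the error by a factor of the square root of two, which is why brute-force volume growth is so expensive, and why the noise-cancellation tricks described below this section are attractive.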

In addition, a significant fraction of the galaxies DESI will observe will be star-forming galaxies. As their name implies, these galaxies have high levels of star formation occurring in them, which means they contain many young, short-lived stars that emit very intense light in the blue and UV parts of the spectrum. This energetic light ionizes nearby gas, causing it to glow. That light can be analyzed to determine its spectrum, including specific emission lines that act like fingerprints for the particular atoms or molecules in the emitting gas.

The light from star-forming galaxies thus has strong emission lines that can be identified in a spectrum more easily than the light from quieter galaxies, so we call these objects "emission line galaxies" (ELGs). ELGs serve as our primary targets for two reasons.

First, the emission lines from an ELG help us determine its distance from Earth: the precise Doppler shifts give us an exact cosmological recession velocity. Knowing this, we can estimate the distance to the galaxy through the linear Hubble-Lemaitre relation (which, in its simplest form, equates recession velocity to the product of the Hubble parameter and the galaxy's distance).
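As a toy illustration of that chain of reasoning (this is not DESI pipeline code, and the wavelengths and Hubble parameter below are assumed example values):

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble parameter, km/s/Mpc

def recession_velocity(observed_nm, rest_nm):
    """Redshift z from an emission line's Doppler shift, then v = c*z
    (a small-z approximation consistent with the linear relation)."""
    z = (observed_nm - rest_nm) / rest_nm
    return C_KM_S * z

def hubble_distance_mpc(velocity_km_s):
    """Linear Hubble-Lemaitre relation: v = H0 * d, so d = v / H0."""
    return velocity_km_s / H0

# Example: a line with rest wavelength ~372.7 nm observed at ~410 nm.
v = recession_velocity(observed_nm=410.0, rest_nm=372.7)
d = hubble_distance_mpc(v)
print(f"v ≈ {v:.0f} km/s, distance ≈ {d:.0f} Mpc")
```

In practice the survey fits the full observed spectrum rather than a single line, but the principle, shifted lines to redshift to distance, is the same.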

Second, we do not need to spend much exposure time on each target, because the signal-to-noise ratios of the emission lines are high enough to identify them in a relatively short observation. However, compared to typical bright, luminous (older) galaxies, ELGs tend to live in lower-density environments (lower-density regions form galaxies later in the Universe's history, so those galaxies are still forming stars even at recent times). To properly simulate structures in these regions, we therefore need high-resolution simulations that correctly describe such low-density areas.

In summary, we need a simulation with huge volume AND high resolution. Bigger volume and higher resolution both demand more computational resources (e.g., more than one billion CPU hours). These are real challenges to gaining insight into these systems.

How to understand a noisy Universe by reducing noise in simulations on Earth

After the Big Bang, the initially random quantum-scale fluctuations became the seeds of the structures of our Universe (if it were completely homogeneous, there would be no structure now; i.e., no galaxies, no stars, no Earth—no fun!). In principle, we need to simulate these random fluctuations, which introduces noise into the simulations. Ideally, we could minimize this noise by increasing the volume of the simulations, so that we would have an essentially noiseless prediction to compare with our observations. However, doing so is computationally very expensive. To overcome these hurdles, techniques for speeding up the simulations have been developed on the computational side, joined more recently by an alternative approach. The idea of this new approach is to remove the noise from the simulations by controlling their initial conditions, without increasing the volume. In other words, since we know exactly what noise we have put into a simulation, we should be able to at least partially remove its impact. Indeed, Angulo and Pontzen (2016) showed that one can obtain accurate, high-precision predictions from simulations of reasonable volume using such controlled initial conditions.

These techniques are fairly technical, but to give a brief description: one trick is to make two simulations which have opposite properties to one another; i.e., a high matter density region in one simulation is a low matter density region in the other (see Figure 3). In this case, certain aspects of the noise cancel out when one averages the measurements from the pair. (Another technical trick is to reduce the uncertainty in the amplitude of a given frequency mode, as one can decompose the matter density fluctuations into different frequency modes.)
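The cancellation in the pairing trick can be demonstrated with a deliberately simplified toy model (this is not the UNIT code; the grid size and the quadratic "evolution" below are assumptions purely for illustration). Any contribution to a statistic that is odd in the initial field cancels exactly when the pair is averaged, while the even part, the signal-like piece, survives.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "initial condition": a Gaussian random field on a small grid.
delta = rng.normal(0.0, 1.0, size=(32, 32, 32))
delta_paired = -delta   # the paired run: overdense <-> underdense

def evolve(d, eps=0.1):
    """A stand-in for gravitational evolution: the linear field plus a
    weak quadratic correction (illustrative only, not real dynamics)."""
    return d + eps * (d**2 - d.mean() ** 2)

# Measure a simple statistic (the mean of the evolved field) in each run.
s1 = evolve(delta).mean()
s2 = evolve(delta_paired).mean()

# Each run is contaminated by the linear-order noise term delta.mean(),
# but that term flips sign between the pair, so averaging removes it and
# leaves only the quadratic (signal-like) contribution.
paired_mean = 0.5 * (s1 + s2)
print(f"run 1: {s1:+.6f}, run 2: {s2:+.6f}, paired average: {paired_mean:+.6f}")
```

Real simulations also "fix" the amplitudes of the Fourier modes to their expected values (the second trick mentioned above), which removes another source of scatter for the same cost.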

The UNIT project (Universe N-body simulations for the Investigation of Theoretical models from galaxy surveys), which I have been leading, has adopted the controlled-initial-conditions method and made the first set of simulations of this kind publicly available. So far, the simulations can provide precise predictions with a simulation volume equivalent to seven times the DESI survey volume. The UNIT project members are now starting to test theoretical models against these simulations. We expect these simulations will enable a number of studies to further unveil the nature of dark energy and structure formation with galaxy surveys.

Thus, careful work on simulations encoded in the silicon of computer farms here on Earth will ultimately help humanity reach the next frontier in unraveling some of the most perplexing conundrums about a Universe made of forms of matter and energy far beyond us.

Figure 3. Two simulations using opposite phases of initial conditions; i.e., an overdense region in one simulation is an underdense region in the other (this can be seen by comparing the two images closely). Each panel shows part of a slice with a thickness of 1 Mpc/h and a side length of 500 Mpc/h, where h is the reduced Hubble constant (for h near 0.7, these slices are roughly 5 million light years thick and about 2.3 billion light years on a side). Since some contributions to the noise have opposite numerical values in the two simulations, averaging the statistics from them yields less noisy measurements. (Credit: UNIT.)


Read more

Exploring dark energy with robots (Symmetry, June 2015)

3-D Galaxy-mapping Project Enters Construction Phase (LBNL Newscenter, August 2016)