Where the invisible things are: Probing the phase space structure of dark matter

Mar 21, 2015

by Devon Powell


The Unseen Scaffolding of the Universe

Dark matter is the mysterious substance that makes up about 85% of the mass in our universe. The problem with studying it is that it’s invisible: as far as we can tell, it interacts with itself and with “normal” matter (the matter we are made of and interact with every day) only through its gravitational pull, though numerous experiments are underway trying to prove otherwise (see e.g. recent posts in this blog about direct [1, 2] and indirect searches).

But we know it is out there (and passing through us at this very moment). We can observe, for instance, through gravitational lensing around galaxy clusters that there is much more mass present than we can see. Its existence is also supported by the way galaxies rotate, and by the pattern of temperature fluctuations in the cosmic microwave background. In fact, we know that dark matter plays a crucial role in the way galaxies form and cluster: weighing in at 85% of the mass in the universe, it forms a gravitational scaffold that guides the flow of “normal” matter as galaxies and galaxy clusters assemble.

Trying to “see” dark matter virtually leads to technical challenges

The fact that we can’t see dark matter is obviously a problem for understanding what it is and how it behaves. Thankfully, computers give us another window: we can simulate the gravitational dynamics of dark matter in a virtual universe. We traditionally do this using “N-body” codes, which are based on the simple physics of Newtonian gravity (a highly accurate approximation, on these very large scales, to the best theory of gravitation we have so far: Einsteinian General Relativity, or “GR”).
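To give a flavor of what an N-body code does at its core, here is a minimal sketch (my own illustration; the code units and softening value are arbitrary choices) of the direct-summation Newtonian acceleration on each particle. Real cosmology codes replace this O(N²) loop with clever tree- or mesh-based algorithms, but the underlying physics is the same:

```python
import numpy as np

G = 1.0           # Newton's constant in code units (an illustrative choice)
SOFTENING = 0.01  # small length that tames the force between very close particles

def accelerations(pos, mass):
    """Direct-summation Newtonian acceleration on every particle.
    pos: (N, 3) array of positions; mass: (N,) array of particle masses."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        dx = pos - pos[i]                          # vectors from particle i to all others
        r2 = np.sum(dx**2, axis=1) + SOFTENING**2  # softened squared distances
        # Sum G*m*dx/r^3 over all particles (the self-term is zero since dx = 0 there).
        acc[i] = np.sum((G * mass / r2**1.5)[:, None] * dx, axis=0)
    return acc
```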

When this simple force law is paired with some extra bits that take the expansion of the universe into account (which come from GR, once we know the matter-energy content of the universe), it can be used to accurately calculate the trajectories of dark matter particles in our Universe. Here, “particle” does not refer to an actual fundamental particle of nature, but rather to a mass element in the simulation - a pointlike parcel of mass weighing up to billions of times the mass of our Sun (the precise value depends on the resolution of the simulation and the size of the virtual universe being modeled). In technical terms, this is how we “discretize” the dark matter (“discretization” is a general term in computer simulations for the way a continuous system, like dark matter or a fluid, is represented using a finite amount of data), bringing it from a smooth distribution of mass to a finite number of mass-carrying elements that a computer can manage. As a general rule, simulations that use more particles (i.e. have a higher “resolution”) are more accurate.
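To put a rough number on those particle masses: the mean matter density of the universe is fixed by cosmological parameters, so the particle mass is just the total mass in the box divided by the number of particles. A back-of-the-envelope sketch (the parameter values below are typical assumptions, not numbers quoted in this post):

```python
RHO_CRIT = 2.775e11  # critical density of the universe, in h^2 M_sun / Mpc^3
OMEGA_M = 0.3        # assumed matter density parameter
H = 0.7              # assumed dimensionless Hubble parameter

def particle_mass(box_size_mpc, n_particles):
    """Mass per simulation particle, in solar masses: the mean matter
    density times the box volume, split evenly among the particles."""
    mean_density = OMEGA_M * RHO_CRIT * H**2     # M_sun / Mpc^3
    total_mass = mean_density * box_size_mpc**3  # total mass in the box
    return total_mass / n_particles

# A 100 Mpc box sampled with 512^3 particles gives ~3e8 M_sun per particle;
# bigger boxes at the same particle count push this into the billions.
print(f"{particle_mass(100.0, 512**3):.2e} M_sun")
```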

Here is a 2-D projection of a simulation box (i.e. a virtual universe) roughly 50 million light-years across, showing how dark matter is discretized into particles:

A Brief History of Simulating Dark Matter on Machines

The N-body technique discussed above is tried-and-true. The first N-body simulations were done in the early 1960s by von Hoerner (1960) and Aarseth (1963), using on the order of 100 particles. In the 1980s, advances in numerical algorithms for computing gravitational forces were made by Barnes and Hut (1986) and Hockney and Eastwood (1988) (among others), making it possible to simulate the gravitational interactions of many thousands of particles. This led to the development of “modern” cosmology codes and a more advanced understanding of the large-scale structures in our universe.

Nowadays, cosmology codes that employ the N-body technique are ubiquitous. Two mature, oft-cited examples are Gadget (used to run the well-known Millennium Simulation) and Enzo. Computing power has also improved dramatically, allowing cosmologists to simulate trillions of particles (for example, see the DarkSky simulations, a collaborative effort involving KIPAC’s own Risa Wechsler and Sam Skillman).

The challenge: how to properly estimate the DM density?

Discretizing dark matter on a computer using particles suffers from one distinct difficulty: we often want to know the density of dark matter at any given point in space, and a collection of discrete points does not, by itself, define one.

This is important, for instance, in studying the large-scale structures in the cosmos. Dark matter particles, as a general rule, form structures called “haloes” (clumps that host galaxies), “filaments” (elongated strands that connect haloes), “voids” (vast regions of low density), and “walls” (sheets that separate voids from one another). The consistent identification and classification of these structures in the “cosmic web” is a topic of ongoing debate. Another example is the hypothetical gamma-ray emission from dark matter haloes. Dark matter may interact with itself, converting its own mass to gamma rays according to E = mc² at a rate proportional to the square of its density. This is a particularly exciting topic that may yield clues as to the nature of dark matter, an effort that KIPAC is heavily involved in through the Fermi-LAT (a space-based gamma-ray detector) collaboration (see e.g. our blogpost on recent searches for dark matter in newly discovered dwarf galaxies, and references therein for more detail). Lastly, knowing the density of dark matter in a simulation is necessary for evolving the gravitational force equations in the simulation itself.
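To spell out the density-squared scaling mentioned above in standard (if schematic) notation, where ρ is the dark matter density, m_χ the particle mass, and ⟨σv⟩ the annihilation cross-section:

```latex
% Pairs of particles must meet to annihilate, so the rate per unit volume
% scales as the number density squared, and the gamma-ray luminosity of a
% halo involves the volume integral of rho^2:
\Gamma \;\propto\; n^2 = \left(\frac{\rho}{m_\chi}\right)^2
\qquad\Longrightarrow\qquad
L_\gamma \;\propto\; \frac{\langle\sigma v\rangle}{m_\chi^2}\int \rho^2(\mathbf{x})\,\mathrm{d}^3x
```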

To illustrate why the particle discretization for dark matter fails to give a consistent and accurate density at any point in space, consider the collection of dots below:

What is the density of dots at the location of the red X?  The answer is that we don’t really know, because the locations of the dots don’t give us enough information. We could think of different ways to estimate the density. For example, we could draw a circle around the red X and count the number of points inside the circle, then divide that number by the area of the circle to get a density. Such “density estimators” abound, along with their acronyms: adaptive kernel smoothing, cloud-in-cell (CIC), Delaunay tessellation field estimator (DTFE), and Voronoi tessellation field estimator (VTFE), to name a few. They all have their pros and cons, but at the end of the day, they are just estimates. This problem derives from something referred to as “Poisson noise,” which is the inherent ambiguity in measuring the local density of a collection of discrete points.
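The circle-drawing estimator just described fits in a few lines. Here is a minimal sketch (the point count and radii below are arbitrary choices for illustration); notice how the answer depends on the radius we happen to pick, which is exactly the ambiguity at issue:

```python
import numpy as np

def circle_density(points, center, r):
    """Count the dots within radius r of center, divided by the circle's area.
    points: (N, 2) array of dot positions; center: (2,) query location."""
    inside = np.sum(np.sum((points - center)**2, axis=1) < r**2)
    return inside / (np.pi * r**2)

rng = np.random.default_rng(0)
dots = rng.random((500, 2))               # 500 uniformly scattered dots in a unit square
x = np.array([0.5, 0.5])                  # the location of the "red X"
for r in (0.05, 0.1, 0.2):                # three equally defensible choices of radius...
    print(r, circle_density(dots, x, r))  # ...three different density estimates
```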

The phase space sheet

What if it were possible to discretize dark matter on a computer in such a way that the actual density field were known, with no need for estimators or other tricks? In fact, it is. Since 2011, KIPAC Director Tom Abel and his group (including former KIPAC postdocs Oliver Hahn and Raul Angulo, as well as visualization expert Ralf Kaehler) have been exploring a new method for discretizing dark matter. This method works by respecting something called the “phase space” structure of the dark matter, which the particle discretization does not.

“Phase space” refers to the combined position (“position space”) and velocity (“velocity space”) coordinates that a point moving along with the flow of dark matter may have. In our universe, a bit of dark matter has three position coordinates (since we live in 3-D), as well as three velocity components, which give a total of six dimensions for a 6-D phase space.  

The concept of phase space can initially seem a bit esoteric, so let us consider an example to illustrate just why it is a useful tool. Imagine a 1-D universe in which the position of a dark matter particle is specified by a single number. A plot of dark matter particle positions in a “1-D halo” might look something like this:

We can see that there is a region of higher density (more points per unit length) in the middle. We could apply a density estimator to just these points if we wanted to. But what if we include the velocity coordinate of each particle on the vertical axis, as well? Now, the particles occupy a 2-D phase space, the space consisting of one position coordinate and one velocity component:

All of a sudden, we see structure! The spiral is a result of “phase mixing,” a process by which neighboring particles that orbit the halo at slightly different speeds drift away from one another. The phase space coordinates illuminate the physical structure of the distribution: particles that are neighbors in physical space are not necessarily neighbors in phase space. More precisely, we are interested in the “Lagrangian” neighbors, particles that were neighbors when the simulation began. To illustrate, let us now connect the dots (literally, in this case):
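Incidentally, this spiral is easy to reproduce. In the following toy sketch (my own illustration, not the simulation behind these figures), particles start cold - at rest, like dark matter in the early universe - and oscillate in a simple anharmonic potential whose orbital period grows with amplitude; neighboring orbits slowly shear apart and wind the line of particles into a spiral:

```python
import numpy as np

n, dt, steps = 2000, 0.01, 4000

# Cold initial conditions: a line of particles at rest.
x = np.linspace(-1.0, 1.0, n)
v = np.zeros(n)

# Evolve in the potential Phi(x) = |x|, i.e. constant acceleration toward
# the origin, using kick-drift-kick (leapfrog) integration. The period of
# an orbit grows with its amplitude, so the particles phase-mix.
for _ in range(steps):
    v -= 0.5 * dt * np.sign(x)  # half kick
    x += dt * v                 # drift
    v -= 0.5 * dt * np.sign(x)  # half kick

# Plotting v against x now shows the spiral, e.g.:
# import matplotlib.pyplot as plt; plt.plot(x, v, ',k'); plt.show()
```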


Getting insight into the true DM distribution through phase space

If we think of where the mass in the simulation is, we now see that a more natural way of discretizing the mass is between Lagrangian neighbors, rather than at the particle locations. In other words - and this is the key point - we want to represent the dark matter as a set of mass-carrying line segments, demoting the particles to tracers of the flow that serve as endpoints for these mass elements.
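In the 1-D toy example, that idea takes only a few lines to implement. A minimal sketch (function and variable names are my own, and this illustrates the idea rather than the group’s production code): each segment carries a fixed mass, so its density is that mass divided by its current length, and where the sheet folds back on itself the overlapping segments simply add:

```python
import numpy as np

def sheet_density(x_particles, eval_points, mass_per_segment=1.0):
    """Density at eval_points from mass-carrying segments joining consecutive
    Lagrangian neighbors (consecutive entries of x_particles)."""
    rho = np.zeros_like(eval_points)
    for a, b in zip(x_particles[:-1], x_particles[1:]):
        lo, hi = min(a, b), max(a, b)
        if hi > lo:  # skip degenerate (zero-length) segments
            inside = (eval_points >= lo) & (eval_points < hi)
            rho[inside] += mass_per_segment / (hi - lo)  # uniform density on the segment
    return rho
```

Note that near a fold, a segment’s length shrinks toward zero while its mass stays fixed, so the density spikes; these are precisely the caustics discussed below.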

This discretization gives us an unambiguous density field everywhere in space, free from Poisson noise:

We also immediately see that we capture structures in the density field called “caustics,” those spikes where particle orbits reverse direction. Compare this to a naive density estimator in which we simply bin particles together in a histogram (this is very similar to the aforementioned CIC estimator), which gives a noisy density field:

To get a bit technical for a moment: in mathematical terms, we say that the dark matter is distributed along a “1-manifold embedded in 2-D phase space,” meaning that the dark matter forms a curvy line winding its way through a 2-D plane. This idea extends directly to 3-D, except that the dark matter occupies a “3-manifold embedded in 6-D phase space”; this 3-D “sheet” in phase space fully describes the physics and geometry of the dark matter (this is not a property of phase space distributions in general, but dark matter starts out with virtually zero velocity in the early universe, so it forms such a sheet rather than filling the entire phase space). In 3-D, we discretize the dark matter into mass-carrying tetrahedra (see Abel et al. 2012) instead of 1-D line segments, so that the tracer particles serve as vertices of the tetrahedra rather than mass elements themselves.

We use tetrahedra in 3-D because the tetrahedron is the “simplex” of 3-D space: the solid with the smallest possible number of vertices. (Likewise, the line segment is the simplex of 1-D space, and the triangle is the simplex of 2-D space.) This is important for our purposes because a tetrahedron always remains a tetrahedron, regardless of how its vertices move with the dark matter flow. The same is not true of non-simplices like the cube, which does not necessarily remain a cube if its vertices are moved arbitrarily.
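To make the mass-carrying tetrahedra concrete, here is a hedged sketch (my own minimal illustration of the idea in Abel et al. 2012): each tetrahedron keeps its mass fixed as its vertex particles move with the flow, so its density is just that mass divided by its current volume, computed with the standard determinant formula:

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Volume of the tetrahedron with vertices a, b, c, d (3-vectors):
    |det([b-a, c-a, d-a])| / 6."""
    return abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0

def tet_density(mass, a, b, c, d):
    """Mean density carried by a mass-carrying tetrahedron. The mass is set
    once by the initial conditions; only the volume changes as the vertex
    particles move, so the density is unambiguous."""
    return mass / tet_volume(a, b, c, d)
```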

Unfortunately, it’s impossible to give a good visualization of a 6-D space on a 2-D screen. However, we can look at the resulting density field in physical space. The ability to visualize this density field by projecting the phase space sheet from 6-D phase space into 3-D position space in a geometrically exact, mass-conserving way is a computational geometry problem addressed in Powell and Abel (2015). We see that it is smooth and gives an unambiguous density everywhere. This method clearly reveals the web-like structure of the cosmos, even when the resolution is relatively low. Compare the following to the first figure in this article:

This is a huge improvement over the noisy particle-based discretization used in traditional N-body codes.
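The exact projection in Powell and Abel (2015) intersects each tetrahedron with the grid cells geometrically. As a rough stand-in to convey the idea (my own simplification, not the paper’s algorithm), one can scatter sample points uniformly inside each tetrahedron and deposit its mass onto a grid; total mass is conserved by construction:

```python
import numpy as np

def deposit_tet(grid, box_size, verts, mass, n_samples=1000, rng=None):
    """Spread `mass` over a cubic density grid by Monte Carlo sampling the
    tetrahedron with vertices `verts` (a (4, 3) array of positions)."""
    rng = rng or np.random.default_rng()
    # Uniform barycentric coordinates over a simplex follow a Dirichlet(1,1,1,1)
    # distribution, obtained by normalizing i.i.d. exponential random numbers.
    e = rng.exponential(size=(n_samples, 4))
    points = (e / e.sum(axis=1, keepdims=True)) @ verts
    # Deposit an equal share of the mass into the cell containing each sample.
    n = grid.shape[0]
    idx = np.clip((points / box_size * n).astype(int), 0, n - 1)
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), mass / n_samples)
```

Dividing each cell’s accumulated mass by the cell volume then gives the density; the Monte Carlo noise in this stand-in is exactly what the geometrically exact method eliminates.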

Very Bright Future Prospects

Discretizing dark matter into a collection of mass-carrying tetrahedra has already been used successfully in several scenarios. Hahn et al. (2013) showed that this method eliminates artificial clumping in N-body simulations, and Hahn et al. (2014) studied the statistics of cosmic velocity fields. Angulo et al. (2014) created smooth maps of the gravitational lensing potential around dark matter haloes. Kaehler et al. (2012) also created a custom GPU-based tool for visualizing these simulation results. The method devised by Powell and Abel (2015) for projecting the dark matter density field in a mass-conserving, geometrically exact way opens the door to yet more exciting prospects. Already, Wojtak et al. (2015, in prep.) are using the resulting field to analyze the characteristics and evolution of voids in the cosmic web. We also plan to build this discretization into a more advanced N-body code, using the smooth, continuous density field to solve for the gravitational forces more accurately; this builds on the work of Hahn and Angulo (2015), who hope, among other things, to shed light on the “cusp-core problem.” This new discretization might also allow us to study the gamma-ray emission from dark matter haloes in unprecedented detail.

This representation of dark matter as a “sheet” in phase space is a novel idea that gives more physically complete information about the distribution of dark matter in the universe, opening the door to many profoundly exciting prospects for future computational research on dark matter and cosmology.