By Nicola Omodei and Giacomo Vianello
The first direct detection in 2015 of a gravitational wave (GW) event by the recently upgraded Laser Interferometer Gravitational-Wave Observatory, known as Advanced LIGO, ushered in with a mighty bang a completely new era in astronomy. The first science run (‘O1’) with the Advanced LIGO detector started in September 2015, and two high-significance events (GW150914 and GW151226) and one sub-threshold event (LVT151012) were reported. All three events were compatible with the signals expected from mergers of pairs of black holes (BHs).
The identification and study of electromagnetic (EM) counterparts to GW events is critical as we move into the new era of GW astronomy, for several reasons. If an EM signal is also observed, it provides additional information that can significantly tighten the constraints on the parameters of the binary black hole system (such as mass, orbit, and spin). An EM signal would also allow for a cross-check between the distance measured through the GW signal and the redshift measured through its EM counterpart, providing an independent constraint on cosmological models. Finally, the simultaneous detection of a clear EM counterpart can confirm a GW event that hovers at the threshold of significance, effectively increasing the sensitivity of the search and the distance to which GW events can be detected by LIGO.
However, finding an EM counterpart to a GW observation is a challenging proposition. We start with a “probability map” provided by LIGO (which will soon be working in conjunction with its European counterpart, Virgo, helping narrow the probability map down to a much smaller area), showing the section of the sky in which the gravitational event is most likely to have taken place. The probability map is produced using a complex analysis chain that takes into account various aspects of the detected gravitational waveform signal (frequency in the inspiral, merger, and ringdown phases, amplitude at peak, length of time spent in each phase, etc.) as well as the time lag of the signal between the two LIGO detectors. It is usually shaped like a thickened arc (sometimes broken into two pieces) and can cover several hundred square degrees.
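To make the “several hundred square degrees” concrete, here is a minimal numpy sketch of how the area of the 90% credible region can be computed from such a probability map. The map below is entirely synthetic; a real analysis would load LIGO's HEALPix sky map (e.g. with `healpy.read_map`) instead of fabricating per-pixel probabilities.

```python
import numpy as np

# Synthetic stand-in for a LIGO HEALPix probability map: per-pixel
# probabilities summing to 1. A real map would be read from file.
rng = np.random.default_rng(0)
n_pix = 12 * 64**2                      # pixel count of an NSIDE=64 HEALPix grid
prob = rng.exponential(size=n_pix)
prob /= prob.sum()

pixel_area_deg2 = 41253.0 / n_pix       # the full sky is ~41,253 square degrees

def credible_area(prob, level=0.9):
    """Area (deg^2) of the smallest sky region containing `level`
    of the total localization probability."""
    order = np.argsort(prob)[::-1]      # rank pixels, most probable first
    cum = np.cumsum(prob[order])
    n_needed = np.searchsorted(cum, level) + 1
    return n_needed * pixel_area_deg2

area90 = credible_area(prob, 0.9)
```

The greedy ranking (take the most probable pixels until the requested probability is enclosed) is the standard way a credible region is built from a pixelized map.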
The area covered by the probability map is currently much larger than the field-of-view (FoV) of a typical soft X–ray, optical, or radio telescope, which is at most a few square degrees for even the largest of today's detectors. Further, the luminosity of an EM counterpart is expected to decay rapidly, meaning we need to cover a big patch of sky in a very short time.
Above: the Fermi Space Telescope shown in an artist’s rendition orbiting above the Earth. (Credit: NASA.)
On the other hand, hard X–ray telescopes such as Swift-BAT and INTEGRAL-ISGRI as well as gamma-ray detectors such as the Fermi Gamma-Ray Burst Monitor, the Fermi Large Area Telescope, and HAWC, have much larger FoVs and can cover the probability map much more quickly. They are therefore expected to play a major role in the discovery of the first EM counterpart to a GW event.
To focus in on one of these instruments: the Fermi Gamma-ray Space Telescope satellite, launched in 2008, orbits the Earth every 90 minutes and scans the entire sky every two orbits. This means there’s a very good chance that part of the probability map is in the GBM or LAT field of view at the time of the trigger, and full coverage of the map is expected to occur within a few hours.
Figure 1: LIGO localization probability map for LVT151012, shown as the two thin brown arcs. The blue region represents the part of the sky occulted by the Earth at the time of the LIGO trigger from the vantage point of the Fermi satellite, while the pink shaded region is the LAT field of view at the time of the trigger. Thus, approximately 50% of the high-probability region was visible at the time of the GW event.
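The “approximately 50% visible” figure is just the total map probability falling inside the instrument's field of view at trigger time. A toy numpy version of that bookkeeping (both the map and the visibility mask here are fabricated for illustration):

```python
import numpy as np

# Synthetic probability map and a toy visibility mask marking which
# pixels were inside the LAT field of view at the trigger time.
rng = np.random.default_rng(2)
n_pix = 3072
prob = rng.exponential(size=n_pix)
prob /= prob.sum()                      # normalize to a probability map
in_fov = rng.random(n_pix) < 0.5        # toy mask: ~half the pixels visible

covered = prob[in_fov].sum()            # probability covered at trigger time
```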
As soon as we received temporal and localization information for the LIGO events, we searched back in the recorded LAT data for an EM counterpart. Searching for a small signal over a large fraction of the sky is like searching for the proverbial needle in a haystack. By contrast, the standard LAT analysis assumes that we already know the location of a source to good accuracy. Given the size of the localization region of the GW event, the search for a transient counterpart in LAT data is challenging and requires new methods.
Many more details on the two methodologies can be found at the end of this post.
When we did not find a counterpart, we tried to determine an upper bound for the EM signal; in other words, how strong the signal could possibly have been, given the type of GW event and the fact that we didn’t see anything. But this requires accounting for the uncertainty in the position of the source, which in turn requires a careful statistical treatment. We have developed two complementary methods to search for EM counterparts to GW events in the Fermi-LAT data and to place flux upper bounds in cases of non-detection. One is based on searching for excesses over a fixed time window, and the other involves optimizing the time window for each position while taking into account the exposure.
Unfortunately we did not detect any EM excess with either method, but we were able to calculate upper bounds on the gamma-ray flux from the three events, and these upper bounds can be used to constrain models of EM emission from GW events above 100 MeV. In the future, a LAT detection and localization would vastly improve the chances of a successful follow-up by instruments with narrower FoVs, such as the Swift X-Ray Telescope or an optical telescope.
Ultimately, we hope to track down the EM counterparts to future GW events to shed more light on the cataclysmic origins of GW signals, to probe the nature of matter in these most intense regimes, and to gain another handle on distances through a new ‘sensory modality,’ or new way of seeing the Universe, in astrophysics. Once we see these events in EM signals as well, we will be able to use them as ‘standard sirens’, the name given to the GW equivalent of standard candles, to get better distances to some of the farthest events we can detect in our Universe, making them an excellent tool for probing dark energy—the fundamental nature of which remains one of the most enigmatic mysteries in modern cosmology. The ability to detect GW signals has already heralded our entry into a dramatic new era of astrophysics. When we add in the detection of EM counterpart signals, we are sure we will be afforded even more profound opportunities to learn about our mysterious cosmos.
Methodology used to search for GW events and set upper limits on gamma-ray flux
(Note: Technical discussion follows.)
We studied and implemented ways to measure upper bounds on the flux from EM counterparts to GW events despite the large uncertainty in event position inherent in GW detection.
Our two novel techniques are capable of fully exploiting both the capabilities of the EM search instrument and the prior information available from the LIGO and Virgo observatories. These methods, developed during the first LIGO science run and already applied to three observed GW events, will be used to systematically search for EM counterparts to future GW events. In case of a detection, the methods presented here will return a localization, a flux estimation and a significance of the EM counterpart. If no EM counterpart is detected, a statistically meaningful set of upper limits can be measured and used to constrain models.
For the fixed time window method, we choose a time window corresponding to the interval of time needed to fully cover the 90% probability contour. In each pixel (downscaling the resolution of the map in order to match the resolution of the LAT) we model the gamma-ray emission by taking into account the steady contribution from the Milky Way (derived from data), the isotropic contribution from cosmic-ray particles, and the contribution from all the point sources detected by Fermi in its seven years in orbit. We then fit the parameters of this model to the data (letting the fluxes be freely adjusted) using a maximum likelihood method. We then repeat the procedure adding a point source at the center of the pixel as an alternative hypothesis. We use the likelihood ratio method (or test statistic, TS) to test the significance of the alternative hypothesis versus the null hypothesis. In practice, the null hypothesis is rejected if the value of TS is large, meaning that the additional source is required by the data.
In the second method, we define an adaptive time window that starts when the pixel enters the LAT field of view and ends when it exits. Each pixel is therefore analyzed in a different time window. Apart from this, the likelihood analysis is the same.
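Finding that per-pixel window amounts to locating the first contiguous stretch of time after the trigger during which the pixel is visible. A small numpy sketch, using a fabricated visibility profile in place of the real spacecraft pointing history:

```python
import numpy as np

def adaptive_window(times, in_fov):
    """Return (t_start, t_stop) of the first interval after the trigger
    during which the pixel is inside the field of view. `times` is an
    array of sample times; `in_fov` is a boolean array of the same
    length, True while the pixel is visible."""
    idx = np.flatnonzero(in_fov)
    if idx.size == 0:
        return None                              # pixel never observed
    start = idx[0]
    # walk to the end of the first contiguous visible stretch
    breaks = np.flatnonzero(np.diff(idx) > 1)
    stop = idx[breaks[0]] if breaks.size else idx[-1]
    return times[start], times[stop]

# Toy visibility profile sampled every 10 s after the trigger:
t = np.arange(0, 600, 10.0)
visible = (t >= 100) & (t < 400)                 # pixel visible for ~300 s
window = adaptive_window(t, visible)             # -> (100.0, 390.0)
```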
The advantage of using a fixed time window for every position in the sky is that we can use a statistically solid method to combine the upper bounds derived in each pixel of the probability map into a single global value for the flux. In practice this corresponds to the maximum flux a source could have had without producing a detection at a given significance.
The figure above explains this concept: the y-axis denotes the credibility level while the x-axis denotes the flux, with each point on the curve giving the flux value at that credibility level. For example, at a credibility level of 0.95 we can say that, at any position in the sky, the maximum flux of an additional source is ~0.5×10^-9 erg/cm^2/s. The beauty of this method is that these values are independent of the particular location in the sky and can be used to constrain models.
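One plausible way to build such a curve is to treat it as a probability-weighted quantile of the per-pixel upper limits: sort the limits, accumulate the LIGO probability of the pixels, and read off the limit at the requested credibility. This combination rule is an illustrative interpretation of the description above, not necessarily the exact statistic used by the Fermi-LAT team, and both arrays below are synthetic.

```python
import numpy as np

def flux_at_credibility(prob, flux_ul, level):
    """Flux such that pixels holding `level` of the localization
    probability have upper limits at or below it: an illustrative
    probability-weighted quantile of the per-pixel limits."""
    order = np.argsort(flux_ul)                  # sort limits, lowest first
    cum = np.cumsum(prob[order]) / prob.sum()    # accumulated map probability
    return flux_ul[order][np.searchsorted(cum, level)]

# Synthetic per-pixel probabilities and flux upper limits (erg/cm^2/s):
rng = np.random.default_rng(1)
prob = rng.exponential(size=1000)
flux_ul = rng.uniform(0.2e-9, 1.0e-9, size=1000)

f95 = flux_at_credibility(prob, flux_ul, 0.95)
```

By construction the curve is monotonic: a higher credibility level can only demand a higher (less constraining) flux.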
On the other hand, if, for some reason, the localization of the GW event becomes better constrained—for example through the detection of the EM counterpart by some other instrument—our second method can be used to impose more stringent upper bounds. In this case, depending on the position of such a localization, a researcher using our technique can choose the upper bound most relevant for the candidate counterpart a posteriori. Because the time window has been optimized for each pixel, the corresponding upper bound can be deeper, and hence more constraining, than the fixed time window result.
How to read the above (rather complicated) figure: if the location of the source becomes known, we can select a pixel in the leftmost map and its corresponding value for the flux upper limit (we fix the confidence level to 90%). The same pixel has a color in the second map, which indicates when it enters the LAT field of view (aka the start of the adaptive interval). This is the same color as the corresponding horizontal bar in the last panel, showing precisely the interval in which the flux upper limit (y-axis) is calculated. The dashed arrows are an example of how to read the figure. This flux value can be more constraining than the previous limit but, clearly, requires knowing the location of the source.