by Bryné Hadnott
For more than a decade, scientists and engineers at SLAC National Accelerator Laboratory (SLAC) have been leading the development of the world’s largest digital camera for the Simonyi Survey Telescope, which will carry out the Legacy Survey of Space and Time (LSST). They’ve broken Guinness World Records for the highest-resolution digital camera and the largest lens, but the heart of the camera—a table-sized focal plane made up of nearly two hundred charge-coupled devices (CCDs)—has been a scientific study in its own right, filling thousands of research papers and countless PhD dissertations.
“There’s physics everywhere inside of this camera,” says SLAC research associate Andrew Bradshaw. “Now, we have the sensors all together and we're still finding mysteries. As my previous advisor, Tony Tyson, would say, ‘You’ve got to do an experiment on your experiment.’”
Bradshaw started working with LSST Chief Scientist Tony Tyson in 2010 while pursuing a doctorate in physics at the University of California, Davis (UC Davis). Together, they developed the LSST beam simulator to study the myriad effects of aperture size and shape, brightness, and the path of light on images captured by the full array of 189 CCDs.

“We ended up going from one pinhole to an array of 40,000 pinholes with different sizes and ellipticities to test the entire measurement system,” says Bradshaw. “Then we added shapes of stars or galaxies to mimic observing a fake night sky. That was the real power of the system we developed at Davis and that Tony envisioned. We can simulate observations on just one CCD to really dig down deep into the physics.”
Physicists Assemble!
It might seem strange for an astronomer to create their own night sky, but in order to understand the complicated pathway of light from a distant star to an image captured by a CCD, Bradshaw and the LSST Camera Integration team had to account for every systematic error they could come up with—temperature, detector voltage, even a star’s position in the sky—including the motions of objects a little closer to home.

“I’m measuring how the proliferation of satellites in low Earth orbit—Starlink, OneWeb, all sorts of other companies—is going to affect the Rubin Observatory,” says Adam Snyder, a postdoctoral researcher in Tyson’s group at UC Davis. “We're trying to understand how many pixels or objects we’re going to miss out on because of satellites that are incident at the same time.”
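To get a feel for the scale of the problem, a rough estimate helps: a single streak crossing the focal plane touches only a small fraction of its pixels, though the affected area grows quickly as more satellites launch. The streak width and streak count in the sketch below are made-up placeholders, not results from Snyder’s analysis.

```python
import math

# Rough, illustrative estimate of how much of one image a satellite
# streak can contaminate. The streak width and the number of streaks
# per exposure are assumed placeholder values, not measured results.
total_pixels = 3.2e9
side_pixels = math.sqrt(total_pixels)     # ~57,000 px across, if square
streak_width_px = 50                      # assumed width of a masked trail
streaks_per_exposure = 1                  # assumed

lost = streaks_per_exposure * streak_width_px * side_pixels
print(f"pixels masked: {lost:.2e} ({100 * lost / total_pixels:.3f}% of the image)")
```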
Snyder started working with the LSST camera team as a graduate student at Stanford’s Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) while in KIPAC professor Aaron Roodman’s research group, characterizing the camera’s imaging sensors. While doing research at SLAC, Snyder helped bring Yousuke Utsumi, a celebrated expert in astronomical instrumentation and former professor at Hiroshima University, onto the camera team.
Utsumi previously worked on the National Astronomical Observatory of Japan’s precursor to LSST, the Hyper Suprime-Cam (HSC), developing the sensors, readout electronics, and even the software for the camera’s focal plane, which is just shy of two feet wide. He was also involved in a staggering number of essential tasks for the Hiroshima University Operated Tibet Optical Robotic Imager (HinOTORI), including “telescope design, instrument design, equatorial mount design, dome design, logistics, and coordination.”
“We couldn't operate the telescope without Yousuke,” says Bradshaw. “He has been critical in the development of the software, the operation of the electronics, sensor readout, honestly, he does everything.”
“We designed HSC to be a wide-field-of-view camera for the Subaru Telescope on Mauna Kea, similar in concept to LSST,” says Utsumi. “The size is eight hundred megapixels, about an order of magnitude less than LSST, and we can read out in thirty seconds with overhead.”

When compared to HSC, LSST’s focal plane is unprecedentedly large, spanning more than two feet in diameter with an array of 189 imaging sensors working in concert to produce stunning 3.2-gigapixel images, about 400 times the resolution of a brand-new 4K ultra-high-definition television. Each three-by-three square of sensors is its own autonomous 144-megapixel camera, featuring specially designed CCDs that are each divided into sixteen segments and read out in parallel, or multiplexed, so that images can be delivered in record time.
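The pixel arithmetic is easy to check with a quick back-of-the-envelope calculation. The per-CCD dimensions assumed below are approximate (the 144-megapixel figure above rounds each CCD to 16 megapixels), so the results land close to, not exactly on, the quoted numbers.

```python
# A rough sanity check of the pixel arithmetic in the paragraph above.
# The per-CCD dimensions are an assumption (roughly 4k x 4k pixels).
ccds = 189                       # imaging CCDs in the focal plane
pix_per_ccd = 4096 * 4096        # ~16.8 million pixels per CCD (assumed)
uhd_4k = 3840 * 2160             # ~8.3 million pixels in a 4K television

total = ccds * pix_per_ccd       # ~3.2e9 pixels, i.e. ~3.2 gigapixels
group = 9 * pix_per_ccd          # one 3x3 group of CCDs, ~150 megapixels

print(f"focal plane: {total / 1e9:.1f} gigapixels")
print(f"one 3x3 group: {group / 1e6:.0f} megapixels")
print(f"~{total / uhd_4k:.0f}x the pixel count of a 4K television")
```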
“We can read out images in under two seconds, but in our commissioning tests, we were running into problems with data coming off the CCDs too fast,” laughs Snyder. “We had to slow down to make sure there was enough time between images to write the data to disk and not cause a traffic jam.”
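A hedged back-of-the-envelope estimate shows why that pacing matters: with plausible assumptions about bytes per pixel and sustained disk bandwidth (the figures below are illustrative, not camera or data-acquisition specifications), each image takes longer to write to disk than to read off the sensors.

```python
# Illustrative only: why fast CCD readout can outpace disk writes.
# Bytes-per-pixel and disk bandwidth are assumed values, not actual
# LSST Camera or data-acquisition specifications.
pixels_per_image = 3.2e9
bytes_per_pixel = 2              # assume 16-bit samples for the sketch
readout_time_s = 2.0             # "under two seconds", per the article
disk_bandwidth = 1.5e9           # assume ~1.5 GB/s sustained writes

image_bytes = pixels_per_image * bytes_per_pixel     # ~6.4 GB per image
readout_rate = image_bytes / readout_time_s          # ~3.2 GB/s off the CCDs
write_time_s = image_bytes / disk_bandwidth          # ~4.3 s to land on disk

print(f"image size:   {image_bytes / 1e9:.1f} GB")
print(f"readout rate: {readout_rate / 1e9:.1f} GB/s vs disk {disk_bandwidth / 1e9:.1f} GB/s")
print(f"need at least {write_time_s:.1f} s between images to avoid a backlog")
```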
The Camera Adds Ten Pounds
The custom-designed CCDs in LSST’s imaging sensors introduced effects that hadn’t been seen before, requiring an extraordinarily detailed approach to the characterization of LSST’s focal plane and opening a new field of exploration for the team: semiconductor physics.
“The overall strategy for the LSST Camera is to acquire short exposures—fifteen seconds—and stack many images taken over the life of the survey to reach a greater level of sensitivity,” says SLAC Staff Scientist and KIPAC Senior Member Andy Rasmussen. “The sensors’ ‘systematics limit’ becomes very important to deal with properly, and this is where understanding the physics of the sensors becomes crucial.”
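Stacking works because, for background-limited imaging, the signal from a fixed source grows with the number of exposures while the noise grows only as its square root, so the signal-to-noise ratio improves as the square root of the number of visits. A minimal sketch of that standard scaling, with illustrative (not planned) visit counts:

```python
import math

# Minimal sketch of why stacking short exposures deepens the survey:
# in the background-limited regime, signal grows like N while noise
# grows like sqrt(N), so signal-to-noise improves as sqrt(N).
def depth_gain_mag(n_exposures: int) -> float:
    """Gain in limiting magnitude from coadding n equal exposures."""
    return 2.5 * math.log10(math.sqrt(n_exposures))

# Visit counts below are illustrative, not the actual observing plan.
for n in (1, 10, 100, 800):
    print(f"{n:4d} exposures -> {depth_gain_mag(n):.2f} mag deeper than one visit")
```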
CCDs belong to a class of devices called metal-oxide-semiconductors: a sandwich of metal electrodes on top, an oxide layer in the middle, and, at the bottom, a semiconductor created by doping a hair's-breadth layer of silicon with a small amount of impurities. The arrangement of the metal electrodes on top defines the CCD’s pixel size—about 10 microns on each side, slightly larger than a red blood cell—and when a positive voltage is applied, electrons excited by the light collect within the pixel area.
“Pixels are more like fuzzy squares,” says Snyder. “They're not all the same size or shape and because there are no physical walls between them, there's some fuzziness near the edges.”
In order to accurately determine the shapes and sizes of stars and galaxies—critical for measuring the effects of weak gravitational lensing, a powerful probe for dark matter—the LSST camera team is developing detailed correction models to account for image artifacts introduced by pixel “fuzziness,” including a recently discovered phenomenon called the “brighter-fatter effect.”

“When you're looking at a very bright object, charge builds up inside of the pixel creating an electric field that can change the pixel’s boundaries,” explains Alexander Broughton, Department of Energy (DOE) Science Graduate Student Research Fellow at the University of California, Irvine. “Then, new incoming charges can get deflected to nearby pixels, causing the observed object to look wider, which affects the measurements of stars and galaxies.”
“There are sub-pixel effects that depend on how bright an object is, where light falls onto the sensor, and even where the light converts into an electronic signal inside the sensor,” adds Broughton. “It’s kind of a wonder that we can take a picture of anything.”
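A toy model makes the mechanism concrete: if the charge already sitting in a pixel deflects a small share of each newly arriving charge packet into its neighbours, bright stars come out measurably wider than faint ones. The one-dimensional geometry and the coupling constant below are invented for illustration and are far simpler than the correction models the camera team is actually developing.

```python
import numpy as np

# Toy 1-D illustration of the brighter-fatter effect: charge already
# collected in a pixel deflects part of each new charge packet into its
# neighbours, so bright stars measure slightly wider than faint ones.
# The coupling constant and geometry are invented for illustration.
def observed_width(total_charge, coupling=1e-6, steps=200):
    x = np.arange(-10, 11)                        # pixel coordinates
    psf = np.exp(-0.5 * (x / 1.5) ** 2)           # "true" star profile
    psf /= psf.sum()
    image = np.zeros_like(psf)
    for _ in range(steps):                        # accumulate charge gradually
        arriving = psf * total_charge / steps
        pushed = coupling * image * arriving      # deflected by stored charge
        image += arriving - pushed
        image[:-1] += 0.5 * pushed[1:]            # spill to left neighbour
        image[1:] += 0.5 * pushed[:-1]            # spill to right neighbour
    return np.sqrt(np.sum(image * x**2) / image.sum())  # RMS width in pixels

print(f"faint star width:  {observed_width(1e3):.4f} px")
print(f"bright star width: {observed_width(1e5):.4f} px")
```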
As the myriad components that make up the LSST Camera near final integration, just in time for the camera’s delivery to Chile next year, the camera team will continue “experimenting on their experiment,” characterizing the focal plane and sensors to an exquisite level of detail to ensure that the highest-resolution digital camera in the world can accurately measure light from the most distant, or the brightest, of stars and galaxies.
“Now that the whole focal plane is assembled, we've all been able to come together as a team and design tests on the big focal plane,” says Bradshaw. “It's really a story of ‘follow your nose,’ build on what's already there, and help each other out with the science. It'll be interesting to contrast what we think now with what we think later because we're on the brink of more understanding about our camera.”
“For a precision instrument like the LSST Camera, you could say our motto is ‘know your instrument (intimately),’” says Rasmussen. “We continue to strive to produce ‘photon images’ where the effects of the instrument have all been removed to the best of our ability. Hopefully, those effects won’t continue to grow when the camera is buttoned up for the last time, but that remains to be seen!”
