Human: "Why should I believe you?" AI: "Because I am an Astrophysics Foundation Model."

Apr 17, 2025 - 11:00 am to 12:00 pm
Location

Campus, PAB 102/103

Speaker
Josh Speagle (University of Toronto), in person and via Zoom

Zoom Recording Passcode: ED0j=T*5

As datasets continue to grow, machine learning/artificial intelligence (ML/AI) has taken on an increasingly large role in scientific analyses, both as a practical necessity (to handle the data volumes) and as a way to "bypass" theoretical models by learning directly from the data. However, this speed, complexity, and flexibility has also proved to be one of the main challenges in getting scientists to "trust" the results from these ML/AI algorithms, and it commonly serves as a roadblock to incorporating them into a broader analysis framework.

In this talk, I will attempt to (randomly) cover a subset of topics touching on a range of these issues, which may include but will not be limited to:

(1) the "unreasonable effectiveness" of deep learning,

(2) the impacts of "scaling" (in computation, data, and model size),

(3) the abilities of ML/AI models to perform rigorous statistical inference (and what that even means),

(4) challenges with model selection in large parameter settings, and

(5) the importance of interpretability for scientific learning and discovery.

These will all be motivated by applications across astrophysics, which may include galaxy morphology classification from broadband images, characterizing the relationship between globular clusters and galaxies, stellar parameter recovery from stellar spectra (and light curves), and gyrochronology with low-mass stars.