The IAIFI Summer Workshop brings together researchers from across Physics and AI for plenary talks, poster sessions, and networking to promote research at the intersection of Physics and AI. We are also accepting submissions for contributed talks and/or posters.
- The 2025 Summer Workshop will be held August 11–15, 2025
- Location: Harvard University (Cambridge, MA)
- Registration is now open for the 2025 IAIFI Summer Workshop; register by July 31, 2025 ($200 registration fee).
About
The Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) is enabling physics discoveries and advancing foundational AI through the development of novel AI approaches that incorporate first principles, best practices, and domain knowledge from fundamental physics. The goal of the Workshop is to serve as a meeting place to facilitate advances and connections across this growing interdisciplinary field.
Agenda
The agenda is subject to change.
Monday, August 11, 2025
9:00–9:15 am ET
Welcome
9:15–10:00 am ET
AI in Astrophysics: Tackling Domain Shift, Model Robustness and Uncertainty
Alex Ciprijanovic, Fermilab
Abstract
Artificial Intelligence is transforming astrophysics, from studying stars and galaxies to analyzing cosmic large-scale structures. However, a critical challenge arises when AI models trained on simulations or past observational data are applied to new observations, leading to domain shifts, reduced robustness, and increased uncertainty in model predictions. This talk will explore these issues, highlighting examples such as galaxy morphology classification and cosmological parameter inference, where AI struggles to adapt across different datasets. We will discuss domain adaptation as a strategy to improve model generalization and mitigate biases, which is essential for making AI-driven discoveries reliable. Notably, these challenges extend beyond astrophysics, affecting AI applications across physics and other scientific domains. Addressing them is essential for maximizing AI’s impact in advancing scientific research.
10:00–10:45 am ET
Theoretical foundations for language model self-improvement
Dylan Foster, Microsoft Research
Abstract
Abstract to come.
10:45–11:15 am ET
Break
11:15 am–12:00 pm ET
Quasimetrics and Reinforcement Learning
Amy Zhang, UT Austin
Abstract
Abstract to come.
12:00–1:30 pm ET
Lunch Break
1:30–2:15 pm ET
Machine Learning for Time-Domain Astrophysics
Alex Gagliano, IAIFI Fellow
Abstract
Abstract to come.
2:15–3:00 pm ET
Title to come
Tri Nguyen, CIERA, Northwestern University
Abstract
Abstract to come.
3:00–3:30 pm ET
Break
3:30–4:15 pm ET
Multimodal Foundation Models for Scientific Data
Francois Lanusse, CNRS
Abstract
Abstract to come.
4:15–5:00 pm ET
Structured Learning for Astrophysical Data
Peter Melchior, Princeton University
Abstract
Abstract to come.
5:30–8:00 pm ET
Poster Session and Reception
Tuesday, August 12, 2025
9:00–9:45 am ET
(Machine) Learning of Dark Matter
Lina Necib, MIT
Abstract
Abstract to come.
9:45–10:30 am ET
Learning the Universe: Building a Scalable, Verifiable Emulation Pipeline for Astronomical Survey Science
Matthew Ho, Columbia University
Abstract
Learning the Universe is developing a large-scale, ML-accelerated pipeline for simulation-based inference in cosmology and astrophysics. By combining high-resolution physical models with fast emulators, we generate realistic training sets at the scale required for field-level inference from galaxy survey data. This enables us to constrain models of galaxy formation and cosmology from observations with unprecedented scale and precision. In designing this pipeline, we have also developed validation methodologies to assess emulator accuracy, identify sources of systematic error, and support blinded survey analysis. I will present results from its application to the SDSS BOSS CMASS spectroscopic galaxy sample and discuss how this approach is scaling to upcoming cosmological surveys.
10:30–11:00 am ET
Break
11:00–11:45 am ET
High-dimensional Bayesian Inference with Diffusion Models and Generative Flow Networks
Alexandre Adam, Université de Montréal
Abstract
The nature of dark matter is one of the greatest mysteries in modern cosmology. Little is known about dark matter other than that it has mass, and thus gravitates, and that it interacts with the electromagnetic field weakly, if at all. Gravitational lensing is a natural phenomenon in which the trajectories of photons from distant galaxies are bent by the gravity of massive objects along our line of sight. As such, it is one of the most promising probes of the nature of dark matter. This talk will discuss the problem of inferring the mass of the hypothetical dark matter particle from strong gravitational lens measurements, the challenges involved in such an inference from a Bayesian perspective, and the potential solutions offered by modern deep learning frameworks such as diffusion models and generative flow networks.
11:45 am–12:30 pm ET
Title to come.
Noemi Anau Montel, Max Planck Institute for Astrophysics
Abstract
Abstract to come.
12:30–2:00 pm ET
Lunch
2:00–3:30 pm ET
Contributed Talks - Parallel Sessions
3:30–4:00 pm ET
Break
4:00–4:45 pm ET
Title to come
Hidenori Tanaka, Harvard University/NTT Research, Inc.
Abstract
Abstract to come.
4:45–5:30 pm ET
Computing with Neural Manifolds in Biological and Artificial Neural Networks
SueYeon Chung, Harvard University (starting Fall 2025), Flatiron Institute
Abstract
Abstract to come.
Wednesday, August 13, 2025
9:00–9:45 am ET
From Neurons to Neutrons: An Interpretable AI Model for Nuclear Physics
Sokratis Trifinopoulos, MIT/CERN
Abstract
Abstract to come.
9:45–10:30 am ET
Investigating Proton Spatial and Spin Structure with Interpretable AI
Simonetta Liuti, University of Virginia
Abstract
Abstract to come.
10:30–11:00 am ET
Break
11:00–11:45 am ET
ML inroads into Conformal Field Theory
Costis Papageorgakis, Queen Mary University of London
Abstract
Abstract to come.
11:45 am–12:30 pm ET
Title to come.
Sven Krippendorf, University of Cambridge
Abstract
Abstract to come.
12:30–2:00 pm ET
Lunch
2:00–3:30 pm ET
Contributed Talks - Parallel Sessions
3:30–4:00 pm ET
Break
4:00–4:45 pm ET
Title to come
Lukas Heinrich, Technical University Munich
Abstract
Abstract to come.
4:45–5:30 pm ET
AI on the Edge: Decoding Particles, Brains, and Cosmic Collisions in Real Time
Shih-Chieh Hsu, University of Washington
Abstract
Artificial Intelligence is transforming scientific discovery at every scale, from the subatomic to the cosmic, by enabling real-time data analysis with unprecedented speed and precision. The A3D3 Institute leads this revolution, leveraging advanced hardware like FPGAs and GPUs, cutting-edge model compression, and specialized inference frameworks to accelerate breakthroughs in particle physics, neuroscience, and multi-messenger astrophysics. This talk highlights how A3D3’s innovations are powering instant detection of rare events, live decoding of brain signals, and rapid response to cosmic phenomena, ushering in a new era where AI turns massive data streams into actionable insights as they happen.
Thursday, August 14, 2025
9:00–9:45 am ET
Low latency machine learning at the LHCb experiment
Eluned Smith, MIT
Abstract
Abstract to come.
9:45–10:30 am ET
Towards Complete Automation in Particle Image Inference
Francois Drielsma, SLAC
Abstract
Particle imaging detectors have had a ubiquitous role in particle physics for over a century. The unrivaled level of detail they deliver has led to many discoveries and continues to make them an attractive choice in modern experiments. The liquid argon time projection chamber (LArTPC) technology – a dense, scalable realization of this detection paradigm – is the cornerstone of the US-based accelerator neutrino program. While the human brain can reliably recognize patterns in particle interaction images, automating this reconstruction process has been an ongoing challenge which could jeopardize the success of LArTPC experiments. Recent leaps in computer vision, made possible by machine learning (ML), have led to a remedy. We introduce an ML-based data reconstruction chain for particle imaging detectors: a multi-task network cascade which combines voxel-level feature extraction using Sparse Convolutional Neural Networks and particle superstructure formation using Graph Neural Networks. It provides a detailed description of an image and is currently used for state-of-the-art physics inference in three LArTPC experiments. Building on this success, we briefly introduce the potential of leveraging self-supervised learning – the core concept of cutting-edge large language models – to learn the fundamental structure of detector data directly from a large corpus of raw, unlabeled data. This novel approach could address current shortcomings in signal processing and reduce the impact of data/simulation disagreements.
10:30–11:00 am ET
Break
11:00–11:45 am ET
Foundation Models for Detector Data: progress, potential, and concerns
Michelle Kuchera, Davidson College
Abstract
Abstract to come.
11:45 am–12:30 pm ET
Deep(er) Reconstruction of Imaging Cherenkov Detectors: From Generative Towards Foundation Models
Cristiano Fanelli, William & Mary
Abstract
Abstract to come.
12:30–2:00 pm ET
Lunch
2:00–3:30 pm ET
Contributed Talks - Parallel Sessions
3:30–4:00 pm ET
Break
4:00–5:00 pm ET
Building an AI Scientist: Best Practices for Vibe Coding
Panel TBA
6:00–8:00 pm ET
Workshop Dinner
Museum of Science
Friday, August 15, 2025
9:00–9:45 am ET
Generative quantum advantage for classical and quantum problems
Hsin-Yuan (Robert) Huang, Caltech, Google
Abstract
Abstract to come.
9:45–10:30 am ET
Artificial intelligence for quantum matter
Liang Fu, MIT
Abstract
Abstract to come.
10:30–11:00 am ET
Break
11:00–11:45 am ET
Understanding inference-time compute: Self-improvement and scaling
Akshay Krishnamurthy, Microsoft Research
Abstract
Inference-time compute has emerged as a new axis for scaling large language models, leading to breakthroughs in AI reasoning. Broadly speaking, inference-time compute methods involve allowing the language model to interact with a verifier to search for desirable, high-quality, or correct responses. While recent breakthroughs involve using a ground-truth verifier of correctness, it is also possible to invoke the language model itself or an otherwise learned model as a verifier. These latter protocols raise the possibility of self-improvement, whereby the AI system evaluates and refines its own generations to achieve higher performance. This talk presents new understanding of and new algorithms for language model self-improvement. The first part of the talk focuses on a new perspective on self-improvement that we refer to as sharpening, whereby we "sharpen" the model toward one that places large probability mass on high-quality sequences, as measured by the language model itself. We show how the sharpening process can be done purely at inference time or amortized into the model via post-training, thereby avoiding expensive inference-time computation. In the second part of the talk, we consider the more general setting of a learned reward model, show that the performance of naive-but-widely-used inference-time compute strategies does not improve monotonically with compute, and develop a new compute-monotone algorithm with optimal statistical performance. Based on joint work with Audrey Huang, Dhruv Rohatgi, Adam Block, Qinghua Liu, Jordan T. Ash, Cyril Zhang, Max Simchowitz, Dylan J. Foster, and Nan Jiang.
11:45 am–12:30 pm ET
A Physics-informed Approach To Sensing
Petros Boufounos, Mitsubishi Electric Research Laboratories
Abstract
Physics-based models are experiencing a resurgence in signal processing applications. Thanks to developments in theory and computation, it is now practical to incorporate models of dynamical systems within signal processing pipelines, learning algorithms, and optimization loops. Advances in learning theory, such as Physics-Informed Neural Networks (PINNs), also allow for flexible and adaptive modeling of systems, even if the exact system model is not available at the algorithm design stage. In this talk we will explore how these new capabilities improve sensing systems and enable new functionality in a variety of applications and modalities. We will discuss applications in underground imaging, fluid modeling and sensing, and airflow imaging, among others, and investigate different approaches to developing and using these models, along with their advantages and pitfalls.
12:30–2:00 pm ET
Lunch
2:00–2:45 pm ET
State of AI Reasoning for Theoretical Physics - Insights from the TPBench Project
Moritz Münchmeyer, University of Wisconsin-Madison
Abstract
Large language reasoning models are now powerful enough to perform mathematical reasoning in theoretical physics at the graduate level. In the mathematics community, datasets such as FrontierMath are being used to drive progress and evaluate models, but theoretical physics has so far received less attention. In this talk I will present our dataset TPBench (arXiv:2502.15815, tpbench.org), which was constructed to benchmark and improve AI models specifically for theoretical physics. We find rapid progress of models over the last few months, but also significant challenges at research-level difficulty. I will discuss strategies to improve these models for theoretical physics and also show new results using test-time scaling techniques on these problems.
2:45–3:30 pm ET
A model of emergent abilities in learning from language
Yasamin Bahri, Google DeepMind
Abstract
Abstract to come.
3:30–4:00 pm ET
Closing
Speakers
Speakers will be announced as they are confirmed.
Accommodations
We have established discounted rates for August 10–August 16, 2025 at the following hotels:
- Porter Square Hotel, 1924 Massachusetts Avenue, Cambridge, MA 02142. $235–275 nightly rate (1–2 people per room). Deadline to book: first come, first served. To book, call 617-499-3399 and reference code 141315.
- Hotel 1868, 1868 Massachusetts Avenue, Cambridge, MA 02142. $225–265 nightly rate (1–2 people per room). Deadline to book: first come, first served. To book, call 617-499-3399 and reference code 141315.
Workshop attendees are also welcome to book dorms for a discounted rate at Harvard University:
- Harvard University Dorms, 36 Oxford Street, Cambridge, MA 02138. $110 nightly rate (1 person per room only).
FAQ
- Who can attend the Summer Workshop? Any researcher working at or interested in the intersection of physics and AI is encouraged to attend the Summer Workshop.
- What is the cost to attend the Summer Workshop? The registration fee for the Summer Workshop is $200 and includes a welcome dinner, as well as coffee breaks and snacks.
- If I come to the Summer School, can I also attend the Workshop? Yes! We encourage you to stay for the Workshop and you can stay in the dorms for both events if you choose (at your expense).
- Will the recordings of the talks be available? We plan to share the talks on our YouTube channel.
2025 Organizing Committee
- Fabian Ruehle, Chair (Northeastern University)
- Bill Freeman (MIT)
- Cora Dvorkin (Harvard)
- Thomas Harvey (IAIFI Fellow)
- Sam Bright-Thonney (IAIFI Fellow)
- Sneh Pandya (Northeastern)
- Yidi Qi (Northeastern)
- Manos Theodosis (Harvard)
- Marshall Taylor (MIT)
- Marisa LaFleur (IAIFI Project Manager)
- Thomas Bradford (IAIFI Project Coordinator)