Internal Discussion Seminars

Upcoming Seminars

  • Gaia Grosso, Postdoctoral Fellow, IAIFI
    • Friday, April 26, 2024, 2:00–3:00 pm
    • A signal-agnostic ML approach to hypothesis testing at the LHC
    • Signal-agnostic data exploration could unveil very subtle statistical deviations of collider data from Standard Model expectations. The extreme size, rate, and complexity of the datasets produced at the Large Hadron Collider (LHC) pose unique challenges to the design of anomaly detection tools that are powerful, efficient and, at the same time, interpretable and robust. In this talk, I will present the New Physics Learning Machine (NPLM), a machine-learning-based strategy to test experimental data for significant departures from the Standard Model, with no prior bias on the nature of the underlying process responsible for them. The main idea behind the method is to fit the likelihood-ratio test statistic directly to the data with flexible machine learning models. I will describe the solution adopted in NPLM to account for systematic uncertainties affecting the Standard Model, and I will conclude by discussing ongoing research on optimal model selection and efficient computation on large samples, both of which are essential for applying the method to LHC datasets.
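
The sketch below illustrates the core idea in the abstract above: fit a flexible function to an extended likelihood-ratio objective between an observed dataset and a reference (Standard-Model-like) sample, and use the fitted value as a test statistic. It is a one-dimensional toy with made-up samples, network size, and training schedule, not the actual NPLM code.

    # Toy sketch of a signal-agnostic likelihood-ratio test in the spirit of NPLM.
    # The samples, network, and training schedule are illustrative only.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical 1D "reference" (Standard-Model-like) and "data" samples.
    n_ref, n_data = 20000, 2000
    reference = torch.randn(n_ref, 1).exp()              # stand-in for the SM template
    data = torch.cat([torch.randn(n_data - 50, 1).exp(),
                      torch.normal(4.0, 0.3, (50, 1))])  # small injected anomaly

    w_ref = n_data / n_ref   # weight the reference to match the data normalization

    f = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.Adam(f.parameters(), lr=1e-3)

    for step in range(3000):
        opt.zero_grad()
        # Extended likelihood-ratio loss: its minimum gives the maximum-likelihood-ratio fit.
        loss = w_ref * (torch.exp(f(reference)) - 1).sum() - f(data).sum()
        loss.backward()
        opt.step()

    # Large values of t flag a departure of the data from the reference; in practice
    # its null distribution is calibrated with reference-only pseudo-experiments.
    t_obs = -2 * loss.item()
    print(f"t = {t_obs:.1f}")

The interesting questions are exactly the ones the abstract raises: how to choose and regularize the flexible model, how to fold in systematic uncertainties, and how to make the fit fast enough for LHC-sized samples.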

Past Seminars

Access recordings of past seminars (for IAIFI members only)

Spring 2024 Seminars

  • Susanne Yelin, Professor of Physics in Residence, Harvard University
    • Friday, February 2, 2024, 2:00–3:00 pm
    • Analog quantum machine learning for near-term hardware
    • Quantum neuromorphic computing is a subfield of quantum machine learning that capitalizes on inherent system dynamics. As a result, it can run on contemporary, noisy quantum hardware and is poised to realize challenging algorithms in the near term. I will show how a present-day programmable quantum simulator has all the features to allow the learning of several cognitive tasks, such as multitasking, decision-making, and memory, by taking advantage of several key features of such a platform. One key element yet to be added to such models is the characterization of the requisite dynamics for universal quantum neuromorphic computations. We address this issue by proposing a quantum perceptron, a simple mathematical model for a neuron that is the building block of various machine learning architectures, and demonstrate that it can realize universal quantum computations. The effectiveness of this architecture can then also be shown by applying it to, e.g., the calculation of inner products between quantum states, energy measurement, and quantum metrology.
    • Talk Slides (For IAIFI members only)
  • Michael Albergo, Postdoctoral Fellow, Courant Institute of Mathematical Sciences
    • Friday, February 23, 2024, 2:00–3:00 pm
    • Dynamical Measure Transport: Theory and Applications
    • I will discuss recent advances in generative modeling from the perspective of transport between probability densities. In particular, I will describe efforts to unify flow-based and diffusion-based generative modeling methods through a paradigm known as stochastic interpolants. Using these ideas, I will illustrate various ways to make generative models more performant for conventional tasks, such as image generation, while also highlighting new applications in domains such as scientific computing, for example in the probabilistic forecasting of dynamical systems. (An illustrative code sketch follows at the end of this list.)
    • Talk Slides (For IAIFI members only)
  • Alex Gagliano, Postdoctoral Fellow, IAIFI
    • Friday, March 8, 2024, 2:00–3:00 pm
    • Reconstructing the Lives (and Deaths) of Massive Stars through Scalable, Low-Latency Inference
    • The deaths of stars as supernovae sit at the nexus of multiple astrophysical domains: they trace the Universe’s expansion, reveal the chemical enrichment of galaxies, and unveil the terminal stages of stellar evolution. As supernova discovery rates surpass exponential scaling, population studies of even rare phenomena will soon become possible. In this talk, I will describe the unique statistical challenges that accompany this new data-rich, real-time analysis paradigm. I will motivate a shift away from event classification toward characterizing unusual phenomena with the help of physically meaningful latent spaces. I will conclude by discussing the prospects for supernova science with the Vera C. Rubin Observatory, and what the impending data deluge can teach us about the pre-explosion stellar system.
    • Talk Slides (For IAIFI members only)
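
As a companion to the Albergo abstract above, the sketch below shows the transport idea in its simplest form: define an interpolant between samples of a base density and a target density, regress a velocity field onto its time derivative, and integrate the learned field to move base samples onto the target. The 2D toy target, network, and Euler integrator are all illustrative choices, and only the deterministic special case is shown, not the full stochastic-interpolant framework from the talk.

    # Minimal sketch of learning a transport map between two densities via an interpolant.
    # Toy 2D example; all choices (target, network, integrator) are illustrative.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def sample_base(n):      # base density: standard Gaussian
        return torch.randn(n, 2)

    def sample_target(n):    # hypothetical target: two Gaussian blobs
        centers = torch.tensor([[-3.0, 0.0], [3.0, 0.0]])
        return centers[torch.randint(0, 2, (n,))] + 0.3 * torch.randn(n, 2)

    velocity = nn.Sequential(nn.Linear(3, 64), nn.SiLU(),
                             nn.Linear(64, 64), nn.SiLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

    for step in range(5000):
        x0, x1 = sample_base(256), sample_target(256)
        t = torch.rand(256, 1)
        xt = (1 - t) * x0 + t * x1        # linear interpolant between the two densities
        target_v = x1 - x0                # time derivative of the interpolant
        pred_v = velocity(torch.cat([xt, t], dim=1))
        loss = ((pred_v - target_v) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # Generation: push base samples along the learned velocity field (Euler steps).
    x = sample_base(1000)
    with torch.no_grad():
        for k in range(100):
            t = torch.full((1000, 1), k / 100)
            x = x + 0.01 * velocity(torch.cat([x, t], dim=1))

Roughly speaking, allowing noisy interpolants and more general time schedules is what lets this picture cover both flow-based and diffusion-based models, which is the unification the abstract refers to.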

Fall 2023 Seminars

  • Dmitrii Kochkov, Research Engineer, Google
    • Friday, October 27, 2023, 2:00–3:00 pm
    • Neural General Circulation Models for weather & climate
    • In recent years, there have been significant advancements in data-driven approaches to global weather forecasting that have demonstrated accuracy competitive with modern operational systems. While current state-of-the-art ML approaches achieve lower errors at medium-range lead-times, physics-based models like ECMWF’s HRES/ENS feature superior physical consistency and better forecast accuracy at longer lead-times. In this talk I’ll describe our ongoing research effort where we are developing a hybrid atmospheric model based on a differentiable dynamical core with a neural network representation of parameterized physics, trained end-to-end. Specifically, I’ll discuss the rationale behind our model formulation and show preliminary results on accuracy, physical consistency and emergent long-term atmospheric phenomena.
    • Talk Slides (For IAIFI members only)
  • Mike Douglas, Research Scientist, Harvard CMSA
    • Friday, November 17, 2023, 3:00–4:00 pm (Note later time than usual)
    • Interactive Theorem Proving: an overview for scientists
    • Interactive theorem proving (ITP) is a technique for expressing mathematical theorems in a formal language and verifying their correctness by computer. Modern ITP systems such as Coq, Lean and Isabelle can express any rigorous mathematical statement, and a trained user can write verifiable proofs which are around four times longer than ‘informal’ proofs. Much of undergraduate-level and some graduate-level mathematics has been formalized, and ITP is starting to help with the development of cutting-edge mathematics. Meanwhile, AI researchers are making steady progress on the original AI challenge problem of automatically discovering and proving mathematical theorems, now with the help of large language models. This talk is a colloquium-level survey of these topics.
    • Talk Slides (For IAIFI members only)
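
To make the interactive-theorem-proving abstract above concrete, here are a few toy statements written in Lean 4. They are illustrative examples, not ones from the talk; Lean accepts the file only if every proof type-checks against its statement.

    -- Toy machine-checked statements in Lean 4 (illustrative, not from the talk).
    -- The kernel accepts the file only if every proof term type-checks.

    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b        -- reuse a library lemma as the proof term

    example (n : Nat) : n + 0 = n :=
      rfl                     -- holds by definitional computation

    example (a b k : Nat) (h : a = b) : a + k = b + k := by
      rw [h]                  -- a small tactic proof: rewrite with the hypothesis

The roughly four-fold length overhead mentioned in the abstract appears once statements stop being one-liners: each informal step has to be spelled out until the kernel can check it.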

Spring 2023 Seminars

  • Denis Boyda, Fellow, IAIFI
    • Friday, April 28, 2023
    • Normalizing Flows in Lattice QCD
    • Lattice Quantum Chromodynamics (QCD) is the most established method for simulating physical processes governed by strong interactions in a systematically improvable manner. It is currently being used to precisely examine the Standard Model of particle physics. For instance, recent inputs from lattice QCD have resolved disagreements on computations of the magnetic moment of the muon. However, despite its advantages, lattice QCD encounters certain issues, and debates persist regarding reliable estimates of the systematic uncertainties on its predictions. During this talk, I will elaborate on some recent machine learning developments that aim to overcome the existing difficulties in lattice QCD. Specifically, I will highlight the progress made in developing expressive normalizing-flow models and discuss some interesting theoretical advancements and practical strategies. (An illustrative code sketch follows at the end of this list.)
    • Talk Slides (For IAIFI members only)
  • Brian Nord, Research Scientist, Fermilab
    • Friday, March 24, 2023, 3:00–4:00 pm
    • How do we build trustworthy AI models for physics?
    • Machine learning (ML) has begun to permeate many aspects of the scientific cycle in physics – data analysis, simulation, experiment design, and even hypothesis generation. However, which aspects of ML are we prepared to trust for our science applications? I will discuss some of the methods that are used for addressing trustworthiness, especially in the context of astrophysics, ranging from uncertainty quantification to domain adaptation. I’d also like to hear from you about what you think is needed to make ML useful and safe enough to perform discovery science.
    • Talk Slides (For IAIFI members only)
  • Carolina Cuesta-Lazaro, Fellow, IAIFI
    • Friday, February 24, 2023, 2:00–3:00 pm
    • Will Machine Learning shape the future of cosmology? A journey through obstacles and potential approaches to overcoming them
    • Large three-dimensional maps of the Universe are a rich source of information on how structure forms and grows in the Universe. From them we can infer the rate at which large structures grow and shed light on the nature of the accelerated expansion. But we can also accurately measure the composition of the Universe, including the mass of neutrinos, test the nature of inflation, and understand how galaxies form in interaction with the cosmic web. However, existing methods of summarizing this noisy and complex information using N-point statistics may miss important details that could improve our understanding of cosmology, gravity, and galaxy formation. In this talk, we will explore the potential of machine learning to extract additional insights from spectroscopic surveys, focusing on the challenges that must be addressed to fully leverage this powerful tool. Whether you are interested in cosmology, machine learning, or both, this talk will provide insights into the intersection of these two exciting fields.
    • Talk Slides (For IAIFI members only)
  • Special Seminar: Eun-Ah Kim, Professor, Cornell University
    • Friday, January 27, 2023, 2:00–3:00 pm
    • Machine Learning Quantum Emergence
    • Decades of effort in improving computing power and experimental instrumentation were driven by our desire to better understand the complex problem of quantum emergence. The resulting ‘data revolution’ presents new challenges. I will discuss how these challenges can be embraced and turned into opportunities through machine learning. The scientific questions in the field of electronic quantum matter require fundamentally new approaches to data science for two reasons: (1) quantum mechanics restricts our access to information, and (2) inference from data should be subject to the fundamental laws of physics. Hence machine learning quantum emergence requires the collective wisdom of data science and condensed matter physics. I will present my group’s results on machine-learning-based analysis of complex data and the resulting new insights.
    • Talk Slides (For IAIFI members only)
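
Referring back to the Boyda abstract at the top of this list, the sketch below shows the basic flow-based sampling setup for a toy scalar "lattice" theory: train an affine-coupling normalizing flow by reverse KL so that its samples approximately follow exp(-S(phi)). The action, lattice size, and architecture are illustrative toys, far from the gauge-equivariant flows actually used for lattice QCD.

    # Toy sketch of flow-based sampling for a lattice-like theory.
    # Everything here (action, lattice size, architecture) is illustrative only.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    L = 8  # number of lattice sites

    def action(phi):  # toy phi^4 action with nearest-neighbour coupling
        kinetic = ((phi - phi.roll(1, dims=1)) ** 2).sum(dim=1)
        potential = (-phi ** 2 + 0.5 * phi ** 4).sum(dim=1)
        return kinetic + potential

    class Coupling(nn.Module):
        def __init__(self, mask):
            super().__init__()
            self.register_buffer("mask", mask)
            self.net = nn.Sequential(nn.Linear(L, 64), nn.Tanh(), nn.Linear(64, 2 * L))
        def forward(self, x):  # returns transformed x and log|det J|
            frozen = x * self.mask
            s, t = self.net(frozen).chunk(2, dim=1)
            s = torch.tanh(s) * (1 - self.mask)   # only update the unmasked sites
            t = t * (1 - self.mask)
            y = frozen + (1 - self.mask) * (x * torch.exp(s) + t)
            return y, s.sum(dim=1)

    masks = [torch.tensor([float(i % 2 == p) for i in range(L)]) for p in (0, 1)] * 2
    flow = nn.ModuleList([Coupling(m) for m in masks])
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

    for step in range(5000):
        z = torch.randn(256, L)
        logq = -0.5 * (z ** 2).sum(dim=1)          # base log-density, up to a constant
        x = z
        for layer in flow:
            x, logdet = layer(x)
            logq = logq - logdet
        loss = (logq + action(x)).mean()           # reverse KL, up to a constant
        opt.zero_grad(); loss.backward(); opt.step()

    # Samples from the trained flow can then be reweighted or accepted/rejected with
    # importance weights proportional to exp(-action(x) - logq).

The reweighting or accept/reject step at the end is what keeps estimates asymptotically exact even when the flow is imperfect, which is why this approach is attractive for lattice field theory.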

Fall 2022 Seminars

  • Aleksander Madry, Professor, MIT
    • Friday, December 16, 2022, 2:00-3:00 pm
    • What (and how) the ML models learn
    • The training data that modern machine learning models ingest has a major impact on these models’ performance (as well as their failures). Yet, this impact tends to be neither fully appreciated nor understood at a fine-grained enough level. In this talk, we will discuss some of the key ways in which training data influences not only what but also how models “learn”, as well as tools to dissect this influence. In particular, we will present a new framework—called datamodeling—for directly casting predictions as functions of training data and the corresponding model class. This framework enables us to perform a range of model-class-driven data analyses, including discovery of subpopulations, quantifying the brittleness of model predictions, and diagnosing other shortcomings of the training set.
    • Talk Slides (For IAIFI members only)
  • Abiy Tasissa, Professor, Tufts
    • Friday, October 28, 2022, 2:00-3:00 pm
    • Geometric sparse coding with learned archetypes: Theory and applications
    • Given a set of data points, archetypal analysis is a method which represents each data point as a convex combination of exemplars called ‘archetypes’. The benefit of this analysis is the interpretability of the archetypes, along with the information that can be gleaned from the representation coefficients. We propose a method that combines manifold learning and archetypal analysis by positing that each data point can be written as a convex combination of nearby landmarks. To encourage representing a data point via nearby landmarks, we propose a locality regularizer. We discuss how this regularizer relates to graph matching, K-means, and Laplacian smoothness. Under the assumption that the data is exactly generated from the vertices of a Delaunay triangulation, the proposed regularizer exactly recovers the underlying sparse solution. Moreover, for fixed representation coefficients, we show that the optimal landmarks can be computed in closed form. To solve the optimization problem of finding the coefficients and the landmarks, we use algorithm unrolling to derive a neural network that efficiently solves the problem. We discuss how the sparse embeddings derived from our algorithm can be used for downstream tasks such as clustering. (An illustrative code sketch follows at the end of this list.)
    • Talk Slides (For IAIFI members only)
  • Special Seminar: Kaća Bradonjić, Professor, Hampshire College
    • Friday, October 21, 2022, 2:00-3:00 pm
    • Return to the phenomenal: An exploration of the subjective, internal representations of physical theories and their relation to the collective pursuits of knowledge
    • Physicists study the physical world on spatial, temporal and complexity scales inaccessible through ordinary human perception. How, then, does a person ground their understanding of physics at these scales in the sensory impressions and emotional states made possible by their body? In relation to the field as a whole, we can also ask: To what extent and how does a physicist’s research methodology (theoretical, experimental, or computational) shape the features of the mental models they use in their work? Conversely, to what extent and how do a physicist’s mental models affect the way they approach and engage with a research problem? Finally, what is the nature of the dynamic relation between the individual mental models and their collectively-accepted representational counterparts, and how does it impact physics research? In this talk, I will sketch out the framing of my approach to these questions that integrates artistic and intellectual practices, and is informed by the history and philosophy of science, theories of embodied cognition, and philosophy of phenomenology. I will then describe the exploratory stages of my first project in this vein, carried out at IAIFI, with a particular focus on the role of AI in particle physics research.
    • Talk Slides (For IAIFI members only)
  • Jessie Micallef, Fellow, IAIFI
    • Friday, September 30, 2022, 2:00-3:00 pm
    • Neutrinos and Neural Networks: Need for Speed and Adaptability
    • Neutrinos remain elusive and intriguing fundamental particles that are useful for probing inconsistencies of the Standard Model: neutrinos have mass when the Standard Model predicts they should not, they potentially exhibit charge-parity violation, and there are possible hints of a fourth, sterile neutrino flavor. Data from neutrino detectors is particularly valuable due to the neutrinos’ weakly interacting nature, so it is crucial that we maximize the information per detected interaction. In this talk, I will show how we are using machine learning to better analyze the precious data from various types of neutrino detectors. I will discuss optimizing convolutional neural networks (CNNs) to reconstruct GeV-scale neutrino events in the IceCube detector and how these measurements can help improve our understanding of these difficult-to-detect particles. I will focus on the challenges of reconstructing sparse, noisy neutrino events along with the speedup advantages of using machine learning methods. I will also touch on challenges that machine learning reconstructions face with the current and next generation of neutrino experiments, which will leverage liquid argon (LAr) detectors that use charge and light to record neutrino interactions.
    • Talk Slides (For IAIFI members only)
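
Referring back to the Tasissa abstract above, the sketch below illustrates the representation it describes: each point is written as a convex combination of landmarks, and a locality penalty pushes the weight onto nearby landmarks. The softmax parameterization, the fixed landmarks, and plain gradient descent are illustrative stand-ins; the talk's method also learns the landmarks (in closed form) and solves for the coefficients with an unrolled network.

    # Toy sketch of locality-regularized convex coding over landmarks.
    # Data, landmark choice, penalty strength, and optimizer are illustrative only.
    import torch

    torch.manual_seed(0)
    X = torch.randn(200, 2)                      # hypothetical data points
    landmarks = X[torch.randperm(200)[:20]]      # fixed landmarks drawn from the data
    lam = 0.1                                    # locality regularization strength

    theta = torch.zeros(200, 20, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=0.05)
    dist2 = torch.cdist(X, landmarks) ** 2       # squared point-to-landmark distances

    for step in range(500):
        W = torch.softmax(theta, dim=1)          # rows are convex-combination weights
        recon = W @ landmarks                    # each point as sum_j w_ij * landmark_j
        loss = ((X - recon) ** 2).sum(dim=1).mean() \
               + lam * (W * dist2).sum(dim=1).mean()   # reconstruction + locality
        opt.zero_grad(); loss.backward(); opt.step()

    W = torch.softmax(theta, dim=1).detach()     # interpretable, locality-biased coefficients

The effect of the locality term shows up directly in W: weight concentrates on landmarks close to each point, which is what makes the coefficients sparse-ish and interpretable.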

Spring 2022 Seminars

  • Anna Golubeva, Fellow, IAIFI
    • Friday, March 11, 2022, 2:00-3:00 pm
    • The role of symmetry in machine learning
    • In physics, symmetry is a concept of fundamental importance. It has served as a powerful guiding principle that allows us to find regularities in complex phenomena and to deduce the underlying simple laws of nature. Can we leverage the principle of symmetry to gain insights into machine learning? There are three separate but interconnected parts of an ML system where we could look for symmetries: the neural network architecture, the input data, and the loss function. I will give an overview of the existing research on this topic and discuss the implications for practical ML.
    • Talk Slides (For IAIFI members only)
  • Boaz Barak, Professor, Computer Science, Harvard
    • Friday, April 8, 2022, 2:00-3:00 pm
    • Deep learning, generalization, and rationality
    • Deep learning often operates in a regime where traditional generalization bounds fail to hold, and indeed are not even true, in the sense that there is a non-vanishing gap between empirical and population performance. Yet, deep neural networks still generalize and perform well beyond their training set. In this talk we will present: (1) Empirical evidence that deep networks have similar internal representations regardless of whether they are trained in the traditional ‘fully supervised’ manner or trained with a ‘self-supervised + simple’ (SSS) method, where all but their last layer are trained without access to the labels; (2) Empirical evidence that, for SSS algorithms, generalization holds in practice, along with a theoretical bound on the generalization gap of such algorithms which is non-vacuous in several practical settings. The bound does not make structural or conditional independence assumptions on the training distribution, but rather assumes the algorithm is ‘rational’ in a certain precise sense, which is empirically shown to hold in practice.
    • Talk Slides (For IAIFI members only)
  • Siddharth Mishra-Sharma, Fellow, IAIFI
    • Friday, April 22, 2022, 4:00-5:00 pm
    • Flows for inference and interpretability: a Galactic Center Excess case study
    • The source of the so-called Galactic Center Excess (GCE)—an excess of gamma-rays observed from the central regions of the Milky Way—remains an open question. Disentangling the various possibilities, such as annihilating dark matter and astrophysical point sources, is a challenging modeling and inference task. I will describe some recent attempts at making progress in this direction by leveraging neural simulation-based inference techniques. Time permitting, I will describe some ongoing work using generative modeling as a test of robustness of neural network-based inference methods in the context of the GCE.
    • Talk Slides (For IAIFI members only)
  • Special Seminar: Junyu Liu, Researcher, University of Chicago/IBM
    • Wednesday, June 1, 2022, 2:30-3:30 pm
    • An analytic theory for the dynamics of wide quantum neural networks
    • Parametrized quantum circuits can be used as quantum neural networks and have the potential to outperform their classical counterparts when trained to address learning problems. To date, many of the results on their performance on practical problems are heuristic in nature. In particular, the convergence rate for the training of quantum neural networks is not fully understood. Here, we analyze the dynamics of gradient descent for the training error of a class of variational quantum machine learning models. We define wide quantum neural networks as parameterized quantum circuits in the limit of a large number of qubits and variational parameters. We then find a simple analytic formula that captures the average behavior of their loss function and discuss the consequences of our findings. For example, for random quantum circuits, we predict and characterize an exponential decay of the residual training error as a function of the parameters of the system. We finally validate our analytic results with numerical experiments.
    • Talk Slides (For IAIFI members only)
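
For readers who want the flavor of the result in the Liu abstract above, the block below sketches the generic linearized ("lazy-training") argument that produces an exponential decay of the residual training error in wide models. It is a classical, NTK-style caricature written for illustration, not the talk's quantum-circuit derivation.

    % Generic lazy-training sketch (illustrative; not the talk's derivation).
    % Linearize the model around its initialization \theta_0 and run gradient flow
    % on the squared loss L = \tfrac12 \sum_i r_i^2, with residuals r_i = f(x_i;\theta) - y_i:
    \begin{align}
      f(x;\theta) &\approx f(x;\theta_0)
        + \nabla_\theta f(x;\theta_0) \cdot (\theta - \theta_0), \\
      \frac{\mathrm{d}r}{\mathrm{d}t} &= -\eta\, K\, r, \qquad
        K_{ij} = \nabla_\theta f(x_i;\theta_0) \cdot \nabla_\theta f(x_j;\theta_0), \\
      r(t) &= e^{-\eta K t}\, r(0), \qquad
        L(t) = \tfrac12 \sum_i e^{-2\eta \lambda_i t}\, \bigl(v_i^\top r(0)\bigr)^2,
    \end{align}
    % where (\lambda_i, v_i) are the eigenpairs of the kernel matrix K.

In the talk's setting the analogue of K is built from the parametrized quantum circuit, and the claim is that in the wide limit (many qubits and variational parameters) a similarly simple formula captures the average training dynamics, including the exponential decay of the residual error for random circuits.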

Fall 2021 Seminars

  • Fabian Ruehle, Assistant Professor, Northeastern University
    • Friday, September 24, 2021, 2:00-3:00 pm
    • Learning metrics in extra dimensions
    • String theory is a very promising candidate for a fundamental theory of our universe. An interesting prediction of string theory is that spacetime is ten-dimensional. Since we only observe four spacetime dimensions, the extra six dimensions are small and compact, thus evading detection. These extra six-dimensional spaces, known as Calabi-Yau spaces, are very special and elusive. They are equipped with a special metric needed to make string theory consistent. This special property is given in terms of a (notoriously hard) type of partial differential equation. While we know, thanks to the heroic work of Calabi and Yau, that this PDE has a unique solution and hence that the metric exists, we neither know what it looks like nor how to construct it explicitly. However, the metric is an important quantity that enters many physical observables, e.g., particle masses. Thinking of the metric as a function that satisfies three constraints that enter the Calabi-Yau theorem, we can parameterize the metric as a neural network and formulate the problem as multiple continuous optimization tasks. The neural network is trained (akin to self-supervision) by sampling points from the Calabi-Yau space and imposing the constraints entering the theorem as customized loss functions. (An illustrative code sketch follows at the end of this list.)
    • Talk Slides (For IAIFI members only)
  • Di Luo, Fellow, IAIFI
    • Friday, October 8, 2021, 2:00-3:00 pm
    • Machine Learning for Quantum Many-body Physics
    • The study of quantum many-body physics plays a crucial role across condensed matter physics, high-energy physics, and quantum information science. Due to the exponentially growing nature of Hilbert space, challenges arise for exact classical simulation of high-dimensional wave functions, which are the core objects in quantum many-body physics. A natural question is whether machine learning, which is powerful for processing high-dimensional probability distributions, can provide new methods for studying quantum many-body physics. In contrast to standard high-dimensional probability distributions, the wave function further exhibits complex phase structure and rich symmetries besides high dimensionality. This opens up a series of interesting questions for high-dimensional optimization, sampling, and representation imposed by quantum many-body physics. In this talk, I will discuss recent advances in the field and present (1) neural network representations for quantum states with fermionic anti-symmetry and gauge symmetries; (2) neural network simulations for ground states and real-time dynamics in condensed matter physics, high-energy physics, and quantum information science; (3) quantum control protocol discovery with machine learning.
    • Talk Slides (For IAIFI members only)
  • Cengiz Pehlevan, Assistant Professor, Applied Mathematics, Harvard University (SEAS)
    • Friday, October 22, 2021, 2:00-3:00 pm
    • Inductive bias of neural networks
    • A learner’s performance depends crucially on how its internal assumptions, or inductive biases, align with the task at hand. I will present a theory that describes the inductive biases of neural networks in the infinite width limit using kernel methods and statistical mechanics. This theory elucidates an inductive bias to explain data with ‘simple functions’ which are identified by solving a related kernel eigenfunction problem on the data distribution. This notion of simplicity allows us to characterize whether a network is compatible with a learning task, facilitating good generalization performance from a small number of training examples. I will present applications of the theory to deep networks (at finite width) trained on synthetic and real datasets, and recordings from the mouse primary visual cortex. Finally, I will briefly present an extension of the theory to out-of-distribution generalization.
    • Talk Slides (For IAIFI members only)
  • Bryan Ostdiek, Postdoctoral Fellow, Theoretical Particle Physics, Harvard University
    • Friday, November 5, 2021, 2:00-3:00 pm
    • Lessons from the Dark Machines Anomaly Score Challenge
    • With LHC experiments producing strong exclusion bounds on theoretical new physics models, there has been recent interest in model-agnostic methods to search for physics beyond the Standard Model. The Dark Machines group conducted a ‘challenge’ as an open playground to examine unsupervised anomaly detection methods on simulated collider events. In this discussion, I will briefly motivate and introduce anomaly detection, along with the public data set. We found that the methods which performed best across a wide range of signals shared a common feature: the metric for determining how anomalous an event is depends only on how the event can be encoded into a small representation; there is no decoding step. The discussion will start with speculations about why the ‘fixed target’ encoding can work and look toward future tests.
    • Talk Slides (For IAIFI members only)
  • Tess Smidt, Assistant Professor, EECS, MIT
    • Friday, November 19, 2021, 2:00-3:00 pm
    • Unexpected properties of symmetry equivariant neural networks
    • Physical data and the way they are represented contain rich context, e.g., symmetries, conserved quantities, and experimental setups. There are many ways to imbue machine learning models with this context (e.g., input representation, training schemes, constraining model structure), and each varies in its flexibility and robustness. In this talk, I’ll give examples of some surprising consequences of what happens when we impose constraints on the functional forms of our models. Specifically, I’ll discuss properties of Euclidean Neural Networks, which are constructed to preserve 3D Euclidean symmetry. Perhaps unsurprisingly, symmetry-preserving algorithms are extremely data-efficient; they are able to achieve better results with less training data. More unexpectedly, Euclidean Neural Networks also act as ‘symmetry-compilers’: they can only learn tasks that are symmetrically well-posed, and they can also help uncover when symmetry implies that information is missing. I’ll give examples of these properties and how they can be used to craft useful training tasks for physical data. To conclude, I’ll highlight some open questions in symmetry-equivariant neural networks particularly relevant to representing physical systems.
    • Talk Slides (For IAIFI members only)
  • Harini Suresh, PhD Student, Computer Science, MIT
    • Friday, December 3, 2021, 2:00-3:00 pm
    • Understanding Sources of Harm throughout the Machine Learning Life Cycle
    • As machine learning increasingly affects people and society, awareness of its potential harmful effects has also grown. To anticipate, prevent, and mitigate undesirable downstream consequences, it’s important that we understand when and how harm might be introduced throughout the ML life cycle. This talk will walk through a framework that identifies seven distinct potential sources of downstream harm in machine learning, spanning the data collection, development, and deployment processes. It will also explore how different sources of harm might motivate different mitigation techniques.
    • Talk Slides (For IAIFI members only)
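
Referring back to the Ruehle abstract at the top of this list, the sketch below shows the generic "constraints as losses" recipe it describes: parameterize the unknown function with a neural network, sample points from the domain, and penalize violations of the defining equations. The toy constraint here is a simple 1D ODE with a known solution; the actual Calabi-Yau problem replaces it with the metric's defining PDE and the associated consistency conditions.

    # Generic sketch of "constraints as losses": learn an unknown function by
    # penalizing violations of its defining equations at sampled points.
    # Toy constraint: u'(x) = cos(x) with u(0) = 0, so u(x) = sin(x).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    u = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64),
                      nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(u.parameters(), lr=1e-3)

    for step in range(3000):
        x = (6.28 * torch.rand(256, 1)).requires_grad_(True)   # sample domain points
        ux = u(x)
        # Differentiate the network output with respect to its input.
        du = torch.autograd.grad(ux, x, torch.ones_like(ux), create_graph=True)[0]
        pde_loss = ((du - torch.cos(x)) ** 2).mean()            # constraint residual
        bc_loss = u(torch.zeros(1, 1)).pow(2).mean()            # boundary condition
        loss = pde_loss + bc_loss
        opt.zero_grad(); loss.backward(); opt.step()

The Calabi-Yau case is much harder because the constraints involve complex geometry rather than a single scalar ODE, but the training loop has the same shape as in the abstract: sample points, evaluate the constraint residuals as loss terms, and backpropagate.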

Spring 2021 Seminars