IAIFI Public Colloquia (Spring 2021)

In Spring 2021, our colloquium series featured IAIFI senior investigators, aiming to introduce you to some of the exciting research being carried out at our institute.

All times are Boston time. Please sign up for our mailing list to receive updates on future events.

You can also watch our colloquia live on YouTube.

  • Phiala Shanahan
    • Thursday, February 4, 11am-noon
    • “Ab-initio AI for first-principles calculations of the structure of matter”
    • YouTube Recording
    • Talk Slides, IAIFI Introduction Slides
    • Abstract: The unifying theme of IAIFI is “ab-initio AI”: novel approaches to AI that draw from, and are motivated by, aspects of fundamental physics. In this context, I will discuss opportunities for machine learning, in particular generative models, to accelerate first-principles lattice quantum field theory calculations in particle and nuclear physics. Particular challenges include incorporating complex (gauge) symmetries into model architectures and scaling models to the large number of degrees of freedom of state-of-the-art numerical studies. I will show the results of proof-of-principle studies that demonstrate that sampling from generative models can be orders of magnitude more efficient than traditional Hamiltonian/hybrid Monte Carlo approaches.
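The sampling strategy the abstract alludes to amounts to using a generative model as the proposal in an independence Metropolis sampler, with an accept/reject step that keeps the chain exact with respect to the target. A minimal one-dimensional sketch, using a fixed Gaussian as a stand-in for a trained model and a toy double-well action (both are illustrative assumptions, not the actual lattice setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D target: unnormalized density exp(-S(x)) with a double-well action.
def action(x):
    return x**4 - 2.0 * x**2  # illustrative only, not a lattice action

# Stand-in for a trained generative model: here just a fixed Gaussian.
def model_sample():
    return rng.normal(0.0, 1.2)

def model_logprob(x):
    return -0.5 * (x / 1.2) ** 2 - np.log(1.2 * np.sqrt(2.0 * np.pi))

def sample(n):
    """Independence Metropolis: accept/reject model proposals so the
    chain targets exp(-S(x)) exactly, as in flow-based sampling."""
    x = model_sample()
    out = []
    for _ in range(n):
        xp = model_sample()
        # log acceptance ratio: [-S(x') - log q(x')] - [-S(x) - log q(x)]
        log_a = (-action(xp) - model_logprob(xp)) - (-action(x) - model_logprob(x))
        if np.log(rng.uniform()) < log_a:
            x = xp
        out.append(x)
    return np.array(out)

samples = sample(5000)
```

Because proposals are drawn independently from the model rather than by integrating local dynamics, a well-trained proposal decorrelates the chain far faster than HMC; the accept/reject step is what makes the scheme exact rather than approximate.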
  • Pulkit Agrawal
    • Thursday, February 18, 12:30pm-1:30pm Note special time!
    • “Challenges in Real World Reinforcement Learning”
    • YouTube Recording
    • Talk Slides
    • Abstract: In recent years, reinforcement learning (RL) algorithms have achieved impressive results on many tasks. However, RL algorithms remain impractical for most real-world systems. In this talk, I will discuss some of the underlying challenges: (i) defining and measuring reward functions; (ii) data inefficiency; (iii) poor transfer across tasks. I will end by discussing some directions pursued in my lab to overcome these problems.
  • Max Tegmark
    • Thursday, March 4, 11am-noon
    • “ML-discovery of equations, conservation laws and useful degrees of freedom”
    • YouTube Recording
    • Talk Slides
    • Abstract: A central goal of physics is to discover mathematical patterns in data. For example, after four years of analyzing data tables on planetary orbits, Johannes Kepler started a scientific revolution in 1605 by discovering that Mars’ orbit was an ellipse. I describe how we can automate such tasks with machine learning and not only discover symbolic formulas accurately matching datasets (so-called symbolic regression), equations of motion and conserved quantities, but also auto-discover which degrees of freedom are most useful for predicting time evolution (for example, optimal generalized coordinates extracted from video data). The methods I present exploit numerous ideas from physics to recursively simplify neural networks, ranging from symmetries to differentiable manifolds, curvature and topological defects, and also take advantage of mathematical insights from knot theory and graph modularity.
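At its core, symbolic regression searches a space of candidate expressions and scores each one against the data. A deliberately tiny sketch with a hand-picked candidate basis and a noiseless toy law (the candidate set and the hidden formula are invented for illustration; real systems search far larger expression spaces):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, size=200)
y = 2.0 * x**2  # hidden "law" to rediscover

# Candidate expressions, each with one free constant c.
candidates = {
    "c*x":      lambda x, c: c * x,
    "c*x**2":   lambda x, c: c * x**2,
    "c*x**3":   lambda x, c: c * x**3,
    "c*exp(x)": lambda x, c: c * np.exp(x),
}

def fit(name, f):
    # Least-squares fit of the constant: c = <f1, y> / <f1, f1> with f1 = f(x, 1).
    f1 = f(x, 1.0)
    c = np.dot(f1, y) / np.dot(f1, f1)
    err = np.mean((f(x, c) - y) ** 2)
    return c, err

best_name = min(candidates, key=lambda k: fit(k, candidates[k])[1])
best_c, best_err = fit(best_name, candidates[best_name])
```

The physics-inspired methods in the talk make this search tractable at scale by first simplifying the problem (exploiting symmetries, separability, and learned coordinates) rather than enumerating expressions blindly as this toy does.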
  • Phil Harris
    • Thursday, March 18, 11am-noon
    • “Quick and Quirk with Quarks: Ultrafast AI with Real-Time Systems and Anomaly Detection for the LHC and Beyond”
    • YouTube Recording
    • Talk Slides
    • Abstract: With data rates rivaling 1 petabit/s, the LHC detectors produce some of the largest data streams in the world. If we were to save every collision to disk, we would exceed the world’s storage capacity by many orders of magnitude. As a consequence, we need to analyze the data in real time. In this talk, we will discuss new technology that allows us to deploy AI algorithms at ultra-low latencies to process information at the LHC at incredible speeds. Furthermore, we comment on how this can change the nature of real-time systems across many domains, including gravitational-wave astrophysics. In addition to real-time AI, we present new ideas on anomaly detection that build on recent developments in semi-supervised learning. We show that these ideas are quickly opening up possibilities for a new class of measurements at the LHC and beyond.
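One common pattern behind this kind of anomaly detection is to score events by their reconstruction error under a model of the background, so that events unlike anything in the background distribution score high. A minimal sketch using a linear autoencoder (equivalently, PCA) on synthetic data; the features, dimensions, and model choice are invented for illustration and are not the methods of the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "background" events: 5 correlated features per event.
A = rng.normal(size=(5, 2))
background = rng.normal(size=(2000, 2)) @ A.T + 0.05 * rng.normal(size=(2000, 5))

# A linear autoencoder is PCA: keep the top-2 principal components.
mu = background.mean(axis=0)
_, _, Vt = np.linalg.svd(background - mu, full_matrices=False)
V = Vt[:2].T  # shared encoder/decoder weights, shape (5, 2)

def anomaly_score(events):
    # Reconstruction error: large when an event leaves the background subspace.
    centered = events - mu
    recon = centered @ V @ V.T
    return np.sum((centered - recon) ** 2, axis=1)

bg_scores = anomaly_score(background)
# A random event far off the background subspace scores much higher.
odd_score = anomaly_score(mu + 5.0 * rng.normal(size=(1, 5)))
```

The appeal for real-time triggers is that the scoring step is a couple of small matrix multiplies, which is the kind of operation that maps naturally onto ultra-low-latency hardware.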
  • Doug Finkbeiner
    • Thursday, April 1, 11am-noon
    • “Beyond the Gaussian: A Higher-Order Correlation Statistic for the Interstellar Medium”
    • YouTube Recording
    • Talk Slides
    • Abstract: Our project to map Milky Way dust has produced 3-D maps of dust density and precise cloud distances, leading to the discovery of the structure known as the Radcliffe Wave. However, these advances have not yet allowed us to do something the CMB community takes for granted: read physical model parameters off of the observed density patterns. The CMB is (almost) a Gaussian random field, so the power spectrum contains the information needed to constrain cosmological parameters. In contrast, dust clouds and filaments require higher-order correlation statistics to capture relevant information. I will present a statistic based on the wavelet scattering transform (WST) that captures essential features of ISM turbulence in MHD simulations and maps them onto physical parameters. This statistic is lightweight (easy to understand and fast to evaluate) and provides a framework for comparing ISM theory and observation in a rigorous way.
  • Demba Ba
    • Thursday, April 15, 11am-noon
    • “Interpretable AI in Neuroscience: Sparse Coding, Artificial Neural Networks, and the Brain”
    • YouTube Recording
    • Talk Slides
    • Abstract: Sparse signal processing relies on the assumption that we can express data of interest as the superposition of a small number of elements from a typically very large set or dictionary. As a guiding principle, sparsity plays an important role in the physical principles that govern many systems, the brain in particular. Neuroscientists have demonstrated, for instance, that sparse dictionary learning applied to natural images explains early visual processing in the mammalian brain. Other examples abound, in seismic exploration, biology, and astrophysics, to name a few. In computer science, it has become apparent in the last few years that sparsity also plays an important role in artificial neural networks (ANNs). The ReLU activation function, for instance, arises from an assumption of sparsity on the hidden layers of a neural network. The current picture points to an intimate link between sparsity, ANNs, and the principles behind systems in many scientific fields. In this talk, I will focus on neuroscience. In the first part, I will show how to use sparse dictionary learning to design, in a principled fashion, ANNs for solving unsupervised pattern discovery and source separation problems in neuroscience. This approach leads to interpretable architectures with orders of magnitude fewer parameters than black-box ANNs, and it can more efficiently leverage the speed and parallelism offered by GPUs for scalability. In the second part, I will introduce a deep generalization of a sparse coding model that makes predictions as to the principles of hierarchical sensory processing in the brain. Using neuroscience as an example, I will make the case that sparse generative models of data, along with the deep ReLU networks associated with them, may provide a framework that utilizes deep learning, in conjunction with experiment, to guide scientific discovery.
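The ReLU-sparsity link the abstract mentions can be made concrete: one proximal-gradient (ISTA) step for nonnegative sparse coding is exactly a ReLU layer, z = max(Wx + b, 0), with weights set by the dictionary and a bias set by the sparsity penalty. A small sketch with a random dictionary (all quantities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Dictionary D and a sparse nonnegative code z_true generating data x = D z_true.
D = rng.normal(size=(20, 50))
z_true = np.zeros(50)
z_true[[3, 17, 41]] = [1.0, 2.0, 0.5]
x = D @ z_true

def relu(v):
    return np.maximum(v, 0.0)

# ISTA for nonnegative sparse coding: the proximal step for an L1 penalty
# with a nonnegativity constraint is max(v - step*lam, 0), i.e. a ReLU.
step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
lam = 0.1                               # sparsity penalty weight
W = step * D.T
b = -step * lam * np.ones(50)

z = relu(W @ x + b)  # first ISTA iterate from z = 0: literally a ReLU layer
for _ in range(200):
    # subsequent iterates reuse the same nonlinearity
    z = relu(z - step * (D.T @ (D @ z - x)) + b)
```

Unrolling a fixed number of these iterations and learning D from data is one principled way to obtain the kind of interpretable, few-parameter architectures the talk describes.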
  • Jim Halverson
    • Thursday, April 29, 11am-noon
    • “ML for Ab Initio Data: A Tour of Knots and Natural Language”
    • YouTube Recording
    • Talk Slides
    • Abstract: Most applications of ML involve experimental data collected on Earth, rather than theoretical or mathematical datasets that have ab initio definitions, such as groups, topological classes of manifolds, or chiral gauge theories. Such mathematical landscapes are characterized by a high degree of structure and truly big data, including continuous or discrete infinite sets, or finite sets larger than the number of particles in the visible universe. After introducing ab initio data, we will focus on one such dataset: mathematical knots, which admit a treatment using techniques from natural language processing via their braid representatives. Elements of knot theory will be introduced, including the use of ML for various problems that arise in it. For instance, reinforcement learning can be utilized to unknot complicated representations of trivial knots, similar to untangling headphones; we will also use transformers for the unknot decision problem and for the general topological problem.


IAIFI Internal Seminars (Spring 2021)

These talks are only open to IAIFI members and affiliates.

  • Justin Solomon
    • Thursday, February 11, 11am-noon
    • “Geometric Data Processing at MIT”
  • Phil Harris, Anjali Nambrath, Karna Morey, Michal Szurek, Jade Chongsathapornpong
    • Thursday, February 25, 11am-noon
    • “Open Data Science in Physics Courses”
  • Ge Yang
    • Thursday, March 11, 11am-noon
    • “Learning Task Informed Abstractions”
  • Christopher Rackauckas
    • Thursday, March 25, 11am-noon
    • “Overview of SciML”
  • George Barbastathis/Demba Ba
    • Thursday, April 8, 11am-noon
    • “On the Continuum between Dictionaries and Neural Nets for Inverse Problems”
  • David Kaiser
    • Thursday, April 22, 11am-noon
    • “Ethics and AI”
  • Alexander Rakhlin
    • Thursday, May 6, 11am-noon
    • “Deep Learning: A Statistical Viewpoint”
  • Edo Berger
    • Thursday, May 20, 11am-noon
    • “Machine Learning for Cosmic Explosions”

Previous Events

Kickoff Internal Events (Fall 2020)

In Fall 2020, we held two internal events to introduce IAIFI members to each other and identify research synergies.

  • IAIFI Fall 2020 Unconference
    • Monday, December 14, 2020, 2pm-5pm
    • Internal Meeting
  • IAIFI Fall 2020 Symposium
    • Monday, November 23, 2020, 2pm-5pm
    • Internal Meeting