## IAIFI Public Colloquia (Spring 2021)

In Spring 2021, our colloquium series will feature IAIFI senior investigators, aiming to introduce you to some of the exciting research being carried out at our institute.

All times are Boston time. Please sign up for our mailing list to receive Zoom links for these events.

You can also watch our colloquia live on YouTube.

**Phiala Shanahan**
*Thursday, February 4, 11am-noon*
*“Ab-initio AI for first-principles calculations of the structure of matter”*
- YouTube Recording
- Talk Slides, IAIFI Introduction Slides
- Abstract: The unifying theme of IAIFI is “ab-initio AI”: novel approaches to AI that draw from, and are motivated by, aspects of fundamental physics. In this context, I will discuss opportunities for machine learning, in particular generative models, to accelerate first-principles lattice quantum field theory calculations in particle and nuclear physics. Particular challenges include incorporating complex (gauge) symmetries into model architectures and scaling models to the large number of degrees of freedom of state-of-the-art numerical studies. I will show results of proof-of-principle studies demonstrating that sampling from generative models can be orders of magnitude more efficient than traditional Hamiltonian/hybrid Monte Carlo approaches in this context.

**Pulkit Agrawal**
*Thursday, February 18, 12:30pm-1:30pm* (*Note special time!*)
*“Challenges in Real World Reinforcement Learning”*
- Abstract: In recent years, reinforcement learning (RL) algorithms have achieved impressive results on many tasks. However, for most systems, RL algorithms remain impractical. In this talk, I will discuss some of the underlying challenges: (i) defining and measuring reward functions; (ii) data inefficiency; and (iii) poor transfer across tasks. I will end by discussing some directions pursued in my lab to overcome these problems.

**Max Tegmark**
*Thursday, March 4, 11am-noon*
*“ML-discovery of equations, conservation laws and useful degrees of freedom”*
- Abstract: A central goal of physics is to discover mathematical patterns in data. For example, after four years of analyzing data tables on planetary orbits, Johannes Kepler started a scientific revolution in 1605 by discovering that Mars’ orbit was an ellipse. I will describe how we can automate such tasks with machine learning, not only discovering symbolic formulas that accurately match datasets (so-called symbolic regression), equations of motion, and conserved quantities, but also auto-discovering which degrees of freedom are most useful for predicting time evolution (for example, optimal generalized coordinates extracted from video data). The methods I present exploit numerous ideas from physics to recursively simplify neural networks, ranging from symmetries to differentiable manifolds, curvature, and topological defects, and they also take advantage of mathematical insights from knot theory and graph modularity.
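The core idea of symbolic regression mentioned in the abstract can be illustrated with a toy brute-force search. This is not the method from the talk; the expression space, data, and coefficient range below are invented purely for illustration:

```python
import itertools
import numpy as np

# Toy symbolic regression: exhaustively search a tiny space of expressions
# a*f1(x) + b*f2(x) for one that reproduces the data exactly.
x = np.linspace(1, 5, 20)
y = x**2 + 2*x  # "hidden" ground truth the search should rediscover

basis = {"x": lambda t: t, "x^2": lambda t: t**2, "sin(x)": np.sin}
best_err, best_expr = np.inf, None
for (n1, f1), (n2, f2) in itertools.product(basis.items(), repeat=2):
    for a, b in itertools.product(range(-3, 4), repeat=2):
        err = np.max(np.abs(a*f1(x) + b*f2(x) - y))
        if err < best_err:
            best_err, best_expr = err, f"{a}*{n1} + {b}*{n2}"

print(best_expr, best_err)  # an exact match (error 0) is found
```

Real symbolic-regression systems replace this exhaustive enumeration with guided search over a vastly larger expression space, but the objective, finding a compact formula with near-zero fit error, is the same.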

**Phil Harris**
*Thursday, March 18, 11am-noon*
*“Quick and Quirk with Quarks: Ultrafast AI with Real-Time Systems and Anomaly Detection for the LHC and Beyond”*
- Abstract: With data rates rivaling 1 petabit/s, the LHC detectors have some of the largest data rates in the world. If we were to save every collision to disk, we would exceed the world’s storage capacity by many orders of magnitude. As a consequence, we need to analyze the data in real time. In this talk, we will discuss new technology that allows us to deploy AI algorithms at ultra-low latencies to process information at the LHC at incredible speeds. Furthermore, we comment on how this can change the nature of real-time systems across many domains, including gravitational-wave astrophysics. In addition to real-time AI, we present new ideas on anomaly detection that build on recent developments in semi-supervised learning. We show that these ideas are quickly opening up possibilities for a new class of measurements at the LHC and beyond.
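As a generic illustration of anomaly scoring (this is not the semi-supervised approach of the talk; the data and dimensions below are invented), a linear autoencoder built from PCA assigns high reconstruction-error scores to events that do not fit the structure learned from background data:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Background" events live near a low-dimensional subspace; anomalies do not.
background = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 10))
anomaly = rng.normal(size=(5, 10))  # no subspace structure

# Fit principal components on background only (no anomaly labels needed).
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
components = vt[:2]  # keep the top-2 directions

def score(events, mean, components):
    """Reconstruction error after projecting onto the learned subspace."""
    centered = events - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=1)

bg_scores = score(background, mean, components)
an_scores = score(anomaly, mean, components)
# Anomalies reconstruct poorly, so their scores are much larger than the
# background scores; thresholding the score flags candidate anomalies.
```

Practical LHC applications replace the linear projection with deep autoencoders or related models, and the real-time constraint discussed in the talk demands that the scoring step run at ultra-low latency.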

**Demba Ba**
*Thursday, April 15, 11am-noon*
*“Interpretable AI in Neuroscience: Sparse Coding, Artificial Neural Networks, and the Brain”*
- Abstract: Sparse signal processing relies on the assumption that we can express data of interest as the superposition of a small number of elements from a typically very large set, or dictionary. As a guiding principle, sparsity plays an important role in the physical principles that govern many systems, the brain in particular. Neuroscientists have demonstrated, for instance, that sparse dictionary learning applied to natural images explains early visual processing in the brains of mammals. Other examples abound in seismic exploration, biology, and astrophysics, to name a few. In computer science, it has become apparent in the last few years that sparsity also plays an important role in artificial neural networks (ANNs). The ReLU activation function, for instance, arises from an assumption of sparsity on the hidden layers of a neural network. The current picture points to an intimate link between sparsity, ANNs, and the principles behind systems in many scientific fields. In this talk, I will focus on neuroscience. In the first part, I will show how to use sparse dictionary learning to design, in a principled fashion, ANNs for solving unsupervised pattern discovery and source separation problems in neuroscience. This approach leads to interpretable architectures with orders of magnitude fewer parameters than black-box ANNs, and it can leverage the speed and parallelism of GPUs more efficiently for scalability. In the second part, I will introduce a deep generalization of a sparse coding model that makes predictions as to the principles of hierarchical sensory processing in the brain. Using neuroscience as an example, I will make the case that sparse generative models of data, along with the deep ReLU networks associated with them, may provide a framework that uses deep learning, in conjunction with experiment, to guide scientific discovery.
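The abstract’s claim that ReLU arises from a sparsity assumption can be checked concretely: ReLU is the proximal operator of a nonnegative ℓ1 (sparsity) penalty. A minimal numerical sketch (the function names and the value of λ here are illustrative, not from the talk):

```python
import numpy as np

def prox_nonneg_l1(x, lam):
    """Closed-form minimizer of 0.5*(z - x)**2 + lam*z subject to z >= 0."""
    return np.maximum(x - lam, 0.0)

def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(-3.0, 3.0, 101)
lam = 0.5

# The proximal step of the nonnegative sparsity penalty is a shifted ReLU,
# which is the sense in which ReLU "arises from an assumption of sparsity".
assert np.allclose(prox_nonneg_l1(x, lam), relu(x - lam))

# Sanity check against brute-force grid minimization of the same objective.
z = np.linspace(0.0, 5.0, 50001)
for xi in (-1.0, 0.4, 2.0):
    obj = 0.5*(z - xi)**2 + lam*z
    assert abs(z[np.argmin(obj)] - prox_nonneg_l1(xi, lam)) < 1e-3
```

In other words, one sparse-coding inference step with a nonnegativity constraint is exactly a bias shift followed by a ReLU, which is the link between sparse dictionary models and ReLU network layers that the talk builds on.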

## Related Public Events

Here are other organizations that hold public events relevant to the IAIFI community:

## IAIFI Internal Seminars (Spring 2021)

These talks are only open to IAIFI members and affiliates.

**Justin Solomon**
*Thursday, February 11, 11am-noon*
*“Geometric Data Processing at MIT”*

**Phil Harris, Anjali Nambrath, Karna Morey, Michal Szurek, Jade Chongsathapornpong**
*Thursday, February 25, 11am-noon*
*“Open Data Science in Physics Courses”*

**Ge Yang**
*Thursday, March 11, 11am-noon*
*“Learning Task Informed Abstractions”*

**Christopher Rackauckas**
*Thursday, March 25, 11am-noon*
*“Overview of SciML”*

**George Barbastathis/Demba Ba**
*Thursday, April 8, 11am-noon*
*“On the Continuum between Dictionaries and Neural Nets for Inverse Problems”*

**David Kaiser**
*Thursday, April 22, 11am-noon*
*“Ethics and AI”*

**Isaac Chuang**
*Thursday, May 6, 11am-noon*
*“Quantum Computation and Machine Learning”*

**Edo Berger**
*Thursday, May 20, 11am-noon*
*“Machine Learning for Cosmic Explosions”*

## Previous Events

### Kickoff Internal Events (Fall 2020)

In Fall 2020, we held two internal events to introduce IAIFI members to each other and identify research synergies.

**IAIFI Fall 2020 Unconference**
- Monday, December 14, 2020, 2pm-5pm
- Internal Meeting

**IAIFI Fall 2020 Symposium**
- Monday, November 23, 2020, 2pm-5pm
- Internal Meeting