The IAIFI Journal Club is open to IAIFI members and affiliates.
Upcoming Journal Clubs
 Darius Faroughy, Postdoctoral Associate, Rutgers University
 Tuesday, February 20, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Is flow matching an alternative to diffusion?
 We discuss flow matching (2210.02747), a recently proposed objective for training continuous normalizing flows, inspired by diffusion models. As a generative model, flow matching can produce state-of-the-art samples for images and other data representations. More interestingly, flow matching can be used to go beyond generative modeling by learning to approximate the optimal transport map between two arbitrary data distributions. The JC is meant to be an interactive blackboard talk discussing the method. At the end, I’ll flash a few slides illustrating its usefulness for generating jets as particle clouds (2310.00049).
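 A minimal sketch of the (linear-path) conditional flow-matching objective described above, assuming PyTorch; `VelocityNet`, the toy data, and all hyperparameters are illustrative stand-ins rather than the talk’s actual setup:

```python
import torch
import torch.nn as nn

# Hypothetical velocity network v_theta(x, t) for 2-D toy data.
class VelocityNet(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def flow_matching_loss(model, x1):
    """Regress the network onto the conditional velocity of a straight path."""
    x0 = torch.randn_like(x1)              # base (noise) sample
    t = torch.rand(x1.shape[0], 1)         # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1             # point on the interpolation path
    target = x1 - x0                       # velocity of that path
    return ((model(xt, t) - target) ** 2).mean()

model = VelocityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    x1 = 0.5 * torch.randn(256, 2) + 2.0   # toy "data" distribution
    loss = flow_matching_loss(model, x1)
    opt.zero_grad(); loss.backward(); opt.step()
```

 Sampling then amounts to integrating dx/dt = v(x, t) from t = 0 to t = 1 with any ODE solver, starting from base noise.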
 Jonas Rigo, Postdoc, Forschungszentrum Jülich GmbH
 Tuesday, February 27, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Is the ground state of Anderson’s impurity model a recurrent neural network?
 When the Anderson impurity model (AIM) is expressed in terms of a Wilson chain, it assumes a hierarchical renormalization-group structure that translates to a ground state with features like Friedel oscillations and the Kondo screening cloud [1]. Recurrent neural networks (RNNs) have recently gained traction in the form of Neural Quantum State (NQS) ansätze for quantum many-body ground states, and they are known to be able to learn such complex patterns [2]. We explore RNNs as an ansatz to capture the AIM’s ground state for a given Wilson chain length and investigate their capability to predict the ground state on longer chains for a converged ground state energy. [1] Affleck, Ian, László Borda, and Hubert Saleur. “Friedel oscillations and the Kondo screening cloud.” Physical Review B 77.18 (2008): 180404. [2] Hibat-Allah, Mohamed, et al. “Recurrent neural network wave functions.” Physical Review Research 2.2 (2020): 023358.
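 A toy version of the autoregressive RNN ansatz of [2], assuming PyTorch; the positivity restriction (real amplitudes, no phases) and all sizes are illustrative simplifications:

```python
import torch
import torch.nn as nn

# Wave-function amplitude built from conditional probabilities emitted
# site by site by a GRU, as in recurrent neural network wave functions.
class RNNWaveFunction(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.GRUCell(2, hidden)   # input: one-hot spin at previous site
        self.head = nn.Linear(hidden, 2)    # logits for spin down/up at this site

    def log_amplitude(self, spins):
        """spins: (batch, L) tensor of 0/1; returns log|psi| for a positive ansatz."""
        batch, L = spins.shape
        h = torch.zeros(batch, self.cell.hidden_size)
        x = torch.zeros(batch, 2)           # "start" token before the first site
        logp = torch.zeros(batch)
        for i in range(L):
            h = self.cell(x, h)
            log_probs = torch.log_softmax(self.head(h), dim=-1)
            logp = logp + log_probs.gather(1, spins[:, i:i + 1]).squeeze(1)
            x = nn.functional.one_hot(spins[:, i], 2).float()
        return 0.5 * logp                   # |psi|^2 = p  =>  log|psi| = logp / 2

psi = RNNWaveFunction()
print(psi.log_amplitude(torch.randint(0, 2, (4, 10))))
```

 Because the conditionals are normalized site by site, the same trained cell can be unrolled on longer chains, which is exactly the extrapolation question the talk investigates.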
 Kehang Zhu, Grad Student, Harvard
 CANCELED (will reschedule): Tuesday, February 13, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Agent-based modeling: Harnessing Large Language Models for Automated Exploration of Emergent Behaviors in Simulated Social Systems
 Two significant impediments to the success of the social sciences, in comparison to physics, are the inherent difficulty of rapidly executing multiple controlled experiments to explore a parameter space and of determining which parameter space to explore. In this work, we present a computational framework and platform that simulates the entire social scientific process, leveraging Large Language Models (LLMs) to study human actors within social systems. We create controlled environments, akin to toy models in physics, that systematically explore the parameter space of variables relevant to any social system (such as attributes of human actors), allowing for exponentially faster discovery of emergent social behaviors compared to traditional social science experimentation. Central to our approach is the automatic generation of Structural Causal Models (SCMs) that generate statistical correlations of potential interactions within a social system and outline the requisite metrics and tools to observe and measure these nonlinear dynamics. With the flexibility to vary controlled variables across a nearly infinite parameter space, our system offers a sandbox to simulate and analyze various social scenarios – from wage bargaining and auction mechanics to nuclear weapon negotiations. Our framework and platform offer a new playground for physicists to study the nonlinear dynamics and emergent phenomena in human social systems.
 Katherine Fraser, Graduate Student, Harvard University
 Tuesday, March 19, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Details to come
 Akshunna Dogra, Graduate Student, Imperial College London
 Tuesday, April 23, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Details to come
Past Journal Clubs
Fall 2023 Journal Clubs
 Tony Menzo, Graduate Assistant, University of Cincinnati
 Tuesday, September 19, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Towards a data-driven model of hadronization
 We will discuss recent and ongoing developments at the intersection of machine learning and simulated hadronization. Specifically, we’ll focus on some of the major challenges presented when attempting to build a data-driven model of hadronization that utilizes experimental data during training. Solutions to some of these challenges will be presented in the context of invertible neural networks or normalizing flows, including the introduction of a new paradigm that allows for the training of microscopic hadronization dynamics from macroscopic event-level observables.
 Slides (for IAIFI members only)
 Jeffrey Lazar, Graduate Student, Harvard
 Tuesday, September 26, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Open-Source Simulation and Machine Learning for Neutrino Telescopes
 In the last decade, the field of neutrino astronomy has made major strides, culminating in the definitive detection of galactic and extragalactic components of the astrophysical neutrino flux. We can now begin characterizing these astrophysical beams and pursuing new physics through them. Machine learning techniques have played an integral part in these recent advances, and while these current efforts have been impressive, it is clear that there is room to improve. This fact, along with the growing global network of neutrino telescopes, drives the need for open-source tools to make full use of available person power and avoid duplicating effort. In this talk I will present Prometheus, the first open-source, end-to-end simulation for neutrino telescopes. Furthermore, I will show a recent example of using Prometheus to develop machine learning techniques capable of running at typical neutrino telescope trigger rates.
 Slides (for IAIFI members only)
 Manos Theodosis, Graduate Student, Harvard
 Tuesday, October 3, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Learning Group Representations in Neural Networks
 Employing equivariance in neural networks leads to greater parameter efficiency via parameter sharing and improved generalization performance through the encoding of domain knowledge in the architecture; however, the majority of existing approaches require an a priori specification of the data symmetries. We present a neural network architecture, Group Representation Networks (GRNs), that learns symmetries on the weight space of neural networks without any supervision or knowledge of the hidden symmetries in the data. Beyond their interpretability, GRNs’ learned representations distill symmetries of the data domain and the downstream task, which are incorporated when training networks on different datasets. The key idea behind GRNs relates weights in neural networks via a cyclic action whose group representation depends on the data domain and is learned in an unsupervised manner. Our experiments underline the ability of GRNs to correctly recover symmetries in the data, show competitive performance when GRNs are used as a drop-in replacement for conventional layers, and highlight the ability to transfer learned representations across tasks and datasets. (A toy sketch of cyclic weight sharing follows this entry.)
 Slides (for IAIFI members only)
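 GRNs learn such weight-space symmetries from data; the hand-built toy below only illustrates the underlying mechanism, a cyclic (circulant) weight-sharing pattern that makes a linear layer commute with cyclic shifts. Assumes PyTorch; `CirculantLinear` is a hypothetical name:

```python
import torch
import torch.nn as nn

# One learnable row; every other row is a cyclic shift of it, so the layer
# computes a circular convolution and commutes with cyclic shifts.
class CirculantLinear(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n) / n ** 0.5)

    def forward(self, x):
        n = self.w.numel()
        idx = (torch.arange(n).unsqueeze(1) - torch.arange(n).unsqueeze(0)) % n
        W = self.w[idx]                     # W[i, j] = w[(i - j) mod n]
        return x @ W.T

layer = CirculantLinear(8)
x = torch.randn(3, 8)
shift = lambda v: torch.roll(v, 1, dims=-1)
# Equivariance check: shifting the input cyclically shifts the output.
print(torch.allclose(layer(shift(x)), shift(layer(x)), atol=1e-5))
```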
 Andy Jin, Graduate Student, Harvard University
 Tuesday, October 17, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Two Watts is All You Need: Low-Power Machine Learning on TPU for Neutrino Telescopes
 In neutrino experiments, machine learning methods have boosted our ability to extract physics discoveries from a given experimental hardware setup. Upcoming upgrades, new telescopes, and new experimental hardware promise more statistics as well as more complicated data signals, which calls for an upgrade on the software side as well, to handle the more complex data more efficiently. Specifically, we need low-power and fast methods to achieve real-time signal processing, since current machine-learning-based methods are too expensive to deploy in the commonly power-restricted regions where these experiments are located. In this talk, I will present a first attempt at, and proof of concept for, deploying machine learning methods live in underwater/ice neutrino telescopes via quantization and deployment on Tensor Processing Units (TPUs). We use an LSTM-based recurrent neural network with residual convolution-based data encoding, combined with specifically tailored data preprocessing and quantization-aware training methods, for deployment on the Google Edge TPU. This algorithm achieves state-of-the-art angular resolution in reconstruction with a real-time inference frequency of 100 Hz per Watt on a TPU accelerator at only 2 Watts of power consumption. This opens up a world of opportunities to integrate machine learning capacity into detectors and electronics, even in the most power-restricted environments. (A minimal quantization-aware-training sketch follows this entry.)
 Slides (for IAIFI members only)
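 A generic sketch of the fake-quantization trick at the heart of quantization-aware training for integer accelerators like the Edge TPU, assuming PyTorch; `fake_quantize` is a hypothetical helper, not the talk’s actual pipeline:

```python
import torch

def fake_quantize(x, num_bits=8):
    """Simulate uint8 quantization in the forward pass only."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    deq = (q - zero_point) * scale
    # Straight-through estimator: forward sees quantized values,
    # backward passes gradients through unchanged.
    return x + (deq - x).detach()

w = torch.randn(16, requires_grad=True)
loss = (fake_quantize(w) ** 2).sum()
loss.backward()   # gradients flow despite the rounding in the forward pass
print(w.grad is not None)
```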
 Ryan Raikman, Undergraduate, Carnegie Mellon University (currently working with MIT LIGO)
 Tuesday, October 24, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 GWAK: Gravitational Wave Anomalous Knowledge with Recurrent Autoencoders
 Matched-filtering detection techniques for gravitational-wave (GW) signals in ground-based interferometers rely on having well-modeled templates of the GW emission. Such techniques have been traditionally used in searches for compact binary coalescences (CBCs), and have been employed in all known GW detections so far. However, interesting science cases aside from compact mergers do not yet have accurate enough modeling to make matched filtering possible, including core-collapse supernovae and sources where stochasticity may be involved. Therefore, the development of techniques to identify sources of these types is of significant interest. In this paper, we present a method of anomaly detection based on deep recurrent autoencoders to extend the search region to unmodeled transients. We use a semi-supervised strategy that we name Gravitational Wave Anomalous Knowledge (GWAK). While the semi-supervised nature of the problem comes with a cost in terms of accuracy as compared to supervised techniques, there is a qualitative advantage in generalizing experimental sensitivity beyond precomputed signal templates. We construct a low-dimensional embedded space using the GWAK method, capturing the physical signatures of distinct signals on each axis of the space. By introducing signal priors that capture some of the salient features of GW signals, we allow for the recovery of sensitivity even when an unmodeled anomaly is encountered. We show that regions of the GWAK space can identify CBCs, detector glitches, and also a variety of unmodeled astrophysical sources. (A bare-bones recurrent-autoencoder sketch follows this entry.)
 Slides (for IAIFI members only)
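 A bare-bones recurrent autoencoder for anomaly detection in the spirit of GWAK, assuming PyTorch; the architecture, toy data, and implicit thresholding are illustrative only:

```python
import torch
import torch.nn as nn

# Trained to reconstruct "known" waveforms; a large reconstruction error
# on new data then flags a potential anomaly.
class RecurrentAutoencoder(nn.Module):
    def __init__(self, n_features=2, latent=8):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, n_features, batch_first=True)

    def forward(self, x):                   # x: (batch, time, features)
        z, _ = self.encoder(x)
        recon, _ = self.decoder(z)
        return recon

model = RecurrentAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
train = torch.sin(torch.linspace(0, 20, 200)).repeat(32, 2, 1).transpose(1, 2)
for _ in range(200):                        # train on "known" signals
    loss = ((model(train) - train) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

anomaly = torch.randn(1, 200, 2)            # unmodeled transient
print(((model(anomaly) - anomaly) ** 2).mean() > loss)   # higher error: flagged
```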
 Thorsten Glüsenkamp, Postdoc, Uppsala University
 Tuesday, October 31, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Conditional normalizing flows for IceCube event reconstruction
 In this seminar, I will talk about normalizing flows (NFs), in particular about the types that are useful for high-energy neutrino event reconstruction in IceCube. First, I will give an introduction that focuses on essentially two different classes of flows which have quite a citation disparity in the literature: 1) normalizing flows in high dimensions (D ≳ 100), which typically have high citation counts, and 2) normalizing flows in low dimensions (D = 1–100), which are typically cited less frequently. I discuss the reasons why I think this latter, often less well-known class is particularly useful for high-energy physicists, and then briefly review two examples of that class: specific Gaussianization flows (2003.01941) and exponential-map flows (0906.0874, 2002.02428). Finally, I discuss a recent application of these particular flows as conditional NFs for neutrino event reconstruction in the IceCube detector (2309.16380).
 Slides (for IAIFI members only)
 Jose Miguel Munoz Arias, Graduate Student, MIT
 Tuesday, November 7, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 lie-nn: Pioneering Lie G-equivariant Neural Networks for Cross-Domain Scientific Applications
 This talk explores a novel Equivariant Neural Network architecture that respects symmetries of finite-dimensional representations of any reductive Lie Group G. These groups span several scientific domains, from high energy physics to computer vision. We extend the ACE and MACE frameworks to data equivariant to a reductive Lie group action. We present lie-nn, a software library for building G-equivariant neural networks that simplifies the application to varied problems by decomposing tensor products into irreducible representations. We illustrate the adaptability and effectiveness of our approach with top quark decay tagging and shape recognition applications. We demonstrate that acknowledging these symmetries can boost prediction accuracy while using less training data. Our study represents a significant step towards generating interactive representations of geometric point clouds, offering a fresh problem-solving framework across scientific fields.
 Slides (for IAIFI members only)
 Zeviel Imani, Graduate Student, Tufts
 Tuesday, November 14, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Score-based Diffusion Models for Generating LArTPC Images
 Modern generative modeling has demonstrated remarkable success in the realm of natural images. However, these approaches do not necessarily generalize to all image domains. In neutrino physics experiments, our Liquid Argon Time Projection Chamber (LArTPC) particle detectors produce images that are globally sparse but locally dense. We have found that some generation algorithms, such as GANs and VQ-VAEs, are unable to reproduce these image characteristics. Recently, we have successfully generated high-fidelity images of track and shower particle event types using a score-based diffusion model. In this talk, I will outline the methodology underlying this type of model, explore our quality metrics for these generated images, and discuss planned extensions and applications of this work. (A minimal score-matching sketch follows this entry.)
 Slides (for IAIFI members only)
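 A minimal denoising-score-matching sketch at a single noise level, assuming PyTorch; real score-based models (including those for LArTPC images) train across many noise scales, and all names here are illustrative:

```python
import torch
import torch.nn as nn

# Train s_theta so that s_theta(x + sigma * eps) ≈ -eps / sigma,
# the score of the noise-perturbed data density.
score_net = nn.Sequential(nn.Linear(2, 128), nn.SiLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
sigma = 0.1

for _ in range(1000):
    x = torch.randn(256, 2) @ torch.tensor([[1.0, 0.8], [0.0, 0.6]])  # toy "data"
    eps = torch.randn_like(x)
    target = -eps / sigma                  # score of the Gaussian perturbation
    loss = ((score_net(x + sigma * eps) - target) ** 2).mean() * sigma ** 2
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling would then follow (annealed) Langevin dynamics:
# x <- x + (alpha / 2) * s_theta(x) + sqrt(alpha) * z,  z ~ N(0, I)
```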
 Neill Warrington, Postdoc, MIT
 Tuesday, November 28, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26528)
 Thimbology and The Sign Problem
 I will talk about thimbology, a technique for taming sign problems in lattice field theory in which the domain of integration of the path integral is deformed into complex field space. Machine learning such contours has proven useful for certain problems and is now a common technique. I’ll review the idea for a general audience, then share some recent results.
 Slides (for IAIFI members only)
Spring 2023 Journal Clubs
 Di Luo, IAIFI Fellow
 April 25, 2023, 11:00am–12:00pm
 Multi-legged Robot Locomotion via Spin Models Duality
 Contact planning is crucial in locomoting systems. Specifically, appropriate contact planning can enable versatile behaviors (e.g., sidewinding in limbless locomotors) and facilitate speed-dependent gait transitions (e.g., walk-trot-gallop in quadrupedal locomotors). The challenges of contact planning include determining not only the sequence by which contact is made and broken between the locomotor and the environment, but also the sequence of internal shape changes (e.g., body bending and limb shoulder joint oscillation). Most state-of-the-art contact planning algorithms have focused on conventional robots (e.g., biped and quadruped) and conventional tasks (e.g., forward locomotion), and there is a lack of study on general contact planning in multi-legged robots. In this talk, I am going to discuss how, using the geometric mechanics framework, we can obtain the globally optimal contact sequence given the sequence of internal shape changes. We thereby simplify the contact planning problem to a graph optimization problem that identifies the internal shape changes. Taking advantage of the spatiotemporal symmetry in locomotion, we map the graph optimization problem to special cases of spin models, which allows us to obtain the global optima in polynomial time. We apply our approach to develop new forward and sidewinding behaviors in a hexapod and a 12-legged centipede. We verify our predictions using numerical and robophysical models, and obtain novel and effective locomotion behaviors.
 Slides (For IAIFI members only)
 Ziming Liu, Grad Student, MIT
 April 25, 2023, 11:00am–12:00pm
 Physics-inspired generative models
 It might be surprising and delightful to physicists that physics has been playing a huge role in diffusion models. In fact, the evolution of our physical world can be viewed as a generation process. In this journal club, I will first review diffusion models and the more recent PFGM/PFGM++ inspired by electrostatics, and then introduce the GenPhys framework, which manages to convert even more physical processes to generative models.
 Slides (For IAIFI members only)
 Asem Wardak, Research Fellow, Harvard
 April 11, 2023, 11:00am–12:00pm
 Extended Anderson Criticality in Heavy-Tailed Neural Networks
 This talk focuses on nonlinearly interacting systems with random, heavy-tailed connectivity. We show how heavy-tailed connectivity gives rise to an extended critical regime of spatially multifractal fluctuations between the quiescent and active phases. This phase differs from the edge of chaos in classical networks by the appearance of universal hallmarks of the Anderson transition in condensed matter physics over an extended region in phase space. We then investigate some consequences of the multifractal Anderson regime for performing persistent computations.
 Slides (For IAIFI members only)
 Joshua Villarreal, Grad Student, MIT
 April 4, 2023, 11:00am–12:00pm
 Surrogate Modeling of Particle Accelerators
 Abstract: The design, construction, and fine-tuning of particle accelerators have never been easy. Each is a technical challenge in and of itself, and the need to repeatedly run accurate, high-fidelity simulations of the beam traversing the device can slow development. This is especially true for many modern-day particle accelerators, whose beam dynamics tend to exhibit more nonlinear effects, like those arising from space charge, making their simulation more computationally expensive. Thus, there is demand for machine learning and statistical learning models that can reproduce these beam dynamics simulations with orders-of-magnitude improvements in runtime. In this talk, I present an overview of recent efforts to build such accelerator surrogate models, which can be used for the design optimization and real-time commissioning, tuning, and running of the accelerator they aim to replicate. As an example, I also present the status of IsoDAR’s work to build a surrogate model for a Radio-Frequency Quadrupole accelerator, a vital component of IsoDAR’s groundbreaking design. I outline the challenges of these and other virtual accelerators, and present plans to make such surrogate models ubiquitous in the future development of accelerator experiments of all kinds. (A minimal surrogate-training sketch follows this entry.)
 Slides (For IAIFI members only)
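 A minimal illustration of the surrogate idea, assuming PyTorch; `expensive_sim` is a hypothetical stand-in for a slow beam-dynamics code:

```python
import torch
import torch.nn as nn

def expensive_sim(settings):               # pretend this takes minutes per call
    return (settings ** 2).sum(dim=1, keepdim=True)

X = torch.rand(2000, 4)                    # sampled machine settings
y = expensive_sim(X)                       # offline simulation campaign

# Small MLP surrogate trained to mimic the simulator.
surrogate = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    loss = ((surrogate(X) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The surrogate now answers design/tuning queries in microseconds:
print(surrogate(torch.rand(1, 4)))
```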
 Daniel Murnane, Postdoc Researcher, Berkeley Lab
 March 21, 2023, 11:00am–12:00pm
 Multi-Tasking ML for Point Clouds at the LHC
 Abstract: The Large Hadron Collider is one of the world’s most data-intensive experiments. Every second, millions of collisions are processed, each one resembling a jigsaw puzzle with thousands of pieces. With the upcoming upgrade to the High Luminosity LHC, this problem will only become more complex. To make sense of this data, deep learning techniques are increasingly being used. For example, graph neural networks and transformers have proven effective at handling point cloud tasks such as track reconstruction and jet tagging. In this talk, I will review the point cloud problems in collider physics and recent deep learning solutions investigated by the Exa.TrkX project, an initiative to implement innovative algorithms for HEP at the exascale. These architectures can accurately perform tracking and tagging with low latency, even in the high luminosity regime. Additionally, I will explore how multi-tasking and multi-modal networks can combine several of these different tasks.
 Slides (For IAIFI members only)
 Manuel Szewc, Postdoc, University of Cincinnati
 March 14, 2023, 11:00am–12:00pm
 Modeling Hadronization with Machine Learning
 Abstract: A fundamental part of event generation, hadronization is currently simulated with the help of fine-tuned empirical models. In this talk, I’ll present MLHAD, a proposed alternative for hadronization in which the empirical model is replaced by a surrogate Machine-Learning-based model that is ultimately data-trainable. I’ll detail the current stage of development and discuss possible ways forward.
 Slides (For IAIFI members only)
 Max Tegmark, Professor, MIT
 February 28, 2023, 11:00am–12:00pm
 Mechanistic interpretability
 Abstract: Mechanistic interpretability aims to reverse-engineer trained neural networks to distill out the algorithms they have discovered for performing various tasks. Although such “artificial neuroscience” is hard and fun, it’s easier than conventional neuroscience since you have complete knowledge of what every neuron and synapse is doing.
 Slides (For IAIFI members only)
 Liping Liu, Assistant Professor, Tufts University
 February 14, 2023, 11:00am–12:00pm
 Address combinatorial graph problems with learning methods
 Abstract: There are plenty of hard combinatorial problems defined on graphs. Recently, learning algorithms have been used to speed up the search for approximate solutions to these problems. This talk will start with an introduction to hard problems on graphs and traditional algorithms; it will then give an overview of learning algorithms for solving combinatorial problems on graphs. The second part of the talk will focus on two specific problems, graph matching and subgraph distance calculation, and discuss neural methods for these two problems. Finally, it will conclude with open questions: why and when can neural networks help to solve combinatorial problems?
 Slides (For IAIFI members only)
Fall 2022 Journal Clubs
 Anna Golubeva, IAIFI Fellow and Matt Schwartz, Professor, Harvard
 November 29, 2022, 11:00am–12:00pm
 Should artificial intelligence be interpretable to humans?
 Michael Toomey, PhD Student, Brown University
 November 15, 2022, 11:00am–12:00pm
 Deep Learning the Dark Sector
 Abstract: One of the most pressing questions in physics today is the microphysical origin of dark matter. While there have been numerous experimental programs aimed at detecting its interactions with the Standard Model, all efforts to date have come up empty. An alternative method to constrain dark matter is purely based on its gravitational interactions. In particular, gravitational lensing can be very sensitive to the distribution and morphology of dark matter substructure, which can vary appreciably between different models. However, the complexity of data sets, systematics, and large volumes of data make the dimensionality of this problem difficult to approach with more traditional methods. Thankfully, this is a task ideally suited for machine learning. In this talk we will demonstrate how machine learning will play a critical role in distinguishing between models of dark matter and constraining model parameters in lensing data. We will additionally discuss techniques unique to ML for transferring the knowledge accumulated by models in the controlled setting of simulation to real data sets, utilizing unsupervised domain adaptation.
 Slides (For IAIFI members only)
 Ziming Liu, PhD Student, MIT
 November 8, 2022, 11:00am–12:00pm
 Toy Models of Superposition
 Abstract: It would be very convenient if the individual neurons of artificial neural networks corresponded to cleanly interpretable features of the input. For example, in an “ideal” ImageNet classifier, each neuron would fire only in the presence of a specific visual feature, such as the color red, a left-facing curve, or a dog snout. But it isn’t always the case that features correspond so cleanly to neurons, especially in large language models, where it actually seems rare for neurons to correspond to clean features. I will present a recent paper, “Toy Models of Superposition” from Anthropic, aiming to answer these questions: Why is it that neurons sometimes align with features and sometimes don’t? Why do some models and tasks have many of these clean neurons, while they’re vanishingly rare in others? (A few-line version of the toy setup follows this entry.)
 Slides (For IAIFI members only)
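 A few-line version of the paper’s toy setup, assuming PyTorch; feature count, hidden size, and sparsity are illustrative:

```python
import torch

# n sparse features squeezed through m < n hidden dimensions:
# reconstruct x from ReLU(W^T (W x) + b). With sparse enough features,
# the model stores several features per hidden direction (superposition).
n_features, m_hidden, sparsity = 20, 5, 0.95
W = (0.1 * torch.randn(m_hidden, n_features)).requires_grad_()
b = torch.zeros(n_features, requires_grad=True)
opt = torch.optim.Adam([W, b], lr=1e-2)

for _ in range(3000):
    x = torch.rand(1024, n_features) * (torch.rand(1024, n_features) > sparsity)
    recon = torch.relu(x @ W.T @ W + b)
    loss = ((recon - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Off-diagonal structure in W^T W indicates features sharing hidden
# directions, i.e. superposition.
print((W.T @ W).detach())
```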
 Sona Najafi, Researcher, IBM
 October 25, 2022, 11:00am–12:00pm
 Quantum machine learning from algorithms to hardware
 Abstract: The rapid progress of technology over the past few decades has led to the emergence of two powerful computational paradigms known as quantum computing and machine learning. While machine learning tries to learn solutions from data, quantum computing harnesses quantum laws for more powerful computation compared to classical computers. In this talk, I will discuss three domains of quantum machine learning, each harnessing a particular aspect of quantum computers and targeting specific problems. The first domain scrutinizes the power of quantum computers to work with high-dimensional data and speed up algebra, but raises the caveat of input/output due to quantum measurement rules. The second domain circumvents this problem by using a hybrid architecture, performing optimization on a classical computer while evaluating parameterized states on a quantum circuit chosen based on the particular problem. Finally, the third domain is inspired by brain-like computation and uses a given quantum system’s natural interactions and unitary dynamics as a resource for learning.
 Kim Nicoli, Grad Student, Technical University of Berlin
 October 18, 2022, 11:00am–12:00pm
 Deep Learning approaches in lattice quantum field theory: recent advances and future challenges
 Abstract: Normalizing flows are deep generative models that leverage the change-of-variables formula to map simple base densities to arbitrary complex target distributions. Recent works have shown the potential of such methods for learning normalized Boltzmann densities in many fields, ranging from condensed matter physics to molecular science to lattice field theory. Though sampling from a flow-based density comes with many advantages over standard MCMC sampling, it is known that these methods still suffer from several limitations. In my talk, I will start by giving an overview of how to deploy deep generative models to learn Boltzmann densities in the context of a phi^4 lattice field theory. Specifically, I’ll focus on how these methods open up the possibility to estimate thermodynamic observables, i.e., physical observables which depend on the partition function and hence are not straightforward to estimate using standard MCMC methods. In the second part of my talk, I will present two ideas that have been proposed to mitigate the well-known problem of mode-collapse, which often occurs when normalizing flows are trained to learn a multimodal target density. More specifically, I’ll talk about a novel “mode-dropping estimator” and path gradients. In the last part of my talk, I’ll present a new idea which aims at using flow-based methods to mitigate the sign problem. (A sketch of the flow-reweighting step follows this entry.)
 Slides (For IAIFI members only)
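 A sketch of the reweighting step that underlies flow-based estimates of partition-function-dependent observables, assuming PyTorch; the Gaussian “flow” and quadratic `action` are toy stand-ins for a trained flow and a lattice action:

```python
import math
import torch

def action(x):                             # toy quadratic "lattice action"
    return 0.5 * (x ** 2).sum(dim=1)

n, dim = 100_000, 4
base = torch.distributions.Normal(0.0, 1.5)
x = base.sample((n, dim))                  # pretend: samples from a trained flow
log_q = base.log_prob(x).sum(dim=1)        # exact flow log-density

# Importance weights w = exp(-S(x)) / q(x) give log Z via log-mean-exp.
log_w = -action(x) - log_q
log_Z = torch.logsumexp(log_w, dim=0) - math.log(n)
print(log_Z.item(), 0.5 * dim * math.log(2 * math.pi))   # estimate vs. exact
```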
 Adriana Dropulic, Grad Student, Princeton
 October 4, 2022, 11:00am–12:00pm
 Machine Learning the 6th Dimension: Stellar Radial Velocities from 5D Phase-Space Correlations
 Abstract: The Gaia satellite will observe the positions and velocities of over a billion Milky Way stars. In the early data releases, most observed stars do not have complete 6D phase-space information. We demonstrate the ability to infer the missing line-of-sight velocities until more spectroscopic observations become available. We utilize a novel neural network architecture that, after being trained on a subset of data with complete phase-space information, takes in a star’s 5D astrometry (angular coordinates, proper motions, and parallax) and outputs a predicted line-of-sight velocity with an associated uncertainty. Working with a mock Gaia catalog, we show that the network can successfully recover the distributions and correlations of each velocity component for stars that fall within ~5 kpc of the Sun. We also demonstrate that the network can accurately reconstruct the velocity distribution of a kinematic substructure in the stellar halo that is spatially uniform, even when it comprises a small fraction of the total star count. We apply the neural network to real Gaia data and discuss how the inferred information augments our understanding of the Milky Way’s formation history. (A minimal sketch of such uncertainty-aware regression follows this entry.)
 Slides (For IAIFI members only)
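 A minimal uncertainty-aware regression in the same spirit, assuming PyTorch; the architecture and mock data are illustrative, not the paper’s network:

```python
import torch
import torch.nn as nn

# Network outputs a mean and a log standard deviation, trained with a
# Gaussian negative log-likelihood.
net = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.randn(256, 5)                # stand-in for 5D astrometry
    v = x.sum(dim=1, keepdim=True) + 0.3 * torch.randn(256, 1)  # mock v_los
    out = net(x)
    mu, log_sigma = out[:, :1], out[:, 1:]
    nll = (log_sigma + 0.5 * ((v - mu) / log_sigma.exp()) ** 2).mean()
    opt.zero_grad(); nll.backward(); opt.step()

pred = net(torch.randn(1, 5))
print("v_los =", pred[0, 0].item(), "+/-", pred[0, 1].exp().item())
```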
 Iris Cong, Grad Student, Harvard
 September 27, 2022, 11:00am–12:00pm
 Quantum Convolutional Neural Networks
 Abstract: Convolutional neural networks (CNNs) have recently proven successful for many complex applications ranging from image recognition to precision medicine. In the first part of my talk, motivated by recent advances in realizing quantum information processors, I introduce and analyze a quantum circuit-based algorithm inspired by CNNs. Our quantum convolutional neural network (QCNN) uses only O(log(N)) variational parameters for input sizes of N qubits, allowing for its efficient training and implementation on realistic, near-term quantum devices. To explicitly illustrate its capabilities, I show that QCNN can accurately recognize quantum states associated with a one-dimensional symmetry-protected topological phase, with performance surpassing existing approaches. I further demonstrate that QCNN can be used to devise a quantum error correction (QEC) scheme optimized for a given, unknown error model that substantially outperforms known quantum codes of comparable complexity. The design of such error correction codes is particularly important for near-term experiments, whose error models may be different from those addressed by general-purpose QEC schemes. If time permits, I will also present our latest results on generalizing the QCNN framework to more accurately and efficiently identify two-dimensional topological phases of matter.
 Slides (For IAIFI members only)
 Miles Cranmer, Grad Student, Princeton
 September 20, 2022, 11:00am–12:00pm
 Interpretable Machine Learning for Physics
 Abstract: Would Kepler have discovered his laws if machine learning had been around in 1609? Or would he have been satisfied with the accuracy of some black box regression model, leaving Newton without the inspiration to find the law of gravitation? In this talk I will present a review of some industry-oriented machine learning algorithms and discuss a major issue facing their use in the natural sciences: a lack of interpretability. I will then outline several approaches I have created with collaborators to help address these problems, based largely on a mix of structured deep learning and symbolic methods. This will include an introduction to the PySR software (https://astroautomata.com/PySR), a Python/Julia package for high-performance symbolic regression. I will conclude by demonstrating applications of such techniques and how we may gain new insights from such results. (A short PySR usage sketch follows this entry.)
 Resources: https://arxiv.org/abs/2207.12409; https://arxiv.org/abs/2202.02306; https://arxiv.org/abs/2006.11287
 Slides (For IAIFI members only)
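 A hedged PySR usage sketch; exact keyword defaults may vary across versions, and the toy data and operator choices are illustrative:

```python
import numpy as np
from pysr import PySRRegressor

X = np.random.rand(200, 2)
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2   # hidden "law" to rediscover

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "square"],
)
model.fit(X, y)                            # evolutionary symbolic search
print(model)                               # Pareto front of candidate formulas
```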
 Anindita Maiti, Grad Student, Northeastern
 September 13, 2022, 11:00am–12:00pm
 A Study of Neural Network Field Theories
 Abstract: I will present a systematic exploration of field theories arising in Neural Networks, using a dual framework given by Neural Network parameters. The infinite-width limit of NN architectures, combined with i.i.d. parameters, leads to Gaussian Processes in Neural Networks by the Central Limit Theorem (CLT), corresponding to generalized free field theories. Small and large violations of the CLT respectively lead to weakly coupled and non-perturbative non-Lagrangian field theories in Neural Networks. Non-Gaussianity, locality (via cluster decomposition), and symmetries of Neural Network field theories are examined via NN parameter space, without necessitating knowledge of field-theoretic actions. Thus, Neural Network field theories, in conjunction with this duality via parameters, may have implications for both Physics and Machine Learning. (A quick numerical illustration of width-dependent non-Gaussianity follows this entry.)
 Resources: https://arxiv.org/abs/2106.00694
 Slides (For IAIFI members only)
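 A quick numerical check of the CLT story, assuming NumPy: the excess kurtosis of a random one-hidden-layer network’s output shrinks as width grows, signaling convergence to a Gaussian (free) theory; all sizes are illustrative:

```python
import numpy as np

def random_net_output(x, width, n_draws=5000):
    """Output of n_draws random tanh networks with i.i.d. parameters."""
    W1 = np.random.randn(n_draws, width, x.size)
    W2 = np.random.randn(n_draws, width) / np.sqrt(width)   # 1/sqrt(width) scaling
    return np.einsum('nw,nw->n', W2, np.tanh(W1 @ x))

x = np.array([0.7, -1.2])
for width in [2, 10, 1000]:
    out = random_net_output(x, width)
    # Nonzero excess kurtosis measures the leading non-Gaussianity;
    # it decays toward 0 (the free theory) as the width grows.
    kurt = np.mean(out ** 4) / np.mean(out ** 2) ** 2 - 3
    print(width, round(float(kurt), 3))
```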
Spring 2022 Journal Clubs
 Jessie Micallef, PhD Student, Michigan State University & Incoming IAIFI Fellow
 March 10, 2022, 11:00am–12:00pm
 “Adapting CNNs to Reconstruct Sparse, GeV-Scale IceCube Neutrino Events”
 Slides (For IAIFI members only)
 Denis Boyda, Postdoctoral Appointee, Argonne National Laboratory & Incoming IAIFI Fellow
 RESCHEDULED: March 17, 2022, 11:00am–12:00pm
 “Overview of some popular Machine Learning frameworks for data parallelism”
 Resources:
 S. Li et al. PyTorch Distributed: Experiences on Accelerating Data Parallel Training. 2020. arXiv:2006.15704
 A. Sergeev and Mike Del Balso. Horovod: fast and easy distributed deep learning in TensorFlow. 2018. arXiv:1802.05799
 S. Rajbhandari et al. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. 2020. arXiv:1910.02054
 Slides (For IAIFI members only)
 Yin Lin, Postdoctoral Researcher, MIT
 April 7, 2022, 11:00am–12:00pm
 “Accelerating Dirac equation solves in lattice QFT with neural-network preconditioners”
 Slides (For IAIFI members only)
 Anatoly Dymarsky, Associate Professor, University of Kentucky
 April 14, 2022, 11:00am–12:00pm
 Tensor network to learn the wave function of data
 Abstract: We use a tensor-network-based architecture to train a network which simultaneously accomplishes two tasks: image classification and image sampling. We argue that simultaneous performance of these tasks means our network has successfully learned the whole “manifold of data” (using the terminology from the literature), namely all possible images of a particular kind. We use a black and white version of MNIST, hence our network learns all possible images depicting a particular digit. We access global properties of the “manifold of data” by calculating its size. Thus, we found there are 2^72 possible images of digit 3. We explain why this number is robust and largely independent of the details of the training process, etc.
 Slides (For IAIFI members only)
 Carolina Cuesta, PhD Student, Durham University & Incoming IAIFI Fellow
 April 21, 2022, 11:00am–12:00pm
 Equivariant normalizing flows and their application to cosmology
 Slides (For IAIFI members only)
 Benjamin Fuks, Professor, Sorbonne University
 April 28, 2022, 11:00am–12:00pm
 Precision simulations for new physics
 Dylan Hadfield-Menell, Assistant Professor, MIT
 May 5, 2022, 11:00am–12:00pm
 Overoptimization, Incompleteness, and Goodhart’s Law
 Mark Hamilton, Graduate Student, MIT
 Manami Kanemura, Undergraduate Student, Northeastern University (completed co-op with Bryan Ostdiek)
 May 26, 2022, 11:00am–12:00pm
 Using Soft-Introspection to improve anomaly detection at the LHC
 Slides (For IAIFI members only)
Fall 2021 Journal Clubs
 Michael Douglas
 Thursday, September 23, 2021, 11:00am–12:00pm
 “Solving Combinatorial Problems using AI/ML”
 Abstract/Resources: Bright et al 1907.04408; Heule et al 1905.10192; Halverson et al 1903.11616; McAleer et al 1805.07470; Gukov et al 2010.16263; General sources on reinforcement learning: Sutton and Barto; The MathCheck SAT+CAS system
 Slides (For IAIFI members only)
 Ziming Liu
 Thursday, October 7, 2021, 11:00am–12:00pm
 “Dynamics in Modern Deep Learning Models”
 Abstract/Resources: Transient Chaos in BERT; Memory and attention in deep learning; The Brownian motion in the transformer model
 Slides (For IAIFI members only)
 Ge Yang
 Thursday, October 21, 2021, 11:00am–12:00pm
 “Learning and Generalization: Revisiting Neural Representations”
 Abstract/Resources: Understanding how deep neural networks learn and generalize has been a central pursuit of intelligence research. This is because we want to build agents that can learn quickly from a small amount of data and that also generalize to a wider set of scenarios. In this talk, we take a systems approach by identifying key bottleneck components that limit learning and generalization. We will present two key results: overcoming the simplicity bias of neural value approximation via random Fourier features, and going beyond the training distribution via invariance through inference.
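 A few-line random-Fourier-features sketch, assuming NumPy; `rff` is a hypothetical helper whose feature inner products approximate an RBF kernel:

```python
import numpy as np

def rff(x, n_features=256, sigma=1.0, seed=0):
    """Fixed random cosine features approximating an RBF kernel of width sigma."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(x.shape[-1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

x1, x2 = np.random.randn(2, 5)
approx = rff(x1[None])[0] @ rff(x2[None])[0]
exact = np.exp(-np.sum((x1 - x2) ** 2) / 2)    # RBF kernel with sigma = 1
print(approx, exact)                            # close for large n_features
```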
 Eric Michaud, PhD Student, MIT
 Thursday, November 18, 2021, 11:00am–12:00pm
 “Curious Properties of Neural Networks”
 Abstract/Resources: In this informal talk/discussion, I will highlight some facts about neural networks which I find to be particularly fun and surprising. Possible topics could include the Lottery Ticket Hypothesis (https://arxiv.org/abs/1803.03635), Double Descent (https://arxiv.org/abs/1912.02292), and “grokking” (https://mathaiiclr.github.io/papers/papers/MATHAI_29_paper.pdf). There will be time for discussion and for attendees to bring up their own favorite surprising facts about deep learning.
 Murphy Niu, Google Quantum AI
 Thursday, December 3, 11:00am–12:00pm
 “Entangling Quantum Generative Adversarial Networks using Tensorflow Quantum”
 Abstract/Resources: https://arxiv.org/pdf/2105.00080.pdf; https://arxiv.org/pdf/2003.02989.pdf
Spring 2021 Journal Clubs
 Anindita Maiti
 Wednesday, February 17
 “Neural Networks and Quantum Field Theory”
 Abstract/Resources: https://arxiv.org/abs/2008.08601
 Jacob Zavatone-Veth
 Tuesday, March 2
 “Non-Gaussian Processes and Neural Networks at Finite Widths”
 Abstract/Resources: https://arxiv.org/abs/1910.00019
 Di Luo
 Tuesday, April 6
 “Simulating Quantum Many-Body Physics with Neural Network Representation”
 Abstract/Resources: https://arxiv.org/abs/1807.10770; https://arxiv.org/pdf/1912.11052.pdf; https://arxiv.org/abs/2012.05232
 Anna Golubeva
 Tuesday, April 27
 “Are Wider Nets Better Given the Same Number of Parameters?”
 Abstract/Resources: https://arxiv.org/abs/2010.14495
 Siddharth Mishra-Sharma
 Tuesday, May 11
 Simulation-Based Inference Focusing on Astrophysical Applications
 Abstract/Resources: https://arxiv.org/abs/1911.01429; https://arxiv.org/abs/1909.02005
Fall 2020 Journal Clubs
 Bhairav Mehta
 Tuesday, October 20
 “Learning Invariances”
 Abstract/Resources: https://arxiv.org/abs/2009.00329
 Andrew Tan
 Wednesday, November 4
 “Estimating Mutual Information”
 Abstract/Resources: https://arxiv.org/abs/1905.06922
 Ziming Liu
 Wednesday, November 18
 “Scaling Laws of Learning”
 Abstract/Resources: https://arxiv.org/abs/2010.14701; https://arxiv.org/abs/2004.10802; https://arxiv.org/abs/2001.08361
 Dan Roberts
 Wednesday, December 2
 “Effective Theory of Deep Learning”