The IAIFI Journal Club is open to IAIFI members and affiliates.
Upcoming Journal Clubs
- Aishik Ghosh, Further information to come
- Tuesday, December 17, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Nate Woodward, Undergraduate Student, MIT
- Tuesday, December 10, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Davide Bray, Graduate Student, Harvard University
- Tuesday, December 3, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Mike Toomey, Postdoc, MIT
- Tuesday, November 26, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Neill Warrington, Postdoc, MIT
- Tuesday, November 19, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Thomas Helfer, Further information to come
- Tuesday, November 12, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Konstantin Leyde, University of Portsmouth
- Tuesday, October 29, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Kayla DeHolton, Penn State University
- Tuesday, October 22, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Keiya Hirashima, University of Tokyo
- Tuesday, October 15, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
- Felix Yu, Graduate Student, Harvard University
- Tuesday, October 8, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Title to come
- Abstract to come
Past Journal Clubs
Fall 2024 Journal Clubs
- Kit Fraser-Taliente, Graduate Student, University of Oxford
- Tuesday, October 1, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Computation of Quark Masses in String Theory
- I present a numerical computation, based on neural network techniques, of the physical Yukawa couplings in a heterotic string theory model obtained after compactification on a Calabi-Yau threefold. I consider examples from a large class of models with precisely the MSSM low-energy spectrum, plus fields uncharged under the standard-model group. Suitable neural networks are used to compute the relevant quantities. I will discuss the general problem of learning functions on manifolds, equivariant neural networks, and generalisation to other models and constructions.
- Slides to come
- Rikab Gambhir, Graduate Student, MIT
- Tuesday, September 24, 2024, 1:00 pm–2:00 pm, IAIFI Penthouse
- Moments Of Clarity in Machine Learning for Jet Physics
- Machine learning models have shown incredible promise for science, especially for physics at the Large Hadron Collider (LHC), through their ability to extract information from huge amounts of data. However, as physicists, we often desire to have precise control of the information input and output of a model, both to improve interpretability and to guarantee properties of interest in our problems. In this talk, I go over three different examples from my work in jet physics at the LHC where targeted and goal-motivated model design and loss function choice can be used to control the extracted information in machine learning models. In particular, I discuss how task-engineered network architectures and losses can be used to extract provably prior-independent and unbiased resolutions for calibrations at the LHC, how they can be used to construct a new class of robust observables for jets, and how they can be used to streamline latent spaces using elementary functions for interpretability.
- Slides (for IAIFI members only)
Spring 2024 Journal Clubs
- Alex Malz, LINCC Frameworks Project Scientist, Carnegie Mellon University
- Tuesday, January 16, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Data processing challenges for real-time observational astrophysics
- Astronomical transient and variable events comprise the things that go boom in the night, or otherwise vary in brightness or color over time, and are among the most powerful phenomena of the universe, providing a window into energy scales inaccessible to any laboratory on Earth. The fundamental physics determining the time-series light curves of these astronomical objects, which include exploding stars and black hole mergers, is key to understanding the nature of the dark energy driving the accelerating expansion of the universe, the dark matter guiding the formation and clustering of massive structures, and ultimately our place in the cosmos. During its ten-year mission beginning in 2025, the Legacy Survey of Space and Time (LSST) on the Vera C. Rubin Observatory will observe hundreds of millions of such transient and variable sources, up from the mere millions known to date, by making a ten-year 3D movie of the night sky. In doing so, it will revolutionize astronomy with a deluge of data that could enable boundless discoveries, conditioned on meeting the challenges of the data’s nontrivial noise properties; the scale of the anticipated data is a direct corollary to the strategy of collecting less informative photometric data rather than high-fidelity, resource-intensive spectroscopy. In this talk, I will introduce open problems and evolving data-driven solutions for several interesting aspects of the systems for processing and interpreting the anticipated data.
- Slides to come
- Helen Qu, Grad Student, University of Pennsylvania
- Tuesday, February 6, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Enabling precision photometric SN Ia cosmology with machine learning
- The discovery of the accelerating expansion of the universe has led to increasing interest in probing the nature of dark energy. As very bright standardizable candles, type Ia supernovae (SNe Ia) are used to measure precise distances on cosmological scales and thus have been instrumental to this effort. Building a robust dataset of SNe Ia across a wide range of redshifts will allow for the construction of an accurate Hubble diagram, enrich our understanding of the expansion history of the universe, as well as place constraints on the dark energy equation of state. However, much of our analysis pipeline will be overwhelmed by the data deluge of the LSST era. In this talk, I will present recent improvements on two key pieces of SN Ia cosmology analysis: the purity of the photometric SNe Ia sample and the redshift identification accuracy for these SNe. To address the SNe Ia purity problem, I will present SCONE (Supernova Classification with a Convolutional Neural Network), a deep learning-based approach to early and full lightcurve photometric SN classification. On the redshift estimation front, I will present work on characterizing inaccurate redshifts due to SN host galaxy mismatch and its effect on cosmology, as well as Photo-zSNthesis, a machine learning algorithm that uses SN photometry to directly estimate redshift. As long as logistical challenges prevent the spectroscopic follow-up of most detected SNe, a reliable photometric SN classification algorithm and redshift estimation strategy will allow us to tap into the vast potential of the photometric dataset.
- Slides (for IAIFI members only)
- Darius Faroughy, Postdoctoral Associate, Rutgers University
- Tuesday, February 20, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Is flow-matching an alternative to diffusion?
- We discuss flow-matching (2210.02747), a recently proposed objective for training continuous normalizing flows inspired by diffusion models. As a generative model, flow-matching can produce state-of-the-art samples for images and other data representations. More interestingly, flow-matching can be used to go beyond generative modeling by learning to approximate the optimal transport map between two arbitrary data distributions. The JC is meant to be an interactive blackboard talk discussing the method. At the end, I’ll flash a few slides illustrating its usefulness for generating jets as particle clouds (2310.00049).
- Slides to come
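For readers unfamiliar with flow-matching, the sketch below illustrates the training objective of 2210.02747 in its simplest form: a linear interpolation path between noise and data, so the regression target for the velocity network is x1 − x0. This is an illustrative toy, not code from the talk; the VelocityNet and flow_matching_loss names are invented here.

```python
# Minimal flow-matching sketch (linear / optimal-transport path, sigma_min -> 0).
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Small MLP v_theta(x, t) approximating the probability-flow velocity field."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def flow_matching_loss(model, x1):
    """Regress v_theta(x_t, t) onto the conditional target velocity (x1 - x0)."""
    x0 = torch.randn_like(x1)              # base (noise) samples
    t = torch.rand(x1.shape[0], 1)         # uniform times in [0, 1]
    xt = (1 - t) * x0 + t * x1             # point on the straight-line path
    return ((model(xt, t) - (x1 - x0)) ** 2).mean()

# Toy usage: learn to transport Gaussian noise to a shifted Gaussian "dataset".
model = VelocityNet(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    x1 = torch.randn(256, 2) + 3.0         # stand-in for real training data
    loss = flow_matching_loss(model, x1)
    opt.zero_grad(); loss.backward(); opt.step()
```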
- Jonas Rigo, Postdoc, Forschungszentrum Jülich GmbH
- Tuesday, February 27, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Is the ground state of Anderson’s impurity model a recurrent neural network?
- When the Anderson impurity model (AIM) is expressed in terms of a Wilson chain, it assumes a hierarchical renormalization group structure that translates to a ground state with features like Friedel oscillations and the Kondo screening cloud [1]. Recurrent neural networks (RNNs) have recently gained traction in the form of Neural Quantum States (NQS) ansätze for quantum many-body ground states, and they are known to be able to learn such complex patterns [2]. We explore RNNs as an ansatz to capture the AIM’s ground state for a given Wilson chain length and investigate its capability to predict the ground state on longer chains for a converged ground state energy. [1] Affleck, Ian, László Borda, and Hubert Saleur. “Friedel oscillations and the Kondo screening cloud.” Physical Review B 77.18 (2008): 180404. [2] Hibat-Allah, Mohamed, et al. “Recurrent neural network wave functions.” Physical Review Research 2.2 (2020): 023358.
- Slides (for IAIFI members only)
- Kehang Zhu, Grad Student, Harvard
- Tuesday, March 12, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Agent-based modeling: Harnessing Large Language Models for Automated Exploration of Emergent Behaviors in Simulated Social Systems
- Two significant impediments to the success of the social sciences in comparison to physics are the inherent difficulty in both rapidly executing multiple controlled experiments to explore a parameter space and determining what parameter space to explore. In this work, we present a computational framework and platform that simulates the entire social scientific process, leveraging Large Language Models (LLMs) to study human actors within social systems. We create controlled environments, akin to toy models in physics, that systematically explore the parameter space of variables relevant to any social system (such as attributes of human actors), allowing for the exponentially faster discovery of emergent social behaviors as compared to traditional social science experimentation. Central to our approach is the automatic generation of Structural Causal Models (SCMs) that generate statistical correlations of potential interactions within a social system and outline the requisite metrics and tools to observe and measure these nonlinear dynamics. With the flexibility to vary controlled variables across a nearly infinite parameter space, our system offers a sandbox to simulate and analyze various social scenarios – from wage bargaining and auction mechanics to nuclear weapon negotiations. Our framework and platform offer a new playground for physicists to study the nonlinear dynamics and emergent phenomena in human social systems.
- Slides (for IAIFI members only)
- Katherine Fraser, Graduate Student, Harvard University
- Tuesday, March 19, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Combining Energy Correlators with Machine Learning
- Energy correlators, which are correlation functions of the energy flow operator, are theoretically clean observables which can be used to improve various measurements. In this talk, we discuss ongoing work exploring the benefits of combining them with Machine Learning for precisely measuring the Top-quark mass.
- Slides (for IAIFI members only)
- Marisa LaFleur, Project Manager, IAIFI
- Tuesday, April 2, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Managing Time and Influencing People
- Taking a break from our regularly scheduled journal club programming, the Industry Partnership Committee has requested a crash course in project management for academics. I’ll share some time management and communication tips and tricks to elevate your project management skills and increase efficiency, leaving more time for research. We’ll leave time for questions, so come with all of your organizational concerns!
- Slides (for IAIFI members only)
- Akshunna Dogra, Graduate Student, Imperial College London
- Tuesday, April 23, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Many-Fold Learning
- Machine learning (ML) has been profitably leveraged across a wide variety of problems in recent years. Empirical observations show that ML models from suitable functional spaces are capable of adequately efficient learning across a wide variety of disciplines. In this work (first in a planned sequence of three), we build the foundations for a generic perspective on ML model optimization and generalization dynamics. Specifically, we prove that under variants of gradient descent, “well-initialized” models solve sufficiently well-posed problems at a priori or in situ determinable rates. Notably, these results are obtained for a wider class of problems, loss functions, and models than the standard mean squared error and large width regime that is the focus of conventional Neural Tangent Kernel (NTK) analysis. The ν-Tangent Kernel (νTK), a functional analytic object reminiscent of the NTK, emerges naturally as a key object in our analysis and its properties function as the control for learning. We exemplify the power of our proposed perspective by showing that it applies to diverse practical problems solved using real ML models, such as classification tasks, data/regression fitting, differential equations, shape observable analysis, etc. We end with a small discussion of the numerical evidence, and the role νTKs may play in characterizing the search phase of optimization, which leads to the “well-initialized” models that are the crux of this work.
- Slides (for IAIFI members only)
- Radha Mastandrea, Grad Student, University of California, Berkeley
- Tuesday, April 30, 2024, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- A Survey of Machine Learning Methods for Anomaly Detection
- Machine learning-based anomaly detection (AD) methods are promising tools for extending the coverage of searches for physics beyond the Standard Model (BSM). I will first talk about a class of AD methods for “resonant anomaly detection”, where the BSM is assumed to be localized in at least one known variable. There have been many methods proposed to identify such a BSM signal that make use of simulated or detected data in different ways, so I will discuss their complementarity – even if their maximum performance is the same, it may be beneficial more generally to combine approaches. I will then go over a class of AD methods for “nonresonant” detection, where the BSM may arise from off-shell effects or final states with significant missing energy. Using a semi-visible jet signature as a benchmark signal model, I will show that these methods can automatically identify anomalous events, elevating rare nonresonant signal models to the detection threshold.
- Slides (for IAIFI members only)
Fall 2023 Journal Clubs
- Tony Menzo, Graduate Assistant, University of Cincinnati
- Tuesday, September 19, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Towards a data-driven model of hadronization
- We will discuss recent and ongoing developments at the intersection of machine learning and simulated hadronization. Specifically, we’ll focus on some of the major challenges presented when attempting to build a data-driven model of hadronization that utilizes experimental data during training. Solutions to some of these challenges will be presented in the context of invertible neural networks or normalizing flows including the introduction of a new paradigm that allows for the training of microscopic hadronization dynamics from macroscopic event-level observables.
- Slides (for IAIFI members only)
- Jeffrey Lazar, Graduate Student, Harvard
- Tuesday, September 26, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Open-Source Simulation and Machine Learning for Neutrino Telescopes
- In the last decade, the field of neutrino astronomy has made major strides, culminating in the definitive detection of galactic and extragalactic components of the astrophysical neutrino flux. We can now begin characterizing these astrophysical beams and pursuing new physics through them. Machine learning techniques have played an integral part in these recent advances, and while current efforts have been impressive, it is clear that there is room to improve. This fact, along with the growing global network of neutrino telescopes, drives the need for open-source tools that make use of all available person power and avoid duplicating effort. In this talk I will present Prometheus, the first open-source, end-to-end simulation for neutrino telescopes. Furthermore, I will show a recent example of using Prometheus to develop machine learning techniques capable of running at typical neutrino telescope trigger rates.
- Slides (for IAIFI members only)
- Manos Theodosis, Graduate Student, Harvard
- Tuesday, October 3, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Learning Group Representations in Neural Networks
- Employing equivariance in neural networks leads to greater parameter efficiency via parameter sharing and improved generalization performance through the encoding of domain knowledge in the architecture; however, the majority of existing approaches require an a priori specification of the data symmetries. We present a neural network architecture, Group Representation Networks (GRNs), that learns symmetries on the weight space of neural networks without any supervision or knowledge of the hidden symmetries in the data. Beyond their interpretability, GRNs’ learned representations distill symmetries of the data domain and the downstream task, which are incorporated when training networks on different datasets. The key idea behind GRNs relates weights in neural networks via a cyclic action whose group representation depends on the data domain, and is learned in an unsupervised manner. Our experiments underline the ability of GRNs to correctly recover symmetries in the data, show competitive performance when GRNs are used as a drop-in replacement for conventional layers, and highlight the ability to transfer learned representations across tasks and datasets.
- Slides (for IAIFI members only)
- Andy Jin, Graduate Student, Harvard University
- Tuesday, October 17, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Two Watts is All You Need: Low-Power Machine Learning on TPU for Neutrino Telescopes
- In neutrino experiments, machine learning software methods have boosted our ability to make physics discoveries with existing experimental hardware. Current upgrades, new telescopes, and new experimental hardware are expected to deliver more statistics as well as more complicated data signals, which calls for a corresponding upgrade on the software side to handle the more complex data more efficiently. Specifically, we need low-power and fast software methods to achieve real-time signal processing, since current machine learning-based methods are too expensive to deploy in the power-restricted regions where these experiments are typically located. In this talk, I will present a first attempt at, and proof of concept for, deploying machine learning methods live in under-water/ice neutrino telescopes via quantization and deployment on Tensor Processing Units (TPUs). We use an LSTM-based recurrent neural network with residual convolution-based data encoding, combined with specifically tailored data pre-processing and quantization-aware training methods, for deployment on the Google Edge TPU. This algorithm achieves state-of-the-art angular resolution in reconstruction with a real-time inference frequency of 100 Hz/Watt on a TPU accelerator at only 2 Watts of power consumption. This opens up many opportunities to integrate machine learning capability into detectors and electronics deep within even the most power-restricted environments.
- Slides (for IAIFI members only)
- Ryan Raikman, Undergraduate Student, Carnegie Mellon University (currently working with MIT LIGO)
- Tuesday, October 24, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- GWAK: Gravitational Wave Anomalous Knowledge with Recurrent Autoencoders
- Matched-filtering detection techniques for gravitational-wave (GW) signals in ground-based interferometers rely on having well-modeled templates of the GW emission. Such techniques have been traditionally used in searches for compact binary coalescences (CBCs), and have been employed in all known GW detections so far. However, interesting science cases aside from compact mergers do not yet have accurate enough modeling to make matched filtering possible, including core-collapse supernovae and sources where stochasticity may be involved. Therefore the development of techniques to identify sources of these types is of significant interest. In this paper, we present a method of anomaly detection based on deep recurrent autoencoders to enhance the search region to unmodeled transients. We use a semi-supervised strategy that we name Gravitational Wave Anomalous Knowledge (GWAK). While the semi-supervised nature of the problem comes with a cost in terms of accuracy as compared to supervised techniques, there is a qualitative advantage in generalizing experimental sensitivity beyond pre-computed signal templates. We construct a low-dimensional embedded space using the GWAK method, capturing the physical signatures of distinct signals on each axis of the space. By introducing signal priors that capture some of the salient features of GW signals, we allow for the recovery of sensitivity even when an unmodeled anomaly is encountered. We show that regions of the GWAK space can identify CBCs, detector glitches and also a variety of unmodeled astrophysical sources.
- Slides (for IAIFI members only)
- Thorsten Glüsenkamp, Postdoc, Uppsala University
- Tuesday, October 31, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Conditional normalizing flows for IceCube event reconstruction
- In this seminar, I will talk about normalizing flows (NFs), in particular about the types that are useful for high-energy neutrino event reconstruction in IceCube. First, I will give an introduction that focuses on essentially two different classes of flows which have quite a citation disparity in the literature: 1) normalizing flows in high dimensions (D >~ 100), which typically have high citation counts, and 2) normalizing flows in low dimensions (D = 1–100), which are typically cited less frequently. I discuss the reasons why I think this latter class, which is often less known, is particularly useful for high-energy physicists, and then briefly review two examples of that class: specific Gaussianization flows (2003.01941) and exponential-map flows (0906.0874/2002.02428). Finally, I discuss a recent application of these particular flows as conditional NFs for neutrino event reconstruction in the IceCube detector (2309.16380).
- Slides (for IAIFI members only)
- Jose Miguel Munoz Arias, Graduate Student, MIT
- Tuesday, November 7, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- lie-nn: Pioneering Lie G-equivariant Neural Networks for Cross-Domain Scientific Applications
- This talk explores a novel Equivariant Neural Network architecture that respects symmetries of finite-dimensional representations of any reductive Lie Group G. These groups span several scientific domains, from high energy physics to computer vision. We extend ACE and MACE frameworks to data equivariant to a reductive Lie group action. We present lie-nn, a software library for building G-equivariant neural networks that simplifies the application to varied problems by decomposing tensor products into irreducible representations. We illustrate the adaptability and effectiveness of our approach with top quark decay tagging and shape recognition applications. We demonstrate that acknowledging these symmetries can boost prediction accuracy while using less training data. Our study represents a significant step towards generating interactive representations of geometric point clouds, offering a fresh problem-solving framework across scientific fields.
- Slides (for IAIFI members only)
- Zeviel Imani, Graduate Student, Tufts
- Tuesday, November 14, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Score-based Diffusion Models for Generating LArTPC Images
- Modern generative modeling has demonstrated remarkable success in the realm of natural images. However, these approaches do not necessarily generalize to all image domains. In neutrino physics experiments, our Liquid Argon Time Projection Chamber (LArTPC) particle detectors produce images that are globally sparse but locally dense. We have found that some generation algorithms, such as GANs and VQ-VAE, are unable to reproduce these image characteristics. Recently, we have successfully generated high-fidelity images of track and shower particle event types using a score-based diffusion model. In this talk, I will outline the methodology underlying this type of model, explore our quality metrics for these generated images, and discuss planned extensions and applications of this work.
- Slides (for IAIFI members only)
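As background for the entry above, the sketch below shows the denoising score-matching objective that underlies score-based diffusion models: a network is trained to predict the noise added to a clean sample, which (up to a scale factor) is the score of the noised data distribution. It is a generic toy on random 2D data, not the LArTPC pipeline; ScoreNet and dsm_loss are placeholder names invented here.

```python
# Minimal denoising score-matching sketch for a score-based diffusion model.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, sigma):
        return self.net(torch.cat([x, sigma], dim=-1))

def dsm_loss(model, x, sigmas=(0.1, 0.3, 1.0)):
    sigma = torch.tensor(sigmas)[torch.randint(len(sigmas), (x.shape[0], 1))]
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    # The score of the Gaussian perturbation kernel is -eps / sigma;
    # multiplying by sigma gives the usual noise-prediction form of the loss.
    return ((sigma * model(x_noisy, sigma) + eps) ** 2).mean()

# Toy usage on random 2D points standing in for real training images.
model = ScoreNet(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    x = torch.randn(256, 2)
    loss = dsm_loss(model, x)
    opt.zero_grad(); loss.backward(); opt.step()
```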
- Neill Warrington, Postdoc, MIT
- Tuesday, November 28, 2023, 11:00 am–12:00 pm, MIT LNS Conference Room (26-528)
- Thimbology and The Sign Problem
- I will talk about thimbology, a technique for taming sign problems in lattice field theory, where the domain of integration of the path integral is deformed into complex field space. Machine learning such contours has proven useful for certain problems and is now a common technique. I’ll review the idea for a general audience, then share some recent results.
- Slides (for IAIFI members only)
Spring 2023 Journal Clubs
- Di Luo, IAIFI Fellow
- April 25, 2023, 11:00am-12:00pm
- Multi-legged Robot Locomotion via Spin Models Duality
- Contact planning is crucial in locomoting systems. Specifically, appropriate contact planning can enable versatile behaviors (e.g., sidewinding in limbless locomotors) and facilitate speed-dependent gait transitions (e.g., walk-trot-gallop in quadrupedal locomotors). The challenges of contact planning include determining not only the sequence by which contact is made and broken between the locomotor and the environment, but also the sequence of internal shape changes (e.g., body bending and limb shoulder joint oscillation). Most state-of-the-art contact planning algorithms focus on conventional robots (e.g., bipeds and quadrupeds) and conventional tasks (e.g., forward locomotion), and there is a lack of study on general contact planning in multi-legged robots. In this talk, I will discuss how, using a geometric mechanics framework, we can obtain the globally optimal contact sequence given the sequence of internal shape changes. We thereby simplify the contact planning problem to a graph optimization problem that identifies the internal shape changes. Taking advantage of the spatio-temporal symmetry in locomotion, we map the graph optimization problem to special cases of spin models, which allows us to obtain the global optima in polynomial time. We apply our approach to develop new forward and sidewinding behaviors in a hexapod and a 12-legged centipede. We verify our predictions using numerical and robophysical models, and obtain novel and effective locomotion behaviors.
- Slides (For IAIFI members only)
- Ziming Liu, Grad Student, MIT
- April 25, 2023, 11:00am-12:00pm
- Physics-inspired generative models
- It might be surprising and delightful to physicists that physics has been playing a huge role in diffusion models. In fact, the evolution of our physical world can be viewed as a generation process. In this journal club, I will first review diffusion models and the more recent PFGM/PFGM++ inspired by electrostatics, and then introduce the GenPhys framework, which manages to convert even more physical processes to generative models.
- Slides (For IAIFI members only)
- Asem Wardak, Research Fellow, Harvard
- April 11, 2023, 11:00am-12:00pm
- Extended Anderson Criticality in Heavy-Tailed Neural Networks
- This talk focuses on nonlinearly interacting systems with random, heavy-tailed connectivity. We show how heavy-tailed connectivity gives rise to an extended critical regime of spatially multifractal fluctuations between the quiescent and active phases. This phase differs from the edge of chaos in classical networks by the appearance of universal hallmarks of the Anderson transition in condensed matter physics over an extended region in phase space. We then investigate some consequences of the multifractal Anderson regime for performing persistent computations.
- Slides (For IAIFI members only)
- Joshua Villarreal, Grad Student, MIT
- April 4, 2023, 11:00am-12:00pm
- Surrogate Modeling of Particle Accelerators
- Abstract: The design, construction, and fine-tuning of particle accelerators have never been easy. Each is a technical challenge in and of itself, and the need to repeatedly run accurate, high-fidelity simulations of the beam traversing the device can slow development. This is especially true for many modern-day particle accelerators, whose beam dynamics tend to exhibit more nonlinear effects, like those arising from space charge, making their simulation more computationally expensive. Thus, there is demand for machine learning and statistical learning models that can reproduce these beam dynamics simulations with orders-of-magnitude improvements in runtime. In this talk, I present an overview of recent efforts to build such accelerator surrogate models, which can be used for the design optimization and real-time commissioning, tuning, and running of the accelerator they aim to replicate. As an example, I also present the status of IsoDAR’s work to build a surrogate model for a Radio-Frequency Quadrupole accelerator, a vital component of IsoDAR’s groundbreaking design. I outline challenges of these and other virtual accelerators, and present future plans to make these surrogate models ubiquitous in future development of accelerator experiments of all kinds.
- Slides (For IAIFI members only)
- Daniel Murnane, Postdoc Researcher, Berkeley Lab
- March 21, 2023, 11:00am-12:00pm
- Multi-Tasking ML for Point Clouds at the LHC
- Abstract: The Large Hadron Collider is one of the world’s most data-intensive experiments. Every second, millions of collisions are processed, each one resembling a jigsaw puzzle with thousands of pieces. With the upcoming upgrade to the High Luminosity LHC, this problem will only become more complex. To make sense of this data, deep learning techniques are increasingly being used. For example, graph neural networks and transformers have proven effective at handling point cloud tasks such as track reconstruction and jet tagging. In this talk, I will review the point cloud problems in collider physics and recent deep learning solutions investigated by the Exa.TrkX project - an initiative to implement innovative algorithms for HEP at exascale. These architectures can accurately perform tracking and tagging with low latency, even in the high luminosity regime. Additionally, I will explore how multi-tasking and multi-modal networks can combine several of these different tasks.
- Slides (For IAIFI members only)
- Manuel Szewc, Postdoc, University of Cincinnati
- March 14, 2023, 11:00am-12:00pm
- Modeling Hadronization with Machine Learning
- Abstract: A fundamental part of event generation, hadronization is currently simulated with the help of fine-tuned empirical models. In this talk, I’ll present MLHAD, a proposed alternative for hadronization where the empirical model is replaced by a surrogate Machine Learning-based model to be ultimately data-trainable. I’ll detail the current stage of development and discuss possible ways forward.
- Slides (For IAIFI members only)
- Max Tegmark, Professor, MIT
- February 28, 2023, 11:00am-12:00pm
- Mechanistic interpretability
- Abstract: Mechanistic interpretability aims to reverse-engineer trained neural networks to distill out the algorithms they have discovered for performing various tasks. Although such “artificial neuroscience” is hard and fun, it’s easier than conventional neuroscience since you have complete knowledge of what every neuron and synapse is doing.
- Slides (For IAIFI members only)
- Liping Liu, Assistant Professor, Tufts University
- February 14, 2023, 11:00am-12:00pm
- Address combinatorial graph problems with learning methods
- Abstract: There are plenty of hard combinatorial problems defined on graphs. Recently learning algorithms have been used to speed up the search for approximate solutions to these problems. This talk will start with an introduction to hard problems on graphs and traditional algorithms, then it will give an overview of learning algorithms for solving combinatorial problems on graphs. The second part of the talk will focus on two specific problems, graph matching and subgraph distance calculation, and discuss neural methods for these two problems. Finally, it will conclude with open questions: why and when can neural networks help to solve combinatorial problems?
- References:
- Slides (For IAIFI members only)
Fall 2022 Journal Clubs
- Anna Golubeva, IAIFI Fellow and Matt Schwartz, Professor, Harvard
- November 29, 2022, 11:00am-12:00pm
- Should artificial intelligence be interpretable to humans?
- Resource:
- Michael Toomey, PhD Student, Brown University
- November 15, 2022, 11:00am-12:00pm
- Deep Learning the Dark Sector
- Abstract: One of the most pressing questions in physics today is the microphysical origin of dark matter. While there have been numerous experimental programs aimed at detecting its interactions with the Standard Model, all efforts to date have come up empty. An alternative method to constrain dark matter is purely based on its gravitational interactions. In particular, gravitational lensing can be very sensitive to the distribution and morphology of dark matter substructure, which can vary appreciably between different models. However, the complexity of data sets, systematics, and large volumes of data make the dimensionality of this problem difficult to approach with more traditional methods. Thankfully, this is a task ideally suited for machine learning. In this talk we will demonstrate how machine learning will play a critical role in distinguishing between models of dark matter and constraining model parameters in lensing data. We will additionally discuss techniques unique to ML for transferring the knowledge accumulated by models in the controlled setting of simulation to real data sets utilizing unsupervised domain adaptation.
- Slides (For IAIFI members only)
- Ziming Liu, PhD Student, MIT
- November 8, 2022, 11:00am-12:00pm
- Toy Models of Superposition
- Abstract: It would be very convenient if the individual neurons of artificial neural networks corresponded to cleanly interpretable features of the input. For example, in an “ideal” ImageNet classifier, each neuron would fire only in the presence of a specific visual feature, such as the color red, a left-facing curve, or a dog snout. But it isn’t always the case that features correspond so cleanly to neurons, especially in large language models where it actually seems rare for neurons to correspond to clean features. I will present a recent paper “Toy Models of Superposition” from Anthropic, aiming to answer these questions: Why is it that neurons sometimes align with features and sometimes don’t? Why do some models and tasks have many of these clean neurons, while they’re vanishingly rare in others?
- Slides (For IAIFI members only)
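For context on the entry above, the sketch below reproduces the basic setup of the “Toy Models of Superposition” paper in a few lines: n sparse features are squeezed through m < n hidden dimensions by a matrix W and reconstructed as ReLU(WᵀWx + b). It omits the paper’s feature-importance weighting and is only an illustrative approximation of the published setup, not code from the talk.

```python
# Toy superposition model: compress n sparse features into m < n dimensions.
import torch
import torch.nn as nn

n_features, n_hidden, sparsity = 20, 5, 0.95

W = nn.Parameter(torch.randn(n_hidden, n_features) * 0.1)
b = nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for _ in range(2000):
    # Sparse synthetic features: each is active with probability (1 - sparsity).
    x = torch.rand(1024, n_features) * (torch.rand(1024, n_features) > sparsity)
    x_hat = torch.relu(x @ W.T @ W + b)      # reconstruction through the bottleneck
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Columns of W with norm near 1 correspond to features the model chose to represent;
# with enough sparsity, more than n_hidden of them survive (superposition).
print(W.norm(dim=0))
```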
- Sona Najafi, Researcher, IBM
- October 25, 2022, 11:00am-12:00pm
- Quantum machine learning from algorithms to hardware
- Abstract: The rapid progress of technology over the past few decades has led to the emergence of two powerful computational paradigms known as quantum computing and machine learning. While machine learning tries to learn solutions from data, quantum computing harnesses quantum laws for more powerful computation than is possible on classical computers. In this talk, I will discuss three domains of quantum machine learning, each harnessing a particular aspect of quantum computers and targeting specific problems. The first domain scrutinizes the power of quantum computers to work with high-dimensional data and speed up linear algebra, but raises the caveat of input/output due to the quantum measurement rules. The second domain circumvents this problem by using a hybrid architecture, performing optimization on a classical computer while evaluating parameterized states on a quantum circuit chosen based on the particular problem. Finally, the third domain is inspired by brain-like computation and uses a given quantum system’s natural interactions and unitary dynamics as a source for learning.
- Kim Nicoli, Grad Student, Technical University of Berlin
- October 18, 2022, 11:00am-12:00pm
- Deep Learning approaches in lattice quantum field theory: recent advances and future challenges
- Abstract: Normalizing flows are deep generative models that leverage the change of variable formula to map simple base densities to arbitrary complex target distributions. Recent works have shown the potential of such methods in learning normalized Boltzmann densities in many fields, ranging from condensed matter physics to molecular science to lattice field theory. Though sampling from a flow-based density comes with many advantages over standard MCMC sampling, it is known that these methods still suffer from several limitations. In my talk, I will start by giving an overview of how to deploy deep generative models to learn Boltzmann densities in the context of a phi^4 lattice field theory. Specifically, I’ll focus on how these methods open up the possibility to estimate thermodynamic observables, i.e., physical observables which depend on the partition function and hence are not straightforward to estimate using standard MCMC methods. In the second part of my talk, I will present two ideas that have been proposed to mitigate the well-known problem of mode collapse, which often occurs when normalizing flows are trained to learn a multimodal target density. More specifically, I’ll talk about a novel “mode-dropping estimator” and path gradients. In the last part of my talk, I’ll present a new idea which aims at using flow-based methods to mitigate the sign problem.
- Slides (For IAIFI members only)
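As a toy illustration of the ideas in the entry above (not the speaker’s code), the sketch below trains a one-dimensional normalizing flow by reverse KL to approximate a Boltzmann density proportional to exp(−S(x)) for a double-well action, then estimates log Z from importance weights. With a bimodal target, this objective is exactly the setting where the mode-collapse problem mentioned in the talk appears. The action S and the Flow1D class are invented for the example.

```python
# Reverse-KL training of a tiny 1D flow against a Boltzmann density exp(-S(x)).
import torch
import torch.nn as nn

def S(x):                      # toy double-well action standing in for a phi^4 theory
    return 0.5 * (x**2 - 2.0)**2

class Flow1D(nn.Module):
    """Monotone map x = a*z + b + c*tanh(z) of a standard-normal base variable."""
    def __init__(self):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(()))
        self.b = nn.Parameter(torch.zeros(()))
        self.log_c = nn.Parameter(torch.tensor(-2.0))

    def sample_with_logq(self, n):
        z = torch.randn(n)
        a, c = self.log_a.exp(), self.log_c.exp()
        x = a * z + self.b + c * torch.tanh(z)
        dxdz = a + c * (1 - torch.tanh(z)**2)            # strictly positive Jacobian
        base_logp = -0.5 * z**2 - 0.5 * torch.log(torch.tensor(2 * torch.pi))
        return x, base_logp - dxdz.log()                 # change of variables

flow = Flow1D()
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
for _ in range(500):
    x, logq = flow.sample_with_logq(1024)
    loss = (logq + S(x)).mean()      # reverse KL up to the constant log Z;
    opt.zero_grad(); loss.backward(); opt.step()
    # with this bimodal target, the flow typically collapses onto one well.

with torch.no_grad():                # importance-weight estimate of log Z
    x, logq = flow.sample_with_logq(100_000)
    log_Z = torch.logsumexp(-S(x) - logq, 0) - torch.log(torch.tensor(100_000.0))
```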
- Adriana Dropulic, Grad Student, Princeton
- October 4, 2022, 11:00am-12:00pm
- Machine Learning the 6th Dimension: Stellar Radial Velocities from 5D Phase-Space Correlations
- Abstract: The Gaia satellite will observe the positions and velocities of over a billion Milky Way stars. In the early data releases, most observed stars do not have complete 6D phase-space information. We demonstrate the ability to infer the missing line-of-sight velocities until more spectroscopic observations become available. We utilize a novel neural network architecture that, after being trained on a subset of data with complete phase-space information, takes in a star’s 5D astrometry (angular coordinates, proper motions, and parallax) and outputs a predicted line-of-sight velocity with an associated uncertainty. Working with a mock Gaia catalog, we show that the network can successfully recover the distributions and correlations of each velocity component for stars that fall within ~5 kpc of the Sun. We also demonstrate that the network can accurately reconstruct the velocity distribution of a kinematic substructure in the stellar halo that is spatially uniform, even when it comprises a small fraction of the total star count. We apply the neural network to real Gaia data and discuss how the inferred information augments our understanding of the Milky Way’s formation history.
- Slides (For IAIFI members only)
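One standard way to produce “a predicted line-of-sight velocity with an associated uncertainty,” as described in the entry above, is a network with two outputs (mean and log-variance) trained on a Gaussian negative log-likelihood; the sketch below shows that pattern on random stand-in data. It is not the collaboration’s architecture, and VLosNet and gaussian_nll are placeholder names.

```python
# Mean + log-variance regression with a Gaussian negative log-likelihood.
import torch
import torch.nn as nn

class VLosNet(nn.Module):
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                 # [predicted v_los, log sigma^2]
        )

    def forward(self, x):
        out = self.body(x)
        return out[:, :1], out[:, 1:]

def gaussian_nll(mu, log_var, y):
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

# Toy usage: random stand-ins for 5D astrometry features and target velocities.
X = torch.randn(4096, 5)
y = torch.randn(4096, 1)
model = VLosNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    mu, log_var = model(X)
    loss = gaussian_nll(mu, log_var, y)
    opt.zero_grad(); loss.backward(); opt.step()
```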
- Iris Cong, Grad Student, Harvard
- September 27, 2022, 11:00am-12:00pm
- Quantum Convolutional Neural Networks
- Abstract: Convolutional neural networks (CNNs) have recently proven successful for many complex applications ranging from image recognition to precision medicine. In the first part of my talk, motivated by recent advances in realizing quantum information processors, I introduce and analyze a quantum circuit-based algorithm inspired by CNNs. Our quantum convolutional neural network (QCNN) uses only O(log(N)) variational parameters for input sizes of N qubits, allowing for its efficient training and implementation on realistic, near-term quantum devices. To explicitly illustrate its capabilities, I show that QCNN can accurately recognize quantum states associated with a one-dimensional symmetry-protected topological phase, with performance surpassing existing approaches. I further demonstrate that QCNN can be used to devise a quantum error correction (QEC) scheme optimized for a given, unknown error model that substantially outperforms known quantum codes of comparable complexity. The design of such error correction codes is particularly important for near-term experiments, whose error models may be different from those addressed by general-purpose QEC schemes. If time permits, I will also present our latest results on generalizing the QCNN framework to more accurately and efficiently identify two-dimensional topological phases of matter.
- Slides (For IAIFI members only)
- Miles Cranmer, Grad Student, Princeton
- September 20, 2022, 11:00am–12:00pm
- Interpretable Machine Learning for Physics
- Abstract: Would Kepler have discovered his laws if machine learning had been around in 1609? Or would he have been satisfied with the accuracy of some black box regression model, leaving Newton without the inspiration to find the law of gravitation? In this talk I will present a review of some industry-oriented machine learning algorithms, and discuss a major issue facing their use in the natural sciences: a lack of interpretability. I will then outline several approaches I have created with collaborators to help address these problems, based largely on a mix of structured deep learning and symbolic methods. This will include an introduction to the PySR software (https://astroautomata.com/PySR), a Python/Julia package for high-performance symbolic regression. I will conclude by demonstrating applications of such techniques and how we may gain new insights from such results.
- Resources: https://arxiv.org/abs/2207.12409; https://arxiv.org/abs/2202.02306; https://arxiv.org/abs/2006.11287
- Slides (For IAIFI members only)
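A minimal usage example of the PySR package mentioned in the abstract above (https://astroautomata.com/PySR) is shown below, fitting a symbolic expression to toy data generated from a known formula. Parameter names follow the PySR documentation but may differ across versions, so treat the snippet as a sketch rather than a canonical recipe.

```python
# Symbolic regression with PySR on toy data from a known law.
import numpy as np
from pysr import PySRRegressor

# Toy data: y = 2*cos(x0) + x1^2, so the recovered expression can be checked.
X = np.random.uniform(1.0, 5.0, size=(500, 2))
y = 2.0 * np.cos(X[:, 0]) + X[:, 1] ** 2

model = PySRRegressor(
    niterations=40,                          # search budget
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "exp"],
)
model.fit(X, y)     # runs the Julia-backed symbolic regression search
print(model)        # shows the Pareto front of discovered expressions
```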
- Anindita Maiti, Grad Student, Northeastern
- September 13, 2022, 11:00am-12:00pm
- A Study of Neural Network Field Theories
- Abstract: I will present a systematic exploration of field theories arising in Neural Networks, using a dual framework given by Neural Network parameters. The infinite width limit of NN architectures, combined with i.i.d. parameters, leads to Gaussian Processes in Neural Networks by the Central Limit Theorem (CLT), corresponding to generalized free field theories. Small and large violations of the CLT respectively lead to weakly coupled and non-perturbative non-Lagrangian field theories in Neural Networks. Non-Gaussianity, locality (via cluster decomposition), and symmetries of Neural Network field theories are examined via NN parameter space, without necessitating the knowledge of field theoretic actions. Thus, Neural Network field theories, in conjunction with this duality via parameters, may have potential implications for both Physics and Machine Learning.
- Resources: https://arxiv.org/abs/2106.00694
- Slides (For IAIFI members only)
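The central limit theorem statement in the entry above can be checked numerically in a few lines: at a fixed input, the output of a random single-hidden-layer network with i.i.d. parameters becomes increasingly Gaussian as the width grows, so its excess kurtosis shrinks toward zero. The sketch below (not from the talk) does exactly that.

```python
# Numerical check: wide random networks approach a Gaussian process at a fixed input.
import numpy as np

def random_net_outputs(width, n_draws=20000, d_in=4, rng=np.random.default_rng(0)):
    x = np.ones(d_in)                                        # fixed input
    W1 = rng.normal(0, 1 / np.sqrt(d_in), (n_draws, width, d_in))
    W2 = rng.normal(0, 1 / np.sqrt(width), (n_draws, width))
    h = np.tanh(W1 @ x)                                      # hidden activations
    return np.einsum("nw,nw->n", W2, h)                      # one scalar output per draw

for width in (2, 10, 100, 1000):
    f = random_net_outputs(width)
    kurt = np.mean((f - f.mean()) ** 4) / f.var() ** 2 - 3.0
    print(f"width={width:5d}  excess kurtosis={kurt:+.3f}")  # -> 0 as width grows
```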
Spring 2022 Journal Clubs
- Jessie Micallef, PhD Student, Michigan State University & Incoming IAIFI Fellow
- March 10, 2022, 11:00am-12:00pm
- “Adapting CNNs to Reconstruct Sparse, GeV-Scale IceCube Neutrino Events”
- Resources:
- Slides (For IAIFI members only)
- Denis Boyda, Postdoctoral Appointee, Argonne National Laboratory & Incoming IAIFI Fellow
- RESCHEDULED: March 17, 2022, 11:00am-12:00pm
- “Overview of some popular Machine Learning frameworks for data parallelism”
- Resources:
- S. Li et al. PyTorch Distributed: Experiences on Accelerating Data Parallel Training. 2020. arXiv:2006.15704
- A. Sergeev and M. Del Balso. Horovod: fast and easy distributed deep learning in TensorFlow. 2018. arXiv:1802.05799
- S. Rajbhandari et al. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. 2020. arXiv:1910.02054
- Slides (For IAIFI members only)
- Yin Lin, Postdoctoral Researcher, MIT
- April 7, 2022, 11:00am-12:00pm
- “Accelerating Dirac equation solves in lattice QFT with neural-network preconditioners”
- Resources:
- Slides (For IAIFI members only)
- Anatoly Dymarsky, Associate Professor, University of Kentucky
- April 14, 2022, 11:00am-12:00pm
- Tensor network to learn the wave function of data
- Abstract: We use a tensor network-based architecture to train a network which simultaneously accomplishes two tasks: image classification and image sampling. We argue that simultaneous performance of these tasks means our network has successfully learned the whole “manifold of data” (using the terminology from the literature) - namely, all possible images of a particular kind. We use a black-and-white version of MNIST, hence our network learns all possible images depicting a particular digit. We access global properties of the “manifold of data” by calculating its size. Thus, we find there are 2^72 possible images of the digit 3. We explain that this number is robust and largely independent of the details of the training process, etc.
- Resources:
- Slides (For IAIFI members only)
- Carolina Cuesta, PhD Student, Durham University & Incoming IAIFI Fellow
- April 21, 2022, 11:00am-12:00pm
- Equivariant normalizing flows and their application to cosmology
- Resources:
- Slides (For IAIFI members only)
- Benjamin Fuks, Professor, Sorbonne University
- April 28, 2022, 11:00am-12:00pm
- Precision simulations for new physics
- Resources:
- Dylan Hadfield-Menell, Assistant Professor, MIT
- May 5, 2022, 11:00am-12:00pm
- Overoptimization, Incompleteness, and Goodhart’s Law
- Resources:
- Mark Hamilton, Graduate Student, MIT
- Manami Kanemura, Undergraduate Student, Northeastern University (completed co-op with Bryan Ostdiek)
- May 26, 2022, 11:00am-12:00pm
- Using Soft-Introspection to improve anomaly detection at LHC
- Resources:
- Slides (For IAIFI members only)
Fall 2021 Journal Clubs
- Michael Douglas
- Thursday, September 23, 2021, 11:00am-12:00pm
- “Solving Combinatorial Problems using AI/ML”
- Abstract/Resources: Bright et al 1907.04408; Heule et al 1905.10192; Halverson et al 1903.11616; McAleer et al 1805.07470; Gukov et al 2010.16263; General sources on reinforcement learning: Sutton and Barto; The MathCheck SAT+CAS system
- Slides (For IAIFI members only)
- Ziming Liu
- Thursday, October 7, 2021, 11:00am-12:00pm
- “Dynamics in Modern Deep Learning Models”
- Abstract/Resources: Transient Chaos in BERT; Memory and attention in deep learning; The Brownian motion in the transformer model
- Slides (For IAIFI members only)
- Ge Yang
- Thursday, October 21, 2021, 11:00am-12:00pm
- “Learning and Generalization: Revisiting Neural Representations”
- Abstract/Resources: Understanding how deep neural networks learn and generalize has been a central pursuit of intelligence research. This is because we want to build agents that can learn quickly from a small amount of data and that also generalize to a wider set of scenarios. In this talk, we take a systems approach by identifying key bottleneck components that limit learning and generalization. We will present two key results — overcoming the simplicity bias of neural value approximation via random Fourier features and going beyond the training distribution via invariance through inference.
- Eric Michaud, PhD Student, MIT
- Thursday, November 18, 2021 11:00am-12:00pm
- “Curious Properties of Neural Networks”
- Abstract/Resources: In this informal talk/discussion, I will highlight some facts about neural networks which I find to be particularly fun and surprising. Possible topics could include the Lottery Ticket Hypothesis (https://arxiv.org/abs/1803.03635), Double Descent (https://arxiv.org/abs/1912.02292), and “grokking” (https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf). There will be time for discussion and for attendees to bring up their own favorite surprising facts about deep learning.
- Murphy Niu, Google Quantum AI
- Thursday, December 3, 11:00am-12:00pm
- “Entangling Quantum Generative Adversarial Networks using Tensorflow Quantum”
- Abstract/Resources: https://arxiv.org/pdf/2105.00080.pdf; https://arxiv.org/pdf/2003.02989.pdf
Spring 2021 Journal Clubs
- Anindita Maiti
- Wednesday, February 17
- “Neural Networks and Quantum Field Theory”
- Abstract/Resources: https://arxiv.org/abs/2008.08601
- Jacob Zavatone-Veth
- Tuesday, March 2
- “Non-Gaussian Processes and Neural Networks at Finite Widths”
- Abstract/Resources: https://arxiv.org/abs/1910.00019
- Di Luo
- Tuesday, April 6
- “Simulating Quantum Many-Body Physics with Neural Network Representation”
- Abstract/Resources: https://arxiv.org/abs/1807.10770; https://arxiv.org/pdf/1912.11052.pdf; https://arxiv.org/abs/2012.05232
- Anna Golubeva
- Tuesday, April 27
- “Are Wider Nets Better Given the Same Number of Parameters?”
- Abstract/Resources: https://arxiv.org/abs/2010.14495
- Siddharth Mishra-Sharma
- Tuesday, May 11
- “Simulation-Based Inference Focusing on Astrophysical Applications”
- Abstract/Resources: https://arxiv.org/abs/1911.01429; https://arxiv.org/abs/1909.02005
Fall 2020 Journal Clubs
- Bhairav Mehta
- Tuesday, October 20
- “Learning Invariances”
- Abstract/Resources: https://arxiv.org/abs/2009.00329
- Andrew Tan
- Wednesday, November 4
- “Estimating Mutual Information”
- Abstract/Resources: https://arxiv.org/abs/1905.06922
- Ziming Liu
- Wednesday, November 18
- “Scaling Laws of Learning”
- Abstract/Resources: https://arxiv.org/abs/2010.14701; https://arxiv.org/abs/2004.10802; https://arxiv.org/abs/2001.08361
- Dan Roberts
- Wednesday, December 2
- “Effective Theory of Deep Learning”