- Upcoming Internal Discussion Seminars
- Upcoming Journal Clubs
- Other Upcoming Internal Events
- Past Seminars
- Past Journal Clubs
- Other Past Internal Events

**Note: Events in red are invite-only**

To add this calendar to your Google Calendar, click the +GoogleCalendar button in the bottom-right corner of the calendar. To add an individual event to your calendar, click on the event and choose “copy to my calendar.”

Click here to add this calendar to a different calendar application.

## Upcoming Internal Discussion Seminars

These talks are only open to IAIFI members and affiliates. Access to the Zoom information and recordings can be found on the IAIFI internal website (contact iaifi-management@mit.edu if you have trouble logging in).

**Harini Suresh, PhD Student, Computer Science, MIT**
**Friday, December 3, 2:00-3:00pm**
*“Understanding Sources of Harm throughout the Machine Learning Life Cycle”*
- As machine learning increasingly affects people and society, awareness of its potential harmful effects has also grown. To anticipate, prevent, and mitigate undesirable downstream consequences, it’s important that we understand when and how harm might be introduced throughout the ML life cycle. This talk will walk through a framework that identifies seven distinct potential sources of downstream harm in machine learning, spanning the data collection, development, and deployment processes. It will also explore how different sources of harm might motivate different mitigation techniques.

## Upcoming Journal Clubs

The IAIFI Journal Club is only open to IAIFI members and affiliates. Access to the Zoom information and recordings can be found on the IAIFI internal website (contact iaifi-management@mit.edu if you have trouble logging in).

- The IAIFI Journal Club will return in February 2022.

## Other Upcoming Internal Events

Internal events are only open to IAIFI members and affiliates. Access to the Zoom information and recordings can be found on the IAIFI internal website (contact iaifi-management@mit.edu if you have trouble logging in).

**AI Lightning Talks**
**Friday, December 17, 2:00-3:00pm**
- IAIFI researchers from the AI thrust will present their work to IAIFI members with a goal of sparking opportunities for collaboration.
- *“Equivariant Contrastive Learning,”* presented by Rumen Dangovski
- *“Sparse Equivariant Convolutions for Neutrino Event Classification,”* presented by Taritree Wongjirad and Tess Smidt
- *“Can you see the shape of a jet?,”* presented by Akshunna S. Dogra

## Past Seminars

### Fall 2021

**Fabian Ruehle, Assistant Professor, Northeastern University**
**Friday, September 24, 2:00-3:00pm**
*“Learning metrics in extra dimensions”*
- Abstract: String theory is a very promising candidate for a fundamental theory of our universe. An interesting prediction of string theory is that spacetime is ten-dimensional. Since we only observe four spacetime dimensions, the extra six dimensions are small and compact, thus evading detection. These extra six-dimensional spaces, known as Calabi-Yau spaces, are very special and elusive. They are equipped with a special metric needed to make string theory consistent. This special property is given in terms of a (notoriously hard) type of partial differential equation. While we know, thanks to the heroic work of Calabi and Yau, that this PDE has a unique solution and hence that the metric exists, we neither know what it looks like nor how to construct it explicitly. However, the metric is an important quantity that enters in many physical observables, e.g. particle masses. Thinking of the metric as a function that satisfies three constraints that enter in the Calabi-Yau theorem, we can parameterize the metric as a neural network and formulate the problem as multiple continuous optimization tasks. The neural network is trained (akin to self-supervision) by sampling points from the Calabi-Yau space and imposing the constraints entering the theorem as customized loss functions.
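
The recipe in the abstract, sample points from the domain and impose the defining constraints as loss terms, can be sketched in miniature. The example below is an illustrative assumption, not the actual Calabi-Yau setup: it fits a three-parameter quadratic to satisfy a toy differential constraint plus boundary conditions by gradient descent.

```python
import numpy as np

# Toy version of constraint-as-loss training: parameterize a function,
# then penalize violations of its defining constraints. (Illustrative
# only -- the real problem trains a neural network on a Calabi-Yau
# manifold, not a quadratic u(x) = a + b*x + c*x**2 on [0, 1].)
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3)

lr = 0.05
for _ in range(5000):
    # Constraint u''(x) = 0 penalized as (u'')**2 = (2c)**2 (here the
    # residual is the same at every sampled point), plus boundary
    # conditions u(0) = 0 and u(1) = 1 as additional loss terms.
    s = a + b + c - 1.0          # boundary residual at x = 1
    grad_a = 2 * a + 2 * s       # from u(0)**2 = a**2 and s**2
    grad_b = 2 * s
    grad_c = 2 * s + 8 * c       # from s**2 and (2c)**2
    a -= lr * grad_a
    b -= lr * grad_b
    c -= lr * grad_c

# The unique function satisfying all three constraints is u(x) = x.
print(round(a, 3), round(b, 3), round(c, 3))
```

The same pattern, analytic constraints turned into customized loss functions evaluated on sampled points, is what the talk describes at scale.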

**Di Luo, IAIFI Fellow**
**Friday, October 8, 2:00-3:00pm**
*“Machine Learning for Quantum Many-body Physics”*
- Abstract: The study of quantum many-body physics plays a crucial role across condensed matter physics, high energy physics, and quantum information science. Due to the exponentially growing nature of Hilbert space, challenges arise for exact classical simulation of the high-dimensional wave function, which is the core object in quantum many-body physics. A natural question is whether machine learning, which is powerful for processing high-dimensional probability distributions, can provide new methods for studying quantum many-body physics. In contrast to a standard high-dimensional probability distribution, the wave function also exhibits complex phase structure and rich symmetries. This opens up a series of interesting questions for high-dimensional optimization, sampling, and representation imposed by quantum many-body physics. In this talk, I will discuss recent advances in the field and present (1) neural network representations for quantum states with Fermionic anti-symmetry and gauge symmetries; (2) neural network simulations for ground states and real-time dynamics in condensed matter physics, high energy physics, and quantum information science; (3) quantum control protocol discovery with machine learning.
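
The variational idea behind neural-network wave functions can be shown in the smallest possible case. The sketch below is an assumed single-spin example, with a one-parameter ansatz standing in for a neural network, minimizing the energy expectation of a Pauli-x Hamiltonian.

```python
import numpy as np

# Toy variational sketch: optimize a parameterized wave function to
# minimize <psi|H|psi>. (A one-parameter ansatz stands in for the
# neural-network representations discussed in the talk; the
# Hamiltonian is an assumed single-spin example.)
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # Pauli-x; ground-state energy is -1

def energy(theta):
    psi = np.array([np.cos(theta), np.sin(theta)])  # normalized ansatz
    return psi @ H @ psi            # equals sin(2*theta)

theta, lr = 0.3, 0.1
for _ in range(500):
    theta -= lr * 2 * np.cos(2 * theta)  # analytic gradient of sin(2*theta)

print(round(energy(theta), 4))
```

For real many-body systems the ansatz has exponentially many amplitudes, which is exactly where neural-network parameterizations and Monte Carlo sampling come in.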

**Cengiz Pehlevan, Assistant Professor, Applied Mathematics, Harvard University (SEAS)**
**Friday, October 22, 2:00-3:00pm**
*“Inductive bias of neural networks”*
- Abstract: A learner’s performance depends crucially on how its internal assumptions, or inductive biases, align with the task at hand. I will present a theory that describes the inductive biases of neural networks in the infinite width limit using kernel methods and statistical mechanics. This theory elucidates an inductive bias to explain data with “simple functions” which are identified by solving a related kernel eigenfunction problem on the data distribution. This notion of simplicity allows us to characterize whether a network is compatible with a learning task, facilitating good generalization performance from a small number of training examples. I will present applications of the theory to deep networks (at finite width) trained on synthetic and real datasets, and recordings from the mouse primary visual cortex. Finally, I will briefly present an extension of the theory to out-of-distribution generalization.
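
The kernel eigenfunction problem the abstract refers to can be approximated empirically: sample from the data distribution, build the kernel Gram matrix, and diagonalize it. The sketch below assumes an RBF kernel with unit bandwidth on a uniform data distribution for illustration.

```python
import numpy as np

# Empirical version of the kernel eigenfunction problem: the top
# eigenvectors of the Gram matrix approximate the "simple functions"
# the corresponding infinite-width network is biased toward learning
# first. (Kernel choice and data distribution are assumptions.)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))   # samples from the data distribution

sq_dists = (X - X.T) ** 2                    # pairwise squared distances
K = np.exp(-sq_dists / 2.0)                  # RBF Gram matrix, unit bandwidth

eigvals = np.linalg.eigvalsh(K)[::-1]        # spectrum, sorted descending

# The spectrum decays quickly: a handful of "simple" modes dominate,
# so targets aligned with them generalize from few training examples.
print(eigvals[:5] / eigvals.sum())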

**Bryan Ostdiek, Postdoctoral Fellow, Theoretical Particle Physics, Harvard University**
**Friday, November 5, 2:00-3:00pm**
*“Lessons from the Dark Machines Anomaly Score Challenge”*
- Abstract: With LHC experiments producing strong exclusion bounds on theoretical new physics models, there has been recent interest in model-agnostic methods to search for physics beyond the standard model. The Dark Machines group conducted a “challenge” as an open playground to examine unsupervised anomaly detection methods on simulated collider events. In this discussion, I briefly motivate and introduce anomaly detection, along with the public data set. We found that the methods which performed best across a wide range of signals shared a common feature: the metric for determining how anomalous an event is depends only on how the event can be encoded into a small representation; there is no decoding step. The discussion will start with speculations about why the “fixed target” encoding can work and look to future tests.

**Tess Smidt, Assistant Professor, EECS, MIT**
**Friday, November 19, 2:00-3:00pm**
*“Unexpected properties of symmetry equivariant neural networks”*
- Abstract: Physical data and the way that it is represented contain rich context, e.g. symmetries, conserved quantities, and experimental setups. There are many ways to imbue machine learning models with this context (e.g. input representation, training schemes, constraining model structure), and each varies in its flexibility and robustness. In this talk, I’ll give examples of some surprising consequences of what happens when we impose constraints on the functional forms of our models. Specifically, I’ll discuss properties of Euclidean Neural Networks, which are constructed to preserve 3D Euclidean symmetry. Perhaps unsurprisingly, symmetry-preserving algorithms are extremely data-efficient; they are able to achieve better results with less training data. More unexpectedly, Euclidean Neural Networks also act as “symmetry-compilers”: they can only learn tasks that are symmetrically well-posed, and they can also help uncover when symmetry implies missing information. I’ll give examples of these properties and how they can be used to craft useful training tasks for physical data. To conclude, I’ll highlight some open questions in symmetry equivariant neural networks particularly relevant to representing physical systems.
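
The equivariance constraint at the heart of the abstract is easy to demonstrate in a toy case: a function built only from rotation-invariant quantities times the input vector commutes with rotations by construction. The example below is a minimal 2D stand-in, not the full Euclidean Neural Network machinery.

```python
import numpy as np

# Minimal illustration of rotation equivariance: f(R @ v) == R @ f(v).
# g(||v||) * v is equivariant because the norm is rotation-invariant.
# (A toy stand-in for the constrained model structures in the talk.)
def f(v):
    r = np.linalg.norm(v)
    return np.tanh(r) * v

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 2.0])

print(np.allclose(f(R @ v), R @ f(v)))  # equivariance holds exactly
```

Because the constraint holds for every input and every rotation, a model built this way cannot represent a symmetry-violating function, which is the "symmetry-compiler" behavior described above.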

### Spring 2021

**Justin Solomon**
**Thursday, February 11, 11am-noon**
*“Geometric Data Processing at MIT”*

**Phil Harris, Anjali Nambrath, Karna Morey, Michal Szurek, Jade Chongsathapornpong**
**Thursday, February 25, 11am-noon**
*“Open Data Science in Physics Courses”*

**Ge Yang**
**Thursday, March 11, 11am-noon**
*“Learning Task Informed Abstractions”*

**Christopher Rackauckas**
**Thursday, March 25, 11am-noon**
*“Overview of SciML”*

**George Barbastathis/Demba Ba**
**Thursday, April 8, 11am-noon**
*“On the Continuum between Dictionaries and Neural Nets for Inverse Problems”*

**David Kaiser**
**Thursday, April 22, 11am-noon**
*“Ethics and AI”*

**Alexander Rakhlin**
**Thursday, May 6, 11am-noon**
*“Deep Learning: A Statistical Viewpoint”*

**Edo Berger**
**Thursday, May 20, 11am-noon**
*“Machine Learning for Cosmic Explosions”*

## Past Journal Clubs

### Fall 2021

**Michael Douglas**
**Thursday, September 23, 11:00am-12:00pm**
*“Solving Combinatorial Problems using AI/ML”*
- Abstract/Resources: Bright et al 1907.04408; Heule et al 1905.10192; Halverson et al 1903.11616; McAleer et al 1805.07470; Gukov et al 2010.16263; general sources on reinforcement learning: Sutton and Barto; the MathCheck SAT+CAS system

**Ziming Liu**
**Thursday, October 7, 11:00am-12:00pm**
*“Dynamics in Modern Deep Learning Models”*
- Abstract/Resources: Transient Chaos in BERT; Memory and attention in deep learning; The Brownian motion in the transformer model

**Ge Yang**
**Thursday, October 21, 11:00am-12:00pm**
*“Learning and Generalization: Revisiting Neural Representations”*
- Abstract/Resources: Understanding how deep neural networks learn and generalize has been a central pursuit of intelligence research, because we want to build agents that learn quickly from a small amount of data and generalize to a wider set of scenarios. In this talk, we take a systems approach by identifying key bottleneck components that limit learning and generalization. We will present two key results: overcoming the simplicity bias of neural value approximation via random Fourier features, and going beyond the training distribution via invariance through inference.
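
The random Fourier features mentioned in this abstract admit a compact sketch: randomized cosine features whose inner product approximates an RBF kernel (the classic Rahimi-Recht construction; the bandwidth and dimensions below are assumptions for illustration).

```python
import numpy as np

# Random Fourier features: phi(x) @ phi(y) approximates the RBF kernel
# k(x, y) = exp(-||x - y||**2 / 2), with error shrinking as the number
# of random features D grows.
rng = np.random.default_rng(0)
d, D = 3, 5000                       # input dimension, number of features

W = rng.normal(size=(D, d))          # frequencies ~ N(0, I) for unit bandwidth
b = rng.uniform(0.0, 2 * np.pi, D)   # random phases

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = np.array([0.2, -0.1, 0.4])
y = np.array([-0.3, 0.5, 0.1])

exact = np.exp(-np.sum((x - y) ** 2) / 2)
approx = phi(x) @ phi(y)
print(abs(exact - approx))           # O(1/sqrt(D)), small for large D
```

Mapping inputs through such features gives a cheap, explicit representation with kernel-like smoothness, which is the mechanism the talk uses against the simplicity bias of plain networks.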

**Eric Michaud, PhD Student, MIT**
**Thursday, November 18, 11:00am-12:00pm**
*“Curious Properties of Neural Networks”*
- Abstract/Resources: In this informal talk/discussion, I will highlight some facts about neural networks which I find to be particularly fun and surprising. Possible topics could include the Lottery Ticket Hypothesis (https://arxiv.org/abs/1803.03635), Double Descent (https://arxiv.org/abs/1912.02292), and “grokking” (https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf). There will be time for discussion and for attendees to bring up their own favorite surprising facts about deep learning.

**Murphy Niu, Google Quantum AI**
**Thursday, December 3, 11:00am-12:00pm**
*“Entangling Quantum Generative Adversarial Networks using Tensorflow Quantum”*
- Abstract/Resources: https://arxiv.org/pdf/2105.00080.pdf; https://arxiv.org/pdf/2003.02989.pdf

### Spring 2021

**Anindita Maiti**
**Wednesday, February 17**
*“Neural Networks and Quantum Field Theory”*
- Abstract/Resources: https://arxiv.org/abs/2008.08601

**Jacob Zavatone-Veth**
**Tuesday, March 2**
*“Non-Gaussian Processes and Neural Networks at Finite Widths”*
- Abstract/Resources: https://arxiv.org/abs/1910.00019

**Di Luo**
**Tuesday, April 6**
*“Simulating Quantum Many-Body Physics with Neural Network Representation”*
- Abstract/Resources: https://arxiv.org/abs/1807.10770; https://arxiv.org/pdf/1912.11052.pdf; https://arxiv.org/abs/2012.05232

**Anna Golubeva**
**Tuesday, April 27**
*“Are Wider Nets Better Given the Same Number of Parameters?”*
- Abstract/Resources: https://arxiv.org/abs/2010.14495

**Siddharth Mishra-Sharma**
**Tuesday, May 11**
*“Simulation-Based Inference Focusing on Astrophysical Applications”*
- Abstract/Resources: https://arxiv.org/abs/1911.01429; https://arxiv.org/abs/1909.02005

### Fall 2020

**Bhairav Mehta**
**Tuesday, October 20**
*“Learning Invariances”*
- Abstract/Resources: https://arxiv.org/abs/2009.00329

**Andrew Tan**
**Wednesday, November 4**
*“Estimating Mutual Information”*
- Abstract/Resources: https://arxiv.org/abs/1905.06922

**Ziming Liu**
**Wednesday, November 18**
*“Scaling Laws of Learning”*
- Abstract/Resources: https://arxiv.org/abs/2010.14701; https://arxiv.org/abs/2004.10802; https://arxiv.org/abs/2001.08361

**Dan Roberts**
**Wednesday, December 2**
*“Effective Theory of Deep Learning”*

## Other Past Internal Events

### Community Building

**Spring 2021 Virtual Networking**
**Thursday, May 13, 11:00am-12:00pm**

**Summer 2021 Virtual Networking**
**Thursday, August 19, 12:00pm-1:30pm**

**Fall 2021 Networking (in person)**
**Friday, October 29, 5:30pm-7:30pm**

### Town Halls

**Year 2 State of the IAIFI Town Hall**
**Friday, September 10, 2:00-3:00pm**

**Year 1 Early Career Town Hall**
**Tuesday, June 8, 11:00am-12:00pm**

**Year 1 IAIFI Town Hall**
**Monday, February 8, 11:00am-12:00pm**

### Research Events

**IAIFI Fall 2020 Unconference**
**Monday, December 14, 2020, 2pm-5pm**

**IAIFI Fall 2020 Symposium**
**Monday, November 23, 2020, 2pm-5pm**

**AI Thrust Meeting**
**Thursday, October 7, 1:00pm-2:00pm**

**Physics Theory Thrust Meeting**
**Tuesday, October 12, 2:30pm-3:30pm**

**Physics Experiment Thrust Meeting**
**Monday, October 18, 11:00am-12:00pm**