
Speaker: Sanjukta Krishnagopal

Date: May 24th, 2023

Time: 3:30 pm

Location: HFH 1132

Host: Ambuj Singh

Title: Machine learning meets graphs: developing scalable and efficient learning rules 

Abstract: Graphs are a natural way to model time-varying interactions between elements in various systems - social, political, biological, etc. First, I present some rigorous results on how graph neural networks, a form of machine learning (ML) for graph data often treated as a black box, learn. In particular, I use the recently popularized neural tangent kernel, which precisely characterizes the learning dynamics of a graph neural network in the wide-network limit. But how does this kernel evolve as the underlying graph grows toward its graphon limit? I introduce the graphon-kernel and prove that, as the underlying graph grows, the graph kernel (and its spectrum) converges to it. Through this, I show how one can perform 'transfer learning', i.e., train on a smaller graph (e.g., a subgraph of a social media network) and 'transfer' rapidly to a larger network (e.g., the entire social media network) with theoretical performance and early-stopping guarantees.
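
To give a flavor of the convergence claim, the toy sketch below samples graphs of growing size from a fixed graphon and checks that a simple graph-convolution kernel, evaluated on a fixed set of anchor nodes, stabilizes as the graph grows. The graphon, the anchor points, and the random-feature kernel are illustrative stand-ins, not the neural tangent kernel construction from the talk.

```python
# Illustrative sketch only: a crude random-feature graph-convolution kernel,
# not the graph NTK or graphon-kernel from the talk.
import numpy as np

rng = np.random.default_rng(0)

def graphon(x, y):
    # A smooth toy graphon: edge probability depends on latent positions.
    return 0.8 * np.exp(-3.0 * np.abs(x[:, None] - y[None, :]))

def sample_graph(latents):
    # Sample an undirected graph whose edge probabilities come from the graphon.
    p = graphon(latents, latents)
    a = (rng.random(p.shape) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T

def conv_kernel(adj, latents, weights):
    # One graph-convolution layer (using latent positions as toy node features)
    # with shared random weights, then the Gram matrix of the features.
    deg = adj.sum(1, keepdims=True) + 1.0
    feats = np.tanh(((adj @ latents[:, None]) / deg) @ weights)
    return feats @ feats.T

anchors = np.linspace(0.05, 0.95, 10)        # fixed latent positions to compare on
weights = rng.normal(size=(1, 64)) / 8.0     # shared random feature weights

prev = None
for n in [100, 400, 1600]:
    latents = np.concatenate([anchors, rng.random(n - len(anchors))])
    adj = sample_graph(latents)
    k = conv_kernel(adj, latents, weights)[:len(anchors), :len(anchors)]
    if prev is not None:
        # The anchor-restricted kernel changes less and less as n grows,
        # mirroring (in a very loose sense) convergence to a limiting kernel.
        print(f"n={n:5d}  change in anchor kernel: {np.linalg.norm(k - prev):.4f}")
    prev = k
```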


Next, I introduce an entirely new framework for training neural networks in the first place. Virtually all ML architectures use backpropagation (a form of gradient descent) for learning; however, backpropagation is notoriously opaque, data-intensive, and rather slow. In recent work, we introduce dendritic gated networks, which combine linear methods (which, by themselves, cannot learn even the simplest non-linear functions) with hyperplane gating, allowing them to match the performance of conventional ML methods on a variety of benchmark tasks while training much faster and with a convex loss. This novel learning rule is neuroscience-inspired and applies to a variety of neural networks, including graph neural networks. I show why this new paradigm has desirable properties: it is efficient, does not forget old tasks while learning new ones (avoiding catastrophic forgetting), is less prone to overfitting, and has smooth and interpretable weight functions, unlike conventionally trained neural networks.
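
As a rough illustration of hyperplane gating (a deliberate simplification, not the dendritic gated network architecture from the talk), the sketch below fixes a set of random hyperplanes that partition the input space into contexts and fits an independent linear model per context under squared loss. Each per-context fit is a convex problem, yet the gated ensemble captures a nonlinear target that a single linear model cannot. All sizes and the target function are illustrative choices.

```python
# Illustrative sketch of hyperplane gating with per-context linear learners;
# a simplification, not the dendritic gated network learning rule itself.
import numpy as np

rng = np.random.default_rng(1)

n_gates = 6                                   # fixed random hyperplanes (never trained)
gate_w = rng.normal(size=(n_gates, 1))
gate_b = rng.normal(size=n_gates)

def context(x):
    # Binary code of which side of each hyperplane x falls on -> context id.
    bits = (x @ gate_w.T + gate_b > 0).astype(int)
    return bits @ (1 << np.arange(n_gates))

def features(x):
    # Affine features used by each context's linear model.
    return np.hstack([x, np.ones((len(x), 1))])

# Data: a nonlinear 1-D target that a single linear model cannot fit.
x = rng.uniform(-3, 3, size=(512, 1))
y = np.sin(2 * x[:, 0])

# Convex per-context least squares: each context id gets its own linear fit.
ctx = context(x)
phi = features(x)
weights = {}
for c in np.unique(ctx):
    mask = ctx == c
    weights[c], *_ = np.linalg.lstsq(phi[mask], y[mask], rcond=None)

# Evaluate: the gated linear pieces approximate the nonlinear target.
xt = np.linspace(-3, 3, 200)[:, None]
pred = np.array([features(xi[None, :])[0] @ weights.get(c, np.zeros(2))
                 for xi, c in zip(xt, context(xt))])
print("test MSE:", np.mean((pred - np.sin(2 * xt[:, 0])) ** 2))
```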

Bio:

Sanjukta's research lies at the interface of network science, machine learning, statistics, and data science. She draws on her interdisciplinary training to develop methods, sometimes mathematically rigorous and sometimes application-driven, that bridge the gap between quantitative computational methods and data-driven analyses for broad impact in applied interdisciplinary sciences. She received her PhD from the University of Maryland, where she worked on networks, computational biology, and nonlinear dynamics and was a fellow in the COMBINE (Computation and Mathematics of Biological Networks) program. She then spent two years as a postdoc at the Gatsby Unit at University College London, where she also worked with Google DeepMind. She is currently a UC Presidential Postdoctoral Fellow with a joint appointment in Berkeley AI Research (BAIR) at UC Berkeley and UCLA Math. She has lived on four continents, and in her personal time enjoys dancing, diving, and attempting to climb mountains.