Colloquia: Difference between revisions

From UW-Math Wiki
<!--- in Van Vleck B239, '''unless otherwise indicated'''. --->


=Spring 2021=


== January 27, 2021 '''[Wed 4-5pm]''', [https://sites.google.com/view/morganeaustern/home Morgane Austern] (Microsoft Research) ==


(Hosted by Roch)


'''Asymptotics of learning on dependent and structured random objects'''


Classical statistical inference relies on numerous tools from probability theory to study the properties of estimators. However, these same tools are often inadequate for modern machine learning problems, which frequently involve structured data (e.g. networks) or complicated dependence structures (e.g. dependent random matrices). In this talk, we extend universal limit theorems beyond the classical setting.


Firstly, we consider distributionally “structured” and dependent random objects, i.e. random objects whose distributions are invariant under the action of an amenable group. We show, under mild moment and mixing conditions, a series of universal second- and third-order limit theorems: central limit theorems, concentration inequalities, the Wigner semicircular law, and Berry-Esseen bounds. The utility of these will be illustrated by a series of examples in machine learning, network theory, and information theory. Secondly, by building on these results, we establish the asymptotic distribution of the cross-validated risk with the number of folds allowed to grow at an arbitrary rate. Using this, we study the statistical speed-up of cross-validation compared to a train-test split procedure, which reveals surprising results even when used on simple estimators.
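As a toy illustration of the cross-validated risk discussed above (my own sketch, not the speaker's construction), here is K-fold cross-validation of a simple mean estimator; the fold count K can be small or grow with the sample size:

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_risk(y, K):
    """K-fold cross-validated squared-error risk of the sample-mean estimator."""
    folds = np.array_split(rng.permutation(y), K)
    risks = []
    for k in range(K):
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        mu_hat = train.mean()                            # fit on the other K-1 folds
        risks.append(np.mean((folds[k] - mu_hat) ** 2))  # risk on the held-out fold
    return float(np.mean(risks))

y = rng.normal(loc=1.0, scale=2.0, size=2000)
# the CV risk estimates E(Y - mu_hat)^2, close to Var(Y) = 4 here,
# whether K is small (classical) or large (the growing-K regime of the talk)
print(cv_risk(y, K=5), cv_risk(y, K=100))
```

The talk's results concern the limiting distribution of exactly such cross-validated risks as both the sample size and K grow.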


== January 29, 2021, [https://sites.google.com/site/isaacpurduemath/ Isaac Harris] (Purdue) ==
 
(Hosted by Smith)
 
'''Direct Sampling Algorithms for Inverse Scattering'''
 
In this talk, we will discuss a recent qualitative imaging method referred to as the Direct Sampling Method for inverse scattering. This method allows one to recover a scattering object by evaluating an imaging functional that is the inner product of the far-field data and a known function. It can be shown that the imaging functional is strictly positive in the scatterer and decays as the sampling point moves away from the scatterer. The analysis uses the factorization of the far-field operator and the Funk-Hecke formula. This method can also be shown to be stable with respect to perturbations in the scattering data. We will discuss the inverse scattering problem for both acoustic and electromagnetic waves. This is joint work with A. Kleefeld and D.-L. Nguyen.
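To give a flavor of such an imaging functional (an idealized sketch with synthetic far-field data from a single point scatterer, not the authors' code; the wavenumber and grid are illustrative), one correlates the data with plane-wave test functions and looks for peaks:

```python
import numpy as np

k = 10.0                                                   # wavenumber (assumed)
angles = np.linspace(0, 2 * np.pi, 128, endpoint=False)
xhat = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # far-field directions

z0 = np.array([0.3, -0.2])                                 # true point scatterer
u_far = np.exp(-1j * k * xhat @ z0)                        # idealized far-field data

def imaging_functional(z):
    # inner product of the far-field data with a known test function e^{-ik xhat.z}
    phi = np.exp(-1j * k * xhat @ z)
    return abs(np.vdot(phi, u_far)) / len(angles)

# the functional peaks at the scatterer and decays away from it
grid = [np.array([x, y]) for x in np.linspace(-1, 1, 41)
                         for y in np.linspace(-1, 1, 41)]
best = max(grid, key=imaging_functional)
print(best)
```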
 
== February 1, 2021 '''[Mon 4-5pm]''', [https://services.math.duke.edu/~nwu/index.htm Nan Wu] (Duke) ==
 
(Hosted by Roch)
 
'''From Manifold Learning to Gaussian Process Regression on Manifolds'''
 
In this talk, I will review the concepts in manifold learning and discuss a famous manifold learning algorithm, the Diffusion Map. I will present my recent research results, which theoretically justify that the Diffusion Map reveals the underlying topological structure of a dataset sampled from a manifold in a high-dimensional space. Moreover, I will show how these theoretical results apply to regression problems on manifolds and to ecological problems in real life.
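For readers unfamiliar with the algorithm, here is a bare-bones diffusion map (the standard recipe of Coifman and Lafon, without the density-correction step; the bandwidth and example are illustrative):

```python
import numpy as np

def diffusion_map(X, eps, dim=2, t=1):
    """Bare-bones diffusion map: Gaussian kernel, row-normalize, eigendecompose."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    K = np.exp(-D2 / eps)                                  # affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                   # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    return vecs[:, 1:dim + 1] * vals[1:dim + 1] ** t       # drop the constant eigenvector

# points on a circle: the top two diffusion coordinates recover the circle
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
emb = diffusion_map(X, eps=0.1)
r = np.linalg.norm(emb, axis=1)
print(emb.shape, r.std() / r.mean())                       # (200, 2), nearly constant radius
```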
 
== February 5, 2021, [https://hanbaeklyu.com/ Hanbaek Lyu] (UCLA) ==
 
(Hosted by Roch)
 
'''Dictionary Learning from dependent data samples and networks'''
 
Analyzing the group behavior of systems of interacting variables is a ubiquitous problem in many fields, including probability, combinatorics, and dynamical systems. This problem also naturally arises when one tries to learn essential features (dictionary atoms) from large and structured data such as networks. For instance, independently sampling some number of nodes in a sparse network hardly detects any edges between adjacent nodes. Instead, we may perform a random walk on the space of connected subgraphs, which will produce more meaningful but correlated samples. Just as classical results in probability were first developed for independent variables and then gradually generalized to dependent variables, many algorithms in machine learning first developed for independent data samples now need to be extended to correlated data samples. In this talk, we discuss some new results that accomplish this, including some for online nonnegative matrix and tensor factorization for Markovian data. A unifying technique we develop for handling dependence in data samples is to condition on the distant past, rather than the recent history. As an application, we present a new approach for learning "basis subgraphs" from network data that can be used for network denoising and edge inference tasks. We illustrate our method on several synthetic network models as well as Facebook, arXiv, and protein-protein interaction networks, achieving state-of-the-art performance on such network tasks compared to several recent methods.
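As background, the batch version of the dictionary-learning problem is nonnegative matrix factorization; a minimal sketch using the classical Lee-Seung multiplicative updates (the talk's online, Markovian-data variants extend this idea, this is not their algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(X, r, iters=500, eps=1e-9):
    """Factor X ≈ W H with W, H >= 0 via Lee-Seung multiplicative updates."""
    n, m = X.shape
    W = rng.random((n, r))                     # dictionary atoms (columns of W)
    H = rng.random((r, m))                     # nonnegative codes
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update codes, stays nonnegative
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update dictionary atoms
    return W, H

# synthetic data with an exact rank-3 nonnegative factorization
W0 = rng.random((20, 3))
H0 = rng.random((3, 50))
X = W0 @ H0
W, H = nmf(X, r=3)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # small relative error
```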
 
== February 8, 2021 '''[Mon 4-5pm]''', [https://sites.google.com/view/mndaoud/home Mohamed Ndaoud] (USC) ==
 
(Hosted by Roch)
 
'''Scaled minimax optimality in high-dimensional linear regression: a non-convex algorithmic regularization approach'''
 
The question of fast convergence in the classical problem of high-dimensional linear regression has been extensively studied. Arguably, one of the fastest procedures in practice is Iterative Hard Thresholding (IHT). Still, IHT relies strongly on knowledge of the true sparsity parameter s. In this work, we present a novel fast procedure for estimation in high-dimensional linear regression. Taking advantage of the interplay between estimation, support recovery, and optimization, we achieve both optimal statistical accuracy and fast convergence. The main advantage of our procedure is that it is fully adaptive, making it more practical than state-of-the-art IHT methods. Our procedure achieves optimal statistical accuracy faster than, for instance, classical algorithms for the Lasso. Moreover, we establish sharp optimal results for both estimation and support recovery. As a consequence, we present a new iterative hard thresholding algorithm for high-dimensional linear regression that is scaled minimax optimal (it achieves the estimation error of the oracle that knows the sparsity pattern, when possible), fast, and adaptive.
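For reference, plain IHT, which the abstract contrasts with, is only a few lines; this sketch on a synthetic sparse-recovery problem (step size and design are my illustrative choices) also shows the drawback the talk addresses: the true sparsity s must be supplied.

```python
import numpy as np

rng = np.random.default_rng(2)

def iht(X, y, s, step=0.9, iters=500):
    """Iterative Hard Thresholding: gradient step, then keep the s largest entries."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = beta + step * X.T @ (y - X @ beta)   # gradient step on 0.5 ||y - X b||^2
        small = np.argsort(np.abs(beta))[:-s]       # hard threshold: zero all but top s
        beta[small] = 0.0
    return beta

n, p, s = 400, 800, 5
X = rng.normal(size=(n, p)) / np.sqrt(n)            # columns with roughly unit norm
beta_true = np.zeros(p)
beta_true[:s] = [3.0, -2.0, 4.0, 2.5, -3.0]
y = X @ beta_true + 0.01 * rng.normal(size=n)
beta_hat = iht(X, y, s=s)
print(np.linalg.norm(beta_hat - beta_true))         # small: support and signs recovered
```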
 
== February 12, 2021, [https://sites.math.washington.edu/~blwilson/ Bobby Wilson] (University of Washington) ==
 
(Hosted by Smith)
 
'''Stability of the cubic nonlinear Schrodinger equation on the Irrational Torus'''
 
A characteristic of the defocusing cubic nonlinear Schrodinger equation (NLSE), when defined so that the space variable is the multi-dimensional square torus, is that there exist solutions that start with arbitrarily small Sobolev norms and evolve to develop arbitrarily large modes at later times; this phenomenon is recognized as a weak energy transfer to high modes for the NLSE. In this talk we will discuss research and numerical simulations that show that, when the system is considered on an irrational torus, energy transfer is diminished. This is joint work with Gigliola Staffilani and Yulin Pan.
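As a purely numerical aside (my sketch, not from the talk, which concerns the irrational torus in higher dimensions), the periodic cubic NLSE can be simulated with a standard split-step Fourier scheme; both substeps multiply by unimodular factors, so the L^2 norm (mass) is conserved exactly:

```python
import numpy as np

# split-step Fourier scheme for the defocusing cubic NLS  i u_t = -u_xx + |u|^2 u
# on the 1D torus [0, 2*pi)
N, L, dt, steps = 256, 2 * np.pi, 1e-3, 1000
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)                      # Fourier wavenumbers

u = np.exp(1j * x) + 0.1 * np.exp(3j * x)                       # smooth periodic data
mass0 = np.sum(np.abs(u) ** 2)
for _ in range(steps):
    u = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(u))  # linear flow, exact in Fourier
    u *= np.exp(-1j * np.abs(u) ** 2 * dt)                      # nonlinear flow, exact pointwise
print(np.sum(np.abs(u) ** 2) / mass0)                           # 1.0 up to roundoff
```

Detecting the growth (or suppression) of high Sobolev norms discussed in the talk requires much longer time horizons and higher dimensions than this toy run.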
 
== February 17, 2021 '''[Wed 9-10am]''', [https://www.math.ias.edu/~visu/ Visu Makam] (IAS) ==
 
(Hosted by Roch)
 
'''Algorithms in invariant theory, connections and applications'''
 
For over a century, computation has played a key role in the development of invariant theory, a subject that studies symmetries captured by group actions. Over the years, major computational advances, such as the advent of the digital computer, the discovery of Gröbner basis techniques, and the development of rigorous notions of computational complexity, have served as a stimulus for invariant theory. The perspective adopted in this talk will be the contrary: I will explain how developments in invariant theory can inform and make progress on fundamental problems in computational subjects such as complexity theory and statistics.
 
I will discuss how central problems in complexity such as the celebrated P vs NP problem, graph isomorphism, and identity testing arise in the context of invariant theory, focusing on recent results in invariant theory that shed new light on identity testing. I will also outline the challenges going forward in this exciting and rapidly developing field. With regard to statistics, a surprising connection was discovered last year between stability notions in invariant theory and maximum likelihood estimation for a special class of statistical models. This connection allows for invariant-theoretic approaches to statistical questions; e.g., we can give exact sample size thresholds for the widely used matrix (and tensor) normal models by utilizing results on quiver representations, castling transforms, etc. I will also briefly point to some exciting current and future directions in this context. No special background will be assumed in this talk.
 
== February 19, 2021, [http://www.mauricefabien.com/ Maurice Fabien] (Brown) ==
 
(Hosted by Smith)
 
'''A hybridizable discontinuous Galerkin method for flow and transport phenomena in porous media'''
 
We present an efficient computational method for the approximation of solutions to partial differential equations that model flow and transport phenomena in porous media. These problems can be challenging to solve, as the governing equations are coupled and nonlinear, and the material properties are often highly varying and discontinuous. The high-order implicit hybridizable discontinuous Galerkin (HDG) method is utilized for the discretization, which significantly reduces the computational cost. The HDG method is high-order accurate, locally mass-conservative, allows us to use unstructured complicated meshes, and enables the use of static condensation. We demonstrate that the HDG method is able to efficiently generate high-fidelity simulations of flow and transport phenomena in porous media. Several challenging benchmarks are used to verify and validate the method in heterogeneous porous media.
 
== February 26, 2021, [https://www.math.ias.edu/avi/home Avi Wigderson] (Princeton IAS) ==
 
(Hosted by Gurevich)


'''Optimization, Complexity and Math'''


This talk aims to summarize the status and goals of a broad research project. The main messages of this project are summarized below; I plan to describe, through examples, many of the concepts they refer to, and the evolution of ideas leading to them. No special background is assumed.  


(1)  We extend some basic algorithms of convex optimization from Euclidean space to the far more general setting of Riemannian manifolds, capturing the symmetries of non-commutative group actions. The main tools for analyzing these algorithms combine central results from invariant theory (in particular, non-commutative duality theory) and representation theory.


(2) One main motivation for studying these problems and algorithms comes from algebraic complexity theory, especially attempts to separate Valiant’s algebraic analogs of P and NP. Symmetries and group actions play a key role in these attempts.


(3) The new algorithms give exponential (or better) improvements in run-time for many specific algorithmic problems across CS, math, and physics. In particular, in algebra (testing rational identities in non-commutative variables), in analysis (testing the feasibility and tightness of Brascamp-Lieb inequalities), in quantum information theory (the quantum marginals problem), in optimization (testing membership in “moment polytopes”), and others. This work exposes old and new connections between these diverse areas.


Based on many joint works in the past 5 years, with Zeyuan Allen-Zhu, Peter Burgisser, Cole Franks, Ankit Garg, Leonid Gurvits, Pavel Hrubes, Yuanzhi Li, Visu Makam, Rafael Oliveira and Michael Walter.
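A classical prototype of the scaling algorithms in this program is Sinkhorn's matrix scaling, the commutative special case (operator and tensor scaling generalize it); a minimal sketch:

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Alternately normalize rows and columns, driving A toward doubly stochastic."""
    A = A.astype(float).copy()
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)   # scale rows to sum 1
        A /= A.sum(axis=0, keepdims=True)   # scale columns to sum 1
    return A

rng = np.random.default_rng(3)
A = rng.random((5, 5)) + 0.1                # strictly positive, so scaling succeeds
S = sinkhorn(A)
print(S.sum(axis=1), S.sum(axis=0))         # both vectors of ones, to high accuracy
```

Whether such an alternating scheme converges, and how fast, is exactly the kind of question the non-commutative generalizations answer via invariant theory.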


== March 12, 2021, [https://gauss.math.yale.edu/~il282/ Ivan Losev] (Yale) ==


(Hosted by Gurevich)


'''Modular representations of semisimple Lie algebras'''


Representation theory seeks to understand the ways in which a given algebraic object (a group, an associative algebra, a Lie algebra, etc.) can be represented via linear operators on a vector space over a field. What the representations look like depends very much on the field in question and, in particular, on its characteristic. The most important questions are settled in characteristic 0, for example when we work over the complex numbers. But in the case of fields of positive characteristic, which is what the word "modular" refers to, even basic questions are wide open.


In my talk I will concentrate on one of the most important classes of algebraic objects, semisimple Lie algebras, and explain what we know about their irreducible (i.e., having no proper subrepresentations) modular representations. I will start with the case of sl_2, explaining the results of Rudakov and Shafarevich from 1967 describing the irreducible representations. Then I will talk about recent work on the general case, including my paper with Bezrukavnikov from 2020, where we obtain the most explicit description of irreducible representations available to date. Our primary tool is relating the modular representations of semisimple Lie algebras to the (affine) Hecke category, a fundamental object of modern representation theory.
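As a concrete entry point (my illustration, not from the talk), one can write down the standard highest-weight representation of sl_2 with integer matrices and check that the defining relations survive reduction mod p; the choice p = 5, highest weight m = p - 1 below is illustrative:

```python
import numpy as np

p, m = 5, 4                         # characteristic p and highest weight m = p - 1
n = m + 1                           # module with basis v_0, ..., v_m

E = np.zeros((n, n), dtype=int)
F = np.zeros((n, n), dtype=int)
H = np.zeros((n, n), dtype=int)
for i in range(n):
    H[i, i] = m - 2 * i                   # h v_i = (m - 2i) v_i
    if i + 1 < n:
        F[i + 1, i] = 1                   # f v_i = v_{i+1}
        E[i, i + 1] = (i + 1) * (m - i)   # e v_{i+1} = (i+1)(m-i) v_i

def comm(A, B):
    return (A @ B - B @ A) % p            # commutator, reduced mod p

# defining relations of sl_2 hold over F_p: [e,f] = h, [h,e] = 2e, [h,f] = -2f
print((comm(E, F) == H % p).all(),
      (comm(H, E) == (2 * E) % p).all(),
      (comm(H, F) == (-2 * F) % p).all())
```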


== March 26, 2021, [http://www-personal.umich.edu/~itobasco/ Ian Tobasco] (University of Illinois at Chicago) ==


(Hosted by Thiffeault)


'''The many, elaborate wrinkle patterns of confined elastic shells'''


A basic fact of geometry is that there are no length-preserving maps from a sphere to the plane. But what happens if you confine a thin elastic shell, which prefers to be a curved surface but can deform in an approximately isometric way, near a plane? It wrinkles, and forms a remarkable pattern of peaks and troughs, the arrangement of which is sometimes random, sometimes not, depending on the shell. After a broad introduction to the mathematics of thin elastic sheets, this talk will focus on a new set of geometric rules explaining the wrinkle patterns of confined shells. Our rules are the latest output from an ongoing analysis of highly wrinkled shells using the tools of Gamma-convergence and convex analysis. The asymptotic expansions they encode reveal a beautiful and unexpected connection between opposite curvatures: apparently, surfaces with positive or negative Gaussian curvature are paired according to the way that they wrinkle. Our predicted patterns are easy to explain and match the results of numerous experiments and simulations done with Eleni Katifori (U. Penn) and Joey Paulsen (Syracuse). Underlying their analysis is a certain class of interpolation inequalities of Gagliardo-Nirenberg type, whose best prefactors encode optimal wrinkle patterns.


== April 9, 2021 '''[8pm]''', [https://web.math.princeton.edu/~weinan/ Weinan E] (Princeton) ==


'''Hans Schneider LAA Lecture''' (Hosted by Shen)


'''A Mathematical Perspective of Machine Learning'''


The heart of modern machine learning (ML) is the approximation of high-dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality (CoD). This does not seem to be the case for neural network-based ML models. To quantify this, we need to develop the corresponding mathematical framework. In this talk, I will report the progress made so far and the main remaining issues within the scope of supervised learning. I will discuss three major issues: approximation theory and error analysis of modern ML models, qualitative behavior of gradient descent algorithms, and ML from a continuous viewpoint.


== April 23, 2021, [http://people.math.harvard.edu/~mmwood/ Melanie Matchett Wood] (Harvard) ==


(Hosted by Ellenberg)


'''Surjectivity of random integral matrices on integral vectors'''


A random n×m matrix gives a random linear transformation from Z^m to Z^n (between vectors with integral coordinates). Asking for the probability that such a map is injective is a question about the non-vanishing of determinants. In this talk, we discuss the probability that such a map is surjective, which is a more subtle integral question; for example, it is equivalent to the map being surjective mod p for all primes p. We show that when m = n + u, for u at least 1, as n goes to infinity, the surjectivity probability is a non-zero product of inverse values of the Riemann zeta function. This probability is universal, i.e. we prove that it does not depend on the distribution from which the independent entries of the matrix are chosen, and it also arises in the Cohen-Lenstra heuristics predicting the distribution of class groups of real quadratic fields. The major challenge in this work is understanding how one can get the result for all primes at once, especially in light of dependent behavior between different primes. This talk is on joint work with Hoi Nguyen.
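The single-prime statement is easy to experiment with (an illustrative sketch of mine, not the authors' argument): for a uniform random n×(n+u) matrix over F_p, the probability of surjectivity is the finite product prod_{k=u+1}^{n+u} (1 - p^(-k)), the mod-p factor of the zeta product above.

```python
import numpy as np

rng = np.random.default_rng(4)

def rank_mod_p(M, p):
    """Rank of an integer matrix over the field F_p, by Gaussian elimination."""
    M = M.copy() % p
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        M[rank] = (M[rank] * pow(int(M[rank, c]), p - 2, p)) % p   # scale pivot row
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] - M[r, c] * M[rank]) % p              # eliminate column c
        rank += 1
    return rank

n, u, p, trials = 4, 1, 3, 4000
hits = sum(rank_mod_p(rng.integers(0, p, (n, n + u)), p) == n      # surjective mod p
           for _ in range(trials))
exact = np.prod([1 - p ** -k for k in range(u + 1, n + u + 1)])
print(hits / trials, exact)      # empirical frequency matches the product formula
```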


== Past Colloquia ==
[[Colloquia/Fall2020|Fall 2020]]


[[Colloquia/Spring2020|Spring 2020]]

Revision as of 15:10, 13 April 2021


The UW-Madison Mathematics Colloquium is ONLINE on Fridays at 4:00 pm.


Spring 2021

January 27, 2021 [Wed 4-5pm], Morgane Austern (Microsoft Research)

(Hosted by Roch)

Asymptotics of learning on dependent and structured random objects

Classical statistical inference relies on numerous tools from probability theory to study the properties of estimators. However, these same tools are often inadequate to study modern machine problems that frequently involve structured data (e.g networks) or complicated dependence structures (e.g dependent random matrices). In this talk, we extend universal limit theorems beyond the classical setting.

Firstly, we consider distributionally “structured” and dependent random object–i.e random objects whose distribution are invariant under the action of an amenable group. We show, under mild moment and mixing conditions, a series of universal second and third order limit theorems: central-limit theorems, concentration inequalities, Wigner semi-circular law and Berry-Esseen bounds. The utility of these will be illustrated by a series of examples in machine learning, network and information theory. Secondly by building on these results, we establish the asymptotic distribution of the cross- validated risk with the number of folds allowed to grow at an arbitrary rate. Using this, we study the statistical speed-up of cross validation compared to a train-test split procedure, which reveals surprising results even when used on simple estimators.

January 29, 2021, Isaac Harris (Purdue)

(Hosted by Smith)

Direct Sampling Algorithms for Inverse Scattering

In this talk, we will discuss a recent qualitative imaging method referred to as the Direct Sampling Method for inverse scattering. This method allows one to recover a scattering object by evaluating an imaging functional that is the inner-product of the far-field data and a known function. It can be shown that the imaging functional is strictly positive in the scatterer and decays as the sampling point moves away from the scatterer. The analysis uses the factorization of the far-field operator and the Funke-Hecke formula. This method can also be shown to be stable with respect to perturbations in the scattering data. We will discuss the inverse scattering problem for both acoustic and electromagnetic waves. This is joint work with A. Kleefeld and D.-L. Nguyen.

February 1, 2021 [Mon 4-5pm], Nan Wu (Duke)

(Hosted by Roch)

From Manifold Learning to Gaussian Process Regression on Manifolds

In this talk, I will review the concepts in manifold learning and discuss a famous manifold learning algorithm, the Diffusion Map. I will talk about my recent research results which theoretically justify that the Diffusion Map reveals the underlying topological structure of the dataset sampled from a manifold in a high dimensional space. Moreover, I will show the application of these theoretical results in solving the regression problems on manifolds and ecological problems in real life.

February 5, 2021, Hanbaek Lyu (UCLA)

(Hosted by Roch)

Dictionary Learning from dependent data samples and networks

Analyzing group behavior of systems of interacting variables is a ubiquitous problem in many fields including probability, combinatorics, and dynamical systems. This problem also naturally arises when one tries to learn essential features (dictionary atoms) from large and structured data such as networks. For instance, independently sampling some number of nodes in a sparse network hardly detects any edges between adjacent nodes. Instead, we may perform a random walk on the space of connected subgraphs, which will produce more meaningful but correlated samples. As classical results in probability were first developed for independent variables and then gradually generalized for dependent variables, many algorithms in machine learning first developed for independent data samples now need to be extended to correlated data samples. In this talk, we discuss some new results that accomplish this including some for online nonnegative matrix and tensor factorization for Markovian data. A unifying technique for handling dependence in data samples we develop is to condition on the distant past, rather than the recent history. As an application, we present a new approach for learning "basis subgraphs" from network data, that can be used for network denoising and edge inference tasks. We illustrate our method using several synthetic network models as well as Facebook, arXiv, and protein-protein interaction networks, that achieve state-of-the-art performance for such network tasks when compared to several recent methods.

February 8, 2021 [Mon 4-5pm], Mohamed Ndaoud (USC)

(Hosted by Roch)

SCALED MINIMAX OPTIMALITY IN HIGH-DIMENSIONAL LINEAR REGRESSION: A NON-CONVEX ALGORITHMIC REGULARIZATION APPROACH

The question of fast convergence in the classical problem of high dimensional linear regression has been extensively studied. Arguably, one of the fastest procedures in practice is Iterative Hard Thresholding (IHT). Still, IHT relies strongly on the knowledge of the true sparsity parameter s. In this paper, we present a novel fast procedure for estimation in the high dimensional linear regression. Taking advantage of the interplay between estimation, support recovery and optimization we achieve both optimal statistical accuracy and fast convergence. The main advantage of our procedure is that it is fully adaptive, making it more practical than state of the art IHT methods. Our procedure achieves optimal statistical accuracy faster than, for instance, classical algorithms for the Lasso. Moreover, we establish sharp optimal results for both estimation and support recovery. As a consequence, we present a new iterative hard thresholding algorithm for high dimensional linear regression that is scaled minimax optimal (achieves the estimation error of the oracle that knows the sparsity pattern if possible), fast and adaptive.

February 12, 2021, Bobby Wilson (University of Washington)

(Hosted by Smith)

Stability of the cubic nonlinear Schrodinger equation on the Irrational Torus

Abstract: A characteristic of the defocusing cubic nonlinear Schrodinger equation (NLSE), when defined so that the space variable is the multi-dimensional square torus, is that there exist solutions that start with arbitrarily small norms Sobolev norms and evolve to develop arbitrarily large modes at later times; this phenomenon is recognized as a weak energy transfer to high modes for the NLSE. In this talk we will discuss research and numerical simulations that show that, when the system is considered on an irrational torus, energy transfer is diminished. This is joint work with Gigliola Staffilani and Yulin Pan.

February 17, 2021 [Wed 9-10am], Visu Makam (IAS)

(Hosted by Roch)

Algorithms in invariant theory, connections and applications

For over a century, computation has played a key role in the development of invariant theory, a subject that studies symmetries captured by group actions. Over the years, major computational advances such as the advent of the digital computer, the discovery of Grobner basis techniques, the development of rigorous notions of computational complexity, etc have served as a stimulus for invariant theory. The perspective adopted in this talk will be contrary — I will explain how developments in invariant theory can inform and make progress on fundamental problems in computational subjects such as complexity theory and statistics.

I will discuss how central problems in complexity such as the celebrated P vs NP problem, graph isomorphism, and identity testing arise in the context of invariant theory, focusing on recent results in invariant theory that shed new light on identity testing. I will also outline the challenges going forward in this exciting and rapidly developing field. With regard to statistics, a surprising connection was discovered last year between stability notions in invariant theory and maximum likelihood estimation for a special class of statistical models. This connection allows for invariant theoretic approaches to statistical questions, e.g., we can give exact sample size thresholds for the widely used matrix (and tensor) normal models by utilizing results on quiver representations, castling transforms, etc. I will also briefly point at some exciting current and future directions in this context. No special background will be assumed in this talk.

February 19, 2021, Maurice Fabien (Brown)

(Hosted by Smith)

A hybridizable discontinuous method for flow and transport phenomena in porous media

We present an efficient computational method for the approximation of solutions to partial differential equations that model flow and transport phenomena in porous media. These problems can be challenging to solve as the governing equations are coupled, nonlinear, and material properties are often highly varying and discontinuous. The high-order implicit hybridizable discontinuous method (HDG) is utilized for the discretization, which significantly reduces the computational cost. The HDG method is high-order accurate, locally mass-conservative, allows us to use unstructured complicated meshes, and enables the use of static condensation. We demonstrate that the HDG method is able to efficiently generate high-fidelity simulations of flow and transport phenomena in porous media. Several challenging benchmarks are used to verify and validate the method in heterogeneous porous media.

== February 26, 2021, Avi Wigderson (Princeton IAS) ==

(Hosted by Gurevich)

'''Optimization, Complexity and Math'''

This talk aims to summarize the status and goals of a broad research project. The main messages of this project are summarized below; I plan to describe, through examples, many of the concepts they refer to, and the evolution of ideas leading to them. No special background is assumed.

(1) We extend some basic algorithms of convex optimization from Euclidean space to the far more general setting of Riemannian manifolds, capturing the symmetries of non-commutative group actions. The main tools for analyzing these algorithms combine central results from invariant theory (in particular, non-commutative duality theory) and representation theory.

(2) One main motivation for studying these problems and algorithms comes from algebraic complexity theory, especially attempts to separate Valiant's algebraic analogs of P and NP. Symmetries and group actions play a key role in these attempts.

(3) The new algorithms give exponential (or better) improvements in run-time for solving many specific algorithmic problems across CS, Math and Physics: in algebra (testing rational identities in non-commutative variables), in analysis (testing the feasibility and tightness of Brascamp-Lieb inequalities), in quantum information theory (the quantum marginals problem), in optimization (testing membership in “moment polytopes”), and others. This work exposes old and new connections between these diverse areas.

Based on many joint works over the past 5 years, with Zeyuan Allen-Zhu, Peter Bürgisser, Cole Franks, Ankit Garg, Leonid Gurvits, Pavel Hrubes, Yuanzhi Li, Visu Makam, Rafael Oliveira and Michael Walter.

== March 12, 2021, Ivan Losev (Yale) ==

(Hosted by Gurevich)

'''Modular representations of semisimple Lie algebras'''

Representation theory seeks to understand the ways in which a given algebraic object (a group, an associative algebra, a Lie algebra, etc.) can be represented via linear operators on a vector space over a field. What the representations look like depends very much on the field in question and, in particular, on its characteristic. The most important questions are settled in characteristic 0, for example, when we work over the complex numbers. But over fields of positive characteristic, which is what the word "modular" refers to, even basic questions are wide open.

In my talk I will concentrate on one of the most important classes of algebraic objects, semisimple Lie algebras, and explain what we know about their irreducible (i.e., with no proper nonzero subrepresentations) modular representations. I will start with the case of sl_2, explaining the results of Rudakov and Shafarevich from 1967 describing the irreducible representations. Then I will talk about recent work on the general case, including my paper with Bezrukavnikov from 2020, where we obtain the most explicit description of irreducible representations available to date. Our primary tool is relating the modular representations of semisimple Lie algebras to the (affine) Hecke category, the most fundamental object of modern representation theory.
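For orientation, a standard first computation (not specific to the talk) shows why characteristic p behaves so differently. The Lie algebra sl_2 has basis e, f, h with relations

```latex
[h,e] = 2e, \qquad [h,f] = -2f, \qquad [e,f] = h.
```

Over a field of characteristic $p$, the elements $e^p$, $f^p$, and $h^p - h$ become central in the universal enveloping algebra $U(\mathfrak{sl}_2)$, so by Schur's lemma they act by scalars on any irreducible representation. From this one can deduce that every irreducible representation is finite-dimensional of dimension at most $p$, in sharp contrast with characteristic $0$, where $\mathfrak{sl}_2$ has irreducible representations of every finite dimension as well as infinite-dimensional ones.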

== March 26, 2021, Ian Tobasco (University of Illinois at Chicago) ==

(Hosted by Thiffeault)

'''The many, elaborate wrinkle patterns of confined elastic shells'''

A basic fact of geometry is that there are no length-preserving maps from a sphere to the plane. But what happens if you confine a thin elastic shell, which prefers to be a curved surface but can deform in an approximately isometric way, near a plane? It wrinkles, and forms a remarkable pattern of peaks and troughs, the arrangement of which is sometimes random and sometimes not, depending on the shell. After a broad introduction to the mathematics of thin elastic sheets, this talk will focus on a new set of geometric rules explaining the wrinkle patterns of confined shells. Our rules are the latest output from an ongoing analysis of highly wrinkled shells using the tools of Gamma-convergence and convex analysis. The asymptotic expansions they encode reveal a beautiful and unexpected connection between opposite curvatures: apparently, surfaces with positive or negative Gaussian curvature are paired according to the way that they wrinkle. Our predicted patterns are easy to explain, and they match the results of numerous experiments and simulations done with Eleni Katifori (U. Penn) and Joey Paulsen (Syracuse). Underlying this analysis is a certain class of interpolation inequalities of Gagliardo-Nirenberg type, whose best prefactors encode optimal wrinkle patterns.

== April 9, 2021 '''[8pm]''', Weinan E (Princeton) ==

Hans Schneider LAA Lecture (Hosted by Shen)

'''A Mathematical Perspective of Machine Learning'''

The heart of modern machine learning (ML) is the approximation of high dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality (CoD). This does not seem to be the case for the neural network-based ML models. To quantify this, we need to develop the corresponding mathematical framework. In this talk, I will report the progress made so far and the main remaining issues within the scope of supervised learning. I will discuss three major issues: approximation theory and error analysis of modern ML models, qualitative behavior of gradient descent algorithms, and ML from a continuous viewpoint.
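A standard back-of-the-envelope comparison (textbook approximation rates, not the speaker's results) makes the curse concrete. For piecewise-linear approximation of a smooth function on $[0,1]^d$ using $N$ parameters on a uniform mesh of width $h \sim N^{-1/d}$, the best error scales like

```latex
\inf_{f_N} \|f - f_N\| \;\lesssim\; h^2 \;\sim\; N^{-2/d},
```

so reaching error $\varepsilon$ costs on the order of $\varepsilon^{-d/2}$ parameters, exponential in the dimension $d$. By contrast, Barron's classical theorem gives, for functions with finite Barron norm $\|f\|_{\mathcal{B}}$, two-layer networks with $N$ neurons satisfying $\|f - f_N\|_{L^2} \lesssim \|f\|_{\mathcal{B}}\, N^{-1/2}$, a rate independent of $d$. Quantifying when and why neural models escape the curse in this way is the subject of the framework described in the talk.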

== April 23, 2021, Melanie Matchett Wood (Harvard) ==

(Hosted by Ellenberg)

'''Surjectivity of random integral matrices on integral vectors'''

A random nxm matrix gives a random linear transformation from Z^m to Z^n (between vectors with integral coordinates). Asking for the probability that such a map is injective is a question of the non-vanishing of determinants. In this talk, we discuss the probability that such a map is surjective, which is a more subtle integral question, and for example is equivalent to the map being surjective mod p for all primes p. We show that when m=n+u, for u at least 1, as n goes to infinity, the surjectivity probability is a non-zero product of inverse values of the Riemann zeta function. This probability is universal, i.e. we prove that it does not depend on the distribution from which you choose independent entries of the matrix, and this probability also arises in the Cohen-Lenstra heuristics predicting the distribution of class groups of real quadratic fields. The major challenge in this work is understanding how one can get the result for all primes at once, especially in light of dependent behavior between different primes. This talk is on joint work with Hoi Nguyen.
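The local, single-prime piece of this statement is elementary enough to check by brute force: a map Z^m to Z^n is surjective mod p exactly when its reduction has rank n over F_p, and for an n x m matrix with independent uniform entries mod p this happens with probability equal to the product of (1 - p^(-k)) for k from m-n+1 to m. A quick exhaustive check of that local formula (illustrative only; the content of the theorem is in how the different primes interact):

```python
from fractions import Fraction
from itertools import product

def rank_mod_p(matrix, p):
    """Rank over F_p via Gauss-Jordan elimination (p prime)."""
    rows = [r[:] for r in matrix]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # inverse by Fermat's little theorem
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

n, m, p = 3, 4, 2  # corank u = m - n = 1; small enough to enumerate fully
count = surj = 0
for entries in product(range(p), repeat=n * m):
    matrix = [list(entries[i * m:(i + 1) * m]) for i in range(n)]
    surj += rank_mod_p(matrix, p) == n  # surjective mod p iff full rank n
    count += 1

empirical = Fraction(surj, count)
predicted = Fraction(1)
for k in range(m - n + 1, m + 1):
    predicted *= 1 - Fraction(1, p ** k)
assert empirical == predicted  # both equal 315/512
```

Multiplying the local factors over all primes p yields the product of inverse zeta values; the hard part of the theorem is that the entries' behavior at different primes is not independent, yet the answer is the product anyway.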

== Past Colloquia ==

Fall 2020

Spring 2020

Fall 2019

Spring 2019

Fall 2018

Spring 2018

Fall 2017

Spring 2017

Fall 2016

Spring 2016

Fall 2015

Spring 2015

Fall 2014

Spring 2014

Fall 2013

Spring 2013

Fall 2012

WIMAW