Colloquia

From UW-Math Wiki

<!--- in Van Vleck B239, '''unless otherwise indicated'''. --->
=Fall 2020=
  
== September 25, 2020, [https://www.math.tamu.edu/~jml/ Joseph Landsberg] (Texas A&M) ==
  
(Hosted by Gurevitch)
  
'''From theoretical computer science to algebraic geometry: how the complexity of matrix multiplication led me to the Hilbert scheme of points.'''
  
In 1968 Strassen discovered that the way we multiply nxn matrices (row/column) is not the most efficient algorithm possible. Subsequent work has led to the astounding conjecture that as the size n of the matrices grows, it becomes almost as easy to multiply matrices as it is to add them. I will give a history of this problem and explain why it is natural to study it using algebraic geometry and representation theory. I will conclude by discussing recent exciting developments that explain the second phrase in the title.
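Strassen's discovery can be made concrete: two 2x2 matrices can be multiplied with seven multiplications instead of the naive eight, and applying the same identity recursively to matrix blocks yields a sub-cubic algorithm. A minimal sketch of the standard textbook formulas (illustrative, not part of the talk):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (Strassen, 1969)."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)
```

Recursing on half-size blocks gives complexity O(n^log2(7)) ≈ O(n^2.81); the conjecture described above says the true exponent is 2 + o(1).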
 
  
== October 9, 2020, [https://impa.br/en_US/page-pessoas/carolina-araujo/ Carolina Araujo] (IMPA)  ==
  
(Hosted by Ellenberg)
  
'''Symmetries in Algebraic Geometry and Cremona transformations'''
  
In this talk I will discuss symmetries of complex algebraic varieties. When studying a projective variety $X$, one usually wants to understand its symmetries. Conversely, the structure of the group of automorphisms of $X$ encodes relevant geometric properties of $X$. After describing some examples of automorphism groups of projective varieties, I will discuss why the notion of automorphism is too rigid in the scope of birational geometry. We are then led to consider another class of symmetries of $X$, its birational self-maps. Birational self-maps of the projective space $\mathbb{P}^n$ are called Cremona transformations. Describing the structure of the group of Cremona transformations of the plane is a classical problem that goes back to the 19th century. In higher dimensions, not so much is known, and a natural problem is to construct interesting subgroups of the Cremona group. I will end by discussing a recent work with Alessio Corti and Alex Massarenti, where we investigate subgroups of the Cremona group consisting of symmetries preserving some special meromorphic volume forms.
  
== October 23, 2020, [http://www.math.toronto.edu/quastel/ Jeremy Quastel] (University of Toronto) ==
  
(Hosted by Gorin)
  
'''Towards KPZ Universality'''
  
The 1-d KPZ universality class contains random interface growth models as well as random polymer free energies and driven diffusive systems. The KPZ fixed point has now been determined, through the exact solution of a special model in the class, TASEP, and is expected to describe the asymptotic fluctuations for all models in the class. It is an integrable Markov process, with transition probabilities described by a system of integrable PDEs. Very recently, new techniques have become available to prove the convergence of the KPZ equation itself, as well as some non-integrable extensions of TASEP, to the KPZ fixed point. This talk will be a gentle introduction to these developments with no prior knowledge assumed. The results are, variously, joint works with Daniel Remenik, Konstantin Matetski, and Sourav Sarkar.
 
  
== November 6, 2020, [http://math.jhu.edu/~sakellar/ Yiannis Sakellaridis] (Johns Hopkins University)==
  
(Hosted by Gurevitch)
  
'''Harmonic analysis, intersection cohomology, and L-functions.'''
  
The goal of this lecture will be to describe a link between geometric-topological objects (certain intersection complexes on singular loop spaces), and objects of arithmetic interest (L-functions). The link between the two is by a Fourier/spectral transform. I will begin by giving an overview of Iwasawa–Tate theory, which expresses the Riemann zeta function as the Mellin transform of a certain theta series, and will conclude by describing joint work with Jonathan Wang (MIT), which expresses other L-functions as spectral transforms of functions obtained from intersection complexes on singular arc spaces. No prior familiarity with notions such as L-functions or intersection cohomology will be assumed.
  
== November 20, 2020, [https://web.ma.utexas.edu/users/ntran/ Ngoc Mai Tran] (University of Texas) ==
  
(Hosted by Rodriguez)
  
'''Does your problem have a tropical solution?'''
  
Tropical mathematics is mathematics done in the min-plus (or max-plus) algebra. The power of tropical mathematics comes from two key ideas: (a) tropical objects are limits of classical ones, and (b) the geometry of tropical objects is polyhedral. In this talk I'll demonstrate how these two ideas have been used over the last 10 years to solve a variety of problems in different domains, from deep neural networks and semigroup theory to auction theory and extreme value statistics.
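As a small concrete illustration of the min-plus algebra (a standard example, not drawn from the talk): replacing (+, x) by (min, +) turns matrix multiplication into a shortest-path computation.

```python
INF = float("inf")

def tropical_matmul(A, B):
    """Min-plus matrix product: (A (x) B)[i][j] = min_k (A[i][k] + B[k][j])."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Edge weights of a small directed graph; INF means "no edge".
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]

# The tropical square of W gives shortest-path distances using at most 2 edges.
D = tropical_matmul(W, W)
assert D[0][2] == 4  # the path 0 -> 1 -> 2 costs 3 + 1
```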
 
  
== December 4, 2020, [http://math.sfsu.edu/federico/ Federico Ardila] (San Francisco) ==
  
(Hosted by Ellenberg)
  
'''Measuring polytopes through their algebraic structure'''
  
Generalized permutahedra are a beautiful family of polytopes with a rich combinatorial structure, and strong connections to optimization and algebraic geometry. We prove they are the universal family of polyhedra with a certain Hopf-algebraic structure. This Hopf-algebraic structure is compatible with McMullen’s foundational work on the polytope algebra.
  
Our construction provides a unifying framework to organize and study many combinatorial families; for example:
  
1. It uniformly answers open questions and recovers known results about graphs, posets, matroids, hypergraphs, simplicial complexes, and others.
  
2. It shows that permutahedra and associahedra “know" how to compute the multiplicative and compositional inverses of power series.
 
  
3. It explains the mysterious fact that many combinatorial invariants of matroids, posets, and graphs can also be thought of as measures on polytopes, satisfying the inclusion-exclusion relations.
 
  
This is joint work with Marcelo Aguiar (2017) and Mario Sanchez (2020).
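Point 2 can be made concrete for multiplicative inversion: the coefficients of 1/a(x) satisfy a classical recursion, and the Hopf-algebraic result interprets them via signed sums over faces of permutahedra. Only the elementary recursion is sketched below; the polytopal interpretation is the content of the theorem.

```python
from fractions import Fraction

def inverse_series(a, n_terms):
    """Coefficients b of 1/a(x) for a(x) = 1 + a[1] x + a[2] x^2 + ...,
    via b_0 = 1 and b_n = -sum_{k=1}^{n} a_k * b_{n-k}."""
    b = [Fraction(1)]
    for n in range(1, n_terms):
        b.append(-sum(Fraction(a[k]) * b[n - k]
                      for k in range(1, n + 1) if k < len(a)))
    return b

# 1/(1 - x) = 1 + x + x^2 + ...
assert inverse_series([1, -1], 5) == [Fraction(1)] * 5
```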
 
  
 

Latest revision as of 11:04, 20 January 2021


The UW-Madison Mathematics Colloquium is ONLINE on Fridays at 4:00 pm.


=Spring 2021=

== January 27, 2021 '''[Wed 4-5pm]''', [https://sites.google.com/view/morganeaustern/home Morgane Austern] (Microsoft Research) ==

(Hosted by Roch)

'''Asymptotics of learning on dependent and structured random objects'''

Classical statistical inference relies on numerous tools from probability theory to study the properties of estimators. However, these same tools are often inadequate for studying modern machine learning problems, which frequently involve structured data (e.g. networks) or complicated dependence structures (e.g. dependent random matrices). In this talk, we extend universal limit theorems beyond the classical setting.

Firstly, we consider distributionally “structured” and dependent random objects, i.e. random objects whose distributions are invariant under the action of an amenable group. We show, under mild moment and mixing conditions, a series of universal second- and third-order limit theorems: central limit theorems, concentration inequalities, Wigner semi-circular laws, and Berry-Esseen bounds. The utility of these results will be illustrated by a series of examples in machine learning, network theory, and information theory. Secondly, building on these results, we establish the asymptotic distribution of the cross-validated risk with the number of folds allowed to grow at an arbitrary rate. Using this, we study the statistical speed-up of cross-validation compared to a train-test split procedure, which reveals surprising results even when used on simple estimators.
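The cross-validation claim in the last sentences can be probed numerically on a toy example: for the sample-mean estimator, the K-fold cross-validated risk estimate fluctuates less across replications than a single train-test split. This is only an illustrative sketch; the talk's results concern the asymptotic distribution of the CV risk, which this simulation does not derive.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_risk(y, n_train):
    """Squared-error risk of the sample mean, estimated from one train-test split."""
    mu = y[:n_train].mean()
    return np.mean((y[n_train:] - mu) ** 2)

def kfold_risk(y, K):
    """K-fold cross-validated estimate of the same risk."""
    folds = np.array_split(y, K)
    risks = []
    for i in range(K):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        risks.append(np.mean((folds[i] - train.mean()) ** 2))
    return np.mean(risks)

# Replicate both estimators many times and compare their variability.
split_est = [split_risk(rng.normal(size=100), 80) for _ in range(500)]
cv_est = [kfold_risk(rng.normal(size=100), 5) for _ in range(500)]
print(np.std(split_est), np.std(cv_est))  # the CV estimate is less variable
```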

== January 29, 2021, [https://sites.google.com/site/isaacpurduemath/ Isaac Harris] (Purdue) ==

(Hosted by Smith)

== February 1, 2021 '''[Mon 4-5pm]''', [https://services.math.duke.edu/~nwu/index.htm Nan Wu] (Duke) ==

(Hosted by Roch)

'''From Manifold Learning to Gaussian Process Regression on Manifolds'''

In this talk, I will review the concepts of manifold learning and discuss a well-known manifold learning algorithm, the Diffusion Map. I will present my recent research results, which theoretically justify that the Diffusion Map reveals the underlying topological structure of a dataset sampled from a manifold in a high-dimensional space. Moreover, I will show how these theoretical results can be applied to regression problems on manifolds and to real-life ecological problems.
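For readers who have not seen the algorithm, a bare-bones Diffusion Map fits in a few lines: build a Gaussian kernel, normalize it into a Markov transition matrix, and embed with the leading non-trivial eigenvectors. This is a minimal sketch with a fixed bandwidth, not the refined versions analyzed in the talk.

```python
import numpy as np

def diffusion_map(X, eps, dim=2, t=1):
    """Embed points X (n x d) with the first `dim` non-trivial eigenvectors
    of the diffusion operator built from a Gaussian kernel of bandwidth eps."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = np.exp(-sq / eps)
    P = K / K.sum(axis=1, keepdims=True)                  # Markov transition matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    return (evals[1:dim + 1] ** t) * evecs[:, 1:dim + 1]  # skip the constant eigvec

# Points on a circle in the plane; the 2-d embedding reproduces a circle.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
emb = diffusion_map(X, eps=0.3)
print(emb.shape)  # (60, 2)
```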

== February 5, 2021, [https://hanbaeklyu.com/ Hanbaek Lyu] (UCLA) ==

(Hosted by Roch)

'''Dictionary Learning from dependent data samples and networks'''

Analyzing the group behavior of systems of interacting variables is a ubiquitous problem in many fields, including probability, combinatorics, and dynamical systems. This problem also naturally arises when one tries to learn essential features (dictionary atoms) from large and structured data such as networks. For instance, independently sampling some number of nodes in a sparse network hardly detects any edges between adjacent nodes. Instead, we may perform a random walk on the space of connected subgraphs, which produces more meaningful but correlated samples. Just as classical results in probability were first developed for independent variables and then gradually generalized to dependent variables, many algorithms in machine learning first developed for independent data samples now need to be extended to correlated data samples. In this talk, we discuss some new results that accomplish this, including results on online nonnegative matrix and tensor factorization for Markovian data. A unifying technique we develop for handling dependence in data samples is to condition on the distant past, rather than the recent history. As an application, we present a new approach for learning "basis subgraphs" from network data that can be used for network denoising and edge-inference tasks. We illustrate our method on several synthetic network models as well as Facebook, arXiv, and protein-protein interaction networks, achieving state-of-the-art performance on such network tasks compared to several recent methods.
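The sampling contrast described above can be seen on a toy graph: independently sampled node pairs in a sparse graph are rarely adjacent, while consecutive nodes of a random walk always are. The window-of-a-walk sampler below is a simplified stand-in (an assumption of this sketch) for the random walk over connected subgraphs mentioned in the abstract.

```python
import random

# A small sparse graph as an adjacency list (toy data, not from the talk).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def walk_windows(adj, k, n_windows, seed=0):
    """Correlated samples: sliding windows of k consecutive random-walk nodes."""
    rng = random.Random(seed)
    node = rng.choice(sorted(adj))
    path = [node]
    while len(path) < k + n_windows - 1:
        node = rng.choice(adj[node])
        path.append(node)
    return [path[i:i + k] for i in range(n_windows)]

windows = walk_windows(adj, k=3, n_windows=10)
# By construction, every consecutive pair inside a window is a real edge.
assert all(w[i + 1] in adj[w[i]] for w in windows for i in range(len(w) - 1))
```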

== February 8, 2021 '''[Mon 4-5pm]''', [https://sites.google.com/view/mndaoud/home Mohamed Ndaoud] (USC) ==

(Hosted by Roch)

'''Scaled Minimax Optimality in High-Dimensional Linear Regression: A Non-Convex Algorithmic Regularization Approach'''

The question of fast convergence in the classical problem of high-dimensional linear regression has been extensively studied. Arguably, one of the fastest procedures in practice is Iterative Hard Thresholding (IHT). Still, IHT relies strongly on knowledge of the true sparsity parameter s. In this work, we present a novel fast procedure for estimation in high-dimensional linear regression. Taking advantage of the interplay between estimation, support recovery, and optimization, we achieve both optimal statistical accuracy and fast convergence. The main advantage of our procedure is that it is fully adaptive, making it more practical than state-of-the-art IHT methods. Our procedure achieves optimal statistical accuracy faster than, for instance, classical algorithms for the Lasso. Moreover, we establish sharp optimal results for both estimation and support recovery. As a consequence, we present a new iterative hard thresholding algorithm for high-dimensional linear regression that is scaled minimax optimal (it achieves the estimation error of the oracle that knows the sparsity pattern, whenever possible), fast, and adaptive.
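For orientation, the classical IHT baseline that the abstract starts from alternates a gradient step with hard thresholding to the s largest entries; note that it takes the true sparsity s as input, which is precisely the knowledge the talk's adaptive procedure dispenses with. A sketch on synthetic noiseless data (the textbook algorithm, not the new one):

```python
import numpy as np

def iht(A, y, s, n_iter=500):
    """Iterative Hard Thresholding: gradient step on ||y - Ax||^2 followed by
    keeping only the s largest-magnitude entries."""
    n, p = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic loss
    x = np.zeros(p)
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))
        small = np.argsort(np.abs(x))[:-s]   # indices of all but the s largest
        x[small] = 0.0
    return x

rng = np.random.default_rng(1)
n, p, s = 100, 200, 5
A = rng.normal(size=(n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[:s] = [3.0, -2.0, 4.0, 1.5, -3.0]
y = A @ x_true                               # noiseless observations
x_hat = iht(A, y, s)
print(np.linalg.norm(x_hat - x_true))        # small in this well-conditioned case
```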

== February 12, 2021, [https://sites.math.washington.edu/~blwilson/ Bobby Wilson] (University of Washington) ==

(Hosted by Smith)

== February 19, 2021, [http://www.mauricefabien.com/ Maurice Fabien] (Brown) ==

(Hosted by Smith)

== February 26, 2021, [https://www.math.ias.edu/avi/home Avi Wigderson] (Princeton IAS) ==

(Hosted by Gurevitch)

== March 12, 2021, [] ==

(Hosted by )

== March 26, 2021, [] ==

(Hosted by )

== April 9, 2021, [] ==

(Hosted by )

== April 23, 2021, [] ==

(Hosted by )



== Past Colloquia ==

[[Colloquia/Fall2020|Fall 2020]]

[[Colloquia/Spring2020|Spring 2020]]

[[Colloquia/Fall2019|Fall 2019]]

[[Colloquia/Spring2019|Spring 2019]]

[[Colloquia/Fall2018|Fall 2018]]

[[Colloquia/Spring2018|Spring 2018]]

[[Colloquia/Fall2017|Fall 2017]]

[[Colloquia/Spring2017|Spring 2017]]

[[Colloquia/Fall2016|Fall 2016]]

[[Colloquia/Spring2016|Spring 2016]]

[[Colloquia/Fall2015|Fall 2015]]

[[Colloquia/Spring2015|Spring 2015]]

[[Colloquia/Fall2014|Fall 2014]]

[[Colloquia/Spring2014|Spring 2014]]

[[Colloquia/Fall2013|Fall 2013]]

[[Colloquia/Spring2013|Spring 2013]]

[[Colloquia/Fall2012|Fall 2012]]

WIMAW