https://www.math.wisc.edu/wiki/api.php?action=feedcontributions&user=Ewbates&feedformat=atomUW-Math Wiki - User contributions [en]2020-12-03T18:16:01ZUser contributionsMediaWiki 1.30.1https://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20414Probability Seminar2020-12-02T22:29:18Z<p>Ewbates: /* December 3, 2020, Tatyana Shcherbina (UW-Madison) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''Concentration in integrable polymer models'''<br />
<br />
I will discuss a general method, applicable to all known integrable stationary polymer models, to obtain nearly optimal bounds on the<br />
central moments of the partition function and the occupation lengths for each level of the polymer system. The method was developed<br />
for the O'Connell-Yor polymer, but was subsequently extended to discrete integrable polymers. As an application, we obtain<br />
localization of the OY polymer paths along a straight line on the scale O(n^{2/3+o(1)}). <br />
<br />
Joint work with Christian Noack.<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''The heat and the landscape'''<br />
<br />
Abstract: The directed landscape is the conjectured universal scaling limit of the<br />
most common random planar metrics. Examples are planar first passage<br />
percolation, directed last passage percolation, distances in percolation<br />
clusters, random polymer models, and exclusion processes. The limit laws of distances of objects are given by the KPZ fixed point.<br />
<br />
We show that the KPZ fixed point is characterized by the Baik Ben-Arous<br />
Peche statistics well-known from random matrix theory.<br />
<br />
This provides a general and elementary method for showing convergence to<br />
the KPZ fixed point. We apply this method to two models related to<br />
random heat flow: the O'Connell-Yor polymer and the KPZ equation.<br />
<br />
Note: there will be a follow-up talk with details about the proofs at 11am, Friday, October 23.<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''Operator level hard-to-soft transition for β-ensembles'''<br />
<br />
Abstract: It was shown that the soft and hard edge scaling limits of β-ensembles can be characterized as the spectra of certain random Sturm-Liouville operators. By tuning the parameter of the hard edge process one can obtain the soft edge process as a scaling limit. In this talk, I will present the corresponding limit on the level of the operators. This talk is based on joint work with Laure Dumaz and Benedek Valkó.<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''Persistence and root detection algorithms in growing networks'''<br />
<br />
Abstract: Motivated by questions in Network Archaeology, we investigate statistics of dynamic networks<br />
that are ''persistent'', that is, they fixate almost surely after some random time as the network grows. We<br />
consider ''generalized attachment models'' of network growth where at each time $n$, an incoming vertex<br />
attaches itself to the network through $m_n$ edges attached one-by-one to existing vertices with probability<br />
proportional to an ''arbitrary function'' $f$ of their degree. We identify the class of attachment functions $f$ for<br />
which the ''maximal degree vertex'' persists and obtain asymptotics for its index when it does not. We also<br />
show that for tree networks, the ''centroid'' of the tree persists and use it to device polynomial time root<br />
finding algorithms and quantify their efficacy. Our methods rely on an interplay between dynamic<br />
random networks and their continuous time embeddings.<br />
<br />
This is joint work with Shankar Bhamidi.<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''A forward-backward SDE from the 2D nonlinear stochastic heat equation'''<br />
<br />
Abstract: I will discuss a two-dimensional stochastic heat equation in the weak noise regime with a nonlinear noise strength. I will explain how pointwise statistics of solutions to this equation, as the correlation length of the noise is taken to 0 but the noise is attenuated by a logarithmic factor, can be related to a forward-backward stochastic differential equation (FBSDE) depending on the nonlinearity. In the linear case, the FBSDE can be explicitly solved and we recover results of Caravenna, Sun, and Zygouras. Joint work with Yu Gu (CMU).<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''Correlation length of two-dimensional random field Ising model via greedy lattice animal'''<br />
<br />
Abstract: In this talk, I will discuss two-dimensional random field Ising model where the disorder is given by i.i.d. mean zero Gaussian variables with small variance. In particular, I will present a recent joint work with Mateo Wirth on (one notion of) the correlation length, which is the critical size of the box at which the influences to spin magnetization from the boundary conditions and from the random field are comparable. Our work draws a connection to the greedy lattice animal normalized by the boundary size.<br />
<br />
== December 3, 2020, [https://www.math.wisc.edu/people/faculty-directory Tatyana Shcherbina] (UW-Madison) ==<br />
<br />
Title: '''SUSY transfer matrix approach for the real symmetric 1d random band matrices '''<br />
<br />
Abstract: Random band matrices (RBM) are natural intermediate models to study <br />
eigenvalue statistics and quantum propagation in disordered systems, <br />
since they interpolate between mean-field type Wigner matrices and <br />
random Schrodinger operators. In particular, RBM can be used to model the <br />
Anderson metal-insulator phase transition. The conjecture states that the eigenvectors <br />
of $N\times N$ RBM are completely delocalized and the local spectral statistics governed <br />
by the Wigner-Dyson statistics for large bandwidth $W$ (i.e. the local behavior is <br />
the same as for Wigner matrices), and by Poisson statistics for a small $W$ <br />
(with exponentially localized eigenvectors). The transition is conjectured to <br />
be sharp and for RBM in one spatial dimension occurs around the critical <br />
value $W=\sqrt{N}$. Recently, we proved the universality of the correlation <br />
functions for the whole delocalized region $W\gg \sqrt{N}$ for a certain type <br />
of Hermitian Gaussian RBM. This result was obtained by <br />
application of the supersymmetric method (SUSY) combined with the transfer matrix approach. <br />
In this talk I am going to discuss how this technique can be adapted to the <br />
real symmetric case.<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''Empirical measures, geodesic lengths, and a variational formula in first-passage percolation'''<br />
<br />
Abstract: We consider the standard first-passage percolation model on $\mathbb{Z}^d$, in which each edge is assigned an i.i.d. nonnegative weight, and the passage time between any two points is the smallest total weight of a nearest-neighbor path between them. Our primary interest is in the empirical measures of edge-weights observed along geodesics from $0$ to $n\mathbf{e}_1$. For various dense families of edge-weight distributions, we prove that these measures converge weakly to a deterministic limit as $n$ tends to infinity. The key tool is a new variational formula for the time constant. In this talk, I will derive this formula and discuss its implications for the convergence of both empirical measures and lengths of geodesics.<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20409Probability Seminar2020-12-01T05:39:15Z<p>Ewbates: /* December 10, 2020, Erik Bates (UW-Madison) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''Concentration in integrable polymer models'''<br />
<br />
I will discuss a general method, applicable to all known integrable stationary polymer models, to obtain nearly optimal bounds on the<br />
central moments of the partition function and the occupation lengths for each level of the polymer system. The method was developed<br />
for the O'Connell-Yor polymer, but was subsequently extended to discrete integrable polymers. As an application, we obtain<br />
localization of the OY polymer paths along a straight line on the scale O(n^{2/3+o(1)}). <br />
<br />
Joint work with Christian Noack.<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''The heat and the landscape'''<br />
<br />
Abstract: The directed landscape is the conjectured universal scaling limit of the<br />
most common random planar metrics. Examples are planar first passage<br />
percolation, directed last passage percolation, distances in percolation<br />
clusters, random polymer models, and exclusion processes. The limit laws of distances of objects are given by the KPZ fixed point.<br />
<br />
We show that the KPZ fixed point is characterized by the Baik Ben-Arous<br />
Peche statistics well-known from random matrix theory.<br />
<br />
This provides a general and elementary method for showing convergence to<br />
the KPZ fixed point. We apply this method to two models related to<br />
random heat flow: the O'Connell-Yor polymer and the KPZ equation.<br />
<br />
Note: there will be a follow-up talk with details about the proofs at 11am, Friday, October 23.<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''Operator level hard-to-soft transition for β-ensembles'''<br />
<br />
Abstract: It was shown that the soft and hard edge scaling limits of β-ensembles can be characterized as the spectra of certain random Sturm-Liouville operators. By tuning the parameter of the hard edge process one can obtain the soft edge process as a scaling limit. In this talk, I will present the corresponding limit on the level of the operators. This talk is based on joint work with Laure Dumaz and Benedek Valkó.<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''Persistence and root detection algorithms in growing networks'''<br />
<br />
Abstract: Motivated by questions in Network Archaeology, we investigate statistics of dynamic networks<br />
that are ''persistent'', that is, they fixate almost surely after some random time as the network grows. We<br />
consider ''generalized attachment models'' of network growth where at each time $n$, an incoming vertex<br />
attaches itself to the network through $m_n$ edges attached one-by-one to existing vertices with probability<br />
proportional to an ''arbitrary function'' $f$ of their degree. We identify the class of attachment functions $f$ for<br />
which the ''maximal degree vertex'' persists and obtain asymptotics for its index when it does not. We also<br />
show that for tree networks, the ''centroid'' of the tree persists and use it to device polynomial time root<br />
finding algorithms and quantify their efficacy. Our methods rely on an interplay between dynamic<br />
random networks and their continuous time embeddings.<br />
<br />
This is joint work with Shankar Bhamidi.<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''A forward-backward SDE from the 2D nonlinear stochastic heat equation'''<br />
<br />
Abstract: I will discuss a two-dimensional stochastic heat equation in the weak noise regime with a nonlinear noise strength. I will explain how pointwise statistics of solutions to this equation, as the correlation length of the noise is taken to 0 but the noise is attenuated by a logarithmic factor, can be related to a forward-backward stochastic differential equation (FBSDE) depending on the nonlinearity. In the linear case, the FBSDE can be explicitly solved and we recover results of Caravenna, Sun, and Zygouras. Joint work with Yu Gu (CMU).<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''Correlation length of two-dimensional random field Ising model via greedy lattice animal'''<br />
<br />
Abstract: In this talk, I will discuss two-dimensional random field Ising model where the disorder is given by i.i.d. mean zero Gaussian variables with small variance. In particular, I will present a recent joint work with Mateo Wirth on (one notion of) the correlation length, which is the critical size of the box at which the influences to spin magnetization from the boundary conditions and from the random field are comparable. Our work draws a connection to the greedy lattice animal normalized by the boundary size.<br />
<br />
== December 3, 2020, [https://www.math.wisc.edu/people/faculty-directory Tatyana Shcherbina] (UW-Madison) ==<br />
<br />
Title: '''SUSY transfer matrix approach for the real symmetric 1d random band matrices '''<br />
<br />
Abstract: Random band matrices (RBM) are natural intermediate models to study <br />
eigenvalue statistics and quantum propagation in disordered systems, <br />
since they interpolate between mean-field type Wigner matrices and <br />
random Schrodinger operators. In particular, RBM can be used to model the <br />
Anderson metal-insulator phase transition. The conjecture states that the eigenvectors <br />
of $N\times N$ RBM are completely delocalized and the local spectral statistics governed <br />
by the Wigner-Dyson statistics for large bandwidth $W$ (i.e. the local behavior is <br />
the same as for Wigner matrices), and by Poisson statistics for a small $W$ <br />
(with exponentially localized eigenvectors). The transition is conjectured to <br />
be sharp and for RBM in one spatial dimension occurs around the critical <br />
value $W=\sqrt{N}$. Recently, we proved the universality of the correlation <br />
functions for the whole delocalized region $W\gg \sqrt{N}$ for a certain type <br />
of Hermitian Gaussian RBM. This result was obtained by <br />
application of the supersymmetric method (SUSY) combined with the transfer matrix approach. <br />
In this talk I am going to discuss how this techniques can be adapted to the <br />
real symmetric case.<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''Empirical measures, geodesic lengths, and a variational formula in first-passage percolation'''<br />
<br />
Abstract: We consider the standard first-passage percolation model on $\mathbb{Z}^d$, in which each edge is assigned an i.i.d. nonnegative weight, and the passage time between any two points is the smallest total weight of a nearest-neighbor path between them. Our primary interest is in the empirical measures of edge-weights observed along geodesics from $0$ to $n\mathbf{e}_1$. For various dense families of edge-weight distributions, we prove that these measures converge weakly to a deterministic limit as $n$ tends to infinity. The key tool is a new variational formula for the time constant. In this talk, I will derive this formula and discuss its implications for the convergence of both empirical measures and lengths of geodesics.<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20388Probability Seminar2020-11-25T17:52:30Z<p>Ewbates: /* December 3, 2020, Tatyana Shcherbina (UW-Madison) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''Concentration in integrable polymer models'''<br />
<br />
I will discuss a general method, applicable to all known integrable stationary polymer models, to obtain nearly optimal bounds on the<br />
central moments of the partition function and the occupation lengths for each level of the polymer system. The method was developed<br />
for the O'Connell-Yor polymer, but was subsequently extended to discrete integrable polymers. As an application, we obtain<br />
localization of the OY polymer paths along a straight line on the scale O(n^{2/3+o(1)}). <br />
<br />
Joint work with Christian Noack.<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''The heat and the landscape'''<br />
<br />
Abstract: The directed landscape is the conjectured universal scaling limit of the<br />
most common random planar metrics. Examples are planar first passage<br />
percolation, directed last passage percolation, distances in percolation<br />
clusters, random polymer models, and exclusion processes. The limit laws of distances of objects are given by the KPZ fixed point.<br />
<br />
We show that the KPZ fixed point is characterized by the Baik Ben-Arous<br />
Peche statistics well-known from random matrix theory.<br />
<br />
This provides a general and elementary method for showing convergence to<br />
the KPZ fixed point. We apply this method to two models related to<br />
random heat flow: the O'Connell-Yor polymer and the KPZ equation.<br />
<br />
Note: there will be a follow-up talk with details about the proofs at 11am, Friday, October 23.<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''Operator level hard-to-soft transition for β-ensembles'''<br />
<br />
Abstract: It was shown that the soft and hard edge scaling limits of β-ensembles can be characterized as the spectra of certain random Sturm-Liouville operators. By tuning the parameter of the hard edge process one can obtain the soft edge process as a scaling limit. In this talk, I will present the corresponding limit on the level of the operators. This talk is based on joint work with Laure Dumaz and Benedek Valkó.<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''Persistence and root detection algorithms in growing networks'''<br />
<br />
Abstract: Motivated by questions in Network Archaeology, we investigate statistics of dynamic networks<br />
that are ''persistent'', that is, they fixate almost surely after some random time as the network grows. We<br />
consider ''generalized attachment models'' of network growth where at each time $n$, an incoming vertex<br />
attaches itself to the network through $m_n$ edges attached one-by-one to existing vertices with probability<br />
proportional to an ''arbitrary function'' $f$ of their degree. We identify the class of attachment functions $f$ for<br />
which the ''maximal degree vertex'' persists and obtain asymptotics for its index when it does not. We also<br />
show that for tree networks, the ''centroid'' of the tree persists and use it to device polynomial time root<br />
finding algorithms and quantify their efficacy. Our methods rely on an interplay between dynamic<br />
random networks and their continuous time embeddings.<br />
<br />
This is joint work with Shankar Bhamidi.<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''A forward-backward SDE from the 2D nonlinear stochastic heat equation'''<br />
<br />
Abstract: I will discuss a two-dimensional stochastic heat equation in the weak noise regime with a nonlinear noise strength. I will explain how pointwise statistics of solutions to this equation, as the correlation length of the noise is taken to 0 but the noise is attenuated by a logarithmic factor, can be related to a forward-backward stochastic differential equation (FBSDE) depending on the nonlinearity. In the linear case, the FBSDE can be explicitly solved and we recover results of Caravenna, Sun, and Zygouras. Joint work with Yu Gu (CMU).<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''Correlation length of two-dimensional random field Ising model via greedy lattice animal'''<br />
<br />
Abstract: In this talk, I will discuss two-dimensional random field Ising model where the disorder is given by i.i.d. mean zero Gaussian variables with small variance. In particular, I will present a recent joint work with Mateo Wirth on (one notion of) the correlation length, which is the critical size of the box at which the influences to spin magnetization from the boundary conditions and from the random field are comparable. Our work draws a connection to the greedy lattice animal normalized by the boundary size.<br />
<br />
== December 3, 2020, [https://www.math.wisc.edu/people/faculty-directory Tatyana Shcherbina] (UW-Madison) ==<br />
<br />
Title: '''SUSY transfer matrix approach for the real symmetric 1d random band matrices '''<br />
<br />
Abstract: Random band matrices (RBM) are natural intermediate models to study <br />
eigenvalue statistics and quantum propagation in disordered systems, <br />
since they interpolate between mean-field type Wigner matrices and <br />
random Schrodinger operators. In particular, RBM can be used to model the <br />
Anderson metal-insulator phase transition. The conjecture states that the eigenvectors <br />
of $N\times N$ RBM are completely delocalized and the local spectral statistics governed <br />
by the Wigner-Dyson statistics for large bandwidth $W$ (i.e. the local behavior is <br />
the same as for Wigner matrices), and by Poisson statistics for a small $W$ <br />
(with exponentially localized eigenvectors). The transition is conjectured to <br />
be sharp and for RBM in one spatial dimension occurs around the critical <br />
value $W=\sqrt{N}$. Recently, we proved the universality of the correlation <br />
functions for the whole delocalized region $W\gg \sqrt{N}$ for a certain type <br />
of Hermitian Gaussian RBM. This result was obtained by <br />
application of the supersymmetric method (SUSY) combined with the transfer matrix approach. <br />
In this talk I am going to discuss how this techniques can be adapted to the <br />
real symmetric case.<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20210Probability Seminar2020-10-26T20:42:27Z<p>Ewbates: /* November 5, 2020, Sayan Banerjee (UNC at Chapel Hill) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''Concentration in integrable polymer models'''<br />
<br />
I will discuss a general method, applicable to all known integrable stationary polymer models, to obtain nearly optimal bounds on the<br />
central moments of the partition function and the occupation lengths for each level of the polymer system. The method was developed<br />
for the O'Connell-Yor polymer, but was subsequently extended to discrete integrable polymers. As an application, we obtain<br />
localization of the OY polymer paths along a straight line on the scale O(n^{2/3+o(1)}). <br />
<br />
Joint work with Christian Noack.<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''The heat and the landscape'''<br />
<br />
Abstract: The directed landscape is the conjectured universal scaling limit of the<br />
most common random planar metrics. Examples are planar first passage<br />
percolation, directed last passage percolation, distances in percolation<br />
clusters, random polymer models, and exclusion processes. The limit laws of distances of objects are given by the KPZ fixed point.<br />
<br />
We show that the KPZ fixed point is characterized by the Baik Ben-Arous<br />
Peche statistics well-known from random matrix theory.<br />
<br />
This provides a general and elementary method for showing convergence to<br />
the KPZ fixed point. We apply this method to two models related to<br />
random heat flow: the O'Connell-Yor polymer and the KPZ equation.<br />
<br />
Note: there will be a follow-up talk with details about the proofs at 11am, Friday, October 23.<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''Operator level hard-to-soft transition for β-ensembles'''<br />
<br />
Abstract: It was shown that the soft and hard edge scaling limits of β-ensembles can be characterized as the spectra of certain random Sturm-Liouville operators. By tuning the parameter of the hard edge process one can obtain the soft edge process as a scaling limit. In this talk, I will present the corresponding limit on the level of the operators. This talk is based on joint work with Laure Dumaz and Benedek Valkó.<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''Persistence and root detection algorithms in growing networks'''<br />
<br />
Abstract: Motivated by questions in Network Archaeology, we investigate statistics of dynamic networks<br />
that are ''persistent'', that is, they fixate almost surely after some random time as the network grows. We<br />
consider ''generalized attachment models'' of network growth where at each time $n$, an incoming vertex<br />
attaches itself to the network through $m_n$ edges attached one-by-one to existing vertices with probability<br />
proportional to an ''arbitrary function'' $f$ of their degree. We identify the class of attachment functions $f$ for<br />
which the ''maximal degree vertex'' persists and obtain asymptotics for its index when it does not. We also<br />
show that for tree networks, the ''centroid'' of the tree persists and use it to device polynomial time root<br />
finding algorithms and quantify their efficacy. Our methods rely on an interplay between dynamic<br />
random networks and their continuous time embeddings.<br />
<br />
This is joint work with Shankar Bhamidi.<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''A forward-backward SDE from the 2D nonlinear stochastic heat equation'''<br />
<br />
Abstract: I will discuss a two-dimensional stochastic heat equation in the weak noise regime with a nonlinear noise strength. I will explain how pointwise statistics of solutions to this equation, as the correlation length of the noise is taken to 0 but the noise is attenuated by a logarithmic factor, can be related to a forward-backward stochastic differential equation (FBSDE) depending on the nonlinearity. In the linear case, the FBSDE can be explicitly solved and we recover results of Caravenna, Sun, and Zygouras. Joint work with Yu Gu (CMU).<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 3, 2020, [https://www.math.wisc.edu/people/faculty-directory Tatyana Shcherbina] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20195Probability Seminar2020-10-22T19:18:34Z<p>Ewbates: /* October 29, 2020, Yun Li (UW-Madison) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''Concentration in integrable polymer models'''<br />
<br />
I will discuss a general method, applicable to all known integrable stationary polymer models, to obtain nearly optimal bounds on the<br />
central moments of the partition function and the occupation lengths for each level of the polymer system. The method was developed<br />
for the O'Connell-Yor polymer, but was subsequently extended to discrete integrable polymers. As an application, we obtain<br />
localization of the OY polymer paths along a straight line on the scale O(n^{2/3+o(1)}). <br />
<br />
Joint work with Christian Noack.<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''The heat and the landscape'''<br />
<br />
Abstract: The directed landscape is the conjectured universal scaling limit of the<br />
most common random planar metrics. Examples are planar first passage<br />
percolation, directed last passage percolation, distances in percolation<br />
clusters, random polymer models, and exclusion processes. The limit laws of distances of objects are given by the KPZ fixed point.<br />
<br />
We show that the KPZ fixed point is characterized by the Baik Ben-Arous<br />
Peche statistics well-known from random matrix theory.<br />
<br />
This provides a general and elementary method for showing convergence to<br />
the KPZ fixed point. We apply this method to two models related to<br />
random heat flow: the O'Connell-Yor polymer and the KPZ equation.<br />
<br />
Note: there will be a follow-up talk with details about the proofs at 11am, Friday, October 23.<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''Operator level hard-to-soft transition for β-ensembles'''<br />
<br />
Abstract: It was shown that the soft and hard edge scaling limits of β-ensembles can be characterized as the spectra of certain random Sturm-Liouville operators. By tuning the parameter of the hard edge process one can obtain the soft edge process as a scaling limit. In this talk, I will present the corresponding limit on the level of the operators. This talk is based on joint work with Laure Dumaz and Benedek Valkó.<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''A forward-backward SDE from the 2D nonlinear stochastic heat equation'''<br />
<br />
Abstract: I will discuss a two-dimensional stochastic heat equation in the weak noise regime with a nonlinear noise strength. I will explain how pointwise statistics of solutions to this equation, as the correlation length of the noise is taken to 0 but the noise is attenuated by a logarithmic factor, can be related to a forward-backward stochastic differential equation (FBSDE) depending on the nonlinearity. In the linear case, the FBSDE can be explicitly solved and we recover results of Caravenna, Sun, and Zygouras. Joint work with Yu Gu (CMU).<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 3, 2020, [https://www.math.wisc.edu/people/faculty-directory Tatyana Shcherbina] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20132Probability Seminar2020-10-14T05:19:33Z<p>Ewbates: /* November 12, 2020, Alexander Dunlap (NYU Courant Institute) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''Concentration in integrable polymer models'''<br />
<br />
I will discuss a general method, applicable to all known integrable stationary polymer models, to obtain nearly optimal bounds on the<br />
central moments of the partition function and the occupation lengths for each level of the polymer system. The method was developed<br />
for the O'Connell-Yor polymer, but was subsequently extended to discrete integrable polymers. As an application, we obtain<br />
localization of the OY polymer paths along a straight line on the scale O(n^{2/3+o(1)}). <br />
<br />
Joint work with Christian Noack.<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''The heat and the landscape'''<br />
<br />
Abstract: The directed landscape is the conjectured universal scaling limit of the<br />
most common random planar metrics. Examples are planar first passage<br />
percolation, directed last passage percolation, distances in percolation<br />
clusters, random polymer models, and exclusion processes. The limit laws of distances of objects are given by the KPZ fixed point.<br />
<br />
We show that the KPZ fixed point is characterized by the Baik Ben-Arous<br />
Peche statistics well-known from random matrix theory.<br />
<br />
This provides a general and elementary method for showing convergence to<br />
the KPZ fixed point. We apply this method to two models related to<br />
random heat flow: the O'Connell-Yor polymer and the KPZ equation.<br />
<br />
Note: there will be a follow-up talk with details about the proofs at 11am, Friday, October 23.<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''A forward-backward SDE from the 2D nonlinear stochastic heat equation'''<br />
<br />
Abstract: I will discuss a two-dimensional stochastic heat equation in the weak noise regime with a nonlinear noise strength. I will explain how pointwise statistics of solutions to this equation, as the correlation length of the noise is taken to 0 but the noise is attenuated by a logarithmic factor, can be related to a forward-backward stochastic differential equation (FBSDE) depending on the nonlinearity. In the linear case, the FBSDE can be explicitly solved and we recover results of Caravenna, Sun, and Zygouras. Joint work with Yu Gu (CMU).<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 3, 2020, [https://www.math.wisc.edu/people/faculty-directory Tatyana Shcherbina] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20100Probability Seminar2020-10-08T19:06:46Z<p>Ewbates: /* December 3, 2020, Tatyana Shcherbina (UW-Madison) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 3, 2020, [https://www.math.wisc.edu/people/faculty-directory Tatyana Shcherbina] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20099Probability Seminar2020-10-08T19:05:35Z<p>Ewbates: /* October 29, 2020, Yun Li (UW Madison) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 29, 2020, [https://www.math.wisc.edu/node/80 Yun Li] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 3, 2020, Tatyana Shcherbina (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20077Probability Seminar2020-10-06T04:16:59Z<p>Ewbates: </p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] (UIC) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] (Harvard) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] (UNC at Chapel Hill) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] (NYU Courant Institute) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 19, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 3, 2020, Tatyana Shcherbina (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== December 10, 2020, [https://www.ewbates.com/ Erik Bates] (UW-Madison) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20016Probability Seminar2020-09-28T20:32:25Z<p>Ewbates: /* October 8, 2020, Subhabrata Sen (Harvard) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] ([https://mscs.uic.edu/ UIC]) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] ([https://statistics.fas.harvard.edu/ Harvard]) ==<br />
<br />
'''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] ([https://stat-or.unc.edu/ UNC at Chapel Hill]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] ([https://cims.nyu.edu/ NYU Courant Institute]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=20015Probability Seminar2020-09-28T20:32:01Z<p>Ewbates: /* October 8, 2020, Subhabrata Sen (Harvard) */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] ([https://mscs.uic.edu/ UIC]) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] ([https://statistics.fas.harvard.edu/ Harvard]) ==<br />
<br />
Title: '''Large deviations for dense random graphs: beyond mean-field'''<br />
<br />
Abstract: In a seminal paper, Chatterjee and Varadhan derived an Erdős-Rényi random graph, viewed as a random graphon. This directly provides LDPs for continuous functionals such as subgraph counts, spectral norms, etc. In contrast, very little is understood about this problem if the underlying random graph is inhomogeneous or constrained.<br />
<br />
In this talk, we will explore large deviations for dense random graphs, beyond the “mean-field” setting. In particular, we will study large deviations for uniform random graphs with given degrees, and a family of dense block model<br />
random graphs. We will establish the LDP in each case, and identify the rate function. In the block model setting, we will use this LDP to study the upper tail problem for homomorphism densities of regular sub-graphs. Our results establish that this problem exhibits a symmetry/symmetry-breaking transition, similar to one observed for Erdős-Rényi random graphs.<br />
<br />
Based on joint works with Christian Borgs, Jennifer Chayes, Souvik Dhara, Julia Gaudio and Samantha Petti.<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] ([https://stat-or.unc.edu/ UNC at Chapel Hill]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] ([https://cims.nyu.edu/ NYU Courant Institute]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=19943Probability Seminar2020-09-23T23:34:17Z<p>Ewbates: /* Fall 2020 */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online. [https://uwmadison.zoom.us/j/91828707031?pwd=YUJXMUJkMDlPR0VRdkRCQVJtVndIdz09 ZOOM LINK]<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] ([https://mscs.uic.edu/ UIC]) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] ([https://statistics.fas.harvard.edu/ Harvard]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] ([https://stat-or.unc.edu/ UNC at Chapel Hill]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] ([https://cims.nyu.edu/ NYU Courant Institute]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=19888Probability Seminar2020-09-18T20:33:44Z<p>Ewbates: </p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online.<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please join [https://groups.google.com/a/g-groups.wisc.edu/forum/#!forum/probsem our group].<br />
<br />
== September 17, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
'''Pre-Talk: (1:00pm)'''<br />
<br />
'''Neural Networks for Probabilists''' <br />
<br />
Deep neural networks are a centerpiece in modern machine learning. They are also fascinating probabilistic models, about which much remains unclear. In this pre-talk I will define neural networks, explain how they are used in practice, and give a survey of the big theoretical questions they have raised. If time permits, I will also explain how neural networks are related to a variety of classical areas in probability and mathematical physics, including random matrix theory, optimal transport, and combinatorics of hyperplane arrangements.<br />
<br />
'''Talk: (2:30pm)'''<br />
<br />
'''Effective Theory of Deep Neural Networks''' <br />
<br />
Deep neural networks are often considered to be complicated "black boxes," for which a full systematic analysis is not only out of reach but also impossible. In this talk, which is based on ongoing joint work with Sho Yaida and Daniel Adam Roberts, I will make the opposite claim. Namely, that deep neural networks with random weights and biases are exactly solvable models. Our approach applies to networks at finite width n and large depth L, the regime in which they are used in practice. A key point will be the emergence of a notion of "criticality," which involves a finetuning of model parameters (weight and bias variances). At criticality, neural networks are particularly well-behaved but still exhibit a tension between large values for n and L, with large values of n tending to make neural networks more like Gaussian processes and large values of L amplifying higher cumulants. Our analysis at initialization has many consequences also for networks during after training, which I will discuss if time permits.<br />
<br />
== September 24, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
'''Some new perspectives on moments of random matrices'''<br />
<br />
The study of `moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980’s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen] ([https://mscs.uic.edu/ UIC]) ==<br />
<br />
'''Roots of random polynomials near the unit circle'''<br />
<br />
It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen] ([https://statistics.fas.harvard.edu/ Harvard]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== October 15, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
==October 22, 2020, [http://www.math.toronto.edu/balint/ Balint Virag] (Toronto) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 5, 2020, [http://sayan.web.unc.edu/ Sayan Banerjee] ([https://stat-or.unc.edu/ UNC at Chapel Hill]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [https://cims.nyu.edu/~ajd594/ Alexander Dunlap] ([https://cims.nyu.edu/ NYU Courant Institute]) ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=19599Probability Seminar2020-08-31T17:28:08Z<p>Ewbates: </p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online.<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please send an email to <br />
[mailto:join-probsem@lists.wisc.edu join-probsem@lists.wisc.edu]<br />
<br />
<br />
== September 15, 2020, [https://www.math.tamu.edu/~bhanin/ Boris Hanin] (Princeton and Texas A&M) ==<br />
<br />
<br />
== September 23, 2020, [https://people.ucd.ie/neil.oconnell Neil O'Connell] (Dublin) ==<br />
<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen], [https://mscs.uic.edu/ UIC] ==<br />
<br />
Title: '''Roots of random polynomials near the unit circle'''<br />
<br />
Abstract: It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
== October 8, 2020, [http://sites.harvard.edu/~sus977/index.html Subhabrata Sen], [https://statistics.fas.harvard.edu/ Harvard] ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
== November 12, 2020, [http://stanford.edu/~ajdunl2/ Alexander Dunlap], [https://cims.nyu.edu/ NYU Courant Institute] ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=19588Probability Seminar2020-08-29T02:58:42Z<p>Ewbates: /* October 1, 2020, Marcus Michelen, UIC */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online.<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please send an email to <br />
[mailto:join-probsem@lists.wisc.edu join-probsem@lists.wisc.edu]<br />
<br />
<br />
== September 3, 2020, [https://math.wisc.edu TBA] (TBA) ==<br />
'''TBA<br />
'''<br />
<br />
<br />
== September 10, 2020, ==<br />
<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen], [https://mscs.uic.edu/ UIC] ==<br />
<br />
Title: '''Roots of random polynomials near the unit circle'''<br />
<br />
Abstract: It is a well-known (but perhaps surprising) fact that a polynomial with independent random coefficients has most of its roots very close to the unit circle. Using a probabilistic perspective, we understand the behavior of roots of random polynomials exceptionally close to the unit circle and prove several limit theorems; these results resolve several conjectures of Shepp and Vanderbei. We will also discuss how our techniques provide a heuristic, probabilistic explanation for why random polynomials tend to have most roots near the unit circle. Based on joint work with Julian Sahasrabudhe.<br />
<br />
<br />
[[Past Seminars]]</div>Ewbateshttps://www.math.wisc.edu/wiki/index.php?title=Probability_Seminar&diff=19570Probability Seminar2020-08-26T01:52:55Z<p>Ewbates: /* Fall 2020 */</p>
<hr />
<div>__NOTOC__<br />
<br />
= Fall 2020 =<br />
<br />
<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted. <br />
<b>We usually end for questions at 3:20 PM.</b><br />
<br />
<b> IMPORTANT: </b> In Fall 2020 the seminar is being run online.<br />
<br />
If you would like to sign up for the email list to receive seminar announcements then please send an email to <br />
[mailto:join-probsem@lists.wisc.edu join-probsem@lists.wisc.edu]<br />
<br />
<br />
== September 3, 2020, [https://math.wisc.edu TBA] (TBA) ==<br />
'''TBA<br />
'''<br />
<br />
<br />
== September 10, 2020, ==<br />
<br />
<br />
== October 1, 2020, [https://marcusmichelen.org/ Marcus Michelen], [https://mscs.uic.edu/ UIC] ==<br />
<br />
Title: '''TBA'''<br />
<br />
Abstract: TBA<br />
<br />
<br />
[[Past Seminars]]</div>Ewbates