ACMS Abstracts: Fall 2019
Leonardo Andrés Zepeda Núñez
Title: Deep Learning for Electronic Structure Computations: A Tale of Symmetries, Locality, and Physics
Abstract: Recently, the surge of interest in deep learning has dramatically improved image and signal processing, which has fueled breakthroughs in many domains such as drug discovery, genomics, and automatic translation. These advances have been further applied to scientific computing and, in particular, to electronic structure computations. In this case, the main objective is to directly compute the electron density, which encodes most of the information about the system, thus bypassing the computationally intensive solution of the Kohn-Sham equations. However, as with neural networks for image processing, the performance of these methods depends critically on the physical and analytical intuition incorporated in the network, and on the training stage.
In this talk, I will show how to build a network that respects physical symmetries and locality. I will show how to train the networks and how such properties impact the performance of the resulting network. Finally, I will present several examples for small yet realistic chemical systems.
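The role of symmetry can be illustrated with a toy permutation-invariant architecture in the spirit of per-atom networks: applying one shared network to each atom's local descriptor and summing the results makes the output independent of atom ordering. This is only my illustration of the general idea, with arbitrary random weights, not the network from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "per-atom" network: a fixed random two-layer map applied to each
# atom's 3-dimensional descriptor.  Summing over atoms makes the total
# output invariant under any permutation of the atom ordering.
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=8)

def per_atom_energy(descriptor):
    return W2 @ np.tanh(W1 @ descriptor)

def total_energy(descriptors):
    # descriptors: (n_atoms, 3) array of local-environment features
    return sum(per_atom_energy(d) for d in descriptors)

atoms = rng.normal(size=(5, 3))
shuffled = atoms[rng.permutation(5)]
print(np.isclose(total_energy(atoms), total_energy(shuffled)))  # True
```

Locality enters similarly: if each descriptor depends only on nearby atoms, the summed architecture scales linearly with system size.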
Daniel Floryan (UW-Madison)
Title: Flexible Inertial Swimmers
Abstract: Inertial swimmers deform their bodies and fins to push against the water and propel themselves forward. The deformation is driven partly by active musculature, and partly by passive elasticity. The interaction between elasticity and hydrodynamics confers features on the swimmers not enjoyed by their rigid friends, for example, boosts in speed when flapping at certain frequencies. We explain the salient features of flexible swimmers by drawing ideas from airfoils, vibrating beams, and flags flapping in the wind. The presence of fluid drag has important consequences. We also explore optimal arrangements of flexibility. (It turns out that nature is quite good.)
Jianfeng Lu (Duke)
Title: How to "localize" the computation?
Abstract: It is often desirable to restrict a numerical computation to a local region to achieve the best balance between accuracy and affordability in scientific computing. While artificial boundary conditions have to be introduced to restrict the computation, it is important to avoid artifacts and guarantee predictable modelling. In this talk, we will discuss some recent understanding of how to achieve such local computation in the context of topological edge states and elliptic random media.
Mitch Bushuk (GFDL/Princeton)
Title: Arctic Sea Ice Predictability in a Changing Cryosphere
Abstract: Forty years of satellite observations have documented a striking decline in the areal extent of Arctic sea ice. The loss of sea ice has impacts on the climate system, human populations, ecosystems, and natural environments across a broad range of spatial and temporal scales. These changes have motivated significant research interest in the predictability and prediction of Arctic sea ice on seasonal-to-interannual timescales. In this talk, I will address two related questions: (1) What is the inherent predictability of Arctic sea ice and what physical mechanisms underlie this predictability? and (2) How can this knowledge be leveraged to improve operational sea ice predictions? I will present findings on the relative roles of the ocean, sea ice, and atmosphere in controlling Arctic sea ice predictability. I will also present evidence for an Arctic spring predictability barrier, which may impose a sharp limit on our ability to make skillful predictions of the summer sea ice minimum.
Qin Li (UW-Madison)
Title: The power of randomness in scientific computing
Abstract: Most numerical methods in scientific computing are deterministic. Traditionally, accuracy has been the target while cost was not the concern. However, in this era of big data, we are inclined to relax the strict requirements on accuracy to reduce numerical cost. Introducing randomness into the numerical solvers could potentially speed up the computation significantly at a small sacrifice of accuracy. In this talk, I'd like to show two concrete examples of how this is done: the first on random sketching in experimental design, and the second on numerical homogenization, hoping the discussion can shed light on other potential applications. Joint work with Ke Chen, Jianfeng Lu, Kit Newton and Stephen Wright.
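The random-sketching idea can be illustrated on an overdetermined least-squares problem: solving a randomly compressed version of the problem yields a near-optimal answer at much lower cost. This is a generic sketch of the technique, with sizes chosen only for illustration, not the experimental-design application from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined least-squares problem: min_x ||Ax - b||.
m, n = 2000, 20
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

# Exact solution of the full problem.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketched solution: compress the rows with a small Gaussian sketch S,
# then solve the much smaller problem min_x ||S A x - S b||.
s = 200
S = rng.normal(size=(s, m)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)  # small: a modest sacrifice of accuracy for a 10x smaller solve
```

The sketched system has 200 rows instead of 2000, so the dominant cost of the solve drops accordingly.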
Joel Nishimura (Arizona State)
Title: Random graph models with fixed degree sequences: choices, consequences and irreducibility proofs for sampling
Abstract: Determining which features of an empirical graph are noteworthy frequently relies upon the ability to sample random graphs with constrained properties. Since empirical graphs have distinctive degree sequences, one of the most popular random graph models is the configuration model, which produces a graph uniformly at random from the set of graphs with a fixed degree sequence. While it is commonly treated as though there is only a single configuration model, one sampled via stub-matching, there are many, depending on whether self-loops and multiedges are allowed and whether edge stubs are labeled or not. We show that these different configuration models can lead to drastically different, sometimes opposite, interpretations of empirical graphs. In order to sample from these different configuration models, we review and develop the underpinnings of Markov chain Monte Carlo methods based upon double-edge swaps. We also present new results on the irreducibility of the Markov chain for graphs with self-loops, either proving irreducibility or exactly characterizing the degree sequences for which the Markov chain is reducible. This work completes the study of the irreducibility of double-edge swap Markov chains (and the related Curveball Markov chain) for all combinations of allowing self-loops, multiple self-loops and/or multiedges.
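The double-edge swap move at the heart of these Markov chains fits in a few lines. This toy version (my illustration, not the authors' code) targets the configuration model that forbids self-loops and multiedges by rejecting any proposal that would create one; every vertex degree is preserved by construction:

```python
import random

def double_edge_swap(edges, rng):
    """One Markov-chain step: pick two edges uniformly at random and swap
    their endpoints, (u,v),(x,y) -> (u,x),(v,y), preserving all degrees.
    Proposals creating self-loops or multiedges are rejected, so the chain
    stays at the current simple graph in those cases."""
    edges = list(edges)
    i, j = rng.sample(range(len(edges)), 2)
    (u, v), (x, y) = edges[i], edges[j]
    new1, new2 = (u, x), (v, y)  # proposed rewiring
    if u == x or v == y:
        return edges  # would create a self-loop
    edge_set = {frozenset(e) for e in edges}
    if frozenset(new1) in edge_set or frozenset(new2) in edge_set:
        return edges  # would create a multiedge
    edges[i], edges[j] = new1, new2
    return edges

edges = [(0, 1), (2, 3), (4, 5), (0, 2), (1, 3)]
rng = random.Random(0)
for _ in range(1000):
    edges = double_edge_swap(edges, rng)
```

The subtleties the talk addresses, such as which degree sequences make this chain irreducible, and how the move changes when self-loops or multiedges are allowed, are exactly what this simple rejection rule glosses over.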
Alex Townsend (Cornell)
Title: Why are so many matrices and tensors of low rank in computational mathematics?
Abstract: Matrices and tensors that appear in computational mathematics are often well-approximated by low-rank objects. Since random ("average") matrices are almost surely of full rank, mathematics needs to explain the abundance of low-rank structures. We will describe various methodologies that allow one to begin to understand the prevalence of compressible matrices and tensors, and we hope to reveal an underlying link between disparate applications. In particular, we will show how one can connect the singular values of a matrix with displacement structure to a rational approximation problem that highlights fundamental connections between polynomial interpolation, Krylov methods, and fast Toeplitz solvers.
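The phenomenon is easy to observe in a classic example. The Hilbert matrix satisfies a low-rank displacement equation, and its singular values decay geometrically, so its numerical rank is tiny compared to its dimension. A minimal numpy check (my illustration, not material from the talk):

```python
import numpy as np

n = 100
# Hilbert matrix: H[i, j] = 1 / (i + j + 1).  It has displacement
# structure, which forces rapid singular-value decay.
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)

s = np.linalg.svd(H, compute_uv=False)
# Numerical rank at a relative tolerance of 1e-10:
num_rank = int(np.sum(s > 1e-10 * s[0]))
print(num_rank, "out of", n)  # far fewer than n singular values survive
```

A random 100 x 100 Gaussian matrix, by contrast, would have numerical rank 100 at the same tolerance; the gap between the two is what the talk sets out to explain.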
Prashant G. Mehta
Title: What is the Lagrangian for Nonlinear Filtering?
Abstract: There is a certain magic involved in recasting the equations in Physics, and the algorithms in Engineering, in variational terms. The most classical of these 'magics' is Lagrange's formulation of Newtonian mechanics. An accessible modern take on all this and more appears in the February 19, 2019 issue of The New Yorker magazine: https://www.newyorker.com/science/elements/a-different-kind-of-theory-of-everything?reload=true
My talk is concerned with a variational (optimal control type) formulation of the problem of nonlinear filtering/estimation. Such formulations are referred to as duality between optimal estimation and optimal control. The first duality principle appears in the seminal (1961) paper of Kalman-Bucy, where the problem of minimum variance estimation is shown to be dual to a linear quadratic optimal control problem.
In my talk, I will describe a generalization of the Kalman-Bucy duality theory to nonlinear filtering. The generalization is an exact extension, in the sense that the dual optimal control problem has the same minimum variance structure for linear and nonlinear filtering problems. Kalman-Bucy’s classical result is shown to be a special case. During the talk, I will also attempt to review other types of duality relationships that have appeared over the years for the problem of linear and nonlinear filtering.
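As background for the duality discussion, here is a minimal scalar Kalman filter, the discrete-time analogue of the Kalman-Bucy setting: it is the minimum-variance estimator whose dual, in the 1961 result, is a linear quadratic optimal control problem. This is a generic textbook sketch with made-up parameters, not the talk's construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar linear-Gaussian model: x_{k+1} = a x_k + w_k,  y_k = x_k + v_k.
a, Q, R = 0.9, 0.1, 0.5
n_steps = 500
x = np.zeros(n_steps)
y = np.zeros(n_steps)
for k in range(1, n_steps):
    x[k] = a * x[k - 1] + rng.normal(scale=np.sqrt(Q))
    y[k] = x[k] + rng.normal(scale=np.sqrt(R))

# Kalman filter: the minimum-variance estimator for this model.
xhat, P = 0.0, 1.0
est = np.zeros(n_steps)
for k in range(1, n_steps):
    xhat, P = a * xhat, a * a * P + Q        # predict
    K = P / (P + R)                          # gain minimizing variance
    xhat = xhat + K * (y[k] - xhat)          # update
    P = (1 - K) * P
    est[k] = xhat

mse_filter = np.mean((est - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
print(mse_filter < mse_raw)  # True: filtering beats the raw measurements
```

The generalization in the talk keeps this minimum-variance structure while allowing nonlinear dynamics and observation models.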
Title: Shape matters: A Brownian swimmer in a channel
Abstract: We consider a simple model of a two-dimensional microswimmer with fixed swimming speed. The direction of swimming changes according to a Brownian process, and the swimmer is interacting with boundaries. This is a standard model for a simple microswimmer, or a confined wormlike chain polymer. The shape of the swimmer determines the range of allowable values that its degrees of freedom can assume --- its configuration space. Using natural assumptions about reflection of the swimmer at boundaries, we compute the swimmer's invariant distribution across a channel consisting of two parallel walls, and the statistics of spreading in the longitudinal direction. This gives us the effective diffusion constant of the swimmer's large scale motion. When the swimmer is longer than the channel width, it cannot reverse, and we then compute the mean drift velocity of the swimmer. This model offers insight into experiments of scattering of swimmers from boundaries, and serves as an exactly-solvable baseline when comparing to more complex models. This is joint work with Hongfei Chen.
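A stripped-down version of such a model, a point swimmer rather than an extended body, can be simulated directly. This sketch is my illustration with arbitrary parameters: fixed speed, Brownian orientation, and specular reflection of position at the walls, showing confinement across the channel and spreading along it:

```python
import numpy as np

rng = np.random.default_rng(3)

U, D_theta = 1.0, 1.0        # swimming speed, rotational diffusivity
dt, n_steps = 1e-3, 50_000
half_width = 1.0             # channel walls at y = +/- half_width

x, y, theta = 0.0, 0.0, 0.0
xs = np.empty(n_steps)
ys = np.empty(n_steps)
for k in range(n_steps):
    theta += np.sqrt(2 * D_theta * dt) * rng.normal()  # Brownian heading
    x += U * np.cos(theta) * dt
    y += U * np.sin(theta) * dt
    if y > half_width:       # reflect off the top wall
        y = 2 * half_width - y
    elif y < -half_width:    # reflect off the bottom wall
        y = -2 * half_width - y
    xs[k], ys[k] = x, y

# Crude single-trajectory estimate of the effective longitudinal
# diffusivity from the final displacement.
D_eff_estimate = xs[-1] ** 2 / (2 * n_steps * dt)
print(abs(ys).max() <= half_width)  # True: the swimmer stays in the channel
```

The talk's analysis goes much further, accounting for the swimmer's shape, which restricts the configuration space, and deriving the invariant distribution and effective diffusion constant exactly rather than by simulation.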
Tan Bui (UT-Austin)
Title: Scalable Algorithms for Data-driven Inverse and Learning Problems
Abstract: Inverse problems and uncertainty quantification (UQ) are pervasive in scientific discovery and decision-making for complex, natural, engineered, and societal systems. They are perhaps the most popular mathematical approaches for enabling predictive scientific simulations that integrate observational/experimental data, simulations and/or models. Unfortunately, inverse/UQ problems for practical complex systems possess three simultaneous challenges: the large-scale forward problem challenge, the high-dimensional parameter space challenge, and the big data challenge.
To address the first challenge, we have developed parallel high-order (hybridized) discontinuous Galerkin methods to discretize complex forward PDEs. To address the second challenge, we have developed various approaches, from model reduction to advanced Markov chain Monte Carlo methods, to effectively explore high-dimensional parameter spaces to compute posterior statistics. To address the last challenge, we have developed a randomized misfit approach that uncovers the interplay between the Johnson-Lindenstrauss lemma and Morozov's discrepancy principle to significantly reduce the dimension of the data without compromising the quality of the inverse solutions.
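The Johnson-Lindenstrauss ingredient can be illustrated in isolation: a Gaussian random projection compresses high-dimensional data while approximately preserving pairwise distances, which is what allows a misfit computed on the compressed data to stand in for the full one. This is only a generic sketch of the lemma, not the authors' randomized misfit method:

```python
import numpy as np

rng = np.random.default_rng(4)

# High-dimensional data vectors.
d, n, k = 10_000, 20, 400
X = rng.normal(size=(n, d))

# Gaussian Johnson-Lindenstrauss map into k << d dimensions.
S = rng.normal(size=(k, d)) / np.sqrt(k)
Y = X @ S.T

# Pairwise distances survive the projection up to a small distortion.
max_distortion = 0.0
for i in range(n):
    for j in range(i + 1, n):
        orig = np.linalg.norm(X[i] - X[j])
        proj = np.linalg.norm(Y[i] - Y[j])
        max_distortion = max(max_distortion, abs(proj / orig - 1))
print(max_distortion)  # small: distances preserved in 400 of 10,000 dims
```

Crucially, the required compressed dimension k grows only logarithmically with the number of data points, which is what makes the approach scale to big data.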
In this talk we selectively present scalable and rigorous approaches to tackle these challenges for PDE-governed Bayesian inverse problems. Various numerical results for simple to complex PDEs will be presented to verify our algorithms and theoretical findings. If time permits, we will present our recent work on scientific machine learning for inverse and learning problems.