# ACMS Abstracts: Fall 2019

### Leonardo Andrés Zepeda Núñez

Title: Deep Learning for Electronic Structure Computations: A Tale of Symmetries, Locality, and Physics

Abstract: Recently, the surge of interest in deep neural networks has dramatically improved image and signal processing, which has fueled breakthroughs in many domains such as drug discovery, genomics, and automatic translation. These advances have been further applied to scientific computing and, in particular, to electronic structure computations. In this case, the main objective is to directly compute the electron density, which encodes most of the information of the system, thus bypassing the computationally intensive solution of the Kohn-Sham equations. However, similar to neural networks for image processing, the performance of these methods depends critically on the physical and analytical intuition incorporated in the network, and on the training stage.

In this talk, I will show how to build a network that respects physical symmetries and locality. I will show how to train the networks and how such properties impact the performance of the resulting network. Finally, I will present several examples for small yet realistic chemical systems.
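One of the symmetries mentioned above, invariance under relabeling of the atoms, can be illustrated with a toy model: if the output is a sum of identical per-atom terms computed from relabeling-invariant local descriptors, the prediction cannot depend on the atom ordering. The sketch below is purely illustrative (the architecture, descriptor, and all names and shapes are made up for this example, not taken from the talk):

```python
import numpy as np

def permutation_invariant_energy(positions, w1, b1, w2):
    """Toy permutation-invariant model: embed each atom with a shared
    map, then sum the per-atom contributions. The sum makes the output
    invariant to relabeling the atoms."""
    # Crude local descriptor: sorted distances to every other atom.
    # Using only distances also gives translation/rotation invariance.
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    feats = np.sort(d, axis=1)[:, 1:]      # drop the self-distance (0)
    h = np.tanh(feats @ w1 + b1)           # shared per-atom nonlinearity
    return float(np.sum(h @ w2))           # sum => permutation invariance
```

Because every atom passes through the same weights and the contributions are summed, shuffling the rows of `positions` leaves the output unchanged.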

### Daniel Floryan (UW-Madison)

Title: Flexible Inertial Swimmers

Abstract: Inertial swimmers deform their bodies and fins to push against the water and propel themselves forward. The deformation is driven partly by active musculature, and partly by passive elasticity. The interaction between elasticity and hydrodynamics confers features on the swimmers not enjoyed by their rigid friends, for example, boosts in speed when flapping at certain frequencies. We explain the salient features of flexible swimmers by drawing ideas from airfoils, vibrating beams, and flags flapping in the wind. The presence of fluid drag has important consequences. We also explore optimal arrangements of flexibility. (It turns out that nature is quite good.)

### Jianfeng Lu (Duke)

Title: How to "localize" the computation?

Abstract: It is often desirable to restrict a numerical computation to a local region to achieve the best balance between accuracy and affordability in scientific computing. Artificial boundary conditions have to be introduced to restrict the computation, and it is important that they avoid artifacts and guarantee predictable modelling error. In this talk, we will discuss some recent understanding of how to achieve such local computation in the context of topological edge states and elliptic random media.

### Mitch Bushuk (GFDL/Princeton)

Title: Arctic Sea Ice Predictability in a Changing Cryosphere

Abstract: Forty years of satellite observations have documented a striking decline in the areal extent of Arctic sea ice. The loss of sea ice has impacts on the climate system, human populations, ecosystems, and natural environments across a broad range of spatial and temporal scales. These changes have motivated significant research interest in the predictability and prediction of Arctic sea ice on seasonal-to-interannual timescales. In this talk, I will address two related questions: (1) What is the inherent predictability of Arctic sea ice and what physical mechanisms underlie this predictability? and (2) How can this knowledge be leveraged to improve operational sea ice predictions? I will present findings on the relative roles of the ocean, sea ice, and atmosphere in controlling Arctic sea ice predictability. I will also present evidence for an Arctic spring predictability barrier, which may impose a sharp limit on our ability to make skillful predictions of the summer sea ice minimum.

### Qin Li (UW-Madison)

Title: The power of randomness in scientific computing

Abstract: Most numerical methods in scientific computing are deterministic. Traditionally, accuracy has been the target while cost was not the concern. However, in this era of big data, we are inclined to relax the strict requirements on accuracy to reduce numerical cost. Introducing randomness into the numerical solvers could potentially speed up the computation significantly at a small sacrifice in accuracy. In this talk, I'd like to show two concrete examples of how this is done: the first on random sketching in experimental design, and the second on numerical homogenization, hoping the discussion can shed light on other potential applications. Joint work with Ke Chen, Jianfeng Lu, Kit Newton and Stephen Wright.
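To give a flavor of random sketching in general (this is a generic textbook illustration, not the specific method from the talk), one can compress a tall least-squares problem with a random sketching matrix and solve the much smaller sketched problem; the answer is close to the full solution at a fraction of the cost. All sizes below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 2000, 20, 200          # tall m-by-n system, sketch size s << m
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Full least-squares solution (expensive: works with all m rows).
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketched solution: S is a random s-by-m Gaussian sketch; solving
# min ||S A x - S b|| only touches an s-by-n compressed system.
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
```

The Gaussian sketch approximately preserves the geometry of the column span of `A`, which is why the small problem yields a near-optimal solution.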

### Joel Nishimura (Arizona State)

Title: Random graph models with fixed degree sequences: choices, consequences and irreducibility proofs for sampling

Abstract: Determining which features of an empirical graph are noteworthy frequently relies upon the ability to sample random graphs with constrained properties. Since empirical graphs have distinctive degree sequences, one of the most popular random graph models is the configuration model, which produces a graph uniformly at random from the set of graphs with a fixed degree sequence. While it is commonly treated as though there is only a single configuration model, one sampled via stub-matching, there are many, depending on whether self-loops and multiedges are allowed and whether edge stubs are labeled or not. We show that these different configuration models can lead to drastically different, sometimes opposite, interpretations of empirical graphs. In order to sample from these different configuration models, we review and develop the underpinnings of Markov chain Monte Carlo methods based upon double-edge swaps. We also present new results on the irreducibility of the Markov chain for graphs with self-loops, either proving irreducibility or exactly characterizing the degree sequences for which the Markov chain is reducible. This work completes the study of the irreducibility of double edge-swap Markov chains (and the related Curveball Markov chain) for all combinations of allowing self-loops, multiple self-loops and/or multiedges.
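The double-edge swap move at the heart of these MCMC methods can be sketched in a few lines. The version below restricts the chain to simple, loopless graphs by rejecting any proposal that would create a self-loop or multiedge (one of the configuration-model variants discussed in the talk); it is a minimal illustration, not the authors' implementation, and the function name and interface are invented for this example:

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """MCMC over simple, loopless graphs with a fixed degree sequence.

    Each step picks two edges (u, v) and (x, y) and proposes rewiring
    them to (u, x) and (v, y); every swap preserves all vertex degrees.
    """
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]
    edge_set = set(edges)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        u, v = edges[i]
        x, y = edges[j]
        if rng.random() < 0.5:      # choose one of the two rewirings
            x, y = y, x
        new1 = tuple(sorted((u, x)))
        new2 = tuple(sorted((v, y)))
        # Reject proposals creating a self-loop or a multiedge.
        if u == x or v == y:
            continue
        if new1 == new2 or new1 in edge_set or new2 in edge_set:
            continue
        edge_set.discard(edges[i]); edge_set.discard(edges[j])
        edge_set.add(new1); edge_set.add(new2)
        edges[i], edges[j] = new1, new2
    return edges
```

Changing which proposals are rejected (allowing loops, multiedges, or both) changes which configuration model the chain samples from, and, as the abstract notes, proving that each such restricted chain remains irreducible is the delicate part.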

### Prashant G. Mehta

Title: What is the Lagrangian for Nonlinear Filtering?

Abstract: There is a certain magic involved in recasting the equations of Physics, and the algorithms of Engineering, in variational terms. The most classical of these 'magics' is Lagrange's formulation of Newtonian mechanics. An accessible modern take on all this and more appears in the February 19, 2019 issue of The New Yorker magazine: https://www.newyorker.com/science/elements/a-different-kind-of-theory-of-everything?reload=true

My talk is concerned with a variational (optimal control type) formulation of the problem of nonlinear filtering/estimation. Such formulations are referred to as duality between optimal estimation and optimal control. The first duality principle appears in the seminal (1961) paper of Kalman-Bucy, where the problem of minimum variance estimation is shown to be dual to a linear quadratic optimal control problem.

In my talk, I will describe a generalization of the Kalman-Bucy duality theory to nonlinear filtering. The generalization is an exact extension, in the sense that the dual optimal control problem has the same minimum variance structure for linear and nonlinear filtering problems. Kalman-Bucy’s classical result is shown to be a special case. During the talk, I will also attempt to review other types of duality relationships that have appeared over the years for the problem of linear and nonlinear filtering.
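For orientation, the classical linear-Gaussian duality referred to above can be stated schematically as follows (this is the standard textbook form of the Kalman-Bucy result, not material from the papers cited below):

```latex
% Linear-Gaussian filtering model:
\begin{align*}
  dX_t &= A X_t\,dt + dB_t, & \operatorname{Cov}(dB_t) &= Q\,dt,\\
  dY_t &= H X_t\,dt + dW_t, & \operatorname{Cov}(dW_t) &= R\,dt.
\end{align*}
% Dual optimal control problem for the minimum-variance estimate of a^T X_T:
\begin{align*}
  -\dot y_t &= A^{\top} y_t - H^{\top} u_t, \qquad y_T = a,\\
  \min_u \;&\; y_0^{\top} \Sigma_0\, y_0
    + \int_0^T \left( y_t^{\top} Q\, y_t + u_t^{\top} R\, u_t \right) dt.
\end{align*}
% The optimal cost equals the minimum variance of the estimate of a^T X_T,
% and the optimal control recovers the Kalman--Bucy gain.
```

The generalization described in the talk extends this estimation-control correspondence beyond the linear case.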

This is joint work with Jin Won Kim and Sean Meyn. The talk is based on the following papers: https://arxiv.org/pdf/1903.11195.pdf and https://arxiv.org/pdf/1904.01710.pdf.