# Applied/ACMS/absS15

## Revision as of 08:57, 17 April 2015

## Contents

- 1 ACMS Abstracts: Spring 2015
- 1.1 Irene Kyza (U Dundee)
- 1.2 Daniel Vimont (UW)
- 1.3 Saverio Spagnolie (UW)
- 1.4 Jonathan Freund (UIUC)
- 1.5 Markos Katsoulakis (U Mass Amherst)
- 1.6 Frederic Coquel (Ecole Polytechnique Paris)
- 1.7 Lisa Fauci (Tulane)
- 1.8 Paulo Arratia (U Penn)
- 1.9 Tao Zhou (Chinese Academy of Sciences)
- 1.10 Bin Cheng (University of Surrey)
- 1.11 Murad Banaji (University of Portsmouth)
- 1.12 Manoj Gopalkrishnan (Tata Institute Mumbai)
- 1.13 Stephen Wright (UW)
- 1.14 Thomas Powers (Brown)
- 1.15 Elaine Spiller (Marquette)

# ACMS Abstracts: Spring 2015

### Irene Kyza (U Dundee)

*Adaptivity and blowup detection for semilinear evolution convection-diffusion equations based on a posteriori error control*

We discuss recent results on the a posteriori error control and adaptivity for an evolution semilinear convection-diffusion model problem with possible blowup in finite time. This problem belongs to the broad class of partial differential equations describing, e.g., tumor growth, chemotaxis, and cell modelling. In particular, we derive conditional a posteriori error estimates (estimates which are valid under conditions of a posteriori type) for an interior penalty discontinuous Galerkin (dG) implicit-explicit (IMEX) method, using a continuation argument. Compared to previous work, the obtained conditions are more localised and allow efficient error control near the blowup time. Utilising the conditional a posteriori estimator, we propose an adaptive algorithm that appears to perform satisfactorily; in particular, it leads to good approximation of the blowup time and of the exact solution close to the blowup. Numerical experiments illustrate and complement our theoretical results. This is joint work with A. Cangiani, E.H. Georgoulis, and S. Metcalfe from the University of Leicester.
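
The flavour of estimator-driven adaptivity near blowup can be illustrated on a scalar toy problem. The sketch below is not the dG IMEX scheme of the talk: it is a minimal explicit Euler integrator for u' = u^2, whose exact solution blows up at t = 1, with the step size refined whenever a crude a-posteriori-style indicator (the relative increment per step) exceeds a tolerance.

```python
# Toy illustration (not the dG-IMEX method of the talk): adaptive
# explicit Euler for u' = u^2, u(0) = 1, which blows up at t = 1.
# The step is halved whenever the relative increment exceeds a
# tolerance, so the integrator can track the solution up to blowup.

def integrate_to_blowup(u0=1.0, dt=1e-2, tol=1e-2, u_max=1e8):
    t, u = 0.0, u0
    while u < u_max:
        du = dt * u * u              # explicit Euler increment
        while du > tol * u:          # indicator too large:
            dt *= 0.5                # refine the time step
            du = dt * u * u
        u += du
        t += dt
    return t                         # estimate of the blowup time

t_blow = integrate_to_blowup()
print(t_blow)                        # close to the exact value 1.0
```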

### Daniel Vimont (UW)

*Linear Inverse Modeling of Central and East Pacific El Niño / Southern Oscillation (ENSO) Events*

Research on the structure and evolution of individual El Niño / Southern Oscillation (ENSO) events has identified two categories of ENSO event characteristics that can be defined by maximum equatorial SST anomalies centered in the Central Pacific (around the dateline to 150 deg. W; CP events) or in the Eastern Pacific (east of about 150 deg. W; EP events). The distinction between these two events is not just academic: both types of event evolve differently, implying different predictability; the events tend to have different maximum amplitude; and the global teleconnection differs between each type of event.

In this presentation I will (i) describe the Linear Inverse Modeling (LIM) technique, (ii) apply LIM to determine an empirical dynamical operator that governs the evolution of tropical Pacific climate variability, (iii) define norms under which initial conditions can be derived that optimally lead to growth of CP or EP ENSO events, and (iv) identify patterns of stochastic forcing that are responsible for exciting each type of event.
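
Step (ii) can be sketched on synthetic data. In the version below (a two-dimensional toy, not an ENSO model), the dynamical operator L of dx/dt = Lx + noise is recovered from the lag-covariance statistics of a simulated time series via the standard LIM identities G(tau) = C(tau)C(0)^{-1} and L = log(G)/tau.

```python
import numpy as np
from scipy.linalg import logm

# Sketch of Linear Inverse Modeling (LIM) on synthetic data: the
# operator L of dx/dt = L x + white noise is estimated from the
# zero-lag and lag-tau covariances of a simulated time series.
# The 2-D example system is illustrative, not an ENSO model.

rng = np.random.default_rng(0)
L_true = np.array([[-0.5, 1.0],
                   [-0.3, -0.4]])          # stable "dynamical operator"
dt, n = 0.01, 400_000
x = np.zeros(2)
X = np.empty((n, 2))
for k in range(n):                         # Euler-Maruyama simulation
    x = x + dt * (L_true @ x) + np.sqrt(dt) * rng.standard_normal(2)
    X[k] = x

tau = 100                                  # lag in steps (tau*dt = 1.0)
X0, Xtau = X[:-tau], X[tau:]
C0 = X0.T @ X0 / len(X0)                   # zero-lag covariance
Ctau = Xtau.T @ X0 / len(X0)               # lag-tau covariance
G = Ctau @ np.linalg.inv(C0)               # propagator exp(L * tau*dt)
L_est = logm(G).real / (tau * dt)          # estimated operator
print(np.round(L_est, 2))                  # approximates L_true
```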

### Saverio Spagnolie (UW)

*Sedimentation in viscous fluids: flexible filaments and boundary effects*

The deformation and transport of elastic filaments in viscous fluids play central roles in many biological and technological processes. Compared with the well-studied case of sedimenting rigid rods, the introduction of filament compliance may cause a significant alteration in the long-time sedimentation orientation and filament geometry. In the weakly flexible regime, a multiple-scale asymptotic expansion is used to obtain expressions for filament translations, rotations, and shapes which agree excellently with full numerical simulations. In the highly flexible regime, we show that a filament sedimenting along its long axis is susceptible to a buckling instability. Embedding the analytical results for a single filament into a mean-field theory, we show how flexibility affects a well-established concentration instability in a sedimenting suspension.

Another problem of classical interest in fluid mechanics involves the sedimentation of a rigid particle near a wall, but most studies have been numerical or experimental in nature. We have derived ordinary differential equations describing the sedimentation of arbitrarily oriented prolate and oblate spheroids near a vertical or inclined plane wall, which may be solved analytically for many important special cases. Full trajectories are predicted which compare favorably with complete numerical simulations performed using a novel double-layer boundary integral formulation, a Method of Stresslet Images. Several trajectory types emerge, termed tumbling, glancing, reversing, and sliding, along with their fully three-dimensional analogues.

### Jonathan Freund (UIUC)

*Adjoint-based optimization for understanding and reducing flow noise*

Advanced simulation tools, particularly large-eddy simulation techniques, are becoming capable of making quality predictions of jet noise for realistic nozzle geometries and at engineering-relevant flow conditions. Increasing computer resources will be a key factor in improving these predictions still further. Quality prediction, however, is only a necessary condition for the use of such simulations in design optimization. Predictions do not of themselves lead to quieter designs. They must be interpreted or harnessed in some way that leads to design improvements. As yet, such simulations have not yielded any simplifying principles that offer general design guidance. The turbulence mechanisms leading to jet noise remain poorly described in their complexity. In this light, we have implemented and demonstrated an aeroacoustic adjoint-based optimization technique that automatically calculates gradients that point the direction in which to adjust controls in order to improve designs. This is done with only a single flow solution and a solution of an adjoint system, which is solved at computational cost comparable to that for the flow. Optimization requires iterations, but having the gradient information provided via the adjoint accelerates convergence in a manner that is insensitive to the number of parameters to be optimized. The talk will review the formulation of the adjoint of the compressible flow equations for optimizing noise-reducing controls and present examples of its use. We will particularly focus on some mechanisms of flow noise that have been revealed via this approach.
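
The key cost argument (one adjoint solve yields the whole gradient, whatever the number of controls) can be shown on a linear stand-in. This sketch is not the compressible-flow adjoint of the talk: the "flow" is a steady linear system A u = B p with controls p, the objective is J(u) = ||u - u_target||^2, and the adjoint gradient is checked against finite differences.

```python
import numpy as np

# Minimal discrete-adjoint sketch (a linear stand-in, not the
# compressible-flow adjoint of the talk): for the steady model
# A u = B p, a single adjoint solve A^T lam = dJ/du yields the
# full gradient dJ/dp = B^T lam, at a cost comparable to one
# state solve and independent of the number of controls.

rng = np.random.default_rng(1)
n, m = 20, 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # state operator
B = rng.standard_normal((n, m))                     # control-to-source map
u_target = rng.standard_normal(n)

def J_and_grad(p):
    u = np.linalg.solve(A, B @ p)                     # state solve
    lam = np.linalg.solve(A.T, 2.0 * (u - u_target))  # adjoint solve
    return np.sum((u - u_target) ** 2), B.T @ lam     # gradient = B^T lam

p0 = rng.standard_normal(m)
J0, g = J_and_grad(p0)

# Finite-difference check of the first gradient component
eps = 1e-6
e0 = np.zeros(m)
e0[0] = 1.0
J1, _ = J_and_grad(p0 + eps * e0)
fd_err = abs((J1 - J0) / eps - g[0])
print(fd_err)                                         # small FD error
```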

### Markos Katsoulakis (U Mass Amherst)

*Information Theory methods for parameter sensitivity and coarse-graining of high-dimensional stochastic dynamics*

In this talk we discuss path-space, information theory-based sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, as well as information-theoretic tools for parameterized coarse-graining of non-equilibrium extended systems. Furthermore, we establish their connections with goal-oriented methods in terms of new, sharp uncertainty quantification inequalities. The combination of the proposed methodologies is capable of (a) handling molecular-level models with a very large number of parameters, (b) addressing and mitigating the high variance of statistical estimators, e.g. for sensitivity analysis in spatially distributed Kinetic Monte Carlo (KMC), and (c) tackling non-equilibrium processes, typically associated with coupled physicochemical mechanisms, boundary conditions, etc. (such as reaction-diffusion systems), where even the steady states are unknown, e.g. do not have a Gibbs structure. Finally, the path-wise information theory tools (d) yield a surprisingly simple, tractable, and easy-to-implement approach to quantify and rank parameter sensitivities, and (e) provide reliable model parameterizations for coarse-grained molecular systems and their dynamics, based on fine-scale data and rational model selection through suitable path-space (dynamics-based) information criteria. The proposed methods are tested against a wide range of high-dimensional stochastic processes, ranging from complex biochemical reaction networks with hundreds of parameters, to spatially extended Kinetic Monte Carlo models in catalysis, and Langevin dynamics of interacting molecules with internal degrees of freedom.

### Frederic Coquel (Ecole Polytechnique Paris)

*Jin and Xin's Relaxation Solvers with Defect Measure Corrections*

We present a class of finite volume methods for approximating entropy weak solutions of non-linear hyperbolic PDEs. The main motivation is to resolve discontinuities as sharply as Glimm's scheme, but without the need for solving Riemann problems exactly. The sharp capture of discontinuities is known to be mandatory in situations where discontinuities are sensitive to viscous perturbations while exact Riemann solutions may not be available (typically in phase transition problems). More generally, sharp capture also prevents discrete shock profiles from exhibiting over- and undershoots, which is decisive in many applications (in combustion, for instance). We propose to replace exact Riemann solutions by self-similar solutions conveniently derived from Jin and Xin's relaxation framework. In the limit of a vanishing relaxation time, the relaxation source term exhibits Dirac measures concentrated on the discontinuities. We show how to handle these so-called defect measures in order to exactly capture propagating shock solutions while remaining computationally efficient. The lecture will essentially focus on the convergence analysis in the scalar setting. Special attention is paid to the consistency of the proposed correction with respect to the entropy condition. We prove the convergence of the method to the unique Kruzkov solution.

### Lisa Fauci (Tulane)

*Flagellar motility: negotiating sticky elastic bonds and viscoelastic networks*

We will discuss a Stokes fluid model that incorporates forces due to elastic structures in the fluid environment of the actuated flagellum. We will present recent computational investigations of hyperactivated sperm detachment from oviductal epithelium as well as swimming through viscoelastic networks.

### Paulo Arratia (U Penn)

*Pulling and pushing in complex fluids*

Many microorganisms evolve in media that contain (bio)polymers and/or solids; examples include cervical mucus, intestinal fluid, wet soil, and tissues. These so-called complex fluids often exhibit non-Newtonian rheological behavior such as shear-thinning viscosity and elasticity. In this talk, I will discuss recent experiments on the effects of fluid elasticity on the swimming behavior of microorganisms. Two main microorganisms are used, the green alga C. reinhardtii (a puller-type swimmer) and the bacterium E. coli (a pusher-type swimmer). For the case of pullers (C. reinhardtii), we find that fluid elasticity hinders the cell’s overall swimming speed but leads to an increase in the cell’s flagellum beating frequency. The beating kinematics and flagellum waveforms are also significantly modified by fluid elasticity. For the case of pushers (E. coli), the presence of even a small amount of polymer in the medium suppresses the bacteria's run-and-tumble mechanism. The bacteria spend more time in ballistic mode and swim faster as well. Single-molecule experiments using fluorescently labeled DNA show that the flow fields generated by E. coli are able to stretch initially coiled polymer molecules and thus induce elastic stresses in the fluid. These results demonstrate the intimate link between swimming kinematics and fluid rheology, and show that one can control the spreading and motility of microorganisms by tuning fluid properties.

### Tao Zhou (Chinese Academy of Sciences)

*The Christoffel function weighted least-squares for stochastic collocation approximations: applications to Uncertainty Quantification*

We shall consider multivariate stochastic collocation methods on unstructured grids. The motivation for such a study is applications in parametric Uncertainty Quantification (UQ). We will first give a general framework of stochastic collocation methods, which includes approaches such as compressed sensing, least-squares, and interpolation. Particular attention will then be given to the least-squares approach, and we will review recent progress on this topic.
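
A one-dimensional caricature of the weighted least-squares approach can be written in a few lines (the talk concerns the harder multivariate, unstructured-grid setting). In the sketch below, random samples are drawn, the design matrix of orthonormal Legendre polynomials is weighted by the inverse of the normalized Christoffel function, and the coefficients come from a weighted least-squares solve; the target function and sizes are arbitrary choices.

```python
import numpy as np

# Hedged 1-D sketch of Christoffel-function-weighted least squares:
# approximate f on [-1, 1] in an orthonormal Legendre basis from
# random samples, with weights w(x) = N / sum_k phi_k(x)^2 (the
# inverse normalized Christoffel function).

rng = np.random.default_rng(2)
N_basis, M_samples = 8, 200
f = np.exp                                   # function to approximate

x = np.cos(np.pi * rng.random(M_samples))    # arcsine-distributed samples
V = np.polynomial.legendre.legvander(x, N_basis - 1)
V *= np.sqrt(2 * np.arange(N_basis) + 1)     # orthonormal wrt uniform
w = N_basis / np.sum(V ** 2, axis=1)         # inverse Christoffel weights

sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

# Evaluate the uniform approximation error on a fine grid
xg = np.linspace(-1, 1, 1001)
Vg = np.polynomial.legendre.legvander(xg, N_basis - 1)
Vg *= np.sqrt(2 * np.arange(N_basis) + 1)
err = np.max(np.abs(Vg @ coef - f(xg)))
print(err)                                   # small uniform error
```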

### Bin Cheng (University of Surrey)

*Error estimates and 2nd order corrections to reduced fluid models*

In this PDE analysis work, I will discuss the application of time-averaging to obtaining rigorous error estimates for some reduced fluid models, including the incompressible approximation and the quasi-geostrophic approximation. A spatial boundary may be present in the form of an impenetrable solid wall. I will show a somewhat surprising result on the epsilon^2 accuracy of the incompressible approximation of the Euler equations, thanks to several decoupling properties.

### Murad Banaji (University of Portsmouth)

*Nonexpansivity in chemical reaction networks*

This work is motivated by the observation that quite often systems of differential equations describing chemical reaction networks (CRNs) display simple global behaviour such as convergence of all orbits to a unique equilibrium under only weak and physically reasonable assumptions on the reaction rates (kinetics). We are led to wonder if the structure of a CRN may sometimes force some distance between solutions to decrease (or at least not increase) with time. If so, how can we find this nonincreasing quantity? We explore different ways in which CRNs can define nonexpansive semiflows (recall that a semiflow [math](\phi_t)_{t \geq 0}[/math] on some Banach space [math](X, |\cdot|)[/math] is nonexpansive if [math]|\phi_t(x)-\phi_t(y)| \leq |x-y|[/math] for all [math]x,y \in X[/math] and all [math]t \geq 0[/math]). It turns out that in CRNs the natural evolution of chemical concentrations may be nonexpansive; or a nonexpansive semiflow may be obtained from the evolution of the so-called "extents" of reactions. In both cases we may be able to draw global conclusions about convergence of chemical concentrations. In each case the challenge is to find the correct norm to get nonexpansivity for arbitrary kinetics. To construct such norms and show nonexpansivity we appeal to the theory of monotone dynamical systems. Families of CRNs which can be analysed in this way are presented; however characterising fully the class of CRNs to which this theory applies remains an open - and undoubtedly difficult - task.

This is joint work with Bas Lemmens (University of Kent) and Pete Donnell (University of Portsmouth).
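
The nonexpansivity property is easy to observe numerically on a toy network (this is an illustration, not one of the CRN families analysed in the talk): for the reversible reaction A <-> B with mass-action kinetics, the flow of concentrations is nonexpansive in the l1 norm, so the l1 distance between any two solutions never grows.

```python
import numpy as np

# Toy numerical check of nonexpansivity (not a CRN family from the
# talk): for A <-> B with mass-action rates k1, k2, the l1 distance
# between two solutions is nonincreasing along the (Euler-discretized)
# flow, provided the step is small enough.

def step(z, k1=2.0, k2=1.0, dt=1e-3):
    a, b = z
    r = k1 * a - k2 * b                      # net rate of A -> B
    return np.array([a - dt * r, b + dt * r])

z1 = np.array([1.0, 0.0])
z2 = np.array([0.2, 0.5])
dists = []
for _ in range(5000):
    dists.append(np.abs(z1 - z2).sum())
    z1, z2 = step(z1), step(z2)
dists.append(np.abs(z1 - z2).sum())

# the l1 distance is (numerically) nonincreasing along the flow
print(all(d2 <= d1 + 1e-12 for d1, d2 in zip(dists, dists[1:])))  # True
```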

### Manoj Gopalkrishnan (Tata Institute Mumbai)

*Autocatalysis in Reaction Networks*

The notion of "critical siphon" has been studied in reaction network theory in the context of the persistence question. We explore the combinatorics of critical siphons. We introduce the notions of "drainable" and "self-replicable" (or autocatalytic) siphons. We show that: every minimal critical siphon is either drainable or self-replicable; reaction networks without drainable siphons are persistent; and non-autocatalytic weakly-reversible networks are persistent. These results generalize previous results for catalytic and atomic reaction networks. The proof is combinatorial in nature.

Reference: http://arxiv.org/abs/1309.3957
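
The basic combinatorial object can be sketched directly. The checker below is a generic definition-level illustration, not code from the talk, and the example network is arbitrary: a siphon is a set Z of species such that every reaction producing a species of Z also consumes a species of Z.

```python
# Generic siphon check for a reaction network given as a list of
# (reactant_set, product_set) pairs.  A siphon is a species set Z
# such that every reaction producing a species of Z also consumes
# a species of Z.

def is_siphon(Z, reactions):
    Z = set(Z)
    return all((not (products & Z)) or bool(reactants & Z)
               for reactants, products in reactions)

reactions = [({"A"}, {"B"}),        # A -> B
             ({"B"}, {"A"}),        # B -> A
             ({"B"}, {"C"})]        # B -> C

print(is_siphon({"A", "B"}, reactions))   # True
print(is_siphon({"C"}, reactions))        # False: B -> C produces C
                                          # without consuming from {C}
```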

### Stephen Wright (UW)

*Optimization in learning and data analysis*

The approach of minimizing a function by successively fixing most of its variables and minimizing with respect to the others dates back many years, and has been applied in an enormous range of applications. Until recently, however, the approach did not command much respect among optimization researchers; only a few prominent individuals took it seriously. Recent years have seen an explosion in applications, particularly in data analysis, which has driven a new wave of research into variants of coordinate descent and their convergence properties. Such aspects as randomization in the choice of variables to fix and relax, acceleration methods, extension to regularized objectives, and parallel implementation have commanded a good deal of attention during the past five years. In this lecture, I will survey these recent developments, then focus on recent work on asynchronous parallel implementations for multicore computers. An analysis of the properties of the latter algorithms shows that near-linear speedup can be expected, up to a number of processors that depends on the coupling between the variables.

This talk covers joint work with Ji Liu and other colleagues.
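
The basic iteration is simple to state in code. The sketch below is a serial randomized coordinate descent for a strongly convex quadratic (the asynchronous multicore variants discussed in the talk run many such updates concurrently): at each step one randomly chosen coordinate is minimized exactly while the others stay fixed.

```python
import numpy as np

# Minimal randomized coordinate descent for the quadratic
# f(x) = 0.5 x'Qx - b'x with Q positive definite.  Each step picks
# a random coordinate i and minimizes f over x[i] exactly, which
# amounts to one row of a Gauss-Seidel-type update.

rng = np.random.default_rng(3)
n = 10
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                 # positive definite
b = rng.standard_normal(n)
x = np.zeros(n)

for _ in range(2000):
    i = rng.integers(n)                     # random coordinate choice
    # exact minimization over coordinate i, all others fixed:
    # x[i] = (b[i] - sum_{j != i} Q[i, j] x[j]) / Q[i, i]
    x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]

x_star = np.linalg.solve(Q, b)              # direct solve, for comparison
print(np.max(np.abs(x - x_star)))           # close to zero
```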

### Thomas Powers (Brown)

*Fluid mechanics of swimming microorganisms in viscoelastic and anisotropic fluids*

Swimming microorganisms are typically found in complex fluids, which are full of polymers. We use theory and simple scale-model experiments to study how viscoelasticity affects the swimming speed of swimmers with simple illustrative stroke patterns, such as small-amplitude traveling waves and rigid-body rotation of helices. We also study swimming mechanics in anisotropic media, using a hexatic liquid crystal as a model. We find that the nature of the anchoring conditions for the liquid-crystalline degrees of freedom plays a critical role in determining the swimming speed. Furthermore, we study the fluid transport induced by the swimmer's motion by calculating the flux of fluid in the laboratory frame.

### Elaine Spiller (Marquette)

*Uncertainty quantification and geophysical hazard mapping*

PDE models of granular flows are invaluable tools for developing probabilistic hazard maps for volcanic landslides, but they are far from perfect. First, any probabilistic hazard map is conditioned on assumptions about the aleatoric uncertainty -- how mother nature rolls the dice -- and is hence tied to the choice of probability distributions describing various scenarios (e.g. initial and/or boundary conditions). New data, differing expert opinion, or emergent scenarios may suggest that the original assumptions were invalid, rendering a hazard map made under those assumptions not terribly useful. Epistemic uncertainty -- uncertainty due to a lack of model refinement -- arises through assumptions made in physical models, numerical approximation, and imperfect statistical models. In the context of geophysical hazard mapping, we propose a surrogate-based methodology which efficiently assesses the impact of various uncertainties, enabling a quick yet methodical comparison of the effects of uncertainty and error on computer model output.
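
The surrogate idea can be caricatured in a few lines. Everything below is an illustrative stand-in (the talk's setting involves granular-flow PDE simulators and statistical emulators): an "expensive" model is evaluated at a few design points, a cheap polynomial surrogate is fit to those runs, and the surrogate is then sampled heavily to estimate an exceedance probability under an assumed aleatoric input distribution.

```python
import numpy as np

# Toy surrogate-based hazard-probability sketch.  The "expensive
# model", the lognormal input scenario, and the exceedance threshold
# are all arbitrary stand-ins for a flow simulator and an assumed
# aleatoric distribution.

def expensive_model(v):                     # stand-in for a PDE simulation
    return 0.5 * v ** 2                     # "flow depth" vs input volume v

rng = np.random.default_rng(4)
design = np.linspace(0.0, 4.0, 9)           # few expensive design runs
runs = expensive_model(design)
surrogate = np.polynomial.Polynomial.fit(design, runs, deg=2)

# Aleatoric scenario: input v ~ lognormal; hazard = depth > threshold.
v = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
p_exceed = np.mean(surrogate(v) > 2.0)      # cheap Monte Carlo on surrogate
print(p_exceed)                             # exceedance probability estimate
```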