Files in this item

SHOU-DISSERTATION-2018.pdf (PDF, 3 MB)

Title: Learning-accelerated algorithms for simulation and optimization
Author(s): Shou, Chenchao
Director of Research: West, Matthew
Doctoral Committee Chair(s): West, Matthew
Doctoral Committee Member(s): Srikant, Rayadurgam; Mehta, Prashant; Chronopoulou, Alexandra; He, Niao
Department / Program: Mechanical Sci & Engineering
Discipline: Mechanical Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): particle simulation; machine learning; variance reduction; control variates; mean field; agent-based modeling; surrogate optimization; expensive black-box optimization; simulation optimization; hyperparameter tuning
Abstract: Simulation and optimization are fundamental building blocks for many computational methods in science and engineering. In this work, we explore the use of machine learning techniques to accelerate compute-intensive tasks in both simulation and optimization. Specifically, we develop two algorithms: (1) a variance reduction algorithm for Monte Carlo simulations of mean-field particle systems, and (2) a global optimization algorithm for noisy expensive functions. For the variance reduction algorithm, we develop an adaptive-control-variates technique for a class of simulations in which many particles interact via common mean fields. Because these models involve large numbers of particles and highly nonlinear dynamics, simulating them is often time-consuming. Our algorithm treats the particles in the system as a source of training data, uses machine learning to automatically build a model of the underlying particle dynamics, and then constructs control variates from the learned model. We prove that the mean estimators produced by our algorithm are unbiased. More importantly, we show that, for a system with sufficiently many particles and under certain regularity conditions, our algorithm asymptotically produces more efficient estimators than naive Monte Carlo. Applied to an aerosol particle simulation, our variance reduction algorithm made the simulation about 7 times faster. The second algorithm is a parallel surrogate optimization algorithm, known as ProSRS, for noisy expensive black-box functions. Within this algorithm, we develop an efficient weighted-radial-basis regression procedure for constructing the surrogates. Furthermore, we introduce a novel tree-based technique, called the “zoom strategy”, to further improve optimization efficiency.
We prove that as ProSRS runs longer, the probability that at least one evaluated sample lies arbitrarily close to the global minimum converges to one. We compared ProSRS to several state-of-the-art Bayesian optimization algorithms on a suite of standard benchmark functions and on two real machine-learning hyperparameter-tuning problems, and found that our algorithm not only converges significantly faster but also costs orders of magnitude less to run. We also applied ProSRS to the problem of characterizing and validating a complex aerosol model against experimental measurements, in which twelve simulation parameters must be optimized. This case study illustrates the use of ProSRS on general global optimization problems.
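The general control-variates idea behind the first algorithm can be illustrated in a few lines of plain numpy. Everything below (the toy objective, the polynomial surrogate standing in for the learned model, the sample sizes) is an illustrative assumption, not the dissertation's actual method: a cheap model g is trained on independent data, and the estimator averages the residual f - g plus an inexpensive estimate of E[g], which keeps it unbiased while shrinking the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_f(x):
    # Hypothetical stand-in for an expensive per-particle computation.
    return np.sin(3 * x) + 0.1 * x**2

# Step 1: train a cheap surrogate g on an independent batch of samples
# (independence of the training batch is what preserves unbiasedness).
x_train = rng.uniform(-1, 1, 200)
coeffs = np.polyfit(x_train, expensive_f(x_train), deg=5)
g = np.poly1d(coeffs)  # degree-5 polynomial plays the role of the learned model

# Step 2: estimate E[g] very accurately from cheap evaluations of g alone.
mu_g = np.mean(g(rng.uniform(-1, 1, 100_000)))

# Step 3: control-variate estimator on a fresh batch of expensive evaluations.
x = rng.uniform(-1, 1, 1000)
cv_estimate = np.mean(expensive_f(x) - g(x)) + mu_g
naive_estimate = np.mean(expensive_f(x))
```

Because g tracks f closely, the residual f - g has far smaller variance than f itself, so `cv_estimate` needs far fewer expensive evaluations than `naive_estimate` for the same accuracy.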
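The surrogate-optimization loop of the second algorithm can likewise be sketched in a heavily simplified, numpy-only form. The Gaussian-RBF ridge fit, the fixed 80/20 exploitation–exploration weighting, and the one-dimensional toy objective below are all illustrative assumptions; they are not the dissertation's weighted-radial-basis regression procedure, candidate scheme, or zoom strategy. The skeleton, fit a smoothing radial-basis surrogate to noisy evaluations, score candidate points by surrogate value and distance to existing samples, evaluate the best candidate, and refit, is the common shape of such methods.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_expensive(x):
    # Hypothetical noisy black-box objective with true minimum at x = 0.5.
    return (x - 0.5) ** 2 + 0.01 * rng.standard_normal(np.shape(x))

def fit_rbf(xs, ys, eps=2.0, reg=1e-3):
    # Gaussian-RBF ridge regression: a smoothing (not interpolating)
    # surrogate, so evaluation noise is not chased exactly.
    K = np.exp(-eps * (xs[:, None] - xs[None, :]) ** 2)
    w = np.linalg.solve(K + reg * np.eye(len(xs)), ys)
    return lambda x: np.exp(-eps * (x[:, None] - xs[None, :]) ** 2) @ w

# Initial space-filling design on [0, 1].
xs = rng.uniform(0, 1, 8)
ys = noisy_expensive(xs)

for _ in range(15):
    surrogate = fit_rbf(xs, ys)
    cands = rng.uniform(0, 1, 500)
    # Score candidates: low predicted value (exploitation) traded off
    # against distance to already-evaluated points (exploration).
    s = surrogate(cands)
    dist = np.min(np.abs(cands[:, None] - xs[None, :]), axis=1)
    score = 0.8 * (s - s.min()) / (np.ptp(s) + 1e-12) \
            - 0.2 * dist / (dist.max() + 1e-12)
    x_new = cands[np.argmin(score)]
    xs = np.append(xs, x_new)
    ys = np.append(ys, noisy_expensive(x_new))

best = xs[np.argmin(ys)]
```

Each iteration spends one expensive evaluation, while all surrogate work is cheap numpy linear algebra, which is the source of the cost advantage over methods that maintain a full Gaussian-process posterior.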
Issue Date: 2018-11-30
Rights Information: Copyright 2018 Chenchao Shou
Date Available in IDEALS: 2019-02-06
Date Deposited: 2018-12
