KL divergence-based disagreement sampling for multi-fidelity Bayesian optimization
Iyer, Abhishek
Permalink
https://hdl.handle.net/2142/129346
Description
Title
KL divergence-based disagreement sampling for multi-fidelity Bayesian optimization
Author(s)
Iyer, Abhishek
Issue Date
2025-05-09
Director of Research (if dissertation) or Advisor (if thesis)
Smart, Jordan
Department of Study
Aerospace Engineering
Discipline
Aerospace Engineering
Degree Granting Institution
University of Illinois Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Active Learning
Bayesian Optimization
Multi-fidelity
Airfoil
Language
eng
Abstract
Computational costs have risen sharply over time as designs have grown more complex, and the drive to contain these costs has motivated the adoption of multi-fidelity optimization techniques. Active learning has gained prominence as a sampling approach that makes the most of limited data when optimizing a design problem, thereby reducing the cost of computation. This thesis proposes a novel KL-divergence-based disagreement sampling strategy that iteratively draws on multiple fidelities to sample through a design space and optimize a target objective. The method was first evaluated on the Rastrigin function and then applied to a design space of 4-digit NACA airfoil combinations. In both cases, the KL-divergence disagreement sampler produced a better optimization result than the Least Confidence method while reducing reliance on information from the higher fidelities.
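The core idea of disagreement-based sampling can be illustrated with a minimal sketch: score each candidate point by the KL divergence between the low- and high-fidelity surrogates' predictive distributions, and query where they disagree most. This is my own illustrative reconstruction under a Gaussian-predictive assumption; the function names are hypothetical and the thesis's actual acquisition rule may differ.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(N(mu_p, var_p) || N(mu_q, var_q)) for scalar Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def select_by_kl_disagreement(mu_lo, var_lo, mu_hi, var_hi):
    """Return the candidate index where the low- and high-fidelity
    predictive distributions disagree most (largest KL divergence)."""
    kl = gaussian_kl(mu_lo, var_lo, mu_hi, var_hi)
    return int(np.argmax(kl))

# Toy candidate set: the two surrogates agree everywhere except index 2.
mu_lo = np.array([0.0, 1.0, 3.0, 1.0])
mu_hi = np.array([0.0, 1.1, 0.5, 1.0])
var_lo = np.full(4, 0.25)
var_hi = np.full(4, 0.25)
print(select_by_kl_disagreement(mu_lo, var_lo, mu_hi, var_hi))  # → 2
```

Because KL divergence accounts for both mean and variance mismatch, such a criterion can flag regions where the cheap surrogate is confidently wrong, which a pure Least Confidence rule (uncertainty of a single model) would miss.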