Exploiting and generalizing relative speculative leakage
Choudhary, Rutvik
Permalink
https://hdl.handle.net/2142/129893
Description
- Title
- Exploiting and generalizing relative speculative leakage
- Author(s)
- Choudhary, Rutvik
- Issue Date
- 2025-07-18
- Director of Research (if dissertation) or Advisor (if thesis)
- Fletcher, Christopher W
- Doctoral Committee Chair(s)
- Fletcher, Christopher W
- Committee Member(s)
- Marinov, Darko
- Kumar, Rakesh
- Morrison, Adam
- Department of Study
- Siebel School Comp & Data Sci
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- hardware security
- Abstract
- Constant-time code has been, until recently, the de facto way to write secure programs. Unfortunately, recent microarchitecture-based attacks have been shown to undermine the security of modern software, the most notorious being speculative execution attacks. Previous attempts to protect against these attacks have either been too costly in terms of performance overhead or have had to reduce their protection scope, making them unusable for constant-time programs. There have been attempts to formalize the speculative safety of programs and describe speculative leakage, but these model speculative execution only and can provide only a "satisfied"/"not satisfied" answer rather than something more descriptive.
The first chapter proposes a hardware scheme, SPT, that utilizes a new security definition to provide comprehensive, low-overhead, and software-transparent protection against speculative execution attacks. A "secret" is defined to be only the data that does NOT leak non-speculatively. We delay the execution of all speculative transmitters that operate on secret data. When a transmitter becomes non-speculative, we mark the data passed to it as non-secret, so that data need not be protected from speculative leakage down the line. We use dynamic information flow techniques to further deduce what other data can be marked as non-secret. We describe the microarchitectural changes required to implement SPT in an out-of-order processor, implement SPT in the gem5 simulator, and evaluate it against the SPEC2017 benchmarks as well as some constant-time code. We find that it adds only 45%/11% overhead on average (depending on the attack model) relative to an insecure processor. Compared to a secure baseline with the same protection scope, SPT reduces overhead by an average of 3.6×/3×.
The second chapter utilizes the same definition of "secret" to reduce the overheads of software-level protection against speculative execution attacks. Rather than explicitly tracking secrets, we instead model "attacker knowledge": data that is guaranteed to leak in a non-speculative execution. We can then remove protections for data that is part of attacker knowledge. This provides the guarantee that any data that does not leak in a non-speculative execution will never leak in a speculative execution, but with lower overhead than other software-level mechanisms. We deduce the attacker knowledge of a given program using our proposed analysis, Declassiflow, which consists of a series of data-flow analyses and a symbolic execution pass. We implement the data-flow passes in LLVM and use Klee for the symbolic execution. We then use Declassiflow to show that we can reduce the protection overhead of speculative load hardening (SLH), a state-of-the-art software-level mechanism, when protecting several constant-time programs.
The third chapter generalizes the ideas of relative speculative leakage exploited by SPT and Declassiflow and proposes a theory for describing what can be leaked by arbitrary microarchitectural optimizations. We define a general notion of relative leakage and propose a concept known as the "reach", which describes what architectural state can be leaked by a program running on one machine relative to another machine. We provide a method for statically estimating the reach of arbitrary programs by using known leaky patterns called "gadgets". We also describe a method for formally verifying that a set of gadgets is safe to use for any possible program. We implement several CPU models in the uclid5 model checker and test various gadget libraries against them.
(Illustrative sketches of the hardening and gadget patterns mentioned above follow this record.)
- Graduation Semester
- 2025-08
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129893
- Copyright and License Information
- Copyright 2025 Rutvik Choudhary
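The following is a minimal C sketch, not taken from the dissertation, of the kind of speculative-load-hardening (SLH) style masking that the second chapter's analysis targets. All names here (harden, lookup, misspec_mask) and the simplified masking logic are assumptions for illustration; real SLH is an LLVM compiler pass, not hand-written code. The point it illustrates: a misspeculation mask keeps speculatively loaded data away from transmitters, and an analysis in the spirit of Declassiflow could justify dropping that masking for values that provably leak non-speculatively anyway (i.e., are attacker knowledge).

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical, manually written illustration of SLH-style hardening.
     * misspec_mask is all-zeros on the architecturally taken path and is
     * intended to become all-ones on a mispredicted path, so masking with
     * it keeps speculatively loaded data from reaching a transmitter. */
    static inline uint64_t harden(uint64_t value, uint64_t misspec_mask) {
        return value & ~misspec_mask;
    }

    uint64_t lookup(const uint64_t *table, size_t idx, size_t len) {
        uint64_t misspec_mask = 0;

        if (idx < len) {
            /* Re-evaluate the branch condition data-dependently; under
             * misspeculation this would set the mask to all-ones (a real
             * SLH pass does this with conditional moves at the IR level). */
            misspec_mask |= -(uint64_t)!(idx < len);

            uint64_t v = table[idx];

            /* If an analysis like Declassiflow proves that v is "attacker
             * knowledge" (it leaks non-speculatively anyway), this
             * hardening could be removed to recover performance. */
            return harden(v, misspec_mask);
        }
        return 0;
    }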
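Below is a second minimal, hypothetical C example of the kind of leaky pattern ("gadget") the third chapter catalogues: a load whose address depends on a secret byte, so which cache line of the probe array gets touched encodes the secret and can be recovered through a cache side channel. The array name, its size, and the 64-byte stride are assumptions chosen to match a typical cache-line granularity, not code from the dissertation.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical transmitter gadget: the secret selects the load address. */
    static uint8_t probe[256 * 64];   /* one cache line per possible byte value */

    void transmit(uint8_t secret) {
        /* Secret-dependent load address: which line of probe becomes cached
         * reveals the value of secret to a cache-timing observer. */
        volatile uint8_t sink = probe[(size_t)secret * 64];
        (void)sink;
    }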
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY