ApproxTuner 2.0: Towards quality-driven approximation tuning
Rambhia, Vidhi
Permalink
https://hdl.handle.net/2142/129683
Description
- Title
- ApproxTuner 2.0: Towards quality-driven approximation tuning
- Author(s)
- Rambhia, Vidhi
- Issue Date
- 2025-04-10
- Director of Research (if dissertation) or Advisor (if thesis)
- Adve, Vikram
- Department of Study
- Siebel School Comp & Data Sci
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- Approximate Computing
- Approximation Tuning
- Edge Computing
- Model Optimization
- Model Merging
- Abstract
- Deploying deep neural networks in real-world applications often requires a balance between model fidelity and resource efficiency. Traditional approximation techniques are usually applied in isolation and evaluated using proxy metrics such as accuracy, which may not reflect actual downstream task performance. This work presents ApproxTuner 2.0, a system for application-aware approximation tuning that builds on ApproxTuner [1], [2] and puts the downstream task at the center of the optimization process. The system explores a configuration space of approximations using modular, pluggable components—knobs, applications, and QoS evaluators—and scores each configuration using domain-specific metrics that directly reflect utility. We validate our approach on two case studies, monocular depth estimation with DepthAnythingV2 and object tracking with YOLOv8, demonstrating that ApproxTuner can uncover configurations with 4× speedups without compromising application-level quality. Complementing this, we also discuss LEWIS (LayEr-WIse Sparsity) [3], a guided model merging technique that approximates traditional fine-tuning by combining task vectors from pre-trained models. Rather than relying on expensive retraining or naive averaging, LEWIS uses layerwise activation-norm deltas to guide sparsity during model merging, offering a practical approximation of fine-tuning, especially in resource-constrained scenarios. Our experiments demonstrate that LEWIS significantly improves model merging effectiveness. Together, ApproxTuner and LEWIS represent two complementary axes of approximate computing for real-world AI: one tunes approximation strategies for a given model and downstream task, while the other facilitates rapid adaptation across tasks by approximating fine-tuning itself.
- Graduation Semester
- 2025-05
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129683
- Copyright and License Information
- Copyright 2025 Vidhi Rambhia
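The abstract describes a tuning loop that explores a configuration space of approximation knobs and scores each configuration with an application-level QoS evaluator rather than a proxy metric. A minimal sketch of that loop is below; the knob names (`quant_bits`, `prune_ratio`), their value ranges, the QoS/speedup model inside `run_application`, and the 0.9 quality threshold are all illustrative assumptions, not the thesis's actual implementation.

```python
import itertools

# Illustrative knob space: each knob is a pluggable component with a
# discrete set of settings (values here are assumptions for the sketch).
KNOBS = {
    "quant_bits": [32, 16, 8],
    "prune_ratio": [0.0, 0.25, 0.5],
}

def run_application(config):
    """Stand-in for running the real application under `config` and
    measuring downstream QoS and speedup. The formulas below fabricate
    deterministic values purely so the sketch is runnable."""
    qos = 1.0 - 0.004 * (32 - config["quant_bits"]) - 0.2 * config["prune_ratio"]
    speedup = (32 / config["quant_bits"]) * (1 + config["prune_ratio"])
    return qos, speedup

def tune(qos_threshold=0.9):
    """Exhaustively score configurations and keep the fastest one whose
    application-level QoS stays above the threshold."""
    best = None
    for values in itertools.product(*KNOBS.values()):
        config = dict(zip(KNOBS, values))
        qos, speedup = run_application(config)
        if qos >= qos_threshold and (best is None or speedup > best[1]):
            best = (config, speedup, qos)
    return best

config, speedup, qos = tune()
print(config, speedup)
```

A real tuner would replace exhaustive enumeration with a search heuristic and replace the fabricated scoring with actual model execution, but the shape of the loop (configure, run, score with a domain-specific QoS metric, keep the best under a quality constraint) stays the same.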
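The abstract also summarizes LEWIS: task vectors (fine-tuned weights minus base weights) are sparsified per layer, with layerwise activation-norm deltas guiding how much of each layer's task vector survives the merge. A toy sketch of that idea follows; the layer names, the linear mapping from deltas to keep-fractions, and the top-magnitude sparsification rule are assumptions for illustration, not the exact recipe from [3].

```python
def sparsify(vec, keep_frac):
    """Zero all but the top keep_frac fraction of entries by magnitude."""
    k = max(1, round(keep_frac * len(vec)))
    kept = set(sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k])
    return [v if i in kept else 0.0 for i, v in enumerate(vec)]

def lewis_merge(base, finetuned, act_norm_delta):
    """Per-layer merge: base weights plus a sparsified task vector, where
    layers with larger activation-norm deltas keep more of the vector."""
    lo, hi = min(act_norm_delta.values()), max(act_norm_delta.values())
    span = (hi - lo) or 1.0
    merged = {}
    for name, w in base.items():
        # Map this layer's delta into an assumed keep-fraction range [0.1, 0.9].
        frac = 0.1 + 0.8 * (act_norm_delta[name] - lo) / span
        task_vec = [f - b for f, b in zip(finetuned[name], w)]
        merged[name] = [b + t for b, t in zip(w, sparsify(task_vec, frac))]
    return merged

# Toy example: layer1's activations shifted more, so it keeps more entries.
base = {"layer0": [0.0] * 4, "layer1": [0.0] * 4}
ft = {"layer0": [1.0, -0.1, 0.2, 0.05], "layer1": [0.5, 0.4, -0.3, 0.2]}
delta = {"layer0": 0.2, "layer1": 1.0}
merged = lewis_merge(base, ft, delta)
print(merged)
```

The design point the sketch captures is that sparsity is not uniform: layers whose activations changed most under fine-tuning are treated as most task-relevant, so their task-vector entries are preserved preferentially when merging without retraining.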
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)