Dora: QoE-aware hybrid parallelism for distributed edge AI
Jin, Jianli
Permalink
https://hdl.handle.net/2142/132704
Description
Title
Dora: QoE-aware hybrid parallelism for distributed edge AI
Author(s)
Jin, Jianli
Issue Date
2025-12-10
Director of Research (if dissertation) or Advisor (if thesis)
Lai, Fan
Department of Study
Siebel School of Computing and Data Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Edge Computing
Distributed Training
System for Artificial Intelligence
Abstract
With the proliferation of edge AI applications, satisfying user quality-of-experience (QoE) requirements, such as model inference latency, has become a first-class objective, as these models operate in resource-constrained settings and directly interact with users. Yet modern AI models routinely exceed the resource capacity of individual devices, necessitating distributed execution across heterogeneous devices over variable and contention-prone networks. Existing planners for hybrid (e.g., data and pipeline) parallelism largely optimize for throughput or device utilization, overlooking QoE and leading to severe resource inefficiency (e.g., unnecessary energy drain) or QoE violations under runtime dynamics. We present Dora, a framework for QoE-aware hybrid parallelism in distributed edge AI training and inference. Dora jointly optimizes over heterogeneous computation, contention-prone networks, and multi-dimensional QoE objectives via three key mechanisms: (i) a heterogeneity-aware model partitioner that determines and assigns model partitions across devices, forming a compact set of QoE-compliant plans; (ii) a contention-aware network scheduler that further refines these candidate plans by maximizing compute–communication overlap; and (iii) a runtime adapter that adaptively composes multiple plans to maximize global efficiency while respecting overall QoE targets. Across representative edge deployments—including smart homes, traffic analytics, and small edge clusters—Dora achieves 1.1–6.3× faster execution or, alternatively, reduces energy consumption by 21–82%, all while maintaining QoE under runtime dynamics.
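The abstract's notion of a "compact set of QoE-compliant plans" can be illustrated with a deliberately simplified sketch. This is not Dora's actual algorithm; all layer costs, device speeds, power figures, link latency, and the QoE bound below are made-up assumptions. The sketch enumerates two-stage pipeline splits of a layered model across two heterogeneous devices, keeps only plans whose estimated end-to-end latency meets the QoE bound, and picks the most energy-efficient survivor:

```python
# Toy sketch (assumed numbers, not Dora's real planner): pick a pipeline split
# that satisfies a latency QoE bound, then minimize energy among survivors.

LAYER_COST = [4.0, 6.0, 2.0, 8.0]   # per-layer compute cost (work units); assumed
DEVICE_SPEED = [1.0, 2.0]           # work units per ms, one per device; assumed
DEVICE_POWER = [1.0, 3.0]           # relative energy per ms of compute; assumed
LINK_MS = 1.5                       # inter-stage activation transfer latency (ms)
QOE_LATENCY_MS = 17.0               # user-facing end-to-end latency bound (ms)

def plan_metrics(split):
    """Latency and energy of a 2-stage pipeline split after layer index `split`."""
    stage_work = [sum(LAYER_COST[:split]), sum(LAYER_COST[split:])]
    stage_ms = [w / s for w, s in zip(stage_work, DEVICE_SPEED)]
    latency = sum(stage_ms) + LINK_MS                    # one request, sequential
    energy = sum(ms * p for ms, p in zip(stage_ms, DEVICE_POWER))
    return latency, energy

def best_plan():
    """Return (energy, latency, split) of the cheapest QoE-compliant plan."""
    candidates = []
    for split in range(1, len(LAYER_COST)):
        latency, energy = plan_metrics(split)
        if latency <= QOE_LATENCY_MS:                    # QoE-compliant plans only
            candidates.append((energy, latency, split))
    return min(candidates) if candidates else None

if __name__ == "__main__":
    print(best_plan())
```

With these numbers the lowest-latency split (after layer 1) is not the one selected: a slower but QoE-compliant split after layer 2 consumes less energy, which mirrors the abstract's point that optimizing throughput alone wastes energy once the QoE bound is already met.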