Context-aware spatiotemporal reconstruction for loss-resilient video offloading under timing constraints
Li, John
Description
- Title
- Context-aware spatiotemporal reconstruction for loss-resilient video offloading under timing constraints
- Author(s)
- Li, John
- Issue Date
- 2025-07-18
- Director of Research (if dissertation) or Advisor (if thesis)
- Nahrstedt, Klara
- Department of Study
- Electrical and Computer Engineering
- Discipline
- Electrical and Computer Engineering
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- IoT, Video Offloading
- Abstract
- The deployment of IoT devices for video offloading is rapidly expanding, driven by the growing demand for efficient visual data processing at the edge. However, as these IoT video systems scale, strict real-time constraints and limited, variable network bandwidth can undermine the reliability of video transmission. Under such latency budgets and unpredictable network conditions, video frames that arrive late are typically discarded at the application layer. These losses, which often occur even without network-layer drops or corruption, can drastically degrade Quality of Experience (QoE). Conventional video codecs provide little protection against such loss, while loss-resilient techniques such as packet retransmission and Forward Error Correction (FEC) struggle to operate effectively as losses grow more severe, especially under stringent timing constraints and volatile network bandwidth. This thesis presents CASTR, a context-aware spatiotemporal reconstruction framework for loss-resilient, adaptive video offloading. CASTR uses a progressive encoder to transmit the most essential features first, increasing the likelihood that the most important semantic information arrives before its real-time deadline. A convolutional Long Short-Term Memory (ConvLSTM) network then leverages spatiotemporal context from neighboring frames to impute missing features and mitigate the effects of data loss, and a decoder reconstructs the features back into video frames. Experiments show that CASTR degrades more gracefully across a broad spectrum of packet loss scenarios than prior neural baselines and maintains perceptually reasonable video quality even under very severe loss rates (90% and above), demonstrating its potential for robust video offloading in time-sensitive, network-constrained settings.
- Graduation Semester
- 2025-08
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129940
- Copyright and License Information
- Copyright 2025 John Li
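The abstract's pipeline — importance-ordered transmission with deadline-based discard, followed by temporal imputation of the lost pieces — can be illustrated with a minimal sketch. All names here are hypothetical, and a simple neighbor-frame average stands in for the thesis's learned ConvLSTM imputation; it is an illustration of the workflow, not the actual CASTR implementation.

```python
# Hypothetical sketch of a CASTR-style loss-resilient offloading loop.
# Feature chunks are pre-sorted by semantic importance; chunks that would
# arrive past the deadline are dropped at the application layer, and a
# temporal-context imputer (here a plain average of neighboring frames,
# standing in for the learned ConvLSTM) fills the gaps.

def progressive_transmit(chunks, chunk_latency, deadline):
    """Send most-important chunks first; keep only those arriving in time."""
    received, elapsed = {}, 0.0
    for rank, chunk in enumerate(chunks):  # rank 0 = most important
        elapsed += chunk_latency
        if elapsed > deadline:
            break  # late chunks are discarded, not retransmitted
        received[rank] = chunk
    return received

def impute_missing(received, prev_frame, next_frame, n_chunks):
    """Fill lost chunks from temporal neighbors (averaging stands in for
    the ConvLSTM's learned spatiotemporal imputation)."""
    full = []
    for rank in range(n_chunks):
        if rank in received:
            full.append(received[rank])
        else:
            full.append([(a + b) / 2
                         for a, b in zip(prev_frame[rank], next_frame[rank])])
    return full

# Toy example: three 2-value feature chunks, 1 time unit per chunk,
# deadline of 2 units, so the least-important chunk is lost and imputed.
chunks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
prev_frame = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]]
next_frame = [[2.0, 4.0], [4.0, 6.0], [6.0, 8.0]]
received = progressive_transmit(chunks, chunk_latency=1.0, deadline=2.0)
reconstructed = impute_missing(received, prev_frame, next_frame, len(chunks))
```

Because the encoder orders chunks by importance, the deadline cut falls on the least semantically significant features, which is what lets quality degrade gracefully rather than collapse as loss increases.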
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)