Resource-efficient optimizations of 3D vision models for segmentation and detection
Zheng, Hongbo
Permalink
https://hdl.handle.net/2142/129530
Description
Issue Date
2025-04-15
Director of Research (if dissertation) or Advisor (if thesis)
Zhang, Minjia
Department of Study
Siebel School of Computing and Data Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
3D Vision
Resource-Efficient Deep Learning
Volumetric Medical Image Segmentation
In-Place Operation
LiDAR-based 3D Object Detection
Post-Training Quantization (PTQ)
Training-Free Quantization (TFQ)
Mixed-Precision Inference
Abstract
The growing deployment of deep learning models in real-world 3D vision applications, ranging from medical image segmentation to autonomous perception, demands solutions that are not only accurate but also computationally and memory efficient. However, the resource intensity of 3D models presents persistent challenges at both ends of the deployment pipeline: training and inference. This thesis addresses these two critical bottlenecks with a unified objective: to improve the efficiency and scalability of 3D vision systems under real-world constraints.
We begin by examining the shared computational challenges faced by 3D models, including high-dimensional input, constrained batch sizes, prolonged training times, and the need for low-latency, high-throughput inference on edge devices. Using volumetric medical imaging and LiDAR-based 3D object detection as case studies, we identify common structural and performance limitations and develop domain-adaptive optimization techniques.
To mitigate training-time memory overhead, we employ an in-place normalization strategy that replaces conventional normalization layers (e.g., BatchNorm3d, InstanceNorm3d) with memory-efficient in-place variants. This technique reduces activation memory consumption without altering model behavior, enabling larger batch sizes and improved throughput. We validate the approach through theoretical gradient correctness analysis and empirical evaluations on representative 3D medical imaging models, including LHU-Net and 3D Res-UNet, across high-resolution volumetric datasets such as the Brain Tumor Segmentation Challenge (BraTS), the Left Atrial dataset (LA), and the NIH pancreas dataset (CT-82).
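The core idea behind in-place normalization can be illustrated with a minimal NumPy sketch: normalize each channel of a volumetric activation by writing results back into the existing buffer rather than allocating a new tensor. The helper name is hypothetical; the thesis's actual variants operate on PyTorch normalization layers and also preserve gradient correctness in the backward pass, which this sketch does not address.

```python
import numpy as np

def instance_norm3d_inplace(x, eps=1e-5):
    """Per-channel instance normalization of a (C, D, H, W) volume,
    performed in place. The input buffer is reused for the output,
    so no extra activation-sized allocation is made.

    Illustrative sketch only (hypothetical helper, not the thesis code).
    """
    for c in range(x.shape[0]):
        ch = x[c]                       # a view into x, not a copy
        mean = ch.mean()
        std = np.sqrt(ch.var() + eps)
        ch -= mean                      # in-place subtraction
        ch /= std                       # in-place division
    return x                            # same buffer as the input
```

The memory saving comes from the two in-place ufunc operations: a conventional normalization layer would materialize a fresh output tensor of the same size as the input, roughly doubling the activation footprint for that layer.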
For inference-time optimization, we introduce QLiDAR, a training-free and calibration-free quantization framework designed to address the structural heterogeneity of modern 3D LiDAR object detection models. QLiDAR incorporates SmoothConv and fine-grained channel-wise quantization to effectively compress a diverse set of model components, including 1D/2D convolutions (Conv1d/2d), sparse convolutions (SPConv), submanifold convolutions (SubMConv), and multi-layer perceptrons (MLP). By eliminating the need for retraining or calibration data, QLiDAR achieves high compression ratios while maintaining competitive accuracy under mixed-precision settings (W4A8 and W8A8) across standard benchmarks, including the Waymo, nuScenes, and KITTI datasets.
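The fine-grained channel-wise step can be sketched generically: symmetric per-output-channel weight quantization derives each channel's scale from the weights themselves, so no calibration data or retraining is needed. This is a standard technique shown for illustration only; the function names are hypothetical, and SmoothConv and the rest of the QLiDAR pipeline are not represented here.

```python
import numpy as np

def quantize_weights_per_channel(w, n_bits=8):
    """Symmetric per-output-channel weight quantization.

    w: float weights of shape (out_channels, ...).
    Returns integer codes and one scale per output channel.
    Training-free: scales come from the weight magnitudes alone.
    """
    qmax = 2 ** (n_bits - 1) - 1
    flat = w.reshape(w.shape[0], -1)
    scales = np.abs(flat).max(axis=1) / qmax
    scales[scales == 0] = 1.0            # guard against all-zero channels
    q = np.clip(np.round(flat / scales[:, None]), -qmax - 1, qmax)
    return q.astype(np.int8), scales

def dequantize(q, scales, shape):
    """Reconstruct approximate float weights from codes and scales."""
    return (q.astype(np.float32) * scales[:, None]).reshape(shape)
```

Because each output channel gets its own scale, a channel with small weights is not forced onto the coarse grid of a channel with large weights, which is what makes per-channel schemes markedly more accurate than per-tensor ones at 4- and 8-bit precision.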
Collectively, these contributions advance the design of efficient and scalable 3D vision models by addressing critical training and inference inefficiencies. By bridging memory-aware learning with deployment-aware quantization, this thesis lays the groundwork for future 3D vision systems that are not only high-performing but also resource-efficient and deployment-ready.