Building rearticulable models for 3D articulated objects from multi-view RGB videos
Jiang, Wei
Permalink
https://hdl.handle.net/2142/125661
Description
Title
Building rearticulable models for 3D articulated objects from multi-view RGB videos
Author(s)
Jiang, Wei
Issue Date
2024-05-24
Director of Research (if dissertation) or Advisor (if thesis)
Wang, Shenlong
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
3D Articulated Object Understanding
Gaussian Splatting
Neural Rendering
Part Discovery
Abstract
This thesis presents a solution to the novel task of building a rearticulable 3D model from multi-view colored (RGB) videos, enabling rendering of the articulated object at sampled states, discovery of its rigid parts, and reconstruction of its kinematic structure. The solution comprises the Gaussian Articulated Implicit Model (GAIM), a representation for dynamic scenes containing an articulable object, and a relax-and-project-based optimization framework. Evaluated on the Watch-It-Move (WIM) dataset, the proposed solution achieves view synthesis performance on par with other rendering baselines, while effectively discovering rigid parts and properly reconstructing the kinematic model of the observed articulated object.
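The abstract mentions a relax-and-project optimization framework. As a rough illustration only (not the thesis code, whose details are not given here), the general relax-and-project pattern can be sketched on a toy part-assignment problem: a discrete assignment of points to rigid parts is relaxed to soft probabilities, optimized continuously, and then projected back to a hard one-hot assignment. All names and the toy cost matrix below are hypothetical.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def relax_and_project(logits, cost, lr=0.5, steps=200):
    """Hypothetical relax-and-project sketch: minimize the expected
    assignment cost <p, cost> over soft assignments p = softmax(logits),
    then project back to a hard (discrete) assignment."""
    logits = logits.copy()
    for _ in range(steps):
        p = softmax(logits)  # relaxation: soft part-assignment probabilities
        # gradient of sum(p * cost) w.r.t. logits (softmax Jacobian applied to cost)
        grad = p * (cost - (p * cost).sum(axis=-1, keepdims=True))
        logits -= lr * grad  # continuous optimization step
    # projection: commit each point to its most likely part
    return np.argmax(softmax(logits), axis=-1)

rng = np.random.default_rng(0)
cost = rng.random((5, 3))        # toy per-point cost of assigning to each of 3 parts
assignment = relax_and_project(np.zeros((5, 3)), cost)
# for this separable toy objective, the projected assignment matches
# the per-point minimum-cost part
assert np.array_equal(assignment, cost.argmin(axis=-1))
```

In the actual thesis the relaxed variables and the projection step would be tied to the articulated-object model (rigid parts and kinematic structure) rather than this toy cost.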