Files in this item

application/pdf: DENG-DISSERTATION-2020.pdf (41MB), Restricted to U of Illinois
(no description provided)
Title: Vision-based 6D object pose estimation for robot manipulation
Author(s): Deng, Xinke
Director of Research: Bretl, Timothy Wolfe
Doctoral Committee Chair(s): Bretl, Timothy Wolfe
Doctoral Committee Member(s): Do, Minh; Fox, Dieter; Gupta, Saurabh; Hu, Bin
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Keyword(s): State estimation; Computer vision; 6D object pose estimation
Abstract: Vision-based 6D object pose estimation focuses on estimating the 3D translation and 3D orientation of an object with respect to the camera. Accurate 6D object pose estimation plays a crucial role in robotic applications such as manipulation and semantic navigation. In this dissertation, we study the problem of 6D object pose estimation and its application to manipulation. We first introduce PoseRBPF, a Rao-Blackwellized particle filter for tracking 6D object poses. In this framework, each particle samples a 3D translation and estimates the distribution over 3D rotations conditioned on the image bounding box corresponding to the sampled translation. PoseRBPF compares each bounding-box embedding to learned viewpoint embeddings so as to update the distributions over time efficiently. We demonstrate that the tracked distributions capture both the uncertainty arising from object symmetries and the uncertainty in the object pose, using either RGB or RGB-D measurements. We then propose a category-level extension of PoseRBPF that estimates the 6D poses and sizes of unseen objects. In particular, we propose a category-level auto-encoder network for depth measurements so that the feature embeddings are independent of object instances, and we extend the PoseRBPF state to handle objects of different sizes. We evaluate our tracking framework on a category-level pose estimation benchmark and achieve state-of-the-art performance. Finally, we introduce a robot system for self-supervised 6D object pose estimation. Starting from modules trained in simulation, the system labels real-world images with accurate 6D object poses for self-supervised learning. In addition, the robot interacts with objects in the environment, grasping or pushing them to change the object configuration. In this way, the system continuously collects data and improves its pose estimation modules. We show that self-supervised learning improves object segmentation and 6D pose estimation performance, and consequently enables the system to grasp objects more reliably.
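The Rao-Blackwellized update described above — sample the 3D translation with particles, then compute the conditional distribution over discretized 3D rotations by comparing a crop embedding against a codebook of learned viewpoint embeddings — can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the codebook, the `encode_crop` encoder, the motion-noise scale, and the softmax temperature are all hypothetical stand-ins for the learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned components: a codebook with one
# embedding per discretized 3D rotation, and an encoder mapping an image
# crop to the same embedding space (random here for illustration).
N_ROTATIONS, EMBED_DIM, N_PARTICLES = 72, 16, 50
codebook = rng.normal(size=(N_ROTATIONS, EMBED_DIM))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def encode_crop(translation):
    """Placeholder for the auto-encoder: crop the image at the bounding
    box implied by `translation` and embed the crop."""
    z = rng.normal(size=EMBED_DIM)
    return z / np.linalg.norm(z)

def poserbpf_step(translations, weights, motion_noise=0.01):
    """One filter step: propagate translation particles, then compute
    each particle's rotation distribution analytically from embedding
    similarities (the Rao-Blackwellized part), reweight, and resample."""
    translations = translations + rng.normal(scale=motion_noise,
                                             size=translations.shape)
    rotation_dists = np.empty((len(translations), N_ROTATIONS))
    for i, t in enumerate(translations):
        sims = codebook @ encode_crop(t)   # cosine similarity per rotation
        p = np.exp(sims / 0.1)             # softmax over rotations
        rotation_dists[i] = p / p.sum()
        weights[i] *= p.max()              # likelihood of the best viewpoint
    weights = weights / weights.sum()
    # Systematic resampling keeps the particle set focused on likely poses.
    positions = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    uniform = np.full(len(weights), 1.0 / len(weights))
    return translations[idx], uniform, rotation_dists[idx]

translations = rng.normal(size=(N_PARTICLES, 3))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
translations, weights, rot_dists = poserbpf_step(translations, weights)
```

Because the rotation distribution is computed in closed form per particle rather than sampled, far fewer particles are needed than in a plain particle filter over the full 6D pose space.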
Issue Date: 2020-07-14
Rights Information: Copyright 2020 Xinke Deng
Date Available in IDEALS: 2020-10-07
Date Deposited: 2020-08
