Files in this item

File: Sorokin_Alexander.pdf (71MB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Expanding the limits of predictive methods: from supervised learning to novel sensors and massive human supervision
Author(s): Sorokin, Alexander
Director of Research: Forsyth, David A.
Doctoral Committee Member(s): Roth, Dan; Hoiem, Derek W.; Hockenmaier, Julia C.; Bradski, Gary R.
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): crowdsourcing
computer vision
robotics
semi-supervised learning
object recognition
human activity recognition
sensor networks
Abstract: The mission of machine learning is to empower computers to make generalizations from available data, both labeled and unlabeled. The more labeled data we have, the better predictions we make, but labeled data usually comes at a cost and should be used sparingly. In some cases, the nature of a prediction problem can be changed by using a different sensor modality or by obtaining a different kind of annotation. In this dissertation we first present methods to enhance predictive ability by improving the use of existing data: by constructing feature spaces for human activity recognition and by developing semi-supervised methods for object recognition. We then develop methods for collecting, storing and visualizing information about activity in an indoor office environment. By using a dense array of simple motion sensors, we can track people in the office space while preserving reasonable expectations of privacy. We develop methods for efficient access to data annotation services via crowdsourcing, and we develop tools for formalizing interactions in the domain of computer vision. By designing a general-purpose toolkit, we present a computational abstraction of otherwise undefined human abilities. To ensure high quality of crowdsourced annotations, we develop a programmatic gold framework. By automatically generating gold-standard data for crowdsourced tasks, we can present clear expectations to the workers, provide in-task training and explicitly measure worker accuracy. Crowdsourced annotations present an opportunity to reformulate what an AI agent should be able to do. An indoor robot can safely operate in an environment with unknown objects. To interact with an object, however, it must have a detailed model of that object: a semantic label, a visual model for recognition and a geometry model for grasp and manipulation planning. We develop a robot supervision framework in which crowdsourced on-demand annotations allow a robot to collect the necessary information about unseen objects, build object models and proceed to manipulate these previously unseen objects.
Issue Date: 2012-02-06
Genre: thesis
URI: http://hdl.handle.net/2142/29722
Rights Information: Copyright 2011 Alexander Sorokin
Date Available in IDEALS: 2012-02-06
Date Deposited: 2011-12

