Files in this item:
Kaijian_Liu.pdf (application/pdf, 15MB), no description provided

Title: Crowdsourcing construction activity analysis from jobsite video streams
Author(s): Liu, Kaijian
Advisor(s): Golparvar-Fard, Mani
Department / Program: Civil & Environmental Engineering
Discipline: Civil Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Construction Productivity
Activity Analysis
Workface Assessment
Video-based Monitoring
Abstract: The advent of affordable cameras is reshaping the way owners and contractors document ongoing construction operations, providing large collections of jobsite video streams for assessing on-site activities. To facilitate the assessment of these large video collections, recent studies have leveraged computer vision algorithms for construction activity analysis and automated workface assessment. Despite promising results, the ability of such algorithms to understand human activities from videos is still rather limited. The main gaps in knowledge are: 1) the lack of comprehensive, ground-truth-annotated datasets covering all construction activities, and 2) the absence of methods that can handle the high degree of intra-class variability among activities and the visual similarity among non-direct work categories. To address the need for reliable workface assessment, and to facilitate the development of computer vision algorithms for automatic activity analysis, this thesis proposes conducting video-based workface assessment in the form of a crowdsourcing task on a massive online marketplace such as Amazon Mechanical Turk (AMT). The presented method can attract hundreds of human annotators from around the world within seconds, who use the compositional worker-activity-posture-tool structure to analyze videos. Today, the human ability to interpret video content still outperforms vision-based algorithms. Thus, it is hypothesized that with assistance from crowd intelligence (non-experts), together with automatic detection and tracking algorithms, reliable workface assessment results can be quickly collected from jobsite video streams.
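The worker-activity-posture-tool structure described above can be sketched as a simple annotation record, one per worker per keyframe. The field names and example values below are illustrative assumptions, not the thesis's actual schema:

```python
from dataclasses import dataclass

@dataclass
class WorkfaceAnnotation:
    """One crowdsourced label for a worker in a single keyframe.

    Field names and example values are illustrative; the thesis
    defines its own compositional taxonomy.
    """
    frame: int      # keyframe index in the video stream
    worker_id: int  # which tracked worker the label applies to
    activity: str   # e.g. "direct work", "indirect work", "idle"
    posture: str    # e.g. "standing", "bending", "kneeling"
    tool: str       # e.g. "trowel", "hammer", "none"

# example record an annotator might produce for one keyframe
label = WorkfaceAnnotation(frame=120, worker_id=3,
                           activity="direct work",
                           posture="bending", tool="trowel")
print(label.activity)  # direct work
```

Decomposing each label this way lets annotation time and accuracy be measured separately at each level of the structure, as the experiments below report.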
To validate this hypothesis, a web-based workface assessment tool is developed that supports 1) crowdsourcing of video-based activity analysis tasks on AMT, calling on human annotators' intelligence to interpret a sparse set of keyframes; 2) detection and tracking algorithms that automatically generate workface assessment results for the remaining non-key frames based on the sparse set of user-annotated keyframes; and 3) intuitive interfaces for 2D localization of construction resources, presentation of the compositional taxonomy of construction worker activities, visualization of activity analysis results, and quality control strategies. Six exhaustive experiments are conducted to examine different annotation methods and frequencies, different video lengths for constructing HITs (Human Intelligence Tasks), differences between expert and non-expert annotators, differences between linear and detection-based extrapolation methods, and the optimal cross-validation strategy for improving workface assessment accuracy. The experimental results are presented and discussed in terms of annotation time and accuracy at each level of the compositional structure. Our experiments, with an overall accuracy of 85% for non-expert annotators, testify that the quality of work by non-expert annotators on AMT is as reliable as that of experts in providing accurate and complete workface assessment. The introduced method has the potential to minimize the time needed for workface assessment and allows professionals to focus their time on the more important tasks of root-cause analysis and investigating alternatives for performance improvement.
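The linear extrapolation between user-annotated keyframes mentioned above can be sketched as follows; the (x, y, w, h) box format and the frame numbering are assumptions for illustration, not the thesis's exact implementation:

```python
def interpolate_boxes(key_a, key_b, frame_a, frame_b, frame):
    """Linearly interpolate a 2D bounding box (x, y, w, h) between two
    user-annotated keyframes, so intermediate non-key frames receive a
    location estimate without any extra annotation effort.
    """
    t = (frame - frame_a) / (frame_b - frame_a)  # fraction of the way from a to b
    return tuple(a + t * (b - a) for a, b in zip(key_a, key_b))

# a worker annotated at frames 0 and 10; estimate the box at frame 5
box = interpolate_boxes((100, 200, 40, 80), (140, 200, 40, 80), 0, 10, 5)
print(box)  # (120.0, 200.0, 40.0, 80.0)
```

A detection-based extrapolation would instead run a person detector on each non-key frame and snap the estimate to the nearest detection; the experiments compare these two strategies.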
Issue Date: 2014-09-16
Rights Information: Copyright 2014 Kaijian Liu
Date Available in IDEALS: 2014-09-16
Date Deposited: 2014-08
