Files in this item

HOSKERE-DISSERTATION-2020.pdf (7 MB, application/pdf), Restricted Access
Title: Developing autonomy in structural inspections through computer vision and graphics
Author(s): Hoskere, Vedhus A
Director of Research: Spencer Jr., Billie F.
Doctoral Committee Chair(s): Spencer Jr., Billie F.
Doctoral Committee Member(s): Golparvar-Fard, Mani; Lazebnik, Svetlana; Chowdhary, Girish; Smith, Matthew D.
Department / Program: Civil & Environmental Eng
Discipline: Civil Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): deep learning, structural inspections, structural health monitoring, computer vision, computer graphics, unmanned aerial vehicles
Abstract: Visual inspection is the most common means of assessing the condition of civil infrastructure in the United States, but it can be exceedingly laborious, time-consuming, subjective, and even dangerous. Inspections are typically performed by human experts who collect visual or other non-destructive data and combine them with relevant decision-making criteria. Computer vision techniques, in conjunction with data acquisition through remote cameras and unmanned aerial vehicles (UAVs), offer promising non-contact solutions for automating civil infrastructure condition assessment. The ultimate goal of such a system is to automatically collect image data and robustly convert it into actionable information. Critical challenges exist in realizing an automated visual inspection system for civil infrastructure. While image data provide high spatial density, extracting information from images that is relevant to a specific inspection task, such as the presence and location of damage, is very challenging. Compounding this problem, multiple types of information must be extracted from every image: for example, most inspection guidelines require the identification of multiple damage types and call for evaluating the significance of damage based on the material on which it occurs. Additionally, methods need to be developed to extract high-fidelity information from images. When conducting visual inspections, human inspectors effortlessly retrieve high-fidelity descriptions such as the precise shape of defects (e.g., cracks), differentiate damage from damage-like patterns (e.g., shadows, cables), and identify the structural components on which the damage occurs (e.g., walls, beams, columns). Finally, the extracted information must be used to make objective decisions about the condition of the structure.
This dissertation develops an automated inspection framework with four main parts: (i) the development of datasets to train supervised deep learning methods; (ii) the acquisition of data from damaged structures using UAVs; (iii) the application of trained deep networks to extract actionable information from the acquired data, such as the location and types of damage and the materials and types of components on which the damage occurs; and (iv) the visualization of results for review by inspectors to enable decision-making. Key challenges in implementing this framework are identified and addressed.
Issue Date: 2020-11-19
Rights Information: Copyright 2020 Vedhus Hoskere
Date Available in IDEALS: 2021-03-05
Date Deposited: 2020-12
