Files in this item

SHETTY-THESIS-2017.pdf (PDF, 9 MB)
Title: GPS-LiDAR sensor fusion aided by 3D city models for UAVs
Author(s): Shetty, Akshay Prabhakar
Advisor(s): Gao, Grace Xingxin
Department / Program: Aerospace Engineering
Discipline: Aerospace Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Global positioning system (GPS)
Light detection and ranging (LiDAR)
Unmanned aerial vehicles
Sensor fusion
Outdoor navigation
Light detection and ranging (LiDAR) covariance
Iterative closest point
Abstract: Recently, there has been an increase in outdoor applications for small-scale Unmanned Aerial Vehicles (UAVs), such as 3D modelling, filming, surveillance, and search and rescue. To perform these tasks safely and reliably, a continuous and accurate estimate of the UAV's position is needed. Global Positioning System (GPS) receivers are commonly used for this purpose. However, navigating in urban areas using only GPS is challenging, since satellite signals may be reflected or blocked by buildings, resulting in multipath errors or non-line-of-sight (NLOS) conditions. In such cases, additional on-board sensors are desirable to improve the global positioning of the UAV. Light Detection and Ranging (LiDAR), one such sensor, provides a real-time point cloud of its surroundings. In a dense urban environment, unlike in an open-sky environment, LiDAR detects a large number of features on surrounding structures such as buildings. This characteristic of LiDAR complements GPS, which is accurate in open-sky environments but may suffer large errors in urban areas. To fuse GPS and LiDAR measurements, Kalman filtering and its variants are commonly used. However, it is important, yet challenging, to accurately characterize the error covariance of the sensor measurements. In this thesis, we propose a GPS-LiDAR fusion technique with a novel method for efficiently modelling the error covariance of position measurements based on LiDAR point clouds. For GPS measurements, we eliminate NLOS satellites and model the covariance based on the measurements' signal-to-noise ratio (SNR) values. We use the LiDAR point clouds in two ways: to estimate incremental motion by matching consecutive point clouds, and to estimate global pose by matching against a 3D city model. We aim to characterize the error covariance matrices in both cases as a function of the distribution of features in the LiDAR point cloud.
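The SNR-weighted GPS covariance described above can be illustrated with a sketch. The exponential SNR-to-variance mapping and its coefficients here are common illustrative choices, not values from the thesis, and the `double_difference_covariance` helper is a hypothetical name; the key structural point, also noted in the thesis, is that differencing against a reference satellite makes the covariance matrix dense rather than diagonal:

```python
import numpy as np

def pseudorange_variance(snr_dbhz, a=10.0, b=150.0**2):
    """Hypothetical SNR-based variance model (m^2): lower SNR -> larger
    variance.  a and b are illustrative tuning constants, not thesis values."""
    return a + b * 10.0 ** (-snr_dbhz / 10.0)

def double_difference_covariance(snr_rover, snr_ref, ref_sat=0):
    """Covariance of double-difference pseudoranges formed against satellite
    `ref_sat`.  Each single difference combines a rover and a reference-receiver
    measurement; the common reference satellite correlates the double
    differences, so R_dd = D R_sd D^T is dense, not diagonal."""
    # Single-difference variances: rover and reference noise add.
    var = np.array([pseudorange_variance(s1) + pseudorange_variance(s2)
                    for s1, s2 in zip(snr_rover, snr_ref)])
    n = len(var)
    idx = [i for i in range(n) if i != ref_sat]
    # Differencing matrix: row i computes sd[idx[i]] - sd[ref_sat].
    D = np.zeros((n - 1, n))
    D[range(n - 1), idx] = 1.0
    D[:, ref_sat] = -1.0
    return D @ np.diag(var) @ D.T

# Example: three satellites, rover SNRs degraded relative to the reference.
R = double_difference_covariance([45.0, 40.0, 35.0], [50.0, 50.0, 50.0])
```

The off-diagonal entries of `R` all equal the single-difference variance of the reference satellite, which is why a filter consuming double differences should use the full matrix rather than assuming independent measurements.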
To estimate the incremental motion between two consecutive LiDAR point clouds, we use the Iterative Closest Point (ICP) algorithm. We perform simulations in different environments to showcase the dependence of ICP on features in the point cloud. While navigating in urban areas, we expect the LiDAR to detect structured objects, such as buildings, which are primarily composed of surfaces and edges. Thus, we develop an efficient way of modelling the error covariance in the estimated incremental position based on each surface and edge feature point in the point cloud. A surface point helps to estimate motion of the LiDAR perpendicular to the surface, while an edge point helps to estimate motion of the LiDAR perpendicular to the edge. We treat each feature point independently and combine their individual error covariances to obtain a total error covariance ellipsoid for the estimated incremental position. For our 3D city model, we use elevation data of the State of Illinois available online and combine it with building information extracted from OpenStreetMap, a crowd-sourced mapping platform. We again use the ICP algorithm to match the LiDAR point cloud with our 3D city model, which provides an estimate of the UAV's global pose. We also use the 3D city model to determine and eliminate NLOS GPS satellites. We use the remaining pseudorange measurements from the on-board GPS receiver and a stationary reference receiver to create a vector of double-difference measurements. We create a covariance matrix for the GPS double-difference measurement vector based on the SNR of the individual pseudorange measurements. Finally, all the above measurements and error covariance matrices are provided as inputs to an Unscented Kalman Filter (UKF). The states of the filter include the globally referenced pose of the UAV. Before implementation, we perform an observability analysis for our filter.
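The per-feature covariance combination above can be sketched in information form. This is an illustrative reading of the abstract, not the thesis's exact formulation: a surface point with unit normal n constrains translation along n (information contribution n nᵀ), an edge point with unit direction d constrains translation in the plane perpendicular to d (contribution I − d dᵀ), and summing these contributions and inverting yields the total position-error ellipsoid. The function name and the per-point noise `sigma` are assumptions for the sketch:

```python
import numpy as np

def combined_position_covariance(surface_normals, edge_directions, sigma=0.03):
    """Sketch: fuse per-feature-point constraints into a 3x3 position covariance.

    Each surface point (unit normal n) constrains translation along n; each
    edge point (unit direction d) constrains translation perpendicular to d.
    sigma is a hypothetical per-point range noise in metres.
    """
    info = np.zeros((3, 3))
    for n in surface_normals:
        n = n / np.linalg.norm(n)
        info += np.outer(n, n) / sigma**2          # constrains motion along n
    for d in edge_directions:
        d = d / np.linalg.norm(d)
        info += (np.eye(3) - np.outer(d, d)) / sigma**2  # constrains plane perp. to d
    # Pseudo-inverse guards against unconstrained directions, e.g. a single
    # wall leaves motion parallel to that wall unobservable.
    return np.linalg.pinv(info)

# Example: one wall facing +x and one vertical edge.
cov = combined_position_covariance(
    surface_normals=[np.array([1.0, 0.0, 0.0])],
    edge_directions=[np.array([0.0, 0.0, 1.0])])
```

In this example x is constrained by both features and y only by the edge, so the ellipsoid is tighter along x than y, while the vertical direction remains unobservable from these two features alone, mirroring the dependence of ICP accuracy on feature distribution that the thesis demonstrates.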
To validate our algorithm, we conduct UAV experiments in GPS-challenged urban environments on the University of Illinois at Urbana-Champaign campus. We observe that our model for the covariance ellipsoid from on-board LiDAR point clouds accurately represents the position errors and improves the filter output. We demonstrate a clear improvement in the UAV's global pose estimates using the proposed sensor fusion technique.
Issue Date: 2017-04-27
Rights Information: Copyright 2017 Akshay Shetty
Date Available in IDEALS: 2017-08-10
Date Deposited: 2017-05
