Files in this item

File: JIN-THESIS-2020.pdf (858 kB), Restricted Access
Description: (no description provided)
Format: application/pdf

Description

Title:Semi-supervised universal YOLOv3-SPP with GIoU loss for autonomous driving object detection under sunny and foggy weather
Author(s):Jin, Zhijie
Advisor(s):Chen, Deming
Department / Program:Electrical & Computer Eng
Discipline:Electrical & Computer Engr
Degree Granting Institution:University of Illinois at Urbana-Champaign
Degree:M.S.
Genre:Thesis
Subject(s):Object detection
Universal Model
Autonomous driving
Semi-supervised model
Abstract:Object detection for autonomous driving has made huge progress in recent years. However, detecting objects in foggy weather remains a challenging problem. In this thesis, we first propose a two-stage pipeline for foggy-weather object detection. The two-stage pipeline removes fog from images using a Generative Adversarial Network (GAN) based model called DehazeGAN; an object detection step using GIoU YOLOv3-SPP is then performed on the defogged images. Following the two-stage method, we introduce a supervised universal model that handles object detection in both sunny and foggy weather. The supervised universal model is built with a universal adapter that consists of a "feature map"-based attention module and a weather-based attention module. The "feature map"-based attention module has a set of parallel Squeeze-and-Excitation (SE) adapters, where each SE adapter extracts different features from each channel. The weather-based attention module examines the input image and assigns weights to each SE adapter, so that the universal adapter can characterize the weather of the input image. To evaluate the performance of the supervised universal model, we established a new benchmark called CPR-A containing the following datasets: COCO, Pascal VOC, and annotated RESIDE-β. RESIDE-β contains only foggy images captured from the real world, while COCO and Pascal VOC are general object detection datasets that contain mostly clear images. We then take advantage of unannotated images in our dataset by proposing a semi-supervised universal object detection pipeline. The performance of this semi-supervised universal model is tested on CPR-U, which is constructed from CPR-A and unannotated RESIDE-β. Experimentally, the trained two-stage pipeline outperforms GIoU YOLOv3-SPP by 0.17 mAP. The supervised universal model trained on CPR-A outperforms single weather/domain models (GIoU YOLOv3-SPP and the proposed two-stage model). The semi-supervised universal model further improves mAP from 0.633 to 0.654.
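The universal adapter described in the abstract (parallel SE adapters blended by weather-based attention weights) can be sketched as follows. This is an illustrative NumPy sketch, not the thesis's actual implementation: the weight shapes, the two-adapter setup (one per weather condition), and the fixed attention scores are all assumptions for demonstration.

```python
import numpy as np

def se_adapter(feat, w_down, w_up):
    """One Squeeze-and-Excitation adapter: reweights the channels of a (C, H, W) feature map."""
    squeezed = feat.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    hidden = np.maximum(w_down @ squeezed, 0.0)    # excitation FC 1 + ReLU (bottleneck)
    gate = 1.0 / (1.0 + np.exp(-(w_up @ hidden)))  # excitation FC 2 + sigmoid -> per-channel gate
    return feat * gate[:, None, None]              # rescale each channel

def universal_adapter(feat, adapters, weather_weights):
    """Blend parallel SE adapters using scores from a weather-based attention module."""
    out = np.zeros_like(feat)
    for (w_down, w_up), alpha in zip(adapters, weather_weights):
        out += alpha * se_adapter(feat, w_down, w_up)
    return out

rng = np.random.default_rng(0)
feat = rng.random((8, 4, 4))  # toy feature map: 8 channels, 4x4 spatial
# two SE adapters, e.g. one specialized for sunny and one for foggy features
adapters = [(rng.standard_normal((2, 8)), rng.standard_normal((8, 2)))
            for _ in range(2)]
weights = [0.8, 0.2]  # hypothetical weather-attention output for a mostly sunny image
out = universal_adapter(feat, adapters, weights)
```

In the thesis's formulation the weights come from a learned weather-based attention module applied to the input image; here they are hard-coded only to show how the parallel adapters are combined into a single weather-aware output.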
Issue Date:2020-05-14
Type:Thesis
URI:http://hdl.handle.net/2142/108236
Rights Information:Copyright 2020 Zhijie Jin
Date Available in IDEALS:2020-08-27
Date Deposited:2020-05

