Files in this item
|(no description provided)||MPEG-4 video|
|Title:||Real-time visualization of end-to-end learned single camera based autonomous rover|
Our visualization aids in understanding a vision-based, end-to-end imitation-learning autonomous rover that uses a single camera as its only perception input.
The rover is modeled on an end-to-end imitation-learning approach, which currently reaches Level 3 (L3) autonomy in the autonomous driving industry; few companies have achieved the higher Level 4 (L4) with end-to-end training.
This visualization helps us understand what knowledge is actually learned inside this “black box,” enabling developers to reinforce poorly handled situations through hard-sample mining. By visualizing the intermediate feature maps of the deep model, we found that the convolutional kernels have learned substantial obstacle-detection and image-segmentation capabilities, even though we never trained for those tasks explicitly. Beyond helping developers thoroughly analyze the robustness of deep models, this visualization could also accelerate other deep learning applications: end-to-end labels are relatively easy to obtain, whereas data for tasks such as image segmentation and object detection require extensive hand labeling. Based on the visualization, we believe an end-to-end model has strong potential to be fine-tuned into such a specialized model.
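The feature-map inspection described above can be sketched with forward hooks. This is a minimal illustration, not the authors' actual model or code: the source does not name a framework, so PyTorch, the tiny network, and all layer names here are assumptions.

```python
# Hypothetical sketch: capture intermediate feature maps of an
# end-to-end driving-style CNN using PyTorch forward hooks.
# TinyDrivingNet is a toy stand-in, NOT the rover's real model.
import torch
import torch.nn as nn


class TinyDrivingNet(nn.Module):
    """Toy single-camera end-to-end model: image in, steering value out."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=5, stride=2)
        self.conv2 = nn.Conv2d(8, 16, kernel_size=3, stride=2)
        self.head = nn.Linear(16, 1)  # predicted steering command

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = x.mean(dim=(2, 3))  # global average pool over spatial dims
        return self.head(x)


def capture_feature_maps(model, image):
    """Run one forward pass, recording every Conv2d layer's output."""
    feature_maps = {}
    hooks = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            # Default-arg trick binds the current layer name to the hook.
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, name=name:
                    feature_maps.__setitem__(name, out.detach())))
    with torch.no_grad():
        model(image)
    for h in hooks:
        h.remove()  # always clean up hooks after use
    return feature_maps


model = TinyDrivingNet().eval()
frame = torch.rand(1, 3, 64, 64)  # stand-in for one camera frame
maps = capture_feature_maps(model, frame)
for name, fmap in maps.items():
    print(name, tuple(fmap.shape))
```

Each captured tensor has shape `(batch, channels, height, width)`; visualizing individual channels as grayscale images is what reveals the emergent obstacle-detection and segmentation behavior the abstract describes.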
|Rights Information:||Copyright 2018 Alvin Sun|
|Date Available in IDEALS:||2019-02-18|