Files in this item:
SP21-ECE499-Thesis-Shin, Kazuki.pdf (application/pdf, 5MB), Restricted to U of Illinois
Title:Interpretable Multi-Pedestrian Trajectory Prediction Using Social GAN and Social GCNN
Author(s):Shin, Kazuki
Degree:B.S. (bachelor's)
Subject(s):trajectory prediction
deep neural networks
Abstract:Multi-pedestrian trajectory prediction is a challenging problem with a wide variety of real-world applications, ranging from crowd navigation to self-driving cars. Although much research has been done in this field, most methods focus on qualitative results that capture the interaction. As a consequence, the interpretability of these prediction algorithms is not well studied. Interpretability lets us inspect the dynamics between input features and output predictions. The black-box property of neural networks makes models such as generative adversarial networks (GANs) and graph convolutional neural networks (GCNNs) hard to visualize and difficult to understand internally. This thesis presents explanations alongside the predictions as a way to improve the transparency of the trajectory predictions from prior works. The contribution is twofold: (1) based on the original Social GAN training model, we explore the interpretability of the latent code, and (2) we improve the prediction performance of Social-STGCNN by integrating physical intuition and attention-based mechanisms. Our parametric study of these physical intuitions shows that including both the velocity and position of neighboring pedestrians in the attention mechanism improves model performance. Furthermore, we demonstrate that single attributes of multi-pedestrian trajectories can be explored without affecting other attributes through latent space disentanglement. Through this technique, various pedestrian behaviors can be identified and controlled by finding meaningful representations in the manifolds. We evaluate our approach on the ETH/UCY pedestrian datasets using the average displacement error (ADE) and final displacement error (FDE) metrics. The results show social interactions that are intuitive, helping explain why exactly these algorithms make the decisions they do.
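The ADE and FDE metrics named in the abstract have standard definitions in the trajectory-prediction literature: ADE is the mean Euclidean distance between predicted and ground-truth positions over all timesteps, and FDE is that distance at the final timestep only. A minimal NumPy sketch of both (the function name and array shapes are illustrative assumptions, not code from the thesis):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement error.

    pred, gt: arrays of shape (num_peds, timesteps, 2) holding
    predicted and ground-truth (x, y) positions.
    """
    # Per-step Euclidean distance between prediction and ground truth.
    dist = np.linalg.norm(pred - gt, axis=-1)  # shape: (num_peds, timesteps)
    ade = dist.mean()                          # average over pedestrians and steps
    fde = dist[:, -1].mean()                   # average over pedestrians, last step only
    return ade, fde
```

For example, a single pedestrian predicted 5 m off at the first step and exactly right at the second step yields ADE = 2.5 and FDE = 0.0.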
Issue Date:2021-05
Genre:Dissertation / Thesis
Date Available in IDEALS:2021-08-11
