Files in this item

LI-DISSERTATION-2020.pdf (application/pdf, 9 MB)
Title: Knowledge transfer in vision tasks with incomplete data
Author(s): Li, Zhizhong
Director of Research: Hoiem, Derek
Doctoral Committee Chair(s): Hoiem, Derek
Doctoral Committee Member(s): Lazebnik, Svetlana; Schwing, Alexander G; Luo, Linjie
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Knowledge transfer; incomplete data; transfer learning; continual learning; deep learning; computer vision
Abstract: In many machine learning applications, some assumptions are so prevalent as to be left unwritten: all necessary data are available throughout the training process, the training and test data are independent and identically distributed (i.i.d.), and the dataset sampling sufficiently represents the test data in the model's usage scenario. Transfer learning methods can help when some of these assumptions break in real life, but they still often assume that the data from which old and new knowledge can be learned remain available at all times. In practice, necessary data, or aspects of them, can become inaccessible due to incomplete knowledge of test scenarios, privacy or legal concerns, protection of business leverage, evolving goals, etc. In this thesis, we address three transfer learning scenarios in neural networks that regularly occur in practice but differ from both standard i.i.d. assumptions and common transfer learning data availability assumptions. First, when transferring knowledge from previous tasks whose training data is no longer available, we propose a method to extend and fine-tune the neural network to incorporate new classifiers while retaining the performance of the existing classifiers. Second, for unsupervised domain adaptation, where target domain annotations are unavailable, we propose a method to transfer models to the unsupervised target domain more effectively by guiding them with a common auxiliary task whose ground truth can be obtained for free or is already annotated. Finally, we show that, when test data are not i.i.d. with training data, classifiers are prone to confident but wrong predictions. In practical scenarios where the test data distribution is unknown before the model is deployed, we explore ideas from several research fields to reduce confident errors. We observe that calibrated ensembles are the most effective, followed by single models calibrated using temperature scaling.
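The abstract's final finding refers to temperature scaling, which rescales a trained classifier's logits by a single scalar T chosen to minimize negative log-likelihood on held-out validation data. As a minimal illustration only (not the dissertation's implementation; the grid-search fit and variable names here are assumptions), a NumPy sketch:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # negative log-likelihood of the true labels at temperature T
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.25, 10.0, 400)):
    # pick the T that minimizes validation NLL; T > 1 softens
    # overconfident predictions, T < 1 sharpens underconfident ones
    return min(grid, key=lambda T: nll(logits, labels, T))
```

On validation logits from an overconfident model, the fitted temperature typically comes out greater than 1, and applying `softmax(logits / T)` at test time yields better-calibrated probabilities without changing the predicted class ranking.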
Issue Date: 2020-05-04
Rights Information: (c) 2020 Zhizhong Li
Date Available in IDEALS: 2020-08-26
Date Deposited: 2020-05
