Files in this item

YANG-THESIS-2019.pdf (application/pdf, 554 kB)

Title: Image captioning using compositional sentiments
Author(s): Yang, Yi
Advisor(s): Hasegawa-Johnson, Mark
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Image captioning; sentiment
Abstract: This thesis presents a method to generate emotional captions of images. An adequate caption should precisely describe the contents of an image. While humans can readily identify the most emotionally salient aspects of an image, many captioning models have difficulty detecting and generating these non-factual aspects. This is caused by a lack of sentiment information in the caption dataset. We address this issue by preprocessing the text captions in an image captioning dataset with a sentiment analyzer to determine sentiment scores for all images in the training dataset. The model trained on this dataset is able to generate captions that communicate sentiment effectively, without requiring human judges to label the sentiment of the training images. The model learns the contents of training images, along with embedded word and sentence sentiments. Compared with the model without sentiment, it has better text captioning performance on BLEU-2, which improved from 17.15 to 18.25, and on CIDEr, which improved from 45.21 to 45.68. Automatic sentiment classification of generated captions matches the target sentiment specified to the captioning system, with accuracy reaching 77.30%, 66.25%, and 27.05% on negative, neutral, and positive sentiments, respectively.
Issue Date:2019-12-12
Rights Information:Copyright 2019 Yi Yang
Date Available in IDEALS:2020-03-02
Date Deposited:2019-12
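The preprocessing step the abstract describes, running a sentiment analyzer over each image's reference captions to derive a sentiment label without human annotation, can be sketched as follows. This is a minimal illustrative stand-in: the lexicon, thresholds, and majority-vote rule here are assumptions, since the record does not specify which sentiment analyzer or aggregation the thesis actually uses.

```python
# Toy sketch of caption preprocessing: score each caption's sentiment,
# then label the image by majority vote over its reference captions.
# The lexicon and thresholds are illustrative placeholders, not the
# analyzer used in the thesis.
from collections import Counter

# Hypothetical word-polarity lexicon (a real analyzer would be far larger).
LEXICON = {"lovely": 1.0, "happy": 1.0, "beautiful": 1.0,
           "sad": -1.0, "gloomy": -1.0, "broken": -1.0}

def caption_sentiment(caption, pos_thresh=0.1, neg_thresh=-0.1):
    """Score one caption by mean word polarity, then bucket it."""
    words = caption.lower().split()
    score = sum(LEXICON.get(w, 0.0) for w in words) / max(len(words), 1)
    if score > pos_thresh:
        return "positive"
    if score < neg_thresh:
        return "negative"
    return "neutral"

def image_sentiment(captions):
    """Label an image by the majority sentiment of its reference captions."""
    labels = Counter(caption_sentiment(c) for c in captions)
    return labels.most_common(1)[0][0]
```

With labels produced this way for every training image, a captioning model can be conditioned on a target sentiment at generation time, which is what the reported per-sentiment classification accuracies evaluate.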
