Files in this item

File: GIRLEA-DISSERTATION-2017.pdf (3MB), Restricted to U of Illinois
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Deception detection in dialogues
Author(s): Girlea, Codruta Liliana
Director of Research: Amir, Eyal; Girju, Roxana
Doctoral Committee Chair(s): Amir, Eyal
Doctoral Committee Member(s): Roth, Dan; Hockenmaier, Julia; Shahaf, Dafna
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Natural language dialogues; Beliefs over beliefs; Psycholinguistics; Deception detection; Dynamic Bayesian networks
Abstract: In the social media era, it is commonplace to engage in written conversations; people even form connections across large distances entirely in writing. However, human communication is in large part non-verbal, so written communication makes it easier for people to hide harmful intentions. At the same time, people can now get in touch with more people than ever before, which puts vulnerable groups at higher risk of malevolent interactions such as bullying, trolling, or predatory behavior. These trends have most recently led to waves of fake news and a growing industry of both deceit creators and deceit detectors. There is now an urgent need both for theory that explains deception and for applications that detect it automatically. In this thesis I address this need with a novel application that learns from examples and reliably detects deception in natural-language dialogues. I formally define the problem of deception detection and identify several domains where it is useful. I introduce and evaluate new psycholinguistic features of deception in written dialogues on two datasets. My results shed light on the connection between language, deception, and perception; they also underline the challenges of assessing perceptions from written text. To learn to detect deception automatically, I first introduce an expressive logical model and then present a probabilistic model that simplifies the first and is learnable from labeled examples. I introduce a belief-over-belief formalization based on Kripke semantics and the situation calculus. I use an observation model to describe how utterances are produced from nested beliefs and intentions, which allows me to make inferences about these beliefs and intentions given utterances without explicitly representing perlocutions. The agents’ belief states are filtered with the observed utterances, resulting in an updated Kripke structure (a minimal illustrative sketch of this filtering step follows the record below). I then translate the formalization into a practical system that learns from a small dataset and performs well with only minimal structural background knowledge, given as a relational dynamic Bayesian network.
Issue Date: 2017-07-12
Type: Text
URI: http://hdl.handle.net/2142/99091
Rights Information: Copyright 2017 Codruta Girlea
Date Available in IDEALS: 2018-03-02
Date Deposited: 2017-08
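
Note: The abstract describes filtering agents' nested belief states with observed utterances through an observation model, with the learned model expressed as a relational dynamic Bayesian network. The following is a minimal, hypothetical sketch of that filtering idea only, not the dissertation's actual system: the hidden state (a belief about a proposition plus a deceptive intention), the transition and observation probabilities, and the utterance labels assert_p / assert_not_p are all illustrative assumptions.

    # Minimal illustrative sketch: Bayesian filtering of a speaker's hidden
    # (belief, intention) state from observed utterances. All quantities are toy values.
    from itertools import product

    # Hypothetical hidden state: (speaker_believes_p, speaker_intends_to_deceive)
    STATES = list(product([True, False], repeat=2))

    def transition(prev, curr):
        """Toy persistence model: the hidden state tends to stay the same between turns."""
        return 0.9 if prev == curr else 0.1 / (len(STATES) - 1)

    def observation(state, utterance):
        """Toy observation model P(utterance | belief, intention): a deceptive speaker
        who believes p is more likely to assert not-p, and vice versa."""
        believes_p, deceives = state
        asserted_p = (utterance == "assert_p")
        truthful = (asserted_p == believes_p)
        if deceives:
            return 0.2 if truthful else 0.8
        return 0.9 if truthful else 0.1

    def filter_beliefs(utterances):
        """Update a distribution over hidden states after each observed utterance."""
        belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform prior
        for u in utterances:
            # predict step: push the previous posterior through the transition model
            predicted = {
                s: sum(transition(prev, s) * belief[prev] for prev in STATES)
                for s in STATES
            }
            # update step: reweight by the observation model and renormalize
            unnorm = {s: observation(s, u) * predicted[s] for s in STATES}
            z = sum(unnorm.values())
            belief = {s: p / z for s, p in unnorm.items()}
        return belief

    if __name__ == "__main__":
        posterior = filter_beliefs(["assert_p", "assert_p", "assert_not_p"])
        for (believes_p, deceives), p in sorted(posterior.items(), key=lambda kv: -kv[1]):
            print(f"believes_p={believes_p!s:5} deceives={deceives!s:5} P={p:.3f}")

With the example utterance sequence above, the two deceptive states end up holding the majority of the posterior mass once the speaker contradicts the earlier assertions; this qualitative behavior, not the particular numbers, is what the sketch is meant to illustrate.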

