Files in this item

File: GHASSAMI-DISSERTATION-2020.pdf (5MB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Causal discovery beyond Markov equivalence
Author(s): Ghassami, Amiremad
Director of Research: Kiyavash, Negar
Doctoral Committee Chair(s): Kiyavash, Negar
Doctoral Committee Member(s): Koyejo, Sanmi; Raginsky, Maxim; Srikant, Rayadurgam; Zhang, Kun
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Causal Discovery
Directed Graphs
Markov Equivalence
Interventional Causal Structure Learning
Multi-domain Causal Structure Learning
Distribution Equivalence
Abstract: The focus of this dissertation is on learning causal diagrams beyond Markov equivalence. The baseline assumptions in causal structure learning are acyclicity of the underlying structure and causal sufficiency, i.e., that there are no unobserved confounding variables in the system. Under these assumptions, conditional independence relationships contain all the information in the distribution that can be used for structure learning. Therefore, the causal diagram can be identified only up to its Markov equivalence class, the set of structures reflecting the same conditional independence relationships. Consequently, for many ground-truth structures, the directions of a large portion of the edges remain unidentified. To learn the structure beyond Markov equivalence, one must generate, or have access to, extra joint distributions from the perturbed causal system. There are two main scenarios for acquiring these extra joint distributions. In the first and main scenario, an experimenter directly performs a sequence of interventions on subsets of the variables of the system to generate interventional distributions. We refer to the task of causal discovery from such interventional data as interventional causal structure learning. In this setting, the key question is which variables should be intervened on to gain the most information; this is the first focus of this dissertation. In the second scenario, a subset of the causal mechanisms, and consequently the joint distribution of the system, has varied or evolved for reasons beyond the experimenter's control. In this case, it is not even known a priori to the experimenter which causal mechanisms have varied. We refer to the task of causal discovery from such multi-domain data as multi-domain causal structure learning. In this setup, the main question is how to best exploit the changes across domains for causal discovery; this is the second focus of this dissertation. Next, we consider cases in which conditional independence may not reflect all the information in the distribution that can be used to identify the underlying structure. One such case is when cycles are allowed in the underlying structure. Unfortunately, a suitable characterization of equivalence for cyclic directed graphs has so far been unknown. The third focus of this dissertation is on bridging the gap between cyclic and acyclic directed graphs by introducing a general approach to equivalence characterization and structure learning. Another case in which conditional independence may not reflect all the information in the distribution is when there are extra assumptions on the generating causal modules. A seminal result in this direction is that a linear model with non-Gaussian exogenous variables is uniquely identifiable. As the fourth focus of this dissertation, we consider this setup but go one step further, allowing for violations of causal sufficiency, and investigate how this generalization affects identifiability.
Issue Date: 2020-07-07
Type: Thesis
URI: http://hdl.handle.net/2142/108456
Rights Information: Copyright 2020 AmirEmad Ghassami
Date Available in IDEALS: 2020-10-07
Date Deposited: 2020-08

