Interpretable belief representation learning on social networks
Li, Jinning
Permalink
https://hdl.handle.net/2142/129375
Description
- Title
- Interpretable belief representation learning on social networks
- Author(s)
- Li, Jinning
- Issue Date
- 2025-03-25
- Doctoral Committee Chair(s)
- Abdelzaher, Tarek
- Committee Member(s)
- Tong, Hanghang
- Zhai, Chengxiang
- Szymanski, Boleslaw K.
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Interpretable Belief Representation
- Variational Graph Auto-Encoders
- Large Language Models
- Ideological Embedding
- Social Networks
- Abstract
- Social media have become a major platform that reflects social beliefs and reveals ideological polarization through ongoing debates and conflicts, prompting the need for computational belief models that are both effective and insightful. These models are crucial for applications such as polarization detection, preventing social crises and curbing extreme content, and enhancing information retrieval and recommendation systems. This dissertation addresses these needs by developing a family of interpretable belief representation methods that integrate social-network interactions with language modeling, thereby capturing and interpreting individuals’ beliefs, how they aggregate into broader ideological factions, and how belief polarization contributes to ideological conflict. The work spans a spectrum of approaches, beginning with graph-centric solutions and advancing to language-centric methods and mixture models. First, we propose a general framework for interpretable belief representation learning, including feasibility conditions and a theoretical validation of interpretability. Under this framework, we introduce the InfoVGAE model, which learns a disentangled, non-negative latent space in which each axis aligns with a distinct and semantically meaningful ideology. To address the challenge of sparse and noisy networks, a weakly supervised graph-centric model (SGVGAE) is proposed that incorporates minimal guidance from large language models (LLMs), improving connectivity and axis alignment to strengthen robustness without sacrificing interpretability. Next, our NTULM model inverts the focus by enriching LLMs with structural context and supports belief learning through knowledge distillation for isolated or emerging users who lack strong network ties. Additionally, a mixture model that combines graph-centric and language-centric approaches is proposed to further boost effectiveness. Finally, the dissertation addresses the higher-level challenge of uncovering how beliefs converge into broader ideologies, namely the ideological conflict discovery task. The IGAT model introduces a differentiable tree-splitting mechanism that segments user communities hierarchically, revealing multiple layers of ideological conflict while preserving transparency in how beliefs form and align. Taken as a whole, these contributions push the boundary of interpretable belief representation learning on social networks. They combine rigorous theoretical insights, such as the conditions and validations for interpretability, with scalable implementations that leverage both structured and unstructured data. In doing so, this dissertation lays the groundwork for further exploration of belief foundation models, multi-modal integration, and cross-platform analyses of how beliefs and ideologies emerge, intersect, and evolve.
- Graduation Semester
- 2025-05
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129375
- Copyright and License Information
- Copyright 2025 Jinning Li
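The abstract above describes the InfoVGAE model as learning a disentangled, non-negative latent space whose axes align with distinct ideologies. Purely as an illustration of that idea, the following is a minimal sketch of a variational graph auto-encoder whose latent coordinates are constrained to be non-negative; the architecture, dimensions, loss, and toy data are assumptions for illustration and do not reproduce the dissertation's InfoVGAE.

```python
# Minimal sketch (NOT the dissertation's InfoVGAE): a variational graph
# auto-encoder whose latent means pass through a softplus so every latent
# coordinate is non-negative. Dimensions and the toy graph are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize an adjacency matrix with added self-loops."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class NonNegativeVGAE(nn.Module):
    """Two-layer GCN encoder with non-negative latents and an inner-product decoder."""

    def __init__(self, in_dim: int, hid_dim: int, lat_dim: int):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w_mu = nn.Linear(hid_dim, lat_dim, bias=False)
        self.w_logvar = nn.Linear(hid_dim, lat_dim, bias=False)

    def encode(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w0(x))
        # Softplus keeps latent means non-negative, so each axis can be read
        # as the strength of association with one side of an ideological split.
        mu = F.softplus(adj_norm @ self.w_mu(h))
        logvar = adj_norm @ self.w_logvar(h)
        return mu, logvar

    def reparameterize(self, mu, logvar):
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return F.relu(z)  # keep sampled codes non-negative as well

    def forward(self, x, adj_norm):
        mu, logvar = self.encode(x, adj_norm)
        z = self.reparameterize(mu, logvar)
        adj_logits = z @ z.t()  # inner-product decoder: edge log-odds
        return adj_logits, mu, logvar


def loss_fn(adj_logits, adj_true, mu, logvar):
    recon = F.binary_cross_entropy_with_logits(adj_logits, adj_true)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


if __name__ == "__main__":
    # Toy user-user interaction graph: two triangles, identity node features.
    adj = torch.zeros(6, 6)
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
        adj[i, j] = adj[j, i] = 1.0
    x = torch.eye(6)
    model = NonNegativeVGAE(in_dim=6, hid_dim=8, lat_dim=2)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(200):
        opt.zero_grad()
        logits, mu, logvar = model(x, normalize_adj(adj))
        loss_fn(logits, adj, mu, logvar).backward()
        opt.step()
    print(model.encode(x, normalize_adj(adj))[0])  # non-negative user coordinates
```

Running the script prints a small matrix of non-negative per-user coordinates; in the interpretable-belief setting described in the abstract, each latent axis would be read as affinity toward one ideological faction.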
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)