# Learning network structure from node behavior

#### Wu, Rui

## Permalink

https://hdl.handle.net/2142/72830

## Description

- Title: Learning network structure from node behavior
- Author(s): Wu, Rui
- Issue Date: 2015-01-21
- Director of Research (if dissertation) or Advisor (if thesis): Srikant, R.
- Doctoral Committee Chair(s): Srikant, R.
- Committee Member(s): Hajek, Bruce; Oh, Sewoong; Veeravalli, Venugopal V.
- Department of Study: Electrical & Computer Eng
- Discipline: Electrical & Computer Engr
- Degree Granting Institution: University of Illinois at Urbana-Champaign
- Degree Name: Ph.D.
- Degree Level: Dissertation
- Keyword(s): Markov random field; structure learning; recommender systems; clustering; convex optimization; ranking; Bradley-Terry model; projection; maximum likelihood estimation

- Abstract: Understanding the network structure connecting a group of entities is of interest in applications such as predicting stock prices and making recommendations to customers. The network structure is usually not directly observable. However, owing to improvements in technology and the ever-increasing use of the Internet, large amounts of data about individual node behavior are becoming easily available. An interesting problem, therefore, is to devise algorithms that infer network structure from node behavior data. Since network sizes are enormous in typical applications, the learning problem is intractable for general network topologies. This thesis focuses on three models that make simplifying assumptions about the underlying network.

  The first model represents the network as a Markov random field, where each node is viewed as a random variable and the conditional independence relations among them are encoded by a graph. The simplifying assumption is that the underlying graph is loosely connected: the number of short paths between any pair of nodes is small. Many previously studied models are examples of this family. Given i.i.d. samples from the joint distribution, we present a natural low-complexity algorithm for learning the structure of loosely connected Markov random fields. In particular, our algorithm learns the graph correctly with high probability using $n = O(\log p)$ samples, where $p$ is the size of the graph. If there are at most $D_1$ short paths between non-neighboring nodes and $D_2$ non-direct short paths between neighboring nodes, the running time of our algorithm is $O(np^{D_1+D_2+2})$.

  The second model arises from recommender systems in which users give ratings to items. We assume that both users and items form clusters and that users in the same cluster give the same binary rating to items in the same cluster. The goal is to recover the user and item clusters while observing only a small fraction of noisy entries. We first derive a lower bound on the minimum number of observations needed for exact cluster recovery. We then study three algorithms with different running times and compare the number of observations each needs for successful cluster recovery. Our analytical results show a smooth time-data trade-off: one can gradually reduce the computational complexity as more observations become available.

  The third model considers a scenario similar to the previous one, except that users give pairwise comparisons of items instead of binary ratings. We assume the users form clusters, where users in the same cluster share the same score vector for the items, and the pairwise comparisons obtained from each user are generated according to the Bradley-Terry model with his/her score vector. We propose a two-step algorithm for estimating the score vectors: first cluster the users using projected comparison vectors, and then estimate a score vector separately for each cluster by maximum likelihood estimation under the classical Bradley-Terry model. The key observation is that, although each user is represented by a high-dimensional comparison vector, the corresponding expected comparison vector is determined by only a small number of parameters and lies close to a low-dimensional linear subspace. Projecting the comparison vectors onto this subspace significantly reduces the noise and improves the clustering performance. Moreover, we show that the maximum likelihood estimation is robust to clustering errors.
- Graduation Semester: 2014-12
- Permalink: http://hdl.handle.net/2142/72830
- Copyright and License Information: Copyright 2014 Rui Wu
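The first model's learning strategy can be illustrated with a toy conditional-independence test: two nodes are joined by an edge only if they remain dependent after conditioning on candidate separators. This is a minimal sketch in the spirit of that approach, not the thesis's algorithm; the Ising chain, coupling strength, sample size, and threshold are all hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
p, theta, n = 4, 1.0, 20000            # nodes, coupling, samples (hypothetical)
edges_true = {(0, 1), (1, 2), (2, 3)}  # a chain-structured Ising model

# Exact sampling by enumerating all 2^p spin configurations.
states = np.array(list(itertools.product([-1, 1], repeat=p)))
weights = np.exp([sum(theta * s[i] * s[j] for i, j in edges_true) for s in states])
X = states[rng.choice(len(states), size=n, p=weights / weights.sum())]

def cond_mutual_info(u, v, w):
    # Empirical conditional mutual information I(X_u; X_v | X_w), in nats.
    triples, counts = np.unique(X[:, [u, v, w]], axis=0, return_counts=True)
    P = {tuple(t): c / n for t, c in zip(triples, counts)}
    def marg(keep):
        out = {}
        for k, q in P.items():
            key = tuple(k[i] for i in keep)
            out[key] = out.get(key, 0.0) + q
        return out
    Puw, Pvw, Pw = marg([0, 2]), marg([1, 2]), marg([2])
    return sum(q * np.log(q * Pw[(k[2],)] / (Puw[(k[0], k[2])] * Pvw[(k[1], k[2])]))
               for k, q in P.items())

# Declare (u, v) an edge iff the two nodes remain dependent after
# conditioning on every other single node (threshold is hypothetical).
learned = set()
for u in range(p):
    for v in range(u + 1, p):
        others = [w for w in range(p) if w not in (u, v)]
        if min(cond_mutual_info(u, v, w) for w in others) > 0.05:
            learned.add((u, v))
print(learned)
```

On a loosely connected graph like this chain, conditioning on a single well-chosen node already separates every non-neighboring pair, which is why size-1 conditioning sets suffice here.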
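The projection-then-cluster step of the third model's two-step algorithm can be sketched as follows. The synthetic Bradley-Terry data, the dimensions, and the plain SVD-plus-k-means pipeline are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
m, users_per_cluster, reps = 8, 30, 5  # items, users/cluster, comparisons per pair (hypothetical)
pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]

def expected_comparison_vector(s):
    # Bradley-Terry win probabilities P(i beats j) for every item pair.
    return np.array([1.0 / (1.0 + np.exp(-(s[i] - s[j]))) for i, j in pairs])

# Two clusters of users, each sharing one (random, hypothetical) score vector.
score_vectors = [rng.normal(size=m), rng.normal(size=m)]
X, labels = [], []
for c, s in enumerate(score_vectors):
    prob = expected_comparison_vector(s)
    for _ in range(users_per_cluster):
        # A user's empirical comparison vector: average of a few Bernoulli outcomes.
        X.append(rng.binomial(reps, prob) / reps)
        labels.append(c)
X, labels = np.array(X), np.array(labels)

# Project the comparison vectors onto a low-dimensional subspace
# (top-K right singular vectors) to reduce noise before clustering.
K = 2
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:K].T

# Cluster the projections with a tiny Lloyd's k-means, seeded at the
# extremes of the first principal direction.
centers = Z[[int(np.argmin(Z[:, 0])), int(np.argmax(Z[:, 0]))]]
for _ in range(50):
    assign = np.argmin(((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[assign == k].mean(axis=0) if (assign == k).any() else centers[k]
                        for k in range(K)])

# Accuracy up to label permutation.
acc = max((assign == labels).mean(), (assign != labels).mean())
print(f"clustering accuracy: {acc:.2f}")
```

The point of the projection is visible in the dimensions: each user lives in a 28-dimensional comparison space, but after projection the clustering runs in only K = 2 dimensions, where the per-coordinate Bernoulli noise has largely averaged out.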
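The second step, maximum likelihood estimation for the classical Bradley-Terry model within a recovered cluster, can be sketched with simple gradient ascent on the log-likelihood; the item count, comparison counts, learning rate, and iteration budget below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_per_pair = 6, 200   # items and comparisons per ordered pair (hypothetical)
s_true = rng.normal(size=m)
s_true -= s_true.mean()  # scores are identifiable only up to an additive shift

# Simulate comparison outcomes: wins[i, j] = times item i beat item j.
wins = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if i != j:
            p_ij = np.exp(s_true[i]) / (np.exp(s_true[i]) + np.exp(s_true[j]))
            wins[i, j] = rng.binomial(n_per_pair, p_ij)

# Gradient ascent on the Bradley-Terry log-likelihood
#   L(s) = sum_{i != j} wins[i, j] * (s_i - log(e^{s_i} + e^{s_j})).
s = np.zeros(m)
lr = 0.001
for _ in range(2000):
    P = np.exp(s)[:, None] / (np.exp(s)[:, None] + np.exp(s)[None, :])
    total = wins + wins.T  # comparisons played between each pair
    grad = (wins - total * P).sum(axis=1)
    s += lr * grad
    s -= s.mean()          # re-center to fix the shift

err = np.max(np.abs(s - s_true))
print(f"max score estimation error: {err:.3f}")
```

Because the log-likelihood is concave in the score vector (after fixing the shift), plain gradient ascent converges to the MLE; the residual error here is statistical, shrinking as the number of comparisons grows.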

## Owning Collections

##### Graduate Dissertations and Theses at Illinois (PRIMARY)

##### Dissertations and Theses - Electrical and Computer Engineering
