Files in this item:
HAZIMEH-THESIS-2016.pdf (application/pdf, 591 kB)
Title: Axiomatic analysis of smoothing methods in language models for pseudo-relevance feedback
Author(s): Hazimeh, Hussein
Advisor(s): Zhai, ChengXiang
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Search Engines; Text Retrieval; Relevance Feedback; Pseudo-Relevance Feedback; Implicit Feedback; Blind Feedback
Abstract: Pseudo-Relevance Feedback (PRF) is an important general technique for improving retrieval effectiveness without requiring any user effort. Several state-of-the-art PRF models are based on the language modeling approach, where a query language model is learned from feedback documents. In all these models, feedback documents are represented with unigram language models smoothed with a collection language model. While collection language model-based smoothing has proven both effective and necessary in using language models for retrieval, we use axiomatic analysis to show that this smoothing scheme inherently causes the feedback model to favor frequent terms and thus violates the IDF constraint needed to ensure selection of discriminative feedback terms. To address this problem, we propose replacing collection language model-based smoothing in the feedback stage with additive smoothing, which is analytically shown to select more discriminative terms. Empirical evaluation further confirms that additive smoothing significantly outperforms collection-based smoothing methods in multiple language model-based PRF models.
Issue Date: 2016-06-02
Rights Information: Copyright 2016 Hussein Hazimeh
Date Available in IDEALS: 2016-11-10
Date Deposited: 2016-08

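The contrast the abstract draws between the two smoothing schemes can be sketched in a few lines. This is a minimal illustration, not code from the thesis; the function names and the interpolation parameter λ and pseudo-count δ are assumptions. The point it demonstrates: Jelinek-Mercer-style collection smoothing adds mass proportional to a term's collection frequency, boosting common (low-IDF) terms in the feedback model, whereas additive smoothing adds the same δ to every term and so preserves their relative discrimination.

```python
# Sketch: collection-based vs. additive smoothing of a feedback
# document's unigram language model (illustrative, not thesis code).
from collections import Counter


def collection_lm(docs):
    """Maximum-likelihood collection language model p(w|C)."""
    counts = Counter()
    for d in docs:
        counts.update(d)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}


def jm_smooth(doc, p_coll, lam=0.5):
    """Jelinek-Mercer (collection-based) smoothing:
    p(w|d) = (1 - lam) * c(w,d)/|d| + lam * p(w|C).
    Frequent collection terms receive extra mass via p(w|C)."""
    counts = Counter(doc)
    n = len(doc)
    return {w: (1 - lam) * c / n + lam * p_coll.get(w, 0.0)
            for w, c in counts.items()}


def additive_smooth(doc, vocab_size, delta=1.0):
    """Additive (Laplace) smoothing:
    p(w|d) = (c(w,d) + delta) / (|d| + delta * |V|).
    Every term gets the same pseudo-count, so rare discriminative
    terms are not penalized relative to common ones."""
    counts = Counter(doc)
    n = len(doc)
    return {w: (c + delta) / (n + delta * vocab_size)
            for w, c in counts.items()}
```

On a toy collection where "the" dominates, a feedback document containing "the" and "retrieval" once each gets a higher weight for "the" under Jelinek-Mercer smoothing, but equal weights for both terms under additive smoothing.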