LLM2IR: simple unsupervised contrastive learning makes long-context LLM great retriever
Yang, Xiaocong
Permalink
https://hdl.handle.net/2142/129301
Description
Title
LLM2IR: simple unsupervised contrastive learning makes long-context LLM great retriever
Author(s)
Yang, Xiaocong
Issue Date
2025-05-08
Director of Research (if dissertation) or Advisor (if thesis)
Zhai, Chengxiang
Department of Study
Siebel School of Computing and Data Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Information Retrieval, Large Language Models
Abstract
Modern dense information retrieval (IR) models usually rely on costly large-scale pretraining. In this paper, we introduce LLM2IR, an efficient unsupervised contrastive learning framework that converts any decoder-only large language model (LLM) into an information retrieval model. Despite its simplicity, its effectiveness is demonstrated across different LLMs on multiple IR benchmarks, including LoCo, LongEmbed, and BEIR. By comparing the task performance of models within the same model family, we also find that models with longer context lengths tend to have stronger IR capability. Our work not only provides an effective way to build IR models on state-of-the-art LLMs, but also sheds light on the relationship between information retrieval ability and model context length, which can inform the design of better information retrievers.
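The abstract does not specify the exact training objective, so the sketch below is only an illustration of a standard unsupervised in-batch contrastive (InfoNCE) loss of the kind such frameworks typically use; the function name, the dropout-based positive pairing, and the temperature value are assumptions, not details taken from the thesis.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(query_emb, key_emb, temperature=0.05):
        """In-batch InfoNCE contrastive loss over a batch of embedding pairs.

        query_emb, key_emb: (batch, dim) tensors; row i of each is a positive
        pair (e.g. two dropout-perturbed encodings of the same passage), while
        all other rows in the batch serve as negatives.
        """
        q = F.normalize(query_emb, dim=-1)
        k = F.normalize(key_emb, dim=-1)
        logits = q @ k.T / temperature          # (batch, batch) cosine similarities
        labels = torch.arange(q.size(0), device=q.device)
        return F.cross_entropy(logits, labels)

    # Toy usage: in a real setup the embeddings would come from a decoder-only
    # LLM (e.g. a pooled hidden state), encoded twice to form two views.
    emb_a = torch.randn(8, 4096, requires_grad=True)
    emb_b = emb_a + 0.01 * torch.randn(8, 4096)   # stand-in for a second noisy view
    loss = info_nce_loss(emb_a, emb_b)
    loss.backward()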