
Contextual Bandits for Information Retrieval

Katja Hofmann, Shimon Whiteson, and Maarten de Rijke

Abstract

In this paper we give an overview of, and outlook on, research at the intersection of information retrieval (IR) and contextual bandit problems. A critical problem in IR is online learning to rank, where a search engine strives to improve the quality of the ranked result lists it presents to users based on how those users interact with those lists. Recently, researchers have started to model the interaction between users and search engines as a contextual bandit problem, and initial methods for learning in this setting have been devised. Our research focuses on two aspects: balancing exploration and exploitation, and inferring preferences from implicit user interactions. This paper summarizes our recent work on online learning to rank for IR and points out challenges that are characteristic of this application area.
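To make the contextual bandit framing concrete, below is a minimal sketch of a standard contextual bandit algorithm (disjoint LinUCB, Li et al. 2010) applied to choosing among a small set of candidate rankers. This is an illustration of the general setting, not the method proposed in the paper; the number of rankers, the query feature vectors, and the simulated click model are all hypothetical choices made for the example.

```python
import numpy as np

# Illustrative sketch only: a standard disjoint LinUCB loop for picking one of
# several candidate rankers per query, with simulated click feedback.
rng = np.random.default_rng(0)
n_rankers, d, n_queries = 5, 10, 2000
alpha = 1.0  # exploration strength: larger values favor uncertain rankers

# Hypothetical ground-truth weights mapping query features to click probability
# for each ranker (unknown to the learner; used only to simulate feedback).
theta_true = rng.normal(size=(n_rankers, d))

A = np.stack([np.eye(d) for _ in range(n_rankers)])  # per-ranker design matrices
b = np.zeros((n_rankers, d))                          # per-ranker response vectors
total_reward = 0.0

for t in range(n_queries):
    x = rng.normal(size=d)          # context: features of the incoming query
    x /= np.linalg.norm(x)

    # Score each ranker by its estimated reward plus an upper confidence bound.
    scores = np.empty(n_rankers)
    for a in range(n_rankers):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        scores[a] = theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x)
    arm = int(np.argmax(scores))

    # Implicit feedback: simulate whether the user clicks a result from this ranker.
    p_click = 1.0 / (1.0 + np.exp(-(theta_true[arm] @ x)))
    reward = float(rng.random() < p_click)
    total_reward += reward

    # Update the chosen ranker's statistics with the observed context and reward.
    A[arm] += np.outer(x, x)
    b[arm] += reward * x

print("Average simulated click rate:", total_reward / n_queries)
```

The alpha term controls how aggressively rankers with uncertain estimates are shown to users, which is precisely the exploration/exploitation trade-off referred to in the abstract; the click simulation stands in for the implicit user interactions from which preferences must be inferred.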

Book Title
NIPS 2011: Proceedings of the Conference on Neural Information Processing Systems, Workshop on Bayesian Optimization, Experimental Design and Bandits: Theory and Applications
Month
December
Year
2011