Using Confidence Bounds for Efficient On-Line Ranker Evaluation

Masrour Zoghi, Shimon Whiteson, Maarten de Rijke and Rémi Munos

Abstract

A key challenge in information retrieval is on-line ranker evaluation: determining, on the basis of user clicks on presented document lists, which of a finite set of rankers performs best in expectation. When the presented lists are constructed using interleaved comparison methods, which interleave lists proposed by two different candidate rankers, the problem of minimizing the total regret accumulated while evaluating the rankers can be formalized as a K-armed dueling bandits problem. In this paper, we propose a new method called relative confidence sampling (RCS) that aims to reduce cumulative regret by being less conservative than existing methods in eliminating rankers from contention. In addition, we present an empirical comparison between RCS and two state-of-the-art methods, relative upper confidence bound (RUCB) and SAVAGE. The results demonstrate that RCS can substantially outperform these alternatives on several large learning-to-rank datasets.
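For context, a standard formalization of cumulative regret in the K-armed dueling bandits setting can be written as follows; the notation is the conventional one from this literature and is illustrative rather than quoted from the paper. It assumes arm 1 is the Condorcet winner, i.e. the ranker preferred in expectation to every other ranker:

    % Cumulative regret over T interleaved comparisons, where
    % (c_t, d_t) is the pair of rankers compared at time t and
    % p_{ij} is the probability that ranker i beats ranker j
    % (so p_{1j} > 1/2 for all j != 1 by the Condorcet assumption).
    \[
      R_T = \sum_{t=1}^{T} \frac{\Delta_{1 c_t} + \Delta_{1 d_t}}{2},
      \qquad
      \Delta_{1 j} = p_{1 j} - \tfrac{1}{2}.
    \]

Under this definition, a comparison incurs zero regret only when the Condorcet winner is interleaved against itself, so an evaluation method keeps cumulative regret low only by converging to repeatedly selecting the best ranker for both slots of the comparison.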

Book Title
WSDM 2014: Proceedings of the Seventh ACM International Conference on Web Search and Data Mining
Month
February
Pages
73–82
Year
2014