Multileaved Comparisons for Fast Online Evaluation

Anne Schuth, Floor Sietsma, Shimon Whiteson, Damien Lefortier and Maarten de Rijke

Abstract

Evaluation methods for information retrieval systems come in three types: offline evaluation, using static data sets annotated for relevance by human judges; user studies, usually conducted in a lab-based setting; and online evaluation, using implicit signals such as clicks from actual users. For the latter, preferences between rankers are typically inferred from implicit signals via interleaved comparison methods, which combine a pair of rankings and display the result to the user. We propose a new approach to online evaluation called multileaved comparisons that is useful in the prevalent case where designers are interested in the relative performance of more than two rankers. Rather than combining only a pair of rankings, multileaved comparisons combine an arbitrary number of rankings. The resulting user clicks then give feedback about how all these rankings compare to each other. We propose two specific multileaved comparison methods. The first, called team draft multileave, is an extension of team draft interleave. The second, called optimized multileave, is an extension of optimized interleave and is designed to handle cases where a large number of rankers must be multileaved. We present experimental results that demonstrate that both team draft multileave and optimized multileave can accurately determine all pairwise preferences among a set of rankers using far less data than the interleaving methods that they extend.
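
The abstract names team draft multileave as an extension of team draft interleave. The paper's exact algorithm is not reproduced here; the Python sketch below only illustrates the team-draft idea that it generalizes: in each round the rankers are visited in a random order, each contributes its highest-ranked document not yet in the combined list, and clicks are credited to the contributing ranker's "team". Function names and the toy rankings are illustrative, not taken from the paper.

    import random

    def team_draft_multileave(rankings, k):
        """Combine several rankings into one list of length k, team-draft style.

        Each round visits the rankers in a fresh random order; every ranker
        contributes its highest-ranked document not yet in the combined list.
        Returns the multileaved list and, per position, the contributing ranker.
        """
        multileaved, teams = [], []
        while len(multileaved) < k:
            added = False
            for r in random.sample(range(len(rankings)), len(rankings)):
                doc = next((d for d in rankings[r] if d not in multileaved), None)
                if doc is not None:
                    multileaved.append(doc)
                    teams.append(r)
                    added = True
                    if len(multileaved) == k:
                        break
            if not added:  # all rankings exhausted before reaching length k
                break
        return multileaved, teams

    def credit_clicks(teams, clicked_positions, n_rankers):
        """Credit each ranker for clicks on documents its team contributed."""
        credits = [0] * n_rankers
        for pos in clicked_positions:
            credits[teams[pos]] += 1
        return credits

    # Illustrative use with three hypothetical rankers over documents a-d.
    rankers = [
        ["a", "b", "c", "d"],
        ["b", "a", "d", "c"],
        ["c", "d", "a", "b"],
    ]
    ml, teams = team_draft_multileave(rankers, k=4)
    clicks = [0, 2]  # positions the user clicked in the multileaved list
    print(ml, teams, credit_clicks(teams, clicks, len(rankers)))

Aggregating such per-impression credits over many queries is what would let all pairwise preferences among the rankers be inferred from a single stream of clicks.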

Book Title
CIKM 2014: Proceedings of the Twenty-Third Conference on Information and Knowledge Management
Month
November
Pages
71–80
Year
2014