Reusing Historical Interaction Data for Faster Online Learning to Rank for IR

Katja Hofmann‚ Anne Schuth‚ Shimon Whiteson and Maarten de Rijke


Online learning to rank for information retrieval (IR) holds promise for the development of "self-learning" search engines that automatically adjust to their users. Given the large amounts of interaction data, such as clicks, that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge. In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make the feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on nine learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our preselection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.
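To illustrate the preselection idea, the sketch below shows one simple (hypothetical) way historical click data could be used to filter candidate rankers before live evaluation: each candidate is scored by how often it orders a historically clicked document above a skipped one, and only the best-agreeing candidates are exposed to real users. This is a minimal illustration, not the paper's actual algorithm; the linear rankers, the toy log format, and the `preselect` helper are all assumptions made for the example.

```python
def score(weights, features):
    """Linear ranker: score a document as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, features))

def historical_agreement(candidate, log):
    """Fraction of logged (clicked, skipped) document pairs that the
    candidate ranker orders consistently with the user's click."""
    correct = 0
    for clicked_feats, skipped_feats in log:
        if score(candidate, clicked_feats) > score(candidate, skipped_feats):
            correct += 1
    return correct / len(log)

def preselect(candidates, log, k):
    """Keep the k candidates that best agree with historical clicks, so
    fewer noisy live user interactions are spent on poor rankers."""
    return sorted(candidates, key=lambda c: historical_agreement(c, log),
                  reverse=True)[:k]

# Toy historical log: 2-dimensional feature vectors, clicked document first.
log = [([1.0, 0.2], [0.1, 0.9]),
       ([0.8, 0.1], [0.2, 0.7]),
       ([0.9, 0.3], [0.3, 0.8])]
candidates = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
best = preselect(candidates, log, k=1)  # -> [[1.0, 0.0]]
```

In an online learning loop, only the preselected candidates would then be compared against the current ranker in live interactions, which is where the reduction in wasted (noisy) user feedback comes from.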

Book Title
WSDM 2013: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining