
A Surprisingly Robust Trick for the Winograd Schema Challenge

Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov and Thomas Lukasiewicz

Abstract

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 improves strongly when they are fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced dataset and on WSCR, we achieve overall accuracies of 72.2% and 71.9% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.5% and 6.8%, respectively. Furthermore, our fine-tuned models are also consistently more robust on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
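The abstract describes fine-tuning BERT on pronoun disambiguation data and evaluating on WSC273. As a rough illustration of how a masked language model such as BERT can be applied to a WSC-style instance at all, the sketch below scores each candidate referent by masking it in place of the ambiguous pronoun and comparing the model's token probabilities. This is a common masked-LM scoring scheme, not necessarily the exact procedure or released code of this paper; the model name, the "_" placeholder convention, and the averaging of token log-probabilities are assumptions made for the example.

```python
# Hedged sketch: score WSC candidates with a masked LM (assumed setup, not the authors' code).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def candidate_score(sentence_with_blank: str, candidate: str) -> float:
    """Average log-probability of the candidate's tokens when they fill the '_' slot."""
    cand_ids = tokenizer.encode(candidate, add_special_tokens=False)
    # Replace the blank with one [MASK] per candidate sub-token.
    masked = sentence_with_blank.replace("_", " ".join([tokenizer.mask_token] * len(cand_ids)))
    inputs = tokenizer(masked, return_tensors="pt")
    mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    log_probs = torch.log_softmax(logits[mask_positions], dim=-1)
    return log_probs[torch.arange(len(cand_ids)), torch.tensor(cand_ids)].mean().item()

# Example Winograd-style instance; the higher-scoring candidate is taken as the answer.
sentence = "The trophy does not fit in the suitcase because _ is too big."
for cand in ["the trophy", "the suitcase"]:
    print(cand, candidate_score(sentence, cand))
```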

Book Title
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019
Editor
Anna Korhonen and David Traum
Month
July
Publisher
Association for Computational Linguistics
Year
2019