BECEL: Benchmark for Consistency Evaluation of Language Models

Myeongjun Jang, Deuk Sin Kwon, and Thomas Lukasiewicz

Abstract

Behavioural consistency is a critical condition for a language model (LM) to be trusted in the way humans are. Despite its importance, there is little consensus on the definition of LM consistency, resulting in differing definitions across many studies. In this paper, we first propose a notion of LM consistency grounded in the spirit of behavioural consistency and establish a taxonomy that classifies previously studied consistencies into several sub-categories. Next, we create a new benchmark that allows us to evaluate a model on 19 test cases, distinguished by multiple types of consistency and diverse downstream tasks. Through extensive experiments on the new benchmark, we find that none of the modern pre-trained language models (PLMs) performs well on every test case, and that many exhibit high inconsistency. Our experimental results suggest that a unified benchmark covering broad aspects (i.e., multiple consistency types and tasks) is essential for a more precise evaluation.
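To make the notion concrete, below is a minimal, purely illustrative Python sketch of one flavour of behavioural consistency: invariance of a model's prediction under meaning-preserving paraphrases. The stand-in classifier, the example pairs, and the metric are all hypothetical assumptions for illustration, not taken from the BECEL benchmark itself.

    # Illustrative sketch of a paraphrase-consistency check: a model is
    # counted as consistent on a pair if it makes the same prediction for
    # a sentence and a meaning-preserving paraphrase. All names below
    # (model_predict, the example pairs) are hypothetical, not from BECEL.

    def model_predict(sentence: str) -> str:
        """Hypothetical stand-in for a PLM classifier (e.g., sentiment)."""
        return "positive" if "good" in sentence.lower() else "negative"

    # Hypothetical (original, paraphrase) pairs with identical meaning.
    pairs = [
        ("The movie was good.", "The film was good."),
        ("The movie was good.", "It was a fine movie."),
    ]

    # Fraction of pairs on which the model behaves consistently.
    consistent = sum(
        model_predict(orig) == model_predict(para) for orig, para in pairs
    )
    print(f"Consistency rate: {consistent / len(pairs):.2f}")

Running this toy example prints a consistency rate of 0.50: the keyword-based stand-in changes its prediction on the second paraphrase, illustrating the kind of behavioural inconsistency that a benchmark of this sort is designed to surface.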

Book Title
Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 2022
Month
October
Pages
3680–3696
Publisher
International Committee on Computational Linguistics
Year
2022