NoiER: An Approach for Training more Reliable Fine-Tuned Downstream Task Models

Myeongjun Jang and Thomas Lukasiewicz

Abstract

The recent development of pretrained language models trained in a self-supervised fashion, such as BERT, is driving rapid progress in natural language processing. However, their brilliant performance relies on exploiting syntactic artefacts of the training data rather than fully understanding the intrinsic meaning of language. This excessive exploitation of spurious artefacts leads to a problematic issue: the distribution collapse problem, the phenomenon whereby a model fine-tuned on a downstream task is unable to distinguish out-of-distribution sentences and yet assigns them high confidence scores. In this paper, we argue that distribution collapse is a prevalent issue in pretrained language models and propose noise entropy regularisation (NoiER) as an efficient learning paradigm that solves the problem without auxiliary models or additional data. The proposed approach improved traditional out-of-distribution detection evaluation metrics by 55% on average compared to the original fine-tuned models.
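The sketch below illustrates, under stated assumptions, what a noise-entropy-regularised fine-tuning objective of this kind might look like; it is not the authors' released implementation. The assumption is that the usual task loss on in-distribution inputs is combined with a term that pushes the model's predictive distribution on synthetic noise inputs towards uniform (maximum entropy), so that out-of-distribution sentences no longer receive high confidence. The function name, the `lam` weighting parameter, and the assumption that `model` returns class logits are all illustrative choices.

```python
import torch
import torch.nn.functional as F


def noise_entropy_regularised_loss(model, input_ids, labels, noise_ids, lam=1.0):
    """Illustrative NoiER-style objective (a sketch, not the paper's code).

    Combines the standard fine-tuning loss on in-distribution examples with
    a regulariser that drives the predictive distribution on noise inputs
    towards uniform, discouraging high-confidence predictions on OOD text.
    """
    # Standard task loss on in-distribution examples (model returns logits).
    logits = model(input_ids)                       # shape: (batch, num_classes)
    task_loss = F.cross_entropy(logits, labels)

    # Entropy regularisation on noise inputs: minimising KL(p_noise || uniform)
    # is equivalent to maximising the predictive entropy on the noise batch.
    noise_logits = model(noise_ids)
    noise_log_probs = F.log_softmax(noise_logits, dim=-1)
    uniform = torch.full_like(noise_log_probs, 1.0 / noise_log_probs.size(-1))
    entropy_term = F.kl_div(noise_log_probs, uniform, reduction="batchmean")

    return task_loss + lam * entropy_term
```

At inference time, a model trained with such an objective would be expected to produce near-uniform (low-confidence) predictions on out-of-distribution inputs, which is what allows a simple confidence threshold to act as an OOD detector.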

Journal
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Month
July
Pages
2514–2525
Volume
30
Year
2022