An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models

Zhongbin Xie and Thomas Lukasiewicz

Abstract

The increasingly large size of modern pre-trained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases. Therefore, in this paper, we investigate recent parameter-efficient methods in combination with counterfactual data augmentation (CDA) for bias mitigation. We conduct comprehensive experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance and their ability to preserve the internal knowledge of a pre-trained model. We find that the parameter-efficient methods (i) can perform similarly to or sometimes better than full fine-tuning with improved time and memory efficiency, with adapter tuning being consistently the most effective for both BERT and GPT-2; (ii) are better at preserving the language modeling ability compared to strong post-hoc debiasing methods, while still achieving competitive or superior debiasing performance; and (iii) can largely maintain the internal knowledge of both BERT and GPT-2, as evaluated via fact retrieval and downstream fine-tuning.
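
The abstract pairs parameter-efficient tuning with counterfactual data augmentation (CDA). As a rough illustration of the CDA step, the sketch below swaps gendered terms in training sentences to produce counterfactual copies; the word-pair list, the `counterfactual` and `augment` helpers, and the two-sided augmentation scheme are illustrative assumptions, not the paper's actual pair lists or training pipeline.

```python
import re

# Hypothetical gendered word pairs; real CDA setups use curated lists
# that are much larger than this illustrative sample.
GENDER_PAIRS = [("he", "she"), ("him", "her"), ("his", "her"),
                ("man", "woman"), ("men", "women"),
                ("father", "mother"), ("son", "daughter")]

# Build a bidirectional swap map. Note that ambiguous words such as
# "her" (him/his) get a single mapping here; handling such ambiguity
# properly is one of the practical complications of CDA.
SWAP = {}
for a, b in GENDER_PAIRS:
    SWAP[a], SWAP[b] = b, a


def counterfactual(sentence: str) -> str:
    """Return a copy of the sentence with gendered terms swapped."""
    def swap_token(match):
        tok = match.group(0)
        repl = SWAP.get(tok.lower())
        if repl is None:
            return tok
        # Preserve the capitalization of the original token.
        return repl.capitalize() if tok[0].isupper() else repl
    return re.sub(r"\b\w+\b", swap_token, sentence)


def augment(corpus):
    """Two-sided CDA: keep each original sentence and add its counterfactual."""
    out = []
    for s in corpus:
        out.append(s)
        cf = counterfactual(s)
        if cf != s:
            out.append(cf)
    return out


if __name__ == "__main__":
    corpus = ["The doctor said he would call his patient."]
    print(augment(corpus))
    # ['The doctor said he would call his patient.',
    #  'The doctor said she would call her patient.']
```

In the setting the abstract describes, an augmented corpus of this kind would then be used to update only a small set of added parameters (prefixes, prompts, or adapters) rather than all weights of the pre-trained model.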

Book Title
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada, July 9-14, 2023
Month
July
Publisher
Association for Computational Linguistics
Year
2023