A Predictive Factor Analysis of Social Biases and Task-Performance in Pretrained Masked Language Models



Zhou, Yi, Camacho-Collados, Jose and Bollegala, Danushka (ORCID: 0000-0003-4476-7003)
(2023) A Predictive Factor Analysis of Social Biases and Task-Performance in Pretrained Masked Language Models. In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, December 2023, Singapore.

EMNLP_2023_MLM_Bias_Evaluation.pdf - Author Accepted Manuscript (322kB)

Abstract

Various types of social biases have been reported for pretrained Masked Language Models (MLMs) in prior work. However, an MLM is characterised by multiple underlying factors, such as its model size, training data size, training objective, the domain from which the pretraining data is sampled, the tokenization method, and the languages present in the pretraining corpora, to name a few. It remains unclear which of these factors influence the social biases learned by MLMs. To study the relationship between model factors and the social biases learned by an MLM, as well as the model's downstream task performance, we conduct a comprehensive study over 39 pretrained MLMs covering different model sizes, training objectives, tokenization methods, training data domains, and languages. Our results shed light on important factors often neglected in prior literature, such as tokenization and model objectives.
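The abstract describes a predictive factor analysis, i.e., treating model properties as predictors of observed bias scores. As a purely illustrative sketch (not the authors' code, data, or exact method), the following Python snippet shows how such an analysis might be set up with a linear regression over model factors; every column name and value below is an invented placeholder.

# A rough, hypothetical sketch of a predictive factor analysis:
# regress a bias benchmark score on model factors. All values below
# are placeholders, not the paper's data or code.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy factor table: one row per pretrained MLM (invented numbers).
models = pd.DataFrame({
    "params_millions": [110, 340, 125, 355, 110, 270],
    "objective": ["MLM", "MLM", "MLM+SOP", "MLM+SOP", "MLM", "MLM"],
    "tokenization": ["WordPiece", "WordPiece", "SentencePiece",
                     "SentencePiece", "BPE", "BPE"],
    # Placeholder bias scores, e.g. from a CrowS-Pairs-style benchmark.
    "bias_score": [55.2, 58.1, 52.7, 54.9, 57.3, 56.0],
})

factors = models.drop(columns=["bias_score"])
target = models["bias_score"]

# Standardize the numeric factor; one-hot encode the categorical ones.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["params_millions"]),
    ("cat", OneHotEncoder(), ["objective", "tokenization"]),
])

# A linear model: coefficient magnitudes suggest how strongly each
# factor predicts the bias score across the model pool.
pipeline = make_pipeline(preprocess, LinearRegression())
pipeline.fit(factors, target)

# Rank factors by absolute coefficient.
names = preprocess.get_feature_names_out()
for name, coef in sorted(zip(names, pipeline[-1].coef_),
                         key=lambda pair: -abs(pair[1])):
    print(f"{name:30s} {coef:+.3f}")

The paper's actual analysis covers 39 MLMs and further factors (training data domain, languages), so this toy table only mirrors the shape of the problem, not its scale or findings.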

Item Type: Conference or Workshop Item (Unspecified)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 08 Nov 2023 09:27
Last Modified: 15 Mar 2024 02:24
DOI: 10.18653/v1/2023.emnlp-main.683
URI: https://livrepository.liverpool.ac.uk/id/eprint/3176685