CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models



Zhao, J, Fang, M ORCID: 0000-0001-6745-286X, Shi, Z, Li, Y, Chen, L and Pechenizkiy, M
(2023) CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models. In: The 61st Annual Meeting of the Association for Computational Linguistics, Toronto, Canada.

ACL_2023_CHBias__Bias_Evaluation_and_Mitigation_of_Chinese_Conversational_Language_Models-camera.pdf - Submitted version


Abstract

Pretrained conversational agents have been shown to exhibit safety issues, including a range of stereotypical human biases such as gender bias. However, current research covers only a limited set of bias categories, and most of it focuses exclusively on English. In this paper, we introduce a new Chinese dataset, CHBias, for bias evaluation and mitigation of Chinese conversational language models. In addition to previously well-explored bias categories, CHBias covers under-explored categories that have received less attention, such as ageism and appearance bias. We evaluate two popular pretrained Chinese conversational models, CDial-GPT and EVA2.0, using CHBias. Furthermore, to mitigate different biases, we apply several debiasing methods to the Chinese pretrained models. Experimental results show that these Chinese pretrained models risk generating text that contains social biases, and that debiasing methods using the proposed dataset can make response generation less biased while preserving the models' conversational capabilities.

Item Type: Conference or Workshop Item (Unspecified)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 24 May 2023 08:20
Last Modified: 30 Oct 2023 03:55
URI: https://livrepository.liverpool.ac.uk/id/eprint/3170616