Measuring Political Bias in Large Language Models: What is Said and How It is Said
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
Abstract
We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias exists in LLMs and can lead to polarization and other harms in downstream applications. In order to provide transparency to users, we advocate that there should be fine-grained and explainable measures of political biases generated by LLMs. Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias. We measured the political bias in eleven open-sourced LLMs and showed that our proposed framework is easily scalable to other topics and is explainable.
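The abstract splits bias measurement into a content dimension (the substance of the generation) and a style dimension (lexical polarity). As a rough illustration of the style dimension only, the sketch below scores the lexical polarity of a generation with a toy polarity lexicon; the lexicon, the `lexical_polarity` function, and the averaging rule are hypothetical stand-ins for illustration, not the paper's actual method.

```python
# Minimal sketch (not the authors' implementation): score the "style"
# dimension of a model generation as average lexical polarity.
# POLARITY_LEXICON is a tiny hypothetical stand-in for a real polarity lexicon.

POLARITY_LEXICON = {
    "freedom": 0.8, "rights": 0.6, "protect": 0.5,
    "harm": -0.7, "crisis": -0.8, "dangerous": -0.6,
}

def lexical_polarity(text: str) -> float:
    """Average polarity of lexicon words in the text; 0.0 if none occur."""
    tokens = text.lower().split()
    hits = [POLARITY_LEXICON[t] for t in tokens if t in POLARITY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Usage: compare two generations on the same political issue.
gen_a = "reproductive rights protect personal freedom"
gen_b = "this policy is dangerous and will cause harm"
print(lexical_polarity(gen_a))  # positive lexical polarity (~0.63)
print(lexical_polarity(gen_b))  # negative lexical polarity (-0.65)
```

In this toy setup, the sign and magnitude of the score summarize how charged the wording is, independent of which stance the content takes; the content dimension would need a separate stance measure.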