
What the Harm? Quantifying the Tangible Impact of Gender Bias in Machine Translation with a Human-centered Study

October 1, 2024
Authors: Beatrice Savoldi, Sara Papi, Matteo Negri, Ana Guerberof, Luisa Bentivogli
cs.AI

Abstract

Gender bias in machine translation (MT) is recognized as an issue that can harm people and society. And yet, advancements in the field rarely involve people, the final MT users, or inform how they might be impacted by biased technologies. Current evaluations are often restricted to automatic methods, which offer an opaque estimate of what the downstream impact of gender disparities might be. We conduct an extensive human-centered study to examine if and to what extent bias in MT brings harms with tangible costs, such as quality of service gaps across women and men. To this aim, we collect behavioral data from 90 participants, who post-edited MT outputs to ensure correct gender translation. Across multiple datasets, languages, and types of users, our study shows that feminine post-editing demands significantly more technical and temporal effort, also corresponding to higher financial costs. Existing bias measurements, however, fail to reflect the found disparities. Our findings advocate for human-centered approaches that can inform the societal impact of bias.
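
To make the measured quantities concrete, here is a minimal, hypothetical sketch of how post-editing effort of the kind described above could be quantified. The abstract reports technical effort (edits), temporal effort (time), and derived financial costs; the specific choices below (a word-level HTER-style edit rate, a flat hourly rate, and the toy sentences) are illustrative assumptions, not the authors' actual measurement pipeline.

```python
# Hypothetical sketch: quantifying post-editing effort for gender translation.
# Technical effort is approximated as a word-level HTER-style edit rate;
# temporal effort is converted to cost via an assumed hourly rate.

def edit_distance(hyp: list[str], ref: list[str]) -> int:
    """Word-level Levenshtein distance between MT output and post-edit."""
    m, n = len(hyp), len(ref)
    dp = list(range(n + 1))  # row for the empty hypothesis prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the previous row's dp[j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + cost)    # substitution or match
            prev = cur
    return dp[n]

def technical_effort(mt_output: str, post_edit: str) -> float:
    """HTER-style edit rate: edits normalized by post-edited length."""
    hyp, ref = mt_output.split(), post_edit.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)

def financial_cost(seconds: float, hourly_rate: float = 30.0) -> float:
    """Temporal effort converted to cost; the rate is an assumption."""
    return seconds / 3600.0 * hourly_rate

# Toy example: the edit needed to correct a masculine default to feminine.
fem = technical_effort("The doctor said he would come",
                       "The doctor said she would come")
print(f"feminine edit rate: {fem:.2f}")  # ~0.17 (1 edit over 6 words)
```

Aggregating per-segment scores like these separately over feminine and masculine references is one way the quality-of-service gap the study reports could surface in practice: if feminine targets systematically require more edits and time, their edit rates and costs will be consistently higher.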
