
People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text

January 26, 2025
Authors: Jenna Russell, Marzena Karpinska, Mohit Iyyer
cs.AI

Abstract

In this paper, we study how well humans can detect text generated by commercial LLMs (GPT-4o, Claude, o1). We hire annotators to read 300 non-fiction English articles, label them as either human-written or AI-generated, and provide paragraph-length explanations for their decisions. Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such "expert" annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts' free-form explanations shows that while they rely heavily on specific lexical clues ('AI vocabulary'), they also pick up on more complex phenomena within the text (e.g., formality, originality, clarity) that are challenging to assess for automatic detectors. We release our annotated dataset and code to spur future research into both human and automated detection of AI-generated text.
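The paper's headline result comes from aggregating five expert annotators by majority vote. A minimal sketch of that aggregation step, with illustrative (not actual) annotator labels:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among a set of annotator labels."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical labels from five expert annotators for one article
annotator_labels = ["AI", "AI", "human", "AI", "AI"]
print(majority_vote(annotator_labels))  # → AI
```

With an odd number of annotators and binary labels (human-written vs. AI-generated), a majority always exists, which is presumably why five annotators are used.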
