OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain
December 17, 2024
Authors: Shuting Wang, Jiejun Tan, Zhicheng Dou, Ji-Rong Wen
cs.AI
Abstract
As a typical and practical application of Large Language Models (LLMs),
Retrieval-Augmented Generation (RAG) techniques have gained extensive
attention, particularly in vertical domains where LLMs may lack domain-specific
knowledge. In this paper, we introduce an omnidirectional and automatic RAG
benchmark, OmniEval, in the financial domain. Our benchmark is characterized by
its multi-dimensional evaluation framework, including (1) a matrix-based RAG
scenario evaluation system that categorizes queries into five task classes and
16 financial topics, leading to a structured assessment of diverse query
scenarios; (2) a multi-dimensional evaluation data generation approach, which
combines GPT-4-based automatic generation and human annotation, achieving an
87.47% acceptance ratio in human evaluations on generated instances; (3) a
multi-stage evaluation system that evaluates both retrieval and generation
performance, resulting in a comprehensive evaluation of the RAG pipeline; and (4)
robust evaluation metrics derived from rule-based and LLM-based ones, enhancing
the reliability of assessments through manual annotations and supervised
fine-tuning of an LLM evaluator. Our experiments demonstrate the
comprehensiveness of OmniEval, which includes extensive test datasets and
highlights the performance variations of RAG systems across diverse topics and
tasks, revealing significant opportunities for RAG models to improve their
capabilities in vertical domains. We open-source the code of our benchmark at
https://github.com/RUC-NLPIR/OmniEval.
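To make the matrix-based scenario evaluation concrete, here is a minimal sketch of how evaluation instances could be bucketed into a task-class-by-topic matrix and aggregated per cell. The class names and field layout below are illustrative assumptions, not the benchmark's actual API; OmniEval defines five task classes and 16 financial topics.

```python
# Hedged sketch (not OmniEval's actual API): organize evaluation instances
# into a (task class, financial topic) matrix and report mean scores per cell.
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical label sets; the real benchmark defines 5 task classes and 16 topics.
TASK_CLASSES = ["extractive_qa", "multi_hop_reasoning", "contrast_qa",
                "long_form_qa", "conversational_qa"]      # assumed names
TOPICS = [f"topic_{i:02d}" for i in range(16)]            # placeholder topics

@dataclass
class EvalInstance:
    task: str      # one of TASK_CLASSES
    topic: str     # one of TOPICS
    score: float   # metric value from the retrieval or generation stage

def scenario_matrix(instances):
    """Aggregate scores into a task-by-topic matrix of mean performance."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for inst in instances:
        key = (inst.task, inst.topic)
        sums[key] += inst.score
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Usage example: average two instances falling in the same matrix cell.
cells = scenario_matrix([
    EvalInstance("extractive_qa", "topic_00", 0.72),
    EvalInstance("extractive_qa", "topic_00", 0.64),
])
print(cells[("extractive_qa", "topic_00")])  # 0.68
```

Reporting results per cell rather than as a single average is what lets the benchmark surface the performance variations across topics and tasks mentioned above.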