OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain
December 17, 2024
Authors: Shuting Wang, Jiejun Tan, Zhicheng Dou, Ji-Rong Wen
cs.AI
Abstract
As a typical and practical application of Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) techniques have gained extensive attention, particularly in vertical domains where LLMs may lack domain-specific knowledge. In this paper, we introduce OmniEval, an omnidirectional and automatic RAG benchmark for the financial domain. Our benchmark is characterized by its multi-dimensional evaluation framework, including (1) a matrix-based RAG scenario evaluation system that categorizes queries into five task classes and 16 financial topics, leading to a structured assessment of diverse query scenarios; (2) a multi-dimensional evaluation data generation approach that combines GPT-4-based automatic generation with human annotation, achieving an 87.47% acceptance ratio in human evaluations of generated instances; (3) a multi-stage evaluation system that assesses both retrieval and generation performance, resulting in a comprehensive evaluation of the RAG pipeline; and (4) robust evaluation metrics derived from rule-based and LLM-based ones, enhancing the reliability of assessments through manual annotations and supervised fine-tuning of an LLM evaluator. Our experiments demonstrate the comprehensiveness of OmniEval, which includes extensive test datasets and highlights the performance variations of RAG systems across diverse topics and tasks, revealing significant opportunities for RAG models to improve their capabilities in vertical domains. We open-source the code of our benchmark at https://github.com/RUC-NLPIR/OmniEval.
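To make the matrix-based scenario layout and the multi-stage evaluation concrete, here is a minimal sketch of how such a benchmark could be organized and scored. This is not the official OmniEval code: the task-class and topic names, the `EvalInstance` structure, and the `retrieve`/`generate`/`judge` callables are hypothetical placeholders, assuming only what the abstract states (five task classes, 16 financial topics, retrieval metrics followed by generation metrics aggregated per task-topic cell).

```python
# Minimal sketch (not the official OmniEval implementation).
# Illustrates a task x topic evaluation matrix and a two-stage
# (retrieval + generation) scoring loop; all names are placeholders.
from dataclasses import dataclass
from typing import Callable

# Hypothetical examples standing in for the 5 task classes and 16 financial topics.
TASK_CLASSES = ["extractive_qa", "multi_hop_reasoning", "contrast_qa",
                "long_form_qa", "conversational_qa"]
TOPICS = ["funds", "stocks", "insurance", "futures"]  # ... 12 more in the benchmark


@dataclass
class EvalInstance:
    task: str                  # one of the task classes
    topic: str                 # one of the financial topics
    query: str
    relevant_doc_ids: set[str] # gold evidence documents
    reference_answer: str      # gold answer for generation scoring


def evaluate(instances: list[EvalInstance],
             retrieve: Callable[[str, int], list[str]],
             generate: Callable[[str, list[str]], str],
             judge: Callable[[str, str], float],
             k: int = 5) -> dict[tuple[str, str], dict[str, float]]:
    """Aggregate retrieval (recall@k) and generation (rule-based or
    LLM-judge) scores per (task, topic) cell of the evaluation matrix."""
    cells: dict[tuple[str, str], dict[str, list[float]]] = {}
    for inst in instances:
        # Stage 1: retrieval quality against the gold evidence set.
        doc_ids = retrieve(inst.query, k)
        recall = (len(set(doc_ids) & inst.relevant_doc_ids)
                  / max(len(inst.relevant_doc_ids), 1))
        # Stage 2: generation quality of the answer produced from retrieved docs.
        answer = generate(inst.query, doc_ids)
        score = judge(answer, inst.reference_answer)
        cell = cells.setdefault((inst.task, inst.topic),
                                {"recall@k": [], "gen_score": []})
        cell["recall@k"].append(recall)
        cell["gen_score"].append(score)
    # Average each metric within every matrix cell.
    return {key: {m: sum(v) / len(v) for m, v in metrics.items()}
            for key, metrics in cells.items()}
```

Reporting results per (task, topic) cell, rather than a single aggregate number, is what exposes the performance variations across topics and tasks that the abstract highlights.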