Multi-expert Prompting Improves Reliability, Safety, and Usefulness of Large Language Models
November 1, 2024
Authors: Do Xuan Long, Duong Ngoc Yen, Anh Tuan Luu, Kenji Kawaguchi, Min-Yen Kan, Nancy F. Chen
cs.AI
Abstract
We present Multi-expert Prompting, a novel enhancement of ExpertPrompting (Xu
et al., 2023), designed to improve large language model (LLM) generation.
Specifically, it guides an LLM to fulfill an input instruction by simulating
multiple experts, aggregating their responses, and selecting the best among
individual and aggregated responses. This process is performed in a single
chain of thought through our seven carefully designed subtasks derived from
the Nominal Group Technique (Ven and Delbecq, 1974), a well-established
decision-making framework. Our evaluations demonstrate that Multi-expert
Prompting significantly outperforms ExpertPrompting and comparable baselines in
enhancing the truthfulness, factuality, informativeness, and usefulness of
responses while reducing toxicity and hurtfulness. It further achieves
state-of-the-art truthfulness by outperforming the best baseline by 8.69% with
ChatGPT. Multi-expert Prompting is efficient, explainable, and highly adaptable
to diverse scenarios, eliminating the need for manual prompt construction.
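The flow the abstract describes — simulate several experts, aggregate their answers, then select the best among the individual and aggregated responses — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for a real LLM API, and the seven NGT-derived subtasks are collapsed into single aggregation and selection prompts.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical LLM call; in practice, replace this stub with a real
    # chat-model API (the paper evaluates with ChatGPT, among others).
    return f"[LLM response to: {prompt[:40]}...]"

def multi_expert_prompting(instruction: str, n_experts: int = 3) -> str:
    # 1. Ask the model to propose expert identities suited to the instruction.
    experts = [
        call_llm(f"Name expert #{i + 1} best suited to answer: {instruction}")
        for i in range(n_experts)
    ]
    # 2. Simulate each expert answering the instruction.
    answers = [call_llm(f"As {e}, answer: {instruction}") for e in experts]
    # 3. Aggregate the expert answers. In the paper this happens through
    #    seven subtasks derived from the Nominal Group Technique, performed
    #    in a single chain of thought; here it is one combined prompt.
    aggregated = call_llm(
        "Combine these expert answers into one consensus answer:\n"
        + "\n".join(answers)
    )
    # 4. Select the best response among individual and aggregated answers.
    candidates = answers + [aggregated]
    return call_llm(
        f"For the instruction '{instruction}', select the best response:\n"
        + "\n".join(candidates)
    )
```

The key design point carried over from the abstract is step 4: the aggregated answer competes with the individual expert answers rather than automatically winning, so a single strong expert response can still be chosen.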