Hogwild! Inference: Parallel LLM Generation via Concurrent Attention
April 8, 2025
Authors: Gleb Rodionov, Roman Garipov, Alina Shutova, George Yakushev, Vage Egiazarian, Anton Sinitsin, Denis Kuznedelev, Dan Alistarh
cs.AI
Abstract
Large Language Models (LLMs) have demonstrated the ability to tackle
increasingly complex tasks through advanced reasoning, long-form content
generation, and tool use. Solving these tasks often involves long
inference-time computations. In human problem solving, a common strategy to
expedite work is collaboration: dividing the problem into sub-tasks, exploring
different strategies concurrently, and so on. Recent research has shown
that LLMs can also operate in parallel by implementing explicit cooperation
frameworks, such as voting mechanisms or the explicit creation of independent
sub-tasks that can be executed in parallel. However, each of these frameworks
may not be suitable for all types of tasks, which can hinder their
applicability. In this work, we propose a different design approach: we run LLM
"workers" in parallel , allowing them to synchronize via a concurrently-updated
attention cache and prompt these workers to decide how best to collaborate. Our
approach allows the instances to come up with their own collaboration strategy
for the problem at hand, all the while "seeing" each other's partial progress
in the concurrent cache. We implement this approach via Hogwild! Inference: a
parallel LLM inference engine where multiple instances of the same LLM run in
parallel with the same attention cache, with "instant" access to each other's
generated tokens. Hogwild! Inference takes advantage of Rotary Position
Embeddings (RoPE) to avoid recomputation while improving parallel hardware
utilization. We find that modern reasoning-capable LLMs can perform inference
with a shared Key-Value cache out of the box, without additional fine-tuning.
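To make the shared-cache idea concrete, here is a minimal sketch (not the authors' implementation; the class names, dimensions, toy single-head attention, and round-robin schedule are all illustrative assumptions). It shows two "workers" that each append key/value entries to one common cache and, at every step, attend over everything currently in it, so each worker immediately "sees" the other's progress. The actual Hogwild! Inference engine additionally relies on RoPE so cached entries can be reused across workers' differing cache layouts without recomputation, which this toy example does not model.

```python
# Illustrative sketch of parallel workers sharing one attention cache.
# All names and dimensions are assumptions for illustration only.
import numpy as np

HEAD_DIM = 16
rng = np.random.default_rng(0)


class SharedKVCache:
    """A single attention cache shared by all workers (append-only here)."""

    def __init__(self):
        self.keys = []    # list of (HEAD_DIM,) key vectors
        self.values = []  # list of (HEAD_DIM,) value vectors

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def snapshot(self):
        # Each worker reads whatever is in the cache *right now*.
        return np.stack(self.keys), np.stack(self.values)


def attend(query, keys, values):
    """Toy single-head scaled dot-product attention over the shared cache."""
    scores = keys @ query / np.sqrt(HEAD_DIM)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values


class Worker:
    """Stand-in for one LLM instance; emits one random K/V pair per step."""

    def __init__(self, name, cache):
        self.name, self.cache = name, cache

    def step(self):
        if self.cache.keys:                      # read the others' progress
            keys, values = self.cache.snapshot()
            _context = attend(rng.normal(size=HEAD_DIM), keys, values)
        new_k = rng.normal(size=HEAD_DIM)        # "generate" one token
        new_v = rng.normal(size=HEAD_DIM)
        self.cache.append(new_k, new_v)          # make it visible to everyone


cache = SharedKVCache()
workers = [Worker("A", cache), Worker("B", cache)]
for _ in range(3):                               # interleaved decoding steps
    for w in workers:
        w.step()
print(f"shared cache now holds {len(cache.keys)} tokens from both workers")
```

In this sketch the workers are interleaved sequentially for simplicity; the point is only that both write into, and read from, the same cache rather than maintaining separate contexts.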