
Learning the Latent Rules of a Game from Data: A Chess Story

October 3, 2024
Author: Ben Fauber
cs.AI

Abstract

We demonstrate that small pretrained foundational generative language models with millions of parameters can learn the latent rules of a process from data associated with the process. Inspired by Stefan Zweig's novella "Schachnovelle," also known as "The Royal Game" in English, we show that 28M and 125M parameter pretrained foundational small language models (SLMs) can be instruction fine-tuned with 1,000-to-1,000,000 examples to learn the rules of chess, propose legal moves, and accurately solve chess problems. We also explore the impact of successive language model fine-tuning epochs on improved outcomes and demonstrate reductions in model hallucinations by increasing the number of instruction fine-tuning examples.
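The paper does not provide code in this abstract, but as a rough, hypothetical sketch of how instruction fine-tuning examples of this kind could be assembled, the snippet below generates (FEN position, legal move) records with the python-chess library. The instruction/input/output JSON format, the random-position sampling scheme, and the output file name are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: building chess instruction fine-tuning records.
# Assumes the python-chess library; the prompt/target format is illustrative,
# not the authors' actual data pipeline.
import json
import random

import chess


def make_example(board):
    """Build one instruction-tuning record from a chess position."""
    move = random.choice(list(board.legal_moves))  # any legal move as the target
    return {
        "instruction": "Given the chess position in FEN, propose one legal move in UCI notation.",
        "input": board.fen(),
        "output": move.uci(),
    }


def generate_dataset(n_examples, max_plies=40):
    """Sample positions from random games to create n_examples records."""
    records = []
    while len(records) < n_examples:
        board = chess.Board()
        # Play a random number of random plies to reach a mid-game position.
        for _ in range(random.randint(1, max_plies)):
            if board.is_game_over():
                break
            board.push(random.choice(list(board.legal_moves)))
        if not board.is_game_over():
            records.append(make_example(board))
    return records


if __name__ == "__main__":
    dataset = generate_dataset(1_000)  # the paper scales from 1,000 up to 1,000,000 examples
    with open("chess_instruction_data.jsonl", "w") as f:
        for rec in dataset:
            f.write(json.dumps(rec) + "\n")

Records in this shape could then be fed to any standard instruction fine-tuning loop; the paper's own prompt wording and training setup may differ.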
