

Efficient Model Selection for Time Series Forecasting via LLMs

April 2, 2025
Authors: Wang Wei, Tiankai Yang, Hongjie Chen, Ryan A. Rossi, Yue Zhao, Franck Dernoncourt, Hoda Eldardiry
cs.AI

Abstract

Model selection is a critical step in time series forecasting, traditionally requiring extensive performance evaluations across various datasets. Meta-learning approaches aim to automate this process, but they typically depend on pre-constructed performance matrices, which are costly to build. In this work, we propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs. Through extensive experiments with LLaMA, GPT and Gemini, we demonstrate that our approach outperforms traditional meta-learning techniques and heuristic baselines, while significantly reducing computational overhead. These findings underscore the potential of LLMs in efficient model selection for time series forecasting.
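The abstract's core idea, querying an LLM with dataset characteristics instead of consulting a pre-built performance matrix, can be sketched as follows. The candidate model pool, prompt wording, and response parsing here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of LLM-based model selection for time series
# forecasting. The candidate list, prompt template, and parser are
# assumptions for illustration; the paper's own pipeline may differ.

CANDIDATES = ["ARIMA", "ETS", "Prophet", "DeepAR", "N-BEATS"]

def build_prompt(metadata: dict) -> str:
    """Describe the dataset and ask the LLM to pick one forecasting model."""
    lines = [f"- {key}: {value}" for key, value in metadata.items()]
    return (
        "You are a time series forecasting expert.\n"
        "Dataset characteristics:\n" + "\n".join(lines) + "\n"
        f"Choose the best model from {CANDIDATES}. Answer with the name only."
    )

def parse_choice(response: str) -> str:
    """Return the first candidate model mentioned in the LLM's reply."""
    for model in CANDIDATES:
        if model.lower() in response.lower():
            return model
    raise ValueError("No known model named in response")

meta = {"frequency": "hourly", "length": 8760, "seasonality": "daily + weekly"}
prompt = build_prompt(meta)
# In practice, `prompt` would be sent to LLaMA, GPT, or Gemini; here we
# simulate a reply to show the parsing step.
choice = parse_choice("Given the strong seasonality, I would pick DeepAR.")
```

Because the prompt is built once per dataset from cheap metadata, no performance matrix (i.e., no evaluation of every candidate on every dataset) is ever constructed, which is the source of the computational savings the abstract claims.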

