BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery
January 2, 2025
Authors: Kanishk Gandhi, Michael Y. Li, Lyle Goodyear, Louise Li, Aditi Bhaskar, Mohammed Zaman, Noah D. Goodman
cs.AI
Abstract
Understanding the world and explaining it with scientific theories is a central aspiration of artificial intelligence research. Proposing theories, designing experiments to test them, and then revising them based on data are fundamental to scientific discovery. Despite the significant promise of LLM-based scientific agents, no benchmark systematically tests LLMs' ability to propose scientific models, collect experimental data, and revise those models in light of new data. We introduce BoxingGym, a benchmark with 10 environments for systematically evaluating both experimental design (e.g., collecting data to test a scientific theory) and model discovery (e.g., proposing and revising scientific theories). To enable tractable and quantitative evaluation, we implement each environment as a generative probabilistic model with which a scientific agent can run interactive experiments. These probabilistic models are drawn from real-world scientific domains ranging from psychology to ecology. To quantitatively evaluate a scientific agent's ability to collect informative experimental data, we compute the expected information gain (EIG), an information-theoretic quantity that measures how much an experiment reduces uncertainty about the parameters of a generative model. A good scientific theory is a concise and predictive explanation. Therefore, to quantitatively evaluate model discovery, we ask a scientific agent to explain its model and then assess whether this explanation enables another scientific agent to make reliable predictions about the environment. In addition to this explanation-based evaluation, we compute standard model evaluation metrics such as prediction errors. We find that current LLMs, such as GPT-4o, struggle with both experimental design and model discovery, and that augmenting the LLM-based agent with an explicit statistical model does not reliably improve these results.