Open Problems in Mechanistic Interpretability
January 27, 2025
Authors: Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeff Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Bloom, Stella Biderman, Adria Garriga-Alonso, Arthur Conmy, Neel Nanda, Jessica Rumbelow, Martin Wattenberg, Nandi Schoots, Joseph Miller, Eric J. Michaud, Stephen Casper, Max Tegmark, William Saunders, David Bau, Eric Todd, Atticus Geiger, Mor Geva, Jesse Hoogland, Daniel Murfet, Tom McGrath
cs.AI
Abstract
Mechanistic interpretability aims to understand the computational mechanisms
underlying neural networks' capabilities in order to accomplish concrete
scientific and engineering goals. Progress in this field thus promises to
provide greater assurance over AI system behavior and shed light on exciting
scientific questions about the nature of intelligence. Despite recent progress
toward these goals, there are many open problems in the field that require
solutions before many scientific and practical benefits can be realized: Our
methods require both conceptual and practical improvements to reveal deeper
insights; we must figure out how best to apply our methods in pursuit of
specific goals; and the field must grapple with socio-technical challenges that
influence and are influenced by our work. This forward-facing review discusses
the current frontier of mechanistic interpretability and the open problems that
the field may benefit from prioritizing.