Building Trust: Foundations of Security, Safety and Transparency in AI
November 19, 2024
Authors: Huzaifa Sidhpurwala, Garth Mollett, Emily Fox, Mark Bestavros, Huamin Chen
cs.AI
Abstract
This paper explores the rapidly evolving ecosystem of publicly available AI
models and their potential implications for the security and safety landscape.
As AI models become increasingly prevalent, understanding their potential risks
and vulnerabilities is crucial. We review the current security and safety
scenarios while highlighting challenges such as tracking issues, remediation,
and the apparent absence of AI model lifecycle and ownership processes.
Comprehensive strategies to enhance security and safety for both model
developers and end-users are proposed. This paper aims to provide some of the
foundational pieces for more standardized security, safety, and transparency in
the development and operation of AI models and the larger open ecosystems and
communities forming around them.