Building Trust: Foundations of Security, Safety and Transparency in AI

November 19, 2024
Authors: Huzaifa Sidhpurwala, Garth Mollett, Emily Fox, Mark Bestavros, Huamin Chen
cs.AI

Abstract

This paper explores the rapidly evolving ecosystem of publicly available AI models and their potential implications for the security and safety landscape. As AI models become increasingly prevalent, understanding their potential risks and vulnerabilities is crucial. We review the current security and safety scenarios while highlighting challenges such as tracking issues, remediation, and the apparent absence of AI model lifecycle and ownership processes. Comprehensive strategies to enhance security and safety for both model developers and end-users are proposed. This paper aims to provide some of the foundational pieces for more standardized security, safety, and transparency in the development and operation of AI models and the larger open ecosystems and communities forming around them.
