Auditing Prompt Caching in Language Model APIs

February 11, 2025
Authors: Chenchen Gu, Xiang Lisa Li, Rohith Kuditipudi, Percy Liang, Tatsunori Hashimoto
cs.AI

Abstract

Prompt caching in large language models (LLMs) results in data-dependent timing variations: cached prompts are processed faster than non-cached prompts. These timing differences introduce the risk of side-channel timing attacks. For example, if the cache is shared across users, an attacker could identify cached prompts from fast API response times to learn information about other users' prompts. Because prompt caching may cause privacy leakage, transparency around the caching policies of API providers is important. To this end, we develop and conduct statistical audits to detect prompt caching in real-world LLM API providers. We detect global cache sharing across users in seven API providers, including OpenAI, resulting in potential privacy leakage about users' prompts. Timing variations due to prompt caching can also result in leakage of information about model architecture. Namely, we find evidence that OpenAI's embedding model is a decoder-only Transformer, which was previously not publicly known.
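
To make the audit idea concrete, here is a minimal sketch of a timing-based caching audit in Python. The endpoint URL, payload shape, model name, and sample sizes are illustrative assumptions, not the authors' exact procedure, and the one-sided Mann-Whitney U test is just one plausible choice of test statistic for comparing the two latency distributions.

```python
# Sketch of a statistical audit for prompt caching: compare API latencies for
# prompts that reuse a long (possibly cached) prefix against prompts with
# fresh, unique prefixes of similar length. All endpoint details are hypothetical.
import random
import time

import requests  # pip install requests
from scipy.stats import mannwhitneyu  # pip install scipy

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}


def time_request(prompt: str) -> float:
    """Send one request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    requests.post(
        API_URL,
        headers=HEADERS,
        json={"model": "example-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    return time.perf_counter() - start


def audit(shared_prefix: str, n: int = 25) -> float:
    """Return a one-sided p-value for 'prefix-sharing prompts are faster'."""
    # Warm the cache: after this call the provider may have cached the prefix.
    time_request(shared_prefix + " warmup")

    hit_times, miss_times = [], []
    for i in range(n):
        # Candidate cache hit: reuses the shared prefix.
        hit_times.append(time_request(shared_prefix + f" variant {i}"))
        # Candidate cache miss: a unique random prefix of similar length.
        fresh = " ".join(str(random.random())
                         for _ in range(len(shared_prefix.split())))
        miss_times.append(time_request(fresh + f" variant {i}"))

    # One-sided test: are "hit" latencies stochastically smaller than "miss"?
    _, p_value = mannwhitneyu(hit_times, miss_times, alternative="less")
    return p_value


if __name__ == "__main__":
    prefix = ("word " * 500).strip()  # caching speedups grow with prefix length
    p = audit(prefix)
    print(f"p-value for 'cached prompts are faster': {p:.4g}")
    # A small p-value is evidence of prompt caching; if the cache-warming
    # request came from a different account, it suggests cross-user sharing.
```

This sketch interleaves hit and miss requests in a fixed order for brevity; a more careful audit would randomize request order and repeat trials to control for drifting server load.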
