

Watermark Anything with Localized Messages

November 11, 2024
Authors: Tom Sander, Pierre Fernandez, Alain Durmus, Teddy Furon, Matthijs Douze
cs.AI

Abstract

Image watermarking methods are not tailored to handle small watermarked areas. This restricts applications in real-world scenarios where parts of the image may come from different sources or have been edited. We introduce a deep-learning model for localized image watermarking, dubbed the Watermark Anything Model (WAM). The WAM embedder imperceptibly modifies the input image, while the extractor segments the received image into watermarked and non-watermarked areas and recovers one or several hidden messages from the areas found to be watermarked. The models are jointly trained at low resolution and without perceptual constraints, then post-trained for imperceptibility and multiple watermarks. Experiments show that WAM is competitive with state-of-the-art methods in terms of imperceptibility and robustness, especially against inpainting and splicing, even on high-resolution images. Moreover, it offers new capabilities: WAM can locate watermarked areas in spliced images and extract distinct 32-bit messages with less than 1 bit error from multiple small regions, no larger than 10% of the image surface, even for small 256×256 images.
