Investigating Style Similarity in Diffusion Models

ECCV 2024

Abstract
Generative models are now widely used by graphic designers and artists. Prior works have shown that these models tend to remember and often replicate content from their training data during generation. Hence, as their use proliferates, it has become important, before a generated image is put to professional use, to search the training database and determine whether the image's properties are attributable to specific training data. Existing tools for this purpose focus largely on retrieving images with similar semantic content. Meanwhile, many artists are concerned with the extent of style replication in text-to-image models. We present a framework for understanding and extracting style descriptors from images. Our framework comprises a new dataset curated using the insight that style is a subjective property of an image, capturing complex yet meaningful interactions of factors including, but not limited to, colors, textures, and shapes. We also propose a method to extract style descriptors that can be used to attribute the style of a generated image to the images in the training dataset of a text-to-image model. We show promising results on various style retrieval tasks, and we quantitatively and qualitatively analyze style attribution and matching in the Stable Diffusion model.
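The attribution pipeline the abstract describes reduces to a descriptor-and-nearest-neighbor pattern: embed each image with a style encoder, then rank training images by their similarity to a generated query. The sketch below illustrates only that pattern, not the paper's method; it substitutes a generic pretrained ResNet-50 backbone for the paper's learned style descriptor, and the function names (style_descriptor, attribute_style) are hypothetical.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Stand-in style encoder: a pretrained ResNet-50 with its classification
# head removed. This is an assumption for illustration; the paper trains
# a dedicated style descriptor rather than reusing a generic backbone.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def style_descriptor(path: str) -> torch.Tensor:
    """Embed one image; L2-normalize so dot products equal cosine similarity."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=-1).squeeze(0)

@torch.no_grad()
def attribute_style(query_path: str, gallery_paths: list[str], k: int = 5):
    """Rank training-set images by style similarity to a generated image."""
    q = style_descriptor(query_path)
    gallery = torch.stack([style_descriptor(p) for p in gallery_paths])
    scores = gallery @ q  # cosine similarities against the query
    top = scores.topk(min(k, len(gallery_paths)))
    return [(gallery_paths[i], s.item()) for i, s in zip(top.indices, top.values)]
```

Because the descriptors are L2-normalized, the matrix-vector product yields cosine similarities directly, so retrieval is a single top-k over the gallery; in practice the gallery embeddings would be precomputed and indexed once rather than re-encoded per query.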