
There's a tool doing the rounds called "Stable Attribution" which claims to be able to identify the images in the Stable Diffusion training set that were used to generate an image.

It's extremely misleading. All it does is show you images in the Stable Diffusion training set that are visually similar. You can prove this to yourself by running it against a photo you have taken yourself.

I took this photo. It was not generated by AI at all.

@simon Sounds like that’s not a useful test: twitter.com/atroyn/status/1622

@bitprophet I think they are being very disingenuous

I know how image diffusion models work. They don't take a dozen images and merge them together, which is what this tool implies.

The copy that says "these human-made source images were used by AI to generate this image" is plain wrong, and uploading your own non-AI image is a great way to illustrate that.

@simon @bitprophet I’m not sure what you mean, the reasoning behind this is correct. They use the open source LAION datasets that were used to train SD v1/2 and find the closest images to the uploaded image (it has to be generated by SD v1/2) in terms of CLIP embeddings.
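That kind of lookup is just a nearest-neighbor search in embedding space, which is why it returns visually similar images for any input, AI-generated or not. A minimal sketch of the idea (function names and the toy random vectors standing in for real CLIP embeddings are mine, not Stable Attribution's actual code):

```python
import numpy as np

def nearest_images(query_emb, dataset_embs, k=5):
    # Cosine similarity between the query embedding and every dataset embedding
    sims = dataset_embs @ query_emb / (
        np.linalg.norm(dataset_embs, axis=1) * np.linalg.norm(query_emb)
    )
    # Indices of the k most similar images, best first
    return np.argsort(-sims)[:k]

# Toy data: 1000 fake "CLIP embeddings" of dimension 512
rng = np.random.default_rng(0)
dataset = rng.normal(size=(1000, 512))

# A query that happens to sit very close to image 42 in embedding space
query = dataset[42] + rng.normal(scale=0.01, size=512)

print(nearest_images(query, dataset, k=3))
```

Nothing in this search knows anything about provenance: it ranks images by visual/semantic similarity of embeddings, so it will happily return "source images" for a photo that was never near the model.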