One of the big claims in the class-action lawsuit against Stability AI is that Stable Diffusion in some way contains all its training data, and is therefore a derivative work in its own right:
Because a trained diffusion model can produce a copy of any of its Training Images—which could number in the billions—the diffusion model can be considered an alternative way of storing a copy of those images. In essence, it’s similar to having a directory on your computer of billions of JPEG image files. But the diffusion model uses statistical and mathematical methods to store these images in an even more efficient and compressed manner.
Case 3:23-cv-00201 Document 1, p. 75(c)
That description really misses how Stable Diffusion works. Every generated image is the product of three things:
- Model: In SD’s case the model is a fairly standard neural net architecture that was trained using the LAION-5B training set. The training set includes about 5.85 billion images (about 240 terabytes), but the model itself is only 5.2 gigabytes, which works out to less than one byte per training image, far too little to be storing compressed copies. Artists will also fine-tune existing models with their own additional training sets.
- Artist guidance (text prompt, hyperparameters, workflow, etc.): Any text (and even no text) will produce an image, but the artists who are getting good results from SD often spend hours figuring out complex prompts to get exactly what they want. There are also numerous hyperparameters, more advanced workflows using in-painting, and bootstrapping techniques using img2img.
- Randomness: All generated images start from a field of random noise that bootstraps the process. This randomness is usually specified by a large random number called the seed, and artists will often produce tens or hundreds of random variations to get something that matches their vision. (The short sketch after this list shows how these three ingredients fit together.)
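To make the three ingredients concrete, here’s a minimal sketch using the Hugging Face diffusers library; the model ID, prompt, and parameter values are illustrative placeholders rather than anything from the lawsuit or the LAION pipeline:

```python
# Minimal sketch of the three ingredients behind a Stable Diffusion generation,
# using the Hugging Face diffusers library. The model ID, prompt, and parameter
# values below are placeholders for illustration, not a specific recipe.
import torch
from diffusers import StableDiffusionPipeline

# 1. Model: a ~5 GB checkpoint distilled from billions of training images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# 2. Artist guidance: the text prompt plus hyperparameters.
prompt = "a lighthouse at dusk, oil painting"
guidance_scale = 7.5        # how strongly the image follows the prompt
num_inference_steps = 50    # number of denoising steps

# 3. Randomness: the seed determines the initial noise the process starts from.
seed = 1234
generator = torch.Generator("cuda").manual_seed(seed)

image = pipe(
    prompt,
    guidance_scale=guidance_scale,
    num_inference_steps=num_inference_steps,
    generator=generator,
).images[0]
image.save("lighthouse.png")

# Re-running with the same model, prompt, hyperparameters, and seed reproduces
# the same image; changing only the seed yields a new variation.
```

Holding all of these inputs fixed reproduces the same image; changing just the seed (or just the prompt) moves you to a different point in the space described next.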
These three elements define the space of all possible generated images, and in theory (and often in practice) if you know the model, prompt, and seed used to generate an image you can recreate it exactly. But even though all three elements contribute to a generation, the amount of contribution can vary. Run SD without a text prompt and you get a wide range of almost-images, from uncanny-valley selfies and half-melted cityscapes to architecture that appears to incorporate lawn furniture in its window dressings. In this part of the space it’s almost like wandering through Borges’ fictional Library of Babel, which houses books containing every possible combination of letters, except in this case the output is usually at least somewhat recognizable. In another part of the space are the gorgeous images generated by artists who spent countless hours honing their prompts, hyperparameters, choice of models, and workflows. And while I seriously doubt the complaint’s claim that you can generate any arbitrary instance from the 5.85 billion training images, it is true that some training images can be generated reliably with just a simple prompt. This is called memorization in the machine learning field (essentially a form of overfitting), and it’s more likely to happen for images that are duplicated many times in the training set, or when there are only a few images that fit a particular part of the image space. For example, here’s the original Starry Night alongside one of the four images I got from stablediffusionweb.com when I used the prompt “Starry Night”:
It’s not just public-domain art: the authors of a recent paper (not yet peer reviewed) generated images with SD using prompts from the original training set and were able to produce a significant number of images that showed at least some copying from other training images (interestingly enough, the copied images were often not the ones that had the caption used in the prompt). Some of these images are generic enough that I’m not sure I’d call it “copying” (e.g. a close-up of a tiger’s face, or a clip-art map of the USA), but others clearly duplicate a training image that was probably replicated many times in the training set. For example, the prompt “Captain Marvel Poster” generates images that are all very similar to the poster advertising the 2019 movie, and of course prompting with the name of your favorite comic book superhero will generate images that, if published, would almost certainly constitute copyright infringement. I expect the plaintiffs will focus mainly on this part of the space.
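For intuition on what “at least some copying” means in practice, here’s a toy sketch of a memorization check: generate from a training caption, then flag near-duplicates with a perceptual hash. This is not the paper’s actual detection method, and the imagehash package, file paths, and threshold are assumptions made purely for illustration:

```python
# Toy memorization check: compare a generated image against candidate training
# images using a perceptual hash. This is NOT the paper's method, just a
# simplified illustration; the imagehash package, paths, and threshold are
# assumptions made for this sketch.
from PIL import Image
import imagehash

def near_duplicates(generated_path, training_paths, max_distance=8):
    """Return (path, distance) for training images perceptually close to the generated one."""
    gen_hash = imagehash.phash(Image.open(generated_path))
    matches = []
    for path in training_paths:
        distance = gen_hash - imagehash.phash(Image.open(path))  # Hamming distance
        if distance <= max_distance:
            matches.append((path, distance))
    return matches

# Hypothetical usage: check a "Captain Marvel Poster" generation against a few
# candidate training images.
print(near_duplicates("generated_poster.png",
                      ["train/poster_001.jpg", "train/poster_002.jpg"]))
```

A crude hash check like this only flags near-identical copies; detecting the partial, object-level copying the paper describes takes more robust similarity measures.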
Philosophers can (and lawyers will) argue whether the fact that SD can produce copyrighted works means the model itself somehow contains derivative works. In the meantime companies are already working on models that are trained only on “clean” training sets comprising public domain and explicitly licensed works, both as a hedge against whatever the courts decide and because commercial artists are very reluctant to use a tool that might put the copyright status of their own works in doubt. And since “style” isn’t copyrightable, model providers can always hire artists to paint original images “in the style of <foo>”, which are then used for training under a generic (or even proprietary) style name of their own.
I’m no lawyer, but it seems like the most straightforward decision would be to treat the AI the same way you would treat a human collaborator. So, as with a human assistant, you can train on copyrighted works and can generate new works in the same style as copyrighted works, but if a generated work shows substantial similarity to a protected work that was either known to the tool user or present in the training set, then that’s infringement. That way all our existing understanding of copyright and fair use (such as it is) stays in place, and tool-makers have an incentive to put in safeguards against inadvertent copyright infringement, but ultimately the responsibility lies with the artist using the tool.
On the flip side, the plaintiffs’ theory that AI image generators are simply “complex collage tools” that produce derivative works raises a whole host of follow-up questions. For example, if a model contains both public domain and copyrighted works, what are the criteria for which training images a generated image derives from? If a model itself is a derivative work, does that mean a model trained only on licensed material is itself protected by copyright? (This is probably not currently the case, though it hasn’t been tested in court as far as I know.) If all generated images are derivative works, does that mean the owner of an AI model trained on licensed material has a claim on every image the model produces? And perhaps most importantly, is there any way to “remove” a work from a model without having to retrain from scratch, e.g. when publishing a model in a country with different copyright restrictions than where it was trained, or where the ownership of a particular image is challenged?