Image courtesy of the Technical University of Darmstadt.

Artists urgently need stronger defences to protect their work from being used to train AI models without their consent.
So say a team of researchers who have uncovered significant weaknesses in Glaze and NightShade, two of the most widely used art protection tools.
The tools are popular with digital artists who want to stop artificial intelligence models (such as the AI art generator Stable Diffusion) from copying their unique styles. Together, Glaze and NightShade have been downloaded almost nine million times.
But according to an international group of researchers, these tools have critical weaknesses that mean they cannot reliably stop AI models from training on artists' work.
"With our work, we hope to highlight the urgent need for more resilient, artist-centered protection strategies."
Hanna Foerster, PhD student
The tools add subtle, invisible distortions (known as poisoning perturbations) to digital images. These 'poisons' are designed to confuse AI models during training. Glaze takes a passive approach, hindering the AI model's ability to extract key stylistic features. NightShade goes further, actively corrupting the learning process by causing the AI model to associate an artist's style with unrelated concepts.
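To make the idea concrete, here is a minimal sketch of what adding a bounded, near-invisible perturbation to an image looks like in code. It is only an illustration under stated assumptions: the function name is hypothetical and the perturbation is random, whereas Glaze and NightShade carefully optimise their perturbations against AI models rather than drawing them at random.

```python
# Illustrative sketch only, NOT the Glaze/NightShade algorithms:
# add a small, bounded perturbation to an image so that it looks
# essentially unchanged to a human viewer.
import numpy as np

def add_poison_perturbation(image: np.ndarray, epsilon: float = 4.0) -> np.ndarray:
    """Return a copy of `image` with a small per-pixel change (at most `epsilon`).

    Real protection tools optimise the perturbation to mislead AI training;
    here it is drawn at random purely to show the 'small, bounded change' idea.
    """
    perturbation = np.random.uniform(-epsilon, epsilon, size=image.shape)
    poisoned = np.clip(image.astype(np.float64) + perturbation, 0, 255)
    return poisoned.astype(np.uint8)

# Example: perturb a dummy 64x64 RGB image and confirm the change stays small.
original = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
poisoned = add_poison_perturbation(original)
print(np.abs(poisoned.astype(int) - original.astype(int)).max())  # at most epsilon
```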
But the researchers have created a method, called LightShed, that can bypass these protections. LightShed can detect, reverse-engineer and remove these distortions, effectively stripping away the protections and rendering the images usable once again for generative AI model training.
It was developed by a PhD student here along with colleagues at the Technical University of Darmstadt and the University of Texas at San Antonio. The work will be presented at the USENIX Security Symposium, a major security conference, in August. By publicising it, the researchers hope to let creatives know that there are major issues with art protection tools.
Detecting, reverse-engineering and removing image protections
LightShed works through a three-step process. First, it identifies whether an image has been altered with known poisoning techniques. Second, in a reverse-engineering step, it learns the characteristics of the perturbations from publicly available poisoned examples. Finally, it eliminates the poison, restoring the image to its original, unprotected form.
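As a rough outline of how such a detect, reverse-engineer and remove pipeline fits together, here is a hypothetical sketch. The function names, the classifier and estimator models, and the simple subtraction in the final step are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical outline of a three-step detect / reverse-engineer / remove
# pipeline. All names and logic are illustrative placeholders.
import numpy as np

def detect_poison(image: np.ndarray, classifier) -> bool:
    # Step 1: a model trained on known poisoned and clean examples flags the image.
    return bool(classifier(image))

def estimate_perturbation(image: np.ndarray, estimator) -> np.ndarray:
    # Step 2: a model trained on poisoned examples predicts the embedded perturbation.
    return estimator(image)

def remove_poison(image: np.ndarray, perturbation: np.ndarray) -> np.ndarray:
    # Step 3: subtract the estimated perturbation and clip back to valid pixel values.
    cleaned = np.clip(image.astype(np.float64) - perturbation, 0, 255)
    return cleaned.astype(np.uint8)

def cleaning_pipeline(image, classifier, estimator):
    if not detect_poison(image, classifier):
        return image  # image appears unprotected; leave it untouched
    perturbation = estimate_perturbation(image, estimator)
    return remove_poison(image, perturbation)
```

The key design point, as the researchers describe it, is that the middle step is learned: the characteristics of the perturbations are inferred from publicly available poisoned examples rather than hard-coded.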
In experimental evaluations, LightShed successfully detected NightShade-protected images with 99.98% accuracy and effectively removed the embedded protections from those images.
"This shows that even when using tools like NightShade, artists are still at risk of their work being used for training AI models without their consent," says Hanna Foerster (above), a PhD student here in the Department, who conducted the work during an internship at the Technical University of Darmstadt. Hanna is the first author of the USENIX conference paper, LightShed: Defeating Perturbation-based Image Copyright Protections.
Although LightShed reveals serious vulnerabilities in art protection tools, the researchers stress that it was developed not as an attack on them, but as an urgent call to action to produce better ones.
Developing tools to withstand advanced adversaries
"We see this as a chance to co-evolve defenses," says co-author Prof Ahmad-Reza Sadeghi from the Technical University of Darmstadt. "Our goal is to collaborate with other scientists in this field and support the artistic community in developing tools that can withstand advanced adversaries."
The landscape of AI and digital creativity is certainly evolving rapidly. In March this year, OpenAI rolled out a ChatGPT image model that could produce artwork "in the style of Studio Ghibli", the renowned Japanese animation studio.
This sparked a wave of viral memes, along with equally wide-ranging discussions about image copyright, in which legal analysts noted that Studio Ghibli would have limited options for responding, since copyright law protects specific expression rather than an artistic 'style'.
Following these discussions, OpenAI announced prompt safeguards to block some user requests to generate images in the styles of living artists.
Legal battles over AI and copyright
But disputes over generative AI and copyright continue, as highlighted by the copyright and trademark infringement case currently being heard in London's High Court.
Global photography agency Getty Images is alleging that London-based AI company Stability AI trained its image generation model on the agency's huge archive of copyrighted pictures. Stability AI is fighting Getty's claim and arguing that the case represents an "overt threat" to the generative AI industry.
And earlier this month, Disney and Universal announced they are suing AI firm Midjourney over its image generator, which the two companies said is a "bottomless pit of plagiarism".
"What we hope to do with our work," says Hanna Foerster, "is to highlight the urgent need for a roadmap towards more resilient, artist-centered protection strategies. We must let creatives know that they are still at risk and collaborate with others to develop better art protection tools in future."