
AI art generation programs need to be held accountable

Friends and Fam, you guys know me… I’m pretty laid back, but I want to talk to you about something that has made me extremely upset. As many of you know, like most artists these days I have had to fight to have my work taken down from the pages of people who have stolen it for profit. The internet gave me the chance to reach a larger audience, at the cost of constantly having to fight people who try to pass my work off as their own. This post will be long, so bear with me.

Recently I’ve been seeing a lot of LENSA posts on social media. I’m well aware that many of you didn’t read the Terms of Service and are completely unaware of the very ethically questionable datasets that were used to train the AI in question, so I will explain. Those of you in GDPR regions might want to proactively check whether you’re included and demand to be removed. The LENSA AI (a paid app) is built on Stable Diffusion, whose training data relies heavily on the LAION dataset. LAION uses an ethically questionable crawler to gather photos, some of which are extremely sensitive, like photos of people with medical conditions who signed limited consent forms allowing the images to be shared only with their doctors and specialists. The dataset also contains a lot of hacked information from people’s personal phones, as well as violent ISIS executions. It also includes MANY artists’ galleries, from studios big and small, without their consent or knowledge (I am one of them). It is not a benign dataset in the least, and it crawled millions of photos with no way to opt out nor (originally) any way to even know you had been opted in.

This is important to mention because the excuse being used to justify this behavior is “fair use”. Fair use comes with specific terms and conditions: it is one of the limitations to copyright, intended to balance the interests of copyright holders with the public interest in the wider distribution and use of creative works, by allowing certain limited uses as a defense to copyright infringement claims.

A company (LENSA) whose entire business model relies on an algorithm trained on unsuspecting artists’ work, for the purpose of creating easy-access forgeries of those artists’ styles, is arguably not covered by “fair use”. However, the legal gray area persists because it’s difficult to prove, and the AI is a machine, so it’s not as if you can sue it. Further adding to the problem, until recently many artists weren’t even aware that their work had been used. Add in the average people whose images were also trawled (medical images, etc.), but I digress. To help artists and everyday people discover whether their images, likenesses, etc. were used, some artists created a website that lets you check: https://haveibeentrained.com/
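
For the more technically inclined, here is a minimal sketch of the same kind of check done programmatically, using the open-source clip-retrieval Python client that queries the public LAION index (a similar backend to what haveibeentrained.com is built on). Treat the endpoint URL, index name, and parameters below as assumptions; they may change or go offline, and this is a rough sketch rather than an official recipe.

```python
# Minimal sketch: search the public LAION index for images similar to one of yours.
# Assumes `pip install clip-retrieval` and that the public knn.laion.ai endpoint
# and the "laion5B-L-14" index name are still live (both are assumptions).
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # assumed public endpoint
    indice_name="laion5B-L-14",              # assumed index name
    num_images=20,                           # how many matches to return
)

# Query with a local image file, e.g. one of your own pieces.
results = client.query(image="my_artwork.jpg")

for r in results:
    # Each result is a dict with (at least) the source URL, caption, and similarity score.
    print(f"{r.get('similarity'):.3f}  {r.get('url')}  {r.get('caption')}")
```

If the public endpoint is unavailable, the haveibeentrained.com site itself remains the simplest way to check.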

When I make a digital photomanipulation, I share the copyright with the creator of every image that resides within my work. I am not the sole owner of that work as long as someone else’s work is within my own. I cannot make any money off of it unless I have permission to do so, and I can be sued if I break these rules. There are many different forms of copyright and licensing in use that give people more options, such as Creative Commons and its variations. Within these rules and limitations, artists have continued to share their works and grow the overall community, but that only works as long as everyone follows the rules of each license. This isn’t to say the system is perfect; we are still forced to fight to protect our rights individually, which is often time-consuming and frustrating.

AI, however, does not have to follow any of these rules. It is free to “borrow” pieces of other people’s work without any need to attribute them and with no recourse for those of us whose works have been “borrowed”. AI does not “draw” the image; it manipulates it using an algorithm it was trained to follow via the images that were fed into it. These pixels are arranged by the AI (it can be parts, whole pieces, etc.). The art is not “original”; it is derivative by its very nature. When AI copies someone else’s art in a way that could convince an outside viewer that the original artist created it, it is not called a forgery or even a “study”, even though, in art circles, that is precisely what it is. For those who need reminding, think about all of the art forgeries out there where someone copies a painter’s or artist’s style exactly and passes the work off as that artist’s for profit. Do not think for a moment that this isn’t coming; it is. A person copying another person’s art style exactly and precisely is a forger. The AI can skirt around being called a forgery by claiming the result is “new art” and “completely AI generated”: a computer made it, so it can’t be “forged”. But the computer does not draw it; it arranges pieces of various images that it has learned to detect via pattern recognition based on the data it was fed (other people’s images), then produces an image assembled from parts or whole pieces of that information.

If a human attempted to do what the AI is doing, they would rightfully find themselves in trouble, because we all recognize that the work is not theirs to take. We would hold them accountable for literally incorporating pieces of someone else’s work into their own while taking all of the credit. A human could perhaps excuse this behavior if they were truly trying to master a particular technique, but they would admit that it was a study. A human would also be held accountable for attempting to pass off someone else’s work as their own.

AI doesn’t “study”; it uses complex algorithms to generate and render previously input pieces of information. Its output can be changed quickly over time by continually tweaking the algorithm and feeding it more images. It can be argued that it’s not really a study in that sense. In a generous world I suppose we could label it a study; however, none of the people selling, promoting, or using these apps and software call it any of these things. They insist that it’s “original” and “new”. The most insidious part is that the original artist’s style is not credited, nor is the artist compensated for their non-consensual contribution to someone else’s profit-making product. The consumer has no idea about the backend that rendered the image, no idea who created the style, and no idea that it was stolen. They will likely never even look. In fact, it’s more likely that they will simply return to the AI app and use it again. I don’t mention this to judge them; quite the contrary. It’s very unlikely they even realize that the people who run the apps didn’t create any of the work used in the product, and they are even less likely to know that none of the apps paid for any of the art either. How could they? Everything is obscured, and only those with at least a passing interest in AI would even bother to check.

LENSA is especially horrid, so I will break it down for you. On top of using the LAION dataset, they have some incredibly “interesting” terms and a “trust us” attitude that should make buyers beware. Within their terms of service there is a form of double-speak where they first tell you that they are not taking control of your “User Content” while also saying that they are. This double-speak gives them the ability to further train their AI on your likeness and anything else you feed into it, which means the AI will eventually be using YOU and selling your likeness to others. “Digging into the fine print of the Lensa User Agreement reveals that users are granting them ‘a perpetual, revocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable, sub-licensable license to use, reproduce, modify, adapt, translate, create derivative works from and transfer your User Content, without any additional compensation to you…’” (from printmag). They have actively and directly copied living, working, active artists and used images from those artists to represent their styles, without credit or compensation. (https://twitter.com/arvalis/status/1558632898336501761)

If you decide to remove or terminate your account, you also have no way to be absolutely certain that they have actually removed your likeness. I’m sure they will claim they have, but just by sharing your likeness with them you have “trained” their AI further. You have no guarantee, nor any way to prove, that they have done as they claim, because they would have to start from zero and retrain the entire program to remove your likeness. That is how learning algorithms work: once something is incorporated into the trained model, it is there until the model is corrupted or destroyed. Hopefully this will change in the future, but this is where we are right now.

If you have gotten this far, thanks, because I do realize it’s long.

As a digital artist who absolutely loves making art, I want to point out that I wouldn’t have had a problem with AI-generated art, or even with helping to train the AI, had they approached the artists in question (myself included) to ask for consent. They did not. I might have consented to let them use one or two images. I would not, however, have agreed to let them take my entire gallery.

To explain the overall frustration that many artists, myself included, have had to endure, let’s discuss something artists are very used to. For years, I have had people ask me for my art for free, whether it was someone who wanted to use it for a flier, for a band, or within their book or poem. The most common conversation would start with someone complimenting my art. They would then state that they wanted to use it in whatever thing they were making at the time for their sole benefit. This would lead to me offering to license it for fair compensation. Here is where the paths would diverge: some would immediately agree. Then there were the ones who were “taken aback”. They would insist that I should let them use it without a licensing fee to “get my name out there”. These same individuals had no intention of actually telling anyone my name, of course. They didn’t recognize or care about the time and effort put into the work. They wanted it, full stop. They knew that my work would only benefit them, their brand, or their cause, but they absolutely did not think they should have to “pay” for that. Instead they attempted to convince me that I should be grateful to them because they were “helping” me. These days that does not convince me, but earlier in my career I fell prey to it because I didn’t know what my work was worth.

This trivialization of the hard work and effort it took to get to where I am is something I hate, but it’s a shared experience for all of the artists I know. It should not be normalized, and it should be called out for what it is, but it has been around for a long time.

What is happening with AI, the art theft, the forgeries, etc., is a further example of this exact same idea: the idea that someone else’s hard work doesn’t belong to them and that anyone should be allowed to take it without knowing who created it or why; that a person who worked so hard to build the skills to make something should be “fine” with an algorithm grabbing everything they have ever done to render something for someone else’s benefit and profit, with no attribution or compensation required.

Those who created the datasets did not treat the work of artists, or the photos of other people (medical photos, hacked personal phones, etc.), as something that belonged to someone else or something they needed permission to use. They stole these images to help themselves build a profit-making, machine-learning image-generation product, and any attempts to obfuscate that or deflect are disingenuous. Whether they meant to cause harm or not, they have, and they should be expected to compensate every single person involved for what they have done.

As an artist whose entire gallery was stolen, as a person whose child has been photographed for medical purposes, and as someone who would never condone the use of someone’s hacked private phone images, I ask anyone who reads this to please stop using AI art generation programs until they are willing to be ethical about their methodology. Do not let them get away with this. It is one thing to opt in and allow someone to use an image or two. It’s a completely different thing to steal someone’s entire gallery, someone’s medical photos, or someone’s personal photos hacked from their phone.

I would also ask each and every one of you to ask yourself whether you would be ok with images of you, taken by a doctor in a medical office, being used by anyone for profit, or whether you would be ok if your personal phone were hacked and the images on it were used by a company for profit. If the thought makes you queasy, be aware that AI-generated art is built on material that the creators of these programs did not own and had no permission to have.

Finally, I would like to add one more word of caution: the more photos the AI gets, the higher the risk of “deepfakes” becomes. Whether these are art forgeries for profit or hit jobs designed to destroy someone’s reputation, it doesn’t matter. AI-generated images will continue to get better at faking, and if those who create these datasets are not required to obtain them ethically, then at the very least refuse to make it profitable for them. If there is no profit, there is no incentive.

Thanks for reading.

Links to articles: https://www.printmag.com/design-news/lensa/
https://www.vice.com/en/article/3ad58k/ai-is-probably-using-your-images-and-its-not-easy-to-opt-out
https://kotaku.com/ai-art-dall-e-midjourney-stable-diffusion-copyright-1849388060
https://www.techdirt.com/2022/11/22/ai-art-is-eating-the-world-and-we-need-to-discuss-its-wonders-and-dangers/