this post was submitted on 09 Jan 2024
84 points (100.0% liked)
Technology
I'm actually fine with generative AI that uses only public domain and creative commons content. I'm not threatened by AI as a creative, because AI can only iterate on its own training data. Only humans can create something genuinely new and original. My objection is solely on the basis of theft. If we agree that everybody has the basic right to control their own data and content, then that logically has to extend to artists: they must have the right to control their own work, and consenting to humans viewing it isn't the same as consenting to having it fed into an AI.
I suspect there would be a lot more artists open to considering the benefits of a generative AI using only public domain and creative commons works if they weren't justifiably aggrieved at having their life's work strip-mined. Expecting the victims of exploitation to be 100% rational about their exploiter (or other adjacent parties trying to argue why it's fine when they do it) isn't reasonable. At this point, artists simply don't trust the generative AI industry, and there needs to be a significant and concerted effort to rectify existing wrongs to repair that trust. One organisation offering a model based on creative commons artworks, when the rest of the generative AI industry is still stealing everything that's not nailed down, does not promote trust. Regulate, compensate, mend some fences, and build trust. Then go and talk to artists, and have the conversations that should have been had before the first AI models were built. The AI industry needs to prove it can be trusted, and then learn to ask for permission. Then, maybe, it can ask for forgiveness.
If the models were purely being used for research, I might buy the argument that fair use applies. But the fair use balancing act also incorporates an element of whether the usage is commercial in nature and is intended to compete with the rights holder in a way that affects their livelihood. Taking an artist's work in order to mass-produce pieces that replicate their style, in such a way that it prevents the artist from earning a living, definitely affects their livelihood, so there is a very solid argument that fair use ceased to apply when the generative AI entered commercial use. The people who made the AI models aren't engaging in self-expression at this point. The users of the AI models may be, but they're not the ones who used all the art without consent or compensation. The companies running the AI models are engaged purely in profit-seeking, making money from other people's work. That's not self-expression and it's not discussion. It's greed.
Although the courts ruled that reverse engineering software to make an emulator was fair use, it's worth bearing in mind that an emulator is intended to allow people to continue using software they have purchased after the lifespan of the console has elapsed - so the existence of an emulator preserves consumers' rights to use the games they legally own. Taking artists' work to create an AI so you no longer need the artist has more in common with pirating the games than with creating an emulator. You're not trying to preserve access to something you already have a licence to use. An AI isn't replacing artwork that you have the right to use but can no longer access because of changing hardware. AI is allowing you to use an artist's work in order to cut them out of the equation, without you ever paying them for the work you have benefitted from.
AI models can combine concepts in new ways, but they still can't create anything truly new. An AI could never have given us something like Cubism, for example, because visually nothing like it had ever existed before, so there would have been nothing in its training data that could have produced anything like it. What a human brings to the process is life experience and an emotional component that an AI lacks. All an AI can do is combine existing concepts into new combinations (like combining fried eggs and flowers - both of those are existing concepts). It can't create entirely new things that aren't represented somewhere in its training data. If it didn't know what fried eggs and flowers were, it would be unable to create them.
I think we're very much in "agree to disagree" territory here.