

I know this is an anarchist instance. It’s part of the reason I assumed that anti-capitalism would be a given and I didn’t need to bang the drum about it before stating my arguments. I am anti-capitalist.
You seem to have much more faith than I do that people are vetting the AI tools they use, or that they train exclusively on their own works.
From what I can tell, our Stable Diffusion art communities make no distinction between training sets, nor do they require that shared images be trained only on public-domain or user-owned data. Given that, I don’t think it’s unreasonable that people equate Stable Diffusion users with people generating content on the big models that were indiscriminately fed the entire internet. There’s no easy way to tell them apart.
And outside of capitalism and industry, there are interesting philosophical discussions that need to be had around generative AI that I don’t see enough. Here are a few of the topics I think need to be examined more, both by human society at large, and by AI-art communities especially:
- What does “good artists borrow, great artists steal” mean when the artist in question is modulating their output by inhuman means - parsing millions of images in ways that are a physical impossibility for a person? I think that’s worth interrogating.
- What say do living artists get in who uses their work in training sets, and how should that be respected? Is ignorance of publicly-stated wishes an acceptable excuse? How should this be moderated?
- How do we assign value (cultural, economic, personal, sentimental, or any other) to creative works? Arguably, both human-created and generative AI art are the product of thousands of years of human creative output, but they differ vastly in the skill, types of knowledge, and time required to create a single piece.
And it worries me that a lot of people seem inclined to dismiss criticism of AI use as frivolous or reactionary, or to couch it as a simple refusal to adapt or learn new technologies. Especially when the people driving policy around the largest implementations of that technology are the least principled in its deployment.
I know that this is a small community. I know that the proportion of people here who use custom Stable Diffusion models is almost certainly much higher than on many other forums on the internet.
But I worry that if we don’t have this kind of discussion here, where people are (maybe optimistically, or flatteringly) more judicious in their use of AI than elsewhere - if we don’t have clear, principled guidelines - then the prevailing attitudes are ultimately going to wind up being those of Microsoft, Google, OpenAI, or fucking Grok.
For now though, unless I know that someone is using models trained on their own work, or at least public-domain works, I feel like I’m crossing a picket line, and I don’t like that.
I switched a few months ago, and I’ve honestly been so impressed with how far Blender has come since the last time I tried it (more than 10 years ago, probably).
I don’t work in the creative industry anymore and I haven’t had a ton of time to noodle around and actually try out the tools I’ve seen demo’d, but it was mind-blowing discovering how many different software suites I had used to do things that Blender now incorporates into one package.
Maya? Obviously does most of that. ZBrush? Yep, pretty comparable. Marvelous Designer? Holy shit, yep. ToonBoom? Also that.
By far the worst part has been retraining hotkey muscle memory and learning minor (but fundamental) differences, and that’s not as small a thing as a lot of people make it out to be - it adds a lot of cognitive noise, and you really can’t just hop in and flow right from the get-go (depending on what you’re doing).
Absolutely worth it to get away from Adobe though, and not having to bounce between programs while working on a model is very, very pleasant.