• 1 Post
  • 271 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • The problem is that dispersing open source AI to everyone, without a fundamental change in copyright law that takes power away from corporations as well as individual artists, and without fundamental changes to labour law, wealth distribution, and literally everything else, just undermines artists and screws them over. Proceeding with open source AI, without any other plan or even a realistic path to a complete change in our social and economic structure, is basically saying “yeah, we’ll sort out the problems later, but right now we’re entitled to do whatever we want, and fuck everybody else”. And that is the mindset of tech bros, the fossil fuel industry, and so, so many others.

    AI should be regulated into oblivion until our social and economic structures can handle it, i.e. when all the power and wealth has been redistributed away from the 1% and evenly into the hands of everyone. Open source AI will not change the power that corporations hold. We know this because open source software hasn’t meaningfully changed the power they hold.

    I’m also sick of the excuse that AI helps people express themselves, as if artistic expression has always been behind some impenetrable wall, with a gatekeeper only allowing a chosen few access. Every single artist had to work incredibly hard to learn the skill. It’s not some innate talent gifted to a lucky few. It takes hard work and dedication, just like any other skill. Nothing has ever stopped anyone from learning it except an unwillingness to put the effort in. I don’t think people who tried one doodle and gave up because it was hard are a justifiable reason to destroy workers’ livelihoods.


  • When the purpose of gathering the data is to create a tool that destroys someone’s livelihood, the act of training an AI is not merely “observation”. The AIs cannot exist without using content created by other people, and the spirit of open source doesn’t include appropriating content without consent - especially when it is not for research or educational purposes but to create a tool that will be used commercially, which open source models inevitably will be, given that their stated purpose is to compete with corporate ones.

    No argument you can make will convince me that what open source AI proponents are doing is any less unethical or exploitative than what the corporate ones are. Both feel entitled to artists’ labour in exchange for no compensation, and have absolutely no regard for the negative impacts of their projects. The only difference between CEO AI tech bros and open source AI tech bros is the level of wealth. The arrogant entitlement is just the same in both.


  • Taking artists’ work without consent or compensation goes against the spirit of open source, though, doesn’t it? The concept of open source relies on everyone involved knowingly and voluntarily contributing towards a project that is open for all to use. It has never, ever been the case that if someone doesn’t volunteer their contributions, their work should simply be appropriated for the project without their consent. Just look at open source software: it is created and maintained by volunteers who choose to contribute. It has never, ever been okay for an open source dev to simply grab whatever they want to use if the creator hasn’t explicitly released it under an applicable licence.

    If the open source AI movement wants to be seen as anything but an enemy to artists, it cannot stomp on artists’ rights in exactly the same way the corporate AIs have. The open source AI community needs to have a conversation about consent and informed participation. If an artist chooses to release all their work under an open source licence, then of course open source AIs should be free to use it. But simply taking art without consent or compensation, on the grounds that the corporate AIs are doing it too, is not a good look and goes against the spirit of what open source is. Destroying artists’ livelihoods while claiming to save them from someone else destroying those livelihoods will never inspire the kind of enthusiasm from artists that open source AI proponents weirdly feel entitled to.

    This is ultimately my problem with the proponents of AI. The open source community is, largely, an amazing group of people whose work I really respect and admire. But genuine proponents of open source aren’t so entitled that they think anyone who doesn’t voluntarily agree to participate in their project should be compelled to do so - yet that entitlement sits at the centre of the open source AI community. Open source AI proponents want to have all the data for free, just like the corporate AIs and their tech bro CEOs do, cloaking it in the language of open source while undermining everything that is amazing about open source. I really can’t understand why you don’t see that forcing artists to work for open source projects for free is just as unethical as corporations doing it, and the more AI proponents argue that it’s fine because it’s not evil when they do it, the more artists will see them as being just as evil as the corporations. You cannot force someone to volunteer.


  • Destroying the rights of artists for the benefit of AI owners doesn’t achieve that goal. Outside of the extremely wealthy who can produce art for art’s sake, art is a form of skilled labour that is a livelihood for a great many people, particularly in the forms of art most at risk from AI - graphic design, illustration, concept art, etc. Most of the people in these roles are freelancers, not salaried employees who can be protected with labour laws. They are typically commissioned to produce specific pieces of art. I really don’t think AI enthusiasts have any idea how rare stable, long-term jobs in art actually are. The vast majority of artists are freelancers: it’s essentially a gig economy.

    Changes to labour laws would protect artists who are employees - and we absolutely should make those changes, so that companies can’t simply employ artists, train AI on their work, then fire them all. But that doesn’t protect freelancers from companies that say “we’ll buy a few pieces from that artist, then train an AI on their work so we never have to commission them again”. It is incredibly complex to redefine commissions as waged employment in a way that lets the company use the work for AI training while ensuring the artist future employment. And then there’s the issue of companies that say “we’ll just download their portfolio, then train an AI on it so we never have to pay them anything”. All of the AI companies in existence fall into this category at present - they are making billions on the backs of labour they have never paid for, and have no intention of ever paying for. There seems to be no rush to declare that they were actually employing those millions of artists, who are now owed years’ worth of back-pay and all the other rights that workers protected by labour laws should have.


  • Labour law alone - governing the terms under which people are employed and how they are paid - does not protect freelancers from the scenario that you, and so many others, advocate for: a multitude of individuals all training their own AIs. No AI advocate has ever proposed a viable, practical solution for the large number of artists who aren’t directly employed by a company but are still exposed to all the downsides of unregulated AI.

    The reality is that artists need to be paid for their work. That needs to happen at some point in the process. If AI companies (or individuals setting up their own customised AIs) don’t want to pay in advance for the training data, then they’re going to have to pay from the profits the AI generates. Continuing the status quo, where AIs can use artists’ labour without paying them at all, is not an acceptable or viable long-term plan.



  • I did actually specify that I think the solution is extending labour laws to cover the entire sector, although it seems you accidentally missed that in your enthusiasm to insist that the solution is having AI on more devices. However, so far I haven’t seen any practical proposal for extending labour laws to protect freelancers who will lose business to AI but have no specific employer for those laws to apply to. Retroactively assigning profits from AI to freelancers who have lost out along the way doesn’t seem practical.



  • I remember reading that a little while back. I definitely agree that the solution isn’t extending copyright, but extending labour laws on a sector-wide basis. Because this is the ultimate problem with AI: the economic benefits are only going to a small handful, while everybody else loses out because of increased financial and employment insecurity.

    So the question that comes to mind is exactly how, on a practical level, it would work to make sure that when a company scrapes data, trains an AI, and then makes billions of dollars, the thousands or millions of people who created the data all get a cut after the fact. Particularly in the creative sector, a lot of people are freelancers who don’t have a specific employer they can go after. From a purely practical perspective, paying artists before the data is used makes sure all those freelancers get paid. Waiting until the company makes a profit, taxing it out of them, and then distributing it to artists doesn’t seem practical to me.




  • But this is the point: the AIs will always need input from some source or another. Consider using AI to generate search results. The model will need to be updated with new information and knowledge, because an AI that can only answer questions about things known before 2023 will very quickly become obsolete. So it must be updated. But AIs do not know what is going on in the world. They have no sensory capacity of their own, so their inputs require data that is ultimately, at some point in the process, created by a human who does have the sensory capacity to observe what is happening in the world and write it down. And if the AI simply takes that writing without compensating the human, then the human will stop writing, because they will have had to get a different job to pay for food, rent, etc.

    No amount of “we can train AIs on AI-generated content” is going to fix the fundamental problem: the world is not static, and AIs don’t have the capacity to observe what is changing. They will always be reliant on humans. Taking human input without paying for it disincentivises humans from producing content, and that will eventually create problems for the AI.






  • Yeah, I think you could be right there, actually. My instinct from the start has been that it would prevent the grieving process from completing properly. There’s a concept called the gestalt cycle of experience: a normal, natural mechanism by which a person moves through a new experience, whether good or bad. You need to complete the cycle for everything that happens in your life, reaching closure so that you’re ready for the next experience to begin (that’s the most basic explanation). A lot of unhealthy behaviour patterns stem from part of that cycle being interrupted - when closure doesn’t happen properly, it creates unhealthy patterns that influence everything that comes after.

    Now I suppose, theoretically, being able to talk to an AI replication of a loved one might give someone a chance to say things they couldn’t say before the person died, which could aid in gaining closure… but we already have methods for doing that, like talking to a photo of them or to their grave, or writing them a letter, etc. Because the AI creates the sense of the person still being “there”, it seems more likely to prevent closure, because that concrete ending is blurred.

    Also, your username seems really fitting for this conversation. :)




  • Given that the husband is likely going to die in a few weeks, and the wife is likely already grieving for the man she is shortly going to lose, I think that still places both of them in the “vulnerable” category, and the owner of this technology approached them while they were in that vulnerable state. So yes, I have concerns, and the fact that the owner is allegedly a friend of the family (which just means they were the first vulnerable couple he had easy access to experiment on) doesn’t change the fact that there are valid concerns about the exploitation of grief.

    With the way AI tech bros have behaved so far, I’m not willing to give any of them the benefit of the doubt when they claim to want to help rather than make money - especially when it involves experimenting on a vulnerable couple to build a “proof of concept” that can be used to sell this to other vulnerable people.