‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says
Pressure grows on artificial intelligence firms over the content used to train their products
OK, so pay for it.
Pretty simple really.
Or let’s use this opportunity to make copyright much less draconian.
Why not both?
I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.
Even if we went back to the pre-1976 term of 28 years, renewable once for a total of 56 years, there’s still a ton of recent works that AI are using without any compensation to their creators.
I think it’s because people are taking this “intelligence” metaphor a bit too far and think that if we restrict how AI uses copyrighted works, that would restrict how humans use them too. But AI isn’t human, it’s just a glorified search engine. And at least a standard search engine only returns a link to the actual content. These AI models chew up the content and spit out something based on it. It simply makes sense that this new process should be licensed separately, and I don’t care if it makes some AI companies go bankrupt. Maybe they can work adequate payment for content into their business model going forward.
Would you characterize projects like wikipedia or the internet archive as “sucking up all human knowledge”?
Does Wikipedia ever have issues with copyright? If you don’t cite your sources, or if you use a copyrighted image, it gets removed.
The copyright shills in this thread would shut down Wikipedia.
Wikipedia is free to the public. OpenAI is more than welcome to use whatever they want if they become free to the public too.
In Wikipedia’s case, the text is (well, at least so far), written by actual humans. And no matter what you think about the ethics of Wikipedia editors, they are humans also. Human oversight is required for Wikipedia to function properly. If Wikipedia were to go to a model where some AI crawls the web for knowledge and writes articles based on that with limited human involvement, then it would be similar. But that’s not what they are doing.
The Internet Archive is on a bit less steady legal ground (see the recent legal actions), but in its favor it is only storing information for archival and lending purposes, and not using that information to generate derivative works which it is then selling. (And it is the lending that is getting it into trouble right now, not the archiving.)
Wikipedia has had bots writing articles since the 2000 census information was first published. The 2000 census article writing bot was actually the impetus for Wikipedia to make the WP:bot policies.
Because it’s not just big companies that are affected; it’s the technology itself. People saying you can’t train a model on copyrighted works are essentially saying nobody can develop those kinds of models at all. A lot of people here are naturally opposed to the idea that the development of any useful technology should be effectively illegal.
You can make these models just fine using licensed data. So can any hobbyist.
You just can’t steal other people’s creations to make your models.
Of course it sounds bad when you use the word “steal”, but I’m far from convinced that training is theft, and using inflammatory language just makes me less inclined to listen to what you have to say.
Training is theft imo. You have to scrape and store the training data, which amounts to copyright violation based on replication. It’s an incredibly simple concept. The model isn’t the problem here, the training data is.
Removed by mod
Yes it is. Moralize it all you want, but it’s still theft
As long as capitalism exists in society, just being able to go “yoink” and take everyone’s art will never be a practical rule set.
I’m no fan of the current copyright law - the Statute of Anne was much better - but let’s not kid ourselves that some of the richest companies in the world have any desire whatsoever to change it.
My brother in Christ I’m begging you to look just a little bit into the history of copyright expansion.
I looked a bit into your history, and discovered you only discuss copyright when it’s on a post about AI corporations. It seems that up until OpenAI stood to exploit workers, you didn’t much care…
I only discuss copyright on posts about AI copyright issues. Yes, brilliant observation. I also talk about privacy issues on privacy-relevant posts, labor issues on worker-rights articles, and environmental justice on global-warming pieces. Truly a brilliant and skewering observation. You’re a true internet private eye.
Fair use and pushing back against (corporate serving) copyright maximalism is an issue I am passionate about and engage in. Is that a problem for you?
You only act concerned about copyright when AI is brought up. You’ve never mentioned it outside of that context.
I’m sure your concern is genuine…
I am an attorney who has worked extensively in copyright, and specifically fair use, and the only interesting issue in copyright I feel like discussing is generative art — primarily because the discourse is getting overwhelmed by well-meaning lefties and creators in my circles who are unfortunately carrying water for the same copyright-maximalist arguments corporate IP holders have been marketing for years. We need more fair use, not less. Expanding copyright once again to cover training a model - which is what would be required, because as is these models are not infringing on any existing copyright rights - would have disastrous results for small artists and creators, and only serve to further enrich corporate IP hoarders that can handle the exploding court costs that would result from such an expansion.
To be clear, I also am very concerned about the many externalities of generative AI. Labor, environmental, competition, national security, privacy — but we’re not going to copyright our way out of those problems.
Creators in your circles? Does that include your clients, because apparently small artists hire you?
Well now I’m intrigued.
Exactly how do you prevent your clients from getting their content stolen by a corporation created by Sam Altman, who is worth half a billion dollars on his own?
I am well aware.
Every work is protected by copyright, unless stated otherwise by the author.
If you want to create a capable system, you want real data and you want a wide range of it, including data that is rarely considered to be a protected work, despite being one.
I can guarantee you that you’re going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that’s compiled with permission of every copyright holder involved.
I never said it was going to be easy - and clearly that is why OpenAI didn’t bother.
If they want to advocate for changes to copyright law then I’m all ears, but let’s not pretend they actually have any interest in that.
Sounds like an OpenAI problem and not an us problem.