‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says::Pressure grows on artificial intelligence firms over the content used to train their products
Yeah, I also have no way to own a billion dollars. Sucks for both of us…
OK, so pay for it.
Pretty simple really.
Every work is protected by copyright, unless stated otherwise by the author.
If you want to create a capable system, you want real data and you want a wide range of it, including data that is rarely considered to be a protected work, despite being one.
I can guarantee you that you’re going to have a pretty hard time finding a dataset with diverse data containing things like napkin doodles or bathroom stall writing that’s compiled with permission of every copyright holder involved.
Sounds like an OpenAI problem and not an us problem.
I never said it was going to be easy - and clearly that is why OpenAI didn’t bother.
If they want to advocate for changes to copyright law then I’m all ears, but let’s not pretend they actually have any interest in that.
Or let’s use this opportunity to make copyright much less draconian.
I’m no fan of the current copyright law - the Statute of Anne was much better - but let’s not kid ourselves that some of the richest companies in the world have any desire whatsoever to change it.
My brother in Christ I’m begging you to look just a little bit into the history of copyright expansion.
I looked a bit into your history, and discovered you only discuss copyright when it’s on a post about AI corporations. It seems that up until OpenAI stood to exploit workers, you didn’t much care…
I only discuss copyright on posts about AI copyright issues. Yes, brilliant observation. I also talk about privacy issues on privacy-relevant posts, labor issues on worker-rights articles, and environmental justice on global warming pieces. Truly a brilliant and skewering observation. You’re a true internet private eye.
Fair use and pushing back against (corporate serving) copyright maximalism is an issue I am passionate about and engage in. Is that a problem for you?
Fair use and pushing back against (corporate serving) copyright maximalism is an issue I am passionate about and engage in
You only act concerned about copyright when AI is brought up. You’ve never mentioned it outside of that context.
I’m sure your concern is genuine…
I am an attorney who has worked extensively in copyright, and specifically fair use, and the only interesting issue in copyright I feel like discussing is generative art — primarily because the discourse is getting overwhelmed by well-meaning lefties and creators in my circles who are unfortunately carrying water for the same copyright-maximalist arguments corporate IP holders have been marketing for years. We need more fair use, not less. Expanding copyright once again to cover training a model - which is what would be required, because as it stands these models are not infringing on any existing copyright rights - would have disastrous results for small artists and creators, and only serve to further enrich corporate IP hoarders that can handle the exploding court costs that would result from such an expansion.
To be clear, I also am very concerned about the many externalities of generative AI. Labor, environmental, competition, national security, privacy — but we’re not going to copyright our way out of those problems.
I am well aware.
¿Por qué no los dos? (Why not both?)
I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.
Even if we went back to the pre-1976 term of 28 years, renewable once for a total of 56 years, there’s still a ton of recent works that AI are using without any compensation to their creators.
I think it’s because people are taking this “intelligence” metaphor a bit too far and think if we restrict how the AI uses copyrighted works, that would restrict how humans use them too. But AI isn’t human, it’s just a glorified search engine. At least all standard search engines do is return a link to the actual content. These AI models chew up the content and spit out something based on it. It simply makes sense that this new process should be licensed separately, and I don’t care if it makes some AI companies go bankrupt. Maybe they can work adequate payment for content into their business model going forward.
I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.
Would you characterize projects like Wikipedia or the Internet Archive as “sucking up all human knowledge”?
Wikipedia is free to the public. OpenAI is more than welcome to use whatever they want if they become free to the public too.
The copyright shills in this thread would shut down Wikipedia
Does Wikipedia ever have issues with copyright? If you don’t cite your sources, or if you use a copyrighted image, it will get removed
In Wikipedia’s case, the text is (well, at least so far) written by actual humans. And no matter what you think about the ethics of Wikipedia editors, they are humans also. Human oversight is required for Wikipedia to function properly. If Wikipedia were to move to a model where some AI crawls the web for knowledge and writes articles based on that with limited human involvement, then it would be similar to what these AI companies are doing. But that’s not what Wikipedia is doing.
The Internet Archive is on a bit less steady legal ground (see the recent legal actions), but in its favor, it is only storing information for archival and lending purposes, not using that information to generate derivative works which it then sells. (And it is the lending that is getting it into trouble right now, not the archiving.)
Wikipedia has had bots writing articles since the 2000 census information was first published. The 2000 census article-writing bot was actually the impetus for Wikipedia to create the WP:bot policies.
I don’t understand why people are defending AI companies
Because it’s not just big companies that are affected; it’s the technology itself. People saying you can’t train a model on copyrighted works are essentially saying nobody can develop those kinds of models at all. A lot of people here are naturally opposed to the idea that the development of any useful technology should be effectively illegal.
You can make these models just fine using licensed data. So can any hobbyist.
You just can’t steal other people’s creations to make your models.
Of course it sounds bad when you use the word “steal”, but I’m far from convinced that training is theft, and using inflammatory language just makes me less inclined to listen to what you have to say.
Training is theft imo. You have to scrape and store the training data, which amounts to copyright violation based on replication. It’s an incredibly simple concept. The model isn’t the problem here, the training data is.
As long as capitalism exists in society, just being able to go “yoink” and take everyone’s art will never be a practical rule set.
They’re not wrong, though?
Almost all information that currently exists has been created in the last century or so. Only a fraction of all that information is available to be legally acquired for use and only a fraction of that already small fraction has been explicitly licensed using permissive licenses.
Things that we don’t even think about as “protected works” are in fact just that. Doesn’t matter what it is: napkin doodles, writing on bathroom stall walls, letters written to friends and family. All of those things are protected unless stated otherwise. And, I don’t know about you, but I’ve never seen a license notice attached to a napkin doodle.
Now, imagine trying to raise a child while avoiding every piece of information like that; information that you aren’t licensed to use. You wouldn’t end up with a person well suited to exist in the world. They’d lack education regarding science and technology, they’d lack understanding of pop culture, they’d know no brand names, etc.
Machine learning models are similar. You can train them that way, sure, but they’d be basically useless for real-world applications.
This is the best summary I could come up with:
The developer OpenAI has said it would be impossible to create tools like its groundbreaking chatbot ChatGPT without access to copyrighted material, as pressure grows on artificial intelligence firms over the content used to train their products.
Chatbots such as ChatGPT and image generators like Stable Diffusion are “trained” on a vast trove of data taken from the internet, with much of it covered by copyright – a legal protection against someone’s work being used without permission.
AI companies’ defence of using copyrighted material tends to lean on the legal doctrine of “fair use”, which allows use of content in certain circumstances without seeking the owner’s permission.
John Grisham, Jodi Picoult and George RR Martin were among 17 authors who sued OpenAI in September alleging “systematic theft on a mass scale”.
Getty Images, which owns one of the largest photo libraries in the world, is suing the creator of Stable Diffusion, Stability AI, in the US and in England and Wales for alleged copyright breaches.
The submission said it backed “red-teaming” of AI systems, where third-party researchers test the safety of a product by emulating the behaviour of rogue actors.
The original article contains 530 words, the summary contains 190 words. Saved 64%. I’m a bot and I’m open source!
I’m dumbfounded that any Lemmy user supports OpenAI in this.
We’re mostly refugees from Reddit, right?
Reddit invited us to make stuff and share it with our peers, and that was great. Some posts were just links to the content’s real home: Youtube, a random Wordpress blog, a Github project, or whatever. The post text, the comments, and the replies only lived on Reddit. That wasn’t a huge problem, because that’s the part that was specific to Reddit. And besides, there were plenty of third-party apps to interact with those bits of content however you wanted to.
But as Reddit started to dominate Google search results, it displaced results that might have linked to the “real home” of that content. And Reddit realized a tremendous opportunity: They now had a chokehold on not just user comments and text posts, but anything that people dare to promote online.
At the same time, Reddit slowly moved from a place where something might get posted by the author of the original thing to a place where you’ll only see the post if it came from a high-karma user or bot. Mutated or distorted copies of the original, reformatted to cut through the noise and gain the favor of the algorithm. Re-posts of re-posts, with no reference back to the original, divorced of whatever context or commentary the original creator may have provided. No way for the audience to respond to the author in any meaningful way and start a dialogue.
This is a miniature preview of the future brought to you by LLM vendors. A monetized portal to a dead internet. A one-way street. An incestuous ouroboros of re-posts of re-posts. Automated remixes of automated remixes.
–
There are genuine problems with copyright law. Don’t get me wrong. Perhaps the most glaring problem is the fact that many prominent creators don’t even own the copyright to the stuff they make. It was invented to protect creators, but in practice this “protection” gets assigned to a publisher immediately after the protected work comes into being.
And then that copyright – the very same thing that was intended to protect creators – is used as a weapon against the creator and against their audience. Publishers insert a copyright chokepoint in between the two, and they squeeze as hard as they desire, wringing every drop of profit out of it, keeping creators and audiences far away from each other. Creators can’t speak out of turn. Fans can’t remix their favorite content and share it back to the community.
This is a dysfunctional system. Audiences are denied the ability to access information or participate in culture if they can’t pay for admission. Creators are underpaid, and their creative ambitions are redirected to what’s popular. We end up with an auto-tuned culture – insular, uncritical, and predictable. Creativity reduced to a product.
But.
If the problem is that copyright law has severed the connection between creator and audience in order to set up a toll booth along the way, then we won’t solve it by giving OpenAI a free pass to do the exact same thing at massive scale.
if it’s impossible for you to have something without breaking the law, you have to do without it
if it’s impossible for the aristocrat class to have something without breaking the law, we change or ignore the law
Copyright law is mostly bullshit, though.
Oh sure. But why is it only the massive AI push that lets the large companies, the ones whose models are full of stolen material and churn out basic forgeries of the stolen items, ignore the bullshit copyright laws?
It wouldn’t be because it is super profitable for multiple large industries, right?
Wow! You’re telling me that onerous and crony copyright laws stifle innovation and creativity? Thanks for solving the mystery guys, we never knew that!
I’ve learned from Lemmy that individuals’ abuse of copyright is good 👍
LLMs trained on copyrighted material and suddenly everyone is an advocate for more strict copyright enforcement?
Who is behind each? Individual abuse is just an expense to a corporation, LLMs caused a lot of fear in regular artists.
You’re not afraid of the technology; you’re afraid of corporations abusing it to exploit their workforce. Don’t blame the technology, blame the corporations.
You’re describing the difference between the original Luddism that’s against exploitation and the degenerate form that’s just a blind hatred of new technology. Unfortunately there seems to be a lot of the latter on Lemmy.
I bet a lot of the AI bashers are the same demographic that grew up with the Internet and mocked the baby boomers who were Internet skeptics.
Yeah, Lemmy and the world in general seem to just parrot the opinions of whichever talking head they listen to. I recognize that there are certainly issues, both ethical and technical, with LLMs and image generation especially. However, I also utilize both of these tools on a daily basis to make my life more efficient, which frees me up to do more things I enjoy. That, to me, is the most important thing we should regulate about automation: it should make lives easier, not give us more work to do.
I have the perfect solution. Shorten the copyright duration.
“Impossible”? They just need to ask for permission from each source. It’s not like they don’t already know who the sources are, since the AIs are issuing HTTP(S) requests to fetch them.
If OpenAI is right (and I think they are), one of two things needs to happen.
- All AI should be open source and non-profit
- Copyright law needs to be abolished
For number 1: Good luck, for all the reasons we all know. Capitalism must continue to operate.
For number 2: Good luck, because those in power are mostly there off the backs of those before them (see Disney, Apple, Microsoft, etc.)
Anyways, fun to watch play out.
A ton of people need to read some basic background on how copyright, trademark, and patents protect people. Having none of those things would be horrible for modern society: it would wipe out millions of jobs and medical advancements, and put control into the hands of companies that can steal and strong-arm the best. If you want to live in a world run by Mafia-style big business, then sure.
I see and understand your point regarding trademark, but I don’t understand how removing copyright or patents would have this effect. Could you elaborate?
Meh, patents are monopolies over ideas, do much more harm than good, and help big business much more than they help the little guy. Being able to own an idea seems crazy to me.
I marginally support copyright laws, just because they provide a legal framework to enforce copyleft licenses. Though I think copyright is abused too much on platforms like YouTube. In regards to training generative AI, the goal is not to copy works, and copying them outright would make the models less useful. It’s very much fair use.
Trademarks are generally good, but sometimes abused as well.
Patents don’t let you own an idea. They give you an exclusive right to use the idea for a limited time in exchange for detailed documentation on how your idea works. Once the patent expires, everyone can use it. But while it’s under patent, anyone can look up the full documentation and learn from it. Without this, big business could reverse engineer the little guy’s invention and just steal it.
Goes both ways. As someone who has tried bringing new products to market, it’s extremely annoying that nearly everything you can think of already has a similar patent. I’ve also reverse engineered a few things (circuits and disassembled code) as a little guy working for a small business. I don’t think people usually scan patents to learn things, and reverse engineering usually isn’t too hard.
If I were a capitalist, I’d argue that if a big business “steals” an idea, and implements it more effectively and efficiently than the small business, then the small business should probably fail.
Sounds like a fatal problem. That’s a shame.
Maybe you shouldn’t have done it then.
I can’t make a Jellyfin server full of content without copyrighted material either, but the key difference here is I’m not then trying to sell that to investors.
It’s almost like we had a place where copyrighted things used to end up, but they extended the dates because money