OpenAI collapses media reality with Sora AI video generator | If trusting video from anonymous sources on social media was a bad idea before, it’s an even worse idea now. Hello, cultural singularity: soon, every video you see online could be completely fake.
Why are they working so hard on making humanity worse?
I really have to say it?
Because we’re all born selfish assholes*, and some people never learn not to be.
*We’re all born as selfish idiots; how can we be otherwise? We’re helpless at birth, thrust from perfect comfort and safety into discomfort, utterly ignorant and wholly dependent, with no knowledge that there are others who are just as dependent and helpless when they’re born. Learning about others, and how to get along with them, is part of maturing.
Sorry no, your perceptions are skewed by how well society rewards selfish assholes.
Most humans are inherently empathic and compassionate, just the tiny handful of sociopaths that run everything are projecting.
Because the rich profit the most when everyone else is in fear and confusion.
So they generate it as much as possible and reap the rewards of ignorance and knee-jerk policies.
It’s like we’re going back to the pre-internet era but it’s obviously a little different. Before the internet, there were just a few major media providers on TV plus lots of local newspapers. I would say that, for the most part in the USA, the public trusted TV news sources even though their material interests weren’t aligned (regular people vs big media corporations). It felt like there wasn’t a reason not to trust them, since they always told an acceptable version of the truth and there wasn’t an easy way to find a different narrative (no internet or crazy cable news). Local newspapers were usually very trusted, since they were often locally owned and part of the community.
The internet broke all of those business models. Local newspapers died because why do you need a paper when there are news websites? Major media companies were big enough to weather the storm and could buy up struggling competitors. They consolidated and one in particular started aggressively spinning the news to fit a narrative for ratings and political gain of the ownership class. Other companies followed suit.
This, paired with the thousands of available narratives online, weakened the credibility of the major media companies. Anyone could find the other side of the story or fact check whatever was on TV.
Now what is happening? The internet is being polluted with garbage and lies. It hasn’t been good for some time now. Obviously anyone could type up bullshit, but for a minute photos were considered reliable proof (usually). Then photoshopping something became easier and easier, which made videos the new standard of reliable proof (in most cases).
But if anything can be fake now and difficult to identify as fake, then how can you fact check anything? Only those with the means will be able to produce undeniably real news with great difficulty, which I think will return power to major news companies or something equivalent.
I’m probably wrong about what the future holds, so what do you think is going to happen?
> Now what is happening? The internet is being polluted with garbage and lies. It hasn’t been good for some time now.
Social media as content aggregation is generally garbage, but it’s a far stretch to apply that to the Internet or even the Web as a whole. Don’t forget Wikipedia is still a thing and almost every creator of primary source data publishes online.
> But if anything can be fake now and difficult to identify as fake, then how can you fact check anything? Only those with the means will be able to produce undeniably real news with great difficulty, which I think will return power to major news companies or something equivalent.
That’s kind of always been true. And I agree, we need to find a way to maintain information sourcing organizations (e.g. news) that we can trust as the arbiters of this information. If Washington Post can actually put credible reporters on the ground to confirm something, and I know I can trust WaPo, I can fairly say with some confidence that it’s good information.
I think we all (or some of us at least) just need to be willing to pay for this service.
Fake photos existed before Photoshop, made with scissors and glue.
I don’t think you’re wrong, I have been thinking the same thing.
Everyone has been worried about “AI misinformation” - but if misinformation becomes so commoditized online that someone convinced the moon landing is fake finds two dozen different AI-generated sources agreeing with them but disagreeing with each other (i.e. a video of Orson Welles filming it, but also a video of Stanley Kubrick filming it), we may well end up in a world where people just stop paying attention to the bullshit online that has been destroying people’s minds for years now.
Couple this with advances in AI correctly identifying misinformation and live fact-checking it with citations to reputable and/or certified sources. Add things like Elon Musk’s ‘uncensored’ Grok turning around and calling his conservative Twitter fans racist, small-minded morons while pointing out why they are wrong, or Gab’s literal Adolf Hitler AI telling a user they were disgusting for asking if Jews were vermin, and we may just end up on a narrow path out of the mess we’d found ourselves in well before AI was suddenly a thing.
I had been really worried about the AI misinformation angle, but given some recent developments in the past few months I’m actually hopeful about the future of a better informed public for the first time in years.
Agreed, people are up in arms that misinformation will become easier. But I think the naive idea that the internet is inherently a reliable source of truth, when it is mixed with subtler forms of misinformation, is much more insidious. Journalism used to be a highly respected field before we all forgot why it was so important.
Isn’t this a bit overdramatic, seeing as we’ve had deepfake tech for a while now?
Imagine generating 5,000 videos of different people (likenesses pulled from Facebook) reacting to a fake calamity staged in a certain city.
Imagine seeing it every day to a point you can’t see the real calamity coming because you stopped believing in them entirely.
I don’t think so, as deep fake stuff was about switching faces and voices. You needed actual footage to train this on.
So if you wanted to stage something, it would take considerable effort, money, time, and manpower.
Now anyone will be able to just type in a prompt and have a video generated.
We already saw the Joe Biden deepfake robocall telling people not to vote for him. Now just wait until videos of him saying it are sent out en masse.
Oh man reality is going to be so strange to you when you get old enough to understand it.
OpenAI singlehandedly breaking the internet. Props, tbh.
They really are. I don’t know about Google but with DDG when searching for information I feel like most of the top results are articles written by AI. Luckily it’s still somewhat easy to recognize but that’s not going to be the case for long. It’s inevitable though so I don’t really blame them. If not OpenAI then it would have just been someone else. I’m just worried about where this is going. I can think of more ways this could go wrong than right.
Every day we’re getting closer to “The Running Man.”
More dancing women on TV???
FANTASTIC reference! This movie is so funny and awesome, and it seems to have completely disappeared from pop culture. I never understood why Conan looms so large in our collective memory, but this movie totally vanished.
Another stepping stone to a much worse world. We won’t know what is real anymore.
I think it’s very cool technology, but in the hands of governments and psyops, it’s going to brainwash entire countries.
Want another 9/11? Sure no problem. Blow up a building, tell people you have some random video of what happened, captured by civilians…place evidence in locations where it will be found.
We already don’t know what is real. This will only make that clearer.
I think some governments already had tech like this but not all.
It will be interesting to follow this. Probably lots of fake videos on YouTube as a consequence where events are not real but used to stir up aggression.
Photoshop has existed for a long time. Three Letter Agencies have been faking stuff forever. Not new.
Will this make it easier/faster? For sure. The one upside I can see is it brings the conversation to everyone, even those folks who don’t want to acknowledge government is as bad an actor as anyone else.
Maybe, but I doubt it, only because traditional propaganda has been 100% effective without generative AI.
> We won’t know what is real anymore.
Of all the things, this really scares me. Many people scroll through their socials so quickly they will definitely not be able to tell apart generated clips from real ones. And the generated ones will only get better. One generation later, nobody will believe anything they see on a screen. And no, I don’t think regulation can do much here as it will only end up in heavily censoring everything, leading to more distrust in media.
I think it could end up being a good thing if it causes social media to collapse into smaller, better known social groups.
I looked at these videos with very mixed emotions. On the one hand, I marveled at how far we’ve gotten. In a few years we went from generating sort-of-okay images in a very confined domain, essentially uncontrollable, to generating high-resolution video that at first glance looks real.
But then the sadness struck me. I think we’re entering the post-truth era, where the truth is harder and harder to find because all the fake stuff looks so real. We can generate text, images, sound, and now also video of whatever we want in the blink of an eye. Combine this with the tendency of people to accept any “information” that fits their view, and the filter bubbles that already exist, and we can see that humanity will start living in separate bubbles. Every bubble will have their own truth, and even if someone proves that a video or image is fake, that information will probably not even reach them because the truth doesn’t generate enough clicks.
I want to stay optimistic, we’ve overcome so much stuff as a species, maybe we’ll right the ship at some point. But with all the shit that is already going on in the world, the last thing we need is the ability to fake videos like this in no time at all. At some point the separate filter bubbles will tear our stable western world as we knew it apart, and we’ll see shit like WW II again. The situation is already heating up.
I’m actually glad that AI is making people realize that what they see is likely not real. For the history of media, the default has been for the written word or images or video to be taken as 100% truth, when in reality, it has always been very easy to deceive and manipulate. Now that we will suspect everything, maybe there will finally be critical thinking.
It’s funny that in human history there will be a gap of around 100 years where photos and video were considered solid proof, evidence that could determine the outcome of somebody’s future.
we’re back at square one I guess
Naah, that was never a thing. E.g.: in 1917, two young girls created some photographs of fairies, the Cottingley Fairies. Arthur Conan Doyle, the inventor of Sherlock Holmes, endorsed them as real. “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” That quote is terrible advice.
The last 1 or 2 decades were really the golden age of credible evidence. Everyone has a video camera and can upload these videos almost immediately (proving that the videos were not edited later). Yet, at the same time, misinformation has become this huge topic.
We’re not back to square one, either. You can still immediately upload a video (or a hash of it, or get it certified in some way). Say you do this with dashcam footage after a collision, ASAP. That makes it almost unassailable as evidence, because you can’t have had time to forge it; certainly not in a way that is congruent with independent evidence and testimony.
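The hash idea above can be sketched in a few lines. This is a generic illustration, not any particular service’s API: you compute a cryptographic digest of the footage and immediately publish the digest somewhere timestamped, and anyone holding the file later can recompute it to confirm the file existed, unmodified, at that time. The `file_sha256` helper name is made up for the example.

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks
    so even large dashcam files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Publish file_sha256("dashcam.mp4") right after the event; later,
# anyone with the file can recompute the digest and compare.
```

Note the limits: publishing the digest says nothing about whether the footage is genuine, it only pins down when the file existed in its current form. Any later edit changes the digest.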
If several people upload videos of the same event at about the same time, then either they are all in it together and carefully prepared the videos beforehand, or the event really happened.
Sounds like what the board of OpenAI thought when they attempted to fire him.
Honestly I think we’ve been there for a while. The only difference now is that it’s very easy for anyone to fake something, which might actually force us to face it? Or not who knows.
Damn, the AI that wrote this is really good!
Haha thanks
I don’t really see the big problem yet. There’s still a hint of uncanny valley in that video.
Show it to your parents and ask what they think. Guaranteed they can’t tell it’s fake.
My mother is about the only person of that vintage that would probably say “this doesn’t look right”
Even my older siblings (boomers) would probably fall for it.
My parents are dead.
Lets revisit this comment in 3 years.
One year. The Will Smith spaghetti video came out last year. It’s progressing at an impossible pace already.
9 months
This is only the beginning. It’s only going to get harder and harder to know what is and isn’t real online.
Sure, you and I are aware of this and have an idea of what to look out for. But do my older parents or grandparents know about this stuff and what to look for? I seriously doubt it. With the amount of people who are either lying or genuinely can’t tell when images are made by AI… I’m scared.
https://aftermath.site/openai-sora-scam-sillicon-valley
It was a scam, people. The gullible investors it was targeted at were not supposed to share it.
The link does not say in what way people were not supposed to share.
The link is the same kind of self-delusion people show around all of these generative tools: “look, the faces are weird, the bird has the wrong feathers, the cat has only two legs, nothing to worry about,” while forgetting that almost everything else in a clip works well, and that these are the very first releases, which will get gradually better.
Well, I get that AI companies over-promise and stuff, but that opinion piece really just confirms what we’re already able to see in said clips. Sure, many animals look eerie as hell and that monobloc excavation video is one hell of an acid trip, but there’s already a lot there. More than I’m comfortable with.
Genuine question: why do we need this type of thing?
Especially in view of the harm it can cause, what’s the point of creating this aside from generating shareholder value?
Sure, quickly creating a video out of text is cool, but is there an actual need for this?
I mean, the ability to generate whatever video you want without having to pay the costs normally associated with filming, location, actors etc is going to be very appealing to people like advertisers. This way you can have a few seconds of a beach for your travel company advert, for example, without having to pay for the stock footage or film it yourself. In fact I can see this transforming stock footage in general. Why bother to pay someone to make a generic video of ‘people having a meeting’ when an AI can do it for free in half the time. Doesn’t even need to be that good if you’re only using it briefly in a presentation. Not saying any of this is a good thing, but here we are…
> This way you can have a few seconds of a beach for your travel company advert, for example, without having to pay for the stock footage or film it yourself.
Advertising holidays at places that do not exist! Exactly what we needed!
So no different than currently? Pictures and videos in ads all have heavy editing and post processing to make them look better.
Uhh, ads already do that.
I get what you’re saying, but there isn’t really any NEED for that
I mean, yes in so far that it opens those options up to people who may not have been able to afford it before. Whether that’s a ‘need’ or not depends on your opinion of the company I guess.
Also there are other applications beyond this, of course. Easily made videos could help reduce the costs associated with treating some mental health issues for example.
Then you have the ability for novice filmmakers to make content, a bit like how engines like Unity have made it easier for people to make games. Sure, there’ll be shit-tier tat, but there’ll also be content made by creators who may never have had the chance otherwise.
It may help to make synthetic training data for other models / simulators
Which then just feeds back into the system. But as this is a globally impactful thing, is there any real-world need that outweighs the harm?
Cost savings are a need. It frees resources for everyone. Sure the vast majority of the profit goes to the shareholders but that’s true of every labor saving device.
Do we NEED computers? You can hire people to do calculations by hand. The word Computer used to mean a job title, not a device.
OpenAI’s take is someone will create this technology - it might as well be them since their motivation is relatively pure. OpenAI is a non profit and they do work hard to minimise the damage their tech can cause. Which is why this video generation feature has not been launched yet.
OpenAI is only technically non-profit. They’re a proxy for Microsoft in all but name. They started out mostly pure, but their dickhead CEO has worked hard to undo all of that nonsense and has created parallel companies for OpenAI that can absolutely make profit while the main company gets to keep its nonprofit status. That was literally the entire basis for the board firing him (the CEO) a few months back.
OpenAI is no longer “pure.” They are not open. They do not publish the details of any of the discoveries they’ve made (which used to be standard practice, even in the private sector). Their leadership is now in the “effective accelerationism” camp that worships capitalism, and sees developing AGI as their moral obligation, regardless of what harm it may cause to society. (They are also delusional, because it’s very unlikely AGI will be developed anytime soon).
That’s kind of like saying we never needed cars because we had horses.
Who said anything about need?
It was created because someone thought of it. How it’s used is a measure of the person using it.
People will find ways to utilize whatever someone creates. And usually in ways the creator never envisioned.
“Needing” something comes after a tool becomes ubiquitous. Imagine trying to drive a Phillips screw with a slotted screwdriver - you’d need a Phillips driver because those screws are now ubiquitous (and I can’t wait for them to go away. All hail our stripped-screw saviour, Torx!)
Art. Presentations. Visualizations. Porn.
Weird music videos. Just this week Youtube had pushed me music videos all done in the weird warpy AI style.
It’s kind of cool and simultaneously already feels like a fad I’m tired of.
This kind of AI stuff bums me out. You get people legitimately sharing AI images (and potentially videos in the future) and saying “look what I made!”. It’s totally inauthentic.
My boss loves this shit, on the other hand. Looking forward to the day she can automate our jobs away, I assume.