It’s going to be used prolifically for something much more boring: embellished product listings and fake reviews. If online shopping is frustrating now, it’s probably going to get a lot worse trying to weed out good-quality things to buy once photographs are no longer reliable.
Well, one may hope for a “worse is better” scenario. As in the Star Wars EU, where people generally shop the way they still do in less developed areas of our planet: by asking people they trust, who ask other people they trust, and so on.
This is going to make centralized media a hellscape of fakery.
It’s like with viruses - if a virus kills people too fast, it’ll kill itself.
Maybe cypherpunk-style “public web” technologies will finally become mainstream, because the rest simply won’t be usable.
This is a hyperbolic article to be sure. But many in this thread are missing the point. It’s not that photo manipulation is new.
It’s the volume and quality of photo manipulation that’s new. “Flooding the zone with bullshit,” i.e. decreasing the signal-to-noise ratio, has demonstrable social effect.
It seems like the only defense against this would be something along the lines of FUTO’s Harbor, or maybe Ghost Keys. I’m not gonna pretend to know enough about them technically or practically, but a system that can anonymously prove that you’re you across websites could potentially de-fuel that fire.
Image manipulation has always been a thing, and there are ways to counter it…
But we already know that a shocking number of people will simply take what they see at face value, even if it does look suspicious. The volume of AI-generated misinformation online is already too damn high, without it getting more new strings in its bow.
Governments don’t seem to be anywhere near on top of keeping up with these AI developments either, so by the time the law starts accounting for all of this, the damage will already be done.
But it’s never been this absolutely trivial to generate and distribute completely synthetic media. THAT is the real problem here.
Yep, this is a problem of the volume of misinformation: the truth can get buried by one single person generating thousands of fake photos. It’s really easy to lie; it’s really time-consuming to fact-check.
That’s precisely what I mean.
The effort ratio between generating synthetic visual media and corroborating or disproving a given piece of visual media has literally inverted and then grown by an order of magnitude in the last 3-5 years. That is fucking WILD. And more than a bit scary, when you really start to consider the potential malicious implications. Which you can see being employed all over the place today.
On our vacation two weeks ago my wife took an awesome picture, just with one guy annoyingly in the background. She just tapped him and clicked the button… poof, gone, perfect photo.
Honestly, yeah, I agree. Many mainstream social media platforms are infested with shitty generated content to the point of insanity.
TL;DR: The new Reimage feature on the Google Pixel 9 phones is really good at AI manipulation, while being very easy to use. This is bad.
This is bad
Some serious old-man-yelling-at-cloud energy
It’ll sink in for you when photographic evidence is no longer admissible in court
Photoshop has existed for a bit now. So incredibly shocking that it was only going to get better and easier to do. Move along with the times, oldtimer.
Photoshop requires time and talent to make a believable image.
This requires neither.
But it has been possible for more than a decade
You said “but” like it invalidated what I said, instead of being a true statement and a non sequitur.
You aren’t wrong, and I don’t think that changes what I said either.
Lmao, “but” means your statement can be true and irrelevant at the same time. From the day Photoshop could fool people, lawyers have been trying to dismiss any image as faked, misplaced, or out of context.
When you just now realise it’s an issue, that’s your problem. People can’t stop these tools from existing, so like, go yell at a cloud or something.
Well yeah, I’m not concerned with its ease of use nowadays. I’m more concerned with computer forensics experts not being able to detect a fake, whereas Photoshop edits have always been detectable.
As the cat and mouse game continues, we ask ourselves, is water still wet?
Just wait, image manipulation will happen at image creation and there will be no “original”. Proving an image is unmanipulated will be a landmark legal precedent and set the standard for being able to introduce photographic evidence. It is already a problem for audio recordings and will be eventually for video.
I really don’t have much knowledge on it, but it sounds like it would be an actual good application of blockchain.
Couldn’t a blockchain be used to certify that pictures are original and have not been tampered with ?
On the other hand, if it were possible, I’m certain someone would have already started it; it’s the perfect investor magnet: “Using blockchain to counter AI.”
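Mechanically, the “certify it hasn’t been tampered with” idea boils down to a hash chain: each recorded edit commits to the hash of everything before it, so a retroactive change breaks every later link. A toy sketch in Python (the payloads and the “genesis” label are made up for illustration; this shows the chaining only, not a real provenance system):

```python
import hashlib

def link(prev_hash: str, payload: bytes) -> str:
    """Hash the previous link together with the new payload."""
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

# Record the original capture, then two edits, as a chain.
h0 = link("genesis", b"raw sensor data")
h1 = link(h0, b"crop 100x100")
h2 = link(h1, b"exposure +0.3")

# Verification replays the chain from the start; altering any
# earlier step changes every hash after it.
replayed = link(link(link("genesis", b"raw sensor data"), b"crop 100x100"), b"exposure +0.3")
print(replayed == h2)  # → True
```

Note this only proves the chain’s internal consistency: nothing stops someone from starting a perfectly valid chain with an already-fake “original,” which is exactly the objection raised in the replies.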
How would that work?
I am being serious; I work in IT and can’t see how that would work in any realistic way.
And even if we had a working system to track all changes made to a photo, it would only work if the author submitted the original image before any change had been made. But how would you verify that the original copy of a photo submitted to the system has not been tampered with?
Sure, you could be required to submit the raw file from the camera, but it is only a matter of time until AI can perfectly simulate an optical sensor to take a simulated raw of a simulated scene.
Nope, we simply have to fall back on building trust with photojournalists, and trust digital signatures to tell us when we are seeing a photograph modified outside of the journalist’s agency.
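The digital-signature part can be illustrated with a minimal sketch. HMAC (a shared secret) stands in here for brevity, and the key is made up; real provenance efforts such as the C2PA standard use public-key signatures so that anyone can verify without holding the signing key:

```python
import hashlib
import hmac

PHOTOGRAPHER_KEY = b"hypothetical-secret-key"  # stand-in for a real signing key

def sign(image_bytes: bytes) -> str:
    """Produce a tag over the exact image bytes."""
    return hmac.new(PHOTOGRAPHER_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, signature: str) -> bool:
    """True only if not a single byte changed since signing."""
    return hmac.compare_digest(sign(image_bytes), signature)

original = b"\x89PNG...raw image data..."
sig = sign(original)
print(verify(original, sig))            # → True
print(verify(original + b"edit", sig))  # → False
```

The signature proves the bytes are unchanged since signing; whether the signed scene was real in the first place still comes down to trusting the person holding the key.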
Yep, I think pictures are becoming as valuable as text, and it is fine; we just need to get used to it.
Before photography became mainstream, the only source of information was the written word. It is extremely simple to make up a fake story, so people had to rely on trusted sources. Then, for a short period of history, photography became a (kinda) reliable source of information by itself, and this trust system lost its importance.
In most cases, seeing a photo meant seeing a true reflection of what happened, especially if we were seeing multiple photos of the same event.
Now we are arriving at the end of this period: we cannot trust a photo by itself anymore, and tampering with a photo is becoming as easy as writing a fake story. This is a great opportunity for journalists, I believe.
Meh, those edited photos could have been created in Photoshop as well.
This makes editing and retouching photos easier, and that’s a concern, but it’s not new.
Something I heard in the Photoshop vs. AI argument is that it makes an already existing process much faster, and almost anyone can do it, which increases the sheer amount one person or a group could make; almost like how the printing press made the production of books so much faster (if you’re into history).
I’m too tired to take a stance so I’m just sharing some arguments I’ve heard
Making creating fake images even easier definitely isn’t great, I agree with you there, but it’s nothing that couldn’t already be done with Photoshop.
I definitely don’t like the idea you can do this on your phone.
Exactly, it was already established that pictures from untrusted sources are to be disregarded unless they can be verified by trusted sources.
It is basically how it has been forever with the written press: just as everyone now has the capability to manipulate a picture, everyone can write that we are being invaded by aliens, but whether we should believe it is another thing.
It might take some time for the general public to learn this, but it should be a focus area of general schooling within the area of source criticism.
almost how a printing press made the production of books so much faster
… and we all know that led to 30 years of bloody war, btw
I wish tools to detect whether an image is real would become as easy to use and as good as these bullshit AI tools.
Any tool someone invents will be used to train an AI to circumvent that tool.
In fact that’s how a lot of AI training is done in the first place.
We need to bring back people who can identify shops from some of the pixels and having seen quite a few shops in their time.
Captain Disillusion vs. The Artificer
It’s fundamentally not possible.
At some point fakes will be picture perfect indistinguishable.
Damn, those are pretty damn good!
The world’s billionaires probably know there’s lots of photographic evidence of stuff they did at Epstein island floating around out there. This is why they’re trying to make AI produce art so realistic that photographs are no longer considered evidence, so they can just claim it’s AI generated if any of that stuff ever gets out.
Won’t work against any good digital forensics.
These Photoshop comments are missing the point: it’s just like art, a good edit that can fool everyone needs someone who has practiced a lot and has lots of experience, whereas now even the lazy asses on the right can fake it easily.
I think this comment misses the point that even one doctored photo created by a team of highly skilled individuals can change the course of history. And when that’s what it takes, it’s easier to sell it to the public.
What matters is the source. What we’re being forced to reckon with now is: the assumption that photos capture indisputable reality has never and will never be true. That’s why we invented journalism. Ethically driven people to investigate and be impartial sources of truth on what’s happening in the world. But we’ve neglected and abused the profession so much that it’s a shell of what we need it to be.
The thing is that in the future the mere quantity of fakes will make the careful vetting process you describe physically impossible. You will be bombarded with high-quality fakes to such an extent that you will simply have to give up trying to keep up, so it will be a choice of either dropping the vetting process or dropping pictures altogether. For profit-driven, corporate-owned media outlets, the choice unfortunately will be obvious.
I’m not talking about vetting pictures. I’m talking about journalists who investigate issues THEMSELVES and uncover the truth. They take their OWN pictures and post them on their website and accounts putting their credibility as collateral. We trust them, not because it’s a picture, but because of who took it.
This already happened with text, people learned “Don’t believe everything you read!” And invented the press to figure out the truth. It used to be a core part of our society. But people were tricked into thinking pictures and video were somehow mediums of empirical truth, just because it’s HARD to fake. But never impossible. Which is worse, actually. So we neglected the press and let it collapse into a shit show because we thought we could do it ourselves.
Yeah, it is going to be mainly a quantity issue rather than a quality one. The quality of faked photos has been high since Photoshop. Now a constantly growing avalanche of high-quality fakes (produced by all sorts of vested interests with their own particular purposes) is going to barrage us on a daily basis, simply because it is cheap and easy.
If I say Tiananmen Square, you will, most likely, envision the same photograph I do.
There was film of that exact event. The guy didn’t get run over by the tank, he got on the hood and berated the driver.
Cops in America would run you over for less
Explain away all these other photos then
What do you mean explain away? I pointed out that they always stop the footage in a way that implies he dies, when he clearly doesn’t. Having an article about how AI photos can be used to manipulate our perception of reality cite an instance of careful propaganda manipulating the perception of what happened was just a little on the nose.
Seriously, posting about a massacre from over 30 years ago, where a few hundred people were killed fighting the cops, like it’s supposed to carry water today? Just compare that to the massacre happening right now in Gaza: way more actual evidence of heinous crimes, and it’s way more of a concern to me because it’s my government funding it.
Well, luckily they all just had a talk and some tea about it and nobody died.
I think this is a good thing.
Pictures/video without verified provenance have not constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have been possible at the level of serious actors already.
Putting it in the hands of everyone brings awareness that pictures aren’t evidence, lowering their impact over time. Not being possible for anyone would be great, but that isn’t and hasn’t been reality for a while.
I completely agree. This is going to free kids from someone taking a picture of them doing something relatively harmless and extorting them. “That was AI, I wasn’t even at that party 🤷”
I can’t wait for childhood and teenage life to be a bit more free and a bit less constantly recorded.
yeah, every time you go to a party, and fun happens, somebody pulls out their smartphone and starts filming. it’s really bad. people can only relax when there’s privacy, and smartphones have stolen privacy from society for over 10 years now. we need to either ban filming in general (which is not doable) or discredit photographs - which we’re doing right now.
While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours was generated by some really advanced AI? What they can do now will be laughable compared to what they can do many years from now. And at that point it will be too late to demand anything be done about it.
AI-generated content should have some kind of tag or mark that is inherently tied to it and can be used to identify it as AI generated, even if only part is used. No idea how that would work, though, if it’s even possible.
You already can’t. You can’t close Pandora’s box.
Adding labels just creates a false sense of security.
It wouldn’t be a label; that wouldn’t do anything, since it could just be erased. It should be something like an invisible set of pixels in pictures, or some inaudible sound pattern in audio, that can be detected in some way.
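The “invisible set of pixels” idea is essentially watermarking. A toy least-significant-bit version, treating an image as a flat list of 8-bit pixel values (the tag itself is made up; production schemes like Google’s SynthID are far more robust than this), looks like:

```python
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit tag

def embed(pixels):
    """Return a copy of `pixels` with WATERMARK written into the first LSBs."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # clear the low bit, set it to the tag bit
    return out

def detect(pixels):
    """True if the leading LSBs spell out the tag."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

img = [200, 13, 77, 54, 90, 128, 255, 3, 42]
tagged = embed(img)
print(detect(img), detect(tagged))  # → False True
```

The visual change is at most one intensity level per pixel, so it’s invisible; but any crop, resize, or lossy re-encode scrambles those low bits, which is exactly the erasure problem mentioned above. Real schemes spread the signal redundantly across the whole image, and even then it’s an arms race.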
But it’s irrelevant. You can watermark all you want in the algorithms you control, but it doesn’t change the underlying fact that pictures have been capable of lying for years.
People just recognizing that a picture is not evidence of anything is better.
Yes, but the reason people don’t already consider pictures irrelevant is that it takes time and effort to manipulate a picture. With AI, not only is it fast, it can be automated. Of course you shouldn’t accept something so unreliable as legal evidence, but this will spill over into everything else too.
It doesn’t matter. Any time there are any stakes at all (and plenty of times there aren’t), there’s someone who will do the work.
It doesn’t matter if you can’t trust anything you see? What if you couldn’t be sure you weren’t talking to a bot right now?
No sweat, since I am eschewing most things Google-related.
The majority of others aren’t. The technology also isn’t exclusive to Google, or won’t be for long. Forget placing drugs on a person to have an excuse to arrest them, there will be photographic evidence, completely fake, of anyone counter to the system doing whatever crime they want to pin on us.
Look at the good side of this - now nobody has any reason to trust central authorities or any kind of official organization.
Previously it required enormous power to do such things. Now it’s a given that if there’s no chain of trust from the object to the spectator, any information is noise.
It all looks dark everywhere, but what if we finally get that anarchist future, made by the hands of our enemies?
TAKING OUR JOBS
HARASSING WOMEN AND CHILDREN
A THREAT TO OUR WAY OF LIFE
THEY’RE SHITTING ON THE BEACHES
REWRITING HISTORY BY DOCTORING PHOTOS WITH NEVER SEEN BEFORE PHOTO MANIPULATIONS
Sorry everyone, I keep forgetting which zeitgeist the media is currently using to make us hate and fear something.
…did you just post 6 completely random articles as if there was some sort of point other than “news sites report lots of different news?”
did you just post 6 completely random articles
No, I mean there’s headings and groupings to assist with the inference
as if there was some sort of point other than “news sites report lots of different news?”
There might be a point. I see an association. If others do as well that’s good. If others don’t that is also ok.
To spell it out directly: I think it’s weird that media is recycling headlines for AI from Republican headlines for immigration.
Often I cannot see the forest for the trees but sometimes I feel the presence of it even when I’m in it.
I work at a newspaper as both a writer and photographer. I deal with images all day.
Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When PhotoShop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.
So, as a professional, am I worried? Not really. Because at the end of the day, it all comes down to ‘trust and verify when possible’. We generally receive our images from people who are wholly reliable. They have no reason to deceive us and know that burning that bridge will hurt their organisation and career. It’s not worth it.
If someone was to send us an image that’s ‘too interesting’, we’d obviously try to verify it through other sources. If a bunch of people photographed that same incident from different angles, clearly it’s real. If we can’t verify it, well, we either trust the source and run it, or we don’t.
Thank you. This was a well thought out and logical response.
If a bunch of people photographed that same incident from different angles, clearly it’s real
Interesting that this is the threshold because it might need to be raised. In the past it was definitely true that perspective was a hard problem to solve, so multiple angles would increase the likelihood of veracity. Now with AI tools and even just the proliferation and access to 3D effects packages it might no longer be the case.
Well again, multiple, independent sources that each have a level of trust go pretty far.
From my personal experience with AI though… I found it difficult to get it to generate consistent images. So if I’d ask it for different angles of the same thing, details on it would change. Can it be done? Sure. With good systems and a bit of photoshopping you could likely fake multiple angles of it.
But for the images we run? It wouldn’t really be worth the effort I imagine. We’re not talking iconic shots like the ones mentioned in the article.
Personally I think this kind of response shows how not ready we are, because it is grounded in the antiquated assumption that this is just more of the same, instead of a complete revolution in both the quality and quantity of fakery that is about to happen.
I disagree, they are not talking about the online low trust sources that will indeed undergo massive changes, they’re talking about organisations with chains of trust, and they make a compelling case that they won’t be affected as much.
Not that you’re wrong either, but your points don’t really apply to their scenario. People who built their career in photography have much more to lose, and more opportunity to be discovered, so they really don’t want to play silly games when a single proven fake would end their career for good. It’ll happen no doubt, but it’ll be rare and big news, a great embarrassment for everyone involved.
Online discourse, random photos from events, anything without that chain of trust (or where the “chain of trust” is built by people who don’t actually care), that’s where this is a game changer.
So politicians and other scum have gotten themselves a technology to put the jinn back into the bottle.
Except that’s not what happens.
Just take a look at Facebook. Tons of AI-generated slop with tens or even hundreds of thousands of likes, and people actually believing it. I live in Indonesia, and people often share fake things just for monetised engagement, and ordinary people have no skill to discern them.
You and I, or even every person here, belong to the rare group of people actually able to discern information properly. Most people are just doomscrolling the internet and believing random things that appear to be realistic. Especially in places where tech education and literacy are not widespread.
Unfortunately, newspapers and news sources like it that verify information reasonably well aren’t where most people get their info from anymore, and IMO, are unlikely to be around in a decade. It’s become pretty easy to get known misinformation widely distributed and refuting it does virtually nothing to change popular opinion on these stories anymore. This is only going to get worse with tools like this.
I can’t control where people find their information, that’s a fact. If people choose to find their news on unreliable, fake, agenda-driven, bot-infested social media, there’s very little I can do to stop that.
All I can do is be the best possible source for the people who choose to find their news with us.
The ‘death of newspapers’ has been a theme throughout the decades. Radio is faster, it’s going to kill papers. TV is faster, it’s going to kill papers. The internet is faster, it’s going to kill newspapers… and yet, there’s still newspapers. And we’re evolving too. We’re not just a printed product, we also ARE an internet news source. The printed medium isn’t as fast, sure, but that’s also something that our actual readers like. The ability to sit down and read a properly sourced, well written story at a time and place of their choosing. A lot of them still prefer to read their paper Saturday morning over a nice breakfast. Like any business, we adapt to the changing needs of consumers. Printed papers might not be as big as they once were, but they won’t be dying out any time soon.
I don’t dispute the usefulness of proper reporting, but at the rate I see newspapers dropping all around us, I’ll be astounded if there’s more than a very few around in a decade. But maybe I’m wrong and people will surprise me and start looking for quality reporting. Doubt it, but maybe.
If a bunch of people photographed that same incident from different angles, clearly it’s real.
I don’t think you can assume this anymore.
Yeah, photo editing software and AI can be used to create images from different points of view, mimicking different styles and qualities of different equipment, and to make adjustments for continuity from perspective to perspective. Unless we have a way for something like AI to identify fabricated images, using some sort of encoded fingerprint or something, it won’t be long until they are completely indiscernible from the genuine article. You would have to prove a negative, that the person who claims to have taken the photo could not have, in order to do so. This, as we know, is far more difficult than current discretionary methods.
The point I’m making isn’t really about the ability to fake specific angles or the tech side of it. It’s about levels of trust and independent sources.
It’s certainly possible for people to put up some fake accounts and tweet some fake images of separate angles. But I’m not trusting random accounts on Twitter for that. We look at sources like AP, Reuters, AFP… if they all have the same news images from different angles, it’s trustworthy enough for me. On a smaller scale, we look at people and sources we trust and have vetted personally. People with longstanding relationships. It really does boil down to a ‘circle of trust’: if I don’t know a particular photographer, I’ll talk to someone who can vouch for them based on past experiences.
And if all else fails and it’s just too juicy not to run? We’d slap a big ol’ ‘this image has not been verified’ on it. Which we’ve never had to do so far, because we’re careful with our sources.
Sorry, but if traditional news media loses much more ground to “alternative fact” land and the other causes of decline versus the new media, I have zero faith they won’t just give in and go with it. I mean, if they’re going to fail anyway, why not at least see if they can get themselves a slice of that pie.
Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When PhotoShop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.
I actually think it isn’t the AI photo or video manipulation part that makes it a bigger issue nowadays (at least not primarily), but the way in which they are consumed. AI making things easier is just another puzzle piece in this trend.
Information volume and speed have increased dramatically, resulting in an overflow that significantly shortens the timespan dedicated to each piece of content. If I slowly read my Sunday newspaper during breakfast, I’ll give it much more attention compared to scrolling through my social media feed. That lack of engagement makes it much easier for misinformation to have the desired effect.
There’s also the increased complexity of the world. Things can seem reasonable and true on the surface, but have knock-on consequences that aren’t immediately apparent, or only hold true within a narrow picture but fall apart once viewed from a wider perspective. This just gets worse combined with the point above.
Then there’s the decline in relevance of high-profile leading news outlets, and the increased fragmentation of the information landscape. Instead of carefully curated and verified content, immediacy and clickbait take priority. And this IMO also has a negative effect on those more classical outlets, which have to compete with it.
You also have increased populism, especially in politics, and many more trends, all compounding the same issue of misinformation.
And even if caught and corrected, usually the damage is done and the correction reaches far fewer people.
oddly enough, there are models trained to generate different angles of a given scene!
you’re right about the importance of trust. leveraging and scaling interpersonal trust is the key to consensus.
This is one of the required steps on the way to holodecks. I’ve been ready for it for 30 years.