Lawyers for a man charged with murder in a triple homicide had sought to introduce cellphone video enhanced by machine-learning software.
A Washington state judge overseeing a triple murder case barred the use of video enhanced by artificial intelligence as evidence, in a ruling that experts said may be the first of its kind in a United States criminal court.
The ruling, signed Friday by King County Superior Court Judge Leroy McCullogh and first reported by NBC News, described the technology as novel and said it relies on “opaque methods to represent what the AI model ‘thinks’ should be shown.”
“This Court finds that admission of this AI-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable-process used by the AI model,” the judge wrote in the ruling that was posted to the docket Monday.
The ruling comes as artificial intelligence and its uses — including the proliferation of deepfakes on social media and in political campaigns — quickly evolve, and as state and federal lawmakers grapple with the potential dangers posed by the technology.
Does this ruling apply to all AI-processed images, or only ones from generative AI? What about stuff like DLSS that utilizes deep learning?
I would imagine that using an AI to create a video and voice of a defendant to “say” something from a transcript would be much more impressive than someone reading it.
Given AI models’ penchant for hallucinating and the black-box nature of it all, it seems like it shouldn’t be admissible. AI is fine for creative endeavors, but in arenas where facts matter, AI can’t be trusted.
But I thought we were trying to make Black Mirror a reality?
Oh no, we’re still plowing ahead with this self-induced AI nightmare, this is just a speed bump…
Friend Computer always knows what’s best for us. All praise the Computer and woe to the Mutant, Commie, Scum who would try to bring ruin upon our beneficent Computer overlord!
Good on that judge. If the video is unclear before AI fucks with it, then whatever you’re trying to show falls well within reasonable doubt.
Excellent ruling. Scary times.
But CSI told me that all you have to do to catch a criminal is to enhance! What will they do now?!
Some new camera tech might be opening a can of worms about whether what’s pictured can be taken literally.
There was a story late last year of a woman trying on a wedding dress in front of two mirrors and someone snapped a photo.
When they looked at it, the reflection in the left mirror had a different pose from the reflection in the right mirror.
And this cast doubt on what exactly was going on at the moment the shutter was pressed.
It looks like the camera had one of those features that stitches together the best photo of the people pictured (e.g., avoiding shots where someone is blinking), and it treated the mirror images as different people.
I mean, yeah, but in that case everything that happened was real, and happened within probably a second of each other at most. Still definitely admissible. AI is a very different story.
My info may be out of date, but last I knew you could not use any edited photographic evidence in court in the US, whether edited by AI or not.
“Your Honor, as you can see from the footage, my client sprouted 7 fingers out of his hand. With such a condition, he couldn’t possibly operate a firearm…”
This seems like one of those technologies which may be useful as an investigatory tool but should ultimately not be admissible in court. For example, if law enforcement has a grainy video of a crime and they use AI enhancement to generate leads, that could be OK. It will still have issues with bias and false leads, though, so such usage should be tracked and data kept on it to show its usefulness and bias. But anything done to a video by AI should almost universally be considered suspect. AI is really good at making up plausible results which are complete bullshit.
I’m generally against the whole anti-AI stuff these days, but this makes perfect sense. There’s no way to verify whether the content of an upscaled image is accurate.
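To illustrate the point: even classical, non-AI upscaling has to invent pixel values that were never recorded by the camera; AI "enhancement" does the same thing, just with far more elaborate guesses. This is a minimal sketch with made-up pixel values, not any real enhancement tool's code.

```python
def upscale_nearest(img, factor):
    """Nearest-neighbor upscaling: repeats source pixels, invents no new values."""
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

def upscale_linear_row(row, factor):
    """1-D linear interpolation along one row: fabricates in-between values."""
    out = []
    for i in range(len(row) - 1):
        for step in range(factor):
            t = step / factor
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])
    return out

# Hypothetical 2x2 grayscale "image".
low_res = [[0, 100],
           [100, 0]]

nearest = upscale_nearest(low_res, 2)
# Every output value already existed in the input...
assert {v for r in nearest for v in r} <= {0, 100}

interp = upscale_linear_row(low_res[0], 2)
# ...but interpolation produces a value (50) that no sensor ever recorded.
assert interp == [0, 50.0, 100]
```

Any value an upscaler adds beyond the recorded pixels is an inference about the scene, not an observation of it, which is exactly why the accuracy of "enhanced" detail can't be verified from the enhanced image alone.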
This isn’t really new. It came up at the Kyle Rittenhouse trial back in 2021.
It’s just that everything wasn’t called “AI” back then. Same enhancement algorithms and processing techniques being used though.