A judge in Washington state has blocked “AI-enhanced” video evidence from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • GenderNeutralBro@lemmy.sdf.org · 9 months ago

    AI-based video codecs are on the way. This isn’t necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That’s a good thing for film and TV, but a bad thing for, say, security cameras.

    The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.

    • Buelldozer@lemmy.today · 9 months ago

      AI-based video codecs are on the way.

      Arguably already here.

      Look at this description of Samsung’s mobile AI for their S24 phone and newer tablets:

      AI-powered image and video editing

      Galaxy AI also features various image and video editing features. If you have an image that is not level (horizontally or vertically) with respect to the object, scene, or subject, you can correct its angle without losing other parts of the image. The blank parts of that angle-corrected image are filled with generative-AI-powered content; the image editor tries to fill them in with AI-generated content that fits best. You can also erase objects or subjects in an image. Another feature lets you select an object/subject in an image and change its position, angle, or size.

      It can also turn normal videos into slow-motion videos. While a video is playing, you press and hold the screen over the portion of the video you want converted to slow motion, and AI will generate frames and insert them between real frames to create a slow-motion effect.
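      The slow-motion feature described above is a form of frame interpolation. As a rough illustration only (not Samsung’s actual algorithm, which uses a learned model), the simplest possible interpolator just linearly blends neighboring frames; the function name and the flat-list frame representation here are made up for the sketch:

      ```python
      def interpolate_frames(frame_a, frame_b, n_inserted):
          """Generate n_inserted intermediate frames between frame_a and frame_b.

          Frames are represented as flat lists of pixel intensities (0-255).
          Each inserted frame is a weighted average of the two real frames.
          """
          frames = []
          for i in range(1, n_inserted + 1):
              t = i / (n_inserted + 1)  # blend weight, 0 < t < 1
              frames.append([round((1 - t) * a + t * b)
                             for a, b in zip(frame_a, frame_b)])
          return frames

      # Inserting one frame halfway between two 2-pixel frames:
      print(interpolate_frames([0, 100], [100, 200], 1))  # [[50, 150]]
      ```

      Real AI interpolators estimate motion between frames rather than blending pixel values, which is exactly why they can synthesize plausible detail that the sensor never captured.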

    • DarkenLM@kbin.social · 9 months ago

      I don’t think AI codecs will be anything revolutionary. There are plenty of lossless codecs already, but if you want more detail, you’ll need a better physical sensor, and I doubt there’s anything that can be done to get around that (at least not anything that represents what actually exists, rather than a hallucination).

      • foggenbooty@lemmy.world · 9 months ago

        It’s an interesting thought experiment, but we don’t actually see what really exists; our brains essentially do AI-style vision, filling in things we don’t actually perceive. Examples are movement while we’re blinking, objects and colors in our peripheral vision, the state of objects while our eyes dart around, etc.

        The difference is that we can’t go back frame by frame and analyze these “hallucinations,” since they’re not recorded. I think AI-enhanced video will actually bring us closer to what humans see, even if some of the data doesn’t “exist,” but the article is correct that it should never be used as evidence.

    • jeeva@lemmy.world · 9 months ago

      I don’t think loss is what people are worried about, really; it’s more about injecting details that fit the training data but don’t exist in the source.

      Given the hoopla Hollywood and directors made about frame interpolation, do you think generated frames will be any better received or more popular?