• jet@hackertalks.com · 4 months ago

    This has been around for a while in research papers: getting people’s pulse rate, and even blood pressure, from videos.

    Other things you can get from videos: electrical interference that reveals which power grid somebody is using, and background noises that can likewise be mapped. So uploading a video deanonymizes you quite well, for a properly motivated investigator.
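The pulse-from-video technique (remote photoplethysmography) boils down to tracking tiny periodic brightness changes in skin pixels. A minimal sketch, with simulated per-frame green-channel means standing in for a real face-tracking pipeline (frame rate, band limits, and the simulated 72 bpm pulse are all assumptions for illustration):

```python
import numpy as np

np.random.seed(0)

fps = 30                        # assumed camera frame rate
t = np.arange(10 * fps) / fps   # 10 seconds of frames
heart_hz = 1.2                  # simulated pulse: 72 bpm

# Simulated mean green value of the face region per frame:
# a tiny pulse ripple riding on skin brightness, plus sensor noise.
green_means = (120
               + 0.3 * np.sin(2 * np.pi * heart_hz * t)
               + 0.1 * np.random.randn(t.size))

# Look for the strongest frequency in the plausible heart-rate band
# (0.7–4 Hz, i.e. roughly 42–240 bpm).
spectrum = np.fft.rfft(green_means - green_means.mean())
freqs = np.fft.rfftfreq(green_means.size, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)
peak_hz = freqs[band][np.argmax(np.abs(spectrum[band]))]
bpm = 60 * peak_hz
print(f"estimated pulse: {bpm:.0f} bpm")
```

Real pipelines add face detection, detrending, and motion compensation, but the core signal really is this small and this periodic, which is why it survives video compression.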

    In the escalating war against deepfakes, however, it will just be part of the arms race, and new deepfakes will now include those fluctuations.

        • GamingChairModel@lemmy.world · 4 months ago

          We’re starting to see it in some cameras, mostly for still photography, but I don’t see why the basic concept wouldn’t extend to video files, too. Leica released a camera last year that signs the photo, including the timestamp and location data, and Canon, Nikon, Sony, Adobe, and Getty have various implementations of the technique.

          Once the major photo software editing workflows support it, we’ll probably see some kind of chain of custody authentication support from camera to publication.

          Of course, that doesn’t prevent fakes in the sense of staged productions, but the timestamp and location data would go a long way.
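The signing idea itself is simple: hash the image bytes together with the capture metadata and sign that record. A toy sketch, using an HMAC key as a stand-in for the asymmetric key that real cameras keep in secure hardware (the key, metadata fields, and function names here are all hypothetical):

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a private key burned into the camera.
# Real schemes use asymmetric signatures so verifiers never hold the key.
CAMERA_KEY = b"secret-key-inside-camera"

def sign_capture(image_bytes, metadata):
    # Bind the image hash and metadata into one canonical record, then tag it.
    record = json.dumps(
        {"image_sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
        sort_keys=True,
    ).encode()
    tag = hmac.new(CAMERA_KEY, record, hashlib.sha256).hexdigest()
    return record, tag

def verify_capture(record, tag):
    expected = hmac.new(CAMERA_KEY, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

record, tag = sign_capture(
    b"raw sensor bytes",
    {"ts": "2024-11-02T10:31:00Z", "gps": "52.52,13.40"},
)
print(verify_capture(record, tag))                       # untouched capture verifies
print(verify_capture(record.replace(b"13.40", b"13.41"), tag))  # edited location fails
```

Changing a single metadata character breaks the tag, which is the whole point: the timestamp and location become as tamper-evident as the pixels.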

          • Laser@feddit.org · 4 months ago

            But then what? So you have a camera signing its files, and we pretend that extraction of the secret key is impossible (which it probably isn’t). You load the file into your editing program, because source files are usually processed further. You create a derivative of the signed file, and there’s no connection to the old signature anymore. So this only makes sense if you provide the original file for verification purposes, which most people won’t do.

            I guess it’s better than nothing, but it will require more infrastructure to turn it into something usable, or it will only be used in important situations where manual checking isn’t an issue, like a newspaper publishing a picture but keeping the original to verify its authenticity.
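The newspaper workflow described above can be sketched as a lineage manifest: the editor publishes the derivative but records the hashes tying it back to the signed original, so anyone handed the original can check the claimed chain of custody (the manifest format and function names here are illustrative, not any real standard):

```python
import hashlib
import json

def make_manifest(original_bytes, derivative_bytes, edit_note):
    # Record which original the published derivative was produced from.
    return json.dumps({
        "original_sha256": hashlib.sha256(original_bytes).hexdigest(),
        "derivative_sha256": hashlib.sha256(derivative_bytes).hexdigest(),
        "edit_note": edit_note,
    }, sort_keys=True)

def check_lineage(manifest, original_bytes, derivative_bytes):
    # A verifier holding both files can confirm the claimed lineage.
    m = json.loads(manifest)
    return (m["original_sha256"] == hashlib.sha256(original_bytes).hexdigest()
            and m["derivative_sha256"] == hashlib.sha256(derivative_bytes).hexdigest())

original = b"signed raw camera file"
published = b"cropped, color-graded export"
manifest = make_manifest(original, published, "crop + grade")
print(check_lineage(manifest, original, published))     # True: lineage holds
print(check_lineage(manifest, b"other file", published))  # False: wrong original
```

Note this still only proves the derivative came from that original; the edit itself is trusted, which is exactly the infrastructure gap the comment points at.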

      • Etterra@lemmy.world · 4 months ago

        Or, alternatively, just showing up to do stuff in person. Of course that’s not always feasible but still.

  • AnAmericanPotato@programming.dev · 4 months ago

    Honestly, I don’t find this very creepy. This is information you are already putting out there for everyone to see. If I post a video of myself speaking, I am not concerned about people seeing how my skin vibrates in that video.

    As video generation tools become more advanced, we will need better algorithms to validate videos. The bar for “fooling the vast majority of humans” is much, much lower than the bar for “being literally indistinguishable from a real video”. The main problem I see is that it’s going to be a cat-and-mouse game, and I don’t think any method you publish will remain valid for very long in practice. The same method will be used to improve the next version of video generators.

    Also, lots of real videos use post-processing that might wash out some of the details they are looking for. Video producers might re-record lines so they don’t perfectly match the video to begin with. It’s been a long time since I had a Samsung phone, but on my old S6, I remember it always had a beauty filter applied to the selfie camera that made me look like a creepy porcelain doll. I could probably make a deepfake of myself that looks more “real” than those real videos and photos.