• suburban_hillbilly@lemmy.ml
    6 months ago

    photos

    They aren’t photos. They’re photorealistic drawings done by computer algorithms. This might seem like a tiny quibble to many, but as far as I can tell it is the crux of the entire issue.

    There isn’t any actual private information about the girls being disclosed. The algorithms, for example, do not and could not know about and reproduce an unseen birthmark, mole, tattoo, piercing, etc. A photograph would contain that information. What is being shown is an approximation of what similar-looking girls in the training set look like, with the girls’ faces stitched on top. That is categorically different from something like revenge porn, which is purely private information specific to the individual.

    I’m sure it doesn’t feel all that different to the girls in the photos, or to the boys looking at them for that matter. There is some degree of harm here, without question. But we must tread lightly, because there is real danger in categorizing algorithmic guesswork as reliable, which many authoritarian types are desperate to do.

    https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition/

    This is the other side of the same coin. We cannot start treating the output of neural networks as facts. These are error-prone black boxes, and that fact must be driven hard into the consciousness of every living person.

    For some, I’m sure purely unrelated, reason, I feel like reading Philip K. Dick again…

    • daellat@lemmy.world
      6 months ago

      I’ve only read Do Androids Dream of Electric Sheep? by him. What other book(s) of his should I check out?

    • KillingTimeItself@lemmy.dbzer0.com
      6 months ago

      They aren’t photos. They’re photorealistic drawings done by computer algorithms. This might seem like a tiny quibble to many, but as far as I can tell it is the crux of the entire issue.

      Most phone cameras alter the original image with AI shit now; it’s really common. They apply all kinds of weird correction to make it look better. Plus, if it’s social media, there’s probably a filter somewhere in there. At what point does this become the ship of Theseus?

      My point here is that if we’re arguing AI images are, semantically, not photos, then most photos on the internet, including ones of people, would also arguably not be photos to some degree.

      • suburban_hillbilly@lemmy.ml
        6 months ago

        The difference is that a manipulated photo starts with a photo. It actually contains recorded information about the subject. Deepfakes do not contain any recorded information about the subject unless that subject is also in the training set.

        Yes, it is semantics; it’s the reason why we have different words for photography and drawing, and they are not interchangeable.

        • Rekorse@lemmy.dbzer0.com
          6 months ago

          The deepfakes would contain the prompt image provided by the creator. The model did not create a whole new approximation of their face, as the entire pool it can pull on for that specific part is a single image, or group of images, provided by the prompter.

        • KillingTimeItself@lemmy.dbzer0.com
          6 months ago

          Deepfakes do not contain any recorded information about the subject unless that subject is also in the training set.

          This is explicitly untrue; they literally do. You are just factually wrong about this. While it may not be in the training data, how do you think it manages to replace the face of someone in one picture with the face of someone else in some other video?

          Do you think it just magically guesses? No, it literally uses a real picture of someone. In fact, back in the day with GANimation and early deepfake software, you literally had to train these AIs on pictures of the person you wanted to faceswap. Remember all those singing deepfakes that were super popular a couple of years ago? Yep, those were literally trained on real pictures.

          Regardless, you are still ignoring my point. My question was: how can we consider AI content to be “not a photo” while a photo manipulated numerous times, through numerous different processes, which is quite literally not the original photo, is still considered a literal “photo”? To rephrase it more simply for you and other readers: “why is AI-generated content not considered to be a photo, when a heavily altered photo that only vaguely resembles its original in most aspects is considered to be a photo?”

          You seem to have missed the point of my question entirely, and simply said something wrong instead.

          Yes it is semantics

          No, it’s not; this is a ship of Theseus premise here. Semantics is just how we contextualize and conceptualize things into word form. The problem is not semantics (words are just used to convey the problem at hand); the problem is a philosophical conundrum that has existed for thousands of years.

          In fact, if we’re going by semantics here, “photograph” is technically rather broad: it literally just defines itself as something in the likeness of something else, though taken by method of photography. We could arguably remove that last part and simply use it to refer to anything that is a likeness of something else. And we see this in contextual usage of words: a “photographic” copy is often used to describe something that is similar enough to something else that, in terms of a photograph, they appear to be the same thing.

          Think about scanning a paper document; that would be a photographic copy of a physical item. While it is literally taken via means of photography, in a contextual and semantic sense it just means that the digital copy is photographically equivalent to the physical copy.

          • suburban_hillbilly@lemmy.ml
            6 months ago

            Oh FFS, I clipped the word “new”. Of course it uses information in the prompt. That’s trivial. No one cares about it returning the information that was given to it in the prompt. Nevertheless, mea culpa. You got me.

            this is a ship of Theseus premise here

            No, it really isn’t.

            The purpose of that paradox is that you are unambiguously recreating/replacing the ship exactly as you already know it to be. The reason the ‘AI’ in question here is even being used is that it isn’t doing that. It’s giving you back much more than it was given.

            The comparison would be if Theseus’s ship had been lost, so you definitely don’t have the ship anymore, but you had managed to recover the sail. If you take the sail to an experienced builder (the AI) who had never seen the ship, he might be able to build a reasonable approximation based on inferences from the sail and his wealth of knowledge, but nobody is going to be daft enough to assert it is the same ship. Does the wheel even have the same number of spokes? Does it have the same number of oars? The same weight of anchor?

            The only way you could even tell whether his attempted facsimile was close is if you already had intimate knowledge of the ship from some other source.

            …when a heavily altered photo that only vaguely resembles its original in most aspects is considered to be a photo”

            Disagree.