Artificial intelligence is spurring a new type of identity theft — with ordinary people finding their faces and words twisted to push often offensive products and ideas

  • silence7@slrpnk.netOP · 7 months ago

    In the US, kinda sorta.

    Advertisers are liable if they use your likeness to promote a product, imply endorsement, or otherwise make commercial use of it without your consent. This gives you the right to sue, which is worth absolutely nothing when you’re dealing with a shady overseas shell company hawking fake Viagra.

    News organizations, artists, and random private individuals can publish a photo or other image of you taken in a place where you have no reasonable expectation of privacy, without contacting you or obtaining your consent. This is important: imagine trying to share a photograph of a public event and having to track down everyone in the background, or trying to raise public awareness after photographing a politician committing a crime.

    • paridoxical@lemmy.world · 7 months ago

      In your example at the end, why can’t the other people’s faces be blurred out before releasing the photo? Just playing devil’s advocate on that point.

      • silence7@slrpnk.netOP · 7 months ago

        Because it’s a pain to do (and was especially so in the film era), and it changes what the photo conveys in a meaningful way.

        Think of, for example, a photo like this one, showing anti-civil-rights protesters in 1969:

        Blurring the faces would meaningfully obscure what was going on, and confuse people about who held what kinds of views.

        • paridoxical@lemmy.world · 7 months ago

          Historically, that is correct. However, the technology to automate this is now extremely accessible at little or no cost. Also, there was no widespread threat of misuse via AI in the past, so I get that there was no need back then. Going forward, I think it’s something we need to think about.

          Today, the same photo you presented could be misused with AI to meaningfully obscure what was going on and confuse people about who held what kinds of views. So there’s a double-edged sword here.

          Just to be clear, I do believe in the right to photograph anyone and anything in public, at least in the United States and any other countries that respect that freedom. I’m just trying to point out that the issue is complicated.