Artificial intelligence is spurring a new type of identity theft — with ordinary people finding their faces and words twisted to push often offensive products and ideas
Because it’s a pain to do (and was especially so in the film era), and it changes what the photo conveys in a meaningful way.
Think of, for example, a photo like this, showing anti-civil-rights protesters in 1969:
Blurring the faces would meaningfully obscure what was going on, and confuse people about who held what kinds of views.
Historically, that is correct. However, the technology to automate this is now extremely accessible and low- or no-cost. There was also no widespread threat of misuse via AI in the past, so I understand why there was no need for it then. Going forward, though, I think it’s something we need to think about.
Today, the same photo you presented could be misused with AI to meaningfully obscure what is going on and confuse people about who held what kinds of views. So there’s a double-edged sword here.
Just to be clear, I do believe in the right to photograph anyone and anything in public, at least in the United States and any other countries that respect that freedom. I’m just trying to point out that the issue is complicated.