• kennismigrant@feddit.nl · 1 year ago

    MIT Technology Review got an exclusive preview of the research

    The article was published three days after the arXiv release, so how is this an “exclusive preview”?

    Successfully tricking existing models with a few crafted samples doesn’t seem like a significant achievement. Can someone highlight what exactly is interesting here? Is there anything that can’t be countered by routine adjustments to the loss/evaluation functions (something like the sketch below)?
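
    To be concrete about what I mean by “routine adjustments”: something along these lines, a generic adversarial-training-style step where the loss also covers perturbed copies of each batch. The model, optimizer, and epsilon here are placeholders I made up, not anything from the paper.

    ```python
    import torch
    import torch.nn.functional as F

    def robust_training_step(model, optimizer, images, labels, epsilon=4 / 255):
        """One training step that also penalizes loss on perturbed copies of the
        batch (adversarial-training style). All values are placeholders."""
        # Craft perturbed copies with a single gradient-sign step on the inputs
        adv_inputs = images.clone().requires_grad_(True)
        input_loss = F.cross_entropy(model(adv_inputs), labels)
        grad, = torch.autograd.grad(input_loss, adv_inputs)
        perturbed = (images + epsilon * grad.sign()).clamp(0, 1)

        # Usual loss on clean data plus the same loss on the perturbed copies
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels) \
             + F.cross_entropy(model(perturbed), labels)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```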

  • kakes@sh.itjust.works · 1 year ago

    I don’t believe for a second that this works, and if it did, it would be trivial to get around.

    It claims to “change the pixel values imperceptibly”. That just isn’t how these generative models work: they look at the same colors a human would, so if a change is imperceptible to a human, it won’t meaningfully affect the model. Perhaps it could subtly influence the output, but nothing near the scale they claim (rough sketch of what “imperceptible changes” usually means at the end of this comment).

    My first thought was that they’re trying to cash in, but from what I can tell it seems to be free (for now, at least?). Is it for academic “cred”? Or do they somehow actually think this works?

    It just seems to be such a direct appeal to non-tech-savvy people that I can’t help but question their motivations.
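
    For reference, this is roughly what “imperceptible pixel changes” means in the adversarial-example literature: a gradient-directed shift of a few intensity levels per pixel (FGSM-style). The model, label, and epsilon below are placeholders I picked, not whatever their tool actually does; my skepticism is about this transferring to generative training at the scale they claim.

    ```python
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Placeholder classifier; the tool in the article targets generative models,
    # this only shows the mechanism behind "imperceptible" perturbations.
    model = models.resnet18(weights=None).eval()

    def perturb(image, label, epsilon=2 / 255):
        """Shift each pixel by at most +/- epsilon (on a 0-1 scale) in the
        direction that most increases the model's loss for the given label."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adv = image + epsilon * image.grad.sign()  # far below a visible shift
        return adv.clamp(0, 1).detach()

    x = torch.rand(1, 3, 224, 224)    # stand-in for an image scaled to [0, 1]
    y = torch.tensor([207])           # stand-in class index
    x_adv = perturb(x, y)
    print((x_adv - x).abs().max())    # per-pixel change capped at 2/255
    ```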