• Stovetop@lemmy.world · 6 months ago

    I think you’re drawing a false equivalence. This isn’t about surveillance or incitement or any other pre-crime hypotheticals; it’s simply about adjusting what material is considered infringing in light of new developments that can prevent justice from being served in actual cases of abuse.

    How do you prove what is fictional versus what is real? Unless there is some way to determine with near 100% certainty that a given image or video is AI-generated and not real, or that the model which produced it wasn’t trained on real images of abuse, you invite scenarios where real images of abuse get passed off as “fictional content,” making it easier for predators to victimize more children.