• over_clox@lemmy.world
    6 months ago

    Yes, I get all that, duh. Did you read the original post title? CSAM?

    I thought you could catch a clue when I said inappropriate.

    • FaceDeer@fedia.io
      6 months ago

      Yes. You’re saying that the AI trainers must have had CSAM in their training data in order to produce an AI that is able to generate CSAM. That’s simply not the case.

      You also implied earlier on that these AIs “act or respond on their own”, which is also not true. They only generate images when prompted to by a user.

      The fact that an AI is able to generate inappropriate material just means it’s a versatile tool.

      • over_clox@lemmy.world
        6 months ago

        Alright, well let’s play an innocent hypothetical here.

        Let’s pretend you only know some magic word model (which, by the way, doesn’t exist without thousands or millions of images).

        But anyways, let’s say you’re the AI. Now, with no vision of the world, what would you, as an AI, say if I asked you how crescent wrenches and channel locks reproduce?

        Now try the same hypothetical question again. This time, you actually have a genuine set of images of clean new tools, plus information that tools can’t reproduce.

        And now let’s go to the modern day, where AI has zillions of images of rusty redneck toolboxes and a bunch of janky dialogue…

        After all that, then where do crowbars come from?

        AI is just as dumb as the people using it.

        • FaceDeer@fedia.io
          6 months ago

          3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or it would have been caught earlier. I doubt it had any significant impact on the model’s capabilities.
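
          The arithmetic above checks out; a quick sketch (using the 3,226 and 5.8 billion figures from the comment):

          ```python
          # Fraction of suspected images in the training set, per the figures quoted above.
          suspected = 3_226
          total = 5_800_000_000

          pct = suspected / total * 100  # as a percentage
          print(f"{pct:.5f}%")  # → 0.00006%
          ```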