• General_Effort@lemmy.world · 8 months ago

    Sorta. This “model collapse” thing is basically an urban legend at this point.

    The kernel of truth is this: a model learns stuff. When you use that model to generate training data, it will not output everything it has learned. The second-generation model will know less than the first. Repeat that process a few times and you are left with nothing. In practice, though, it’s hard to see how this would become a problem in the real world.
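
    Here’s a toy sketch of that diversity loss, just as an illustration (the Zipf-shaped vocabulary and corpus size are made-up numbers, not anything from a real training pipeline): fit token frequencies to a corpus, sample a new corpus from the fit, and repeat. Any rare token that draws zero samples in one generation is gone from every later one:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    vocab_size = 1000
    corpus_size = 5000

    # Generation 0: "real" data with a Zipf-like long tail, roughly like
    # natural language token frequencies.
    probs = 1.0 / np.arange(1, vocab_size + 1)
    probs /= probs.sum()
    corpus = rng.choice(vocab_size, size=corpus_size, p=probs)

    for gen in range(1, 11):
        # "Train" a model: estimate token frequencies from the current corpus.
        counts = np.bincount(corpus, minlength=vocab_size)
        est = counts / counts.sum()
        # "Generate" the next training set by sampling from the fitted model.
        # A token the model never emits can never reappear downstream.
        corpus = rng.choice(vocab_size, size=corpus_size, p=est)
        distinct = np.count_nonzero(np.bincount(corpus, minlength=vocab_size))
        print(f"generation {gen:2d}: distinct tokens remaining = {distinct}")
    ```

    Run it and the distinct-token count drops every generation. That shrinking tail is the “losing diversity” part of the analogy below.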

    Incest is a good analogy, if you know what the actual problem with inbreeding is: you lose genetic diversity. Still, breeders use it deliberately to fix desired traits, and so does nature (genetic bottlenecks, the founder effect).

    • gapbetweenus@feddit.de · 8 months ago

      Training data for models in general was a big problem back when I studied systems biology. Interesting that we’re finding workarounds, since the problem sounded rather fundamental to me. I found your metaphor rather helpful, thanks.

      • jacksilver@lemmy.world · 8 months ago

        I wouldn’t say we’ve really found a workaround. AI companies hire lots of people to parse and clean data. That can work for things like pose estimation, which is largely a one-and-done task. But for domains that are constantly evolving, like language, art, and video, it may not be a viable long-term strategy.