• Wiitigo@lemmy.world
    7 months ago

    If the AI generated content is labeled, or has context, or has comments or descriptions created by people, then wouldn’t it just be the same as synthetic training data? Which is shown to still be very useful for training.

    • General_Effort@lemmy.world
      7 months ago

      Sorta. This “model collapse” thing is basically an urban legend at this point.

      The kernel of truth is this: A model learns stuff. When you use that model to generate training data, it will not output all it has learned. The second generation model will not know as much as the first. If you repeat this process a couple times, you are left with nothing. It’s hard to see how this could become a problem in the real world.

      Incest is a good analogy, if you know what the problem with inbreeding is: You lose genetic diversity. Still, breeders use this to get to desired traits and so does nature (genetic bottleneck, founder effect).
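That generational narrowing can be illustrated with a deliberately crude toy model. A sketch under a big assumption: each "model" here is just a Gaussian fitted to a small sample drawn from its predecessor, which is nothing like real training but shows the loss-of-diversity mechanism.

```python
import numpy as np

# Toy illustration of generational retraining: each "model" is a
# Gaussian fitted to samples drawn from the previous generation's
# Gaussian. With a finite sample each round, the fitted spread
# drifts downward over generations, so diversity is slowly lost.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0        # generation-0 "model"
n_samples = 20              # tiny training set per generation
stds = [sigma]

for generation in range(500):
    data = rng.normal(mu, sigma, n_samples)  # model generates its own training data
    mu, sigma = data.mean(), data.std()      # next generation fits that data
    stds.append(sigma)

print(f"spread at generation 0:   {stds[0]:.3f}")
print(f"spread at generation 500: {stds[-1]:.6f}")
```

With only 20 samples per generation the spread collapses quickly; larger samples slow the drift but the random walk still tends toward zero.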

    • linearchaos@lemmy.world
      7 months ago

      Yes, it’s still useful, and it’s basically how we made our last couple of jumps: an AI training on AI-generated data that is graded by another AI. We’ve hit diminishing returns, though.

    • lolcatnip@reddthat.com
      7 months ago

      Most AI-generated data in the wild won’t have labels because there’s no incentive to label it, and in a lot of cases there are incentives to not label it.

  • Uranium3006@kbin.social
    7 months ago

    Now that the low-hanging fruit of internet scraping is exhausted, we’re gonna have to start purpose-building datasets. This will be expensive and might be the new bottleneck on AI progress.

  • numberfour002@lemmy.world
    7 months ago

    Anecdotally, I’ve suspected this was already happening with code-related AI, as I’ve noticed a pretty steep decline in the quality of the code suggestions that various AI tools have been providing.

    Some of these tools, like GitHub’s AI product, are trained on their own code repositories. As more developers use AI to help generate code, and especially as more novice-level developers rely on AI to learn new technologies, more of that AI-generated code is (in theory) getting added to the repos used to train the AI. Not all AI code is garbage, but in my experience enough of it is that I suspect it’s going to be a garbage-in, garbage-out affair sans human correction/oversight. As far as I can tell, these tools aren’t really using good metrics to rate whether the code they’re training on is quality code, nor whether it actually even works.

    More and more often I’m getting ungrounded output (the new term for hallucinations) when it comes to code, rather than the actual helpful and relevant stuff that had me so excited when I first started using these products. And I worry that it’s going to get worse. I hope not, of course, but it is a little concerning when the AI tools are more consistently providing useless / broken suggestions.

    • Thorny_Insight@lemm.ee
      7 months ago

      It’s not going to. It’s just going to get more widespread and harder to detect. The incentives favor developing better and better AI. Luckily, one of the solutions to this issue is - wait for it - AI. With a good enough AI, especially a generally intelligent one, you don’t need search engines anymore. You just ask and it gives you the answer. If you think AI couldn’t do this reliably, then that is not the AI I’m talking about.

    • zcd@lemmy.ca
      7 months ago

      The Internet is fucked now, the only valuable untainted training data is the Internet as it existed prior to this AI bullshit coming online. Confirmed human content is going to be super valuable, so expect our privacy to be fucked as well…

        • IninewCrow@lemmy.ca
          7 months ago

          Even that is going to turn into a shit show … It will become a copy of a copy of a copy of a backup of a backup of a copy and all of it will just get rendered down to some common basics based on whatever the hell was marketed and promoted by bots

    • rosemash@social.raincloud.dev
      7 months ago

      The collapse won’t stop AI output from spamming the internet, though. It will just make it worse and more likely to be incorrect.

  • gapbetweenus@feddit.de
    7 months ago

    Wasn’t there a paper not long ago showing that it’s possible to generate data with AI as a training set for AI? I was surprised (and the math is too much for me to check out myself), but that seems to solve the problem.

    • realharo@lemm.ee
      7 months ago

      As far as I know, that is mainly used where a better, bigger model generates training data for a more efficient smaller model to bring it a bit closer to its level.

      Were there any cases of an already state of the art model using this method to improve itself?
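The big-model-teaches-small-model setup described above is knowledge distillation. A minimal sketch, with toy logistic regressors standing in for the large and small models (real distillation uses neural networks and soft token distributions, so everything here is a simplified stand-in):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# "Teacher": a fixed, already-trained linear classifier standing in
# for the bigger, better model.
w_teacher = np.array([2.0, -3.0, 1.0])

# Unlabeled inputs; the teacher generates the training signal.
X = rng.normal(size=(1000, 3))
soft_labels = sigmoid(X @ w_teacher)      # teacher's output probabilities

# "Student": a fresh model trained to match the teacher's soft
# outputs via gradient descent on the cross-entropy between the two.
w_student = np.zeros(3)
learning_rate = 1.0
for _ in range(5000):
    preds = sigmoid(X @ w_student)
    grad = X.T @ (preds - soft_labels) / len(X)
    w_student -= learning_rate * grad

print("teacher weights:", w_teacher)
print("student weights:", np.round(w_student, 2))
```

Here the student can match the teacher almost exactly because both are the same model class; in practice the student is smaller, so it only approximates the teacher’s behaviour.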

    • General_Effort@lemmy.world
      7 months ago

      Sorta. This “model collapse” thing is basically an urban legend at this point.

      The kernel of truth is this: A model learns stuff. When you use that model to generate training data, it will not output all it has learned. The second generation model will not know as much as the first. If you repeat this process a couple times, you are left with nothing. It’s hard to see how this could become a problem in the real world.

      Incest is a good analogy, if you know what the problem with inbreeding is: You lose genetic diversity. Still, breeders use this to get to desired traits and so does nature (genetic bottleneck, founder effect).

      • gapbetweenus@feddit.de
        7 months ago

        Training data for models in general was a big problem when I studied systems biology. Interesting that we’re finding workarounds, since it sounded rather fundamental to me. I found your metaphor rather helpful, thanks.

        • jacksilver@lemmy.world
          7 months ago

          I wouldn’t say we’ve really found a workaround. AI companies hire lots of people to parse and clean data. That can work for things like pose estimation, which is largely a one-and-done task. But for things that are constantly evolving (language, art, video), it may not be a viable long-term strategy.

  • webghost0101@sopuli.xyz
    7 months ago

    Back when I was taught concept art as a subject at college, my teacher had a name for this.

    “Incest”, because every generation of art that references other art becomes more and more strange-looking and detached from reality.

    If you thought Skyrim weapons look ridiculous, you should have seen my classmates’ Skyrim-inspired weapons.

  • FaceDeer@kbin.social
    7 months ago

    This article is from June 12, 2023. That’s practically stone-aged as far as AI technology has been progressing.

    The paper it’s based on used a very simplistic approach, training AIs purely on the outputs of their previous “generation.” That turns out not to be a realistic real-world scenario, though. In reality, AIs can be trained on a mixture of human-generated and AI-generated content, and the result can actually turn out better than training on human-generated content alone. AI-generated content can be curated and custom-made to be better suited to training, and the human-generated stuff adds back in the edge cases that might disappear over repeated training generations.
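One way to see why mixing helps is a crude toy model (an assumption of this sketch: each “model” is just a Gaussian refitted every generation, nothing like real training). When the training mix includes fresh samples from the original distribution, the fitted spread stays anchored instead of collapsing:

```python
import numpy as np

# Toy sketch of mixed training data: each generation refits a
# Gaussian "model" on a blend of its predecessor's synthetic samples
# and fresh "human" samples from the original distribution. The
# human fraction keeps pulling the fitted spread back toward truth.
rng = np.random.default_rng(0)

true_mu, true_sigma = 0.0, 1.0   # the "human" data distribution
mu, sigma = true_mu, true_sigma
n_synthetic, n_human = 15, 5     # 75% AI-generated, 25% human per generation

for generation in range(500):
    synthetic = rng.normal(mu, sigma, n_synthetic)
    human = rng.normal(true_mu, true_sigma, n_human)
    data = np.concatenate([synthetic, human])
    mu, sigma = data.mean(), data.std()

print(f"spread after 500 mixed-data generations: {sigma:.3f}")
```

Run purely on its own output (n_human = 0), the same loop drifts toward zero spread; even a modest fraction of fresh data stabilizes it.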

  • BananaTrifleViolin@lemmy.world
    7 months ago

    The “solutions” to model collapse (essentially retraining on the original data set) suggest LLMs will plateau or deteriorate, especially without a way to separate out good- and bad-quality data (or, as they euphemistically put it, human vs. AI data).

    We’re increasingly seeing the limitations and flaws of LLMs. “Hallucinations” (better described as serious errors), partial model collapse, and complete collapse all suggest the current approach to LLMs is probably not going to lead to some form of general AI. We have models we don’t really understand that have fundamental flaws and limitations.

    Unsurprising that they probably can’t live up to the hype.

    • zwaetschgeraeuber@lemmy.world
      7 months ago

      Even if it does plateau, the same was said of Moore’s law, which held up way longer than expected. There are so many ways to improve this. The open-source community is getting to the point where you can actually run decent models on normal private hardware (talking about 70-120B models).

  • holycrap@lemm.ee
    7 months ago

    My team has been calling models that use AI-generated data “Habsburg models.”

  • spawnsalot@kbin.social
    7 months ago

    It would be hilarious if we entered the deep-fried Marquaud era of AI, where outputs degenerate into rehashed responses that just get progressively more jumbled and unintelligible as the models cannibalise each other’s generated content.

      • SkaveRat@discuss.tchncs.de
        7 months ago

        Low-background steel was/is valuable because it was made before nuclear testing; the bombs contaminated steel produced afterwards.

        In the same sense, anything created before LLMs would be considered “low background radiation” content, as that’s the only content guaranteed to have been made without LLMs in the loop.

  • Dojan@lemmy.world
    7 months ago

    I mean, it makes sense. Machine learning is fantastic at noticing patterns, and the stuff these models generate most definitely does have patterns. We might not notice them, but the models will pick up on them, and eventually, if you keep training them on that data, they’ll skew more and more in that direction.

    They’ve been marketing things like there isn’t a limit to how good these things can get, but there is. Nothing is infinite.

    • circuitfarmer@lemmy.world
      7 months ago

      I’ve tried to make this point several times to folks in the industry. I work in AI, and yet every time I approach some people with “you know it ultimately just repeats patterns”, I’m met with scoffs and those people telling me I’m just not “seeing the big picture”.

      But I am, and the truth is that there are limits. This tech is not the digital singularity the marketers and business goons want everyone to think it is.

      • zurohki@aussie.zone
        7 months ago

        It repeats things that sort of sound intelligent to try and convince everyone that actual intelligent thought is taking place? It really is just like humans!

        • Dojan@lemmy.world
          7 months ago

          They don’t really parrot unless they’re overfitted.

          It’s more that they have been trained to produce a certain kind of result. One way to train them is to assign a score to how good each output is. Doing this manually takes a lot of time (Google has been doing it for years via CAPTCHA), or you can train other models to score text for you.

          The obvious problem with the latter solution is that then you need to ensure that that model is scoring roughly in line with how humans would score it; the technical term for this is alignment. There’s a pretty funny story about that with GPT-2, presented in a really cute animation format by Robert Miles.
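The score-a-generation idea can be sketched as best-of-n sampling against a reward model. Everything in this sketch (the generator, the reward model, the prompt) is a hypothetical stand-in, not a real LLM API:

```python
import random

# Toy best-of-n sampling: generate several candidate outputs, have a
# "scorer" model rate each one, and keep the highest-scoring one.
random.seed(42)

def generate_candidates(prompt: str, n: int) -> list[str]:
    # Stand-in generator: a real system would sample an LLM n times.
    return [f"{prompt} -> candidate #{i}" for i in range(n)]

def reward_model(text: str) -> float:
    # Stand-in scorer: a trained reward model would estimate how a
    # human would rate this text; here the score is just random.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=reward_model)

answer = best_of_n("Explain model collapse")
print(answer)
```

The alignment problem the comment describes is exactly the gap between what `reward_model` rewards and what humans actually want.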