• Balder@lemmy.world
    17 days ago

    All I see is people chatting with an LLM as if it were a person. “How catastrophic, from 0 to 100?” You’re just typing to get some random answer based solely on whatever context is being fed into the input, and you probably don’t even know the extent of that context.

    Trying to make the LLM “see its mistakes” is a pointless exercise.

    • cyrano@lemmy.dbzer0.com
      17 days ago

      Yeah, I agree the interactions are a pure waste of time. Making it write an apology letter? WTF! To me this looks like a fast track to learning environment segregation and secret segregation. The data is lost; learn from it. There are already tools in place for proper development, like git and alembic (see the sketch below).
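
      A minimal sketch of what that segregation can look like in practice, assuming a Python app using SQLAlchemy and alembic. The `DATABASE_URL` variable name and the sqlite fallback are illustrative assumptions, not details from the incident:

      ```python
      # Minimal sketch of environment and secret segregation.
      # Assumption: a Python app using SQLAlchemy; names are illustrative.
      import os

      from sqlalchemy import create_engine

      # Secret segregation: production credentials exist only in the
      # production environment, never in the repo or a dev/agent session.
      # Environment segregation: if DATABASE_URL is unset, fall back to a
      # local throwaway database, so a misbehaving agent can at worst
      # trash scratch data.
      DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev_scratch.db")

      engine = create_engine(DATABASE_URL)

      # Schema changes go through versioned alembic migrations, so they
      # are reviewable in git and reversible, e.g.:
      #   alembic revision --autogenerate -m "add users table"
      #   alembic upgrade head
      #   alembic downgrade -1   # roll back the last migration
      ```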

      • UntitledQuitting@reddthat.com
        17 days ago

        the apology letter(s) is what made me think this was satire. using shame to punish “him” like a child is an interesting troubleshooting method.

        the lying robot hasn’t heel-turned; any truth you’ve gleaned has been accidental.

    • andallthat@lemmy.world
      17 days ago

      I wonder if it can be used legally against the company behind the model, though. I doubt it’s possible, but having a “your own model says it effed up my data” statement could add some beef to a complaint. Or at least to a request for a refund on the fees.