ChatGPT went from correctly answering a simple math problem 98% of the time to just 2% over the course of a few months, study finds. Researchers found wild fluctuations, called drift, in the technology's abi…

  • Windex007@lemmy.world
    1 year ago
    1. It isn’t, and never has been, a truth machine; and while it may have performed worse on the question "is 10777 prime", it may have performed better on "is 526713 prime" (see the sketch below this comment).

    ChatGPT generates responses that resemble what a response "should look like", based on other things it has seen. People still very stubbornly refuse to accept that a response that "looks appropriate" and one that "is right" are two completely different and unrelated things.
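
    A minimal sketch, assuming plain trial division is enough for numbers this small, that checks the two values quoted in the comment above (the function name and output wording are illustrative, not from the study):

    ```python
    # Trial-division primality check: a deterministic ground truth for the
    # kind of "is N prime?" question the study posed to ChatGPT.
    # Trial division is used only for illustration; it is fine at this scale.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return False
            d += 2
        return True

    # The two numbers quoted in the comment above.
    for n in (10777, 526713):
        print(n, "is prime" if is_prime(n) else "is not prime")
    ```

    Either answer can be checked directly this way; whether a model's response merely "looks right" is a separate question entirely.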