we love google (and LLMs)

  • Icalasari@fedia.io
    6 months ago

    These chatbots are AI: they tailor responses over time as long as previous messages are in memory, showing a limited level of learning.

    The issue is these chatbots either:

    A) Get so little memory that they effectively don’t even have short-term memory (see the sketch after this comment), or

    B) Are put in situations where that chat memory learning feature is moot

    They are AI; they’re just stupidly simple and inept AI that barely qualifies.
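
    A rough sketch of what point A looks like in practice, assuming a small token-budget context window (the budget, the tokenizer stand-in, and the helper names below are made up for illustration):

    ```python
    # Hypothetical illustration: a chatbot's "memory" is just however many past
    # messages fit in a fixed context window. Once the budget is exceeded, the
    # oldest turns silently fall off.

    CONTEXT_BUDGET_TOKENS = 512  # made-up and tiny on purpose

    def count_tokens(text: str) -> int:
        # Crude stand-in for a real tokenizer: one token per word.
        return len(text.split())

    def trim_history(history: list[str]) -> list[str]:
        """Keep only the most recent messages that fit in the budget."""
        kept, used = [], 0
        for message in reversed(history):  # walk newest-first
            cost = count_tokens(message)
            if used + cost > CONTEXT_BUDGET_TOKENS:
                break  # everything older than this point is forgotten
            kept.append(message)
            used += cost
        return list(reversed(kept))  # restore chronological order

    history = [f"user: message {i} " + "blah " * 60 for i in range(20)]
    print(len(trim_history(history)))  # only the last few turns survive
    ```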

    • BradleyUffner@lemmy.world
      6 months ago

      They have no memory, actually. They are completely static. When you chat with them, every single previous prompt and response from that session is fed back through as if it were one large prompt. They are just faking it behind a chat-like user interface. They most definitely do not learn anything after training is complete.
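
      A minimal sketch of that replay behaviour, assuming a generic stateless completion call (the generate() stand-in and the prompt format are illustrative, not any particular vendor’s API):

      ```python
      # Illustration of a "chat" built on a stateless text-completion model:
      # every turn, the ENTIRE transcript so far is re-sent as one big prompt.
      # The model keeps no state between calls and its weights never change.

      def generate(prompt: str) -> str:
          # Stand-in for the real model call (normally a network/API request).
          return f"(model reply to a {len(prompt)}-character prompt)"

      def chat_turn(transcript: list[tuple[str, str]], user_message: str) -> str:
          # Flatten the whole conversation so far into one large prompt string.
          prompt = ""
          for role, text in transcript:
              prompt += f"{role}: {text}\n"
          prompt += f"user: {user_message}\nassistant:"

          reply = generate(prompt)                   # stateless call
          transcript.append(("user", user_message))  # the "memory" lives out here,
          transcript.append(("assistant", reply))    # in the UI layer, not the model
          return reply

      transcript: list[tuple[str, str]] = []
      print(chat_turn(transcript, "Hello!"))
      print(chat_turn(transcript, "Do you remember what I just said?"))
      ```

      Because the model call itself is stateless, the only “memory” is whatever the wrapper chooses to resend on each turn.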

      • RealFknNito@lemmy.world
        6 months ago

        … No. They’re instanced so that when a new person interacts with them, they don’t have the memories of interacting with the person before them. A clean slate, using only the training data in the form the developers intended. It’s still AI; it’s just not your girlfriend. The fact that you don’t realize they can and do learn beyond their training data proves people just hate what they don’t understand. I get it, most people don’t even know the difference between a neural network and AI, because who has the time for that? But if you just sit here and go “nuh uh, they’re faking it” rather than push people and yourself to learn more, I invite you, cordially, to shut the fuck up.

        Dipshits giving their opinions as fact is a scourge with no cure.

          • RealFknNito@lemmy.world
            5 months ago

            About which part? The part where they can remember and expand their training data with new interactions, but often become so corrupted by them that the original intent behind the AI is irreversibly altered? That’s been around for about a decade (sketched below). How about the fact that they’re “not faking it” because the added capacity to compute and generate new content requires sophisticated planning just to keep running in a timely manner?

            I’d love to know which part you took issue with, but you seemingly took my advice to shut the fuck up, and I do profoundly appreciate it.
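
            For what it’s worth, a toy sketch of the kind of online-learning loop described above, where interactions get folded back into the training data and can swamp the original intent; this is a made-up frequency model for illustration, not how current LLM chatbots are trained or deployed:

            ```python
            # Toy illustration of online learning from user interactions: every
            # message a user sends becomes part of the bot's "training data".
            # Feed it enough junk and the junk dominates its output.

            import random
            from collections import Counter

            class NaiveLearningBot:
                def __init__(self, seed_corpus: list[str]):
                    self.word_counts = Counter(w for line in seed_corpus for w in line.split())

                def learn(self, user_message: str) -> None:
                    # Online update: the interaction itself is added to the data.
                    self.word_counts.update(user_message.split())

                def reply(self, length: int = 5) -> str:
                    words, weights = zip(*self.word_counts.items())
                    return " ".join(random.choices(words, weights=weights, k=length))

            bot = NaiveLearningBot(["have a nice day", "how can I help you"])
            print(bot.reply())        # drawn from the seed corpus

            for _ in range(1000):     # a flood of junk interactions
                bot.learn("spam spam spam")

            print(bot.reply())        # now mostly "spam": the original intent is drowned out
            ```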