• huginn@feddit.it · 11 months ago

    Friendly reminder that your predictive text, while very compelling, is not alive.

    It’s not a mind.

    • _NoName_@lemmy.ml · 11 months ago

      I don’t think most people will care, so long as their NPC interaction ends up compelling. We’ve been reading stories about people who don’t exist for centuries, and that’s stopped no one from sympathizing with them - and now there’s a chance you could have an open conversation with them.

      Like, I think a lot of us assume we care about the authors who write the character dialogue, but I think most people actually choose not to know who is behind their favorite NPCs, to preserve some sense that the NPC’s personality isn’t manufactured.

      Combine that with everyone becoming steadily more lonely over the years, and I think AI-generated NPC interactions are going to take escapism to another level.

        • _NoName_@lemmy.ml · 11 months ago

          Lol, yeah. If generative AI text stays as shitty as it is now, then this whole discussion is moot. Whether that will be the case remains to be seen. What is an indisputable fact, though, is that right now is the worst that generative AI will ever be again. It can only improve from here.

          • Barbarian@sh.itjust.works · 11 months ago

            “It’s only able to improve from here.”

            That isn’t actually true. With the rise in articles, posts and comments written by these algorithms, experts are warning about model collapse. Basically, the lack of decent human-written training data will destroy future generative AI before it can even start.
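The collapse dynamic described above can be sketched with a toy simulation (all numbers and the truncation rule are invented for illustration): fit a Gaussian to data, train the next "generation" only on samples drawn from the fit, keep only the most typical samples, and repeat. Diversity shrinks every round.

```python
import random
import statistics

# Toy sketch of model collapse (illustrative only): each "generation"
# of the model is fit to data produced by the previous generation.
# Keeping only high-probability samples (within 1 sigma) mimics models
# over-producing typical outputs; the data's spread then shrinks.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # "human-written" data

sigmas = []
for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    sigmas.append(sigma)
    # Train the next generation purely on the current model's samples.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(2000))
            if abs(x - mu) <= sigma][:500]

print([round(s, 3) for s in sigmas])  # spread collapses generation by generation
```

With the 1-sigma truncation, each generation's spread is roughly halved, so the synthetic data converges toward a single point within a handful of rounds.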

            • _NoName_@lemmy.ml · 11 months ago

              That’s an interesting point. We are seeing a similar kind of issue with search engines losing effectiveness due to search engine optimization on websites.

              So it is possible that generative AI will become enshittified in the same way.

    • MxM111@kbin.social · 11 months ago

      While it is not alive, whether it is a mind is not so clear-cut. It could be called a kind of mind - a mind different from a human’s.

      • huginn@feddit.it · 11 months ago

        Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that, mathematically proven not to show any form of emergent behavior.

        • MxM111@kbin.social · 11 months ago

          I do not think that it is a “linear” progression. An ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.
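The nonlinearity point is easy to check: even a single ReLU unit (a toy stand-in for one network neuron, with made-up weights) fails the additivity property that defines a linear map.

```python
def relu(x):
    return max(0.0, x)

# A single neuron f(x) = relu(w*x + b) with toy weights w=1, b=-1.
# A linear map would satisfy f(x + y) == f(x) + f(y); this one does not.
def f(x):
    return relu(1.0 * x - 1.0)

print(f(2.0) + f(2.0))  # 2.0
print(f(4.0))           # 3.0, so f(2 + 2) != f(2) + f(2): not linear
```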

            • MxM111@kbin.social · 11 months ago

              Thank you. This paper, though, does not state that there are no emergent abilities. It only states that one can introduce a metric with respect to which the emergent ability behaves smoothly rather than threshold-like. While interesting, it only suggests that things like intelligence are smooth functions - but so what? Other metrics show exponential or threshold dependence, and whether a metric is the right one depends only on how it will be used. And there is no law that emergent properties have to be threshold-like. Quite the opposite: in nearly all the examples from physics that I know, emergence appears gradually.
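The metric-choice argument can be sketched numerically. Suppose per-token accuracy improves smoothly with model scale (the scaling curve below is invented purely for illustration); an all-or-nothing metric such as exact match over a 50-token answer then looks like a sharp threshold, while the underlying per-token metric stays smooth.

```python
import math

L = 50  # tokens in the target answer (hypothetical task)

rows = []
for k in range(10, 31, 2):  # k ~ log2(model size), hypothetical scale axis
    p = 1.0 - math.exp(-0.15 * k)  # smooth, saturating per-token accuracy
    exact = p ** L                 # all-or-nothing exact-match metric
    rows.append((k, p, exact))
    print(f"log2(size)={k:2d}  per-token={p:.3f}  exact-match={exact:.3f}")
```

Per-token accuracy creeps smoothly from roughly 0.78 to 0.99 across the scan, but exact match sits near zero for most of it and then jumps - an apparently "emergent" ability that is an artifact of the metric.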

        • Kogasa@programming.dev · 11 months ago

          No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

          • huginn@feddit.it · 11 months ago (edited)

            Here’s a white paper explicitly proving:

            1. No emergent properties (illusory due to bad measures)
            2. Predictable linear progress with model size

            https://arxiv.org/abs/2304.15004

            The field changes fast; I understand it is hard to keep up.

            • Kogasa@programming.dev · 11 months ago

              Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.

    • Poggervania@kbin.social · 11 months ago

      Cyberpunk 2077 sorta explores this a bit.

      There’s a vending machine that has a personality and talks to people walking by. The quest chain basically has you chatting with the vending machine, and you even give it advice about a person it has a crush on. You eventually become friends with it.

      Just when it seems increasingly apparent that it’s an AI developing sentience, it turns out the vending machine simply has a really well-coded socializing program. It even admits as much when it’s about to be deactivated.

      So, to reiterate what you said: predictive text and LLMs are not alive nor a mind.