• masterofn001@lemmy.ca · 5 days ago

    If a program is given a set of instructions, it should follow that set of instructions.

    If a program not only fails to follow those instructions, but gives itself its own set of instructions, and the programmers don’t understand what’s actually happening, that may be cause for concern.

    “Self-aware” or not.

    • smiletolerantly@awful.systems · 5 days ago

      It’s a goddamn stochastic parrot, starting from zero on each invocation and spitting out something passing for coherence according to its training set.

      “Not understanding what is happening” in regards to AI is NOT “we don’t know how it works mechanically”; it’s “there are so many parameters, it’s just not possible to make sense of or keep track of them all”.

      There’s no awareness or thought.

      • brucethemoose@lemmy.world · edited · 5 days ago

        There may be thought in a sense.

        An analogy might be a static biological “brain” custom-grown to predict a list of possible next words in a block of text. It’s thinking, sorta. Maybe it could even acknowledge itself in a mirror. That doesn’t mean it’s self-aware, though: it’s an unchanging organ.

        And if one wants to go down the rabbit hole of “well there are different types of sentience, lines blur,” yada yada, with the end point of that being to treat things like they are…

        All ML models are static tools.

        For now.
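
        To make the “static” point concrete, here’s a rough sketch (a toy stand-in, not any real model’s code): the word-frequency table below is frozen once it’s built, and every call to generate() starts from scratch, with no memory of previous calls.

        ```python
        # Toy "static" next-word predictor -- a stand-in, not any real model.
        from collections import Counter, defaultdict
        import random

        TRAINING_TEXT = "the cat sat on the mat and the cat ate the fish".split()

        # "Training": count which word follows which. After this loop, the table is frozen.
        FOLLOWERS = defaultdict(Counter)
        for prev, nxt in zip(TRAINING_TEXT, TRAINING_TEXT[1:]):
            FOLLOWERS[prev][nxt] += 1

        def generate(prompt: str, length: int = 6) -> str:
            """Each invocation starts from zero: only the prompt and the frozen table exist."""
            words = prompt.split()
            for _ in range(length):
                options = FOLLOWERS.get(words[-1])
                if not options:
                    break
                # Pick the next word in proportion to how often it followed during "training".
                choices, weights = zip(*options.items())
                words.append(random.choices(choices, weights=weights)[0])
            return " ".join(words)

        print(generate("the"))  # e.g. "the cat sat on the mat and"
        print(generate("the"))  # a fresh run: nothing carried over, nothing learned
        ```

        That’s what “static” means in practice: the weights don’t change at inference time, so nothing the model does in one call alters what it is for the next.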