Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

  • hperrin@lemmy.world · 1 year ago

    I think most people consider LLMs to be real AI, myself included. It’s not AGI, if that’s what you mean, but it is AI.

    What exactly is the difference between being able to reliably fool someone into thinking that you can think, and actually being able to think? And how could we, as outside observers, tell the difference?

    • okamiueru@lemmy.world · 1 year ago

      Maybe I’m just a little bit too familiar with it, but I don’t find LLMs particularly convincing of anything I would call “real AI”. But I suppose that entirely depends on what you mean by “real”. Their flaws are painfully obvious, even though I use ChatGPT 4 in the hope that it’s better.

  • bilboswaggings@sopuli.xyz · 1 year ago

    This has been a thing for a long time

    Clippy was an assistant, Cortana was an intelligent assistant and Copilot is AI

    None of these labels were accurate; the naming always runs about a generation ahead of what the software can actually do.

    Clippy just was, Cortana was an assistant, and Copilot is an intelligent assistant.

    The next one they make could actually be AI

  • dangblingus@lemmy.dbzer0.com · 1 year ago

    Humans possess an esoteric ability to create new ideas out of nowhere, never before thought of. Humans are also capable of inspiration, which may appear similar to the way that AIs remix old inputs into “new” outputs, but the rules of creativity aren’t bound by any set parameters the way an LLM is. I’m going to risk making a comment that ages like milk and just spitball: true artificial intelligence that matches a human is impossible.

    • thedeadwalking4242@lemmy.world · 1 year ago

      If I stuck you in a black box, removed every single one of your senses, and took away your ability to memorize things, I don’t really think you’d generate new ideas either. Human creativity relies heavily on input from the outside world. LLMs are not human-like intelligence, but they do exhibit pretty amazing emergent behavior. LLMs are more sophisticated than you think. Human-like AI has to be possible unless there is something intrinsically different about the human brain that breaks our current understanding of the world. Barring a “soul”, the human brain has to be nothing but calculations taking place in a chemical medium, meaning that human-like AI, or even better, must be achievable.

      • 0ops@lemm.ee · 1 year ago

        I pretty much agree, but imo it’s not so much that LLMs are more sophisticated than people think; it’s more that people are less sophisticated than people think. Homo sapiens have proven over and over again that we’re biased toward seeing ourselves as the center of, and the most important part of, the universe. I know that ChatGPT isn’t magic, but I bet that I’m not either.

    • UnityDevice@startrek.website · 1 year ago

      They didn’t just start calling it AI recently. It’s literally the academic term that has been used for almost 70 years.

      The term “AI” can be attributed to John McCarthy of MIT (Massachusetts Institute of Technology); Marvin Minsky (Carnegie Mellon University) defined it as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning”. The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding event of the discipline.

      • 9bananas@lemmy.world · 1 year ago

        perceptual learning, memory organization and critical reasoning

        i mean…by that definition nothing currently in existence deserves to be called “AI”.

        none of the current systems do anything remotely approaching “perceptual learning, memory organization, and critical reasoning”.

        they all require pre-processed inputs and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.

        so OPs original question remains:

        why is it called “AI”, when it plainly is not?

        (my bet is on the faceless suits deciding it makes them money to call everything “AI”, even though it’s a straight up lie)

        • UnityDevice@startrek.website · 1 year ago

          so OPs original question remains: why is it called “AI”, when it plainly is not?

          Because a bunch of professors defined it like that 70 years ago, before the AI winter set in. Why is that so hard to grasp? Not everything is a conspiracy.

          I had a class at uni called AI, and no one thought we were gonna be learning how to make thinking machines. In fact, compared to the stuff we did learn to make back then, modern AI looks godlike.

          Honestly you all sound like the people that snidely complain how it’s called “global warming” when it’s freezing outside.

          • 9bananas@lemmy.world · 1 year ago

            just because the marketing idiots keep calling it AI, doesn’t mean it IS AI.

            words have meaning; i hope we agree on that.

            what’s around nowadays cannot be called AI, because it’s not intelligence by any definition.

            imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:

            “this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!”

            would you go:

            “oh, wow, i guess i need to reconsider what a wheel is, because that’s what the salesperson said is the future!”

            or would you go:

            “that’s idiotic. this obviously isn’t a wheel and this guy’s a scammer.”

            if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven’t invented intelligence, you’re just lying to people. that’s all it is.

            the current mess of calling every fancy spreadsheet an “AI” is purely idiots in fancy suits buying shit they don’t understand from other fancy suits exploiting that ignorance.

            there is no conspiracy here, because it doesn’t require a conspiracy; only idiocy.

            p.s.: you’re not the only one here with university credentials…i don’t really want to bring those up, because it feels like devolving into a dick measuring contest. let’s just say I’ve done programming on industrial ML systems during my bachelor’s, and leave it at that.

            • UnityDevice@startrek.website · 1 year ago

              These arguments are so overly tired and so cyclic that AI researchers coined a name for them decades ago - the AI effect. Or succinctly just: “AI is whatever hasn’t been done yet.”

              • 9bananas@lemmy.world · 1 year ago

                i looked it over and … holy mother of strawman.

                that’s so NOT related to what I’ve been saying at all.

                i never said anything about the advances in AI, or how it’s not really AI because it’s just a computer program, or anything of the sort.

                my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.

                my argument isn’t even related to algorithms, programs, or machines.

                what these tools do is not intelligence: it’s mimicry.

                that’s the correct word for what these systems are capable of. mimicry.

                intelligence has properties that are simply not exhibited by these systems, THAT’S why it’s not AI.

                call it what it is, not what it could become, might become, will become. because that’s what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.

                the wiki talks about people using shifting goal posts in order to “dismiss the advances in AI development”, but that’s not what this is. i haven’t changed what intelligence means; you did! you moved the goal posts!

                I’m not denying progress, I’m denying the claim that the goal has been reached!

                that’s an entirely different argument!

                all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.

                calling what we have currently AI is wrong, by definition; it’s like saying a single neuron is a brain, or that a drop of water is an ocean!

                just because two things share some characteristics, some traits, or because one is a subset of the other, doesn’t mean that they are the exact same thing! that’s ridiculous!

                the definition of AI hasn’t changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that’s not ME moving goal posts, it’s you.

                you said a definition from 70 years ago is “old” and therefore irrelevant, but that’s a laughably weak argument for anything, and an even weaker one in a scientific context.

                is the Pythagorean Theorem suddenly wrong because it’s ~2500 years old?

                ridiculous.

    • Meowoem@sh.itjust.works · 1 year ago

      It’s a computer science term that’s been used for this field of study for decades; it’s like saying calling a tomato a fruit is a marketing decision.

      Yes, it’s somewhat common outside computer science to expect an artificial intelligence to be sentient, because that’s how movies use it. John McCarthy’s proposal, which coined the term in 1956, is available online if you want to read it.

      • ToRA@lemmy.world · 1 year ago

        “Quantum” is a scientific term, yet it’s used as a gimmicky marketing term.

        • Meowoem@sh.itjust.works · 1 year ago

          Yes, perfect example: people use “quantum” as the buzzword in every film, so people think of it as a silly thing, but when CERN talks about quantum communication or circuit quantum electrodynamics, it’d be silly to try to tell them they’re wrong.

    • dutchkimble@lemy.lol · 1 year ago

      It doesn’t rhyme,
      and the content is not really interesting,
      maybe it’s just a rant,
      but with a weird writing format.

  • Rooki@lemmy.world · 1 year ago

    Yes, your summary is correct; it’s just a buzzword.

    You can still check whether it’s a real human: do something really stupid, or speak or write gibberish, and almost every AI will try to reply to it anyway or say “Sorry, I couldn’t understand that”. You can also ask about recent events (most of the LLMs aren’t trained on the newest events).

  • flop_leash_973@lemmy.world · 1 year ago

    The term is so overused at this point that I could probably start referring to any script I write that has conditional statements in it as “AI” and convince my boss I have created our own.
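
    Something in that spirit, entirely tongue-in-cheek (a minimal sketch; every name in it is made up for illustration):

    ```python
    # "Enterprise AI": a handful of conditional statements in a trench coat.
    def ai_ticket_triage(ticket: str) -> str:
        """Hypothetical 'AI' that routes support tickets with plain if-statements."""
        text = ticket.lower()
        if "refund" in text or "invoice" in text:
            return "billing"
        if "crash" in text or "error" in text:
            return "engineering"
        return "general"

    print(ai_ticket_triage("App throws an error on login"))  # -> engineering
    ```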

    • TeckFire@lemmy.world · 1 year ago

      For real. Some enemies in Killzone 2 “act” pretty clever, but aren’t using anything close to an LLM, let alone “AI”. Yet I bet that if you implemented their identical behavior into a modern 2024 game and marketed it as the enemies having “AI”, everyone would believe you in a heartbeat.

      The term is just too all-encompassing. Saying “large language model technology” may not be as eye-catching, but it would at least tell me whether you actually used the technology. Anyone can market as “AI”, and it could be an Excel formula for all I know.

      • Gabu@lemmy.world · 1 year ago

        The enemies in Killzone do use AI… the Goombas in the first Super Mario Bros. used AI. This term has been used to refer to NPC behavior since the dawn of videogames.

        • TeckFire@lemmy.world · 1 year ago

          I know. That’s not my point. I know that technically, “AI” could mean anything that gives the illusion of intelligence artificially. My use of the term was more like the OP’s: a machine achieving sapience, not just the illusion of it. It’s just down to definitions. I prefer to use the term that way, and wish the world did too, but I accept that it does not.

  • Dasnap@lemmy.world · 1 year ago

    I assume you’re referring to the sci-fi kind of self-aware AI because we’ve had ‘artificial intelligence’ in computing for decades in the form of decision making algorithms and the like. Whether any of that should be classed as AI is up for debate as again, it’s still all a facade. In those cases, people only really cared about the outputs and weren’t trying to argue they were alive or anything.

    But yeah, I get what you mean.

  • Despair@lemmy.world · 1 year ago

    A lot of the comments I’ve seen promoting AI sound very similar to the ones made around the time GME or cryptocurrency was relevant. Often, the conversations sound very artificial, and the person just ends up repeating buzzwords and echo-chamber talking points instead of actually demonstrating that they understand what the technology is or what its limitations are.

  • PonyOfWar@pawb.social · 1 year ago

    The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

          • Markimus@lemmy.world · 1 year ago

            Sorry, to be clear I meant it can mimic the conversational symptoms of depression as if it actually had depression; there’s no understanding there though.

            You can’t use that as a metric because you wouldn’t be able to tell the difference between real depression and trained depression.

      • Lath@kbin.social · 1 year ago

        But will they be depressed or will they just simulate it because they’re too lazy to work?

          • meyotch@slrpnk.net · 1 year ago

            It’s only tangentially related to the topic, since it involves brain enhancements, not “AI”. However, you may enjoy the short story “Reasons to Be Cheerful” by Greg Egan.

        • JackFrostNCola@lemmy.world · 1 year ago

          If they are too lazy to work, that would imply they have motivation and choice beyond “doing what my programming tells me to do, i.e. input, process, output”. And if they have the choice not to work because they don’t “feel” like doing it (and it’s not a programmed/coded option given to them to use), then would they not be thinking for themselves?

      • PonyOfWar@pawb.social · 1 year ago

        Not sure about that. An LLM could show symptoms of depression by mimicking depressed texts it was fed. A computer with a true consciousness might never get depression, because it has none of the hormones that influence our brains.

        • Deceptichum@kbin.social · 1 year ago

          Me: Pretend you have depression

          LLM: I’m here to help with any questions or support you might need. If you’re feeling down or facing challenges, feel free to share what’s on your mind. Remember, I’m here to provide information and assistance. If you’re dealing with depression, it’s important to seek support from qualified professionals like therapists or counselors. They can offer personalized guidance and support tailored to your needs.

          • PonyOfWar@pawb.social · 1 year ago

            Give it the right dataset and you could easily create a depressed-sounding LLM to rival Marvin the Paranoid Android.

        • Feathercrown@lemmy.world · 1 year ago

          Hormones aren’t depression, and for that matter they aren’t emotions either. They just cause them in humans. An analogous system would be fairly trivial to implement in an AI.
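
          For illustration only, a toy sketch of what such an analogous system could mean (every name and number is invented; this is the idea, not a claim about any real system):

          ```python
          class ToyAgent:
              """Toy 'hormone analogue': one scalar signal that modulates behavior."""

              def __init__(self) -> None:
                  self.mood = 0.0  # plays the role a hormone level plays in us

              def experience(self, reward: float) -> None:
                  # Events nudge the signal; it also decays back toward baseline.
                  self.mood = 0.9 * self.mood + reward

              def act(self) -> str:
                  # The signal biases behavior, the way hormones bias ours.
                  return "withdraw" if self.mood < -1.0 else "explore"

          agent = ToyAgent()
          for _ in range(10):
              agent.experience(reward=-0.5)  # a string of negative events
          print(agent.act())  # -> withdraw
          ```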

          • PonyOfWar@pawb.social · 1 year ago

            That’s exactly my point though: as OP stated, we could detect whether an AI was truly intelligent by whether it developed depression. Without hormones or something similar, there’s no reason to believe it ever would develop those on its own. The fact that you could artificially give it depression is beside the point.

            • Feathercrown@lemmy.world · 1 year ago

              I don’t think we have the same point here at all. First off, I don’t think depression is a good measure of intelligence. But mostly, my point is that it doesn’t make it less real when hormones aren’t involved. Hormones are simply the mediator that causes that internal experience in humans. If a true AI had an internal experience, there’s no reason to believe that it would require hormones to be depressed. Do text-to-speech systems require a mouth and vocal cords to speak? Do robots need muscle fibers to walk? Do LLMs need neurons to form complete sentences? Do cameras need eyes to see? No, because it doesn’t matter what something is made of. Intelligence and emotions are made of signals. What those signals physically are is irrelevant.

              As for giving it feelings vs. it developing them on its own: you didn’t develop the ability to feel either. That was the job of evolution; in the case of AI, it could be intentionally designed. It could also be evolved, given the right conditions.

              • PonyOfWar@pawb.social · 1 year ago

                First off, I don’t think depression is a good measure of intelligence.

                Exactly. Which is why we shouldn’t judge an AI’s intelligence on whether it can develop depression. Sure, it’s feasible that it could develop it through some other mechanism, but there’s no reason to assume it would in the absence of the factors that cause depression in humans.

    • Ratulf@feddit.de · 1 year ago

      The best thing is that enemy “AI” usually needs to be made worse right after you create it. At first it will headshot everything across the map in milliseconds; the art is in making it dumber.
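
      A hand-wavy sketch of that trick (the function names and numbers are invented; it assumes screen-space aim in pixels):

      ```python
      import random

      def perfect_aim(target_x: float, target_y: float) -> tuple[float, float]:
          # What the raw enemy "AI" does: instant, pixel-perfect aim.
          return target_x, target_y

      def believable_aim(target_x: float, target_y: float) -> tuple[float, float]:
          # What actually ships: deliberate scatter so the enemy can miss.
          aim_error = 25.0  # standard deviation of the sloppiness, in pixels
          return (target_x + random.gauss(0.0, aim_error),
                  target_y + random.gauss(0.0, aim_error))

      print(believable_aim(640.0, 360.0))  # near, but usually not on, the target
      ```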

  • raspberriesareyummy@lemmy.world · 1 year ago

    You are not alone; it annoys me to no end, and I keep correcting and explaining to people who have no clue how computers and LLMs work.

  • alien@lemm.ee · 1 year ago

    It really depends on how you define the term. In the tech world, “AI” is used as a general term to describe many sorts of generative and predictive models. At one point in time you could’ve called a machine that solves arithmetic problems “AI”, and now here we are. It feels like the goalpost gets moved further every time we get close, so I guess we’ll never have “true” AI?

    So, the point is, what is AI for you?

  • ikidd@lemmy.world · 1 year ago

    As a farmer, my kneejerk interpretation is “artificial insemination” and I get confused for a second every time.