We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it simply guesses which token – a word or word fragment – will come next in a sequence, based on the data it’s been trained on.
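
As a minimal illustration of that next-token loop, here is a toy sketch with an invented two-word context table; a real model computes the distribution over tokens from billions of learned parameters rather than from a lookup table, but the generate-by-sampling loop is the same shape:

```python
import random

# Toy next-token table: maps a two-word context to a probability
# distribution over possible next tokens. A real LLM computes this
# distribution with a neural network; these numbers are invented.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def generate(context, n_tokens, rng=random.Random(0)):
    """Repeatedly sample a continuation, weighted by its probability."""
    tokens = list(context)
    for _ in range(n_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if dist is None:  # unseen context: the toy model gives up
            break
        choices, weights = zip(*dist.items())
        tokens.append(rng.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"], 3))
```

There is no lookup of meaning anywhere in the loop, which is the article’s point: the output is a weighted draw from patterns in the training data.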

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • psycho_driver@lemmy.world · 1 month ago

    Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and it now says I’m paid in full for the six-month period. It’s been days now with no follow-up . . . I’m pretty sure AI snuck that one through for me.

    • laranis@lemmy.zip · 1 month ago

      Be careful… If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you’d see some money but at that point half of it goes to the lawyer and you’re still screwed.

      • psycho_driver@lemmy.world · 1 month ago

        Oh I’m aware of the potential pitfalls but it’s something I’m willing to risk to stick it to insurance. I wouldn’t even carry it if it wasn’t required by law. I have the funds to cover what they would cover.

        • JcbAzPx@lemmy.world · 1 month ago

          If you have the funds you could self insure. You’d need to look up the details for your jurisdiction, but the gist of it is you keep the amount required coverage in an account that you never touch until you need to pay out.

          • psycho_driver@lemmy.world · 1 month ago

            Hmm I have daydreamed about this scenario. I didn’t realize it was a thing. Thanks, I’ll check into it, though I wouldn’t doubt if it’s not a thing in my dystopian red flyover state.

            Edit: Yeah, you have to be the registered owner of 25 or more vehicles to qualify for self insurance in my state. So, dealers and rich people only, unfortunately.

      • Blue_Morpho@lemmy.world · 1 month ago

        AI didn’t write the insurance policy. It only helped him search for the best deal. That’s like saying your insurance company will cancel you because you used a phone to comparison shop.

  • Geodad@lemmy.world · 1 month ago

    I’ve never been fooled by their claims of it being intelligent.

    It’s basically an overly complicated series of if/then statements that try to guess the next series of inputs.

      • A_norny_mousse@feddit.org · 1 month ago

        And they’re running into issues due to increasingly ingesting AI-generated data.

        There we go. Who coulda seen that coming! While that’s going to be a fun ride, at the same time companies all but mandate AS* to their employees.

      • aesthelete@lemmy.world · 1 month ago

        I really hate the current AI bubble, but “chatgpt 2 was literally an Excel spreadsheet” isn’t what the article you linked is saying at all.

    • adr1an@programming.dev · 1 month ago

      I love this resource, https://thebullshitmachines.com/ (i.e. see lesson 1)…

      In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.

      You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …

      Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.

    • kromem@lemmy.world · 1 month ago

      It very much isn’t and that’s extremely technically wrong on many, many levels.

      Yet still one of the higher up voted comments here.

      Which says a lot.

      • Hotzilla@sopuli.xyz · 1 month ago

        Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

        This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible five years ago.

        Five years ago I would have laughed in your face if you had suggested I could write code that summarizes a description input by a user. Now I laugh: hand over your wallet, because I need to call an API or buy a few GPUs.

        • JcbAzPx@lemmy.world · 1 month ago

          I think the point is that this is not the path to general intelligence. This is more like cheating on the Turing test.

      • Blue_Morpho@lemmy.world · 1 month ago

        Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he’s not technically wrong. Of course, it’s more than just JMP and JMP represents the entire class of jump commands like JE and JZ. Something needs to act on the results of the TMULs.

          • Blue_Morpho@lemmy.world · 1 month ago

            That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself.

            The model is data. It needs to be operated on to get information out. That means lots of JMPs.

            If someone said viewing a gif is just a bunch of if-else’s, that’s also true. That the data in the gif isn’t itself a bunch of if-else’s isn’t relevant.

            Executing LLMs is particularly JMP heavy. It’s why you need massive amounts of fast RAM: caching doesn’t help them.
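
Both halves of this subthread can be seen in a toy forward pass. In the sketch below (made-up weights), the model itself is pure data that only ever gets multiplied and summed; the only branches in evaluating it are the loop tests, which is where the jump instructions live:

```python
import math

# The "model" is just data: a weight matrix and bias vector
# (numbers invented for illustration).
W = [[0.2, -0.5, 0.1],
     [0.4,  0.3, -0.2]]
b = [0.1, -0.1]

def forward(x):
    """One layer: matrix-vector multiply, then softmax.
    No weight is ever consulted by an if/then; each is multiplied
    and summed. The loop conditions are the only branches."""
    logits = [sum(w * xi for w, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    exps = [math.exp(v) for v in logits]   # softmax: exponentiate...
    total = sum(exps)
    return [v / total for v in exps]       # ...and normalize to sum to 1

probs = forward([1.0, 2.0, 3.0])
print(probs)
```

So "the model is a bunch of if/elses" and "running the model executes a lot of jumps" are claims about different layers: the data versus the interpreter loop that walks over it.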

            • tmpod@lemmy.pt · 1 month ago

              You’re correct, but that’s like saying manufacturing a car is just bolting and soldering a bunch of stuff together. It’s technically true to some degree, but it’s very disingenuous to make such a statement without being ironic. If you’re making these claims, you’re either incompetent or acting in bad faith.

              I think there is a lot wrong with LLMs and how the public at large uses them, and even more so with how companies are developing and promoting them. But to spread misinformation and pollute an already overcrowded space with junk is irresponsible at best.

      • El Barto@lemmy.world · 1 month ago

        I’ll be pedantic, but yeah. It’s all transistors all the way down, and transistors are pretty much chained if/then switches.

    • Endmaker@ani.social · 1 month ago

      Artificial Intelligent is supposed to be intelligent.

      For the record, AI is not supposed to be intelligent.

      It just has to appear intelligent. It can be all smoke-and-mirrors, giving the impression that it’s smart enough - provided it can perform the task at hand.

      That’s why it’s termed artificial intelligence.

      The subfield of Artificial General Intelligence is another story.

      • nfh@lemmy.world · 1 month ago

        The field of artificial intelligence has also made incredible strides in the last decade, and the decade before that. The field of artificial general intelligence has been around for something like 70 years, and has made a really modest amount of progress in that time, on the scale of what they’re trying to do.

        • Endmaker@ani.social · 1 month ago

          The field of artificial general intelligence has been around for something like 70 years, and has made a really modest amount of progress in that time, on the scale of what they’re trying to do.

          I daresay it would stay this way until we figure out what intelligence is.

  • some_guy@lemmy.sdf.org · 1 month ago

    People who don’t like “AI” should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.

    • kibiz0r@midwest.social · 1 month ago

      Citation Needed (by Molly White) also frequently bashes AI.

      I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

      It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”

      • some_guy@lemmy.sdf.org · 1 month ago

        I’m subscribed to her Web3 is Going Great RSS. She coded the website in straight HTML, according to a podcast that I listen to. She’s great.

        I didn’t know she had a podcast. I just added it to my backup playlist. If it’s as good as I hope it is, it’ll get moved to the primary playlist. Thanks!

  • Knock_Knock_Lemmy_In@lemmy.world · 1 month ago

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

    This is not a good argument.

    • bitjunkie@lemmy.world · 1 month ago

      philosopher

      Here’s why. It’s a quote from a pure academic attempting to describe something practical.

      • Knock_Knock_Lemmy_In@lemmy.world · 1 month ago

        The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn’t do.

        Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.

    • fodor@lemmy.zip · 1 month ago

      Actually it’s a very very brief summary of some philosophical arguments that happened between the 1950s and the 1980s. If you’re interested in the topic, you could go read about them.

      • Knock_Knock_Lemmy_In@lemmy.world · 1 month ago

        I’m not attacking philosophical arguments between the 1950s and the 1980s.

        I’m pointing out that the claim that consciousness must form inside a fleshy body is not supported by any evidence.

      • Knock_Knock_Lemmy_In@lemmy.world · 1 month ago

        It’s hard to see that book’s argument from the Wikipedia entry, but I don’t see it arguing that intelligence needs senses, flesh, nerves, pain and pleasure.

        It’s just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn’t imply computers can’t gain consciousness, or that they need flesh and senses to do so.

        • Simulation6@sopuli.xyz · 1 month ago

          I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?

          • 𝕛𝕨𝕞-𝕕𝕖𝕧@lemmy.dbzer0.com · 1 month ago

            possibly.

            current machines aren’t really capable of what we would consider sentience because of the von neumann bottleneck.

            simply put, computers treat memory and computation as separate tasks, leading to an explosion in necessary system resources for tasks that would be relatively trivial for a brain-like system to do, largely due to things like buffers and memory management code. lots of this is hidden from the engineer and end user these days, so people aren’t really super aware of exactly how fucking complex most modern computational systems are.

            this is why if, for example, i threw a ball at you, you would reflexively catch it, dodge it, or parry it; and your brain would do so for an amount of energy similar to that required to power a simple LED. this is a highly complex physics calculation run in a very short amount of time for an incredibly low amount of energy relative to the amount of information in the system. the brain is capable of this because your brain doesn’t store information in a chest and later retrieve it like contemporary computers do. brains are turing machines, they just aren’t von neumann machines. in the brain, information is stored… within the actual system itself. the mechanical operation of the brain is so highly optimized that it likely isn’t physically possible to make a much more efficient computer without venturing into the realm of strange quantum mechanics. even then, the verdict is still out on whether or not natural brains do something like this to some degree as well. we know a whole lot about the brain, but it seems some damnable incompleteness-theorem-adjacent effect prevents us from easily comprehending the actual mechanics of our own brains from inside the brain itself in a holistic manner.
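
The bottleneck described above can be made concrete by counting arithmetic against memory traffic for a matrix-vector multiply, the core operation of LLM inference. A rough cost model (assuming hypothetical 4-byte weights, each fetched from memory once per token):

```python
def matvec_costs(rows, cols, bytes_per_weight=4):
    """Rough cost model for y = W @ x, assuming weights dominate traffic."""
    flops = 2 * rows * cols                        # one multiply + one add per weight
    bytes_moved = rows * cols * bytes_per_weight   # each weight fetched once
    # FLOPs per byte ("arithmetic intensity"): a low value means the
    # processor mostly waits on memory - the von Neumann bottleneck.
    return flops, bytes_moved, flops / bytes_moved

flops, traffic, intensity = matvec_costs(4096, 4096)
print(flops, traffic, intensity)  # intensity is 0.5 FLOPs per byte
```

Half a floating-point operation per byte moved is far below what any modern chip can sustain from main memory, which is why token generation is memory-bandwidth-bound and caches barely help: every weight is used once and evicted.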

            that’s actually one of the things AI and machine learning might be great for. if it is impossible to explain the human experience from inside of the human experience… then we must build a non-human experience and ask its perspective on the matter - again, simply put.

  • merc@sh.itjust.works · 1 month ago

    The other thing that most people don’t focus on is how we train LLMs.

    We’re basically building something like a spider tailed viper. A spider tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles it around so it looks like a spider, convincing birds they’ve found a snack, and when the bird gets close enough the snake strikes and eats the bird.

    Now, I’m not saying we’re building something that is designed to kill us. But, I am saying that we’re putting enormous effort into building something that can fool us into thinking it’s intelligent. We’re not trying to build something that can do something intelligent. We’re instead trying to build something that mimics intelligence.

    What we’re effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What’s crazy about that is that we’re not building this to fool a predator so that we’re not in danger. We’re not doing it to fool prey, so we can catch and eat them more easily. We’re doing it so we can fool ourselves.

    It’s like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn’t work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn’t intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.

    • jj4211@lemmy.world · 1 month ago

      To the extent it is people trying to fool people, it’s rich people looking to fool poorer people for the most part.

      To the extent it’s actually useful, it’s to replace certain systems.

      Think of the humble phone tree, designed so humans aren’t having to respond, triage, and route calls. An AI system can significantly shorten that role: instead of navigating a tedious, long maze of options, a couple of sentences back and forth and you either get the portion of automated information that suffices or get routed to a human to take care of it. The same goes for a lot of online interactions where you have to input way too much, or get back a wall of automated text from which you’d like something to distill the relevant three or four sentences according to your query.

      So there are useful interactions.
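
The routing idea above can be sketched as a toy intent classifier; keyword scoring stands in for the language model, and the intent names are invented for illustration:

```python
# Toy intent router standing in for an LLM-based phone-tree replacement.
# Keyword overlap is a crude stand-in for real language understanding.
INTENTS = {
    "billing": {"bill", "invoice", "charge", "payment"},
    "outage": {"down", "outage", "offline", "broken"},
    "human": {"agent", "person", "representative"},
}

def route(utterance):
    """Return the best-matching intent, falling back to a human agent."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "human"

print(route("my internet is down again"))  # routes to the outage queue
```

The value over a phone tree is exactly this: the caller states the problem in a sentence instead of walking a menu, and anything the classifier can’t place still falls through to a person.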

      However it’s also true that it’s dangerous, because the “make the user approve of the interaction” objective can bring out the worst in people when they feel like something is always agreeing with them. Social media has been bad enough, but chatbots that by design want to please the end user, and look almost legitimate, can really inflame the worst in our minds.

  • Buffalox@lemmy.world · 1 month ago

    That headline is a straw man; the article is really arguing about General AI, which would also have consciousness.
    The current state of AI is definitely intelligent, but it’s not GAI.
    Bullshit headline.

    • SupraMario@lemmy.world · 1 month ago

      Today’s AI is Clippy on steroids. It’s not intelligent or creative. You can’t feed it physics and astronomy books that omit the equation for c and tell it to derive the equation for c. It’s fancy autocorrect, and it’s a waste of compute and energy.

    • decarabas42@lemmy.world · 1 month ago

      I think you’re misunderstanding the point the author is making. He is arguing that even the current state is not intelligent; it is merely a fancy autocorrect. It doesn’t know or understand anything about the prompts it receives; as the author stated, it can only guess at the next statistically most likely piece of information based on the data that has been fed into it. That’s not intelligence.

      • Buffalox@lemmy.world · 1 month ago

        it doesn’t know or understand

        But that’s not what intelligence is; that’s what consciousness is.
        Intelligence is not understanding shit; it’s the ability to, for instance, solve a problem. So a frigging calculator has a tiny degree of intelligence, but not enough for us to call it AI.
        There is simply zero doubt an AI is intelligent; claiming otherwise just shows people don’t know the difference between intelligence and consciousness.

        Passing an exam is a form of intelligence.
        Can a good AI pass a basic exam?
        YES.
        Does passing an exam require consciousness?
        NO.
        Because an exam tests abilities of intelligence, not level of consciousness.

        it can only guess at the next statistically most likely piece of information based on the data that has been fed into it. That’s not intelligence.

        Except we do the exact same thing! Based on prior experience (learning) we choose what we find to be the most likely answer. And that is indeed intelligence.

        Current AI does not yet have the reasoning abilities we have, but it is not completely without them, and it’s a subject that is currently being worked on and improved. So current AI is actually a pretty high form of intelligence, and it can sometimes outcompete average humans in certain areas.

        • decarabas42@lemmy.world · 1 month ago

          Intelligence is not understanding shit, it’s the ability to for instance solve a problem, so a frigging calculator has a tiny degree of intelligence, but not enough for us to call it AI.

          I have to disagree that a calculator has intelligence. The calculator has the mathematical functions programmed into it, but it couldn’t use those on its own. The intelligence in your example is that of the operator of the calculator and the programmer who designed the calculator’s software.

          Can a good AI pass a basic exam?
          YES

          I agree with you that the ability to pass an exam isn’t a great test for this situation. In my opinion, the major factor that would point to current state AI not being intelligent is that it doesn’t know why a given answer is correct, beyond that it is statistically likely to be correct.

          Except we do the exact same thing! Based on prior experience (learning) we choose what we find to be the most likely answer.

          Again, I think this points to the idea that knowing why an answer is correct is important. A person can know something by rote, which is what current AI does, but that doesn’t mean that person knows why that is the correct answer. The ability to extrapolate from existing knowledge and apply that to other situations that may not seem directly applicable is an important aspect of intelligence.

          As an example, image generation AI knows that a lot of the artwork that it has been fed contains watermarks or artist signatures, so it would often include things that look like those in the generated piece. It knew that it was statistically likely for that object to be there in a piece of art, but not why it was there, so it could not make a decision not to include them. Maybe that issue has been removed from the code of image generation AI by now, it has been a long time since I’ve messed around with that kind of tool, but even if it has been fixed, it is not because the AI knew it was wrong and self-corrected, it is because a programmer had to fix a bug in the code that the AI model had no awareness of.

          • Buffalox@lemmy.world · 1 month ago

            I think this points to the idea that knowing why an answer is correct is important.

            That’s consciousness, and beyond ordinary AI. That is in the realm of General AI or Strong AI.

      • FourWaveforms@lemm.ee · 1 month ago

        Predicting sequences of things is foundational to intelligence. In fact, it is the whole point.

  • scarabic@lemmy.world · 1 month ago

    My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

    It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

    • fishos@lemmy.world · 1 month ago

      I’ve been thinking this for a while. When people say “AI isn’t really that smart, it’s just doing pattern recognition”, all I can think is: “don’t you realize that is one of the most commonly brought up traits of the human mind?” Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the “face pattern”. Humans are at least 90% regurgitating previous data. It’s literally why you’re supposed to read and interact with babies so much. It’s how you learn “red glowy thing is hot”. It’s why education and access to knowledge are so important. It’s every annoying person who has endless “did you know?” facts. Science is literally “look at previous data, iterate a little bit, look at new data”.

      None of what AI is doing is truly novel or different. But we’ve placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, the hundreds of common fallacies we fall prey to… our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as “intelligence”. We’re a bunch of instincts in a trenchcoat. To think AI isn’t or can’t reach our level is just hubris. A trait that is probably more unique to humans.

      • scarabic@lemmy.world · 1 month ago

        Yep we are on the same page. At our best, we can reach higher than regurgitating patterns. I’m talking about things like the scientific method and everything we’ve learned by it. But still, that’s a 5% minority, at best, of what’s going on between human ears.

    • MangoCats@feddit.it · 1 month ago

      If an IQ of 100 is average, I’d rate AI at 80 and down for most tasks (and of course it’s more complex than that, but as a starting point…)

      So, if you’re dealing with a filing clerk with a functional IQ of 75 in their role - AI might be a better experience for you.

      Some of the crap that has been published on the internet in the past 20 years comes to an IQ level below 70 IMO - not saying I want more AI because it’s better, just that - relatively speaking - AI is better than some of the pay-for-clickbait garbage that came before it.

    • Puddinghelmet@lemmy.world · 1 month ago

      Human brains are much more complex than a mirroring script xD AI and supercomputers have only a fraction of the number of neurons in your brain. But you’re right, for you it’s probably not much different than AI.

      • TangledHyphae@lemmy.world · 1 month ago

        The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as “artificial neurons” in the context of neural networks). While ChatGPT has more “neurons” in this sense, it’s important to note that these are not the same as biological neurons, and the comparison is not straightforward.

        86 billion neurons in the human brain isn’t that much compared to some of the larger 1.7 trillion parameter neural networks, though.
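
The counts traded in this subthread are easier to compare side by side. A parameter is a connection weight, so the brain’s synapse count (commonly cited as roughly 100 trillion) is the fairer analogue, not its neuron count; the figures below are the usual round numbers, not measurements:

```python
# Commonly cited round figures - orders of magnitude, not exact counts.
BRAIN_NEURONS = 86e9        # ~86 billion neurons
BRAIN_SYNAPSES = 100e12     # ~100 trillion synaptic connections
GPT3_PARAMS = 175e9         # GPT-3 parameter count
LARGE_MODEL_PARAMS = 1.7e12 # the "1.7 trillion" figure quoted above

# A parameter is a connection weight, so synapses are the fairer analogue.
print(f"synapses per GPT-3 parameter: {BRAIN_SYNAPSES / GPT3_PARAMS:.0f}")
print(f"synapses per large-model parameter: {BRAIN_SYNAPSES / LARGE_MODEL_PARAMS:.1f}")
```

On that comparison the brain still has tens to hundreds of connections per model parameter, before even counting the within-cell structure the sibling comment mentions.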

            • MangoCats@feddit.it · 1 month ago

              Nah, I went to public high school - I got to see “the average” citizen who is now voting. While it is distressing that my ex-classmates now seem to control the White House, Congress and Supreme Court, what they’re doing with it is not surprising at all - they’ve been talking this shit since the 1980s.

        • AppleTea@lemmy.zip · 1 month ago

          It’s when you start including structures within cells that the complexity moves beyond anything we’re currently capable of computing.

        • MangoCats@feddit.it · 1 month ago

          But, are these 1.7 trillion neuron networks available to drive YOUR car? Or are they time-shared among thousands or millions of users?

    • Saledovil@sh.itjust.works · 1 month ago

      AI models are trained on basically the entirety of the internet, and more. Humans learn to speak on much less info, so there’s likely a huge difference in how human brains and LLMs work.

    • AppleTea@lemmy.zip · 1 month ago

      Self Driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts “normal” road conditions, self driving becomes significantly more dangerous than a human driving.

        • jj4211@lemmy.world · 1 month ago

          I think self driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self driving is kind of dumb, but it’s at least consistently paying attention, and it literally has eyes in the back of its head.

          However, there’s so much data about how it fails in stupidly obvious ways that it shouldn’t, so you still need the human attention to cover the more anomalous scenarios that foul self driving.

      • MangoCats@feddit.it
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        Human drivers are only safe when they’re not distracted, emotionally disturbed, intoxicated, and physically challenged (vision, muscle control, etc.) 1% of the population has epilepsy, and a large number of them are in denial or simply don’t realize that they have periodic seizures - until they wake up after their crash.

        So, yeah, AI isn’t perfect either - and it’s not as good as an “ideal” human driver, but at what point will AI be better than a typical/average human driver? Not today, I’d say, but soon…

        • jj4211@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          The thing about self driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress then plateaued, as approaches have failed to close the gap, with exponentially more and more input thrown at it for less and less incremental subjective improvement.

          But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, which makes an opportunity for AI systems to augment a human driver and get the best of both worlds. A consistently vigilant computer monitors and tends the steering, acceleration, and braking to do the ‘right’ thing in neutral conditions, while the human watches for the more anomalous situations that tend to confound the AI, and makes the calls on navigating certain intersections that the AI FSD still can’t figure out. At least for me the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.

          I don’t have a Tesla, but have a competitor system and have found it useful, though not trustworthy. It’s enough to greatly reduce the drain of driving, but I have to be always looking around, and have to assert control if there’s a traffic jam coming up (it might stop in time, but it certainly doesn’t slow down soon enough) or if I have to do a lane change in some traffic (if traffic conditions are light, it can change lanes nicely, but without a whole lot of breathing room, it won’t do it, which is fine when I can afford to be stupidly cautious).

          • MangoCats@feddit.it
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 month ago

            The one “driving aid” that I find actually useful is the following distance maintenance cruise control. I set that to the maximum distance it can reliably handle and it removes that “dimension” of driving problem from needing my constant attention - giving me back that attention to focus on other things (also driving / safety related.) “Dumb” cruise control works similarly when there’s no traffic around at all, but having the following distance control makes it useful in traffic. Both kinds of cruise control have certain situations that you need to be aware of and ready to take control back at a moment’s notice - preferably anticipating the situation and disengaging cruise control before it has a problem - but those exceptions are pretty rare / easily handled in practice.

            Things like lane keeping seem to be more trouble than they’re worth, to me in the situations I drive in.

            Not “AI” but a driving tech that does help a lot is parking cameras. Having those additional perspectives from the camera(s) at different points on the vehicle is a big benefit during close-space maneuvers. Not too surprising that “AI” with access to those tools does better than normal drivers without.

            • jj4211@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 month ago

              At least in my car, the lane-following (not the lane-keeping) system is handy because the steering wheel naturally tends to go where it should and I’m less often “fighting” the tendency to center. The keeping system is, at least for me, largely a non-issue. If I use the turn signal, it lets me cross the lane line without objection. If circumstances demand an evasive maneuver that crosses a line, its resistance isn’t enough to cause an issue. At least mine has fared surprisingly well in areas where the lane markings are all kind of jacked up due to temporary changes for construction. If it’s off, my arms just have to assert more effort to end up in the same place I was going to be with the system. Generally no passenger notices when the system engages/disengages except for the chiming it does when it switches over to unaided operation.

              So at least my experience has been a positive one, but my car hits things just right with intervention versus human attention, including monitoring gaze to make sure I am looking where I should. However, there are people who test “how long can I keep my hands off the steering wheel”, which is a more dangerous mode of thinking.

              And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized ‘overhead’ view of your car.

    • Auli@lemmy.ca
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Get a self-driving car to drive in a snowstorm or a torrential downpour. People are really downplaying humans’ abilities.

  • El Barto@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    I agreed with most of what you said, except the part where you say that real AI is impossible because it’s bodiless or “does not experience hunger” and other stuff. That part does not compute.

    A general AI does not need to be conscious.

    • NιƙƙιDιɱҽʂ@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      That, and there is literally no way to prove something is or isn’t conscious. I can’t even prove to another human being that I’m a conscious entity; you just have to assume I am because, from your own experience, you are, so therefore I must be too, right?

      Not saying I consider AI in its current form to be conscious, more that the whole idea is just silly and unfalsifiable.

      • amelia@feddit.org
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        No idea why you’re getting downvoted. People here don’t seem to understand even the simplest concepts of consciousness.

    • MangoCats@feddit.it
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      AI is not actual intelligence. However, it can produce results better than a significant number of professionally employed people…

      I am reminded of when word processors came out and “administrative assistant” dwindled as a role in mid-level professional organizations; most people - even, increasingly, medical doctors these days - do their own typing. The whole “typing pool” concept has pretty well dried up.

      • tartarin@reddthat.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        However, there is a huge energy cost for that speed - statistically processing all that information to mimic intelligence. The human brain consumes far less energy. Also, AI will be fine with well-defined tasks where innovation isn’t a requirement. As it stands today, AI is incapable of innovating.

        • cheesorist@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          much less? I’m pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.

          • Potatar@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 month ago

            Customarily, when doing these kinds of calculations, we ignore the things that keep us alive, because they’re needed regardless of economic contribution - since, you know, people are people and not tools.

            • MangoCats@feddit.it
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 month ago

              people are people and not tools

              But this comparison is weighing people as tools vs alternative tools.

        • MangoCats@feddit.it
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 month ago

          The human brain is consuming much less energy

          Yes, but when you fully load the human brain’s energy costs with 20 years of schooling, 20 years of “retirement” and old-age care, vacation, sleep, personal time, housing, transportation, etc. etc. - it adds up.
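The raw wattage comparison that underlies this sub-thread can be sketched numerically (the 20 W brain figure is commonly cited; the GPU figure is my rough assumption for a modern datacenter card, not something from this thread):

```python
# Back-of-envelope power comparison; the 20 W brain figure is commonly
# cited, the GPU figure is a rough assumption for a datacenter card:
brain_watts = 20
gpu_watts = 700

print(f"One GPU draws about {gpu_watts / brain_watts:.0f} brains' worth of power")
# ...and a large model is typically served across many GPUs at once,
# though each GPU is also time-shared across many users.
```

Whether the fully loaded lifetime costs above close that gap depends entirely on which overheads you decide to count, which is the real disagreement here.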

    • amelia@feddit.org
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      You know, and I think it’s actually the opposite. Anyone pretending their brain is doing more than pattern recognition and AI can therefore not be “intelligence” is a fucking idiot.

      • outhouseperilous@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 month ago

        I think there’s a strong strain of essentialist human chauvinism.

        But brains are doing more kinds of things than LLMs are. Except in the case of LLM-bros, fascists, and other opt-outs.

  • Basic Glitch@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    It’s only as intelligent as the people that control and regulate it.

    Given all the documented instances of Facebook and other social media using subliminal emotional manipulation, I honestly wonder if the recent cases of AI chat induced psychosis are related to something similar.

    Like we know they’re meant to get you to continue using them, which is itself a bit of psychological manipulation. How far does it go? Could there also be things like using subliminal messaging/lighting? This stuff is all so new and poorly understood, but that usually doesn’t stop these sacks of shit from moving full speed with implementing this kind of thing.

    It could be that certain individuals have unknown vulnerabilities that make them more susceptible to psychosis due to whatever manipulations are used to make people keep using the product. Maybe they’re doing some things to users that are harmful, but didn’t seem problematic during testing?

    Or equally as likely, they never even bothered to test it out, just started subliminally fucking with people’s brains, and now people are going haywire because a bunch of unethical shit heads believe they are the chosen elite who know what must be done to ensure society is able to achieve greatness. It just so happens that “what must be done,” also makes them a ton of money and harms people using their products.

    It’s so fucking absurd to watch the same people jamming AI and automation down our throats while simultaneously forcing traditionalism, and a legal system inspired by Catholic integralist belief on society.

    If you criticize the lack of regulations in the wild west of technology policy, or even suggest just using a little bit of fucking caution, then you’re trying to hold back progress.

    However, all non-tech related policy should be based on ancient traditions and biblical text with arbitrary rules and restrictions that only make sense and benefit the people enforcing the law.

    What a stupid and convoluted way to express that you just don’t like evidence-based policy or using critical thinking skills, and instead prefer to navigate life by relying on the basic signals from your lizard brain. Feels good, so keep moving toward it; feels bad, so run away; feels scary, so attack!

    Such is the reality of the chosen elite, steering us towards greatness.

    What’s really “funny” (in a we’re all doomed sort of way) is that while writing this all out, I realized the “chosen elite” controlling tech and policy actually perfectly embody the current problem with AI and bias.

    Rather than relying on intelligence to analyze a situation in the present, and create the best and most appropriate response based on the information and evidence before them, they default to a set of preconceived rules written thousands of years ago with zero context for the current reality/environment and the problem at hand.

    • MangoCats@feddit.it
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      A gun isn’t dangerous, if you handle it correctly.

      Same for an automobile, or aircraft.

      If we build powerful AIs and put them “in charge” of important things, without proper handling they can - and already have - started crashing into crowds of people, significantly injuring them - even killing some.

  • confuser@lemmy.zip
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    The thing is, ai is compassion of intelligence but not intelligence itself. That’s the part that confuses people. Ai is the ability to put anything describable into a compressed zip.

    • elrik@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      I think you meant compression. This is exactly how I prefer to describe it, except I also mention lossy compression for those that would understand what that means.
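The lossy-compression analogy can be made concrete with a toy sketch (simple rounding, which is obviously not what an LLM literally does, just an illustration of “lossy”):

```python
# Toy lossy compression: keep only one decimal of precision.
# The gist survives; the exact original is unrecoverable - loosely
# analogous to an LLM's statistical summary of its training data.
data = [3.14159, 2.71828, 1.41421]
compressed = [round(x, 1) for x in data]

print(compressed)           # [3.1, 2.7, 1.4]
print(compressed == data)   # False: information was lost
```

You can regenerate something close to the original from the compressed form, but never the original itself, which is why verbatim recall from LLMs is unreliable.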

      • interdimensionalmeme@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        Hardly surprising - human brains are also extremely lossy. Way more lossy than AI. If we want to keep up our manifest exceptionalism, we’d better start defining narrower versions of intelligence that AI isn’t soon going to have. Embodied intelligence is NOT one of those.

  • Lovable Sidekick@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    Amen! When I say the same things this author is saying I get, “It’S NoT StAtIsTiCs! LeArN aBoUt AI bEfOrE yOu CoMmEnT, dUmBaSs!”