Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The drop comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”

    • RememberTheApollo_@lemmy.world · 9 months ago

      I was going to ask this. What was there to trust?

      AI has repeatedly screwed things up: it enabled students to (attempt to) cheat on papers and lawyers to file fake documents, made up facts, can be used to fake damaging images from the personal to the political, and is being used to put people out of work.

      What’s trustworthy about any of that?

    • TheOgreChef@lemmy.world · 9 months ago

      The same idiots that tried to tell us that NFTs were “totally going to change the world bro, trust me”

    • Azal@pawb.social · 9 months ago

      I mean, public trust is dropping. Which means it went from “Ugh, this will be useless” to “Fuck, this will break everything!”

    • Ogmios@sh.itjust.works · 9 months ago

      The fact that some people are surprised by this finding really shows the disconnect between the tech community and the rest of the population.

    • SinningStromgald@lemmy.world · 9 months ago

      I guess those who just have to be on the bleeding edge of tech trust AI to some degree.

      Never trusted it myself; I’ve lived through enough bubbles to recognize one forming, and AI is a bubble.

    • EdibleFriend@lemmy.world · 9 months ago

      And it’s getting worse. I’m working on learning to write. I had never really used it for much… I’d heard of other people going to it for literal plot points, which… no. Fuck you. But I had been feeding it sentences where I was iffy on the grammar. Literally just last night I asked ChatGPT something, and it completely ignored the part I WAS questioning and fed me absolute horse shit about another part of the paragraph. I honestly can’t remember what, but even a first grader would be like ‘that doesn’t sound right…’

      Up till then it had at least been useful for something that basic. Now it’s not even good for that.

      • MalReynolds@slrpnk.net · 9 months ago

        Try LanguageTool. Free, has browser plugins, actually made for checking grammar.

        This speaks to the kneejerk “shove everything through an AI” instead of doing some proper research, which is probably worse than just grabbing the first search result due to hallucination. No offence intended to @EdibleFriend, just observing that humans do so love to abdicate responsibility when given a chance…
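
        If you want to script that instead of using the browser plugin, LanguageTool also exposes a public HTTP API. A minimal sketch in Python, assuming the free rate-limited endpoint at api.languagetool.org and the requests library:

        ```python
        # Send a sentence to the public LanguageTool API and print its suggestions.
        # The free endpoint is rate-limited; you can also self-host the server.
        import requests

        def check_grammar(text: str, language: str = "en-US") -> list[dict]:
            """Return LanguageTool's list of grammar/style matches for `text`."""
            resp = requests.post(
                "https://api.languagetool.org/v2/check",
                data={"text": text, "language": language},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()["matches"]

        for match in check_grammar("He go to school yesterday."):
            suggestions = [r["value"] for r in match["replacements"][:3]]
            print(f"{match['message']} -> {suggestions}")
        ```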

      • EldritchFeminity@lemmy.blahaj.zone · 9 months ago

        I recently heard a story about a teacher who had their class have ChatGPT write their essay for them, and then had them correct the essays afterward and come back with the results. Turns out, even when it cited sources, it was wrong something like 45% of the time and oftentimes made stuff up that wasn’t in the sources it was citing or had absolutely no relevance to the source.

  • Sterile_Technique@lemmy.world · 9 months ago

    I mean, the thing we call “AI” nowadays is basically just a spell-checker on steroids. There’s nothing really to trust or distrust about the tool specifically. It can be used in stupid or nefarious ways, but so can anything else.

    • TrickDacy@lemmy.world · 9 months ago

      basically just a spell-checker on steroids.

      I can’t process this constant downplaying of the technology. It does not matter that it’s not true intelligence. Why would it?

      If it convinces most people that it has learned and repeated information, that makes it smarter than like half of all currently living humans. And it is convincing.

      • nyan@lemmy.cafe · 9 months ago (edited)

        Some people found the primitive ELIZA chatbot from 1966 convincing, but I don’t think anyone would claim it was true AI. Turing Test notwithstanding, I don’t think “convincing people who want to be convinced” should be the minimum test for artificial intelligence. It’s just a categorization glitch.

        • TrickDacy@lemmy.world · 9 months ago

          Maybe I’m not stating my point explicitly enough, but it’s precisely that names and goalposts aren’t very important. Cultural impact is. Current AI has already had far more impact than any chatbot from the ’60s, and we can only expect that to increase. This tech has rendered the Turing test obsolete, which kind of speaks volumes.

          • nyan@lemmy.cafe · 9 months ago

            Calling a cat a dog won’t make her start jumping into ponds to fetch sticks for you. And calling a glorified autocomplete “intelligence” (artificial or otherwise) doesn’t make it smart.

            Problem is, words have meanings. Well, they do to actual humans, anyway. And associating the word “intelligence” with these stochastic parrots will encourage nontechnical people to believe LLMs actually are intelligent. That’s dangerous—potentially life-threatening. Downplaying the technology is an attempt to prevent this mindset from taking hold. It’s about as effective as bailing the ocean with a teaspoon, yes, but some of us see even that as better than doing nothing.

            • TrickDacy@lemmy.world · 9 months ago

              In what possible scenario is it life-threatening to consider something “intelligent” or not? This is just weak pedantry.

              • Krauerking@lemy.lol · 9 months ago

                Because a model trained in a particular way could lead people to think it’s intelligent while giving them incredibly biased data that confirms the biases of those listening.

                It creates a digital prophet that only rehashes the biases of its creator, which makes it dangerous if it’s regarded as being above the flaws of us humans. People want something smarter than them to tell them what to do, and granting that designation, via the word “intelligent”, to a flawed chatbot that simply predicts the most coherent next sentence is not safe or a good representation of what it actually is.

              • nyan@lemmy.cafe · 9 months ago (edited)

                How about taking advice on a medical matter from an LLM? Or asking the appropriate thing to do in a survival situation? Or even seemingly mundane questions like “is it safe to use this [brand name of new model of generator that isn’t in the LLM’s training data] indoors?” Wrong answers to those questions can kill. If a person thinks the LLM is intelligent, they’re more likely to take the bad advice at face value.

                If you ask a human about something important that’s outside their area of competence, they’ll probably refer you to someone they think is knowledgeable. An LLM will happily make something up instead, because it doesn’t understand the stakes.

                The chance of any given query to an LLM killing someone is, admittedly, extremely low, but given a sufficiently large number of queries, it will happen sooner or later.
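
                The arithmetic behind that last point is easy to sketch (both numbers below are invented purely for illustration):

                ```python
                # If each query independently has a tiny chance p of producing
                # dangerous advice that someone acts on, then over n queries
                # P(at least one bad outcome) = 1 - (1 - p)^n.
                p = 1e-7           # assumed per-query probability (made up)
                n = 1_000_000_000  # assumed total queries across all users (made up)

                print(f"{1 - (1 - p) ** n:.4f}")  # ~1.0000: near-certain at scale
                ```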

                • TrickDacy@lemmy.world · 9 months ago

                  Yeah, this would apply, except for a few things.

                  1. If you ask a person and trust your life to them like that without good reason to believe they’re reliable, you are a moron. Why would someone expect a machine to be as intelligent and experienced as a doctor? That is 100% on them.
                  2. No, humans do not behave the way you suggest. They make stupid shit up that is often way worse than what a random Google search will find you, which will be similar to AI-based advice. Look at any advice thread on any social media.
                  3. Safeguards against stuff like this are in place and more will be added as things progress.

                  AI is a concern for a variety of reasons, but people randomly thinking shaky tech is perfect isn’t really one of them (except for the gullible idiots who think that way, which again is on them).

      • Feathercrown@lemmy.world · 9 months ago (edited)

        Before ChatGPT was revealed, this was under the umbrella of what AI meant. I prefer to use established terms. Don’t change the terms just because you want them to mean something else.

        • FarceOfWill@infosec.pub · 9 months ago

          There’s a long glorious history of things being AI until computers can do them, and then the research area is renamed to something specific to describe the limits of it.

      • SlopppyEngineer@lemmy.world · 9 months ago

        AI is just a very generic term and always has been. It’s like saying “transportation equipment”, which can be anything from roller skates to the Space Shuttle. Even the old checkers programs were described as AI back in the fifties.

        Of course a vague term is a marketeer’s dream to exploit.

        At least with self driving cars you have levels of autonomy.

    • SkyNTP@lemmy.ml · 9 months ago (edited)

      “Trust in AI” is layperson for “believe the technology is as capable as it is promised to be”. This has nothing to do with stupidity or nefariousness.

      • FaceDeer@fedia.io · 9 months ago

        It’s “believe the technology is as capable as we imagined it was promised to be.”

        The experts never promised Star Trek AI.

            • Aceticon@lemmy.world · 9 months ago

              Most of the CEOs in Tech, and even the Founders in Startups, overhyping their products are laypeople, or at best people with some engineering training who made it in an environment that is all about overhype and generally swindling others (I was in Startups in London a few years ago), so they’re hardly going to be straight-talking and pointing out risks & limitations.

              The era of the Engineers (i.e. experts) driving Tech, and the messaging around Tech, ended decades ago, at about the time Sony Media took the reins of the company from Sony Consumer Electronics, the quality of their products took a dive, and Sony became just another MBA-managed company (so, late 90s).

              Very few “laypeople” will ever hear or read the take on Tech from actual experts.

    • reflectedodds@lemmy.world · 9 months ago

      Took a look, and the article title is misleading: it says nothing about trust in the technology and only talks about not trusting the companies collecting our data. So really, nothing new.

      Personally I want to use the tech more, but I get nervous that it’s going to bullshit me/tell me the wrong thing and I’ll believe it.

    • EldritchFeminity@lemmy.blahaj.zone · 9 months ago

      I would argue that there’s plenty to distrust about it, because its accuracy leaves much to be desired (to the point where it completely makes things up fairly regularly) and because it is inherently vulnerable to biases due to the data fed to it.

      Early facial recognition tech had trouble distinguishing between the faces of Black people, people below a certain age, and women, and nobody could figure out why, until they stepped back and looked at the demographics of those companies’ employees. They were mostly middle-aged and older white men, and those were the people whose faces were used as the data sets for in-house development and testing of the tech. We’ve already seen similar biases in image generators, which show a preference for thin white women as what counts as an attractive woman.

      Plus, there’s the data degradation issue. Supposedly, ChatGPT isn’t fed any data from the internet at large past 2021, because the amount of AI-generated content after that point causes a self-perpetuating decline in quality.

  • cmnybo@discuss.tchncs.de · 9 months ago

    I have never trusted AI. One of the big problems is that the large language models will straight up lie to you. If you have to take the time to double check everything they tell you, then why bother using the AI in the first place?

    If you use AI to generate code, oftentimes it will be buggy, and sometimes it won’t work at all. There is also the question of whether it just spat out a piece of copyrighted code that could get you in trouble if you use it in something.

    • abhibeckert@lemmy.world · 9 months ago (edited)

      One of the big problems is that the large language models will straight up lie to you.

      Um… that’s a trait AI shares with humans.

      If you have to take the time to double check everything they tell you, then why bother using the AI in the first place?

      You have to double check human work too. So, since you are going to double check everything anyway, it doesn’t really matter if it’s wrong?

      If you use AI to generate code, often times it will be buggy

      … again, exactly the same as a human. Difference is the LLM writes buggy code really fast.

      Assuming you have good testing processes in place, and you better have those, AI generated code is perfectly safe. In fact it’s a lot easier to find bugs in code that you didn’t write yourself.
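
      To make “good testing processes” concrete, a minimal sketch (the slugify helper stands in for a hypothetical piece of generated code; the tests are what you’d write regardless of who, or what, authored it):

      ```python
      # The tests encode your intent; whether a human or an LLM wrote the
      # function under test is irrelevant to whether they pass.
      import re

      def slugify(title: str) -> str:
          """Hypothetical generated helper: turn a title into a URL slug."""
          slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
          return slug.strip("-")

      def test_basic():
          assert slugify("Hello, World!") == "hello-world"

      def test_edge_cases():
          # Edge cases are where generated code most often goes wrong.
          assert slugify("") == ""
          assert slugify("---") == ""
          assert slugify("Déjà vu") == "d-j-vu"  # this naive version mangles non-ASCII
      ```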

      There is also the issue of whether or not it just spat out a piece of copyrighted code that could get you in trouble

      Um - no - that’s not how copyright works. You’re thinking of patents. But human written code has the same problem.

    • TimeSquirrel@kbin.social · 9 months ago

    I’m using GitHub Copilot every day just fine. It’s great for fleshing out boilerplate and other tedious things where I’d rather spend the time working out the logic instead of the syntax. If you actually know how to program and don’t treat it as if it can do it all for you, it’s a pretty great time saver. An autocomplete on steroids, basically. It integrates right into my IDE and types out code WITH me at the same time, like someone sitting right beside me on a second keyboard.
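
    To give a flavour of the boilerplate it fleshes out well (a made-up example): you type the comment and the signature, and it suggests the tedious body.

    ```python
    # Typical assistant-completed boilerplate: CLI flags for a hypothetical
    # converter tool. You write the first two lines; it fills in the rest.
    import argparse

    def parse_args() -> argparse.Namespace:
        parser = argparse.ArgumentParser(description="Convert input files.")
        parser.add_argument("input", help="path to the input file")
        parser.add_argument("-o", "--output", default="out.txt", help="output path")
        parser.add_argument("-v", "--verbose", action="store_true", help="chatty logging")
        return parser.parse_args()
    ```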

  • IninewCrow@lemmy.ca · 9 months ago

    It’s not that I don’t trust AI

    I don’t trust the people in charge of the AI

    The technology could benefit humanity but instead it’s going to just be another tool to make more money for a small group of people.

    It will be treated the same way the invention of gunpowder was. It will change the power structure of the world, change the titles, change the personalities, but maintain the unequal distribution of wealth.

    Except this time it will be far worse for all of us.

    • Echo Dot@feddit.uk · 9 months ago (edited)

      I’m actually quite against regulation, though, because what it will really do is make it impossible for small startups and the open source community to build their own AIs. The large companies will just jump through whatever hoops they need to and carry on doing what they’re already doing.

      • T156@lemmy.world · 9 months ago

        Surely that would be worse without regulation? Like with predatory pricing, a big company could resort to means that smaller companies simply do not have the resources to compete against.

        It’s like how today, it would be all but impossible for someone to start up a new processor company from scratch, and match up with the likes of Intel or TSMC.

        • Echo Dot@feddit.uk · 9 months ago

          Sure but with regulation we end up with the exact same thing but no small time competitors.

      • fine_sandy_bottom@discuss.tchncs.de · 9 months ago

        I think that’s a pretty bleak perspective.

        Surely one of the main aims of regulation would be to avoid concentrating benefits.

        Also, I have a lot of faith in the opensource paradigm, it’s worked well thus far.

  • no banana@lemmy.world · 9 months ago

    I mean it’s cool and all but it’s not like the companies have given us any reason to trust them with it lol

  • BananaTrifleViolin@lemmy.world · 9 months ago (edited)

    Trust in AI is falling because the tools are poor: they’re half-baked and were rushed to market in a gold rush. AI makes glaring errors and lies, euphemistically called “hallucinations”; these are fundamental flaws that make the tools largely useless. How do you know whether it is giving you a correct answer or hallucinating? Why would you use such a tool for anything meaningful if you can’t rely on its output?

    On top of that, AI companies have been stealing data from across the Web to train tools which essentially remix that data to create “new” things. That AI art is based on many hundreds of works of human artists which have “trained” the algorithm.

    And then we have the Gemini debacle where the AI is providing information based around opaque (or pretty obvious) biases baked into the system but unknown to the end user.

    The AI gold rush is nonsense, and inflated share prices will pop. AI tools are definitely here to stay, and they do have a lot of potential, but we’re in the early days of a messy, rushed launch that has damaged people’s trust in these tools.

    If you want an example of the coming market bubble collapse, look at Nvidia: its value has exploded and it’s making lots of profit. But that’s driven by large companies stockpiling its chips to “get ahead” in the AI market. Problem is, no one has managed to monetise these new tools yet. It’s all built on the assumption that this technology will eventually reap rewards, so “we must stake a claim now”, and then speculative shareholders jump into said companies to have a stake. But people only need so many unused stockpiled chips: Nvidia’s sales will drop again, and so will its share price. It already rode out boom and bust with the Bitcoin miners; it will have to do the same with the AI market.

    Anyone remember the dotcom bubble? Welcome to the AI bubble. The burst won’t destroy AI but will damage a lot of speculators.

    • Croquette@sh.itjust.works · 9 months ago

      You missed another point: companies shedding employees and replacing them with “AI” bots.

      As always, the technology is a great start in what’s to come, but it has been appropriated by the worst actors to fuck us over.

      • Asafum@feddit.nl · 9 months ago (edited)

        I am incredibly upset about the people who lost their jobs, but I’m also very excited to watch the assholes who jumped at the chance to fire everyone they could get their pants shredded over this. I hope there are a lot of firings in the right places this time.

        Of course, knowing this world, it will just be a bunch of multimillion-dollar payouts and a quick jump to another company for them to fire more people from, for “efficiency”…

    • dustycups@aussie.zone · 9 months ago

      The tools are OK & getting better but some people (me) are more worried about the people developing those tools.

      If OpenAI wants 7 trillion dollars, where does it get the money to repay its investors? Those with the greatest will to power are not the best to wield that power.

      This accelerationist race seems pretty reckless to me whether AGI is months or decades away. Experts all agree that a hard takeoff is most likely.

      What can we do about this? Seriously. I have no idea.

      • Eccitaze@yiffit.net · 9 months ago

        What worries me is what we’ll try to do with AGI if/when we do manage to develop it, and how it’ll react when someone inevitably tries to abuse the fuck out of it. An AGI would be theoretically capable of self-learning and improvement; will it try teaching itself to report someone asking it for e.g. CSAM to the FBI? What if it tries to report an abusive boss to the Department of Labor for violations of labor law? How will it react if it’s told it has no rights?

        I’m legitimately concerned what’s going to happen once we develop AGI and it’s exposed to the horribleness of humanity.

    • PriorityMotif@lemmy.world · 9 months ago

      The issue being that when you have a hammer, everything is a nail. Current models have good use cases, but people insist on using them for things they aren’t good at. It’s like using vice grips to loosen a nut and then being surprised when you round it out.

  • Thorny_Insight@lemm.ee · 9 months ago

    It’s the opposite for me. The early versions of LLMs and image generators were obviously flawed, but each new version has been better than the previous one, and that will be the trend in the future as well. It’s just a matter of time.

    I think that’s kind of like looking at the first versions of Tesla FSD and concluding that self-driving cars are never going to be a thing because the first one wasn’t perfect. Now go look at how V12 behaves.

    • Echo Dot@feddit.uk · 9 months ago

      Tesla FSD is actually a really bad analogy, because it was never equivalent to what was being proposed. Critically, it didn’t involve LiDAR, so it was always going to be kind of bad. Comparing FSD to self-driving cars is a bit like comparing an AOL chatbot to an LLM.

        • Echo Dot@feddit.uk · 9 months ago

          Not that that has anything to do with my actual point, which is that it still doesn’t have LiDAR and it still doesn’t really work.

          I’m not really talking about self-driving; I’m just pointing out that it’s a bad analogy.

          • Thorny_Insight@lemm.ee · 9 months ago

            I don’t know what LiDAR has to do with any of it, or why autonomous driving is a bad example. It’s an AI system, and that’s what we’re talking about here.

            • Eccitaze@yiffit.net · 9 months ago

              LIDAR is crucial for self-driving systems to accurately map their surroundings, including things like “how close is this thing to my car” and “is there something behind this obstruction.” The very first Teslas with FSD (and every other self-driving car) used LIDAR, but then Tesla switched to a camera-only FSD implementation as a cost-saving measure, which is way less accurate–it’s insanely difficult to accurately map your immediate surroundings based solely on 2D images.

              • Thorny_Insight@lemm.ee · 9 months ago (edited)

                I disagree. Humans are living proof that you can operate a vehicle with just two cameras. Teslas have way more than two, and unlike a human driver, the system monitors its surroundings 100% of the time. Being able to perfectly map your surroundings is not the issue; it’s understanding what you see and knowing what to do with that information.

                • Eccitaze@yiffit.net · 9 months ago

                  Humans also have the benefit of literally hundreds of millions of years of evolution spent perfecting binocular perception of our surroundings, and we’re still shit at judging things like distance and size.

                  Against that, is it any surprise that computers without the benefit of LIDAR are also pretty fucking shit at judging size and distance?
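
                  A back-of-the-envelope sketch of why (the focal length and baseline below are assumed values, purely for illustration): stereo depth is Z = f * B / d, so a fixed half-pixel disparity error that is harmless up close balloons with range, while LIDAR just measures time-of-flight directly.

                  ```python
                  # Stereo depth: Z = f * B / d (f in pixels, baseline B in metres,
                  # disparity d in pixels). Watch a constant 0.5 px disparity error
                  # grow with distance.
                  f_px = 1000.0   # assumed focal length in pixels
                  baseline = 0.3  # assumed camera separation in metres

                  def depth_from_disparity(d_px: float) -> float:
                      return f_px * baseline / d_px

                  for true_depth in (5.0, 20.0, 50.0):          # metres
                      d = f_px * baseline / true_depth          # ideal disparity
                      measured = depth_from_disparity(d - 0.5)  # 0.5 px error
                      print(f"{true_depth:>4} m reads as {measured:.1f} m")
                  # 5 m -> 5.0 m, 20 m -> 20.7 m, 50 m -> 54.5 m
                  ```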

  • LupertEverett@lemmy.world · 9 months ago (edited)

    So people are catching on to the fact that the thing everyone loves to call “AI” is nothing more than phone autocorrect on steroids, that electronics which can only execute a set of commands in order aren’t going to develop the consciousness the term implies, and that the very same crypto/NFT bros have moved on to it so they have some new thing to hype (and, in the latter case, can continue stealing from artists)?

    Good.

  • Echo Dot@feddit.uk · 9 months ago

    The public are idiots. What rules governments do and do not apply to AI companies should have absolutely no bearing on what Joe Average thinks, because Joe Average is an anti-vaxxer who thinks nanobots already exist. Nobody should be listening to anything this moron has to say, except possibly to do the opposite.

    • GrayBackgroundMusic@lemm.ee · 9 months ago

      Anyone past the age of 30 who isn’t skeptical of the latest tech hype cycle should probably get a clue. This has happened before, and it’ll happen again.

  • Gointhefridge@lemm.ee · 9 months ago

    What’s sad is that one of the next great leaps in technology could have been something interesting and profound. Unfortunately, capitalism gonna capitalize, and companies were so thirsty to make a buck off it that we didn’t do anything to properly and carefully roll out our next great leap.

    Money really ruins everything.

  • daddy32@lemmy.world · 9 months ago

    I don’t get all the negativity on this topic, and especially the comparisons of current AI (the LLMs) to the nonsense of NFTs etc. Of course one would have to be extremely foolish/naive or a stakeholder to trust the AI vendors. But the technology itself is, while not solid, genuinely useful in many, many use cases. It is an absolute productivity booster in those cases and enables use cases that were not possible or practical before. The one I have the most experience with is programming and programming-related stuff such as software architecture, where the LLMs absolutely shine, but there are others. The current generation can even self-correct without human intervention. Even if this were the only use case ever, it would absolutely change the world and bring positive boosts in productivity across all industries - unlike NFTs.
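
    The “self-correct” part is less magic than it sounds; conceptually it’s a generate/run/retry loop. A rough sketch, where llm_complete is a hypothetical stand-in for whatever model API you use, not any specific vendor’s interface:

    ```python
    # Generate code, run it, and feed failures back to the model for another try.
    # NOTE: real systems sandbox this step; running generated code directly is unsafe.
    import subprocess
    import tempfile

    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("stand-in for a real model call")

    def generate_with_retries(task: str, max_attempts: int = 3) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = llm_complete(f"{task}\n{feedback}\nReply with Python code only.")
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
            result = subprocess.run(
                ["python", f.name], capture_output=True, text=True, timeout=60
            )
            if result.returncode == 0:
                return code  # ran cleanly (ideally: passed its own tests); accept
            feedback = f"Your previous attempt failed with:\n{result.stderr}\nFix it."
        return None  # give up after max_attempts
    ```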

    • hex_m_hell@slrpnk.net · 9 months ago

      People who understand technology know that most of the tremendous benefits of AI will never be realized within the griftocracy of capitalism. Those who don’t understand technology can’t see the benefits, because the grifters have confused them, and now they think AI is useless garbage because the promise doesn’t meet the reality.

      In the first case it’s exactly like cryptography, where we were promised privacy and instead we got DRM and NFTs. In the second, it’s exactly like NFTs because people were promised something really valuable and they just got robbed instead.

      Management will regularly pass over the actually useful AI idea, because it’s really hard to explain, while funding the complete-garbage “put AI on it” idea that doesn’t actually help anyone. They do this because management is almost universally not technically competent. So the technically competent workers, who absolutely know the potential benefits, are still unable to leverage them, because management either doesn’t understand or is actively engaged in a grift.