Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.
Why it matters: The finding comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”
It’s the opposite for me. The early versions of LLMs and image generators were obviously flawed, but each new version has been better than the previous one, and that trend will continue. It’s just a matter of time.
I think that’s kind of like looking at the first versions of Tesla FSD and concluding that self-driving cars are never going to be a thing because the first one wasn’t perfect. Now go look at how V12 behaves.
Tesla FSD is actually a really bad analogy, because it was never equivalent to what was being proposed. Critically, it didn’t involve LiDAR, so it was always going to be kind of bad. Comparing FSD to self-driving cars is a bit like comparing an AOL chatbot to an LLM.
Have you actually watched any videos of the new, entirely AI-based version 12 in action?
Not that that really has anything to do with my actual point, which is that it still doesn’t have LiDAR and it still doesn’t really work.
I’m not really talking about self-driving; I’m just pointing out it’s a bad analogy.
I don’t know what LiDAR has to do with any of it, or why autonomous driving is a bad example. It’s an AI system, and that’s what we’re talking about here.
LiDAR is crucial for self-driving systems to accurately map their surroundings, including things like “how close is this thing to my car” and “is there something behind this obstruction.” Early Teslas shipped with radar (and most other self-driving programs rely on LiDAR), but Tesla switched to a camera-only FSD implementation as a cost-saving measure, which is way less accurate: it’s insanely difficult to accurately map your immediate surroundings based solely on 2D images.
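To put rough numbers on the “2D images” point: even with a stereo pair, depth is triangulated from disparity as Z = f·B/d, so a fixed sub-pixel matching error produces a depth error that grows roughly with the square of the distance, while a direct-ranging sensor like LiDAR keeps a roughly constant error. This is a back-of-the-envelope sketch with made-up parameters (not Tesla’s or any real sensor’s specs), just to show the scale of the effect:

```python
# Rough sketch: depth error of a stereo-camera rig vs. a direct-ranging
# sensor like LiDAR. All parameters below are assumed, illustrative values.

FOCAL_LENGTH_PX = 1000.0   # assumed camera focal length, in pixels
BASELINE_M = 0.3           # assumed spacing between the two cameras, in metres
DISPARITY_ERR_PX = 0.5     # assumed sub-pixel stereo matching error
LIDAR_ERR_M = 0.03         # assumed (roughly constant) ranging error, in metres

def stereo_depth_error(depth_m: float) -> float:
    """Approximate stereo depth error at a given true depth.

    From Z = f*B/d, first-order error propagation gives
    dZ ≈ Z^2 / (f*B) * dd, i.e. error grows with the square of distance.
    """
    return depth_m ** 2 / (FOCAL_LENGTH_PX * BASELINE_M) * DISPARITY_ERR_PX

for z in (5, 20, 50, 100):
    print(f"{z:>4} m away: stereo ±{stereo_depth_error(z):.2f} m, lidar ±{LIDAR_ERR_M:.2f} m")
```

With these assumed numbers the stereo uncertainty at 100 m is over 15 m versus a few centimetres for the LiDAR. A wider baseline or better matching shrinks it, but the quadratic growth is inherent to triangulating depth from images.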
I disagree. Humans are living proof that you can operate a vehicle with just two cameras. Teslas have way more than two, and unlike a human driver, the system is monitoring its surroundings 100% of the time. Being able to perfectly map your surroundings is not the issue; it’s understanding what you see and knowing what to do with that information.
Humans also have the benefit of literally hundreds of millions of years of evolution spent perfecting binocular perception of our surroundings, and we’re still shit at judging things like distance and size.
Against that, is it any surprise that when computers don’t have the benefit of LiDAR, they are also pretty fucking shit at judging size and distance?
Reality just doesn’t seem to agree with you. Did you see the video I linked above? I feel like most people have no real understanding of how damn good FSD V12 is, despite being 100% AI- and camera-based.