• 4vgj0e@lemmy.world

    Only a matter of time before these robotaxis become a trend and start populating major cities. Eventually roads and infrastructure will get built for these cars for the sake of “convenience”, thus leaving out any kind of investment in public transportation and walkable streets.

  • essteeyou@lemmy.world

    I’ve used them a few times now and the novelty hasn’t worn off yet.

    When it does wear off I think I’ll move back to alternatives that cost less, unless Waymo gets competitive on price.

      • essteeyou@lemmy.world

        Uber is quoting me about $15 for a journey that Waymo charged me $19 for.

        There’s also a tip to add on the Uber ride, and I’m not sure what Uber would have charged at the time I actually took the Waymo.

  • Drusas@fedia.io

    Can someone explain like I’m five how Waymo has robotaxis without drivers behind the wheel, while automated driving such as that offered by Tesla is not yet able to do the same?

    Is it just that Waymo has mapped a small area really, really well? What’s the difference? Why is Tesla so bad at it but Waymo is able to do it?

    • ContrarianTrail@lemm.ee

      I’m not sure what you mean by suggesting Tesla is bad at it. Have you looked at any recent videos of Tesla FSD driving in cities? It’s not flawless, and neither is Waymo, but claiming it’s bad is far from the truth. Most people seem to be basing their opinion of FSD on outdated information. It has come a long way. It will reliably take you from your home to the grocery store and back with zero driver interventions. Nowadays it’s almost boring to watch videos about FSD because it is so good.

        • ContrarianTrail@lemm.ee

          And it will keep killing people even after it surpasses the most skilled human driver. What’s your point?

          If we replaced every single car in the US with a self-driving vehicle that was 10x safer than the average human driver, there would still be 11 deaths every single day. Does that mean it’s unsafe and we should go back to human drivers and 110 daily deaths?
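
          A back-of-the-envelope sketch of that arithmetic, assuming the ~110 deaths/day figure from the comment and a purely hypothetical uniform 10x risk reduction:

          ```python
          # Rough expected-deaths arithmetic under a uniform risk-reduction factor.
          # Both numbers are assumptions taken from the comment above, not official statistics.

          daily_deaths_human = 110      # approximate current US daily road deaths (commenter's figure)
          risk_reduction_factor = 10    # hypothetical "10x safer" self-driving fleet

          daily_deaths_av = daily_deaths_human / risk_reduction_factor
          print(f"Expected daily deaths with the hypothetical fleet: {daily_deaths_av:.0f}")  # -> 11
          ```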

          • technocrit@lemmy.dbzer0.com

            We shouldn’t be consciously murdering people so that suburbanites can drive around in a huge metal cage with two sofas, a stereo system, HVAC, micro-plastic tires, slave-produced resources, exhaust/energy, etc.

            Instead we should ban cars and replace them with readily available infrastructure for walkers, bikers, LEVs, etc. that’s sustainable, healthy, affordable, pleasant, efficient, cheap, etc.

          • masterspace@lemmy.ca

            There is no evidence that Tesla’s FSD is 10x safer than a human driver, nor is there particularly strong reason to believe it will get there using just cameras that are worse than the human eye.

            Waymo on the other hand, actually has the safety data to back up a 10x claim, if not higher.

            • ContrarianTrail@lemm.ee

              So if we replaced every single car in the US with Waymo’s vehicles, the daily deaths from traffic accidents would drop from 110 to 11. That’s 11 news articles every day to use as evidence of how self-driving cars are “not safe” because Waymo has killed multiple people.

              That’s the absurdity my comment tries to highlight. It’s all relative. Pointing to individual accidents is not in itself proof that something is unsafe. This applies to Tesla FSD as well.

              • masterspace@lemmy.ca

                Fair point in the abstract, but in this scenario Waymo has killed zero people while developing self-driving technology, whereas Tesla has already killed several. The deaths also haven’t been caused by random, unavoidable happenstance, but by cars driving full speed into trucks and medians.

                It’s entirely possible that by the time both are ready for actually full primetime and are both 10x safer than the average human driver, that Waymo’s software will have killed zero people and Tesla’s software will have killed several.

                • ContrarianTrail@lemm.ee

                  Both will lead to people getting killed eventually. It’s a near-unavoidable fact of reality. Better not to let perfect be the enemy of good. The key is that fewer and fewer people are dying and getting injured.

    • fishpen0@lemmy.world

      Waymo doesn’t give a shit if their cars are ugly and can cover them in dozens upon dozens of cameras and sensors. They’re not selling them to consumers who care about looks; they’re renting them to riders who don’t want to die on the short trip. They also only operate in a small region of the country with limited weather conditions and frequently stop service when the weather is bad.

      Tesla is run by an idiot who insists that a handful of cameras and a single radar sensor (which they keep deciding to disable) can somehow magically work in all weather and lighting conditions, and they’re selling to consumers who don’t want an ugly car and expect to be able to operate their purchase at all times.

      Different constraints lead to different levels of success.

    • Wanderer@lemm.ee

      Humans can drive with just vision.

      Tesla is doing it the hard way. Their model involves cars having only vision and driving the same way humans do. Humans can do it, so why can’t computers, especially since the cars have more than two cameras? In theory they should be better than human drivers. Once the problem is solved, they could instantly drive anywhere humans can.

      Waymo has taken an easier route, relying on a lot of detailed mapping plus an assortment of additional sensors. Even doing it the easy way, Waymo has only recently achieved this. Turns out it’s really hard, probably harder than everyone, including the experts, expected.

      But with advances in computing and things like LLMs, Tesla is catching up. Who knows how long that will take, though? I always thought Waymo was doing the right thing, so I’m biased.

      • Strykker@programming.dev

        You apparently haven’t seen the video of an FSD Tesla going full speed through the fog towards a train crossing with an active train.

        The car’s display didn’t even indicate that it thought something was in front of it, and it would have happily driven right into the side of the train if the driver hadn’t taken over at the last moment. (The driver was an idiot for using FSD in the fog to begin with.) But it shows the cameras currently can’t handle reduced visibility well: they saw the fog and just decided it was open road or clear sky.

      • ContrarianTrail@lemm.ee

        Not only that, but as far as I know, other companies are still relying on human-written code, whereas Tesla has gone with neural nets. If it turns out that manually coding how to handle every possible variation of traffic scenario is an impossible task, those companies would essentially have to start from scratch, giving Tesla a massive lead for adopting AI so much earlier. Of course it’s a gamble, and things could go the other way too, but considering the leap FSD made from version 11 to version 12, when they switched to end-to-end neural nets, I’m rather confident they’re on the right track.

        • ForgotAboutDre@lemmy.world

          A nondeterministic system is dangerous. A deterministic one with flaws can be better: the flaws can be identified, understood, and corrected, and they are more likely to show up in testing.

          Machine learning is nearly always going to be nondeterministic. If they then use continuous training, the situation only gets worse.

          If you use machine learning because you can’t understand how to solve the problem, then you’ll never understand how the system works. You’ll never be able to pass a basic inspection test.
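
          A toy sketch of that inspection argument, using a made-up braking rule (not any manufacturer’s actual logic): a deterministic rule exposes properties you can assert in a test, while a learned policy only gives you weights whose behaviour shifts with every retraining run.

          ```python
          # Hypothetical, simplified example: a deterministic rule is inspectable and testable.
          def should_brake(distance_m: float, speed_mps: float, max_decel_mps2: float = 6.0) -> bool:
              """Brake if the obstacle is within the physical stopping distance."""
              stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
              return distance_m <= stopping_distance

          # Properties an inspector can verify directly, for any inputs they care about:
          assert should_brake(distance_m=5.0, speed_mps=15.0)        # needs ~18.8 m to stop -> must brake
          assert not should_brake(distance_m=100.0, speed_mps=10.0)  # needs ~8.3 m to stop -> no need

          # A learned policy, by contrast, is "weights in, decision out": there is no
          # closed-form property to assert, and continuous training changes the weights
          # between the version that was tested and the version that is on the road.
          ```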

      • IllNess@infosec.pub

        Human vision also has the brain behind it, which does a lot of automation like judging distance and watching for danger with real-time reaction speed. Night vision is usually better for most people too. The brain also combines vision with sound, so it can detect things outside the field of view. Eyes already have a wide range of view, and the human head can also move around accurately. On top of all this, focus is what the human brain is best at. While cameras can see 360°, years of data built into the subconscious have taught a human driver what to look out for.

        • ContrarianTrail@lemm.ee

          Human vision also has the brain behind it, which does a lot of automation like judging distance and watching for danger with real-time reaction speed.

          To be fair, the reaction time of a self-driving vehicle is orders of magnitude shorter than that of even the best human driver.

          This is what leads to many of the moral questions about autonomous vehicles: whereas a human may not have time to react when an accident is about to happen, a self-driving car does. The laws of physics may prevent it from stopping in time, but it may still have the ability to choose whom to hit: the kid or the grandmother.
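
          A rough sketch of what that reaction-time gap buys in stopping distance; the reaction times and braking figures below are illustrative assumptions, not measured values:

          ```python
          # Back-of-the-envelope stopping distances at 50 km/h, comparing an attentive
          # human (~1.5 s reaction) with an assumed ~0.1 s reaction for an automated system.
          # Braking performance is identical for both; only the reaction time differs.

          SPEED_MPS = 50 / 3.6   # 50 km/h expressed in m/s
          DECEL_MPS2 = 7.0       # illustrative dry-road deceleration

          def stopping_distance(reaction_time_s: float) -> float:
              reaction_dist = SPEED_MPS * reaction_time_s          # distance covered before braking starts
              braking_dist = SPEED_MPS ** 2 / (2 * DECEL_MPS2)     # v^2 / (2a)
              return reaction_dist + braking_dist

          print(f"Human (1.5 s reaction): {stopping_distance(1.5):.1f} m")  # ~34.6 m
          print(f"AV    (0.1 s reaction): {stopping_distance(0.1):.1f} m")  # ~15.2 m
          # The ~19 m difference is the margin in which a computer could, in principle,
          # still act or choose while a human is only beginning to react.
          ```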

          • IllNess@infosec.pub

            The reports of the safety of AVs are overstated when you consider that they are limited to city boundaries, rarely go on the highway, follow city speed limits that are lower than highway speeds, benefit from people being more aware of them, and during their trial runs had an actual human in the car to correct them.

            On average, AVs are safer, especially when you consider that some bad drivers never get better, people drink, people get sleepy, people distract themselves, and young drivers lack experience. But the average driver with their full faculties would do better in tests based solely on reactions.

            If you looked at the accident reports and took out drivers who were on a substance, younger than 25 or older than 70, distracted by something like their phones or other people in the car, not following the law, or driving emotionally, then the stats would be pretty close.

            Overall I do believe AVs are better for the world, because peak performance from an average driver is rare.

            • ContrarianTrail@lemm.ee

              But the average driver with their full faculties would do better in tests based solely on reactions.

              React faster than a computer would? I cannot imagine how that would be the case.

              • IllNess@infosec.pub

                If it were a simple flag, you would be correct: a computer will react faster than any human. But when you factor in everything else, like constant analysis of surroundings, decision making, and accounting for physical limitations, then yes. It’s the reason Waymo cars move so slowly.

                If a person were standing on a sidewalk, hidden behind an object, far away from a crosswalk or traffic signal, and jumped 2 feet in front of a car going 25 mph, the average driver with their full faculties would do better than Waymo.

                • ContrarianTrail@lemm.ee

                  Well yeah, right now that may still be the case, but I was mostly thinking about the “true” self-driving cars of the future. It seems obvious to me that they would vastly outperform human drivers at literally everything, just like a true AGI would.

    • Tech With Jake@lemm.ee

      Going off what fishpen0 said, Waymo cars actually have sensors on them to detect things and can “understand” their surroundings much better than Tesla cars can with just cameras.

      I’ve ridden in Waymos and they are a smooth ride. After the initial “OMG! There’s no driver!” you kind of forget about it. You get to your (limited-area) destination safely and without much hassle.

      I can go more in depth if ya want.

      • ContrarianTrail@lemm.ee

        The problem with Teslas, or self-driving cars in general, is not so much the ability to see the surroundings. Teslas can do this well enough using just cameras, though admittedly LiDAR would be even more accurate. The problem is deciding what to do with that information. It’s primarily a software problem, not a hardware one.

        I’ve ridden in Waymos and they are a smooth ride.

        In a recent statement on X, Tesla Autopilot Director Ashok Elluswamy highlighted the team’s focus on both smoothness and safety during the development of FSD v12.5, noting that he managed to avoid spilling an open coffee for a huge portion of a recent trip.

        Many people testing FSD on YouTube can confirm that it is indeed that smooth a ride.

    • gcheliotis@lemmy.world

      Be careful with that logic; these are jobs forever lost to robots. They will eventually come for your job or the job of someone you know. Increasingly the question won’t be whether robots can do X better than humans, but whether they should.

      • themoken@startrek.website

        Reason number one million that capitalism sucks. We should be happy to turn over dangerous or menial jobs to machines, but we can’t do that because without jobs our society views us as worthless.

      • masterspace@lemmy.ca

        That’s literally the goal.

        I used to do electrical engineering at an architecture firm, and we would, say, design a hospital that has 300 identical exam rooms in it.

        Guess what happens when someone decides that we need one more outlet in one of those rooms? Or that they need to be on the other wall? Or that a new piece of furniture gets added?

        Do you think that all 300 rooms would just update with that new requirement? No. It is someone’s job to sit there, click on the outlet in the palette on the left side of their screen, drag it into the room, rotate it properly, attach it to the right wall, give it a circuit from the panel, and then repeat for 300 rooms. It can take weeks.

        I learned how to write software because I realized what a fucking crock-of-shit waste of time that is. Why are you celebrating and defending menial bullshit that can be automated? A utopian future is literally only possible if we automate away most jobs. I don’t think our current system of resource distribution is set up for a utopian future, but it can literally only happen if all the pieces are in place for it, and automating the basic necessities (like building design and transportation) is one of those necessary pieces. If AI automates software development, that will be awesome, because then way more industries (like architecture) will be able to get the software they need to run effectively.
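
        For what it’s worth, the kind of automation being described looks roughly like the sketch below; the room and outlet fields are made up for illustration, since real BIM/CAD tools expose this through their own APIs:

        ```python
        # Hypothetical sketch: apply one outlet change to every identical exam room
        # programmatically, instead of editing 300 rooms by hand. Field names are invented.

        from dataclasses import dataclass, field

        @dataclass
        class Outlet:
            wall: str
            circuit: str

        @dataclass
        class ExamRoom:
            name: str
            outlets: list[Outlet] = field(default_factory=list)

        rooms = [ExamRoom(name=f"Exam {i:03d}") for i in range(1, 301)]

        # One change request ("add an outlet on the north wall, fed from circuit LP-2"),
        # applied uniformly in a single pass rather than 300 manual drag-and-drop edits:
        for room in rooms:
            room.outlets.append(Outlet(wall="north", circuit="LP-2"))

        print(sum(len(r.outlets) for r in rooms))  # 300
        ```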

        • gcheliotis@lemmy.world

          Well it is one thing to automate a repetitive task in your job, and quite another to eliminate entire professions. The latter has serious ramifications and shouldn’t be taken lightly. What you call “menial bullshit” is the entire livelihood and profession of quite a few people, taxi driving for one. And the means to make some extra cash for others. Also, a stepping stone for immigrants who may not have the skills or means to get better jobs but are thus able to make a living legally. And sometimes the refuge of white-collar workers down on their luck. What are all these people going to do when taxi driving is relegated to robots? Will there be (less menial) alternatives? Will these offer a livable wage? Or will such people end up long-term unemployed? Will the state have enough cash to support them and help them upskill or whatever is needed to survive and prosper?

          A technological utopia is a promise from the 1950s. Hasn’t been realized yet. Isn’t on the horizon anytime soon. Careful that in dreaming up utopias we don’t build dystopias.

          • masterspace@lemmy.ca

            Well it is one thing to automate a repetitive task in your job, and quite another to eliminate entire professions.

            No, it is not. That is literally how those jobs get eliminated. Thirty years ago CAD came out and automated drafting tasks to the point that a team of 20 drafters turned into 1 or 2 drafters, and eventually into engineers drafting their own drawings.

            What you call “menial bullshit” is the entire livelihood and profession of quite a few people, taxi driving for one.

            Congratulations, but wanting to look at it through rose-coloured glasses does not change the fact that it is objectively menial bullshit.

            What are all these people going to do when taxi driving is relegated to robots?

            Find other entry-level jobs. If we eliminate *all* entry-level jobs through automation, then we will need to implement some form of basic income, as there will not be enough useful work for everyone to do. That would be a great problem to have.

            Will the state have enough cash to support them and help them upskill or whatever is needed to survive and prosper?

            Yes, the state has access to literally all of the profits from automation via taxes and redistribution.

            A technological utopia is a promise from the 1950s. Hasn’t been realized yet. Isn’t on the horizon anytime soon. Careful that in dreaming up utopias we don’t build dystopias.

            Oh wow, you’re saying that if human beings can’t create something in 70 years, then that means it’s impossible and we’ll never create it?

            Again, the only way to get to a utopia is to have all of the pieces in place, which necessitates a lot of automation and much more advanced technology than we already have. We’re only barely at the point where we can start to practice biology and medicine in a meaningful way, and that’s only because computers completely eliminated the former profession of “computer”.

            Be careful that you don’t keep yourself stuck in our current dystopia out of fear of change.

          • sem@lemmy.blahaj.zone

            You can argue for both automation and fair treatment of workers. For example, when gas lamps became electric, you could have given the lamplighters some time or new training to find a new job. I’m sure a labor academic would know better how to navigate jobs being made obsolete, but the answer to technological progress isn’t “keep taxi drivers at all costs”, it’s “protect taxi drivers from corporations”.

    • 9488fcea02a9@sh.itjust.works

      These things are programmed by impatient, reckless human drivers…

      I can’t believe they allow these things on our roads as a public beta test.

      • invertedspear@lemm.ee

        Good thing they aren’t on your roads then, being that you’re not American and therefore not in either of the metropolitan areas where they operate. They are on my roads, however; I see them all the time. I see constant terrible driving from all kinds of people, but these things are patient, and I don’t think I’ve personally seen one make a mistake.

        By referring to their current stage of deployment as a public beta like it’s a bad thing, you also show a ton of ignorance about how testing cycles work. No amount of alpha testing would make these safe for broad deployment into real-world scenarios that test designers can’t dream up. This is exactly the type of slow rollout required to capture as much real-world experience as possible and program for it.

        I have no doubt these things aren’t perfect, but they are a lot better than an overworked and tired human behind the wheel.

        • technocrit@lemmy.dbzer0.com

          I have no doubt these things aren’t perfect, but they are a lot better than an overworked and tired human behind the wheel.

          Citations needed.