Is the future just having a human slave in a third world country strap into VR and carry your groceries for you?
That’s basically what happens right now. Remember Amazon’s smart grocery store? It was just people in India watching cameras. Computer vision wasn’t capable of it.
That’s not true at all. I personally know a person who worked on that technology.
Human beings got involved only when necessary. Do you really think Amazon wants to pay humans to be cashiers?
No, but if they spend a bunch of time and money designing it, a bunch of time and money retrofitting stores, and then a bunch of time and money marketing it, and the technology doesn’t actually work when it’s ‘showtime,’ I can easily see a company with deep pockets like Amazon faking it all by hiring dirt-cheap labor to make it seem like it works rather than the alternative.
But the technology does actually work.
You don’t come up with an idea, announce it to the world, and then start figuring out how to implement it.
Exactly. The people watching videos were doing QC, not actually operating the entire thing. Closer scrutiny with the first few stores makes a ton of sense (i.e. watching every interaction) because there will be a bunch of bugs. But as they scale out, I would expect a much smaller portion of videos to be actually watched live.
Maybe in an ideal world but that’s not the world we live in.
I worked at Amazon for 8 years. That’s not how it works.
Makes me wonder how much of Tesla’s “Full Self Driving” is just some dude playing GTA VR with you in the passenger seat.
That would probably be better than Waymo.
If it was actually that it would work better…
Have you seen humans drive? Now imagine them driving with significant visual and steering input latency, distorted wide angle cameras, and the lack of steering and acceleration feedback. Unless they are used to sim racing, I bet most people would drive worse than Tesla’s FSD if done remotely.
Some footage of Tesla’s Full Self-Driving disagrees.
The mountain of footage of idiots in cars on the Internet disagrees even more. OTOH, what’s the worst footage of Tesla’s FSD you can show me? I’m curious how much worse it is than what I’ve seen.
Well, I think the self-driving taxis across the US apparently need human interaction every 6 minutes on average… So are they self-driving? I don’t know.
We can’t use our phones and drive, but someone can have a screen and drive 6 cars at the same time…
And I only need human interaction every few days. Take that AI… :)
AI (Anonymous Indians)
I’m pretty sure this story was blown out of proportion and exaggerated. These people were training and validating the automated systems, not watching the cameras 24/7.
That’s how AI is trained, manual intervention. It wasn’t working as well as they hoped, but it wasn’t humans watching cameras in real time.
https://www.theverge.com/2024/4/17/24133029/amazon-just-walk-out-cashierless-ai-india
It sounds like the best way to bootstrap a machine learning system. You generate the data the system will be seeing in production along with the proper labels. Then in a later stage you can start doing reinforcement learning.
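That bootstrapping loop can be sketched in a few lines. This is a hypothetical illustration, not Amazon’s actual pipeline: humans label production-like events, a trivial threshold model is fit to those labels, and at inference time only ambiguous cases near the decision boundary get escalated back to a human reviewer (the “only when necessary” regime mentioned above). The threshold model, the confidence margin, and all names are made up for the example.

```python
# Hypothetical human-in-the-loop bootstrapping sketch. Phase 1: humans label
# production-like data. Phase 2: fit a model to those labels. Phase 3: at
# inference, automate confident cases and escalate ambiguous ones to a human.
import random

random.seed(0)

def human_label(event):
    """Stand-in for a human annotator: in this toy, the ground truth is
    simply whether the sensor reading exceeds 0.5."""
    return event > 0.5

# Phase 1: collect human-labelled production-like events.
events = [random.random() for _ in range(1000)]
labels = [human_label(e) for e in events]

# Phase 2: fit a trivial "model" — the threshold that best matches the labels.
best_t = min((sum((e > t) != l for e, l in zip(events, labels)), t)
             for t in [i / 100 for i in range(100)])[1]

# Phase 3: auto-decide only when confident; route the ambiguous band
# around the learned threshold back to a human reviewer.
MARGIN = 0.05  # illustrative confidence band

def predict(event):
    if abs(event - best_t) < MARGIN:
        return human_label(event), "human"   # escalate the unclear case
    return event > best_t, "auto"

decisions = [predict(random.random()) for _ in range(1000)]
auto_share = sum(src == "auto" for _, src in decisions) / len(decisions)
# Most decisions are automated; only the narrow ambiguous band needs a human.
```

As the model improves (here, as the margin shrinks), the human share drops, which matches the claim that scaled-out stores would need far fewer live reviews than the first few.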
The problem is the lying about it.