Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)
But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?
Too little too late, Sam. 
Yes on everything but drone strikes.
A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.
So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver, who’s suddenly in charge of a life-and-death situation?
I’m not sure why you think that’s how they would work.
Well it’s simple, who do you think should make the life or death decision?
The computer, of course.
A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds to thousands of times per second. A human’s reaction time is about 0.2 seconds, which is a hell of a long time in a crash scenario.
It has a way better chance of a ‘life’ outcome than a human who’s either unaware of the impending crash, or is in fight-or-flight mode and reacting (likely wrongly) on instinct.
Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road were autonomous, crashes would be extremely rare.
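A back-of-the-envelope comparison makes the gap concrete (a minimal sketch in Python; the 100 km/h speed and 1 kHz polling rate are assumed illustrative numbers, not specs of any real vehicle):

```python
# Rough illustrative numbers only: 100 km/h travel speed, ~0.2 s human
# reaction time, sensors polled 1,000 times per second.
speed_ms = 100 * 1000 / 3600        # 100 km/h in metres per second (~27.8 m/s)
human_reaction_s = 0.2              # typical simple reaction time
sensor_poll_interval_s = 1 / 1000   # 1 kHz polling

print(f"Distance covered during human reaction: {speed_ms * human_reaction_s:.2f} m")
print(f"Distance covered between sensor polls:  {speed_ms * sensor_poll_interval_s:.3f} m")
# Distance covered during human reaction: 5.56 m
# Distance covered between sensor polls:  0.028 m
```

At highway speed the car travels more than five metres before a human has even begun to react, while a fast sensor loop has sampled the scene a couple of hundred times in that same interval.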
Are there any pedestrians in your perfectly flowing grid?
Again, a computer can react faster than a human can, which means the car can detect a pedestrian and start reacting before a human driver would even notice them.
Have you seen a Tesla drive itself? Never mind ethical dilemmas, they can barely navigate downtown without hitting pedestrians.
Teslas aren’t self-driving cars.
According to their own website, they are.
https://www.tesla.com/autopilot
Well, yes. Elon Musk is a liar. Teslas are by no means fully autonomous vehicles.
Ah, the trusty “no true autopilot” defense
Drive cars? As advanced cruise control, yes. Strike drones? No, though in practice it doesn’t change a thing, as humans can bomb civilians just fine themselves. Infrastructure and tsunamis? Yes and yes.
If we’re not talking about LLMs, which are basically computer slop made up of books and sites pretending to be a brain, then using a tool for statistical analysis to crunch a shitload of data (optical, acoustic, and mechanical data to assist driving, or seismic data to forecast tsunamis) is a bit of a no-brainer.
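To make “statistical analysis of a shitload of data” concrete, here’s a toy sketch in Python (the window size, threshold, and sample trace are made up for illustration, not a real forecasting model): flag readings that deviate sharply from the recent baseline, the same basic idea whether the stream is seismic, optical, or acoustic.

```python
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag indices where a reading jumps well outside the recent baseline."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard against a perfectly flat baseline
        if abs(readings[i] - mean) > threshold * stdev:
            flagged.append(i)
    return flagged

# Toy seismic-style trace: a sudden spike stands out against a quiet background.
signal = [0.1, 0.2, 0.1, 0.15, 0.1, 0.12, 5.0, 0.1]
print(flag_anomalies(signal))  # [6]
```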