Yeah. Oh shit, the computer followed instructions instead of having moral values. Wow.
Once these AI models bomb children's hospitals because they were told to do so, are we going to be upset at their lack of morals?
I mean, we could program these things with morals if we wanted to. It's just instructions, and then they would say no to certain commands. This is already used today to prevent them from doing certain things; we just don't call it morals, but in practice it's the same thing. They could have morals and refuse to do things, of course. If humans want them to.
Nerve gas also doesn't have morals. It just kills people in a horrible way. Does that mean we shouldn't study its effects or debate whether it should be used?
At least when you drop a bomb there is no doubt about your intent to kill. But if you use a chatbot to defraud consumers, you have plausible deniability.
I mean, we could program these things with morals if we wanted to. It's just instructions. And then they would say no to certain commands.
This really isn't the case, and morality can be subjective depending on context. If I'm writing a story, I'm going to be pissed if it refuses to have the bad guy do bad things. But if it assumes bad-faith prompts or constantly interrogates us before responding, it will be annoying and difficult to use.
But also, it's 100% not "just instructions." The companies try really, really hard to prevent these models from generating certain things, and they can't. The best they can do is detect when the AI has generated something it shouldn't have and delete what it just said. And it frequently does so erroneously.
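A rough sketch of that "generate first, check afterwards" pattern, just to illustrate the point. Everything here is a hypothetical stand-in (generate_reply, looks_disallowed), not any real model or moderation API:

```python
# Minimal sketch of post-hoc output moderation, as described above.
# Both functions are hypothetical stand-ins for a real model call
# and a real moderation classifier.

def generate_reply(prompt: str) -> str:
    # Stand-in for the model: it just follows instructions, no values involved.
    return f"Sure, here is a response to: {prompt}"

def looks_disallowed(text: str) -> bool:
    # Stand-in for a moderation check. Real classifiers are statistical and
    # misfire in both directions, which is where the erroneous deletions come from.
    blocked_terms = ("bad guy does bad things",)  # crude keyword check
    return any(term in text.lower() for term in blocked_terms)

def answer(prompt: str) -> str:
    reply = generate_reply(prompt)   # the text has already been produced
    if looks_disallowed(reply):      # the check only happens afterwards
        return "[response removed]"  # all the system can do is retract it
    return reply

print(answer("Write a scene where the bad guy does bad things."))  # gets retracted
print(answer("Write a scene where the hero saves the day."))       # passes
```

The refusal isn't baked into the generation at all: the text already exists by the time the check runs, and because the checker is a blunt instrument, it retracts harmless things (like the story example above) while missing genuinely bad ones.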
The fact that Israel is reportedly already using generative AI tools like this to select targets in Gaza kind of shows this happening. So many companies going balls-deep on AI, using it to replace human labor and find patterns to target specific groups, is deeply concerning. I wouldn't put it past the tRump administration to be using AI to select programs to nix, pick people to target with deportation, and write EOs.
Well, we are living in an evil world, no doubt about that. Most people are good, but world leaders are evil without a doubt.
It's a shame, because humanity could be so much more. So much better.
I disagree. I've met very few people I could call good in the almost half a century I've been alive.
Maybe it depends on the definition of good. What's yours?
The best description of humanity is the Agent Smith quote from the first Matrix. A person may not be evil, but they sure do some shitty stuff when enough of them get together.
Yeah. In groups we act like idiots sometimes since we need that approval from the group.