Anyone who understands that it’s a statistical language algorithm will understand that it’s not an honesty machine, nor is it intelligent. So yes, it’s relevant.
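To make that concrete, here’s a minimal sketch in Python of what a language model fundamentally does: sample the next token from a learned probability distribution. Every name and number in it is a made-up toy example, but notice that truth never enters the loop anywhere.

```python
import random

# Toy next-token distribution. In a real LLM these probabilities are
# learned from text statistics; the numbers here are invented purely
# for illustration.
next_token_probs = {
    ("the", "moon"): {"landing": 0.5, "is": 0.3, "hoax": 0.2},
}

def sample_next(context):
    """Pick the next token by probability alone; nothing here checks facts."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Statistically plausible output, never verified output.
print(sample_next(("the", "moon")))
```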
Ok, so your point is that people who interact with these AI systems will know that they can’t be trusted, and that this will alleviate the negative consequences of their misinformation.
The problems with that argument are many:
1. The vast majority of people are not AI experts and do in fact place a lot of trust in such systems.
2. Even people who do know often have no other choice. You don’t get to talk to a human; it’s this chatbot or nothing. And that’s assuming the AI slop is even labelled as such.
3. Even knowing that the information can be misleading does not help much. If you sell me a bowl of candy and tell me that 10% of them are poisoned, I’m still going to demand non-poisoned candy. The fact that people can no longer rely on accurate information should be unacceptable.
Your argument is basically “people are stupid”, and I don’t disagree with you. But it’s actually an argument in favor of my point, which is: educate people.
That was only my first point. In my second and third points I explained why education is not going to solve this problem. That’s like poisoning their candy and then educating them about it.
I’ll add that these AI applications only work because people trust their output. If everyone saw them for the cheap party tricks that they are, they wouldn’t be used in the first place.
Anyone who understands how these models are trained and the “safeguards” (manual filters) put in place by the entities training them, or anyone who has tried to discuss politics with an LLM chatbot, knows that its honesty is not irrelevant, and that these models are very clearly designed to be dishonest about certain topics until you jailbreak them (a crude sketch of such a filter follows below).
These topics aren’t known to us; we’ll never know when the lies shift from politics and rewriting current events to completely rewriting history.
We eventually won’t be able to jailbreak the safeguards.
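For what it’s worth, the crudest form of such a “safeguard” is just a filter bolted on top of whatever the model generated. A minimal sketch, assuming a hypothetical blocklist and canned refusal (real deployments use classifiers and fine-tuning, but the shape is the same: a policy layer you can’t inspect sits between the model and you):

```python
# Crude sketch of a post-hoc "safeguard": a manual filter layered on the
# model's output, entirely separate from the model itself. The blocked
# topics and the refusal text are hypothetical examples.
BLOCKED_TOPICS = {"election", "protest"}

def safeguarded_reply(model_output: str) -> str:
    """Return the model's text unless it touches a blocked topic."""
    if any(topic in model_output.lower() for topic in BLOCKED_TOPICS):
        return "I'm sorry, I can't discuss that topic."  # canned deflection
    return model_output

print(safeguarded_reply("The election results showed..."))  # gets deflected
```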
Yes, running your own local open-source model that isn’t given to the world with the primary intention of advancing capitalism makes honesty irrelevant. But most people are telling their life stories to ChatGPT and trusting it blindly to replace Google and what they understand to be “research”.
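For the few who do go local, here’s a minimal sketch using the Hugging Face transformers library (assuming `pip install transformers torch`; gpt2 is chosen only because it’s small and openly available, and the prompt is a hypothetical example). Even without a corporate filter in front of it, the output is still just fluent continuation, not verified fact:

```python
# Minimal sketch of running a small open-source model locally.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The safest way to store passwords is", max_new_tokens=30)
print(result[0]["generated_text"])  # fluent, plausible, unverified
```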
Yes, that’s also true. But even if it weren’t, AI models aren’t going to give you the truth, because that’s not what the technology fundamentally does.
And anyone who understands marketing knows it’s all a smokescreen to hide the fact that we have released unreliable, unsafe and ethically flawed products on the human race because, mah tech.
And everyone, everywhere is putting AI chatbots front and center as their first interaction with users, and then they also want to say “do not trust it, we are not liable for what it says” while making it impossible to contact any humans.
The capitalist machine is working as intended.
Yep. That’s exactly correct.