Record numbers of people are turning to AI chatbots for therapy, reports Anthony Cuthbertson. But recent incidents have uncovered some deeply worrying blind spots of a technology out of control
Tools are relative. Pick any tool; it can be used wrong. What you’re doing is special pleading, dogmatism, intellectual dishonesty.
If you’re going to refuse entire categories of tools then we are down to comparing AI to AI, which is a pointless conversation and I want no part of it.
The point is not to compare but to analyze how AI affects us, the world around us, and society. By saying “it’s just a tool” or “knives can also be misused” you relativize the discussion, and that rhetoric just ends up defending OpenAI and the other big tech companies, even helping them banalize the issue.
From what I’ve witnessed, people lose agency, receive and believe fake info, everything becomes slop, people lose jobs and get replaced by more workers who are paid less, etc.
EDIT: And no, it’s not the same as a knife or a razor or a gun, and it never will be.
You could say the same about social media and the entire internet. Would you choose to regulate that?
Yes, the fediverse is IMO a good (not great) model of regulation by the network(s) themselves.
I recall in the mid-90s a group of people on a street corner protesting AOL (America Online) and saying the internet should be stopped.
They were right.
They may have had a point, but the technology wasn’t to blame for the shit it’s used for.
The vague way you talk about AI makes me think that you don’t know much about it. What do you use AI for? Is it ChatGPT?
I use it a small amount, mostly for translation and generating names for projects, all locally.
The point is that LLMs can steer you in a harmful direction, that there is a lot of hype around them, and that there are effects on labour, on human agency, on knowledge, etc. The power is in big tech, not in some communities that use AI for their needs; big tech creates the needs and the solutions to fulfill them via advertising, hype, etc. That is the point of the discussion, not whether I have used it or not, or comparing it with a knife or with social media (a better comparison, but still).
Do you think LLMs are that beneficial to society or not? What effects do you see from LLMs on society?
I think it would be easier to ask why you’re not using it. You’re aware of its issues, which means you won’t fall victim to them; you would only get the benefits. Have you tried the newer models of ChatGPT? They are worlds beyond GPT-3.5 or Llama 3.
I don’t use online models, only local ones. So, do you want to say how you see the effects of LLMs on society?
It’s not about banning or refusing AI tools, it’s about making them as safe as possible and regulating their usage.
Your argument is the equivalent of “guns don’t kill people”, or of blaming drivers for accidents caused by errors in Tesla’s so-called “full self-driving”, which switches itself off right before the crash, leaving the driver responsible as the one who should have paid more attention, even if there was no time left for them to react.
So what kind of regulations would be put in place to prevent people from using AI to feed their mania?
I’m open to the idea, but I think it’s such a broad concept at this point that implementation and regulation would be impossible.
If you want to go down the “guns don’t kill people” route, fine: social media kills more people and does more damage, and should be shut down long before AI. 🤷‍♂️
Probably the same kind of guardrails that they already have: teaching LLMs to recognise patterns of potentially harmful behaviour. There’s nothing impossible about that. Shutting LLMs down altogether is a straw man and an appeal to extremes, when the discussion is about regulation and guardrails.
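To make that concrete, here is a minimal sketch of the shape of such a layer. Everything in it is a hypothetical stand-in (the patterns, the crisis message, the `guarded_reply` helper); real guardrails use trained classifiers inside the provider’s stack, not a keyword filter:

```python
import re

# Hypothetical risk patterns, for illustration only; production systems
# use trained classifiers and human review, not a keyword list.
RISK_PATTERNS = [
    re.compile(r"\b(hurt|harm|kill)\s+(myself|me)\b", re.I),
    re.compile(r"\bnobody would miss me\b", re.I),
]

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "Please consider talking to someone you trust or a crisis line."
)

def guarded_reply(user_message: str, generate) -> str:
    """Screen the input before generation and the output after it."""
    if any(p.search(user_message) for p in RISK_PATTERNS):
        return CRISIS_MESSAGE          # escalate instead of generating
    reply = generate(user_message)     # call into the underlying LLM
    if any(p.search(reply) for p in RISK_PATTERNS):
        return CRISIS_MESSAGE          # refuse to pass on a harmful reply
    return reply

# Stand-in "model" to show the flow end to end:
if __name__ == "__main__":
    echo = lambda msg: f"Model says: {msg}"
    print(guarded_reply("Name ideas for my project?", echo))  # normal path
    print(guarded_reply("I want to hurt myself", echo))       # escalated
```

The point is only the shape: screen what goes in, screen what comes out, and escalate instead of generating when a known risk pattern appears.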
Discussing the damage LLMs do does not, of course, in any way negate the damage that social media does. These are two different conversations. In the case of social media there’s probably government regulation needed, as it’s clear by now that the companies won’t regulate themselves.
Okay so it has guardrails already. Make them better. Government regulations can’t be specific enough for the daily changing AI environment.
I’d say AI has a lot more self-regulation than social media.
But I run AI on bare metal at home. This isn’t ChatGPT. And it will, in theory, do anything I want it to. Would you tell me that I can’t roll my own mania machine? Get out of my house lol.
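For context, “bare metal at home” can be as simple as this sketch, assuming the llama-cpp-python bindings and a placeholder path to whatever GGUF model you actually run:

```python
# Fully local inference via llama-cpp-python; no cloud involved.
# The model path is a placeholder — point it at your own GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

out = llm(
    "Translate to English: 'Das Werkzeug ist nicht neutral.'",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```

No provider sits between you and the weights here, which is exactly why vendor-side guardrails can never be the whole story.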
Naturally the guardrails cannot cover absolutely every possible specific use case, but they can cover most of the known potentially harmful scenarios under normal, common circumstances. If the companies won’t do it themselves, then legislation can push them to, for example by making them liable if their LLM does something harmful. Regulating AI is not anti-AI.
Okay, now imagine the tool is advertised in a way that tells you to use it wrong.
“Gillette - Follow The Road, Don’t Cross It™”