Can only speak for the UK, but as the lowest civil court here, small claims decisions are not binding on any other court (including other small claims courts). They are, however, considered “persuasive”, so a judge should be aware of them and take them into consideration.
I could see this simply resulting in every chatbot having a disclaimer that it might be spitting straight bullshit and you should not use it for legal advice.
At this point, I do consider this a positive outcome, too, because it’s not always made obvious whether you’re talking with something intelligent or just a text generator.
But yeah, I would still prefer it if companies simply had to have intelligent support. This race to the bottom isn’t helping humanity.
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Surprised Air Canada’s lawyers had the bravado to make claims like this. So glad they lost, I hope this becomes precedent for anything similar.
I don’t know if small claims create precedent in the same way that a normal lawsuit would.
That won’t hold up, though.
I don’t know about that. From the article: