A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making.
All the more reason that devs and admins need to take responsibility and NOT roll out “AI” solutions without backstopping them with human verification, or at minimum ensuring that the specific solutions they employ are ready for production.
It’s all cool and groovy that we have a new software stack that can remove a ton of labor from humans, but if it’s too error-prone, is it really useful? I get that the bean counters and suits are ready to boot the data-entry and other low-level employees to boost their bottom line, but this will turn into a race to the bottom if they blow their collective loads too early.
Though let’s be real, we already know that too many companies are going to do this anyway and then try to absolve themselves of liability when it all goes to shit.
Soon there will be modules added to LLMs that they can use to logically fact-check their own output (see the rough sketch below the links).
https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
This is so awesome, watch Yannic explain it:
https://youtu.be/ZNK4nfgNQpM?si=CN1BW8yJD-tcIIY9
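For flavor, here is a toy Python sketch of that “model proposes, symbolic engine verifies” loop. Everything in it is a made-up stand-in: the “model” is a canned list of claims and the checker is plain arithmetic, nothing like AlphaGeometry’s actual deduction engine.

```python
# Toy illustration of the "model proposes, symbolic engine verifies" pattern.
# All names here are hypothetical stand-ins, not any real library's API.

def untrusted_model(question: str) -> list[str]:
    # Stand-in for an LLM: returns candidate answers, some of them wrong.
    return ["2 + 2 = 5", "2 + 2 = 4"]

def symbolic_check(claim: str) -> bool:
    # Stand-in for a deterministic checker: parse "a + b = c" and verify it.
    lhs, rhs = claim.split("=")
    a, b = (int(part) for part in lhs.split("+"))
    return a + b == int(rhs)

def answer(question: str) -> str | None:
    # Only claims the checker accepts ever reach the user.
    for claim in untrusted_model(question):
        if symbolic_check(claim):
            return claim
    return None  # refuse rather than emit an unverified claim

print(answer("what is 2 plus 2?"))  # -> 2 + 2 = 4
```

The point is the wiring, not the arithmetic: the LLM only generates candidates, and nothing it says gets surfaced unless the deterministic side signs off on it.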
You might be presenting it backwards. We need LLMs to be right-sized for translation between pure logical primitives and human language. Let a theorem prover or logical inference system (probably written in Prolog :-) ) provide the smarts. An LLM can help make the front end usable by regular people.
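Something like this minimal sketch (in Python rather than Prolog so it runs on its own): the inference step is a single hand-rolled Prolog-style rule, and the two translate_* functions are hypothetical placeholders for where the LLM would sit, hard-coded here for illustration.

```python
# Sketch of "logic engine as the smarts, LLM as the translation layer".
# FACTS, the rule, and the translate_* placeholders are all invented for this example.

FACTS = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def infer_grandparents(facts):
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    parents = [(x, y) for (pred, x, y) in facts if pred == "parent"]
    return {("grandparent", x, z)
            for (x, y1) in parents
            for (y2, z) in parents
            if y1 == y2}

def translate_question_to_query(text: str):
    # Placeholder for an LLM call mapping English to a logical query.
    return ("grandparent", "alice", "carol")

def translate_result_to_english(query, holds: bool) -> str:
    # Placeholder for an LLM call mapping the logical result back to prose.
    _, x, z = query
    return (f"Yes, {x} is a grandparent of {z}." if holds
            else f"I can't derive that {x} is a grandparent of {z}.")

query = translate_question_to_query("Is Alice Carol's grandparent?")
print(translate_result_to_english(query, query in infer_grandparents(FACTS)))
# -> Yes, alice is a grandparent of carol.
```

That way the only “facts” that ever come out are ones the inference engine can actually derive; the LLM never gets to make anything up, it just handles the language on either end.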