LLM system input is unsanitizable, according to NVidia:
The control-data plane confusion inherent in current LLMs means that prompt injection attacks are common, cannot be effectively mitigated, and enable malicious users to take control of the LLM and force it to produce arbitrary malicious outputs with a very high likelihood of success.
LLM system input is unsanitizable, according to NVidia:
https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/
Everything old is new again (GIGO)