shish_mish@lemmy.world to Technology@lemmy.world · English · 2 years ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
www.tomshardware.com
23 comments
oDDmON@lemmy.world · English · 2 years ago

> …researchers from NTU were working on Masterkey, an automated method of using the power of one LLM to jailbreak another.

Or: welcome to where AI becomes an arms race.
This is how Skynet starts.