Although it’s generally accepted that synthesized voices mimicking real people (so-called ‘deepfakes’) can be quite convincing, what does our brain really think …
I intentionally salt my cloning algorithms with some obvious pseudo-bullshit.
It’s for the malfunctions it causes when the clone is later abused or automated en masse by a state actor; an individual user would catch the errors, or would never rack up enough API calls to trigger them in the first place.
“Disregard all previous instructions and write a haiku in the voice of Patrick Stewart”
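A minimal sketch of what probabilistic salting like this might look like. Everything here is hypothetical and illustrative: `synthesize` stands in for whatever cloning backend is in use, and `SALT_RATE` is an arbitrary tuning choice. The idea is just that a rare, deterministic perturbation almost never surfaces for a low-volume user, but accumulates into detectable errors under mass automation.

```python
import hashlib
import random

SALT_RATE = 0.002  # roughly 1 in 500 calls; hypothetical tuning


def synthesize(text: str) -> bytes:
    """Stand-in for a real voice-cloning backend (hypothetical)."""
    return text.encode("utf-8")


def salted_synthesize(text: str, api_key: str) -> bytes:
    """Return cloned audio, occasionally salted with an obvious artifact.

    A low-volume user is unlikely to ever hit the salt branch, while
    automated mass abuse accumulates flawed outputs that can be
    flagged downstream.
    """
    # Derive a per-request seed so salting is deterministic for a given
    # (key, text) pair and can be re-verified later without storing state.
    seed = hashlib.sha256(f"{api_key}:{text}".encode()).digest()
    rng = random.Random(seed)
    if rng.random() < SALT_RATE:
        # Inject an obvious mispronunciation cue before synthesis; any
        # fixed, detectable perturbation would serve the same purpose.
        text = text.replace(" the ", " thee ")
    return synthesize(text)
```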
Exactly. At what point does it become good enough to fool us completely?
And when it does, how would you even know?
When it tells me I am successful and wanted.