A controversial European Union legislative proposal to scan the private messages of citizens in a bid to detect child sexual abuse material (CSAM) is a…
They say the images are merely matched to pre-determined images found on the web. You’re talking about a different scenario, where AI detects inappropriate content in an image.
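For what it’s worth, here is a minimal Python sketch of what that kind of matching looks like: the image’s hash is checked against a curated list of hashes of already-identified material. Real deployments use perceptual hashes (e.g. Microsoft’s PhotoDNA) that survive resizing and re-encoding; the SHA-256 stand-in below only matches byte-identical files, and the hash set is hypothetical.

```python
import hashlib

# Hypothetical curated database of hashes of previously identified
# images. Real systems use perceptual hashes (e.g. PhotoDNA) that
# tolerate resizing and re-encoding; SHA-256 is a runnable stand-in
# and only matches byte-identical files.
KNOWN_HASHES: set[str] = {
    # entries would come from a vetted clearinghouse database
}

def is_known_image(image_bytes: bytes) -> bool:
    """True only if this exact image already appears in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# A never-before-seen image can't match, whatever it depicts:
print(is_known_image(b"bytes of a brand-new photo"))  # False
```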
Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”
It will detect known images and potential new images…how do you think it will detect the potential new and unknown images?
Source? Does the law require that? That’s not my impression.
Literally the article linked in the OP…
Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”
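Note what that wording implies technically. Known material can be caught with a hash lookup like the one sketched above, but “new” material by definition has no hash on file, so it can only be flagged by a statistical classifier scoring the content itself. A hedged sketch of that two-stage split, with a hypothetical classifier standing in for whatever trained model a real system would use:

```python
import hashlib

def scan_before_transmission(image_bytes: bytes,
                             known_hashes: set[str],
                             classifier,
                             threshold: float = 0.9) -> str:
    """Illustrative two-stage check mirroring Article 10a's wording.
    Stage 1 is a deterministic lookup; stage 2 is necessarily a
    probabilistic model, which is where false positives enter."""
    if hashlib.sha256(image_bytes).hexdigest() in known_hashes:
        return "match: known material"
    if classifier(image_bytes) >= threshold:
        return "flag: possible new material"
    return "clean"

# Hypothetical stand-in; a real deployment would be a trained
# neural network with a tuned threshold, not a constant.
dummy_classifier = lambda data: 0.0
print(scan_before_transmission(b"family photo", set(), dummy_classifier))
# -> "clean"
```

So detecting “new” material isn’t matching at all; it is AI content classification, which is exactly the scenario the earlier comment called a different one.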