AI-generated CP should be illegal even if its creation did not technically harm anyone. The reason is that it presumably looks too close to real CP, so close that it: 1) normalizes consumption of CP, 2) grows a market for CP, and 3) lets producers of real CP get off the hook by claiming it is AI.
While there are similar reasons to be against clearly unreal CP (e.g. hentai), that type at least does not have problem #3: there doesn't need to be an investigation into whether a picture is real or not.
Fun fact: it's already illegal. If it's indistinguishable from the real thing, it's a crime.
I was under the impression that even clearly drawn material is already illegal, though it's a grey area since they can say "lol it's a 1000 year old demon that just looks like a child." Is that not the case?
Clearly drawn is hard to prosecute (and one might argue shouldn’t be prosecuted, since obscenity laws are just… weird). However, the stuff that is photorealistic can be treated, legally, like the real thing.
What the fuck is AI being trained on to produce the stuff?
if you have a soup of all liquids and a sieve that only lets coffee and ice cream through, it produces coffee ice cream (metaphor, don't think too hard about it)
that’s how gen ai works. each step sieves out raw data to get closer to the prompt.
Pictures of clothed children and naked adults.
Nobody trained them on what things made out of spaghetti look like, but they can generate them because smushing multiple things together is precisely what they do.
Given the “we spared no expense” attitude to the rest of the data these things are trained on, I fear that may be wishful thinking…
Well, that’s somewhat reassuring.
Still reprehensible that it’s being used that way, of course.
AI CP seems like a promising way to destroy demand for the real thing. How many people would risk a prison sentence making or viewing the real thing when they could push a button and have a convincing likeness for free with no children harmed? Flood the market with cheap fakes and makers of the real thing may not find it profitable enough to take the risk.
I think it would boost the market for the real thing more.
It's possible that there are people who would get into AI-generated CP if it were allowed to be advertised on NSFW websites.
And that would lead some to seek out the real thing. I think it's best to condemn it entirely.
The biggest issue with this line of thinking is: how do you prove it's CP without a victim? I suppose at a certain threshold it becomes obvious, but that can be a very blurry line (there was a famous case where a porn star had to be flown to a court case to prove the video wasn't CP, but I can't find the link right now).
So you're left with a crime that was committed with no victim and no proof, which can be really easy to abuse.
This sort of reminds me of the discussion on "what is a woman." Is Siri a woman? Many might say so, but at the same time Siri is not even human.
The question of how old the person in a specific generated image might be, and whether it even depicts a person at all, can only be answered by society. There is no scientific or logical answer to it.
So this will always have grey areas, differing opinions, and different rulings in different cultures.
In the end, it is a discussion about ethics, not logic.
Definitely, and that’s why hard/strict laws or rules can be dangerous. Much like the famous “I know it when I see it” judgment on obscenity.
Who actually gets hurt in AI generated cp? The servers?
Making a sexual photo of a child based off real photos is essentially using the child from the training data as the one in the act…
But who is actually getting hurt? No kid has gotten hurt using Gen AI.
A child whose abuse images are used to generate AI CP can be re-victimized by it, without even getting at the issues with normalizing it.
I don't remember whether it was a news article or a discussion thread, but other people have also suggested this might help during therapy and/or rehab. And they made the same argument: nobody gets harmed in creating these.
As for uses outside of controlled therapy, I’d be afraid it might make people want the “real thing” at some point. And, as others already pointed out: Good luck proving to your local police that those photos on your laptop are all “fake”.
It fetishizes the subjects' images, and nobody knows whether it would lead to recidivism in child predators. It is generally accepted that producing drawings of CP alone is bad, let alone by AI. I remember some dude getting arrested at the Canadian border for sexual drawings of Bart and Lisa. Regardless, I would say that it is quite controversial and probably not what you'd want your company to be known for …
Japan has a vibrant drawn CP market, yet it is not even close to having the highest rate of child abuse. https://undispatch.com/here-is-how-every-country-ranks-on-child-safety/
I'm not advocating for CP. I'm advocating for freedom.
A crime is only a crime if someone is negatively affected. Gen AI is just a more accessible Photoshop.
Are you suggesting that this particular type of CP should be acceptable? (And suddenly “but I used AI” becomes a popular defence.)
No CP should be acceptable. But I argue AI-generated material isn't CP.
This is no different than someone cutting out a child’s head from a Target catalog and sticking it to a body on a playboy magazine and masturbating to it.
Or someone Photoshopping a kid's head onto a pornographic photo.
It’s just a more accessible version of those examples.
At the end of the day, what you do in your own home is your thing. It's not my business what you do. As long as it doesn't hurt/affect anyone, go ahead.
I almost respect you for taking a stance so blatantly against what most people believe.
Almost.
All the little girls it learned from.
Gen AI doesn't take CP content and recreate it. There wouldn't be a point to gen AI if that were the case. It knows what regular porn looks like and what a child looks like, and it generates an image. With those inputs it can create something new and at the same time hurt nobody.
Prove it. Please, show me the full training data to guarantee you’re right.
But also, all the kids used for "kids face data" didn't sign up to be in porn.
I don't need to. It's just the way gen AI works. It takes images of things it knows and then generates NEW content based on what it thinks you want from your prompts.
If I'm looking for an infant flying an airplane, gen AI knows what a pilot looks like and what a child looks like, and it creates something new.
Also, kids face data doesn't mean they take the actual face of an actual child and paste it on a body. It might take an eyebrow and a freckle from one kid, a hair style from another, and eyes from someone else.
Lastly, the kids' parents consented when they uploaded images of their kids to social media.
If you think that AI is only trained on legal images, I can’t convince you otherwise.
What AI are you talking about? Are you suggesting the commercial models from OpenAI are trained using CP? Or just that there are some models out there that were trained using CP? Because yeah, anyone can create a model at home and train it with whatever. But suggesting that OpenAI has a DB of tagged CP is a different story.
OpenAI just scours the Internet. 100% chance it's come across something illegal and horrible. They don't pre-approve its training data.
I mean, you’re not giving a very convincing argument.
AI models are trained on the open Internet. Not curated. Open Internet has horrible things.
I'm no pedo, but what you do in your own home that hurts nobody is your own thing.
Yes, but how is the AI making the images or videos? It has to be trained on SOMETHING.
So, regardless of direct harm or not, harm is done at some point in the process and it needs to be stopped before it slips and gets worse because people “get used to” it.
needs to be stopped before it slips and gets worse because people “get used to” it.
Ah, right, I almost forgot the killer games rhetoric.
I also don’t agree with the killer games thing, but humans are very adaptable as a species.
Normally that's a good thing, but in a case like this, exposure to something shocking or upsetting can make it less shocking or upsetting over time (obviously not in every case). So, if AI is being used for something like this and being reported on, isn't it possible that people might slowly get desensitized to it over time?
But what if pedophiles in therapy are less likely to commit a crime if they have access to respective porn? Even better then, if it can be AI generated, no?
AI can combine two things. It can train on completely normal pictures of children, and it can train on completely normal adult porn, and then it can put those together.
This is the same reason it can do something like Godzilla with Sailor Moon’s hair, not because it trained on images of Godzilla with Sailor Moon’s hair, but because it can combine those two separate things.
Only, the real things are actual humans who have likely not consented to ever being in this database at all, let alone having parts of their likeness used for this horrific shit. There is no moral argument for this garbage.
Technically speaking, if you post images of your child on social media, you have consented. If you never uploaded an image of your child online, you never need to worry.
Social media has been around a long time. It is not reasonable to expect people to think of technology they can’t imagine even existing ten years in the future when “consenting” to use a platform. Legally you are correct. Morally this is obviously terrible. Everything about how terms and conditions are communicated is designed to take advantage of people who won’t or are unable to parse its meaning. Consent needs to be informed.
Fair enough. I still think it shouldn’t be allowed though.
Why? Not pressing but just curious what the logic is
I wouldn’t think it needs to have child porn in the training data to be able to generate it. It has porn as the data, it knows what kids look like, merge the two. I think that works for anything AI knows about, make this resemble this.
That’s fair, but I still think it shouldn’t be accepted or allowed.
I agree it shouldn’t be accepted, but I disagree on being allowed. I think it should be allowed because it doesn’t hurt anyone.
It seems pretty understandable that companies wouldn’t allow it, it’s more that if it is illegal (like in some places) then that gets into really sketchy territory imo.
AI CP. They found AI-generated CP that had been generated on their service…
"Explicit fakes" makes it sound less bad.
They were allowing AI CP to be made.
Is "CP" so you don't get flagged, or is it for sensitivity?
I don’t like saying the full phrase, it’s a disgusting merger of words that shouldn’t exist.
and it’s wrong, too. it’s not pornography, it’s rape.
Very true, thanks for your sensitivity @dumbass
It’s pronounced “doo mah.”
Shawshank reference?
A&W root beer
Wow, so it's from the Duh region in France. Here I thought it was just sparkling dumbass.
FYI, the currently accepted term is CSAM: child sexual abuse material. The reason why "CP" is wrong is that "porn" implies, or should imply, that there's consent to the sexual act, and children cannot consent.
You are right, it’s a disgusting merger exactly because it implies something that’s absolutely incorrect and wrong.
If we are pedantic, I'm not sure "children cannot consent" is correct. Children at 16 are mature enough to give consent in a legal context; we as a society just frown upon older adults mingling with them.
Legally speaking children can’t consent, which is why it’s illegal and the basis of my statement. I wasn’t being pedantic, I was showing a new terminology.
Different legislations have different ages of consent, but on the internet we should go by the highest one, since anything can be viewed from anywhere.
When I was 16 I would have totally posed for porn and I would have been completely consenting. But it would have been illegal. I wonder where we should draw the line, and if the current one is the best one.
This is the type of shit that radicalizes me against generative AI. It’s done so much more harm than good.
The craziest thing to me is that there were movements advocating for the creation of CP through AI to help those addicted to it, as it "wasn't real" and there were no victims involved. But there were no comments regarding how the model was trained to generate those images, or the damage that will come when such things get normalized.
It just should never be normalized or exist.
Nuanced take coming, take a breath:
I agree that Child Sexual Abuse is a horrible practice along with all other violence and oppression, sexual or not. But the attraction de facto exists and has done for thousands of years, even through intense taboos. It seems our current strategy of shaming and ignoring it has been ineffective. The definition of insanity being repeating the same thing expecting different results and all that.
Short of eugenics (and from previous trials maybe not even then) we might not be able to get rid of it.
So when do we try other ways of dealing with it?
I’m not saying generative AI is the solution, but I’m pretty sure denying harder isn’t it.
I’ve been kind of on the fence about this but then research found that people who physically or verbally express their anger tend to get angrier or delay calming down, contrary to conventional wisdom. I wonder if there could be a similar pattern with this so now I’m hesitating.
Sexuality is often treated as a more complex topic than emotions, but I found a similar meta-study ("The role of conditioning, learning and dopamine in sexual behavior: A narrative review of animal and human studies," 2014) concluding that conditioning and associative learning do occur around sexuality and can be used as a basis for treatment.
From other sources I’ve read, there’s so many influences going into sexuality that it’s impossible to see how it develops, but from a layman’s perspective I’d agree that not reinforcing child abuse probably makes it more rare.
My remaining issue is that with such a simplistic view, any non-normative sexuality can/should be conditioned away. We already have the abusive gay conversion camps, should we go back to do the same with polygamy, bdsm, porn? How much should fashion dictate what sexuality is allowed?
(Roman style orgies seem to have faded in popularity, but tantra and swinging seems to have risen lately, which should we be conditioning away? Who decides?)
Just for what it’s worth, you don’t need CSAM in the training material for a generative AI to produce CSAM. The models know what children look like, and what naked adults look like, so they can readily extrapolate from there.
The fact that you don’t need to actually supply any real CSAM to the training material is the reasoning being offered for supporting AI CSAM. It’s gross, but it’s also hard to argue with. We allow for all types of illegal subjects to be presented in porn; incest, rape, murder, etc. While most mainstream sites won’t allow those types of material, none of them are technically outlawed - partly because of freedom of speech and artistic expression and yadda yadda, but also partly because it all comes with the understanding that it’s a fake, made-for-film production and that nobody involved had their consent violated, so it’s okay because none of it was actually rape, incest, or murder. And if AI CSAM can be made without actually violating the consent of any real people, then what makes it different?
I don’t know how I feel about it, myself. The idea of “ethically-sourced” CSAM doesn’t exactly sit right with me, but if it’s possible to make it in a truly victimless manner, then I find it hard to argue outright banning something just because I don’t like it.
The fact that you don’t need to actually supply any real CSAM to the training material is the reasoning being offered for supporting AI CSAM. It’s gross, but it’s also hard to argue with.
Yeah, this is basically the crux of the issue. When you get into the weeds and start looking at more than just surface-level “but it needs CSAM to make CSAM” misconception, arguments against it basically boil down to “but it’s icky.” Which… Yeah. It is. But should something being icky automatically make it illegal, even if there are no victims?
I hate to make the comparison (for a variety of reasons) but until fairly recently homosexuality was psychologically classed as a form of destructive/dangerous kink. Largely because straight people had the same “but it’s icky” response whenever it got brought up. And we have tried to move away from that as time has passed, because we have recognized that being gay is not just a kink, it’s not just a choice, and it’s not inherently dangerous or harmful.
To contrast that, pedophilia has remained stigmatized. Because even if it passed the first two “it’s not just a kink/choice” tests, it still failed the “it’s not harmful” test. Consuming CSAM was inherently harmful, and always had a victim. There was no ethical way to view CSAM. But now with AI, it can actually begin passing that third test as well.
I don’t know how I feel about it, myself. The idea of “ethically-sourced” CSAM doesn’t exactly sit right with me, but if it’s possible to make it in a truly victimless manner, then I find it hard to argue outright banning something just because I don’t like it.
This is really the biggest hurdle. To be clear, I’m not arguing that being an active pedo should be decriminalized. But it is worth examining whether we’re basing criminality purely off of the instinctual “but it’s icky” response that the public has when it gets discussed. And is that response enough of a justification for making/keeping it illegal? And if your answer to that was “yes”, what if it could help pedos avoid consuming real CSAM, and therefore reduce the number of future victims? If it could legitimately help reduce the number of victims but you still want to criminalize it, then you are not actually focused on reducing harm; You’re focused on feeling righteous instead. The biggest issue right now is that harm reduction is very hard to study, because it is such a taboo topic. Even finding subjects to self-report is difficult or impossible. So we’ll have no idea what kinds of impacts on CSAM consumption (positive or negative) AI will realistically have until after it is widely available.
Plenty of hentai out there covering questionable subjects to train AI on as well.
It's a very difficult subject; both sides have merit. I can see the "CSAM created without abuse could be used in treatment/management of people with these horrible urges" argument, but I can also see "allowing people to create CSAM could normalise it and lead to more actual abuse."
Sadly it's incredibly difficult for academics to study this subject and see which of those two is more prevalent.
There is also the angle of generated CSAM looking real adding difficulty in prosecuting real CSAM producers.
This, above any other reason, is why I'm most troubled by AI CSAM. I don't care what anyone gets off to if no one is harmed, but the fact that real CSAM could be created and be indistinguishable from AI-created material is a real harm.
And I instinctively ask, who would bother producing it for real when AI is cheap and harmless? But people produce it for reasons other than money and there are places in this world where a child’s life is probably less valuable than the electricity used to create images.
I fundamentally think AI should be completely uncensored. Because I think censorship limits and harms uses for it that might otherwise be good. I think if 12 year old me could’ve had an AI show me where the clitoris is on a girl or what the fuck a hymen looks like, or answer questions about my own body, I think I would’ve had a lot less confusion and uncertainty in my burgeoning sexuality. Maybe I’d have had less curiosity about what my classmates looked like under their clothes, leading to questionable decisions on my part.
I can find a million arguments why AI shouldn't be censored. Like, did you know ChatGPT can be convinced that describing vaginal and oral sex in a romantic fiction is fine, but if it's anal sex, it has a much higher refusal rate? Is that subtle anti-gay encoding in the training data? It also struggles with polyamory when it's two men and a woman, but less when it's two women and a man. What's the long-term impact when these biases are built into everyday tools? These are concerns I consider all the time.
But at the end of the day, the idea that there are children out there being abused and consumed and no one will even look for them because “it’s probably just AI” isn’t something I can bear no matter how firm my convictions are about uncensored AI. It’s something I struggle to reconcile.
I'd say that's more of an AI industry issue than anything else. All AI art needs to be easily identifiable and sourced as such, though I doubt AI producers will want to put hidden tags on all their AI-generated work.
Probably got all the data to train for it from the pentagon. They’re known for having tons of it and a lot of their staff (more than 25%) are used to seeing it frequently.
Easily searchable, though I don't like to search for that shit. Here's one post; if you literally add "pentagon" to c____ p___ in a search, a million articles on DIFFERENT subjects (other than this house bill) come up: https://thehill.com/policy/cybersecurity/451383-house-bill-aims-to-stop-use-of-pentagon-networks-for-sharing-child/
When my dad worked for the DoD, he was assigned a laptop for work that had explicit photos of children on it.
For what purpose would they have that?
Anything like that involving children or child-like individuals is a hard fucking no from me. It's like those mfs who have art of a little anime girl and go "actually she's a 5000 year old vampire." They know exactly what the fuck they're doing. I also hate the "it's not real" argument; like, mf, the sentiment is still there.