It seems crazy to me, but I've seen this concept floated in several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce real-life child victims.

Like the comments on this post here.

https://sh.itjust.works/post/6220815

I find this argument crazy. I don't even know where to begin on how many ways this will go wrong.

My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It still causes harm when viewed, and it is still based on the further victimization of the children involved.

Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created.

Some have compared pedophilia and child sexual assault to drug addiction. That comparison is dubious at best, and pretty offensive imo.

Using drugs has no inherent victim, and it is not predatory.

I could go on, but I'm not an expert or a social worker of any kind.

Can anyone link me articles talking about this?

  • Killing_Spark@feddit.de
    1 year ago
    You make a very similar argument to @Surdon's, and my answer is the same (in short; my answer to the other comment is longer):

    Yes, giving everyone access would be a bad idea. I parallel it to controlled-substance access, which reduces black-market drug sales.

    You do have some interesting details though:

    > Training a model on real CSAM is bad, because it adds the likeness of the original victims to the image model. However, you don’t need CSAM in your training set to generate it.

    This has been mentioned a few times, mostly with the idea of mixing “normal” photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.

    > As far as I can tell, we have no good research in favour of or against allowing automated CSAM. I expect it’ll come out in a couple of years. I also expect the research will show that the net result is a reduction in harm. I then expect politicians to ignore that conclusion and try to ban it regardless because of moral outrage.

    This is more or less my expectation too, but I wouldn’t count on the research coming out in a few years. There isn’t much incentive to do actual research on the topic afaik. There isn’t much to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.

    • > This has been mentioned a few times, mostly with the idea of mixing “normal” photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.

      It’s not even an idea, it’s how you get CSAM out of existing models. Nobody over at OpenAI thought “hmm, let’s throw some super illegal porn into the dataset”, but pedophiles still managed to get the AI to generate that stuff. Granted, the open models aren’t always great at combining features, but commercial models are rapidly overcoming the AI weirdness you get in generated art.

      My intention isn’t to intentionally make CSAM generating models, though. The existing models are good enough for photorealistic images. Honestly, porn can become such a slippery slope that I’m not all that happy about AI generated porn existing in its current form, but I don’t think we can prevent it at this point.

      > This is more or less my expectation too, but I wouldn’t count on the research coming out in a few years. There isn’t much incentive to do actual research on the topic afaik. There isn’t much to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.

      Maybe you’re right. My guess is that we’ll see more and more generated CSAM in the libraries of convicted paedophiles until eventually someone gets convicted without any non-generated imagery, which may very well bring these ethical issues to the forefront.

      Currently, all cases are about “paedophile found with stash of CSAM and some AI-generated stuff as well”. That makes convictions and such quite easy, because the bulk of the material was created by abusing real kids.

      It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.

      • Killing_Spark@feddit.de
        1 year ago
        > It’s not even an idea, it’s how you get CSAM out of existing models

        I didn’t know this was a thing tbh. I knew that you could get them to generate adult porn or combine faces with adult porn. Didn’t know they could already create realistic CSAM. I had assumed they used the original material to train one of the open models. Well, that’s even more horrifying.

        > It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.

        Didn’t even think about that. Exchanging these models will be significantly less risky than exchanging the actual material. Images are being scanned by cloud storage providers, and apparently archives with weak passwords are too. But no one is going to execute an AI model just to see whether it can produce CSAM.

        • Exactly, and this is why I find the way AI companies approach these developments (release the models and see what happens) so troubling. They knew, or could’ve known, the risks, but in an effort to get another round of VC money, they didn’t stop to build in ways to solve ethical problems first. They released their science to create before coming up with the science to control, and now the rest of the world has to deal with the fallout.

          The only defences these companies have are “we didn’t have anything illegal in our training set” and “we have no way to control the models themselves”. Currently, online services run the images they generate back through the system to tag them and try to detect bad generated subjects. This isn’t just for law enforcement; there are English words that are used more often in porn data sets, so using them may accidentally generate porn if you’re not careful.