It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce real-life child victims.

Like the comments on this post here.

https://sh.itjust.works/post/6220815

I find this argument crazy. I don't even know where to begin describing how many ways this could go wrong.

My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It still causes harm when viewed, and it is still based in the further victimization of the children involved.

Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created: these models are trained on images of real people, so the output is still rooted in real victims.

Some have compared pedophilia and child sexual assault to drug addiction, which is dubious at best, and pretty offensive, imo.

Using drugs has no inherent victim, and it is not predatory.

I could go on, but I'm not an expert or a social worker of any kind.

Can anyone link me to articles talking about this?

  • lwuy9v5@lemmy.world · 1 year ago

    That's so fucked up that anyone thinks enablement is a genuine means of reduction here…

    • Moira_Mayhem@lemmy.world · 11 months ago

      It's so fucked up that people like you focus on the morality of the pervert and not the vulnerability of the victims.

      The goal is reducing and eliminating the number of children exploited.

      CSAM is a profitable industry for the disgusting people who operate it.

      If you want them to stop exploiting children, then remove their market share with low-cost, no-harm AI alternatives.

      None of you think of actual solutions; you just want a target that is socially acceptable for you to hate on.