I’m talking about this sort of thing. Like clearly I wouldn’t want someone to see that on my phone in the office or when I’m sat on a bus.

However, there seem to be a lot of these that aren’t filtered out by NSFW settings, when a similar picture of a woman would be, so it seems this is a deliberate feature I might not be understanding.

Discuss.

  • peanuts4life@lemmy.blahaj.zone · 28 days ago

    Yeah, that would be great. Many instance admins already run CSAM classifier models on all incoming images. It’d be great if they could add additional models that automatically put meta tags on images, like “suggestive” and “gore”, with the option for the poster to modify the tags in case of a false positive or negative. Like a lasagna getting tagged as gore, for example.
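    The flow described above (classifier proposes tags, poster can correct them) could be sketched roughly like this. This is a hypothetical illustration, not any real instance software: the classifier, its scores, the threshold, and all function names are made up for the example.

    ```python
    # Hypothetical sketch: an automatic classifier proposes content tags,
    # and the poster can override them to fix false positives/negatives.
    # All names and scores here are illustrative, not a real API.

    THRESHOLD = 0.5  # assumed score cutoff for applying a tag


    def classify(image_id):
        """Stand-in for a real classifier; returns label -> score."""
        # Hard-coded scores purely for illustration.
        fake_scores = {
            "lasagna.jpg": {"gore": 0.7, "suggestive": 0.1},
        }
        return fake_scores.get(image_id, {})


    def auto_tags(image_id):
        """Apply every tag whose score clears the threshold."""
        return {label for label, score in classify(image_id).items()
                if score >= THRESHOLD}


    def apply_overrides(tags, add=(), remove=()):
        """Let the poster add missed tags or strip wrong ones."""
        return (set(tags) | set(add)) - set(remove)


    tags = auto_tags("lasagna.jpg")               # {'gore'} — a false positive
    tags = apply_overrides(tags, remove=["gore"])  # poster corrects it
    print(sorted(tags))                            # prints []
    ```

    The key design point is that the classifier output is advisory: it seeds the tags, but the poster (or a moderator) has the final say, which covers exactly the lasagna-flagged-as-gore case.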