Other samples:

Android: https://github.com/nipunru/nsfw-detector-android

Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw

Keras (MIT): https://github.com/bhky/opennsfw2

I feel it’s a good idea for those building native clients for Lemmy to integrate projects like these and run offline inference on feed content for the time being, to cover content that isn’t marked NSFW but should be.
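As a rough sketch of how a client could apply such a model locally: score each image in the feed offline and blur anything over a threshold. The `nsfw_score` function and the threshold below are illustrative stand-ins, not the API of any of the linked libraries (e.g. opennsfw2 exposes a `predict_image()` call that could back it).

```python
# Sketch: hide/blur feed images whose local NSFW score exceeds a threshold.
# `nsfw_score` is a hypothetical stand-in for a real offline detector; it is
# stubbed here so the filtering logic stays self-contained.

NSFW_THRESHOLD = 0.8  # illustrative cutoff; tune per model


def nsfw_score(image_path: str) -> float:
    """Placeholder for an offline model inference call."""
    # e.g. with opennsfw2: return opennsfw2.predict_image(image_path)
    return 0.0


def should_blur(post: dict) -> bool:
    """Blur if the post is already flagged NSFW, or the local model flags it."""
    if post.get("nsfw"):
        return True
    image = post.get("image_path")
    return image is not None and nsfw_score(image) >= NSFW_THRESHOLD


posts = [
    {"id": 1, "nsfw": True, "image_path": None},
    {"id": 2, "nsfw": False, "image_path": None},
]
print([p["id"] for p in posts if should_blur(p)])  # → [1]
```

The point of the wrapper is that the client never trusts the server-side NSFW flag alone; the local model acts as a second, offline check.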

What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?

Edit:

There’s also this, which takes a bit more effort to implement properly but also provides a hash that can be used for reporting needs: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX
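For the reporting use case, the value of a perceptual hash is that visually similar images produce near-identical hashes, so a client could match a locally computed hash against a list of reported hashes by Hamming distance. A minimal sketch, assuming hashes arrive as equal-length hex strings (the distance threshold is illustrative, not from the linked project):

```python
# Sketch: compare perceptual hashes (hex strings) by Hamming distance.
# Small distances suggest visually similar images; threshold is illustrative.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length hex hash strings."""
    if len(hash_a) != len(hash_b):
        raise ValueError("hashes must be the same length")
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")


def matches_report_list(local_hash: str, reported: list[str], max_dist: int = 8) -> bool:
    """True if the local hash is within max_dist bits of any reported hash."""
    return any(hamming_distance(local_hash, h) <= max_dist for h in reported)


print(hamming_distance("ab01", "ab03"))  # → 1 (one bit differs)
```

Exact-match lookups would miss re-encoded or lightly edited copies, which is why a distance threshold rather than string equality is the usual design choice here.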

Python package (MIT): https://pypi.org/project/opennsfw-standalone/

    • WhoRoger@lemmy.world · 1 year ago
      I wish there were such detectors for other triggering stuff, like gore, or creepy insects, or any visual based phobia. Everyone just freaks out about porn.

      • pexavc@lemmy.world (OP) · 1 year ago
        Actually, I am looking at this exact thing: compiling them into an open-source package to use in Swift. Just finished NSFW. But everything you mentioned should be in a “ModerationKit” as well, allowing users to toggle based on their needs.
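The per-category toggle idea above could be sketched as follows. This is a hypothetical outline, not code from any actual “ModerationKit”; the category names, scores, and threshold are all illustrative, and each score would come from a separate offline classifier (NSFW, gore, insects, and so on):

```python
# Sketch: per-category content filtering with user-controlled toggles.
# Category names and the score dict are hypothetical; each score would come
# from its own offline classifier.

THRESHOLD = 0.8  # illustrative


def hidden_categories(scores: dict[str, float], enabled: dict[str, bool]) -> list[str]:
    """Return categories that are both enabled by the user and over threshold."""
    return [c for c, s in scores.items() if enabled.get(c, False) and s >= THRESHOLD]


user_toggles = {"nsfw": True, "gore": True, "insects": False}
image_scores = {"nsfw": 0.1, "gore": 0.95, "insects": 0.9}
print(hidden_categories(image_scores, user_toggles))  # → ['gore']
```

Keeping the toggles per-category, as the comment suggests, lets each user hide only what affects them (e.g. insects) without hiding everything the models can flag.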

    • JuxtaposedJaguar@lemmy.ml · 1 year ago
      To be fair, most non-porn “NSFW” is probably “NSFL”. So NSFW in its exclusive usage is almost entirely porn.

        • janAkali@lemmy.one · 1 year ago
          In many cultures around the world, nudity in itself isn’t considered inappropriate or sexual.