NSFW (Not Safe For Work) literally means content not suitable for viewing while at work, which is quite an elegant name.
NSFW detection in action: note the score at the bottom.
The general workflow: the model takes an image (or video frame) as input and outputs a score between 0 and 1. As a rule of thumb, a score below 0.2 is considered very safe, while a score above 0.8 means the image is very likely explicit.
The roop open-source project uses a threshold of 0.85.
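The thresholds above can be sketched as a small helper. This is a hypothetical function, not part of any library: the 0.85 cutoff comes from roop, the 0.2 / 0.8 bands are the rule of thumb stated above, and the score itself would come from a detector such as opennsfw2.

```python
def classify_nsfw(score: float, nsfw_threshold: float = 0.85) -> str:
    """Map an NSFW probability in [0, 1] to a rough verdict.

    The default 0.85 cutoff is the one used by the roop project;
    scores below 0.2 are treated as very safe, everything in
    between is ambiguous. (Hypothetical helper for illustration.)
    """
    # In practice the score would come from a model, e.g.
    # score = opennsfw2.predict_image("photo.jpg")
    if score < 0.2:
        return "safe"
    if score > nsfw_threshold:
        return "nsfw"
    return "uncertain"
```

For example, `classify_nsfw(0.05)` returns `"safe"`, while `classify_nsfw(0.9)` returns `"nsfw"`.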
Compared with the original opennsfw (Yahoo's Caffe-based Open-NSFW model), opennsfw2 re-implements the model in TensorFlow/Keras.
Reference
GitHub - bhky/opennsfw2: Keras Core implementation of the Yahoo Open-NSFW model