To be more specific about the types of content you want to classify, you can enable one or more of the available models. Under Plugins > Visual Recognition > Configure > Visual Recognition Models, select the checkboxes for the tags (categories) you would like added to your content.
What does each Model do?
The 'General' model covers a wide range of tags across many different topics. In most cases, the tags returned by the general model will sufficiently recognise what's inside your image.
The 'Not Safe For Work' model analyses images and returns probability scores on the likelihood that the image contains pornography.
The NSFW response returns probabilities for nsfw (Not Safe For Work) and sfw (Safe For Work) that sum to 1.0. Generally, if the nsfw probability is less than 0.15, the image is most likely Safe For Work; if the nsfw probability is greater than 0.85, it is most likely Not Safe For Work.
In addition, the NSFW model can be used to automatically disable content deemed "nsfw" by checking the "Disable content labeled NSFW" option.
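The threshold logic above can be sketched in a few lines of Python. This is a minimal illustration, not the plugin's actual code: the response shape (a dict with "nsfw" and "sfw" keys) and the function name are assumptions for the example.

```python
# Hypothetical helper illustrating the NSFW thresholds described above.
# The response shape ({"nsfw": ..., "sfw": ...}) is an assumption,
# not the plugin's documented API.

def classify_nsfw(scores):
    """Map an nsfw/sfw probability pair to a coarse label."""
    nsfw = scores["nsfw"]
    if nsfw < 0.15:
        return "sfw"        # very likely Safe For Work
    if nsfw > 0.85:
        return "nsfw"       # very likely Not Safe For Work
    return "uncertain"      # mid-range scores may warrant manual review
```

Scores between 0.15 and 0.85 fall into a grey zone; how a site handles them (manual review, a stricter cut-off, and so on) is a policy decision rather than something the model dictates.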
The colour model retrieves information on the dominant colours within a post.
The 'Travel' model analyses images and returns probability scores on the likelihood that the image contains a recognised travel-related category.
The current model is designed to identify specific features of residential, hotel, and travel-related properties.
For any further questions or queries, please submit a request and our support team will get back to you.