Image moderation
Sendbird’s image moderation is powered by Google Cloud Vision API. This feature moderates text and file messages that contain explicit images or inappropriate image URLs. It uses five categories to moderate images: adult, spoof, medical, violence, and racy. Image moderation doesn't apply when uploading channel images or profile images to the Sendbird server.
How it works
Image moderation is not a standalone API that you call to inspect individual images. Instead, it is a server-side setting that you configure once through the API below. Once enabled, the Sendbird server automatically moderates images whenever users send messages through the Sendbird client SDKs. The workflow is as follows:
- Enable image moderation by sending a PUT request to the settings API below. This is a one-time configuration per application or custom channel type.
- When a user sends a file message, or a text message containing image URLs, through a Sendbird client SDK, the Sendbird server automatically calls the Google Cloud Vision API to inspect the image.
- Based on the inspection result and the configured limit values, the server allows or blocks the message.
- The sender receives the moderation result in the send message response. You can also monitor blocked messages through the image moderation webhook event.
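As an illustration of the first step, the settings payload can be built before sending the PUT request. The host, API token, and endpoint path in the comments below are placeholders, not values from this page; this is a minimal Python sketch, not a definitive integration:

```python
import json

# Placeholder values -- substitute your own application's host and token.
API_HOST = "https://api-APP_ID.sendbird.com"
API_TOKEN = "YOUR_API_TOKEN"

# Example settings: block images that Google Cloud Vision rates
# "likely" (4) or above in any of the five categories.
payload = {
    "image_moderation": {
        "type": 1,            # moderation method (see the request body table below)
        "soft_block": False,
        "limits": {
            "adult": 4,
            "spoof": 4,
            "medical": 4,
            "violence": 4,
            "racy": 4,
        },
        "check_urls": True,   # also inspect image URLs in text messages
    }
}

# The actual call would be a PUT request, e.g. with the `requests` library
# (endpoint path shown here is an assumption):
# requests.put(f"{API_HOST}/v3/applications/settings_global",
#              headers={"Api-Token": API_TOKEN},
#              json=payload)
print(json.dumps(payload["image_moderation"], indent=2))
```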
What messages are moderated
| Sent through | Message type | Moderated |
|---|---|---|
| Client SDK | File messages | Yes, by default |
| Client SDK | Text messages | Yes, when `image_moderation.check_urls` is set to `true` |
| Platform API | All messages | No |
| All | Channel/profile image uploads | No |
Limit values
After an image is uploaded and moderated, the feature returns limit values. These numbers range from 1 to 5 for each category and indicate how likely the image is to be blocked. The following shows what each limit means:
| Limit | Description |
|---|---|
| 1 (very unlikely) | It is very unlikely that the image will be blocked. |
| 2 (unlikely) | It is unlikely that the image will be blocked. |
| 3 (possible) | It is possible that the image will be blocked. |
| 4 (likely) | It is likely that the image will be blocked. |
| 5 (very likely) | It is very likely that the image will be blocked. |
You can test different images on the Google Cloud Vision API's try-it tool to see how moderation works and determine which image moderation settings suit your needs.
Note: This feature may not work on culturally sensitive images such as those related to religion, drugs, or weapons.
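One plausible reading of the limit values above is that the server compares each category's likelihood score from Google Cloud Vision against the configured limit and blocks the image when any score reaches its limit. The actual comparison is performed server-side by Sendbird; the sketch below only illustrates that reading:

```python
# Illustrative only: an assumed threshold comparison between per-category
# likelihood scores (1-5) and configured limits. The real decision logic
# runs on the Sendbird server.

def is_blocked(scores: dict, limits: dict) -> bool:
    """Return True if any category's score reaches its configured limit."""
    return any(scores.get(cat, 0) >= limit for cat, limit in limits.items())

limits = {"adult": 4, "spoof": 4, "medical": 4, "violence": 4, "racy": 4}

safe_image = {"adult": 1, "spoof": 2, "medical": 1, "violence": 1, "racy": 2}
racy_image = {"adult": 3, "spoof": 1, "medical": 1, "violence": 1, "racy": 5}

print(is_blocked(safe_image, limits))  # False
print(is_blocked(racy_image, limits))  # True
```

Lowering a limit makes that category stricter: with `"racy": 2`, the `safe_image` above would also be blocked.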
When a message passes moderation
When a message passes image moderation, the Sendbird server sends back a success response body. The response contains the following moderation fields:
| Property name | Type | Description |
|---|---|---|
| moderation_action | integer | Indicates the moderation decision. Valid values are the following: |
| moderation_info.scores | nested object | Contains a set of booleans for each category. |
| moderation_info.cache_hit | boolean | Indicates whether the moderation result was returned from cache. If |
Note: The fields in this response aren't saved to the message and are only visible one time.
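Because these fields are only visible in the send-message response, they must be read at send time. The response fragment below is a minimal assumption based only on the fields in the table above (a real send-message response contains many more fields):

```python
# Assumed minimal response fragment -- only the moderation-related keys
# from the table above are shown; field values are hypothetical.
response_body = {
    "message_id": 12345,       # hypothetical
    "moderation_action": 0,    # hypothetical value
    "moderation_info": {
        "scores": {"adult": False, "spoof": False, "medical": False,
                   "violence": False, "racy": False},
        "cache_hit": False,
    },
}

# Collect any categories the moderation flagged.
info = response_body.get("moderation_info", {})
flagged = [cat for cat, hit in info.get("scores", {}).items() if hit]
print("cache hit:", info.get("cache_hit"))
print("flagged categories:", flagged)
```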
When moderation blocks a message
When image moderation blocks a message, the Sendbird server sends back an error response containing the 900066 (ERROR_FILE_MOD_BLOCK) or 900065 (ERROR_FILE_URL_BLOCK) code.
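Client code can detect these two codes to tell a moderation block apart from other send failures. The error-body shape below follows the common `{"error": true, "code": ..., "message": ...}` pattern and is an assumption for illustration:

```python
# Error codes taken from this page: 900066 (ERROR_FILE_MOD_BLOCK) and
# 900065 (ERROR_FILE_URL_BLOCK).
IMAGE_MODERATION_ERROR_CODES = {900066, 900065}

def was_blocked_by_image_moderation(error_body: dict) -> bool:
    """Return True if an error response indicates an image moderation block."""
    return (bool(error_body.get("error"))
            and error_body.get("code") in IMAGE_MODERATION_ERROR_CODES)

blocked = {"error": True, "code": 900066, "message": "ERROR_FILE_MOD_BLOCK"}
other = {"error": True, "code": 400108, "message": "Some other error"}
print(was_blocked_by_image_moderation(blocked))  # True
print(was_blocked_by_image_moderation(other))    # False
```

A caller might use this check to show the sender a specific "image not allowed" notice instead of a generic failure message.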
HTTP request
The following API enables or disables image moderation for your application or a specific custom channel type.
Parameters
The following table lists a parameter that this action supports.
Required
| Parameter name | Type | Description |
|---|---|---|
| custom_type | string | Specifies the custom channel type to which the settings apply. |
Request body
The following table lists the properties of an HTTP request that this action supports.
Properties
| Property name | Type | Description |
|---|---|---|
| image_moderation | nested object | Specifies a moderation configuration to moderate inappropriate images in the application. This feature is powered by Google Cloud Vision API, which supports various image types. |
| image_moderation.type | integer | Determines which moderation method to apply to images and image URLs in text and file messages. Acceptable values are the following: |
| image_moderation.soft_block | boolean | Determines whether to moderate images in messages. If |
| image_moderation.limits | nested object | Specifies a set of values returned after an image has been moderated. These limit numbers range from 1 to 5 and indicate the likelihood of the image passing the moderation standard. (Default: |
| image_moderation.limits.adult | integer | Specifies the likelihood that the image contains adult content. |
| image_moderation.limits.spoof | integer | Specifies the likelihood that the image is a spoof. |
| image_moderation.limits.medical | integer | Specifies the likelihood that the image contains medical content. |
| image_moderation.limits.violence | integer | Specifies the likelihood that the image contains violent content. |
| image_moderation.limits.racy | integer | Specifies the likelihood that the image contains racy content. |
| image_moderation.check_urls | boolean | Determines whether to check if the image URLs in text and file messages are appropriate. This property can filter URLs of inappropriate images, but it can't moderate URLs of websites containing inappropriate images. For example, image search results of adult images on "google.com" are not filtered. |
If you want to turn off image moderation, send a PUT request with the type property set to 0 as shown below:
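A minimal sketch of that disable request (the host, token, and endpoint path are placeholder assumptions, not values from this page):

```python
import json

# Placeholder values -- substitute your own application's host and token.
API_HOST = "https://api-APP_ID.sendbird.com"
API_TOKEN = "YOUR_API_TOKEN"

# Setting type to 0 turns image moderation off.
payload = {"image_moderation": {"type": 0}}

# e.g. with the `requests` library (endpoint path is an assumption):
# requests.put(f"{API_HOST}/v3/applications/settings_global",
#              headers={"Api-Token": API_TOKEN},
#              json=payload)
print(json.dumps(payload))
```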
Response
If successful, this action returns the updated moderation settings, or those of the specified custom channel type, in the response body.
In the case of an error, an error object is returned. A detailed list of error codes is available here.