Moderation

AI Dev Kit provides content moderation to detect potentially harmful text using the .GENModeration() extension method.

What is Content Moderation?

Content moderation helps you:

  • Detect inappropriate or harmful content

  • Filter user-generated content

  • Comply with content policies

  • Protect users from harmful material

Basic Usage

// safetySettings: see the Safety Settings section below.
var result = await "User input text"
    .GENModeration(safetySettings)
    .ExecuteAsync();

if (result.IsFlagged)
{
    Debug.LogWarning($"Content flagged: {result.Categories}");
}

Categories

Moderation typically checks for:

  • Hate: Hateful or discriminatory content

  • Harassment: Bullying or harassing content

  • Self-harm: Self-harm or suicide-related content

  • Sexual: Sexual or explicit content

  • Violence: Violent or graphic content
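Flagged categories can be inspected individually after a moderation call. A minimal sketch, assuming result.Categories enumerates the category names returned by the provider (the exact result type is not shown on this page):

var result = await userText
    .GENModeration(safetySettings)
    .ExecuteAsync();

if (result.IsFlagged)
{
    // Assumption: Categories is enumerable; adapt to the actual result type.
    foreach (var category in result.Categories)
    {
        Debug.LogWarning($"Flagged for: {category}");
    }
}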

Input Types

String Input
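Plain strings can be moderated directly, exactly as in Basic Usage above:

var result = await "Some player chat message"
    .GENModeration(safetySettings)
    .ExecuteAsync();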

ModerationPrompt
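A ModerationPrompt can also be passed through the same pipeline. Its exact constructor is not shown on this page, so the sketch below is hypothetical:

// Hypothetical constructor signature.
var prompt = new ModerationPrompt("User input text");
var result = await prompt
    .GENModeration(safetySettings)
    .ExecuteAsync();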

IModeratable
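Custom types can implement IModeratable so they can be sent to moderation directly. The interface's actual members are not documented on this page; the member shown below is an assumption:

public class ChatMessage : IModeratable
{
    public string Sender;
    public string Text;

    // Hypothetical member: the real interface contract may differ.
    public string ToModeratableText() => Text;
}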

Safety Settings
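The safetySettings value used throughout this page tunes per-category thresholds. Its construction is provider-specific and not shown here; the type and property names below are assumptions, not the asset's confirmed API:

// Hypothetical type, property, and enum names.
var safetySettings = new SafetySetting[]
{
    new SafetySetting
    {
        Category = HarmCategory.Harassment,
        Threshold = HarmBlockThreshold.BlockMediumAndAbove
    }
};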

Unity Integration Examples

Example 1: Chat Filter
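A minimal chat filter sketch; only GENModeration, ExecuteAsync, and IsFlagged come from this page, and the rest is illustrative:

using System.Threading.Tasks;
using UnityEngine;

public class ChatFilter : MonoBehaviour
{
    // safetySettings: see the Safety Settings section.
    public async Task<bool> IsAllowedAsync(string message)
    {
        var result = await message
            .GENModeration(safetySettings)
            .ExecuteAsync();
        return !result.IsFlagged;
    }
}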

Example 2: User Content Validator
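A validator for user-submitted content (e.g. a profile bio) that blocks saving when the text is flagged; a sketch under the same assumptions as above:

using System.Threading.Tasks;
using UnityEngine;

public class UserContentValidator : MonoBehaviour
{
    public async Task OnSubmitBio(string bio)
    {
        var result = await bio
            .GENModeration(safetySettings)
            .ExecuteAsync();

        if (result.IsFlagged)
        {
            Debug.LogWarning($"Content flagged: {result.Categories}");
            // Reject and ask the user to revise.
            return;
        }

        // Save the bio here.
    }
}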

Example 3: Real-time Chat Moderator
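A real-time moderator that screens each incoming message before it is displayed; a hypothetical sketch (DisplayMessage is illustrative):

using UnityEngine;

public class RealtimeChatModerator : MonoBehaviour
{
    public async void OnMessageReceived(string sender, string text)
    {
        var result = await text
            .GENModeration(safetySettings)
            .ExecuteAsync();

        DisplayMessage(sender, result.IsFlagged ? "[message removed]" : text);
    }

    private void DisplayMessage(string sender, string text)
    {
        // Update the chat UI here.
    }
}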

Provider Support

OpenAI

Google Gemini

Best Practices

✅ Good Practices

❌ Bad Practices

Next Steps
