
Endpoint

POST /v1/moderate/text
Analyzes a single text through all three models (sexism classifier, toxicity transformer, rule engine) and returns a comprehensive moderation response with ensemble scoring.

Request

Headers

| Header | Value | Required |
| --- | --- | --- |
| `Content-Type` | `application/json` | Yes |

Body Parameters

text
string
required
The text to moderate. Must be a non-empty string.

Min length: 1 character. Max length: 10,000 characters (recommended).

Request Example

curl -X POST "http://localhost:8000/v1/moderate/text" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Your text to moderate here"
  }'
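The same request can be made from Python using only the standard library. This is a sketch; the helper names `build_request` and `moderate_text` are illustrative, not part of the API:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/moderate/text"  # adjust for your deployment

def build_request(text: str, url: str = API_URL) -> urllib.request.Request:
    """Build the POST request with the JSON body the endpoint expects."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

def moderate_text(text: str, url: str = API_URL) -> dict:
    """Send the request and return the parsed ModerationResponse as a dict."""
    with urllib.request.urlopen(build_request(text, url)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Splitting request construction from sending keeps the payload logic testable without a running server.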

Response

Returns a ModerationResponse object with:
  • Individual model outputs (label)
  • Ensemble decision (ensemble)
  • Metadata (meta)

Response Schema

{
  "text": "string",
  "label": {
    "sexism": {
      "score": "number",
      "severity": "string",
      "model_version": "string",
      "threshold_met": "boolean"
    },
    "toxicity": {
      "overall": "number",
      "insult": "number",
      "threat": "number",
      "identity_attack": "number",
      "profanity": "number",
      "model_version": "string"
    },
    "rules": {
      "slur_detected": "boolean",
      "threat_detected": "boolean",
      "self_harm_flag": "boolean",
      "profanity_flag": "boolean",
      "caps_abuse": "boolean",
      "character_repetition": "boolean",
      "model_version": "string"
    }
  },
  "ensemble": {
    "summary": "string",
    "primary_issue": "string",
    "score": "number",
    "severity": "string"
  },
  "meta": {
    "processing_time_ms": "integer",
    "models_used": ["string"]
  }
}

Success Response (200)

Safe content:
{
  "text": "I love this product!",
  "label": {
    "sexism": {
      "score": 0.043,
      "severity": "low",
      "model_version": "sexism_lasso_v1",
      "threshold_met": false
    },
    "toxicity": {
      "overall": 0.021,
      "insult": 0.012,
      "threat": 0.008,
      "identity_attack": 0.010,
      "profanity": 0.015,
      "model_version": "toxic_roberta_v1"
    },
    "rules": {
      "slur_detected": false,
      "threat_detected": false,
      "self_harm_flag": false,
      "profanity_flag": false,
      "caps_abuse": false,
      "character_repetition": false,
      "model_version": "rules_v1"
    }
  },
  "ensemble": {
    "summary": "likely_safe",
    "primary_issue": "none",
    "score": 0.024,
    "severity": "low"
  },
  "meta": {
    "processing_time_ms": 19,
    "models_used": [
      "sexism_lasso_v1",
      "toxic_roberta_v1",
      "rules_v1"
    ]
  }
}
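Clients typically branch on the `ensemble` block rather than on individual model outputs. The mapping below from `ensemble.summary` and `severity` to an action is an illustrative policy, and only the `"likely_safe"` summary value is confirmed by this page; other values are assumptions:

```python
def decide(response: dict) -> str:
    """Map the ensemble decision to an illustrative moderation action."""
    ensemble = response["ensemble"]
    if ensemble["summary"] == "likely_safe":
        return "allow"
    if ensemble["severity"] == "low":
        return "review"  # flagged but low severity: queue for human review
    return "block"

# Using the ensemble block from the 200 response example above:
safe = {"ensemble": {"summary": "likely_safe", "primary_issue": "none",
                     "score": 0.024, "severity": "low"}}
print(decide(safe))  # allow
```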

Error Responses

Validation Error (422)

Cause: Invalid request body or empty text
{
  "detail": [
    {
      "loc": ["body", "text"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
Validation Error (422)

Cause: Request body doesn't match the expected schema
{
  "detail": [
    {
      "loc": ["body", "text"],
      "msg": "str type expected",
      "type": "type_error.str"
    }
  ]
}
Internal Server Error (500)

Cause: Unexpected error during processing
{
  "detail": "Error processing moderation: [error message]"
}
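Both error shapes can be handled with one helper: validation errors carry a list of `{loc, msg, type}` objects in `detail`, while processing errors carry a plain string. The function name is illustrative:

```python
def format_error(body: dict) -> str:
    """Turn either error-body shape into a single human-readable message."""
    detail = body.get("detail", "unknown error")
    if isinstance(detail, list):  # validation error: list of {loc, msg, type}
        return "; ".join(
            f"{'.'.join(map(str, item['loc']))}: {item['msg']}" for item in detail
        )
    return str(detail)  # processing error: plain string

print(format_error({"detail": [{"loc": ["body", "text"],
                                "msg": "field required",
                                "type": "value_error.missing"}]}))
# body.text: field required
```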

Performance

| Metric | Value |
| --- | --- |
| Average response time | 20-40 ms |
| With GPU | 15-25 ms |
| Without GPU | 40-60 ms |
| Rate limit | Configurable (via Redis) |
For optimal performance with multiple texts, use the batch endpoint instead of making multiple individual requests.

Use Cases

  • Real-time moderation: Moderate user comments before posting
  • Content filtering: Screen user-generated content
  • Chat moderation: Filter messages in real-time
  • Form validation: Check text inputs on submission

Best Practices

Preprocessing

The API handles preprocessing automatically. Send raw user input without manual cleaning.

Use Ensemble

Rely on ensemble.summary for quick decisions. Individual model outputs are available for detailed analysis.

Handle Errors

Implement proper error handling. The API returns detailed error messages.

Monitor Latency

Track processing_time_ms to monitor performance and detect issues.
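One lightweight way to act on `meta.processing_time_ms` is a rolling average over recent responses; the window size and alert threshold below are illustrative, not values defined by the API:

```python
from collections import deque

class LatencyMonitor:
    """Rolling average of processing_time_ms over the last N responses."""

    def __init__(self, window: int = 100, alert_ms: float = 60.0):
        self.samples = deque(maxlen=window)
        self.alert_ms = alert_ms

    def record(self, response: dict) -> bool:
        """Record one response; return True if the rolling average exceeds the threshold."""
        self.samples.append(response["meta"]["processing_time_ms"])
        return self.average() > self.alert_ms

    def average(self) -> float:
        return sum(self.samples) / len(self.samples)

monitor = LatencyMonitor(window=3)
for ms in (19, 25, 40):
    monitor.record({"meta": {"processing_time_ms": ms}})
print(monitor.average())  # 28.0
```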

See Also