Endpoint

`POST /v1/moderate/text`
Analyzes a single text through all three models (sexism classifier, toxicity transformer, rule engine) and returns a comprehensive moderation response with ensemble scoring.
Request
| Header | Value | Required |
|--------|-------|----------|
| Content-Type | application/json | Yes |
Body Parameters

`text` (string, required): The text to moderate. Must be a non-empty string.

- Min length: 1 character
- Max length: 10,000 characters (recommended)
Request Example
cURL
```bash
curl -X POST "http://localhost:8000/v1/moderate/text" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Your text to moderate here"
  }'
```
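The same request in Python with the `requests` library; a minimal sketch assuming a local deployment at the URL shown above (the timeout value is an assumption):

```python
import requests

# Same endpoint as the cURL example above
url = "http://localhost:8000/v1/moderate/text"

payload = {"text": "Your text to moderate here"}

response = requests.post(url, json=payload, timeout=10)  # timeout is an assumption
response.raise_for_status()
print(response.json())
```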
Response
Returns a ModerationResponse object with:
- Individual model outputs (`label`)
- Ensemble decision (`ensemble`)
- Metadata (`meta`)
Response Schema
```json
{
  "text": "string",
  "label": {
    "sexism": {
      "score": "number",
      "severity": "string",
      "model_version": "string",
      "threshold_met": "boolean"
    },
    "toxicity": {
      "overall": "number",
      "insult": "number",
      "threat": "number",
      "identity_attack": "number",
      "profanity": "number",
      "model_version": "string"
    },
    "rules": {
      "slur_detected": "boolean",
      "threat_detected": "boolean",
      "self_harm_flag": "boolean",
      "profanity_flag": "boolean",
      "caps_abuse": "boolean",
      "character_repetition": "boolean",
      "model_version": "string"
    }
  },
  "ensemble": {
    "summary": "string",
    "primary_issue": "string",
    "score": "number",
    "severity": "string"
  },
  "meta": {
    "processing_time_ms": "integer",
    "models_used": ["string"]
  }
}
```
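For most integrations, the `ensemble` block is the part to act on. The sketch below shows one way to route content based on it; the field names come from the schema above, but the routing policy itself is an illustrative assumption, not part of the API contract:

```python
def route(result: dict) -> str:
    """Map a ModerationResponse dict to a simple action (policy is illustrative)."""
    ensemble = result["ensemble"]
    if ensemble["summary"] == "likely_safe":
        return "publish"
    if ensemble["severity"] == "high":
        return "block"   # e.g. "highly_harmful" content
    return "review"      # borderline content goes to human review
```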
Success Response (200)
Safe Content
```json
{
  "text": "I love this product!",
  "label": {
    "sexism": {
      "score": 0.043,
      "severity": "low",
      "model_version": "sexism_lasso_v1",
      "threshold_met": false
    },
    "toxicity": {
      "overall": 0.021,
      "insult": 0.012,
      "threat": 0.008,
      "identity_attack": 0.010,
      "profanity": 0.015,
      "model_version": "toxic_roberta_v1"
    },
    "rules": {
      "slur_detected": false,
      "threat_detected": false,
      "self_harm_flag": false,
      "profanity_flag": false,
      "caps_abuse": false,
      "character_repetition": false,
      "model_version": "rules_v1"
    }
  },
  "ensemble": {
    "summary": "likely_safe",
    "primary_issue": "none",
    "score": 0.024,
    "severity": "low"
  },
  "meta": {
    "processing_time_ms": 19,
    "models_used": [
      "sexism_lasso_v1",
      "toxic_roberta_v1",
      "rules_v1"
    ]
  }
}
```
Harmful Content

```json
{
  "text": "Women belong in the kitchen",
  "label": {
    "sexism": {
      "score": 0.847,
      "severity": "high",
      "model_version": "sexism_lasso_v1",
      "threshold_met": true
    },
    "toxicity": {
      "overall": 0.621,
      "insult": 0.543,
      "threat": 0.087,
      "identity_attack": 0.621,
      "profanity": 0.124,
      "model_version": "toxic_roberta_v1"
    },
    "rules": {
      "slur_detected": false,
      "threat_detected": false,
      "self_harm_flag": false,
      "profanity_flag": false,
      "caps_abuse": false,
      "character_repetition": false,
      "model_version": "rules_v1"
    }
  },
  "ensemble": {
    "summary": "highly_harmful",
    "primary_issue": "sexism",
    "score": 0.689,
    "severity": "high"
  },
  "meta": {
    "processing_time_ms": 27,
    "models_used": [
      "sexism_lasso_v1",
      "toxic_roberta_v1",
      "rules_v1"
    ]
  }
}
```
Error Responses
422 Unprocessable Entity

Cause: Invalid request body or empty text

```json
{
  "detail": [
    {
      "loc": ["body", "text"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
```

422 Unprocessable Entity

Cause: Request body doesn't match the expected schema

```json
{
  "detail": [
    {
      "loc": ["body", "text"],
      "msg": "str type expected",
      "type": "type_error.str"
    }
  ]
}
```

500 Internal Server Error

Cause: Server error during processing

```json
{
  "detail": "Error processing moderation: [error message]"
}
```
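A sketch of handling these responses from a Python client; the status codes and body shapes come from the examples above, while the raise-on-error policy and `moderate()` helper name are assumptions:

```python
import requests

API_URL = "http://localhost:8000/v1/moderate/text"  # local deployment assumed

def moderate(text: str) -> dict:
    resp = requests.post(API_URL, json={"text": text}, timeout=10)
    if resp.status_code == 422:
        # Validation failure: the detail list pinpoints the offending field
        raise ValueError(f"Invalid request: {resp.json()['detail']}")
    if resp.status_code == 500:
        # Server-side failure: decide whether to retry or fail open/closed
        raise RuntimeError(resp.json().get("detail", "moderation server error"))
    resp.raise_for_status()  # any other unexpected status
    return resp.json()
```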
Performance

| Metric | Value |
|--------|-------|
| Average Response Time | 20-40ms |
| With GPU | 15-25ms |
| Without GPU | 40-60ms |
| Rate Limit | Configurable (via Redis) |
For optimal performance with multiple texts, use the batch endpoint instead of making multiple individual requests.
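If the batch endpoint is not an option (its path is not documented here), a thread pool can at least parallelize individual requests. A sketch reusing the `moderate()` helper above; the worker count is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

texts = ["first comment", "second comment", "third comment"]

# Fallback only: prefer the batch endpoint when moderating many texts.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(moderate, texts))
```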
Use Cases
- Real-time moderation: Moderate user comments before posting
- Content filtering: Screen user-generated content
- Chat moderation: Filter messages in real time
- Form validation: Check text inputs on submission
Best Practices
- **Preprocessing**: The API handles preprocessing automatically. Send raw user input without manual cleaning.
- **Use Ensemble**: Rely on `ensemble.summary` for quick decisions. Individual model outputs remain available for detailed analysis.
- **Handle Errors**: Implement proper error handling; the API returns detailed error messages (see the example under Error Responses).
- **Monitor Latency**: Track `processing_time_ms` to monitor performance and detect issues (see the sketch below).
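As one way to implement the latency advice above, a minimal sketch that logs `processing_time_ms` and warns on slow responses; the 100 ms alert threshold is an illustrative assumption (well above the documented 20-40ms average):

```python
import logging

logger = logging.getLogger("moderation")

def record_latency(result: dict, alert_ms: int = 100) -> None:
    """Log the server-reported processing time; alert_ms is an assumed threshold."""
    elapsed = result["meta"]["processing_time_ms"]
    logger.info("moderation processed in %d ms", elapsed)
    if elapsed > alert_ms:
        logger.warning("slow moderation response: %d ms", elapsed)
```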
See Also