
Prerequisites

Before you begin, ensure you have:
  • Python 3.9 or higher
  • Git (for cloning the repository)
  • (Optional) CUDA-capable GPU for faster toxicity model inference
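You can quickly confirm that the interpreter and Git on your PATH meet these requirements (mirroring the python -c pattern used later in this guide):
python -c "import sys; assert sys.version_info >= (3, 9), sys.version"
git --version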

Installation

Step 1: Clone the Repository

git clone https://github.com/Ksmith18skc/GuardianAPI.git
cd GuardianAPI/backend

Step 2: Install Dependencies

pip install -r requirements.txt
If you have a CUDA-capable GPU, install PyTorch with CUDA support for faster inference:
pip install torch==2.1.0+cu121 --index-url https://download.pytorch.org/whl/cu121
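After installing, you can confirm that the CUDA build of PyTorch was picked up; this prints True when a compatible GPU and driver are available:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"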

Step 3: Train the Sexism Model

The sexism classifier needs to be trained before running the API:
cd ..
python scripts/train_and_save_sexism_model.py
This will create:
  • backend/app/models/sexism/classifier.pkl
  • backend/app/models/sexism/vectorizer.pkl
Training takes 1-2 minutes and uses the dataset in data/train_sexism.csv. The model achieves ~82% F1 score on the test set.
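If you want to sanity-check the saved artifacts before starting the API, here is a minimal sketch. It assumes the two files are standard scikit-learn objects serialized with pickle; adjust the loading code if the project uses joblib instead.
# Sanity-check the trained sexism model artifacts (run from the repository root).
# Assumption: classifier.pkl and vectorizer.pkl are scikit-learn objects saved with pickle.
import pickle

with open("backend/app/models/sexism/vectorizer.pkl", "rb") as f:
    vectorizer = pickle.load(f)
with open("backend/app/models/sexism/classifier.pkl", "rb") as f:
    classifier = pickle.load(f)

# Vectorize one example sentence and print the predicted label.
features = vectorizer.transform(["an example sentence to score"])
print(classifier.predict(features))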

Step 4: Configure Environment (Optional)

Create a .env file in the backend/ directory for optional features:
# Rate Limiting (optional)
REDIS_URL=rediss://default:<token>@<host>:<port>

# Logging
LOG_LEVEL=INFO
Guardian API supports rate limiting via Redis (Upstash compatible). If you don’t configure Redis, the API will still work, but rate limiting will be disabled. To set up Upstash (a connectivity check is sketched after this list):
  1. Create a free account at Upstash
  2. Create a new Redis database
  3. Copy the Redis URL from your dashboard
  4. Add it to your .env file
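To confirm the connection before starting the API, here is a minimal sketch using the redis-py client. Install it with pip install redis if it isn’t already in requirements.txt, and export REDIS_URL in your shell (or load the .env file with python-dotenv) before running it.
# Quick connectivity check for the rate-limiting backend (Upstash-compatible Redis).
import os
import redis

r = redis.from_url(os.environ["REDIS_URL"])  # rediss:// URLs use TLS automatically
print(r.ping())  # True if the instance is reachable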

Step 5: Start the API

cd backend
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
The API will be available at http://localhost:8000, with interactive docs at http://localhost:8000/docs.
Success! Your Guardian API is now running.
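If you’d rather start the server from a Python entry point than the CLI, the same configuration can be expressed with uvicorn.run. Run it from the backend/ directory so the app package is importable:
# Equivalent to: uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app.main:app", host="0.0.0.0", port=8000, reload=True)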

Test Your API

Using cURL

Test the moderation endpoint:
curl -X POST "http://localhost:8000/v1/moderate/text" \
  -H "Content-Type: application/json" \
  -d '{"text": "This is a sample text to moderate"}'
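The same request from Python, using the requests library (pip install requests if it isn’t already available):
import requests

# Send one piece of text to the moderation endpoint and print the JSON verdict.
resp = requests.post(
    "http://localhost:8000/v1/moderate/text",
    json={"text": "This is a sample text to moderate"},
)
resp.raise_for_status()
print(resp.json())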

Using the Interactive Docs

  1. Navigate to http://localhost:8000/docs
  2. Click on POST /v1/moderate/text
  3. Click “Try it out”
  4. Enter your text in the request body
  5. Click “Execute”

Health Check

Verify all models are loaded:
curl http://localhost:8000/v1/health
Expected response:
{
  "status": "healthy",
  "models": {
    "sexism_classifier": "loaded",
    "toxicity_model": "loaded",
    "rule_engine": "loaded"
  }
}
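In scripts or CI it can be handy to wait until the API reports healthy before sending traffic. A small polling sketch:
# Poll the health endpoint until all models report as loaded (give up after ~30 seconds).
import time
import requests

for _ in range(30):
    try:
        data = requests.get("http://localhost:8000/v1/health", timeout=2).json()
        if data.get("status") == "healthy":
            print("Models:", data["models"])
            break
    except requests.ConnectionError:
        pass  # server not up yet
    time.sleep(1)
else:
    raise SystemExit("API did not become healthy in time")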

Example Response

Here’s what a typical moderation response looks like:
{
  "text": "This is a sample text to moderate",
  "label": {
    "sexism": {
      "score": 0.12,
      "severity": "low",
      "model_version": "sexism_lasso_v1",
      "threshold_met": false
    },
    "toxicity": {
      "overall": 0.08,
      "insult": 0.05,
      "threat": 0.02,
      "identity_attack": 0.03,
      "profanity": 0.04,
      "model_version": "toxic_roberta_v1"
    },
    "rules": {
      "slur_detected": false,
      "threat_detected": false,
      "self_harm_flag": false,
      "profanity_flag": false,
      "caps_abuse": false,
      "character_repetition": false,
      "model_version": "rules_v1"
    }
  },
  "ensemble": {
    "summary": "likely_safe",
    "primary_issue": null,
    "score": 0.10,
    "severity": "low"
  },
  "meta": {
    "processing_time_ms": 24,
    "models_used": [
      "sexism_lasso_v1",
      "toxic_roberta_v1",
      "rules_v1"
    ]
  }
}
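A typical consumer only needs the ensemble block to decide what to do with the content. The routing function below is illustrative; the thresholds and action names are not defined by the API:
# Route content based on the ensemble verdict (illustrative thresholds).
def route_content(result: dict) -> str:
    ensemble = result["ensemble"]
    if ensemble["summary"] == "likely_safe" and ensemble["score"] < 0.3:
        return "publish"
    if ensemble["severity"] in ("low", "medium"):
        return "flag_for_review"
    return "block"

print(route_content(resp.json()))  # reusing `resp` from the requests example above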

Next Steps

  • Architecture: Learn how the multi-model system works
  • API Reference: Explore all available endpoints
  • Python SDK: Use the Python SDK in your application
  • Configuration: Configure rate limiting and other features

Troubleshooting

If the sexism model files are missing, make sure you’ve run the training script:
python scripts/train_and_save_sexism_model.py
Then check that the model files exist in backend/app/models/sexism/.
If you encounter CUDA errors, the toxicity model will automatically fall back to CPU. To use GPU:
  1. Ensure you have a CUDA-capable GPU
  2. Install CUDA toolkit (12.1 recommended)
  3. Install PyTorch with CUDA support:
    pip install torch==2.1.0+cu121 --index-url https://download.pytorch.org/whl/cu121
  4. Verify CUDA is available:
    python -c "import torch; print(torch.cuda.is_available())"
If port 8000 is already in use, specify a different port:
uvicorn app.main:app --reload --host 0.0.0.0 --port 8080