Prerequisites
Before you begin, ensure you have:
- Python 3.9 or higher
- Git (for cloning the repository)
- (Optional) CUDA-capable GPU for faster toxicity model inference
Installation
Step 1: Clone the Repository
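A typical clone step looks like the following; the repository URL below is a placeholder, so substitute the project's actual URL:

```shell
# Clone the repository (URL is a placeholder -- use the real one)
git clone https://github.com/your-org/guardian-api.git
cd guardian-api
```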
Step 2: Install Dependencies
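A common install flow, assuming dependencies are listed in a requirements.txt inside backend/ (the file location is an assumption):

```shell
# Create and activate a virtual environment, then install dependencies
cd backend
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```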
Step 3: Train the Sexism Model
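A hedged sketch of the training invocation; the script name and path below are assumptions, so check the repository for the actual entry point:

```shell
# Hypothetical training script path -- the real name may differ
cd backend
python scripts/train_sexism.py
```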
The sexism classifier needs to be trained before running the API. Training produces two files:
- backend/app/models/sexism/classifier.pkl
- backend/app/models/sexism/vectorizer.pkl
Training takes 1-2 minutes and uses the dataset in data/train_sexism.csv. The model achieves ~82% F1 score on the test set.
Step 4: Configure Environment (Optional)
Create a .env file in the backend/ directory for optional features:
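A sketch of what the optional file might contain; the REDIS_URL variable name is an assumption, so match whatever name the application code actually reads:

```
# backend/.env -- all entries are optional
# Variable name is an assumption; the value comes from your Redis provider
REDIS_URL=rediss://default:<password>@<your-instance>.upstash.io:6379
```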
Setting up rate limiting
Guardian API supports rate limiting via Redis (Upstash compatible). If you don’t configure Redis, the API will still work, but rate limiting will be disabled.
To set up Upstash:
- Create a free account at Upstash
- Create a new Redis database
- Copy the Redis URL from your dashboard
- Add it to your .env file
Step 5: Start the API
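A common way to launch a FastAPI service with uvicorn; the module path app.main:app is an assumption, so adjust it to match the project layout:

```shell
# Run from the backend/ directory -- module path is an assumption
uvicorn app.main:app --reload --port 8000
```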
- API: http://localhost:8000
- Interactive Docs: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
Success! Your Guardian API is now running.
Test Your API
Using cURL
Test the moderation endpoint with a POST request.
Using the Interactive Docs
- Navigate to http://localhost:8000/docs
- Click on POST /v1/moderate/text
- Click “Try it out”
- Enter your text in the request body
- Click “Execute”
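For the cURL route above, a request might look like this; the endpoint path comes from the docs list, but the name of the text field in the JSON payload is an assumption:

```shell
# POST a sample message to the moderation endpoint
# (the "text" field name is an assumption -- check /docs for the schema)
curl -X POST http://localhost:8000/v1/moderate/text \
  -H "Content-Type: application/json" \
  -d '{"text": "example message to moderate"}'
```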
Health Check
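A hedged health-check call; the /health path is an assumption, so confirm the exact route in the interactive docs:

```shell
# Query the health endpoint (path is an assumption)
curl http://localhost:8000/health
```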
This lets you verify that all models are loaded.
Example Response
Here’s what a typical moderation response looks like:
Next Steps
Architecture
Learn how the multi-model system works
API Reference
Explore all available endpoints
Python SDK
Use the Python SDK in your application
Configuration
Configure rate limiting and other features
Troubleshooting
Models not loading
Ensure you’ve run the training script, then check that the model files exist in backend/app/models/sexism/.
CUDA errors
If you encounter CUDA errors, the toxicity model will automatically fall back to CPU. To use GPU:
- Ensure you have a CUDA-capable GPU
- Install CUDA toolkit (12.1 recommended)
- Install PyTorch with CUDA support:
- Verify CUDA is available:
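Steps 3-4 above might look like the following; the cu121 wheel index matches the standard PyTorch pattern for CUDA 12.1, but verify it against the official PyTorch install selector:

```shell
# Install PyTorch built against CUDA 12.1
pip install torch --index-url https://download.pytorch.org/whl/cu121
# Verify that CUDA is visible to PyTorch (prints True when a GPU is usable)
python -c "import torch; print(torch.cuda.is_available())"
```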
Port already in use
If port 8000 is already in use, specify a different port:
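For example (the module path app.main:app is an assumption, as in the start step above):

```shell
# Bind to an alternate port
uvicorn app.main:app --port 8001
```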