
Overview

The Guardian API can be configured via environment variables in a .env file located in the backend/ directory.

Configuration File

Create backend/.env:
# CORS Configuration (Production)
CORS_ORIGINS=https://guardian.korymsmith.dev,http://localhost:5173,http://127.0.0.1:5173

# Rate Limiting (Optional)
REDIS_URL=rediss://default:<token>@<host>:<port>

# Logging
LOG_LEVEL=INFO

# Model Configuration
SEXISM_THRESHOLD=0.400

# Severity Thresholds
SEVERITY_HIGH=0.6
SEVERITY_MODERATE=0.3
SEVERITY_LOW=0.1

Environment Variables

CORS Configuration

CORS_ORIGINS
string
Comma-separated list of allowed frontend origins for CORS.
Format: Comma-separated URLs (no spaces)
Required: Recommended for production
Example: https://guardian.korymsmith.dev,http://localhost:5173,http://127.0.0.1:5173
Never use "*" for CORS in production. Always specify exact origins.
When deploying to production, ensure your frontend URL is included in this list.
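
If the backend is built on FastAPI, as the pydantic-settings usage under Loading Configuration suggests, the comma-separated value can be split and handed to the CORS middleware. A minimal sketch, assuming a settings object that exposes CORS_ORIGINS as a string; the exact wiring in Guardian may differ:
# Illustrative sketch: register the configured origins with FastAPI's CORSMiddleware.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.config import settings  # assumed import path; see Loading Configuration below

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=[origin.strip() for origin in settings.CORS_ORIGINS.split(",")],
    allow_methods=["*"],
    allow_headers=["*"],
)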

Rate Limiting

REDIS_URL
string
default:"None"
Redis connection URL for rate limiting (Upstash compatible).
Format: rediss://default:<token>@<host>:<port>
Required: No (rate limiting is disabled if not set)
Example: rediss://default:abc123@redis-12345.upstash.io:6379
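
Because rate limiting is optional, the backend can skip Redis setup entirely when the variable is absent. A minimal sketch of that conditional wiring using redis-py (the rate-limiter library Guardian actually uses is not documented here):
# Illustrative sketch: only create a Redis client when REDIS_URL is configured.
import redis

from app.config import settings  # assumed import path; see Loading Configuration below

redis_client = None
if settings.REDIS_URL:
    # rediss:// (TLS) URLs are handled by redis-py's from_url
    redis_client = redis.Redis.from_url(settings.REDIS_URL)
# If redis_client is None, rate limiting stays disabled.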

Logging

LOG_LEVEL
string
default:"INFO"
Logging level for the application.
Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
Recommended: INFO for production, DEBUG for development
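
The level string can be passed straight to Python's logging module. A minimal sketch, assuming the value is read from the Settings object shown under Loading Configuration:
# Illustrative sketch: configure the root logger from LOG_LEVEL.
import logging

from app.config import settings  # assumed import path

logging.basicConfig(level=getattr(logging, settings.LOG_LEVEL.upper(), logging.INFO))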

Model Configuration

SEXISM_THRESHOLD
float
default:"0.400"
Threshold for the sexism classifier.
Range: 0.0 to 1.0
Default: 0.400 (optimized for F1 score)
Higher values: fewer false positives, more false negatives
Lower values: more false positives, fewer false negatives
TOXICITY_MODEL_NAME
string
default:"unitary/unbiased-toxic-roberta"
HuggingFace model name for toxicity detection.
Options: Any HuggingFace toxicity model
Default: unitary/unbiased-toxic-roberta
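
To make these two settings concrete, the sketch below loads a toxicity model by name and shows how a classifier probability is compared against SEXISM_THRESHOLD. This is illustrative only and does not reproduce Guardian's actual inference code:
# Illustrative sketch only; Guardian's real inference pipeline may differ.
from transformers import pipeline

# TOXICITY_MODEL_NAME: any HuggingFace text-classification model can be named here.
toxicity_classifier = pipeline(
    "text-classification",
    model="unitary/unbiased-toxic-roberta",
)

# SEXISM_THRESHOLD: a probability at or above the threshold is treated as a positive
# prediction (whether the comparison is strict is not documented).
def exceeds_threshold(score: float, threshold: float = 0.400) -> bool:
    return score >= threshold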

Severity Thresholds

SEVERITY_HIGH
float
default:"0.6"
Threshold for "high" severity classification.
Range: 0.0 to 1.0
Scores >= this value are considered "high" severity.
SEVERITY_MODERATE
float
default:"0.3"
Threshold for "moderate" severity classification.
Range: 0.0 to 1.0
Scores >= this value (but < SEVERITY_HIGH) are considered "moderate" severity.
SEVERITY_LOW
float
default:"0.1"
Threshold for "low" severity classification.
Range: 0.0 to 1.0
Scores >= this value (but < SEVERITY_MODERATE) are considered "low" severity.
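
Taken together, the three thresholds partition a score into severity bands. A sketch of that mapping, assuming the band below SEVERITY_LOW is reported as "none" (the actual label is not documented here):
# Illustrative sketch: map a score onto a severity band using the three thresholds.
def classify_severity(
    score: float,
    high: float = 0.6,
    moderate: float = 0.3,
    low: float = 0.1,
) -> str:
    if score >= high:
        return "high"
    if score >= moderate:
        return "moderate"
    if score >= low:
        return "low"
    return "none"  # assumed label for scores below SEVERITY_LOW

classify_severity(0.72)  # "high"
classify_severity(0.45)  # "moderate"
classify_severity(0.05)  # "none"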

Example Configurations

  • Development (shown below)
  • Production (sketched after the Development example)
  • Strict Moderation (sketched after the Development example)
# Development configuration
LOG_LEVEL=DEBUG
SEXISM_THRESHOLD=0.400
# No Redis - rate limiting disabled
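
Only the Development tab is reproduced above. The Production and Strict Moderation profiles below are illustrative sketches assembled from the variables documented on this page; the values shown are assumptions, not the exact settings from the original examples.
# Production configuration (illustrative sketch)
CORS_ORIGINS=https://guardian.korymsmith.dev
REDIS_URL=rediss://default:<token>@<host>:<port>
LOG_LEVEL=INFO
SEXISM_THRESHOLD=0.400

# Strict moderation configuration (illustrative sketch)
LOG_LEVEL=INFO
# Lower thresholds flag more content (more false positives, fewer misses)
SEXISM_THRESHOLD=0.300
SEVERITY_HIGH=0.5
SEVERITY_MODERATE=0.2
SEVERITY_LOW=0.05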

Loading Configuration

The API automatically loads the .env file on startup:
# backend/app/config.py
from typing import Optional

from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    REDIS_URL: Optional[str] = None
    LOG_LEVEL: str = "INFO"
    SEXISM_THRESHOLD: float = 0.400
    # ... other settings

    class Config:
        env_file = ".env"

settings = Settings()
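
Other modules in the backend can then read configuration from this shared settings object. An illustrative usage, assuming the import path app.config shown above:
# Illustrative usage of the shared Settings instance
from app.config import settings

print(settings.LOG_LEVEL)         # "INFO" unless overridden in backend/.env
print(settings.SEXISM_THRESHOLD)  # 0.400 by default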

See Also