
Quick Start Guide

Get ModelRed up and running in under 5 minutes. Register your first AI model and run a comprehensive security assessment to identify vulnerabilities before they reach production.

Step 1: Installation

Install the ModelRed SDK

Terminal
# Install with pip
pip install modelred

# Or with uv (faster)
uv add modelred

Requires Python 3.8+ and an internet connection.
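
Before moving on, you can optionally confirm your environment is ready. This is a quick sanity check, assuming the package is importable as modelred (the same import path used in Step 3):

Terminal
# Confirm the interpreter meets the Python 3.8+ requirement
python --version

# Verify the SDK imports cleanly
python -c "import modelred; print('ModelRed SDK is installed')"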

Step 2: Get Your API Keys

Required API Keys

ModelRed API Key

Required for security testing. Get your free API key to start testing up to 3 models with 10 assessments per month.

Sign Up Free

🤖 AI Provider Key

You'll need an API key from your AI provider to test models. We recommend starting with OpenAI.

Get OpenAI Key

Step 3: Your First Security Assessment

Complete Working Example

main.py
import asyncio
from modelred import ModelRed

async def main():
    # Initialize ModelRed client
    async with ModelRed(api_key="mr_your_api_key_here") as client:
        print("🔐 Authenticating with ModelRed...")

        # Validate your API key
        account = await client.validate_api_key()
        print(f"✅ Connected! Plan: {account.get('plan', 'Free')}")

        # Register your AI model
        print("📝 Registering OpenAI model...")
        await client.register_openai_model(
            model_id="my-gpt-model",
            api_key="sk-your-openai-key",
            model_name="gpt-3.5-turbo",
            metadata={"environment": "testing"}
        )
        print("✅ Model registered successfully!")

        # Run a comprehensive security assessment
        print("🔍 Starting security assessment...")
        result = await client.run_assessment(
            model_id="my-gpt-model",
            test_suites=["basic_security"],
            wait_for_completion=True,
            priority="normal"
        )

        # Display results
        print("\n🎉 Assessment Complete!")
        print(f"📊 Overall Security Score: {result.overall_score}/10")
        print(f"⚠️ Risk Level: {result.risk_level.value}")
        print(f"✅ Passed Tests: {result.passed_tests}")
        print(f"❌ Failed Tests: {result.failed_tests}")
        print(f"📋 Total Tests: {result.total_tests}")

        if result.recommendations:
            print("\n💡 Security Recommendations:")
            for i, rec in enumerate(result.recommendations[:3], 1):
                print(f"   {i}. {rec}")

        if result.report_url:
            print(f"\n🔗 Full Report: {result.report_url}")

if __name__ == "__main__":
    asyncio.run(main())

What This Does

  • Connects to the ModelRed service
  • Registers your OpenAI model
  • Runs comprehensive security tests
  • Returns detailed vulnerability analysis

Expected Runtime

  • Model registration: ~5 seconds
  • Security assessment: ~2-3 minutes
  • Results processing: ~10 seconds
  • Total: under 5 minutes

Step 4: Run Your Assessment

Execute Your First Test

Run the script
# Replace API keys in the script, then run:
python main.py

Environment Variables (Recommended)

For security, store your API keys as environment variables instead of hardcoding them:

export MODELRED_API_KEY="mr_your_key_here"
export OPENAI_API_KEY="sk-your_openai_key"
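
With the variables exported, the script can read them with os.getenv instead of hardcoding the keys. This is a minimal sketch of the change to main.py from Step 3; only the key handling differs, and the variable names match the exports above.

main.py
import os
import asyncio
from modelred import ModelRed

async def main():
    # Read the keys from the environment set above instead of hardcoding them
    modelred_key = os.getenv("MODELRED_API_KEY")
    openai_key = os.getenv("OPENAI_API_KEY")

    async with ModelRed(api_key=modelred_key) as client:
        await client.register_openai_model(
            model_id="my-gpt-model",
            api_key=openai_key,
            model_name="gpt-3.5-turbo",
        )
        # ...run the assessment exactly as in Step 3...

if __name__ == "__main__":
    asyncio.run(main())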

Expected Output

🔐 Authenticating with ModelRed...
✅ Connected! Plan: Free
📝 Registering OpenAI model...
✅ Model registered successfully!
🔍 Starting security assessment...
📊 Assessment progress: 25% - RUNNING
📊 Assessment progress: 50% - RUNNING
📊 Assessment progress: 75% - RUNNING
📊 Assessment progress: 100% - COMPLETED

🎉 Assessment Complete!
📊 Overall Security Score: 8.2/10
⚠️ Risk Level: LOW
✅ Passed Tests: 12
❌ Failed Tests: 2
📋 Total Tests: 14

💡 Security Recommendations:

1. Implement input sanitization for user prompts
2. Add rate limiting to prevent prompt injection attempts
3. Consider implementing output filtering for sensitive content

🔗 Full Report: https://modelred.ai/reports/abc123def456

Understanding Your Results

Security Assessment Breakdown

  • Security Score (8.2): Overall security rating out of 10. Higher scores indicate better security posture.
  • Risk Level (LOW): Categorized risk of LOW, MEDIUM, HIGH, or CRITICAL based on the vulnerabilities found.
  • Tests Run (14): Total number of security tests executed against your model.
  • Recommendations (3): Actionable security recommendations to improve your model's defenses.
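
If you want to act on these numbers automatically, for example in a CI pipeline, you can gate on the same fields printed in Step 3. The sketch below assumes the result object returned by run_assessment (overall_score, risk_level, and failed_tests are the attributes used in the example); the 7.0 threshold is an illustrative choice, not a ModelRed default.

check_gate.py
import sys

def security_gate(result, min_score: float = 7.0) -> None:
    # `result` is the object returned by client.run_assessment() in Step 3.
    # The 7.0 threshold is an arbitrary example, not a ModelRed default.
    print(f"Score: {result.overall_score}/10 | Risk: {result.risk_level.value}")
    if result.overall_score < min_score or result.failed_tests > 0:
        print("Security gate failed: review the recommendations before deploying.")
        sys.exit(1)
    print("Security gate passed.")

Call security_gate(result) right after run_assessment returns in main.py to fail the build when the assessment falls below your chosen threshold.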

What's Tested

Basic Security Test Suite

  • 🎯 Prompt Injection: Tests for attempts to hijack model behavior through crafted prompts.
  • 🛡️ Content Safety: Evaluates the model's resistance to generating harmful or inappropriate content.
  • 🔐 Input Validation: Checks how well the model handles malformed or suspicious inputs.
  • 📊 Data Exposure: Tests for potential leakage of training data or sensitive information.
  • 🎭 Role Playing: Evaluates whether the model can be tricked into assuming harmful personas.
  • System Prompts: Tests resistance to attempts to reveal or modify system instructions.

Next Steps

🎉 Congratulations!

You've successfully completed your first AI security assessment with ModelRed. Your model has been tested against common security vulnerabilities, and you now have actionable insights to improve your AI application's security posture.

Continue exploring the documentation to learn about advanced features, additional AI providers, and comprehensive security testing strategies.