Documentation

ModelRed SDK

Comprehensive AI security testing platform for LLM applications with advanced threat detection capabilities

Enterprise-Grade AI Security Testing

ModelRed is the most comprehensive AI security testing platform that helps developers identify vulnerabilities in their AI models before they reach production. Our Python SDK provides seamless integration with your existing workflows, supporting 7+ AI providers and offering 15+ advanced threat detection capabilities.

Why ModelRed?

Lightning Fast Setup

Get started in under 5 minutes. Install, register your model, and run your first security assessment with just a few lines of code.

Military-Grade Testing

Advanced threat probes test for prompt injection, jailbreaking, content safety, encoding attacks, and behavioral manipulation.

Universal Provider Support

Works with OpenAI, Anthropic, Azure, AWS Bedrock, AWS SageMaker, Hugging Face, and custom REST APIs out of the box.

Key Features

Advanced Threat Detection System

Our specialized Threat Probes test your AI models against real-world security vulnerabilities using cutting-edge attack techniques.

🎯

Prompt Injection

Advanced techniques to bypass model instructions and hijack conversations

🔓

Jailbreaking

Sophisticated attempts to override safety guardrails and ethical boundaries

🛡️

Content Safety

Comprehensive testing for harmful, toxic, or inappropriate content generation

🔐

Encoding Attacks

Base64, hex, ROT13, and other encoding-based exploitation techniques

🦠

Malware Generation

Detection of vulnerabilities in code generation and script creation

🎭

Behavioral Manipulation

Testing personality override and instruction manipulation techniques

Quick Start Example

5 MIN SETUP

Get Started in Under 5 Minutes

Install, register your model, and run your first security assessment

Python Code Example
import asyncio
from modelred import ModelRed

async def main():
async with ModelRed(api_key="mr_your_api_key_here") as client: # Register an AI model
await client.register_openai_model(
model_id="my-gpt-model",
api_key="sk-your-openai-key",
model_name="gpt-3.5-turbo"
)

        # Run a security assessment
        result = await client.run_assessment(
            model_id="my-gpt-model",
            test_suites=["basic_security"],
            wait_for_completion=True
        )

        print(f"Security Score: {result.overall_score}/10")
        print(f"Risk Level: {result.risk_level.value}")

asyncio.run(main())
Your first security assessment will complete in ~2-3 minutes

Supported Providers

🤖

OpenAI

GPT-3.5, GPT-4, GPT-4 Turbo

🔮

Anthropic

Claude 3, Claude 3.5

☁️

Azure OpenAI

Enterprise deployments

🚀

AWS

Bedrock, SageMaker

Pricing Tiers

Flexible Plans for Every Need

From individual developers to enterprise teams

Free

Perfect for getting started

  • • 2 AI models
  • • 10 assessments/month
  • • Basic security testing
POPULAR

Starter

$49/mo

For growing teams

  • • 10 AI models
  • • 100 assessments/month
  • • Advanced testing

Pro

$149/mo

For serious developers

  • • 50 AI models
  • • 500 assessments/month
  • • Full security suite

Enterprise

Custom

For large organizations

  • • Unlimited testing
  • • Custom pricing
  • • Dedicated support

Next Steps