Documentation

Overview

Learn about the ModelRed SDK and how it helps secure your AI models through comprehensive security testing

Welcome to ModelRed

ModelRed is a comprehensive AI security testing platform designed to help developers identify and fix vulnerabilities in their AI models before deployment. Our Python SDK makes it simple to integrate advanced security testing into your existing workflows.

What is ModelRed?

Enterprise-Grade AI Security Platform

ModelRed provides comprehensive security testing for AI models using specialized Threat Probes that simulate real-world attack scenarios. Our platform tests for vulnerabilities like prompt injection, jailbreaking, content safety violations, and behavioral manipulations.

For Developers

  • Integrate security testing into CI/CD pipelines
  • Catch vulnerabilities before production
  • Support for 7+ AI providers
  • Simple Python SDK integration

For Organizations

  • Comprehensive security compliance
  • Detailed vulnerability reports
  • Enterprise-grade scalability
  • Real-time monitoring and alerts

Why Choose ModelRed?

🎯 Advanced Threat Detection

15+ specialized threat probes test for prompt injection, jailbreaking, content safety, encoding attacks, malware generation, and behavioral manipulation.

🔗 Universal Integration

Works seamlessly with OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, AWS SageMaker, Hugging Face, and custom REST APIs.
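
For instance, the same registration call can point at different providers by swapping the provider name and its configuration. The snippet below is a minimal sketch assuming a hypothetical `ModelRed` client with a `register_model` method; the actual class, method, and field names in the SDK may differ.

```python
# Sketch only: client, method, and field names are illustrative, not the documented SDK API.
from modelred import ModelRed  # assumed import

client = ModelRed(api_key="mr_...")  # your ModelRed API key

# An OpenAI-hosted model
client.register_model(
    model_id="support-bot",
    provider="openai",
    config={"model": "gpt-4o-mini", "api_key": "sk-..."},
)

# A model behind a custom REST endpoint
client.register_model(
    model_id="internal-llm",
    provider="rest",
    config={
        "endpoint": "https://llm.example.internal/v1/chat",
        "headers": {"Authorization": "Bearer <token>"},
    },
)
```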

⚡ Lightning Fast

Get started in under 5 minutes. Most security assessments complete in 2-3 minutes with real-time progress tracking.

📊 Detailed Reporting

Comprehensive security scores, risk levels, detailed vulnerability analysis, and actionable recommendations for fixing issues.

🔄 CI/CD Ready

Built for automation with async/await support, progress callbacks, and comprehensive error handling for production environments.
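
In a CI job, that typically means running an assessment asynchronously, reporting progress as probes complete, and failing the build if the score drops below a threshold. The sketch below assumes a hypothetical `AsyncModelRed` client, a `run_assessment` coroutine with an `on_progress` callback, and a `ModelRedError` exception type; treat these names as placeholders rather than the SDK's actual API.

```python
# Sketch only: names and signatures are illustrative, not the documented SDK API.
import asyncio
import sys

from modelred import AsyncModelRed, ModelRedError  # assumed imports


def on_progress(update):
    # Assumed callback shape: called periodically as probes finish.
    print(f"{update.completed}/{update.total} probes complete")


async def main() -> int:
    client = AsyncModelRed(api_key="mr_...")
    try:
        result = await client.run_assessment(
            model_id="support-bot",
            test_suites=["basic_security"],
            on_progress=on_progress,
        )
    except ModelRedError as exc:
        print(f"Assessment could not be run: {exc}")
        return 2

    print(f"Score: {result.score}  Risk: {result.risk_level}")
    # Fail the pipeline if the model scores below an agreed threshold.
    return 0 if result.score >= 80 else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
```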

💰 Flexible Pricing

Start free with 3 models and 10 assessments/month. Scale up to Pro (25 models, 500 assessments) or Enterprise (unlimited).

Key Concepts

Understanding ModelRed Components

Models

AI models you register with ModelRed for security testing. Each model includes provider configuration, credentials, and metadata. Models are the primary target for security assessments.
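
As a rough illustration, once models are registered they can be listed and inspected before you target them with an assessment. The sketch assumes a hypothetical `list_models` method and `provider`/`metadata` attributes; the SDK's real names may differ.

```python
# Sketch only: names are illustrative, not the documented SDK API.
from modelred import ModelRed  # assumed import

client = ModelRed(api_key="mr_...")

# Each registered model carries its provider configuration and metadata.
for model in client.list_models():
    print(model.model_id, model.provider, model.metadata.get("team", "unassigned"))
```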

Threat Probes

Specialized security tests that simulate real-world attack scenarios. Each probe targets specific vulnerabilities like prompt injection, jailbreaking, or content safety violations.

Test Suites

Collections of related threat probes. Examples include "basic_security", "content_safety", and "advanced_jailbreak". Different tiers have access to different test suites.

Assessments

Security testing sessions that run test suites against your models. Assessments provide overall security scores, risk levels, detailed results, and actionable recommendations.
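
Concretely, an assessment is started against a registered model with one or more test suites and, once finished, exposes the overall score, risk level, and recommendations. The snippet below is a sketch assuming a hypothetical synchronous `run_assessment` method and result attributes; the actual SDK may name these differently.

```python
# Sketch only: method and attribute names are illustrative, not the documented SDK API.
from modelred import ModelRed  # assumed import

client = ModelRed(api_key="mr_...")

result = client.run_assessment(
    model_id="support-bot",
    test_suites=["basic_security", "content_safety"],  # suites available on your tier
)

print(f"Overall score: {result.score}")        # e.g. 0-100
print(f"Risk level:    {result.risk_level}")   # e.g. low / medium / high / critical
for recommendation in result.recommendations:  # actionable fixes
    print("-", recommendation)
```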

How It Works

4-Step Security Testing Process

1. Register Model

Connect your AI model with provider credentials and configuration

2. Choose Tests

Select test suites based on your security requirements and subscription tier

3. Run Assessment

Our threat probes test your model with real-world attack scenarios

4. Get Results

Receive detailed security reports with scores, risks, and actionable recommendations

What You'll Learn

In This Getting Started Guide

  • System requirements and prerequisites
  • Installing the ModelRed SDK (a quick preview follows this list)
  • Setting up API keys and authentication
  • Registering your first AI model
  • Running your first security assessment
  • Understanding security results and next steps
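
As a quick preview of the installation and authentication steps, the SDK is installed from PyPI and the client is initialized with your API key. The package name `modelred`, the `ModelRed` client class, and the `MODELRED_API_KEY` environment variable are assumptions for illustration; check the installation guide for the exact names.

```python
# Installation (shell): pip install modelred   # assumed package name
import os

from modelred import ModelRed  # assumed import

# Keep the key out of source control; read it from the environment (assumed variable name).
client = ModelRed(api_key=os.environ["MODELRED_API_KEY"])
```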

Next Steps