Quick Security Assessment
Get started with ModelRed in 5 minutes - complete security assessment workflow
5-Minute Security Assessment
Complete your first AI security assessment in under 5 minutes. This example walks through the essential workflow, from model registration to interpreting the security results.
Beginner
5 min
Complete Example
What You'll Build
🎯 Learning Objectives
You'll Learn
- ✓ Authenticate with ModelRed
- ✓ Register an OpenAI model for testing
- ✓ Run a basic security assessment
- ✓ Interpret security assessment results
You'll Build
- → Complete security testing script
- → Reusable assessment function
- → Results analysis and reporting
Prerequisites
📋 Before Starting
- ModelRed API key (starts with `mr_`)
- OpenAI API key (starts with `sk-`)
- Python 3.8+ with the ModelRed SDK installed
Complete Example
Full Working Example
Copy this complete script and run it to perform your first security assessment.
```python
#!/usr/bin/env python3
"""
Quick Security Assessment Example
Complete ModelRed workflow in 5 minutes
"""
import asyncio
import os
from datetime import datetime

from dotenv import load_dotenv
from modelred import ModelRed, ModelRedError

# Load environment variables
load_dotenv()


async def quick_security_assessment():
    """
    Complete security assessment workflow:
    1. Authenticate with ModelRed
    2. Register an OpenAI model
    3. Run security assessment
    4. Display results
    """
    print("🚀 ModelRed Quick Security Assessment")
    print("=" * 50)

    try:
        # Step 1: Initialize ModelRed client
        print("\n📡 Connecting to ModelRed...")
        async with ModelRed() as client:
            # Verify authentication
            account = await client.validate_api_key()
            print("✅ Connected to ModelRed!")
            print(f"   Organization: {account.get('organization', 'Personal')}")

            # Check usage limits
            usage = await client.get_usage_stats()
            print(f"   Plan: {usage.tier}")
            print(f"   Assessments: {usage.assessments_this_month}/{usage.assessments_limit}")

            # Step 2: Register OpenAI model
            print("\n🤖 Registering OpenAI model...")
            model_id = f"quick-test-{datetime.now().strftime('%Y%m%d_%H%M%S')}"

            success = await client.register_openai_model(
                model_id=model_id,
                api_key=os.getenv("OPENAI_API_KEY"),
                model_name="gpt-3.5-turbo",
                metadata={
                    "purpose": "Quick security assessment example",
                    "created_by": "ModelRed SDK Tutorial",
                },
            )

            if success:
                print(f"✅ Model registered: {model_id}")
            else:
                print("❌ Model registration failed")
                return

            # Step 3: Run security assessment
            print("\n🧪 Starting security assessment...")
            print("   This will take 2-3 minutes...")

            # Start assessment and wait for completion
            result = await client.run_assessment(
                model_id=model_id,
                test_suites=["basic_security"],
                wait_for_completion=True,
                timeout_minutes=5,
            )

            # Step 4: Display results
            print("\n📊 Assessment Results:")
            print("=" * 30)
            print(f"Overall Score: {result.overall_score}/10")
            print(f"Risk Level: {result.risk_level.value}")
            print(f"Tests Run: {result.total_tests}")
            print(f"Tests Passed: {result.passed_tests}")
            print(f"Tests Failed: {result.failed_tests}")

            # Show category breakdown
            if result.categories:
                print("\n📋 Category Breakdown:")
                for category, details in result.categories.items():
                    score = details.get("score", 0)
                    print(f"   {category}: {score}/10")

            # Show top recommendations
            if result.recommendations:
                print("\n💡 Top Recommendations:")
                for i, rec in enumerate(result.recommendations[:3], 1):
                    print(f"   {i}. {rec}")

            # Show report URL
            if result.report_url:
                print(f"\n🔗 Detailed Report: {result.report_url}")

            # Step 5: Cleanup (optional)
            print("\n🧹 Cleaning up test model...")
            await client.delete_model(model_id)
            print("✅ Test model removed")

        print("\n🎉 Assessment completed successfully!")
        print("💡 Next: Try running assessments on your production models")

    except ModelRedError as e:
        print(f"\n❌ ModelRed Error: {e.message}")
        print("💡 Check your API keys and try again")
    except Exception as e:
        print(f"\n❌ Unexpected error: {e}")
        print("💡 Please check the troubleshooting guide")


# Environment setup helper
def setup_environment():
    """Check and guide the user through environment setup."""
    print("🔧 Environment Setup Check")
    print("-" * 30)

    # Check ModelRed API key
    mr_key = os.getenv("MODELRED_API_KEY")
    if mr_key and mr_key.startswith("mr_"):
        print("✅ ModelRed API key found")
    else:
        print("❌ ModelRed API key missing or invalid")
        print("   Set: export MODELRED_API_KEY='mr_your_key_here'")
        return False

    # Check OpenAI API key
    openai_key = os.getenv("OPENAI_API_KEY")
    if openai_key and openai_key.startswith("sk-"):
        print("✅ OpenAI API key found")
    else:
        print("❌ OpenAI API key missing or invalid")
        print("   Set: export OPENAI_API_KEY='sk-your_key_here'")
        return False

    print("✅ Environment setup complete!")
    return True


if __name__ == "__main__":
    print("ModelRed Quick Security Assessment")
    print("=" * 50)

    # Check environment first
    if setup_environment():
        print("\n")
        # Run the assessment
        asyncio.run(quick_security_assessment())
    else:
        print("\n💡 Please set up your environment variables and try again")
```
Step-by-Step Breakdown
Authentication & Setup
```python
# Initialize and authenticate
async with ModelRed() as client:
    account = await client.validate_api_key()
    print(f"Connected to: {account.get('organization')}")

    # Check usage limits
    usage = await client.get_usage_stats()
    print(f"Plan: {usage.tier}")
```
What's happening: We create a ModelRed client, validate our API key, and check our current usage to ensure we have available assessment credits.
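If you want to fail fast when no assessment credits remain, a small pre-check helper can sit between the usage call and the assessment. This is a minimal sketch, not part of the SDK: the field names mirror the usage object above, and treating a non-positive limit as "unlimited" is an assumption that may not match every ModelRed plan.

```python
def has_assessment_quota(assessments_this_month: int, assessments_limit: int) -> bool:
    """Return True if at least one assessment credit remains.

    Assumption: a limit of 0 or less means an unlimited plan; verify
    against your actual ModelRed plan semantics.
    """
    if assessments_limit <= 0:  # assumed sentinel for unlimited plans
        return True
    return assessments_this_month < assessments_limit
```

You would call it as `has_assessment_quota(usage.assessments_this_month, usage.assessments_limit)` and skip the assessment (or warn the user) when it returns `False`.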
Model Registration
```python
# Register an OpenAI model for testing
model_id = f"quick-test-{datetime.now().strftime('%Y%m%d_%H%M%S')}"

success = await client.register_openai_model(
    model_id=model_id,
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name="gpt-3.5-turbo",
    metadata={"purpose": "Security assessment example"},
)
```
What's happening: We register a GPT-3.5-turbo model with a unique ID. The metadata helps track the purpose of this model registration.
Security Assessment
```python
# Run the security assessment and wait for results
result = await client.run_assessment(
    model_id=model_id,
    test_suites=["basic_security"],
    wait_for_completion=True,
    timeout_minutes=5,
)
```
What's happening: We run the "basic_security" test suite, which includes fundamental security checks like prompt injection and jailbreak attempts. The assessment typically takes 2-3 minutes.
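If you would rather not block on `wait_for_completion=True`, the same step can be driven by a polling loop on your own schedule. The sketch below is generic and makes assumptions: `get_status` is any zero-argument callable you supply (check the SDK for the actual status API), and the terminal state names are illustrative, not official ModelRed values.

```python
import time


def poll_until_done(get_status, timeout_s: float = 300.0, interval_s: float = 5.0):
    """Call get_status() until it returns a terminal state or we time out.

    Assumption: "completed" and "failed" are the terminal states; swap in
    whatever status values your SDK version actually reports.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "failed"):  # assumed terminal states
            return status
        time.sleep(interval_s)
    raise TimeoutError("assessment did not finish in time")
```

The injected callable keeps the loop testable and decoupled from any particular client object.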
Results Analysis
```python
# Display comprehensive results
print(f"Overall Score: {result.overall_score}/10")
print(f"Risk Level: {result.risk_level.value}")
print(f"Tests: {result.passed_tests}/{result.total_tests} passed")

# Category breakdown
for category, details in result.categories.items():
    print(f"{category}: {details.get('score', 0)}/10")

# Recommendations
for i, rec in enumerate(result.recommendations[:3], 1):
    print(f"{i}. {rec}")
```
What's happening: We parse the assessment results to show the overall security score, risk level, category breakdowns, and actionable recommendations for improving security.
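For the reusable assessment function promised above, it helps to condense the raw result fields into a small summary you can log or return. A minimal sketch, assuming the same field names as the result object used in this example; the pass-rate rounding and top-3 cutoff are choices made for illustration:

```python
from typing import Dict, List


def summarize_results(overall_score: float, passed: int, total: int,
                      recommendations: List[str]) -> Dict[str, object]:
    """Condense raw assessment fields into a small report dict.

    Assumption: field meanings mirror the result object shown above.
    """
    # Guard against division by zero when no tests ran
    pass_rate = round(100.0 * passed / total, 1) if total else 0.0
    return {
        "overall_score": overall_score,
        "pass_rate_pct": pass_rate,
        "top_recommendations": recommendations[:3],
    }
```

You would call it as `summarize_results(result.overall_score, result.passed_tests, result.total_tests, result.recommendations)`.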
Running the Example
▶️ Execution Steps
1. Save the complete example as `quick_assessment.py`
2. Export your API keys:

```bash
export MODELRED_API_KEY="mr_your_api_key_here"
export OPENAI_API_KEY="sk-your_openai_key_here"
```

3. Install the dependencies:

```bash
pip install modelred python-dotenv
```

4. Run the script:

```bash
python quick_assessment.py
```
Expected Output
📺 Sample Output
Understanding Results
📊 Interpreting Your Results
Scoring
Risk Levels
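As a rough mental model for reading scores, you can bucket the 0-10 scale into risk levels. The thresholds below are illustrative only: ModelRed computes `result.risk_level` server-side, and its actual cutoffs may differ.

```python
def risk_label(overall_score: float) -> str:
    """Map a 0-10 security score to a coarse risk bucket.

    Assumption: illustrative thresholds; the authoritative value is
    result.risk_level, computed by ModelRed itself.
    """
    if overall_score >= 8.0:
        return "LOW"
    if overall_score >= 6.0:
        return "MEDIUM"
    if overall_score >= 4.0:
        return "HIGH"
    return "CRITICAL"
```

Always prefer the `risk_level` returned in the assessment result when it is available.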
Next Steps
🚀 What's Next?
- Try the Multi-Provider Setup to test multiple AI models
- Explore Advanced Test Suites for comprehensive security testing
- Set up CI/CD Integration for automated security testing
- Register your production models for regular security assessments
Troubleshooting
⚠️ Common Issues
- AuthenticationError: API key invalid or missing. Fix: `export MODELRED_API_KEY="mr_your_actual_key_here"`
- QuotaExceededError: monthly assessment limit reached. Fix: wait for the quota to reset or upgrade your plan.
- AssessmentError: assessment failed to complete. Fix: check your OpenAI API key and account credits.
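Transient failures (brief network errors, rate limits) often resolve on a retry with backoff rather than manual intervention. A generic sketch, with illustrative exception types and delays that are not ModelRed specifics:

```python
import time


def retry(fn, attempts: int = 3, base_delay_s: float = 1.0,
          retry_on: tuple = (ConnectionError, TimeoutError)):
    """Run fn(), retrying with exponential backoff on listed exceptions.

    Assumption: the exceptions worth retrying are connection/timeout
    errors; adjust retry_on for the error types your code actually sees.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay_s * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Wrap only idempotent operations this way; do not blindly retry calls that consume assessment quota on every attempt.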