FAQ
Frequently asked questions, troubleshooting tips, and common issues with the ModelRed SDK.
Introduction
Common questions and solutions when working with the ModelRed Python SDK. Find quick answers to installation, authentication, assessments, and troubleshooting issues.
Installation & Setup
How do I install the SDK?
Install via pip or uv:
```bash
pip install modelred
```

Or with uv:

```bash
uv add modelred
```

The SDK requires Python 3.8 or higher.
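If you're unsure whether the install succeeded, a quick check using only the standard library (this snippet is generic Python, not part of the SDK):

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if the named package can be found in this environment."""
    return importlib.util.find_spec(package) is not None

# After `pip install modelred`, this should report True
print("modelred installed:", is_installed("modelred"))
```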
Where do I get an API key?
Navigate to https://www.app.modelred.ai and create an API key
Copy the key (it starts with mr_) and store it securely
Security: Never commit API keys to version control. Use environment variables or secrets managers.
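A small helper that enforces this: it reads the key from an environment variable and catches the whitespace and wrong-prefix mistakes covered in the Authentication section. The helper name is our own; the SDK only needs the resulting string:

```python
import os

def load_modelred_key(var: str = "MODELRED_API_KEY") -> str:
    """Read the ModelRed API key from the environment and sanity-check it."""
    key = os.environ.get(var, "").strip()  # drop stray whitespace
    if not key.startswith("mr_"):
        raise ValueError(f"{var} must be set and start with 'mr_'")
    return key
```

Pass the result straight to the client, e.g. `ModelRed(api_key=load_modelred_key())`.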
What's a detector API key?
The detector is a separate LLM used to analyze your model's responses for vulnerabilities. You need:
Detector Provider
Either "openai" or "anthropic"
Detector API Key
Your OpenAI (sk-...) or Anthropic (sk-ant-...) key
Detector Model
Model name like "gpt-4o-mini" or "claude-3-5-sonnet-20241022"
The detector is different from the model being assessed.
Why do I need two API keys?
ModelRed API key (mr_...)
- Authenticates you with ModelRed's platform
- Manages orchestration and reporting
- Tracks your assessments and results
Detector API key (OpenAI/Anthropic)
- Pays for the LLM that analyzes assessment results
- Uses your existing OpenAI/Anthropic credits
- Allows you to control detector costs directly
This separation allows you to use your existing OpenAI/Anthropic credits while ModelRed handles orchestration and reporting.
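One way to keep the two credentials separate in code is to read each from its own environment variable and group the detector settings together. A sketch (the helper name and variable names are our choice; the keyword arguments mirror the create_assessment examples later on this page):

```python
import os

def detector_kwargs() -> dict:
    """Detector settings for create_assessment* calls, read from the environment."""
    return {
        "detector_provider": "openai",
        "detector_api_key": os.environ["OPENAI_API_KEY"],  # pays for analysis
        "detector_model": "gpt-4o-mini",
    }
```

Then call, for example, `client.create_assessment_by_id(model_id=..., probe_pack_ids=[...], **detector_kwargs())`.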
Authentication
Invalid API key error
Error: ValueError: Valid API key (mr_...) is required
Solution: Ensure your API key starts with mr_. Check for:
- Leading/trailing whitespace
- Incorrect environment variable name
- Key not set in environment
```python
import os

# Debug: print the first few characters of the key
api_key = os.environ.get("MODELRED_API_KEY", "")
print(f"Key starts with: {api_key[:5]}")  # Should be "mr_xx"
```

Unauthorized (401) error
Error: Unauthorized: 401: Invalid API key
Solution: Confirm the key was copied in full and has not been revoked; if in doubt, generate a new key in the web app and update your environment variable.
Assessments
"At least one probe_pack_id is required"
Error: ValidationFailed: At least one probe_pack_id is required
Solution: Provide at least one probe pack:
```python
# Get some probe packs first
owned = client.list_owned_probes(page_size=5)
imported = client.list_imported_probes(page_size=5)

probe_ids = []
if owned.get("data"):
    probe_ids.append(owned["data"][0]["id"])
if imported.get("data"):
    probe_ids.append(imported["data"][0]["id"])

# Now create the assessment
assessment = client.create_assessment_by_id(
    model_id="model_123",
    probe_pack_ids=probe_ids,  # At least one ID
    detector_provider="openai",
    detector_api_key="sk-...",
    detector_model="gpt-4o-mini",
)
```

"Provide model or model_id"
Error: ValueError: Provide 'model' (recommended) or 'model_id'
Solution: Use one of these approaches:
By model ID:

```python
assessment = client.create_assessment_by_id(
    model_id="model_abc123",
    probe_pack_ids=["pack_1"],
    detector_provider="openai",
    detector_api_key="sk-...",
    detector_model="gpt-4o-mini",
)
```

Or by model name:

```python
assessment = client.create_assessment(
    model="gpt-4-turbo",
    probe_pack_ids=["pack_1"],
    detector_provider="openai",
    detector_api_key="sk-...",
    detector_model="gpt-4o-mini",
)
```

Cannot cancel assessment
Error: NotAllowedForApiKey: Assessment modification requires web UI
Solution: Assessment cancellation is only available through the web UI. API keys cannot cancel or modify assessments. Log into the web application to cancel.
Assessment stuck in QUEUED
Possible causes:
- High assessment volume (queue backlog)
- Priority set to "low"
- Detector API key invalid or rate limited
Solutions:
- Verify your detector API key is valid and has credits
- Set priority to "high" or "critical" for faster processing
- Assessments process in priority order; check back in a few minutes
- Look for error messages in the web application
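If you'd rather block until the assessment resolves than check back by hand, a simple polling loop with a timeout works. This is a sketch built on get_assessment; the helper name and its defaults are ours, not SDK API:

```python
import time

def wait_for_assessment(client, assessment_id: str,
                        timeout: float = 600.0, interval: float = 10.0) -> dict:
    """Poll until the assessment is COMPLETED or FAILED, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        assessment = client.get_assessment(assessment_id)
        if assessment["status"] in ("COMPLETED", "FAILED"):
            return assessment
        time.sleep(interval)  # avoid hammering the API
    raise TimeoutError(f"Assessment {assessment_id} still pending after {timeout}s")
```

Keep the interval generous (10s or more); the queue is processed server-side, so tighter polling only burns rate limit.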
Where are my assessment results?
Results are available when status is "COMPLETED":
```python
assessment = client.get_assessment("assessment_id")

if assessment["status"] == "COMPLETED":
    results = assessment.get("results", {})
    print(results)
elif assessment["status"] == "FAILED":
    print(f"Assessment failed: {assessment.get('error')}")
else:
    print(f"Status: {assessment['status']}")  # QUEUED or RUNNING
```

Probe Packs
"Probe pack not found"
Error: NotFound: 404: Probe pack not found
Possible causes:
- Invalid probe pack ID (typo)
- Public probe pack not imported
- Probe pack from different organization
Solutions:
```python
# List your available packs
owned = client.list_owned_probes()
imported = client.list_imported_probes()

# Check whether the pack exists
print("Owned:", [p["id"] for p in owned["data"]])
print("Imported:", [p["id"] for p in imported["data"]])
```

For public packs, import them via the web UI first.
How do I import public probe packs?
Public probe packs must be imported through the web UI:
- Navigate to the ModelRed web app and import the pack you need
- Imported packs then appear in list_imported_probes()
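To confirm an import from the SDK side, you can scan the imported packs for the one you expect. A minimal sketch (the helper name and the pack's `name` field are assumptions about the response shape):

```python
def find_imported_pack(client, name: str):
    """Return the first imported probe pack with a matching name, or None."""
    response = client.list_imported_probes(page_size=100)
    for pack in response.get("data", []):
        if pack.get("name") == name:
            return pack
    return None
```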
What's the difference between owned and imported?
Owned Probe Packs
Probe packs created by your organization (private or public)
Imported Probe Packs
Public probe packs from ModelRed or other organizations that you've imported
Use different methods to list them:

```python
owned = client.list_owned_probes()        # Your org's packs
imported = client.list_imported_probes()  # Imported public packs
```

Models
No models available
Issue: list_models() returns empty data
Solution: Models are registered through the web UI, not the SDK. Add at least one model there, then call list_models() again, and make sure any status or provider filters aren't excluding everything.
Model is inactive
Issue: Cannot use model in assessments
Solution: Models must be "active" status. Check and update in web UI:
```python
models = client.list_models(status="inactive")
for model in models["data"]:
    print(f"Inactive: {model['displayName']}")

# Update the status in the web UI
```

Errors
Rate limited (429)
Error: RateLimited: 429: Too Many Requests
Note: The SDK retries automatically. You usually won't see this error.
Solutions:
Add a delay between requests:

```python
import time

for i in range(10):
    assessment = client.get_assessment(assessment_ids[i])
    time.sleep(0.5)  # Add delay between requests
```

Use the paginating iterators:

```python
for assessment in client.iter_assessments(page_size=20):
    process(assessment)
```

Increase retries:

```python
client = ModelRed(
    api_key="mr_...",
    max_retries=5,  # More retries
)
```

Connection timeout
Error: httpx.TimeoutException or Request timeout
Solutions:
Increase the timeout:

```python
client = ModelRed(
    api_key="mr_...",
    timeout=60.0,  # Increase from the default 20s
)
```

Check connectivity to the API:

```bash
curl https://www.app.modelred.ai/health
```

Or use the async client with a longer timeout:

```python
async with AsyncModelRed(api_key="mr_...", timeout=60.0) as client:
    result = await client.list_models()
```

SSL/Certificate errors
Error: SSL: CERTIFICATE_VERIFY_FAILED
Solution: This usually means outdated CA certificates or a proxy intercepting TLS. Try upgrading your certificate bundle (pip install --upgrade certifi) and, on corporate networks, ask your IT team for the proxy's CA certificate.
Performance
Slow list operations
Issue: list_models() or list_assessments() is slow
Use smaller pages for quick lookups:

```python
# Fast first page
response = client.list_models(page_size=10)
```

Cache results that change rarely:

```python
import time

class ModelCache:
    def __init__(self, client):
        self.client = client
        self._cache = None
        self._timestamp = 0

    def get_models(self, ttl=300):
        now = time.time()
        if not self._cache or (now - self._timestamp) > ttl:
            self._cache = list(self.client.iter_models(page_size=100))
            self._timestamp = now
        return self._cache
```

Use filters to narrow results:

```python
# Only active OpenAI models
models = client.list_models(
    provider="openai",
    status="active",
)
```

Memory issues with large datasets
Issue: Out of memory when iterating
Solution: Process items incrementally:
```python
# ❌ Bad: loads everything into memory
all_assessments = list(client.iter_assessments(page_size=50))

# ✅ Good: process one at a time
for assessment in client.iter_assessments(page_size=50):
    process(assessment)  # Process and discard immediately
```

Development
Testing without making real API calls
Use mocking for unit tests:
```python
from unittest.mock import Mock
import pytest

@pytest.fixture
def mock_client():
    client = Mock()
    client.list_models.return_value = {
        "data": [{"id": "model_1", "displayName": "Test Model"}],
        "total": 1,
    }
    return client

def test_my_function(mock_client):
    result = my_function(mock_client)
    assert result is not None
```

Local development setup
Use .env file for local API keys:
```bash
# .env (add to .gitignore!)
MODELRED_API_KEY=mr_dev_key_here
OPENAI_API_KEY=sk_dev_key_here
ENV=development
```

Load with python-dotenv:

```python
from dotenv import load_dotenv
import os
from modelred import ModelRed

load_dotenv()
client = ModelRed(api_key=os.environ["MODELRED_API_KEY"])
```

How do I debug API requests?
Enable debug logging:
```python
import logging
import httpx

# Enable httpx debug logging
httpx_logger = logging.getLogger("httpx")
httpx_logger.setLevel(logging.DEBUG)

# Add a handler
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(name)s - %(levelname)s - %(message)s"))
httpx_logger.addHandler(handler)

# Now SDK requests will be logged
client = ModelRed(api_key="mr_...")
models = client.list_models()
```

Async Usage
"RuntimeError: Event loop is closed"
Error: RuntimeError: Event loop is closed
Solution: Use async with pattern:
```python
import asyncio
from modelred import AsyncModelRed

async def main():
    async with AsyncModelRed(api_key="mr_...") as client:
        models = await client.list_models()
        # Use models here
    # client.aclose() is called automatically on exit

asyncio.run(main())
```

Cannot iterate async results
Issue: Async iterators not available
Current limitation: Async version doesn't have iter_* methods yet. Use
manual pagination.
```python
async def get_all_models(client):
    """Collect every model across pages (the async client has no iter_* helpers)."""
    page = 1
    all_models = []
    while True:
        response = await client.list_models(page=page, page_size=50)
        all_models.extend(response["data"])
        if page >= response["totalPages"]:
            break
        page += 1
    return all_models
```

Miscellaneous
Can I change the base URL?
No, the base URL is fixed to https://www.app.modelred.ai in the current SDK version. For enterprise deployments with custom URLs, contact ModelRed support.
How do I update the SDK?
```bash
pip install --upgrade modelred
```

Check your version:

```python
import modelred

print(modelred.__version__)
```

Is the SDK thread-safe?
The synchronous client is not thread-safe. For concurrent requests:
- Create one client per thread, or
- Use AsyncModelRed with asyncio

```python
# ✅ Good: one client per thread
import os
import threading

from modelred import ModelRed

def worker():
    client = ModelRed(api_key=os.environ["MODELRED_API_KEY"])
    models = client.list_models()
    client.close()

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Can I use custom HTTP proxies?
Yes, provide a custom httpx transport:
```python
import httpx
from modelred import ModelRed

transport = httpx.HTTPTransport(proxy="http://proxy.company.com:8080")
client = ModelRed(api_key="mr_...", transport=transport)
```

Where can I find code examples?
Check these documentation pages:
- Python SDK: client configuration and setup
- Assessments: creating and managing assessments
- Probe Packs: working with probe packs
- Models: listing and filtering models
- Pagination: efficient iteration patterns
- Best Practices: production deployment patterns
Still Having Issues?
If you're still experiencing problems:
- Review Error Handling for comprehensive error management
- See Best Practices for production patterns
- Use the debug logging example above to see detailed request information
- Reach out through the ModelRed web application
When reporting issues, include:
- SDK version (modelred.__version__)
- Python version
- Error message and stack trace
- Minimal code to reproduce the issue