CrewAI
Use Obiguard with CrewAI to take your AI Agents to production
Introduction
CrewAI is a framework for orchestrating role-playing, autonomous AI agents designed to solve complex, open-ended tasks through collaboration. It provides a robust structure for agents to work together, leverage tools, and exchange insights to accomplish sophisticated objectives.
Obiguard enhances CrewAI with production-readiness features, turning your experimental agent crews into robust systems by providing:
- Complete observability of every agent step, tool use, and interaction
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 200+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance
CrewAI Official Documentation
Learn more about CrewAI’s core concepts and features
Installation & Setup
Install the required packages
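A minimal setup, assuming CrewAI's built-in `LLM` class is used to route requests through Obiguard (so no separate SDK is strictly required); if Obiguard ships its own Python package, install that alongside:

```sh
pip install crewai
```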
Generate API Key
Create an Obiguard API key with optional budget/rate limits from the Obiguard dashboard. You can also attach configurations for reliability, caching, and more to this key. More on this later.
Configure CrewAI with Obiguard
The integration is straightforward: update the LLM configuration in your CrewAI setup to point at Obiguard.
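A minimal sketch, assuming Obiguard exposes an OpenAI-compatible gateway endpoint; the base URL below is a placeholder for your actual Obiguard deployment:

```python
from crewai import Agent, LLM

# Route all LLM traffic through Obiguard instead of calling the provider directly.
# The base_url is a placeholder; use the gateway URL from your Obiguard dashboard.
obiguard_llm = LLM(
    model="gpt-4o",
    base_url="https://your-obiguard-gateway/v1",
    api_key="YOUR_OBIGUARD_API_KEY",
)

researcher = Agent(
    role="Senior Researcher",
    goal="Uncover cutting-edge developments in AI",
    backstory="A veteran analyst who excels at synthesizing research.",
    llm=obiguard_llm,  # every call this agent makes now flows through Obiguard
)
```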
What are Virtual Keys? Virtual keys in Obiguard securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.
Production Features
1. Enhanced Observability
Obiguard provides comprehensive observability for your CrewAI agents, helping you understand exactly what’s happening during each execution.
Traces provide a hierarchical view of your crew’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
Obiguard logs every interaction with LLMs, including:
- Complete request and response payloads
- Latency and token usage metrics
- Cost calculations
- Tool calls and function executions
All logs can be filtered by metadata, trace IDs, models, and more, making it easy to debug specific crew runs.
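As a sketch of how you might tag runs for filtering, assuming Obiguard reads trace and metadata headers on each request and that CrewAI's `LLM` forwards `extra_headers` to the underlying client; the header names below are hypothetical, so check your Obiguard dashboard for the exact ones:

```python
from crewai import LLM

# Hypothetical header names for trace/metadata tagging; adjust to Obiguard's spec.
tagged_llm = LLM(
    model="gpt-4o",
    base_url="https://your-obiguard-gateway/v1",
    api_key="YOUR_OBIGUARD_API_KEY",
    extra_headers={
        "x-trace-id": "crew-run-2024-07-01",  # groups all calls from one crew run
        "x-metadata": '{"team": "research", "env": "staging"}',
    },
)
```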
Obiguard provides built-in dashboards that help you:
- Track cost and token usage across all crew runs
- Analyze performance metrics like latency and success rates
- Identify bottlenecks in your agent workflows
- Compare different crew configurations and LLMs
2. Model Interoperability
CrewAI supports multiple LLM providers, and Obiguard extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
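For example, switching a crew from OpenAI to Anthropic can be as small as changing the model string, assuming both providers are configured behind your Obiguard gateway:

```python
from crewai import LLM

# Same gateway, same code path; only the model identifier changes.
gpt_llm = LLM(
    model="gpt-4o",
    base_url="https://your-obiguard-gateway/v1",
    api_key="YOUR_OBIGUARD_API_KEY",
)

claude_llm = LLM(
    model="claude-3-5-sonnet-20240620",
    base_url="https://your-obiguard-gateway/v1",
    api_key="YOUR_OBIGUARD_API_KEY",
)

# Swap models per agent without touching task or crew logic:
# researcher.llm = claude_llm
```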
Obiguard provides access to LLMs from providers including:
- OpenAI (GPT-4o, GPT-4 Turbo, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
- Mistral AI (Mistral Large, Mistral Medium, etc.)
- Google Vertex AI (Gemini 1.5 Pro, etc.)
- Cohere (Command, Command-R, etc.)
- AWS Bedrock (Claude, Titan, etc.)
- Local/Private Models
Supported Providers
See the full list of LLM providers supported by Obiguard.
Set Up Enterprise Governance for CrewAI
Why Enterprise Governance?
If you are using CrewAI inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
Obiguard adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.
Create guardrail policy
You can create a guardrail policy to protect your data and ensure compliance with organizational policies. Add guardrail validators on your LLM inputs and outputs to govern your LLM usage.
Create Virtual Key
Virtual Keys are Obiguard’s secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys.
To create a virtual key:
- Go to Virtual Keys in the Obiguard dashboard
- Select the guardrail policy and your LLM provider
- Save and copy the virtual key ID
Save your virtual key ID - you’ll need it for the next step.
Connect to CrewAI
After setting up your Obiguard API key with the attached config, connect it to your CrewAI agents:
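A sketch under the assumption that the virtual key is supplied as a request header on each gateway call; the header name is hypothetical, so use the one shown in your Obiguard dashboard:

```python
from crewai import Agent, Crew, LLM, Task

llm = LLM(
    model="gpt-4o",
    base_url="https://your-obiguard-gateway/v1",
    api_key="YOUR_OBIGUARD_API_KEY",
    # Hypothetical header: attaches the virtual key and its guardrails/budgets.
    extra_headers={"x-obiguard-virtual-key": "YOUR_VIRTUAL_KEY_ID"},
)

analyst = Agent(
    role="Financial Analyst",
    goal="Summarize quarterly results",
    backstory="A detail-oriented analyst with a knack for clear summaries.",
    llm=llm,
)

task = Task(
    description="Summarize the key takeaways from the latest quarterly report.",
    expected_output="A five-bullet summary.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[task])
result = crew.kickoff()  # all calls are governed by the virtual key's policies
```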
Enterprise Features Now Available
Your CrewAI integration now has:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features