Introduction
CrewAI is a framework for orchestrating role-playing, autonomous AI agents designed to solve complex, open-ended tasks through collaboration. It provides a robust structure for agents to work together, leverage tools, and exchange insights to accomplish sophisticated objectives. Obiguard enhances CrewAI with production-readiness features, turning your experimental agent crews into robust systems by providing:
- Complete observability of every agent step, tool use, and interaction
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 200+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance
CrewAI Official Documentation
Learn more about CrewAI’s core concepts and features
Installation & Setup
1
Install the required packages
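The install step would look something like the following (the `obiguard` package name is an assumption; check the dashboard for the exact package):

```shell
# Install CrewAI plus the Obiguard SDK (package name assumed)
pip install crewai obiguard
```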
2
Generate API Key
Create an Obiguard API key, with optional budget/rate limits, from the Obiguard dashboard.
You can also attach configurations for reliability, caching, and more to this key. More on this later.
3
Configure CrewAI with Obiguard
The integration is simple - you just need to update the LLM configuration in your CrewAI setup:
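A minimal sketch of that configuration, assuming an OpenAI-compatible Obiguard gateway endpoint (the base URL, environment variable, and header name below are illustrative assumptions, not confirmed API surface):

```python
import os

# Assumed Obiguard gateway endpoint -- check your dashboard for the real URL.
OBIGUARD_BASE_URL = "https://api.obiguard.ai/v1"

def obiguard_llm_kwargs(model: str, virtual_key: str) -> dict:
    """Build keyword arguments for CrewAI's LLM(...) so that calls route
    through the Obiguard gateway instead of hitting the provider directly."""
    return {
        "model": model,
        "base_url": OBIGUARD_BASE_URL,
        # The Obiguard API key authenticates you to the gateway.
        "api_key": os.environ.get("OBIGUARD_API_KEY", ""),
        # Header name is illustrative; Obiguard's docs define the real one.
        "extra_headers": {"x-obiguard-virtual-key": virtual_key},
    }

kwargs = obiguard_llm_kwargs("gpt-4o", "vk-finance-team")
```

With these kwargs you would construct `LLM(**kwargs)` from `crewai` and pass it to your agents; the rest of your crew code stays unchanged.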
What are Virtual Keys? Virtual keys in Obiguard securely store your LLM provider API keys (OpenAI, Anthropic,
etc.) in an encrypted vault.
They allow for easier key rotation and budget management. Learn more about virtual keys here.
Production Features
1. Enhanced Observability
Obiguard provides comprehensive observability for your CrewAI agents, helping you understand exactly what's happening during each execution. Traces provide a hierarchical view of your crew's execution, showing the sequence of LLM calls, tool invocations, and state transitions.
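One common pattern is to tag every request in a crew run with a shared trace ID so all of its LLM calls group under a single trace. A sketch, assuming trace metadata is sent via request headers (the header names here are assumptions):

```python
import uuid

def trace_headers(run_name: str) -> dict:
    """Generate per-run headers so every LLM call in one crew execution
    is grouped under a single trace in the Obiguard dashboard.
    Header names are illustrative assumptions."""
    return {
        "x-obiguard-trace-id": f"{run_name}-{uuid.uuid4().hex[:8]}",
        "x-obiguard-metadata": '{"app": "crewai"}',
    }

headers = trace_headers("research-crew")
```

Generating the ID once per run (not per call) is what makes the hierarchy possible: every agent step and tool call carries the same trace ID.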
2. Model Interoperability
CrewAI supports multiple LLM providers, and Obiguard extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
- OpenAI (GPT-4o, GPT-4 Turbo, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
- Mistral AI (Mistral Large, Mistral Medium, etc.)
- Google Vertex AI (Gemini 1.5 Pro, etc.)
- Cohere (Command, Command-R, etc.)
- AWS Bedrock (Claude, Titan, etc.)
- Local/Private Models
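Because the gateway is unified, switching providers reduces to changing a model string. A sketch of that idea (the provider-prefixed slugs below are illustrative, not Obiguard's confirmed naming scheme):

```python
# Map logical tiers to provider-prefixed model slugs (illustrative names).
# With a unified gateway, switching providers is just a different string --
# agent and task logic stay untouched.
MODELS = {
    "fast": "openai/gpt-4o-mini",
    "reasoning": "anthropic/claude-3-5-sonnet",
    "cheap": "mistral/mistral-medium",
}

def pick_model(tier: str) -> str:
    """Resolve a logical tier name to a concrete model slug."""
    return MODELS[tier]
```

Keeping the mapping in one place means a crew-wide model swap is a one-line config change.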
Supported Providers
See the full list of LLM providers supported by Obiguard.
Set Up Enterprise Governance for CrewAI
Why Enterprise Governance? If you are using CrewAI inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
1
Create guardrail policy
You can choose to create a guardrail policy to protect your data and ensure compliance with organizational policies.
Add guardrail validators to your LLM inputs and outputs to govern your LLM usage.
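Conceptually, a guardrail policy pairs checks with actions on both sides of the model call. A sketch of what such a policy might express (the validator names, fields, and actions below are assumptions; real validators are configured in the Obiguard dashboard):

```python
# Illustrative guardrail policy shape -- field names are assumptions.
guardrail_policy = {
    # Validators applied to prompts before they reach the model.
    "input_guardrails": [
        {"check": "pii_detection", "action": "redact"},
    ],
    # Validators applied to model responses before they reach the agent.
    "output_guardrails": [
        {"check": "toxicity", "threshold": 0.8, "action": "block"},
    ],
}
```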
2
Create Virtual Key
Virtual Keys are Obiguard’s secure way to manage your LLM provider API keys.
Think of them like disposable credit cards for your LLM API keys. To create a virtual key:
1. Go to Virtual Keys in the Obiguard dashboard.
2. Select the guardrail policy and your LLM provider.
3. Save and copy the virtual key ID.
Save your virtual key ID - you'll need it for the next step.
3
Connect to CrewAI
After setting up your Obiguard API key with the attached config, connect it to your CrewAI agents:
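For the governance features to attribute spend and usage correctly, each request needs to carry team/user metadata. A sketch, assuming metadata travels as a JSON-valued request header (the header name is an illustrative assumption):

```python
import json

def governance_headers(team: str, user: str) -> dict:
    """Build headers attaching org metadata to each request so spend and
    usage can be attributed per team and user in Obiguard analytics.
    The header name is an illustrative assumption."""
    return {
        "x-obiguard-metadata": json.dumps({"team": team, "user": user}),
    }

h = governance_headers("data-science", "alice")
```

These headers would be passed alongside the LLM configuration from the earlier setup step, so every agent call in the crew is tagged.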
Enterprise Features Now Available
Your CrewAI integration now has:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features
Frequently Asked Questions
How does Obiguard enhance CrewAI?
Obiguard adds production-readiness to CrewAI through comprehensive observability (traces, logs, metrics),
reliability features (fallbacks, retries, caching), and access to 200+ LLMs through a unified interface. This makes
it easier to debug, optimize, and scale your agent applications.
Can I use Obiguard with existing CrewAI applications?
Yes! Obiguard integrates seamlessly with existing CrewAI applications. You just need to update your LLM
configuration code with the Obiguard-enabled version. The rest of your agent and crew code remains unchanged.
Does Obiguard work with all CrewAI features?
Obiguard supports all CrewAI features, including agents, tools, human-in-the-loop workflows, and all task process
types (sequential, hierarchical, etc.). It adds observability and reliability without limiting any of the
framework’s functionality.
Can I use my own API keys with Obiguard?
Yes! Obiguard uses your own API keys for the various LLM providers.
It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.