Introduction

CrewAI is a framework for orchestrating role-playing, autonomous AI agents designed to solve complex, open-ended tasks through collaboration. It provides a robust structure for agents to work together, leverage tools, and exchange insights to accomplish sophisticated objectives.

Obiguard enhances CrewAI with production-readiness features, turning your experimental agent crews into robust systems by providing:

  • Complete observability of every agent step, tool use, and interaction
  • Built-in reliability with fallbacks, retries, and load balancing
  • Cost tracking and optimization to manage your AI spend
  • Access to 200+ LLMs through a single integration
  • Guardrails to keep agent behavior safe and compliant
  • Version-controlled prompts for consistent agent performance
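The reliability features above follow a familiar gateway pattern: try a primary target and, on failure, fall back to the next one in line. A minimal, provider-agnostic sketch of that idea (the function and target names below are illustrative, not Obiguard's actual API — Obiguard applies this logic at the gateway):

```python
# Illustrative sketch of a fallback chain; not Obiguard's actual API.
def call_with_fallbacks(targets, prompt):
    """Try each (name, call) target in order; return the first success."""
    errors = []
    for name, call in targets:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real gateway would retry only retryable errors
            errors.append((name, exc))
    raise RuntimeError(f"All targets failed: {errors}")

def flaky_primary(prompt):
    raise TimeoutError("primary provider unavailable")

def healthy_fallback(prompt):
    return f"answer to: {prompt}"

targets = [("gpt-4o", flaky_primary), ("claude-3-5-sonnet", healthy_fallback)]
used, reply = call_with_fallbacks(targets, "hello")
print(used, reply)  # claude-3-5-sonnet answer to: hello
```

With Obiguard, you get this behavior by configuration rather than by writing the loop yourself.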

CrewAI Official Documentation

Learn more about CrewAI’s core concepts and features

Installation & Setup

1

Install the required packages

pip install -U crewai obiguard

2

Generate API Key

Create an Obiguard API key with optional budget/rate limits from the Obiguard dashboard. You can also attach configurations for reliability, caching, and more to this key. More on this later.

3

Configure CrewAI with Obiguard

The integration is simple: you only need to update the LLM configuration in your CrewAI setup.

from crewai import LLM
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

# Create an LLM instance with Obiguard integration
gpt_llm = LLM(
  model="gpt-4o",
  base_url=OBIGUARD_GATEWAY_URL,
  api_key="dummy",  # We are using a virtual key, so this is a placeholder
  extra_headers=createHeaders(
    obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
  )
)

# Use them in your Crew agents like this
# (inside a @CrewBase-decorated crew class):

@agent
def lead_market_analyst(self) -> Agent:
  return Agent(
    config=self.agents_config['lead_market_analyst'],
    verbose=True,
    memory=False,
    llm=gpt_llm
  )

What are Virtual Keys? Virtual keys in Obiguard securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.

Production Features

1. Enhanced Observability

Obiguard provides comprehensive observability for your CrewAI agents, helping you understand exactly what’s happening during each execution.

Traces provide a hierarchical view of your crew’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
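Conceptually, a trace is a tree of spans: the crew run at the root, with agent steps, LLM calls, and tool invocations nested beneath it. The toy model below is purely illustrative of that hierarchy (Obiguard builds and renders these traces for you):

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One node in a trace tree, e.g. a crew run, LLM call, or tool call."""
    name: str
    children: list = field(default_factory=list)

    def add(self, name):
        child = Span(name)
        self.children.append(child)
        return child

    def render(self, depth=0):
        """Return the tree as indented lines, one per span."""
        lines = ["  " * depth + self.name]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

root = Span("crew_run")
step = root.add("agent:lead_market_analyst")
step.add("llm_call:gpt-4o")
step.add("tool_call:web_search")
print("\n".join(root.render()))
```

Reading a trace top-down like this is how you pinpoint which agent step or tool call consumed the most time or tokens.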

2. Model Interoperability

CrewAI supports multiple LLM providers, and Obiguard extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:

from crewai import Agent, LLM
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

# Set up LLMs with different providers
openai_llm = LLM(
    model="gpt-4o",
    base_url=OBIGUARD_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    )
)

anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    max_tokens=1000,
    base_url=OBIGUARD_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    )
)

# Choose which LLM to use for each agent based on your needs
researcher = Agent(
    role="Senior Research Scientist",
    goal="Discover groundbreaking insights about the assigned topic",
    backstory="You are an expert researcher with deep domain knowledge.",
    verbose=True,
    llm=openai_llm  # Use anthropic_llm for Anthropic
)

Obiguard provides access to LLMs from providers including:

  • OpenAI (GPT-4o, GPT-4 Turbo, etc.)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
  • Mistral AI (Mistral Large, Mistral Medium, etc.)
  • Google Vertex AI (Gemini 1.5 Pro, etc.)
  • Cohere (Command, Command-R, etc.)
  • AWS Bedrock (Claude, Titan, etc.)
  • Local/Private Models
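Because every provider sits behind the same gateway URL and header scheme, switching models can be reduced to a simple lookup. A small illustrative helper (the alias names are our own; the model identifiers match the providers listed above):

```python
# Map a short provider alias to a model identifier; illustrative only.
MODELS = {
    "openai": "gpt-4o",
    "anthropic": "claude-3-5-sonnet-latest",
    "mistral": "mistral-large-latest",
}

def model_for(provider: str) -> str:
    """Resolve a provider alias to the model string passed to LLM(...)."""
    try:
        return MODELS[provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}") from None

print(model_for("anthropic"))  # claude-3-5-sonnet-latest
```

Keeping the mapping in one place means an agent's provider can be changed in configuration without touching agent logic.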

Supported Providers

See the full list of LLM providers supported by Obiguard.

Set Up Enterprise Governance for CrewAI

Why Enterprise Governance?

If you are using CrewAI inside your organization, you need to consider several governance aspects:

  • Cost Management: Controlling and tracking AI spending across teams
  • Access Control: Managing which teams can use specific models
  • Usage Analytics: Understanding how AI is being used across the organization
  • Security & Compliance: Maintaining enterprise security standards
  • Reliability: Ensuring consistent service across all users

Obiguard adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.

1

Create guardrail policy

You can create a guardrail policy to protect your data and ensure compliance with organizational policies. Add guardrail validators to your LLM inputs and outputs to govern your LLM usage.

2

Create Virtual Key

Virtual Keys are Obiguard’s secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys.

To create a virtual key:

  • Go to Virtual Keys in the Obiguard dashboard
  • Select the guardrail policy and your LLM provider
  • Save and copy the virtual key ID

Save your virtual key ID - you’ll need it for the next step.
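The "disposable credit card" analogy amounts to an indirection layer: agents only ever see the virtual key, while the provider key behind it can be rotated without touching agent code. A toy illustration (Obiguard stores these mappings encrypted server-side; the key values here are made up):

```python
# Toy vault mapping virtual keys to provider keys; illustrative only.
vault = {"vk-obg-123": "sk-provider-old"}

def resolve(virtual_key):
    """What the gateway does per request: swap the virtual key for the real one."""
    return vault[virtual_key]

def rotate(virtual_key, new_provider_key):
    """Rotate the underlying provider key; callers keep using the same virtual key."""
    vault[virtual_key] = new_provider_key

rotate("vk-obg-123", "sk-provider-new")
print(resolve("vk-obg-123"))  # sk-provider-new
```

Because agents reference only the virtual key, rotation or provider changes never require a code deploy.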

3

Connect to CrewAI

After setting up your Obiguard API key with the attached config, connect it to your CrewAI agents:

from crewai import Agent, LLM
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

# Configure LLM with your API key
obiguard_llm = LLM(
  model="gpt-4o",
  base_url=OBIGUARD_GATEWAY_URL,
  api_key="YOUR_OBIGUARD_API_KEY"  # Obiguard API key with the attached config
)

# Create agent with Obiguard-enabled LLM
researcher = Agent(
  role="Senior Research Scientist",
  goal="Discover groundbreaking insights about the assigned topic",
  backstory="You are an expert researcher with deep domain knowledge.",
  verbose=True,
  llm=obiguard_llm
)

Enterprise Features Now Available

Your CrewAI integration now has:

  • Departmental budget controls
  • Model access governance
  • Usage tracking & attribution
  • Security guardrails
  • Reliability features
