Introduction
OpenAI Agents SDK enables the development of complex AI agents with tools, planning, and memory capabilities. Obiguard enhances OpenAI Agents with observability, reliability, and production-readiness features, turning your experimental OpenAI Agents into production-ready systems by providing:
- Complete observability of every agent step, tool use, and interaction
- Cost tracking and optimization to manage your AI spend
- Access to 200+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
OpenAI Agents SDK Official Documentation
Learn more about OpenAI Agents SDK’s core concepts
Installation & Setup
1
Install the required packages
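A typical setup only needs the OpenAI Agents SDK package, since the gateway-based integration works through the standard OpenAI client (the exact package set for your project may differ):

```shell
pip install openai-agents
```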
2
Generate API Key
Create an Obiguard API key
3
Connect to OpenAI Agents
There are three ways to integrate Obiguard with OpenAI Agents:
- Set a client that applies to all agents in your application
- Use a custom provider for selective Obiguard integration
- Configure each agent individually
4
Configure Obiguard Client
For a simple setup, we’ll use the global client approach:
What are Virtual Keys? Virtual keys in Obiguard securely store your LLM provider API keys (OpenAI, Anthropic,
etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual
keys here.
Getting Started
Let’s create a simple question-answering agent with OpenAI Agents SDK and Obiguard. This agent will respond directly to user messages using a language model. In this example:
- We set up Obiguard as the global client for OpenAI Agents SDK
- We create a simple agent with instructions and a model
- We run the agent synchronously with a user query
- We print the final output
Integration Approaches
There are three ways to integrate Obiguard with OpenAI Agents SDK, each suited to different scenarios.

Set a global client that affects all agents in your application. Best for: whole-application migration to Obiguard with minimal code changes.
| Strategy | Code Touchpoints | Best For |
|---|---|---|
| Global Client via `set_default_openai_client` | One-time setup; agents need only model names | Whole app uses Obiguard; simplest migration |
| `ModelProvider` in `RunConfig` | Add a provider + pass `run_config` | Toggle Obiguard per run; A/B tests, staged rollouts |
| Explicit Model per Agent | Specify `OpenAIChatCompletionsModel` in agent | Mixed fleet: each agent can talk to a different provider |
End-to-End Example
Research Agent with Tools: Here’s a more comprehensive agent that can use tools to perform tasks.
Production Features

1. Enhanced Observability
Obiguard provides comprehensive observability for your OpenAI Agents, helping you understand exactly what’s happening during each execution.

Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
2. Guardrails for Safe Agents
Guardrails ensure your OpenAI Agents operate safely and respond appropriately in all situations.

Why Use Guardrails? OpenAI Agents can experience various failure modes:
- Generating harmful or inappropriate content
- Leaking sensitive information like PII
- Hallucinating incorrect information
- Generating outputs in incorrect formats

Obiguard’s guardrails let you:
- Detect and redact PII in both inputs and outputs
- Filter harmful or inappropriate content
- Validate response formats against schemas
- Check for hallucinations against ground truth
- Apply custom business logic and rules
Learn More About Guardrails
Explore Obiguard’s guardrail features to enhance agent safety
3. Tracing
Obiguard provides an OpenTelemetry-compatible backend to store and query your traces. You can trace your OpenAI Agents using any OpenTelemetry-compatible tracing library.

Tool Use in OpenAI Agents
OpenAI Agents SDK natively supports tools that enable your agents to interact with external systems and APIs. Obiguard provides full observability for tool usage in your agents:
Set Up Enterprise Governance for OpenAI Agents

Why Enterprise Governance? If you are using OpenAI Agents inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
1
Create guardrail policy
You can choose to create a guardrail policy to protect your data and ensure compliance with organizational policies.
Add guardrail validators on your LLM inputs and outputs to govern your LLM usage.
2
Create Virtual Key
Virtual Keys are Obiguard’s secure way to manage your LLM provider API keys.
Think of them like disposable credit cards for your LLM API keys. To create a virtual key:
1. Go to Virtual Keys in the Obiguard dashboard.
2. Select the guardrail policy and your LLM provider.
3. Save and copy the virtual key ID; you’ll need it for the next step.
3
Configure your client

Once you have created your API key after attaching the default config, you can pass the API key and base URL directly to the AsyncOpenAI client. Here’s how:
Enterprise Features Now Available
Your OpenAI Agents integration now has:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features
Frequently Asked Questions
How does Obiguard enhance OpenAI Agents?
Obiguard adds production-readiness to OpenAI Agents through comprehensive observability (traces, logs, metrics),
reliability features (fallbacks, retries, caching), and access to 200+ LLMs through a unified interface. This makes
it easier to debug, optimize, and scale your agent applications.
Can I use Obiguard with existing OpenAI Agents?
Yes! Obiguard integrates seamlessly with existing OpenAI Agents. You only need to replace your client initialization
code with the Obiguard-enabled version. The rest of your agent code remains unchanged.
Does Obiguard work with all OpenAI Agents features?
Obiguard supports all OpenAI Agents SDK features, including tool use, memory, planning, and more. It adds
observability and reliability without limiting any of the SDK’s functionality.
How does Obiguard handle streaming in OpenAI Agents?
Obiguard fully supports streaming responses in OpenAI Agents. You can enable streaming by using the appropriate
methods in the OpenAI Agents SDK, and Obiguard will properly track and log the streaming interactions.
How do I filter logs and traces for specific agent runs?
Obiguard allows you to add custom metadata to your agent runs, which you can then use for filtering. Add fields like agent_name, agent_type, or session_id to easily find and analyze specific agent executions.

Can I use my own API keys with Obiguard?
Yes! Obiguard uses your own API keys for the various LLM providers. It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.