OpenAI Agents SDK enables the development of complex AI agents with tools, planning, and memory capabilities.
Obiguard enhances OpenAI Agents with observability, reliability, and production-readiness features.
Obiguard turns your experimental OpenAI Agents into production-ready systems by providing:
Complete observability of every agent step, tool use, and interaction
Cost tracking and optimization to manage your AI spend
Access to 200+ LLMs through a single integration
Guardrails to keep agent behavior safe and compliant
For a simple setup, we’ll use the global client approach:
```python
from agents import (
    set_default_openai_client,
    set_default_openai_api,
    Agent,
    Runner,
)
from openai import AsyncOpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders
import os

# Set up Obiguard as the global client
client = AsyncOpenAI(
    base_url=OBIGUARD_GATEWAY_URL,
    api_key=os.environ["OBIGUARD_API_KEY"],
    default_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    ),
)

# Register as the SDK-wide default
set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Responses API -> Chat Completions
```
What are Virtual Keys? Virtual keys in Obiguard securely store your LLM provider API keys (OpenAI, Anthropic,
etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual
keys here.
Let’s create a simple question-answering agent with OpenAI Agents SDK and Obiguard.
This agent will respond directly to user messages using a language model:
```python
from agents import (
    set_default_openai_client,
    set_default_openai_api,
    Agent,
    Runner,
)
from openai import AsyncOpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders
import os

# Set up Obiguard as the global client
client = AsyncOpenAI(
    base_url=OBIGUARD_GATEWAY_URL,
    api_key=os.environ["OBIGUARD_API_KEY"],
    default_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    ),
)

# Register as the SDK-wide default
set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Responses API -> Chat Completions

# Create an agent with any supported model
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o",  # Any model available through Obiguard
)

# Run the agent
result = Runner.run_sync(agent, "Tell me about quantum computing.")
print(result.final_output)
```
In this example:
We set up Obiguard as the global client for OpenAI Agents SDK
We create a simple agent with instructions and a model
We run the agent synchronously with a user query
We print the final output
Visit your Obiguard dashboard to see detailed logs of this agent’s execution!
There are three ways to integrate Obiguard with OpenAI Agents SDK, each suited for different scenarios:
Set a global client that affects all agents in your application:
```python
from agents import (
    set_default_openai_client,
    set_default_openai_api,
    set_tracing_disabled,
    Agent,
    Runner,
)
from openai import AsyncOpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders
import os

# Set up Obiguard as the global client
client = AsyncOpenAI(
    base_url=OBIGUARD_GATEWAY_URL,
    api_key=os.environ["OBIGUARD_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
    ),
)

# Register it as the SDK-wide default
set_default_openai_client(client, use_for_tracing=False)  # skip OpenAI tracing
set_default_openai_api("chat_completions")  # Responses API -> Chat Completions
set_tracing_disabled(True)  # optional

# Regular agent code, just a model name
agent = Agent(
    name="Haiku Writer",
    instructions="Respond only in haikus.",
    model="claude-3-7-sonnet-latest",
)

print(Runner.run_sync(agent, "Write a haiku on recursion.").final_output)
```
Best for: Whole application migration to Obiguard with minimal code changes
Use a custom ModelProvider to control which runs use Obiguard:
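A minimal sketch of the `ModelProvider` approach, assuming the SDK's `ModelProvider`, `OpenAIChatCompletionsModel`, and `RunConfig` interfaces; the model name and prompt are illustrative:

```python
from agents import (
    Agent,
    Runner,
    RunConfig,
    ModelProvider,
    Model,
    OpenAIChatCompletionsModel,
)
from openai import AsyncOpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

# Obiguard-backed client, used only for runs that opt in
obiguard_client = AsyncOpenAI(
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    ),
)

class ObiguardModelProvider(ModelProvider):
    """Routes model lookups for a specific run through Obiguard."""

    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or "gpt-4o-mini",
            openai_client=obiguard_client,
        )

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

# Only this run goes through Obiguard; other runs keep the default client
result = Runner.run_sync(
    agent,
    "Summarize the benefits of per-run routing.",
    run_config=RunConfig(model_provider=ObiguardModelProvider()),
)
print(result.final_output)
```

Because the provider is passed per run via `RunConfig`, you can mix Obiguard-routed and directly-routed runs in the same application.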
Research Agent with Tools: Here’s a more comprehensive agent that can use tools to perform tasks.
```python
from agents import Agent, Runner, function_tool, set_default_openai_client
from openai import AsyncOpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

# Configure the Obiguard client
client = AsyncOpenAI(
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    ),
)
set_default_openai_client(client)

# Define agent tools
@function_tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"It's 72°F and sunny in {location}."

@function_tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Found information about: {query}"

# Create an agent with tools
agent = Agent(
    name="Research Assistant",
    instructions=(
        "You are a helpful assistant that can search for information "
        "and check the weather."
    ),
    model="claude-3-opus-20240229",
    tools=[get_weather, search_web],
)

# Run the agent
result = Runner.run_sync(
    agent,
    "What's the weather in San Francisco and find information about Golden Gate Bridge?",
)
print(result.final_output)
```
Visit your Obiguard dashboard to see the complete execution flow visualized!
OpenAI Agents SDK natively supports tools that enable your agents to interact with external systems and APIs.
Obiguard provides full observability for tool usage in your agents:
```python
from agents import Agent, Runner, function_tool, set_default_openai_client
from openai import AsyncOpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

# Configure the Obiguard client with tracing metadata
client = AsyncOpenAI(
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
        trace_id="tools_example",
        metadata={"agent_type": "research"},
    ),
)
set_default_openai_client(client)

# Define tools
@function_tool
def get_weather(location: str, unit: str = "fahrenheit") -> str:
    """Get the current weather in a given location."""
    return f"The weather in {location} is 72 degrees {unit}"

@function_tool
def get_population(city: str, country: str) -> str:
    """Get the population of a city."""
    return f"The population of {city}, {country} is 1,000,000"

# Create an agent with tools
agent = Agent(
    name="Research Assistant",
    instructions=(
        "You are a helpful assistant that can look up weather "
        "and population information."
    ),
    model="gpt-4o-mini",
    tools=[get_weather, get_population],
)

# Run the agent
result = Runner.run_sync(
    agent,
    "What's the weather in San Francisco and what's the population of Tokyo, Japan?",
)
print(result.final_output)
```
Reliability: Ensuring consistent service across all users
Obiguard adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.
Enterprise Implementation Guide
Obiguard allows you to use 200+ LLMs with your OpenAI Agents setup, with minimal configuration required. Let's set up the core components in Obiguard that you'll need for integration.
1. Create a guardrail policy
You can create a guardrail policy to protect your data and ensure compliance with organizational policies.
Add guardrail validators to your LLM inputs and outputs to govern your LLM usage.
2. Create a virtual key
Virtual Keys are Obiguard’s secure way to manage your LLM provider API keys.
Think of them like disposable credit cards for your LLM API keys.
To create a virtual key:
Go to Virtual Keys in the Obiguard dashboard and select your guardrail policy and LLM provider.
Save and copy the virtual key ID; you'll need it in the next step.
3. Configure your client
Once you have created your API key and attached the default config, you can pass the API key and base URL directly to the AsyncOpenAI client. Here's how:
```python
from obiguard import createHeaders, OBIGUARD_GATEWAY_URL
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="vk-obg***",  # Your Obiguard virtual key
    base_url=OBIGUARD_GATEWAY_URL,
)
# The rest of your code remains the same
```
Obiguard adds production-readiness to OpenAI Agents through comprehensive observability (traces, logs, metrics),
reliability features (fallbacks, retries, caching), and access to 200+ LLMs through a unified interface. This makes
it easier to debug, optimize, and scale your agent applications.
Yes! Obiguard integrates seamlessly with existing OpenAI Agents. You only need to replace your client initialization
code with the Obiguard-enabled version. The rest of your agent code remains unchanged.
Obiguard supports all OpenAI Agents SDK features, including tool use, memory, planning, and more. It adds
observability and reliability without limiting any of the SDK’s functionality.
Obiguard fully supports streaming responses in OpenAI Agents. You can enable streaming by using the appropriate
methods in the OpenAI Agents SDK, and Obiguard will properly track and log the streaming interactions.
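A minimal streaming sketch, assuming the SDK's `Runner.run_streamed` and `stream_events` interfaces and that Obiguard has already been registered as the global client; the prompt is illustrative:

```python
import asyncio
from agents import Agent, Runner

async def main() -> None:
    agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

    # A streamed run returns an async event stream instead of a final result
    result = Runner.run_streamed(agent, "Explain streaming in one paragraph.")
    async for event in result.stream_events():
        # Print raw model deltas as they arrive; Obiguard logs the
        # full streamed interaction on its side
        if event.type == "raw_response_event":
            delta = getattr(event.data, "delta", None)
            if isinstance(delta, str):
                print(delta, end="", flush=True)
    print()

asyncio.run(main())
```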
Obiguard allows you to add custom metadata to your agent runs, which you can then use for filtering. Add fields like
agent_name, agent_type, or session_id to easily find and analyze specific agent executions.
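As a sketch, metadata can be attached through `createHeaders` when constructing the client; the field values below are illustrative:

```python
from openai import AsyncOpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

# Attach custom metadata so these runs can be filtered in the dashboard
client = AsyncOpenAI(
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
        metadata={
            "agent_name": "support_bot",       # illustrative values
            "agent_type": "customer_support",
            "session_id": "session-1234",
        },
    ),
)
```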
Yes! Obiguard uses your own API keys for the various LLM providers. It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.