OpenAI Swarm
The Obiguard x Swarm integration brings advanced AI gateway capabilities, full-stack observability, and reliability features for building production-ready AI agents.
Swarm is an experimental framework by OpenAI for building multi-agent systems. It showcases the handoff & routines pattern, making agent coordination and execution lightweight, highly controllable, and easily testable. The Obiguard integration extends Swarm’s capabilities with production-ready features like observability, reliability, and more.
Getting Started
Install the Obiguard SDK
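A minimal install sketch, assuming the Obiguard SDK is published on PyPI under the package name `obiguard` (an assumption; check the Obiguard docs). Swarm itself is installed straight from OpenAI’s GitHub repository:

```sh
pip install obiguard
pip install git+https://github.com/openai/swarm.git
```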
Configure the LLM Client used in OpenAI Swarm
To build Swarm Agents with Obiguard, you’ll need one of the following keys:
- Obiguard API Key: Sign up on the Obiguard app and copy your API key.
- Virtual Key: Virtual Keys are a secure way to manage your LLM API keys in one place. Instead of handling multiple API keys in your code, you can store your LLM provider API keys securely in Obiguard’s vault.
Create a Virtual Key in the Obiguard app
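With either key in hand, point an OpenAI client at the Obiguard gateway and hand it to Swarm. This is a minimal sketch: the gateway URL and header name below are assumptions, so check your Obiguard dashboard for the exact values.

```python
from openai import OpenAI
from swarm import Swarm

# Route all OpenAI-compatible traffic through the Obiguard gateway.
client = OpenAI(
    api_key="YOUR_OBIGUARD_API_KEY",        # Obiguard API key from the app
    base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
    default_headers={
        # Optional: a Virtual Key from Obiguard's vault (header name is an assumption)
        "x-obiguard-virtual-key": "YOUR_VIRTUAL_KEY",
    },
)

# Swarm accepts a pre-configured client; every agent call now flows through Obiguard.
swarm_client = Swarm(client=client)
```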
Create and Run an Agent
In this example, we build a simple Weather Agent using OpenAI Swarm with Obiguard.
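A minimal sketch using the `swarm_client` configured above; everything here is standard Swarm code, so only the client routing is Obiguard-specific.

```python
from swarm import Agent

weather_agent = Agent(
    name="Weather Agent",
    instructions="You are a helpful agent that answers questions about the weather.",
)

# Run the agent on a single user message and print the final reply.
response = swarm_client.run(
    agent=weather_agent,
    messages=[{"role": "user", "content": "What is the weather like today?"}],
)
print(response.messages[-1]["content"])
```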
E2E example with Function Calling in OpenAI Swarm
Here’s a complete example showing function calling and agent interaction:
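The sketch below reuses the Obiguard-backed `swarm_client` from the setup step; the `get_weather` function is a stub standing in for a real weather API.

```python
from swarm import Agent

def get_weather(location: str) -> str:
    """Stubbed tool: return the current weather for a location."""
    return f"The current temperature in {location} is 67°F."

weather_agent = Agent(
    name="Weather Agent",
    instructions="Use the get_weather function to answer weather questions.",
    functions=[get_weather],
)

response = swarm_client.run(
    agent=weather_agent,
    messages=[{"role": "user", "content": "What's the weather in New York City?"}],
)
print(response.messages[-1]["content"])
```

Running the script should print something like: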
The current temperature in New York City is 67°F.
Enabling Obiguard Features
By routing your OpenAI Swarm requests through Obiguard, you get access to the following production-grade features:
Interoperability
Call various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, and AWS Bedrock with minimal code changes.
Observability
Get comprehensive logs of agent interactions, including cost, tokens used, response time, and function calls. Send custom metadata for better analytics.
Logs
Access detailed logs of agent executions, function calls, and interactions. Debug and optimize your agents effectively.
Security & Compliance
Implement budget limits, role-based access control, and audit trails for your agent operations.
1. Interoperability - Calling Different LLMs
When building with Swarm, you might want to experiment with different LLMs or use specific providers for different agent tasks. Obiguard makes this seamless: you can switch between OpenAI, Anthropic, Gemini, Mistral, or cloud providers without changing your agent code.
Instead of managing multiple API keys and provider-specific configurations, Obiguard’s Virtual Keys give you a single point of control. Here’s how you can use different LLMs with your Swarm agents:
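For instance, pointing the same agent at Anthropic is just a matter of swapping the Virtual Key and model name. A sketch, again assuming the hypothetical gateway URL and header name from the setup step:

```python
from openai import OpenAI
from swarm import Swarm, Agent

anthropic_client = OpenAI(
    api_key="YOUR_OBIGUARD_API_KEY",
    base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
    default_headers={
        # Virtual Key stored for Anthropic in Obiguard's vault (header name is an assumption)
        "x-obiguard-virtual-key": "ANTHROPIC_VIRTUAL_KEY",
    },
)
swarm_client = Swarm(client=anthropic_client)

# The agent definition is unchanged except for the model name the provider expects.
weather_agent = Agent(
    name="Weather Agent",
    model="claude-3-5-sonnet-latest",
    instructions="You are a helpful agent that answers questions about the weather.",
)
```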
2. Observability - Understand Your Agents
Building agents is only the first step: how do you know they’re working effectively? Obiguard provides comprehensive visibility into your agent operations through multiple lenses:
Metrics Dashboard: Track 40+ key performance indicators such as:
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
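Custom metadata feeds these analytics, letting you slice metrics by environment, team, or agent. A sketch, assuming Obiguard accepts metadata as a JSON-encoded request header (the `x-obiguard-metadata` name is an assumption):

```python
import json
from openai import OpenAI
from swarm import Swarm

client = OpenAI(
    api_key="YOUR_OBIGUARD_API_KEY",
    base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
    default_headers={
        # Tag every request so dashboard metrics can be filtered by these fields.
        "x-obiguard-metadata": json.dumps(
            {"env": "production", "agent": "weather", "team": "ml-platform"}
        ),
    },
)
swarm_client = Swarm(client=client)
```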
3. Logs and Traces
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
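Because logs can be filtered by trace ID, it helps to stamp each agent run with its own ID. A sketch reusing the `client` configured earlier and the OpenAI SDK’s `with_options` to clone it per run; the `x-obiguard-trace-id` header name is an assumption:

```python
import uuid
from swarm import Swarm

# Clone the configured client with a fresh trace ID for this run, so every
# LLM call Swarm makes during the run shares one trace in the logs.
run_client = client.with_options(
    default_headers={"x-obiguard-trace-id": f"weather-run-{uuid.uuid4()}"}
)
swarm_client = Swarm(client=run_client)
```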
4. Security & Compliance - Enterprise-Ready Controls
When deploying agents in production, security is crucial. Obiguard provides enterprise-grade security features:
Budget Controls
Set and monitor spending limits per Virtual Key. Get alerts before costs exceed thresholds.
Access Management
Control who can access what. Assign roles and permissions for your team members.
Audit Logging
Track all changes and access. Know who modified agent settings and when.
Data Privacy
Configure data retention and processing policies to meet your compliance needs.
Configure these settings in the Obiguard Dashboard.