Getting Started
1. Install the required packages:
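A typical install for a CrewAI + LangChain setup might look like the following; the exact `obiguard` package name is an assumption based on the provider name, so check the official docs for the real one:

```shell
# CrewAI for agents, langchain-openai for the OpenAI-compatible client,
# and the (assumed) Obiguard SDK package.
pip install crewai langchain-openai obiguard
```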
2. Configure your OpenAI object:
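The configuration step could be sketched as below, assuming Obiguard exposes an OpenAI-compatible gateway. The gateway URL and header names (`x-obiguard-api-key`, `x-obiguard-provider`) are illustrative assumptions, not confirmed values; take the real ones from your Obiguard dashboard:

```python
from langchain_openai import ChatOpenAI

# The base_url and header names below are assumptions for illustration.
gpt_llm = ChatOpenAI(
    model="gpt-4o",
    api_key="placeholder",  # the gateway holds the real provider key
    base_url="https://gateway.obiguard.ai/v1",  # assumed gateway URL
    default_headers={
        "x-obiguard-api-key": "YOUR_OBIGUARD_API_KEY",  # assumed header name
        "x-obiguard-provider": "openai",                # assumed header name
    },
)
```

This `gpt_llm` object is then passed to your CrewAI agents as their `llm`.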
Make your agents Production-ready with Obiguard
Obiguard makes your agents reliable, robust, and production-grade with its observability suite and AI Gateway. Seamlessly integrate 200+ LLMs with your custom agents, implement fallbacks, gain granular insights into agent performance and costs, and continuously optimize your AI operations, all with just 2 lines of code. Let's go through each of the use cases.

1. Interoperability

Easily switch between 200+ LLMs. Call providers such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the `provider` and `obiguard_api_key` in the `ChatOpenAI` object.
If you are using OpenAI with CrewAI, your code would look like this. To switch to Azure as your provider, add your Azure details to the Obiguard vault (here's how) and use Azure OpenAI through virtual keys.
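Under the assumptions above, the switch would only change the routing values on the same `ChatOpenAI` object; the gateway URL and the `x-obiguard-virtual-key` header name are hypothetical placeholders, not documented names:

```python
from langchain_openai import ChatOpenAI

# Same client as before; only the routing headers change.
azure_llm = ChatOpenAI(
    model="gpt-4o",
    api_key="placeholder",  # credentials live in the Obiguard vault
    base_url="https://gateway.obiguard.ai/v1",  # assumed gateway URL
    default_headers={
        "x-obiguard-api-key": "YOUR_OBIGUARD_API_KEY",      # assumed header name
        "x-obiguard-virtual-key": "AZURE_VIRTUAL_KEY_ID",   # assumed header for vault virtual keys
    },
)
```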
2. Metrics
Agent runs can be costly. Tracking agent metrics is crucial for understanding the performance and reliability of your AI agents. Metrics help identify issues, optimize runs, and ensure that your agents meet their intended goals. Obiguard automatically logs comprehensive metrics for your AI agents, including cost, tokens used, and latency. Whether you need a broad overview or granular insights into your agent runs, Obiguard's customizable filters provide the metrics you need. For agent-specific observability, add a `Trace-id` to the request headers for each agent.
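One way to wire this up is to give each agent's LLM client its own `Trace-id`. Below is a minimal sketch of a helper that builds per-agent headers; the `x-obiguard-api-key` name and the trace-id format are assumptions, so verify both against the Obiguard docs:

```python
import uuid


def obiguard_headers(agent_name, obiguard_api_key, trace_id=None):
    """Build per-agent request headers for LLM calls through the gateway.

    The x-obiguard-api-key header name is an assumption modeled on
    common gateway conventions, not a documented Obiguard name.
    """
    return {
        "x-obiguard-api-key": obiguard_api_key,  # assumed header name
        # One trace id per agent groups all of that agent's runs together.
        "Trace-id": trace_id or f"{agent_name}-{uuid.uuid4().hex[:8]}",
    }


# Generate a distinct trace per agent, then pass the result as
# default_headers on that agent's ChatOpenAI client.
researcher_headers = obiguard_headers("researcher", "YOUR_OBIGUARD_API_KEY")
writer_headers = obiguard_headers("writer", "YOUR_OBIGUARD_API_KEY")
```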