Obiguard makes your agents reliable, robust, and production-grade with its observability suite and AI Gateway. Seamlessly integrate 200+ LLMs with your custom agents, implement fallbacks, gain granular insight into agent performance and costs, and continuously optimize your AI operations, all with just two lines of code. Let's walk through each of these use cases.
Easily switch between 200+ LLMs. Call providers such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the provider and obiguard_api_key in your client configuration.
If you are using OpenAI with CrewAI, your code would look like this:
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
    )
)
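Once the client is configured, requests go through the Obiguard gateway just like regular OpenAI calls. As a quick sanity check, here is a minimal sketch of a completion request; the model name gpt-4o and the prompt are placeholders, not part of the Obiguard setup:

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello through the Obiguard gateway."}],
)
print(response.choices[0].message.content)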
To switch to Azure as your provider, add your Azure details to the Obiguard vault (here’s how) and reference them through a virtual key:
client = OpenAI(
    api_key="API_KEY",  # We will use the virtual key instead
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="azure-openai",
        obiguard_api_key="sk-obg***",    # Your Obiguard API key
        virtual_key="AZURE_VIRTUAL_KEY"  # Azure virtual key
    )
)
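Nothing else in your agent code needs to change after the switch; only the model you request may differ. A sketch under the assumption that your Azure OpenAI deployment is named gpt-4o-deployment (depending on how your virtual key is configured, the deployment may already be pinned there):

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # placeholder Azure deployment name
    messages=[{"role": "user", "content": "Say hello from Azure via Obiguard."}],
)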
Agent runs can be costly. Tracking agent metrics is crucial for understanding the performance and reliability of your AI agents. Metrics help identify issues, optimize runs, and ensure that your agents meet their intended goals.

Obiguard automatically logs comprehensive metrics for your AI agents, including cost, tokens used, and latency. Whether you need a broad overview or granular insight into your agent runs, Obiguard’s customizable filters provide the metrics you need. For agent-specific observability, add a trace ID to the request headers of each agent.
from langchain_openai import ChatOpenAI  # assuming LangChain's ChatOpenAI, as typically used with CrewAI

llm2 = ChatOpenAI(
    api_key="Anthropic_API_Key",
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
        provider="anthropic",
        trace_id="research_agent1"     # Add an individual trace ID for your agent analytics
    )
)
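Because CrewAI agents accept the LLM they should use, you can give each agent its own traced LLM object. A minimal sketch, assuming the standard crewai Agent constructor; the role, goal, and backstory strings are illustrative:

from crewai import Agent

research_agent = Agent(
    role="Researcher",
    goal="Gather and summarize background material",
    backstory="An analyst agent used for research tasks",
    llm=llm2,  # every call from this agent is logged under the research_agent1 trace
)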
Agent runs are complex. Logs are essential for diagnosing issues, understanding agent behavior, and improving performance. They provide a detailed record of agent activity and tool use, which is crucial for debugging and optimizing processes.

Obiguard offers comprehensive logging features that capture detailed information about every action and decision made by your AI agents. Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors, and filter logs by multiple parameters such as trace ID, model, tokens used, and metadata.
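In addition to trace IDs, you can attach metadata to each agent's requests so that logs can be filtered by it. The sketch below assumes createHeaders accepts a metadata dictionary alongside trace_id, mirroring the usage above; the key names are illustrative, so check your Obiguard SDK before relying on this:

llm_support = ChatOpenAI(
    api_key="OPENAI_API_KEY",
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="sk-obg***",
        provider="openai",
        trace_id="support_agent1",
        metadata={"agent": "support", "environment": "staging"},  # assumed parameter; illustrative keys
    )
)

These fields then show up alongside cost, tokens, and latency in the logs view, where they can be used as filters.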