Documentation Index
Fetch the complete documentation index at: https://docs.obiguard.ai/llms.txt
Use this file to discover all available pages before exploring further.
Getting Started

1. Install the required packages:

pip install obiguard openai

2. Create an OpenAI client that routes requests through the Obiguard gateway:

from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",  # Your OpenAI API key
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
    )
)
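Conceptually, createHeaders just builds a dictionary of HTTP headers that the gateway reads to route your request. A minimal local sketch of that idea, with the caveat that the x-obiguard-* header names and the make_gateway_headers helper below are illustrative assumptions, not the library's actual implementation:

```python
def make_gateway_headers(**options):
    """Illustrative stand-in for createHeaders: map keyword
    options onto gateway HTTP headers (header names assumed)."""
    return {f"x-obiguard-{key.replace('_', '-')}": value
            for key, value in options.items()}

# Each keyword argument becomes one routing header.
headers = make_gateway_headers(provider="openai", obiguard_api_key="sk-obg***")
print(headers)
```

Passing a dict like this as default_headers means every request the OpenAI client makes carries the routing information the gateway needs.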
Make your agents production-ready with Obiguard

Obiguard makes your agents reliable, robust, and production-grade with its observability suite and AI Gateway. Seamlessly integrate 200+ LLMs with your custom agents using Obiguard. Implement fallbacks, gain granular insights into agent performance and costs, and continuously optimize your AI operations, all with just two lines of code.

Let's walk through each of these use cases.
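The fallback behavior mentioned above is handled by the gateway itself, but the underlying pattern is simple to sketch locally. The snippet below is a hypothetical illustration with fake provider callables; it is not Obiguard's actual fallback configuration:

```python
def call_with_fallback(providers, prompt):
    """Try each provider in order; return the first successful response.
    `providers` is a list of (name, callable) pairs; an exception
    from one provider triggers the next."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # in practice, catch provider-specific errors
            errors[name] = exc
    raise RuntimeError(f"All providers failed: {errors}")

# Fake provider callables for illustration only.
def flaky_primary(prompt):
    raise TimeoutError("primary timed out")

def stable_backup(prompt):
    return f"answer to: {prompt}"

used, answer = call_with_fallback(
    [("openai", flaky_primary), ("anthropic", stable_backup)],
    "What is 2 + 2?",
)
```

With the gateway, this retry-on-failure logic runs server-side, so your agent code stays a single client call.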
Easily switch between 200+ LLMs. Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the provider (and, where needed, the virtual_key) passed to createHeaders.
OpenAI to Azure OpenAI
Anthropic to AWS Bedrock
If you are using OpenAI with CrewAI, your code would look like this:

from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",  # Your OpenAI API key
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
    )
)
To switch to Azure as your provider, add your Azure details to the Obiguard vault (here's how) and use Azure OpenAI via virtual keys:

client = OpenAI(
    api_key="API_KEY",  # Placeholder; the virtual key supplies the Azure credentials
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="azure-openai",
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
        virtual_key="AZURE_VIRTUAL_KEY",  # Azure virtual key
    )
)
If you are using Anthropic with CrewAI, your code would look like this:

client = OpenAI(
    api_key="ANTHROPIC_API_KEY",
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="anthropic",
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
    )
)
To switch to AWS Bedrock as your provider, add your AWS Bedrock details to the Obiguard vault (here's how) and use Bedrock via virtual keys:

client = OpenAI(
    api_key="API_KEY",  # Placeholder; the virtual key supplies the Bedrock credentials
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="bedrock",
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
        virtual_key="AWS_VIRTUAL_KEY",  # Bedrock virtual key
    )
)
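The pattern across all four snippets above is the same: only the arguments to createHeaders change. A small hypothetical helper (gateway_options is not part of the Obiguard SDK) that captures this:

```python
def gateway_options(provider, obiguard_api_key, virtual_key=None):
    """Collect the keyword arguments to pass to createHeaders.
    Only these values change when switching providers; the client,
    base_url, and the rest of your agent code stay untouched."""
    options = {"provider": provider, "obiguard_api_key": obiguard_api_key}
    if virtual_key is not None:
        options["virtual_key"] = virtual_key
    return options

openai_opts = gateway_options("openai", "sk-obg***")
bedrock_opts = gateway_options("bedrock", "sk-obg***", virtual_key="AWS_VIRTUAL_KEY")
```

You would then call createHeaders(**gateway_options(...)), keeping the provider switch to a single line.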
Agent runs can be costly. Tracking agent metrics is crucial for understanding the performance and reliability of your AI agents.
Metrics help identify issues, optimize runs, and ensure that your agents meet their intended goals.
Obiguard automatically logs comprehensive metrics for your AI agents, including cost, tokens used, and latency. Whether you need a broad overview or granular insights into your agent runs, Obiguard’s customizable filters provide the metrics you need. For agent-specific observability, add a trace_id to the request headers of each agent.
from langchain_openai import ChatOpenAI  # assuming LangChain's ChatOpenAI, as commonly used with CrewAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

llm2 = ChatOpenAI(
    api_key="ANTHROPIC_API_KEY",
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
        provider="anthropic",
        trace_id="research_agent1",  # Individual trace ID for per-agent analytics
    )
)
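When a crew has several agents, giving each its own trace_id keeps their metrics separable in the dashboard. A hypothetical helper (agent_header_options is not an Obiguard function) that builds the createHeaders options for a team of agents:

```python
def agent_header_options(agent_names, provider, obiguard_api_key):
    """Build per-agent createHeaders kwargs, one trace_id per agent."""
    return {
        name: {
            "provider": provider,
            "obiguard_api_key": obiguard_api_key,
            "trace_id": name,  # trace_id keyed to the agent for later filtering
        }
        for name in agent_names
    }

opts = agent_header_options(["research_agent1", "writer_agent1"], "anthropic", "sk-obg***")
```

Each agent's LLM client would then be constructed with createHeaders(**opts[agent_name]).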
Agent runs are complex. Logs are essential for diagnosing issues, understanding agent behavior, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Obiguard offers comprehensive logging features that capture detailed information about every action and decision made by your AI agents. Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
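The filtering described above behaves like predicate matching over log records. A local sketch with made-up records, noting that the field names below are assumptions based on the parameters listed, not Obiguard's actual log schema:

```python
def filter_logs(records, **criteria):
    """Return the records whose fields match every given criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# Illustrative records only; real logs come from the Obiguard dashboard.
logs = [
    {"trace_id": "research_agent1", "model": "claude-3-5-sonnet", "tokens": 812},
    {"trace_id": "writer_agent1", "model": "gpt-4o", "tokens": 455},
    {"trace_id": "research_agent1", "model": "claude-3-5-sonnet", "tokens": 102},
]

research_logs = filter_logs(logs, trace_id="research_agent1")
```

This is why setting a distinct trace_id per agent pays off: one filter isolates everything a single agent did.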