Obiguard provides native integrations with OpenAI's Python SDK and REST APIs.
Using the Obiguard Gateway
To integrate the Obiguard gateway with OpenAI:
- Set the baseURL to the Obiguard Gateway URL
- Include Obiguard-specific headers such as provider, obiguardApiKey, and others
Here’s how to apply this to a chat completion request:
Python SDK
cURL
OpenAI Python SDK
Install the Obiguard SDK with pip:
pip install obiguard
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    provider='openai',
    strict_open_ai_compliance=False
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message)
curl https://gateway.obiguard.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-obiguard-api-key: $OBIGUARD_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
Install the OpenAI and Obiguard SDKs with pip:
pip install openai obiguard
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, Obiguard

obiguard_client = Obiguard(
    obiguard_api_key='sk-obg***',  # Your Obiguard policy group API key
    provider='openai',
)

openai_client = OpenAI(
    api_key="sk-***",  # Your OpenAI API key
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=obiguard_client.copy_headers()
)

completion = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
This request will be logged automatically by Obiguard and can be viewed in your logs dashboard.
Obiguard tracks the tokens used, execution time, and cost for each request.
You can also examine detailed request and response data.
Obiguard supports OpenAI’s new “developer” role in chat completions.
Starting with o1 models, the developer role replaces the previous system role.
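As an illustration, here is a small helper sketch that picks the correct role per model family; the model-prefix check is our assumption and the helper name is hypothetical:

```python
def build_messages(instructions: str, user_input: str, model: str) -> list[dict]:
    """Use the 'developer' role for o1-series models, 'system' otherwise."""
    lead_role = "developer" if model.startswith("o1") else "system"
    return [
        {"role": lead_role, "content": instructions},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("You are a helpful assistant.", "Hello!", "o1-mini")
print(messages[0]["role"])  # developer
```

The resulting list drops into the messages parameter of the chat completion calls above unchanged.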
Using the Responses API
OpenAI has introduced a new Responses API that merges the capabilities of Chat Completions and Assistants APIs.
Obiguard provides full support for this API, allowing its use with both the Obiguard SDK and the OpenAI SDK.
Python SDK
OpenAI Python SDK
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
)

response = client.responses.create(
    model="gpt-4.1",
    input="Tell me a three sentence bedtime story about a unicorn."
)

print(response)
import os
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, Obiguard

obiguard_client = Obiguard(
    obiguard_api_key='sk-obg***',  # Your Obiguard policy group API key
    provider='openai',
)

openai_client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=obiguard_client.copy_headers()
)

response = openai_client.responses.create(
    model="gpt-4o",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)

print(response)
The Responses API offers a more adaptable framework for creating agentic applications with integrated tools that
run automatically.
Remote MCP Support in Responses API
Learn how Obiguard enables Remote MCP support for OpenAI’s Responses API.
Realtime API
Obiguard seamlessly integrates with OpenAI’s Realtime API, enabling features like logging, cost tracking, and guardrails.
Streaming Responses
Obiguard supports streaming responses through Server-Sent Events (SSE).
import os
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, Obiguard

obiguard_client = Obiguard(
    obiguard_api_key='sk-obg***',  # Your Obiguard policy group API key
    provider='openai',
)

openai_client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=obiguard_client.copy_headers()
)

stream = openai_client.responses.create(
    model="gpt-4o",
    input="Write a one-sentence bedtime story about a unicorn.",
    stream=True,
)

for event in stream:
    print(event)
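Under the hood, each streamed event arrives as a Server-Sent Events `data:` line. A minimal standalone parsing sketch, not part of either SDK; the payload shape here is illustrative:

```python
import json

def parse_sse_lines(lines):
    """Yield decoded JSON payloads from Server-Sent Event 'data:' lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives, comments, and event-name lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # OpenAI-style stream terminator
            break
        yield json.loads(payload)

raw = [
    'data: {"type": "response.output_text.delta", "delta": "Once"}',
    '',
    'data: [DONE]',
]
for event in parse_sse_lines(raw):
    print(event["delta"])  # Once
```

Both SDKs handle this parsing for you; the sketch only shows what travels over the wire.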
Streaming with the Responses API
You can also stream responses from the Responses API:
Python SDK
OpenAI Python SDK
response = client.responses.create(
    model="gpt-4.1",
    instructions="You are a helpful assistant.",
    input="Hello!",
    stream=True
)

for event in response:
    print(event)
# Assumes openai_client is configured with the Obiguard gateway as shown above
response = openai_client.responses.create(
    model="gpt-4.1",
    instructions="You are a helpful assistant.",
    input="Hello!",
    stream=True
)

for event in response:
    print(event)
Vision Models Support
Obiguard’s multimodal Gateway provides full compatibility with OpenAI vision models. Refer to this guide for additional details:
Using Vision Models with the Responses API
The Responses API also enables processing images alongside text:
Python SDK
OpenAI Python SDK
from obiguard import Obiguard

prompt = "What is in this image?"
img_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"

obiguard_client = Obiguard(
    obiguard_api_key='vk-obg***',  # Your Obiguard virtual key proxy to OpenAI
    provider='openai',
)

response = obiguard_client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": img_url},
            ],
        }
    ],
)

print(response)
import os
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, Obiguard

prompt = "What is in this image?"
img_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"

obiguard_client = Obiguard(
    obiguard_api_key='sk-obg***',  # Your Obiguard policy group API key
    provider='openai',
)

openai_client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=obiguard_client.copy_headers()
)

response = openai_client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": img_url},
            ],
        }
    ],
)

print(response)
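Besides public URLs, an input_image part can also carry a base64 data URL, which is useful for local files. A small helper sketch; the helper name is ours:

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data URL usable in an input_image part."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# JPEG magic bytes stand in for real image data here.
part = {"type": "input_image", "image_url": to_data_url(b"\xff\xd8\xff")}
print(part["image_url"][:23])  # data:image/jpeg;base64,
```

The resulting part slots into the input list exactly where the URL-based part goes above.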
Function Calling
Function calling works the standard way in both the OpenAI and Obiguard SDKs. The resulting logs appear in Obiguard, highlighting the functions invoked and their outputs.
Additionally, you can define functions within your prompts and invoke the obiguard.prompts.completions.create method as above.
Function Calling with the Responses API
The Responses API also supports function calling with the same powerful capabilities:
Python SDK
OpenAI Python SDK
from obiguard import Obiguard

tools = [
    {
        "type": "function",
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
]

obiguard_client = Obiguard(
    obiguard_api_key='vk-obg***',  # Your Obiguard virtual key proxy to OpenAI
    provider='openai',
)

response = obiguard_client.responses.create(
    model="gpt-4.1",
    tools=tools,
    input="What is the weather like in Boston today?",
    tool_choice="auto"
)

print(response)
import os
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, Obiguard

tools = [
    {
        "type": "function",
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
]

obiguard_client = Obiguard(
    obiguard_api_key='sk-obg***',  # Your Obiguard policy group API key
    provider='openai',
)

openai_client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=obiguard_client.copy_headers()
)

response = openai_client.responses.create(
    model="gpt-4.1",
    tools=tools,
    input="What is the weather like in Boston today?",
    tool_choice="auto"
)

print(response)
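When the model opts to call a tool, the response output contains function_call items whose arguments field is a JSON-encoded string. A hedged dispatch sketch, run here against a simulated output list (the item shape follows the Responses API; the weather handler is a stand-in):

```python
import json

def get_current_weather(location: str, unit: str) -> dict:
    # Stand-in implementation; a real handler would call a weather service.
    return {"location": location, "unit": unit, "temperature": 22}

HANDLERS = {"get_current_weather": get_current_weather}

def dispatch(output_items):
    """Run local handlers for each function_call item in a response output."""
    results = []
    for item in output_items:
        if item.get("type") != "function_call":
            continue  # skip messages, reasoning items, etc.
        args = json.loads(item["arguments"])  # arguments arrive as a JSON string
        results.append(HANDLERS[item["name"]](**args))
    return results

simulated_output = [{
    "type": "function_call",
    "name": "get_current_weather",
    "arguments": '{"location": "Boston, MA", "unit": "celsius"}',
}]
print(dispatch(simulated_output))
```

In a real run you would feed each handler's result back to the model as a function_call_output item so it can compose the final answer.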
Image Generation
Obiguard supports multiple modalities for OpenAI, so you can make image generation requests through Obiguard’s AI Gateway the same way you make completion calls.
import os
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, Obiguard

# Define the OpenAI client as shown above
obiguard_client = Obiguard(
    obiguard_api_key='sk-obg***',  # Your Obiguard policy group API key
    provider='openai',
)

openai_client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=obiguard_client.copy_headers()
)

image = openai_client.images.generate(
    model="dall-e-3",
    prompt="Lucy in the sky with diamonds",
    size="1024x1024"
)
Audio - Transcription, Translation, and Text-to-Speech
Obiguard’s multimodal Gateway also supports the audio methods of the OpenAI API. Check out the guides below for more info:
Web Search
Web search delivers accurate and clearly cited answers from the web, using the same tool as search in ChatGPT:
response = obiguard_client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "web_search_preview",
        "search_context_size": "medium",  # Options: "high", "medium" (default), or "low"
        "user_location": {  # Optional - for localized results
            "type": "approximate",
            "country": "US",
            "city": "San Francisco",
            "region": "California"
        }
    }],
    input="What was a positive news story from today?"
)

print(response)
Options for search_context_size:
- high: Most comprehensive context, higher cost, slower response
- medium: Balanced context, cost, and latency (default)
- low: Minimal context, lowest cost, fastest response
Responses include citations for URLs found in search results, with clickable references.
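As a sketch, those citations can be pulled out of the annotations attached to the response's output text; the field names follow the url_citation annotation shape, and the response content below is simulated:

```python
def extract_citations(output_items):
    """Collect (url, title) pairs from url_citation annotations."""
    cites = []
    for item in output_items:
        for part in item.get("content", []):
            for ann in part.get("annotations", []):
                if ann.get("type") == "url_citation":
                    cites.append((ann["url"], ann.get("title", "")))
    return cites

simulated = [{
    "type": "message",
    "content": [{
        "type": "output_text",
        "text": "Good news today...",
        "annotations": [{"type": "url_citation",
                         "url": "https://example.com/story",
                         "title": "A positive story"}],
    }],
}]
print(extract_citations(simulated))
```

This makes it easy to render the clickable references alongside the answer text.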
File Search
File search enables quick retrieval from your knowledge base across multiple file types:
response = obiguard_client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_1234567890"],
        "max_num_results": 20,
        "filters": {  # Optional - filter by metadata
            "type": "eq",
            "key": "document_type",
            "value": "report"
        }
    }],
    input="What are the attributes of an ancient brown dragon?"
)

print(response)
This tool requires you to first create a vector store and upload files to it. It supports various file formats, including PDF, DOCX, TXT, and more. Results include file citations in the response.
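Filters also compose: beyond a single eq condition, conditions can be combined under a compound filter. A small builder sketch, assuming the "and" compound shape from OpenAI's attribute filtering; the helper names are ours:

```python
def eq(key, value):
    """Equality filter on a single file attribute."""
    return {"type": "eq", "key": key, "value": value}

def all_of(*conditions):
    """Compound filter: every condition must match."""
    return {"type": "and", "filters": list(conditions)}

filters = all_of(eq("document_type", "report"), eq("year", 2024))
print(filters["type"])  # and
```

The resulting dict drops into the "filters" field of the file_search tool above.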
Enhanced Reasoning
Control the depth of model reasoning for more comprehensive analysis:
# Uses the obiguard_client configured earlier
response = obiguard_client.responses.create(
    model="o3-mini",
    input="How much wood would a woodchuck chuck?",
    reasoning={
        "effort": "high"  # Options: "high", "medium", or "low"
    }
)

print(response)
Computer Use Assistant
Obiguard also supports the Computer Use Assistant (CUA) tool, which helps agents control computers or virtual machines through screenshots and actions. This feature is available for select developers as a research preview on premium tiers.
Learn more about the Computer Use tool here.
Managing OpenAI Projects & Organizations in Obiguard
When integrating OpenAI with Obiguard, you can specify your OpenAI organization and project IDs along with your API key.
This is particularly useful if you belong to multiple organizations or are accessing projects through a legacy user API key.
Specifying the organization and project IDs helps you maintain better control over your access rules, usage, and costs.
In Obiguard, you can add your Org & Project details by:
- Defining a guardrail policy
- Generating your virtual key for the guardrail policy
- Passing details in a request
Let’s explore each method in more detail.
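For the request route, the organization and project travel as gateway headers, matching the header names in the cURL example later in this section. A small sketch that assembles them; the helper name is ours:

```python
def obiguard_headers(obiguard_key, org_id=None, project_id=None):
    """Build gateway headers, optionally scoping to an OpenAI org/project."""
    headers = {
        "Content-Type": "application/json",
        "x-obiguard-api-key": obiguard_key,
        "x-obiguard-provider": "openai",
    }
    if org_id:
        headers["x-obiguard-openai-organization"] = org_id
    if project_id:
        headers["x-obiguard-openai-project"] = project_id
    return headers

print(obiguard_headers("sk-obg-example", org_id="org-xxxxxxx"))
```

Omitting org_id and project_id falls back to the defaults associated with your OpenAI API key.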
Using Virtual Keys
When selecting OpenAI from the dropdown menu while creating a virtual key,
Obiguard automatically displays optional fields for the organization ID and project ID alongside the API key field.
Get your OpenAI API key from here, then add it to Obiguard to create the virtual key that can be used throughout Obiguard.
While Making a Request
You can also pass your organization and project details directly when making a request using curl, the OpenAI SDK, or the Obiguard SDK.
OpenAI Python SDK
cURL
Obiguard Python SDK
import os
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, Obiguard

obiguard_client = Obiguard(
    obiguard_api_key='sk-obg***',  # Your Obiguard policy group API key
    provider='openai',
)

openai_client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    organization="org-xxxxxxx",  # Your OpenAI organization ID (standard OpenAI SDK parameter)
    project="proj_xxxxxxx",  # Your OpenAI project ID (standard OpenAI SDK parameter)
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=obiguard_client.copy_headers()
)

chat_complete = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

print(chat_complete.choices[0].message.content)
curl https://gateway.obiguard.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-obiguard-openai-organization: org-xxxxxxx" \
  -H "x-obiguard-openai-project: proj_xxxxxxx" \
  -H "x-obiguard-api-key: $OBIGUARD_API_KEY" \
  -H "x-obiguard-provider: openai" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
    provider='openai',
    strict_open_ai_compliance=False
)

chat_complete = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

print(chat_complete.choices[0].message.content)