Obiguard provides native integrations with the OpenAI Python SDK and with OpenAI's REST APIs.

Provider slug: openai

Using the Obiguard Gateway

To integrate the Obiguard gateway with OpenAI,

  • Set the base_url to the Obiguard Gateway URL
  • Include Obiguard-specific headers such as provider and obiguard_api_key

Here’s how to apply it to a chat completion request:

Install the Obiguard SDK with pip:

pip install obiguard

Then initialize the client and make the request:

from obiguard import Obiguard

client = Obiguard(
  obiguard_api_key = "sk-obg***",  # Your Obiguard API key
)

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(response.choices[0].message)

This request will be logged automatically by Obiguard and can be viewed in your logs dashboard. Obiguard tracks the tokens used, execution time, and cost for each request. You can also examine detailed request and response data.

Obiguard supports OpenAI’s new “developer” role in chat completions. Starting with o1 models, the developer role replaces the previous system role.

Using the Responses API

OpenAI has introduced a new Responses API that merges the capabilities of Chat Completions and Assistants APIs. Obiguard provides full support for this API, allowing its use with both the Obiguard SDK and the OpenAI SDK.

from obiguard import Obiguard

client = Obiguard(
  obiguard_api_key="sk-obg***",  # Your Obiguard API key
)

response = client.responses.create(
  model="gpt-4.1",
  input="Tell me a three sentence bedtime story about a unicorn."
)

print(response)

The Responses API offers a more adaptable framework for creating agentic applications with integrated tools that run automatically.

Remote MCP Support in Responses API

Learn how Obiguard enables Remote MCP support for OpenAI’s Responses API.

Realtime API

Obiguard seamlessly integrates with OpenAI’s Realtime API, enabling features like logging, cost tracking, and guardrails.


Streaming Responses

Obiguard supports streaming responses through Server-Sent Events (SSE).

from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

client = OpenAI(
  api_key="OPENAI_API_KEY", # defaults to os.environ.get("OPENAI_API_KEY")
  base_url=OBIGUARD_GATEWAY_URL,
  default_headers=createHeaders(
    provider="openai",
    obiguard_api_key="OBIGUARD_API_KEY" # defaults to os.environ.get("OBIGUARD_API_KEY")
  )
)

chat_complete = client.chat.completions.create(
  model="gpt-4",
  messages=[{"role": "user", "content": "Say this is a test"}],
  stream=True
)

for chunk in chat_complete:
  if chunk.choices[0].delta.content:
    print(chunk.choices[0].delta.content, end="", flush=True)

Streaming with the Responses API

You can also stream responses from the Responses API:

response = client.responses.create(
  model="gpt-4.1",
  instructions="You are a helpful assistant.",
  input="Hello!",
  stream=True
)

for event in response:
  print(event)
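Streamed Responses events are typed, so you can filter for just the text fragments. Here is a small helper sketch, assuming OpenAI's "response.output_text.delta" event type with the text fragment on a delta attribute:

```python
def collect_output_text(events):
  """Reassemble the final text from a streamed Responses API run."""
  parts = []
  for event in events:
    # Text chunks arrive as "response.output_text.delta" events;
    # other event types (created, completed, ...) are skipped.
    if getattr(event, "type", None) == "response.output_text.delta":
      parts.append(event.delta)
  return "".join(parts)
```

Passing the streamed response iterator to collect_output_text yields the assembled text instead of raw events.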

Vision Models Support

Obiguard’s multimodal Gateway provides full compatibility with OpenAI vision models. Refer to this guide for additional details.

Using Vision Models with the Responses API

The Responses API also enables processing images alongside text:

response = client.responses.create(
  model="gpt-4.1",
  input=[
    {
      "role": "user",
      "content": [
        {"type": "input_text", "text": "What is in this image?"},
        {
          "type": "input_image",
          "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
        }
      ]
    }
  ]
)
print(response)

Function Calling

Function calling works the same way through your OpenAI or Obiguard SDK calls as it does when calling OpenAI directly. The resulting logs appear in Obiguard, highlighting the functions invoked and their outputs.

Additionally, you can define functions within your prompts and invoke them with the obiguard.prompts.completions.create method.

Function Calling with the Responses API

The Responses API also supports function calling with the same powerful capabilities:

tools = [
  {
    "type": "function",
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
      "type": "object",
      "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      },
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
      },
      "required": ["location", "unit"]
    }
  }
]

response = client.responses.create(
  model="gpt-4.1",
  tools=tools,
  input="What is the weather like in Boston today?",
  tool_choice="auto"
)

print(response)

Image Generation

Obiguard supports multiple modalities for OpenAI. You can make image generation requests through Obiguard’s AI Gateway the same way you make completion calls.

# Define the OpenAI client as shown above

image = client.images.generate(
  model="dall-e-3",
  prompt="Lucy in the sky with diamonds",
  size="1024x1024"
)

print(image.data[0].url)

Audio - Transcription, Translation, and Text-to-Speech

Obiguard’s multimodal Gateway also supports the audio methods of the OpenAI API. Check out the guides below for more info:


Integrated Tools with Responses API

Web Search Tool

Web search delivers accurate and clearly-cited answers from the web, using the same tool as search in ChatGPT:

response = client.responses.create(
  model="gpt-4.1",
  tools=[{
    "type": "web_search_preview",
    "search_context_size": "medium", # Options: "high", "medium" (default), or "low"
    "user_location": {# Optional - for localized results
      "type": "approximate",
      "country": "US",
      "city": "San Francisco",
      "region": "California"
    }
  }],
  input="What was a positive news story from today?"
)
print(response)

Options for search_context_size:

  • high: Most comprehensive context, higher cost, slower response
  • medium: Balanced context, cost, and latency (default)
  • low: Minimal context, lowest cost, fastest response

Responses include citations for URLs found in search results, with clickable references.
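To work with those citations programmatically, here is a small helper sketch. It assumes the annotation shape OpenAI documents for web search: message output items whose content parts carry url_citation annotations.

```python
def extract_url_citations(response_dict):
  """Collect (title, url) pairs from url_citation annotations.

  Assumes content parts carry an "annotations" list of
  {"type": "url_citation", "url": ..., "title": ...} entries.
  """
  citations = []
  for item in response_dict.get("output", []):
    if item.get("type") != "message":
      continue
    for part in item.get("content", []):
      for ann in part.get("annotations", []):
        if ann.get("type") == "url_citation":
          citations.append((ann.get("title"), ann.get("url")))
  return citations
```

For example, extract_url_citations(response.model_dump()) would yield the cited pages, assuming a Pydantic response object.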

File Search Tool

File search enables quick retrieval from your knowledge base across multiple file types:

response = client.responses.create(
  model="gpt-4.1",
  tools=[{
    "type": "file_search",
    "vector_store_ids": ["vs_1234567890"],
    "max_num_results": 20,
    "filters": {# Optional - filter by metadata
      "type": "eq",
      "key": "document_type",
      "value": "report"
    }
  }],
  input="What are the attributes of an ancient brown dragon?"
)

print(response)

This tool requires you to first create a vector store and upload files to it. Supports various file formats including PDFs, DOCXs, TXT, and more. Results include file citations in the response.

Enhanced Reasoning

Control the depth of model reasoning for more comprehensive analysis:

response = client.responses.create(
  model="o3-mini",
  input="How much wood would a woodchuck chuck?",
  reasoning={
    "effort": "high"  # Options: "high", "medium", or "low"
  }
)
print(response)

Computer Use Assistant

Obiguard also supports the Computer Use Assistant (CUA) tool, which helps agents control computers or virtual machines through screenshots and actions. This feature is available for select developers as a research preview on premium tiers.

Learn More about Computer use tool here

Managing OpenAI Projects & Organizations in Obiguard

When integrating OpenAI with Obiguard, you can specify your OpenAI organization and project IDs along with your API key. This is particularly useful if you belong to multiple organizations or are accessing projects through a legacy user API key.

Specifying the organization and project IDs helps you maintain better control over your access rules, usage, and costs.

In Obiguard, you can add your Org & Project details by:

  1. Defining a guardrail policy
  2. Generating your virtual key for the guardrail policy
  3. Passing details in a request

Let’s explore each method in more detail.

Using Virtual Keys

When selecting OpenAI from the dropdown menu while creating a virtual key, Obiguard automatically displays optional fields for the organization ID and project ID alongside the API key field.

Get your OpenAI API key from here, then add it to Obiguard to create the virtual key that can be used throughout Obiguard.

While Making a Request

You can also pass your organization and project details directly when making a request using curl, the OpenAI SDK, or the Obiguard SDK.

from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

client = OpenAI(
  api_key="OPENAI_API_KEY",
  organization="org-xxxxxxxxxx",
  project="proj_xxxxxxxxx",
  base_url=OBIGUARD_GATEWAY_URL,
  default_headers=createHeaders(
    provider="openai",
    obiguard_api_key="OBIGUARD_API_KEY"
  )
)

chat_complete = client.chat.completions.create(
  model="gpt-4o",
  messages=[{"role": "user", "content": "Say this is a test"}],
)

print(chat_complete.choices[0].message.content)