Obiguard provides a robust and secure gateway to facilitate the integration of various Large Language Models (LLMs) into your applications, including your locally hosted models through Ollama.
Obiguard SDK Integration with Ollama Models
Obiguard provides a consistent API to interact with models from various providers.
1. Expose your Ollama API
Expose your Ollama API using a tunneling service such as ngrok, or any other method you prefer. With ngrok, the following command tunnels the default Ollama port:
ngrok http 11434 --host-header="localhost:11434"
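Before wiring the tunnel into Obiguard, you can sanity-check it by listing your local models through the public URL (swap in the URL ngrok prints for you):

curl https://8cc4-2-234-255-255.ngrok-free.app/api/tags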
2. Install the Obiguard SDK
Install the Obiguard SDK in your application to interact with your Ollama API through Obiguard.
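Assuming the SDK is published as obiguard on PyPI (matching the imports used below):

pip install obiguard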
3. Initialize Obiguard with Ollama URL
Instantiate the Obiguard client, passing your publicly exposed Ollama URL to the custom_host parameter.
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    provider="ollama",
    custom_host="https://8cc4-2-234-255-255.ngrok-free.app"  # Your Ollama ngrok URL
)
For the Ollama integration, pass only the base URL to custom_host, without the version identifier (such as /v1); Obiguard takes care of the rest!
4. Invoke Chat Completions with Ollama
Use the Obiguard SDK to invoke chat completions from your Ollama model, just as you would with any other provider.
completion = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say this is a test"}]
)
print(completion)
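Streaming should work the same way as with other providers; here is a minimal sketch, assuming the Obiguard client mirrors the OpenAI streaming interface:

stream = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True
)
for chunk in stream:
    # Each chunk carries an incremental delta of the reply
    delta = chunk.choices[0].delta
    if delta and delta.content:
        print(delta.content, end="", flush=True)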
You can also call the gateway's REST API directly:

curl --location 'https://gateway.obiguard.ai/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'x-obiguard-custom-host: https://1eb6-103-180-45-255.ngrok-free.app' \
  --header 'x-obiguard-provider: ollama' \
  --header 'x-obiguard-api-key: $OBIGUARD_API_KEY' \
  --data '{
    "model": "tinyllama",
    "max_tokens": 200,
    "stream": false,
    "messages": [
      {
        "role": "system",
        "content": [{ "type": "text", "text": "You are Batman" }]
      },
      {
        "role": "user",
        "content": [{ "type": "text", "text": "Who is the greatest detective" }]
      },
      {
        "role": "assistant",
        "content": [{ "type": "text", "text": "is it me?" }]
      }
    ]
  }'
Using Virtual Keys

Virtual Keys are Obiguard's unified authentication system for all LLM interactions, simplifying the use of multiple providers and Obiguard features within your application. For self-hosted LLMs, you can configure custom authentication requirements, including authorization keys, bearer tokens, or any other headers needed to access your models:
- Navigate to Virtual Keys in your Obiguard dashboard
- Click “Add Key” and enable the “Local/Privately hosted provider” toggle
- Configure your deployment:
  - Select the matching provider API specification (typically OpenAI)
  - Enter your model’s base public URL in the Custom Host field
  - Add any required authentication headers and their values
- Click “Create” to generate your virtual key
You can now use this virtual key in your requests:
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    virtual_key="YOUR_SELF_HOSTED_LLM_VIRTUAL_KEY"
)

response = client.chat.completions.create(
    model="your-self-hosted-model-name",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response)
Alternatively, you can keep using the OpenAI SDK and route requests through the Obiguard gateway by overriding the base URL and default headers:

from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",  # Your OpenAI API key
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        obiguard_api_key="vk-obg******",  # Your Obiguard virtual key
    )
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(completion.choices[0].message)

The same request over the REST API:
curl https://gateway.obiguard.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-obiguard-api-key: $OBIGUARD_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
Tool Calling

The tool calling feature lets models trigger external tools based on conversation context: you define the available functions, the model decides when to use them, and your application executes them and returns the results (see the execute-and-return sketch after the examples below). Obiguard supports Ollama tool calling and makes it interoperable across multiple providers.
Supported Ollama Models with Tool Calling
tools = [{
    "type": "function",
    "function": {
        "name": "getWeather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather like in Delhi - respond in JSON"}
    ],
    tools=tools,
    tool_choice="auto"
)
print(response.choices[0].finish_reason)
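When the model decides to call a tool, your application runs the function and sends the result back so the model can compose its final answer. Here is a minimal sketch of that loop, continuing from the example above; the getWeather body is a hypothetical stand-in, and it assumes the response objects mirror the OpenAI SDK:

import json

def getWeather(location, unit="celsius"):
    # Hypothetical stand-in: call your real weather API here
    return {"location": location, "temperature": 30, "unit": unit}

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = getWeather(**args)

    # Return the tool output so the model can produce its final reply
    followup = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {"role": "user", "content": "What's the weather like in Delhi - respond in JSON"},
            message,
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}
        ],
        tools=tools
    )
    print(followup.choices[0].message.content)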
curl -X POST "https://gateway.obiguard.ai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $YOUR_OBIGUARD_VIRTUAL_KEY" \
  -d '{
    "model": "llama-3.3-70b-versatile",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What'\''s the weather like in Delhi - respond in JSON"}
    ],
    "tools": [{
      "type": "function",
      "function": {
        "name": "getWeather",
        "description": "Get the current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string", "description": "City and state"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
          },
          "required": ["location"]
        }
      }
    }],
    "tool_choice": "auto"
  }'
Next Steps
Explore the complete list of features supported in the SDK.