Obiguard provides a robust and secure gateway for integrating various Large Language Models (LLMs) into your applications, including locally hosted models served through Ollama.

Provider Slug: ollama

Obiguard SDK Integration with Ollama Models

Obiguard provides a consistent API to interact with models from various providers.

1. Expose your Ollama API

Expose your locally running Ollama API using a tunneling service such as ngrok, or any other method you prefer.

For a walkthrough of using Ollama with ngrok, here’s a useful guide:

ngrok http 11434 --host-header="localhost:11434"
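The --host-header flag rewrites the Host header to localhost:11434 so that Ollama, which listens locally on that port, accepts the tunneled requests. Once the tunnel is up, ngrok prints a public forwarding URL (yours will differ) along the lines of:

Forwarding    https://8cc4-2-234-255-255.ngrok-free.app -> http://localhost:11434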

2. Install the Obiguard SDK

Install the Obiguard SDK in your application to interact with your Ollama API through Obiguard.

pip install obiguard

3. Initialize Obiguard with Ollama URL

Instantiate the Obiguard client, passing your publicly exposed Ollama URL to the custom_host parameter.

from obiguard import Obiguard

client = Obiguard(
  obiguard_api_key="sk-obg***",  # Your Obiguard API key
  provider="ollama",
  custom_host="https://8cc4-2-234-255-255.ngrok-free.app" # Your Ollama ngrok URL
)

For the Ollama integration, pass only the base URL to custom_host, without a version identifier such as /v1 - Obiguard takes care of the rest!
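For example:

custom_host="https://8cc4-2-234-255-255.ngrok-free.app"     # correct: base URL only
custom_host="https://8cc4-2-234-255-255.ngrok-free.app/v1"  # incorrect: version path included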

4. Invoke Chat Completions with Ollama

Use the Obiguard SDK to invoke chat completions from your Ollama model, just as you would with any other provider.

completion = client.chat.completions.create(
  messages=[{"role": "user", "content": "Say this is a test"}],
  model="llama3"
)

print(completion)
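The response follows the familiar OpenAI chat completion schema, so (assuming a successful call) the generated text itself is available at:

print(completion.choices[0].message.content)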

Using Virtual Keys

Virtual Keys serve as Obiguard’s unified authentication system for all LLM interactions, simplifying the use of multiple providers and Obiguard features within your application.

For self-hosted LLMs, you can configure custom authentication requirements including authorization keys, bearer tokens, or any other headers needed to access your models.

  1. Navigate to Virtual Keys in your Obiguard dashboard
  2. Click “Add Key” and enable the “Local/Privately hosted provider” toggle
  3. Configure your deployment:
  • Select the matching provider API specification (typically OpenAI)
  • Enter your model’s base public URL in the Custom Host field
  • Add required authentication headers and their values
  4. Click “Create” to generate your virtual key

You can now use this virtual key in your requests:

client = Obiguard(
  obiguard_api_key="sk-obg***",  # Your Obiguard API key
  virtual_key="YOUR_SELF_HOSTED_LLM_VIRTUAL_KEY"
)

response = client.chat.completions.create(
  model="your-self-hosted-model-name",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(response)

Ollama Tool Calling

The tool calling feature lets models trigger external tools based on conversation context: you define the available functions, the model decides when to use them, and your application executes them and returns the results.

Obiguard supports Ollama Tool Calling and makes it interoperable across multiple providers.

Supported Ollama Models with Tool Calling

Not every Ollama model supports tool calling - check the model’s page in the Ollama library to confirm support (models such as llama3.1 do). Here’s an example:

tools = [{
  "type": "function",
  "function": {
    "name": "getWeather",
    "description": "Get the current weather",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {"type": "string", "description": "City and state"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
      },
      "required": ["location"]
    }
  }
}]


response = client.chat.completions.create(
  model="llama-3.3-70b-versatile",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like in Delhi - respond in JSON"}
  ],
  tools=tools,
  tool_choice="auto"
)

print(response.choices[0].finish_reason)
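When the model decides to call a tool, finish_reason is "tool_calls" and the call details are available on response.choices[0].message.tool_calls. Here’s a minimal sketch of completing the round trip - it assumes a hypothetical local get_weather implementation and that the SDK accepts OpenAI-style message objects in the messages list:

import json

# Hypothetical local implementation backing the "getWeather" tool
def get_weather(location, unit="celsius"):
  return json.dumps({"location": location, "temperature": 30, "unit": unit})

message = response.choices[0].message
if message.tool_calls:
  call = message.tool_calls[0]
  args = json.loads(call.function.arguments)  # e.g. {"location": "Delhi"}
  result = get_weather(**args)

  # Return the tool result so the model can compose a final answer
  followup = client.chat.completions.create(
    model="llama3.1",
    messages=[
      {"role": "user", "content": "What's the weather like in Delhi - respond in JSON"},
      message,  # the assistant message containing the tool call
      {"role": "tool", "tool_call_id": call.id, "content": result}
    ],
    tools=tools
  )
  print(followup.choices[0].message.content)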

Next Steps

Explore the complete list of features supported in the SDK:

SDK