Obiguard provides a robust and secure platform to observe, govern, and manage your locally or privately hosted custom models using vLLM.
Here’s the list of all model architectures supported by vLLM.
## Integrating Custom Models with Obiguard SDK

### Expose your vLLM Server
Expose your vLLM server using a tunneling service such as ngrok, or any other method you prefer. You can skip this step if you’re self-hosting the Gateway.
```sh
ngrok http 8000 --host-header="localhost:8000"
```
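If your vLLM server isn’t already running, start vLLM’s OpenAI-compatible server first. A minimal sketch, using a placeholder model name (substitute whatever model your deployment serves):

```sh
# Start vLLM's OpenAI-compatible server on the default port 8000
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```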
### Initialize Obiguard with vLLM Custom URL
- Pass your publicly exposed vLLM server URL to Obiguard with `custom_host` (by default, vLLM serves on `http://localhost:8000/v1`).
- Set the target `provider` to `openai`, since the server follows the OpenAI API schema.
```python
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    provider="openai",
    custom_host="https://7cc4-3-235-157-146.ngrok-free.app",  # Your vLLM ngrok URL
    Authorization="AUTH_KEY",  # If you need to pass auth
)
```
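Before wiring the URL into Obiguard, you can sanity-check that the tunnel reaches your vLLM server; vLLM’s OpenAI-compatible API exposes a `/v1/models` endpoint (the ngrok URL below is the placeholder from the example above):

```sh
# List the models served by your vLLM instance through the tunnel;
# add an Authorization header here if your server requires one
curl https://7cc4-3-235-157-146.ngrok-free.app/v1/models
```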
More on `custom_host` here.
### Invoke Chat Completions
Use the Obiguard SDK to invoke chat completions from your model, just as you would with any other provider:
```python
completion = client.chat.completions.create(
    model="your-self-hosted-model-name",  # The model name your vLLM server is serving
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(completion)
```
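Streaming should work the same way; a minimal sketch, assuming the Obiguard SDK mirrors the OpenAI client’s `stream=True` interface:

```python
# Stream tokens as they are generated
stream = client.chat.completions.create(
    model="your-self-hosted-model-name",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    # Each chunk follows the OpenAI streaming schema
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```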
### Using Virtual Keys

Virtual Keys serve as Obiguard’s unified authentication system for all LLM interactions, simplifying the use of multiple providers and Obiguard features within your application. For self-hosted LLMs, you can configure custom authentication requirements including authorization keys, bearer tokens, or any other headers needed to access your model.
- Navigate to Virtual Keys in your Obiguard dashboard
- Click “Add Key” and enable the “Local/Privately hosted provider” toggle
- Configure your deployment:
  - Select the matching provider API specification (typically `OpenAI`)
  - Enter your model’s base URL in the `Custom Host` field
  - Add any required authentication headers and their values
- Click “Create” to generate your virtual key
You can now use this virtual key in your requests:
```python
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    virtual_key="YOUR_SELF_HOSTED_LLM_VIRTUAL_KEY",  # The virtual key created above
)

response = client.chat.completions.create(
    model="your-self-hosted-model-name",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
```
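To print just the generated text, you can index into the response, assuming it mirrors the OpenAI response schema:

```python
# The reply text, assuming the response follows the OpenAI schema
print(response.choices[0].message.content)
```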
For more information about managing self-hosted LLMs with Obiguard, see Bring Your Own LLM.
## Next Steps
Explore the complete list of features supported in the SDK.