Integrate your privately hosted LLMs with Obiguard for unified management, observability, and reliability.
Obiguard’s Bring Your Own LLM feature lets you integrate privately hosted language models into your AI infrastructure, so you can manage both private and commercial LLMs through a single, consistent interface while leveraging Obiguard’s observability and reliability features.
Your private LLM must implement an API specification compatible with one of Obiguard’s supported providers
(e.g., OpenAI’s /chat/completions, Anthropic’s /messages, etc.).
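As a quick sanity check before wiring the deployment into Obiguard, you can call the endpoint directly with an OpenAI-style payload and confirm it answers in the expected shape. This is a minimal sketch; the base URL, auth token, and model name are placeholders for your own deployment:

```python
import requests

# Placeholder values; replace with your deployment's details.
BASE_URL = "https://your-llm-server.com/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": "Bearer YOUR_AUTH_TOKEN"},  # if your server requires auth
    json={
        "model": "YOUR_MODEL_NAME",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()

# An OpenAI-compatible server returns choices[0].message.content
print(resp.json()["choices"][0]["message"]["content"])
```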
Obiguard offers two primary methods to integrate your private LLMs:

1. Using Virtual Keys: store your deployment details securely in Obiguard’s vault
2. Direct Integration: pass deployment details in your requests without storing them
For the first method, create a virtual key that stores your private deployment’s details. After creating your virtual key, you can use it in your applications:
```python
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    virtual_key="YOUR_PRIVATE_LLM_VIRTUAL_KEY"
)

response = client.chat.completions.create(
    model="YOUR_MODEL_NAME",  # The model name your private deployment expects
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
)

print(response.choices[0].message.content)
```
If you prefer not to store your private LLM details in Obiguard’s vault, you can pass them directly in your API requests:
```python
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    provider="openai",  # The API spec your LLM implements
    custom_host="https://your-llm-server.com/v1/",  # Include the API version
    Authorization="Bearer YOUR_AUTH_TOKEN",  # Optional: any auth headers needed
    forward_headers=["Authorization"]  # Headers to forward without processing
)

response = client.chat.completions.create(
    model="YOUR_MODEL_NAME",
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms"}]
)

print(response.choices[0].message.content)
```
The custom_host must include the API version path (e.g., /v1/). Obiguard will automatically append the endpoint path (/chat/completions, /completions, or /embeddings).
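As an illustration of the path concatenation (a sketch, not Obiguard internals):

```python
# Sketch of how the final request URL is formed.
custom_host = "https://your-llm-server.com/v1/"  # must end with the API version
endpoint = "chat/completions"                    # appended by Obiguard per request type

final_url = custom_host.rstrip("/") + "/" + endpoint
print(final_url)  # https://your-llm-server.com/v1/chat/completions
```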
Can I integrate any privately hosted LLM?
Yes, provided it adheres to an API specification supported by Obiguard (e.g., OpenAI, Anthropic, etc.). The model must handle requests and responses in the expected format and be publicly accessible so Obiguard can reach it.
How do I handle multiple deployment endpoints?
Create a separate virtual key for each endpoint, then choose the appropriate one per request, as in the sketch below.
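For example, a minimal sketch with two hypothetical virtual keys, one per regional endpoint (the key names are placeholders):

```python
from obiguard import Obiguard

# One client per deployment endpoint; each virtual key stores that
# endpoint's details in Obiguard's vault.
us_client = Obiguard(
    obiguard_api_key="sk-obg***",
    virtual_key="PRIVATE_LLM_US_EAST",  # placeholder name
)
eu_client = Obiguard(
    obiguard_api_key="sk-obg***",
    virtual_key="PRIVATE_LLM_EU_WEST",  # placeholder name
)

# Pick a client per request, e.g. by user region.
response = eu_client.chat.completions.create(
    model="YOUR_MODEL_NAME",
    messages=[{"role": "user", "content": "Hello from Europe"}],
)
print(response.choices[0].message.content)
```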
Are there any request volume limitations?
Obiguard itself doesn’t impose specific request volume limitations for private LLMs.
Your throughput will be limited only by your private LLM deployment’s capabilities and any rate limits you configure in Obiguard.
Can I use different models with the same private deployment?
Yes, you can specify different model names in your requests as long as your private LLM deployment supports them. The model name is passed through to your deployment.
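A short sketch, assuming your server hosts two models behind the same endpoint (both model names below are placeholders):

```python
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",
    virtual_key="YOUR_PRIVATE_LLM_VIRTUAL_KEY",
)

# Same deployment, different model per request; use names your server actually hosts.
summary = client.chat.completions.create(
    model="llama-3-8b-instruct",  # placeholder
    messages=[{"role": "user", "content": "Summarize: LLM gateways add observability."}],
)
code = client.chat.completions.create(
    model="codellama-13b",  # placeholder
    messages=[{"role": "user", "content": "Write a Python hello world."}],
)

print(summary.choices[0].message.content)
print(code.choices[0].message.content)
```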
Can I mix private and commercial LLMs in the same application?
Absolutely! One of Obiguard’s key benefits is the ability to manage both private and commercial LLMs through a unified interface.
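As a sketch, assuming one virtual key for a commercial provider and one for your private deployment, switching between them is just a matter of which client you call (key and model names are placeholders):

```python
from obiguard import Obiguard

# Two virtual keys: one for a commercial provider, one for your private deployment.
openai_client = Obiguard(
    obiguard_api_key="sk-obg***",
    virtual_key="OPENAI_VIRTUAL_KEY",  # placeholder
)
private_client = Obiguard(
    obiguard_api_key="sk-obg***",
    virtual_key="YOUR_PRIVATE_LLM_VIRTUAL_KEY",
)

prompt = [{"role": "user", "content": "Explain quantum computing in simple terms"}]

# Route some traffic to the private model, the rest to the commercial one.
draft = private_client.chat.completions.create(model="YOUR_MODEL_NAME", messages=prompt)
final = openai_client.chat.completions.create(model="gpt-4o", messages=prompt)
```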