Key Benefits
- Unified API Access: Manage private and commercial LLMs through a single, consistent interface
- Comprehensive Monitoring: Track performance, usage, and costs alongside your commercial LLM usage
- Simplified Access Control: Manage team-specific permissions and usage limits
- Secure Credential Management: Protect sensitive authentication details through Obiguard’s secure vault
Integration Options
Prerequisites
Your private LLM must implement an API specification compatible with one of Obiguard’s supported providers (e.g., OpenAI’s `/chat/completions`, Anthropic’s `/messages`).
Obiguard offers two ways to connect your private LLM:
- Using Virtual Keys: Store your deployment details securely in Obiguard’s vault
- Direct Integration: Pass deployment details in your requests without storing them
Option 1: Using Virtual Keys
Step 1: Add Your Deployment Details
Navigate to the Virtual Keys section in your Obiguard dashboard and create a new Virtual Key.
- Click “Add Key” and enable the “Local/Privately hosted provider” toggle
- Configure your deployment:
  - Select the matching provider API specification (typically `OpenAI`)
  - Enter your model’s base URL in the `Custom Host` field
  - Add required authentication headers and their values
- Click “Create” to generate your virtual key
Step 2: Use Your Virtual Key in Requests
After creating your virtual key, you can use it in your applications.
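The exact client setup depends on Obiguard’s published SDK; the snippet below is a minimal sketch that points the standard OpenAI Python SDK at the gateway. The base URL (`https://api.obiguard.ai/v1`) and the `x-obiguard-virtual-key` header name are assumptions here, so substitute the values shown in your Obiguard dashboard.

```python
# Minimal sketch: route requests through Obiguard using a virtual key.
# NOTE: the base_url and header name below are assumptions; use the
# values from your Obiguard dashboard.
from openai import OpenAI

client = OpenAI(
    api_key="OBIGUARD_API_KEY",             # your Obiguard API key
    base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
    default_headers={
        "x-obiguard-virtual-key": "YOUR_VIRTUAL_KEY",  # assumed header name
    },
)

response = client.chat.completions.create(
    model="your-model-name",  # passed through to your private deployment
    messages=[{"role": "user", "content": "Hello from my private LLM!"}],
)
print(response.choices[0].message.content)
```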
Option 2: Direct Integration Without Virtual Keys
If you prefer not to store your private LLM details in Obiguard’s vault, you can pass them directly in your API requests.
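Again as a hedged sketch via the OpenAI SDK: the `x-obiguard-provider` and `x-obiguard-custom-host` header names are assumptions for illustration, so consult Obiguard’s API reference for the exact headers, including how authentication headers are forwarded to your deployment.

```python
# Minimal sketch: direct integration without a virtual key.
# NOTE: the x-obiguard-* header names are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    api_key="OBIGUARD_API_KEY",
    base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
    default_headers={
        "x-obiguard-provider": "openai",  # API spec your deployment implements
        "x-obiguard-custom-host": "https://your-llm.example.com/v1/",  # include the version path
        # Any auth headers your deployment requires would be added here as
        # well; see Obiguard's docs for the exact forwarding mechanism.
    },
)

response = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Ping"}],
)
print(response.choices[0].message.content)
```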
The `custom_host` must include the API version path (e.g., `/v1/`). Obiguard will automatically append the endpoint path (`/chat/completions`, `/completions`, or `/embeddings`).
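To make the path handling concrete, here is the composition the note above describes:

```python
# How the final request URL is assembled (per the note above):
custom_host = "https://your-llm.example.com/v1/"  # must end with the API version path
endpoint = "chat/completions"                     # appended automatically by Obiguard
print(custom_host + endpoint)
# -> https://your-llm.example.com/v1/chat/completions
```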
Advanced Features
Monitoring and Analytics
Obiguard provides comprehensive observability for your private LLM deployments, just like it does for commercial providers:
- Log Analysis: View detailed request and response logs
- Performance Metrics: Track latency, token usage, and error rates
- User Attribution: Associate requests with specific users via metadata (see the sketch below)
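As an illustration of user attribution, a sketch follows; the `x-obiguard-metadata` header name and JSON payload shape are assumptions, so check Obiguard’s metadata documentation for the real mechanism.

```python
# Hypothetical sketch: tag each request with user metadata for attribution.
# NOTE: the x-obiguard-metadata header name and payload shape are assumed.
import json
from openai import OpenAI

client = OpenAI(
    api_key="OBIGUARD_API_KEY",
    base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
    default_headers={
        "x-obiguard-virtual-key": "YOUR_VIRTUAL_KEY",
        "x-obiguard-metadata": json.dumps({"user": "alice@example.com"}),
    },
)
```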
Troubleshooting
| Issue | Possible Causes | Solutions |
|---|---|---|
| Connection Errors | Incorrect URL, network issues, firewall rules | Verify URL format, check network connectivity, confirm firewall allows traffic |
| Authentication Failures | Invalid credentials, incorrect header format | Check credentials, ensure headers are correctly formatted and forwarded |
| Timeout Errors | LLM server overloaded, request too complex | Adjust timeout settings (see the sketch below), implement load balancing, simplify requests |
| Inconsistent Responses | Different model versions, configuration differences | Standardize model versions, document expected behavior differences |
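For the timeout row above, client-side tuning is often the quickest fix; this sketch uses the OpenAI SDK’s `timeout` and `max_retries` options (the gateway URL and header name are assumed, as in the earlier examples).

```python
# Sketch: tolerate a slow private deployment with longer timeouts and retries.
from openai import OpenAI

client = OpenAI(
    api_key="OBIGUARD_API_KEY",
    base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
    default_headers={"x-obiguard-virtual-key": "YOUR_VIRTUAL_KEY"},
    timeout=120.0,  # seconds; raise for overloaded or slow LLM servers
    max_retries=2,  # retry transient connection failures
)
```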
FAQs
Can I use any private LLM with Obiguard?
Yes, provided it adheres to an API specification supported by Obiguard (e.g., OpenAI, Anthropic, etc.).
The model must handle requests and responses in the expected format and be publicly accessible.
How do I handle multiple deployment endpoints?
Create separate virtual keys for each endpoint.
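In practice that can look like one client per virtual key, as in this sketch (gateway URL and header name assumed, as above):

```python
# Sketch: one client per virtual key, each key pointing at a different
# private deployment endpoint configured in the Obiguard dashboard.
from openai import OpenAI

def obiguard_client(virtual_key: str) -> OpenAI:
    return OpenAI(
        api_key="OBIGUARD_API_KEY",
        base_url="https://api.obiguard.ai/v1",  # hypothetical gateway URL
        default_headers={"x-obiguard-virtual-key": virtual_key},
    )

primary = obiguard_client("VK_PRIMARY_ENDPOINT")
fallback = obiguard_client("VK_FALLBACK_ENDPOINT")
```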
Are there any request volume limitations?
Obiguard itself doesn’t impose specific request volume limitations for private LLMs.
Your throughput will be limited only by your private LLM deployment’s capabilities and any rate limits you configure in Obiguard.
Can I use different models with the same private deployment?
Yes, you can specify different model names in your requests as long as your private LLM deployment supports them. The model name is passed through to your deployment.
Can I mix private and commercial LLMs in the same application?
Absolutely! One of Obiguard’s key benefits is the ability to manage both private and commercial LLMs through a unified interface.