Integrate your privately hosted LLMs with Obiguard for unified management, observability, and reliability.
Your private LLM must be compatible with one of the supported provider API formats (e.g., OpenAI's `/chat/completions`, Anthropic's `/messages`, etc.). When adding your deployment, select the provider whose API format your model follows (e.g., OpenAI) and enter the deployment URL in the **Custom Host** field.

The `custom_host` must include the API version path (e.g., `/v1/`). Obiguard will automatically append the endpoint path (`/chat/completions`, `/completions`, or `/embeddings`), so do not include it in the URL, as illustrated in the sketch below.
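For illustration, here is a minimal sketch of calling a private deployment through Obiguard's gateway. It assumes an OpenAI-compatible gateway; the gateway URL, the `OBIGUARD_API_KEY` environment variable, and the `x-obiguard-custom-host` header name are assumptions for this example rather than confirmed Obiguard identifiers, so substitute the values shown in your Obiguard dashboard.

```python
# Minimal sketch: chat completion against a private LLM routed through
# Obiguard. The URL, env var, and header names below are assumptions --
# replace them with the values from your Obiguard dashboard.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OBIGUARD_API_KEY"],     # assumed env var name
    base_url="https://gateway.obiguard.ai/v1",  # assumed gateway URL
    default_headers={
        # custom_host includes the API version path (/v1) only;
        # Obiguard appends /chat/completions itself.
        "x-obiguard-custom-host": "https://llm.internal.example.com/v1",
    },
)

response = client.chat.completions.create(
    model="my-private-model",  # whatever model name your deployment serves
    messages=[{"role": "user", "content": "Hello from a private LLM!"}],
)
print(response.choices[0].message.content)
```

Because the gateway speaks the OpenAI format in this sketch, any OpenAI-compatible client works unchanged; only the base URL and headers point it at Obiguard.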
If you run into problems connecting to your private LLM, the table below covers common causes and fixes.

Issue | Possible Causes | Solutions |
---|---|---|
Connection Errors | Incorrect URL, network issues, firewall rules | Verify URL format, check network connectivity, confirm firewall allows traffic |
Authentication Failures | Invalid credentials, incorrect header format | Check credentials, ensure headers are correctly formatted and forwarded |
Timeout Errors | LLM server overloaded, request too complex | Adjust timeout settings, implement load balancing, simplify requests |
Inconsistent Responses | Different model versions, configuration differences | Standardize model versions, document expected behavior differences |
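For the timeout row above, a client-side guard is often the quickest fix. The sketch below (reusing the same assumed gateway URL and header names as the earlier example) sets an explicit per-request timeout and retries transient failures with exponential backoff, while letting 4xx client errors fail fast.

```python
# Sketch: explicit timeout plus retry-with-backoff for an overloaded
# private deployment. Retries timeouts and 5xx responses; 4xx errors
# (bad credentials, malformed request) raise immediately.
import os
import time

import requests

def chat_with_retry(payload: dict, retries: int = 3, timeout: float = 30.0) -> dict:
    url = "https://gateway.obiguard.ai/v1/chat/completions"  # assumed URL
    headers = {
        "Authorization": f"Bearer {os.environ['OBIGUARD_API_KEY']}",
        "x-obiguard-custom-host": "https://llm.internal.example.com/v1",
    }
    for attempt in range(retries):
        try:
            resp = requests.post(url, json=payload, headers=headers, timeout=timeout)
            if resp.status_code < 500:
                resp.raise_for_status()  # surface 4xx errors without retrying
                return resp.json()
        except requests.Timeout:
            pass  # overloaded server: fall through to backoff and retry
        time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    raise RuntimeError(f"Private LLM unreachable after {retries} attempts")
```

Usage: `chat_with_retry({"model": "my-private-model", "messages": [{"role": "user", "content": "ping"}]})`.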
Frequently asked questions:

- Can I use any private LLM with Obiguard?
- How do I handle multiple deployment endpoints?
- Are there any request volume limitations?
- Can I use different models with the same private deployment?
- Can I mix private and commercial LLMs in the same application?