# Integrate vLLM-hosted custom models with Obiguard and take them to production
## Expose your vLLM Server

Start vLLM's OpenAI-compatible server (for example, `vllm serve <model-name>`). By default it listens on `http://localhost:8000/v1`. If Obiguard needs to reach the server from outside your machine, expose it on a publicly reachable URL, for example through a tunnel such as ngrok. A quick sanity check is sketched below.
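Since vLLM follows the OpenAI API schema, you can confirm the server is up by pointing the standard `openai` Python client at it directly. The model name here is an example; use whichever model you passed to `vllm serve`:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default address
    api_key="EMPTY",  # vLLM ignores the key unless started with --api-key
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # example; match your served model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```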
## Install the Obiguard SDK

Install the SDK for your runtime. For Python, this would be along the lines of `pip install obiguard` (the exact package name is assumed here).
## Initialize Obiguard with vLLM custom URL

Pass your vLLM server's URL as the `customHost` parameter (by default, vLLM runs on `http://localhost:8000/v1`) and set `provider` to `openai`, since the vLLM server follows the OpenAI API schema. In the Python SDK, the equivalent parameter is `custom_host`.
## Invoke Chat Completions

Call the chat completions endpoint through the Obiguard client exactly as you would with the OpenAI SDK; Obiguard forwards the request to your vLLM server.
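A sketch of the call, reusing the `client` from the previous step and assuming the SDK mirrors the OpenAI `chat.completions.create` interface:

```python
# `client` is the Obiguard client initialized above.
completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # example; the model vLLM is serving
    messages=[{"role": "user", "content": "What is vLLM?"}],
)
print(completion.choices[0].message.content)
```

The `model` value should match the model your vLLM server was started with.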
When adding your vLLM deployment in Obiguard, select OpenAI as the provider and enter your vLLM server's URL in the Custom Host field.