Obiguard provides a robust and secure gateway for integrating various Large Language Models (LLMs) into your applications, including locally hosted models served through LocalAI.

Obiguard SDK Integration with LocalAI

1. Install the Obiguard SDK

pip install obiguard

2. Initialize Obiguard with LocalAI URL

First, ensure that your API is externally accessible. If you’re running the API on http://localhost, consider using a tool like ngrok to create a public URL. Then, instantiate the Obiguard client by passing your LocalAI URL (along with the version identifier) to the custom_host parameter, and set the provider to openai.

Note: Don’t forget to include the version identifier (e.g., /v1) in the custom_host URL.

from obiguard import Obiguard

client = Obiguard(
  obiguard_api_key="sk-obg***",  # Your Obiguard API key
  provider="openai",
  custom_host="https://7cc4-3-235-157-146.ngrok-free.app/v1" # Your LocalAI ngrok URL
)

Obiguard currently supports all endpoints that adhere to the OpenAI specification. This means you can access and observe any of your LocalAI models that are exposed through OpenAI-compliant routes.

List of supported endpoints here.
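
For example, if your LocalAI instance exposes an embedding model, you can call the embeddings route through the same client. A minimal sketch, assuming the Obiguard client mirrors the OpenAI method surface (the model name below is an assumption; substitute one your deployment serves):

embedding = client.embeddings.create(
    model="text-embedding-ada-002",  # assumed model name; use one served by your LocalAI instance
    input="The quick brown fox"
)

print(embedding.data[0].embedding)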

3. Invoke Chat Completions

Use the Obiguard SDK to invoke chat completions from your LocalAI model, just as you would with any other provider.

completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="ggml-koala-7b-model-q4_0-r2.bin"
)

print(completion)
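
If your model and deployment support it, you can also stream the response. A sketch assuming the SDK follows the OpenAI streaming interface:

stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="ggml-koala-7b-model-q4_0-r2.bin",
    stream=True  # assumes streaming is enabled for this model in LocalAI
)

for chunk in stream:
    # Each chunk carries an incremental delta of the assistant's reply
    print(chunk.choices[0].delta.content or "", end="")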

Using Virtual Keys

Virtual Keys serve as Obiguard’s unified authentication system for all LLM interactions, simplifying the use of multiple providers and Obiguard features within your application. For self-hosted LLMs, you can configure custom authentication requirements including authorization keys, bearer tokens, or any other headers needed to access your model.

  1. Navigate to Virtual Keys in your Obiguard dashboard
  2. Click “Add Key” and enable the “Local/Privately hosted provider” toggle
  3. Configure your deployment:
    • Select the matching provider API specification (typically OpenAI)
    • Enter your model’s base URL in the Custom Host field
    • Add required authentication headers and their values
  4. Click “Create” to generate your virtual key

You can now use this virtual key in your requests:

from obiguard import Obiguard

client = Obiguard(
  obiguard_api_key="sk-obg***",  # Your Obiguard API key
  virtual_key="YOUR_SELF_HOSTED_LLM_VIRTUAL_KEY"
)

response = client.chat.completions.create(
  model="your-self-hosted-model-name",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)
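
Since the response follows the OpenAI schema, you can read the reply text directly:

print(response.choices[0].message.content)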

LocalAI Endpoints Supported

Endpoint                                           Resource
/chat/completions (Chat, Vision, Tools support)    Doc
/images/generations                                Doc
/embeddings                                        Doc
/audio/transcriptions                              Doc
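
The other routes are invoked through the same OpenAI-style client. A sketch for image generation, assuming the SDK mirrors the OpenAI images method (the model name is an assumption for illustration):

image = client.images.generate(
    model="stablediffusion",  # assumed model name; use one your LocalAI deployment serves
    prompt="A cute baby sea otter",
    size="256x256"
)

print(image.data[0].url)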

Next Steps

Explore the complete list of features supported in the SDK:

SDK