Obiguard provides a robust, secure gateway for integrating Large Language Models (LLMs) into your applications, including all text generation models supported by Hugging Face Inference Endpoints.

With Obiguard, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, all while ensuring the secure management of your LLM API keys through a virtual key system.

Provider Slug: huggingface

Obiguard SDK Integration with Hugging Face

Obiguard provides a consistent API to interact with models from various providers. To integrate Hugging Face with Obiguard:

1. Install the Obiguard SDK

Add the Obiguard SDK to your application to interact with Hugging Face’s API through Obiguard’s gateway.

pip install obiguard
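
You can verify the install with a quick import check (the import path matches the one used in the next step):

python -c "from obiguard import Obiguard"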

2. Initialize Obiguard with the Virtual Key

To use Hugging Face with Obiguard, get your Hugging Face access token from your account settings (https://huggingface.co/settings/tokens), then add it to Obiguard to create a virtual key.

from obiguard import Obiguard

client = Obiguard(
  obiguard_api_key="sk-obg***", # Your Obiguard API key # Replace with your Obiguard API key
  virtual_key="VIRTUAL_KEY", # Replace with your virtual key for Hugging Face
  huggingface_base_url="Hugging Face_DEDICATED_URL" # Optional: Use this if you have a dedicated server hosted on Hugging Face
)

3. Invoke Chat Completions with Hugging Face

Use the Obiguard client to send requests to Hugging Face. You can also override the virtual key directly in the API call if needed, as sketched after the example below.

chat_completion = client.chat.completions.create(
  messages=[{"role": "user", "content": "Say this is a test"}],
  model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # make sure your model is deployed and warm
)

print(chat_completion.choices[0].message.content)
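
To route a single request through a different virtual key, a minimal sketch (the per-call virtual_key argument is an assumption about the SDK's override mechanism; check the SDK reference for the exact name):

chat_completion = client.chat.completions.create(
  messages=[{"role": "user", "content": "Say this is a test"}],
  model="meta-llama/Meta-Llama-3.1-8B-Instruct",
  virtual_key="ANOTHER_HUGGINGFACE_VIRTUAL_KEY",  # assumed per-call override of the client-level virtual key
)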

Using Virtual Keys

Virtual Keys serve as Obiguard’s unified authentication system for all LLM interactions, simplifying the use of multiple providers and Obiguard features within your application. For self-hosted LLMs, you can configure custom authentication requirements including authorization keys, bearer tokens, or any other headers needed to access your model.

  1. Navigate to Virtual Keys in your Obiguard dashboard
  2. Click “Add Key” and enable the “Local/Privately hosted provider” toggle
  3. Configure your deployment:
  • Select the matching provider API specification (typically OpenAI)
  • Enter your model’s base URL in the Custom Host field
  • Add required authentication headers and their values

You can now use this virtual key in your requests:

client = Obiguard(
  obiguard_api_key="vk-obg***",  # Your Obiguard virtual key
  virtual_key="YOUR_SELF_HOSTED_LLM_VIRTUAL_KEY"
)

response = client.chat.completions.create(
  model="your-self-hosted-model-name",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)
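
As in the earlier Hugging Face example, the response follows the OpenAI-compatible schema, so the reply is read the same way:

print(response.choices[0].message.content)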

For more information about managing self-hosted LLMs with Obiguard, see Bring Your Own LLM.

Next Steps

The complete list of features supported in the SDK is available at the link below.

Obiguard SDK Client

Learn more about the Obiguard SDK Client