Obiguard provides a robust, secure gateway for integrating Large Language Models (LLMs) into your applications, including all of the text generation models served through Hugging Face’s Inference Endpoints.
With Obiguard you get fast AI gateway access, observability, prompt management, and more, while your LLM API keys stay securely managed through a virtual key system.
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Replace with your Obiguard API key
    virtual_key="VIRTUAL_KEY",  # Replace with your virtual key for Hugging Face
    huggingface_base_url="HUGGINGFACE_DEDICATED_URL"  # Optional: use this if you have a dedicated server hosted on Hugging Face
)
Alternatively, you can use the OpenAI SDK and route requests through Obiguard’s gateway by pointing the client at the gateway URL and attaching Obiguard headers:

from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="HUGGINGFACE_ACCESS_TOKEN",  # Your Hugging Face access token
    base_url=OBIGUARD_GATEWAY_URL,
    default_headers=createHeaders(
        obiguard_api_key="sk-obg***",  # Your Obiguard API key
        provider="huggingface",
        huggingface_base_url="HUGGINGFACE_DEDICATED_URL"  # Optional: dedicated endpoint URL
    )
)
Use the client instance to send requests to Hugging Face. You can also override the virtual key directly in the API call if needed; a sketch of this override follows the example below.
chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # make sure your model is hot (loaded and ready to serve)
)
print(chat_completion.choices[0].message.content)
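If you need a different Hugging Face virtual key for a single request, the override mentioned above might look like the sketch below. The virtual_key keyword argument on create is an assumption about how the SDK exposes per-request overrides, not a confirmed parameter; check the Obiguard SDK reference for the exact mechanism.

# Hypothetical per-request virtual key override. The virtual_key kwarg is an
# assumption, not a confirmed Obiguard SDK parameter; verify in the SDK reference.
chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    virtual_key="ANOTHER_HUGGINGFACE_VIRTUAL_KEY",  # placeholder virtual key
)
print(chat_completion.choices[0].message.content)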
Virtual Keys serve as Obiguard’s unified authentication system for all LLM interactions,
simplifying the use of multiple providers and Obiguard features within your application.
For self-hosted LLMs, you can configure custom authentication requirements, including authorization keys, bearer tokens, or any other headers needed to access your model; the sketch after the setup steps below shows the kind of direct call these settings stand in for.
1. Navigate to Virtual Keys in your Obiguard dashboard.
2. Click “Add Key” and enable the “Local/Privately hosted provider” toggle.
3. Configure your deployment:
   - Select the matching provider API specification (typically OpenAI).
   - Enter your model’s base URL in the Custom Host field.
   - Add required authentication headers and their values.
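For context, the sketch below shows the kind of direct call to a self-hosted, OpenAI-compatible server that these settings describe; the URL, token, and model name are placeholders. The virtual key stores the Custom Host and the authentication headers so your application code no longer carries them.

import requests

# Direct call to a self-hosted, OpenAI-compatible chat completions endpoint.
# All values are placeholders; a virtual key captures the base URL (Custom Host)
# and the Authorization header, so the same request can be routed through Obiguard.
response = requests.post(
    "http://your-llm-host:8080/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_MODEL_TOKEN"},
    json={
        "model": "your-self-hosted-model-name",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])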
You can now use this virtual key in your requests:
from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    virtual_key="YOUR_SELF_HOSTED_LLM_VIRTUAL_KEY"
)

response = client.chat.completions.create(
    model="your-self-hosted-model-name",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
For more information about managing self-hosted LLMs with Obiguard, see Bring Your Own LLM.