LLMs can be unpredictable, and not just in terms of API reliability or unexpected 400/500 errors: a response with a 200 status code can still break your application’s workflow if its output is unexpected or malformed. Obiguard’s AI Guardrails help you enforce consistent LLM behavior in real time by running guardrails directly on the Gateway. Use them to validate both your LLM inputs and outputs against the checks you specify. Because Guardrails are built on our Gateway, they can orchestrate requests with actions such as denying the request, logging the result, falling back to an alternative LLM or prompt, retrying the request, and more. Here are some examples of guardrails offered by Obiguard:
  • Regex Match: Checks if the request or response text matches a specific regex pattern.
  • JSON Schema: Verifies that the response JSON adheres to a defined schema.
  • Code Detection: Detects code snippets in formats such as SQL, Python, or TypeScript.
  • …and more.
Obiguard includes over 20 deterministic guardrails, along with LLM-based options like gibberish detection and prompt injection scanning. These guardrails provide robust protection, enabling organizations to deploy Gen AI securely and responsibly.
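To build intuition for what these deterministic checks evaluate, here is a minimal, self-contained sketch of regex-match and JSON-schema style validations in plain Python. It is purely illustrative: the function names and pass/fail shape are assumptions for explanation, not Obiguard’s implementation or API.

import json
import re

def regex_match_check(text: str, pattern: str) -> bool:
    # Passes when the text contains a match for the given pattern.
    # (In a real guardrail, a match could be required or forbidden,
    # depending on how the check is configured.)
    return re.search(pattern, text) is not None

def json_schema_check(text: str, required_keys: list[str]) -> bool:
    # Passes when the text parses as JSON and contains every required key.
    # (Obiguard's JSON Schema check validates against a full defined schema.)
    try:
        payload = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(payload, dict) and all(k in payload for k in required_keys)

# Example: validate a model response before passing it downstream.
response_text = '{"answer": "This is a test", "confidence": 0.9}'
print(regex_match_check(response_text, r'"answer"'))               # True
print(json_schema_check(response_text, ['answer', 'confidence']))  # True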

How to Use Obiguard Guardrails

Putting Obiguard guardrails into production is a simple four-step process:
  1. Create a policy group
  2. Add validators to the policy group
  3. Generate an API key for the policy group
  4. Attach the API key to a request
[Flowchart: how Obiguard processes a guardrail request]

Let’s see in detail how to set up guardrails in Obiguard.

1. Create a Policy Group

Navigate to the Project page, select Guardrail Policies, and click Create to set up a new policy group. Give your policy group a name and, if you like, a description to make it easier to identify later.

2. Add Validators to the Policy Group

In Obiguard, you assign each guardrail validator to either the INPUT (PROMPT) or the OUTPUT; a validator checks exactly one of the two. Each guardrail check provides an input field tailored to its purpose: fill in the required details and save the check. A single validator can include multiple checks, and each check returns a simple boolean (passed/failed) result, as sketched below.
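As a mental model, the sketch below treats a validator as a set of boolean checks attached to either the input or the output. The aggregation rule (the validator passes only if all of its checks pass) and the class shape are illustrative assumptions, not documented Obiguard behavior.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Validator:
    # Hypothetical model of a validator; names and semantics are assumptions.
    target: str  # 'INPUT' or 'OUTPUT'
    checks: list[Callable[[str], bool]] = field(default_factory=list)

    def run(self, text: str) -> bool:
        # Assumed semantics: the validator passes only if every check passes.
        return all(check(text) for check in self.checks)

output_validator = Validator(
    target='OUTPUT',
    checks=[
        lambda text: len(text) < 2000,          # length guard
        lambda text: 'DROP TABLE' not in text,  # crude SQL/code detection
    ],
)
print(output_validator.run('This is a test'))  # True: both checks pass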

3. Generate an API Key for the Policy Group

Once your policy group is set up with the desired validators, generate an API key for it. This key will be used to authenticate requests that require the guardrails defined in this policy group. You can generate multiple API keys for the same policy group if needed.

4. Attach the API Key to a Request

To use the guardrails in your application, attach the generated API key to your requests. This is where Obiguard comes into play: the guardrail you created above is not yet active, because it is not attached to any request. Provide the API key whenever you make a request to Obiguard, and the guardrails defined in your policy group will be applied to that request.
from obiguard import Obiguard

client = Obiguard(
    provider='openai',
    base_url='https://gateway.obiguard.ai/v1',
    obiguard_api_key='vk-obg***',  # Your Obiguard virtual key here
)

response = client.chat.completions.create(
    messages=[{'role': 'user', 'content': 'Say this is a test'}],
    model='Qwen/Qwen2.5-32B-Instruct',  # The LLM model tied to the virtual key
)
print(response.choices[0].message.content)  # OpenAI-compatible response shape
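If a guardrail check fails, Obiguard can deny the request. The exact failure surface is not shown in this guide, so the sketch below simply assumes a denial reaches your code as a raised exception; adjust it to the actual error shape you observe.

# Minimal sketch of handling a denied request. The assumption that a failed
# guardrail surfaces as an exception (and its exact type) is illustrative.
try:
    response = client.chat.completions.create(
        messages=[{'role': 'user', 'content': 'Say this is a test'}],
        model='Qwen/Qwen2.5-32B-Instruct',
    )
    print(response.choices[0].message.content)
except Exception as exc:  # e.g. a guardrail denial returned by the Gateway
    print(f'Request was blocked or failed: {exc}')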
You can monitor all requests and responses processed by the guardrails using Obiguard Logs.