Obiguard provides a robust and secure gateway to facilitate the integration of various Large Language Models (LLMs) into your applications, including models hosted on AWS Bedrock.
With Obiguard, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, all while ensuring the secure management of your LLM API keys through a virtual key system.
Obiguard SDK Integration with AWS Bedrock
Obiguard provides a consistent API to interact with models from various providers. To integrate Bedrock with Obiguard:
1. Install the Obiguard SDK
Add the Obiguard SDK to your application to interact with Amazon Bedrock models through Obiguard’s gateway.
2. Initialize Obiguard with the Virtual Key
There are two ways to integrate AWS Bedrock with Obiguard:
AWS Access Key
Use your AWS Secret Access Key, AWS Access Key ID, and AWS Region to create your virtual key.
Integration Guide
AWS Assumed Role
Take your AWS Assumed Role ARN and AWS Region to create the virtual key.
Integration Guide
from obiguard import Obiguard
client = Obiguard(
obiguard_api_key="sk-obg***", # Your Obiguard API key
virtual_key="VIRTUAL_KEY" # Replace with your virtual key for Bedrock
)
Using Virtual Key with AWS STS
If you’re using AWS Security Token Service (STS), you can pass your aws_session_token along with the virtual key:
from obiguard import Obiguard
client = Obiguard(
obiguard_api_key="sk-obg***", # Your Obiguard API key
virtual_key="VIRTUAL_KEY" # Replace with your virtual key for Bedrock,
aws_session_token=""
)
Not using Virtual Keys?
Check out this example on how you can directly use your AWS details to make a Bedrock request through Obiguard.
3. Invoke Chat Completions with AWS Bedrock
Use the Obiguard instance to send requests to Bedrock. You can also override the virtual key directly in the API call if needed.
completion = client.chat.completions.create(
messages= [{"role": 'user', "content": 'Say this is a test'}],
model= 'anthropic.claude-v2:1',
max_tokens=250 # Required field for Anthropic
)
print(completion.choices)
Using Vision Models
Obiguard’s multimodal Gateway fully supports Bedrock’s vision models anthropic.claude-3-sonnet, anthropic.claude-3-haiku, and anthropic.claude-3-opus.
For more info, check out this guide:
Vision
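For illustration, here is a minimal sketch of a vision request through the gateway, assuming it accepts OpenAI-style image_url content parts; the model ID and image URL below are placeholders, so check the Supported Models list for the exact IDs available to you.

from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    virtual_key="VIRTUAL_KEY"  # Your Bedrock virtual key
)

# Hypothetical example: send an image alongside a text prompt to a Claude 3 vision model
completion = client.chat.completions.create(
    model="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}}
            ]
        }
    ]
)
print(completion.choices[0].message.content)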
Extended Thinking (Reasoning Models) (Beta)
The assistant’s thinking response is returned in the response_chunk.choices[0].delta.content_blocks array, not in the response.choices[0].message.content string.
Models like us.anthropic.claude-3-7-sonnet-20250219-v1:0 support extended thinking. This is similar to OpenAI’s reasoning models, but you also get the model’s reasoning as it processes the request.
Note that you will have to set strict_openai_compliance=False in the client (or send the x-obiguard-strict-openai-compliance: false header) to use this feature.
Single-turn conversation
from obiguard import Obiguard
client = Obiguard(
obiguard_api_key="sk-obg***", # Your Obiguard API key
virtual_key="VIRTUAL_KEY", # Add your provider's virtual key
strict_openai_compliance=False
)
response = client.chat.completions.create(
model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
max_tokens=3000,
thinking={
"type": "enabled",
"budget_tokens": 2030
},
stream=True,
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "when does the flight from new york to bengaluru land tomorrow, what time, what is its flight number, and what is its baggage belt?"
}
]
}
]
)
print(response)
# in case of streaming responses you'd have to parse the response_chunk.choices[0].delta.content_blocks array
# response = client.chat.completions.create(
# ...same config as above but with stream: true
# )
# for chunk in response:
# if chunk.choices[0].delta:
# content_blocks = chunk.choices[0].delta.get("content_blocks")
# if content_blocks is not None:
# for content_block in content_blocks:
# print(content_block)
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders
openai = OpenAI(
api_key='BEDROCK_API_KEY',
base_url=OBIGUARD_GATEWAY_URL,
default_headers=createHeaders(
provider="bedrock",
obiguard_api_key="sk-obg******", # Your Obiguard API key
strict_openai_compliance=False
)
)
response = openai.chat.completions.create(
model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
max_tokens=3000,
thinking={
"type": "enabled",
"budget_tokens": 2030
},
stream=True,
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "when does the flight from new york to bengaluru land tomorrow, what time, what is its flight number, and what is its baggage belt?"
}
]
}
]
)
print(response)
curl "https://gateway.obiguard.ai/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "x-obiguard-api-key: $OBIGUARD_API_KEY" \
-H "x-obiguard-provider: bedrock" \
-H "x-api-key: $BEDROCK_API_KEY" \
-H "x-obiguard-strict-openai-compliance: false" \
-d '{
"model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
"max_tokens": 3000,
"thinking": {
"type": "enabled",
"budget_tokens": 2030
},
"stream": true,
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "when does the flight from new york to bengaluru land tomorrow, what time, what is its flight number, and what is its baggage belt?"
}
]
}
]
}'
Multi-turn conversation
from obiguard import Obiguard
client = Obiguard(
obiguard_api_key="sk-obg***", # Your Obiguard API key
virtual_key="VIRTUAL_KEY", # Add your provider's virtual key
strict_openai_compliance=False
)
# Create the request
response = client.chat.completions.create(
model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
max_tokens=3000,
thinking={
"type": "enabled",
"budget_tokens": 2030
},
stream=True,
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "when does the flight from baroda to bangalore land tomorrow, what time, what is its flight number, and what is its baggage belt?"
}
]
},
{
"role": "assistant",
"content": [
{
"type": "thinking",
"thinking": "The user is asking several questions about a flight from Baroda (also known as Vadodara) to Bangalore:\n1. When does the flight land tomorrow\n2. What time does it land\n3. What is the flight number\n4. What is the baggage belt number at the arrival airport\n\nTo properly answer these questions, I would need access to airline flight schedules and airport information systems. However, I don't have:\n- Real-time or scheduled flight information\n- Access to airport baggage claim allocation systems\n- Information about specific flights between these cities\n- The ability to look up tomorrow's specific flight schedules\n\nThis question requires current, specific flight information that I don't have access to. Instead of guessing or providing potentially incorrect information, I should explain this limitation and suggest ways the user could find this information.",
"signature": "EqoBCkgIARABGAIiQBVA7FBNLRtWarDSy9TAjwtOpcTSYHJ+2GYEoaorq3V+d3eapde04bvEfykD/66xZXjJ5yyqogJ8DEkNMotspRsSDKzuUJ9FKhSNt/3PdxoMaFZuH+1z1aLF8OeQIjCrA1+T2lsErrbgrve6eDWeMvP+1sqVqv/JcIn1jOmuzrPi2tNz5M0oqkOO9txJf7QqEPPw6RG3JLO2h7nV1BMN6wE="
}
]
},
{
"role": "user",
"content": "thanks that's good to know, how about to chennai?"
}
]
)
print(response)
from openai import OpenAI
from obiguard import OBIGUARD_GATEWAY_URL, createHeaders
openai = OpenAI(
api_key='BEDROCK_API_KEY',
base_url=OBIGUARD_GATEWAY_URL,
default_headers=createHeaders(
provider="bedrock",
obiguard_api_key="sk-obg******", # Your Obiguard API key
strict_openai_compliance=False
)
)
response = openai.chat.completions.create(
model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
max_tokens=3000,
thinking={
"type": "enabled",
"budget_tokens": 2030
},
stream=True,
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "when does the flight from baroda to bangalore land tomorrow, what time, what is its flight number, and what is its baggage belt?"
}
]
},
{
"role": "assistant",
"content": [
{
"type": "thinking",
"thinking": "The user is asking several questions about a flight from Baroda (also known as Vadodara) to Bangalore:\n1. When does the flight land tomorrow\n2. What time does it land\n3. What is the flight number\n4. What is the baggage belt number at the arrival airport\n\nTo properly answer these questions, I would need access to airline flight schedules and airport information systems. However, I don't have:\n- Real-time or scheduled flight information\n- Access to airport baggage claim allocation systems\n- Information about specific flights between these cities\n- The ability to look up tomorrow's specific flight schedules\n\nThis question requires current, specific flight information that I don't have access to. Instead of guessing or providing potentially incorrect information, I should explain this limitation and suggest ways the user could find this information.",
signature: "EqoBCkgIARABGAIiQBVA7FBNLRtWarDSy9TAjwtOpcTSYHJ+2GYEoaorq3V+d3eapde04bvEfykD/66xZXjJ5yyqogJ8DEkNMotspRsSDKzuUJ9FKhSNt/3PdxoMaFZuH+1z1aLF8OeQIjCrA1+T2lsErrbgrve6eDWeMvP+1sqVqv/JcIn1jOmuzrPi2tNz5M0oqkOO9txJf7QqEPPw6RG3JLO2h7nV1BMN6wE="
}
]
},
{
"role": "user",
"content": "thanks that's good to know, how about to chennai?"
}
]
)
print(response)
curl "https://gateway.obiguard.ai/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "x-obiguard-api-key: $OBIGUARD_API_KEY" \
-H "x-obiguard-provider: bedrock" \
-H "x-api-key: $BEDROCK_API_KEY" \
-H "x-obiguard-strict-openai-compliance: false" \
-d '{
"model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
"max_tokens": 3000,
"thinking": {
"type": "enabled",
"budget_tokens": 2030
},
"stream": true,
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "when does the flight from baroda to bangalore land tomorrow, what time, what is its flight number, and what is its baggage belt?"
}
]
},
{
"role": "assistant",
"content": [
{
"type": "thinking",
"thinking": "The user is asking several questions about a flight from Baroda (also known as Vadodara) to Bangalore:\n1. When does the flight land tomorrow\n2. What time does it land\n3. What is the flight number\n4. What is the baggage belt number at the arrival airport\n\nTo properly answer these questions, I would need access to airline flight schedules and airport information systems. However, I don't have:\n- Real-time or scheduled flight information\n- Access to airport baggage claim allocation systems\n- Information about specific flights between these cities\n- The ability to look up tomorrow's specific flight schedules\n\nThis question requires current, specific flight information that I don't have access to. Instead of guessing or providing potentially incorrect information, I should explain this limitation and suggest ways the user could find this information.",
"signature": "EqoBCkgIARABGAIiQBVA7FBNLRtWarDSy9TAjwtOpcTSYHJ+2GYEoaorq3V+d3eapde04bvEfykD/66xZXjJ5yyqogJ8DEkNMotspRsSDKzuUJ9FKhSNt/3PdxoMaFZuH+1z1aLF8OeQIjCrA1+T2lsErrbgrve6eDWeMvP+1sqVqv/JcIn1jOmuzrPi2tNz5M0oqkOO9txJf7QqEPPw6RG3JLO2h7nV1BMN6wE="
}
]
},
{
"role": "user",
"content": "thanks that's good to know, how about to chennai?"
}
]
}'
Inference Profiles
Inference profiles are a resource in Amazon Bedrock that define a model and one or more Regions to which the inference profile can route model invocation requests.
To use inference profiles, your IAM role needs to additionally have the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"bedrock:GetInferenceProfile"
],
"Resource": [
"arn:aws:bedrock:*:*:inference-profile/*",
"arn:aws:bedrock:*:*:application-inference-profile/*"
]
}
]
}
This is a prerequisite for using inference profiles, as the gateway needs to look up the underlying foundation model to process the request.
For reference, see the following documentation:
https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-prereq.html
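Once those permissions are in place, invoking a model through an inference profile works like a regular request; as a sketch, assuming the inference profile ID (or ARN) is passed in place of a model ID, as in the extended thinking examples above:

from obiguard import Obiguard

client = Obiguard(
    obiguard_api_key="sk-obg***",  # Your Obiguard API key
    virtual_key="VIRTUAL_KEY"  # Your Bedrock virtual key
)

# The cross-region inference profile ID is used where a model ID would normally go
completion = client.chat.completions.create(
    model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # inference profile ID (placeholder)
    messages=[{"role": "user", "content": "Say this is a test"}],
    max_tokens=250
)
print(completion.choices[0].message.content)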
Bedrock Converse API
Obiguard uses the AWS Converse API internally for making chat completions requests.
If you need to pass additional input fields or parameters like anthropic_beta, top_k, frequency_penalty, etc. that are specific to a model, you can pass them with this key:
"additionalModelRequestFields": {
"frequency_penalty": 0.4
}
If you need the response to include fields that are specific to a model, pass this key:
"additionalModelResponseFieldPaths": [ "/stop_sequence" ]
Making Requests without Virtual Keys
If you do not want to add your AWS details to the Obiguard vault, you can also pass them directly when instantiating the Obiguard client.
Mapping the Bedrock Details
| Node SDK | Python SDK | REST Headers |
| --- | --- | --- |
| awsAccessKeyId | aws_access_key_id | x-obiguard-aws-access-key-id |
| awsSecretAccessKey | aws_secret_access_key | x-obiguard-aws-secret-access-key |
| awsRegion | aws_region | x-obiguard-aws-region |
| awsSessionToken | aws_session_token | x-obiguard-aws-session-token |
Example
from obiguard import Obiguard
client = Obiguard(
obiguard_api_key="sk-obg***", # Your Obiguard API key
provider="bedrock",
aws_access_key_id="",
aws_secret_access_key="",
aws_region="us-east-1",
aws_session_token=""
)
curl https://gateway.obiguard.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-obiguard-api-key: $OBIGUARD_API_KEY" \
-H "x-obiguard-provider: bedrock" \
-H "x-obiguard-aws-access-key-id: $AWS_ACCESS_KEY_ID" \
-H "x-obiguard-aws-secret-access-key: $AWS_SECRET_ACCESS_KEY" \
-H "x-obiguard-aws-region: $AWS_REGION" \
-H "x-obiguard-aws-session-token: $AWS_TOKEN" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user","content": "Hello!"}]
}'
Supported Models
List of supported Amazon Bedrock model IDs
How to Find Your AWS Credentials
Navigate here in the AWS Management Console to obtain your AWS Access Key ID and AWS Secret Access Key.
- In the console, you’ll find the ‘Access keys’ section. Click on ‘Create access key’.
- Copy the Secret Access Key once it is generated; you can view the Access Key ID along with it.
- On the same page, under the ‘Access keys’ section where you created your Secret Access Key, you will also find your Access Key ID.
- Lastly, get your AWS Region from the home page of AWS Bedrock.
Next Steps
The complete list of features supported in the SDK is available at the link below.