Azure AI Foundry provides a unified platform for enterprise AI operations, model building, and application development. With Obiguard, you can seamlessly integrate with various models available on Azure AI Foundry and take advantage of features like observability, prompt management, fallbacks, and more.

Understanding Azure AI Foundry Deployments

Azure AI Foundry offers three different ways to deploy models, each with unique endpoints and configurations:

  1. AI Services: Azure-managed models accessed through Azure AI Services endpoints
  2. Managed: User-managed deployments running on dedicated Azure compute resources
  3. Serverless: Seamless, scalable deployment without managing infrastructure

You can learn more about Azure AI Foundry deployment options here.

Integrate

To integrate Azure AI Foundry with Obiguard, you’ll need to create a virtual key. Virtual keys securely store your Azure AI Foundry credentials in Obiguard’s vault, allowing you to use a simple identifier in your code instead of handling sensitive authentication details directly.

Navigate to the Virtual Keys section in Obiguard and select “Azure AI Foundry” as your provider.

Creating Your Virtual Key

You can create a virtual key for Azure AI Foundry using one of three authentication methods. Each method requires different information from your Azure deployment.

The recommended method is API key authentication:

Required parameters:

  • API Key: Your Azure AI Foundry API key

  • Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:

    • For AI Services: https://your-resource-name.services.ai.azure.com/models
    • For Managed: https://your-model-name.region.inference.ml.azure.com/score
    • For Serverless: https://your-model-name.region.models.ai.azure.com
  • Azure API Version: The API version to use (e.g., “2024-05-01-preview”). This is required if your deployment URL includes an api-version query parameter. For example:

    • If your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview (a small sketch for extracting this value follows the list below)
  • Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments.
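
If you are unsure which values to enter, you can split a full deployment URL into the base endpoint and API version with a few lines of Python. This is a minimal, stdlib-only sketch; the URL below reuses the made-up example from above and is not a real deployment:

from urllib.parse import urlparse, parse_qs

def split_foundry_url(url: str) -> tuple[str, str | None]:
    """Split a full Azure AI Foundry URL into (base_url, api_version)."""
    parsed = urlparse(url)
    base_url = f"{parsed.scheme}://{parsed.netloc}{parsed.path}"
    # api-version is absent for URLs that don't carry a query string
    api_version = parse_qs(parsed.query).get("api-version", [None])[0]
    return base_url, api_version

url = "https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview"
print(split_foundry_url(url))
# ('https://mycompany-ai.westus2.services.ai.azure.com/models', '2024-05-01-preview')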

Sample Request

Once you’ve created your virtual key, you can start making requests to Azure AI Foundry models through Obiguard.

Install the Obiguard SDK with pip:

pip install obiguard

Then initialize the client and make a chat completion request:

from obiguard import Obiguard

client = Obiguard(
  obiguard_api_key="sk-obg***",  # Your Obiguard API key
  virtual_key="AZURE_FOUNDRY_VIRTUAL_KEY"  # The virtual key you created above
)

response = client.chat.completions.create(
  model="DeepSeek-V3-0324", # Replace with your deployed model name
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about cloud computing"}
  ]
)

print(response.choices[0].message.content)
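
For longer responses you can also stream tokens as they arrive. This is a minimal sketch, assuming Obiguard’s Python SDK mirrors the OpenAI-style stream=True interface implied by the chat.completions.create call above:

stream = client.chat.completions.create(
  model="DeepSeek-V3-0324",  # Replace with your deployed model name
  messages=[{"role": "user", "content": "Tell me about cloud computing"}],
  stream=True  # Yield incremental chunks instead of one final response
)

for chunk in stream:
  # Each chunk carries a delta with the newly generated text, if any
  if chunk.choices and chunk.choices[0].delta.content:
    print(chunk.choices[0].delta.content, end="", flush=True)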

Advanced Features

Function Calling

Azure AI Foundry supports function calling (tool calling) for compatible models. Here’s how to implement it with Obiguard:

tools = [{
    "type": "function",
    "function": {
        "name": "getWeather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="DeepSeek-V3-0324", # Use a model that supports function calling
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather like in Delhi?"}
    ],
    tools=tools,
    tool_choice="auto"
)

print(response.choices[0])
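
When the model decides to call a function, the response carries tool_calls instead of plain text, and you send the result back in a follow-up turn. This is a minimal sketch assuming the OpenAI-compatible tool-call shape shown above; get_weather is a hypothetical stand-in for your real weather lookup:

import json

def get_weather(location: str, unit: str = "celsius") -> str:
    # Hypothetical implementation; call your real weather API here
    return json.dumps({"location": location, "temperature": 31, "unit": unit})

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)

    # Return the tool result so the model can produce a final answer
    follow_up = client.chat.completions.create(
        model="DeepSeek-V3-0324",
        messages=[
            {"role": "user", "content": "What's the weather like in Delhi?"},
            message,  # The assistant turn containing the tool call
            {"role": "tool", "tool_call_id": call.id, "content": result}
        ],
        tools=tools
    )
    print(follow_up.choices[0].message.content)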

Vision Capabilities

Process images alongside text using Azure AI Foundry’s vision capabilities:

response = client.chat.completions.create(
    model="Llama-4-Scout-17B-16E", # Use a model that supports vision
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    },
                },
            ],
        }
    ],
    max_tokens=500,
)

print(response.choices[0].message.content)
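
To send a local file instead of a public URL, you can embed the image as a base64 data URL. A minimal sketch, assuming photo.jpg is a hypothetical local file and that data URLs are accepted like regular image URLs:

import base64

# Encode a local image as a data URL (photo.jpg is a placeholder path)
with open("photo.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="Llama-4-Scout-17B-16E",  # Use a model that supports vision
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
    max_tokens=500,
)
print(response.choices[0].message.content)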

Structured Outputs

Get consistent, parseable responses in specific formats:

import json

response = client.chat.completions.create(
    model="cohere-command-a", # Use a model that supports response formats
    messages=[
        # Some providers require the word "JSON" in the prompt when using json_object
        {"role": "system", "content": "You are a helpful assistant that responds in JSON."},
        {"role": "user", "content": "List the top 3 cloud providers with their main services"}
    ],
    response_format={"type": "json_object"},
    temperature=0
)

print(json.loads(response.choices[0].message.content))
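
Because JSON mode constrains the format but not the schema, it is worth validating the parsed structure before using it. A minimal sketch; the "providers" key and its fields are assumptions about how the model might structure its answer to the prompt above:

data = json.loads(response.choices[0].message.content)

# Key names depend on the model's chosen structure; adjust to your prompt
for provider in data.get("providers", []):
    print(provider.get("name"), "->", provider.get("main_services"))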

Relationship with Azure OpenAI

For Azure OpenAI-specific models and deployments, we recommend using the existing Azure OpenAI provider in Obiguard:

Azure OpenAI Integration

Learn how to integrate Azure OpenAI with Obiguard for access to OpenAI models hosted on Azure.