Learn how to integrate Azure AI Foundry with Obiguard to access a wide range of AI models with enhanced observability and reliability features.
Azure AI Foundry provides a unified platform for enterprise AI operations, model building, and application development. With Obiguard, you can seamlessly integrate with various models available on Azure AI Foundry and take advantage of features like observability, prompt management, fallbacks, and more.
Azure AI Foundry offers three different ways to deploy models, each with unique endpoints and configurations:
You can learn more about Azure AI Foundry deployment options here.
If you’re specifically looking to use OpenAI models on Azure, you might want to use Azure OpenAI instead, which is optimized for OpenAI models.
To integrate Azure AI Foundry with Obiguard, you’ll need to create a virtual key. Virtual keys securely store your Azure AI Foundry credentials in Obiguard’s vault, allowing you to use a simple identifier in your code instead of handling sensitive authentication details directly.
Navigate to the Virtual Keys section in Obiguard and select “Azure AI Foundry” as your provider.
You can create a virtual key for Azure AI Foundry using one of three authentication methods. Each method requires different information from your Azure deployment:
The recommended authentication mode using API Keys:
Required parameters:
API Key: Your Azure AI Foundry API key
Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
https://your-resource-name.services.ai.azure.com/models (Azure AI Services)
https://your-model-name.region.inference.ml.azure.com/score (Managed compute)
https://your-model-name.region.models.ai.azure.com (Serverless API)
Azure API Version: The API version to use (e.g., “2024-05-01-preview”). This is required when your deployment URL includes an api-version query parameter. For example, given https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments.
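The API version described above can be read straight off the deployment URL's query string. The small helper below is our own illustration (the function name is not part of any SDK):

```python
# Extract the api-version query parameter from an Azure deployment URL.
# Helper name is illustrative, not part of the Obiguard or Azure SDKs.
from typing import Optional
from urllib.parse import parse_qs, urlparse

def api_version_from_url(url: str) -> Optional[str]:
    """Return the api-version query parameter, or None if absent."""
    query = parse_qs(urlparse(url).query)
    return query.get("api-version", [None])[0]

print(api_version_from_url(
    "https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview"
))  # -> 2024-05-01-preview
```

If the helper returns None, your deployment URL carries no explicit API version and the field can be left blank.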
For managed Azure deployments:
Required parameters:
Azure Managed ClientID: Your managed client ID
Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
https://your-resource-name.services.ai.azure.com/models (Azure AI Services)
https://your-model-name.region.inference.ml.azure.com/score (Managed compute)
https://your-model-name.region.models.ai.azure.com (Serverless API)
Azure API Version: The API version to use (e.g., “2024-05-01-preview”). This is required when your deployment URL includes an api-version query parameter. For example, given https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments.
To use this authentication method, your Azure application needs the Cognitive Services User role.
Enterprise-level authentication with Azure Entra ID:
Required parameters:
Azure Entra ClientID: Your Azure Entra client ID
Azure Entra Secret: Your client secret
Azure Entra Tenant ID: Your tenant ID
Azure Foundry URL: The base endpoint URL for your deployment, formatted according to your deployment type:
https://your-resource-name.services.ai.azure.com/models (Azure AI Services)
https://your-model-name.region.inference.ml.azure.com/score (Managed compute)
https://your-model-name.region.models.ai.azure.com (Serverless API)
Azure API Version: The API version to use (e.g., “2024-05-01-preview”). This is required when your deployment URL includes an api-version query parameter. For example, given https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview.
Azure Deployment Name: (Optional) Required only when a single resource contains multiple deployments. Common in Managed deployments.
You can learn more about these Azure Entra resources here.
Once you’ve created your virtual key, you can start making requests to Azure AI Foundry models through Obiguard.
Install the Obiguard SDK with pip
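With the SDK installed, a first chat-completion request might look like the sketch below. The `Obiguard` client class, the `virtual_key` parameter, and the model name are assumptions modeled on OpenAI-compatible SDKs, not verified Obiguard API:

```python
# Minimal sketch of a chat completion routed through Obiguard to Azure AI Foundry.
# Assumptions: the Obiguard Python SDK exposes an OpenAI-compatible client that
# accepts a virtual key; the model name is illustrative of a Foundry deployment.
payload = {
    "model": "DeepSeek-V3",  # any model deployed on your Azure AI Foundry resource
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Azure AI Foundry in one sentence."},
    ],
}

# With the (assumed) SDK, the call would look roughly like:
# from obiguard import Obiguard
# client = Obiguard(virtual_key="azure-foundry-xxxx")
# response = client.chat.completions.create(**payload)
# print(response.choices[0].message.content)
```

The virtual key replaces raw Azure credentials in your code; Obiguard resolves it to the stored authentication details at request time.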
Azure AI Foundry supports function calling (tool calling) for compatible models. Here’s how to implement it with Obiguard:
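As a sketch, tool definitions follow the OpenAI tools schema that compatible Foundry models accept; the `get_weather` function and the commented-out client call are illustrative assumptions:

```python
# Sketch of an OpenAI-style tool (function) definition for function calling.
# The weather function is illustrative; only the schema shape matters here.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# The request adds `tools` (and optionally `tool_choice`) to the usual payload:
# response = client.chat.completions.create(
#     model="DeepSeek-V3",
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#     tools=tools,
#     tool_choice="auto",
# )
# If the model calls the tool, its arguments arrive as a JSON string:
# import json
# call = response.choices[0].message.tool_calls[0]
# args = json.loads(call.function.arguments)
```

Your application executes the tool with the returned arguments, then sends the result back in a follow-up `tool` role message so the model can produce its final answer.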
Process images alongside text using Azure AI Foundry’s vision capabilities:
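A multimodal message interleaves text and image content parts in the OpenAI-compatible format; the image URL below is illustrative, and the commented client call follows the same assumed SDK shape as above:

```python
# Sketch of a multimodal message mixing text and an image, using the
# OpenAI-style content-parts format that vision-capable models accept.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/photo.jpg"},
            },
        ],
    }
]

# response = client.chat.completions.create(model="Phi-4-multimodal", messages=messages)
# print(response.choices[0].message.content)
```

Check that the model behind your virtual key actually supports vision input; text-only models will reject the image content part.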
Get consistent, parseable responses in specific formats:
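One common approach is the OpenAI-style `response_format` parameter; whether a given Foundry model honors it is model-dependent, and the payload below is a hedged sketch rather than verified Obiguard API:

```python
# Sketch of requesting a JSON-formatted response via `response_format`,
# following the OpenAI-compatible convention.
request = {
    "messages": [
        {
            "role": "user",
            "content": "List three Azure regions as JSON with a 'regions' array.",
        }
    ],
    "response_format": {"type": "json_object"},
}

# response = client.chat.completions.create(model="gpt-4o", **request)  # model illustrative
# import json
# data = json.loads(response.choices[0].message.content)
```

Because the model returns JSON as a string, parse it with `json.loads` and validate the fields you expect before using them.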
For Azure OpenAI specific models and deployments, we recommend using the existing Azure OpenAI provider in Obiguard:
Learn how to integrate Azure OpenAI with Obiguard for access to OpenAI models hosted on Azure.