Azure AI Inference client library for Python

The client library (in preview) performs inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints). The client library makes service calls using REST API version 2024-05-01-preview, as documented in Azure AI Model Inference API. For more information, see Overview: Deploy models, flows, and web apps with Azure AI Studio.

Use the model inference client library to:

  • Authenticate against the service
  • Get information about the model
  • Do chat completions
  • Get text embeddings

With some minor adjustments, this client library can also be configured to run inference against Azure OpenAI endpoints. See the samples with azure_openai in their name, in the samples folder.

Product documentation | Samples | API reference documentation | Package (PyPI) | SDK source code

Getting started

Prerequisites

  • Python 3.8 or later installed, including pip.
  • An Azure subscription.
  • An AI Model from the catalog deployed through Azure AI Studio.
  • To construct a client, you will need to pass in the endpoint URL. The endpoint URL has the form https://your-host-name.your-azure-region.inference.ai.azure.com, where your-host-name is your unique model deployment host name and your-azure-region is the Azure region where the model is deployed (e.g. eastus2).
  • Depending on your model deployment and authentication preference, you either need a key to authenticate against the service, or Entra ID credentials. The key is a 32-character string.

Install the package

To install the Azure AI Inference package, use the following command:

pip install azure-ai-inference

To update an existing installation of the package, use:

pip install --upgrade azure-ai-inference

Key concepts

Create and authenticate a client directly, using key

The package includes two clients: ChatCompletionsClient and EmbeddingsClient. Both are created in a similar manner. For example, assuming endpoint and key are strings holding your endpoint URL and key, this Python code will create and authenticate a synchronous ChatCompletionsClient:

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)

A synchronous client supports synchronous inference methods, meaning they will block until the service responds with inference results. For simplicity, the code snippets below all use synchronous methods. The client offers equivalent asynchronous methods, which are more commonly used in production.

To create an asynchronous client, install the additional package aiohttp:

    pip install aiohttp

and update the code above to import asyncio, and import ChatCompletionsClient from the azure.ai.inference.aio namespace instead of azure.ai.inference:

import asyncio
from azure.ai.inference.aio import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)
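
With the asynchronous client, inference calls return coroutines and must be awaited. Here is a minimal usage sketch, assuming endpoint and key are already defined (the question is just an illustration):

import asyncio
from azure.ai.inference.aio import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

async def main():
    client = ChatCompletionsClient(
        endpoint=endpoint,
        credential=AzureKeyCredential(key)
    )
    # Asynchronous methods must be awaited.
    response = await client.complete(
        messages=[UserMessage(content="How many feet are in a mile?")]
    )
    print(response.choices[0].message.content)
    # Close the client to release the underlying aiohttp session.
    await client.close()

asyncio.run(main())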

Create and authenticate a client directly, using Entra ID

Note: At the time of this package release, not all deployments support Entra ID authentication. For deployments that do, follow the instructions below.

To use an Entra ID token credential, first install the azure-identity package:

pip install azure-identity

You will need to provide the desired credential type obtained from that package. A common choice is DefaultAzureCredential, which can be used as follows:

from azure.ai.inference import ChatCompletionsClient
from azure.identity import DefaultAzureCredential

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False)
)

During application development, you would typically set up the environment for Entra ID authentication by first installing the Azure CLI, then running az login in your console window and entering your credentials in the browser window that opens. The call to DefaultAzureCredential() will then succeed. Setting exclude_interactive_browser_credential=False in that call enables launching a browser window if the user isn't already logged in.

Create and authenticate clients using load_client

As an alternative to creating a specific client directly, you can use the function load_client to return the relevant client (of type ChatCompletionsClient or EmbeddingsClient) based on the provided endpoint:

from azure.ai.inference import load_client
from azure.core.credentials import AzureKeyCredential

client = load_client(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)

print(f"Created client of type `{type(client).__name__}`.")

To load an asynchronous client, import the load_client function from azure.ai.inference.aio instead.

Entra ID authentication is also supported by the load_client function. For example, replace the key authentication above with credential=DefaultAzureCredential(), as sketched below.
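
A minimal sketch combining load_client with Entra ID authentication, assuming endpoint is defined and your deployment supports Entra ID:

from azure.ai.inference import load_client
from azure.identity import DefaultAzureCredential

# load_client queries the endpoint's /info route to decide which client type to return.
client = load_client(endpoint=endpoint, credential=DefaultAzureCredential())
print(f"Created client of type `{type(client).__name__}`.")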

Getting AI model information

All clients provide a get_model_info method to retrieve AI model information. This makes a REST call to the /info route on the provided endpoint, as documented in the REST API reference.

model_info = client.get_model_info()

print(f"Model name: {model_info.model_name}")
print(f"Model provider name: {model_info.model_provider_name}")
print(f"Model type: {model_info.model_type}")

AI model information is cached in the client, so further calls to get_model_info will access the cached value and will not result in a REST API call. Note that if you created the client using the load_client function, model information will already be cached in the client.

AI model information is displayed (if available) when you print(client).

Chat Completions

The ChatCompletionsClient has a method named complete. The method makes a REST API call to the /chat/completions route on the provided endpoint, as documented in the REST API reference.

See simple chat completion examples below. More can be found in the samples folder.

Text Embeddings

The EmbeddingsClient has a method named embed. The method makes a REST API call to the /embeddings route on the provided endpoint, as documented in the REST API reference.

See simple text embedding example below. More can be found in the samples folder.

Sending proprietary model parameters

The REST API defines common model parameters for chat completions, text embeddings, and other operations. If the model you are targeting has additional parameters you would like to set, the client library allows you to easily do so. See Chat completions with additional model-specific parameters below. The same applies to the other clients.

Inference using Azure OpenAI endpoints

The request and response payloads of the Azure AI Model Inference API are mostly compatible with the OpenAI REST APIs for chat completions and text embeddings. Therefore, with some minor adjustments, this client library can be configured to run inference using Azure OpenAI endpoints. See the samples with azure_openai in their name, in the samples folder, and the comments there.
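
As a rough sketch only (the azure_openai samples are the authoritative reference), the following assumes an Azure OpenAI endpoint of the hypothetical form https://your-resource.openai.azure.com/openai/deployments/your-deployment, and that the service authenticates via the api-key request header rather than the credential object; the API version shown is also an assumption:

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=endpoint,  # Assumed Azure OpenAI deployment URL, per the form above
    credential=AzureKeyCredential(""),  # Placeholder; Azure OpenAI auth goes in the header below
    headers={"api-key": key},  # Azure OpenAI authenticates with the "api-key" header
    api_version="2024-06-01",  # Example Azure OpenAI API version; check your resource
)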

Examples

In the following sections you will find simple examples of chat completions, streaming chat completions, chat completions with additional model-specific parameters, and text embeddings.

The examples create a synchronous client, as shown in Create and authenticate a client directly, using key. Only mandatory input settings are shown for simplicity.

See the Samples folder for full working samples for synchronous and asynchronous clients.

Chat completions example

This example demonstrates how to generate a single chat completion, with key authentication, assuming endpoint and key are already defined.

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many feet are in a mile?"),
    ]
)

print(response.choices[0].message.content)

The following message types are supported: SystemMessage, UserMessage, AssistantMessage, and ToolMessage. See also the related samples in the samples folder.

Alternatively, you can provide the messages as a dictionary instead of using the strongly typed classes like SystemMessage and UserMessage:

response = client.complete(
    {
        "messages": [
            {
                "role": "system",
                "content": "You are an AI assistant that helps people find information. Your replies are short, no more than two sentences.",
            },
            {
                "role": "user",
                "content": "What year was construction of the International Space Station mostly done?",
            },
            {
                "role": "assistant",
                "content": "The main construction of the International Space Station (ISS) was completed between 1998 and 2011. During this period, more than 30 flights by US space shuttles and 40 by Russian rockets were conducted to transport components and modules to the station.",
            },
            {
                "role": "user",
                "content": "And what was the estimated cost to build it?"
            },
        ]
    }
)

To generate completions for additional messages, simply call client.complete multiple times using the same client.
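
For example, here is a minimal multi-turn sketch that reuses the same client, appending the model's reply to the history before asking a hypothetical follow-up question:

from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content="How many feet are in a mile?"),
]
response = client.complete(messages=messages)

# Carry the conversation forward: append the reply, then a follow-up question.
messages.append(AssistantMessage(content=response.choices[0].message.content))
messages.append(UserMessage(content="And how many yards is that?"))
response = client.complete(messages=messages)
print(response.choices[0].message.content)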

Streaming chat completions example

This example demonstrates how to generate a single chat completion with a streaming response, with key authentication, assuming endpoint and key are already defined. Add stream=True to the complete call to enable streaming.

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

response = client.complete(
    stream=True,
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Give me 5 good reasons why I should exercise every day."),
    ],
)

for update in response:
    print(update.choices[0].delta.content or "", end="")

client.close()

In the above for loop that prints the results, you should see the answer progressively get longer as updates are streamed to the client.

To generate completions for additional messages, simply call client.complete multiple times using the same client.

Chat completions with additional model-specific parameters

In this example, extra JSON elements are inserted at the root of the request body by setting model_extras when calling the complete method. These are intended for AI models that require extra parameters beyond what is defined in the REST API.

Note that by default, the service will reject any request payload that includes unknown parameters (ones that are not defined in the REST API Request Body table). To change this default service behavior, when the complete method includes model_extras, the client library automatically adds the HTTP request header "unknown_params": "pass_through".

The input argument model_extras is not restricted to chat completions. It is supported on other client methods as well.

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many feet are in a mile?"),
    ],
    model_extras={"key1": "value1", "key2": "value2"},  # Optional. Additional parameters to pass to the model.
)

In the above example, this will be the JSON payload in the HTTP request:

{
    "messages":
    [
        {"role":"system","content":"You are a helpful assistant."},
        {"role":"user","content":"How many feet are in a mile?"}
    ],
    "key1": "value1",
    "key2": "value2"
}

Text Embeddings example

This example demonstrates how to get text embeddings, with key authentication, assuming endpoint and key are already defined.

from azure.ai.inference import EmbeddingsClient
from azure.core.credentials import AzureKeyCredential

client = EmbeddingsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

response = client.embed(input=["first phrase", "second phrase", "third phrase"])

for item in response.data:
    length = len(item.embedding)
    print(
        f"data[{item.index}]: length={length}, [{item.embedding[0]}, {item.embedding[1]}, "
        f"..., {item.embedding[length-2]}, {item.embedding[length-1]}]"
    )

The length of the embedding vector depends on the model, but you should see something like this:

data[0]: length=1024, [0.0013399124, -0.01576233, ..., 0.007843018, 0.000238657]
data[1]: length=1024, [0.036590576, -0.0059547424, ..., 0.011405945, 0.004863739]
data[2]: length=1024, [0.04196167, 0.029083252, ..., -0.0027484894, 0.0073127747]

To generate embeddings for additional phrases, simply call client.embed multiple times using the same client.

Troubleshooting

Exceptions

The complete, embed, and get_model_info methods on the clients raise an HttpResponseError exception when the service returns a non-success HTTP status code. The exception's status_code holds the HTTP response status code (and reason holds the friendly name). The exception's error.message contains a detailed message that may be helpful in diagnosing the issue:

from azure.core.exceptions import HttpResponseError

...

try:
    result = client.complete( ... )
except HttpResponseError as e:
    print(f"Status code: {e.status_code} ({e.reason})")
    print(e.message)

For example, when you provide a wrong authentication key:

Status code: 401 (Unauthorized)
Operation returned an invalid status 'Unauthorized'

Or when you create an EmbeddingsClient and call embed on the client, but the endpoint does not support the /embeddings route:

Status code: 405 (Method Not Allowed)
Operation returned an invalid status 'Method Not Allowed'

Logging

The client uses the standard Python logging library. The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following:

import sys
import logging

# Acquire the logger for this client library. Use 'azure' to affect both
# the 'azure.core' and 'azure.ai.inference' libraries.
logger = logging.getLogger("azure")

# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
logger.setLevel(logging.DEBUG)

# Direct logging output to stdout:
handler = logging.StreamHandler(stream=sys.stdout)
# Or direct logging output to a file:
# handler = logging.FileHandler(filename="sample.log")
logger.addHandler(handler)

# Optional: change the default logging format. Here we add a timestamp.
formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
handler.setFormatter(formatter)

By default, logs redact the values of URL query strings, the values of some HTTP request and response headers (including Authorization, which holds the key or token), and the request and response payloads. To create logs without redaction, do these two things:

  1. Set the method argument logging_enable=True when you construct the client, or when you call the client's complete or embed methods (a per-call example follows this list).
    client = ChatCompletionsClient(
        endpoint=endpoint,
        credential=AzureKeyCredential(key),
        logging_enable=True
    )
    
  2. Set the log level to logging.DEBUG. Logs will be redacted with any other log level.
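
For example, here is a minimal sketch of enabling unredacted logs for a single call, assuming client is the ChatCompletionsClient created earlier (logging_enable is a standard per-operation option in Azure SDK clients):

from azure.ai.inference.models import UserMessage

# Enable unredacted request/response logging for this call only.
# The logger must also be set to logging.DEBUG (step 2 above).
response = client.complete(
    messages=[UserMessage(content="How many feet are in a mile?")],
    logging_enable=True,
)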

Be sure to protect non-redacted logs to avoid compromising security.

For more information, see Configure logging in the Azure libraries for Python.

Reporting issues

To report issues with the client library, or to request additional features, please open a GitHub issue here.

Next steps

  • Have a look at the Samples folder, containing fully runnable Python code for doing inference using synchronous and asynchronous clients.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

