Azure AI Face client library for Python

The Azure AI Face service provides AI algorithms that detect, recognize, and analyze human faces in images. It includes the following main features:

  • Face detection and analysis
  • Liveness detection
  • Face recognition
    • Face verification ("one-to-one" matching)
  • Find similar faces
  • Group faces

Source code | Package (PyPI) | API reference documentation | Product documentation | Samples

Getting started

Prerequisites

  • Python 3.8 or later is required to use this package.
  • You need an Azure subscription to use this package.
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have sufficient permissions to control your Azure subscription, you need either an Azure AI services multi-service account or a Face resource, as described below.

Create a Face or an Azure AI services multi-service account

Azure AI Face supports both multi-service and single-service access. Create an Azure AI services multi-service account if you plan to access multiple Azure AI services under a single endpoint/key. For Face access only, create a Face resource.

Install the package

python -m pip install azure-ai-vision-face

Authenticate the client

In order to interact with the Face service, you will need to create an instance of a client. An endpoint and credential are necessary to instantiate the client object.

Both a key credential and a Microsoft Entra ID credential are supported for authenticating the client. For enhanced security, we strongly recommend using a Microsoft Entra ID credential in production environments; AzureKeyCredential should be reserved for testing environments.

Get the endpoint

You can find the endpoint for your Face resource using the Azure Portal or Azure CLI:

# Get the endpoint for the Face resource
az cognitiveservices account show --name "resource-name" --resource-group "resource-group-name" --query "properties.endpoint"

Either a regional endpoint or a custom subdomain can be used for authentication. They are formatted as follows:

Regional endpoint: https://<region>.api.cognitive.microsoft.com/
Custom subdomain: https://<resource-name>.cognitiveservices.azure.com/

A regional endpoint is the same for every resource in a region. A complete list of supported regional endpoints can be consulted here. Please note that regional endpoints do not support Microsoft Entra ID authentication. If you'd like to migrate your resource to use a custom subdomain, follow the instructions here.

A custom subdomain, on the other hand, is a name that is unique to the resource. Once created and linked to a resource, it cannot be modified.

Create the client with a Microsoft Entra ID credential

AzureKeyCredential authentication is used in the examples in this getting started guide, but you can also authenticate with Microsoft Entra ID using the azure-identity library. Note that regional endpoints do not support Microsoft Entra ID authentication. Create a custom subdomain name for your resource in order to use this type of authentication.

To use the DefaultAzureCredential type shown below, or other credential types provided with the Azure SDK, please install the azure-identity package:

pip install azure-identity

You will also need to register a new Microsoft Entra ID application and grant access to Face by assigning the "Cognitive Services User" role to your service principal.

Once completed, set the values of the client ID, tenant ID, and client secret of the Microsoft Entra ID application as environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET.

"""DefaultAzureCredential will use the values from these environment
variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET
"""
from azure.ai.vision.face import FaceClient
from azure.identity import DefaultAzureCredential

endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = DefaultAzureCredential()

face_client = FaceClient(endpoint, credential)

Create the client with AzureKeyCredential

To use an API key as the credential parameter, pass the key as a string into an instance of AzureKeyCredential. You can get the API key for your Face resource using the Azure Portal or Azure CLI:

# Get the API keys for the Face resource
az cognitiveservices account keys list --name "<resource-name>" --resource-group "<resource-group-name>"

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient

endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
face_client = FaceClient(endpoint, credential)
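
If you prefer not to hard-code these values, you can read the endpoint and key from environment variables instead. Below is a minimal sketch; the variable names FACE_ENDPOINT and FACE_API_KEY are illustrative choices, not names required by the library:

import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient

# FACE_ENDPOINT and FACE_API_KEY are hypothetical names chosen for this sketch;
# set them to your resource's endpoint and API key before running.
endpoint = os.environ["FACE_ENDPOINT"]
credential = AzureKeyCredential(os.environ["FACE_API_KEY"])

face_client = FaceClient(endpoint, credential)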

Key concepts

FaceClient

FaceClient provides operations for:

  • Face detection and analysis: Detect human faces in an image and return the rectangle coordinates of their locations, optionally along with landmarks and face-related attributes. This operation is required as a first step in all the other face recognition scenarios.
  • Face recognition: Confirm that a user is who they claim to be based on how closely their face data matches the target face. It includes Face verification ("one-to-one" matching).
  • Finding similar faces: From a set of candidate faces, find a smaller set of faces that look similar to the target face.
  • Grouping faces: Divide a set of faces into several smaller groups based on similarity.

FaceSessionClient

FaceSessionClient provides operations for interacting with sessions, which are used for liveness detection:

  • Create, query, and delete sessions.
  • Query the liveness and verification result.
  • Query the audit result.

Examples

The following section provides several code snippets covering some of the most common Face tasks, including face detection and liveness detection.

Face Detection

Detect faces in binary image data and analyze them. The latest models are the most accurate and are recommended. For the detailed differences between the versions of the detection and recognition models, please refer to the detection model and recognition model documentation.

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import (
    FaceDetectionModel,
    FaceRecognitionModel,
    FaceAttributeTypeDetection03,
    FaceAttributeTypeRecognition04,
)

endpoint = "<your endpoint>"
key = "<your api key>"

with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_client:
    sample_file_path = "<your image file>"
    with open(sample_file_path, "rb") as fd:
        file_content = fd.read()

    result = face_client.detect(
        file_content,
        detection_model=FaceDetectionModel.DETECTION_03,  # The latest detection model.
        recognition_model=FaceRecognitionModel.RECOGNITION_04,  # The latest recognition model.
        return_face_id=True,
        return_face_attributes=[
            FaceAttributeTypeDetection03.HEAD_POSE,
            FaceAttributeTypeDetection03.MASK,
            FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION,
        ],
        return_face_landmarks=True,
        return_recognition_model=True,
        face_id_time_to_live=120,
    )

    print(f"Detect faces from the file: {sample_file_path}")
    for idx, face in enumerate(result):
        print(f"----- Detection result: #{idx+1} -----")
        print(f"Face: {face.as_dict()}")

Liveness detection

Face Liveness detection can be used to determine if a face in an input video stream is real (live) or fake (spoof). The goal of liveness detection is to ensure that the system is interacting with a physically present live person at the time of authentication. The whole process of authentication is called a session.

There are two different components in the authentication: a frontend application and an app server/orchestrator. Before uploading the video stream, the app server has to create a session, and the frontend client can then upload the payload with a session authorization token to perform liveness detection. The app server can query for the liveness detection result and audit logs anytime until the session is deleted.

The liveness detection operation can not only confirm whether the input is live or a spoof, but also verify whether the input belongs to the expected person's face; this is called liveness detection with face verification. For detailed information, please refer to the tutorial.

This package only covers the app server side: creating, querying, and deleting a session and getting audit logs. For how to integrate the UI and the code into your native frontend application, please follow the instructions in the tutorial.

Here is an example of creating a session and getting its liveness detection result.

import uuid

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient
from azure.ai.vision.face.models import CreateLivenessSessionContent, LivenessOperationMode

endpoint = "<your endpoint>"
key = "<your api key>"

with FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_session_client:
    # Create a session.
    print("Create a new liveness session.")
    created_session = face_session_client.create_liveness_session(
        CreateLivenessSessionContent(
            liveness_operation_mode=LivenessOperationMode.PASSIVE,
            device_correlation_id=str(uuid.uuid4()),
            send_results_to_client=False,
            auth_token_time_to_live_in_seconds=60,
        )
    )
    print(f"Result: {created_session}")

    # Get the liveness detection result.
    print("Get the liveness detection result.")
    liveness_result = face_session_client.get_liveness_session_result(created_session.session_id)
    print(f"Result: {liveness_result}")

Here is another example of liveness detection with face verification.

import uuid

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient
from azure.ai.vision.face.models import CreateLivenessSessionContent, LivenessOperationMode

endpoint = "<your endpoint>"
key = "<your api key>"

with FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_session_client:
    sample_file_path = "<your verify image file>"
    with open(sample_file_path, "rb") as fd:
        file_content = fd.read()

    # Create a session.
    print("Create a new liveness with verify session with verify image.")

    created_session = face_session_client.create_liveness_with_verify_session(
        CreateLivenessSessionContent(
            liveness_operation_mode=LivenessOperationMode.PASSIVE,
            device_correlation_id=str(uuid.uuid4()),
            send_results_to_client=False,
            auth_token_time_to_live_in_seconds=60,
        ),
        verify_image=file_content,
    )
    print(f"Result: {created_session}")

    # Get the liveness detection and verification result.
    print("Get the liveness detection and verification result.")
    liveness_result = face_session_client.get_liveness_with_verify_session_result(created_session.session_id)
    print(f"Result: {liveness_result}")

Troubleshooting

General

The Face client library raises exceptions defined in Azure Core. Error codes and messages raised by the Face service can be found in the service documentation.
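
For example, an invalid request surfaces as an HttpResponseError from azure.core.exceptions. A minimal sketch of catching it:

from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel

endpoint = "<your endpoint>"
key = "<your api key>"

face_client = FaceClient(endpoint, AzureKeyCredential(key))

try:
    # An empty payload is not a valid image, so the service returns an error.
    face_client.detect(
        b"",
        detection_model=FaceDetectionModel.DETECTION_03,
        recognition_model=FaceRecognitionModel.RECOGNITION_04,
        return_face_id=False,
    )
except HttpResponseError as error:
    # Inspect the HTTP status code and the error message returned by the service.
    print(f"Status code: {error.status_code}")
    print(f"Message: {error.message}")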

Logging

This library uses the standard logging library for logging.

Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.

Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on the client or per-operation with the logging_enable keyword argument.

See full SDK logging documentation with examples here.

import sys
import logging

from azure.ai.vision.face import FaceClient
from azure.core.credentials import AzureKeyCredential

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    stream=sys.stdout)

endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
face_client = FaceClient(endpoint, credential)

# logging_enable can also be passed for a single operation, even when it isn't enabled for the client.
face_client.detect(..., logging_enable=True)

Optional Configuration

Optional keyword arguments can be passed in at the client and per-operation level. The azure-core reference documentation describes available configurations for retries, logging, transport protocols, and more.
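
For example, retry behavior and detailed logging can be adjusted when constructing the client; the same keyword arguments can also be passed to individual operations to override the client defaults. A minimal sketch using standard azure-core options:

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient

endpoint = "<your endpoint>"
key = "<your api key>"

# Client-level settings apply to every request made through this client.
face_client = FaceClient(
    endpoint,
    AzureKeyCredential(key),
    retry_total=5,         # maximum number of retries for transient failures
    logging_enable=False,  # keep DEBUG request/response logging off by default
)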

Next steps

More sample code

See the Sample README for several code snippets illustrating common patterns used in the Face Python API.

Additional documentation

For more extensive documentation on Azure AI Face, see the Face documentation on learn.microsoft.com.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
