What's new in Azure Face in Foundry Tools

Learn what's new in Azure Face. Check this page to stay up to date with new features, enhancements, fixes, and documentation updates.

August 2025

Face liveness service v1.3-preview.1 API release

The v1.3-preview.1 public preview introduces a new security enhancement:

  • Abuse detection – Adds built-in risk assessments, including IP-based checks, to help identify and flag liveness sessions that may be fraudulent. This enables earlier intervention in high-risk scenarios such as identity verification or account onboarding. Learn more.

See the API Reference for full details.

Network isolation support for Liveness Detection APIs

Liveness Detection APIs now support disabling public network access for calls from client applications, ensuring requests are only processed within your trusted network boundaries. This feature is available across supported API versions and is particularly valuable for regulated or high-security environments. Learn more.

Face liveness client-side SDK 1.4.1 release

Version 1.4.1 improves distribution and CI/CD integration for the Liveness SDK.

  • Public wrapper SDKs are now available in npm (JavaScript/Web), Maven Central (Android), and a GitHub repo (iOS xcframework), enabling easier integration and automated dependency monitoring with tools such as GitHub Dependabot or Renovate.
  • Simplified gated asset access – Instead of running a local script, developers can now call a dedicated API to obtain an access token using their Azure Face resource endpoint and API key, making automated builds simpler to set up.

For platform-specific details, samples, and migration guidance, see the full SDK release notes.

February 2025

Face liveness client-side SDK 1.1.0 release

This update includes a few improvements:

  • Increased timeout for the head-turn scenario to provide end-users more time to complete the flow.
  • Fixes to iOS and Android SDKs to resolve compatibility issues with Microsoft Intune Mobile Application Management SDKs.
  • Security-related fixes and improvements.

For more information, see the SDK release notes.

January 2025

Face liveness detection GA

The Face liveness detection feature is now generally available (GA).

This SDK lets developers run face liveness checks in both native mobile applications and web browser applications for identity-verification scenarios.

The new SDK supports both Passive and Passive-Active modes. The hybrid Passive-Active mode is designed to require Active motion only in poor lighting conditions, while using the speed and efficiency of Passive liveness checks in optimal lighting.

For more information, see the SDK release notes.

August 2024

New detectable Face attributes

The glasses, occlusion, blur, and exposure attributes are available with the latest Detection 03 model. See Specify a face detection model for more details.
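
A request for these attributes with the Detection 03 model might look like the following minimal sketch. The parameter names follow the Face - Detect REST API; the helper function is illustrative, not part of any SDK.

```python
def build_attribute_detect_params():
    # glasses, occlusion, blur, and exposure are now returnable with
    # detection_03; values are passed as a comma-separated list.
    return {
        "detectionModel": "detection_03",
        "returnFaceAttributes": "glasses,occlusion,blur,exposure",
    }
```

The returned dictionary would be sent as query parameters on the detect call against your own Face resource endpoint.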

May 2024

New Face SDK 1.0.0-beta.1 (breaking changes)

The Face SDK was rewritten in version 1.0.0-beta.1 to better meet the guidelines and design principles of Azure SDKs. C#, Python, Java, and JavaScript are the supported languages. Follow the QuickStart to get started.

November 2023

Face client-side SDK for liveness detection

The Face Liveness SDK supports liveness detection on your users' mobile or edge devices. It's available in Java/Kotlin for Android and Swift/Objective-C for iOS.

Our liveness detection service achieved a 0% penetration rate in iBeta Level 1 and Level 2 Presentation Attack Detection (PAD) tests, conducted by a NIST/NVLAP-accredited laboratory and conformant to the ISO/IEC 30107-3 PAD international standard.

April 2023

Face limited access tokens

Independent software vendors (ISVs) can manage the Face API usage of their clients by issuing access tokens that grant access to Face features that are normally gated. This allows client companies to use the Face API without having to go through the formal approval process. Use limited access tokens.

June 2022

Vision Studio launch

Vision Studio is a UI tool that lets you explore, build, and integrate features from Azure Vision into your applications.

Vision Studio provides you with a platform to try several service features, and see what they return in a visual manner. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.

Responsible AI for Face

Face transparency note

  • The transparency note provides guidance to help customers improve the accuracy and fairness of their systems. It covers incorporating meaningful human review to detect and resolve cases of misidentification or other failures, providing support to people who believe their results were incorrect, and identifying and addressing fluctuations in accuracy due to variations in operational conditions.

Retirement of sensitive attributes

  • We have retired facial analysis capabilities that purport to infer emotional states and identity attributes, such as gender, age, smile, facial hair, hair, and makeup.
  • Facial detection capabilities (including detection of blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box) remain generally available and don't require an application.

Fairlearn package and Microsoft's Fairness Dashboard

Limited Access policy

  • As part of aligning Face to the updated Responsible AI Standard, a new Limited Access policy has been implemented for the Face API and Azure Vision. Existing customers have one year to apply and receive approval for continued access to the facial recognition services, based on their provided use cases. See details on Limited Access for Face here.

February 2022

New Quality Attribute in Detection_01 and Detection_03

  • To help system builders and their customers capture the high-quality images that Face API needs to produce high-quality outputs, we're introducing a new QualityForRecognition attribute that helps decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combination of detection models detection_01 or detection_03 and recognition models recognition_03 or recognition_04. Only "high" quality images are recommended for person enrollment, and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see Face detection and attributes, and see how to use it in the QuickStart.
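
The quality gating described above can be sketched as follows. Parameter names follow the Face - Detect REST API; the helper names and thresholds simply mirror the enrollment and identification guidance and are illustrative.

```python
def build_quality_detect_params():
    # QualityForRecognition requires detection_01 or detection_03 paired
    # with recognition_03 or recognition_04.
    return {
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceAttributes": "qualityForRecognition",
        "returnFaceId": "false",
    }

def is_enrollable(face):
    # Only "high" quality images are recommended for person enrollment.
    return face["faceAttributes"]["qualityForRecognition"].lower() == "high"

def is_identifiable(face):
    # Quality of "medium" or above is recommended for identification.
    quality = face["faceAttributes"]["qualityForRecognition"].lower()
    return quality in ("medium", "high")
```

A caller would send the parameters with the detect request, then filter the returned faces through these predicates before attempting enrollment or identification.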

April 2021

PersonDirectory data structure (preview)

  • To perform face recognition operations such as Identify and Find Similar, Face API customers need to create a list of Person objects. The new PersonDirectory is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each Person identity added to the directory. Currently, the Face API offers the LargePersonGroup structure, which has similar functionality but is limited to 1 million identities. The PersonDirectory structure can scale up to 75 million identities.
  • Another major difference between PersonDirectory and previous data structures is that you'll no longer need to make any Train calls after adding faces to a Person object—the update process happens automatically. For more details, see Use the PersonDirectory structure.
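
The request paths involved can be sketched roughly as below. The resource name and the v1.0-preview.1 version segment are assumptions; check the PersonDirectory API reference for the exact paths available on your resource.

```python
# Placeholder base URL for an Azure Face resource.
BASE = "https://<your-resource>.cognitiveservices.azure.com/face/v1.0-preview.1"

def create_person_url():
    # POST here with a body such as {"name": "...", "userData": "..."};
    # no containing group object is needed, unlike LargePersonGroup.
    return f"{BASE}/persons"

def add_face_url(person_id, recognition_model="recognition_04"):
    # POST the face image here; indexing happens automatically, so no
    # Train call is required afterward.
    return (f"{BASE}/persons/{person_id}/recognitionModels/"
            f"{recognition_model}/persistedFaces")
```

The key contrast with LargePersonGroup is visible in the second helper: adding a face is the last step, with no separate training operation.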

February 2021

New Face API detection model

  • The new Detection 03 model is the most accurate detection model currently available. If you're a new customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Other improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model provides improved recognition accuracy as well. See Specify a face detection model for more details.

New detectable Face attributes

  • The faceMask attribute is available with the latest Detection 03 model, along with the noseAndMouthCovered attribute, which detects whether the mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, set the detectionModel parameter in the API request to detection_03. See Specify a face detection model for more details.
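
A mask-detection request and a simple check of the response might look like this sketch. The parameter and field names follow the Face - Detect REST API (the returnFaceAttributes value is "mask"); the helper names are illustrative.

```python
def build_mask_detect_params():
    # Mask detection requires the detection_03 model.
    return {
        "detectionModel": "detection_03",
        "returnFaceAttributes": "mask",
    }

def mask_worn_correctly(face):
    # "Worn as intended" means a mask is present and noseAndMouthCovered
    # is true in the returned mask attribute.
    mask = face["faceAttributes"]["mask"]
    return mask["type"] != "noMask" and mask["noseAndMouthCovered"]
```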

New Face API Recognition Model

  • The new Recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of Recognition 03, including improved recognition for users wearing face covers (surgical masks, N95 masks, cloth masks). We recommend against enrolling images of users wearing face covers as this will lower recognition quality. Now customers can build safe and seamless user experiences that detect whether a user is wearing a face cover with the latest Detection 03 model, and recognize them with the latest Recognition 04 model. See Specify a face recognition model for more details.
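
Pairing the two models can be sketched as follows. Parameter and body field names follow the Face REST API; the face IDs and helper names are placeholders.

```python
def detect_params_for_verification():
    # faceIds used in Verify/Identify should be produced with the same
    # recognition model that the stored person data was enrolled with.
    return {
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceId": "true",
    }

def build_verify_body(face_id_1, face_id_2):
    # Face - Verify request body comparing two detected faceIds.
    return {"faceId1": face_id_1, "faceId2": face_id_2}
```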

January 2021

Mitigate latency

December 2020

Customer configuration for Face ID storage

  • While the Face service doesn't store customer images, the extracted face features are stored on the server. The Face ID is an identifier of the face feature and is used in Face - Identify, Face - Verify, and Face - Find Similar. The stored face features expire and are deleted 24 hours after the original detection call. Customers can now determine how long these Face IDs are cached: the maximum is still 24 hours, but a minimum of 60 seconds can now be set, and any value between 60 seconds and 24 hours is valid. More details can be found in the Face - Detect API reference (the faceIdTimeToLive parameter).
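
Setting the cache duration can be sketched like this. The faceIdTimeToLive parameter and its 60-to-86,400-second range come from the Face - Detect reference; the helper name is illustrative.

```python
def build_detect_params_with_ttl(ttl_seconds=120):
    # faceIdTimeToLive accepts 60 to 86400 seconds (24 hours, the default).
    if not 60 <= ttl_seconds <= 86400:
        raise ValueError("faceIdTimeToLive must be between 60 and 86400 seconds")
    return {"returnFaceId": "true", "faceIdTimeToLive": ttl_seconds}
```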

November 2020

Sample Face enrollment app

  • The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the Build an enrollment app guide and on GitHub, ready for developers to deploy or customize.

August 2020

Customer-managed encryption of data at rest

  • The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There is also a new option to manage encryption with your own keys, called customer-managed keys (CMK). More details can be found at Customer-managed keys.

April 2020

New Face API Recognition Model

  • The new Recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 provides improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at Specify a face recognition model.

June 2019

New Face API detection model

April 2019

Improved attribute accuracy

  • Improved overall accuracy of the age and headPose attributes. The headPose attribute is also updated with the pitch value now enabled. Use these attributes by specifying them in the returnFaceAttributes parameter of Face - Detect.

Improved processing speeds

March 2019

New Face API recognition model

January 2019

Face Snapshot feature

  • This feature allows the service to support data migration across subscriptions: Snapshot.

Important

As of June 30, 2023, the Face Snapshot API is retired.

October 2018

API messages

May 2018

Improved attribute accuracy

  • Significantly improved the gender attribute, and also improved the age, glasses, facialHair, hair, and makeup attributes. Use them through the Face - Detect returnFaceAttributes parameter.

Increased file size limit

March 2018

New data structure

May 2017

New detectable Face attributes

  • Added hair, makeup, accessory, occlusion, blur, exposure, and noise attributes to the Face - Detect returnFaceAttributes parameter.
  • Supported 10K persons in a PersonGroup and Face - Identify.
  • Supported pagination in PersonGroup Person - List with optional parameters: start and top.
  • Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup.

March 2017

New detectable Face attribute

  • Added emotion attribute in Face - Detect returnFaceAttributes parameter.

Fixed issues

November 2016

New subscription tier

  • Added the Face Storage Standard subscription to store additional persisted faces when using PersonGroup Person - Add Face or FaceList - Add Face for identification or similarity matching. The stored images are charged at $0.50 per 1,000 faces, and this rate is prorated daily. Free tier subscriptions continue to be limited to 1,000 total persons.

October 2016

API messages

July 2016

New features

  • Supported Face to Person object authentication in Face - Verify.
  • Added an optional mode parameter to Face - Find Similar, enabling selection of two working modes, matchPerson and matchFace; the default is matchPerson.
  • Added optional confidenceThreshold parameter for user to set the threshold of whether one face belongs to a Person object in Face - Identify.
  • Added optional start and top parameters in PersonGroup - List to let the user specify the start point and the number of PersonGroups to list.
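
The new request bodies for Find Similar and Identify can be sketched as below. Field names follow the Face REST API; the IDs and helper names are placeholders.

```python
def build_find_similar_body(face_id, face_list_id, mode="matchPerson"):
    # mode selects between matchPerson (the default) and matchFace.
    if mode not in ("matchPerson", "matchFace"):
        raise ValueError("mode must be matchPerson or matchFace")
    return {"faceId": face_id, "faceListId": face_list_id, "mode": mode}

def build_identify_body(face_ids, person_group_id, confidence_threshold=None):
    # confidenceThreshold is optional; omit it to use the service default.
    body = {"faceIds": face_ids, "personGroupId": person_group_id}
    if confidence_threshold is not None:
        body["confidenceThreshold"] = confidence_threshold
    return body
```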

V1.0 changes from V0

  • Updated service root endpoint from https://westus.api.cognitive.microsoft.com/face/v0/ to https://westus.api.cognitive.microsoft.com/face/v1.0/. Changes applied to: Face - Detect, Face - Identify, Face - Find Similar and Face - Group.
  • Updated the minimal detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels will not be detected.
  • Deprecated the PersonGroup and Person data in Face V0. That data can't be accessed with the Face V1.0 service.
  • Deprecated the V0 endpoint of Face API on June 30, 2016.