Microsoft Face Check Identity Verification Now Available for Enterprise Use

Microsoft has announced the general release of Face Check with Microsoft Entra Verified ID, a consent-based method used to confirm a person's identity.

First announced and released in preview in February of this year, Face Check, powered by Azure AI services, enhances identity verification by matching a user’s real-time selfie with the photo on their Verified ID, which typically originates from trusted sources like passports or driver's licenses. The Face Check service analyzes specific facial features, like the position of the eyes and nose, rather than the entire face, to generate a confidence score indicating whether the two photos are a match.

Organizations can set their preferred confidence score threshold for accepting a Face Check verification. A higher threshold decreases the chances of an impersonator being mistakenly accepted. At the default and recommended confidence score of 70 percent, the likelihood that a user is not the rightful credential owner is one in 10 million. Raising the threshold to 90 percent reduces that likelihood to one in one billion. However, Microsoft noted that the higher the threshold, the more likely a legitimate user is to be rejected, so it recommends that enterprises find the balance that works best for their organization.
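The trade-off can be sketched in a few lines of Python. This is purely illustrative: the function name, score scale, and values below are hypothetical and not part of the actual Verified ID API, which returns the match confidence for the organization to evaluate against its configured threshold.

```python
# Hypothetical sketch of gating a Face Check result against an
# organization's configured confidence threshold (0-100 scale assumed).
# Not the actual Microsoft Entra Verified ID API.

def accept_verification(confidence_score: float, threshold: float = 70.0) -> bool:
    """Accept a face match only if the confidence score meets the
    organization's threshold. Microsoft's default is 70 percent."""
    return confidence_score >= threshold

# A stricter threshold rejects more impersonators, but also more
# legitimate users whose live selfie matches imperfectly.
print(accept_verification(85.0))                  # passes at the default 70
print(accept_verification(85.0, threshold=90.0))  # fails at the stricter 90
```

The same score can pass or fail depending on the threshold, which is why Microsoft advises tuning it rather than simply maximizing it.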

The new feature is part of Microsoft Entra Verified ID, a managed verifiable credential service that enables organizations to create customized, user-owned identity solutions, fostering trustworthy, secure and efficient interactions between individuals and organizations, according to Microsoft.

Microsoft touts the service as another layer to strengthen enterprise security and protect organizational data. "By sharing only match results and not any sensitive identity data, Face Check strengthens an organization's identity verification while protecting user privacy," said Microsoft's Ankur Patel. "It can detect and reject various spoofing techniques, including deepfakes, to fully protect your users' identities."

Organizations can also leverage Face Check for more than just security. Because the technology is built on open standards, IT teams can build their own integrations, tying Face Check verification to automated tasks such as self-service password resets and virtual help desk assistance.

Enterprises can sign up for Face Check with Microsoft Entra Verified ID as a standalone service, priced at $0.25 per verification, or access it as a feature within the Microsoft Entra Suite.

Visit the Microsoft site for more information.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
