Windows 10 Passwordless – Azure AD Join, Microsoft Intune and Windows Hello for Business

We’re back and it’s been a W H I L E…. let’s jump right back in with some Single Sign-On (SSO) passwordless fun with Windows 10, Azure AD Join, Microsoft Intune and Windows Hello for Business. This article is also uploaded to the Access Onion blog here.

In this post we describe one route to incorporating passwordless technology that leverages customer investment in the Microsoft cloud, specifically Enterprise Mobility + Security. We assume the customer has a hybrid infrastructure, with on-premise pieces (Active Directory Domain Services, Certificate Services, etc.). Most customer configurations we come across have opted for Hybrid Azure AD join, with the on-premise identity being the dominant one.

We’ll use Windows Autopilot to kick-start a hypothetical migration from hybrid to cloud-only, using Microsoft Intune as an alternative to SCCM and on-premise GPOs, rolling out Windows Hello for Business as part of the process, together with wireless 802.1X and AlwaysOn VPN profiles. Finally, a single sign-on (SSO) path back to on-premise resources is a must.

Here we take a Windows 10 version 1803 client and join it to the tenant Azure Active Directory. By way of demonstrating the platform capability, we:

  • Provision the machine using Windows Autopilot and onboard the user using multi-factor authentication (sans password)
  • Use Windows Hello for Business for Multi-Factor Authentication (MFA) via biometric gestures and PIN for fallback
  • Use TPM-backed certificate authentication to provide the end user with secure access, both during deployment and to:
    • VPN using Win10 AlwaysOn VPN
    • Secure wireless using 802.1X
  • Use Credential Guard to isolate and protect secrets (e.g., NTLM hashes / Kerberos ticket-granting tickets)
  • Leverage Single Sign-On (SSO) access to on-premise resources:
    • File servers
    • Print servers
    • Application servers

Machines are built using Windows Autopilot and joined to the Azure Active Directory (AADJ).  Since these are AADJ devices, they will not be part of the on-premise Active Directory. User accounts exist in both the cloud and on-premise AD.

Moving on, let’s peek at the configuration:

In this use case, we’re going to deploy a Windows 10 machine using Windows Autopilot. In a conventional Windows deployment, the Out-of-Box Experience (OoBE) requires the user to identify whether the device is personally or organizationally owned, with the selected option triggering a different configuration workflow. For an organizational join, the client needs visibility to the Internet to process the registration of the device and the user.

Windows Autopilot simplifies this decision-making process by directly tying the procured hardware to the organization tenant, importing the hardware ID of the device into the Microsoft Store for Business. The Out-of-Box Experience (OoBE) lands the user on the tenant-branded logon screen.

When prompted for credentials, we enter our tenant details (in this example Route443). In a standard Windows configuration, we’d be required to enter a password after entering the username. Since our goal is passwordless, we look to stronger mechanisms for authentication. During OoBE deployment Windows Hello for Business is not available, so an alternative credential is required. FIDO 2.0 would be ideal but is not yet Generally Available (GA) in a Windows 10 release. Vendors such as Yubico have incorporated FIDO 2.0 into their YubiKey product range and are ready to support the upcoming release of Windows 10 that includes FIDO 2.0 support.

With Multi-Factor Authentication (MFA) enabled in the tenant and phone sign-in configured for the user, the Microsoft Authenticator app can be used for passwordless sign-in. Like Windows Hello for Business, it uses key-based authentication, with the user credential bound to a device and unlocked by a biometric gesture or PIN. With phone sign-in enabled for the AAD account, the enrolling user will see a number onscreen.

Within the mobile authenticator app, the user must match the number that appears on-screen with that surfaced via the mobile app.
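Conceptually, this number matching is a challenge the phone must echo back. The sketch below is hypothetical, purely for illustration — Microsoft’s actual Authenticator protocol is proprietary, and all names and shapes here are invented:

```python
import secrets

def new_challenge() -> tuple:
    """Return the number shown on the PC screen, plus the candidate
    numbers offered in the mobile app (one correct, two decoys)."""
    shown = secrets.randbelow(90) + 10              # two-digit challenge on the PC
    decoys = set()
    while len(decoys) < 2:
        d = secrets.randbelow(90) + 10
        if d != shown:
            decoys.add(d)
    candidates = [shown, *decoys]
    secrets.SystemRandom().shuffle(candidates)
    return shown, candidates

def verify(shown: int, tapped: int) -> bool:
    """Sign-in proceeds only if the user tapped the number shown on the PC."""
    return tapped == shown
```

The point of the decoys is that a user blindly approving a push prompt cannot succeed by accident; they must actually read the number off the PC screen.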

Note that the app itself can be protected by either Touch ID or a PIN. Once successfully authenticated, the machine continues with configuration. Consistent with the organizational Intune policies in this example, the user will be prompted to register a PIN for Windows Hello for Business. This also serves as a fallback in case biometric options (facial recognition/fingerprint) are not available.

Click on PIN requirements to see what your organizational policy has decreed.


Once the PIN is set, the user is able to log in with their Hello PIN. Should the device have other Hello capabilities, such as facial recognition or a fingerprint reader, then these can also be engaged.

In the above example, the device is configured for facial recognition. Biometric settings can also be modified under Windows Settings|Accounts|Sign-In Options.

Devices are enrolled for Intune MDM and Azure AD joined. This can be checked via Windows Settings|Accounts|Access Work or School.

With device configuration profiles defined in Microsoft Intune and assigned to devices, the AADJ client will receive the appropriate configuration.

Specific to this configuration, the following profiles are relevant:

  • Certificate configuration profiles
    • Root CA
    • Issuing CA
  • SCEP profile for Windows Hello
  • SCEP profile for WLAN/VPN
  • 1X WLAN profile
  • AlwaysOn VPN profile
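As a rough illustration of what one of these SCEP profiles captures, here is a simplified sketch. The field names are invented for readability and are not the real Intune/Graph schema, and the NDES URL is a placeholder; the EKU OIDs, however, are the standard Microsoft/PKIX ones:

```python
# Simplified sketch of a Windows Hello SCEP profile; field names are
# illustrative, not the actual Intune schema. The OIDs are standard.
scep_profile_hello = {
    "displayName": "SCEP - Windows Hello for Business",
    "subjectNameFormat": "CN={{UserPrincipalName}}",
    "keyStorageProvider": "TPM",                # software KSP as fallback
    "extendedKeyUsage": {
        "Smart Card Logon": "1.3.6.1.4.1.311.20.2.2",  # needed for on-premise SSO
        "Client Authentication": "1.3.6.1.5.5.7.3.2",  # the WLAN/VPN profile variant
    },
    "scepServerUrl": "https://ndes.example.com/certsrv/mscep/mscep.dll",  # placeholder
    "rootCertificate": "Root CA profile",       # ties back to the CA profiles above
}
```

The WLAN/VPN profile differs mainly in the EKU set (Client Authentication only) and subject naming; the CA profiles simply push the root and issuing CA certificates into the device trust stores.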

Certificate Authority Configuration Profiles

If certificates are pushed out via SCEP, then an Enterprise PKI and an NDES server, acting as a Registration Authority, are required, together with the Intune connector installed on that server. As announced at Ignite, Intune will support third-party CAs in the near future.

A configuration profile is required for each tier of the PKI. In this example, a two-tier PKI exists, with a profile for each CA pushed to the client.

SCEP profile for Secure Wireless / VPN

A SCEP profile is rolled out with a Client Authentication EKU to satisfy the 802.1X and AlwaysOn certificate requirements.  This certificate is then used by these services to authenticate the client to the back-end Network Policy Server (NPS) running behind the respective wireless and VPN services.

SCEP Profile for Windows Hello

While Windows Hello for Business prefers hardware-backed credentials, not all computers are in possession of a Trusted Platform Module (TPM). Intune provides options for falling back to a software-based credential, should the need arise.

In Certificate Trust scenarios using Windows Hello for Business, a SCEP profile is required with a Smart Card EKU. This is to satisfy access conditions for Single Sign-On (SSO) for Windows Hello for Business against the on-premise domain.

Secure Wireless LAN profile

The Secure Wireless LAN profile contains the configuration for the on-premise wireless network: EAP type settings, authentication methods, etc. Use of certificates ensures that access to the on-premise wireless is seamless when in range. For an even more seamless experience, machine certificates are preferred, subject to their availability in Intune.

AlwaysOn VPN profile

The AlwaysOn VPN profile contains the configuration for the on-premise AlwaysOn VPN server (Microsoft’s replacement for DirectAccess). The more detailed settings are minted from an EAP.XML file generated manually on a test machine and then imported into the Intune blade in the Azure Resource Manager (ARM) console.

Depending on whether the user is on-site at the business location or working from home, connectivity to the network, either via secure wireless (802.1X) or IPsec VPN (IKEv2), gives access to corporate resources, courtesy of the provisioned certificates. This is carried out transparently. Since these are user certificates, connectivity is established after interactive logon.

This is down to a limitation in the Microsoft Intune SCEP configuration profile that assumes all assigned certificates are user-oriented, rather than machine-oriented. Thankfully, based on details from Microsoft at Ignite, an upcoming Microsoft Intune release will provide additional support for machine certificates. Once these are available, we’ll follow up with an additional post.

On-Premise Access and Single Sign-On (SSO)

So how are on-premise Active Directory resources accessed in a native Azure AD Join (AADJ) scenario? It may come as a surprise, but AADJ clients can also communicate with on-premise Active Directory resources. This is down to functionality built into recent versions of the Windows 10 client and Azure AD Connect, which provides additional details during AAD sync that can subsequently be used by the Windows client. It’s worth pointing out that this functionality is not specific to Windows Hello for Business, but applies to AADJ clients as a whole that wish to communicate with on-premise resources.

It is assumed there is line-of-sight to an on-premise domain controller from the Windows 10 client. Line-of-sight can mean on-premise wired, wireless (802.1X) or AlwaysOn VPN. For our passwordless scenario, the authenticated user has the aforementioned “Hello” certificate deployed via SCEP.
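A quick way to sanity-check line-of-sight is simply to see whether the LDAP port on a domain controller is reachable. The helper below is a hypothetical stand-in for illustration — a real Windows client locates DCs via DNS SRV records and the DC locator, not a raw TCP probe:

```python
import socket

def dc_line_of_sight(dc_host: str, port: int = 389, timeout: float = 3.0) -> bool:
    """Crude reachability probe: can we open a TCP connection to the
    LDAP port on a domain controller? Returns False on any socket error."""
    try:
        with socket.create_connection((dc_host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from a client that should have connectivity, check the 802.1X or AlwaysOn VPN tunnel first, before troubleshooting Kerberos or certificate issues.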

If on-premise domain controllers are Windows Server 2016 or above, then the certificate trust model for Windows Hello for Business, described here, can be dropped in favour of the key trust model. This simplifies deployment by not requiring SCEP/NDES for the Smart Card.

Using DSREGCMD from the command-line we can derive some useful information concerning the client.


This is an Azure AD joined device, with the private keys for certificates created during enrollment stored in the TPM. The user has also been enrolled for Windows Hello for Business (NgcSet: YES).

In this article, we illustrated how it is possible to optimise investment in Microsoft cloud services to deploy workspaces across any network, using modern secure authentication, whilst maintaining the ability to seamlessly access on-premise applications. This managed workspace achieves feature parity with a classically deployed workspace. Utilising modern (passwordless) secure authentication, together with Azure AD join, provides opportunities for customers to take advantage of other identity-as-a-service pieces already available in the Microsoft cloud.

For more information on these interesting topics, please contact us at Route443.

Shifting to Adaptive Authentication and Cloud-Based Security

There’s a significant shift in how organizations are viewing information security, according to The Global State of Information Security Survey 2017 (click to download the original publication) from PricewaterhouseCoopers (PwC).

Here’s a short summary of a few of the major trends mentioned in the document:

Opting for Cloud-Based Security

Instead of traditional on-premises systems, 62 percent of organizations are opting for cloud-based managed security services to provide:

  • Authentication
  • Identity and access management
  • Real-time monitoring and analytics
  • Threat intelligence

PwC calls out real-time monitoring and analytics as key to proactive threat intelligence – 51 percent of respondents monitor data to detect security risks and incidents.

To help you gain insight into the users and devices accessing your applications, Route443 can assist you in the area of Identity & Access Management; that insight can then be used to make access policy decisions.

Advanced Authentication

“Identity has been at the heart of most every breach in the past two years.” – Richard Kneeley, PwC US Managing Director, Cybersecurity and Privacy.

Phishing has emerged as a significant risk across all companies and every industry. Thirty-eight percent of those surveyed reported phishing scams. Criminals will send phishing emails to employees in order to trick them into sharing their legitimate user credentials, gaining access to company systems and data.

Passwords alone aren’t secure enough to protect against phishing attacks. PwC reports that businesses are adopting advanced authentication, or multi-factor authentication, technology such as software tokens, biometrics and smartphone tokens.

As security perimeters dissolve and identity expands from  people to connected devices, identity and access management (IAM) tools are more essential than ever to protect access and prevent incursions.

As PwC stated in their survey, “authentication must be frictionless and intuitive for end users.”

Route443 can assist you by implementing contextual, conditional access, where having the password alone is just not enough. Bringing devices into the context of authentication and authorization enables frictionless and intuitive authentication for your end users.

Adaptive Authentication

Another trend listed by PwC is the use of additional data points to identify suspicious behaviors and patterns – data such as a user’s login time and location, type of device, network, etc. to create risk-based access decisions.

“Identity has been at the heart of most every breach in the past two years,” said Richard Kneeley, PwC US Managing Director, Cybersecurity and Privacy. “Many of these breaches have involved someone gaining access by using compromised identity, then changing their identity once inside the network to ratchet up access to data and systems by taking over a privileged account and in the process gaining unlimited access to the network, to systems and to data.”

Protecting the identity is the fundamental ground rule of our Identity Driven Security approach; Route443 is able to assess, guide and implement all required measures.
By blocking authentication attempts based on user location, network type or device, you can reduce risks associated with anonymous networks, countries you don’t do business in, or exposure to out-of-date and risky devices.
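The kind of risk-based decision described above can be sketched as a small policy function. This is a hypothetical illustration — the attribute names and rules are invented, not those of any specific product:

```python
# Hypothetical policy engine: attribute names and rules are illustrative only.
HIGH_RISK_COUNTRIES = {"XX"}                  # countries you don't do business in
ANONYMOUS_NETWORKS = {"tor", "anonymizer"}

def access_decision(ctx: dict) -> str:
    """Return 'block', 'require_mfa' or 'allow' for a sign-in context."""
    if ctx.get("network_type") in ANONYMOUS_NETWORKS:
        return "block"                        # anonymous network: deny outright
    if ctx.get("country") in HIGH_RISK_COUNTRIES:
        return "block"
    if not ctx.get("device_compliant", False):
        return "require_mfa"                  # step up rather than deny
    return "allow"
```

Note the middle ground: an unknown or non-compliant device doesn’t have to mean a hard block; stepping up to MFA keeps the experience frictionless for legitimate users while still raising the bar for attackers.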

Please stay tuned and follow our website and blog to receive the latest information from us.


Transformation of the Desktop

For more than twenty years the Desktop PC has been the staple of enterprise computing, as the main productivity tool for knowledge workers. This dominance is being increasingly challenged as the modern workforce shifts to a more mobile experience, with modern operating systems reflecting this commoditized (read: BYOD) trend. Within this new generation of computing the traditional way of managing (thereby controlling) those devices will no longer apply or suffice. The reality is that as we see the desktop shifting toward a more mobile form, our traditional view of how we perceive infrastructure and security is fundamentally challenged. Not convinced? Stay tuned and we’ll delve into how we see this next generation computing mapping out.

Within the mobile world there’s a powerful and agile model of security and management called Enterprise Mobility Management (EMM). It contains three major management components: Mobile Device Management (MDM), Mobile Application Management (MAM) and Mobile Content Management (MCM).

…With Windows 10, Microsoft has re-architected the Windows operating system to adopt EMM…

Here’s why: with the rise of mobile computing, employees no longer use (or no longer only use) a locked-down PC on the corporate network to do their jobs. Instead they use many different devices, some company-owned and some personally owned. These devices run a vast array of (mobile) apps and connect across networks that are outside of IT’s control. Legacy Windows client management tools (like Microsoft’s System Center Configuration Manager, SCCM) are too inflexible for modern computing environments. They imply management of a client through installation of a complex system image on the PC, constrained by the boundaries of the organization. Solutions such as DirectAccess are last-gasp entreaties to modernize the managed client in the conventional sense.

…The era of the domain-joined PC is coming to a close…

EMM moves the legacy PC paradigm from a complex and hard-coded system image to context-based policy. With Windows 10, Microsoft is addressing the need for greater security and management flexibility in the enterprise. Yet the Apple MacOS platform has been in this position for many years; from the start of the “mobile century”, MacOS has been considered a mobile device alongside the smartphones and tablets running Android and iOS. So why is this development now gaining momentum? Could it have something to do with the impressive number of 400 million Windows 10 devices already in the field? Clearly an operating system that is imposing itself on the market in such volume, while supporting much of the functionality organizations and their users are looking for, is going to have an impact on the conversation.

Gartner retired the Magic Quadrant for Client Management Tools in March 2016…

The traditional Windows architecture offered a broad attack surface because both the file system and the operating system itself presented vectors. To counter the risk, IT had to install, as part of the image, additional security agents to monitor threats and remediate accordingly. Maintaining the integrity and security of data on the PC was a constant struggle. Likewise, this model required devices to join a Windows domain governed by policy (GPOs), or third-party management software, controlling what employees could or could not do on the PC. It assumed devices were corporate-owned, Windows-based, and connected to a persistent local area network (LAN).

For the most part, the modern enterprise, and moreover the IT department, no longer has the latitude to work this way. The demands of today’s employees, working on any device in a variety of environments (home, airports, coffee shops, hotels, etc.), mean the traditional approach can no longer support this work style. Mobile devices are not LAN-bound and are frequently owned by the employee, rather than the company. The blurring of business vs. personal, and the way in which focus shifts freely from device to application to data, means overlap is inevitable. Flexible use of devices becomes deeply embedded in many aspects of an employee’s personal and work life.

To address this new vista (no pun intended), Microsoft has re-architected Windows 10 to move beyond legacy management systems and fully support EMM.

EMM solutions like Microsoft Intune are providing an efficient and flexible way to provision services to employees and secure business data on modern operating systems. The move to EMM represents a major change in how the desktop will be secured and managed moving forward.

…Our vision on this…

We believe that organizations need to start planning now for the moment where PCs are managed and secured like mobile devices, and desktop apps are developed and deployed like mobile apps. That’s a major upcoming shift within the technology landscape, enabling the transformation of the desktop.

In an upcoming blog post we’ll explain the technology behind EMM solutions, specifically the Microsoft Intune EMM solution, and provide a sneak preview to help you make the right decisions.

Please stay tuned and follow our website and blog to receive the latest information from us.



DirectAccess with PointSharp ID

Microsoft DirectAccess continues to be a strong remote access solution in the on-premise space. On 27th July 2016, Richard Hicks, MVP in Cloud and Data Center Management and well-known DirectAccess expert, will be hosting a webinar with PointSharp to describe the combination of strong authentication using DirectAccess with PointSharp ID. You can enroll for this webinar here.

Meanwhile, if you can’t make the webinar, Route443 will demonstrate in this blog post how the two technologies can work together. PointSharp ID, for those not familiar, is a robust two-factor authentication (2FA) service that combines One-Time Passwords (OTP) and other alternative authentication mechanisms for use in a wide variety of logon scenarios. Developed by PointSharp AB, a Swedish security company, it’s a flexible, low-cost, easy-to-use product that provides a comprehensive set of authentication and security features. In this post, we look at how DirectAccess and PointSharp ID can be used to strengthen the DA authentication process.


DA Client/Authentication   | Kerberos Proxy | Machine Certificate | User OTP
Windows 7 Enterprise       |       –        |          X          |   X ¹
Windows 8.x Enterprise     |       X        |          X          |   X
Windows 10 Enterprise      |       X        |          X          |   X

¹ Requires the DirectAccess Connectivity Assistant

Windows 8.x and beyond support a simplified access model using a Kerberos proxy in DirectAccess. For OTP configurations, use of a Public Key Infrastructure (PKI) is mandatory. Through an appropriately configured Active Directory Certificate Services (AD CS) certificate authority, DirectAccess acts as a certificate enrollment agent, providing successfully authenticated clients with “OTP” certificates as proof of successful OTP authentication.

While Windows 7 is supported for two-factor authentication, it requires the installation of a separate application, the DirectAccess Connectivity Assistant, to provide the necessary OTP capability.  For expediency, we’ve limited this test setup to Windows 8.x and Windows 10 Enterprise, both with support for 2FA in DirectAccess built-in.

A reference document outlining what is required for this configuration can be found on Microsoft Technet here. Richard Hicks has also written an excellent post about DirectAccess with OTP.

Let’s take a peek at our basic test logon workflow.

DirectAccess with PointSharp ID

In this configuration Windows 8.1 / 10 Enterprise Client(s) are configured with machine certificates issued by an AD Enterprise Certificate Authority. DirectAccess relies on IPsec policies for authenticating and securing traffic from Internet-connected clients. In order to authenticate to domain resources, the client must first establish connectivity to DNS servers and Domain Controllers (DCs) through what we refer to as the Infrastructure Tunnel (1). Once authenticated successfully, the machine is available to reach management servers identified during the DA installation, for example SCCM server(s) to process software updates.

At this point the user has not yet authenticated; from the Windows side bar (2) they need to press <CTRL><ALT><DEL>. The user has been issued a soft token on their smartphone by PointSharp ID. They reference this token (2a), input the time-based OTP (TOTP) on the logon screen and their credentials are sent to DA. As a RADIUS client, DirectAccess forwards the request to the PointSharp ID RADIUS server, where a user lookup in AD is performed (2b) and the OTP is validated by PointSharp ID. Upon successful authentication, the DirectAccess server enrols a short-lived OTP certificate on behalf of the user (2c) and this certificate is then used by the DA client, together with the machine certificate, for authentication of the Intranet/User tunnel (3).
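Since PointSharp ID supports OATH tokens, the OTP the user types is typically an RFC 6238 time-based one-time password (TOTP). As a minimal sketch of the construction (server-side validation would additionally allow a small clock-drift window):

```python
import hashlib, hmac, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP (HMAC-SHA1), the construction used by
    common OATH authenticator apps."""
    counter = struct.pack(">Q", unix_time // step)     # time steps since epoch
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
print(totp(b"12345678901234567890", 59))   # -> 287082
```

Because client and server derive the same code independently from a shared secret and the current time, nothing secret crosses the wire; DirectAccess simply relays the six digits to PointSharp ID over RADIUS for validation.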

With the DirectAccess role installed, let’s have a look at some of the specifics of this configuration. Rather than cover the entire DA configuration, we’ll jump to the pertinent parts of a DA/PointSharp configuration. We begin midway through Step 1 of our DirectAccess server setup.


On the Select Groups option, we can determine which managed clients will receive the DirectAccess group policy (GPO). By default, the built-in Domain Computers group is enabled.


As the above graphic and the warning illustrate, it’s not a good idea to clear the “Enable DirectAccess for mobile computers only” checkbox, as the combination of the Domain Computers group and the cleared checkbox means all domain computers will receive this configuration.

It’s common for organizations to replace the default Domain Computers group with an AD security group to filter application of the DirectAccess group policy. Although this requires manually adding computers to the created group, it does add an additional level of control in determining which (computer) clients are allowed remote access.

Moving onto the Network Connectivity Assistant (NCA) screen, add an HTTP endpoint from your corporate network that the NCA can use to validate the connection.


In Step 2, we enable the two-factor authentication elements.


Before we leap ahead, let’s have a look at what’s being done to prepare the PointSharp ID server and AD Certificate Services.

PointSharp ID acts as a RADIUS Server for DirectAccess.  This requires adding the DA server as a RADIUS client to the PointSharp configuration. A shared secret is used between the two to pair the RADIUS “trust”.


Once the RADIUS client is added, an authentication method can be created in PointSharp ID to support OTP logon through DirectAccess. In the example below, a specific listener is setup for DA. Since DirectAccess does not support challenge/response, the Password Type Stateless:OTP is used.


Our Certificate Authority (CA), a subordinate enterprise CA, is configured as per the documented requirements. Two templates have been created (Windows 8/2012 R2 compatibility level).


The first template is for the DirectAccess server acting as a registration authority, or in PKI parlance an Enrollment Agent. This template uses an Object Identifier (OID) specific to this task; in the Application Policy, the original OIDs are removed and replaced with the DirectAccess OTP identifier.

NB: This template is a duplicate of a Computer template.


The DirectAccess computer account then needs to be given permission to auto-enroll on this template.


Also in this setup, the Default Domain Policy Group Policy Object (GPO) in Active Directory is providing the requisite auto-enrollment policy, so the DA server may request and receive certificates and updates.


Back in AD Certificate Services, the validity period is set to 2 days and the renewal period to 1 day. For certificate naming, this is based on the DNS name of the server, with the subject alternative name (SAN) also set to the DNS name.


The second template, DirectAccess PointSharp OTP Logon, is a duplicate of the Smart Card logon template, with the Client Authentication OID removed from the Application Policy. This template has issuance requirements specifying that the application policy from the RA template is present in the signature; in other words, the request must be signed by the DirectAccess server.


The validity period we set for this certificate is extremely short (1 hour). By default, certificate processing for each client would entail storing a record of each certificate request and issued certificate in the CA database. When dealing with a relatively high volume of these requests for OTP certs from a number of DA clients, over time this could significantly increase the CA database size. Given the short lifetime of the certificate, it doesn’t make much sense to store it in the Certificate Services database. Accordingly, we enable non-persistent certificate processing on the CA. This needs to be enabled by running:


Certificate Services then needs to be restarted. Similarly, the DA OTP template also needs to be configured not to persist certificates/requests to the database. This is done by checking the Do not store certificates… checkbox.
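The motivation is easy to see with a quick back-of-the-envelope calculation. Every figure below is assumed purely for illustration, not measured:

```python
# Back-of-the-envelope sketch of CA database growth without non-persistent
# processing; all figures are assumptions for illustration.
clients = 500                       # DA clients in the estate
otp_logons_per_day = 8              # user tunnels established per client per day
row_bytes = 10 * 1024               # assumed size of request + issued-cert rows

daily_growth = clients * otp_logons_per_day * row_bytes
print(f"~{daily_growth / 1024**2:.0f} MiB per day")
```

Tens of MiB per day of rows for certificates that expire within the hour is pure dead weight in the CA database, which is why discarding them is the sensible default here.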


Back on the DirectAccess server, the PointSharp ID server information (OTP RADIUS Server) needs to be filled in, a shared secret specified and the authentication port set.


The Certificate Authority hosting the OTP template(s) then needs to be identified to the DirectAccess Server configuration.


The templates created earlier are then viewable.


If there are any accounts that are exempt from using two-factor authentication, then these should be added.


In Step 3 of the configuration wizard, ensure the FQDN of the enterprise CA is added as a management server.


Once the DirectAccess server configuration is complete, GPOs created etc., the relevant clients (members of the specified security group) will receive their DA configuration on reboot.

Testing from the Internet, the Infrastructure (computer) tunnel is negotiated during client startup. This aspect of the configuration remains unchanged from a base DA setup. It’s the User (Intranet) tunnel that requires further interaction once the user has logged in to Windows.

From a Windows 8.1 client, click on the Networking icon in the system tray.


We are informed that the connection requires additional attention.


Clicking on Continue, the user is prompted to press <CTRL><ALT><DEL> to enter additional credentials.


This can be either a smart card, a virtual smart card or (in this case) a One-Time Password (OTP).


Clicking on One-time password (OTP) shifts the login focus to entering the OTP credential.


Referencing the smartphone, we enter the PointSharp One-Time Password (OTP).  Since PointSharp ID supports OATH tokens, we’re pretty much free to choose which type of authenticator client we wish to use on our smartphone. In this instance we are using the Microsoft Authenticator app on Windows Phone.


In another setup using Google Authenticator, we’ve enrolled an iPhone 6 for OTP integration.


Once credentials have been entered at logon, they are sent to the DA server which, as a RADIUS client, forwards them to the PointSharp RADIUS server for authentication. If the OTP is valid, an authentication successful event is generated.


An OTP Certificate is issued to the client, via the DA Enrollment Agent and Enterprise CA, and the second User (Intranet) tunnel is established.


For Windows 10 clients, the behavior is similar, albeit with some slightly nuanced user interface changes. Again, clicking on the Network icon will take us to the network summary screen.


Click on the Action needed icon.


The user is taken to the Network & Internet settings section.


Click again in the Action needed area.


Click on the Continue button and the user is prompted to press <CTRL><ALT><DEL> to enter their credentials.


Under Windows 10, we’re directly asked to enter our One-Time Password (OTP) credentials. We enter the OTP from the smartphone.


And the connection is established.


From our Windows 10 client, we can then use PowerShell to check our connection using the Get-DAConnectionStatus cmdlet.


If you’re building this environment from scratch, ensure that basic DirectAccess connectivity is working before building in two-factor authentication: check that the DA server is fully operational, clients are auto-enrolled with a computer certificate, both tunnels are starting, etc. Similarly, we recommend building out your PointSharp ID configuration before beginning integration with DA.

If you’d like to know more on implementing DirectAccess or similar technologies, please contact us. We’ll be happy to assist.

SAML authentication for Citrix XenDesktop and XenApp

Citrix recently published an article announcing a technical preview of their SAML based authentication technology for XenApp and XenDesktop.

This is a very exciting development and something we have been seeking for a long time. Federated authentication has been around for some time in various guises for NetScaler, Web Interface and for some older XenApp versions (actually Kerberos Constrained Delegation, KCD), the latter mysteriously disappearing in version 7.x of XenApp and XenDesktop.

At Synergy 2016, Citrix announced a new version of XenApp/XenDesktop, version 7.9. This latest release, available early June 2016, incorporates their SAML authentication technology.

“The 7.9 release introduces Federated Authentication Service to provide secure business-to-business access to contractors and partners as well as simplify Active Directory domain integration as part of an acquisition, merger or cloud transition. The new Federated Authentication Service integrates with SAML-based identity providers via Citrix NetScaler to allow each business unit to manage their own accounts yet still provide the same secure, remote access to their virtualized apps and desktops hosted on XenApp and XenDesktop”

In this blog we will explain how to implement the Technical Preview. Use it to become familiar with the technology, so that you are ready to implement it when XenApp/XenDesktop version 7.9 becomes available.

Please do not implement the technical preview in a live environment.

Outside of Windows 10, the classical Microsoft Windows logon experience supports two basic authentication mechanisms:

  • username/password
  • smart card

With the Federated Authentication Service, Citrix introduces a virtual smart card (VSC) for logging on to a Windows server or desktop. To facilitate this, an additional component is introduced: the User Credential Service (UCS). This service acts as an intermediary between StoreFront, the Virtual Desktop Agent (VDA) and the Certificate Authority (CA).

Here is a high level architecture overview of the technology:


When implementing the Federated Authentication Service, ensure you meet the necessary prerequisites. The following components should be up and running in your infrastructure:

  • SAML 2.0 Identity Provider (IdP)
  • Public Key Infrastructure (PKI) / Certificate Authority (CA)
  • NetScaler
  • Desktop Delivery Controller (DDC), StoreFront and a VDA

The installation and initial configuration of these components are not covered in this blog post. With the above prerequisites in mind, the starting point for this configuration was an operational Active Directory, AD Certificate Services and AD Federation Services, together with the NetScaler and XenDesktop environment. Before installing the Federated Authentication Service, a basic preflight of Citrix services was conducted:

  • Being able to use the Receiver for Web to access StoreFront
  • Launching a published application with the Windows Receiver on a Windows device
  • Using NetScaler Gateway for:
    • Clientless access to StoreFront
    • ICA-proxy towards XenDesktop
  • Authenticating with LDAP on NetScaler Gateway


User Credential Service Installation/Configuration

The installation of User Credential Service (UCS) consists of three components that need installing:

  • Citrix.UCSLogonDataProvider-x64.msi. Install this on the StoreFront server.
  • Citrix.Authentication.IdentityAssertion_x64.msi. Install this on the VDA server. (If you have a 32-bit VDA installation, use the 32-bit version.)
  • UserCredentialService.msi. Install this on a Windows 2012 R2 server.

We encountered some issues with the installation and only managed to get the components properly installed when launching them from an Administrative Command Prompt, calling msiexec.exe with the /i switch.

Configure your AD for smart card logon

The Active Directory Domain Controller environment needs to be configured for certificate authentication by ensuring that there are up-to-date Domain Controller certificates installed for Kerberos authentication. Look at CTX206156 for an example deployment.


Log on to your StoreFront server and open PowerShell as an administrator. Run the following commands:

& "$Env:PROGRAMFILES\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1"
$siteId = 1
Install-DSUcsClaimsFactory -siteId $siteId -virtualPath "/Citrix/StoreAuth"
Install-DSUcsLogonDataProvider -siteId $siteId -virtualPath "/Citrix/Store"

Change the virtualPath parameter so that it matches your StoreFront installation paths.

Group Policy (GPO)

When you have installed the UCS component, locate the CitrixUserCredentialService.admx/adml GPO template in C:\Program Files\Citrix\UserCredentialService\PolicyDefinitions and copy it to your GPO template location.

Create a new GPO and locate the Administrative template under Citrix Components/Authentication.


Enable the “User Credential Service” and enter the FQDN of the server that is hosting the User Credential Service.

Enable “Virtual Smartcards” and specify the “Prompt Scope” and “Consent timeout” value. In our setup we used the default values.

Close the GPO editor and apply the GPO to your StoreFront servers and VDAs.

Run gpupdate on your StoreFront servers and VDAs. Verify that the GPO is active by opening the Registry Editor and browsing to:


Verify that the FQDN of your UCS server is listed.


Log on to the UCS server. Open the Citrix User Credential Service console and select your UCS server. If you did not start the UCS console with a local admin account, you will be prompted for credentials.

The initial setup is a three-step process:


  1. Deploy certificate templates to AD Certificate Services.
    • These three templates will be installed and enabled on your Certificate Authority
    • Citrix_RegistrationAuthority_ManualAuthorization

    • Citrix_RegistrationAuthority

    • Citrix_SmartcardLogon

    • The technical preview adds the templates, but the initial ACL is incomplete. Open the Certificate Templates snap-in in MMC and grant “Authenticated Users” read permission on all three Citrix templates.
  2. Setup Certificate Authority
    • Specify the correct Certificate Authority
  3. Authorize the UCS service
    • Authorization is done in two steps: first request a certificate, then approve the pending request on your CA. Once the certificate is issued and installed, the initial setup of the UCS service is complete.


The next step is to configure the “User Roles” on the UCS. The setup of UCS includes one “default” role. You can specify which role you would like to use in the GPO. If you do not specify a role in the GPO the default role will be used. In our test setup, we’ve used the default role.

  • Specify the list of StoreFront servers that may use the specific role
  • Specify the list of VDAs you would like to log on to with the specific role
  • Specify the list of users allowed to log on to StoreFront with the specific role

Once all the above settings are specified, the configuration of the UCS is complete.



The technical preview does not yet open up the local firewall on the UCS server. Configure the local firewall manually to accept incoming requests on port 80.

NetScaler Gateway Authentication/Session Profile

Once the installation and configuration of the UCS is complete, we can begin with the NetScaler Gateway setup.

The setup used in testing is similar to this schematic:


Authentication Profile

  • Import your ADFS signing certificate (public key only)
  • Create a SAML authentication server
  • Create a SAML authentication policy and bind the SAML authentication server
  • Bind the SAML authentication policy to the NetScaler Gateway virtual server
add ssl certKey adfs_signing_cert -cert adfs_signing_cert.cer

add authentication samlAction adfs_auth_svr -samlIdPCertName adfs_signing_cert -samlSigningCertName <nsg_fqdn> -samlRedirectUrl "https://<adfs_fqdn>/adfs/ls" -samlUserField "Name ID" -samlIssuerName <nsg_fqdn>

add authentication samlPolicy adfs_auth_pol ns_true adfs_auth_svr

bind vpn vserver nsg_vsrv -policy adfs_auth_pol -priority 100


AD FS Configuration

We’ll need to establish a relying party trust on the AD FS server between it, as the identity provider (IdP), and the NetScaler Gateway virtual server, as a service provider (SP), configured for the SAML 2.0 protocol.

Initially, create a metadata.xml file and place it on the NetScaler.

Example metadata.xml file:

<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" ID="_724200788f8391f96053f72adc628fecc808d09a" entityID="<nsg_fqdn>">
 <md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol urn:oasis:names:tc:SAML:1.1:protocol urn:oasis:names:tc:SAML:1.0:protocol">
  <init:RequestInitiator xmlns:init="urn:oasis:names:tc:SAML:profiles:SSO:request-init" Binding="urn:oasis:names:tc:SAML:profiles:SSO:request-init" Location="https://<adfs_fqdn>/adfs/ls"/>
  <md:KeyDescriptor use="signing">
   <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
    <ds:X509Data>
     <ds:X509Certificate><NSG cert in PEM format, public key only></ds:X509Certificate>
    </ds:X509Data>
   </ds:KeyInfo>
  </md:KeyDescriptor>
  <md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://<nsg_fqdn>/cgi/samlauth" index="0"/>
 </md:SPSSODescriptor>
</md:EntityDescriptor>

Next, log on to the AD FS server and create a new Relying Party Trust using the wizard. In the wizard, point to the metadata.xml file on the NetScaler.


Open the properties of the Relying Party Trust and uncheck “Monitor relying party” on the Monitoring tab.


Remove the Encryption certificate on the Encryption tab


Set the secure hash algorithm to SHA-1 on the Advanced tab.


Click on OK. This completes the initial configuration of the Relying Party Trust in AD FS.


AD FS needs to pass two claims on to the NetScaler Gateway virtual server in order to complete the authentication process correctly. Right-click the NSG relying party trust and select “Edit claim rules”. Add a Send LDAP Attributes rule and a Send Claims Using a Custom Rule.

Send LDAP attributes Claim (Send UPN as NameID)

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"), query = ";userPrincipalName;{0}", param = c.Value);

Send Claim using a custom rule (Send LogoutURL)

 => issue(Type = "logoutURL", Value = "https://<adfs_fqdn>/adfs/ls/", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified");

Session Profile

  • Create a NSG session profile
  • Create a NSG session policy and bind the session profile
  • Bind the NSG session policy to the NetScaler Gateway virtual server
add vpn sessionAction ses_prof_rfw -transparentInterception ON -defaultAuthorizationAction ALLOW -SSO ON -ssoCredential PRIMARY -icaProxy ON -wihome "https://<storefront_fqdn>/Citrix/<rfw_path>" -wiPortalMode NORMAL -clientlessVpnMode OFF

add vpn sessionPolicy ses_pol_rfw "REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver" ses_prof_rfw

bind vpn vserver nsg_vsrv -policy ses_pol_rfw -priority 100


StoreFront configuration

Configure StoreFront to fully delegate authentication to NetScaler. Log on to the StoreFront server and open the StoreFront management console. Browse to Manage Authentication Methods and select “Pass-through from NetScaler Gateway”.


Select Configure Delegated Authentication and check “Fully delegate credential validation to NetScaler Gateway”.



Ready to test the configuration

Having configured the Federated Authentication Service, we are ready to test it. The technical preview only supports Receiver for Web and the Windows Receiver.

  • Open a browser and browse to your NSG virtual server.
  • Your browser redirects you to the AD FS server for authentication.
  • Once AD FS has completed authentication, the browser is returned to NSG and you will be logged on to StoreFront.
  • Launch a published application or desktop and seamless logon will commence.

This is how it looked in our environment:




StoreFront troubleshooting is described here:

Desktop Agent

To enable tracing, create a folder named c:\logs, and set permissions so that the Broker Agent Service can write to it. Open the BrokerAgent.exe.config file in c:\Program Files\Citrix\Virtual Desktop Agent

Add a line:

<add key="Citrix.Authentication.IdentityAssertion.LogFileName" value="c:\logs\ucs.log"/>

User Credential Service

To enable tracing, create a folder named c:\logs, and set permissions so that the User Credential Service can write to it. Open the Citrix.Authentication.UserCredentialService.exe.config file in C:\Program Files\Citrix\UserCredentialService

Add a line:

<add key="Citrix.Authentication.UserCredentialService.LogFileName" value="c:\logs\ucs.log"/>


Federated Authentication Service Blog


Azure AD as an Identity Provider

Let’s take a quick look at Azure Active Directory (AAD) in the identity provider role. Anyone using Office 365, be it logging on with a standard account or a federated one, utilizes an Azure AD identity, with Azure AD brokering access to Office 365 resources.

What happens when we wish to connect our own SaaS/web applications to the Azure AD world? Well, Windows Azure provides a number of identity-based technologies to support such requirements. As a means of illustrating this, we’ll show an example using Azure AD as a SAML 2.0 Identity Provider (IdP), connected to a basic web application via a PHP-based SAML Service Provider: simpleSAMLphp.

We log in to our Azure tenant (Azure Service Manager) and scroll down to the Active Directory icon.


On the directory tab, click on the organization and then the Applications tab.  From the bottom of the screen, create a new application by clicking on the Add icon.


Select Add an application my organization is developing.


Give your SaaS/Web application a name (e.g. simpleSAMLphp Demo).  Using the radio button, select the type of application. Since this is a SAML-P application using the browser, we need to select the Web Application / Web API  option.


Click on the arrow. Enter the details for your SAML application.


For Sign-On URL fill in the Assertion Consumer Service (ACS) URL for the Service Provider (simpleSAMLphp). We’ll revisit these settings in a  moment. For the App-ID URI, the Identifier or Entity ID of the SAML Service Provider is expected.

Here’s an example using our  simpleSAMLphp application.


Here we’ve gone back and changed the Sign-On URL to the base URL of the SimpleSAMLphp admin page. This is where (for the test) we want to send users to when accessing the “application”. It’s the Reply URL which is the address to which Azure AD will send the SAML authentication response. Further down in the application configuration in Azure Manager, we see the Single Sign-On settings.
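Behind the scenes, with SP-initiated sign-on the Service Provider redirects the browser to the IdP’s SSO endpoint carrying a DEFLATE-compressed, base64-encoded AuthnRequest in the SAMLRequest query parameter. As a rough, generic sketch of the SAML 2.0 HTTP-Redirect binding (not Azure- or simpleSAMLphp-specific code; all URLs below are placeholders):

```python
import base64
import zlib
from datetime import datetime, timezone
from urllib.parse import urlencode

def build_redirect_url(idp_sso_url: str, sp_entity_id: str, acs_url: str) -> str:
    """Build an SP-initiated SAML 2.0 HTTP-Redirect binding URL.

    Per the binding, the AuthnRequest XML is raw-DEFLATE compressed,
    base64-encoded and passed as the SAMLRequest query parameter.
    """
    issue_instant = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        'ID="_example-request-id" Version="2.0" '
        f'IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f'<saml:Issuer>{sp_entity_id}</saml:Issuer>'
        '</samlp:AuthnRequest>'
    )
    # zlib.compress emits a 2-byte header and 4-byte checksum; strip both
    # to get the raw DEFLATE stream the binding requires.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    return idp_sso_url + "?" + urlencode({"SAMLRequest": saml_request})
```

The Reply URL configured in Azure corresponds to the `acs_url` here: it is where the IdP posts the signed SAML response after authentication.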


Here are the actual settings used, albeit with dummy URLs.

Sign-On URL

Reply URL

App URI (Identifier)

On the Service Provider side, the metadata from the (tenant) Azure Identity Provider needs to be parsed and added to the simpleSAMLphp configuration file (saml20-idp-remote.php). This is done by downloading the Azure IdP metadata file directly, e.g. <AzureTenantID>/federationmetadata/2007-06/federationmetadata.xml

Connect to the simpleSAMLphp web administration interface. From the federation tab, select the XML to simpleSAMLphp metadata converter.


Cut and paste the Azure XML document from the tenant into the simpleSAMLphp converter in the web browser, convert the text and then copy it to the clipboard. This text can then be appended directly to the saml20-idp-remote.php file.

Here’s an example. Replace the Azure Tenant ID with your own ID accordingly.

$metadata['<Azure Tenant ID>/'] = array (
  'entityid' => '<Azure Tenant ID>/',
  'contacts' => array (),
  'metadata-set' => 'saml20-idp-remote',
  'SingleSignOnService' => array (
    0 => array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => '<Azure Tenant ID>/saml2',
    ),
    1 => array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST',
      'Location' => '<Azure Tenant ID>/saml2',
    ),
  ),
  'SingleLogoutService' => array (
    0 => array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => '<Azure Tenant ID>/saml2',
    ),
  ),
  'ArtifactResolutionService' => array (),
  'keys' => array (
    0 => array (
      'encryption' => false,
      'signing' => true,
      'type' => 'X509Certificate',
      'X509Certificate' => '<CERTIFICATE>',
    ),
  ),
);
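What the metadata converter does can be approximated in a few lines: parse the IdP metadata XML and pull out the entity ID, the endpoints per binding, and the signing certificate. A minimal sketch using only the Python standard library (the namespace URIs are the standard SAML metadata and XML-DSig ones; error handling omitted):

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"      # SAML metadata namespace
DS = "http://www.w3.org/2000/09/xmldsig#"         # XML digital signature namespace

def parse_idp_metadata(xml_text: str) -> dict:
    """Extract the fields an SP config needs from SAML 2.0 IdP metadata:
    entity ID, SSO/SLO endpoints per binding, and signing certificate(s)."""
    root = ET.fromstring(xml_text)
    idp = root.find(f"{{{MD}}}IDPSSODescriptor")
    return {
        "entityid": root.get("entityID"),
        "SingleSignOnService": [
            {"Binding": e.get("Binding"), "Location": e.get("Location")}
            for e in idp.findall(f"{{{MD}}}SingleSignOnService")
        ],
        "SingleLogoutService": [
            {"Binding": e.get("Binding"), "Location": e.get("Location")}
            for e in idp.findall(f"{{{MD}}}SingleLogoutService")
        ],
        "certificates": [
            c.text.strip() for c in idp.findall(f".//{{{DS}}}X509Certificate")
        ],
    }
```

The resulting dictionary maps closely onto the keys in the saml20-idp-remote.php entry above.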

Testing Authentication

From the Azure Application Portal, we can access the new test application.


From there we’re taken to the simpleSAMLphp administration page.

Within simpleSAMLphp we can select our identity provider for logon (Azure AD)


Click on the Select button to initiate the logon process.


Log on to the application with your Azure AD credentials and we’re returned to the simpleSAMLphp landing page.


Since Azure is brokering the connection with the application, this process also extends to using AD FS where the domain is federated. Azure performs the necessary realm discovery and routes the user to their home domain.

With these and a number of services, Azure offers a solid convergence point for brokering connections with your web applications and workspaces. It’s a rapidly evolving space, so stay tuned…

If you’d like to know more on how you can implement this and related technologies within your own environment, please contact us. We’ll be happy to assist.

The evolution of access control

Do you remember what it was like when everyone had desktop computers and data security focused on the best way to physically lock computers to heavy desks?

Many customers ask us how they can regain control of their “environment”, now that the environment has become scattered across on-premise or outsourced resources, cloud resources and mobile devices.

In this blog post, we’ll review the ways security and access control have changed over the years, highlighting how Enterprise Mobility Management solutions (we’ll be showing you the  Microsoft Enterprise Mobility suite here) are poised to provide integrated solutions for the current world with mobile devices and online (Cloud) services.

We’re using the Microsoft Enterprise Mobility Suite (EMS) as the explicit example here, because Microsoft has a different vision on how to solve these issues compared to other solution providers like MobileIron or AirWatch. The main and most important difference between those providers is the way they handle the delivery of (mobile) applications and data. Where MobileIron and AirWatch (as examples; there are many other providers out there) try hard to create “controlled bubbles”, Microsoft’s vision is to use the native device and application experience, while protecting access to the application and the data. That’s a fundamentally different way of taking care of the challenge. It’s not that the “bubble” approach is wrong to begin with (indeed, there are specific use cases for it), but the end-user loses the “native device experience” and, moreover, ends up using the EMM-provided mobile apps as a replacement for the native apps…

bubble approach

Picture: showing the MobileIron “apps” with the “bubble” approach 

We’ve seen cases at customers where the “bubble” approach failed, or at least was not that successful, as end-users did not fully accept losing native device apps such as Outlook mail or ActiveSync. Let’s not get going about the quality of those non-native EMM apps here, but you can imagine the challenges for the EMM providers 😉

So what’s Microsoft’s vision here? Microsoft of course has an interest in promoting its own applications, but on the other hand it has a great bundle of solutions at its disposal. First, though, let’s look at how things have changed over the past years:

Mobile Access version 1: Mobile Laptops

In the past, corporate data was hosted on-premises. It was accessed by desktops that were physically connected to the corporate network. Then, laptops emerged as the dominant corporate device, and the Virtual Private Network (VPN) was born.
VPNs provided 3 primary functions:

1. They made it possible for laptops to reach corporate services on the Intranet
2. They restricted corporate access to Internet-connected laptops
3. They helped prevent data loss by encrypting communications and running agents on the laptops that helped contain data

Over time, VPN technology evolved. The criteria that could be used for access control (e.g. require the laptop to be domain-joined) expanded and the technology to prevent data loss matured.
Eventually, new types of VPNs such as SSL VPNs emerged. SSL VPNs enabled app-specific, as opposed to device-wide, access to corporate services from the Internet. This reduced the attack surface and also enabled new scenarios such as accessing corporate services from web browsers running on unmanaged devices.

Mobile Access version 2: Smart Mobile Devices

Later, when smart mobile devices arrived in the corporate computing landscape, they needed access to corporate resources, and VPN technology was the tool available to provide that. Mobile devices, primarily connected to the Internet, needed network reachability to corporate services. However, these always-on devices brought many security concerns, given their early general lack of IT controls. This drove demand for technology complementary to the VPNs which would help protect data.
All of this created an opportunity for integrated solutions based on Mobile VPN, Mobile Device Management (MDM), and Mobile Application Management (MAM). The management system would provision a VPN profile to a mobile device and thereby give it controlled access to corporate services on the Intranet. MDM and MAM features would help provide data protection on mobile devices analogously to the agents deployed by VPN clients on laptops.
Over time, Mobile VPNs evolved into per-app Mobile VPNs. The per-app variety provided similar benefits to mobile devices that SSL VPNs had provided to mobile laptops in the past. They reduced the attack surface and enabled new scenarios.

Mobile Access version 3: Identity-based Access Control and Data Protection

Now, we are in an era of mobile access where increasing amounts of corporate data lives outside of the network perimeter. Data still lives on corporate networks, but it’s also in cloud services, on mobile devices, and in mobile apps. Perhaps one day you won’t have any corporate data left on-premises, but the moment you start adopting cloud services you need to rethink the way access is controlled and data is protected.


Picture: showing that within the current world the apps, devices and resources are scattered

In the mobile-first, cloud-first world, a fundamentally different approach was needed, so Microsoft built access control and data protection directly into mobile devices, mobile apps, and the cloud infrastructure itself. In this world your network perimeter is replaced by an “identity perimeter.”


Picture: showing that the classic perimeter protection layers no longer apply

That’s what Microsoft has built with Office 365 and the Enterprise Mobility Suite, as a supplement to the classic VPN provisioning mechanisms that other EMM providers like MobileIron or AirWatch have for on-premises apps. Microsoft EMS delivers integrated identity, access control, management, and data protection – built to protect your corporate data wherever it lives, using technologies like device and application management, Information Rights Management, risk-based contextual authentication, security analytics services and more.

With Microsoft EMS, whenever a mobile device or app attempts to authenticate to an online service (Microsoft or third-party) or an on-premises web app, EMS subjects the request to criteria you define, consulting the management system as needed. Is the mobile device managed and compliant with your IT policies? Is the mobile app managed? Has the user presented multiple forms of authentication? Is the PC domain-joined and managed or controlled? Is the request coming from the corporate network or the Internet? All of these criteria and more are evaluated without the need for a VPN. It’s just built into the solution.
The diagram below shows how Microsoft EMS ensures that you have the access controls in the cloud needed to replace the access controls in your VPNs.


Picture: illustration of conditional access using Microsoft EMS
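As a conceptual illustration only (the actual EMS policy engine and its criteria are configured per tenant in the cloud, not hard-coded), the kind of conditional access evaluation described above might be sketched as:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """The signals a conditional access check might consider.
    Field names are illustrative, not EMS API names."""
    device_managed: bool
    device_compliant: bool
    app_managed: bool
    mfa_performed: bool
    from_corporate_network: bool

def evaluate(req: AccessRequest) -> str:
    """Toy policy: block unmanaged/non-compliant endpoints, and require
    MFA for requests coming from outside the corporate network."""
    if not (req.device_managed and req.device_compliant):
        return "block"
    if not req.app_managed:
        return "block"
    if not req.from_corporate_network and not req.mfa_performed:
        return "require_mfa"
    return "allow"
```

The point of the sketch is the shape of the decision: identity-attached signals are evaluated at authentication time, replacing the network perimeter as the control point.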

In addition to providing cloud access control, Microsoft EMS also provides native data protection. Again, this is based on identity and integrated with management.
Was a corporate identity used to access the data? If yes, then the mobile apps will prevent the data from being shared with consumer apps or services via Save-As, Open-In, clipboard, etc (Intune MAM with or without device enrollment into MDM). Is the document itself explicitly protected by an access policy (using IRM like Azure RMS)? If so, enforce access control on that file, even when it roams outside of apps and devices under management.

This integrated approach to data loss prevention enables the same application to isolate the corporate and personal data that it handles. This means your employees will not have to use separate apps for work. They can just use native or Office mobile apps for work and personal use and the right protections will apply at the right times. The diagram below shows this concept.


Picture: showing the Microsoft EMS approach to handle Data Loss Prevention

As mobile access evolves from VPN-based to identity-based, we foresee several benefits:

  • Cost savings compared to VPNs. VPN technology is typically expensive and complex. Deploying VPN agents, profiles, and certificates is also complex and expensive. As more and more of your data moves to the cloud, you’ll enable larger and larger populations of cloud-only users that don’t require a VPN and everything it carries.
  • Simpler access infrastructure to operate. Instead of operating a global scale network perimeter with various proxies, gateways, and VPNs, you just need to connect your existing on premise AD with the Azure Active Directory. From there, Office 365 and other SaaS apps will route their authentication through Azure AD and your modern access controls will be enforced.
  • Better end-user experiences. With EMS’s identity-based access control, your end users will not have to install and launch separate VPN apps. The access control experience is natively a part of the sign-in experience in the mobile apps. Since your traffic isn’t bounced from the Internet to the Intranet and back, your employees get better latency and performance in their mobile apps.
  • Positioned for the future. Once your basic cloud access infrastructure is in place, you have a solid foundation for future innovation. Because the capabilities are provided from the cloud, improvements come often and automatically. You don’t need to plan upgrades or migrations to start to take advantage of the latest and greatest. Compare this to your VPN infrastructure today and the tremendous amount of effort it takes to upgrade to the latest and greatest.

At Route443, we often work with the identity-based model for mobile access control and data protection; it’s an area of special interest for us and we follow developments very closely. We see this as one of the best offerings in the industry for providing great mobile experiences to your employees in the most future-proof way.

In the meantime, if you’d like to know more on how you are able to use this functionality within your corporate environment, please contact us and let us know how we can assist you.



Azure Active Directory Identity Protection

Hi folks,

Just recently, Microsoft released their long-awaited implementation of risk-based authentication/authorization control. Personally, we’re very excited about this announcement. Hold your horses though, as it’s still in public preview… for now…

Let’s have a little background on the subject. What’s so interesting about this component and why should you be interested in it?

For starters, in the contemporary cloud we rely on Identity & Access Management frameworks to provide our subscribers with secure and manageable paths to authentication and authorization of their resources. By secure we mean we are able to provide our subscribers with a corporate identity in the current framework, but there are limitations. For example, how are we to know if it’s really that subscriber using the resource at a given moment? Sure, we know that the credentials are valid, but what if the account has been compromised? How does one tell? Cue Azure AD Identity Protection: a big step in the right direction for establishing a risk posture and applying it during the authentication process, particularly when combined with other mechanisms such as Multi-Factor Authentication (MFA) (something we’ll cover in a later blog post).

Azure Active Directory Identity Protection is a security service within Microsoft Azure that provides a consolidated view into risk events and potential vulnerabilities affecting the organization’s identities. Identity Protection leverages Azure AD’s existing anomaly detection capabilities (available through Azure AD’s Anomalous Activity Reports), and introduces new risk event types that can detect anomalies in real-time.

The vast majority of security breaches take place when attackers gain access to an environment by stealing a user’s identity. Attackers have become increasingly effective at leveraging third party breaches, and using sophisticated phishing attacks. Once an attacker gains access to even a low privileged user account, it is relatively straightforward for them to gain access to important company resources through lateral movements/traversal attacks. It is essential, therefore, to protect all identities and, when an identity is compromised, proactively prevent the compromised identity from being abused.

Discovering compromised identities is no easy task. Identity Protection uses adaptive machine learning algorithms and heuristics to detect anomalies and risk events that may indicate that an identity has been compromised.

Using this data, Identity Protection generates reports and alerts that enable the administrator to investigate these risk events and take appropriate remediation or mitigation action.

Azure Active Directory Identity Protection is more than simply a monitoring and reporting tool. Based on risk events, Identity Protection calculates a user risk level for each user, enabling the security professional to configure risk-based policies to automatically protect the identities of the organization. These risk-based policies, in addition to other conditional access controls provided by Azure Active Directory and EMS, can automatically block or offer adaptive remediation actions, including password resets and enforcement of multi-factor authentication.

Now, let’s have a look at the delivered functionality here.

In the reporting module of the Azure Active Directory Identity Protection service, we’re now able to view some important security related events within our environment (tenant):

Detecting risk events and risky accounts:

  • Detecting 6 risk event types using machine learning and heuristic rules
  • Calculating user risk levels
  • Providing custom recommendations to improve overall security posture by highlighting vulnerabilities

Investigating risk events:

  • Sending notifications for risk events
  • Investigating risk events using relevant and contextual information
  • Providing basic workflows to track investigations
  • Providing easy access to remediation actions such as password reset

Very useful additions for incident and event management. The real-time evaluation and mitigation are also very interesting.

Risk-based conditional access policies:

  • Policy to mitigate perceived “risky” sign-ins by blocking sign-ins or requiring multi-factor authentication challenges.
  • Policy to block or secure “risky” user accounts
  • Policy to require users to register for multi-factor authentication

Risk level, determining the authentication context

The Risk level for a risk event is an indication (High, Medium, or Low) of the severity of the risk event. The risk level helps Identity Protection users prioritize the actions they must take to reduce the risk to their organization. The severity of the risk event represents the strength of the signal as a predictor of identity compromise, combined with the amount of noise that it typically introduces.

  • High: High confidence and high severity risk event. These events are strong indicators that the user’s identity has been compromised, and any user accounts impacted should be remediated immediately.
  • Medium: High severity, but lower confidence risk event, or vice versa. These events are potentially risky, and any user accounts impacted should be remediated.
  • Low: Low confidence and low severity risk event. This event may not require an immediate action, but when combined with other risk events, may provide a strong indication that the identity is compromised.
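The mapping above can be expressed directly. This is merely an illustration of the stated definitions, not the actual Identity Protection logic:

```python
def risk_level(confidence: str, severity: str) -> str:
    """Map a risk event's confidence and severity ("high"/"low") to the
    level Identity Protection reports, per the definitions above."""
    if confidence == "high" and severity == "high":
        return "High"
    if confidence == "low" and severity == "low":
        return "Low"
    # Mixed: high severity with lower confidence, or vice versa.
    return "Medium"
```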

Risk levels
Given that we’re able to classify the risk level of any authentication attempt, and to use that classification within the context of the authentication process, we still need to look at how the information is collected.  Let’s have a look under the hood of the Azure Active Directory Identity Protection service a little further…

Leaked credentials

Leaked credentials are found posted publicly in the dark web by Microsoft security researchers. These credentials are usually found in plain text. They are checked against Azure AD credentials, and if there is a match, they are reported as “Leaked credentials” in Identity Protection. Leaked credentials risk events are classified as a “High” severity risk event, because they provide a clear indication that the user name and password are available to an attacker.

Impossible travel to atypical locations

This risk event type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. In addition, the time between the two sign-ins is shorter than the time it would have taken the user to travel from the first location to the second, indicating that a different user is using the same credentials.

This machine learning algorithm ignores obvious “false positives” contributing to the impossible travel condition, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of 14 days during which it learns a new user’s sign-in behavior.

Impossible travel is usually a good indicator that a hacker was able to successfully sign in. However, false positives may occur when a user is traveling on a new device, or using a VPN that is typically not used by other users in the organization. Another source of false positives is applications that incorrectly pass server IPs as client IPs, which can make sign-ins appear to originate from the data center where that application’s back end is hosted (often these are Microsoft data centers, which may give the appearance of sign-ins taking place from Microsoft-owned IP addresses). As a result of these false positives, the risk level for this risk event is “Medium”.
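The core of the check is easy to reason about: take two sign-ins, compute the great-circle distance between their locations, and see what travel speed that would imply. A toy illustration in Python — the 900 km/h “fastest plausible travel” threshold is our own assumption, and the real service uses machine learning rather than a fixed cutoff:

```python
import math
from datetime import datetime


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def impossible_travel(sign_in_a, sign_in_b, max_speed_kmh=900.0):
    """Flag a pair of (timestamp, lat, lon) sign-ins whose implied travel
    speed exceeds what a commercial flight could plausibly cover."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([sign_in_a, sign_in_b])
    hours = (t2 - t1).total_seconds() / 3600.0
    distance = haversine_km(lat1, lon1, lat2, lon2)
    if hours == 0:
        return distance > 0
    return distance / hours > max_speed_kmh
```

A London sign-in followed one hour later by a New York sign-in implies a speed of well over 5,000 km/h and gets flagged; the same pair eight hours apart does not.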

Sign-ins from infected devices

This risk event type identifies sign-ins from devices infected with malware that are known to actively communicate with a bot server. This is determined by correlating the IP addresses of the user’s device against IP addresses that were in contact with a bot server. Be aware: this risk event identifies IP addresses, not user devices! If several devices are behind a single IP address, and only some are controlled by a bot network, sign-ins from the other devices may trigger this event unnecessarily, which is the reason for classifying this risk event as “Low”.

Sign-ins from anonymous IP addresses

This risk event type identifies users who have successfully signed in from an IP address that has been identified as an anonymous proxy IP address. These proxies are used by people who want to hide their device’s IP address, and may be used for malicious intent. The risk level for this risk event type is “Medium” because in itself an anonymous IP is not a strong indication of an account compromise.

Sign-ins from IP addresses with suspicious activity

This risk event type identifies IP addresses from which a high number of failed sign-in attempts were seen, across multiple user accounts, over a short period of time. This matches traffic patterns of IP addresses used by attackers, and is a strong indicator that accounts are either already compromised or about to be. This is a machine learning algorithm that ignores obvious “false positives”, such as IP addresses that are regularly used by other users in the organization. The system has an initial learning period of 14 days where it learns the sign-in behavior of a new user and new tenant.

The risk level for this event type is “Medium” because several devices may be behind the same IP address, while only some may be responsible for the suspicious activity.
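Stripped of the machine learning layer, the underlying pattern match is a sliding-window count per source IP. A toy version of that idea — the thresholds here are our own illustrative picks, not Microsoft’s:

```python
from collections import defaultdict
from datetime import datetime, timedelta


def suspicious_ips(failed_attempts, window=timedelta(minutes=10),
                   min_attempts=20, min_accounts=5):
    """Flag IPs with many failed sign-ins across several distinct
    accounts within a short window.
    failed_attempts: iterable of (timestamp, ip, account) tuples."""
    flagged = set()
    by_ip = defaultdict(list)
    for ts, ip, account in failed_attempts:
        by_ip[ip].append((ts, account))
    for ip, events in by_ip.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            # accounts seen from this IP within the window starting here
            in_window = [a for t, a in events[i:] if t - start <= window]
            if len(in_window) >= min_attempts and len(set(in_window)) >= min_accounts:
                flagged.add(ip)
                break
    return flagged
```

An IP hammering six accounts with 25 failures in a few minutes gets flagged, while an IP with a handful of failures from one user does not — which also illustrates why the shared-NAT scenario described above makes the IP, rather than a device, the unit of suspicion.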

Sign-ins from unfamiliar locations

This risk event type is a real-time sign-in evaluation mechanism that considers past sign-in locations (IP, latitude/longitude) to determine new or unfamiliar locations. The system stores information about previous locations used by a user, and considers these “familiar” locations. The risk event is triggered when a sign-in occurs from a location that’s not already in the list of familiar locations. The system has an initial learning period of 14 days, during which it does not flag any new locations as unfamiliar. The system also ignores sign-ins from familiar devices, and from locations that are geographically close to a familiar location.

Unfamiliar locations can provide a strong indication that an attacker is attempting to use a stolen identity. False positives may occur when a user is traveling, trying out a new device, or using a new VPN. As a result of these false positives, the risk level for this event type is “Medium”.
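The learning-period behavior just described can be sketched as a small state machine: absorb everything during the first 14 days, then flag anything new. Rounding coordinates to whole degrees is our crude stand-in for “geographically close”; the real service is considerably smarter about this:

```python
from datetime import datetime, timedelta


class FamiliarLocations:
    """Toy model of the unfamiliar-location check. During the initial
    learning period every new location is absorbed silently; afterwards,
    sign-ins from unseen locations are flagged (and then remembered)."""

    def __init__(self, first_seen, learning_period=timedelta(days=14)):
        self.learning_ends = first_seen + learning_period
        self.known = set()

    def check(self, when, lat, lon):
        """Return True if the sign-in should be flagged as unfamiliar."""
        loc = (round(lat), round(lon))  # whole-degree bucket ~= "close enough"
        if when < self.learning_ends or loc in self.known:
            self.known.add(loc)
            return False
        self.known.add(loc)
        return True
```

A user who signs in from London during the learning window is never flagged for later London sign-ins, but a first-ever sign-in from Tokyo after the window closes triggers the event.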

There’s a nice-looking management-style console as a collation point for gathering all events, but the real ingredients or “special sauce” lie beneath 🙂


We’re still some steps away from the desired end-state, where we’re able to influence or even determine the level of authorization alongside the level of authentication, but let’s not be too pessimistic 🙂  This is really a big step forward as a building block within the (Microsoft) Access Management framework!

In the meantime, if you’d like to know more on how you are able to use this functionality within your corporate environment, please contact us and let us know how we can assist you.