Shifting to Adaptive Authentication and Cloud-Based Security

There’s a significant shift in how organizations view information security, according to The Global State of Information Security Survey 2017 from PricewaterhouseCoopers (PwC).

Here’s a short summary of a few of the major trends mentioned in the document:

Opting for Cloud-Based Security

Instead of traditional on-premises systems, 62 percent of organizations are opting for cloud-based managed security services to provide:

  • Authentication
  • Identity and access management
  • Real-time monitoring and analytics
  • Threat intelligence

PwC calls out real-time monitoring and analytics as key to proactive threat intelligence – 51 percent of respondents monitor data to detect security risks and incidents.

To help you gain insight into the users and devices accessing your applications, Route443 can assist you in the area of Identity & Access Management, insight that can then be used to make access policy decisions.

Advanced Authentication

“Identity has been at the heart of most every breach in the past two years.” – Richard Kneeley, PwC US Managing Director, Cybersecurity and Privacy

Phishing has emerged as a significant risk across all companies and every industry. Thirty-eight percent of those surveyed reported phishing scams. Criminals will send phishing emails to employees in order to trick them into sharing their legitimate user credentials, gaining access to company systems and data.

Passwords alone aren’t secure enough to protect against phishing attacks. PwC reports that businesses are adopting advanced authentication, or multi-factor authentication (MFA) technology, such as software tokens, biometrics and smartphone tokens.

As security perimeters dissolve and identity expands from people to connected devices, identity and access management (IAM) tools are more essential than ever to protect access and prevent incursions.

As PwC stated in their survey, “authentication must be frictionless and intuitive for end users.”

Route443 can assist you by implementing contextual, conditional access, where having the password alone is just not enough. Bringing devices into the context of authentication and authorization enables frictionless and intuitive authentication for your end users.

Adaptive Authentication

Another trend listed by PwC is the use of additional data points to identify suspicious behaviors and patterns – data such as a user’s login time and location, device type and network – to make risk-based access decisions.

“Identity has been at the heart of most every breach in the past two years,” said Richard Kneeley, PwC US Managing Director, Cybersecurity and Privacy. “Many of these breaches have involved someone gaining access by using compromised identity, then changing their identity once inside the network to ratchet up access to data and systems by taking over a privileged account and in the process gaining unlimited access to the network, to systems and to data.”

Protecting the identity is the fundamental ground rule of our Identity Driven Security approach; Route443 can assess, guide and implement all the required measures.
By blocking authentication attempts based on user location, network type or device, you can reduce the risks associated with anonymous networks, countries you don’t do business in, or exposure to out-of-date and risky devices.
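To make that concrete, here’s a hypothetical PowerShell sketch of the kind of contextual decision logic described above. The function name, signal names and country list are our own illustrative assumptions; real deployments evaluate these signals inside the identity platform (for example Azure AD conditional access), not in a script.

function Get-AccessDecision {
    param(
        [string]$Country,          # geo-resolved sign-in location (assumption)
        [string]$NetworkType,      # e.g. 'Corporate', 'Home', 'Anonymous' (assumption)
        [bool]$DeviceCompliant     # device meets patch/management policy
    )
    # Assumption: the countries this organization does business in
    $businessCountries = @('NL', 'BE', 'DE')

    if ($NetworkType -eq 'Anonymous')        { return 'Block' }       # Tor/anonymizing proxies
    if ($Country -notin $businessCountries)  { return 'Block' }       # unexpected geography
    if (-not $DeviceCompliant)               { return 'RequireMFA' }  # out-of-date or risky device
    return 'Allow'
}

# Example: Get-AccessDecision -Country 'NL' -NetworkType 'Home' -DeviceCompliant $true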

Please stay tuned and follow our website (www.route443.eu) and our blog (blog.route443.eu) to receive the latest information from us.

 

Transformation of the Desktop

For more than twenty years the Desktop PC has been the staple of enterprise computing, as the main productivity tool for knowledge workers. This dominance is being increasingly challenged as the modern workforce shifts to a more mobile experience, with modern operating systems reflecting this commoditized (read: BYOD) trend. Within this new generation of computing the traditional way of managing (thereby controlling) those devices will no longer apply or suffice. The reality is that as we see the desktop shifting toward a more mobile form, our traditional view of how we perceive infrastructure and security is fundamentally challenged. Not convinced? Stay tuned and we’ll delve into how we see this next generation computing mapping out.

Within the mobile world there’s a powerful and agile model of security and management called Enterprise Mobility Management (EMM). It contains three major management components: Mobile Device Management (MDM), Mobile Application Management (MAM) and Mobile Content Management (MCM).

…With Windows 10, Microsoft has re-architected the Windows operating system to adopt EMM…

Here’s why: with the rise of mobile computing, employees no longer use (or no longer only use) a locked-down PC on the corporate network to do their jobs. Instead they use many different devices, some company-owned and some personally owned. These devices run a vast array of (mobile) apps and connect across networks that are outside of IT’s control. Legacy Windows client management tools (like Microsoft’s System Center Configuration Manager, SCCM) are too inflexible for modern computing environments. They imply management of a client through installation of a complex system image on the PC, constrained by the boundaries of the organization. Solutions such as DirectAccess are last-gasp entreaties to modernize the managed client in the conventional sense.

…The era of the domain-joined PC is coming to a close…

EMM moves the legacy PC paradigm from a complex, hard-coded system image to context-based policy. With Windows 10, Microsoft is addressing the need for greater security and management flexibility in the enterprise. The Apple macOS platform, mind you, has been in this position for many years: from the start of the “mobile century”, macOS has been considered a mobile device alongside the smartphones and tablets running Android and iOS. So why is this development now gaining momentum? Could it have something to do with the impressive number of 400 million Windows 10 devices already in the field? Clearly an operating system that is imposing itself on the market in such volume, while supporting much of the functionality organizations and their users are looking for, is going to have an impact on the conversation.

…Gartner retired the Magic Quadrant for Client Management Tools in March 2016…

The traditional Windows architecture offered a broad attack surface because both the file system and the operating system itself presented vectors. To counter the risk, IT had to install, as part of the image, additional security agents to monitor threats and remediate accordingly. Maintaining the integrity and security of data on the PC was a constant struggle. Likewise, this model required devices to join a Windows domain governed by policy (GPOs), or third-party management software, controlling what employees could or could not do on the PC. It assumed devices were corporate-owned, Windows-based, and connected to a persistent local area network (LAN).

For the most part, the modern enterprise, and certainly the IT department, no longer has the latitude to work this way. The demands of today’s employees – working on any device, in a variety of environments such as home, airports, coffee shops and hotels – mean the traditional approach can no longer support this work style. Mobile devices are not LAN-bound and are frequently owned by the employee rather than the company. The blurring of business versus personal, and the way the focus shifts freely from device to application to data, means overlap is inevitable. Flexible use of devices becomes deeply embedded in many aspects of an employee’s personal and work life.

To address this new vista (no pun intended), Microsoft has re-architected Windows 10 to move beyond the legacy management systems and fully support EMM.

EMM solutions like Microsoft Intune are providing an efficient and flexible way to provision services to employees and secure business data on modern operating systems. The move to EMM represents a major change in how the desktop will be secured and managed moving forward.

…Our vision on this…

We believe that organizations need to start planning now for the moment where PCs are managed and secured like mobile devices, and desktop apps are developed and deployed like mobile apps. That’s a major upcoming shift within the technology landscape, enabling the transformation of the desktop.

In an upcoming blog post we’ll explain the technology behind EMM solutions, specifically the Microsoft Intune EMM solution, and provide a sneak preview to help you make the right decisions.

Please stay tuned and follow our website (www.route443.eu) and our blog (blog.route443.eu) to receive the latest information from us.

 

 

DirectAccess with PointSharp ID

Microsoft DirectAccess continues to be a strong remote access solution in the on-premises space. On 27th July 2016, Richard Hicks, MVP in Cloud and Data Center Management and well-known DirectAccess expert, will be hosting a webinar with PointSharp to describe the combination of strong authentication using DirectAccess with PointSharp ID. You can enroll for this webinar here.

Meanwhile, if you can’t make the webinar, Route443 will demonstrate in this blog post how the two technologies can work together. PointSharp ID, for those not familiar, is a robust two-factor authentication (2FA) service that combines One-Time Passwords (OTP) and other alternative authentication mechanisms for use in a wide variety of logon scenarios. Developed by PointSharp AB, a Swedish security company, it’s a flexible, low-cost, easy-to-use product that provides a comprehensive set of authentication and security features. In this post, we look at how DirectAccess and PointSharp ID can be used to strengthen the DA authentication process.

 

DA Client/Authentication    Kerberos Proxy    Machine Certificate    User OTP
Windows 7 Enterprise        –                 X                      X ¹
Windows 8.x Enterprise      X                 X                      X
Windows 10 Enterprise       X                 X                      X
¹ Requires the DirectAccess Connectivity Assistant

Windows 8.x and beyond support a simplified access model using a Kerberos proxy. For OTP configurations, use of a Public Key Infrastructure (PKI) is mandatory. Through an appropriately configured Active Directory Certificate Services (AD CS) certificate authority, the DirectAccess server acts as a certificate enrollment agent, providing successfully authenticated clients with short-lived “OTP” certificates.

While Windows 7 is supported for two-factor authentication, it requires the installation of a separate application, the DirectAccess Connectivity Assistant, to provide the necessary OTP capability.  For expediency, we’ve limited this test setup to Windows 8.x and Windows 10 Enterprise, both with support for 2FA in DirectAccess built-in.

A reference document outlining what is required for this configuration can be found on Microsoft Technet here. Richard Hicks has also written an excellent post about DirectAccess with OTP.

Let’s take a peek at our basic test logon workflow.

DirectAccess with PointSharp ID

In this configuration Windows 8.1 / 10 Enterprise Client(s) are configured with machine certificates issued by an AD Enterprise Certificate Authority. DirectAccess relies on IPsec policies for authenticating and securing traffic from Internet-connected clients. In order to authenticate to domain resources, the client must first establish connectivity to DNS servers and Domain Controllers (DCs) through what we refer to as the Infrastructure Tunnel (1). Once authenticated successfully, the machine is available to reach management servers identified during the DA installation, for example SCCM server(s) to process software updates.

At this point, the user has not yet authenticated; from the Windows side bar (2), they need to press <CTRL><ALT><DEL>. The user has been issued a soft token on their smartphone by PointSharp ID. They reference this token (2a), input the time-based OTP (TOTP) on the logon screen, and their credentials are sent to DA. As a RADIUS client, DirectAccess forwards (2a) the request to the PointSharp ID RADIUS server, where a user lookup in AD is performed (2b) and the OTP is validated by PointSharp ID. Upon successful authentication, the DirectAccess server enrols a short-lived OTP certificate on behalf of the user (2c), and this certificate is then used by the DA client, together with the machine certificate, to authenticate the Intranet/User tunnel (3).

With the DirectAccess role installed, let’s have a look at some of the specifics of this configuration. Rather than cover the entire DA configuration, we’ll jump to the pertinent parts of a DA/PointSharp configuration. We begin midway through Step 1 of our DirectAccess server setup.

2016-06-28_21-32-31

On the Select Groups option, we can determine which managed clients will receive the DirectAccess group policy (GPO). By default, the built-in Domain Computers group is enabled.

2016-07-10_11-05-56.png

As the above graphic and the warning illustrate, it’s not a good idea to uncheck “Enable DirectAccess for mobile computers only”: the combination of Domain Computers and the cleared checkbox would mean all domain computers receive this configuration.

It’s common for organizations to replace the default Domain Computers group with an AD security group to filter application of the DirectAccess group policy. Although this requires manual intervention – computers must be added to the created group – it adds an additional level of control in determining which (computer) clients are allowed remote access.
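If you script your AD administration, creating and populating such a group might look like this (the ActiveDirectory RSAT module is assumed; group and computer names are placeholders):

# Create a dedicated security group for DirectAccess clients
New-ADGroup -Name 'DirectAccess-Clients' -GroupScope Global -GroupCategory Security

# Add a client computer account to it (placeholder name)
Add-ADGroupMember -Identity 'DirectAccess-Clients' -Members (Get-ADComputer 'WIN10-CLIENT01')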

Moving on to the Network Connectivity Assistant (NCA) screen, add an HTTP endpoint from your corporate network that the NCA can use to validate the connection.

2016-07-09_16-08-22

In Step 2, we enable the two-factor authentication elements.

2016-07-09_16-10-11

Before we leap ahead, let’s have a look at what’s being done to prepare the PointSharp ID server and AD Certificate Services.

PointSharp ID acts as a RADIUS Server for DirectAccess.  This requires adding the DA server as a RADIUS client to the PointSharp configuration. A shared secret is used between the two to pair the RADIUS “trust”.

2016-06-29_17-46-56

Once the RADIUS client is added, an authentication method can be created in PointSharp ID to support OTP logon through DirectAccess. In the example below, a specific listener is setup for DA. Since DirectAccess does not support challenge/response, the Password Type Stateless:OTP is used.

2016-07-09_13-00-42

Our Certificate Authority (CA), a subordinate enterprise CA, is configured as per the documented requirements. Two templates have been created (Windows 8/2012 R2 compatibility level).

2016-06-29_17-52-43

The first template is for the DirectAccess server acting as a registration authority or, in PKI parlance, an Enrollment Agent. This template uses an Object Identifier (OID) specific to this task – 1.3.6.1.4.1.311.81.1.1 – and in the Application Policy, the original OIDs are removed and replaced with the DirectAccess OTP identifier.

NB: This template is a duplicate of a Computer template.

2016-06-28_21-42-24

The DirectAccess computer account then needs to be given permission to auto-enroll on this template.

2016-07-10_13-37-30.png

Also in this setup, the Default Domain Policy Group Policy Object (GPO) in Active Directory is providing the requisite auto-enrollment policy, so the DA server may request and receive certificates and updates.

2016-07-20_21-09-14

Back in AD Certificate Services, the validity period is set to 2 days and renewal period to 1 day.  For certificate naming, this is based on the DNS Name of the server, with subject alternate name (SAN) also set to the DNS name.

2016-06-28_21-43-54

The second template, DirectAccess PointSharp OTP Logon, is a duplicate of the Smart Card Logon template, with the Client Authentication OID removed from the Application Policy. This template has issuance requirements specifying that the application policy from the RA template (1.3.6.1.4.1.311.81.1.1) is present in the signature – in other words, that the request was signed by the DirectAccess server.

2016-06-29_16-25-41

The validity period we set for this certificate is extremely short (1 hour). By default, the CA would store a record of each certificate request and each issued certificate in its database. When dealing with a relatively high volume of OTP certificate requests from a number of DA clients, over time this could significantly increase the CA database size. Given the short lifetime of the certificate, it doesn’t make much sense to store these in the Certificate Services database. Accordingly, we enable non-persistent certificate processing on the CA by running:

certutil -setreg DBFlags +DBFLAGS_ENABLEVOLATILEREQUESTS
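Certificate Services must then be restarted for the flag to take effect; from an elevated PowerShell prompt on the CA this is simply:

# Restart AD CS (service name CertSvc) so the new DBFlags value is picked up
Restart-Service -Name CertSvc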

Similarly, the DA OTP template also needs to be configured not to persist certificates/requests to the database. This is done by checking the Do not store certificates… checkbox.

2016-07-10_13-54-34

Back on the DirectAccess server, the PointSharp ID server information (OTP RADIUS Server) needs to be filled in, along with a shared secret and the authentication port to be used.

2016-07-20_21-21-39

The Certificate Authority hosting the OTP template(s) then needs to be identified to the DirectAccess Server configuration.

2016-07-20_21-23-18

The templates created earlier are then viewable.

2016-07-20_21-25-33

If there are any accounts that are exempt from using two-factor authentication, these should be added.

2016-07-23_15-48-37

In Step 3 of the configuration wizard, ensure the FQDN of the enterprise CA is added as a management server.

2016-07-23_15-54-10

Once the DirectAccess server configuration is complete, GPOs created, etc., the relevant clients (members of the specified security group) will receive their DA configuration on reboot.

Testing from the Internet, the Infrastructure (computer) tunnel is negotiated during client startup. This aspect of the configuration remains unchanged from a base DA setup. It’s the User (Intranet) tunnel that requires further interaction once the user has logged in to Windows.

From a Windows 8.1 client, we click on the Networking icon in the system tray.

2016-07-24_10-49-04

We are informed that the connection requires additional attention.

2016-07-24_10-49-58

Clicking on Continue, the user is prompted to press <CTRL><ALT><DEL> to enter additional credentials.

2016-07-24_10-50-39

This can be either a smart card, a virtual smart card or (in this case) a One-Time Password (OTP).

2016-07-07_20-22-02

Clicking on One-time password (OTP) shifts the login focus to entering the OTP credential.

2016-07-24_10-56-26

Referencing the smartphone, we enter the PointSharp One-Time Password (OTP). Since PointSharp ID supports OATH tokens, we’re pretty much free to choose which type of authenticator client we wish to use on our smartphone. In this instance we are using the Microsoft Authenticator app on Windows Phone.

2016-07-07-13-23-50

In another setup using Google Authenticator, we’ve enrolled an iPhone 6 for OTP integration.

IMG_1153
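As an aside: the six-digit codes these authenticator apps produce are standard RFC 6238 TOTP values, which is exactly why any OATH-compatible app works here. The sketch below is our own minimal PowerShell illustration of that algorithm – not PointSharp’s implementation – and the Base32 seed in the example is a made-up value.

function Get-Totp {
    param(
        [string]$Base32Secret,      # token seed shown at enrollment (hypothetical)
        [int]$Digits = 6,           # OTP length
        [int]$PeriodSeconds = 30    # time step
    )

    # Decode the Base32 seed into raw key bytes
    $alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
    $bits = -join ($Base32Secret.ToUpper().TrimEnd('=').ToCharArray() |
        ForEach-Object { [Convert]::ToString($alphabet.IndexOf($_), 2).PadLeft(5, '0') })
    $key = [byte[]]@(for ($i = 0; $i + 8 -le $bits.Length; $i += 8) {
        [Convert]::ToByte($bits.Substring($i, 8), 2)
    })

    # Counter = number of time steps since the Unix epoch, as 8 big-endian bytes
    # (ToUnixTimeSeconds requires .NET 4.6 or later)
    $counter = [long][math]::Floor([DateTimeOffset]::UtcNow.ToUnixTimeSeconds() / $PeriodSeconds)
    $counterBytes = [BitConverter]::GetBytes($counter)
    [Array]::Reverse($counterBytes)

    # HMAC-SHA1 plus dynamic truncation (RFC 4226)
    $hmac = New-Object System.Security.Cryptography.HMACSHA1 -ArgumentList @(,$key)
    $hash = $hmac.ComputeHash($counterBytes)
    $offset = $hash[$hash.Length - 1] -band 0x0F
    $code = (($hash[$offset] -band 0x7F) -shl 24) -bor ($hash[$offset + 1] -shl 16) -bor
            ($hash[$offset + 2] -shl 8) -bor $hash[$offset + 3]

    return ($code % [int][math]::Pow(10, $Digits)).ToString().PadLeft($Digits, '0')
}

# Example (hypothetical seed): Get-Totp -Base32Secret 'JBSWY3DPEHPK3PXP'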

Once credentials have been entered at logon, they are sent to the DA server which, acting as a RADIUS client, forwards them to the PointSharp RADIUS server for authentication. If the OTP is valid, an authentication successful event is generated.

2016-07-24_11-02-34.png

An OTP Certificate is issued to the client, via the DA Enrollment Agent and Enterprise CA, and the second User (Intranet) tunnel is established.

2016-07-24_11-03-49

For Windows 10 clients, the behavior is similar, albeit with some slightly nuanced user interface changes. Again, clicking on the Network icon will take us to the network summary screen.

2016-07-23_22-34-07

Click on the Action needed icon.

2016-07-23_13-11-38

The user is taken to the Network & Internet settings section.

2016-07-24_11-09-10

Click again in the Action needed area.

2016-07-24_11-09-31

Click on the Continue button and the user is prompted to press <CTRL><ALT><DEL> to enter their credentials.

2016-07-24_11-09-53

Under Windows 10, we’re directly asked to enter our One-Time Password (OTP) credentials. We enter the OTP from the smartphone.

2016-07-24_11-10-24

And the connection is established.

2016-07-24_11-11-03

From our Windows 10 client, we can then use PowerShell to check our connection using the Get-DAConnectionStatus cmdlet.
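For example (the cmdlet ships with Windows 8.1/10 in the DirectAccessClientComponents module; the output shown is what a remotely connected client might report):

Get-DAConnectionStatus

# Sample output:
#
# Status            Substatus
# ------            ---------
# ConnectedRemotely None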

Notes

If you’re building this environment from scratch, ensure that basic DirectAccess connectivity is working before proceeding with building in two-factor authentication; check that the DA server is fully operational, clients are auto-enrolled with a computer certificate, both tunnels are starting, etc. Similarly, we recommend building out your PointSharp ID configuration before beginning integration with DA.

If you’d like to know more on implementing DirectAccess or similar technologies, please contact us. We’ll be happy to assist.

SAML authentication for Citrix XenDesktop and XenApp

Citrix recently published an article announcing a technical preview of their SAML based authentication technology for XenApp and XenDesktop.

This is a very exciting development and something we have been seeking for a long time. Federated authentication has been around for some time in various guises for NetScaler, Web Interface and some older XenApp versions (actually Kerberos Constrained Delegation, KCD), the latter mysteriously disappearing in version 7.x of XenApp and XenDesktop.

At Synergy 2016, Citrix announced a new version of XenApp/XenDesktop, version 7.9. This latest release, available early June 2016, incorporates their SAML authentication technology.

“The 7.9 release introduces Federated Authentication Service to provide secure business-to-business access to contractors and partners as well as simplify Active Directory domain integration as part of an acquisition, merger or cloud transition. The new Federated Authentication Service integrates with SAML-based identity providers via Citrix NetScaler to allow each business unit to manage their own accounts yet still provide the same secure, remote access to their virtualized apps and desktops hosted on XenApp and XenDesktop.”

In this blog we will explain how to implement the Technical Preview. Use it to become familiar with the technology, so that you are ready to implement it when XenApp/XenDesktop version 7.9 becomes available.

Please do not implement the technical preview in a live environment.

Outside of Windows 10, the classical Microsoft Windows logon experience supports two basic authentication mechanisms:

  • username/password
  • smart card

With the Federated Authentication Service, Citrix introduces a Virtual Smart Card (VSC) to log on to a Windows server or desktop. To facilitate this, an additional component is introduced: the “User Credential Service” (UCS). This service acts as an intermediary between StoreFront, the Virtual Desktop Agent (VDA) and the Certificate Authority (CA).

Here is a high level architecture overview of the technology:

saml2

When implementing the Federated Authentication Service, please ensure to meet the necessary prerequisites. The following components should be up and running in your infrastructure:

  • SAML 2.0 Identity Provider (IdP)
  • Public Key Infrastructure (PKI) / Certificate Authority (CA)
  • NetScaler
  • Desktop Delivery Controller (DDC), StoreFront and a VDA

The installation and initial configuration of these components are not covered in this blog post. With the above prerequisites in mind, the starting point for this configuration was an operational Active Directory, AD Certificate Services and AD Federation Services, together with the NetScaler and XenDesktop environment. Before installing the Federated Authentication Service, a basic preflight of Citrix services was conducted:

  • Being able to use the Receiver for Web to access StoreFront.
  • Launching a published application with the Windows Receiver on a Windows device
  • Using NetScaler Gateway for:
    • Clientless access to StoreFront
    • ICA-proxy towards XenDesktop
  • Authenticating with LDAP on Netscaler Gateway

Onward!

User Credential Service Installation/Configuration

The installation of User Credential Service (UCS) consists of three components that need installing:

  • Citrix.UCSLogonDataProvider-x64.msi. Install this on the StoreFront server.
  • Citrix.Authentication.IdentityAssertion_x64.msi. Install this on the VDA server. (If you have a 32-bit VDA installation, use the 32-bit version.)
  • UserCredentialService.msi. Install this on a Windows 2012 R2 server.

We encountered some issues with the installation and only managed to get the components properly installed when launching from an Administrative Command Prompt by calling msiexec.exe with the /i switch.
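For reference, an invocation of that form (run from an elevated prompt; the verbose-logging switch is optional and the log file name is our own):

# Install one of the components; repeat with the other MSI names as needed
msiexec.exe /i UserCredentialService.msi /l*v ucs-install.log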

Configure your AD for smart card logon

The Active Directory Domain Controller environment needs to be configured for certificate authentication by ensuring that there are up-to-date Domain Controller certificates installed for Kerberos authentication. Look at CTX206156 for an example deployment.

StoreFront

Logon to your StoreFront server and open PowerShell as an administrator. Run the following commands:

& "$Env:PROGRAMFILES\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1"
$siteId = 1
Install-DSUcsClaimsFactory -siteId $siteId -virtualPath "/Citrix/StoreAuth"
Install-DSUcsLogonDataProvider -siteId $siteId -virtualPath "/Citrix/Store"

Change the virtualPath parameter so that it matches your StoreFront installation paths.

Group Policy (GPO)

When you have installed the UCS component, locate the CitrixUserCredentialService.admx/adml GPO template (in C:\Program Files\Citrix\UserCredentialService\PolicyDefinitions) and copy it to your GPO template location.

Create a new GPO object and locate the Administrative template for Citrix Components/Authentication

GPO

Enable the “User Credential Service” and enter the FQDN of the server that is hosting the User Credential Service.

Enable “Virtual Smartcards” and specify the “Prompt Scope” and “Consent timeout” value. In our setup we used the default values.

Close the GPO and apply it to your StoreFront servers and VDAs.

Run gpupdate on your StoreFront servers and VDAs. Verify that the GPO is active by opening the Registry and browsing to:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Citrix\Authentication\UserCredentialService\Addresses

Verify that the FQDN of your UCS server is listed.
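A quick way to perform the same check from PowerShell (assuming the policy writes its values under the key shown above):

# List the policy values under the UCS Addresses key
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Citrix\Authentication\UserCredentialService\Addresses'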

UCS

Logon to the UCS server. Open the Citrix User Credential Service console and select your UCS server. If you did not start the UCS console with a local admin account, you will be prompted for credentials.

The initial setup is a three-step process:

UCS3

  1. Deploy certificate templates to AD Certificate Services.
    • These three templates will be installed and enabled on your Certificate Authority
    • Citrix_RegistrationAuthority_ManualAuthorization

    • Citrix_RegistrationAuthority

    • Citrix_SmartcardLogon

    • The technical preview does add the templates, but the initial ACL is incomplete. Open the certificate template plugin in MMC and add “Authenticated Users” with read permissions to all three Citrix templates.
  2. Setup Certificate Authority
    • Specify the correct Certificate Authority
  3. Authorize the UCS service
    • Authorizing is done in two steps. First request a certificate and then approve the pending request on your CA. Once the certificate is issued and installed, the initial setup for the UCS service is complete.

UCS7

The next step is to configure the “User Roles” on the UCS. The setup of UCS includes one “default” role. You can specify which role you would like to use in the GPO. If you do not specify a role in the GPO the default role will be used. In our test setup, we’ve used the default role.

  • Specify the list of StoreFront servers that should use the specific role
  • Specify the list of VDAs you would like to log on to with the specific role
  • Specify the list of users you would like to log on to StoreFront with the specific role

With all the above settings specified, we are done with the configuration of the UCS.

UCS8

Firewall

The technical preview does not yet open up the local firewall on the UCS server. Configure the local firewall manually to accept incoming requests on port 80.
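On Windows Server 2012 R2 the rule can be added with one line of PowerShell (the display name is our own):

# Allow inbound HTTP to the User Credential Service (run on the UCS server)
New-NetFirewallRule -DisplayName 'Citrix UCS HTTP' -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow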

NetScaler Gateway Authentication/Session Profile

Once the installation and configuration of the UCS is complete, we can begin the configuration of the NetScaler Gateway.

The setup used in testing is similar to this schematic:

NSG_ADFS_UCS_VDA

Authentication Profile

  • Import your ADFS signing certificate (public key only)
  • Create a SAML authentication server
  • Create a SAML authentication policy and bind the SAML authentication server
  • Bind the SAML authentication policy to the Netscaler Gateway virtual server
add ssl certKey adfs_signing_cert -cert adfs_signing_cert.cer

add authentication samlAction adfs_auth_svr -samlIdPCertName adfs_signing_cert -samlSigningCertName <nsg_fqdn> -samlRedirectUrl "https://<adfs_fqdn>/adfs/ls" -samlUserField "Name ID" -samlIssuerName <nsg_fqdn>

add authentication samlPolicy adfs_auth_pol ns_true adfs_auth_svr

bind vpn vserver nsg_vsrv -policy adfs_auth_pol -priority 100

 

AD FS Configuration

We’ll need to establish a relying party trust on the AD FS server between it, as the identity provider (IdP), and the NetScaler Gateway virtual server, as a service provider (SP), configured for use with the SAML 2.0 protocol.

Initially, create a metadata.xml file and place it on the NetScaler.

metadata.xml example file:

<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" ID="_724200788f8391f96053f72adc628fecc808d09a" entityID="<nsg_fqdn>">
 <md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol urn:oasis:names:tc:SAML:1.1:protocol urn:oasis:names:tc:SAML:1.0:protocol">
 <md:Extensions>
 <init:RequestInitiator xmlns:init="urn:oasis:names:tc:SAML:profiles:SSO:request-init" Binding="urn:oasis:names:tc:SAML:profiles:SSO:request-init" Location="https://<adfs_fqdn>/adfs/ls"/>
 </md:Extensions>
 <md:KeyDescriptor>
 <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
 <ds:KeyName><nsg_fqdn></ds:KeyName>
 <ds:X509Data>
 <ds:X509SubjectName>CN=<nsg_fqdn></ds:X509SubjectName>
 <ds:X509Certificate>
 <NSG Cert in pem format, public key only>
 </ds:X509Certificate>
 </ds:X509Data>
 </ds:KeyInfo>
 </md:KeyDescriptor>
 <md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://<nsg_fqdn>/cgi/samlauth" index="0"/>
 </md:SPSSODescriptor>
</md:EntityDescriptor>

Next, logon to the AD FS server and create a new Relying Party Trust using the wizard. In the wizard point to the metadata.xml file on the NetScaler.

https://<nsg_fqdn>/vpn/metadata.xml
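If you’d rather script this step, the ADFS PowerShell module can create the trust from the same metadata URL (the display name is our own placeholder):

# Create the relying party trust from the NetScaler's published metadata
Add-AdfsRelyingPartyTrust -Name 'NetScaler Gateway' -MetadataUrl 'https://<nsg_fqdn>/vpn/metadata.xml'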

Open the properties of the Relying Party Trust and uncheck “Monitor relying party” on the Monitoring tab.

ADFS1

Remove the Encryption certificate on the Encryption tab

ADFS2

Set the secure hash algorithm to SHA-1 on the Advanced tab.

ADFS3

Click on OK. This completes the initial configuration of the Relying Party Trust in AD FS.
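For completeness, the monitoring and hash settings can also be applied with the ADFS module (trust name as above; removing the encryption certificate is easiest left to the GUI):

# Disable metadata monitoring and drop the signature algorithm to SHA-1
Set-AdfsRelyingPartyTrust -TargetName 'NetScaler Gateway' `
    -MonitoringEnabled $false `
    -SignatureAlgorithm 'http://www.w3.org/2000/09/xmldsig#rsa-sha1'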

Claims

AD FS needs to pass two claims to the NetScaler Gateway virtual server in order to correctly process authentication. Right-click on the NSG relying party trust and select “Edit claim rules”. Add a Send LDAP Attributes rule and a Send Claims Using a Custom Rule.

Send LDAP attributes Claim (Send UPN as NameID)

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"), query = ";userPrincipalName;{0}", param = c.Value);

Send Claim using a custom rule (Send LogoutURL)

 => issue(Type = "logoutURL", Value = "https://<adfs_fqdn>/adfs/ls/", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified")

Session Profile

  • Create a NSG session profile
  • Create a NSG session policy and bind the session profile
  • Bind the NSG session policy to the Netscaler Gateway virtual server
add vpn sessionAction ses_prof_rfw -transparentInterception ON -defaultAuthorizationAction ALLOW -SSO ON -ssoCredential PRIMARY -icaProxy ON -wihome "https://<storefront_fqdn>/Citrix/<rfw_path>" -wiPortalMode NORMAL -clientlessVpnMode OFF

add vpn sessionPolicy ses_pol_rfw "REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver" ses_prof_rfw

bind vpn vserver nsg_vsrv -policy ses_pol_rfw -priority 100

 

StoreFront configuration

Configure StoreFront to fully delegate the authentication to NetScaler. Logon to the StoreFront server and open the StoreFront management console. Browse to the Manage Authentication Methods and select “Pass-through from NetScaler Gateway”.

SF1

Select Configure Delegated Authentication and check “Fully delegate credential validation to NetScaler Gateway”.

SF2

 

Ready to test the configuration

Having configured the Federated Authentication Service, we are ready to test it. The technical preview only supports Receiver for Web (RfW) and the Windows Receiver.

  • Open a browser and browse to your NSG virtual server.
  • Your browser redirects you to the AD FS server for authentication.
  • Once AD FS has completed authentication, the browser is returned to NSG and you will be logged on to StoreFront.
  • Launch a published application or desktop and seamless logon will commence.

This is how it looked in our environment:

 

Troubleshooting

StoreFront

StoreFront troubleshooting is described here: https://docs.citrix.com/en-us/storefront/3/sf-troubleshoot.html

Desktop Agent

To enable tracing, create a folder named c:\logs, and set permissions so that the Broker Agent Service can write to it. Open the BrokerAgent.exe.config file in c:\Program Files\Citrix\Virtual Desktop Agent

Add a line:

<add key="Citrix.Authentication.IdentityAssertion.LogFileName" value="c:\logs\ucs.log"/>

User Credential Service

To enable tracing, create a folder named c:\logs, and set permissions so that the User Credential Service can write to it. Open the Citrix.Authentication.UserCredentialService.exe.config file in C:\Program Files\Citrix\UserCredentialService

Add a line:

<add key="Citrix.Authentication.UserCredentialService.LogFileName" value="c:\logs\ucs.log"/>

Netscaler

http://support.citrix.com/article/CTX114999

Federated Authentication Service Blog

http://discussions.citrix.com/forum/1642-saml-federated-authentication-tech-preview/

 

Azure AD as an Identity Provider

Let’s take a quick look at Azure Active Directory (AAD) in the identity provider role. Anyone using Office 365, be it logging on with a standard account or a federated one, utilizes an Azure AD identity, with the latter brokering access to Office 365 resources.

What happens when we wish to connect our own SaaS/web applications to the Azure AD world? Well, Windows Azure brokers a number of identity-based technologies to support such requirements. As a means of illustrating this, we’ll show an example using Azure AD as a SAML 2.0 Identity Provider (IdP), connecting to a basic web application using a PHP-based SAML Service Provider: simpleSAMLphp.

We log in to our Azure tenant (Azure Service Manager) and scroll down to the Active Directory icon.

2016-04-27_11-34-58

On the directory tab, click on the organization and then the Applications tab. From the bottom of the screen, create a new application by clicking on the Add icon.

2016-04-27_11-35-18

Select Add an application my organization is developing.

2016-04-27_11-35-32

Give your SaaS/web application a name (e.g. simpleSAMLphp Demo). Using the radio button, select the type of application. Since this is a SAML-P application using the browser, we need to select the Web Application / Web API option.

2016-04-27_11-35-43

Click on the arrow. Enter the details for your SAML application.

2016-04-27_11-35-54

For the Sign-On URL, fill in the Assertion Consumer Service (ACS) URL of the Service Provider (simpleSAMLphp). We’ll revisit these settings in a moment. For the App-ID URI, the Identifier or Entity ID of the SAML Service Provider is expected.

Here’s an example using our simpleSAMLphp application.

2016-04-27_11-36-12

Here we’ve gone back and changed the Sign-On URL to the base URL of the SimpleSAMLphp admin page. This is where (for the test) we want to send users to when accessing the “application”. It’s the Reply URL which is the address to which Azure AD will send the SAML authentication response. Further down in the application configuration in Azure Manager, we see the Single Sign-On settings.

2016-04-27_11-36-26

Here are the actual settings used, albeit with dummy URLs.

Sign-On URL

https://saml.mydomain.com/login

Reply URL

https://saml.mydomain.com/login/module.php/saml/sp/saml2-acs.php/default-sp

App URI (Identifier)

https://saml.mydomain.com/login/module.php/saml/sp/metadata.php/default-sp

On the Service Provider side, the metadata from the (tenant) Azure Identity Provider needs to be parsed and added to the simpleSAMLphp configuration file (saml20-idp-remote.php). This is done by downloading the Azure IdP metadata file directly, e.g.

https://login.microsoftonline.com/<AzureTenantID>/federationmetadata/2007-06/federationmetadata.xml
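The file can be fetched with any HTTP client; for example, from PowerShell (tenant ID placeholder as above):

# Download the tenant's federation metadata for conversion
Invoke-WebRequest -Uri 'https://login.microsoftonline.com/<AzureTenantID>/federationmetadata/2007-06/federationmetadata.xml' -OutFile azure-idp-metadata.xml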

Connect to the simpleSAMLphp web administration interface. From the federation tab, select the XML to simpleSAMLphp metadata converter.

2016-04-27_11-36-41

Paste the Azure XML document into the converter, convert the text, and then copy the result to the clipboard. This text can then be appended directly to the saml20-idp-remote.php file.

Here’s an example. Replace the Azure Tenant ID with your own ID accordingly.

$metadata['https://sts.windows.net/<Azure Tenant ID>/'] = array (
  'entityid' => 'https://sts.windows.net/<Azure Tenant ID>/',
  'contacts' =>
  array (
  ),
  'metadata-set' => 'saml20-idp-remote',
  'SingleSignOnService' =>
  array (
    0 =>
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => 'https://login.microsoftonline.com/<Azure Tenant ID>/saml2',
    ),
    1 =>
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST',
      'Location' => 'https://login.microsoftonline.com/<Azure Tenant ID>/saml2',
    ),
  ),
  'SingleLogoutService' =>
  array (
    0 =>
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => 'https://login.microsoftonline.com/<Azure Tenant ID>/saml2',
    ),
  ),
  'ArtifactResolutionService' =>
  array (
  ),
  'keys' =>
  array (
    0 =>
    array (
      'encryption' => false,
      'signing' => true,
      'type' => 'X509Certificate',
      'X509Certificate' => '<CERTIFICATE>',
    ),
  ),
);

Testing Authentication

From the Azure Application Portal, we can access the new test application.

2016-04-27_11-36-57

From there we’re taken to the simpleSAMLphp administration page (https://saml.mydomain.com/login).

Within simpleSAMLphp we can select our identity provider for logon (Azure AD)

2016-04-27_11-37-08

Click on the Select button to initiate the logon process.

2016-04-27_11-37-17

We log on to the application with our Azure AD credentials and are returned to the simpleSAMLphp landing page.

2016-04-27_11-37-33

Since Azure is brokering the connection with the application, this process also extends to using AD FS where the domain is federated. Azure performs the necessary realm discovery and routes the user to their home domain.

With these and a number of services, Azure offers a solid convergence point for brokering connections with your web applications and workspaces. It’s a rapidly evolving space, so stay tuned…

If you’d like to know more about how you can implement this and related technologies within your own environment, please contact us. We’ll be happy to assist.

The evolution of access control

Do you remember what it was like when everyone had desktop computers and data security focused on the best way to physically lock computers to heavy desks?

Many customers ask us how they can regain control of their “environment”, now that it has become scattered amongst on-premises or outsourced resources, cloud resources and mobile devices.

In this blog post, we’ll review the ways security and access control have changed over the years, highlighting how Enterprise Mobility Management solutions (we’ll be showing you the  Microsoft Enterprise Mobility suite here) are poised to provide integrated solutions for the current world with mobile devices and online (Cloud) services.

We’re using the Microsoft Enterprise Mobility Suite (EMS) as the explicit example here, because Microsoft has a different vision on how to solve these issues compared to other solution providers like MobileIron or AirWatch. The main and most important difference in vision between those providers is the way they handle the delivery of (mobile) applications and data. Where MobileIron and AirWatch (as examples; there are many other providers out there) are trying hard to create “controlled bubbles”, Microsoft’s vision is to use the native device and application experience while protecting access to the application and the data. That’s a fundamentally different way of addressing the challenge. It’s not that the “bubble” approach is wrong to begin with (indeed, there are specific use cases for it), but the end user loses the “native device experience” and, moreover, ends up using the mobile apps provided by the EMM solution in place of the native apps…

bubble approach

Picture: showing the MobileIron “apps” with the “bubble” approach 

We’ve seen cases at customers where the “bubble” approach failed, or at least was not that successful, as end users did not fully accept that they were losing native device apps like Outlook mail or ActiveSync. Let’s not get going about the quality of those non-native EMM apps here, but you can imagine the challenges there for the EMM providers 😉

So what’s Microsoft’s vision here? Well, Microsoft naturally has an interest in promoting its own applications, but on the other hand it has a great bundle of solutions at its disposal. But first, let’s have a look at the way things have changed over the past years:

Mobile Access version 1: Mobile Laptops

In the past, corporate data was hosted on-premises. It was accessed by desktops that were physically connected to the corporate network. Then, laptops emerged as the dominant corporate device, and the Virtual Private Network (VPN) was born.
VPNs provided 3 primary functions:

1. They made it possible for laptops to reach corporate services on the Intranet
2. They restricted corporate access to Internet-connected laptops
3. They helped prevent data loss by encrypting communications and running agents on the laptops that helped contain data

Over time, VPN technology evolved. The criteria that could be used for access control (e.g. require the laptop to be domain-joined) expanded and the technology to prevent data loss matured.
Eventually, new types of VPNs such as SSL VPNs emerged. SSL VPNs enabled app-specific, as opposed to device-wide, access to corporate services from the Internet. This reduced the attack surface and also enabled new scenarios such as accessing corporate services from web browsers running on unmanaged devices.

Mobile Access version 2: Smart Mobile Devices

Later, when smart mobile devices arrived in the corporate computing landscape, they needed access to corporate resources, and VPN technology was the tool available to provide that. Mobile devices, primarily connected to the Internet, needed network reachability to corporate services. However, these always-on devices brought many security concerns from their early general lack of IT controls. This drove demand for technology complementary to the VPNs which would help protect data.
All of this created an opportunity for integrated solutions based on Mobile VPN, Mobile Device Management (MDM), and Mobile Application Management (MAM). The management system would provision a VPN profile to a mobile device and thereby give it controlled access to corporate services on the Intranet. MDM and MAM features would help provide data protection on mobile devices analogously to the agents deployed by VPN clients on laptops.
Over time, Mobile VPNs evolved into per-app Mobile VPNs. The per-app variety provided similar benefits to mobile devices that SSL VPNs had provided to mobile laptops in the past. They reduced the attack surface and enabled new scenarios.

Mobile Access version 3: Identity-based Access Control and Data Protection

Now, we are in an era of mobile access where increasing amounts of corporate data lives outside of the network perimeter. Data still lives on corporate networks, but it’s also in cloud services, on mobile devices, and in mobile apps. Perhaps one day you won’t have any corporate data left on-premises, but the moment you start adopting cloud services you need to rethink the way access is controlled and data is protected.

ConditionalAccess1

Picture: showing that within the current world the apps, devices and resources are scattered

In the mobile-first, cloud-first world, a fundamentally different approach was needed, so Microsoft built access control and data protection directly into mobile devices, mobile apps, and the cloud infrastructure itself. In this world your network perimeter is replaced by an “identity perimeter.”

ConditionalAccess2

Picture: showing that the traditional perimeter protection layers no longer apply

That’s what Microsoft has built with Office 365 and the Enterprise Mobility Suite, as a supplement to the classic VPN provisioning mechanisms that other EMM providers like MobileIron or AirWatch have for on-premises apps. Microsoft EMS delivers integrated identity, access control, management, and data protection – built to protect your corporate data wherever it lives, using technologies like device and application management, Information Rights Management, risk-based contextual authentication, analytic security services and more.

With Microsoft EMS, whenever a mobile device or app attempts to authenticate to an online service (Microsoft or third party) or an on-premises web app, EMS subjects the request to criteria you define, consulting with the management system as needed. Is the mobile device managed and compliant with your IT policies? Is the mobile app managed? Has the user presented multiple forms of authentication? Is the PC domain-joined and managed? Is the request coming from the corporate network or the Internet? All of these criteria and more are evaluated without the need for a VPN; it’s simply built into the solution.
The diagram below shows how Microsoft EMS ensures that you have the access controls in the cloud needed to replace the access controls in your VPNs.

ConditionalAccess3

Picture: illustration of conditional access using Microsoft EMS

In addition to providing cloud access control, Microsoft EMS also provides native data protection. Again, this is based on identity and integrated with management.
Was a corporate identity used to access the data? If yes, then the mobile apps will prevent the data from being shared with consumer apps or services via Save-As, Open-In, the clipboard, etc. (Intune MAM, with or without device enrollment into MDM). Is the document itself explicitly protected by an access policy (using IRM such as Azure RMS)? If so, access control is enforced on that file, even when it roams outside of apps and devices under management.

This integrated approach to data loss prevention enables the same application to isolate the corporate and personal data that it handles. This means your employees will not have to use separate apps for work. They can just use native or Office mobile apps for work and personal use and the right protections will apply at the right times. The diagram below shows this concept.

ConditionalAccess4

Picture: showing the Microsoft EMS approach to handle Data Loss Prevention

As mobile access evolves from VPN-based to identity-based, we foresee several benefits:

  • Cost savings compared to VPNs. VPN technology is typically expensive and complex. Deploying VPN agents, profiles, and certificates is also complex and expensive. As more and more of your data moves to the cloud, you’ll enable larger and larger populations of cloud-only users that don’t require a VPN and everything it carries.
  • Simpler access infrastructure to operate. Instead of operating a global-scale network perimeter with various proxies, gateways, and VPNs, you just need to connect your existing on-premises AD with Azure Active Directory. From there, Office 365 and other SaaS apps will route their authentication through Azure AD and your modern access controls will be enforced.
  • Better end-user experiences. With EMS’s identity-based access control, your end users will not have to install and launch separate VPN apps. The access control experience is natively a part of the sign-in experience in the mobile apps. Since your traffic isn’t bounced from the Internet to the Intranet and back, your employees get better latency and performance in their mobile apps.
  • Positioned for the future. Once your basic cloud access infrastructure is in place, you have a solid foundation for future innovation. Because the capabilities are provided from the cloud, improvements come often and automatically. You don’t need to plan upgrades or migrations to start to take advantage of the latest and greatest. Compare this to your VPN infrastructure today and the tremendous amount of effort it takes to upgrade to the latest and greatest.

At Route443, we often work with the identity-based model for mobile access control and data protection; it has our special interest and we follow developments very closely. We see this development as one of the best things offered in the industry to help you provide great mobile experiences to your employees in the most future-proof way.

In the meantime, if you’d like to know more on how you are able to use this functionality within your corporate environment, please contact us and let us know how we can assist you.

 

 

Azure Active Directory Identity Protection

Hi folks,

Just recently, Microsoft released their long-awaited implementation of risk-based authentication/authorization control. Personally, we’re very excited about this announcement. Hold your horses though, as it’s still in public preview… for now…

Let’s have a little background on the subject. What’s so interesting about this component and why should you be interested in it?

For starters, in the contemporary cloud, we rely on Identity & Access Management frameworks to provide our subscribers with secure and manageable paths to authentication and authorization of their resources. By secure we mean we are able to provide our subscribers with a corporate identity in the current framework, but there are limitations. For example, how are we to know if it’s really that subscriber using the resource at a given moment? Sure, we know that the credentials are valid, but what if the account has been compromised? How does one tell? Cue Azure AD Identity Protection: a big step in the right direction for helping establish a risk posture and applying it during the authentication process, particularly when combined with other mechanisms such as Multi-Factor Authentication (MFA) (something we’ll cover in a later blog post).

Azure Active Directory Identity Protection is a security service within Microsoft Azure that provides a consolidated view into risk events and potential vulnerabilities affecting an organization’s identities. Identity Protection leverages Azure AD’s existing anomaly detection capabilities (available through Azure AD’s Anomalous Activity Reports) and introduces new risk event types that can detect anomalies in real time.

The vast majority of security breaches take place when attackers gain access to an environment by stealing a user’s identity. Attackers have become increasingly effective at leveraging third party breaches, and using sophisticated phishing attacks. Once an attacker gains access to even a low privileged user account, it is relatively straightforward for them to gain access to important company resources through lateral movements/traversal attacks. It is essential, therefore, to protect all identities and, when an identity is compromised, proactively prevent the compromised identity from being abused.

Discovering compromised identities is no easy task. Identity Protection uses adaptive machine learning algorithms and heuristics to detect anomalies and risk events that may indicate that an identity has been compromised.

Using this data, Identity Protection generates reports and alerts that enable the administrator to investigate these risk events and take appropriate remediation or mitigation action.

Azure Active Directory Identity Protection is more than simply a monitoring and reporting tool. Based on risk events, Identity Protection calculates a user risk level for each user, enabling the security professional to configure risk-based policies to automatically protect the identities of the organization. These risk-based policies, in addition to other conditional access controls provided by Azure Active Directory and EMS, can automatically block or offer adaptive remediation actions, including password resets and enforcement of multi-factor authentication.

Now, let’s have a look at the delivered functionality here.

In the reporting module of the Azure Active Directory Identity Protection service, we’re now able to view some important security related events within our environment (tenant):

Detecting risk events and risky accounts:

  • Detecting 6 risk event types using machine learning and heuristic rules
  • Calculating user risk levels
  • Providing custom recommendations to improve overall security posture by highlighting vulnerabilities

Investigating risk events:

  • Sending notifications for risk events
  • Investigating risk events using relevant and contextual information
  • Providing basic workflows to track investigations
  • Providing easy access to remediation actions such as password reset

Very useful additions for incident and event management. The real-time evaluation and mitigation are also very interesting.

Risk-based conditional access policies:

  • Policy to mitigate perceived “risky” sign-ins by blocking sign-ins or requiring multi-factor authentication challenges.
  • Policy to block or secure “risky” user accounts
  • Policy to require users to register for multi-factor authentication

Risk level, determining the authentication context

The Risk level for a risk event is an indication (High, Medium, or Low) of the severity of the risk event. The risk level helps Identity Protection users prioritize the actions they must take to reduce the risk to their organization. The severity of the risk event represents the strength of the signal as a predictor of identity compromise, combined with the amount of noise that it typically introduces.

  • High: High confidence and high severity risk event. These events are strong indicators that the user’s identity has been compromised, and any user accounts impacted should be remediated immediately.
  • Medium: High severity, but lower confidence risk event, or vice versa. These events are potentially risky, and any user accounts impacted should be remediated.
  • Low: Low confidence and low severity risk event. This event may not require an immediate action, but when combined with other risk events, may provide a strong indication that the identity is compromised.

Risk levels
Given we’re able to classify the risk level of any authentication attempt, and to use that classification within the context of the authentication process, we still need to look at how the information is collected. Let’s have a look under the hood of the Azure Active Directory Identity Protection service a little further…

Leaked credentials

Leaked credentials are found posted publicly in the dark web by Microsoft security researchers. These credentials are usually found in plain text. They are checked against Azure AD credentials, and if there is a match, they are reported as “Leaked credentials” in Identity Protection. Leaked credentials risk events are classified as a “High” severity risk event, because they provide a clear indication that the user name and password are available to an attacker.

Impossible travel to atypical locations

This risk event type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. In addition, the time between the two sign-ins is shorter than the time it would have taken the user to travel from the first location to the second, indicating that a different user is using the same credentials.

This machine learning algorithm ignores obvious “false positives” contributing to the impossible travel condition, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of 14 days during which it learns a new user’s sign-in behavior.

Impossible travel is usually a good indicator that a hacker was able to successfully sign in. However, false positives may occur when a user is traveling using a new device or using a VPN that is typically not used by other users in the organization. Another source of false positives is applications that incorrectly pass server IPs as client IPs, which can make sign-ins appear to take place from the data center where that application’s back-end is hosted (often these are Microsoft datacenters, which may give the appearance of sign-ins taking place from Microsoft-owned IP addresses). As a result of these false positives, the risk level for this risk event is “Medium”.
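To illustrate the core idea (and only the idea; Microsoft’s actual detection is a learned model with far more signals), here’s a minimal PowerShell sketch that flags a pair of sign-ins whose implied travel speed is physically implausible:

function Test-ImpossibleTravel {
    param(
        [double]$Lat1, [double]$Lon1, [datetime]$Time1,   # first sign-in (geo-resolved)
        [double]$Lat2, [double]$Lon2, [datetime]$Time2,   # second sign-in
        [double]$MaxKmh = 1000                            # roughly airliner speed (assumption)
    )
    # Great-circle distance between the two locations (haversine formula)
    $toRad = [math]::PI / 180
    $dLat = ($Lat2 - $Lat1) * $toRad
    $dLon = ($Lon2 - $Lon1) * $toRad
    $a = [math]::Sin($dLat / 2) * [math]::Sin($dLat / 2) +
         [math]::Cos($Lat1 * $toRad) * [math]::Cos($Lat2 * $toRad) *
         [math]::Sin($dLon / 2) * [math]::Sin($dLon / 2)
    $distanceKm = 6371 * 2 * [math]::Asin([math]::Sqrt($a))

    # Implied speed: if no traveller could cover the distance in the elapsed
    # time, a second actor is probably using the same credentials
    $hours = [math]::Abs(($Time2 - $Time1).TotalHours)
    if ($hours -eq 0) { return ($distanceKm -gt 0) }
    return ($distanceKm / $hours) -gt $MaxKmh
}

# Example: sign-ins 30 minutes apart from Amsterdam and New York
# Test-ImpossibleTravel 52.37 4.90 (Get-Date) 40.71 -74.01 (Get-Date).AddMinutes(30)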

Sign-ins from infected devices

This risk event type identifies sign-ins from devices infected with malware that are known to actively communicate with a bot server. This is determined by correlating IP addresses of the user’s device against IP addresses that were in contact with a bot server. Be aware: this risk event identifies IP addresses, not user devices! If several devices are behind a single IP address, and only some are controlled by a bot network, sign-ins from other devices may trigger this event unnecessarily, which is the reason for classifying this risk event as “Low”.

Sign-ins from anonymous IP addresses

This risk event type identifies users who have successfully signed in from an IP address that has been identified as an anonymous proxy IP address. These proxies are used by people who want to hide their device’s IP address, and may be used for malicious intent. The risk level for this risk event type is “Medium” because in itself an anonymous IP is not a strong indication of an account compromise.

Sign-ins from IP addresses with suspicious activity

This risk event type identifies IP addresses from which a high number of failed sign-in attempts were seen, across multiple user accounts, over a short period of time. This matches traffic patterns of IP addresses used by attackers, and is a strong indicator that accounts are either already or are about to be compromised. This is a machine learning algorithm that ignores obvious “false-positives“, such as IP addresses that are regularly used by other users in the organization. The system has an initial learning period of 14 days where it learns the sign-in behavior of a new user and new tenant.

The risk level for this event type is “Medium” because several devices may be behind the same IP address, while only some may be responsible for the suspicious activity.

Sign-in from unfamiliar locations

This risk event type is a real-time sign-in evaluation mechanism that considers past sign-in locations (IP, latitude/longitude) to determine new or unfamiliar locations. The system stores information about previous locations used by a user and considers these “familiar” locations. The risk event is triggered when a sign-in occurs from a location that’s not already in the list of familiar locations. The system has an initial learning period of 14 days, during which it does not flag any new locations as unfamiliar. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location.

Unfamiliar locations can provide a strong indication that an attacker is attempting to use a stolen identity. False positives may occur when a user is traveling, trying out a new device or using a new VPN. As a result of these false positives, the risk level for this event type is “Medium”.

There’s a nice looking management style console as a collation point for gathering all events, but the real ingredients or “special sauce” lie beneath 🙂

dashboard

We’re still some steps away from the desired end state, where we’re able to influence or even determine the level of authorization next to the level of authentication, but let’s not be too pessimistic 🙂 This is really a big step forward as a building block within the (Microsoft) Access Management framework!

In the meantime, if you’d like to know more on how you are able to use this functionality within your corporate environment, please contact us and let us know how we can assist you.