The Role Of Mobile During Authentication

Nearly all the big-player social networks now provide a multi-factor authentication option – either a code sent via SMS or a key-derived one-time password, accessible via a mobile app.  Examples include Google Authenticator, Facebook’s MFA options (including their Code Generator, built into their mobile app) and LinkedIn’s two-step verification.  There are plenty more examples, but the common component is using the user’s mobile phone as an out-of-band authentication channel.

Phone as a Secondary Device - “Phone-as-a-Token”

The common term for this is “phone-as-a-token”.  Basic mobile phones are now so ubiquitous that even users without data plans or smartphones can typically receive SMS-delivered one-time passwords (OTPs).  This is an initial step in moving away from the traditional username-and-password login.  However, since the National Institute of Standards and Technology (NIST) released its view that SMS-based OTP delivery should be deemed insecure, there has been constant innovation around how best to integrate phone-based out-of-band authentication.  Push notifications are one approach and local or native biometry is another, often coupled with FIDO (Fast Identity Online) for secure application integration.

EMM and Device Authentication

But using a phone as an out-of-band authentication device often overlooks the credibility and assurance of the device itself.  If push-notification apps are used, whilst the security and integrity of those apps can be guaranteed to a certain degree, the device the app is installed upon cannot necessarily be attested to the same level.  What about environments where BYOD (Bring Your Own Device) is used?  What about the potential for jailbroken operating systems, or low-assurance – or, worse still, malware-laden – apps running in parallel with the push authentication app?  Does that impact credibility and assurance?  Could that result in the app being compromised in some way?

In the internal identity space, Enterprise Mobility Management (EMM) software often comes to the rescue here – perhaps issuing and distributing certificates or key pairs to devices in order to perform device validation before accepting the out-of-band verification step.  This can often be coupled with app assurance checks and OS baseline versioning.  However, this is often time-consuming and complex, and isn’t always possible in the consumer or digital identity space.

Multi-band to Single-band Login

Whilst you can achieve a user authentication, device authentication and out-of-band authentication nirvana, let’s spin forward and imagine a world where the majority of interactions happen solely via a mobile device.  We no longer have an “out of band” authentication vehicle; the main application login occurs on the mobile itself.  So what does that really mean?  Well, we lose the secondary binding.  But if the initial application authentication leverages the same mechanics as the original out-of-band step (local biometry, crypto/FIDO integration), is there anything to worry about?  Well, the initial device-to-user binding is still an avenue that requires further investigation.  By removing an out-of-band process we are reducing the number of signals or factors, and unless a local biometric authentication process is used, the risk of credential theft increases substantially.

Leave your phone on the train with only a basic local PIN protecting access to refresh_tokens or private keys, and we’re back to the “keys to the castle” scenario.


User, Device & Contextual Analysis

So we’re back to a situation where we need to augment what is in fact a single-factor login journey.

The physical identity is bound to a digital device. How can we maintain a continuous level of assurance for the user-to-app interaction?  We need to add additional signals – commonly known as context.

This “context” could well include environmental data such as geo-location, time and network addressing, or more behavioural signals such as movement or gait analysis and app usage patterns.  The main driver is a move away from the big-bang login event, where assurance is very high at the moment of login and then drops off in a long, slow tail as time goes by.  This correlates with the practice of issuing short-lived sessions or access_tokens – mainly because assurance cannot be guaranteed as the time since the authentication event increases.

This “context” is then used to attempt lots of smaller micro-authentication events – perhaps checking at every use of an access_token or when a session is presented to an access control event.

So once a mobile user has “logged in” to the app, there is a lot more activity in the background looking for changes in context (either environmental or behavioural).  No more out-of-band step, just a lot of micro-checks.
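
To make that concrete, here is a deliberately simplified sketch of what one of those background micro-checks might look like.  Everything in it – the signal names, weights and threshold – is invented for illustration; a real implementation would use whatever contextual signals the platform actually collects.

// Illustrative only: score a handful of contextual signals each time an
// access_token is used, and step up authentication if the context has drifted
// too far from what was observed at login. Names and weights are invented.
def contextScore(Map current, Map atLogin) {
    def score = 0
    if (current.geoCountry != atLogin.geoCountry) score += 40   // environmental change
    if (current.network    != atLogin.network)    score += 20
    if ((current.typingCadence - atLogin.typingCadence).abs() > 0.3) score += 30  // behavioural drift
    return score
}

def atLogin = [geoCountry: "GB", network: "home-broadband",   typingCadence: 0.5]
def current = [geoCountry: "FR", network: "coffee-shop-wifi", typingCadence: 0.9]

if (contextScore(current, atLogin) > 50) {
    println "Context drift too high - require step-up before honouring the token"
} else {
    println "Context consistent - allow the micro-authentication event to pass"
}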

As authentication becomes more transparent or passive, the real effort then moves to physical to digital binding or user proofing...


ForgeRock welcomes Laetitia Ellison

A late welcome to Laetitia Ellison, who joined the ForgeRock documentation team in February.

Laetitia works with the access management team, and has started out on AM documentation.

Laetitia might be the only member of the team who has a hereditary connection to technical writing. She comes to ForgeRock with a background in writing about customer engagement software, and also technical support. Laetitia brings great energy and enthusiasm to the team, and has really hit the ground running.

Implementing Delegated Administration with the ForgeRock 5.5 Platform

Out of the box in 5.5, IDM (ForgeRock Identity Management) has two types of users – basic end-users and all-powerful administrators. You often need a class of users that falls between these extremes – users who can trigger a password reset action but cannot redefine connector configuration, for example. Another common need is for users to only be allowed to perform actions for a subset of other users – maybe only those within their organization. The typical term for these sorts of users is ‘Delegated Administrators’ – users who have been granted limited access by a more privileged administrator to perform particular administrative tasks.

There is a way to define new user roles within IDM that have more granular access, but this is fairly limited – you have to write back-end JavaScript code to define what these new roles can do. Also, there is no way to inform the user at runtime about what options they have as a result of these JavaScript-based roles. Instead, you would have to write new UI code which specifically adjusts itself for each of the new roles you define, essentially hard-coding it to match the back-end JavaScript. Depending on the complexity that you need, this can quickly become a big challenge.

The good news is that IDM does not have to do this job by itself – by making use of the other parts of the ForgeRock Identity Platform, you can define very sophisticated authorization logic for each of your users, including the option to delegate administrative tasks to them. The two other products which provide the biggest benefit to IDM for this are AM (ForgeRock Access Management) and IG (ForgeRock Identity Gateway).

AM has a very powerful authorization engine that allows you to declare precise rules which govern the requests made by your users, and it also has the very useful option of being able to return the full set of rules which apply for a given user. Take a look at the product documentation here to learn more about this feature of AM: Introducing Authorization.

IG has full support for working with the AM authorization engine as the enforcement point. It is capable of intercepting each request to IDM and evaluating it by calling out to AM for policy evaluation. It can also do additional local evaluation prior to passing the request down to IDM. You can see the full details about IG’s policy enforcement feature in the IG documentation: Enforcing Policy Decisions.

Demo

Before jumping into the details about how all of this can be put together, a short demo video may make it easier to understand exactly what it is I am hoping to accomplish with this setup:

Example Configuration

I have put together a project in the forgeops git repository which installs the whole ForgeRock Identity Platform exactly as I show it in the demo above and how I describe below. Feel free to use this project to explore the fine details which I might gloss over when I explain how all of this works. You can also use this project as the starting point for your own delegated administration project – after all, it is much easier than starting from scratch! Be aware that the code and configuration provided in this sample are not supported – they are just examples of how you might go about using the supported products they build upon. Also be aware that this project is oriented towards demonstrating functionality – it is not hardened for production. Use your own product expertise as you normally would when considering production deployment practices.

https://stash.forgerock.org/projects/CLOUD/repos/forgeops/browse/sample-platform?at=refs%2Fheads%2Frelease%2F5.5.0

Architecture

The basic architecture you would need to make the most of each of these products is as follows:

 

AM

AM is configured as the authentication provider and the authorization policy decision point. For it to perform this role, it will need a “Policy Set” defined for IDM; this is the collection of policy rules which apply specifically to requests for IDM REST APIs. Since each request to the IDM REST APIs is essentially just a basic HTTP call, you can use the default “URL” resource type provided by AM. You will need to define a policy for each call you expect your REST client to make; for example, if your REST client makes calls like GET /openidm/info/login, you will need to declare a policy which allows this request. Such a policy would look like this:

{
 "data" : {
   "_id" : "info",
   "name" : "info",
   "active" : true,
   "description" : "",
   "resources" : [ "*://*:*/openidm/info/login" ],
   "actionValues" : {
     "GET" : true
   },
   "applicationName" : "openidm",
   "subject" : {
     "type" : "AuthenticatedUsers"
   }
 }
}

This simple policy just states that any authenticated user is allowed to perform a GET action on the “/openidm/info/login” resource (irrespective of the protocol/host/port). You could use the AM Admin UI to define this policy; it would look something like this if you did:

 

Defining appropriate policies for your IDM needs could vary considerably. Take stock of each IDM REST call you need to make and consider the conditions under which users are allowed to make them. Use this to craft a policy set in AM which aligns with all of those details.

IG

After you have defined your policies in AM, you can start enforcing them with IG. IG needs to be positioned in your network topology so that all HTTP requests made to IDM can be intercepted by the IG reverse proxy, and also so that IG can request policy decisions about those requests from AM. This is a standard deployment model for IG – very little is different about how you would deploy IG to protect IDM compared with how you would use it to protect any other HTTP application.

The main thing you need to configure within IG is the PolicyEnforcementFilter. This is a supported, out-of-the-box filter and can be configured in many ways, all of which are described in the filter documentation. An example of one such configuration:

 {
   "type": "PolicyEnforcementFilter",
   "config": {
     "openamUrl": "${env['OPENAM_INSTANCE']}",
     "cache": {
       "enabled": true,
       "defaultTimeout": "1 hour",
       "maxTimeout": "1 day"
     },
     "pepUsername": "openidm",
     "pepPassword": "openidm",
     "pepRealm": "/",
     "application": "openidm",
     "ssoTokenSubject": "${session.openid.id_token}",
     "environment": {
       "securityContextPath": [
         "${session.idmUserDetails.authorization.component}/${session.idmUserDetails.authorization.id}"
       ],
       "securityContextRoles": "${session.idmUserDetails.authorization.roles}"
     }
   }
 }

This example configuration passes in environment details which are stored in the IG session and relate to the currently-authenticated user, as it is defined in IDM. In particular, it passes in the user’s authorization roles and their IDM-specific REST resource path. Depending on how you decide to define your AM policies, these details may or may not be needed.

Based on the form of delegation that you need for your users, this may be all of the filtering you need to declare in IG. However, if you want to perform more fine-grained access control over the subsets of records a user can modify, then you will need additional filters. Those are described below (under “Scoping Data”).

Assuming the filters allow the request to continue, you will need to be sure that IG provides relevant user details in the request to IDM. The simplest solution is to augment the request by adding a new header value which identifies the user; IDM will read this header from the request and use it as part of its own basic authentication framework.
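
For illustration, a minimal ScriptableFilter that performs this header injection might look like the sketch below. It assumes the authenticated user’s IDM details have already been stored in the IG session as idmUserDetails (as referenced in the route configuration above), and that the header name matches whatever the IDM authentication module is configured to trust; the actual sample in forgeops may do this differently.

/*
 * Sketch only, not the forgeops sample. Assumes an earlier filter placed the
 * IDM user details in the IG session under 'idmUserDetails', and that IDM's
 * TRUSTED_ATTRIBUTE module is configured to read X-ForgeRock-AuthenticationId
 * (as in the authentication.json example later in this post).
 */
def idmUser = session.idmUserDetails          // assumption: populated at login time
request.headers.remove("X-ForgeRock-AuthenticationId")   // never trust a client-supplied value
request.headers.add("X-ForgeRock-AuthenticationId", idmUser._id as String)
return next.handle(context, request)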

IDM

Since IG has taken responsibility for validating authentication and authorization, IDM simply needs to be configured to no longer attempt to perform these duties itself. There are two main changes that you need to make to the IDM configuration to make this possible:

conf/authentication.json

This configuration entry describes how IDM performs authentication. Every request to an IDM REST endpoint has to have some form of authentication, even if it is very basic; IDM provides several different options for this. The easiest option to use with IG is the TRUSTED_ATTRIBUTE authentication module; see the authentication module documentation and the associated sample documentation for details on how this works. Essentially, a trusted header (‘X-Special-Trusted-User’ in the product sample, ‘X-ForgeRock-AuthenticationId’ in the configuration below) contains the name of the user performing the request. It is set by IG, and it is trusted by IDM to be accurate. This trust is why it is so important that IG be the only entry point into IDM – otherwise, an attacker could supply their own header value and pretend to be anyone. Here’s an example module configuration:

 {
   "name" : "TRUSTED_ATTRIBUTE",
   "properties" : {
     "queryOnResource" : "managed/user",
     "propertyMapping" : {
       "authenticationId" : "_id",
       "userRoles" : "authzRoles"
     },
     "defaultUserRoles" : [ "openidm-authorized" ],
     "authenticationIdAttribute" : "X-ForgeRock-AuthenticationId",
     "augmentSecurityContext" : {
       "type" : "text/javascript",
       "file" : "augmentSecurityContext.js",
       "globals" : {
         "authzHeaderName" : "X-Authorization-Map"
       }
     }
   },
   "enabled" : true
 }

conf/router.json

This entry defines scripted filters which apply to every IDM API request, whether from HTTP or from internal API calls. By default IDM has its own authorization logic specified as the first filter; it is the one which invokes router-authz.js. Since IG and AM are performing authorization, you can simply remove this filter.

This is actually all you need to change in IDM, at least at the REST API level. At this point, the REST API in IDM should be protected by the AM policy engine.

Scoping Data

The request from IG to the AM policy engine only includes certain high-level details about the request. IG asks AM things like “what actions can this user perform on the resource /openidm/managed/user/01234“. Generally, AM policies are pattern-based; this means that AM can only tell if the user can perform actions on a set of resources, such as “/openidm/managed/user/*“. If you need rules governing the actions a given user can perform on a subset of resources within that pattern, AM probably does not – and cannot – have enough information about those resources to make a decision on its own. However, there is a way to return information to IG about the user which IG can use to make its own decision about the request.

Response Attributes

An AM policy can return data along with the results of the policy evaluation in the form of “response attributes”. These attributes can be static values defined within the policy or they can be dynamic values read from the user’s profile. It is up to IG to decide what meaning to impart upon the presence of these attributes. For example, IG can use these response attributes to decide whether a request for a particular resource falls within the subset of resources available to the current user.

To continue to build upon the “/openidm/managed/user/01234” question above, an AM policy can be declared which indicates that the current user is allowed to perform some actions on the general case of “/openidm/managed/user/*” and also send back a response attribute – for example, the organization they belong to. IG can then include another filter that runs after the PolicyEnforcementFilter which is configured to look for the presence of this response attribute.
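
As an illustration, a policy like the earlier “info” example could be extended with a response attribute pulled from the authenticated user’s profile. The policy name and attribute below are purely illustrative and follow the same shape as the earlier example; treat this as a sketch rather than a copy of the sample project’s configuration:

{
  "data" : {
    "_id" : "managed-user-by-org",
    "name" : "managed-user-by-org",
    "active" : true,
    "resources" : [ "*://*:*/openidm/managed/user/*" ],
    "actionValues" : {
      "GET" : true,
      "PATCH" : true
    },
    "applicationName" : "openidm",
    "subject" : {
      "type" : "AuthenticatedUsers"
    },
    "resourceAttributes" : [ {
      "type" : "User",
      "propertyName" : "organizationName"
    } ]
  }
}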

Scripted Filter Based on Policy Response

Before the request is forwarded to IDM, IG can take additional action based on the response attributes received from the policy decision. For example, IG can query IDM to verify that /openidm/managed/user/01234 is within the same organization as the user making the request. If the user is not found in the query results, then this filter can simply reject the request before it is sent to IDM.

Here is an example scripted filter which performs this function:

 {
   "name": "ScopeValidation",
   "type": "ScriptableFilter",
   "config": {
     "type": "application/x-groovy",
     "file": "scopeValidation.groovy",
     "args": {
       "scopeResourceQueryFilter": "/organizationName eq \"\\${organizationName}\"",
       "scopingAttribute": "organizationName",
       "failureResponse": "${heap['authzFailureResponse']}"
     }
   }
 }

This “scopeValidation.groovy” example looks for the presence of a “scopingAttribute” as a response attribute from the earlier policy decision and uses it to construct a query filter. This query filter is then used to define the bounds within which a particular user can perform actions on subsets of resources. It is also used when the user is querying resources – whatever filter they supplied will be altered to also include this filter. For example, if they call GET /openidm/managed/user?_queryFilter=userName eq "jfeasel" then the resulting query that is passed down to IDM would actually be this: GET /openidm/managed/user?_queryFilter=((userName eq "jfeasel") AND (/organizationName eq "example")). The end result is that the user only sees the subset of resources they are allowed to see.
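
To give a feel for the kind of logic involved (the real scopeValidation.groovy in the sample project is the authoritative version), a stripped-down sketch might look like this. How the policy response attributes are surfaced to the script varies by IG version, so that part is an explicit assumption:

/*
 * Stripped-down sketch of the scope-validation idea; not the sample file.
 * Assumptions: 'scopingAttribute' and 'failureResponse' arrive via the filter
 * args shown above, and the organizationName returned by the policy decision
 * has been made available to the script (here via 'attributes.policyAttributes').
 */
def scopingValues = attributes.policyAttributes?.get(scopingAttribute)   // assumption
if (!scopingValues) {
    // The policy returned no scoping attribute for this user - reject outright
    return failureResponse
}

// Constrain any caller-supplied _queryFilter to the caller's own scope
def scopeFilter  = "/${scopingAttribute} eq \"${scopingValues.first()}\""
def callerFilter = request.uri.query?.find(/_queryFilter=([^&]+)/) { full, f ->
    URLDecoder.decode(f, "UTF-8")
}
def effectiveFilter = callerFilter ? "(${callerFilter}) AND (${scopeFilter})" : scopeFilter

// The real filter rewrites the request query with effectiveFilter before
// handing the request on to IDM
logger.info("Effective query filter: " + effectiveFilter)
return next.handle(context, request)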

This is just one example of how you could achieve scoping. The power of scripting in IG means that if this example is insufficient for your needs, you can easily alter the Groovy code for this filter to account for any variation you might need.

Discovery

Having each REST call validated as it is made is clearly the most essential behavior required. That being said, another very important aspect of delegated administration is being able to show users only those options which are actually available to them. It is a very poor user experience to show all options followed by “Access Denied” type messages when they try something they were not allowed to do.

Fortunately, the AM policy engine has a way for users to discover which features are available to them. See the documentation for “Requesting Policy Decisions For a Tree of Resources“. Making a request to policies?_action=evaluateTree returns values like so:

[
 {
   "advices": {},
   "ttl": 9223372036854776000,
   "resource": "*://*:*/openidm/info/login",
   "actions": {
     "POST": true,
     "GET": true
   },
   "attributes": {}
 },
 {
   "advices": {},
   "ttl": 9223372036854776000,
   "resource": "*://*:*/openidm/managed/user/*",
   "actions": {
     "PATCH": true,
     "GET": true
   },
   "attributes": {
     "organizationName": [
       "example"
     ]
   }
 }
 .....
]
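
For reference, the request that produces a tree of results like the one above is a POST to the same policies endpoint with _action=evaluateTree. A body along these lines gives the general shape (the resource and subject values are illustrative, and the call must be made by a suitably privileged account):

{
  "resource" : "*://*:*/openidm",
  "application" : "openidm",
  "subject" : {
    "ssoToken" : "<session token of the user being evaluated>"
  }
}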

This is the fundamental building block upon which you can build a dynamic UI for your delegated admins. The next step necessary is a means to return this information to your users. The policy engine endpoints (such as the above call to evaluateTree) are not normally accessible directly by end-users; only privileged accounts can access them. The best method for returning this information to your users is to define a new endpoint in IG which has the same configuration details that the PolicyEnforcementFilter uses. This new endpoint is a ScriptableHandler instance which is basically just a thin wrapper around the call to evaluateTree.

An example implementation of this endpoint is available within the “sample-platform” folder within the forgeops git repository. Here are the two key files:

With this /policyTree/ endpoint available, your UI can make a simple GET request and find every detail about which REST endpoints are available for the current user. You can then write UI code which relies on this information in order to render only those features which the user has access to use. Example code which extends the default IDM Admin UI is also available in the sample-platform folder; take a look at the files under idm/ui/admin/extension for more details.

Request Sequence

Here is a detailed diagram which demonstrates how a series of requests by an authenticated user with the delegated administration role are routed through the various systems:

Diagram available via WebSequenceDiagrams

Next steps

Although I only showed a very simple form of delegation with this example, the pattern described should enable many more complex and powerful use-cases. There are many features of the products which, when used together, could make for some very exciting systems.

This is just the beginning – we will be working toward improving product integration and features to make this use case and many others possible. Be sure to let us know about your successes and your struggles in this area, so that we can keep the products growing in the right direction. Thanks!

Enhancing User Privacy with OpenID Connect Pairwise Identifiers

This is a quick post to describe how to set up pairwise subject hashing when issuing OpenID Connect id_tokens that require the user’s sub= claim to be pseudonymous.  The main use case for this approach is to prevent clients or resource servers from being able to track user activity and correlate the same subject’s activity across different applications.

OpenID Connect basically provides two subject identifier types: public or pairwise.  With public, the sub= claim is simply the user id or equivalent for the user.  This creates a flow something like the below:

Typical “public” subject identifier OIDC flow

This is just a typical authorization_code flow – the end result is the id_token payload.  The sub= claim is in the clear and readable, which allows the possibility of correlating all of sub=jdoe’s activity.

So, what if you want a bit more privacy within your ecosystem?  Well, here comes the Pairwise Subject Identifier type.  This basically allows each client to be issued with a non-reversible hash of the sub= claim, preventing correlation.
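
To illustrate the concept (this is not AM’s exact algorithm, which is internal to the product), the OpenID Connect specification describes deriving a pairwise identifier as a salted hash over the sector identifier and the local subject, along these lines:

// Illustrative only: the OpenID Connect spec suggests pairwise identifiers can
// be derived as a salted hash of the sector identifier and the local subject.
// AM's exact implementation may differ from this sketch.
import java.security.MessageDigest

def sectorIdentifier = "app.example.com"   // host portion of the sector_identifier_uri
def localSub = "jdoe"                      // the real user identifier
def salt = "changeme"                      // provider-level salt (see below)

def digest = MessageDigest.getInstance("SHA-256")
        .digest((sectorIdentifier + localSub + salt).getBytes("UTF-8"))
def pairwiseSub = digest.encodeBase64().toString()

println pairwiseSub   // opaque, stable per client sector, not reversible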

To configure this in ForgeRock Access Management, alter the OIDC provider settings.  On the Advanced tab, simply add pairwise as a subject type.

Enabling Pairwise on the provider

 

Next alter the salt for the hash, also on the provider settings advanced tab.

Add a salt for the hash

 

Each client profile then needs either a request_uri setting or a sector_identifier_uri – basically akin to the redirect_uri whitelist, and just a mechanism to identify client requests.  On the client profile, add in the necessary sector identifier and change the subject identifier type to “pairwise”.  This is on the client profile’s Advanced tab.
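
If you use a sector_identifier_uri, the document it points to is, per the OpenID Connect Dynamic Client Registration specification, simply a JSON array of the client’s redirect_uri values – for example:

[
  "http://app.example.com:8080",
  "http://app2.example.com:8080"
]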

Client profile settings

Once done, just slightly alter the incoming authorization_code generation request to look something like this:

/openam/oauth2/authorize?response_type=code
&save_consent=0
&decision=Allow
&scope=openid
&client_id=OIDCClient
&redirect_uri=http://app.example.com:8080
&sector_identifier_uri=http://app.example.com:8080

Note the addition of the sector_identifier_uri parameter.  Once you’ve exchanged the authorization_code for an access_token, take a peek inside the associated id_token.  This now contains an opaque sub= claim:
{
  "at_hash": "numADlVL3JIuH2Za4X-G6Q",
  "sub": "lj9/l6hzaqtrO2BwjYvu3NLXKHq46SdorqSgHAUaVws=",
  "auditTrackingId": "f8ca531a-61dd-4372-aece-96d0cea21c21-152094",
  "iss": "http://openam.example.com:8080/openam/oauth2",
  "tokenName": "id_token",
  "aud": "OIDCClient",
  "c_hash": "Pr1RhcSUUDTZUGdOTLsTUQ",
  "org.forgerock.openidconnect.ops": "SJNTKWStNsCH4Zci8nW-CHk69ro",
  "azp": "OIDCClient",
  "auth_time": 1517485644000,
  "realm": "/",
  "exp": 1517489256,
  "tokenType": "JWTToken",
  "iat": 1517485656

}
The overall flow would now look something like this:
OIDC flow with Pairwise

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Enhancing OAuth2 introspection with a Policy Decision Point

OAuth2 protection of resource server content is typically done either via a call to the authorization service (AS) and its ../introspect endpoint for stateful access_tokens, or, in deployments where stateless access_tokens are used, via “local” introspection on the resource server (RS), provided it has access to the necessary AS signing material.  All good.  The RS would validate scope values, token expiration time and so on.

Contrast that with the typical externalised authorization model, with a policy enforcement point (PEP) and policy decision point (PDP).  Something being protected sends a request to a central PDP.  That request is likely to contain the object descriptor, a token representing the subject and some contextual data.  The PDP will have a load of pre-built signatures or policies that are looked up and processed.  The net-net is that the PDP sends back an allow/deny style decision, which the PEP (either in the form of an SDK or a policy agent) complies with.

So what is this blog about?  Well, it’s the juxtaposition of the typical OAuth2 construct with externalised, PDP-style authorization.

So the first step is to set up a basic policy within ForgeRock Access Management that protects a basic web URL – http://app.example.com:8080/index.html.  In honesty, the thing being protected could be a URL, button, image, physical object or any other resource type you see fit.

Out of the box authorization policy summary

To call the PDP, an application would create a REST payload looking something like the following:

REST request payload to PDP

The request would be a POST to the ../openam/json/policies?_action=evaluate endpoint.  This is a protected endpoint, meaning it requires authentication against the AM instance.  In a normal, non-OAuth2-integrated scenario, this would be handled via the iPlanetDirectoryPro header, which would also be used within the PDP decision.  In this case, though, we don’t have an iPlanetDirectoryPro cookie for the end user – simply the access_token.

Application Service Account

So, there are a couple of extra steps to take.  Firstly, we need to give our calling application its own service account.  Simply add a new group and an associated application service user.  This account could then authenticate via shared secret, JWT, x509 or any other authentication method configured. Make sure to give the group the account is in privileges to call the REST PDP endpoint.  So, back to the use case…

This REST PDP request is the same as any other.  We have the resource being protected, which maps into the policy, and the OAuth2 access_token that was generated out of band, presented to the PDP within the request environment.
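
As a rough guide (the screenshot above shows the author’s actual payload), an evaluate request along these lines gives the general shape – the policy set name, resource and token values below are purely illustrative:

{
  "resources" : [ "http://app.example.com:8080/index.html" ],
  "application" : "iPlanetAMWebAgentService",
  "environment" : {
    "access_token" : [ "<the OAuth2 access_token obtained out of band>" ]
  }
}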

OAuth2 Validation Script

The main validation is now happening in a simple Policy Condition script.  The script does a few things: it performs a call to the AM ../introspect endpoint for basic validation – is the token AM-issued, valid, within its exp and so on.  In addition, there are two switches – perform auth_level validation and perform scope validation.  Each of these functions takes a configurable setting.  If performAuthLevelCheck is true, make sure to set the acceptableAuthLevel value.  As of AM 5.5, the issued OAuth2 access_token contains a value called “auth_level”.  This value ties in the authentication assurance level that has been in AM since the OpenSSO days.  This numeric value is useful for differentiating how a user was validated during OAuth2 issuance. The script basically provides a simple way to perform a minimum acceptable value check.

The other configurable switch is the performScopeCheck boolean.  If true, the script checks that the submitted access_token is associated with at least a minimum set of required scopes.  The access_token may have more scopes, but it must, as a minimum, have the ones configured in the acceptableScopes attribute.
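
The full scripted condition is linked at the end of this post.  As a rough sketch of the moving parts only – the endpoint URL, client credentials, thresholds and scope names below are placeholders – the logic looks something like this:

/*
 * Rough sketch of the policy condition logic described above - the linked
 * script is the real version. Endpoint, client credentials, thresholds and
 * scope names here are placeholders.
 */
import org.forgerock.http.protocol.Request

def acceptableAuthLevel = 2
def acceptableScopes    = ["openid", "profile"]

authorized = false
def accessToken = environment.get("access_token")?.iterator()?.next()

if (accessToken != null) {
    // Ask AM whether the token is valid (issued by AM, not expired, and so on)
    def introspect = new Request()
    introspect.setMethod("POST")
    introspect.setUri("http://openam.example.com:8080/openam/oauth2/introspect")
    introspect.headers.add("Content-Type", "application/x-www-form-urlencoded")
    introspect.headers.add("Authorization",
            "Basic " + "resourceServer:password".bytes.encodeBase64().toString())
    introspect.setEntity("token=" + accessToken)

    def json = httpClient.send(introspect).get().entity.json

    // auth_level is assumed to be present in the introspection output, per the post
    def grantedScopes = (json.scope ?: "").tokenize(" ")
    def authLevelOk   = ((json.auth_level ?: 0) as Integer) >= acceptableAuthLevel
    def scopesOk      = acceptableScopes.every { grantedScopes.contains(it) }

    authorized = (json.active == true) && authLevelOk && scopesOk
}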

Validation Responses

Once the script is in place, let’s run through some examples where access is denied.  The first simple one is if the auth_level of the access_token is less than the configured acceptable value.

acceptable_auth_level_not_met advice message

Next up is the situation where the scopes associated with the submitted access_token fall short of what is required.  There are two advice payloads that could be sent back here.  Firstly, if the number of scopes is fundamentally too small, the following advice is sent back:

acceptable_scopes_not_met – submitted scopes too few

A second response, associated with mismatched scopes, is when the number of scopes is OK but the actual values don’t contain the acceptable ones.  The following is seen:

acceptable_scopes_not_met – scope entry missing

That’s all there is to it.  A few things to know: the TTL of the policy decision has been set to the exp of the access_token.  Clearly this is overridable, but it seemed sensible to tie it to the access_token lifespan.

All being well though, a successful response back would look something like the following – depending on what actions you had configured in your policy:

Successful PDP response

Augmenting with Additional Environmental Conditions

So we have an OAuth2-compatible PDP.  Cool!  But what else can we do?  Well, we can augment the scripted decision making with a couple of other conditions – namely the time-based, IP-address-based and LDAP-based conditions.

IP and Time based augmentation of access_token validation

The above just shows a simple example of tying the decision making to only allow valid access_token usage between 8:30am and 5:30pm Monday-Friday from a valid IP range.  The other condition worth a mention is the LDAP filter one.

Note: any of the environmental conditions that require session validation would fail, as the script isn’t linking the access_token to an AM session at this point – in some cases (depending on how the access_token was generated) there may never be a session associated with it.  So beware: they will not work.

The code for the scripted condition is available here.

 

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

How Information Security Can Drive Innovation

Information Security and Innovation: often at two different ends of an executive team’s business strategy. The non-CIO ‘C’-level folks want to discuss revenue generation, efficiency and growth – three areas often immeasurably enhanced by having a strong and clear innovation management framework. The CIO’s objectives are often focused on technical delivery, compliance, upholding SLAs and, more recently, on privacy enablement and data breach prevention.

So how can the two worlds combine, to create a perfect storm for trusted and secure economic growth?

Innovation Management

But firstly, how do organisations actually become innovative? Innovation is a buzzword that is thrown around at will, but many organisations fail to build out the necessary teams and processes to allow it to succeed. Innovation basically focuses on the ability to create both incremental and radically different products, processes and services, with the aim of developing net-new revenue streams.

But can this process be managed? Or are companies and individuals just “born” to be creative? Well, simply put, no: creativity can be managed, fostered and encouraged. Some basic creative thinking concepts include “design thinking” – where the focus is on emphasising customer needs, prototyping, iterating and testing again. This is then combined with different thinking types – open (problem felt directly), closed (via a 3rd party), internal (a value-add contribution) and external (creativity as part of a job role).

The “idea-factory” can then be categorised into something like HR-led ideas (those from existing staff that lead to incremental changes), R&D ideas (the generation of radical concepts that lead to entirely new products) and, finally, marketing-led ideas (those that capture customer feedback).

Business Management

Once the idea-machine has been designed, it needs feeding with business strategy. That “food” helps to define what the idea-machine should focus upon and prioritise. This can be articulated in the form of what the business wants to achieve. If it is revenue maximisation, does this take the form of product standardisation, volume or distribution changes? This business analysis needs to focus on identifying unmet customer needs, tied neatly into industry or global trends (a nice review on the latter is the “Fourth Industrial Revolution” by Klaus Schwab).

Information Security Management

There is a great quote by Amit & Zott that goes along the lines of: as an organisation, you’re always one innovation away from being wiped out. Very true. But that analogy can also be applied to being “one data breach” from being wiped out – through irreparable brand damage, or perhaps the theft of intellectual property. So how can we move from the focus on business change and forward thinking to information security, which has typically been retrospective, restrictive and seen as an IT cost centre?

Well, there are similarities, believe it or not, and when designed in the right way, the overlay of application, data and identity-led security can drive faster, more efficient and more trustworthy services.

One of the common misconceptions regarding security management and implementation is that it is applied retrospectively. An application or infrastructure is created, then audits, penetration tests or code analysis take place. Security vulnerabilities are identified, triaged and fixed in future releases.

Move Security to the Left

It is much more cost-effective and secure to apply security processes at the very beginning of any project, be it for the creation of net-new applications or a new infrastructure design – the classic “security by design” approach. For example, developers should have a basic understanding of security concepts – cryptography 101, when to hash versus encrypt, what algorithms to use, how to protect against unnecessary information disclosure, identity protection and so on. Exit criteria within epic and story creation should reference how the security posture should, as a minimum, not be altered. Functional tests should include static and dynamic code analysis. All of these incremental changes really move “security to the left” of the development pipeline, closer to the project start than the end.

Agile -v- Stage-Gate Analysis

Within the innovation management framework, stage-gate analysis is often used to triage creative idea processing, helping to identify what to move forward with and what to abandon.

A stage is a piece of work, followed by a gate. A gate basically has exit criteria, with outcomes such as “kill”, “stop”, “back”, “go forward” and so on. Each idea flows through this process, the aim being to kill weak ideas early and reduce cost. As an idea flows through the stage-gate process, the cost of implementation clearly increases. This approach is very similar to the agile methodology of building complex software: lots of unknowns, baby steps, iteration, feedback, behaviour alteration and so on.

So there is a definitive mindset duplication between creating ideas that feed into service and application creation and how those applications are made real.

Security Innovation and IP Protection

A key theme of information security attack vectors over the last five years has been the speed of change. Whether we are discussing malware, ransomware, nation-state attacks or zero-day notifications, there is constant change. Attack vectors do not stay still. The infosec industry is growing annually as both the private sector and nation states ramp up defence mechanisms using skilled personnel, machine learning and dedicated practices. Those “external” changes require organisations to respond in innovative and agile ways when it comes to security.

Security is no longer a compliance exercise. The ability to deliver a secure and trusted application or service is a competitive differentiator that can build long lasting, sticky customer relationships.

A more direct relationship between innovation and information security is the simple protection of the intellectual property that relates to the new practices, ideas, patents and other value created by those innovation frameworks. That IP needs protecting – from external malicious attacks, disgruntled insiders and so on.

Summary

Overall, organisations are going through the digital transformation exercise at rapid speed and scale. That transformation process requires smart innovation, which should be neatly tied into the business strategy. However, security management is no longer a retrospective, compliance-driven exercise. The process, personnel and speed of change the infosec industry sees can provide a great breeding ground for altering the application development process, reducing internal boundaries and delivering secure, trusted, privacy-preserving services that allow organisations to grow.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

ForgeRock welcomes Shankar Raman

Welcome to Shankar Raman, who joins the ForgeRock documentation team today.

Shankar is starting with platform and deployment documentation, where many of you have been asking us to do more.

Shankar comes to the team from curriculum development, having worked for years as an instructor, writer, and course developer at Oracle on everything from the DB, to middleware, to Fusion Applications. Shankar’s understanding of the deployment problem space and what architects and deployers must know to build solutions will help him make your lives, or at least your jobs, a bit easier. Looking forward to that!

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

