Implementing JWT Profile for OAuth2 Access Tokens

There is a new IETF draft called JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens.  This is a very early -00 version that looks to describe the format of OAuth2-issued access_tokens.

Access tokens are typically bearer tokens, but the OAuth2 spec doesn’t really describe what format they should be.  They typically end up being one of two high-level types – stateless and stateful.  Stateful just means “by reference”, with a long opaque random string being issued to the requestor, which resource servers can then send back to the authorization service in order to introspect and validate.  On their own, stateful or reference tokens don’t really provide the resource servers with any detail.

The alternative is to use a stateless token – namely a JSON Web Token (JWT).  This new spec aims to standardise what the content and format should be.

From a ForgeRock AM perspective, this is good news.  AM has delivered JWT-based tokens (web session, OIDC id_tokens and OAuth2 access_tokens) for a long time.  The format and content of the access_tokens, out of the box, generally look something like the following:

The out of the box header (using RS256 signing):
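An illustrative sketch of such a header – the kid value is a placeholder for whatever signing key your deployment uses:

{
  "typ": "JWT",
  "kid": "...",
  "alg": "RS256"
}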

The out of the box payload:
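As an illustrative sketch, with claim names taken from the discussion below and all values as placeholders:

{
  "sub": "demo",
  "cts": "...",
  "auth_level": 0,
  "iss": "https://am.example.com/openam/oauth2",
  "tokenName": "access_token",
  "token_type": "Bearer",
  "authGrantId": "...",
  "aud": "myClient",
  "nbf": 1571307619,
  "grant_type": "authorization_code",
  "scope": [ "openid" ],
  "auth_time": 1571307619,
  "realm": "/",
  "exp": 1571311219,
  "iat": 1571307619,
  "expires_in": 3600,
  "jti": "...",
  "client_id": "myClient",
  "cnf": { "jwk": { "kty": "RSA" } }
}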

There is a lot of stuff in that access_token.  Note the cnf claim (confirmation key): this is used for proof of possession support, which is of course optional, so you can easily reduce the token size by not implementing that.  There are also several claims that are specific to the AM authorization service, which may not always be needed in a stateless JWT world, where perhaps the RS is performing offline validation away from the AS.

In AM 6.5.2 and above, new functionality allows you to rapidly customize the content of the access_token.  You can add custom claims, remove out-of-the-box fields and generally build token formats that suit your deployment.  We do this through the addition of scriptable support.  Within the settings of the OAuth2 provider, note the new field for the OAuth2 Access Token Modification Script.

This scripting ability was already in place for OIDC id_tokens.  Similar concepts now apply.

The draft JWT profile spec basically mandates iss, exp, aud, sub and client_id, with auth_time and jti as optional.  The AM token already contains those claims.  Perhaps the only differing component is that the JWT profile spec – section 2.1 – recommends the header typ value be set to “at+JWT” – meaning an access token JWT – so the RS does not confuse the token with an id_token.  The FR AM scripting support does not allow changes to the typ, but the payload already contains a tokenName claim (with the value access_token) to help with this distinction.

If we add a couple of lines to the out-of-the-box script, namely the following, we cut the token content back to the recommended JWT profile:

accessToken.removeField("cts");
accessToken.removeField("expires_in");
accessToken.removeField("realm");
accessToken.removeField("grant_type");
accessToken.removeField("nbf");
accessToken.removeField("authGrantId");
accessToken.removeField("cnf");

The new token payload is now much more slimmed down:
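For illustration, the slimmed-down payload now looks something like this (values again placeholders):

{
  "sub": "demo",
  "iss": "https://am.example.com/openam/oauth2",
  "tokenName": "access_token",
  "aud": "myClient",
  "scope": [ "openid" ],
  "auth_time": 1571307619,
  "exp": 1571311219,
  "iat": 1571307619,
  "jti": "...",
  "client_id": "myClient"
}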

The accessToken.setField("name", "value") method allows simple extension and alteration of standard claims.
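For example, a hypothetical custom claim (the claim name here is purely illustrative) could be added alongside the removals above:

accessToken.setField("tenant", "retail");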

For further details see the following documentation on scripted token content – https://backstage.forgerock.com/docs/am/6.5/oauth2-guide/#modifying-access-tokens-scripts

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Runtime configuration for ForgeRock Identity Microservices using Consul

The ForgeRock Identity Microservices are able to read all runtime configuration from environment variables. These could be statically provided as hard-coded key-value pairs in a Kubernetes manifest or shell script, but a better solution is to manage the configuration in a centralized configuration store such as HashiCorp Consul.

Managing configuration outside the code has been routinely adopted as a practice by modern service providers such as Google, Azure and AWS, and by businesses that use both cloud and on-premise software.

In this post, I shall walk you through how Consul can be used to set configuration values, and how envconsul can be used to read these key-values from different namespaces and supply them as environment variables to a subprocess. Envconsul can even be used to trigger automatic restarts on configuration changes, without any modifications needed to the ForgeRock Identity Microservices.

Perhaps even more significantly, this post shows how it is possible to use Docker to layer on top of a base microservice Docker image from the ForgeRock Bintray repo, and then use a pre-built Go binary for envconsul to spawn the microservice as a subprocess.

This github.com repo has all the docker and kubernetes artifacts needed to run this integration. I used a single cluster minikube setup.

Consul

Starting a Pod in Minikube

We must start by setting up Consul as a Docker container: this is just a simple docker pull consul followed by a kubectl create -f kube-consul.yaml that sets up a pod running Consul in our minikube environment. Consul creates a volume called /consul/data on which you can store configuration key-value pairs in JSON format; these could be directly imported using consul kv import @file.json, and consul kv export > file.json can be used to export all stored configuration, separated into JSON blobs by namespace.

The consul commands can be run from inside the Consul container by starting a shell in it using docker exec -ti <consul-container-name> /bin/sh.
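For example – a sketch, with the container name being whatever was assigned in your environment:

docker exec -ti <consul-container-name> /bin/sh
# inside the container: export all stored configuration, or re-import it
consul kv export > consul-export.json
consul kv import @consul-export.json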

Configuration setup using Key Value pairs

The namespaces for each microservice are defined either manually in the UI or via CLI commands such as:
consul kv put my-app/my-key my-value


Expanding the ms-authn namespace yields the following screen:

A sample configuration set for all three microservices is available in the github.com repo for this integration here. Use consul kv import @consul-export.json to import it into your Consul server.

Docker Builds

Base Builds

First, I built base images for each of the ForgeRock Identity Microservices without using an ENTRYPOINT instruction. These images are built on the openjdk:8-jre-alpine base image, which is the smallest Alpine-based OpenJDK image one can possibly find (the openjdk7 and openjdk9 images are larger). Alpine itself is a great choice for a minimalistic Docker base image and provides packaging capabilities using apk, especially when images are to be layered on top of it, as in this demonstration.

An example for the Authentication microservice is:
FROM openjdk:8-jre-alpine
WORKDIR /opt/authn-microservice
ADD . /opt/authn-microservice
EXPOSE 8080

The command docker build -t authn:BASE-ONLY . builds the base image, but does not start the authentication listener since no CMD or ENTRYPOINT instruction was specified in the Dockerfile.

The graphic below shows the layers the microservices docker build added on top of the openjdk:8-jre-alpine base image:

Next, we need to define a launcher script that uses a pre-built envconsul Go binary to set up communication with the Consul server and use the key-value namespace for the authentication microservice.

/usr/bin/envconsul -log-level debug -consul=192.168.99.100:31100 -pristine -prefix=ms-authn /bin/sh -c "$SERVICE_HOME"/bin/docker-entrypoint.sh

A lot just happened there!

We instructed the launcher script to start envconsul with the address and nodePort for the Consul server defined in kube-consul.yaml previously. The -pristine switch tells envconsul not to merge extant environment variables with the ones read from the Consul server. The -prefix switch identifies the application namespace we want to retrieve configuration for. Finally, /bin/sh -c spawns a new process for the docker-entrypoint.sh script, which is the ENTRYPOINT instruction for the authentication microservice tagged 1.0.0-SNAPSHOT. Note that that image is not used in our demonstration; instead we built a BASE-ONLY tagged image and layer the envconsul binary and launcher script (shown next) on top of it. Envconsul itself tracks changes to configuration values in Consul and, if a change is detected, the subprocess is restarted automatically!

Envconsul Build

With the envconsul.sh script ready as shown above, it is time to build a new image using the BASE-ONLY tagged microservice image in the FROM instruction, as shown below:

FROM forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY
MAINTAINER Javed Shah <javed.shah@forgerock.com>
COPY envconsul /usr/bin/
ADD envconsul.sh /usr/bin/envconsul.sh
RUN chmod +x /usr/bin/envconsul.sh
ENTRYPOINT ["/usr/bin/envconsul.sh"]

This Dockerfile includes the COPY instruction to add our pre-built, standalone envconsul Go binary to the new image, and also adds the envconsul.sh script shown above to the /usr/bin/ directory. It then sets the +x permission on the script and makes it the ENTRYPOINT for the container.

At this point we used the docker build -t authn:ENVCONSUL . command to tag the new image correctly. This image is available at the ForgeRock Bintray Docker repo as well.

This graphic shows the layers we just added using our Dockerfile instruction set above to the new ENVCONSUL tagged image:

These two images are available for use as needed by the github.com repo.

Kubernetes Manifest

It is a matter of pulling down the correctly tagged image and specifying one key item: the hostAliases instruction, which points to the OpenAM server running on the host (see the sketch below). This server could be anywhere; just update the hostAliases accordingly. Manifests for the three microservices are available in the github.com repo.
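A minimal sketch of the relevant manifest fragment – the IP and hostname here are assumptions and must be adjusted for wherever your OpenAM server actually runs:

spec:
  hostAliases:
  - ip: "192.168.99.1"
    hostnames:
    - "openam.example.com"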

Starting the pod with kubectl create -f kube-authn-envconsul.yaml will cause the following things to happen, as seen in the logs using kubectl logs ms-authn-env.

This shows that envconsul started up with the address of the Consul server. We deliberately used the older -consul flag for more “haptic” feedback from the pod, if you will. Once envconsul has started up, it will immediately reach out to the Consul server, unless a Consul agent was provided in the startup configuration. It will fetch the key-value pairs for the namespace ms-authn and make them available as environment variables to the child process!

The configuration has been received from the server and at this time envconsul is ready to spawn the child process with these key-value pairs available as environment variables:

The Authentication microservice is able to start up and read its configuration from the environment provided to it by envconsul.

Change Detection

Let’s change a value in the Consul UI for one of our keys, CLIENT_CREDENTIALS_STORE, from json to ldap:
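Equivalently, this can be done from the CLI, since the key lives under the ms-authn namespace:

consul kv put ms-authn/CLIENT_CREDENTIALS_STORE ldap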

This change is immediately detected by envconsul in all running pods:

envconsul then wastes no time in restarting the subprocess microservice:

Pretty cool way to maintain a 12-factor app!

Tests

A quick test of the OAuth2 client credentials grant reveals our endeavor has been successful. We receive a bearer access token from the newly started authn-microservice.
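A sketch of such a test using curl – the host, port and client credentials are assumptions that depend on how your pod is exposed and what exists in your client repository:

curl -X POST -u client:client \
  "http://localhost:18080/service/access_token?grant_type=client_credentials"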

The token validation and token exchange microservices were also spun up using the procedures defined above.

Here is an example of validating a stateful access_token, generated via the Authorization Code grant flow, in OpenAM:

And finally, the Postman call to the token-exchange service – exchanging a user’s id_token for an access token granting delegation to an “actor” whose id_token is also supplied as input – is shown here:

The resulting token is:

This particular use case is described in its own blog post.

What’s coming next?

Watch this space for secrets integration with Vault and envconsul!

Token Exchange and Delegation using the ForgeRock Identity Microservices

In 2017 ForgeRock introduced an Early Access program (aka beta) for the ForgeRock Identity Microservices. In summary, the capabilities offered include token issuance using the OAuth2 client credentials grant, token validation of OAuth2/OIDC tokens (and even ssotokens), and token exchange based on the draft OAuth2 token exchange spec. I wrote a press release here and an introductory community blog post here.

In this article we will discuss how to implement delegation as per the OAuth2 Token Exchange draft specification. An overview of the process is worth discussing at this point.

We basically first want to grab hold of an OAuth2 token for our client to use in an Authorization header when calling the Token Exchange service. Next, we need our authorization server, ForgeRock Access Management in our example, to issue our friendly neighbors Alice and Bob their respective OIDC id_tokens. Note that Alice wants to delegate authority to Bob. While we shall show the REST calls to acquire those, the details of setting up the OAuth2 provider will be left out. We shall, however, discuss the custom OIDC Claims Script for setting up the delegation framework for authenticated users, as well as the policy that sets up allowed actors who may be chosen for delegation.

But first things first: we start by acquiring a bearer authentication token for our client, who will make calls into the Token Exchange service. This can be accomplished by having the Authentication microservice issue an OAuth2 access token with the exchange scope using the client_credentials grant_type. For this example we shall just use a JSON backend as our client repository, although the Authentication microservice supports using Cassandra, MongoDB, any LDAP v3 directory including ForgeRock Directory Services.

The repository holds the following data for the client:

{
  "clientId" : "client",
  "clientSecret" : "client",
  "scopes" : [ "exchange", "introspect" ],
  "attributes" : {}
}

The authentication request is:

POST /service/access_token?grant_type=client_credentials HTTP/1.1
Host: localhost:18080
Content-Type: multipart/form-data
Authorization: Basic ****
Cache-Control: no-cache

After the client authenticates, the Authentication microservice issues a token that, when decoded, looks like this:

{
 "sub": "client",
 "scp": [
     "exchange",
     "introspect"
 ],
 "auditTrackingId": "237cf708-1bde-4800-9bcd-5db5b5f1ca12",
 "iss": "https://fr.tokenissuer.example.com",
 "tokenName": "access_token",
 "token_type": "Bearer",
 "authGrantId": "a55f2bad-cda5-459e-8620-3e826bad9095",
 "aud": "client",
 "nbf": 1523058711,
 "scope": [
     "exchange",
     "introspect"
 ],
 "auth_time": 1523058711,
 "exp": 1523062311,
 "iat": 1523058711,
 "expires_in": 3600,
 "jti": "39b72a84-112d-46f3-a7cb-84ce7513e5bc",
 "cid": "client"
}

Bob, the actor, authenticates with AM and receives an id_token:

{
 "at_hash": "Hgjw0D49EfYM6dmB9_K6kg",
 "sub": "user.1",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111930",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "oidcclient",
 "azp": "oidcclient",
 "auth_time": 1523058714,
 "realm": "/",
 "may_act": {},
 "exp": 1523062314,
 "tokenType": "JWTToken",
 "iat": 1523058714
}

Alice, the user, also authenticates with AM and receives a special id_token:

{
 "at_hash": "nT_tDxXhcee7zZHdwladcQ",
 "sub": "user.0",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111989",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "myuserclient1",
 "azp": "myuserclient1",
 "auth_time": 1523058716,
 "realm": "/",
 "may_act": {
     "sub": "Bob"
 },
 "exp": 1523062316,
 "tokenType": "JWTToken",
 "iat": 1523058716
}

Why does Alice receive a may_act claim but Bob does not? This has to do with Alice’s group membership in DS. The OIDC Claims script attached to the OAuth2 provider in AM checks for membership of this group and, if found, retrieves a value for the assigned “actor” or delegate. In this case Alice has assigned actor status to Bob (via some out-of-band method, such as on a user portal, for example). The script retrieves the actor and sets a special claim with its value: may_act. The claims script is shown here in part:

def isAdmin = identity.getMemberships(IdType.GROUP)
    .inject(false) { found, group ->
        found || group.getName() == "policyEval"
    }

//...search for actor.. not shown

def mayact = [:]
if (isAdmin) {
    mayact.put("sub", actor)
}

Next, we must define a policy in AM. This policy could be based on OAuth2 scopes or a URI, whatever you choose. I chose to set up an OAuth2 Scopes policy, but whatever scope you choose must match the resource you pass into the TE service. The rest of the policy is pretty standard; for example, here is a summary of the Subject tab of the policy:

{
  "type": "JwtClaim",
  "claimName": "iss",
  "claimValue": "http://localhost:8080/openam/oauth2"
}

I am checking for an acceptable issuer, and that’s it. This could be replaced with a subject condition for authenticated users as well, although the client would then have to send in Alice’s “ssotoken” as the subject_token instead of her id_token when invoking the TE service.

This also implies that delegation is “limited” to the case where you are using a JWT for Alice, preferably an OIDC id_token. We cannot do delegation without a JWT, because the TE depends on the presence of a “may_act” claim in Alice’s subject_token. When using only an ssotoken or an access_token for Alice, we cannot pass this claim to the TE service. Hopefully that’s clear as mud now (laughter).

Anyway... back to the policy setup in AM. Keep in mind that the TE service expects certain response attributes: “aud” and “scp” are mandatory. “allowedActors” is optional, but needed for our discussion of course. It contains a list of allowed actors, and this list could be computed based on who the subject is. One subject attribute must also be returned. A simple example of response attributes could be:

aud: images.example.com
scp: read,write
allowedActors: Bob
allowedActors: HelpDeskAdministrator
allowedActors: James

The subject must also be returned from the policy, and one way to do that is to parse the id_token and return the sub claim as a response attribute. A scripted policy condition attached to the policy environment extracts the sub claim and returns it in the response.

This policy script is attached as an environment condition as follows:
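A minimal sketch of such a script is shown below. The binding names (environment, responseAttributes, authorized) follow AM’s scripted policy condition conventions, but the environment key holding the JWT and the decoding shortcuts are illustrative assumptions only:

// fetch the subject JWT passed into the policy evaluation request
def jwt = environment.get("jwt").iterator().next()

// decode the JWT payload (the second dot-separated segment); a production
// script should use base64url decoding and rely on a verified signature
def payload = new String(jwt.tokenize(".")[1].decodeBase64())
def sub = new groovy.json.JsonSlurper().parseText(payload).sub

// return the subject as the "uid" response attribute the TE service expects
responseAttributes.put("uid", [sub])
authorized = true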

The extracted sub claim is then included as a response attribute.

At this point we are ready to set up the Token Exchange microservice to invoke the policy REST endpoint in AM for resource-match, allowed-actions and subject evaluation. Keep in mind that the TE service supports pluggable policy backends – a really powerful feature – with which one could have specific “pods” of microservices running local JSON-backed policies, other “pods” with Open Policy Agent (OPA)-powered regex policies, and yet another set of “pods” with the TE service calling out to AM’s policy engine for evaluation.

The setup for when the TE is configured to use AM as the token exchange policy backend looks like this:

{
 // Credentials for an OpenAM "subject" account with permission to access the OpenAM policy endpoint
 "authSubjectId" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_ID|service-account}",
 "authSubjectPassword" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD|****}",

// URL for the OpenAM authentication endpoint, without query parameters
 "authUrl" : "&{EXCHANGE_OPENAM_AUTH_URL|http://localhost:8080/openam/json/authenticate}",

// URL for the OpenAM policy-endpoint, without query parameters
 "policyUrl" : "&{EXCHANGE_OPENAM_POLICY_URL|http://localhost:8080/openam/json/realms/root/policies}",

// OpenAM Policy-Set ID
 "policySetId" : "&{EXCHANGE_OPENAM_POLICY_SET_ID|resource_policies}",

// OpenAM policy-endpoint response-attribute field-name containing "audience" values
 "audienceAttribute" : "&{EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR|aud}",

// OpenAM policy-endpoint response-attribute field-name containing "scope" values
 "scopeAttribute" : "&{EXCHANGE_OPENAM_POLICY_SCOPE_ATTR|scp}",

// OpenAM policy-endpoint response-attribute field-name containing the token's "subject" value
 "subjectAttribute" : "&{EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR|uid}",

// OpenAM policy-endpoint response-attribute field-name containing the allowedActors
 "allowedActorsAttribute" : "&{EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR|may_act}",

// (optional) OpenAM policy-endpoint response-attribute field-name containing "expires-in-seconds" values
 "expiresInSecondsAttribute" : "&{EXCHANGE_OPENAM_POLICY_EXPIRES_IN_SEC_ATTR|}",

// Set to "true" to copy additional OpenAM policy-endpoint response-attributes into generated token claims
 "copyAdditionalAttributes" : "&{EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR|true}"
}

The TE service checks the response attributes shown above in the policy evaluation result in order to set up claims in the signed JWT it returns to the client.

The body of a sample REST policy call from the TE service to an OAuth2 scope policy defined in AM looks as follows:

{
 "subject" : {
     "jwt" : "{{AM_ALICE_ID_TOKEN}}"
 },
 "application" : "resource_policies",
 "resources" : [ "delegate-scope" ]
}

Policy response returned to the TE service:

[
 {
 "resource": "delegate-scope",
 "actions": {
     "GRANT": true
 },
 "attributes": {
     "aud": [
         "images.example.com"
     ],
     "scp": [
         "read"
     ],
     "uid": [
         "Alice"
     ],
     "allowedActors": [
          "Bob"
     ]
 },
 "advices": {},
 "ttl": 9223372036854775807
 }
]

Again, whatever resource type you choose for the policy in AM, it must match the resource you pass into the TE service as shown above.
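Before looking at the result, here is a sketch of what the exchange request itself might look like. The endpoint path and port are assumptions for this deployment, the bearer token is the client token acquired earlier, and the parameter names follow the draft OAuth2 token-exchange specification:

curl -X POST "http://localhost:18080/service/exchange" \
  -H "Authorization: Bearer $CLIENT_ACCESS_TOKEN" \
  --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
  --data-urlencode "subject_token=$AM_ALICE_ID_TOKEN" \
  --data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:id_token" \
  --data-urlencode "actor_token=$AM_BOB_ID_TOKEN" \
  --data-urlencode "actor_token_type=urn:ietf:params:oauth:token-type:id_token" \
  --data-urlencode "resource=delegate-scope"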

We have come far and if you are still with me, then it is time to invoke the TE service and get back a signed and exchanged JWT with the correct “act” claim to indicate delegation was successful. The decoded JWT claims set of the issued token is shown below. The new JWT is issued by the Token Exchange service and intended for consumption by a system entity known by the logical name “images.example.com” any time before its expiration. The subject “sub” of the JWT is the same as the subject of the subject_token used to make the request.

{
 "sub": "Alice",
 "aud": "images.example.com",
 "scp": [
     "read",
     "write"
 ],
 "act": {
     "sub": "Bob"
 },
 "iss": "https://fr.tokenexchange.example.com",
 "exp": 1523087727,
 "iat": 1523084127
}

The decoded claims set shows that the actor “act” of the JWT is the same as the subject of the “actor_token” used to make the request. This indicates delegation and identifies Bob as the current actor to whom authority has been delegated to act on behalf of Alice.

The OAuth2 ForgeRock Identity Microservices

ForgeRock Identity Microservices

In Q4 2017, ForgeRock released an Early Access (aka beta) program for three key Identity Microservices within a compact, single-purpose code set for consumer-scale deployments. For companies deploying stateless microservices architectures, these microservices offer a micro-gateway enabled solution providing service trust, policy-enforced identity propagation and even OAuth2-based delegation. The stateless architecture of the ForgeRock Identity Microservices allows for implementing a sidecar-friendly microgateway deployment pattern.

I blogged about it here. The sign up form is here.

The Microservices

OAuth2 Token Issuance

Enables service-to-service authentication using the client_credentials grant type. These “bearer” tokens facilitate trust up and down the call chain.

We currently support RSA, EC, and HMAC signing, with the following algorithms:

  • HS256
  • HS384
  • HS512
  • RS256
  • ES256
  • ES384
  • ES512

Token Validation

Supports introspection of OAuth2/OIDC tokens issued by an OAuth2 AS, as well as native AM tokens. Token validation is performed using either configured keys, or a lookup via the JWK URI for issuer and kid verification.

One or more Token Introspection API implementations can be configured by setting the configuration value to a comma-separated list of implementation names. Each implementation is given a chance to handle the token, in the given order, until one successfully handles it. An error response occurs if no implementation successfully introspects a given token.

The following implementations are provided:

  • json : Configured by a JSON file.
  • openam : Proxies introspect calls to ForgeRock Access Management.

Token Exchange

Supports exchanging stateless OAuth2 access tokens and native AM tokens for hybrid tokens that grant specific entitlements per resource or audience requested. Implements the draft OAuth2 token-exchange specification. Offers no-fuss declarative (“local”) policy enforcement inside the pod without needing to “call home” for identity data. Also provides policy-based entitlement decisions by calling out to ForgeRock Access Management.

The Token-Exchange Policy is a pluggable service which determines whether or not a given token exchange may proceed. The service also determines what audience, scope, and so forth the generated token will have. The following implementations are provided:

  • json (default) : Configured by a JSON file.
  • jwt : Simply returns a JWT for any token that can be introspected.
  • openam : Uses a ForgeRock Access Management Policy Set.

Cloud and Platform

Docker and Kubernetes

A Dockerfile is bundled with the binary. Soon there will be a Docker repository from which evaluators can pull the images.

A Kubernetes manifest is also provided that demonstrates using environment variables to configure a simple demo instance.

Metrics and Prometheus Integration

The microservices are instrumented to publish basic request metrics and a Prometheus endpoint is also provided, which is convenient for monitoring published metrics.

Audit Tracking

Each incoming HTTP request is assigned a transaction ID for audit logging purposes, which by default is a UUID. The ForgeRock Microservices, like other ForgeRock products, support propagation of a transaction ID between point-to-point service calls.

Cloud Foundry

The standard binary in Zip format is also directly deployable to Cloud Foundry, and will be recognized by the Cloud Foundry Java buildpack as a “dist zip” format.

Client Credential Repository

The ForgeRock Identity Microservices support Cassandra, MongoDB, and any LDAP v3 directory, including ForgeRock Directory Services.

Enhancing OAuth2 introspection with a Policy Decision Point

OAuth2 protection of resource server content is typically done either via a call to the authorization service (AS) and the ../introspect endpoint for stateful access_tokens, or, in deployments where stateless access_tokens are used, the resource server (RS) could perform “local” introspection, if it has access to the necessary AS signing material.  All good.  The RS would validate scope values, token expiration time and so on.

Contrast that to the typical externalised authorization model, with a policy enforcement point (PEP) and policy decision point (PDP).  Something being protected sends a request to a central PDP.  That request is likely to contain the object descriptor, a token representing the subject and some contextual data.  The PDP will have a load of pre-built signatures or policies that are looked up and processed.  The net-net is that the PDP sends back a deny/allow style decision, which the PEP (either in the form of an SDK or a policy agent) complies with.

So what is this blog about?  Well, it’s the juxtaposition of the typical OAuth2 construct with externalised PDP-style authorization.

So the first step is to set up a basic policy within ForgeRock Access Management that protects a basic web URL – http://app.example.com:8080/index.html.  In honesty, the thing being protected could be a URL, button, image, physical object or any other schema you see fit.

Out of the box authorization policy summary

To call the PDP, an application would create a REST payload looking something like the following:

REST request payload to PDP
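A sketch of such a payload – the policy set name here is an assumption for a default-style setup, and the access_token value is truncated:

{
  "resources": [ "http://app.example.com:8080/index.html" ],
  "application": "iPlanetAMWebAgentService",
  "environment": {
    "access_token": [ "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9..." ]
  }
}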

The request would be a POST to the ../openam/json/policies?_action=evaluate endpoint.  This is a protected endpoint, meaning it requires authX from an AM instance.  In a normal, non-OAuth2-integrated scenario, this would be handled via the iPlanetDirectoryPro header, which would be used within the PDP decision.  Now in this case, we don’t have an iPlanetDirectoryPro cookie, simply the access_token.

Application Service Account

So, there are a couple of extra steps to take.  Firstly, we need to give our calling application its own service account.  Simply add a new group and an associated application service user.  This account could then authenticate via shared secret, JWT, x509 or any other authentication method configured.  Make sure to give the group the account is in privileges to call the REST PDP endpoint.  So back to the use case…

This REST PDP request is the same as any other.  We have the resource being protected, which maps into the policy, and the OAuth2 access_token that was generated out of band, presented to the PDP as an environment variable.

OAuth2 Validation Script

The main validation is now happening in a simple policy condition script.  The script does a few things: it performs a call to the AM ../introspect endpoint to perform basic validation – is the token AM-issued, valid, within exp and so on.  In addition there are two switches – perform auth_level validation and perform scope validation.  Each of these functions takes a configurable setting.  If performAuthLevelCheck is true, make sure to set the acceptableAuthLevel value.  As of AM 5.5, the issued OAuth2 access_token contains a value called “auth_level”.  This value ties in the authentication assurance level that has been in AM since the OpenSSO days.  This numeric value is useful to differentiate how a user was validated during OAuth2 issuance.  The script basically allows a simple way to enforce a minimum acceptable value.

The other configurable switch is the performScopeCheck boolean.  If true, the script checks to make sure that the submitted access_token is associated with at least a minimum set of required scopes.  The access_token may have more scopes, but it must, as a minimum, have the ones configured in the acceptableScopes attribute.

Validation Responses

Once the script is in place, let’s run through some examples where access is denied.  The first simple one is if the auth_level of the access_token is less than the configured acceptable value.

acceptable_auth_level_not_met advice message

Next up is the situation where the scopes associated with the submitted access_token fall short of what is required.  There are two advice payloads that could be sent back here.  Firstly, if the number of scopes is fundamentally too small, the following advice is sent back:

acceptable_scopes_not_met – submitted scopes too few

A second response, associated with mismatched scopes, occurs if the number of scopes is OK, but the actual values don’t contain the acceptable ones.  The following is seen:

acceptable_scopes_not_met – scope entry missing

That’s all there is to it.  A few things to know: the TTL of the policy decision has been set to the exp of the access_token.  Clearly this is overridable, but it seemed sensible to tie it to the access_token lifespan.

All being well though, a successful response would look something like the following – depending on what actions you had configured in your policy:

Successful PDP response

Augmenting with Additional Environmental Conditions

So we have an OAuth2-compatible PDP.  Cool!  But what else can we do?  Well, we can augment the scripted decision making with a couple of other conditions – namely the time-based, IP-address-based and LDAP-based conditions.

IP and Time based augmentation of access_token validation

The above just shows a simple example of tying the decision making to only allow valid access_token usage between 8:30am and 5:30pm, Monday to Friday, from a valid IP range.  The other condition worth a mention is the LDAP filter one.

Note that any of the environmental conditions requiring session validation would fail: the script isn’t linking the access_token to an AM session at this point, and in some cases (depending on how the access_token was generated) there may never have been a session associated.  So beware – they will not work.

The code for the scripted condition is available here.


This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Top 5 Digital Identity Predictions for 2017

2016 is drawing to an end, the goose is getting fat, the lights and decorations are adorning many a fireplace, and other such cold weather clichés.  However, the attention must turn back to identity management and what the future may or may not hold.

Digital identity, or consumer-based identity and access management (CIAM), has taken a few big steps forward in the last 2 years.  Numerous industry analysts – Gartner, Forrester and KuppingerCole – have carved out CIAM as a new sub-topic of IAM that requires its own market and vendor analysis.  I think this is a valuable process, as CIAM projects tend to have very different requirements and implementation steps to traditional internal or employee-based IAM.

From a predictions perspective, I see the following top 5 topics becoming key components of any digital identity platform for the next 12-18 months.


1 - Device Pairing Becomes a Base Requirement for IoT


Everyone knows about IoT.  It's going to save the planet.  Increase personalisation.  Create loads of data and bring most CISOs and network security managers to their knees.  Other than that, "smart devices" – devices that can talk at least HTTP (hopefully HTTPS) – will be much more powerful and useful when tied and paired to a physical personal identity.  The classic "pin and pair" style use case.  Take for example a smart TV or a healthcare wearable.  By tying the device to an individual, the device can not only access cloud services and APIs on the owner's behalf, but can then in turn receive information to make the user experience more personalised.

A simple way to achieve this is via a draft IETF standard that leverages the popular authorization protocol OAuth2.  This allows the device to receive a scoped OAuth2 access token that can be used to represent the real person to other services.  More importantly, the token can be revoked just like any other OAuth2 access token when the device is sold or lost.


2 - OAuth2 Token Protection Becomes Mainstream


So what does this mean?  OAuth2 and OpenID Connect are now the de facto method for application owners to integrate 3rd-party authorization, identity assertions and other authX-style use cases.
OAuth2 generates an access_token and refresh_token pair that are used to gain access to profile data or APIs, for example.  OpenID Connect extends this concept slightly, by also issuing an id_token that can basically act like a SAML2 identity assertion.

However... the access tokens are bearer tokens.  What does that mean?  Well, if you are in possession of the token, you basically have access – assuming the token is valid, of course.  This opens up the possibility that tokens can be stolen (think insecure communications channels and man-in-the-middle, MITM, attacks) and then reused maliciously.  The resource servers, by design, only really check that the access token is valid and has the correct scopes/permissions – they don't check that the person, application or device presenting the token is the correct owner of the token.  Bad times.

Another draft IETF standard focuses on generating tokens that basically can't be reused if stolen.  Each issued token contains a little piece of the requester – namely their public key.  This allows the resource server to extract the public key from the access token and perform a challenge-response dance with the requester, to see if they are in fact the correct holder of the corresponding private key.  If they are, great, access granted.  If not, well, access is not granted, as they are not the original token owner.

3 - Social Signup Default


Social signup and sign-in (aka "Sign in with Facebook...") is so omnipresent in the applications and consumer services world that enterprise service providers, be it in the public sector delivering government services or the private sector delivering banking, insurance or retail services, cannot ignore the end-user benefits it can bring.

Not only does it speed up the user registration process, it also reduces the overhead for the service provider, in that they no longer need to handle password storage.  The user is authenticating with a 3rd party, so the service provider can outsource the password storage to Google, Facebook, Microsoft or whomever.

The flip side of using a 3rd party is that you have to trust their vetting, registration and data storage capabilities.  Social networks are notorious for having fake accounts, or accounts that no longer map to the correct owner.  If you are a service provider leveraging social sign-in, your applications and data assurance standards need to align, adding extra levels of assurance or verification as necessary.


4 - Push Authentication Default


What is push authentication?  I thought one-time passwords (OTP) were going to save the world?  Well, OTPs are certainly not going away any time soon, but many consumer-facing sites, and indeed social networks, are now introducing push authentication.  This basically occurs via a mobile app that receives notifications at login time.  The device and app are previously registered to the user.  At login time, the end user performs a simple action (generally a fingerprint scan or a swipe) to confirm they are the user logging in.  Push is certainly becoming the standard mechanism amongst the under-30s and will no doubt replace OTP for enterprise multi-factor authentication soon.


5 - Stateless Tokens & Micro-services a Match Made in Heaven


Microservice architectures seem to be everywhere.  Out with monolithic apps, which often have long delivery cycles and lots of fragility, and in with tiny, often single-function applications that are loosely coupled and can be delivered and updated continuously.

However, that then introduces new challenges and requirements surrounding authentication and authorization in a microservices world.  Here, OAuth2 again tends to come to the rescue, as many microservice or single-function systems are generally just exposed APIs sitting behind a routing and throttling mechanism.  Add into that mix the ability to have stateless access tokens (that is, an access token that is a JSON Web Token, carrying all of the access, validity and permissions data with it in one place) and you can start to support multi-million-transaction style infrastructures.

Microservice infrastructures tend to get hit hard.  Very hard.  Multi-million requests per day, performing GETs to retrieve data or POSTs to update it, with each transaction perhaps hitting 10, 20 or 100 tiny independent services.  Being able to pass down an access token within an HTTP Authorization header is powerful and flexible, and coupling that with a token that is stateless provides the necessary scaling backbone.

But why is stateless so interesting here?  A stateless access token allows local introspection before access is given.  That allows a microservice API to verify and look inside the presented JWT (which will appear in the Authorization header) without making a call back to the authorization service that issued the token.  This reduction in hops can be pretty useful in high-volume ecosystems – albeit the microservice will need the public key of the authorization service to verify the tokens, plus some extra code to verify and then introspect attributes like exp, aud, scopes and so on.

It will be interesting to see where we are come this time in 2017...

Protect Bearer Tokens Using Proof of Possession

Bearer tokens are the cash of the digital world.  They need to be protected.  Whoever gets hold of them can, well, basically use them as if they were you.  Pretty much the same as cash.  The shop owner only really checks the cash is real; they don’t check that the £5 note you produced from your wallet is actually your £5 note.

This has been an age-old issue in web access management technologies, both for stateless and stateful token types: OAuth2 access and refresh tokens, as well as OpenID Connect id_tokens.

In the hyper connected Consumer Identity & Access Management (CIAM) and Internet (Identity) of Things worlds, this can become a big problem.

Token misuse, perhaps via MITM (man in the middle) attacks, or even resource server misconfiguration, could result in considerable data compromise.

However, there are some newer standards that look to add some binding ability to the tokens – that is, glue them to a particular user or device based on some simple crypto.

The unstable nightly source and build of OpenAM has added a proof-of-possession capability to the OAuth2 provider service.  (Perhaps the first vendor to do so?  Email me if you see other implementations..)

The idea is that the client makes a normal request for an access_token from the authorization service (AS), but also adds another parameter to the request containing some crypto material the client has access to – basically the public key of an asymmetric key pair.

This key, which could be ephemeral for that request, is then baked into the access_token.  If the access_token is a JWT, the JWT contains this public key and is then signed by the authorization service.  If using a stateful access_token, the AS token introspection endpoint can relay the public key back to the resource server at lookup time.

This basically gives the RS the option to then perform a challenge-response style interaction with the client, to see if they are in possession of the corresponding private key – thus proving they are the correct recipient of the originally issued access_token!


The basic flow sees the addition of a new parameter to the access_token request to the OpenAM authorization service, under the name “cnf_key”.  This is a confirmation key that the client is in possession of.  In this example, it would be a base64-encoded JSON Web Key representation of a public key.

So, for example, a POST request to the endpoint ../openam/oauth2/access_token would now take the parameters grant_type, scope and also cnf_key, with an authorization header containing the OAuth2 client id and secret as normal.  A cnf_key could look something like this:

eyJqd2siOnsKICAiYWxnIjogIlJTMjU2IiwKICAiZSI6ICJBUUFCIiwKICAibiI6ICJ2TDM0UXh5bXdId1dEOVpWTDljaU42Yk5ybk91NTI0cjdZMzRvUlJXRkpjWjc3S1dXaHB1Si1iSlZXVVNUd3ZKTGdWTWlDZmFxSTZEWnIwNWQ2VGdONTNfMklVWmtHLXgzNnBFbDZZRWs1d1ZnX1ExelFkeEZHZkRoeFBWajJ3TWNNcjFyR0h1UUFEeC1qV2JHeGRHLTJXMXFsVEdQT253SklqYk9wVm1RYUJjNHhSYndqenNsdG1tcndzMmZNTUtNTDVqbnFwR2RoeWRfdXlFTU0wdHpNTGFNSVN2M2lmeFM2UUw3c2tpZTZ5ajJxamxUTUd3QjA4S29ZUEQ2QlVPaXd6QWxkUmJfM3k4bVA2TXY5cDdvQXBheTZCb25pWU8yaVJySzMxUlRaLVlWUHRleTllSWZ1d0ZFc0RqVzNES0JBS21rMlhGY0NkTHEyU1djVWFOc1EiLAogICJrdHkiOiAiUlNBIiwKICAidXNlIjogInNpZyIsCiAgImtpZCI6ICJzbW9mZi1rZXkiCn19Cg==

Running that through base64 -d in bash, or via an online base64 decoder, shows something like the following (NB this JWK was created using an online tool for simple testing):

{
  "jwk": {
    "alg": "RS256",
    "e": "AQAB",
    "n": "vL34QxymwHwWD9ZVL9ciN6bNrnOu524r7Y34oRRWFJcZ77KWWhpuJ-bJVWUSTwvJLgVMiCfaqI6DZr05d6TgN53_2IUZkG-x36pEl6YEk5wVg_Q1zQdxFGfDhxPVj2wMcMr1rGHuQADx-jWbGxdG-2W1qlTGPOnwJIjbOpVmQaBc4xRbwjzsltmmrws2fMMKML5jnqpGdhyd_uyEMM0tzMLaMISv3ifxS6QL7skie6yj2qjlTMGwB08KoYPD6BUOiwzAldRb_3y8mP6Mv9p7oApay6BoniYO2iRrK31RTZ-YVPtey9eIfuwFEsDjW3DKBAKmk2XFcCdLq2SWcUaNsQ",
    "kty": "RSA",
    "use": "sig",
    "kid": "smoff-key"
  }
}
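Putting this together, the token request itself might look like the following sketch – the client credentials and scope are placeholders, and the cnf_key value is the (truncated) base64 string shown earlier:

curl -X POST -u myClient:mySecret \
  "https://openam.example.com/openam/oauth2/access_token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "scope=profile" \
  --data-urlencode "cnf_key=eyJqd2siOnsK..."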

The authorization service should then return the normal access_token payload.  If using stateless OAuth2 access_tokens, the access_token will contain the new embedded cnf_key attribute, containing the originally submitted public key.  The resource server can then leverage the public key to perform some out-of-band challenge-response questions of the client, when the client comes to present the access_token later.

If using the more traditional stateful access_tokens, the RS can call the ../oauth2/introspect endpoint to find the public key.

The powerful use case is to validate that the client submitting the access_token is in fact the same as the original recipient when the access_token was issued.  This can help reduce MITM and other basic token misuse scenarios.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Identity Disorder Podcast, Episode 1

I’m excited to introduce a new podcast series hosted by Daniel Raskin and myself. The series will focus on (what we hope are!) interesting identity topics, news about ForgeRock, events, and much more. Take a listen to the debut episode below, where we discuss why and how to get rid of passwords, how stateless OAuth2 tokens work, and some current events, too!

-Chris