Token Exchange and Delegation using the ForgeRock Identity Microservices

In 2017, ForgeRock introduced an Early Access (aka beta) program for the ForgeRock Identity Microservices. In summary, the capabilities offered include token issuance using the OAuth2 client credentials grant, validation of OAuth2/OIDC tokens (and even ssotokens), and token exchange based on the draft OAuth2 token exchange specification. I wrote a press release here and an introductory community blog post here.

In this article, we will discuss how to implement delegation as described in the OAuth2 Token Exchange draft specification. An overview of the process is worth walking through at this point.

We first need to get hold of an OAuth2 token for our client to use in an Authorization header when calling the Token Exchange (TE) service. Next, we need our Authorization Server, ForgeRock Access Management (AM) in our example, to issue our friendly neighbors Alice and Bob their respective OIDC id_tokens. Note that Alice wants to delegate authority to Bob. While we shall show the REST calls to acquire those tokens, the details of setting up the OAuth2 Provider are left out. We shall, however, discuss the custom OIDC Claims Script that sets up the delegation framework for authenticated users, as well as the Policy that defines the allowed actors who may be chosen for delegation.

First things first: we start by acquiring a bearer authentication token for our client, which will make calls into the Token Exchange service. This can be accomplished by having the Authentication microservice issue an OAuth2 access token with the exchange scope using the client_credentials grant type. For the example we shall just use a JSON backend as our client repository, although the Authentication microservice also supports Cassandra, MongoDB, and any LDAP v3 directory, including ForgeRock Directory Services.

The repository holds the following data for the client:

{
  "clientId" : "client",
  "clientSecret" : "client",
  "scopes" : [ "exchange", "introspect" ],
  "attributes" : {}
}

The authentication request is:

POST /service/access_token?grant_type=client_credentials HTTP/1.1
Host: localhost:18080
Content-Type: multipart/form-data
Authorization: Basic ****
Cache-Control: no-cache
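
For example, the same request with curl, using the Basic credentials from the client repository entry above:

curl -X POST \
 -u client:client \
 "http://localhost:18080/service/access_token?grant_type=client_credentials"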

After the client authenticates, the Authentication microservice issues a token that, when decoded, looks like this:

{
 "sub": "client",
 "scp": [
     "exchange",
     "introspect"
 ],
 "auditTrackingId": "237cf708-1bde-4800-9bcd-5db5b5f1ca12",
 "iss": "https://fr.tokenissuer.example.com",
 "tokenName": "access_token",
 "token_type": "Bearer",
 "authGrantId": "a55f2bad-cda5-459e-8620-3e826bad9095",
 "aud": "client",
 "nbf": 1523058711,
 "scope": [
     "exchange",
     "introspect"
 ],
 "auth_time": 1523058711,
 "exp": 1523062311,
 "iat": 1523058711,
 "expires_in": 3600,
 "jti": "39b72a84-112d-46f3-a7cb-84ce7513e5bc",
 "cid": "client"
}
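
As an aside, you can eyeball the payload of any such JWT from the shell. This is a rough sketch; it assumes the token is in the ACCESS_TOKEN variable and glosses over base64url details (strict decoders may want '=' padding appended):

echo "${ACCESS_TOKEN}" | cut -d '.' -f 2 | tr '_-' '/+' | base64 --decode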

Bob, the actor, authenticates with AM and receives an id_token:

{
 "at_hash": "Hgjw0D49EfYM6dmB9_K6kg",
 "sub": "user.1",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111930",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "oidcclient",
 "azp": "oidcclient",
 "auth_time": 1523058714,
 "realm": "/",
 "may_act": {},
 "exp": 1523062314,
 "tokenType": "JWTToken",
 "iat": 1523058714
}

Alice, the user, also authenticates with AM and receives a special id_token:

{
 "at_hash": "nT_tDxXhcee7zZHdwladcQ",
 "sub": "user.0",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111989",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "myuserclient1",
 "azp": "myuserclient1",
 "auth_time": 1523058716,
 "realm": "/",
 "may_act": {
     "sub": "Bob"
 },
 "exp": 1523062316,
 "tokenType": "JWTToken",
 "iat": 1523058716
}

Why does Alice receive a may_act claim while Bob does not? This has to do with Alice's group membership in DS. The OIDC Claims Script attached to the OAuth2 Provider in AM checks for membership of this group and, if found, retrieves the value of the assigned "actor", or delegate. In this case Alice has assigned actor status to Bob (via some out-of-band method, such as a user portal, for example). The script retrieves the actor and sets a special claim with its value: may_act. The claims script is shown here in part:

def isAdmin = identity.getMemberships(IdType.GROUP)
        .inject(false) { found, group ->
            found || group.getName() == "policyEval"
        }

// ...search for the actor... not shown

def mayact = [:]
if (isAdmin) {
    mayact.put("sub", actor)
}

Next, we must define a Policy in AM. This policy could be based on OAuth2 scopes or a URI, whichever you choose. I chose to set up an OAuth2 Scopes policy, but whatever scope you choose must match the resource you pass into the TE service. The rest of the policy is pretty standard; for example, here is a summary of the Subject tab of the policy:

{
  "type": "JwtClaim",
  "claimName": "iss",
  "claimValue": "http://localhost:8080/openam/oauth2"
}

I am checking for an acceptable issuer, and that's it. This could be replaced with a subject condition for Authenticated Users as well, although the client would then have to send in Alice's ssotoken as the subject_token, instead of her id_token, when invoking the TE service.

This also implies that delegation is limited to the case where you are using a JWT for Alice, preferably an OIDC id_token. We cannot do delegation without a JWT, because the TE service depends on the presence of a may_act claim in Alice's subject_token. When using only an ssotoken or an access_token for Alice, we cannot pass this claim to the TE service. Hopefully that's clear as mud now (laughter).

Anyway, back to the policy setup in AM. Keep in mind that the TE service expects certain response attributes: "aud" and "scp" are mandatory. "allowedActors" is optional, but needed for our discussion of course. It contains a list of allowed actors, and this list could be computed based on who the subject is. One subject attribute must also be returned. A simple example of the response attributes could be:

aud: images.example.com
scp: read,write
allowedActors: Bob
allowedActors: HelpDeskAdministrator
allowedActors: James

The subject must also be returned from the policy, and one way to do that is to parse the id_token and return its sub claim as a response attribute. In this setup, a scripted policy condition attached to the policy as an environment condition extracts the sub claim and includes it in the response attributes.

At this point we are ready to set up the Token Exchange microservice to invoke the Policy REST endpoint in AM for resource matching, allowed actions, and subject evaluation. Keep in mind that the TE service supports pluggable policy backends, a really powerful feature: one set of microservice "pods" could run local JSON-backed policies, another could use Open Policy Agent (OPA)-powered regex policies, and yet another could have the TE service calling out to AM's policy engine for evaluation.

The setup when the TE service is configured to use AM as the token exchange policy backend looks like this:

{
 // Credentials for an OpenAM "subject" account with permission to access the OpenAM policy endpoint
 "authSubjectId" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_ID|service-account}",
 "authSubjectPassword" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD|****}",

// URL for the OpenAM authentication endpoint, without query parameters
 "authUrl" : "&{EXCHANGE_OPENAM_AUTH_URL|http://localhost:8080/openam/json/authenticate}",

// URL for the OpenAM policy-endpoint, without query parameters
 "policyUrl" : "&{EXCHANGE_OPENAM_POLICY_URL|http://localhost:8080/openam/json/realms/root/policies}",

// OpenAM Policy-Set ID
 "policySetId" : "&{EXCHANGE_OPENAM_POLICY_SET_ID|resource_policies}",

// OpenAM policy-endpoint response-attribute field-name containing "audience" values
 "audienceAttribute" : "&{EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR|aud}",

// OpenAM policy-endpoint response-attribute field-name containing "scope" values
 "scopeAttribute" : "&{EXCHANGE_OPENAM_POLICY_SCOPE_ATTR|scp}",

// OpenAM policy-endpoint response-attribute field-name containing the token's "subject" value
 "subjectAttribute" : "&{EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR|uid}",

// OpenAM policy-endpoint response-attribute field-name containing the allowedActors
 "allowedActorsAttribute" : "&{EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR|may_act}",

// (optional) OpenAM policy-endpoint response-attribute field-name containing "expires-in-seconds" values
 "expiresInSecondsAttribute" : "&{EXCHANGE_OPENAM_POLICY_EXPIRES_IN_SEC_ATTR|}",

// Set to "true" to copy additional OpenAM policy-endpoint response-attributes into generated token claims
 "copyAdditionalAttributes" : "&{EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR|true}"
}

The TE service checks the response attributes shown above in the policy evaluation result in order to set up claims in the signed JWT it returns to the client.

The body of a sample REST policy call from the TE service to an OAuth2 scope policy defined in AM looks as follows:

{
 "subject" : {
     "jwt" : "{{AM_ALICE_ID_TOKEN}}"
 },
 "application" : "resource_policies",
 "resources" : [ "delegate-scope" ]
}
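
You can reproduce this call with curl. This sketch assumes the default iPlanetDirectoryPro session header name and an administrative SSO token previously obtained from the /json/authenticate endpoint; the token and the id_token value are placeholders:

curl -X POST \
 -H "Content-Type: application/json" \
 -H "iPlanetDirectoryPro: ${ADMIN_SSO_TOKEN}" \
 -d '{"subject":{"jwt":"<AM_ALICE_ID_TOKEN>"},"application":"resource_policies","resources":["delegate-scope"]}' \
 "http://localhost:8080/openam/json/realms/root/policies?_action=evaluate"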

Policy response returned to the TE service:

[
 {
 "resource": "delegate-scope",
 "actions": {
     "GRANT": true
 },
 "attributes": {
     "aud": [
         "images.example.com"
     ],
     "scp": [
         "read"
     ],
     "uid": [
         "Alice"
     ],
     "allowedActors": [
          "Bob"
     ]
 },
 "advices": {},
 "ttl": 9223372036854775807
 }
]

Again, whatever resource type you choose for the policy in AM, it must match the resource you pass into the TE service as shown above.
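
With the policy in place, the client can call the TE service, presenting Alice's id_token as the subject_token and Bob's id_token as the actor_token, per the draft specification. The following curl sketch illustrates the shape of the request; the /service/exchange path and port are assumptions for this example deployment, and the shell variables hold the tokens acquired earlier:

curl -X POST \
 -H "Authorization: Bearer ${CLIENT_ACCESS_TOKEN}" \
 -d "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
 -d "subject_token=${AM_ALICE_ID_TOKEN}" \
 -d "subject_token_type=urn:ietf:params:oauth:token-type:id_token" \
 -d "actor_token=${AM_BOB_ID_TOKEN}" \
 -d "actor_token_type=urn:ietf:params:oauth:token-type:id_token" \
 -d "resource=delegate-scope" \
 "http://localhost:18080/service/exchange"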

We have come far, and if you are still with me, the call above returns a signed and exchanged JWT with the correct "act" claim to indicate that delegation was successful. The decoded JWT claims set of the issued token is shown below. The new JWT is issued by the Token Exchange service and is intended for consumption by a system entity known by the logical name "images.example.com" any time before its expiration. The subject ("sub") of the JWT is the same as the subject of the subject_token used to make the request.

{
 "sub": "Alice",
 "aud": "images.example.com",
 "scp": [
     "read",
     "write"
 ],
 "act": {
     "sub": "Bob"
 },
 "iss": "https://fr.tokenexchange.example.com",
 "exp": 1523087727,
 "iat": 1523084127
}

The decoded claims set shows that the actor “act” of the JWT is the same as the subject of the “actor_token” used to make the request. This indicates delegation and identifies Bob as the current actor to whom authority has been delegated to act on behalf of Alice.

The OAuth2 ForgeRock Identity Microservices

ForgeRock Identity Microservices

In Q4 2017, ForgeRock released an Early Access (aka beta) program for three key Identity Microservices, delivered as a compact, single-purpose code set for consumer-scale deployments. For companies deploying stateless microservices architectures, these microservices offer a micro-gateway enabled solution providing service trust, policy-enforced identity propagation, and even OAuth2-based delegation. The stateless architecture of the ForgeRock Identity Microservices allows for a sidecar-friendly micro-gateway deployment pattern.

I blogged about it here. The sign up form is here.

The Microservices

OAuth2 Token Issuance

Enables service-to-service authentication using the client credentials grant type. These "bearer" tokens facilitate trust up and down the call chain.

We currently support RSA, EC, and HMAC signing, with the following algorithms:

  • HS256
  • HS384
  • HS512
  • RS256
  • ES256
  • ES384
  • ES512

Token Validation

Supports introspection of OAuth2/OIDC tokens issued by an OAuth2 authorization server, as well as native AM tokens. Token validation is performed using either configured keys, or a lookup via the JWK URI for issuer and kid verification.
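
For instance, a validation call might look like the following sketch; the /service/introspect path and port 18081 are assumptions for this example, and the client presents the access token with the introspect scope issued earlier:

curl -X POST \
 -H "Authorization: Bearer ${CLIENT_ACCESS_TOKEN}" \
 -d "token=${TOKEN_TO_VALIDATE}" \
 "http://localhost:18081/service/introspect"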

One or more Token Introspection API implementations can be configured by setting the value to a comma-separated list of implementation names. Each implementation is given a chance to handle the token, in the given order, until one of them successfully handles it. An error response occurs if no implementation successfully introspects a given token.

The following implementations are provided:

  • json : Configured by a JSON file.
  • openam : Proxies introspect calls to ForgeRock Access Management.

Token Exchange

Supports exchanging stateless OAuth2 access tokens and native AM tokens for hybrid tokens that grant specific entitlements per resource or audience requested. Implements the draft OAuth2 token exchange specification. Offers no-fuss declarative ("local") policy enforcement inside the pod, without needing to "call home" for identity data. Also provides policy-based entitlement decisions by calling out to ForgeRock Access Management.

The Token-Exchange Policy is a pluggable service which determines whether or not a given token exchange may proceed. The service also determines what audience, scope, and so forth the generated token will have. The following implementations are provided:

  • json (default) : Configured by a JSON file.
  • jwt : Simply returns a JWT for any token that can be introspected.
  • openam : Uses a ForgeRock Access Management Policy Set.

Cloud and Platform

Docker and Kubernetes

A Dockerfile is bundled with the binary. Soon there will be a Docker repository from which evaluators can pull the images.

A Kubernetes manifest is also provided that demonstrates using environment variables to configure a simple demo instance.

Metrics and Prometheus Integration

The microservices are instrumented to publish basic request metrics, and a Prometheus endpoint is provided, which is convenient for monitoring the published metrics.

Audit Tracking

Each incoming HTTP request is assigned a transaction ID for audit logging purposes, which by default is a UUID. ForgeRock Microservices, like other ForgeRock products, support propagation of the transaction ID between point-to-point service calls.

Cloud Foundry

The standard binary in Zip format is also directly deployable to Cloud Foundry, and is recognized by the Cloud Foundry Java buildpack as a "dist zip".
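
A push might look like this sketch, where the application name and zip file name are illustrative:

cf push token-exchange -p identity-microservice-dist.zip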

Client Credential Repository

ForgeRock Identity Microservices support Cassandra, MongoDB, and any LDAP v3 directory, including ForgeRock Directory Services.

ForgeRock welcomes Shankar Raman

Welcome to Shankar Raman, who joins the ForgeRock documentation team today.

Shankar is starting with platform and deployment documentation, where many of you have been asking us to do more.

Shankar comes to the team from curriculum development, having worked for years as an instructor, writer, and course developer at Oracle on everything from the database to middleware to Fusion Applications. Shankar's understanding of the deployment problem space, and of what architects and deployers must know to build solutions, will help him make your lives, or at least your jobs, a bit easier. Looking forward to that!

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

Using IDM and DS to synchronise hashed passwords

Overview

In this post I will describe a technique for synchronising a hashed password from ForgeRock IDM to DS.

Out of the box, IDM has a Managed User object that protects a password with symmetric (reversible) encryption.  One reason for this is that it is sometimes necessary to pass the password, in clear text, to a destination directory so that the directory can perform its own hashing before storing it.  Therefore the out-of-the-box synchronisation model for IDM is to take the encrypted password from its own store, decrypt it, and pass it in clear text (typically over a secure channel!) to DS, which hashes and stores it.

You can see this in the samples for IDM.

However, there are times when storing an encrypted, rather than hashed, value for a password is not acceptable.  IDM includes the capability to hash properties (such as passwords), not just encrypt them.  In that scenario, given that password hashes are one-way, it is not possible to decrypt the password before synchronisation with other systems such as DS.

Fortunately, DS offers the capability of accepting pre-hashed passwords so IDM is able to pass the hash to DS for storage.  DS obviously needs to know this value is a hash, otherwise it will try to hash the hash!

So, what are the steps required?

  1. Ensure that DS is configured to accept pre-hashed passwords.
  2. Ensure the IDM data model uses 'Hashing' for the password property.
  3. Ensure the IDM mapping is set up correctly.

Ensure DS is configured to accept hashed passwords

This topic is covered excellently by Mark Craig in this article here:
https://marginnotes2.wordpress.com/2011/07/21/opendj-using-pre-encoded-passwords/

I'm using ForgeRock DS v5.0 here, but Mark references the old name for DS (OpenDJ) because this capability has been around for a while.  The key thing to note about the steps in the article is that the allow-pre-encoded-passwords advanced password policy property must be set for the appropriate password policy.  I'm only going to be dealing with one password policy, the default one, so Mark's article covers everything I need.

(I will be using a Salted SHA-512 algorithm so if you want to follow all the steps, including testing out the change of a user’s password, then specify {SSHA512} in the userPassword value, rather than {SSHA}.  This test isn’t necessary for the later steps in this article, but may help you understand what’s going on).
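
As a minimal sketch, the allow-pre-encoded-passwords property can be set with dsconfig, assuming a DS server with the default administration port and the default password policy:

dsconfig set-password-policy-prop \
 --policy-name "Default Password Policy" \
 --set allow-pre-encoded-passwords:true \
 --hostname opendj.example.com --port 4444 \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --trustAll --no-prompt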

Ensure IDM uses hashing

Like everything in IDM, you can modify configuration by changing the various JSON config files, or through the UI (which updates the JSON config files!).

I’ll use IDM v5.0 here and show the UI.

By default, the Managed User object includes a password property that is defined as 'Encrypted'. We need to change this to 'Hashed', and I'm using the SHA-512 algorithm here (which is in fact a salted SHA-512 algorithm).

Note that making this change does not update the user passwords that already exist.  It only takes effect when a new value is saved to the property.

Now the value of a password, when it is saved, is a string representation of a complex JSON object (just as it is when encrypted), and will look something like this:

{"$crypto":

  {"value":

    {"algorithm":"SHA-512","data":"Quxh/PEBXMa2wfh9Jmm5xkgMwbLdQfytGRy9VFP12Bb5I2w4fcpAkgZIiMPX0tcPg8OSo+UbeJRdnNPMV8Kxc354Nj12j0DXyJpzgqkdiWE="},

    "type":"salted-hash"

  }

}

Ensure IDM Mapping is setup correctly

Now we need to configure the mapping.

As you may have noted in the first step, DS is told that the password is pre-hashed by the presence of {SSHA512} at the beginning of the password hash value.  Therefore we need a transformation script that takes the algorithm and hash value from IDM and concatenates them in the form DS expects.

The script is fairly simple, but needs some logic to convert the IDM algorithm representation (SHA-512) into the DS representation ({SSHA512}).

This is the transformation script (in Groovy) I used, which can of course be extended for other algorithms:

String strHash;
if (source.$crypto.value.algorithm == "SHA-512") {
    strHash = "{SSHA512}" + source.$crypto.value.data
}
strHash;

This script replaces the default IDM script that does the decryption of the password.

(You might want to extend the script to cope with both hashed and encrypted values of passwords if you already have data.  Look at functions such as openidm.isHashed and openidm.isEncrypted in the IDM Integrators Guide).

Now when a password is changed, it is stored in hashed form in IDM, and the mapping is triggered to synchronise it to DS, applying the transformation script that passes the pre-hashed password value.
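
To try it end to end, you can change a password over the IDM REST API and then inspect the userPassword attribute in DS. This is a sketch; the user id and admin credentials are placeholders for your environment:

curl -X PATCH \
 -H "X-OpenIDM-Username: openidm-admin" \
 -H "X-OpenIDM-Password: openidm-admin" \
 -H "Content-Type: application/json" \
 -H "If-Match: *" \
 -d '[{"operation":"replace","field":"password","value":"NewPassw0rd"}]' \
 "http://localhost:8080/openidm/managed/user/<user-id>"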

Now there is no need to store passwords in reversible encryption!

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

Kubernetes: Why won’t that deployment roll?

Kubernetes Deployments provide a declarative way of managing replica sets and pods.  A deployment specifies how many pods to run as part of a replica set, where to place pods, how to scale them and how to manage their availability.

Deployments are also used to perform rolling updates to a service. They can support a number of different update strategies such as blue/green and canary deployments.

The examples provided in the Kubernetes documentation explain how to trigger a rolling update by changing a Docker image. For example, suppose you edit the Docker image tag by using the kubectl edit deployment/my-app command, changing the image tag from acme/my-app:v0.2.3 to acme/my-app:v0.3.0.

When Kubernetes sees that the image tag has changed, a rolling update is triggered according to the declared strategy. This results in the pods running the 0.2.3 image being taken down and replaced with pods running the 0.3.0 image. This is done in a rolling fashion so that the service remains available during the transition.
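
The same image change can also be applied non-interactively; in this sketch the container inside the pod spec is assumed to be named my-app:

kubectl set image deployment/my-app my-app=acme/my-app:v0.3.0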

If your application is packaged as a Helm chart, the helm upgrade command can be used to trigger a rollout:

helm upgrade my-release my-chart

Under the covers, Helm is just applying any updates in your chart to the deployment and sending them to Kubernetes. You can achieve the same thing yourself by applying updates using kubectl.
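
For example, assuming your updated deployment manifest lives in a file called my-app-deployment.yaml:

kubectl apply -f my-app-deployment.yaml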

Now let’s suppose that you want to trigger a rolling update even if the docker image has not changed. A likely scenario is that you want to apply an update to your application’s configuration.

As an example, the ForgeRock Identity Gateway (IG) Helm chart pulls its configuration from a git repository. If the configuration changes in git, we’d like to roll out a new deployment.

My first attempt at this was to perform a helm upgrade on the release, updating the ConfigMap in the chart with the new git branch for the release. Our Helm chart uses the ConfigMap to set an environment variable to the git branch (or commit) that we want to check out:

kind: ConfigMap
data:
  GIT_CHECKOUT_BRANCH: test

After editing the ConfigMap, and doing a helm upgrade, nothing terribly exciting happened.  My deployment did not “roll” as expected with the new git configuration.

As it turns out, Kubernetes needs to see a change in the pod's spec.template before it triggers a new deployment. Changing the image tag is one way to do that, but any change to the template will work. As I discovered, changes to a ConfigMap *do not* trigger deployment updates.

The solution here is to move the git branch variable out of the ConfigMap and into the pod's spec.template in the deployment object:

spec:
  initContainers:
  - name: git-init
    env:
    - name: GIT_CHECKOUT_BRANCH
      value: "{{ .Values.global.git.branch }}"

When the IG Helm chart is updated (we supply a new value for the template variable above), the template's spec changes, and Kubernetes rolls out the new deployment.

Here is what it looks like when we put it all together (you can check out the full IG chart here).

# install IG using helm. The git branch defaults to "master"
 $ helm install openig
 NAME:   hissing-yak

$ kubectl get deployment
 NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE

openig    1         1         1            1           4m

# Take a look at our deployment history. So far, there is only one revision:
 $ kubectl rollout history deployment
 deployments "openig"
 REVISION CHANGE-CAUSE
 1
 # Let's deploy a different configuration. We can use a commit hash here instead
 # of a branch name:
 $ helm upgrade --set global.git.branch=d0562faec4b1621ad53b852aa5ee61f1072575cc
 hissing-yak openig

# Have a look at our deployment. We now see a new revision
 $ kubectl rollout history deploy
 deployments "openig"
 REVISION  CHANGE-CAUSE
 1
 2
 # Look at the deployment history. You should see an event where the original
 # replicaset is scaled down and a new set is scaled up:
 kubectl describe deployment
 Events:
 Type    Reason             Age   From                   Message
 ----    ------             ----  ----                   -------
 ---> Time=19M ago. This is the original RS
 Normal  ScalingReplicaSet  19m   deployment-controller  Scaled up replica set openig-6f6575cdfd to 1
 ------> Time=3m ago. The new RS being brought up
 Normal  ScalingReplicaSet  3m    deployment-controller  Scaled up replica set openig-7b5788659c to 1
 -----> Time=3m ago. The old RS being scaled down
 Normal  ScalingReplicaSet  3m    deployment-controller  Scaled down replica set openig-6f6575cdfd to 0

One of the neat things we can do with deployments is roll back to a previous revision. For example, if we have an error in our configuration, and want to restore the previous release:

$ kubectl rollout undo deployment/openig
 deployment "openig" rolled back
 $ kubectl describe  deployment:
 Normal  DeploymentRollback  1m  deployment-controller  Rolled back deployment "openig" to revision 1

# We can see the old pod is being terminated and the new one has started:
 $ kubectl get pod
 NAME                      READY     STATUS        RESTARTS   AGE
 openig-6f6575cdfd-tvmgj   1/1       Running       0          28s
 openig-7b5788659c-h7kcb   0/1       Terminating   0          8m

And that my friends, is how we roll.

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Introduction to ForgeRock DevOps – Part 3 – Deploying Clusters


We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:

http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html
http://identity-implementation.blogspot.co.uk/2017/05/introduction-to-forgerock-devops-part-2.html

I will be using IBM Bluemix here as I have recent experience with it, but nearly all of the concepts will be similar for any other cloud environment.

Deploying Clusters

So now we have Docker images deployed into Bluemix. The next step is to deploy the images into a Kubernetes cluster. First we need to create a cluster, then we can deploy into it. For what we are doing here, we need a standard (paid) cluster.

Preparation

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

2. Choose a location; you can view the available locations with:

bx cs locations

3. Choose a machine type; you can view the machine types for a location with:

bx cs machine-types dal10

4. Check for VLANs. You need to choose both a public and a private VLAN for a standard cluster. It should look something like this:

bx cs vlans dal10

If you need to create them, initialise the SoftLayer CLI first:

bx sl init

Just select Single Sign On (option 2).

You should then be logged in and able to create VLANs:

bx sl vlan create -t public -d dal10 -s 8 -n waynepublic

Note: Your Bluemix account needs permission to create VLANs; if you don't have this, you need to contact support (you'll be told if this is the case). You should get one free public VLAN, I believe.

Creating a Cluster

1. Create a cluster:

Assuming you have public and private VLANs, you can create a Kubernetes cluster:

bx cs cluster-create --location dal10 --machine-type u1c.2x4 --workers 2 --name wbcluster --private-vlan 1638423 --public-vlan 2106869

You *should* also be able to use the Bluemix UI to create clusters.

2. You may need to wait a little while for the cluster to be deployed. You can check the status of it using:

bx cs clusters

During the deployment you will likely receive various emails from Bluemix confirming infrastructure has been provisioned.

3. When the cluster has finished deploying (its state is no longer pending), set the new cluster as the current context:

bx cs cluster-config wbcluster

The statement in yellow is the important bit; copy and paste that export statement back into the terminal to configure the environment for kubectl to run.

4. Now you can run kubectl commands, view the cluster config with:

kubectl config view

See the Kubernetes documentation for the full set of commands you can run; we will only be looking at a few key ones for now.

5. Clone (or download) the ForgeRock Kubernetes repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/fretes/browse

6. Navigate to the fretes directory:

cd /usr/local/DevOps/stash/fretes


7. We need to make a tweak to the fretes/helm/custom.yaml file and add the following:

storageClass: ibmc-file-bronze

This specifies the type of storage we want our deployment to use in Bluemix. If it were AWS or Azure, you might need something similar.

8. From the same terminal window in which you set up kubectl, navigate to the fretes/helm/ directory and run:

helm init

This will install the Helm server-side component (Tiller) into the cluster, ready to process the helm scripts we are going to run.

9. Run the OpenAM helm script, which will configure instances of AM, backed by DJ, in our Kubernetes cluster:

/usr/local/DevOps/stash/fretes/helm/bin/openam.sh

This script will take a while and again will trigger the provisioning of infrastructure, storage and other components resulting in emails from Bluemix. While this is happening you should see something like this:

If you have to re-deploy on subsequent occasions, the storage will not need to be re-provisioned and the whole process will be significantly faster. When it is all done you should see something like this:

10. Proxy the kube dash:

kubectl proxy

Navigate to http://127.0.0.1:8001/ui in a browser and you should see the kubernetes console!

Here you can see everything that has been deployed automatically using the helm script!

We have multiple instances of AM and DJ with storage deployed into Bluemix ready to configure!

In the next blog we will take a detailed look at the Kubernetes dashboard to understand exactly what we have done, but for now let's take a quick look at one of our new AM instances.

11. Log in to AM:

Ctrl-C the proxy command and type the following:

bx cs workers wbcluster

You can see a list of our workers above, and the IPs on which they have been exposed publicly.

Note: There are defined ways of accessing applications using Kubernetes, typically you would use an ingress or a load balancer and not go directly using the public IP. We may look at these in later blogs.

As you probably know, AM expects a fully qualified domain name, so before we can log in we need to edit /etc/hosts and add an entry like the following:
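
The entry maps one of the worker public IPs shown by the previous command to the AM host name; the IP below is a placeholder:

169.47.xxx.xxx   openam.example.com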

Then you can navigate to AM:

http://openam.example.com:30080/openam

You should be able to log in with amadmin/password!

Summary

So far in this series we have created Docker containers with the ForgeRock components, uploaded these to Bluemix, and run the orchestration helm script to deploy instances of these containers into a meaningful architecture. Not bad!

In the next blog we will take a detailed look at the Kubernetes console and examine what has actually been deployed.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Storing ForgeRock Directory Services server keys on the Nitrokey HSM

The Nitrokey HSM provides a PKCS#11 hardware security module in the form of a USB key. The design is based on open hardware and open software.

This is a low cost option to familiarize yourself with an actual hardware HSM, and to test your procedures. With it, you can demonstrate that ForgeRock Directory Services servers can in fact use the HSM as a key store.

In addition to the documentation that you can access through https://www.nitrokey.com/start, see https://raymii.org/s/articles/Get_Started_With_The_Nitrokey_HSM.html for a helpful introduction.

The current article demonstrates generating and storing keys and certificates on the Nitrokey HSM, and then using the keys to protect DS server communications. It was tested with a build from the current master branch. Thanks to Fabio Pistolesi and others for debugging advice.

This article does not describe how to install the prerequisite tools and libraries to work with the Nitrokey HSM on your system. The introduction mentioned above briefly describes installation on a couple of Linux distributions, but the software itself seems to be cross-platform.

When you first plug the Nitrokey HSM into a USB slot, it has PINs but no keys. The following examples examine the mostly empty Nitrokey HSM when initially plugged in:

# List devices:
$ opensc-tool --list-readers
# Detected readers (pcsc)
Nr. Card Features Name
0 Yes Nitrokey Nitrokey HSM (010000000000000000000000) 00 00

# List slots, where you notice that the Nitrokey HSM is in slot 0 on this system:
$ pkcs11-tool --list-slots
Available slots:
Slot 0 (0x0): Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
 token label : SmartCard-HSM (UserPIN)
 token manufacturer : www.CardContact.de
 token model : PKCS#15 emulated
 token flags : rng, login required, PIN initialized, token initialized
 hardware version : 24.13
 firmware version : 2.5
 serial num : DENK0100751

The following example initializes the Nitrokey HSM, using the default SO PIN and a user PIN of 648219:

$ sc-hsm-tool --initialize --so-pin 3537363231383830 --pin 648219
Using reader with a card: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
Version : 2.5
Config options :
 User PIN reset with SO-PIN enabled
SO-PIN tries left : 15
User PIN tries left : 3

The following example tests the PIN on the otherwise empty Nitrokey HSM:

$ pkcs11-tool --test --login --pin 648219
Using slot 0 with a present token (0x0)
C_SeedRandom() and C_GenerateRandom():
 seeding (C_SeedRandom) not supported
 seems to be OK
Digests:
 all 4 digest functions seem to work
 MD5: OK
 SHA-1: OK
 RIPEMD160: OK
Signatures (currently only RSA signatures)
Signatures: no private key found in this slot
Verify (currently only for RSA):
 No private key found for testing
Unwrap: not implemented
Decryption (RSA)
No errors

The following example generates a key pair on the Nitrokey HSM:

$ pkcs11-tool \
 --module opensc-pkcs11.so \
 --keypairgen --key-type rsa:2048 \
 --id 10 --label server-cert \
 --login --pin 648219
Using slot 0 with a present token (0x0)
Key pair generated:
Private Key Object; RSA
  label: server-cert
  ID: 10
  Usage: decrypt, sign, unwrap
Public Key Object; RSA 2048 bits
  label: server-cert
  ID: 10
  Usage: encrypt, verify, wrap

The following examples show what is on the Nitrokey HSM:

$ pkcs15-tool --dump
Using reader with a card: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
PKCS#15 Card [SmartCard-HSM]:
 Version : 0
 Serial number : DENK0100751
 Manufacturer ID: www.CardContact.de
 Flags :

PIN [UserPIN]
 Object Flags : [0x3], private, modifiable
 ID : 01
 Flags : [0x812], local, initialized, exchangeRefData
 Length : min_len:6, max_len:15, stored_len:0
 Pad char : 0x00
 Reference : 129 (0x81)
 Type : ascii-numeric
 Path : e82b0601040181c31f0201::
 Tries left : 3

PIN [SOPIN]
 Object Flags : [0x1], private
 ID : 02
 Flags : [0x9A], local, unblock-disabled, initialized, soPin
 Length : min_len:16, max_len:16, stored_len:0
 Pad char : 0x00
 Reference : 136 (0x88)
 Type : bcd
 Path : e82b0601040181c31f0201::
 Tries left : 15

Private RSA Key [server-cert]
 Object Flags : [0x3], private, modifiable
 Usage : [0x2E], decrypt, sign, signRecover, unwrap
 Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
 ModLength : 2048
 Key ref : 1 (0x1)
 Native : yes
 Auth ID : 01
 ID : 10
 MD:guid : b4212884-6800-34d5-4866-11748bd12289

Public RSA Key [server-cert]
 Object Flags : [0x0]
 Usage : [0x51], encrypt, wrap, verify
 Access Flags : [0x2], extract
 ModLength : 2048
 Key ref : 0 (0x0)
 Native : no
 ID : 10
 DirectValue : 

$ pkcs15-tool --read-public-key 10
Using reader with a card: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlxvdwL6PyRnOs58L0X8d
2z8/WcgA/beoR+p08nymN8KZ4KlWKUo93AKMcFBUW8Bl8zFC80P9ZlNIXM8NSmPr
cBR9Nmpi0nUQDgfTi8vIU51tD84UcYetxX9rSHbh+CKqUmmSk6f7JPIyT6RonrOo
QJQyFmIi4oV9/d0Op8WVCbL7omYaPFwYbdUPetM1MfVyLNpkhzVdvZJE0F46hXF8
Sspqjh4f9KkJWdozIOND8ZTFvxP5Cs1y/kvvuhfjWVAtii52E4LKXRr53SA5Spl2
v1oNu5sqoaEd/SNxjj/52iH6zeGm61I7wbcIgvcHCI5CKONKceSL3PkIYzHeJMu2
SQIDAQAB
-----END PUBLIC KEY-----

The following example self-signs a public key certificate using the key pair generated on the Nitrokey HSM. The example uses openssl, and configures an engine to use the Nitrokey HSM, which implements PKCS#11. The configuration for the OpenSSL engine is stored in a file called hsm.conf. On an Ubuntu 17.04 laptop, the PKCS#11 library installed alongside the tools is /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so, as shown below:

$ cat hsm.conf
# PKCS11 engine config
openssl_conf = openssl_def

[openssl_def]
engines = engine_section

[req]
distinguished_name = req_distinguished_name

[req_distinguished_name]
# empty.

[engine_section]
pkcs11 = pkcs11_section

[pkcs11_section]
engine_id = pkcs11
dynamic_path = /usr/lib/x86_64-linux-gnu/openssl-1.0.2/engines/libpkcs11.so
MODULE_PATH = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
PIN = 648219
init = 0

# Check the engine configuration. In this case, the PKCS11 engine loads fine:
$ OPENSSL_CONF=./hsm.conf openssl engine -tt -c
(rdrand) Intel RDRAND engine
 [RAND]
     [ available ]
(dynamic) Dynamic engine loading support
     [ unavailable ]
(pkcs11) pkcs11 engine
 [RSA]
     [ available ]

# Create a self-signed certificate and write it to server-cert.pem.
# Notice that the key is identified using slot-id:key-id:
$ OPENSSL_CONF=./hsm.conf openssl req \
 -engine pkcs11 -keyform engine -new -key 0:10 \
 -nodes -days 3560 -x509 -sha256 -out "server-cert.pem" \
 -subj "/C=FR/O=Example Corp/CN=opendj.example.com"
engine "pkcs11" set.
No private keys found.

The openssl command prints a message, “No private keys found.” Yet, it still returns 0 (success) and writes the certificate file:

$ more server-cert.pem
-----BEGIN CERTIFICATE-----
MIIC/jCCAeYCCQD1SEBmUy8aCzANBgkqhkiG9w0BAQsFADBBMQswCQYDVQQGEwJG
UjEVMBMGA1UECgwMRXhhbXBsZSBDb3JwMRswGQYDVQQDDBJvcGVuZGouZXhhbXBs
ZS5jb20wHhcNMTcwODE0MTEzNzA0WhcNMjcwNTE0MTEzNzA0WjBBMQswCQYDVQQG
EwJGUjEVMBMGA1UECgwMRXhhbXBsZSBDb3JwMRswGQYDVQQDDBJvcGVuZGouZXhh
bXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCXG93Avo/J
Gc6znwvRfx3bPz9ZyAD9t6hH6nTyfKY3wpngqVYpSj3cAoxwUFRbwGXzMULzQ/1m
U0hczw1KY+twFH02amLSdRAOB9OLy8hTnW0PzhRxh63Ff2tIduH4IqpSaZKTp/sk
8jJPpGies6hAlDIWYiLihX393Q6nxZUJsvuiZho8XBht1Q960zUx9XIs2mSHNV29
kkTQXjqFcXxKymqOHh/0qQlZ2jMg40PxlMW/E/kKzXL+S++6F+NZUC2KLnYTgspd
GvndIDlKmXa/Wg27myqhoR39I3GOP/naIfrN4abrUjvBtwiC9wcIjkIo40px5Ivc
+QhjMd4ky7ZJAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAIX0hQadHtv1c0P7ObpF
sDnIWMRVtq0s+NcXvMEwKhLHHvEHrId9ZM3Ywn0P3CFd+WXMWMNSVz51cPn6SKPS
pdN9CVq6B26cFrvzWrsD06ohP6jCkXEBSshlE/k71FcnukBuJHNzj8O6JMwfyWeE
xT53WIvgz9t02B/ObZSYFlUNX+WApPCbILTHazEzYws3AN0hZPmv4Ng1Vt71nNiT
EZskTxvKsmuEuG5E8j79zO0TYvOrGCISzS3PFRrl7G83vNaSyzBIhTYF2Ilt2g7B
jnNc1/k8R/TXwskJR8gL7EFZyakQ6xUiboFDf6PWa4KMLJVNX5HsVGyLvP9FiVkY
82A=
-----END CERTIFICATE-----

The following example writes the certificate to the Nitrokey HSM:

# Transform the certificate to binary format:
$ openssl x509 -in server-cert.pem -out server-cert.der -outform der

# Write the binary format to the Nitrokey HSM, with the label (aka alias) "server-cert":
$ pkcs11-tool \
 --module opensc-pkcs11.so \
 --login --pin 648219 \
 --write-object server-cert.der --type cert \
 --id 10 --label server-cert
Using slot 0 with a present token (0x0)
Created certificate:
Certificate Object, type = X.509 cert
  label:      Certificate
  ID:         10

With the keys and certificate loaded on the Nitrokey HSM, prepare to use it with Java programs. If the Java environment is configured to access the HSM, then you can just use it. In testing, however, where you are trying the HSM, and the Java environment is not configured to use it, you can specify the configuration:

# Edit a configuration file for Java programs to access the Nitrokey HSM:
$ cat /path/to/hsm.conf
name = NitrokeyHSM
description = SunPKCS11 with Nitrokey HSM
library = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
slot = 0

# Verify that the Java keytool command can read the certificate on the Nitrokey HSM:
$ keytool \
  -list \
  -keystore NONE \
  -storetype PKCS11 \
  -storepass 648219 \
  -providerClass sun.security.pkcs11.SunPKCS11 \
  -providerArg /path/to/hsm.conf

Keystore type: PKCS11
Keystore provider: SunPKCS11-NitrokeyHSM

Your keystore contains 1 entry

server-cert, PrivateKeyEntry,
Certificate fingerprint (SHA1): B9:A2:88:5F:69:8E:C6:FB:C2:29:BF:F8:39:51:F6:CC:5A:0C:CC:10

A ForgeRock Directory Services server needs to access the configuration indirectly, as there is no setup parameter to specify the HSM configuration file. Add your own settings to extend the Java environment configuration as in the following example:

$ cat /path/to/java.security
# Security provider for accessing Nitrokey HSM:
security.provider.10=sun.security.pkcs11.SunPKCS11 /path/to/hsm.conf

# Unzip OpenDJ server files and edit the configuration before running setup:
$ cd /path/to && unzip 

# Set the Java args to provide access to the Nitrokey HSM.
# Make opendj/template/config/java.properties writable, and edit.
# This allows the OpenDJ server to start as needed:
$ grep java.security /path/to/opendj/template/config/java.properties
start-ds.java-args=-server -Djava.security.properties=/path/to/java.security

# Set up the server:
$ OPENDJ_JAVA_ARGS="-Djava.security.properties=/path/to/java.security" \
 /path/to/opendj/setup \
 directory-server \
 --rootUserDN "cn=Directory Manager" \
 --rootUserPassword password \
 --hostname opendj.example.com \
 --ldapPort 1389 \
 --certNickname server-cert \
 --usePkcs11keyStore \
 --keyStorePassword 648219 \
 --enableStartTLS \
 --ldapsPort 1636 \
 --httpsPort 8443 \
 --adminConnectorPort 4444 \
 --baseDN dc=example,dc=com \
 --ldifFile /path/to/Example.ldif \
 --acceptLicense

To debug, you can set security options such as the following:

OPENDJ_JAVA_ARGS="-Djava.security.debug=sunpkcs11,pkcs11 -Djava.security.properties=/path/to/java.security"

The following example shows an LDAP search that uses StartTLS to secure the connection:

$ /path/to/opendj/bin/ldapsearch --port 1389 --useStartTLS --baseDn dc=example,dc=com "(uid=bjensen)" cn

Server Certificate:

User DN  : CN=opendj.example.com, O=Example Corp, C=FR
Validity : From 'Mon Aug 14 13:37:04 CEST 2017'
             To 'Fri May 14 13:37:04 CEST 2027'
Issuer   : CN=opendj.example.com, O=Example Corp, C=FR



Do you trust this server certificate?

  1) No
  2) Yes, for this session only
  3) Yes, also add it to a truststore
  4) View certificate details

Enter choice: [2]: 4


[
[
  Version: V1
  Subject: CN=opendj.example.com, O=Example Corp, C=FR
  Signature Algorithm: SHA256withRSA, OID = 1.2.840.113549.1.1.11

  Key:  Sun RSA public key, 2048 bits
modulus:
19075725396235933137769598662661614197862047561628746980441589981485944705910796672312984856468967795133561692335016063740885234669000544938180872617609018349362382746691431903457463096067521727428890407876216335060234859298584617093111442598717549413985534234195585205628275977771336192817217401466821950358077667360760303781781546092776529804134165206111430903307470063770954498312408782707671718644473532565867636087296875111917369665456339790081809729622515754260638402122026793085096606980136589008904235094266835122846140853242190316629669042978441585862504373498978113550866427439699045924980942028978028000841
  public exponent: 65537
  Validity: [From: Mon Aug 14 13:37:04 CEST 2017,
               To: Fri May 14 13:37:04 CEST 2027]
  Issuer: CN=opendj.example.com, O=Example Corp, C=FR
  SerialNumber: [    f5484066 532f1a0b]

]
  Algorithm: [SHA256withRSA]
  Signature:
0000: 85 F4 85 06 9D 1E DB F5   73 43 FB 39 BA 45 B0 39  ........sC.9.E.9
0010: C8 58 C4 55 B6 AD 2C F8   D7 17 BC C1 30 2A 12 C7  .X.U..,.....0*..
0020: 1E F1 07 AC 87 7D 64 CD   D8 C2 7D 0F DC 21 5D F9  ......d......!].
0030: 65 CC 58 C3 52 57 3E 75   70 F9 FA 48 A3 D2 A5 D3  e.X.RW>up..H....
0040: 7D 09 5A BA 07 6E 9C 16   BB F3 5A BB 03 D3 AA 21  ..Z..n....Z....!
0050: 3F A8 C2 91 71 01 4A C8   65 13 F9 3B D4 57 27 BA  ?...q.J.e..;.W'.
0060: 40 6E 24 73 73 8F C3 BA   24 CC 1F C9 67 84 C5 3E  @n$ss...$...g..>
0070: 77 58 8B E0 CF DB 74 D8   1F CE 6D 94 98 16 55 0D  wX....t...m...U.
0080: 5F E5 80 A4 F0 9B 20 B4   C7 6B 31 33 63 0B 37 00  _..... ..k13c.7.
0090: DD 21 64 F9 AF E0 D8 35   56 DE F5 9C D8 93 11 9B  .!d....5V.......
00A0: 24 4F 1B CA B2 6B 84 B8   6E 44 F2 3E FD CC ED 13  $O...k..nD.>....
00B0: 62 F3 AB 18 22 12 CD 2D   CF 15 1A E5 EC 6F 37 BC  b..."..-.....o7.
00C0: D6 92 CB 30 48 85 36 05   D8 89 6D DA 0E C1 8E 73  ...0H.6...m....s
00D0: 5C D7 F9 3C 47 F4 D7 C2   C9 09 47 C8 0B EC 41 59  ..

When using an HSM with a ForgeRock Directory Services server, keep in mind the following caveats:

  • Each time the server needs to access the keys, it accesses the HSM. You can see this with the Nitrokey HSM because it flashes a small red LED when accessed. Depending on the HSM, this could significantly impact performance.
  • The key manager provider supports PKCS#11 as shown. The trust manager provider implementation does not, however, support PKCS#11 at the time of this writing, though there is an RFE for that (OPENDJ-4191).
  • The Crypto Manager stores symmetric keys for encryption using the cn=admin data backend, and the symmetric keys cannot currently be stored in a PKCS#11 module.

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

Open Banking, PSD2 & Screen Scraping

Open Banking & PSD2

PSD2 is due to come into force in September 2018; meanwhile the UK is forging ahead with Open Banking, which is due to come into force even earlier, in January 2018. Both regulations are all about cracking open banking APIs to increase digital competitiveness and improve consumer choice.

The nine biggest UK banks have been collaborating in the form of the Open Banking Working Group (OBWG) to define the solution for Open Banking in the UK. After much discussion and deliberation, the OBWG has determined that Open Banking should be achieved through the use of open standards, specifically the OAuth 2.0 family of standards.

OAuth 2.0

OAuth 2.0 is something I use just about every day and it’s something that all of us have probably used at one time or another though we may not have realised it. OAuth is a standard designed for Delegated Authorization.

We commonly refer to Authentication as proving who you are, whereas Authorization determines what you are allowed to do. Authentication is typically achieved with some sort of username and password (and ideally a second factor). Authorization is generally concerned with the policy and permissions that apply once I have authenticated.

Effectively, Delegated Authorization is a way to permit someone to do something on my behalf. A very common example can be seen with Instagram and Twitter, when a user gives Instagram permission to post to their Twitter feed.

With OAuth 2.0, Instagram will redirect you to Twitter, you will authenticate with Twitter and consent to Instagram posting to your Twitter account. Twitter will then share an authorization code with Instagram, which Instagram will exchange for an access token. This access token can only be used to post to your Twitter account; Instagram could not, for example, use it to delete your tweets.

In a world without OAuth 2.0, Instagram would have to know your Twitter username and password in order to post a tweet to your Twitter account. This would allow them to post to your Twitter feed, but it would also enable them to do anything else that you could do when authenticated. More crucially, your username and password have now been shared with a third party whom you have to trust. Propagating passwords is never a good thing for security and is really the very definition of a security anti-pattern. This is how screen scraping works.

Screen Scraping

Up to now, there has been no standards-based mechanism for sharing account data. There are services at the moment that can aggregate your financial data in one place. These services are convenient for many, but to use them you have to share your credentials. So, if you want the aggregator to be able to report on your bank account, you need to share your banking credentials with the aggregator. You have to trust a third party with your banking credentials.

Putting aside the issues of trust, massive credential leaks are now a weekly occurrence and the more you share your credentials around the more vulnerable those credentials become.

Open Banking aims to put an end to this by using secure, trusted open standards such as OAuth. As a security professional and as a customer, I feel very strongly that this is the right way to do Open Banking: it ensures I remain in control of my account data and enables me to revoke third party access at any time.

Right now there is much debate and discussion as to whether screen scraping should be permitted in both PSD2 and Open Banking. There are a number of groups who are right now petitioning for it to remain a valid approach for data sharing under the new regulations.

I can appreciate the difficulties many organisations may face in transitioning from screen scraping to an OAuth 2.0 based model, but I cannot in good conscience support the screen scraping approach, and I suspect that if it were adopted as an acceptable interim solution, it would persist for the longer term and undermine the benefits that an API-driven approach to Open Banking would bring.

The Kantara Initiative is a non-profit organisation dedicated to advancing digital identity and data privacy. If you feel as strongly as I do about this, please visit the Kantara Initiative and sign the pledge against screen scraping:

https://kantarainitiative.org/psd2statement/

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Faster docs

One of the things you have asked for is to see large documents load faster on the ForgeRock BackStage docs site. We recently switched from publishing HTML documentation through the BackStage single-page app to publishing separate, static HTML with JavaScript to provide BackStage features.

This allows browsers to use progressive rendering, and start laying out the page before everything has been loaded and styled. The result is that large documents feel faster in your browser.

If you have bookmarks to published HTML, notice that we have dropped the per-chapter view of published docs. Each document is now a single HTML page. So instead of a link to /docs/product/version/book/chapter#section, target /docs/product/version/book/#section.

Also notice that we have consolidated documentation sets to make information easier to find, with only one set per major or minor release. Generally this means that you only have to read one set of release notes, no matter what maintenance version you have right now.

The latest docs are the ones for version 5 of the platform.

We still publish all the same docs as before, including docs for software that is beyond the end of its service life. Please check out the updated site. Open issues there for any problems you notice.

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

ForgeRock Self-Service Custom Stage

Introduction

A while ago I blogged an article describing how to add custom stages to the ForgeRock IDM self-service config.  At the time I used the sample custom stage available from the ForgeRock Commons Self-Service code base.  I left it as a task for the reader to build their own stage!  However, I recently had cause to build a custom stage for a proof of concept I was working on.

It’s for IDM v5 and I’ve detailed the steps here.

Business Logic

The requirement for the stage was to validate that a registering user had ownership of the provided phone number.  The phone number could be either a mobile or a landline.  The approach taken was to use Twilio (a third party) to send either an SMS to a mobile, or text-to-speech (TTS) to a landline.  The content of the message is a code based on HOTP.

Get the code for the module

https://stash.forgerock.org/users/andrew.potter/repos/twilio-stage/browse

Building the module

Follow the instructions in README.md

After deploying the .jar file you must restart IDM for the bundle to be correctly recognised.

The module is targeted at IDM v5.  It uses the Maven repositories to get the binary dependencies.
See this article in order to access the ForgeRock ‘private-releases’ maven repo:
https://backstage.forgerock.com/knowledge/kb/article/a74096897

It also uses appropriate pom.xml directives to ensure the final .jar file is packaged as an OSGi bundle, so that it can be dropped into IDM.

Technical details

The code consists of a few files.  The first two in this list are the key files for any stage; they implement the necessary interfaces for a stage.  The remaining files are the specific business logic for this stage.

  • TwilioStageConfig.java.  This class manages reading the configuration data from the configuration file.  It simply represents each configuration item for the stage as properties of the class.
  • TwilioStage.java.  This is the main orchestration file for the stage.  It copes with both registration and password reset scenarios.  It manages the 'state' of the flow within this stage and generates the appropriate callbacks to the user, but relies on the other classes to do the real code management work.  If you want to learn how a 'stage' works, then this is the file to consider in detail.
  • HOTPAlgorithm.java.  This is taken from the OATH Initiative work and is unchanged by me.  It is a java class to generate a code based on the HOTP algorithm.
  • TwilioService.java. This class manages the process of sending the code.  It generates the code then decides whether to send it using SMS or TTS.  (In the UK, all mobile phone numbers start 07… so it’s very simple logic for my purpose!)  This class also provides a method to validate the code entered by the user.
  • TwilioUtil.java.  This class provides the utility functions that interact directly with the Twilio APIs for sending either an SMS or TTS message.


Configuration

There are also two sample config files, for registration and password reset.  You should include the JSON section relating to this class in your self-service configuration files for IDM.  For example:

{
    "class" : "org.forgerock.selfservice.twilio.TwilioStageConfig",
    "codeValidityDuration" : "6000",
    "codeLength" : "5",
    "controlUrl" : "http://twimlets.com/message?Message%5B0%5D=Hello%20Please%20enter%20the%20following%20one%20time%20code",
    "fromPhone" : "+441412803033",
    "accountSid" : "<Enter accountSid>",
    "tokenId" : "<Enter tokenId>",
    "telephoneField" : "telephoneNumber",
    "skipSend" : false
},

Most configuration items should be self-explanatory.  However, the 'skipSend' option is worthy of special note.  When true, it causes the stage to skip calling the Twilio APIs and instead return the code as part of the callback data.  This means that if you're using the out-of-the-box UI, the 'placeholder' HTML attribute of the input box will tell you the code to enter.  This is really useful for testing the stage if you don't have access to a Twilio account, as it also ignores the Twilio account specific configuration items.

Of course, now you need to deploy it as per my previous article!

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.