ForgeRock Self-Service Custom Stage

Introduction

A while ago I blogged an article describing how to add custom stages to the ForgeRock IDM self-service config.  At the time I used the sample custom stage available from the ForgeRock Commons Self-Service code base.  I left it as a task for the reader to build their own stage!  However, I recently had cause to build a custom stage for a proof of concept I was working on.

It’s for IDM v5 and I’ve detailed the steps here.

Business Logic

The requirement for the stage was to validate that a registering user had ownership of the provided phone number.  The phone number could be either a mobile or a landline.  The approach taken was to use Twilio (a 3rd party) to send out either an SMS to a mobile, or text-to-speech to a landline.  The content of the message is a code based on HOTP.
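For background, HOTP derives a short numeric code from a shared secret and a counter.  The sketch below illustrates that derivation as specified in RFC 4226; it is a simplified stand-in for the module’s HOTPAlgorithm class, not a copy of it:

 // Simplified HOTP derivation per RFC 4226, for illustration only.
 import javax.crypto.Mac;
 import javax.crypto.spec.SecretKeySpec;
 import java.nio.ByteBuffer;

 final class Hotp {
     static int generate(byte[] secret, long counter, int digits) throws Exception {
         Mac mac = Mac.getInstance("HmacSHA1");
         mac.init(new SecretKeySpec(secret, "RAW"));
         byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
         int offset = hash[hash.length - 1] & 0x0f;          // dynamic truncation
         int binary = ((hash[offset] & 0x7f) << 24)
                    | ((hash[offset + 1] & 0xff) << 16)
                    | ((hash[offset + 2] & 0xff) << 8)
                    |  (hash[offset + 3] & 0xff);
         return binary % (int) Math.pow(10, digits);         // e.g. a 5-digit code
     }
 }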

Get the code for the module

https://stash.forgerock.org/users/andrew.potter/repos/twilio-stage/browse

Building the module

Follow the instructions in README.md

After deploying the .jar file you must restart IDM for the bundle to be correctly recognised.

The module is targeted at IDM v5.  It uses Maven repositories to get the binary dependencies.
See this article in order to access the ForgeRock ‘private-releases’ maven repo:
https://backstage.forgerock.com/knowledge/kb/article/a74096897

It also uses appropriate pom.xml directives to ensure the final .jar file is packaged as an OSGi bundle so that it can be dropped into IDM.

Technical details

The code consists of a few files.  The first two in this list are the key files for any stage.  They implement the necessary interfaces for a stage.  The remaining files contain the specific business logic for this stage.

  • TwilioStageConfig.java.  This class manages reading the configuration data from the configuration file.  It simply represents each configuration item for the stage as a property of the class.
  • TwilioStage.java.  This is the main orchestration file for the stage.  It copes with both registration and password reset scenarios.  It manages the ‘state’ of the flow within this stage and generates the appropriate callbacks to the user, but relies on the other classes to do the real code management work.  If you want to learn how a ‘stage’ works then this is the file to consider in detail.
  • HOTPAlgorithm.java.  This is taken from the OATH Initiative work and is unchanged by me.  It is a Java class to generate a code based on the HOTP algorithm.
  • TwilioService.java.  This class manages the process of sending the code.  It generates the code then decides whether to send it using SMS or TTS; see the sketch after this list.  (In the UK, all mobile phone numbers start 07… so it’s very simple logic for my purpose!)  This class also provides a method to validate the code entered by the user.
  • TwilioUtil.java.  This class provides the utility functions that interact directly with the Twilio APIs for sending either an SMS or TTS.
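
To make that decision concrete, here is a minimal sketch of the prefix test described above.  This is illustrative only (not the actual TwilioService code), and the class and method names are my own inventions:

 // Illustrative sketch only, not the shipped TwilioService logic.
 // UK mobile numbers start "07" (or "+447" in E.164 form), so choosing
 // between SMS and text-to-speech reduces to a prefix test.
 final class ChannelChooser {
     enum Channel { SMS, TTS }

     static Channel choose(String phoneNumber) {
         String n = phoneNumber.replaceAll("\\s+", "");
         boolean ukMobile = n.startsWith("07") || n.startsWith("+447");
         return ukMobile ? Channel.SMS : Channel.TTS;
     }
 }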


Configuration

There are also two sample config files for registration and password reset.  You should include the JSON section relating to this class in your self-service configuration files for IDM.
For example:

        {
            "class" : "org.forgerock.selfservice.twilio.TwilioStageConfig",
            "codeValidityDuration" : "6000",
            "codeLength" : "5",
            "controlUrl" : "http://twimlets.com/message?Message%5B0%5D=Hello%20Please%20enter%20the%20following%20one%20time%20code",
            "fromPhone" : "+441412803033",
            "accountSid" : "<Enter accountSid>",
            "tokenId" : "<Enter tokenId>",
            "telephoneField" : "telephoneNumber",
            "skipSend" : false
        },

Most configuration items should be self-explanatory.  However, the ‘skipSend’ option is worthy of special note.  When true, it causes the stage to skip calling the Twilio APIs and instead return the code as part of the callback data.  This means that if you’re using the OOTB UI then the ‘placeholder’ HTML attribute of the input box will tell you the code to enter.  This is really useful for testing the stage if you don’t have access to a Twilio account, as it also ignores the Twilio account specific configuration items.

Of course, now you need to deploy it as per my previous article!

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

Save greenbacks on Google Container Engine using autoscaling and preemptible VMs

There is an awesome new feature on Google Container Engine (GKE) that lets you combine autoscaling, node pools and preemptible VMs to save big $!

The basic idea is to create a small cluster with an inexpensive VM type that will run 7×24. This primary node can be used for critical services that should not be rescheduled to another node. A good example would be a Jenkins master server. Here is an example of how to create the cluster:

gcloud alpha container clusters create $CLUSTER \
  --network "default" --num-nodes 1 \
  --machine-type ${small} --zone $ZONE \
  --disk-size 50

Now here is the money saver trick: a second node pool is added to the cluster. This node pool is configured to auto-scale from one node up to a maximum. This additional node pool uses preemptible VMs. These are VMs that can be taken away at any time if Google needs the capacity, but in exchange you get dirt cheap instances. For example, running a 4 core VM with 15GB of RAM for a month comes in under $30.

This second pool is perfect for containers that can survive a restart or migration to a new node. Jenkins slaves would be a good candidate.

Here is an example of adding the node pool to the cluster you created above:

gcloud alpha container node-pools create $NODEPOOL --cluster $CLUSTER --zone $ZONE \
    --machine-type ${medium} --preemptible --disk-size 50 \
    --enable-autoscaling --min-nodes=1 --max-nodes=4

That node pool will scale down to a single VM if the cluster is not busy, and scale up to a maximum of 4 nodes.

If your VM gets preempted (and it will at least once every 24 hours),  the pods running on that node will be rescheduled onto a new node created by the auto-scaler.

Container Engine assigns a label to preemptible nodes which you can use for scheduling. For example, to ensure your Jenkins master does not get put on a preemptible node, you can add the following to your Pod Spec:

apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    !cloud.google.com/gke-preemptible

See https://cloud.google.com/container-engine/docs/preemptible-vm for the details.

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Introduction to ForgeRock DevOps – Part 2 – Building Docker Containers

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:
http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html

I will be using IBM Bluemix here as I have recent experience of it but nearly all of the concepts will be similar for any other cloud environment.

Building Docker Containers

In this blog we are going to build the Docker containers that will hold the ForgeRock platform components, tag them, and upload them to the Bluemix registry.

Prerequisites

Install all of the below:

Docker: https://www.docker.com
Used to build, tag and upload docker containers.
Bluemix CLI: http://clis.ng.bluemix.net/ui/home.html
Used to deploy and configure the Bluemix environment.
CloudFoundry CLI: https://github.com/cloudfoundry/cli
Bluemix dependency.
Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Used to deploy and manage Kubernetes clusters.

Initial Configuration

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

Note we are using the US instance of Bluemix here as it has support for Kubernetes in beta.

When prompted to select an account, just type 1. Once you are logged in successfully you can interact with the Bluemix environment just as you would if you were logged in via a browser.

2. Add the Bluemix Docker components:

bx plugin repo-add Bluemix https://plugins.ng.bluemix.net
bx plugin install container-service -r Bluemix
bx plugin install IBM-Containers -r Bluemix

Check they have installed:

bx plugin list

3. Clone (or download) the ForgeRock Docker Repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/docker/browse

4. Download the ForgeRock AM and DS component binaries from backstage:

https://backstage.forgerock.com/downloads

5. Unzip and copy ForgeRock binaries into the Docker build directories:

AM:

unzip AM-5.0.0.zip
cp openam/AM-5.0.0.war /usr/local/DevOps/stash/docker/openam/

DJ:

mv DS-5.0.0.zip /usr/local/DevOps/stash/docker/opendj/opendj.zip

Amster:

mv Amster-5.0.0.zip /usr/local/DevOps/stash/docker/amster/amster.zip

For those unfamiliar, Amster is our new RESTful configuration tool for AM in the version 5 platform, replacing SSOADM with a far more DevOps-friendly tool. I’ll be covering it in a future blog.

Build Containers

We are going to create three containers: AM, DJ & Amster:

1. Build and Tag OpenAM container (don’t forget the trailing “.”):

cd /usr/local/DevOps/stash/docker/openam
docker build -t wayneblacklockfr/openam .

Note wayneblacklockfr/openam is just a name to tag the container with locally; replace it with whatever you like but keep the /openam.

All being well, the build will complete successfully, with the new image ID reported at the end.

Congratulations, you have built your first ForgeRock container!

Now we need to get the namespace for tagging; this is usually your username, but check using:

bx ic namespace-get

Now let’s tag it ready for upload to Bluemix. Use the image ID output at the end of the build process together with your namespace:

docker tag d7e1700cfadd registry.ng.bluemix.net/wayneblacklock/openam:14.0.0

Repeat the process for Amster and DS.

2. Build and Tag Amster container:

cd /usr/local/DevOps/stash/docker/amster
docker build -t wayneblacklockfr/amster .
docker tag 54bf5bd46bf1 registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

3. Build and Tag DS container:

cd /usr/local/DevOps/stash/docker/opendj
docker build -t wayneblacklockfr/opendj .
docker tag 19b8a6f4af73 registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. View the containers:

You can take a look at what we have built with: docker images

Push Containers

Finally we want to push our containers up to the Bluemix registry.

1. Login again:

bx login -a https://api.ng.bluemix.net

2. Initiate the Bluemix container service; this may take a moment:

bx ic init

Ignore Option 1 & Option 2, we are not doing either.

3. Push your Docker images up to Bluemix:

docker push registry.ng.bluemix.net/wayneblacklock/openam:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. Confirm your images have been uploaded:

bx ic images

If you log in to the Bluemix webapp you should be able to see your containers in the catalog.

Next Time

We will take a look at actually deploying a Kubernetes cluster and everything we have to do to ready our containers for deployment.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Extending OpenAM HOTP module to display OTP delivery details

OpenAM provides an HOTP authentication module which can send an OTP to the user’s email address and/or telephone number. By default, OpenAM doesn’t display the user’s email address and/or telephone number when sending this OTP.

Solution

Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5
One solution is to extend OpenAM’s out-of-the-box HOTP module:
  • Extend the HOTP auth module (openam-auth-hotp).
  • Update the below property in the extended amAuthHOTP.properties: send.success=Please enter your One Time Password sent at
  • Extend HOTPService appropriately to retrieve user profile details.
  • Change the extended HOTP module code as per below (both for auto send and on request):

substituteHeader(START_STATE, bundle.getString("send.success") + <Get User contact details from HOTPService>);
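
For illustration, here is a hedged sketch of the kind of helper the extended HOTPService might expose to supply those contact details. The method name and shape are my own assumptions, not the shipped API:

 // Hypothetical helper, for illustration only: builds the contact-detail
 // suffix appended to the "send.success" message, e.g.
 // "jsmith@example.com / +44 7700 900123".
 static String getUserContact(String email, String phone) {
     StringBuilder sb = new StringBuilder();
     if (email != null && !email.isEmpty()) {
         sb.append(email);
     }
     if (phone != null && !phone.isEmpty()) {
         if (sb.length() > 0) {
             sb.append(" / ");
         }
         sb.append(phone);
     }
     return sb.toString();
 }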

Deploy

Register the service and module (note that for OpenAM v12 use amAuthHOTPExt-12.xml):
$ ./ssoadm create-svc --adminid amadmin --password-file /tmp/pwd.txt --xmlfile ~/softwares/amAuthHOTPExt.xml
$ ./ssoadm register-auth-module --adminid amadmin --password-file /tmp/pwd.txt --authmodule com.sun.identity.authentication.modules.hotp.HOTPExt

Unregister the service and module (in case the module needs to be uninstalled):
$ ./ssoadm unregister-auth-module --adminid amadmin --password-file /tmp/pwd.txt --authmodule com.sun.identity.authentication.modules.hotp.HOTPExt
$ ./ssoadm delete-svc --adminid amadmin --password-file /tmp/pwd.txt -s sunAMAuthHOTPExtService
  • Configure HOTPExt module with required SMTP server. Enable both SMS and Email.
  • Create a chain (otpChain) with (LDAP:Required, HOTPExt:Required). Set this chain as the default for “Organization Authentication”.
  • Restart OpenAM
  • Invoke the HOTP module; the appropriate message is displayed on screen with the user’s email address and/or telephone number.


This blog post was first published @ theinfinitelooper.blogspot.com, included here with permission.

OpenAM SP SAML Attribute Mapper extension for updating profile attributes

OpenAM can act as both SP and IdP for SAML webSSO flows. OpenAM also provides the ability to dynamically create user profiles.

When OpenAM is acting as a SAML SP and Dynamic user profile creation is enabled, if a user profile doesn’t exist on OpenAM then OpenAM dynamically creates it from the attributes in the SAML assertion.
The problem comes when the user profile is later updated at the IdP side: subsequent SAML webSSO flows don’t propagate those changes to the OpenAM SP side. More details here: OPENAM-8340

Solution

Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5

One solution is to extend the OpenAM SP Attribute Mapper. This extension can simply check whether the user profile exists in the OpenAM SP and update any modified or new attributes in the OpenAM datastore. Some tips for this implementation (a minimal sketch follows the list):

  1. Extend DefaultSPAttributeMapper and override getAttributes()
  2. Get datastore provider from SAML2Utils.getDataStoreProvider()
  3. Check if user exists: dataStoreProvider.isUserExists(userID)
  4. Get existing user attributes: dataStoreProvider.getAttributes()
  5. Compare attributes in SAML assertion with existing user attributes.
  6. Finally persist any new and updated attributes: dataStoreProvider.setAttributes()
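
Putting those steps together, here is a minimal sketch, assuming the OpenAM 13.5 plugin interfaces named above; exact signatures may differ between versions, so treat it as a starting point and see the GitHub repo linked below for working code:

 // Minimal sketch only; signatures per the OpenAM 13.5 plugin SPI.
 import com.sun.identity.plugin.datastore.DataStoreProvider;
 import com.sun.identity.plugin.datastore.DataStoreProviderException;
 import com.sun.identity.saml2.assertion.Attribute;
 import com.sun.identity.saml2.common.SAML2Exception;
 import com.sun.identity.saml2.common.SAML2Utils;
 import com.sun.identity.saml2.plugins.DefaultSPAttributeMapper;
 import java.util.*;

 public class UpdateDynamicUserSPAttMapper extends DefaultSPAttributeMapper {

     @Override
     public Map<String, Set<String>> getAttributes(List<Attribute> attributes,
             String userID, String hostEntityID, String remoteEntityID,
             String realm) throws SAML2Exception {

         // 1. Let the default mapper translate the SAML attributes first.
         Map<String, Set<String>> mapped = super.getAttributes(attributes,
                 userID, hostEntityID, remoteEntityID, realm);
         try {
             DataStoreProvider provider = SAML2Utils.getDataStoreProvider();
             if (provider.isUserExists(userID)) {
                 // 2. Fetch the current profile and keep only attributes
                 //    whose values changed in this assertion.
                 Map<String, Set<String>> existing =
                         provider.getAttributes(userID, mapped.keySet());
                 Map<String, Set<String>> changed = new HashMap<>();
                 for (Map.Entry<String, Set<String>> e : mapped.entrySet()) {
                     if (!e.getValue().equals(existing.get(e.getKey()))) {
                         changed.put(e.getKey(), e.getValue());
                     }
                 }
                 // 3. Persist only the delta back to the datastore.
                 if (!changed.isEmpty()) {
                     provider.setAttributes(userID, changed);
                 }
             }
         } catch (DataStoreProviderException e) {
             SAML2Utils.debug.error("Failed to update user attributes", e);
         }
         return mapped;
     }
 }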

Deploy

  • Compile and deploy this extension in OpenAM under (OpenAM-Tomcat)/webapps/openam/WEB-INF/lib
  • Change SAML attribute setting in OpenAM. Navigate to Federation > Entity Providers > (SP Hosted Entity) > Assertion Processing. Specify ‘org.forgerock.openam.saml2.plugins.examples.UpdateDynamicUserSPAttMapper’ under Attribute Mapper.
  • Restart OpenAM
  • And we are good to go! Any changes in user profile attributes in SAML assertion will now be persisted in OpenAM datastore.

Note that, ideally, attributes between different sources should be kept in sync using a tool such as OpenIDM.

See Also

Get code: https://github.com/CharanMann/OpenAM-SAMLSP-updateDynamicUser
OpenAM User Profile settings: https://backstage.forgerock.com/docs/openam/13.5/admin-guide#auth-core-realm-attributes
OpenAM SAML configuration: https://backstage.forgerock.com/docs/openam/13.5/admin-guide#chap-federation

This blog post was first published @ theinfinitelooper.blogspot.com, included here with permission.

Introduction to ForgeRock DevOps – Part 1

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

As always with this blog I am going to step through a fully worked example. In this case I am using IBM Bluemix, however it could just as easily have been AWS, Azure, GKE or any service that supports Kubernetes. By the end of this blog you will have a containerised instance of ForgeRock Access Management and Directory Services running on Bluemix, deployed using Kubernetes. First off we will cover the basics.

DevOps Basics

There are many tutorials out there introducing DevOps that do a great job, so I am not going to repeat those here. Instead I will point you towards the excellent ForgeRock Platform 5 DevOps guide, which also takes you through DevOps deployment step by step into Minikube or GKE:

https://backstage.forgerock.com/docs/platform/5/devops-guide

What I want to do briefly is touch on some of the key ideas that really helped me to understand DevOps. I do not claim to be an expert but I think I am beginning to piece it all together:

12 Factor Applications: Best practices for developing applications, superbly summarised here; this is why we need containers and DevOps.

Docker: Technology for building, deploying and managing containers.

Containers: A minimal operating system and components necessary to host an application. Traditionally we host apps in virtual machines with full blown operating systems whereas containers cut all of that down to just what you need for the application you are going to run.

In Docker, containers are built from Dockerfiles, which are effectively recipes for building containers from different components, e.g. a recipe for a container running Tomcat.

Container Registry: A place where built containers can be uploaded to, managed, downloaded and deployed from. You could have a registry running locally; cloud environments will also typically have registries they use to retrieve containers at deployment time.

Kubernetes: An engine for orchestrating the deployment of containers. Because containers are very minimal, they need extra elements provisioned for them, such as volume storage, secrets storage and configuration. In addition, when you deploy any application you need load balancing and numerous other considerations. Kubernetes is a language for defining all of these requirements and an engine for implementing them.

In cloud environments such as AWS, Azure and IBM Bluemix that support Kubernetes, this effectively means that Kubernetes will manage the configuration of the cloud infrastructure for you, in effect abstracting away all of the environment-specific configuration you would usually have to do.

Storage is a good example: in Kubernetes you can define persistent volume claims, which are effectively a way of asking for storage. You do not need to be concerned with the specifics of how this storage is provisioned; Kubernetes will do that for you regardless of whether you deploy onto AWS, Azure or IBM Bluemix.

This enables automated and simplified deployment of your application to any environment that supports Kubernetes! If you want to move from one environment to another, just point your script at that environment! What’s more, Kubernetes gives you a consistent deployment management and monitoring dashboard across all of these environments!

Helm: An engine for scripting Kubernetes deployments and operations. The ForgeRock platform uses this for DevOps deployment. It simply enables scripting of Kubernetes functionality and configuration of things like environment variables that may change between deployments.

The above serves as a very brief introduction to the world of DevOps and helps to set the scene for our deployment.

If you want to follow along with this guide, please get yourself a paid IBM Bluemix account; alternatively, if you want to use GKE or Minikube (for local deployment), take a look at the superb ForgeRock DevOps Guide. I will likely cover Azure and AWS deployment in later blogs, however everything we talk about here will still be relevant for those and other cloud environments, as after all that is the whole point of Kubernetes!

In Part 2 we will get started by installing some prerequisites and building our first docker containers.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Automating OpenDJ backups on Kubernetes

Kubernetes StatefulSets are designed to run “pet”-like services such as databases.  ForgeRock’s OpenDJ LDAP server is an excellent fit for StatefulSets as it requires stable network identity and persistent storage.

The ForgeOps project contains a Kubernetes Helm chart to deploy DJ to a Kubernetes cluster. Using a StatefulSet, the cluster will auto-provision persistent storage for our pod. We configure OpenDJ to place its backend database on this storage volume.

This gives us persistence that survives container restarts, or even restarts of the cluster. As long as we don’t delete the underlying persistent volume, our data is safe.

Persistent storage is quite reliable, but we typically want additional offline backups for our database.

The high level approach to accomplish this is as follows:

  • Configure the OpenDJ container to support scheduled backups to a volume.
  • Configure a Kubernetes volume to store the backups.
  • Create a sidecar container that archives the backups. For our example we will use Google Cloud Storage.

Here are the steps in more detail:

Scheduled Backups:

OpenDJ has a built-in task scheduler that can periodically run backups using a crontab(5) format.  We update the Dockerfile for OpenDJ with environment variables that control when backups run:


 # The default backup directory. Only relevant if backups have been scheduled.  
 ENV BACKUP_DIRECTORY /opt/opendj/backup  
 # Optional full backup schedule in cron (5) format.  
 ENV BACKUP_SCHEDULE_FULL "0 2 * * *"  
 # Optional incremental backup schedule in cron(5) format.  
 ENV BACKUP_SCHEDULE_INCREMENTAL "15 * * * *"  
 # The hostname to run the backups on. If this hostname does not match the container hostname, the backups will *not* be scheduled.  
 # The default value below means backups will not be scheduled automatically. Set this environment variable if you want backups.  
 ENV BACKUP_HOST dont-run-backups  

To enable backup support, the OpenDJ container runs a script on first time setup that configures the backup schedule.  A snippet from that script looks like this:

 if [ -n "$BACKUP_SCHEDULE_FULL" ];  
 then  
   echo "Scheduling full backup with cron schedule ${BACKUP_SCHEDULE_FULL}"  
   bin/backup --backupDirectory ${BACKUP_DIRECTORY} -p 4444 -D "cn=Directory Manager" \
   -j ${DIR_MANAGER_PW_FILE} --trustAll --backupAll \
   --recurringTask "${BACKUP_SCHEDULE_FULL}"
 fi  


Update the Helm Chart to support backup

Next we update the OpenDJ Helm chart to mount a volume for backups and to support our new BACKUP_ variables introduced in the Dockerfile. We use a ConfigMap to pass the relevant environment variables to the OpenDJ container:

 apiVersion: v1  
 kind: ConfigMap  
 metadata:  
  name: {{ template "fullname" . }}  
 data:  
  BASE_DN: {{ .Values.baseDN }}  
  BACKUP_HOST: {{ .Values.backupHost }}  
  BACKUP_SCHEDULE_FULL: {{ .Values.backupScheduleFull }}  
  BACKUP_SCHEDULE_INCREMENTAL: {{ .Values.backupScheduleIncremental }}  

The funny looking expressions in the curly braces are Helm templates. Those variables are expanded when the object is sent to Kubernetes. Using values allows us to parameterize the chart when we deploy it.

Next we configure the container with a volume to hold the backups:

  volumeMounts:  
     - name: data  
      mountPath: /opt/opendj/data  
     - name: dj-backup  
      mountPath: /opt/opendj/backup  

This can be any volume type supported by your Kubernetes cluster. We will use an “emptyDir” for now, which is a dynamic volume that Kubernetes creates and mounts on the container.

Configuring a sidecar backup container

Now for the pièce de résistance. We have our scheduled backups going to a Kubernetes volume. How do we send those files to offline storage?

One approach would be to modify our OpenDJ Dockerfile to support offline storage. We could, for example, include commands to write backups to Amazon S3 or Google Cloud storage.  This works, but it would specialize our container image to a unique environment. Where practical, we want our images to be flexible so they can be reused in different contexts.

This is where sidecar containers come into play.  The sidecar container holds the specialized logic for archiving files.  In general, it is a good idea to design containers that have a single responsibility. Using sidecars helps to enable this kind of design.

If you are running on Google Cloud, there is a ready-made container that bundles the “gcloud” SDK, including the “gsutil” utility for cloud storage.  We update our Helm chart to include this container as a sidecar that shares the backup volume with the OpenDJ container:

  {{- if .Values.enableGcloudBackups }}  
    # An example of enabling backup to google cloud storage.  
    # The bucket must exist, and the cluster needs --scopes storage-full when it is created.  
    # This runs the gsutil command periodically to rsync the contents of the /backup folder (shared with the DJ container) to cloud storage.   
    - name: backup  
     image: gcr.io/cloud-builders/gcloud  
     imagePullPolicy: IfNotPresent  
     command: [ "/bin/sh", "-c", "while true; do gsutil -m rsync -r /backup {{ .Values.gsBucket }} ; sleep 600; done"]  
     volumeMounts:  
     - name: dj-backup  
      mountPath: /backup  
    {{- end }}  

The above container runs in a loop that periodically rsyncs the contents of the backup volume to cloud storage.  You could of course replace this sidecar with another that sends the backups to a different location (say an Amazon S3 bucket).

If you enable this feature and browse to your cloud storage bucket, you should see your backed-up data.

To wrap it all up, here is the final helm command that will deploy a highly available, replicated two node OpenDJ cluster, and schedule backups on the second node:

 helm install -f custom-gke.yaml \
   --set djInstance=userstore \
   --set numberSampleUsers=1000,backupHost=userstore-1,replicaCount=2 helm/opendj

Now we just need to demonstrate that we can restore our data. Stay tuned!

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Making Rest Calls from IDM Workflow

I attended the Starling Bank Hackathon this weekend and had a great time. I will shortly be writing a longer blog post to talk all about it, but before that I briefly wanted to share a little bit of code that might be really helpful to anyone building IDM workflows.

The External REST Endpoint

ForgeRock Identity Management (previously OpenIDM) has a REST API that effectively allows you to invoke external REST services hosted anywhere. You might use this, for example, to call out to an identity verification service as part of a registration workflow, and I made good use of it at the hackathon.

With the following piece of code you can create some JSON and call out to a REST service outside of ForgeRock Identity Management:

java.util.logging.Logger logger = java.util.logging.Logger.getLogger("")
logger.info("Make REST call")

def slurper = new groovy.json.JsonSlurper()
def result = slurper.parseText('{"destinationAccountUid": "a41dd561-d64c-4a13-8f86-532584b8edc4","payment": {"amount": 12.34,"currency": "GBP"},"reference": "text"}')

result = openidm.action('external/rest', 'call', [
    'body': (new groovy.json.JsonBuilder(result)).toString(),
    'method': 'POST',
    'url': 'https://api-sandbox.starlingbank.com/api/v1/payments/local',
    'contentType': 'application/json',
    'authenticate': ['type': 'bearer', 'token': 'lRq08rfL4vzy2GyoqkJmeKzjwaeRfSKfWbuAi9NFNFZZ27eSjhqRNplBwR2do3iF'],
    'forceWrap': true
])

A really small bit of code but with it you can do all sorts of awesome things!

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

What’s New in ForgeRock Access Management 5?

ForgeRock this week released version 5 of the ForgeRock Identity Platform, of which ForgeRock Access Management 5 is a major component. So, what’s new in this release?

New Name

The eagle-eyed amongst you may be asking yourselves about that name and version. ForgeRock Access Management is actually the new name for what was previously known as OpenAM. And as all components of the Platform are now at the same version, this becomes AM 5.0 rather than OpenAM 14.0 (though you may still see remnants of the old versioning under the hood).

Cloud Friendly

AM5 is focussed on being a great Identity Platform for Consumer IAM and IoT, and one of the shared characteristics of these markets is high, unpredictable scale. So one of the design goals of AM5 was to become more cloud-friendly, enabling a more elastic architecture. This has resulted in a simpler architectural model where individual AM servers no longer need to know about, or address, other AM servers in a cluster; they can act autonomously. If you need more horsepower, simply spin up new instances of AM.

DevOps Friendly

To assist with the casual “Spin up new instances” statement above, AM5 has become more DevOps friendly. Firstly, the configuration APIs to AM are now available over REST, meaning configuration can be done remotely. Secondly, there’s a great new tool called Amster.

Amster is a lightweight command-line tool which can run in interactive shell mode, or be scripted.

A typical Amster script looks like this:

connect http://www.example.com:8080/openam -k /Users/demo/keyfile
import-config --path /Users/demo/am-config
:exit

This example connects to the remote AM5 instance, authenticating using a key, then imports configuration from the filesystem/git repo, before exiting.

Amster is separately downloadable and has its own documentation too.

Developer Friendly

AM5 comes with new interactive documentation via the API Explorer. This is a Swagger-like interface describing all of the AM CREST (Common REST) APIs and means it is now easier than ever for devs to understand how to use these APIs. Not only are the Request parameters fully documented with Response results, but devs can “Try it out” there and then.

Secure OAuth2 Tokens

OAuth2 is great, and used everywhere. Mobile apps, Web apps, Micro-services and, more and more, in IoT.
But one of the problems with OAuth2 access tokens is that they are bearer tokens. This means that if someone steals one, they can use it to get access to the services it grants access to.

One way to prevent this is to adopt a new industry standard approach called “Proof of Possession” (PoP).

With PoP the client provides something unique to it, which is baked into the token when it is issued by AM. This is usually the public key of the client. The Resource Server, when presented with such a token, can use the confirmation claim/key to challenge the client, knowing that only the true client can successfully answer the challenge.
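
Conceptually, the resource-server side of that challenge could look something like the sketch below. This is a hedged illustration of the general idea, not AM’s implementation; the class name and the choice of RSA signatures are my own assumptions:

 // Conceptual sketch only: the resource server sends the client a random
 // nonce, the client signs it with its private key, and the server verifies
 // the signature using the public key carried in the token's confirmation
 // ("cnf") claim.
 import java.security.GeneralSecurityException;
 import java.security.PublicKey;
 import java.security.Signature;

 final class PopChallenge {
     static boolean verify(PublicKey cnfKey, byte[] nonce, byte[] clientSignature)
             throws GeneralSecurityException {
         Signature sig = Signature.getInstance("SHA256withRSA");
         sig.initVerify(cnfKey);
         sig.update(nonce);
         return sig.verify(clientSignature);
     }
 }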

Splunk Audit Handler

Splunk is one of the cool kids so it makes sense that our pluggable Audit Framework supports a native Splunk handler.

There are a tonne of other improvements to AM5 that we don’t have time to cover, but read about some of the others in the Release Notes, or download it from Backstage now and give it a whirl.

This blog post by the Access Management product manager was first published @ thefatblokesings.blogspot.com, included here with permission.

ForgeRock Identity Platform 5.0 docs

By now you have probably read the news about the ForgeRock Identity Platform 5.0 release.

This major platform update comes with many documentation changes and improvements.

Hope you have no trouble finding what you need.

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.