SAML2 IDP Automated Certificate Management in ForgeRock AM

ForgeRock AM 5.0 ships with Amster, a lightweight command-line tool and interactive shell that allows for the automation of many management and configuration tasks.

A common task associated with SAML2 identity provider configs is updating the certificates used for signing and, optionally, encrypting assertions.  A feature added in OpenAM 13.0 was the ability to have multiple certificates within an IDP config.  This is useful for overcoming the age-old challenge of handling certificate expiration: an invalid cert can break integrations with service providers.  Removing a certificate and then adding a new one would require every entity within the circle of trust to pull new metadata into their configs, creating downtime, so the timing is often an issue.  Having multiple certificates in the config allows service providers to pull down metadata at a known date, instead of only when certificates expire.

Here we see the basic admin view of the IDP config, showing the list of certs available.  These certs are stored in the JCEKS keystore in AM 5.0 (previously the JKS keystore).

So the config contains the am1 and am2 certs – an export of the metadata (from the ../openam/saml2/jsp/exportmetadata.jsp?entityid=idp endpoint) will list both certs that could be used for signing:
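For reference, that export can be pulled straight from the endpoint with curl (the hostname here is a placeholder):

curl -o idp-metadata.xml \
  "https://openam.example.com/openam/saml2/jsp/exportmetadata.jsp?entityid=idp"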

The first certificate listed in the config is the one used to sign.  When it expires, just remove it from the list and the second certificate is used instead.  As the service provider already has both certs in its originally downloaded metadata, there should be no break in service.

Anyway, back to automation.  Amster can manage the SAML2 entities, either via the shell or a script.  This allows admins to operationally create, edit and update entities, and a regular task could be to add new certificates to the IDP list as necessary.

To do this I created a basic bash script that utilises Amster to read, edit and then re-import the entity as a JSON-wrapped XML object.
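As a rough, hedged sketch of that flow (not the author's actual script): the wrapper below only uses Amster's connect, export-config and import-config commands, leaves the JSON edit as a placeholder, and uses placeholder hostnames, paths and key locations throughout.

#!/usr/bin/env bash
# Illustrative only -- not the author's script. Paths, hostnames and the edit step
# are placeholders; adjust for your environment.
AMSTER=/opt/amster/amster
EXPORT_DIR=/tmp/am-config

cat > /tmp/export.amster <<EOF
connect https://openam.example.com/openam -k /opt/amster/amster_rsa
export-config --path $EXPORT_DIR
:exit
EOF
$AMSTER /tmp/export.amster

# Edit the exported SAML2 entity JSON here (e.g. with jq or sed) to append the
# new signing certificate alias before re-importing.

cat > /tmp/import.amster <<EOF
connect https://openam.example.com/openam -k /opt/amster/amster_rsa
import-config --path $EXPORT_DIR
:exit
EOF
$AMSTER /tmp/import.amster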

The script is available here.

For more information on IDP certificate management see the docs here.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Extending OpenAM HOTP module to display OTP delivery details

OpenAM provides an HOTP authentication module which can send an OTP to the user's email address and/or telephone number. By default, OpenAM doesn't display the user's email address and/or telephone number when sending this OTP.

Solution

Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5
One solution is to extend OpenAM's out-of-the-box HOTP module:
  • Extend HOTP auth module (openam-auth-hotp).
  • Update the following property in the extended amAuthHOTP.properties: send.success=Please enter your One Time Password sent at
  • Extend HOTPService appropriately to retrieve user profile details.
  • Change the extended HOTP module code as shown below (both for auto-send and on-request):

// getUserContactDetails() is a hypothetical helper on the extended HOTPService that
// returns the (masked) email address and/or telephone number the OTP was sent to.
substituteHeader(START_STATE, bundle.getString("send.success") + " " + hotpService.getUserContactDetails());
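For the contact-detail lookup itself, one option is a small helper on the extended HOTPService that masks the delivery address before it is shown. The sketch below is illustrative only; the method name is hypothetical and not part of the shipped HOTPService.

// Hypothetical helper for the extended HOTPService: returns a masked form of the
// email address or telephone number the OTP was sent to, e.g. "j****@example.com".
public static String maskContactDetail(String contact) {
    if (contact == null || contact.length() < 4) {
        return "****";
    }
    int at = contact.indexOf('@');
    if (at > 0) {
        // Email: keep the first character of the local part plus the full domain
        return contact.charAt(0) + "****" + contact.substring(at);
    }
    // Telephone number: keep only the last three digits
    return "****" + contact.substring(contact.length() - 3);
}

The extended module can then concatenate this value into the header, as in the line above.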

Deploy

Register the service and module (note: for OpenAM v12 use amAuthHOTPExt-12.xml):
$ ./ssoadm create-svc --adminid amadmin --password-file /tmp/pwd.txt --xmlfile ~/softwares/amAuthHOTPExt.xml
$ ./ssoadm register-auth-module --adminid amadmin --password-file /tmp/pwd.txt --authmodule com.sun.identity.authentication.modules.hotp.HOTPExt

Unregister the service and module (in case the module needs to be uninstalled):
$ ./ssoadm unregister-auth-module --adminid amadmin --password-file /tmp/pwd.txt --authmodule com.sun.identity.authentication.modules.hotp.HOTPExt
$ ./ssoadm delete-svc --adminid amadmin --password-file /tmp/pwd.txt -s sunAMAuthHOTPExtService
  • Configure the HOTPExt module with the required SMTP server. Enable both SMS and Email.
  • Create a chain (otpChain) with (LDAP: Required, HOTPExt: Required). Set this chain as the default for “Organization Authentication”.
  • Restart OpenAM.
  • Invoke the HOTP module; the message displayed on screen now includes the user’s email address and/or telephone number:

 

This blog post was first published @ theinfinitelooper.blogspot.com, included here with permission.

OpenAM SP SAML Attribute Mapper extension for updating profile attributes

OpenAM can act as both SP and IdP for SAML webSSO flows. OpenAM also provides the ability to dynamically create user profiles.

When OpenAM is acting as a SAML SP and Dynamic user profile creation is enabled, if the user profile doesn’t exist on OpenAM then OpenAM dynamically creates it from the attributes in the SAML assertion.
The problem comes when the user profile is updated on the IdP side: subsequent SAML webSSO flows don’t propagate those changes to the OpenAM SP side. More details here: OPENAM-8340

Solution

Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5

One solution is to extend the OpenAM SP attribute mapper. The extension simply checks whether the user profile exists in the OpenAM SP and updates any modified or new attributes in the OpenAM datastore. Some tips for this implementation (a sketch follows the list below):

  1. Extend DefaultSPAttributeMapper and override getAttributes()
  2. Get datastore provider from SAML2Utils.getDataStoreProvider()
  3. Check if user exists: dataStoreProvider.isUserExists(userID)
  4. Get existing user attributes: dataStoreProvider.getAttributes()
  5. Compare attributes in SAML assertion with existing user attributes.
  6. Finally persist any new and updated attributes: dataStoreProvider.setAttributes()
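Putting those tips together, here is a compressed sketch of such a mapper. It follows the OpenAM 13.5 SPAttributeMapper and DataStoreProvider interfaces as I understand them, but it is only an outline; see the GitHub repository linked in the See Also section for the actual implementation.

package org.forgerock.openam.saml2.plugins.examples;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

import com.sun.identity.plugin.datastore.DataStoreProvider;
import com.sun.identity.saml2.assertion.Attribute;
import com.sun.identity.saml2.common.SAML2Exception;
import com.sun.identity.saml2.common.SAML2Utils;
import com.sun.identity.saml2.plugins.DefaultSPAttributeMapper;

/**
 * Sketch of an SP attribute mapper that persists changed assertion attributes
 * back to the local datastore for dynamically created users.
 */
public class UpdateDynamicUserSPAttMapper extends DefaultSPAttributeMapper {

    @Override
    public Map<String, Set<String>> getAttributes(List<Attribute> attributes, String userID,
            String hostEntityID, String remoteEntityID, String realm) throws SAML2Exception {

        // Let the default mapper build the attribute map from the assertion first
        Map<String, Set<String>> mappedAttributes =
                super.getAttributes(attributes, userID, hostEntityID, remoteEntityID, realm);

        try {
            DataStoreProvider provider = SAML2Utils.getDataStoreProvider();
            if (provider.isUserExists(userID)) {
                // Read the current values for the mapped attribute names
                Map<String, Set<String>> existing =
                        provider.getAttributes(userID, mappedAttributes.keySet());

                // Collect only attributes that are new or have changed
                Map<String, Set<String>> changed = new HashMap<>();
                for (Map.Entry<String, Set<String>> entry : mappedAttributes.entrySet()) {
                    Set<String> current = existing.get(entry.getKey());
                    if (current == null || !current.equals(entry.getValue())) {
                        changed.put(entry.getKey(), entry.getValue());
                    }
                }
                if (!changed.isEmpty()) {
                    provider.setAttributes(userID, changed);
                }
            }
        } catch (Exception e) {
            // Never block SSO just because the local profile update failed
            SAML2Utils.debug.warning("UpdateDynamicUserSPAttMapper: profile update failed", e);
        }
        return mappedAttributes;
    }
}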

Deploy

  • Compile and deploy this extension in OpenAM under (OpenAM-Tomcat)/webapps/openam/WEB-INF/lib
  • Change SAML attribute setting in OpenAM. Navigate to Federation > Entity Providers > (SP Hosted Entity) > Assertion Processing. Specify ‘org.forgerock.openam.saml2.plugins.examples.UpdateDynamicUserSPAttMapper’ under Attribute Mapper.
  • Restart OpenAM
  • And we are good to go! Any changes in user profile attributes in SAML assertion will now be persisted in OpenAM datastore.

Note that, ideally, attributes between different sources should be kept in sync using a tool such as OpenIDM.

See Also

Get code: https://github.com/CharanMann/OpenAM-SAMLSP-updateDynamicUser
OpenAM User Profile settings: https://backstage.forgerock.com/docs/openam/13.5/admin-guide#auth-core-realm-attributes
OpenAM SAML configuration: https://backstage.forgerock.com/docs/openam/13.5/admin-guide#chap-federation

This blog post was first published @ theinfinitelooper.blogspot.com, included here with permission.

Introduction to ForgeRock DevOps – Part 1

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

As always with this blog I am going to step through a fully worked example. In this case I am using IBM Bluemix; however, it could just as easily have been AWS, Azure, GKE or any service that supports Kubernetes. By the end of this blog you will have a containerised instance of ForgeRock Access Management and Directory Services running on Bluemix, deployed using Kubernetes. First off we will cover the basics.

DevOps Basics

There are many tutorials out there introducing DevOps that do a great job, so I am not going to repeat those here. Instead I will point you towards the excellent ForgeRock Platform 5 DevOps guide, which takes you through DevOps deployment step by step into Minikube or GKE:

https://backstage.forgerock.com/docs/platform/5/devops-guide

What I want to do briefly is touch on some of the key ideas that really helped me to understand DevOps. I do not claim to be an expert but I think I am beginning to piece it all together:

12 Factor Applications: Best practices for developing applications, superbly summarised here; this is why we need containers and DevOps.

Docker: Technology for building, deploying and managing containers.

Containers: A minimal operating system and components necessary to host an application. Traditionally we host apps in virtual machines with full blown operating systems whereas containers cut all of that down to just what you need for the application you are going to run.

In Docker, containers are built from Dockerfiles, which are effectively recipes for building containers from different components, e.g. a recipe for a container running Tomcat.
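For example, a minimal Dockerfile for that Tomcat case might look like this (the image tag and WAR name are placeholders):

FROM tomcat:8.5
# Drop our application into Tomcat's webapps directory so it is deployed on startup
COPY app.war /usr/local/tomcat/webapps/app.war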

Container Registry: A place where built containers can be uploaded to, managed, downloaded and deployed from. You could have a registry running locally; cloud environments will also typically have registries they use to retrieve containers at deployment time.

Kubernetes: An engine for orchestrating deployment of containers. Because containers are very minimal, they need extra elements provisioned, such as volume storage, secrets storage and configuration. In addition, when you deploy any application you need load balancing and numerous other considerations. Kubernetes is a language for defining all of these requirements and an engine for implementing them all.

In cloud environments that support Kubernetes, such as AWS, Azure and IBM Bluemix, this effectively means that Kubernetes will manage the configuration of the cloud infrastructure for you, in effect abstracting away all of the usual environment-specific configuration you would otherwise have to do.

Storage is a good example: in Kubernetes you can define persistent volume claims, which are effectively a way of asking for storage. With Kubernetes you do not need to be concerned with the specifics of how this storage is provisioned; Kubernetes will do that for you regardless of whether you deploy onto AWS, Azure or IBM Bluemix.
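As a small illustration, a persistent volume claim is just a few lines of YAML (the name and size are placeholders); Kubernetes maps it onto whatever storage the underlying cloud provides:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: opendj-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi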

This enables automated and simplified deployment of your application to any environment that supports Kubernetes! If you want to move from one environment to another, just point your script at that environment. What's more, Kubernetes gives you a consistent deployment management and monitoring dashboard across all of these environments!

Helm: An engine for scripting Kubernetes deployments and operations. The ForgeRock platform uses this for DevOps deployment. It simply enables scripting of Kubernetes functionality and configuration of things like environment variables that may change between deployments.

The above serves as a very brief introduction to the world of DevOps and helps to set the scene for our deployment.

If you want to follow along with this guide, please get yourself a paid IBM Bluemix account; alternatively, if you want to use GKE or Minikube (for local deployment), take a look at the superb ForgeRock DevOps Guide. I will likely cover Azure and AWS deployment in later blogs; however, everything we talk about here will still be relevant for those and other cloud environments, as after all that is the whole point of Kubernetes!

In Part 2 we will get started by installing some prerequisites and building our first Docker containers.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

The Role of Identity Management in the GDPR

Unless you have been living in a darkened room for a long time, you will know the countdown to the EU's General Data Protection Regulation is dramatically coming to a head.  May 2018 is when the regulation really takes hold, and organisations are fast in the act of putting plans, processes and personnel in place in order to comply.

Whilst many organisations are looking at employing a Data Privacy Officer (DPO), reading through all the legalese and developing data analytics and tagging processes, many also need to embrace and understand the requirements around how their consumer identity and access management platform can and should be used in this new regulatory setting.

My intention in this blog isn't to list every single article and what it means - there are plenty of other sites that can help with that.  I want to highlight some of the more identity-related components of the GDPR and what needs to be done.

Personal Data

On the personal data front, more and more organisations are collecting more data, more frequently than ever before.  Some data is explicit - like when you enter your first name, last name and date of birth when you register for a service - through to the more subtle: location, history and preference details amongst others. The GDPR focuses on making sure personal data is processed legally and kept only for as long as necessary, with a full end user interface that gives users the ability to make sure their data is up to date and accurate.

It goes without saying that this personal data needs to have the necessary security, confidentiality, integrity and availability constraints applied to it.  This will require the necessary least-privileged administrative controls and data persistence security, such as hashing or encryption.

Lawful Processing

Ah, the word law! That must be the legal team. Or the newly appointed DPO. That can't be a security, identity or technology issue.  Partially correct. But lawful processing also has a significant requirement surrounding the capture and management of consent.  So what is this explicit consent? The data owner - that's Joe Blogs whose data has been snaffled - needs to be fully aware of the data that has been captured, why it is captured and who has access.

The service provider also needs to explicitly capture consent - not an implicit "the end user needs to opt out", but rather the end user needs to "opt in" for their data to be used and processed.  This will require a transparent, user-driven consent system, with sharing and, more importantly, timely revocation of access.  Protocols such as User Managed Access may come in useful here.

Individuals Right to be Informed

The lawful processing aspect flows neatly into the entire area of keeping the end user informed.  The end user needs to be in a position to make informed decisions around data sharing, service registration, data revocation and more.  The days of 10-page terms and conditions thrust onto the end user's screen at service startup are over.

Non-technical language is now a must, with clear explanations of why data has been captured and which third parties - if any - have access to the data.  This again flows into the consent model - for data owners to make consent decisions, they need simple-to-understand information.  So registration flows will now need to be much more progressive - only collecting data when it is needed, with a clear explanation of why the data is needed and what processing will be done with it.  20-attribute registration forms are dead.

Individuals Right to Rectification, Export and Erasure

Certainly some new requirements here - if you are a service provider, can you allow your end users to clearly see what data you have captured about them, and also provide that data in a simple-to-use end user dashboard where they can make changes and keep it up to date?  What about the ability for the data owner to export that data in a machine-readable, standard format such as CSV or JSON?

Right to erasure is also interesting - do you know where your end user data resides?  Which systems, what attributes, what correlations or translations have taken place?  Could you issue a de-provisioning request to either delete, clean or anonymize that data? If not, you may need to investigate why, and what can be done to remediate that.


Conclusion

The GDPR is big.  It contains over 90 articles, with lots of legalese and fine print.  Don't just assume the legal team or the newly appointed DPO will cover your company's ass.  Full platform data analytics tagging will be needed, along with a modern consumer identity and access management design pattern.  End user dashboards, registration journeys and consent frameworks will need updating.

The interesting aspect is that privacy is now becoming a competitive differentiator.  The GDPR should not just be seen as an internal compliance exercise.  It could actually be a launch pad for building closer, more trusted relationships with your end user community.


Automating OpenDJ backups on Kubernetes

Kubernetes StatefulSets are designed to run “pet”-like services such as databases.  ForgeRock’s OpenDJ LDAP server is an excellent fit for StatefulSets as it requires stable network identity and persistent storage.

The ForgeOps project contains a Kubernetes Helm chart to deploy DJ to a Kubernetes cluster. Using a StatefulSet, the cluster will auto-provision persistent storage for our pod. We configure OpenDJ to place its backend database on this storage volume.

This gives us persistence that survives container restarts, or even restarts of the cluster. As long as we don’t delete the underlying persistent volume, our data is safe.

Persistent storage is quite reliable, but we typically want additional offline backups for our database.

The high level approach to accomplish this is as follows:

  • Configure the OpenDJ container to support scheduled backups to a volume.
  • Configure a Kubernetes volume to store the backups.
  • Create a sidecar container that archives the backups. For our example we will use Google Cloud Storage.
Here are the steps in more detail:

Scheduled Backups:

OpenDJ has a built-in task scheduler that can periodically run backups using a crontab(5) format.  We update the Dockerfile for OpenDJ with environment variables that control when backups run:

 

 # The default backup directory. Only relevant if backups have been scheduled.  
 ENV BACKUP_DIRECTORY /opt/opendj/backup  
 # Optional full backup schedule in cron (5) format.  
 ENV BACKUP_SCHEDULE_FULL "0 2 * * *"  
 # Optional incremental backup schedule in cron(5) format.  
 ENV BACKUP_SCHEDULE_INCREMENTAL "15 * * * *"  
 # The hostname to run the backups on. If this hostname does not match the container hostname, the backups will *not* be scheduled.  
 # The default value below means backups will not be scheduled automatically. Set this environment variable if you want backups.  
 ENV BACKUP_HOST dont-run-backups  

To enable backup support, the OpenDJ container runs a script on first time setup that configures the backup schedule.  A snippet from that script looks like this:

 if [ -n "$BACKUP_SCHEDULE_FULL" ];  
 then  
   echo "Scheduling full backup with cron schedule ${BACKUP_SCHEDULE_FULL}"  
   bin/backup --backupDirectory ${BACKUP_DIRECTORY} -p 4444 -D "cn=Directory Manager"   
   -j ${DIR_MANAGER_PW_FILE} --trustAll --backupAll   
   --recurringTask "${BACKUP_SCHEDULE_FULL}"  
 fi  
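The incremental schedule is wired up the same way; here is a sketch mirroring the snippet above (the --incremental flag is the standard OpenDJ backup option, but check the actual setup script for the exact invocation):

 if [ -n "$BACKUP_SCHEDULE_INCREMENTAL" ];
 then
   echo "Scheduling incremental backup with cron schedule ${BACKUP_SCHEDULE_INCREMENTAL}"
   bin/backup --backupDirectory ${BACKUP_DIRECTORY} -p 4444 -D "cn=Directory Manager" \
     -j ${DIR_MANAGER_PW_FILE} --trustAll --backupAll --incremental \
     --recurringTask "${BACKUP_SCHEDULE_INCREMENTAL}"
 fi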

 

Update the Helm Chart to support backup

Next we update the OpenDJ Helm chart to mount a volume for backups and to support our new BACKUP_ variables introduced in the Dockerfile. We use a ConfigMap to pass the relevant environment variables to the OpenDJ container:

 apiVersion: v1  
 kind: ConfigMap  
 metadata:  
  name: {{ template "fullname" . }}  
 data:  
  BASE_DN: {{ .Values.baseDN }}  
  BACKUP_HOST: {{ .Values.backupHost }}  
  BACKUP_SCHEDULE_FULL: {{ .Values.backupScheduleFull }}  
  BACKUP_SCHEDULE_INCREMENTAL: {{ .Values.backupScheduleIncremental }}  

The funny looking expressions in the curly braces are Helm templates. Those variables are expanded when the object is sent to Kubernetes. Using values allows us to parameterize the chart when we deploy it.

Next we configure the container with a volume to hold the backups:

   volumeMounts:
   - name: data
     mountPath: /opt/opendj/data
   - name: dj-backup
     mountPath: /opt/opendj/backup

This can be any volume type supported by your Kubernetes cluster. We will use an “emptyDir” for now – which is a dynamic volume that Kubernetes creates and mounts on the container.
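The matching volume definition in the pod spec is minimal (this is the generic Kubernetes form; the actual chart may declare it slightly differently):

   volumes:
   - name: dj-backup
     emptyDir: {}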

Configuring a sidecar backup container

Now for the pièce de résistance. We have our scheduled backups going to a Kubernetes volume. How do we send those files to offline storage?

One approach would be to modify our OpenDJ Dockerfile to support offline storage. We could, for example, include commands to write backups to Amazon S3 or Google Cloud storage.  This works, but it would specialize our container image to a unique environment. Where practical, we want our images to be flexible so they can be reused in different contexts.

This is where sidecar containers come into play.  The sidecar container holds the specialized logic for archiving files.  In general, it is a good idea to design containers that have a single responsibility. Using sidecars helps to enable this kind of design.

If you are running on Google Cloud, there is a ready-made container that bundles the “gcloud” SDK, including the “gsutil” utility for cloud storage.  We update our Helm chart to include this container as a sidecar that shares the backup volume with the OpenDJ container:

  {{- if .Values.enableGcloudBackups }}
    # An example of enabling backup to Google Cloud Storage.
    # The bucket must exist, and the cluster needs --scopes storage-full when it is created.
    # This runs the gsutil command periodically to rsync the contents of the /backup folder (shared with the DJ container) to cloud storage.
    - name: backup
      image: gcr.io/cloud-builders/gcloud
      imagePullPolicy: IfNotPresent
      command: [ "/bin/sh", "-c", "while true; do gsutil -m rsync -r /backup {{ .Values.gsBucket }} ; sleep 600; done"]
      volumeMounts:
      - name: dj-backup
        mountPath: /backup
  {{- end }}

The above container runs in a loop that periodically rsyncs the contents of the backup volume to cloud storage.  You could of course replace this sidecar with another that sends the backups to a different location (say, an Amazon S3 bucket).

If you enable this feature and browse to your cloud storage bucket, you should see your backed up data:

To wrap it all up, here is the final helm command that will deploy a highly available, replicated two node OpenDJ cluster, and schedule backups on the second node:

 helm install -f custom-gke.yaml \
   --set djInstance=userstore \
   --set numberSampleUsers=1000,backupHost=userstore-1,replicaCount=2 helm/opendj

Now we just need to demonstrate that we can restore our data. Stay tuned!

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Integrating Yubikey OTP with ForgeRock Access Management

Yubico is a manufacturer of multi-factor authentication devices that are typically just USB dongles. They can provide a range of different MFA options, including traditional static password linking, one-time password generation and integration using FIDO (Fast Identity Online) Universal 2nd Factor (U2F).

I want to quickly show the route to integrating your Yubico Yubikey with ForgeRock Access Management.  ForgeRock and Yubico have had integrations for the last 6 years, but I thought it would be good to have a simple update on integration using OATH-compliant OTP.

First of all you need a Yubikey.  I’m using a Yubikey Nano, which couldn’t be any smaller if it tried. Just make sure you don’t lose it… The Yubikey needs configuring first of all to generate one-time passwords.  This is done using the Yubico personalisation tool, a simple util that works on Mac, Windows and Linux.  Download the tool from Yubico and install it.  Setting up the Yubikey for OTP generation is a 3-minute job.  There’s even a nice Vimeo on how to do it, if you can’t be bothered to RTFM.

This setup process basically generates a secret that is bound to the Yubikey, along with some config.  If you want to use your own secret, just fill in the field… but don’t forget it :-)

The next step is to set up ForgeRock AM (aka OpenAM) to use the Yubikey during login.

Access Management has shipped with an OATH-compliant authentication module for years, ever since the Sun OpenSSO days.  This module works with any Open Authentication compliant device.

Create a new module instance and add in the fields where you will store the secret and counter against the user’s profile.  For quickness (and laziness) I just used employeeNumber and telephoneNumber, as they are already shipped in the profile schema and weren’t being used.  In the “real world” you would just add two specific attributes to the profile schema.

Make sure you then copy the secret that the Yubikey personalisation tool created into the user record, within the employeeNumber field…

Next, just add the module to a chain that contains your data store module first.  The data store module isn’t essential, but you do need a way to identify the user first, in order to look up their OTP seed in the profile store, so username and password authentication seems the quickest – albeit you could just use a persistent cookie if the user had authenticated previously, or maybe even just a username module.

Done.  Next, to use your new authentication service, just augment the authentication URL with the name of the service – in this case yubikeyOTPService. E.g.:

../openam/XUI/#login/&authIndexType=service&authIndexValue=yubikeyOTPService

This first asks me for my username and password…

…then my OTP.

At this point, I just insert my Yubikey Nano into my USB drive, then touch it for 3 seconds to auto-generate the 6-digit OTP and log me in.  Note the 3 seconds bit is important.  Most Yubikeys have 2 configuration slots; slot 1 is often configured for the Yubico Cloud Service and is activated if you touch the key for only 1 second.  To activate the second configuration – in our case the OTP – just hold a little longer…

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Making Rest Calls from IDM Workflow

I attended the Starling Bank Hackathon this weekend and had a great time. I will shortly be writing a longer blog post to talk all about it, but before that I briefly wanted to blog about a little bit of code that might be really helpful to anyone building IDM workflows.

The External Rest Endpoint

ForgeRock Identity Management (previously OpenIDM) has a REST endpoint that effectively allows you to invoke external REST services hosted anywhere. You might use this, for example, to call out to an identity verification service as part of a registration workflow, and I made good use of it at the hackathon.

With the following piece of code you can create some JSON and call out to a REST service outside of ForgeRock Identity Management:

// Get a logger for the workflow script
java.util.logging.Logger logger = java.util.logging.Logger.getLogger("")
logger.info("Make REST call")

// Build the JSON payload for the external service
def slurper = new groovy.json.JsonSlurper()
def result = slurper.parseText('{"destinationAccountUid": "a41dd561-d64c-4a13-8f86-532584b8edc4","payment": {"amount": 12.34,"currency": "GBP"},"reference": "text"}')

// Invoke the external REST service via IDM's external/rest endpoint
result = openidm.action('external/rest', 'call', ['body': (new groovy.json.JsonBuilder(result)).toString(), 'method': 'POST', 'url': 'https://api-sandbox.starlingbank.com/api/v1/payments/local', 'contentType':'application/json', 'authenticate': ['type':'bearer', 'token': 'lRq08rfL4vzy2GyoqkJmeKzjwaeRfSKfWbuAi9NFNFZZ27eSjhqRNplBwR2do3iF'], 'forceWrap': true ])

A really small bit of code but with it you can do all sorts of awesome things!
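One practical note: if the remote service returns an error, the external/rest call surfaces it as a script exception, so in a workflow it can be worth wrapping the call. Here is a minimal sketch building on the example above (the handling and log messages are illustrative):

try {
    def response = openidm.action('external/rest', 'call',
            ['body': (new groovy.json.JsonBuilder(result)).toString(),
             'method': 'POST',
             'url': 'https://api-sandbox.starlingbank.com/api/v1/payments/local',
             'contentType': 'application/json',
             // reuse the 'authenticate' map from the example above for the bearer token
             'forceWrap': true])
    logger.info("Payment response: " + response)
} catch (e) {
    // Decide here whether the workflow should fail or carry on without the payment
    logger.warning("External REST call failed: " + e.message)
}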

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

What’s New in ForgeRock Access Management 5?

ForgeRock this week released version 5 of the ForgeRock Identity Platform, of which ForgeRock Access Management 5 is a major component. So, what’s new in this release?

New Name

The eagle-eyed amongst you may be asking yourselves about that name and version. ForgeRock Access Management is actually the new name for what was previously known as OpenAM. And as all components of the Platform are now at the same version, this becomes AM 5.0 rather than OpenAM 14.0 (though you may still see remnants of the old versioning under the hood).

Cloud Friendly

AM5 is focussed on being a great Identity Platform for Consumer IAM and IoT, and one of the shared characteristics of these markets is high, unpredictable scale. So one of the design goals of AM5 was to become more cloud-friendly, enabling a more elastic architecture. This has resulted in a simpler architectural model where individual AM servers no longer need to know about, or address, other AM servers in a cluster; they can act autonomously. If you need more horsepower, simply spin up new instances of AM.

DevOps Friendly

To assist with the casual “Spin up new instances” statement above, AM5 has become more DevOps friendly. Firstly, the configuration APIs to AM are now available over REST, meaning configuration can be done remotely. Secondly, there’s a great new tool called Amster.

Amster is a lightweight command-line tool which can run in interactive shell mode, or be scripted.

A typical Amster script looks like this:

connect http://www.example.com:8080/openam -k /Users/demo/keyfile
import-config --path /Users/demo/am-config
:exit

This example connects to the remote AM5 instance, authenticating using a key, then imports configuration from the filesystem/git repo, before exiting.

Amster is separately downloadable and has its own documentation too.

Developer Friendly

AM5 comes with new interactive documentation via the API Explorer. This is a Swagger-like interface describing all of the AM CREST (Common REST) APIs and means it is now easier than ever for devs to understand how to use these APIs. Not only are the Request parameters fully documented with Response results, but devs can “Try it out” there and then.

Secure OAuth2 Tokens

OAuth2 is great, and used everywhere. Mobile apps, Web apps, Micro-services and, more and more, in IoT.
But one of the problems with OAuth2 access tokens is that they are bearer tokens. This means that if someone steals one, they can use it to get access to the services it grants access to.

One way to prevent this is to adopt a new industry standard approach called “Proof of Possession” (PoP).

With PoP the client provides something unique to it, which is baked into the token when it is issued by AM. This is usually the public key of the client. The Resource Server, when presented with such a token, can use the confirmation claim/key to challenge the client, knowing that only the true client can successfully answer the challenge.
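As an illustration of the shape this takes, a proof-of-possession token carries a confirmation-key (“cnf”) claim alongside the usual fields, per RFC 7800; the values below are placeholders rather than literal AM output:

 {
   "sub": "demo",
   "exp": 1493379600,
   "cnf": {
     "jwk": {
       "kty": "RSA",
       "e": "AQAB",
       "n": "0vx7agoebGcQSuuPiLJXZpt..."
     }
   }
 }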

Splunk Audit Handler

Splunk is one of the cool kids so it makes sense that our pluggable Audit Framework supports a native Splunk handler.

There are a tonne of other improvements to AM5 we don’t have time to cover, but read about some of the others in the Release Notes, or download it from Backstage now and give it a whirl.

This blog post by the Access Management product manager was first published @ thefatblokesings.blogspot.com, included here with permission.