The Simple Way to Create an AM Authentication Node Project

ForgeRock’s Identity Platform Access Management introduced Authentication Trees for preview in version 5.5. Version 6.0 will see Authentication Trees and Nodes become an integral part of the product. This blog post will help you quickly and easily create an Authentication Tree Node project so that you can develop your own authentication node.

About Authentication Trees and Nodes

Authentication trees provide fine-grained authentication by allowing multiple paths and decision points throughout the authentication flow.

Authentication trees are made up of authentication nodes, which define actions taken during authentication, similar to authentication modules within chains. Authentication nodes are more granular than modules, with each node performing a single task such as collecting a username or making a simple decision. Unlike authentication modules, authentication nodes can have multiple outcomes rather than just success or failure.

You can create complex yet customer-friendly authentication experiences by linking nodes together, creating loops, and nesting nodes within a tree.

You can read more about Authentication Trees and Nodes in the ForgeRock documentation. Note that the linked documentation is for v5.5; newer versions may be available.

Creating an Authentication Node

Because Authentication Nodes are fine-grained, you can end up writing lots of them to build a flexible custom authentication suite. Creating a Maven project for each node can become an overhead, but fear not! There is a Maven archetype to help you set up a skeleton independent auth node project!

Using the Maven Archetype

The Maven archetype lives in the ForgeRock Maven repository. To use it, you will need to set up your Maven environment to authenticate to that repository. To be able to do that, you will need a ForgeRock Backstage account that is associated with either a customer subscription or partner status.
To set up Maven, you will need to download a preconfigured settings.xml file, as explained in the Backstage Knowledge Base.
Note: If you have previously downloaded your settings.xml file, it could still be worth downloading it again, as the `profile` section of the settings.xml file required to access the archetype did not exist before mid-December 2017.

I’m set up. Let’s do this!

OK! Create your project:

mvn archetype:generate \
-DgroupId=<my-group-id> \
-DartifactId=<my-artifact-id> \
-Dversion=<my-version> \
-Dpackage=<my-package-id> \
-DauthNodeName=<my-auth-node-class-name> \
-DarchetypeArtifactId=auth-tree-node-archetype \
-DarchetypeVersion=5.5.0

Where you need to substitute values for groupId, artifactId, version, package and authNodeName to suit your project.
groupId, artifactId and version are all pretty self-evident and will be used in the generation of the POMs for your project.
package defines the Java package in which your auth tree node classes will be generated.
authNodeName is used to name the generated classes, resource files and so on.

What does this create for me?

Assuming we run a command something like this:

mvn archetype:generate \
-DgroupId=com.boho-software \
-DartifactId=super-auth-tree-node \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=com.boho-software.supernode \
-DauthNodeName=SuperNode \
-DarchetypeArtifactId=auth-tree-node-archetype \
-DarchetypeVersion=5.5.0

We will get a project with the following structure:

+ legal
|   + CDDL-1.0.txt
+ pom.xml
+ src
    + main
        + java
        |   + com
        |       + boho-software
        |           + supernode
        |               + SuperNode.java
        |               + SuperNodePlugin.java
        + resources
            + META-INF
            |   + services
            |       + org.forgerock.openam.plugins.AmPlugin
            + com
                + boho-software
                    + supernode
                        + SuperNode.properties
Which, I’m sure you’ll agree, saves a lot of project set-up time!

Once it’s built…

put it in the Backstage Marketplace! There you can build a community around your auth tree node, share it with others, find help maintaining it and, if it becomes popular, it could be accepted into the AM project as a fully supported node.

This blog was originally published at

2020: Machine Learning, Post Quantum Crypto & Zero Trust

Welcome to a digital identity project in 2020! You’ll be expected to have a plan for post-quantum cryptography. Your network will be littered with “zero trust” buzzwords that will make you suspect everyone, everything and every transaction. Add to that, “machines” will be learning everything, from how you like your coffee through to every network, authentication and authorisation decision. OK, are you ready?

Machine Learning

I’m not going to do an entire blog on machine learning (ML) and artificial intelligence (AI). Firstly, I’m not qualified enough on the topic, and secondly, I want to focus on the security implications. Needless to say, within 3 years most organisations will have relatively experienced teams who are handling big data capture from an identity, access management and network perspective.

That data will be fed into ML platforms, either on-premises or via cloud services. Leveraging either supervised or unsupervised learning, data from events such as logins (authentication) for end users and devices, as well as authorization decisions, can be analysed in order not only to increase assurance and security, but also to improve user experience. How? Well, if the output from ML can be used to update existing signatures (a bit legacy, but still) whilst simultaneously working out the less risky logins, end user journeys can be made less intrusive.

Step one is finding the correct data sources to enter into the ML “model”. What data is available, especially within the sign-up, sign-in and authorization flows? Clearly, general auditing data will look to capture events such as successful sign-ins and any other metadata associated with them – such as time, location, IP, device data, behavioural biometrics and so on. Having vast amounts of this data available is the first start, which in turn can be used to “feed” the ML engine. Other data points would be needed too. What resources, applications and API calls are being made to complete certain business processes? Can patterns be identified and tied to “typical” behaviour of user and device communities? Being able to identify and track critical data and the services that process that data would be a first step, before being able to extract task-based data samples to help identify trusted and untrusted activities.
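As a toy illustration of the idea (not any vendor’s functionality – the function and feature names here are my own invention), a first-cut “risk score” for a login event can simply measure how unfamiliar its features are compared with the user’s history:

```python
from collections import Counter

def login_risk_score(history, event):
    """Toy risk score in [0, 1]: how unusual are this event's hour,
    country and device compared with the user's login history?
    0.0 = entirely typical, 1.0 = every feature is new."""
    score = 0.0
    features = ("hour", "country", "device")
    for feature in features:
        seen = Counter(e[feature] for e in history)
        # Fraction of past logins sharing this feature value.
        familiarity = seen[event[feature]] / max(len(history), 1)
        score += (1.0 - familiarity) / len(features)
    return round(score, 2)

history = [
    {"hour": 9, "country": "GB", "device": "laptop-01"},
    {"hour": 10, "country": "GB", "device": "laptop-01"},
    {"hour": 9, "country": "GB", "device": "phone-02"},
]
typical = login_risk_score(history, {"hour": 9, "country": "GB", "device": "laptop-01"})
unusual = login_risk_score(history, {"hour": 3, "country": "RU", "device": "unknown"})
print(typical, unusual)  # a low score vs the maximum 1.0
```

A real deployment would feed far richer signals into a trained model rather than simple frequency counts, but the shape is the same: historical events in, per-transaction risk out, which can then drive step-up authentication for the risky logins and a frictionless journey for the rest.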


Post Quantum Crypto

Quantum computing is coming. Which is great. Even in 2020 it might not be ready, but you need to be ready for it. But, and there’s always a but, the main concern is that the super power of quantum will blow away the ability of existing encryption and hashing algorithms to remain secure. Why? Well, quantum computing ushers in the paradigm of “qubits” – a superposition of the classic binary 1 and 0 states. Ultimately, that means the “solutioneering” of complex problems can be completed in a much more efficient and non-sequential way.

The quantum boxes can basically solve certain problems faster. The mathematics behind cryptography is one of those problems. A basic estimate for the future effectiveness of something like AES-256 drops to 128 bits, courtesy of Grover’s quantum search algorithm. Scary stuff. Commonly used approaches today for key exchange rely on protocols such as Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH). Encryption and digital signatures are then handled by the likes of Rivest-Shamir-Adleman (RSA) or the Elliptic Curve Digital Signature Algorithm (ECDSA).
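The AES-256 estimate is just arithmetic: Grover’s algorithm searches N possibilities in roughly √N steps, so an n-bit keyspace behaves like an (n/2)-bit one. A quick sketch (illustrative helper, names my own):

```python
import math

def brute_force_ops(key_bits, quantum=False):
    """Rough work factor for brute-forcing a symmetric key.
    Grover's algorithm searches N candidates in ~sqrt(N) steps,
    so a quantum attacker sees an n-bit keyspace as n/2 bits."""
    keyspace = 2 ** key_bits
    return math.isqrt(keyspace) if quantum else keyspace

# AES-256: 2^256 classical guesses, but "only" 2^128 under Grover -
# the same work factor as classical brute force against AES-128.
effective = brute_force_ops(256, quantum=True)
print(effective == 2 ** 128)
```

This is also why the usual symmetric-key advice is simply to double key lengths, whereas RSA and elliptic curves need replacing outright: Shor’s algorithm attacks their underlying factoring and discrete-logarithm problems with far more than a quadratic speedup.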

In the post-quantum (PQ) world they’re basically broken. Clearly, the material impact on your organisation or services will largely depend on impact assessment. There’s no point putting a $100 lock on a $20 bike. But everyone wants encryption, right? All that data flying around is likely to need even more protection, both in transit and at rest.

Some of the potentially “safe” PQ algorithms include XMSS and SPHINCS for hash-based signatures – the former going through IETF standardization. Ring Learning With Errors (RLWE) is basically an enhanced public key cryptosystem that alters the structure of the private key. It is currently under research, but no weaknesses have yet been found. NTRU is another algorithm for the PQ world, using a hefty 12881-bit key. NTRU is also already standardized by the IEEE, which helps with the maturity aspect.
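The hash-based schemes are conceptually the easiest to grasp: XMSS and SPHINCS are built up from one-time signatures whose security rests only on the hash function. A minimal sketch of the ancestor of those schemes, a Lamport one-time signature, using nothing but the standard library (a toy – each key pair must sign exactly one message, and real schemes layer trees on top to fix that):

```python
import hashlib, secrets

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def bits(msg):
    # The 256 bits of the message digest select which preimages to reveal.
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg, sk):
    # Reveal one preimage per digest bit. Signing twice leaks the key!
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(msg, sig, pk):
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(b"hello", sk)
print(verify(b"hello", sig, pk))      # True
print(verify(b"tampered", sig, pk))   # False
```

Forging a signature means inverting SHA-256, a problem Grover only dents rather than breaks – which is exactly the property that makes this family attractive post-quantum.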

But how to decide? There is a nice body called the PQCRYPTO Consortium that is providing guidance on current research. Clearly you’re not going to build your own alternatives, but information assurance and crypto specialists within your organisation will need to start data impact assessments, in order to understand where cryptography is currently used for transport, identification and data-at-rest protection, and so understand any future potential exposures.

Zero Trust Identities

“Zero Trust” (ZT) networking has been around for a while. The concept of organisations having a “safe” internal network versus the untrusted, “hostile” public network, separated by a firewall, is long gone. Organisations are perimeter-less.

Assume every device, identity and transaction is hostile until proven otherwise. ZT for identity especially will look to bind not only a physical identity to a digital representation (session ID, token, JWT), but also that representation to a vehicle – aka a mobile, tablet or other device. In turn, every transaction that tuple takes part in is then verified – checking for changes, either contextual or behavioural, that could indicate malicious intent. That introduces a lot of complexity to transaction, data and application protection.

Every transaction potentially requires introspection or validation. Add to this mix an increased number of devices and data flows, and you pave the way for distributed authorization coupled with continuous session validation.

How will that look? Well, we’re starting to see the use of things like stateless JSON Web Tokens (JWTs) as a means of hyper-scale assertion issuance, along with token binding to sessions and devices. Couple that with fine-grained authentication processes that use 20+ signals of data to identify a user or thing, and we’re starting to see the foundations of ZT identity infrastructures. Microservice or hyper-mesh related application infrastructures are going to need rapid introspection and re-validation on every call, so the likes of distributed authorization look likely.
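The “stateless” part is what enables the hyper-scale: any service holding the key can validate an assertion locally, with no round trip to a central session store. A bare-bones sketch of HS256 JWT signing and verification using only the standard library (illustrative only – production code should use a vetted JOSE library, asymmetric algorithms and expiry checks; the `device` claim is my own stand-in for the token-binding idea):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time compare; verification is entirely local to the service.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

secret = b"shared-secret"
# Binding the assertion to a device: the claim travels inside the
# signed payload, so swapping devices invalidates the token.
token = sign_jwt({"sub": "alice", "device": "phone-02"}, secret)
claims = verify_jwt(token, secret)
print(claims["sub"])  # alice
```

Each microservice in the mesh repeats `verify_jwt` on every call – cheap enough to run per request, which is what makes continuous, distributed validation plausible.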

So the future is now. As always. Secure identity and access management functions have never been more needed, popular or advanced than in the last 20 years. The next 3-5 years will be critical in defining a backbone of security services that can nimbly be applied to users, devices, data and the billions of transactions that will result.

This blog post was first published @, included here with permission.