Using an Authentication Tree Stage to Build a Custom UI with the ForgeRock JavaScript SDK

The ForgeRock JavaScript SDK greatly simplifies the process of adding intelligent authentication to your applications. It offers a ready-to-use UI that completely handles rendering of authentication steps. You can also take full control and create a custom UI, in which case it’s helpful to know the current stage of the authentication tree so you can determine which UI to render.

OpenAM 7.0 adds a stage property to the page node that can be used for this purpose, and alternate approaches are available for prior versions. This post will show you two approaches for OpenAM 6.5, and one for OpenAM 7.

OpenAM 6.5

While the stage property doesn’t exist in authentication trees prior to OpenAM 7.0, there are two alternate approaches to achieve the same result.

Approach #1: Metadata Callbacks

Using metadata callbacks, you can inject the stage value into the tree’s response payload. The only difference is that the value will appear in a callback instead of being directly associated with the step itself. This approach involves three steps:

  1. Create a script to add the metadata callback.
  2. Update your tree to execute that script.
  3. Read the metadata callback in your application.

Step 1: Add a Metadata Callback Using a Script

  • Create a script of type Decision node script for authentication trees.
  • Give it an appropriate name, such as “MetadataCallback: UsernamePassword”.
  • In the script, add a metadata callback that creates an object with a stage property. Be sure to also set the outcome value:
var fr = JavaImporter(
  org.forgerock.json.JsonValue,
  org.forgerock.openam.auth.node.api.Action,
  com.sun.identity.authentication.spi.MetadataCallback
);

with (fr) {
  var json = JsonValue.json({ stage: "UsernamePassword" });
  action = Action.send(new MetadataCallback(json)).build();
}

outcome = "true";

As with all scripts, ensure you have whitelisted any imported classes.

Step 2: Update Your Tree to Execute the Script

Add a scripted decision node to your page node and configure it to reference the script created in the previous step. In this example, the step payload will contain three callbacks:

  • MetadataCallback
  • NameCallback
  • PasswordCallback

Step 3: Read the Metadata Callback

Use the SDK to find the metadata callback and read its stage property:

function getStage(step) {
  // Get all metadata callbacks in the step
  const metadataCallbacks = step.getCallbacksOfType(CallbackType.MetadataCallback);

  // Find the first callback that contains a "stage" value in its data
  const stage = metadataCallbacks
    .map(x => {
      const data = x.getData();
      const dataIsObject = typeof data === "object" && data !== null;
      return dataIsObject && data.stage ? data.stage : undefined;
    })
    .find(x => x !== undefined);

  // Return the stage value, which will be undefined if none exists
  return stage;
}

Approach #2: Inspecting Callbacks

If you have relatively few and/or well-known authentication trees, it’s likely you can determine the stage by simply looking at the types of callbacks in the step.

For example, it’s common for a tree to start by capturing the username and password. In this case, you can inspect the callbacks to see if they consist of a NameCallback and PasswordCallback. If your tree uses WebAuthn for passwordless authentication, the SDK can help with this inspection:

function getStage(step) {
  // Check if the step contains callbacks for capturing username and password
  const usernameCallbacks = step.getCallbacksOfType(CallbackType.NameCallback);
  const passwordCallbacks = step.getCallbacksOfType(CallbackType.PasswordCallback);
  if (usernameCallbacks.length > 0 && passwordCallbacks.length > 0) {
    return "UsernamePassword";
  }

  // Use the SDK to determine if this is a WebAuthn step
  const webAuthnStepType = FRWebAuthn.getWebAuthnStepType(step);
  if (webAuthnStepType === WebAuthnStepType.Authentication) {
    return "DeviceAuthentication";
  } else if (webAuthnStepType === WebAuthnStepType.Registration) {
    return "DeviceRegistration";
  }

  // ... Add checks to determine other stages in your trees ...

  return undefined;
}

OpenAM 7.0 Approach

Using OpenAM 7.0 to specify a stage is straightforward. When constructing a tree, place nodes inside a page node and then specify its stage, which is a free-form text field:

When you use the SDK’s FRAuth module to iterate through a tree, you can now call the getStage() method on the returned FRStep, and decide which custom UI needs to be rendered:

// Get the current step in the tree
const currentStep = await FRAuth.next(previousStep);

// Use the stage value configured in the tree
switch (currentStep.getStage()) {
  case "UsernamePassword":
    // Render your custom username/password UI
    break;
  case "SomeOtherStage":
    // etc
    break;
}

DS: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment

Introduction

This is the continuation of the previous blog about a Zero Downtime Upgrade Strategy Using a Blue/Green Deployment for AM. Traditionally, ForgeRock Directory Server (DS) upgrades are handled via a rolling upgrade strategy using an in-place update. As many deployments have constraints around this approach (zero downtime, immutable, etc.), a parallel deployment approach, also known as a blue/green strategy, can be leveraged for upgrading ForgeRock DS servers.

This blog provides a high-level approach for using a blue/green methodology for updating ForgeRock DS-UserStores.

This corresponds to Unit 3: DS-UserStores in our overall ForgeRock upgrade approach.

ForgeRock Upgrade Units
Unit 3: DS-Userstores Upgrade Process

Prerequisites/Assumptions

1. This approach assumes that your infrastructure processes have the ability to install a parallel deployment for an upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing DS deployment (like a 3.5.x version), and the green reflects a new DS deployment (like a 6.5.x version).

3. There are N+1 DS servers deployed in your existing deployment. N servers are used for your production workload and one server is reserved for maintenance activities like backup, upgrades, etc. If there is no maintenance server, then you may need to remove one server from the production cluster (thereby reducing production load capacity) or install an additional DS server node for this upgrade strategy.

4. Review the release notes for all DS versions between the existing and target DS deployments for new and deprecated features, bug fixes, and other changes. For a DS 3.5 to DS 6.5 upgrade, review the Release Notes for DS 5.0, 5.5, 6.0, and 6.5.

Upgrade Process

1. Unconfigure replication for the DS-3 user store. Doing so ensures that the upgrade doesn’t impact your existing DS deployment. (A command-line sketch of steps 1-3 and the restore in step 6 follows this list.)

2. Upgrade DS-3 in place using the DS upgrade process.

3. Create a backup from DS-3 using the DS backup utility.

4. Configure green RS-1’s replication with the existing blue replication topology.

5. Configure green RS-2’s replication with the existing blue replication topology.

6. Install green DS-1 and restore data from backup using the DS restore utility.

7. Install green DS-2 and restore data from backup using the DS restore utility.

8. Install Green DS-3 and restore data from backup using the DS restore utility.

9. Configure Green DS-1’s replication with Green RS-1.

10. Configure Green DS-2’s replication with Green RS-1.

11. Configure Green DS-3’s replication with Green RS-1.
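The following is a minimal command-line sketch of steps 1-3 and the restore in step 6, assuming DS is installed under /opt/ds, the administration port is 4444, and the user data lives in the userRoot backend. Command names and flags differ between DS versions (OpenDJ 3.5 uses dsreplication disable, DS 5.5 and later use unconfigure), so verify everything against the tools reference for your release:

# Step 1: remove DS-3 from the blue replication topology (OpenDJ 3.5 syntax shown;
# DS 5.5+ renamed "disable" to "unconfigure"). Hostnames and credentials are placeholders.
/opt/ds/bin/dsreplication disable --disableAll \
  --hostname ds3.blue.example.com --port 4444 \
  --adminUID admin --adminPassword password --trustAll --no-prompt

# Step 2: unpack the new DS version over the existing install, then upgrade in place.
/opt/ds/upgrade --acceptLicense --no-prompt

# Step 3: take an offline backup of the user data (assumes the backend is userRoot).
/opt/ds/bin/stop-ds
/opt/ds/bin/backup --backendID userRoot --backupDirectory /backup/userRoot

# Step 6 (repeated for steps 7 and 8 on each green DS): restore the data after install.
/opt/green-ds/bin/restore --backupDirectory /backup/userRoot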

Switch Over to the New Deployment

12. After validating that the new deployment is working correctly, switch the load balancer from blue to green. This can also be done incrementally. If any issues occur, you can always roll back to the blue deployment.

If direct hostnames are used by DS clients, such as AM, IDM, etc., then those configurations need to be updated to leverage new green hostnames.

Post Go-Live

13. Unconfigure the blue RS1 replication server to remove this server from blue’s replication topology.

14. Unconfigure the blue RS2 replication server to remove this server from blue’s replication topology.

15. Stop the blue DS servers.

16. Stop the blue RS servers.

17. De-provision the blue deployment.

Conclusion

Although a blue/green deployment requires a high level of deployment maturity, this approach provides an elegant way to minimize downtime for ForgeRock deployment upgrades. It is always advisable to try an upgrade strategy in lower environments, such as dev and stage, before moving to production.

Depending on the complexity of your deployment, there can be multiple things to consider for these upgrades, such as customizations and new ForgeRock features. It is always recommended to break the entire upgrade process into multiple releases, such as a “base upgrade” followed by “leveraging new features”, and so on.


AM and IG: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment

Introduction

The standard deployment for the ForgeRock Identity Platform consists of multiple ForgeRock products such as IG, AM, IDM, and DS. As newer ForgeRock versions are released, deployments using older versions need to be migrated before they reach their end of life. Also, newer versions of ForgeRock products provide features such as intelligent authentication and the latest OAuth standards, which help businesses implement complex use cases.

ForgeRock Deployment Components

Problem Statement

Traditionally, ForgeRock upgrades are handled via a rolling upgrade strategy using an in-place update. This strategy doesn’t suit all deployments due to the following constraints:

  • Many deployments don’t allow any downtime. This means production servers can’t be stopped for upgrade purposes.
  • Some deployments follow an immutable instances approach. This means no modification is allowed on the current running servers.

To resolve these constraints, a parallel deployment approach, or a blue/green strategy can be leveraged for upgrading ForgeRock servers.

Solution

This article provides a high-level approach for using a blue/green methodology for updating ForgeRock AM servers and related components like DS-ConfigStore, DS-CTS, AM-Agents, and IG servers. We plan to cover similar strategies for DS-UserStores and IDM in future articles.

In order to upgrade a ForgeRock deployment, we first need to analyze the dependencies between the various ForgeRock products and their impact on the upgrade process:

Given the dependencies between ForgeRock products, it is generally advisable to upgrade AM before upgrading DS, AM agents, and others, as new versions of AM support older versions of DS and AM agents, but the converse may not be true.

Note: There can be some exceptions to this rule. For example:

  • Web policy agents 4.x are compatible with AM 6.0, but not with AM 6.5. This means the order of upgrade shall be existing version to AM 6.0 => AM Agent 4.x to 5.x => AM 6.0 to AM 6.5.x
  • If an AM-IDM integration is used, then both AM and IDM need to be upgraded at the same time.

Upgrade Units

ForgeRock Upgrade Units

A ForgeRock Identity Platform deployment can be divided into four units so that the upgrade of each unit can be handled individually:

  • Unit 1: AM and its related stores (DS-Config and DS-CTS)
  • Unit 2: AM-Agents/IG
  • Unit 3: DS-UserStores
  • Unit 4: IDM and its datastore

The order of upgrade used by our approach is Unit 1 => Unit 2 => Unit 3 => Unit 4.

Unit 1: AM Upgrade

Unit 1: AM Upgrade

Prerequisites/Assumptions

1. This approach assumes that your infrastructure processes have the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing AM deployment (like the 13.5.x version) and the green cluster reflects a new AM deployment (like the 6.5.x version).

3. There are N+1 AM servers and corresponding config stores deployed in your existing deployment. This means N servers are used for production load, and one server is reserved for maintenance activities like backup, upgrades, and others. If there is no such maintenance server, then you may need to remove one server from the production cluster (thereby reducing production load capacity) or install an additional node (AM server and corresponding config store) for this upgrade.

4. No sessions in CTS servers are replicated during the blue/green switch; therefore, users are expected to re-authenticate after this migration. If your business use cases require users to remain authenticated, then these sessions (like OAuth refresh tokens) need to be synced from the old to the new deployment. Mechanisms like LDIF export/import or the IDM synchronization engine can be leveraged for syncing selective tokens from the old to the new deployment (see the sketch after this list). Also, refer to the AM Release Notes on session compatibility across AM versions.

5. Review the Release Notes for all AM versions between the existing and target AM deployments for new features, deprecated features, bug fixes, and so on. For an OpenAM 13.5 to AM 6.5 upgrade, review the Release Notes for AM 5.0, 5.5, 6.0, and 6.5.
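As a minimal sketch of how selective token data could be moved with the DS LDIF tools, assuming the CTS data lives in a backend named ctsRoot (the backend ID, paths, and any filtering are deployment-specific placeholders):

# On a blue CTS server: export the token entries to LDIF (offline export shown).
/opt/ds-cts/bin/export-ldif --backendID ctsRoot --ldifFile /tmp/cts-tokens.ldif

# Optionally filter the LDIF down to the token types you need (for example, OAuth2
# refresh tokens) before importing it.

# On a green CTS server: import the (filtered) LDIF into the new deployment.
/opt/green-ds-cts/bin/import-ldif --backendID ctsRoot --ldifFile /tmp/cts-tokens.ldif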

Upgrade Process

1. Unconfigure replication for the DS-3 Config store. This ensures that the upgrade doesn’t impact an existing AM deployment.

2. Upgrade AM-3 in place using the AM upgrade process. Note: You may need to handle new AM features in this process, such as AM 6.5 secrets.

3. Export the Amster configuration from AM-3 (a command sketch of steps 3-5 follows this list).

4. Transform the Amster export so that it is aligned with the new green deployment (for example, DS hostname:port values).

5. Install the AM, DS-Config, and DS-CTS servers, then import the Amster export into the new green cluster. Note: For certain deployment patterns, such as the ForgeRock immutable deployment, the Amster import needs to be executed on each AM node. If a shared config store is used, then the Amster import needs to be executed only once, and the other nodes only need to be added to the existing AM site.
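A minimal sketch of steps 3-5 using the Amster command-line shell; the key path, hostnames, and the sed transform are placeholders to adapt to your environment:

# Step 3: export the AM-3 (blue, upgraded) configuration with Amster.
cat > /tmp/export.amster <<'EOF'
connect -k /path/to/amster_rsa https://am3.blue.example.com/openam
export-config --path /tmp/am-export
:quit
EOF
./amster /tmp/export.amster

# Step 4: transform the export for the green deployment, e.g. rewrite DS host:port values.
grep -rl 'ds.blue.example.com:1636' /tmp/am-export | \
  xargs sed -i 's/ds.blue.example.com:1636/ds.green.example.com:1636/g'

# Step 5: import the transformed configuration into the green cluster.
cat > /tmp/import.amster <<'EOF'
connect -k /path/to/amster_rsa https://am1.green.example.com/openam
import-config --path /tmp/am-export
:quit
EOF
./amster /tmp/import.amster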

Switch Over to the New Deployment

6. After validating that the new deployment is working correctly, switch the load balancer from blue to green. This can also be done incrementally. If any issues occur, we can always roll back to the blue deployment.

Note: Any configuration changes made after the blue cluster’s Amster export should be applied to both the blue and green deployments so that no configuration change is lost during switchover or rollback.

Post Go-Live

7. Stop the AM servers in the blue deployment.

8. Stop the Config and CTS DS servers in the blue deployment.

9. De-provision the blue deployment.

Unit 2: AM-Agent/IG Upgrade

Unit 2: AM-Agent/IG Upgrade Process

AM-Agent

Prerequisites/Assumptions

1. This approach assumes that your deployment (including applications protected by agents) has the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing AM-Agent deployment and the green cluster reflects the new AM-Agent deployment.

3. A parallel base green deployment for protected app servers has already been created.

4. Create new agent profiles for the green deployment on the AM servers.

5. This approach assumes both old and new AM-Agent versions are supported by the AM deployment version.

6. Refer to the Release Notes for latest and deprecated features in the new AM-Agent/IG version, such as the AM-Agent 5.6 Release Notes.

Upgrade Process

1. Install AM-Agents in the green deployment. Update agent profiles on the AM server (created in #4 above) for new agents deployed in the green deployment to match configurations used in agent profiles from the blue deployment. For certain AM versions, this process can be automated by retrieving existing Agent profiles and using these results to create new Agent profiles.
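One possible way to script that retrieval, sketched with curl and jq: the login call below is the standard AM REST authentication endpoint, but the agent-profile endpoint and the default session cookie name (iPlanetDirectoryPro) differ between AM versions and configurations, so treat the paths as indicative and confirm them in your AM API Explorer:

# Authenticate as an administrator and capture the SSO token (hostname and credentials are placeholders).
TOKEN=$(curl -s -X POST \
  -H 'X-OpenAM-Username: amadmin' -H 'X-OpenAM-Password: password' \
  -H 'Content-Type: application/json' \
  -H 'Accept-API-Version: resource=2.0, protocol=1.0' \
  'https://am.example.com/openam/json/realms/root/authenticate' | jq -r .tokenId)

# Read an existing (blue) web agent profile; the endpoint path is indicative only.
curl -s -H "iPlanetDirectoryPro: ${TOKEN}" \
  'https://am.example.com/openam/json/realms/root/realm-config/agents/WebAgent/blue-agent-01' \
  > blue-agent-01.json

# Edit the JSON (name, URLs, secret) and use it as the payload when creating the
# matching green agent profile.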

Switch Over to the New Deployment

2. After validating that the new deployment is working properly, switch the load balancer from blue to green.

Post Go-Live

3. Stop the app servers in the blue deployment.

4. Remove the blue agent profiles from AM deployment.

5. De-provision the blue deployment.

IG

Prerequisites/Assumptions

1. This approach assumes that your deployment (including applications protected by IG) has the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing IG deployment and the green reflects the new IG deployment.

3. This approach assumes both old and new IG versions are supported by the AM deployment version.

4. Create the new agent profiles required by the IG servers for the green deployment on the AM servers.

5. Refer to the Release Notes for the latest and deprecated features in the new IG version, like IG 6.5 Release Notes.

Upgrade Process

1. Update the IG configs in the git repository as per the changes in the new version. You may create a separate branch in your repository for this purpose (see the sketch after this list).

2. Deploy the new green IG deployment by leveraging updated configurations.
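A minimal sketch of steps 1 and 2, assuming the IG configuration is kept in a git repository and the green deployment is built from a dedicated branch (the repository URL and branch name are placeholders):

# Step 1: branch the IG configuration and update it for the new IG version.
git clone https://git.example.com/ig-config.git && cd ig-config
git checkout -b ig-6.5-green
# ... edit routes/config for the new IG version ...
git commit -am "Update IG configuration for the 6.5 green deployment"
git push -u origin ig-6.5-green

# Step 2: build and deploy the green IG instances from that branch
# (the deployment tooling itself is environment-specific).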

Switch Over to the New Deployment

3. After validating that the new deployment is working fine, switch the load balancer from blue to green.

Post Go-Live

4. Stop the IG servers in the blue deployment.

5. De-provision the blue deployment.

Conclusion

Although a blue/green deployment requires a high level of deployment maturity, this approach provides an elegant way to minimize downtime for ForgeRock deployment upgrades. It is always advisable to practice an upgrade strategy in lower environments, such as dev and stage, before moving to production.

Depending on the complexity of your deployment, there can be multiple things to consider for these upgrades, such as customizations, new ForgeRock features, migration to containers, and others. It is always recommended to break the entire upgrade process into multiple releases, such as a “base upgrade” followed by “leveraging new features”, and so on.


Easily Share Authentication Trees

Originally published on Mr. Anderson’s Musings

A New World

A new world of possibilities was born with the introduction of authentication trees in ForgeRock’s Access Management (AM). Limiting login sequences of the past were replaced with flexible, adaptive, and contextual authentication journeys.

ForgeRock chose the term Intelligent Authentication to capture this new set of capabilities. Besides offering a shiny new browser-based design tool to visually create and maintain authentication trees, Intelligent Authentication also rang in a new era of atomic extensibility.

Authentication Tree

While ForgeRock’s Identity Platform has always been known for its developer-friendliness, authentication trees took it to the next level: Trees consist of a number of nodes, which are connected with each other like in a flow diagram or decision tree. Each node is an atomic entity, taking a single input and providing one or more outputs. Nodes can be implemented in Java, JavaScript, or Groovy.

A public marketplace allows the community to share custom nodes. An extensive network of technology partners provides nodes to integrate with their products and services.

A New Challenge

With the inception of authentication trees, a spike of collaboration between individuals, partners, and customers occurred. At first the sharing happened on a node basis as people would exchange cool custom node jar files with instructions on how to use those nodes. But soon it became apparent that the sharing of atomic pieces of functionality wasn’t quite cutting it. People wanted to share whole journeys, processes, trees.

A New Tool – amtree.sh

A fellow ForgeRock solution architect in the UK, Jon Knight, created the first version of a tool that allowed the easy export and import of trees. I was so excited about the little utility that I forked his repository and extended its functionality to make it even more useful. Shortly thereafter, another fellow solution architect from the Bay Area, Jamie Morgan, added even more capabilities.

The tool is implemented as a shell script, which exports authentication trees from any AM realm to standard output or a file and imports trees into any realm from standard input or a file. The tool automatically includes required decision node scripts for authentication trees (JavaScript and Groovy) and requires curl, jq, and uuidgen to be installed and available on the host where it is to be used. Here are a few ideas and examples for how to use the tool:

Backup/Export

I do a lot of POCs or create little point solutions for customer or prospect use cases or build demos to show off technology partner integrations or our support for the latest open standards. No matter what I do, it often involves authentication trees of various complexity and usually those trees take some time designing and testing and thus are worthy of documentation and preservation and sharing. The first step to achieve any of these things is to extract the trees’ configuration into a reusable format, or simply speaking: backing them up or exporting them.

Before performing an export, it can be helpful to just produce a list of all the authentication trees in a realm. That way we get an idea of what’s available and can decide whether we want to export individual trees or all the trees in a realm. The tool provides an option to list all trees in a realm. It lists the trees in their natural order (order of creation). To get an alphabetically ordered list, we can pipe the output into the sort shell command.

List Trees
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -l | sort
email
push
push_reg
push_reg_2fa
risk
select
simple
smart
solid
trusona
webauthn
webauthn_reg
webauthn_reg_2fa 

Now that we have a list of trees, it is time to think about what it is we want to do. The amtree.sh tool offers us 3 options:

  1. Export a single tree into a file or to standard out: -e
  2. Export all trees into individual files: -S
  3. Export all trees into a single file or to standard out: -E

The main reason to choose one of these options over another is whether your trees are independent (have no dependency on other trees) or not. Authentication trees can reference other trees, which then act like subroutines in a program. These subroutines are called inner trees. Independent trees do not contain inner trees. Dependent trees contain inner trees.

Options 1 and 2 are great for independent trees as they put a single tree into a single file. Those trees can then easily be imported again. Option 2 generates the same output as if running option 1 for every tree in the realm.

Dependent trees require other trees to already be available, or to be imported before the dependent tree itself; otherwise the AM APIs will complain and the tool will not be able to complete the import.

Option 3 is best suited for highly interdependent trees. It puts all the trees of a realm into the same file and on import of that file, the tool will always have all the required dependencies available.
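For example, option 1 presumably mirrors the single-tree import syntax shown later in this post, naming the tree with -e and the target file with -f (check the script’s usage output to confirm the exact flags):

Option 1: Export A Single Tree To A File
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -e simple -f simple.json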

Option 2: Export All Trees To Individual Files
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -S
 Export all trees to files
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............

Option 3: Export All Trees To Single File
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E -f authn_all.json
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............

After running both of those commands, we should find the expected files in our current directory:

ls -1
 authn_all.json
 email.json
 push.json
 push_reg.json
 push_reg_2fa.json
 risk.json
 select.json
 simple.json
 smart.json
 solid.json
 trusona.json
 webauthn.json
 webauthn_reg.json
 webauthn_reg_2fa.json

The second command (option 3) produced the single authn_all.json file as indicated by the -f parameter. The first command (option 2) generated individual files per tree.

Restore/Import

Import is just as simple as export. The tool brings in required scripts and resolves dependencies to inner trees, which means it orders trees on import to satisfy dependencies.

Exports omit secrets of all kinds (passwords, API keys, etc.) which may be stored in node configuration properties. Therefore, if we exported a tree whose configuration contains secrets, the imported tree will lack those secrets. If we want to more easily reuse trees (like I do in my demo/lab environments), we can edit the exported tree files and manually insert the secrets. Fields containing secrets are exported as null values. Once we manually add those secrets to our exports, they will import as expected.

{
  "origin": "003232731275e50c2770b3de61675fca",
  "innernodes": {},
  "nodes": {
    ...
    "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
      "_id": "B56DB408-E26D-4FBA-BF86-339799ED8C45",
      "hostName": "smtp.gmail.com",
      "password": null,
      "sslOption": "SSL",
      "hostPort": 465,
      "emailAttribute": "mail",
      "smsGatewayImplementationClass": "com.sun.identity.authentication.modules.hotp.DefaultSMSGatewayImpl",
      "fromEmailAddress": "vscheuber@gmail.com",
      "username": "vscheuber@gmail.com",
      "_type": {
        "_id": "OneTimePasswordSmtpSenderNode",
        "name": "OTP Email Sender",
        "collection": true
      }
    },
    ...
  },
  "scripts": {
    ...
  },
  "tree": {
    "_id": "email",
    "nodes": {
      ...
      "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
        "displayName": "Email OTP",
        "nodeType": "OneTimePasswordSmtpSenderNode",
        "connections": {
          "outcome": "08211FF9-9F09-4688-B7F1-5BCEB3984624"
        }
      },
      ...
    },
    "entryNodeId": "DF68B2B8-0F10-4FF3-9F2C-622DA16BA4B7"
  }
}

The JSON snippet above shows excerpts from the email tree. One of the nodes is responsible for sending a one-time password (OTP) via email to the user, thus needing SMTP gateway configuration. The export does not include the value of the password property in the node configuration. To make this export file reusable, we could replace null with the actual password. Depending on the type of secret, this might be acceptable or not.
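For example, a jq one-liner (jq is already a prerequisite of amtree.sh) could put the SMTP password back into the export before importing; the node ID is the one from this example and the password value is obviously a placeholder:

jq '.nodes."B56DB408-E26D-4FBA-BF86-339799ED8C45".password = "my-smtp-password"' email.json > email_with_secret.json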

Importing individual trees requires us to make sure all the dependencies are met. Amtree.sh provides a nice option, -d, to describe a tree export file. That will tell us if a tree has any dependencies we need to meet before we can import that single tree. Let’s take the select tree as an example. The select tree offers the user a choice of which 2nd factor they want to use to log in. Each choice then evaluates another tree, which implements the chosen method:

Running amtree.sh against the exported select.json file gives us a good overview of what the select tree is made of, which node types it uses, which scripts (if any) it references, and what other trees (inner trees) it depends on:

../amtree.sh -d -f select.json
 Tree: select
 ============

 Nodes:
 -----
 - ChoiceCollectorNode
 - InnerTreeEvaluatorNode 

 Scripts:
 -------
 None

 Dependencies:
 ------------
 - email
 - push
 - trusona
 - webauthn 

From the output of the -d option we can derive useful information:

  • Which nodes will we need to have installed in our AM instance? ChoiceCollectorNode and InnerTreeEvaluatorNode.
  • Which scripts will the tree export file install in our AM instance? None.
  • Which trees does this tree depend on? The presence of the InnerTreeEvaluatorNode already gave away that there would be dependencies. This list simply breaks them down: email, push, trusona, and webauthn.

Ignoring the dependencies we can try to import the file into an empty realm and see what amtree.sh will tell us:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i select -f select.json 

Importing select…Error importing node InnerTreeEvaluatorNode (D21C798F-D4E9-400A-A038-0E1A883348EB): {"code":400,"reason":"Bad Request","message":"Data validation failed for the attribute, Tree Name"}
{
  "_id": "D21C798F-D4E9-400A-A038-0E1A883348EB",
  "tree": "email",
  "_type": {
    "_id": "InnerTreeEvaluatorNode",
    "name": "Inner Tree Evaluator",
    "collection": true
  }
}

The error message confirms that dependencies are not met. This leaves us with 3 options:

  1. Instruct amtree.sh to import the four dependencies using the -i option before trying to import select.json. Of course that bears the risk that any or all of the 4 inner trees have dependencies of their own.
  2. Instruct amtree.sh to import the authn_all.json using the -I option. The tool will bring in all the trees in the right order, but there is no easy way to avoid importing any of the many trees in the file.
  3. Instruct amtree.sh to import all the .json files in the current directory using the -s option. The tool will bring in all the trees in the right order. Any trees we don’t want to import, we can move into a sub folder and amtree.sh will ignore them.

Let’s see how option 3 works out. To avoid errors, we need to move the authn_all.json file containing all the trees into a sub folder (“ignore” in my case). Then we are good to go:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -s
Import all trees in the current directory
Determining installation order..............................................
Importing email.........
Importing push_reg..............
Importing push_reg_2fa...............
Importing simple............
Importing trusona....
Importing webauthn.............
Importing webauthn_reg............
Importing webauthn_reg_2fa.............
Importing push..........
Importing risk........
Importing select.......
Importing smart..........
Importing solid......

No errors reported this time. You can see the tool spent quite some cycles determining the proper import order (the more dots, the more cycles). We would have likely run into nested dependencies had we tried option 1 and manually imported the four known dependencies.

A word of caution: Imports overwrite trees of the same name without any warning. Be mindful of that fact when importing into a realm with existing trees.

Migrate/Copy

Amtree.sh supports stdin and stdout for input and output. That allows us to pipe the output of an export command (-e or -E) to an import command (-i or -I) without storing anything on disk. That’s a pretty slick way to migrate trees from one realm to another in the same AM instance or across instances. The -s and -S options do not support stdin and stdout, thus they won’t work for this scenario.

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E | ../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -I
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............
 Determining installation order.............................
 Importing email.........
 Importing push..........
 Importing push_reg..............
 Importing push_reg_2fa...............
 Importing risk........
 Importing select.......
 Importing simple............
 Importing smart..........
 Importing solid......
 Importing trusona....
 Importing webauthn.............
 Importing webauthn_reg............
 Importing webauthn_reg_2fa.............

The above command copies all the trees in a realm to another realm. Nothing is ever exported to disk.

Prune

Trees consist of different configuration artifacts in the AM configuration store. When managing trees through the AM REST APIs, it is easy to forget to remove unused artifacts. Even when using the AM Admin UI, dead configuration is left behind every time a tree is deleted. The UI doesn’t give an admin any option to remove those dead artifacts, nor is there really a way to even see them. Over time, they will grow to an uncomfortable size and clutter the results of API calls.

Amtree.sh prunes those orphaned configuration artifacts when the -P parameter is supplied. I regularly delete all the default trees in a new realm, which leaves me with 33 orphaned configuration artifacts right out of the gate. To be clear: Those orphaned configuration artifacts don’t cause any harm. It’s a desire for tidiness that makes me want them gone.

./amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -P
Analyzing authentication nodes configuration artifacts…

Total:    118
Orphaned: 20

Do you want to prune (permanently delete) all the orphaned node instances? (N/y): y
Pruning....................
Done.

Wrap-Up & Resources

Amtree.sh is a big improvement for the handling and management of authentication trees in the ForgeRock Identity Platform. It is hardly the final solution, though. The implementation as a shell script limits both the supported platforms and the functionality. My fingers are itching to re-write it in a better suited language. Now there’s a goal for 2020!

If you want to explore the examples in this post, here are all the resources I used:

Use Authentication Trees To Create A Great SAML Login Experience

If you are shopping for a C/IAM platform these days, chances are the vendor pitch you are going to hear is all about OAuth and OIDC and JWT tokens and other shiny things for application integration, authentication, authorization, and single sign-on. And rightly so, as these standards truly offer great new capabilities or provide a modern implementation of old friends. But once the honeymoon is over and reality sets in, you are probably going to find yourself facing a majority of SAML applications and services that need integration and only a very small minority of modern applications supporting those shiny new standards.

The ForgeRock Identity Platform is an authentication broker and orchestration hub. How you come in and where you take your session is merely a matter of configuration. With the great new capabilities introduced with Authentication Trees, one might wonder how these new capabilities jive with old-timers like SAML. And wow do they jive!

With SAML there are two main scenarios per integration: Being the service provider (SP) or being the identity provider (IDP). The service provider consumes an attested identity and has limited control over how that identity was authenticated. The identity provider on the other hand dictates how identities authenticate.

Application users can start the process on the IDP side (IDP-initiated) or on the application side (SP-initiated). IDP-initiated login comes typically in the form of a portal listing all the applications the user has access to. Selecting any of these applications launches the login flow. SP-initiated flows start on the application side. Often users have bookmarked a page in the application, which requires authentication, causing the application to initiate the login flow for unauthenticated users.

This guide focuses on the SP-initiated flow and how you can use the ForgeRock Identity Platform to create the exact login experience you seek. The use case is:

“Controlling which authentication tree to launch in SP-initiated flows”.

The configuration option is a bit non-obvious and I have been asked a few times if the only way was to use the realm default authentication setting or whether there were alternatives. This is a great way to create an individual login journey for SAML users, distinct from the others.

To configure your environment, follow these steps:

  1. Follow the steps to configure SAML federation between the ForgeRock Identity Platform and your application. For this guide I configured my private Google Apps account as an SP.
  2. Test the SP-initiated flow. It should use your realm default for authentication (realm > authentication > settings > core).
  3. Now create the trees you want SAML users to use in the realm you are federating into. In this example, a tree called “saml” is the base tree that should be launched to authenticate SAML users.

    where the “Continue?” node just displays a message notifying users that they are logging in via SAML SP-initiated login. The “Login” node is an inner tree evaluator launching the “simple” tree, which lets them authenticate using username and password:

    The “2nd Factor” node is another inner tree evaluator, branching out into a tree that allows the user to select the 2nd factor they want to use:

    This guide will use the “push” tree for push authentication:
  4. Now navigate to the “Authentication Context” section in your “Hosted IDP” configuration in the AM Admin UI (Applications > Federation > Entity Providers > [your hosted entity provider] > Authentication Context):

    This is where the magic happens. Select “Service” from the “Key” drop-down list for all the supported authentication contexts (note that you could launch different trees based on what the SP proposes for authentication; Google seems to only support “PasswordProtectedTransport” by default) and enter the name of the base tree you want to execute in the “Value” column, “saml” in this configuration example.

Test your configuration:

  1. Launch your SP-initiated login flow. For Google GSuite you do that by pointing your browser to https://gsuite.google.com and select “Sign-In”, then type in your GSuite domain name, select the application you want to land in after authentication and select “GO”.
  2. Google redirects to ForgeRock Access Management. The “saml” tree displays the configured message, giving users the options to “Continue” or “Abort”. Select “Continue”:
  3. The “simple” tree executes, prompting for username and password:
  4. Now the flow comes back from the “simple” tree into the “saml” tree and branches out into the 2nd factor selector tree, “select”:
  5. Select “Push” and respond to the push notification on your phone while the web user interface gracefully waits for your response:
  6. And finally, a redirect back to Google Apps with a valid SAML assertion in tow completes the SP-initiated login flow:

Immutable Deployment Pattern for ForgeRock Access Management (AM) Configuration without File Based Configuration (FBC)

Introduction

The standard production-grade deployment pattern for ForgeRock AM is to use replicated sets of Configuration Directory Server instances to store all of AM’s configuration. This deployment pattern has worked well in the past, but it is less suited to the immutable, DevOps-enabled environments of today.

This blog presents an alternative view of how an immutable deployment pattern could be applied to AM in lieu of the upcoming full File Based Configuration (FBC) for AM in version 7.0 of the ForgeRock Platform. This pattern could also support easier transition to FBC.

Current Common Deployment Pattern

Currently most customers deploy AM with externalised Configuration, Core Token Service (CTS) and UserStore instances.

The following diagram illustrates such a topology spread over two sites; the focus is on the DS Config Stores, so the CTS and DS Userstore connections and replication topology have been simplified. Note that this blog is still applicable to single-site deployments.

Dual site AM deployment pattern. Focus is on the DS Configuration stores

In this topology, AM uses connection strings to the DS Config stores to enable an all-active Config store architecture, with each AM targeting one DS Config store as primary and the second as failover per site. Note that in this model there is no cross-site failover for AM-to-Config-store connections (possible but discouraged). The DS Config stores do communicate across sites for replication to create a full mesh, as do the User and CTS stores.

A slight divergence from this model, and one applicable to cloud environments, is to use a load balancer between AM and its DS Config Stores; however, we have observed many customers experience problems with features such as persistent searches failing due to dropped connections. Hence, where possible, Consulting Services recommends the use of AM connection strings.

It should be noted that AM connection strings specific to each AM can only be used if each AM has a unique FQDN — for example: https://openam1.example.com:8443/openam, https://openam2.example.com:8443/openam and so on.

For more on AM Connection Strings click here

Problem Statement

This model has worked well in the past; the DS Config stores contain all the stuff AM needs to boot and operate plus a handful of runtime entries.

However, times are a changing!

The advent of Open Banking introduces potentially hundreds of thousands of OAuth2 clients, AM policy entry numbers are ever increasing, and with UMA thrown in for good measure, the previously small, minimal-footprint, and fairly static DS Config Stores are suddenly much more dynamic and contain many thousands of entries. Managing the stuff AM needs to boot and operate alongside all this runtime data suddenly becomes much more complex.

TADA! Roll up the new DS App and Policy Stores. These new data stores address this by separating the stuff AM needs to boot and operate from long-lived, environment-specific data such as policies, OAuth2 clients, SAML entities, etc. Nice!

However, one problem still remains: it is difficult to do stack-by-stack, blue/green, or rolling deployments, and/or to support immutable-style deployments, as DS Config Store replication is in place and needs to be very carefully managed during deployment scenarios.

Some common issues:

  • Making a change to one AM can quite easily have a ripple effect through DS replication, which impacts and/or impairs the other AM nodes, both within the same site and at the remote site. This behaviour can make customers hesitant to introduce patches, config, or code changes.
  • In a dual-site environment, the typical deployment pattern is to stop cross-site replication, force traffic to site B, disable site A, upgrade site A, test it in isolation, force traffic back to the newly deployed site A, ensure production is functional, disable traffic to site B, push replication from site A to site B, re-enable replication, and upgrade site B before finally returning to normal service.
  • Complexity is further increased if App and Policy stores are not in use, as the in-service DS Config stores may have new OAuth2 clients, UMA data, etc. created during the transition which need to be preserved. So, in the above scenario, an LDIF export of site B’s DS Config Stores for such data needs to be taken and imported into site A before site A goes live (to catch changes made while the site A deployment was in progress), and after site B is disabled another LDIF export needs to be taken from B and imported into A to catch any last-minute changes between the first LDIF export and the switchover. Sheesh!
  • Even in a single-site deployment model, managing replication as well as the AM upgrade/deployment itself introduces risk and several potential break points.

New Deployment Model

The real enabler for a new deployment model for AM is the introduction of App and Policy stores, which will be replicated across sites. They enable full separation of the stuff AM needs to boot and run from environmental runtime data. In such a model, the DS Config stores return to a minimal footprint, containing only AM boot data, with the App and Policy Stores containing the long-lived environmental runtime data which is typically subject to zero-loss SLAs and long-term preservation.

Another enabler is a different configuration pattern for AM, where each AM effectively has the same FQDN and serverId, allowing AM to be built once and then cloned into an image for rapid expansion and contraction of the AM farm, without having to interact with the DS Config Store to add/delete instances or go through the build process again and again.

Finally, the last key component of this model is Affinity-Based Load Balancing for the Userstore, CTS, App, and Policy stores, which both simplifies the configuration and enables an all-active datastore architecture immune to data misses caused by replication delay; it is central to this new model.

Affinity is a unique feature of the ForgeRock platform and is used extensively by many customers. For more on Affinity click here.

The proposed topology below illustrates this new deployment model and is applicable to both active-active and active-standby deployments. Note that cross-site replication for the User, App, and CTS stores is depicted, but for global/isolated deployments it may well not be required.

Localised DS Config Store for each AM with replication disabled

As the DS Config store footprint will be minimal, the proposal is to move the DS Config Stores local to AM, with each AM built with exactly the same FQDN and serverId, to enable immutable configuration and massively simplify stack-by-stack/blue-green/rolling deployments. Each local DS Config Store lives in isolation; replication is not enabled between these stores.

In order to provision each DS Config Store in lieu of replication, either the same build script can be executed on each host, or, as a quicker and more optimised approach, one AM-DS Config Store instance/Pod can be built in full, cloned, and the complete image deployed for each new AM-DS instance. The latter approach removes the need to interact with Amster to build additional instances, or, for example, with Git to pull configuration artefacts. With this model, any new configuration change requires a new package/Docker image/AMI, etc.; in other words, an immutable build.

At boot time AM uses its local address to connect to its DS Config Store and Affinity to connect to the user Store, CTS and the App/Policy stores.

Advantages of this model:

  • As the DS Config Stores are not replicated, most AM configuration and code-level changes can be implemented or rolled back (using a new image or similar) without impacting any of the other AM instances and without the complexity of managing replication. Blue/green, rolling, and stack-by-stack deployments and upgrades are massively simplified, as is rollback.
  • Enables simplified expansion and contraction of the AM pool, especially if an image/clone of a full AM instance and associated DS Config instance is used. This cloning approach also protects against configuration changes in Git or other code repositories inadvertently rippling to new AM instances; the same code and configuration base is deployed everywhere.
  • Promotes the cattle-vs-pets paradigm: for any new configuration, deploy a new image/package.
  • This approach does not require any additional instances; the existing DS Config Stores are repurposed as App/Policy stores, and the new minimal DS Config Stores are hosted locally to AM (or in a small container in the same Pod as AM).
  • The existing DS Config Stores can be quickly repurposed as App/Policy Stores; no new instances or data-level deployment steps are required other than tuning up the JVM and potentially uprating storage, enabling rapid switching from DS Config to App/Policy Stores.
  • Enabler for FBC: when FBC becomes available, the local DS Config stores are simply stopped in favour of FBC. Also, if the transition to FBC becomes problematic, rollback is easy — fire up the local DS Config stores and revert.

Disadvantages of this model:

  • No DS Config Store failover: if the local DS Config Store fails, the AM connected to it would also fail and not recover. However, this fits well with the pets-vs-cattle paradigm; if a local component fails, kill the whole instance and instantiate a new one.
  • Any log systems which have logic based on individual FQDNs for AM (Splunk, etc.) would need their configuration modified to take into account that each AM now has the same FQDN.
  • This deployment pattern is only suitable for customers who have mature DevOps processes. The expectation is that no changes are made in production; instead, a new release/build is produced and promoted to production. If, for example, a customer makes changes directly via REST or the UI, then these changes will not be replicated to the other AM instances in the cluster, which would severely impair performance and stability.

Conclusions

This suggested model would significantly improve a customer’s ability to take on new configuration/code changes, and potentially roll them back, without impacting other AM servers in the pool; it makes effective use of the App/Policy stores without additional kit, allows an easy transition to FBC, and enables DevOps-style deployments.

This blog post was first published @ https://medium.com/@darinder.shokar included here with permission.

Deploying the ForgeRock platform on Kubernetes using Skaffold and Kustomize


If you are following along with the ForgeOps repository, you will see some significant changes in the way we deploy the ForgeRock IAM platform to Kubernetes.  These changes are aimed at dramatically simplifying the workflow to configure, test and deploy ForgeRock Access Manager, Identity Manager, Directory Services and the Identity Gateway.

To understand the motivation for the change, let’s recap the current deployment approach:

  • The Kubernetes manifests are maintained in one git repository (forgeops), while the product configuration is in another (forgeops-init).
  • At runtime, Kubernetes init containers clone the configuration from git and make it available to the component using a shared volume.

The advantage of this approach is that the docker container for a product can be (relatively) stable. Usually it is the configuration that is changing, not the product binary.

This approach seemed like a good idea at the time, but in retrospect it created a lot of complexity in the deployment:

  • The runtime configuration is complex, requiring orchestration (init containers) to make the configuration available to the product.
  • It creates a runtime dependency on a git repository being available. This isn’t a show stopper (you can create a local mirror), but it is one more moving part to manage.
  • The helm charts are complicated. We need to weave git repository information throughout the deployment. For example, putting git secrets and configuration into each product chart. We had to invent a mechanism to allow the user to switch to a different git repo or configuration – adding further complexity. Feedback from users indicated this was a frequent source of errors.
  • Iterating on configuration during development is slow. Changes need to be committed to git and the deployment rolled to test out a simple configuration change.
  • Kubernetes rolling deployments are tricky. The product container version must be in sync with the git configuration. A mistake here might not get caught until runtime.

It became clear that it would be *much* simpler if the products could just bundle the configuration in the docker container so that it is “ready to run” without any complex orchestration or runtime dependency on git.

[As an aside, we often get asked why we don’t store configuration in ConfigMaps. The short answer is: We do – for top level configuration such as domain names and global environment variables. Products like AM have large and complex configurations (~1000 json files for a full AM export). Managing these in ConfigMaps gets to be cumbersome. We also need a hierarchical directory structure – which is an outstanding ConfigMap RFE.]

The challenge with the “bake the configuration in the docker image” approach is that it creates *a lot* of docker containers. If each configuration change results in a new (and unique) container, you quickly realize that automation is required to be successful.

About a year ago, one of my colleagues happened to stumble across a new tool from Google called skaffold. From the documentation:

“Skaffold handles the workflow for building, pushing and deploying your application.
So you can focus more on application development”
To some extent skaffold is syntactic sugar on top of this workflow:

docker build; docker tag; docker push;
kustomize build | kubectl apply -f -

Calling it syntactic sugar doesn’t really do it justice, so do read through their excellent documentation.

There isn’t anything that skaffold does that you can’t accomplish with other tools (or a whack of bash scripts), but skaffold focuses on smoothing out and automating this basic workflow.

A key element of Skaffold is its tagging strategy. Skaffold will apply a unique tag to each docker image (the tagging strategy is pluggable, but is generally a sha256 hash, or a git commit). This is essential for our workflow, where we want to ensure that the combination of the product (say AM) and a specific configuration is guaranteed to be unique. By using a git commit tag on the final image, we can be confident that we know exactly how a container was built, including its configuration. This also makes rolling deployments much more tractable, as we can update a deployment tag and let Kubernetes spin down the older container and replace it with the new one.
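Roughly speaking, for each artifact skaffold is automating something like the following, sketched here with a git-commit tag and a hypothetical gcr.io image name:

# Build, tag, and push one image the way skaffold's git-commit tagger would.
TAG=$(git rev-parse --short HEAD)
docker build -t gcr.io/my-project/idm:"${TAG}" ./docker/idm
docker push gcr.io/my-project/idm:"${TAG}"
# The deployment manifest is then updated to reference that unique tag and applied.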
If it isn’t clear from the above, the configuration for the product lives inside the docker image, and that in turn is tracked in a git repository. If for example you check out the source for the IDM container (https://github.com/ForgeRock/forgeops/tree/master/docker/idm), you will see that the Dockerfile COPYs the configuration into the final image. When IDM runs, its configuration will be right there, ready to go.

Skaffold has two major modes of operation. The “run” mode is a one-shot build, tag, push and deploy. You will typically use skaffold run as part of a CD pipeline: watch for a git commit, and invoke skaffold to deploy the change. Again – you can do this with other tools, but Skaffold just makes it super convenient.

Where Skaffold really shines is in “dev” mode. If you run skaffold dev, it will run a continual loop, watching the file system for changes, and rebuilding and deploying as you edit files.

This diagram (lifted from the skaffold project) shows the workflow:

This process is really snappy. We find that we can deploy changes within 20-30 seconds (most of that is just container restarts). When pushing to a remote GKE cluster, the first deployment is a little slower as we need to push all those containers to gcr.io, but subsequent updates are fast as you are pushing configuration deltas that are just a few KB in size.

Note that git commits are not required during development. A developer will iterate on the desired configuration, and only when they are happy will they commit the changes to git and create a pull request. At this stage a CD process will pick up the commit and deploy the change to a QA environment. We have a simple CD sample using Google Cloudbuild.

At this point we haven’t said anything about helm and why we decided to move to Kustomize.

Once our runtime deployments became simpler (no more git init containers, simpler ConfigMaps, etc.), we found ourselves questioning the need for complex helm templates. There was some internal resistance from our developers on using golang templates (they *are* pretty ugly when combined with yaml), and the security issues raised by Helm’s Tiller component raised additional red flags.

Suffice to say, there was no compelling reason to stick with Helm, and transitioning to Kustomize was painless. A shout out to the folks at Replicated – who have a very nifty tool called ship, which will convert your helm charts to Kustomize. The “port” from Helm to Kustomize took a couple of days. We might look at Helm 3 in the future, but for now our requirements are being met by Kustomize. One nice side effect that we noticed is that Kustomize deployments with skaffold are really fast.

This work is being done on the master branch of forgeops (targeting the 7.0 release), but if you would like to try out this new workflow with the current (6.5.2) products, you are in luck! We have a preview branch that uses the current products.
The following should just work(TM) on minikube:
cd forgeops
git checkout skaffold-6.5
skaffold dev 
There are some prerequisites that you need to install. See the README-skaffold.
The initial feedback on this workflow has been very positive. We’d love for folks to try it out and let us know what you think. Feel free to reach out to me at my ForgeRock email (warren dot strange at forgerock.com).

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Use ForgeRock Access Manager to provide MFA to Linux using PAM Radius


Introduction

Our aim is to set up an integration that provides Multi-Factor Authentication (MFA) for the Linux (Ubuntu) platform using ForgeRock Access Manager. The integration uses a Pluggable Authentication Module (PAM) that points to a RADIUS server; in this case, AM is configured as the RADIUS server.

We achieve the following:

  1. Outsource Authentication of Linux to ForgeRock Access Manager.
  2. Provide an MFA solution to the Linux Platform.
  3. Configure ForgeRock Access Manager as a RADIUS Server.
  4. Configure PAM on the Linux server to point to our new RADIUS server.

Setup

  • ForgeRock Access Manager 6.5.2 installed and configured.
  • OS — Ubuntu 16.04.
  • PAM exists on your server (this is common these days; you’ll find the PAM configuration in /etc/pam.d/ ).

Configuration Steps

Configure a chain in AM

First, we configure a simple authentication chain in Access Manager with two modules.

a. First module – DataStore.

b. Second module – HOTP, with email configured to point to a local fakesmtp server.

Simple Authentication Chain

Configure ForgeRock AM as a RADIUS Server

Now we configure AM as a RADIUS Server.

a. Follow steps here: https://backstage.forgerock.com/docs/am/6.5/radius-server-guide/#chap-radius-implementation

RADIUS Server setup

Secondary Configuration — i.e. RADIUS Client

We have to configure a trusted RADIUS client, our Linux server.

a. Enter the IP address of the client (Linux Server).

b. Set the Client Secret.

c. Select your Realm (I used the top level realm; don’t do this in production!).

d. Select your Chain.

Configure RADIUS Client

Configure pam_radius on Linux Server (Ubuntu)

Following these instructions, configure pam_radius on your Linux server:

https://www.wikidsystems.com/support/how-to/how-to-configure-pam-radius-in-ubuntu/

a. Install pam_radius.

sudo apt-get install libpam-radius-auth

b. Configure pam_radius to talk to your RADIUS server (in this case, AM).

sudo vim /etc/pam_radius_auth.conf

i.e. <AM Server VM>:1812 secret 1

Point pam_radius to your RADIUS server
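
For reference, each line in /etc/pam_radius_auth.conf has the form server[:port], shared secret, timeout in seconds; the sketch below simply restates the example above:

# server[:port]        shared_secret    timeout (s)
<AM Server VM>:1812    secret           1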

Tell SSH to use pam_radius for authentication.

a. Add this line to the top of the /etc/pam.d/sshd file.

auth sufficient pam_radius_auth.so debug

Note: debug is optional and has been added for testing; do not enable it in production.

pam_radius is sufficient for authentication

Enable Challenge response for MFA

Tell your sshd config to allow challenge/response and use PAM.

a. Set the following values in your /etc/ssh/sshd_config file.

ChallengeResponseAuthentication yes

UsePAM yes
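
b. sshd only reads its configuration at startup, so restart the SSH service for the change to take effect (on Ubuntu the unit is named ssh; on some other distributions it is sshd):

sudo systemctl restart ssh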

Create a local user on your Linux server and in AM

In this simple use case you will require a separate account on your Linux server and in AM.

a. Create a Linux user.

sudo adduser test

Note: Make sure the user has a different password than the user in AM to ensure you’re not authenticating locally. Users may have no password if your system allows it, but in this demo I set the password to some random string.

Create Linux User

b. Ensure the user is created in AM with an email address.

User exists in AM with an email address for OTP

Test Authentication to Unix via SSH

It’s now time to put it all together.

a. I recommend you tail the auth log file.

tail -f /var/log/auth.log

b. SSH to your server using:

ssh test@<server name>

c. You should be authenticating against the first module in your AM chain, so enter your AM password.

d. You should be prompted for your OTP; check your email.

OTP generated and sent to mail attribute on user

e. Enter your OTP and press Enter, then Enter again (the challenge/response UI is not super friendly here).

f. If entered successfully, you should be logged in.

You can follow the auth logs, as well as the AM logs (i.e. authentication.audit.json), to observe the process.

The End.


This blog post was first published @ https://medium.com/@marknienaber included here with permission.

Implementing JWT Profile for OAuth2 Access Tokens

There is a new IETF draft called JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens. This is a very early (version 00) draft that looks to describe the format of OAuth2-issued access_tokens.

Access tokens are typically bearer tokens, but the OAuth2 spec doesn’t really describe what format they should be. They typically end up being one of two high-level types: stateless and stateful. Stateful just means “by reference”: a long, opaque, random string is issued to the requestor, which resource servers can then send back to the authorization service in order to introspect and validate it. On their own, stateful or reference tokens don’t really provide the resource servers with any detail.

The alternative is to use a stateless token, namely a JSON Web Token (JWT). This new spec aims to standardise what the content and format should be.

From a ForgeRock AM perspective, this is good news. AM has delivered JWT-based tokens (web session, OIDC id_tokens and OAuth2 access_tokens) for a long time. The format and content of the access_tokens, out of the box, generally look something like the following:

The out of the box header (using RS256 signing):
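
For illustration only (the key ID is a placeholder and the exact fields can vary by deployment and signing configuration), an RS256 header looks something like this:

{
  "typ": "JWT",
  "alg": "RS256",
  "kid": "<signing key identifier>"
}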

The out of the box payload:
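
Again for illustration only (claim values are placeholders and the list is not exhaustive), the default payload carries claims along these lines:

{
  "sub": "<subject>",
  "iss": "https://<am-host>/oauth2",
  "aud": "<client id>",
  "client_id": "<client id>",
  "scope": ["profile"],
  "grant_type": "authorization_code",
  "tokenName": "access_token",
  "authGrantId": "<grant id>",
  "realm": "/",
  "cts": "<token store reference>",
  "cnf": "<confirmation key (proof of possession)>",
  "auth_time": 1565000000,
  "nbf": 1565000000,
  "exp": 1565003600,
  "expires_in": 3600,
  "jti": "<token identifier>"
}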

Note there is a lot of stuff in that access_token. Note the cnf claim (confirmation key): this is used for proof-of-possession support, which is of course optional, so you can easily reduce the size by not implementing that. There are several claims that are specific to the AM authorization service and may not always be needed in a stateless JWT world, where perhaps the RS is performing offline validation away from the AS.

In AM 6.5.2 and above, new functionality allows you to rapidly customize the content of the access_token. You can add custom claims, remove out-of-the-box fields and generally build token formats that suit your deployment. We do this through the addition of scriptable support. Within the settings of the OAuth2 provider, note the new field for the OAuth2 Access Token Modification Script.

The scripting ability was already in place for OIDC id_tokens; similar concepts now apply.

The draft JWT Profile spec basically mandates iss, exp, aud, sub and client_id, with auth_time and jti as optional. The AM token already contains those claims. Perhaps the only differing component is that the JWT Profile spec (section 2.1) recommends the header typ value be set to “at+JWT” (meaning access token JWT), so the RS does not confuse the token with an id_token. The ForgeRock AM scripting support does not allow changes to the typ, but the payload already contains a tokenName claim (with the value access_token) to help make this distinction.

If we add a couple of lines to the out-of-the-box script, namely the following, we cut the token content back to the recommended JWT Profile:

accessToken.removeField("cts");
accessToken.removeField("expires_in");
accessToken.removeField("realm");
accessToken.removeField("grant_type");
accessToken.removeField("nbf");
accessToken.removeField("authGrantId");
accessToken.removeField("cnf");

The new token payload is now much more slimmed down:

The accessToken.setField("name", "value") method allows simple extension and alteration of standard claims.
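
For example (the claim name and value here are purely hypothetical), a deployment-specific claim can be added in the same script:

accessToken.setField("acme_tenant", "tenant-a");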

For further details, see the documentation on scripted token content: https://backstage.forgerock.com/docs/am/6.5/oauth2-guide/#modifying-access-tokens-scripts

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

ForgeRock DS and the LDAP Relax Rules Control

In ForgeRock Directory Services 6.5, we’ve added support for the LDAP Relax Rules Control, both in the server and in our client tools. One of my colleagues, involved with customer deployments, asked me why we’ve added the control and what it should be used for.

The LDAP Relax Rules Control is an LDAP extension that allows a directory user agent (a client) to request the directory service to temporarily relax enforcement of various data and service model rules. The internet-draft is explicit about which rules can be relaxed or not. But typically it can be used to allow a client to write specific operational attributes that should be read-only and managed by the server.

Starting with OpenDJ 3.0, we removed the ability to bulk import LDIF data into a server while preserving the existing data (the “append mode”). First, performing an import-ldif in append mode was breaking replication: the import needed to be applied to all replicas, while no changes could happen to the new data in the meantime. The process was cumbersome, especially with multiple data centres. Removing this feature also allowed us to have a more generic interface and to implement multiple backends using different underlying key-value stores.

But we have a few customers that occasionally need to bulk load a large set of users into their directory service. In DS 6.0, we added an option to speed up bulk operations using ldapmodify or ldapdelete: --numConnections. Instead of serialising all updates or adds contained in an LDIF file, the tool runs them in parallel across multiple connections, while also controlling dependencies between changes. With this option, some of our customers have added several million users to their replicated directory services in minutes. By controlling the number of connections, one can also balance the need for speed when bulk loading data against the need to keep bandwidth for the regular client applications.

Doing bulk updates over LDAP is now fast, but some customers also used the import process to carry over attributes that are usually managed by the directory server and are thus read-only, such as createTimestamp and creatorsName.

And this is specifically what the Relax Rules Control is meant to allow.

So, if you need to bulk load a large set of data, or synchronise data over LDAP from another server, and need to preserve some of the operational attributes, you can use the Relax Rules Control as illustrated below. Note that the OID for the control is 1.3.6.1.4.1.4203.666.5.12, but the ForgeRock DS tools also recognise the RelaxRules string alias.

$ ldapmodify -p 1389 -D "cn=directory manager" -w secret12 \
  -J RelaxRules:true --numConnections 4 ../50Kusers.ldif
...
ADD operation successful for DN uid=user.10021,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10022,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10001,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10020,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10026,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10025,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10024,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10005,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10033,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10029,ou=People,dc=example,dc=com
...

Note that because the Relax Rules Control allows some of the rules normally enforced by the server to be overridden, it’s important to control and restrict which clients or users are allowed to make use of it. In ForgeRock DS, you would use ACIs (global or not) to define who has permission to use the control. Out of the box, only Directory Manager can, because it has the bypass access controls privilege. Check the “Use Control or Extended Operation” section of the Administration Guide for details on how to allow a user to use a control.
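
As a sketch (the user DN is hypothetical; check the Administration Guide for the exact syntax and scope appropriate to your deployment), an ACI granting a specific account permission to use the control looks something like this:

(targetcontrol="1.3.6.1.4.1.4203.666.5.12")
(version 3.0; acl "Allow Relax Rules control"; allow(read)
 userdn="ldap:///uid=bulk-admin,ou=people,dc=example,dc=com";)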

This blog post was first published @ ludopoitou.com, included here with permission.