Using an Authentication Tree Stage to Build a Custom UI with the ForgeRock JavaScript SDK

The ForgeRock JavaScript SDK greatly simplifies the process of adding intelligent authentication to your applications. It offers a ready-to-use UI that completely handles rendering of authentication steps. You can also take full control and create a custom UI, in which case it’s helpful to know the current stage of the authentication tree so you can determine which UI to render.

OpenAM 7.0 adds a stage property to the page node that can be used for this purpose; alternative approaches are available for earlier versions. This post will show you two approaches for OpenAM 6.5, and one for OpenAM 7.0.

OpenAM 6.5

While the stage property doesn’t exist in authentication trees prior to OpenAM 7.0, there are two alternative approaches that achieve the same result.

Approach #1: Metadata Callbacks

Using metadata callbacks, you can inject the stage value into the tree’s response payload. The only difference is that the value appears in a callback, rather than being directly associated with the step itself. This approach involves three steps:

  1. Create a script to add the metadata callback.
  2. Update your tree to execute that script.
  3. Read the metadata callback in your application.

Step 1: Add a Metadata Callback Using a Script

  • Create a script of type Decision node script for authentication trees.
  • Give it an appropriate name, such as “MetadataCallback: UsernamePassword”.
  • In the script, add a metadata callback that creates an object with a stage property. Be sure to also set the outcome value:
var fr = JavaImporter(
  org.forgerock.json.JsonValue,
  org.forgerock.openam.auth.node.api.Action,
  com.sun.identity.authentication.spi.MetadataCallback
);

with (fr) {
  var json = JsonValue.json({ stage: "UsernamePassword" });
  action = Action.send(new MetadataCallback(json)).build();
}

outcome = "true";

As with all scripts, ensure you have whitelisted any imported classes.

Step 2: Update Your Tree to Execute the Script

Add a scripted decision node to your page node and configure it to reference the script created in the previous step. In this example, the step payload will contain three callbacks:

  • MetadataCallback
  • NameCallback
  • PasswordCallback

Step 3: Read the Metadata Callback

Use the SDK to find the metadata callback and read its stage property:

function getStage(step) {
  // Get all metadata callbacks in the step
  const metadataCallbacks = step.getCallbacksOfType(CallbackType.MetadataCallback);

  // Find the first callback that contains a "stage" value in its data
  const stage = metadataCallbacks
    .map(x => {
      const data = x.getData();
      const dataIsObject = typeof data === "object" && data !== null;
      return dataIsObject && data.stage ? data.stage : undefined;
    })
    .find(x => x !== undefined);

  // Return the stage value, which will be undefined if none exists
  return stage;
}

Approach #2: Inspecting Callbacks

If you have relatively few and/or well-known authentication trees, it’s likely you can determine the stage by simply looking at the types of callbacks in the step.

For example, it’s common for a tree to start by capturing the username and password. In this case, you can inspect the callbacks to see if they consist of a NameCallback and PasswordCallback. If your tree uses WebAuthn for passwordless authentication, the SDK can help with this inspection:

function getStage(step) {
  // Check if the step contains callbacks for capturing username and password
  const usernameCallbacks = step.getCallbacksOfType(CallbackType.NameCallback);
  const passwordCallbacks = step.getCallbacksOfType(CallbackType.PasswordCallback);
  if (usernameCallbacks.length > 0 && passwordCallbacks.length > 0) {
    return "UsernamePassword";
  }

  // Use the SDK to determine if this is a WebAuthn step
  const webAuthnStepType = FRWebAuthn.getWebAuthnStepType(step);
  if (webAuthnStepType === WebAuthnStepType.Authentication) {
    return "DeviceAuthentication";
  } else if (webAuthnStepType === WebAuthnStepType.Registration) {
    return "DeviceRegistration";
  }

  // ... Add checks to determine other stages in your trees ...

  return undefined;
}

OpenAM 7.0 Approach

Specifying a stage in OpenAM 7.0 is straightforward: when constructing a tree, place nodes inside a page node, and then specify its stage, which is a free-form text field.

When you use the SDK’s FRAuth module to iterate through a tree, you can call the getStage() method on the returned FRStep and decide which custom UI to render:

// Get the current step in the tree
const currentStep = await FRAuth.next(previousStep);

// Use the stage value configured in the tree
switch (currentStep.getStage()) {
  case "UsernamePassword":
    // Render your custom username/password UI
    break;
  case "SomeOtherStage":
    // etc
    break;
}

DS: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment

Introduction

This is the continuation of the previous blog about a Zero Downtime Upgrade Strategy Using a Blue/Green Deployment for AM. Traditionally, ForgeRock Directory Server (DS) upgrades are handled via a rolling upgrade strategy using an in-place update. As many deployments have constraints around this approach (zero downtime, immutable, etc.), a parallel deployment approach, also known as a blue/green strategy, can be leveraged for upgrading ForgeRock DS servers.

This blog provides a high-level approach for using a blue/green methodology for updating ForgeRock DS-UserStores.

This corresponds to Unit 3: DS-UserStores in our overall ForgeRock upgrade approach.

ForgeRock Upgrade Units
Unit 3: DS-UserStores Upgrade Process

Prerequisites/Assumptions

1. This approach assumes that your infrastructure processes have the ability to install a parallel deployment for an upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing DS deployment (like a 3.5.x version), and the green reflects a new DS deployment (like a 6.5.x version).

3. There are N+1 DS servers deployed in your existing deployment. N servers are used for your production workload and one server is reserved for maintenance activities like backup, upgrades, etc. If there is no maintenance server, then you may need to remove one server from the production cluster (thereby reducing production load capacity) or install an additional DS server node for this upgrade strategy.

4. Review the release notes for all DS versions between the existing and target DS deployments for new and deprecated features, bug fixes, and other changes. For a DS 3.5 to DS 6.5 upgrade, review the Release Notes for DS 5.0, 5.5, 6.0, and 6.5.

Upgrade Process

1. Unconfigure replication for the DS-3 user store. Doing so ensures that the upgrade doesn’t impact your existing DS deployment.

2. Upgrade DS-3 in place using the DS upgrade process (see the command sketch after this list).

3. Create a backup from DS-3 using the DS backup utility.

4. Configure green RS-1’s replication with the existing blue replication topology.

5. Configure green RS-2’s replication with the existing blue replication topology.

6. Install green DS-1 and restore data from backup using the DS restore utility.

7. Install green DS-2 and restore data from backup using the DS restore utility.

8. Install green DS-3 and restore data from backup using the DS restore utility.

9. Configure green DS-1’s replication with green RS-1.

10. Configure green DS-2’s replication with green RS-1.

11. Configure green DS-3’s replication with green RS-1.
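For illustration, the commands below sketch steps 1 through 3. Hostnames are hypothetical, and command names and flags differ between DS versions (for example, dsreplication disable in DS 3.5 versus dsreplication unconfigure in DS 6.5), so verify each against the documentation for your versions:

# Step 1: remove DS-3 from the blue replication topology (DS 3.5-style syntax)
dsreplication disable --disableAll --hostname ds3.blue.example.com --port 4444 \
  --adminUID admin --adminPassword password --trustAll --no-prompt

# Step 2: upgrade DS-3 in place, after unpacking the new version over the old one
./upgrade --acceptLicense --no-prompt

# Step 3: back up the upgraded user data
backup --backendID userRoot --backupDirectory /backups/userRoot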

Switch Over to the New Deployment

12. After validating that the new deployment is working correctly, switch the load balancer from blue to green. This can also be done incrementally. If any issues occur, you can always roll back to the blue deployment.

If direct hostnames are used by DS clients, such as AM, IDM, etc., then those configurations need to be updated to leverage new green hostnames.

Post Go-Live

13. Unconfigure the blue RS1 replication server to remove this server from blue’s replication topology.

14. Unconfigure the blue RS2 replication server to remove this server from blue’s replication topology.

15. Stop the blue DS servers.

16. Stop the blue RS servers.

17. De-provision the blue deployment.

Conclusion

Although a blue/green deployment requires a high level of deployment maturity, this approach provides an elegant way to minimize downtime for ForgeRock deployment upgrades. It is always advisable to try an upgrade strategy in lower environments, such as dev and stage, before moving to a production environment.

Depending on the complexity of your deployment, there can be multiple things to consider for these upgrades, such as customizations, new ForgeRock features, etc. It is always recommended to break the entire upgrade process into multiple releases, like a “base upgrade” followed by “leveraging new features”, and so on.


AM and IG: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment

Introduction

The standard deployment for the ForgeRock Identity Platform consists of multiple ForgeRock products such as IG, AM, IDM, and DS. As newer ForgeRock versions are released, deployments using older versions need to be migrated before they reach their end of life. Also, newer versions of ForgeRock products provide features such as intelligent authentication and the latest OAuth standards, which help businesses implement complex use cases.

ForgeRock Deployment Components

Problem Statement

Traditionally, ForgeRock upgrades are handled via a rolling upgrade strategy using an in-place update. This strategy doesn’t suit all deployments due to the following constraints:

  • Many deployments don’t allow any downtime. This means production servers can’t be stopped for upgrade purposes.
  • Some deployments follow an immutable instances approach. This means no modification is allowed on the current running servers.

To resolve these constraints, a parallel deployment approach, also known as a blue/green strategy, can be leveraged for upgrading ForgeRock servers.

Solution

This article provides a high-level approach for using a blue/green methodology for updating ForgeRock AM servers and related components like DS-ConfigStore, DS-CTS, AM-Agents, and IG servers. We plan to cover similar strategies for DS-UserStores and IDM in future articles.

In order to upgrade a ForgeRock deployment, we first need to analyze the dependencies between the various ForgeRock products and their impact on the upgrade process:

Given the dependencies between ForgeRock products, it is generally advisable to upgrade AM before upgrading DS, AM agents, and others, as new versions of AM support older versions of DS and AM agents, but the converse may not be true.

Note: There can be some exceptions to this rule. For example:

  • Web policy agents 4.x are compatible with AM 6.0, but not with AM 6.5. This means the order of upgrade should be: existing version to AM 6.0 => AM agents 4.x to 5.x => AM 6.0 to AM 6.5.x.
  • If an AM-IDM integration is used, then both AM and IDM need to be upgraded at the same time.

Upgrade Units

ForgeRock Upgrade Units

A ForgeRock Identity Platform deployment can be divided into four units, so that the upgrade of each unit can be handled individually:

  • Unit 1: AM and its related stores (DS-Config and DS-CTS)
  • Unit 2: AM-Agents/IG
  • Unit 3: DS-UserStores
  • Unit 4: IDM and its datastore

The order of upgrade used by our approach shall be Unit 1 => Unit 2 => Unit 3 => Unit 4.

Unit 1: AM Upgrade


Prerequisites/Assumptions

1. This approach assumes that your infrastructure processes have the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing AM deployment (like an OpenAM 13.5.x version), and the green cluster reflects a new AM deployment (like an AM 6.5.x version).

3. There are N+1 AM servers and corresponding config stores deployed in your existing deployment. This means N servers are used for production load, and one server is reserved for maintenance activities like backup, upgrades, and others. If there is no such maintenance server, then you may need to remove one server from the production cluster (thereby reducing production load capacity) or install an additional node (AM server and corresponding config store) for this upgrade.

4. Sessions in the CTS servers are not replicated during the blue/green switch; therefore, users are expected to re-authenticate after this migration. If your business use cases require users to remain authenticated, then these sessions (like OAuth refresh tokens) need to be synced from the old to the new deployment. Mechanisms like LDIF export/import or the IDM synchronization engine can be leveraged for syncing selective tokens from the old to the new deployment. Also, refer to the AM Release Notes on session compatibility across AM versions.

5. Review the Release Notes for all AM versions between the existing and target AM deployments for new features, deprecated features, bug fixes, and so on. For an OpenAM 13.5 to AM 6.5 upgrade, review the Release Notes for AM 5.0, 5.5, 6.0, and 6.5.

Upgrade Process

1. Unconfigure replication for the DS-3 Config store. This ensures that the upgrade doesn’t impact the existing AM deployment.

2. Upgrade AM-3 in place using the AM upgrade process. Note: You may need to handle new AM features in this process, such as AM 6.5 secrets.

3. Export Amster configs from AM-3.

4. Transform the Amster export so that it is aligned with the new green deployment (for example, DS hostname:port values); see the sketch after this list.

5. Install the AM, DS-Config, and DS-CTS servers. Import the Amster export into the new green cluster. Note: For certain deployment patterns, such as ForgeRock immutable deployment, the Amster import needs to be executed for each AM node. If a shared config store is used, then the Amster import needs to be executed only once, and the other nodes just need to be added to the existing AM site.
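A rough sketch of steps 3 through 5 with Amster (hostnames and paths are illustrative):

./amster
am> connect https://am-blue.example.com:8443/am -i
am> export-config --path /tmp/amster-export
am> :quit
# ... transform hostnames, ports, and secrets in the export for the green cluster ...
./amster
am> connect https://am-green.example.com:8443/am -i
am> import-config --path /tmp/amster-export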

Switch Over to the New Deployment

6. After validating that the new deployment is working correctly, switch the load balancer from blue to green. This can also be done incrementally. If any issues occur, you can always roll back to the blue deployment.

Note: Any configuration changes made after the blue cluster’s Amster export should be applied to both the blue and green deployments, so that no configuration change is lost during switchover or rollback.

Post Go-Live

7. Stop the AM servers in the blue deployment.

8. Stop the Config and CTS DS servers in blue deployment.

9. De-provision the blue deployment.

Unit 2: AM-Agent/IG Upgrade

Unit 2: AM-Agent/IG Upgrade Process

AM-Agent

Prerequisites/Assumptions

1. This approach assumes that your deployment (including applications protected by agents) has the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing AM-Agent deployment, and the green reflects the new AM-Agent deployment.

3. A parallel base green deployment for protected app servers has already been created.

4. Create new agent profiles for the green deployment on the AM servers.

5. This approach assumes both old and new AM-Agent versions are supported by the AM deployment version.

6. Refer to the Release Notes for the latest and deprecated features in the new AM-Agent/IG version, such as the AM-Agent 5.6 Release Notes.

Upgrade Process

1. Install AM-Agents in the green deployment. Update the agent profiles on the AM server (created in #4 above) for the new agents deployed in the green deployment to match the configurations used in the agent profiles from the blue deployment. For certain AM versions, this process can be automated by retrieving the existing agent profiles over REST and using the results to create the new agent profiles, as sketched below.
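As a sketch of that automation over REST (the endpoint paths, headers, and profile names below are illustrative and vary by AM version, so treat this as an assumption to verify rather than a recipe):

# Read the blue agent profile, using an administrative SSO token
curl -s -H "iPlanetDirectoryPro: $ADMIN_SSO_TOKEN" \
  "https://am.example.com/am/json/realms/root/realm-config/agents/WebAgent/blue-agent-01" > agent.json

# Edit agent.json (for example, the agent URL), then create the green profile
curl -X PUT -H "iPlanetDirectoryPro: $ADMIN_SSO_TOKEN" \
  -H "Content-Type: application/json" -H "If-None-Match: *" \
  -d @agent.json \
  "https://am.example.com/am/json/realms/root/realm-config/agents/WebAgent/green-agent-01"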

Switch Over to the New Deployment

2. After validating that the new deployment is working properly, switch the load balancer from blue to green.

Post Go-Live

3. Stop the app servers in the blue deployment.

4. Remove the blue agent profiles from the AM deployment.

5. De-provision the blue deployment.

IG

Prerequisites/Assumptions

1. This approach assumes that your deployment (including applications protected by agents) has the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing IG deployment and the green reflects the new IG deployment.

3. This approach assumes both old and new IG versions are supported by the AM deployment version.

4. Create new Agent profiles for the green deployment on the AM servers required for IG servers.

5. Refer to the Release Notes for the latest and deprecated features in the new IG version, such as the IG 6.5 Release Notes.

Upgrade Process

1. Update the IG configs in the Git repository as per the changes in the new version. You may create a separate branch in your repository for this purpose.

2. Deploy the new green IG deployment by leveraging updated configurations.

Switch Over to the New Deployment

3. After validating that the new deployment is working correctly, switch the load balancer from blue to green.

Post Go-Live

4. Stop the IG servers in the blue deployment.

5. De-provision the blue deployment.

Conclusion

Although a blue/green deployment requires a high level of deployment maturity, this approach provides an elegant way to minimize downtime for ForgeRock deployment upgrades. It is always advisable to practice an upgrade strategy in lower environments, such as dev and stage, before moving to a production environment.

Depending on the complexity of your deployment, there can be multiple things to consider for these upgrades, such as customizations, new ForgeRock features, migration to containers, and others. It is always recommended to break the entire upgrade process into multiple releases, like a “base upgrade” followed by “leveraging new features”, and so on.


ForgeRock Identity Day Paris (2019)

On Thursday, November 21st, ForgeRock Identity Day took place in Paris: a half day of information about our company and our products, intended for our customers, prospects, and partners.

Hosted by Christophe Badot, VP for France, Benelux, and Southern Europe, the event began with a presentation by Alexander Laurie, VP Global Solution Architecture, on market trends and ForgeRock’s vision, delivered in French with a fine English accent.

We heard testimonials from our customers CNP Assurances, GRDF, and the Renault-Nissan-Mitsubishi Alliance. Thank you to them for sharing their needs and the solution ForgeRock provided.

Léonard Moustacchis and Stéphane Orluc, Solutions Architects at ForgeRock, gave a live demonstration of the strength of the ForgeRock Identity Platform through a web and mobile banking application. And I had the honor of closing the day with a presentation of the product roadmap, and especially of ForgeRock Identity Cloud, our SaaS offering available since the end of October.

The afternoon ended with a cocktail reception, which allowed us to talk in more detail with the attendees. All the photos from the event are visible in the album on my Flickr account.


And now the shorter English version:

On Thursday, November 21st, we hosted ForgeRock Identity Day in Paris, a half-day event for our customers, prospective customers, and partners. We presented our vision of the identity landscape, our products, and the roadmap. Three of our French customers, CNP Assurances, GRDF, and the Renault-Nissan-Mitsubishi Alliance, presented how ForgeRock has helped them with their digital transformation and identity needs. My colleagues from the Solutions Architect team ran a live demo of our web and mobile sample banking applications to illustrate the power of the ForgeRock Identity Platform. And I closed the day with a presentation of the product roadmap, and especially of ForgeRock Identity Cloud, our solution as a service. As usual, all my photos are visible in this public Flickr album.

This blog post was first published @ ludopoitou.com, included here with permission.

Configuring ForgeRock AM Active/Active Deployment Routing Using IG

Introduction

The standard deployment pattern for the ForgeRock Identity Platform is to deploy the entire platform in multiple data centers/cloud regions. This ensures the availability of services in case of an outage in one data center. This approach also provides performance benefits, as the load can be distributed among multiple data centers. Below is an example diagram for an Active/Active deployment:

Problem Statement

AM provides both stateful/CTS-based and stateless/client-based sessions. Global deployment use cases require a seamless single sign-on (SSO) experience across all applications, with the following constraints:

  • Certain deployments have distributed applications, such as App-A, deployed only in Data Center-A, and App-B, deployed only in Data Center-B.
  • The end user may travel to different locations, such as from the East Coast to the West Coast in the U.S. This means that application access requests will be handled by different data centers.

To achieve these use cases, CTS replication has to be enabled across multiple data centers/cloud regions.

In some situations, a user may try to access an application hosted in a specific data center before their corresponding sessions have been replicated. This can result in the user being prompted to re-authenticate, thereby degrading the user experience:

Note: This problem may be avoided if client-based sessions are leveraged, but many deployments have to use CTS-based sessions due to current limitations in client-based sessions. Also, when CTS-based sessions are used, the impact of CTS replication is much greater than with client-based sessions.

In this article, we leverage IG to intelligently route session validation requests to a single data center, irrespective of the application being accessed.

Solution

IG can route session validation requests to a specific data center/region, depending on an additional site cookie generated during the user’s authentication.

This approach ensures that the AM data center that issued the user’s session is used for the corresponding session validation calls. This also means that CTS replication is not required across multiple data centers/cloud regions:

Configure AM

  • Install AM 6.5.x and corresponding DS stores, Amster, and others. Following is a sample Amster install command:
install-openam --serverUrl http://am-A.example.com:8094/am --adminPwd cangetinam --acceptLicense --userStoreDirMgr "cn=Directory Manager" --userStoreDirMgrPwd "cangetindj" --userStoreHost uds1.example.com --userStoreType LDAPv3ForOpenDS --userStorePort 1389 --userStoreRootSuffix dc=example,dc=com --cfgStoreAdminPort 18092 --cfgStorePort 28092 --cfgStoreJmxPort 38092 --cfgStoreSsl SIMPLE --cfgStoreHost am-A.example.com --cfgDir /home/forgerock/am11 --cookieDomain example.com
am> connect http://am-A.example.com:8094/am -i
Sign in
User Name: amadmin
Password: **********
amster am-A.example.com:8094> import-config --path /home/forgerock/work/amster
Importing directory /home/forgerock/work/amster
Imported /home/forgerock/work/amster/global/Realms/root-employees.json
Imported /home/forgerock/work/amster/realms/root-employees/CookieSetterNode/e4c11a8e-6c3b-455d-a875-4a1c29547716.json
Imported /home/forgerock/work/amster/realms/root-employees/DataStoreDecision/6bc90a3d-d54d-4857-a226-fb99df08ff8c.json
Imported /home/forgerock/work/amster/realms/root-employees/PasswordCollector/013d8761-2267-43cf-9e5e-01a794bd6d8d.json
Imported /home/forgerock/work/amster/realms/root-employees/UsernameCollector/31ce613e-a630-4c64-84ee-20662fb4e15e.json
Imported /home/forgerock/work/amster/realms/root-employees/PageNode/55f2d83b-724b-4e3a-87cc-247570c7020e.json
Imported /home/forgerock/work/amster/realms/root-employees/AuthTree/LDAPTree.json
Imported /home/forgerock/work/amster/realms/root/J2eeAgents/IG.json
Import completed successfully
The imported configuration:

  • Creates /root realm aliases: am-A.example.com and am-B.example.com
  • Creates an AM agent to be used by IG in the /root realm
  • Creates LDAPTree, which sets a cookie after authentication. Update the cookie value to DC-A or DC-B, depending on the data center being used.
  • Repeat the previous steps to configure AM in all data centers.

Configure IG

Deploy IG with the following configuration files:

  • frProps.json specifies the AM primary and secondary data center endpoints. Refer to frProps-DC-A for DC-A and frProps-DC-B for DC-B.
  • config.json declares the primary and secondary AmService objects.
  • 01-pep-dc-igApp.json routes session validation to a specific data center, depending on the “DataCenterCookie” value (see the route sketch below).
  • Repeat the previous steps for deploying IG in all data centers.
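For illustration, the sketch below shows how 01-pep-dc-igApp.json might dispatch requests based on the cookie; the chain handler names (PepChainDC-A, PepChainDC-B) are hypothetical, and the last binding acts as the default route to the local data center:

{
  "name": "01-pep-dc-igApp",
  "condition": "${matches(request.uri.path, '^/igApp')}",
  "handler": {
    "type": "DispatchHandler",
    "config": {
      "bindings": [
        {
          "condition": "${contains(request.cookies['DataCenterCookie'][0].value, 'DC-B')}",
          "handler": "PepChainDC-B"
        },
        {
          "handler": "PepChainDC-A"
        }
      ]
    }
  }
}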

Test the use cases

The user accesses an application deployed in DC-A first

  1. The user accesses app1.example.com, deployed in DC-A.
  2. IG, deployed in DC-A, redirects the request to AM, deployed in DC-A for authentication.
  3. A DataCenterCookie is issued with a DC-A value.
  4. The user accesses app2.example.com, deployed in DC-B.
  5. IG, deployed in DC-B, redirects the request to AM, deployed in DC-A, for session validation.

The user accesses an application deployed in DC-B first

  1. The user accesses app2.example.com deployed in DC-B.
  2. IG, deployed in DC-B, redirects the request to AM deployed in DC-B, for authentication.
  3. A DataCenterCookie is issued with a DC-B value.
  4. The user accesses app1.example.com, deployed in DC-A.
  5. IG, deployed in DC-A, redirects request to AM, deployed in DC-B, for session validation.

Extend AM to OAuth/OIDC use cases

OAuth: AM 6.5.2 provides an option to modify access tokens using scripts. This allows additional metadata, such as a dataCenter claim, to be included in stateless OAuth tokens. IG’s OAuth resource server can leverage this information to invoke the appropriate data center’s AmService objects for the tokenInfo/introspection endpoints:

{
  "sub": "user.88",
  "cts": "OAUTH2_STATELESS_GRANT",
  "auth_level": 0,
  "iss": "http://am6521.example.com:8092/am/oauth2/employees",
  ...
  "dataCenter": "DC-A"
}
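As a minimal sketch, an access token modification script (AM 6.5.2 and later) can add that claim; the DC-A value below is illustrative and would normally come from per-data-center configuration:

// Access token modification script (sketch): tag the token with the
// issuing data center so IG can route introspection calls accordingly.
accessToken.setField("dataCenter", "DC-A");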

OIDC: AM allows additional claims in OIDC tokens using scripts. This information can be leveraged by IG to invoke the appropriate data center’s AmService objects.


IDM Deployment Patterns — Centralized Repo- Based vs. Immutable File-Based

Introduction

I recently blogged about how customers can architect ForgeRock Access Management to support an immutable, DevOps-style deployment pattern (see link). In this post, we’ll take a look at how to do this for ForgeRock Identity Management (IDM).

IDM is a modern OSGi-based application, with its configuration stored as a set of JSON files. This lends itself well to either a centralized, repository-based (repo-based) deployment pattern, or a file-based, immutable pattern. This blog explores both options and summarizes the advantages and disadvantages of each.

IDM Architecture

Before delving into the deployment patterns, it is useful to summarize the IDM architecture. IDM provides centralized, simple management and synchronization of users, devices, and things. It is a highly flexible product and caters to a multitude of different identity management use cases, from provisioning, self-service, password management, synchronization, and reconciliation, to workflow, relationships, and task execution. For more on IDM’s architecture, check out the following link.

IDM’s core deployment architecture is split between a web application running in an OSGi framework within a Jetty Web Server, and a supported repo. See this link for a list of supported repos.

Within the IDM web application, the following components are stored:

  • Apache Felix and Jetty Web Server hosting the IDM binaries
  • Secrets
  • Configuration and scripts — this is the topic of this blog.
  • Policy scripts
  • Audit logs (optional)
  • Workflow BAR files
  • Bundles and application JAR files (connectors, dependencies, repo drivers, etc.)
  • UI files

Within the IDM repo the following components are stored:

  • Centralized copies of configuration and policies — again, the topic of this blog
  • Cluster configuration
  • Managed and system objects
  • Audit logs (optional)
  • Scheduled tasks and jobs
  • Workflow
  • Relationships and link data

Notice that configuration is listed twice, both on the IDM node’s filesystem, and within the IDM repo. This is the focus of this blog, and how manipulation of this can either support a centralized repository deployment pattern, or a file-based immutable configuration deployment pattern.

Centralized, Repo-Based Deployment Pattern

This is the out-of-the-box (OOTB) deployment pattern for IDM. In this model, all IDM nodes share the same repository to pull down their configuration on startup, and if necessary, overwrite their local files. Any configuration changes made through the UI or over REST (REST ensures consistency) are pushed to the repo and then down to each IDM node via the cluster service. The JSON configuration files within the ../conf directory on the IDM web application are present, but should not be manipulated directly, as this can lead to inconsistencies in the configuration between the local file system and the authoritative repo configuration.

The following component-level diagram illustrates this deployment pattern:

Configuration Settings

This pattern relies on IDM’s default settings. The key properties described under the immutable pattern below (openidm.fileinstall.enabled and openidm.config.repo.enabled in system.properties, and felix.fileinstall.enableConfigSave in config.properties) are left at their defaults (commented out, and therefore enabled), so the repo remains the authoritative source of configuration and the cluster service keeps each node’s local files in sync. The openidm.node.id property in ../resolver/boot.properties must still be unique to each IDM node so the cluster service can identify each host.

Following are key advantages and disadvantages of this deployment pattern:

Advantages

  • Configuration changes made through the UI or over REST are persisted centrally and automatically propagated to every node via the cluster service, so consistency across nodes is handled for you.
  • Minimal setup; this is the OOTB behavior, which suits customers new to IDM or those who rarely change their configuration.

Disadvantages

  • The authoritative configuration lives in the repo rather than in version control, making it harder to know with certainty which configuration is running in production.
  • Editing the local JSON files directly can introduce inconsistencies between the filesystem and the authoritative repo configuration.
  • Does not fit immutable, DevOps-style patterns where configuration is pre-baked into an image and promoted unchanged up to production.

Immutable, File-Based Deployment Pattern

The key difference in this model is that IDM’s configuration is not stored in the repository. Instead, IDM pulls the configuration from the local filesystem and stores it in memory. The repo is still the authoritative source for all other IDM components (cluster configuration, schedules, and optionally, audit logs, system and managed objects, links, relationships, and others).

The following component level diagram illustrates this deployment pattern:

Configuration Settings

The main configuration items for a multi-instance, immutable, file-based deployment pattern are:

  • The ../resolver/boot.properties file — This file stores IDM boot specifics like the IDM host, ports, SSL settings, and more. The key configuration item in this file for this blog post is openidm.node.id, which needs to be a string unique to each IDM node to let the cluster service identify each host.
  • The ../conf folder — This contains all JSON configuration files. On startup, these files are read from the filesystem and held in memory, rather than pushed to the repo. As a best practice (see link), the OOTB ../conf directory should not be used. Instead, a project folder containing the contents of the ../conf and ../script directories should be created, and IDM started with the “-p </path/to/my/project/location>” flag (see the startup sketch after these settings). This ensures OOTB and custom configurations are kept separate, to ease version control, upgrades, backouts, and others.
  • The ../<my_project>/conf/system.properties file. This file contains two key settings:
openidm.fileinstall.enabled=false

This setting can either be left commented (that is, true by default) or uncommented and explicitly set to true. Combined with the setting below, this ensures IDM loads its configuration from your project’s directory (such as ../conf and ../script) rather than from the repo:

openidm.config.repo.enabled=false 

This setting needs to be uncommented to ensure IDM does not read the configuration from the repo, or push the configuration to the repo.

  • The ../<my_project>/conf/config.properties file. The key setting in this file is:
felix.fileinstall.enableConfigSave=false 

This setting needs to be uncommented. This means any changes made via REST or the UI are not pushed down to the local IDM filesystem. This effectively makes the IDM configuration read-only, which is key to immutability.
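For example, a node in this pattern might be started against its project folder like this (paths are illustrative):

# boot.properties contains a unique openidm.node.id per node, for example idm-node-1
./startup.sh -p /opt/idm/my-project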

Note: Direct manipulation of configuration files and promotion to other IDM environments can fail if the JSON files contain crypto material. See the following KB article for information on how to handle this. You can also use the IDM configexport tool (IDM version 6.5 and above).

The following presents key advantages and disadvantages of this deployment pattern:

Advantages

  • Follows core DevOps patterns for immutable configuration: push configuration into a repo like Git, parameterize it, and promote it up to production. A customer knows without a doubt which configuration is running in production.
  • This pattern offers the ability to pre-bake the configuration into an image (such as a Docker image, an Amazon Machine Image, and others) for auto-deployment of IDM configuration using orchestration tools.
  • Supports “stack by stack” deployments, as configuration changes can be made to a single node without impacting the others. Rollback is also far simpler: just restore the previous configuration.
  • The IDM configuration is set to read-only, meaning accidental UI or REST-based configuration changes cannot alter the configuration and potentially go on to impact functionality.

Disadvantages

  • As each IDM node holds its own configuration, the UI cannot be used to make configuration changes. This could present a challenge to customers new to IDM.
  • The customer must put processes in place to ensure all IDM nodes run from exactly the same configuration. This requires strong DevOps methodologies and experience.
  • Limited benefit for customers who do not modify their IDM configuration often.

Summary of Configuration Parameters

The following table summarizes the key configuration parameters used in the centralized, repo-based and the immutable, file-based deployment patterns:
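In summary (the centralized column reflects the OOTB defaults, and the immutable column reflects the settings described above):

Parameter                           File                         Centralized, repo-based   Immutable, file-based
openidm.node.id                     ../resolver/boot.properties  Unique per node           Unique per node
openidm.fileinstall.enabled         ../conf/system.properties    Default (true)            Left commented, or true
openidm.config.repo.enabled         ../conf/system.properties    Default (true)            Uncommented (false)
felix.fileinstall.enableConfigSave  ../conf/config.properties    Default (true)            Uncommented (false)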

Conclusion

There you have it: two different deployment patterns. The centralized, repo-based pattern suits customers who wish to go with the OOTB configuration and/or do not update the IDM configuration often, while the immutable, file-based pattern suits customers who demand it and/or are well-versed in DevOps methodologies and wish to treat IDM configuration like code.

Push Protocol – Challenge/Response & Registration Redux

Overview

Push authentication depends on the secure verification of information sent from the server to the client, and from the client to the server. This lets the server verify that the notification was received by the original device, and lets the device verify that the request originated from the server.
This approach is achieved by a combination of communication channels:

  • QR code over HTTPS: Required for setup of the account on the user’s device. This allows an out-of-band setup of the device, and is of a sufficient security level for the goal of push authentication.
  • Amazon Simple Notification Service: Used for delivering push messages to clients, which will trigger the authentication flow.

The high-level flow of operations is best summarized with the following diagram:

Registration

Registration is the process of registering the user’s phone with their account, so they can use it for push authentication-based login.

The flows described below assume an inline registration is being performed; that is, the user completes authentication using a conventional authentication chain and, within that, starts the registration process for the phone:

Authentication

The authentication flow assumes that both server and client have been preconfigured with the secret, and that the communication link between the server and the client has been established:

Data Contents

REG0: Message sent from server to device via QR code

Format: URL-encoded within QR code
Scheme: pushauth://push.forgerock:<username>/?params

Where <username> is defined as:

The name of the account to display to the user on the phone’s app, usually the user-facing username of the account.

Where params are defined as:

Where loadbalancer is defined as:

Base64Url(loadbalancerName + "=" + loadbalancerValue)
The client should Base64Url-decode the string value and use it as its cookie.
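A minimal decoding sketch in JavaScript (Node.js Buffer is used for illustration, and the decoded sample value is hypothetical):

// Convert Base64Url to standard Base64, restore padding, then decode
function decodeLoadBalancerCookie(param) {
  const b64 = param.replace(/-/g, "+").replace(/_/g, "/");
  const padded = b64 + "=".repeat((4 - (b64.length % 4)) % 4);
  return Buffer.from(padded, "base64").toString("utf8"); // for example, "amlbcookie=01"
}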

REG1: Message sent from device to server in response to REG0

Format: JSON data payload, sent over HTTPS POST to the registration endpoint taken from REG0.

Where payload is defined as:

Where claims are defined as:

AUTH0: Message sent from server to device via push

Format: JSON data payload

Where payload is defined as:

Signed JWT, with claims:

Where loadbalancer is defined as:
Base64(loadbalancerName + "=" + loadbalancerValue)

The client should Base64-decode the string and use the resulting value.

AUTH1: Message sent from device to server in response to AUTH0

Format: JSON data payload, sent over HTTPS POST to the authentication endpoint taken from REG0.

Where payload is defined as:

Signed JWT, with claims:

Leveraging AD Nested Groups With AM

This article comes from an issue raised by multiple customers, where ForgeRock Access Management (AM) was not able to retrieve a user’s group memberships when using Active Directory (AD) as a datastore with nested groups. Different docs refer to this concept as “embedded groups”, “transitive groups”, “recursive groups”, “indirect groups”, or “parent groups”; I’m quoting them all here for search engines.

As a consequence, it was not possible, for example, for AM agents or any policy engine client, such as a custom web application, to enforce access control rules based on these memberships. In the same manner, applications relying on the AM user session or profile, or on custom OAuth 2.0 or OpenID Connect tokens, could not safely retrieve the entire list of groups a user belonged to. In the best-case scenario, only the “direct” groups were fetched from AD, and other errors could occur. Read more about it below.

Indeed, historically, AM has used the common memberOf or isMemberOf attribute by default (depending on the type of LDAP user store), while AD had a different implementation that also evolved over time.

So, initially, when AM issued “(member=$userdn)” LDAP searches against AD, if, for example, a user was a member of the AD “Engineering” group, and that group was itself a member of the “Staff” group, the search returned only the user’s direct group; in this case, the “Engineering” group.

A patch was written for AM to leverage a feature of AD 2003 SP2 and above that provides the ability to retrieve the AD groups and nested groups a user belongs to, using a search like this: (member:1.2.840.113556.1.4.1941:=$userdn).

See, for example, https://social.technet.microsoft.com/wiki/contents/articles/5392.active-directory-ldap-syntax-filters.aspx on this topic.

This worked in some deployments. But for some large ones, that search was slow and sometimes induced timeouts and failing requests; for example, when the AM agent was retrieving a user’s session. Thus, the agent’s com.sun.identity.agents.config.receive.timeout parameter had to be increased (the default is 4 seconds).

Fortunately, since AD 2012 R2, there’s a new feature available: a base search from the user’s DN (the LDAP DN of the user in AD) with a filter of “(objectClass=user)”. Requesting the msds-memberOfTransitive attribute returns all of the user’s groups, including the parent groups of the nested groups the user is a member of. That search can be configured from the AM console.

You can find more information about that attribute here: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/c5c7d019-8d88-4bfa-b84d-4413bbf189b5
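For illustration, both searches can be reproduced with ldapsearch (the bind DN, user DN, and credentials are hypothetical):

# Nested membership via the LDAP_MATCHING_RULE_IN_CHAIN rule (AD 2003 SP2 and above)
ldapsearch -H ldap://ad.example.com -D "cn=svc-am,cn=Users,dc=example,dc=com" -w password \
  -b "dc=example,dc=com" "(member:1.2.840.113556.1.4.1941:=cn=jdoe,cn=Users,dc=example,dc=com)" dn

# Constructed attribute on the user entry (AD 2012 R2 and above)
ldapsearch -H ldap://ad.example.com -D "cn=svc-am,cn=Users,dc=example,dc=com" -w password \
  -b "cn=jdoe,cn=Users,dc=example,dc=com" -s base "(objectClass=user)" msds-memberOfTransitive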

Also, our bug tracker holds a reference to that issue: https://bugster.forgerock.org/jira/browse/OPENAM-9674

But from now on, you should hopefully be able to leverage AD nested groups gracefully with AM.

Proof of Concept

This article is an overview of a proof of concept (PoC) we recently completed with one of our partners. The purpose was to demonstrate the ability to use the ForgeRock Identity Platform to quickly provide rich authentication (such as biometric authentication by face recognition) and authorization capabilities to a custom mobile application written from scratch.

Indeed, it took about two weeks to develop and integrate the mobile application with the ForgeRock Identity Platform, thanks to the externalization of both the registration and authentication code and logic out of the mobile application itself. The developers mostly used the ForgeRock REST APIs to implement these features.

Some important facts: 

  • While the authentication leveraged the ForgeRock REST API and authentication trees, the device registration was implemented using a mobile SDK provided by the biometric vendor. Another option could have been to use an existing ForgeRock authentication tree node for the registration, but it would have required the use of a browser. The ideal goal was to provide all the features using a single, custom mobile application for a better user experience.
  • It was decided to use Daon as the biometric authentication solution provider/vendor, even though the ForgeRock Identity platform can be integrated with other authentication solutions. The ForgeRock marketplace is a good place to figure out what’s available for whatever type of authentication you’re looking for.
  • Depending on the solution, we may or may not provide a node for user or device registration. When a node is not available, it can either be developed, or the vendor may provide an SDK to implement registration directly with them.
  • Daon provides two different ways to manage user credentials: they can be left on the user’s device, leveraging the FIDO protocol, or they can be stored in a specific tenant on Daon’s IdentityX server. For this PoC, we chose to use the FIDO protocol because we wanted to provide the best user experience, with as little network latency as possible; checking a face locally on a mobile device looked faster than having to send it (or a hash of it) to a server for verification.

The functional objectives of that PoC were to provide:

  • A way for the mobile application to dynamically discover the available authentication methods, rather than hardcoding the list of methods or defining it in the application configuration. We did that using an authentication tree that included three authentication methods, plus the ForgeRock Access Management REST API and callbacks to discover the available choices.
  • A way to provide different authentication choices based on some criteria, such as the domain of the user’s email address and the status of that user (already registered in the authentication platform or not).
  • The ability to deliver OTPs by either SMS or email, based on the user’s profile. A user profile with a mobile phone number triggered OTP delivery by SMS; otherwise, the OTP was delivered by email.
  • Biometric authentication by face recognition, embedded in the custom mobile application, thus providing the best possible user experience, without the need to rely on an extra browser session or additional device.
  • Biometric-enabled device registration: Face recognition was used not only at authentication time, but also for device registration.
  • OAuth 2.0 access token delivery, introspection, and usage to gain access to business APIs.
  • Protection of the APIs by authorization rules and the ForgeRock authorization engine.

The PoC logical architecture was as follows:

The authentication trees looked like this, with the first couple of screenshots showing the main tree, and the third screenshot showing the biometric authentication tree, embedded in the main tree:

In the main tree, we used a few custom JavaScript scripted nodes to implement the desired logic; for example, to expose different authentication choices based on the user’s email address domain:
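As a rough sketch of such a node (the attribute lookup, domain, and outcome names are specific to this PoC and therefore hypothetical):

// Scripted decision node: pick an authentication path from the email domain.
// Assumes the username is already in sharedState and that the script
// is allowed to query the identity repository.
var username = sharedState.get("username");
var mailValues = idRepository.getAttribute(username, "mail");
outcome = (!mailValues.isEmpty() &&
           mailValues.iterator().next().endsWith("@partner.example.com"))
  ? "biometric"
  : "otp";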

Below, you can see the registration flow diagram:

  • The relying party app is the custom mobile application developed during the PoC.
  • The FIDO client in our case was Daon’s mobile SDK running in the mobile app. That SDK is responsible, in particular, for triggering the camera to take a picture of the user’s face.
  • The relying party server is ForgeRock Access Management.
  • The FIDO server is Daon’s IdentityX server:

The registration flow depicted above is actually the one that occurs when using a browser and the ForgeRock registration node for Daon. When using a custom mobile app and the Daon mobile SDK, the registration requests and responses go directly from the mobile application to Daon’s IdentityX server, without going through ForgeRock Access Management.

In contrast, the authentication flow always goes through ForgeRock Access Management, leveraging the nodes developed for that purpose:

Feel free to ask questions for more details!

Overview of Options of Authentication By Face Recognition in ForgeRock Identity Platform

The following table provides solution designers and architects with a comparative overview of the different options available today for adding authentication by face recognition to a ForgeRock Identity Platform deployment.

The different columns represent some important criteria to consider when searching for such a solution. Some criteria are self-explanatory, while the others are detailed below:

  • The Device agnostic column helps to figure out which type of device can be used (any vs. a subset).
  • The Requires a 3rd-party solution column indicates whether or not other software is required in addition to the ForgeRock platform.
  • The Security column represents the relative level of security brought by the solution.
  • The ForgeRock supported column represents the level of effort required to integrate the solution.
  • Flows: This criterion gives an idea, from the user’s perspective (rather than from a purely technical perspective), of whether registration and/or authentication occurs with or without friction. As a rule of thumb, in-band flows can be considered frictionless (or nearly so, since they involve use cases where a user needs a single device or uses a single browser session), while out-of-band flows can be seen as more secure (in some contexts at least), since different channels are involved. Some exceptions may exist, such as Face ID, which can be used in a purely mobile scenario (rather in-band), or just as a means to register or authenticate on one side with a mobile device, while accessing a website or service from another device.