Easily Share Authentication Trees

Originally published on Mr. Anderson’s Musings

A New World

A new world of possibilities was born with the introduction of authentication trees in ForgeRock’s Access Management (AM). The limiting login sequences of the past were replaced with flexible, adaptive, and contextual authentication journeys.

ForgeRock chose the term Intelligent Authentication to capture this new set of capabilities. Besides offering a shiny new browser-based design tool to visually create and maintain authentication trees, Intelligent Authentication also rang in a new era of atomic extensibility.

Authentication Tree

While ForgeRock’s Identity Platform has always been known for its developer-friendliness, authentication trees took it to the next level: Trees consist of a number of nodes, which are connected to each other as in a flow diagram or decision tree. Each node is an atomic entity that takes a single input and provides one or more outputs. Nodes can be implemented in Java, JavaScript, or Groovy.

A public marketplace allows the community to share custom nodes. An extensive network of technology partners provides nodes to integrate with their products and services.

A New Challenge

With the inception of authentication trees, a spike of collaboration between individuals, partners, and customers occurred. At first, the sharing happened on a per-node basis, as people would exchange cool custom node jar files with instructions on how to use them. But soon it became apparent that sharing atomic pieces of functionality wasn’t quite cutting it. People wanted to share whole journeys, processes, trees.

A New Tool – amtree.sh

A fellow ForgeRock solution architect in the UK, Jon Knight, created the first version of a tool that allowed the easy export and import of trees. I was so excited about the little utility that I forked his repository and extended its functionality to make it even more useful. Shortly thereafter, another fellow solution architect from the Bay Area, Jamie Morgan, added even more capabilities.

The tool is implemented as a shell script. It exports authentication trees from any AM realm to standard output or a file and imports trees into any realm from standard input or a file. The tool automatically includes the decision node scripts (JavaScript and Groovy) that the trees require, and it needs curl, jq, and uuidgen to be installed and available on the host where it runs. Here are a few ideas and examples for how to use the tool:


I do a lot of POCs, create little point solutions for customer or prospect use cases, and build demos to show off technology partner integrations or our support for the latest open standards. No matter what I do, it often involves authentication trees of varying complexity. Those trees usually take time to design and test and are thus worth documenting, preserving, and sharing. The first step toward any of these goals is to extract the trees’ configuration into a reusable format, or simply speaking: backing them up or exporting them.

Before performing an export, it can be helpful to produce a list of all the authentication trees in a realm. That way we get an idea of what’s available and can decide whether to export individual trees or all the trees in the realm. The tool provides an option to list all trees in a realm, in their natural order (order of creation). To get an alphabetically ordered list, we can pipe the output into the sort shell command.

List Trees
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -l | sort

Now that we have a list of trees, it is time to think about what it is we want to do. The amtree.sh tool offers us three options:

  1. Export a single tree into a file or to standard out: -e
  2. Export all trees into individual files: -S
  3. Export all trees into a single file or to standard out: -E

The main reason to choose one of these options over another is whether your trees are independent (have no dependency on other trees) or not. Authentication trees can reference other trees, which then act like subroutines in a program. These subroutines are called inner trees. Independent trees do not contain inner trees. Dependent trees contain inner trees.

Options 1 and 2 are great for independent trees, as they put a single tree into a single file. Those trees can then easily be imported again. Option 2 generates the same output as running option 1 for every tree in the realm.

Dependent trees require that the trees they reference are already available or are imported first; otherwise the AM APIs will complain and the tool will not be able to complete the import.

Option 3 is best suited for highly interdependent trees. It puts all the trees of a realm into the same file and on import of that file, the tool will always have all the required dependencies available.

Option 2: Export All Trees To Individual Files
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -S
 Export all trees to files
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............
Option 3: Export All Trees To Single File
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E -f authn_all.json
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............

After running both of those commands, we should find the expected files in our current directory:

ls -1

The second command (option 3) produced the single authn_all.json file as indicated by the -f parameter. The first command (option 2) generated individual files per tree.


Import is just as simple as export. The tool brings in required scripts and resolves dependencies on inner trees, which means it orders trees on import to satisfy dependencies.

Exports omit secrets of all kinds (passwords, API keys, etc.) that may be stored in node configuration properties. Therefore, if we export a tree whose configuration contains secrets, the imported tree will lack those secrets. If we want to more easily reuse trees (like I do in my demo/lab environments), we can edit the exported tree files and manually insert the secrets. Fields containing secrets are exported as null values. Once we manually add those secrets to our exports, they will import as expected.

{
  "origin": "003232731275e50c2770b3de61675fca",
  "innernodes": {},
  "nodes": {
    "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
      "_id": "B56DB408-E26D-4FBA-BF86-339799ED8C45",
      "hostName": "smtp.gmail.com",
      "password": null,
      "sslOption": "SSL",
      "hostPort": 465,
      "emailAttribute": "mail",
      "smsGatewayImplementationClass": "com.sun.identity.authentication.modules.hotp.DefaultSMSGatewayImpl",
      "fromEmailAddress": "vscheuber@gmail.com",
      "username": "vscheuber@gmail.com",
      "_type": {
        "_id": "OneTimePasswordSmtpSenderNode",
        "name": "OTP Email Sender",
        "collection": true
      }
    },
    ...
  },
  "scripts": {
    ...
  },
  "tree": {
    "_id": "email",
    "nodes": {
      "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
        "displayName": "Email OTP",
        "nodeType": "OneTimePasswordSmtpSenderNode",
        "connections": {
          "outcome": "08211FF9-9F09-4688-B7F1-5BCEB3984624"
        }
      },
      ...
    },
    "entryNodeId": "DF68B2B8-0F10-4FF3-9F2C-622DA16BA4B7"
  }
}

The JSON snippet above shows excerpts from the email tree. One of its nodes is responsible for sending a one-time password (OTP) via email to the user and thus needs SMTP gateway configuration. The export does not include the value of the password property in the node configuration. To make this export file reusable, we could replace null with the actual password. Depending on the type of secret, this may or may not be acceptable.
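A quick way to find which fields an export nulled out is a small jq query that lists the JSON path of every null value (a sketch; sample-export.json below is a hypothetical stand-in for a real export like email.json):

```shell
# List the JSON paths of all null-valued fields (the candidate secrets).
# sample-export.json stands in for a real tree export.
cat > /tmp/sample-export.json <<'EOF'
{
  "nodes": {
    "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
      "hostName": "smtp.gmail.com",
      "password": null
    }
  }
}
EOF
jq -r 'paths(. == null) | map(tostring) | join(".")' /tmp/sample-export.json
```

Run against a real export file, this prints every secret field you would need to fill in before re-importing.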

Importing individual trees requires us to make sure all the dependencies are met. Amtree.sh provides a nice option, -d, to describe a tree export file. That will tell us if a tree has any dependencies we need to meet before we can import that single tree. Let’s take the select tree as an example. The select tree offers the user a choice of which 2nd factor they want to use to log in. Each choice then evaluates another tree, which implements the chosen method:

Running amtree.sh against the exported select.json file gives us a good overview of what the select tree is made of, which node types it uses, which scripts (if any) it references, and what other trees (inner trees) it depends on:

../amtree.sh -d -f select.json
 Tree: select

 - ChoiceCollectorNode
 - InnerTreeEvaluatorNode 


 - email
 - push
 - trusona
 - webauthn 

From the output of the -d option we can derive useful information:

  • Which nodes will we need to have installed in our AM instance? ChoiceCollectorNode and InnerTreeEvaluatorNode.
  • Which scripts will the tree export file install in our AM instance? None.
  • Which trees does this tree depend on? The presence of the InnerTreeEvaluatorNode already gave away that there would be dependencies. This list simply breaks them down: email, push, trusona, and webauthn.
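The same dependency information can be pulled straight out of an export file with jq (a sketch, assuming the node layout shown in the snippets in this post: nodes keyed by id, each with a _type._id and, for inner tree evaluators, a tree field; select-sample.json is a hypothetical stand-in for a real select.json):

```shell
# Extract the inner-tree dependencies of an exported tree.
# select-sample.json stands in for a real export like select.json.
cat > /tmp/select-sample.json <<'EOF'
{
  "nodes": {
    "D21C798F-D4E9-400A-A038-0E1A883348EB": {
      "tree": "email",
      "_type": { "_id": "InnerTreeEvaluatorNode" }
    },
    "E5F6A7B8-0000-1111-2222-333344445555": {
      "_type": { "_id": "ChoiceCollectorNode" }
    }
  }
}
EOF
jq -r '.nodes[] | select(._type._id == "InnerTreeEvaluatorNode") | .tree' /tmp/select-sample.json
```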

Ignoring the dependencies, we can try to import the file into an empty realm and see what amtree.sh tells us:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i select -f select.json 

Importing select…Error importing node InnerTreeEvaluatorNode (D21C798F-D4E9-400A-A038-0E1A883348EB): {"code":400,"reason":"Bad Request","message":"Data validation failed for the attribute, Tree Name"}
{
  "_id": "D21C798F-D4E9-400A-A038-0E1A883348EB",
  "tree": "email",
  "_type": {
    "_id": "InnerTreeEvaluatorNode",
    "name": "Inner Tree Evaluator",
    "collection": true
  }
}

The error message confirms that dependencies are not met. This leaves us with three options:

  1. Instruct amtree.sh to import the four dependencies using the -i option before trying to import select.json. Of course that bears the risk that any or all of the 4 inner trees have dependencies of their own.
  2. Instruct amtree.sh to import the authn_all.json using the -I option. The tool will bring in all the trees in the right order, but there is no easy way to prevent any of the many trees in the file from being imported.
  3. Instruct amtree.sh to import all the .json files in the current directory using the -s option. The tool will bring in all the trees in the right order. Any trees we don’t want to import, we can move into a subfolder, and amtree.sh will ignore them.
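Option 1 can be scripted as a small helper that imports a list of export files in the order given, dependencies first (a sketch; the host, realm, and credentials are placeholders for your environment, and each tree is expected to live in a file named after it):

```shell
# Hypothetical helper: import tree export files one by one, in the order given.
# Each tree name passed in is expected to exist as <name>.json in the
# current directory; host and credentials are placeholders.
import_trees() {
  local realm="$1"; shift
  local tree
  for tree in "$@"; do
    ../amtree.sh -h https://am.example.com/openam -u amadmin -p '********' \
      -r "$realm" -i "$tree" -f "$tree.json" || return 1
  done
}
# Usage: import_trees /empty email push trusona webauthn select
```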

Let’s see how option 3 works out. To avoid errors, we need to move the authn_all.json file containing all the trees into a subfolder (named ignore in my case). Then we are good to go:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -s
Import all trees in the current directory
Determining installation order..............................................
Importing email.........
Importing push_reg..............
Importing push_reg_2fa...............
Importing simple............
Importing trusona....
Importing webauthn.............
Importing webauthn_reg............
Importing webauthn_reg_2fa.............
Importing push..........
Importing risk........
Importing select.......
Importing smart..........
Importing solid......

No errors reported this time. You can see the tool spent quite a few cycles determining the proper import order (the more dots, the more cycles). Had we tried option 1 and manually imported the four known dependencies, we would likely have run into nested dependencies.

A word of caution: Imports overwrite trees of the same name without any warning. Be mindful of that fact when importing into a realm with existing trees.
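To see ahead of time what an import would clobber, you can intersect the target realm’s tree list with your local export files (a sketch built on the -l option; the host, realm, and credentials are placeholders):

```shell
# Hypothetical pre-flight check: print the trees that an import from the
# current directory would overwrite in the target realm.
overwrite_check() {
  local realm="$1"
  ../amtree.sh -h https://am.example.com/openam -u amadmin -p '********' \
    -r "$realm" -l | sort > /tmp/realm-trees.txt
  ls -1 *.json 2>/dev/null | sed 's/\.json$//' | sort > /tmp/local-trees.txt
  comm -12 /tmp/realm-trees.txt /tmp/local-trees.txt
}
# Usage: overwrite_check /authn
```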


Amtree.sh supports stdin and stdout for input and output. That allows us to pipe the output of an export command (-e or -E) to an import command (-i or -I) without storing anything on disk. That’s a pretty slick way to migrate trees from one realm to another in the same AM instance or across instances. The -s and -S options do not support stdin and stdout, thus they won’t work for this scenario.

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E | ../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -I
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............
 Determining installation order.............................
 Importing email.........
 Importing push..........
 Importing push_reg..............
 Importing push_reg_2fa...............
 Importing risk........
 Importing select.......
 Importing simple............
 Importing smart..........
 Importing solid......
 Importing trusona....
 Importing webauthn.............
 Importing webauthn_reg............
 Importing webauthn_reg_2fa.............

The above command copies all the trees in a realm to another realm. Nothing is ever exported to disk.


Trees consist of different configuration artifacts in the AM configuration store. When managing trees through the AM REST APIs, it is easy to forget to remove unused artifacts. Even when using the AM Admin UI, dead configuration is left behind every time a tree is deleted. The UI gives an admin no way to remove those dead artifacts, nor even a way to see them. Over time, they grow to an uncomfortable size and clutter the results of API calls.

Amtree.sh prunes those orphaned configuration artifacts when the -P parameter is supplied. I regularly delete all the default trees in a new realm, which leaves me with 33 orphaned configuration artifacts right out of the gate. To be clear: those orphaned artifacts don’t cause any harm. It’s a desire for tidiness that makes me want them gone.

./amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -P
Analyzing authentication nodes configuration artifacts…

Total:    118
Orphaned: 20

Do you want to prune (permanently delete) all the orphaned node instances? (N/y): y

Wrap-Up & Resources

Amtree.sh is a big improvement for the handling and management of authentication trees in the ForgeRock Identity Platform. It is hardly the final solution, though. The implementation as a shell script is limiting both the supported platforms and functionality. My fingers are itching to re-write it in a better suited language. Now there’s a goal for 2020!

If you want to explore the examples in this post, here are all the resources I used:

Use Authentication Trees To Create A Great SAML Login Experience

If you are shopping for a C/IAM platform these days, chances are the vendor pitch you are going to hear is all about OAuth and OIDC and JWT tokens and other shiny things for application integration, authentication, authorization, and single sign-on. And rightly so, as these standards truly offer great new capabilities or provide a modern implementation of old friends. But once the honeymoon is over and reality sets in, you are probably going to find yourself facing a majority of SAML applications and services that need integration and only a very small minority of modern applications supporting those shiny new standards.

The ForgeRock Identity Platform is an authentication broker and orchestration hub. How you come in and where you take your session is merely a matter of configuration. With the great new capabilities introduced with Authentication Trees, one might wonder how these new capabilities jive with old-timers like SAML. And wow do they jive!

With SAML there are two main scenarios per integration: Being the service provider (SP) or being the identity provider (IDP). The service provider consumes an attested identity and has limited control over how that identity was authenticated. The identity provider on the other hand dictates how identities authenticate.

Application users can start the process on the IDP side (IDP-initiated) or on the application side (SP-initiated). IDP-initiated login typically comes in the form of a portal listing all the applications the user has access to. Selecting any of these applications launches the login flow. SP-initiated flows start on the application side. Often users have bookmarked a page in the application that requires authentication, causing the application to initiate the login flow for unauthenticated users.

This guide focuses on the SP-initiated flow and how you can use the ForgeRock Identity Platform to create the exact login experience you seek. The use case is:

“Controlling which authentication tree to launch in SP-initiated flows”.

The configuration option is a bit non-obvious, and I have been asked a few times whether the only way was to use the realm default authentication setting or whether there were alternatives. This is a great way to create an individual login journey for SAML users, distinct from all others.

To configure your environment, follow these steps:

  1. Follow the steps to configure SAML federation between the ForgeRock Identity Platform and your application. For this guide I configured my private Google Apps account as an SP.
  2. Test the SP-initiated flow. It should use your realm default for authentication (realm > authentication > settings > core).
  3. Now create the trees you want SAML users to use in the realm you are federating into. In this example, a tree called “saml” is the base tree that should be launched to authenticate SAML users.

    where the “Continue?” node just displays a message notifying users that they are logging in via SAML SP-initiated login. The “Login” node is an inner tree evaluator launching the “simple” tree, which lets them authenticate using username and password:

    The “2nd Factor” node is another inner tree evaluator, branching out into a tree that allows the user to select the 2nd factor they want to use:

    This guide will use the “push” tree for push authentication:
  4. Now navigate to the “Authentication Context” section in your “Hosted IDP” configuration in the AM Admin UI (Applications > Federation > Entity Providers > [your hosted entity provider] > Authentication Context):

    This is where the magic happens. Select “Service” from the “Key” drop-down list on all the supported authentication contexts (note, you could launch different trees based on what the SP proposes for authentication, Google seems to only support “PasswordProtectedTransport” by default) and enter the name of the base tree you want to execute in the “Value” column, “saml” in this configuration example.

Test your configuration:

  1. Launch your SP-initiated login flow. For Google GSuite, you do that by pointing your browser to https://gsuite.google.com, selecting “Sign-In”, typing in your GSuite domain name, selecting the application you want to land in after authentication, and selecting “GO”.
  2. Google redirects to ForgeRock Access Management. The “saml” tree displays the configured message, giving users the options to “Continue” or “Abort”. Select “Continue”:
  3. The “simple” tree executes, prompting for username and password:
  4. Now the flow comes back into the “saml” tree and branches out into the 2nd Factor selector tree “select”:
  5. Select “Push” and respond to the push notification on your phone while the web user interface gracefully waits for your response:
  6. And finally, a redirect back to Google Apps with a valid SAML assertion in tow completes the SP-initiated login flow:

Brokering Identity Services Into Pivotal Cloud Foundry


Pivotal Cloud Foundry (PCF) deployments are maturing across the corporate landscape. PCF’s out-of-the-box identity and access management (IAM) tool, UAA (User Accounts and Authentication), provides basic user management functions and OAuth 2.0/OIDC 1.0 support. UAA has come a long way since its inception and provides a solid foundation of IAM services for an isolated application ecosystem running on the Pivotal platform. As organizations experience ever more demanding requirements pushed on their applications, they start realizing the need for a full IAM platform that provides identity services beyond what UAA can offer. Integrating applications running on Pivotal with applications running outside the platform, providing strong and adaptive authentication journeys, managing identities across applications, enforcing security policies and more requires a full-service IAM platform like ForgeRock’s Identity Platform.

ForgeRock provides a Pivotal service broker implementation, the ForgeRock Service Broker. It runs as a small service inside Pivotal and brokers two services into the PCF platform: An OAuth 2.0 AM Service and an IG Route Service. While the OAuth 2.0 AM Service provides similar capabilities to UAA on the OAuth/OIDC side, the IG Route Service is based on IG (Identity Gateway) and can broker the full spectrum of services of the ForgeRock Identity Platform. PCF applications bound to the IG Route Service can seamlessly consume any of the countless services the ForgeRock Identity Platform provides: Intelligent authentication, authorization, federation, user-managed access, identity synchronization, user self-service, workflow, social identity, directory services, API gateway services and more.

This article provides an easy-to-follow path to:

  • Set up a PCF development environment (PCF Dev)
  • Install and configure IG in that environment
  • Install and configure the ForgeRock Service Broker in that environment
  • Deploy, integrate and protect a number of PCF sample applications using the IG Route Service and IG

Additionally, the guide provides steps for running IG on PCF. If you have access to a full PCF instance, you can skip the PCF Dev part and dive right into the Service Broker deployment and configuration. You also need access to a ForgeRock Access Management instance, version 5.0 or newer.

1. Preparing a PCF Dev environment

As mentioned, if you have access to a full PCF instance, you can skip this part and go straight to the Service Broker deployment and configuration.

1.1. Installing CF CLI

Before you install the server side of the PCF Dev environment, you must first install the Cloud Foundry Command Line Interface (CF CLI) utility, which is the main way you will interact with PCF throughout this process.

Follow the Pivotal documentation to install the flavor of the CLI you need for your workstation OS:


1.2. Installing PCF Dev

Now that you are ready to roll with the CF CLI, it is time to download and install the PCF Dev components. This article is based on PCF Dev v0.30.0 for PCF 1.11.0. This version runs on VirtualBox and comes with a number of default services installed, some of which you will need later on.

PCF Dev – PAS is an alpha release of the next-generation PCF Dev using the native OS hypervisor, doubling the minimum memory requirements from 4 GB to 8 GB, shipping with only a few PCF services installed by default, and taking up to an hour to start. It does, however, include a full BOSH Director, the graphical UI for managing “Tiles” in PCF, versus having to use the CLI. As soon as this version is a bit more stable and bundles more services like the old one does, it may be worth upgrading. But for now, make sure you select and download v0.30.0:


In order to use your own IP address and DNS name (the -i and -d parameters of the cf dev start command), you need to set up a wildcard DNS record. In my case I set up *.pcfdev.mytestrun.com pointing to the IP address of the workstation where I am running PCF Dev.

Follow the command log below to install and start PCF Dev:

unzip pcfdev-v0.30.0_PCF1.11.0-osx.zip
cf dev start -i -d pcfdev.mytestrun.com -m 6144
Warning: the chosen PCF Dev VM IP address may be in use by another VM or device.
Using existing image.
Allocating 6144 MB out of 16384 MB total system memory (6591 MB free).
Importing VM...
Starting VM...
Provisioning VM...
Waiting for services to start...
7 out of 58 running
7 out of 58 running
7 out of 58 running
7 out of 58 running
40 out of 58 running
56 out of 58 running
58 out of 58 running
 _______  _______  _______    ______   _______  __   __
|       ||       ||       |  |      | |       ||  | |  |
|    _  ||       ||    ___|  |  _    ||    ___||  |_|  |
|   |_| ||       ||   |___   | | |   ||   |___ |       |
|    ___||      _||    ___|  | |_|   ||    ___||       |
|   |    |     |_ |   |      |       ||   |___  |     |
|___|    |_______||___|      |______| |_______|  |___|
is now running.
To begin using PCF Dev, please run:
   cf login -a https://api.pcfdev.mytestrun.com --skip-ssl-validation
Apps Manager URL: https://apps.pcfdev.mytestrun.com
Admin user => Email: admin / Password: admin
Regular user => Email: user / Password: pass

1.3 Logging in to PCF Dev

Log in to your fresh PCF Dev instance and select the org you want to work with. Use pcfdev-org:

cf login -a https://api.pcfdev.mytestrun.com/ --skip-ssl-validation
API endpoint: https://api.pcfdev.mytestrun.com/
Email> admin
Select an org (or press enter to skip):
1. pcfdev-org
2. system
Org> 1
Targeted org pcfdev-org
Targeted space pcfdev-space

API endpoint:  https://api.pcfdev.mytestrun.com (API version: 2.82.0)
User:          admin
Org:            pcfdev-org
Space:          pcfdev-space

Authenticate using admin/admin if using PCF Dev or a Pivotal admin user if using a real PCF instance.

2. Install Sample Applications

To test the Service Broker and inter-application SSO, install two sample applications:

2.1. Spring Music

git clone https://github.com/cloudfoundry-samples/spring-music
cd spring-music

Modify the manifest to reduce memory and avoid random route names:

vi manifest.yml

Enter or copy & paste the following content:

- name: music
  memory: 768M
  random-route: false
  path: build/libs/spring-music-1.0.jar

Push the app:

cf push

Waiting for app to start...
name:           music
requested state:   started
instances:      1/1
usage:          768M x 1 instances
routes:            music.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 15:28:24 CDT 2018
stack:          cflinuxfs2
buildpack:      container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.13-offline-https://github.com/cloudfoundry/java-buildpack.git#03b493f
                java-main open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10...
start command:  CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE
                -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100%
                -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR
                -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY
                -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec
                $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher

     state    since                  cpu      memory          disk          details
#0   running   2018-05-22T20:29:00Z   226.8% 530.4M of 768M   168M of 512M

Note the routes: music.pcfdev.mytestrun.com

That’s the URL at which your application can be reached. You should be able to resolve the dynamically generated DNS name. You should also be able to hit the URL in a web browser.
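Instead of a browser, a quick curl probe can confirm the route answers (a sketch; the -k flag skips certificate validation to match the self-signed certificates a PCF Dev install typically uses):

```shell
# Hypothetical helper: succeed if the given route returns an HTTP 2xx/3xx status.
route_up() {
  local code
  code=$(curl -sk -o /dev/null -w '%{http_code}' "https://$1")
  [ "$code" -ge 200 ] && [ "$code" -lt 400 ]
}
# Usage: route_up music.pcfdev.mytestrun.com && echo "route is up"
```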

Retrieve application logs:

cf logs music --recent

Live-tail application logs:

cf logs music

2.2. Cloud Foundry Sample NodeJS App

git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs.git
cd cf-sample-app-nodejs

Modify the manifest to reduce memory and avoid random route names:

vi manifest.yml

- name: node
  memory: 512M
  instances: 1
  random-route: false

Push the app:

cf push

Waiting for app to start...
name:           node
requested state:   started
instances:      1/1
usage:          512M x 1 instances
routes:            node.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 15:46:02 CDT 2018
stack:          cflinuxfs2
buildpack:      node.js 1.5.32
start command:  npm start

     state    since                  cpu    memory      disk        details
#0   running   2018-05-22T20:46:35Z   0.0% 0 of 512M 0 of 512M   

Note the routes: node.pcfdev.mytestrun.com

That’s the URL at which your application can be reached. You should be able to resolve the dynamically generated DNS name. You should also be able to hit the URL in a web browser.

Retrieve application logs:

cf logs node --recent

Live-tail application logs:

cf logs node

2.3. Create Your Own JSP Headers App

Create your very own useful sample application to display headers. This will come in handy for future experiments with the IG Route Service.

mkdir headers
cd headers
mkdir WEB-INF
vi index.jsp

<%@ page import="java.util.*" %>
<%!
  // Return the header value up to the first ';'
  private String normalize(String value) {
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < value.length(); i++) {
      char c = value.charAt(i);
      if (c == ';')
        return sb.toString();
      sb.append(c);
    }
    return sb.toString();
  }
%>
<title><%= application.getServerInfo() %></title>
<h1>HTTP Request Headers Received</h1>
<table border="1" cellpadding="3" cellspacing="3">
<%
  Enumeration eNames = request.getHeaderNames();
  while (eNames.hasMoreElements()) {
    String name = (String) eNames.nextElement();
    String value = normalize(request.getHeader(name));
%>
<tr><td><%= name %></td><td><%= value %></td></tr>
<%
  }
%>
</table>

cf push headers

Waiting for app to start...
name:           headers
requested state:   started
instances:      1/1
usage:          256M x 1 instances
routes:            headers.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 16:24:26 CDT 2018
stack:          cflinuxfs2
buildpack:      container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.13-offline-https://github.com/cloudfoundry/java-buildpack.git#03b493f
                open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE tomcat-access-logging-support=2.5.0_RELEAS...
start command:  CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE
                -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100%
                -stackThreads=300 -totMemory=$MEMORY_LIMIT) &&  JAVA_HOME=$PWD/.java-buildpack/open_jdk_jre JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR
                -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY
                -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password -Djava.endorsed.dirs=$PWD/.java-buildpack/tomcat/endorsed
                -Daccess.logging.enabled=false -Dhttp.port=$PORT" exec $PWD/.java-buildpack/tomcat/bin/catalina.sh run

     state    since                  cpu    memory        disk            details
#0   running   2018-05-22T21:24:48Z   0.0% 600K of 256M 84.6M of 512M  

2.4. More Sample Apps

git clone https://github.com/cloudfoundry-samples/cf-ex-php-info
git clone https://github.com/cloudfoundry-samples/cf-sample-app-rails.git

3. Running IG in Pivotal Cloud Foundry

You can run IG absolutely anywhere you want, but since you are going to use it inside PCF, running it in PCF may be a logical choice.

3.1. Install, Deploy, and Configure IG in PCF

The steps below describe an opinionated deployment model for IG in PCF. Your specific environment may require you to make different choices to achieve an ideal configuration and behavior.

3.1.1. Download IG

Download IG 6 from https://backstage.forgerock.com/downloads/browse/ig/latest to a preferred working location. Log in using your backstage credentials.

unzip IG-6.1.0.war
cf push ig --no-start

3.1.2. Enable Development Mode

cf set-env ig IG_RUN_MODE development

3.1.3. Create And Use Persistent Volume For Configuration Data

IG is configured using JSON files. This section shows an easy way to create a shared storage volume that can persist your IG configuration between restarts. If you run IG using its default configuration, it loses all its configuration every time it restarts because the app is reset. Externalizing the config allows the configuration to reside outside the app and persist between restarts. In a real PCF environment (vs a PCF Dev environment) you would probably use different shared storage, such as an NFS service, but for development purposes a local-volume works great.


cf create-service local-volume free-local-disk local-volume-instance
cf bind-service ig local-volume-instance -c '{"mount":"/var/openig"}'
cf set-env ig IG_INSTANCE_DIR '/var/openig'

3.1.4. Start IG, Applying All Configuration Changes

cf start ig

3.1.5. Logs

cf logs ig --recent

3.1.6. Apply Required Configuration

SSH into your IG instance:

cf ssh ig
cd /var/openig
mkdir config
vi config/config.json

Create /var/openig/config/config.json and populate with default configuration as documented here:


{
  "heap": [
    {
      "name": "ClientHandler",
      "type": "ClientHandler",
      "config": {
        "hostnameVerifier": "ALLOW_ALL",
        "trustManager": {
          "type": "TrustAllManager"
        }
      }
    },
    {
      "name": "_router",
      "type": "Router",
      "config": {
        "defaultHandler": {
          "type": "StaticResponseHandler",
          "config": {
            "status": 404,
            "reason": "Not Found",
            "headers": {
              "Content-Type": [
                "application/json"
              ]
            },
            "entity": "{ \"error\": \"Something went wrong, contact the sys admin\"}"
          }
        }
      }
    },
    {
      "type": "Chain",
      "name": "CloudFoundryProxy",
      "config": {
        "filters": [
          {
            "type": "ScriptableFilter",
            "name": "CloudFoundryRequestRebaser",
            "comment": "Rebase the request based on the CloudFoundry provided headers",
            "config": {
              "type": "application/x-groovy",
              "source": [
                "Request newRequest = new Request(request);",
                "newRequest.uri = URI.create(request.headers['X-CF-Forwarded-Url'].firstValue);",
                "newRequest.headers['Host'] = newRequest.uri.host;",
                "logger.info('Receive request : ' + request.uri + ' forwarding to ' + newRequest.uri);",
                "Context newRoutingContext = org.forgerock.http.routing.UriRouterContext.uriRouterContext(context).originalUri(newRequest.uri.asURI()).build();",
                "return next.handle(newRoutingContext, newRequest);"
              ]
            }
          }
        ],
        "handler": "_router"
      },
      "capture": [
        "request",
        "response"
      ]
    }
  ],
  "handler": {
    "type": "DispatchHandler",
    "name": "Dispatcher",
    "config": {
      "bindings": [
        {
          "condition": "${not empty request.headers['X-CF-Forwarded-Url']}",
          "handler": "CloudFoundryProxy"
        },
        {
          "handler": {
            "type": "StaticResponseHandler",
            "config": {
              "status": 400,
              "entity": "Bad request : expecting a header X-CF-Forwarded-Url"
            }
          }
        }
      ]
    }
  }
}

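
The DispatchHandler at the bottom of the config makes one decision: requests carrying an X-CF-Forwarded-Url header (which the CF router adds for route services) go to the CloudFoundryProxy chain, everything else gets a 400. The following shell sketch (an illustration only, not IG itself) mirrors that decision; the sample URL is the headers app from earlier:

```shell
# Sketch of the Dispatcher decision: proxy when the CF router supplied
# X-CF-Forwarded-Url, otherwise answer 400 as the config above does.
forwarded_url="https://headers.pcfdev.mytestrun.com/"  # value the CF router would send
if [ -n "$forwarded_url" ]; then
  echo "dispatch to CloudFoundryProxy: $forwarded_url"
else
  echo "400 Bad request : expecting a header X-CF-Forwarded-Url"
fi
```
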

exit
cf restart ig

3.1.7. Access IG Studio

Point your browser to the IG Studio at http://ig.pcfdev.mytestrun.com/openig/studio/ to verify IG is up and running.

4. Install ForgeRock Service Broker

Download and install the service broker following the instructions in the doc:


4.1. Deploy and Configure the Service Broker App

cf push forgerockbroker-app -p service-broker-servlet-2.0.1.war
cf set-env forgerockbroker-app SECURITY_USER_NAME f8Q7hyHKgz
cf set-env forgerockbroker-app SECURITY_USER_PASSWORD n3BpjwKW4m
cf set-env forgerockbroker-app OPENAM_BASE_URI https://idp.mytestrun.com/openam/
cf set-env forgerockbroker-app OPENAM_USERNAME CloudFoundryAgentAdmin
cf set-env forgerockbroker-app OPENAM_PASSWORD KZDJhN7Vr4
cf set-env forgerockbroker-app OAUTH2_SCOPES profile
cf set-env forgerockbroker-app OPENIG_BASE_URI https://ig.pcfdev.mytestrun.com
cf restage forgerockbroker-app

Note that OPENIG_BASE_URI is specified as https, not http. If specified as http, the following error occurs when binding the IG route service to an application:

cf bind-route-service pcfdev.mytestrun.com igrs --hostname spring-music-chatty-quokka
Binding route spring-music-chatty-quokka.pcfdev.mytestrun.com to service instance igrs in org pcfdev-org / space pcfdev-space as admin...
Server error, status code: 502, error code: 10001, message: The service broker returned an invalid response for the request to http://forgerockbroker-app.pcfdev.mytestrun.com/v2/service_instances/4aa37a88-afc0-4e75-9474-d5e2ed3e7876/service_bindings/c8da2445-6689-4824-afd1-125795e2a848. Status Code: 201 Created, Body: {"route_service_url":"http://ig.pcfdev.mytestrun.com/4aa37a88-afc0-4e75-9474-d5e2ed3e7876/c8da2445-6689-4824-afd1-125795e2a848"}
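
The broker echoes the scheme of OPENIG_BASE_URI back in route_service_url, and Cloud Foundry rejects route service URLs that are not https. A hypothetical offline check of the error body quoted above (sed extracts the scheme; the assertion is illustration only, not part of the broker):

```shell
# Extract the scheme from the broker's route_service_url and flag non-https.
body='{"route_service_url":"http://ig.pcfdev.mytestrun.com/4aa37a88-afc0-4e75-9474-d5e2ed3e7876/c8da2445-6689-4824-afd1-125795e2a848"}'
scheme=$(printf '%s' "$body" | sed -E 's|.*"route_service_url":"([a-z]+)://.*|\1|')
if [ "$scheme" != "https" ]; then
  echo "invalid: route_service_url must use https"
fi
```
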

To see the service broker app’s environment:

cf env forgerockbroker-app

To see the service broker app’s details:

cf app forgerockbroker-app

Create service broker:

cf create-service-broker forgerockbroker f8Q7hyHKgz n3BpjwKW4m http://forgerockbroker-app.pcfdev.mytestrun.com

Enable the service you plan on using. The ForgeRock Service Broker supports OAuth and IG. You can enable either or both.

cf enable-service-access forgerock-ig-route-service
cf enable-service-access forgerock-am-oauth2

Create the service instance(s) you will be using for your apps. You should only need one instance per service to handle any number of applications:

cf create-service forgerock-ig-route-service shared igrs
cf create-service forgerock-am-oauth2 shared amrs

4.2. Bind IG Route Service to the Sample Apps

Note how no apps are bound to the IG Route Service (igrs):

cf routes
Getting routes for org pcfdev-org / space pcfdev-space as admin ...
space          host                  domain                port  path  type  apps                  service
pcfdev-space  music                 pcfdev.mytestrun.com                     music
pcfdev-space  node                  pcfdev.mytestrun.com                     node
pcfdev-space  rails                 pcfdev.mytestrun.com                     rails
pcfdev-space  headers               pcfdev.mytestrun.com                     headers
pcfdev-space  ig                    pcfdev.mytestrun.com                     ig
pcfdev-space  forgerockbroker-app   pcfdev.mytestrun.com                     forgerockbroker-app

Bind the Route Service to the apps:

cf bind-route-service pcfdev.mytestrun.com igrs --hostname music
cf bind-route-service pcfdev.mytestrun.com igrs --hostname node
cf bind-route-service pcfdev.mytestrun.com igrs --hostname rails
cf bind-route-service pcfdev.mytestrun.com igrs --hostname headers
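
The four bind commands can also be generated in a loop. In this sketch, echo prints the commands instead of executing them, so it runs even without the cf CLI installed; drop the echo to execute for real:

```shell
# Generate one bind command per sample app hostname.
for app in music node rails headers; do
  echo cf bind-route-service pcfdev.mytestrun.com igrs --hostname "$app"
done
```
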

Now the sample apps are bound to our IG Route Service:

cf routes
Getting routes for org pcfdev-org / space pcfdev-space as admin ...
space          host                              domain                port  path  type  apps                  service
pcfdev-space  music                 pcfdev.mytestrun.com                     music              igrs
pcfdev-space  node                  pcfdev.mytestrun.com                     node               igrs
pcfdev-space  rails                 pcfdev.mytestrun.com                     rails              igrs
pcfdev-space  headers               pcfdev.mytestrun.com                     headers            igrs
pcfdev-space  ig                    pcfdev.mytestrun.com                     ig
pcfdev-space  forgerockbroker-app   pcfdev.mytestrun.com                     forgerockbroker-app

5. Define IG Routes for the Sample Apps

By default, no routes are defined in IG for our sample apps, and the default behavior of IG (defined in the config.json you created earlier) is to deny access to everything. So the next, and very important, step is to define routes that re-enable access to our sample applications. Once the basic routes are defined, we can add authentication and authorization per application as we see fit:

  • Point your browser to the IG Studio: http://ig.pcfdev.mytestrun.com/openig/studio/
  • Select “Protect an Application” from the Studio home screen, then select “Structured.”
  • Select “Advanced options” and enter the app URL from the step where you pushed the app to PCF.
    • Since PCF does hostname-based routing (vs path-based), you have to change the condition that selects your route accordingly. In the Condition field, select “Expression” and enter:
      ${matches(request.uri.host, '^app-url')}
      For example:
      ${matches(request.uri.host, '^music.pcfdev.mytestrun.com')}
    • Pick a descriptive name and a unique ID for the application
    • Select “Create route”

  • Deploy your route.
  • You have now created a route with default configuration, which simply proxies requests through IG to the app. That means your app is available again like it was before you implemented IG and the Service Broker. The next step is to add value to your route like authentication or authorization.
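
The hostname-matching expression above is a regular expression anchored at the start of the host. A hypothetical offline check of the same pattern using grep (dots escaped here for strictness; IG's matches() applies the same regex semantics):

```shell
# Verify the route condition's regex matches the music app's hostname.
host="music.pcfdev.mytestrun.com"
if echo "$host" | grep -qE '^music\.pcfdev\.mytestrun\.com'; then
  echo "route matches"
else
  echo "route does not match"
fi
```
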

5.1. Prepare for Authentication and Authorization

As a preparatory step to authentication and authorization, create an AM Service for your route, which is a piece of configuration pointing to your ForgeRock Access Management instance. Select “AM service” from the left side menu and provide the details of your AM instance:

You won’t need the agent section populated for the use cases here.

5.2. Broker Authentication to an Application

  • To add authentication to your route, select “Authentication” from the left side menu and move the slider “Enable authentication” to the right, then select “Single Sign-On” as your authentication option.
  • In the configuration dialog popping up, select your AM service:

    Then select “Save”.
  • Deploy your route.
  • Point your browser to your app URL, e.g. https://music.pcfdev.mytestrun.com/
  • Notice how you will be redirected to your Access Management login page for authentication. Provide valid login credentials and your sample app should load.
  • Repeat with the other apps. Note how you can now SSO between all the apps!
  • Now let’s add authorization to one of the routes and only allow members of a certain group access to that application. For that, we need some additional prep work in AM:
    • Create a J2EE agent IG can use to evaluate AM policies:

    • Create a new policy set with the name “PCF” or a name and ID of your liking:

      Add URL as the resource type.
    • Create a policy and name it after your application you are protecting. Specify your app URL as the resource, allow GET as an action, and specify the subject condition to require a group membership. In this example, we want membership in the “Engineering” group to be required for access to the “headers” application:

      Your policy summary page should look something like this:

  • Now return to IG Studio and select the route of the app you created your policy for, in our case the “headers” app. Select “Authorization” from the left side bar, move the slider “Enable authorization” to the right, then select “AM Policy Enforcement” as your way to authorize users.
  • Select your AM service, specify your realm, and provide the name and password of the J2EE agent you created in an earlier step. In the policy endpoint section, specify the name of your policy set and the expression to retrieve your SSO token; the default should work: ${contexts.ssoToken.value}

  • Save and deploy your route.
  • Point your browser to the protected app and login using a user who is a member of the group you configured to control access. Notice how the app loads after logging in.
  • Now remove the user from the group and refresh the app. Notice how the page goes blank because the user is no longer authorized.


With this setup, applications can now be integrated, protected, SSO-enabled, and identity-infused within minutes. Provide profile self-service, password reset, strong and step-up authentication, continuous authentication, authorization, and risk evaluation to any application in the Pivotal Cloud Foundry ecosystem.