Identity Workflow with AM using Zeebe and Cloud Functions

**** Please note that this is only a sample. The statements, examples or code in this post are not supported by ForgeRock. ****

Functions as a Service

Serverless computing has emerged as a new and compelling paradigm for the deployment of cloud applications. Offering a true “utility” model for software, it frees developers from worrying about the low-level details of server management and scaling, and they pay only for the time spent processing requests or events.

The most natural way to use serverless computing is to provide a piece of code (a function) to be executed by the serverless computing platform. This is where Functions-as-a-Service (FaaS) comes into the picture: small pieces of code run for a limited amount of time (minutes), are triggered by events (or HTTP requests), and are not allowed to keep persistent state.

From a cost perspective, the benefits of a serverless architecture are most apparent for bursty workloads, because the developer offloads the elasticity of the function to the platform.

Zeebe: Workflow as a Service

Zeebe is a cloud-native workflow engine for BPMN-style process automation. It is also Camunda’s open-source workflow orchestration engine. With Zeebe, you can define workflows graphically in BPMN 2.0 using the Zeebe Modeler.

Workflows built in Zeebe can react to events from messaging platforms. Zeebe can scale horizontally to handle very high throughput, and it provides fault tolerance. State for active workflow instances is represented and persisted within Zeebe itself.

Zeebe architecture, taken from their documentation site, is shown below:

The gateway, which proxies requests to brokers, serves as a single entry point to a Zeebe cluster. It is stateless and session-less. Camunda Cloud adds gateways as necessary for load balancing and high availability, relieving the developer of the burden of scaling the workflow engine.

The Zeebe broker is the distributed workflow engine that stores and manages the state of active workflow instances. No application business logic lives in the broker.

Zeebe uses BPMN 2.0 for representing workflows. BPMN is an industry standard which is widely supported by different vendors and implementations. Using BPMN ensures that workflows can be interchanged between Zeebe and other workflow systems.

Zeebe Basics

Sequences – In Zeebe, the simplest kind of BPMN workflow is an ordered sequence of tasks. Whenever workflow execution reaches a task, Zeebe creates a job that can be requested and completed by a job worker.

State Machine – Zeebe’s workflow orchestration operates like a state machine. A workflow instance reaches a task, and Zeebe creates a job that can be requested by a worker. Zeebe then waits for the worker to request a job and complete the work. Once the work is completed, the flow continues to the next step. If the worker fails to complete the work, the workflow remains at the current step, and the job could be retried until it’s successfully completed.
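To make the job-worker contract concrete, here is a minimal sketch of a custom Node.js job worker using the zeebe-node client. Note that in this post the actual work is done by Camunda Cloud’s HTTP worker rather than a custom worker, and the exact createWorker signature and completion API vary across zeebe-node versions, so treat this as illustrative:

const ZB = require('zeebe-node')

// Assumes a local broker; Camunda Cloud credentials work here too
const zbc = new ZB.ZBClient('localhost:26500')

// A job worker subscribes to a task type, receives activated jobs,
// and reports success (or failure, which lets Zeebe retry the job)
zbc.createWorker('create-account', (job, complete) => {
  console.log('working on', job.workflowInstanceKey, job.variables)
  complete.success({ success: true }) // merged back into the workflow variables
})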

Data Flow – As Zeebe progresses from one task to the next in a workflow, it can move custom data in the form of variables. Variables are key-value-pairs and part of the workflow instance.
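For example, the variables carried by an instance of the onboarding workflow in this post look roughly like this (excerpted from the dry-run output later in the post):

{
  "body": { "success": true, "cMessage": "welcome email successful" },
  "success": true,
  "statusCode": 200,
  "attributeValues": "joe, black, 999881"
}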

Parallel Gateway – With Zeebe, you can use the Fork / Join concurrency available with parallel gateways to perform multiple tasks in parallel.

Exclusive Gateway – For implementing data-based conditions, workflow nodes can choose between different paths based on variables and conditions. The diamond shape with the “X” in the middle is the element indicating that the workflow takes exactly one of several paths.

Onboarding Workflow

For reference, the BPMN workflow used in the example is shown below:

Sample Integration with ForgeRock

This blog post demonstrates invoking an onboarding BPMN workflow from ForgeRock AM.

Google Cloud Functions

I used the Serverless Framework to set up my cloud functions in GCP. Each cloud function implements a discrete piece of business logic without storing any state whatsoever. Each cloud function is invoked separately by a specific BPMN workflow node from the Zeebe cluster (discussed next).

A snippet from the serverless.yml file shows the definition of the create-account cloud function with a handler “createAccount”. My GCP project will host this cloud function at the specified handler.
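The original snippet was shown as an image; below is a minimal reconstruction of what that serverless.yml fragment might look like. The service name, project id, and region are assumptions:

# Hypothetical reconstruction of serverless.yml; service, project, and region are assumptions
service: zeebe-cloud-functions

provider:
  name: google
  runtime: nodejs10
  project: my-gcp-project
  region: us-central1

plugins:
  - serverless-google-cloudfunctions

functions:
  create-account:
    handler: createAccount   # exported from index.js
    events:
      - http: path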

Running serverless deploy will create the cloud function from the index.js file, a quick sample of which is pasted below:

// index.js - the createAccount handler referenced in serverless.yml
module.exports.createAccount = async (req, res) => {
  // Log the incoming request body for troubleshooting
  const body = req.body;
  console.log(body);
  // Return a simple success payload; a real implementation would
  // create the account in the target system here
  const responseBody = JSON.stringify({
    success: true,
    cMessage: 'Create successful'
  });
  res.status(200).send(responseBody);
};

Here I am simply returning success from the cloud function. The workflow node should therefore receive a body.success parameter, which it can map to an output parameter, say success, to pass along to the next workflow node.

To complete the API composer pattern, we also create a cloud function that serves to initiate the workflow. This way the initiator of the workflow does not need to know the specifics of the root workflow node.

// Workflow-initiator cloud function using the zeebe-node client
const ZB = require('zeebe-node')

module.exports.customerOnboarding = async (req, res) => {
  const body = req.body;
  // Connect to the Zeebe cluster running on Camunda Cloud
  const zbc = new ZB.ZBClient({
    camundaCloud: {
      clientId: "yXtccn6lWclShJGUQ1aRSfrhJZikYang",
      clientSecret: "<bleeped>",
      clusterId: "0263777a-114c-4d2f-9445-47a2290b0320",
      cacheOnDisk: false
    }
  })
  // Start a CustomerOnboarding instance and wait for its result
  const result = await zbc.createWorkflowInstanceWithResult('CustomerOnboarding', body);
  const responseBody = JSON.stringify({
    message: 'onboarding started',
    data: result
  });
  res.status(200).send(responseBody);
};

The initiator cloud function can instantiate the workflow in one of two modes: async (fire and forget) or await. I chose the latter in order to receive the workflow processing outcome.
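For comparison, a sketch of the fire-and-forget variant using the zeebe-node client’s createWorkflowInstance call, which returns as soon as the instance is created rather than when the workflow finishes:

// Fire-and-forget: resolves once the instance is created,
// without waiting for the workflow outcome
const instance = await zbc.createWorkflowInstance('CustomerOnboarding', body);
console.log('started instance', instance.workflowInstanceKey);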

Deployed in GCP:

customerOnboarding will be invoked by our Authentication Tree from AM. createAccount will be invoked by the service task of the same name from inside our BPMN workflow running on a Zeebe workflow cluster. Read on!

Zeebe Cluster

I also created a free Zeebe cluster on the public Camunda cloud, and set it up for HTTP access:

The Zeebe cluster can be invoked over HTTP, as indicated by the HTTP worker “job type”. I also defined a worker variable that serves as the web address of the cloud functions, used by the worker nodes to invoke the relevant business logic.

The BPMN workflow was modeled using the Zeebe Modeler. The picture below is from Camunda Operate, which offers a cool dashboard to visualize in-flight, completed, and failed workflow instances:

Each “service task”, or worker node, must be configured with the input/output parameters and headers supplied to it by the root node. The workflow operates like a state machine, with each successive node using the context of the operation to make decisions. In the configuration settings for each node, the “Input parameters” can include the request body, and the “Output parameters” can include information that successor nodes need in order to operate correctly. An example of input and output parameters, and headers, is shown using the createAccount node:

The success output parameter, if set by the last worker node, also becomes the final state of the workflow! You will see later that this workflow outcome is used by the Authentication Tree to decide whether or not to authenticate a user.

The headers passed into the worker node are used by it to invoke the cloud function responsible for executing the node’s business logic; in this case, the Google Cloud Function createAccount specified in the serverless.yml file in the previous section. Also, notice body.success being mapped to a workflow variable called success. The success variable is used by the “certified?” Exclusive Gateway to determine whether manual review is needed, or whether it is okay to finish onboarding the customer.
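The modeler stores these settings as Zeebe extension elements on the service task. Here is a rough sketch of what the resulting BPMN XML might contain; the exact header keys and mapping syntax depend on the Zeebe and HTTP-worker versions, so treat this as illustrative:

<bpmn:serviceTask id="createAccount" name="createAccount">
  <bpmn:extensionElements>
    <!-- picked up by the Camunda Cloud HTTP worker -->
    <zeebe:taskDefinition type="HTTP" />
    <zeebe:taskHeaders>
      <zeebe:header key="url" value="https://<bleeped>.cloudfunctions.net/createAccount" />
      <zeebe:header key="method" value="POST" />
    </zeebe:taskHeaders>
    <!-- map the HTTP response into a workflow variable -->
    <zeebe:ioMapping>
      <zeebe:output source="body.success" target="success" />
    </zeebe:ioMapping>
  </bpmn:extensionElements>
</bpmn:serviceTask>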

Let’s look at the configuration for one of the sequence flows, the outbound connectors from the “certified?” Exclusive Gateway:

The sequence flow labelled Yes only triggers if the workflow variable success evaluates to true; otherwise the alternate flow is triggered, resulting in a manual review.
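The condition itself is just an expression over the workflow variable, attached to the Yes flow. A sketch of the underlying BPMN follows; the element ids are assumptions, and the expression syntax varies by Zeebe version (newer versions use FEEL, e.g. =success):

<bpmn:sequenceFlow id="yes" name="Yes" sourceRef="certified" targetRef="finishOnboarding">
  <bpmn:conditionExpression>success == true</bpmn:conditionExpression>
</bpmn:sequenceFlow>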

AM Integration

First, I created a workflow script, which is responsible for building out the request body with user and context data. It calls our workflow-initiator cloud function:
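The script itself was shown as a screenshot; here is a minimal sketch of what such a scripted decision node script might look like, assuming AM’s standard scripted-decision bindings (sharedState, httpClient, outcome). The attribute names in the request body are assumptions, and whether you check success or status depends on your workflow’s output mapping:

// Scripted decision node (JavaScript) - a sketch, not the original script
var fr = JavaImporter(org.forgerock.http.protocol.Request);

var request = new fr.Request();
request.setUri("https://<bleeped>.cloudfunctions.net/customerOnboarding");
request.setMethod("PUT");
request.getHeaders().add("Content-Type", "application/json");
// Build the request body from user and context data (attribute names assumed)
request.getEntity().setString(JSON.stringify({
  username: sharedState.get("username"),
  realm: sharedState.get("realm")
}));

var response = httpClient.send(request).get();
var result = JSON.parse(response.getEntity().getString());

// Drive the tree based on the workflow outcome
outcome = (result.data && result.data.variables && result.data.variables.success) ? "true" : "false";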

Next, I set up a really simple authentication tree called WorkflowLogin to use the workflow script. This is just to demonstrate the approach. The use case is that the user must be onboarded before they can be authenticated and given a session by AM. As mentioned earlier, the Zeebe workflow is able to return a workflow outcome, in the form of result.get("variables").get("status"), which can be checked in the scripted decision node.

A simple dry run from the CLI shows how to manually send an HTTPS request to the customerOnboarding workflow-initiator cloud function:

curl -H "Content-Type: application/json" -X PUT -d @request-zeebe.json https://<bleeped>.cloudfunctions.net/customerOnboarding | jq

{
  "message": "onboarding started",
  "result": {
    "workflowKey": "2251799813746599",
    "bpmnProcessId": "CustomerOnboarding",
    "version": 1,
    "workflowInstanceKey": "2251799813757247",
    "variables": {
      "body": {
        "success": true,
        "cMessage": "welcome email successful"
      },
      "success": true,
      "statusCode": 200,
      "attributeValues": "joe, black, 999881"
    }
  }
}

Similarly, the authentication tree can be invoked directly from the CLI:

curl --request POST --header "Accept-API-Version: resource=2.0, protocol=1.0" --header "Content-Type: application/json" --header "X-OpenAM-Username: demo" --header "X-OpenAM-Password: <>" 'https://openam-ea-tinkerdoodle.forgeblocks.com/am/json/realms/root/authenticate?authIndexType=service&authIndexValue=WorkflowLogin' | jq

{
  "tokenId": "6slh4WridRR4WTNfmogOmXMfi9E.AAJTSQACMDIAAlNLABxmUmFVeUlVdVd1SHpkc3RndHBtTkxnUjc5QTg9AAR0eXBlAANDVFMAAlMxAAIwMQ..",
  "successUrl": "/console",
  "realm": "/"
}

This completes the demonstration of invoking a powerful cloud workflow engine, Zeebe, from an authentication tree in AM via Google Cloud Functions. Leave your thoughts and comments below.

Thanks for reading!

Easily Share Authentication Trees

Originally published on Mr. Anderson’s Musings

A New World

A new world of possibilities was born with the introduction of authentication trees in ForgeRock’s Access Management (AM). Limiting login sequences of the past were replaced with flexible, adaptive, and contextual authentication journeys.

ForgeRock chose the term Intelligent Authentication to capture this new set of capabilities. Besides offering a shiny new browser-based design tool to visually create and maintain authentication trees, Intelligent Authentication also rang in a new era of atomic extensibility.

Authentication Tree

While ForgeRock’s Identity Platform has always been known for its developer-friendliness, authentication trees took it to the next level: Trees consist of a number of nodes, which are connected with each other as in a flow diagram or decision tree. Each node is an atomic entity, taking a single input and providing one or more outputs. Nodes can be implemented in Java, JavaScript, or Groovy.

A public marketplace allows the community to share custom nodes. An extensive network of technology partners provides nodes to integrate with their products and services.

A New Challenge

With the inception of authentication trees, a spike of collaboration between individuals, partners, and customers occurred. At first the sharing happened on a node basis as people would exchange cool custom node jar files with instructions on how to use those nodes. But soon it became apparent that the sharing of atomic pieces of functionality wasn’t quite cutting it. People wanted to share whole journeys, processes, trees.

A New Tool – amtree.sh

A fellow ForgeRock solution architect in the UK, Jon Knight, created the first version of a tool that allowed the easy export and import of trees. I was so excited about the little utility that I forked his repository and extended its functionality to make it even more useful. Shortly thereafter, another fellow solution architect from the Bay Area, Jamie Morgan, added even more capabilities.

The tool is implemented as a shell script. It exports authentication trees from any AM realm to standard output or a file, and imports trees into any realm from standard input or a file. The tool automatically includes the decision-node scripts (JavaScript and Groovy) required by the exported trees, and it requires curl, jq, and uuidgen to be installed and available on the host where it is used. Here are a few ideas and examples for how to use the tool:

Backup/Export

I do a lot of POCs, create little point solutions for customer or prospect use cases, and build demos to show off technology partner integrations or our support for the latest open standards. No matter what I do, it often involves authentication trees of varying complexity. Those trees usually take some time to design and test, and are thus worthy of documentation, preservation, and sharing. The first step to achieve any of these things is to extract the trees’ configuration into a reusable format, or simply speaking: backing them up or exporting them.

Before performing an export, it can be helpful to produce a list of all the authentication trees in a realm. That way we get an idea of what’s available and can decide whether to export individual trees or all the trees in a realm. The tool provides an option to list all trees in a realm, in their natural order (order of creation). To get an alphabetically ordered list, we can pipe the output into the sort shell command.

List Trees
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -l | sort
email
push
push_reg
push_reg_2fa
risk
select
simple
smart
solid
trusona
webauthn
webauthn_reg
webauthn_reg_2fa 

Now that we have a list of trees, it is time to think about what it is we want to do. The amtree.sh tool offers us 3 options:

  1. Export a single tree into a file or to standard out: -e
  2. Export all trees into individual files: -S
  3. Export all trees into a single file or to standard out: -E

The main reason to choose one of these options over another is whether your trees are independent (have no dependency on other trees) or not. Authentication trees can reference other trees, which then act like subroutines in a program. These subroutines are called inner trees. Independent trees do not contain inner trees. Dependent trees contain inner trees.

Options 1 and 2 are great for independent trees as they put a single tree into a single file. Those trees can then easily be imported again. Option 2 generates the same output as if running option 1 for every tree in the realm.

Dependent trees require the other trees to be available already or to be imported first; otherwise the AM APIs will complain and the tool will not be able to complete the import.

Option 3 is best suited for highly interdependent trees. It puts all the trees of a realm into the same file and on import of that file, the tool will always have all the required dependencies available.

Option 2: Export All Trees To Individual Files
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -S
 Export all trees to files
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............
Option 3: Export All Trees To Single File
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E -f authn_all.json
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............

After running both of those commands, we should find the expected files in our current directory:

ls -1
 authn_all.json
 email.json
 push.json
 push_reg.json
 push_reg_2fa.json
 risk.json
 select.json
 simple.json
 smart.json
 solid.json
 trusona.json
 webauthn.json
 webauthn_reg.json
 webauthn_reg_2fa.json

The second command (option 3) produced the single authn_all.json file as indicated by the -f parameter. The first command (option 2) generated individual files per tree.

Restore/Import

Import is just as simple as export. The tool brings in required scripts and resolves dependencies to inner trees, which means it orders trees on import to satisfy dependencies.

Exports omit secrets of all kinds (passwords, API keys, etc.) that may be stored in node configuration properties. Therefore, if we export a tree whose configuration contains secrets, the imported tree will lack those secrets. If we want to more easily reuse trees (as I do in my demo/lab environments), we can edit the exported tree files and manually insert the secrets. Fields containing secrets are exported as null values. Once we manually add those secrets to our exports, they will import as expected.

{
  "origin": "003232731275e50c2770b3de61675fca",
  "innernodes": {},
  "nodes": {
    ...
    "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
      "_id": "B56DB408-E26D-4FBA-BF86-339799ED8C45",
      "hostName": "smtp.gmail.com",
      "password": null,
      "sslOption": "SSL",
      "hostPort": 465,
      "emailAttribute": "mail",
      "smsGatewayImplementationClass": "com.sun.identity.authentication.modules.hotp.DefaultSMSGatewayImpl",
      "fromEmailAddress": "vscheuber@gmail.com",
      "username": "vscheuber@gmail.com",
      "_type": {
        "_id": "OneTimePasswordSmtpSenderNode",
        "name": "OTP Email Sender",
        "collection": true
      }
    },
    ...
  },
  "scripts": {
    ...
  },
  "tree": {
    "_id": "email",
    "nodes": {
      ...
      "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
        "displayName": "Email OTP",
        "nodeType": "OneTimePasswordSmtpSenderNode",
        "connections": {
          "outcome": "08211FF9-9F09-4688-B7F1-5BCEB3984624"
        }
      },
      ...
    },
    "entryNodeId": "DF68B2B8-0F10-4FF3-9F2C-622DA16BA4B7"
  }
}

The JSON snippet above shows excerpts from the email tree. One of the nodes is responsible for sending a one-time password (OTP) via email to the user and thus needs an SMTP gateway configuration. The export does not include the value of the password property in the node configuration. To make this export file reusable, we could replace null with the actual password. Depending on the type of secret, this might be acceptable or not.

Importing individual trees requires us to make sure all the dependencies are met. Amtree.sh provides a nice option, -d, to describe a tree export file. That tells us whether a tree has any dependencies we need to meet before we can import that single tree. Let’s take the select tree as an example. The select tree offers the user a choice of which 2nd factor they want to use to log in. Each choice then evaluates another tree, which implements the chosen method:

Running amtree.sh against the exported select.json file gives us a good overview of what the select tree is made of, which node types it uses, which scripts (if any) it references, and what other trees (inner trees) it depends on:

../amtree.sh -d -f select.json
 Tree: select
 ============

 Nodes:
 -----
 - ChoiceCollectorNode
 - InnerTreeEvaluatorNode 

 Scripts:
 -------
 None

 Dependencies:
 ------------
 - email
 - push
 - trusona
 - webauthn 

From the output of the -d option we can derive useful information:

  • Which nodes will we need to have installed in our AM instance? ChoiceCollectorNode and InnerTreeEvaluatorNode.
  • Which scripts will the tree export file install in our AM instance? None.
  • Which trees does this tree depend on? The presence of the InnerTreeEvaluatorNode already gave away that there would be dependencies. This list simply breaks them down: email, push, trusona, and webauthn.

Ignoring the dependencies, we can try to import the file into an empty realm and see what amtree.sh tells us:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i select -f select.json 

Importing select…Error importing node InnerTreeEvaluatorNode (D21C798F-D4E9-400A-A038-0E1A883348EB): {"code":400,"reason":"Bad Request","message":"Data validation failed for the attribute, Tree Name"}
{
  "_id": "D21C798F-D4E9-400A-A038-0E1A883348EB",
  "tree": "email",
  "_type": {
    "_id": "InnerTreeEvaluatorNode",
    "name": "Inner Tree Evaluator",
    "collection": true
  }
}

The error message confirms that dependencies are not met. This leaves us with 3 options:

  1. Instruct amtree.sh to import the four dependencies using the -i option before trying to import select.json. Of course that bears the risk that any or all of the four inner trees have dependencies of their own.
  2. Instruct amtree.sh to import the authn_all.json using the -I option. The tool will bring in all the trees in the right order, but there is no easy way to prevent any of the many trees in the file from being imported.
  3. Instruct amtree.sh to import all the .json files in the current directory using the -s option. The tool will bring in all the trees in the right order. Any trees we don’t want to import, we can move into a sub folder, and amtree.sh will ignore them.

Let’s see how option 3 works out. To avoid errors, we need to move the authn_all.json file containing all the trees into a sub folder (ignore in my case). Then we are good to go:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -s
Import all trees in the current directory
Determining installation order..............................................
Importing email.........
Importing push_reg..............
Importing push_reg_2fa...............
Importing simple............
Importing trusona....
Importing webauthn.............
Importing webauthn_reg............
Importing webauthn_reg_2fa.............
Importing push..........
Importing risk........
Importing select.......
Importing smart..........
Importing solid......

No errors reported this time. You can see the tool spent quite some cycles determining the proper import order (the more dots, the more cycles). Had we tried option 1 and manually imported the four known dependencies, we would likely have run into nested dependencies.

A word of caution: Imports overwrite trees of the same name without any warning. Be mindful of that fact when importing into a realm with existing trees.

Migrate/Copy

Amtree.sh supports stdin and stdout for input and output. That allows us to pipe the output of an export command (-e or -E) to an import command (-i or -I) without storing anything on disk. That’s a pretty slick way to migrate trees from one realm to another in the same AM instance or across instances. The -s and -S options do not support stdin and stdout, thus they won’t work for this scenario.

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E | ../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -I
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............
 Determining installation order.............................
 Importing email.........
 Importing push..........
 Importing push_reg..............
 Importing push_reg_2fa...............
 Importing risk........
 Importing select.......
 Importing simple............
 Importing smart..........
 Importing solid......
 Importing trusona....
 Importing webauthn.............
 Importing webauthn_reg............
 Importing webauthn_reg_2fa.............

The above command copies all the trees in a realm to another realm. Nothing is ever exported to disk.

Prune

Trees consist of different configuration artifacts in the AM configuration store. When managing trees through the AM REST APIs, it is easy to forget to remove unused artifacts. Even when using the AM Admin UI, dead configuration is left behind every time a tree is deleted. The UI doesn’t give an admin any option to remove those dead artifacts, nor does it really offer a way to even see them. Over time, they grow to an uncomfortable size and clutter the results of API calls.

Amtree.sh prunes those orphaned configuration artifacts when the -P parameter is supplied. I regularly delete all the default trees in a new realm, which leaves me with 33 orphaned configuration artifacts right out of the gate. To be clear: Those orphaned configuration artifacts don’t cause any harm. It’s a desire for tidiness that makes me want them gone.

./amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -P
Analyzing authentication nodes configuration artifacts…

Total:    118
Orphaned: 20

Do you want to prune (permanently delete) all the orphaned node instances? (N/y): y
Pruning....................
Done.

Wrap-Up & Resources

Amtree.sh is a big improvement for the handling and management of authentication trees in the ForgeRock Identity Platform. It is hardly the final solution, though. The implementation as a shell script limits both the supported platforms and the functionality. My fingers are itching to re-write it in a better suited language. Now there’s a goal for 2020!

If you want to explore the examples in this post, here are all the resources I used:

Use Authentication Trees To Create A Great SAML Login Experience

If you are shopping for a C/IAM platform these days, chances are the vendor pitch you are going to hear is all about OAuth and OIDC and JWT tokens and other shiny things for application integration, authentication, authorization, and single sign-on. And rightly so, as these standards truly offer great new capabilities or provide a modern implementation of old friends. But once the honeymoon is over and reality sets in, you are probably going to find yourself facing a majority of SAML applications and services that need integration and only a very small minority of modern applications supporting those shiny new standards.

The ForgeRock Identity Platform is an authentication broker and orchestration hub. How you come in and where you take your session is merely a matter of configuration. With the great new capabilities introduced with Authentication Trees, one might wonder how these new capabilities jive with old-timers like SAML. And wow do they jive!

With SAML there are two main scenarios per integration: Being the service provider (SP) or being the identity provider (IDP). The service provider consumes an attested identity and has limited control over how that identity was authenticated. The identity provider on the other hand dictates how identities authenticate.

Application users can start the process on the IDP side (IDP-initiated) or on the application side (SP-initiated). IDP-initiated login typically comes in the form of a portal listing all the applications the user has access to; selecting any of these applications launches the login flow. SP-initiated flows start on the application side. Often users have bookmarked a page in the application which requires authentication, causing the application to initiate the login flow for unauthenticated users.

This guide focuses on the SP-initiated flow and how you can use the ForgeRock Identity Platform to create the exact login experience you seek. The use case is:

“Controlling which authentication tree to launch in SP-initiated flows”.

The configuration option is a bit non-obvious, and I have been asked a few times whether the only way is to use the realm default authentication setting or whether there are alternatives. This is a great way to create an individual login journey for SAML users, distinct from all others.

To configure your environment, follow these steps:

  1. Follow the steps to configure SAML federation between the ForgeRock Identity Platform and your application. For this guide I configured my private Google Apps account as an SP.
  2. Test the SP-initiated flow. It should use your realm default for authentication (realm > authentication > settings > core).
  3. Now create the trees you want SAML users to use in the realm you are federating into. In this example, a tree called “saml” is the base tree that should be launched to authenticate SAML users.

    where the “Continue?” node just displays a message notifying users that they are logging in via SAML SP-initiated login. The “Login” node is an inner tree evaluator launching the “simple” tree, which lets them authenticate using username and password:

    The “2nd Factor” node is another inner tree evaluator, branching out into a tree that allows the user to select the 2nd factor they want to use:

    This guide will use the “push” tree for push authentication:
  4. Now navigate to the “Authentication Context” section in your “Hosted IDP” configuration in the AM Admin UI (Applications > Federation > Entity Providers > [your hosted entity provider] > Authentication Context):

    This is where the magic happens. Select “Service” from the “Key” drop-down list on all the supported authentication contexts (note: you could launch different trees based on what the SP proposes for authentication; Google seems to only support “PasswordProtectedTransport” by default) and enter the name of the base tree you want to execute in the “Value” column, “saml” in this configuration example. (See the SAML request snippet below for where that context comes from.)
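For reference, the authentication context the SP proposes arrives in the SAML authentication request as a standard RequestedAuthnContext element, for example:

<samlp:RequestedAuthnContext Comparison="exact">
  <saml:AuthnContextClassRef>
    urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
  </saml:AuthnContextClassRef>
</samlp:RequestedAuthnContext>

The mapping above routes this class reference to the “saml” tree.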

Test your configuration:

  1. Launch your SP-initiated login flow. For Google GSuite you do that by pointing your browser to https://gsuite.google.com and select “Sign-In”, then type in your GSuite domain name, select the application you want to land in after authentication and select “GO”.
  2. Google redirects to ForgeRock Access Management. The “saml” tree displays the configured message, giving users the options to “Continue” or “Abort”. Select “Continue”:
  3. The “simple” tree executes, prompting for username and password:
  4. Now the flow comes back into the “saml” tree and branches out into the 2nd factor selector tree “select”:
  5. Select “Push” and respond to the push notification on your phone while the web user interface gracefully waits for your response:
  6. And finally, a redirect back to Google Apps with a valid SAML assertion in tow completes the SP-initiated login flow: