ForgeRock Identity Day Paris (2019)

On Thursday, November 21st, ForgeRock Identity Day took place in Paris: a half-day briefing on our company and our products for our customers, prospects, and partners.

Hosted by Christophe Badot, VP for France, Benelux, and Southern Europe, the event opened with a presentation by Alexander Laurie, VP Global Solution Architecture, on market trends and ForgeRock's vision, delivered in French with a charming English accent.

We also heard testimonials from our customers CNP Assurances, GRDF, and the Renault-Nissan-Mitsubishi Alliance. Thank you for sharing your requirements and the solutions ForgeRock delivered.

Léonard Moustacchis and Stéphane Orluc, Solutions Architects at ForgeRock, gave a live demonstration of the power of the ForgeRock Identity Platform through a web and mobile banking application. I then had the honor of closing the day with a presentation of the product roadmap, and in particular of ForgeRock Identity Cloud, our SaaS offering available since the end of October.

The afternoon ended with a cocktail reception that gave us a chance to talk with attendees in more detail. All the photos from the event are available in the album on my Flickr account.


And now, a shorter English version:

On Thursday, November 21st, we hosted ForgeRock Identity Day in Paris, a half-day event for our customers, prospective customers, and partners. We presented our vision of the identity landscape, our products, and the roadmap. And three of our French customers, CNP Assurances, GRDF, and the Renault-Nissan-Mitsubishi Alliance, presented how ForgeRock has helped them with their digital transformation and identity needs. My colleagues from the Solutions Architect team ran a live demo of our web and mobile sample banking applications to illustrate the power of the ForgeRock Identity Platform. And I closed the day with a presentation of the product roadmap, and especially of ForgeRock Identity Cloud, our solution as a service. As usual, all my photos are visible in this public Flickr album.

This blog post was first published @ ludopoitou.com, included here with permission.


Configuring ForgeRock AM Active/Active Deployment Routing Using IG

Introduction

The standard deployment pattern for the ForgeRock Identity Platform is to deploy the entire platform in multiple data centers/cloud regions. This ensures the availability of services in case of an outage in one data center. This approach also provides performance benefits, as the load can be distributed among multiple data centers. Below is an example diagram of an active/active deployment:

Problem Statement

AM provides both stateful (CTS-based) and stateless (client-based) sessions. Global deployment use cases require a seamless single sign-on (SSO) experience across all applications, with the following constraints:

  • Certain deployments have distributed applications, such as App-A, deployed only in Data Center-A, and App-B, deployed only in Data Center-B.
  • The end user may travel to different locations, such as from the East coast to the West coast in the U.S. This means that application access requests will be handled by different data centers.

To achieve these use cases, CTS replication has to be enabled across multiple data centers/cloud regions.

In some situations, a user may try to access an application hosted in a specific data center before their corresponding sessions have been replicated. This can result in the user being prompted to re-authenticate, thereby degrading the user experience:

Note: This problem may be avoided if client-based sessions are leveraged, but many deployments have to use CTS-based sessions due to current limitations in client-based sessions. Also, when CTS-based sessions are used, the impact of CTS replication is much greater than with client-based sessions.

In this article, we leverage IG to intelligently route session validation requests to a single data center, irrespective of the application being accessed.

Solution

IG can route session validation requests to a specific data center/region, depending on an additional site cookie generated during the user's authentication.

This approach ensures that the AM data center that issued the user's session is used for the corresponding session validation calls. This also means that CTS replication is not required across multiple data centers/cloud regions:

Configure AM

  • Install AM 6.5.x and the corresponding DS stores, Amster, and so on. The following is a sample Amster install command:
install-openam --serverUrl http://am-A.example.com:8094/am --adminPwd cangetinam --acceptLicense --userStoreDirMgr "cn=Directory Manager" --userStoreDirMgrPwd "cangetindj" --userStoreHost uds1.example.com --userStoreType LDAPv3ForOpenDS --userStorePort 1389 --userStoreRootSuffix dc=example,dc=com --cfgStoreAdminPort 18092 --cfgStorePort 28092 --cfgStoreJmxPort 38092 --cfgStoreSsl SIMPLE --cfgStoreHost am-A.example.com --cfgDir /home/forgerock/am11 --cookieDomain example.com
am> connect http://am-A.example.com:8094/am -i
Sign in
User Name: amadmin
Password: **********
amster am-A.example.com:8094> import-config --path /home/forgerock/work/amster
Importing directory /home/forgerock/work/amster
Imported /home/forgerock/work/amster/global/Realms/root-employees.json
Imported /home/forgerock/work/amster/realms/root-employees/CoookieSetterNode/e4c11a8e-6c3b-455d-a875-4a1c29547716.json
Imported /home/forgerock/work/amster/realms/root-employees/DataStoreDecision/6bc90a3d-d54d-4857-a226-fb99df08ff8c.json
Imported /home/forgerock/work/amster/realms/root-employees/PasswordCollector/013d8761-2267-43cf-9e5e-01a794bd6d8d.json
Imported /home/forgerock/work/amster/realms/root-employees/UsernameCollector/31ce613e-a630-4c64-84ee-20662fb4e15e.json
Imported /home/forgerock/work/amster/realms/root-employees/PageNode/55f2d83b-724b-4e3a-87cc-247570c7020e.json
Imported /home/forgerock/work/amster/realms/root-employees/AuthTree/LDAPTree.json
Imported /home/forgerock/work/amster/realms/root/J2eeAgents/IG.json
Import completed successfully
Notes on the imported configuration:
  • Creates the /root realm aliases am-A.example.com and am-B.example.com.
  • Creates the AM agent to be used by IG in the /root realm.
  • Creates the LDAPTree, which sets a cookie after authentication. Update the cookie value to DC-A or DC-B, depending on the data center being used.
  • Repeat the previous steps to configure AM in all data centers.

Configure IG

Configure the following IG files:
  • frProps.json: specifies the AM primary and secondary data center endpoints. Refer to frProps-DC-A for DC-A and frProps-DC-B for DC-B.
  • config.json: declares the primary and secondary AmService objects.
  • 01-pep-dc-igApp.json: routes session validation to a specific data center, depending on the "DataCenterCookie" value (see the sketch after this list).
  • Repeat the previous steps for deploying IG in all data centers.
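To make the routing idea concrete, the following is a minimal, illustrative sketch of such a route. The heap object and handler names, the agent credentials, the realm, and the exact condition expressions are assumptions for illustration only; they are not the actual contents of the frProps.json, config.json, or 01-pep-dc-igApp.json files referenced above.

{
  "name": "01-pep-dc-igApp",
  "condition": "${matches(request.uri.path, '^/igApp')}",
  "heap": [
    {
      "name": "AmServiceDcA",
      "type": "AmService",
      "config": {
        "url": "http://am-A.example.com:8094/am",
        "realm": "/employees",
        "agent": { "username": "IG", "password": "password" }
      }
    },
    {
      "name": "AmServiceDcB",
      "type": "AmService",
      "config": {
        "url": "http://am-B.example.com:8094/am",
        "realm": "/employees",
        "agent": { "username": "IG", "password": "password" }
      }
    }
  ],
  "handler": {
    "type": "DispatchHandler",
    "config": {
      "bindings": [
        {
          "condition": "${request.cookies['DataCenterCookie'][0].value == 'DC-A'}",
          "handler": {
            "type": "Chain",
            "config": {
              "filters": [ { "type": "SingleSignOnFilter", "config": { "amService": "AmServiceDcA" } } ],
              "handler": "ReverseProxyHandler"
            }
          }
        },
        {
          "handler": {
            "type": "Chain",
            "config": {
              "filters": [ { "type": "SingleSignOnFilter", "config": { "amService": "AmServiceDcB" } } ],
              "handler": "ReverseProxyHandler"
            }
          }
        }
      ]
    }
  }
}

In this sketch, the DispatchHandler sends session validation through the DC-A AmService whenever the DataCenterCookie carries the DC-A value, and falls back to DC-B otherwise; a real deployment would externalize the endpoints through frProps.json property substitution as described above.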

Test the use cases

The user accesses an application deployed in DC-A first

  1. The user accesses app1.example.com, deployed in DC-A.
  2. IG, deployed in DC-A, redirects the request to AM, deployed in DC-A for authentication.
  3. A DataCenterCookie is issued with a DC-A value.
  4. The user accesses app2.example.com, deployed in DC-B.
  5. IG, deployed in DC-B, redirects the request to AM, deployed in DC-A, for session validation.

The user accesses an application deployed in DC-B first

  1. The user accesses app2.example.com deployed in DC-B.
  2. IG, deployed in DC-B, redirects the request to AM deployed in DC-B, for authentication.
  3. A DataCenterCookie is issued with a DC-B value.
  4. The user accesses app1.example.com, deployed in DC-A.
  5. IG, deployed in DC-A, redirects the request to AM, deployed in DC-B, for session validation.
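
To sanity-check either flow outside a browser, the sketch below uses AM's standard REST endpoints to authenticate against the issuing data center and then validate the resulting session against that same data center. The realm, tree name, credentials, and hostnames are assumptions based on the sample configuration above; adjust them for your environment, and add an Accept-API-Version header if your AM version requires one.

# 1. Authenticate to AM in DC-A through the LDAPTree; the JSON response contains a tokenId
curl -s -X POST \
  -H "X-OpenAM-Username: user.88" \
  -H "X-OpenAM-Password: password" \
  "http://am-A.example.com:8094/am/json/realms/root/realms/employees/authenticate?authIndexType=service&authIndexValue=LDAPTree"

# 2. Validate that session against DC-A (the issuing data center), as IG does on behalf of the application.
#    Depending on the AM version, the token may instead need to be passed in the request body.
curl -s -X POST \
  -H "iplanetDirectoryPro: <tokenId-from-step-1>" \
  "http://am-A.example.com:8094/am/json/realms/root/realms/employees/sessions?_action=validate"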

Extend AM to OAuth/OIDC use cases

OAuth: AM 6.5.2 provides an option to modify access tokens using scripts. This allows additional metadata, such as dataCenter, to be added to stateless OAuth tokens. This information can be leveraged by the IG OAuth resource server to invoke the appropriate data center's AmService objects for the tokenInfo/introspection endpoints:

{
 "sub": "user.88",
 "cts": "OAUTH2_STATELESS_GRANT",
 "auth_level": 0,
 "iss": "http://am6521.example.com:8092/am/oauth2/employees",
 …
 "dataCenter": "DC-A"
}
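
As a quick illustration (not from the original article), the custom dataCenter field can be read back by introspecting the token against AM's standard OAuth 2.0 introspection endpoint; the client credentials below are placeholders:

# Introspect a client-based (stateless) access token; the response should include the custom dataCenter field
curl -s -X POST \
  -u "myClient:clientSecret" \
  --data "token=<access_token>" \
  "http://am6521.example.com:8092/am/oauth2/realms/root/realms/employees/introspect"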

OIDC: AM allows additional claims in OIDC tokens using scripts. This information can be leveraged by IG to invoke the appropriate data center's AmService objects.


IDM Deployment Patterns — Centralized Repo-Based vs. Immutable File-Based

Introduction

I recently blogged about how customers can architect ForgeRock Access Management to support an immutable, DevOps style deployment pattern — see link. In this post, we’ll take a look at how to do this for ForgeRock Identity Management (IDM).

IDM is a modern OSGi-based application, with its configuration stored as a set of JSON files. This lends itself well to either a centralized repository (repo) based deployment pattern or a file-based, immutable pattern. This blog explores both options, and summarizes the advantages and disadvantages of each.

IDM Architecture

Before delving into the deployment patterns, it is useful to summarize the IDM architecture. IDM provides centralized, simple management, and synchronization of users, devices, and things. It is a highly flexible product, and caters to a multitude of different identity management use cases, from provisioning, self-service, password management, synchronization, and reconciliation, to workflow, relationships, and task execution. For more on IDM’s architecture, check out the following link.

IDM’s core deployment architecture is split between a web application running in an OSGi framework within a Jetty Web Server, and a supported repo. See this link for a list of supported repos.

Within the IDM web application, the following components are stored:

  • Apache Felix and Jetty Web Server hosting the IDM binaries
  • Secrets
  • Configuration and scripts — this is the topic of this blog.
  • Policy scripts
  • Audit logs (optional)
  • Workflow BAR files
  • Bundles and application JAR files (connectors, dependencies, repo drivers, etc)
  • UI files

Within the IDM repo the following components are stored:

  • Centralized copies of configuration and policies — again, the topic of this blog
  • Cluster configuration
  • Managed and system objects
  • Audit logs (optional)
  • Scheduled tasks and jobs
  • Workflow
  • Relationships and link data

Notice that configuration is listed twice, both on the IDM node’s filesystem, and within the IDM repo. This is the focus of this blog, and how manipulation of this can either support a centralized repository deployment pattern, or a file-based immutable configuration deployment pattern.

Centralized, Repo-Based Deployment Pattern

This is the out-of-the-box (OOTB) deployment pattern for IDM. In this model, all IDM nodes share the same repository to pull down their configuration on startup, and if necessary, overwrite their local files. Any configuration changes made through the UI or over REST (REST ensures consistency) are pushed to the repo and then down to each IDM node via the cluster service. The JSON configuration files within the ../conf directory on the IDM web application are present, but should not be manipulated directly, as this can lead to inconsistencies in the configuration between the local file system and the authoritative repo configuration.
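
As a quick illustration of the REST route (not taken from the original post), a configuration object can be read and updated through IDM's /openidm/config endpoint; the hostname, credentials, and the choice of the audit configuration below are placeholders:

# Read the audit configuration from a node (served from the repo-backed, authoritative config)
curl -s -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" \
  "http://idm1.example.com:8080/openidm/config/audit"

# Update it over REST; the change is written to the repo and propagated to the other nodes via the cluster service
curl -s -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" \
  -X PUT -H "Content-Type: application/json" -d @audit.json \
  "http://idm1.example.com:8080/openidm/config/audit"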

The following component-level diagram illustrates this deployment pattern:


Immutable, File-Based Deployment Pattern

The key difference in this model is that IDM's configuration is not stored in the repository. Instead, IDM pulls the configuration from the local filesystem and stores it in memory. The repo is still the authoritative source for all other IDM components (cluster configuration, schedules, and optionally, audit logs, system and managed objects, links, relationships, and others).

The following component-level diagram illustrates this deployment pattern:

Configuration Settings

The main configuration items for a multi-instance, immutable, file-based deployment pattern are:

  • The ../resolver/boot.properties file — This file stores IDM boot specifics like the IDM host, ports, SSL settings, and more. The key configuration item in this file for this blog post is openidm.node.id, which needs to be a string unique to each IDM node to let the cluster service identify each host.
  • The ../conf folder — This contains all JSON configuration files. In this pattern, these files are read from the local filesystem on startup and held in memory, rather than being pushed to the IDM repo. As a best practice (see link), the OOTB ../conf directory should not be used. Instead, a project folder containing the contents of the ../conf and ../script directories should be created, and IDM started with the "-p </path/to/my/project/location>" flag. This ensures OOTB and custom configurations are kept separate, to ease version control, upgrades, backouts, and others.
  • The ../<my_project>/conf/system.properties file. This file contains two key settings:
openidm.fileinstall.enabled=false

This setting can either be left commented out (it defaults to true) or uncommented and explicitly set to true. Combined with the setting below, this ensures IDM loads its configuration from your project's directory (that is, ../conf and ../script) rather than from the repo:

openidm.config.repo.enabled=false 

This setting needs to be uncommented to ensure IDM does not read the configuration from the repo or push the configuration to the repo:

  • The ../<my_project>/conf/config.properties file. The key setting in this file is:
felix.fileinstall.enableConfigSave=false 

This setting needs to be uncommented. This means any changes made via REST or the UI are not pushed down to the local IDM filesystem. This effectively makes the IDM configuration read-only, which is key to immutability.
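
Pulling these together, a minimal sketch of the relevant files for the immutable pattern might look like the following; the project path and node ID are placeholders, and the default content of each file is omitted:

# ../resolver/boot.properties
openidm.node.id=idm-node-1

# ../my-project/conf/system.properties
# openidm.fileinstall.enabled is left at its default (true), so configuration is read from the project files
openidm.config.repo.enabled=false

# ../my-project/conf/config.properties
felix.fileinstall.enableConfigSave=false

IDM would then be started against the project directory, for example with ./startup.sh -p /path/to/my-project.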

Note: Direct manipulation of configuration files and promotion to other IDM environments can fail if the JSON files contain crypto material. See the following KB article for information on how to handle this. You can also use the IDM configexport tool (IDM version 6.5 and above).

The following presents key advantages and disadvantages of this deployment pattern:

Advantages

  • Follows core DevOps patterns for immutable configuration: push configuration into a repo like Git, parameterize it, and promote it up to production. A customer knows without a doubt which configuration is running in production.
  • This pattern offers the ability to pre-bake the configuration into an image (such as a Docker image, an Amazon Machine Image, and others) for auto-deployment of IDM configuration using orchestration tools.
  • Supports “stack by stack” deployments, as configuration changes can be made to a single node without impacting the others. Rollback is also far simpler—restore the previous configuration.
  • The IDM configuration is read-only, meaning accidental UI or REST-based configuration changes cannot alter the configuration and potentially impact functionality.

Disadvantages

  • As each IDM node holds its own configuration, the UI cannot be used to make configuration changes. This could present a challenge to customers new to IDM.
  • The customer must put processes in place to ensure all IDM nodes run from exactly the same configuration. This requires strong DevOps methodologies and experience.
  • Limited benefit for customers who do not modify their IDM configuration often.

Summary of Configuration Parameters

The following summarizes the key configuration parameters used in the centralized, repo-based and the immutable, file-based deployment patterns:
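
(Reconstructed from the settings discussed above; the centralized pattern is assumed to use the OOTB defaults.)

Centralized, repo-based pattern (OOTB defaults):
  • openidm.node.id: unique string per node
  • openidm.fileinstall.enabled: left at its default (true)
  • openidm.config.repo.enabled: left at its default (true); configuration is stored in, and read from, the repo
  • felix.fileinstall.enableConfigSave: left at its default (true); UI/REST changes are saved back to the local files

Immutable, file-based pattern:
  • openidm.node.id: unique string per node
  • openidm.fileinstall.enabled: left at true, so configuration is read from the project directory
  • openidm.config.repo.enabled=false: uncommented, so configuration is neither read from nor pushed to the repo
  • felix.fileinstall.enableConfigSave=false: uncommented, so UI/REST changes are not written to the local filesystem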

Conclusion

There you have it: two different deployment patterns—the centralized, repo-based pattern for customers who wish to go with the OOTB configuration and/or do not update the IDM configuration often, and the immutable, file-based deployment pattern for customers who demand it and/or are well-versed in DevOps methodologies and wish to treat IDM like code.

5 Minute Briefing: Designing for Security Outcomes

This is the first in a set of blogs focused on high-level briefings - typically 5-minute reads covering design patterns and meta-trends relating to security architecture and design.

When it comes to cyber security design, there have been numerous ways of attempting to devise the investment profile and allocative efficiency metric.  Should we protect a $10 bike with a $6 lock, if the chance of loss is 10% - that sort of stuff.  I don’t want to tackle the measurement process per se.

I want to focus upon taking the generic business concept of outcomes, alongside some of the uncertainty that is often associated with complex security and access control investments.

I guess, to start with, a few definitions to get us all on the same page.  Firstly, what are outcomes?  In a simple business context, an outcome is really just a forward-looking statement – where do we want to get to?  What do we want to achieve?  In the objective, strategy and tactics model of analysis, it is likely the outcome could fall somewhere in the objective and possibly strategy blocks.

A really basic example of an OST breakdown could be the following:

    • Objective: fit in to my wedding dress by July 1st
    • Strategy: eat less and exercise more
    • Tactic: don’t eat snacks between meals and walk to work



So how does this fit into security?  Well cyber is typically seen as a business cost – with risk management another parallel cost centre, used to manage the implementation of cyber security investment and the subsequent returns - or associated loss reduction.

The end result of traditional information security management is something resembling a control – essentially a pretty fine-grained and repeatable step that can be measured.  Maybe something like “run anti-virus version x or above on all managed desktops”.

But how does something so linear, and in some cases pretty abstract, flow back to the business objectives?  I think in general it doesn’t (or even can’t), which results in the inertia associated with security investment – the overall security posture is compromised and business investment in those controls is questioned.

The security control could be seen as a tactic – but is often not associated with any strategy or IT objective – and certainly very rarely associated with a business objective.  The business wants to sell widgets, not worry about security controls and quite rightly so.

Improve Security Design


So how are outcomes better than simple controls?  I think there are two aspects to this.  First is about security design and second is about security communications.

If we take the AV control previously described – what is that trying to achieve?  Maybe a broader-brush outcome is that malware isn’t great and should be avoided.  Why should malware be avoided?  Perhaps metrics can attribute firewall performance reductions of 18% to malware call-home activity, which in turn reduces the ability for the business to uphold a 5-minute response SLA for customer support calls?

Or that two recent data breaches were attributable to a botnet miner browser plug-in, which resulted in 20,000 identity records being leaked at a cost of $120 per record in fines?

Does a security outcome such as “a 25% reduction in malware activity” result in a more productive, accountable and business understandable work effort?  

It would certainly require multiple different strategies and tactics to make it successful, covering lots of different aspects of people, process and technology.  Perhaps one of the tactics involved is indeed running up-to-date AV.  I guess the outcome can act as both a modular umbrella and a future-proofed, autonomous way of identifying the most value-driven technical control.

Perhaps outcomes really are more about reporting and accountability?


Improve Security Communications


Accountability and communications are often a major weakness of security design and risk management.  IT often doesn’t understand the nuance of certain security requirements – anyone heard of devsecops (secdevops)?

Business understanding is vitally important when it comes to security design and that really is what “security communications” is all about.  I’m not talking about TLS (nerd joke alert), but more about making sure both the business and IT functions not only use a common language, but also work towards common goals. 

Security controls tend to be less effective when seen as checkbox exercises, powered by internal and external audit processes (audit functions tend to exist in areas of market failure, where the equilibrium state of the market results in externalities….but I won’t go there here).

Controls are often abstracted away from business objectives via a risk management layer and can lose their overall effectiveness – and in turn business confidence.  Controls also tend to be implicitly out of date by the time they are designed, and certainly by the time they are implemented.

If controls are emphasised less and security outcomes more – and outcomes are tied more closely to business objectives – then alignment on accountability, and in turn on investment profiles, can be achieved.

Summary


So what are we trying to say?  At a high level, try to move away from controls and encourage more goals- and outcomes-based design when it comes to security.  By leveraging an outcomes-based model, procurement and investment decisions can be crystallised and made more accountable.

Security can then contribute towards the business objectives and become more effective – resulting in fewer data breaches, better returns on investment, and greater clarity on where investment should be made.


Use Authentication Trees To Create A Great SAML Login Experience

If you are shopping for a C/IAM platform these days, chances are the vendor pitch you are going to hear is all about OAuth and OIDC and JWT tokens and other shiny things for application integration, authentication, authorization, and single sign-on. And rightly so, as these standards truly offer great new capabilities or provide a modern implementation of old friends. But once the honeymoon is over and reality sets in, you are probably going to find yourself facing a majority of SAML applications and services that need integration and only a very small minority of modern applications supporting those shiny new standards.

The ForgeRock Identity Platform is an authentication broker and orchestration hub. How you come in and where you take your session is merely a matter of configuration. With the great new capabilities introduced with Authentication Trees, one might wonder how these new capabilities jive with old-timers like SAML. And wow do they jive!

With SAML there are two main scenarios per integration: Being the service provider (SP) or being the identity provider (IDP). The service provider consumes an attested identity and has limited control over how that identity was authenticated. The identity provider on the other hand dictates how identities authenticate.

Application users can start the process on the IDP side (IDP-initiated) or on the application side (SP-initiated). IDP-initiated login comes typically in the form of a portal listing all the applications the user has access to. Selecting any of these applications launches the login flow. SP-initiated flows start on the application side. Often users have bookmarked a page in the application, which requires authentication, causing the application to initiate the login flow for unauthenticated users.

This guide focuses on the SP-initiated flow and how you can use the ForgeRock Identity Platform to create the exact login experience you seek. The use case is:

“Controlling which authentication tree to launch in SP-initiated flows”.

The configuration option is a bit non-obvious and I have been asked a few times if the only way was to use the realm default authentication setting or whether there were alternatives. This is a great way to create an individual login journey for SAML users, distinct from the others.

To configure your environment, follow these steps:

  1. Follow the steps to configure SAML federation between the ForgeRock Identity Platform and your application. For this guide I configured my private Google Apps account as an SP.
  2. Test the SP-initiated flow. It should use your realm default for authentication (realm > authentication > settings > core).
  3. Now create the trees you want SAML users to use in the realm you are federating into. In this example, a tree called “saml” is the base tree that should be launched to authenticate SAML users.

    where the “Continue?” node just displays a message notifying users that they are logging in via SAML SP-initiated login. The “Login” node is an inner tree evaluator launching the “simple” tree, which lets them authenticate using username and password:

    The “2nd Factor” node is another inner tree evaluator branching out into another tree, which allows the user to select the 2nd factor they want to use:

    This guide will use the “push” tree for push authentication:
  4. Now navigate to the “Authentication Context” section in your “Hosted IDP” configuration in the AM Admin UI (Applications > Federation > Entity Providers > [your hosted entity provider] > Authentication Context):

    This is where the magic happens. Select “Service” from the “Key” drop-down list for all the supported authentication contexts (note that you could launch different trees based on what the SP proposes for authentication; Google seems to only support “PasswordProtectedTransport” by default) and enter the name of the base tree you want to execute in the “Value” column, “saml” in this configuration example (see the sketch after these steps).
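
For reference, this mapping ends up in the hosted IDP's extended metadata roughly as sketched below; the attribute syntax shown here is an assumption for illustration only, so configure it through the UI as described above rather than copying this verbatim:

<Attribute name="idpAuthncontextClassrefMapping">
    <Value>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport|0|service=saml|default</Value>
</Attribute>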

Test your configuration:

  1. Launch your SP-initiated login flow. For Google GSuite you do that by pointing your browser to https://gsuite.google.com, selecting “Sign-In”, typing in your GSuite domain name, selecting the application you want to land in after authentication, and selecting “GO”.
  2. Google redirects to ForgeRock Access Management. The “saml” tree displays the configured message, giving users the options to “Continue” or “Abort”. Select “Continue”:
  3. The “simple” tree executes, prompting for username and password:
  4. Now the flow comes back into the “saml” tree and branches out into the 2nd Factor selector tree “select”:
  5. Select “Push” and respond to the push notification on your phone while the web user interface gracefully waits for your response:
  6. And finally, a redirect back to Google Apps with a valid SAML assertion in tow completes the SP-initiated login flow: