ForgeRock Identity Live Austin

The season of ForgeRock Identity Live events opened in early May with Identity Live Austin, the first in a series of six worldwide events in 2018.

With the largest audience since we started these events, this was an absolutely great event, with, as usual, passionate and in-depth discussions with customers and partners.

You can find highlights, session videos and selected decks on the event website.

And here is my summary of the two-day conference in pictures.

The next event will take place in Europe, in Berlin, on June 12-13. There is still time to register, and you can look at the full agenda of the summits to find one closer to your home. I’m looking forward to meeting you there.

ForgeRock Identity Platform Version 6: Integrating IDM, AM, and DS

For the ForgeRock Identity Platform version 6, integration between our products is easier than ever. In this blog, I’ll show you how to integrate ForgeRock Identity Management (IDM), ForgeRock Access Management (AM), and ForgeRock Directory Services (DS). With integration, you can configure aspects of privacy, consent, trusted devices, and more. This configuration sets up IDM as an OpenID Connect / OAuth 2.0 client of AM, using DS as a common user datastore.

Setting up integration can be a challenge, as it requires you to configure (and read documentation from) three different ForgeRock products. This blog will help you set up that integration. For additional features, refer to the following chapters of the IDM documentation: Integrating IDM with the ForgeRock Identity Platform and Configuring User Self-Service.

While you can find most of the steps in the IDM 6 Samples Guide, this blog collects the information you need to set up integration in one place.

This blog post will guide you through the process. (Here’s the link to the companion blog post for ForgeRock Identity Platform version 5.5.)

Preparing Your System

For the purpose of this blog, I’ve configured all three systems in a single Ubuntu 16.04 VM (8 GB RAM / 40GB HD / 2 CPU).

Install Java 8 on your system. I’ve installed the Ubuntu 16.04-native openjdk-8 packages. In some cases, you may have to include export JAVA_HOME=/usr in your ~/.bashrc or ~/.bash_profile files.

Create separate home directories for each product. For the purpose of this blog, I’m using:

  • /home/idm
  • /home/am
  • /home/ds

Install Tomcat 8 as the web container for AM. For the purpose of this blog, I’ve downloaded Tomcat 8.5.30, and have unpacked it. Activate the executable bit in the bin/ subdirectory:

$ cd apache-tomcat-8.5.30/bin
$ chmod u+x *.sh

As AM requires fully qualified domain names (FQDNs), I’ve set up an /etc/hosts file with FQDNs for all three systems, with the following line:

192.168.0.1 am.example.com ds.example.com idm.example.com

(Substitute your IP address as appropriate. You may set up AM, DS, and IDM on different systems.)

If you set up AM and IDM on the same system, make sure they’re configured to connect on different ports. Both products configure default connections on ports 8080 and 8443.

Download AM, IDM, and DS versions 6 from backstage.forgerock.com. For organizational purposes, set them up in their own home directories:

    Product   Download        Home Directory
    DS        DS-6.0.0.zip    /home/ds
    AM        AM-6.0.0.war    /home/am
    IDM       IDM-6.0.0.zip   /home/idm

Unpack the zip files. For convenience, copy the Example.ldif file from /home/idm/openidm/samples/full-stack/data to the /home/ds directory.
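In shell terms, that amounts to something like the following (a sketch; adjust file names and paths if your downloads differ):

$ cd /home/ds && unzip DS-6.0.0.zip        # creates /home/ds/opendj
$ cd /home/idm && unzip IDM-6.0.0.zip      # creates /home/idm/openidm
$ cp /home/idm/openidm/samples/full-stack/data/Example.ldif /home/ds/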

For the purpose of this blog, I’ve downloaded Tomcat 8.5.30 to the /home/am directory.

Configuring ForgeRock Directory Services (DS)

To install DS, navigate to the directory where you unpacked the binary, in this case, /home/ds/opendj. In that directory, you’ll find a setup script. The following command uses that script to start DS as a directory server, with a root DN of “cn=Directory Manager”, with a host name of ds.example.com, port 1389 for LDAP communication, and 4444 for administrative connections:

$ ./setup \
  directory-server \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword password \
  --hostname ds.example.com \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --baseDN dc=example,dc=com \
  --ldifFile /home/ds/Example.ldif \
  --acceptLicense

Earlier in this blog, you copied the Example.ldif file to /home/ds. Substitute if needed. Once the setup script returns you to the command line, DS is ready for integration.
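If you want to confirm the server is up before moving on, the bundled ldapsearch tool can read back one of the sample entries. A sketch, assuming the installation options above (bjensen should be one of the users loaded from Example.ldif):

$ /home/ds/opendj/bin/ldapsearch \
  --hostname ds.example.com \
  --port 1389 \
  --bindDN "cn=Directory Manager" \
  --bindPassword password \
  --baseDN dc=example,dc=com \
  "(uid=bjensen)" cn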

Installing ForgeRock Access Management (AM)

Use the configured external DS server as a common user store for AM and IDM. For an extended explanation, see the following documentation: Integrating IDM with the ForgeRock Identity Platform. To install AM, use the following steps:

  1. Set up Tomcat for AM. For this blog, I used Tomcat 8.5.30, downloaded from http://tomcat.apache.org/. 
  2. Unzip Tomcat in the /home/am directory.
  3. Make the files in the apache-tomcat-8.5.30/bin directory executable.
  4. Copy the AM-6.0.0.war file from the /home/am directory to apache-tomcat-8.5.30/webapps/openam.war.
  5. Start the Tomcat web container with the startup.sh script in the apache-tomcat-8.5.30/bin directory. This action should unpack the openam.war binary to the apache-tomcat-8.5.30/webapps/openam directory.
  6. Shut down Tomcat, with the shutdown.sh script in the same directory. Make sure the Tomcat process has stopped.
  7. Open the web.xml file in the following directory: apache-tomcat-8.5.30/webapps/openam/WEB-INF/. For an explanation, see the AM 6 Release Notes. Include the following code blocks in that file to support cross-origin resource sharing:
<filter>
      <filter-name>CORSFilter</filter-name>
      <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
      <init-param>
           <param-name>cors.allowed.headers</param-name>
           <param-value>Content-Type,X-OpenIDM-OAuth-Login,X-OpenIDM-DataStoreToken,X-Requested-With,Cache-Control,Accept-Language,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers,X-OpenAM-Username,X-OpenAM-Password,iPlanetDirectoryPro</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.methods</param-name>
           <param-value>GET,POST,HEAD,OPTIONS,PUT,DELETE</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.origins</param-name>
           <param-value>http://am.example.com:8080,http://idm.example.com:9080</param-value>
      </init-param>
      <init-param>
           <param-name>cors.exposed.headers</param-name>
           <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Set-Cookie</param-value>
      </init-param>
      <init-param>
           <param-name>cors.preflight.maxage</param-name>
           <param-value>10</param-value>
      </init-param>
      <init-param>
           <param-name>cors.support.credentials</param-name>
           <param-value>true</param-value>
      </init-param>
 </filter>

 <filter-mapping>
      <filter-name>CORSFilter</filter-name>
      <url-pattern>/json/*</url-pattern>
 </filter-mapping>

Important: Substitute the actual URLs and ports for your AM and IDM deployments where you see http://am.example.com:8080 and http://idm.example.com:9080. (I once forgot to make these substitutions and couldn’t figure out the problem for a couple of days.)
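For reference, steps 4 through 6 above amount to the following commands (a sketch, assuming the paths used in this blog):

$ cd /home/am
$ cp AM-6.0.0.war apache-tomcat-8.5.30/webapps/openam.war
$ apache-tomcat-8.5.30/bin/startup.sh    # unpacks webapps/openam
$ apache-tomcat-8.5.30/bin/shutdown.sh   # stop Tomcat before editing web.xml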

Configuring AM

  1. If you’ve configured AM on this system before, delete the /home/am/openam directory.
  2. Restart Tomcat with the startup.sh script in the aforementioned apache-tomcat-8.5.30/bin directory.
  3. Navigate to the URL for your AM deployment. In this case, call it http://am.example.com:8080/openam. You’ll create a “Custom Configuration” for OpenAM, and accept the defaults for most cases.
    • When setting up Configuration Data Store Settings, for consistency, use the same root suffix in the Configuration Data Store, i.e. dc=example,dc=com.
    • When setting up User Data Store settings, make sure the entries match what you used when you installed DS. The following table is based on that installation:

      Option           Setting
      Directory Name   ds.example.com
      Port             1389
      Root Suffix      dc=example,dc=com
      Login ID         cn=Directory Manager
      Password         password

  4. When the installation process is complete, you’ll be prompted with a login screen. Log in as the amadmin administrative user with the password you set up during the configuration process. With the following action, you’ll set up an OpenID Connect/OAuth 2.0 service that you’ll configure shortly for a connection to IDM.
    • Select Top-Level Realm -> Configure OAuth Provider -> Configure OpenID Connect -> Create -> OK. This sets up AM as an OIDC authorization server.
  5. Set up IDM as an OAuth 2.0 Client:
    • Select Applications -> OAuth 2.0. Choose Add Client. In the New OAuth 2.0 Client window that appears, set openidm as a Client ID, set changeme as a Client Secret, along with a Redirection URI of http://idm.example.com:9080/oauthReturn/. The scope is openid, which reflects the use of the OpenID Connect standard.
    • Select Create, go to the Advanced Tab, and scroll down. Activate the Implied Consent option.
    • Press Save Changes.
  6. Go to the OpenID Connect tab, and enter the following information in the Post Logout Redirect URIs text box:
    • http://idm.example.com:9080/
    • http://idm.example.com:9080/admin/
    • Press Save Changes.
  7. Select Services -> OAuth2 Provider -> Advanced OpenID Connect:
    • Scroll down and enter openidm in the “Authorized OIDC SSO Clients” text box.
    • Press Save Changes.
  8. Navigate to the Consent tab:
    • Enable the Allow Clients to Skip Consent option.
    • Press Save Changes.

AM is now ready for integration.


Configuring IDM

Now you’re ready to configure IDM, using the following steps:

  1. For the purpose of this blog, use the following project subdirectory: /home/idm/openidm/samples/full-stack.
  2. If you haven’t modified the deployment port for AM, modify the port for IDM. To do so, edit the boot.properties file in the openidm/resolver/ subdirectory, and change the port property appropriate for your deployment (openidm.port.http or openidm.port.https). For this blog, I’ve changed the openidm.port.http line to:
    • openidm.port.http=9080
  3. (NEW) You’ll also need to change the openidm.host. By default, it’s set to localhost. For this blog, set it to:
    • openidm.host=idm.example.com
  4. Start IDM using the full-stack project directory:
    • $ cd openidm
    • $ ./startup.sh -p samples/full-stack
      • (If you’re running IDM in a VM, the following command starts IDM and keeps it going after you log out of the system:
        nohup ./startup.sh -p samples/full-stack/ > logs/console.out 2>&1& )
    • As IDM includes pre-configured options for the ForgeRock Identity Platform in the full-stack subdirectory, IDM documentation on the subject frequently refers to the platform as the “Full Stack”.
  5. In a browser, navigate to http://idm.example.com:9080/admin/.
  6. Log in as an IDM administrator:
    • Username: openidm-admin
    • Password: openidm-admin
  7. Reconcile users from the common DS user store to IDM. Select Configure -> Mappings. In the page that appears, find the mapping from System/Ldap/Account to Managed/User, and press Reconcile. That will populate the IDM Managed User store with users from the common DS user store. (You can also trigger this reconciliation over REST; see the curl sketch at the end of this section.)
  8. Select Configure -> Authentication. Choose the ForgeRock Identity Provider option. In the window that appears, scroll down to the configuration details. Based on the instance of AM configured earlier, you’d change:
    Property              Entry
    Well-Known Endpoint   http://am.example.com:8080/openam/oauth2/.well-known/openid-configuration
    Client ID             Matching entry from Step 5 of Configuring AM (openidm)
    Client Secret         Matching entry from Step 5 of Configuring AM (changeme)
  9. When you’ve made appropriate changes, press Submit. (You won’t be able to press Submit until you’ve entered a valid Well-Known Endpoint.)
    • You’re prompted with the following message:
      • Your current session may be invalid. Click here to logout and re-authenticate.
  10. When you tap on the ‘Click here’ link, you should be taken to http://am.example.com:8080/openam/<some long extension>. Log in with AM administrative credentials:
    • Username: amadmin
    • Password: <what you configured during the AM installation process>

If you see the IDM Admin UI after logging in, congratulations! You now have a working integration between AM, IDM, and DS.

Note: To ensure a rapid response when the AM session expires, the IDM JWT_SESSION timeout has been reduced to 5 seconds. For more information, see the following section of the IDM ForgeRock Identity Platform sample: Changes to Session and Authentication Modules.
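As an aside, the reconciliation you ran in step 7 can also be triggered over REST, which is handy for scripting. A sketch with curl, assuming the mapping name used by the full-stack sample (systemLdapAccounts_managedUser; check your sync configuration if yours differs):

$ curl \
  --header "X-OpenIDM-Username: openidm-admin" \
  --header "X-OpenIDM-Password: openidm-admin" \
  --request POST \
  "http://idm.example.com:9080/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser"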


Building On The ForgeRock Identity Platform

Once you’ve integrated AM, IDM, and DS, you can build on the platform to configure aspects of privacy, consent, trusted devices, and more.

For a visualization of how this works and more information, see the following section of the IDM ForgeRock Identity Platform sample: Authorization Flow.


Troubleshooting

If you run into errors, review the following list:

  • Error: redirect_uri_mismatch
    Solution: Check for typos in the OAuth 2.0 client Redirection URI.

  • Error: This application is requesting access to your account.
    Solution: Enable “Implied Consent” in the OAuth 2.0 client, and enable “Allow Clients to Skip Consent” in the OAuth2 Provider.

  • Error (upon logout): The redirection URI provided does not match a pre-registered value.
    Solution: Check for typos in the OAuth 2.0 client Post Logout Redirect URIs.

  • Error: Unable to login using authentication provider, with a redirect to preventAutoLogin=true.
    Solution: Check for typos in the Authorized OIDC SSO Clients list in the OAuth2 Provider. Make sure the Client ID and Client Secret in IDM match those configured for the AM OAuth 2.0 client.

  • Error: Unknown error: Please contact the administrator. (In the dev console, you might see: Origin ‘http://idm.example.com:9080’ is therefore not allowed access.)
    Solution: Check for typos in the URLs in your web.xml file.

  • Error: The IDM Self-Service UI does not appear, but you can connect to the IDM Admin UI.
    Solution: Check for typos in the URLs in your web.xml file.

If you see other errors, the problem is likely beyond the scope of this blog.


Runtime configuration for ForgeRock Identity Microservices using Consul

The ForgeRock Identity Microservices are able to read all runtime configuration from environment variables. These could be statically provided as hard-coded key-value pairs in a Kubernetes manifest or shell script, but a better solution is to manage the configuration in a centralized configuration store such as HashiCorp Consul.

Managing configuration outside the code has been routinely adopted as a practice by modern service providers such as Google, Azure and AWS, and by businesses that use both cloud and on-premise software.

In this post, I shall walk you through how Consul can be used to set configuration values, how envconsul can read those key-value pairs from different namespaces and supply them as environment variables to a subprocess, and how envconsul can even trigger automatic restarts on configuration changes, all without any modifications to the ForgeRock Identity Microservices.

Perhaps even more significantly, this post shows how it is possible to use Docker to layer on top of a base microservice docker image from the ForgeRock Bintray repo and then use a pre-built go binary for envconsul to spawn the microservice as a subprocess.

This github.com repo has all the docker and kubernetes artifacts needed to run this integration. I used a single cluster minikube setup.

Consul

Starting a Pod in Minikube

We must start by setting up consul as a docker container: a simple docker pull consul followed by kubectl create -f kube-consul.yaml sets up a pod running consul in our minikube environment. Consul creates a volume called /consul/data, and you can load configuration key-value pairs in JSON format directly using consul kv import @file.json. Conversely, consul kv export > file.json exports all stored configuration, separated into JSON blobs by namespace.

The consul commands can be run from inside the Consul container by starting a shell in it using docker exec -ti <consul-container-name> /bin/sh.

Configuration setup using Key Value pairs

The namespaces for each microservice are defined either manually in the UI, or via commands such as:
consul kv put my-app/my-key my-value
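For reference, consul kv export emits a JSON array with base64-encoded values, which is also the format consul kv import expects. A sketch of one exported entry (the value anNvbg== is just base64 for json):

$ consul kv export ms-authn/
[
    {
        "key": "ms-authn/CLIENT_CREDENTIALS_STORE",
        "flags": 0,
        "value": "anNvbg=="
    }
]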


Expanding the ms-authn namespace in the Consul UI shows the key-value pairs defined for the authentication microservice.

A sample configuration set for all three microservices is available in the github.com repo for this integration here. Use consul kv import @consul-export.json to import them into your consul server.

Docker Builds

Base Builds

First, I built base images for each of the ForgeRock Identity microservices without using an ENTRYPOINT instruction. These images are built on the openjdk:8-jre-alpine base image, which is the smallest alpine-based openjdk image one can find; the openjdk7 and openjdk9 images are larger. Alpine itself is a great choice for a minimalistic docker base image, provides packaging capabilities using apk, and is especially suitable when images are to be layered on top of, as in this demonstration.

An example for the Authentication microservice is:
FROM openjdk:8-jre-alpine
WORKDIR /opt/authn-microservice
ADD . /opt/authn-microservice
EXPOSE 8080

The command docker build -t authn:BASE-ONLY . builds the base image but does not start the authentication listener, since no CMD or ENTRYPOINT instruction was specified in the Dockerfile.

The microservices docker build adds its application layers on top of the openjdk:8-jre-alpine base image.

Next, we need to define a launcher script that uses a pre-built envconsul go binary to set up communication with the Consul server and use the key-value namespace for the authentication microservice.

/usr/bin/envconsul -log-level debug -consul=192.168.99.100:31100 -pristine -prefix=ms-authn /bin/sh -c "$SERVICE_HOME"/bin/docker-entrypoint.sh

A lot just happened there!

We instructed the launcher script to start envconsul with the address and nodePort of the Consul server defined in kube-consul.yaml previously. The -pristine switch tells envconsul not to merge extant environment variables with the ones read from the Consul server. The -prefix switch identifies the application namespace we want to retrieve configuration for, and /bin/sh -c spawns a new process for the docker-entrypoint.sh script, which is the ENTRYPOINT instruction for the authentication microservice tagged as 1.0.0-SNAPSHOT. (That tag is not used in our demonstration; instead we built a BASE-ONLY tagged image and layer the envconsul binary and launcher script, shown next, on top of it.) envconsul also tracks changes to configuration values in Consul, and if a change is detected the subprocess is restarted automatically!
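Before baking any of this into an image, you can sanity-check the wiring by having envconsul run env as its child process, which simply prints the fetched key-value pairs (a sketch; substitute your own Consul address):

$ envconsul -consul=192.168.99.100:31100 -pristine -prefix=ms-authn env

If the ms-authn namespace is populated, its keys appear as environment variables in the output.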

Envconsul Build

With the envconsul.sh script ready as shown above, it is time to build a new image using the base-only tagged microservice image in the FROM instruction as shown below:

FROM forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY
MAINTAINER Javed Shah <javed.shah@forgerock.com>
COPY envconsul /usr/bin/
ADD envconsul.sh /usr/bin/envconsul.sh
RUN chmod +x /usr/bin/envconsul.sh
ENTRYPOINT ["/usr/bin/envconsul.sh"]

This Dockerfile uses the COPY instruction to add our pre-built, standalone envconsul go binary to the new image, and adds the envconsul.sh script shown above to the /usr/bin/ directory. It also sets the +x permission on the script and makes it the ENTRYPOINT for the container.

At this time we used the docker build -t authn:ENVCONSUL . command to tag the new image correctly. This is available at the ForgeRock Bintray docker repo as well.

Our Dockerfile instruction set above adds these additional layers to the new ENVCONSUL-tagged image.

These two images are available for use as needed by the github.com repo.

Kubernetes Manifest

It is a matter of pulling down the correctly tagged image and specifying one key item: the hostAliases instruction, which points to the OpenAM server running on the host. This server could be anywhere; just update the hostAliases accordingly. Manifests for the three microservices are available in the github.com repo.

Starting the pod with kubectl create -f kube-authn-envconsul.yaml causes the following things to happen, as seen in the logs via kubectl logs ms-authn-env.

This shows that envconsul started up with the address of the Consul server. We deliberately used the older -consul flag for more “haptic” feedback from the pod, if you will. Once started, envconsul immediately reaches out to the Consul server, unless a Consul agent was provided in the startup configuration. It fetches the key-value pairs for the namespace ms-authn and makes them available as environment variables to the child process!

Once the configuration has been received from the server, envconsul spawns the child process with these key-value pairs available as environment variables.

The Authentication microservice is thus able to start up and read its configuration from the environment provided to it by envconsul.

Change Detection

Let’s change the value of one of our keys in the Consul UI, CLIENT_CREDENTIALS_STORE, from json to ldap.

This change is immediately detected by envconsul in all running pods.

envconsul then wastes no time in restarting the subprocess microservice.

Pretty cool way to maintain a 12-factor app!

Tests

A quick test for the OAuth2 client credential grant reveals our endeavor has been successful. We receive a bearer access token from the newly started authn-microservice.

The token validation and token exchange microservices were also spun up using the procedures defined above.

As an example, we validated a stateful access_token generated using the Authorization Code grant flow in OpenAM.

And finally, we made a Postman call to the token-exchange service, exchanging a user’s id_token for an access token granting delegation to an “actor” whose id_token is also supplied as input, and received the resulting exchanged token.

This particular use case is described in its own blog.

What’s coming next?

Watch this space for secrets integration with Vault and envconsul!

Token Exchange and Delegation using the ForgeRock Identity Microservices

In 2017 ForgeRock introduced an Early Access program (aka beta) for the ForgeRock Identity Microservices. In summary, the capabilities offered include token issuance using the OAuth2 client credentials grant, token validation of OAuth2/OIDC tokens (and even ssotokens), and token exchange based on the draft OAuth2 token exchange spec. I wrote a press release here and an introductory community blog post here.

In this article we will discuss how to implement delegation as per the OAuth2 Token Exchange draft specification. An overview of the process is worth discussing at this point.

We basically first want to grab hold of an OAuth2 token for our client to use in an Authorization header when calling the Token Exchange service. Next, we need our Authorization Server, ForgeRock Access Management in our example, to issue our friendly neighbors Alice and Bob their respective OIDC id_tokens. Note that Alice wants to delegate authority to Bob. While we shall show the REST calls to acquire those tokens, the details of setting up the OAuth2 Provider will be left out. We shall, however, discuss the custom OIDC Claims Script for setting up the delegation framework for authenticated users, as well as the Policy that sets up allowed actors who may be chosen for delegation.

But first things first, so we start with acquiring a bearer authentication token for our client who will make calls into the Token Exchange service. This can be accomplished by having the Authentication microservice issue an OAuth2 access token with the exchange scope using the client-credentials grant_type. For the example we shall just use a JSON backend as our client repository, although the Authentication microservice supports using Cassandra, MongoDB, any LDAP v3 directory including ForgeRock Directory Services.

The repository holds the following data for the client:

{
 "clientId" : "client",
 "clientSecret" : "client",
 "scopes" : [ "exchange", "introspect" ],
 "attributes" : {}
 }

The authentication request is:

POST /service/access_token?grant_type=client_credentials HTTP/1.1
Host: localhost:18080
Content-Type: multipart/form-data
Authorization: Basic ****
Cache-Control: no-cache
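The same request as a curl one-liner might look like this (a sketch; -u client:client supplies the clientId and clientSecret from the repository entry above as HTTP Basic credentials):

$ curl -u client:client \
  --request POST \
  "http://localhost:18080/service/access_token?grant_type=client_credentials"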

After the client authenticates, the Authentication microservice issues a token that, when decoded, looks like this:

{
 "sub": "client",
 "scp": [
     "exchange",
     "introspect"
 ],
 "auditTrackingId": "237cf708-1bde-4800-9bcd-5db5b5f1ca12",
 "iss": "https://fr.tokenissuer.example.com",
 "tokenName": "access_token",
 "token_type": "Bearer",
 "authGrantId": "a55f2bad-cda5-459e-8620-3e826bad9095",
 "aud": "client",
 "nbf": 1523058711,
 "scope": [
     "exchange",
     "introspect"
 ],
 "auth_time": 1523058711,
 "exp": 1523062311,
 "iat": 1523058711,
 "expires_in": 3600,
 "jti": "39b72a84-112d-46f3-a7cb-84ce7513e5bc",
 "cid": "client"
}
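If you want to decode such a token yourself, the claims are just the base64url-encoded middle segment of the JWT. A small shell sketch (inspection only; it does not verify the signature):

#!/bin/sh
# Usage: ./decode-jwt.sh <jwt>
# Prints the payload (claims) segment of a JWT; does NOT verify the signature.
payload=$(printf '%s' "$1" | cut -d '.' -f2 | tr '_-' '/+')
# Restore the base64 padding that JWT encoding strips
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d
echo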

Bob, the actor, authenticates with AM and receives an id_token:

{
 "at_hash": "Hgjw0D49EfYM6dmB9_K6kg",
 "sub": "user.1",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111930",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "oidcclient",
 "azp": "oidcclient",
 "auth_time": 1523058714,
 "realm": "/",
 "may_act": {},
 "exp": 1523062314,
 "tokenType": "JWTToken",
 "iat": 1523058714
}

Alice, the user, also authenticates with AM and receives a special id_token:

{
 "at_hash": "nT_tDxXhcee7zZHdwladcQ",
 "sub": "user.0",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111989",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "myuserclient1",
 "azp": "myuserclient1",
 "auth_time": 1523058716,
 "realm": "/",
 "may_act": {
     "sub": "Bob"
 },
 "exp": 1523062316,
 "tokenType": "JWTToken",
 "iat": 1523058716
}

Why does Alice receive a may_act claim but Bob does not? This has to do with Alice’s group membership in DS. The OIDC Claims script attached to the OAuth2 Provider in AM checks for membership of this group, and if found, retrieves a value for the assigned “actor” or delegate. In this case, Alice has designated Bob as her actor (via some out-of-band method, such as on a user portal, for example). The script retrieves the actor and sets a special claim with its value: may_act. The claims script is shown here in part:

def isAdmin = identity.getMemberships(IdType.GROUP)
        .inject(false) { found, group ->
            found || group.getName() == "policyEval"
        }

// ...search for actor... not shown

def mayact = [:]
if (isAdmin) {
    mayact.put("sub", actor)
}

Next, we must define a Policy in AM. This policy could be based on OAuth2 scopes or URIs, whichever you choose. I chose to set up an OAuth2 Scopes policy, but whatever scope you choose must match the resource you pass into the TE service. The rest of the policy is pretty standard; for example, here is a summary of the Subject tab of the policy:

{
  "type": "JwtClaim",
  "claimName": "iss",
  "claimValue": "http://localhost:8080/openam/oauth2"
}

I am checking for an acceptable issuer, and that’s it. This could be replaced with a subject condition for Authenticated Users, although the client would then have to send in Alice’s “ssotoken” as the subject_token instead of her id_token when invoking the TE service.

This also implies that delegation is “limited” to the case where you are using a JWT for Alice, preferably an OIDC id_token. We cannot do delegation without a JWT, because the TE service depends on the presence of a “may_act” claim in Alice’s subject_token. When using only an ssotoken or an access_token for Alice, we cannot pass this claim to the TE service. Hopefully that’s clear as mud now (laughter).

Anyway, back to the policy setup in AM. Keep in mind that the TE service expects certain response attributes: “aud” and “scp” are mandatory. “allowedActors” is optional, but needed for our discussion of course. It contains a list of allowed actors, and this list could be computed based on who the subject is. One subject attribute must also be returned. A simple example of response attributes could be:

aud: images.example.com
scp: read,write
allowedActors: Bob
allowedActors: HelpDeskAdministrator
allowedActors: James

The subject must also be returned from the policy, and one way to do that is to parse the id_token and return the sub claim as a response attribute. A scripted policy condition, attached to the policy as an environment condition, extracts the sub claim; the extracted sub claim is then included as a response attribute.

At this point we are ready to set up the Token Exchange microservice to invoke the Policy REST endpoint in AM for resource-match, allowed-actions and subject evaluation. Keep in mind that the TE service supports pluggable policy backends – a really powerful feature – with which specific “pods” of microservices could run local JSON-backed policies, other “pods” could run Open Policy Agent (OPA)-powered regex policies, and yet another set of “pods” could have the TE service calling out to AM’s policy engine for evaluation.

The setup for when the TE service is configured to use AM as the token exchange policy backend looks like this:

{
 // Credentials for an OpenAM "subject" account with permission to access the OpenAM policy endpoint
 "authSubjectId" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_ID|service-account}",
 "authSubjectPassword" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD|****}",

// URL for the OpenAM authentication endpoint, without query parameters
 "authUrl" : "&{EXCHANGE_OPENAM_AUTH_URL|http://localhost:8080/openam/json/authenticate}",

// URL for the OpenAM policy-endpoint, without query parameters
 "policyUrl" : "&{EXCHANGE_OPENAM_POLICY_URL|http://localhost:8080/openam/json/realms/root/policies}",

// OpenAM Policy-Set ID
 "policySetId" : "&{EXCHANGE_OPENAM_POLICY_SET_ID|resource_policies}",

// OpenAM policy-endpoint response-attribute field-name containing "audience" values
 "audienceAttribute" : "&{EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR|aud}",

// OpenAM policy-endpoint response-attribute field-name containing "scope" values
 "scopeAttribute" : "&{EXCHANGE_OPENAM_POLICY_SCOPE_ATTR|scp}",

// OpenAM policy-endpoint response-attribute field-name containing the token's "subject" value
 "subjectAttribute" : "&{EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR|uid}",

// OpenAM policy-endpoint response-attribute field-name containing the allowedActors
 "allowedActorsAttribute" : "&{EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR|may_act}",

// (optional) OpenAM policy-endpoint response-attribute field-name containing "expires-in-seconds" values
 "expiresInSecondsAttribute" : "&{EXCHANGE_OPENAM_POLICY_EXPIRES_IN_SEC_ATTR|}",

// Set to "true" to copy additional OpenAM policy-endpoint response-attributes into generated token claims
 "copyAdditionalAttributes" : "&{EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR|true}"
}

The TE service checks the response attributes shown above in the policy evaluation result in order to set up claims in the signed JWT it returns to the client.

The body of a sample REST policy call from the TE service to an OAuth2 scope policy defined in AM looks as follows:

{
 "subject" : {
     "jwt" : "{{AM_ALICE_ID_TOKEN}}"
 },
 "application" : "resource_policies",
 "resources" : [ "delegate-scope" ]
}

Policy response returned to the TE service:

[
 {
 "resource": "delegate-scope",
 "actions": {
     "GRANT": true
 },
 "attributes": {
     "aud": [
         "images.example.com"
     ],
     "scp": [
         "read"
     ],
     "uid": [
         "Alice"
     ],
     "allowedActors": [
          "Bob"
     ]
 },
 "advices": {},
 "ttl": 9223372036854775807
 }
]

Again, whatever resource type you choose for the policy in AM, it must match the resource you pass into the TE service as shown above.
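Concretely, the exchange call might look like this with curl (a sketch: the /service/exchange path and the port are assumptions about the deployment, but the form parameters come straight from the draft token exchange spec; the bearer token is the one issued to our client earlier):

$ curl \
  --header "Authorization: Bearer ${CLIENT_ACCESS_TOKEN}" \
  --request POST \
  --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
  --data-urlencode "subject_token=${AM_ALICE_ID_TOKEN}" \
  --data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:id_token" \
  --data-urlencode "actor_token=${AM_BOB_ID_TOKEN}" \
  --data-urlencode "actor_token_type=urn:ietf:params:oauth:token-type:id_token" \
  --data-urlencode "resource=delegate-scope" \
  "http://localhost:18080/service/exchange"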

We have come far and if you are still with me, then it is time to invoke the TE service and get back a signed and exchanged JWT with the correct “act” claim to indicate delegation was successful. The decoded JWT claims set of the issued token is shown below. The new JWT is issued by the Token Exchange service and intended for consumption by a system entity known by the logical name “images.example.com” any time before its expiration. The subject “sub” of the JWT is the same as the subject of the subject_token used to make the request.

{
 "sub": "Alice",
 "aud": "images.example.com",
 "scp": [
     "read",
     "write"
 ],
 "act": {
     "sub": "Bob"
 },
 "iss": "https://fr.tokenexchange.example.com",
 "exp": 1523087727,
 "iat": 1523084127
}

The decoded claims set shows that the actor “act” of the JWT is the same as the subject of the “actor_token” used to make the request. This indicates delegation and identifies Bob as the current actor to whom authority has been delegated to act on behalf of Alice.

The OAuth2 ForgeRock Identity Microservices

ForgeRock Identity Microservices

In Q4 2017, ForgeRock released an Early Access (aka beta) program for three key Identity Microservices within a compact, single-purpose code set for consumer-scale deployments. For companies deploying stateless microservices architectures, these microservices offer a micro-gateway enabled solution that provides service trust, policy-enforced identity propagation, and even OAuth2-based delegation. The stateless architecture of the ForgeRock Identity Microservices allows for a sidecar-friendly microgateway deployment pattern.

I blogged about it here. The sign up form is here.

The Microservices

OAuth2 Token Issuance

Enables service-to-service authentication using the client credentials grant type. These “bearer” tokens facilitate trust up and down the call chain.

We currently support RSA, EC, and HMAC signing, with the following algorithms:

  • HS256
  • HS384
  • HS512
  • RS256
  • ES256
  • ES384
  • ES512

Token Validation

Supports introspection of OAuth2 / OIDC tokens issued by an OAuth2 AS, as well as native AM tokens. Token validation is performed using either configured keys or a lookup via the issuer’s JWK URI, with issuer and kid verification.

One or more Token Introspection API implementations can be configured by setting the value to a comma-separated list of implementation names. Each implementation is given a chance to handle the token, in given order, until an implementation successfully handles the token. An error response occurs if no implementation successfully introspects a given token.

The following implementations are provided:

  • json : Configured by a JSON file.
  • openam : Proxies introspect calls to ForgeRock Access Management.

Token Exchange

Supports exchanging stateless OAuth2 access tokens and native AM tokens for hybrid tokens that grant specific entitlements per resource or audience requested. Implements the draft OAuth2 token-exchange specification. Offers no-fuss declarative (“local”) policy enforcement inside the pod without needing to “call home” for identity data. Also provides policy-based entitlement decisions by calling out to ForgeRock Access Management.

The Token-Exchange Policy is a pluggable service which determines whether or not a given token exchange may proceed. The service also determines what audience, scope, and so forth the generated token will have. The following implementations are provided:

  • json (default) : Configured by a JSON file.
  • jwt : Simply returns a JWT for any token that can be introspected.
  • openam : Uses a ForgeRock Access Management Policy Set.

Cloud and Platform

Docker and Kubernetes

A Dockerfile is bundled with the binary. Soon there will be a docker repository from which evaluators can pull the images.

A Kubernetes manifest is also provided that demonstrates using environment variables to configure a simple demo instance.

Metrics and Prometheus Integration

The microservices are instrumented to publish basic request metrics and a Prometheus endpoint is also provided, which is convenient for monitoring published metrics.

Audit Tracking

Each incoming HTTP request is assigned a transaction ID, for audit logging purposes, which by default is a UUID. ForgeRock Microservices like other products support propagation of a transaction ID between point-to-point service calls.

Cloud Foundry

The standard binary in Zip format is also directly deployable to Cloud Foundry, and is recognized by the Cloud Foundry Java buildpack as a “dist zip” format.

Client Credential Repository

ForgeRock Identity Microservices support Cassandra, MongoDB, and any LDAP v3 directory, including ForgeRock Directory Services.

The Role Of Mobile During Authentication

Nearly all the big social networks now provide a multi-factor authentication option – either an SMS-sent code or perhaps a key-derived one-time password, accessible via a mobile app. Examples include Google’s Authenticator, Facebook’s options for MFA (including their Code Generator, built into their mobile app) and LinkedIn’s two-step verification. There are lots more examples, but the main component is using the user’s mobile phone as an out of band authenticator channel.

Phone as a Secondary Device - “Phone-as-a-Token”

The common term for this is “phone-as-a-token”. Basic mobile phones are now so ubiquitous that the ability to leverage at least SMS-delivered one-time passwords (OTP), for users who have neither data plans nor smart phones, is common. This is an initial step in moving away from the traditional username and password based login. However, since the National Institute of Standards and Technology (NIST) released their view that SMS-based OTP delivery is insecure, there has been constant innovation around how best to integrate phone-based out of band authentication. Push notifications are one approach, and local or native biometry is another, often coupled with FIDO (Fast Identity Online) for secure application integration.

EMM and Device Authentication

But using a phone as an out of band authentication device often overlooks the credibility and assurance of the device itself. If push-based notification apps are used, whilst the security and integrity of those apps can be guaranteed to a certain degree, the device the app is installed upon cannot necessarily be attested to the same level. What about environments where BYOD (Bring Your Own Device) is used? What about the potential for jailbroken operating systems, or low-assurance or, worse still, malware-based apps running parallel to the push authentication app? Does that impact credibility and assurance? Could that result in the app being compromised in some way?

In the internal identity space, Enterprise Mobility Management (EMM) software often comes to the rescue here – perhaps issuing and distributing certs or key pairs to devices in order to perform device validation before accepting the out of band verification step. This can often be coupled with app assurance checks and OS baseline versioning. However, this is often time-consuming and complex, and isn’t always possible in the consumer or digital identity space.

Multi-band to Single-band Login

Whilst you can achieve user authentication, device authentication and out of band authentication nirvana, let’s spin forward and imagine a world where the majority of interactions are solely via a mobile device. We then no longer have an “out of band” authentication vehicle: the main application login occurs on the mobile. So what does that really mean? Well, we lose the secondary binding. But if the initial application authentication leverages the mechanics of the original out of band factor (aka local biometry, crypto/FIDO integration), is there anything to worry about? Well, the initial device-to-user binding is still an avenue that requires further investigation. By removing an out of band process, we are reducing the number of signals or factors. Also, unless a biometric local authentication process is used, the risk of credential theft increases substantially.

Leave your phone on the train, with a basic local PIN based authentication that allows access to refresh_tokens or private keys and we’re back to the “keys to the castle” scenario.


User, Device & Contextual Analysis

So we’re back to a situation where we need to augment what is in fact a single-factor login journey.

The physical identity is bound to a digital device. How can we have a continuous level of assurance for the user-to-app interaction? We need to add additional signals – commonly known as context.

This “context” could well include environmental data such as geo-location, time and network addressing, or more behavioural data such as movement or gait analysis, or app usage patterns. The main driver is a move away from the big-bang login event, where assurance is very high at login with a long, slow tail drop-off as time goes by. This correlates to the adage of short-lived sessions or access_tokens – mainly because assurance cannot be guaranteed as time from the authentication event increases.

This “context” is then used to attempt lots of smaller micro-authentication events – perhaps checking at every use of an access_token or when a session is presented to an access control event.

So once a mobile user has “logged in” to the app, in the background there is a lot more activity looking for changes in context (either environmental or behavioural). No more out of band, just a lot of micro-steps.

As authentication becomes more transparent or passive, the real effort then moves to physical to digital binding or user proofing...


ForgeRock welcomes Laetitia Ellison

A late welcome to Laetitia Ellison, who joined the ForgeRock documentation team in February.

Laetitia works with the access management team, and has started out on AM documentation.

Laetitia might be the only member of the team who has a hereditary connection to technical writing. She comes to ForgeRock with a background in writing about customer engagement software, and also technical support. Laetitia brings great energy and enthusiasm to the team, and has really hit the ground running.

Implementing Delegated Administration with the ForgeRock 5.5 Platform

Out of the box in 5.5, IDM (ForgeRock Identity Management) has two types of users – basic end-users and all-powerful administrators. You often need a class of users that falls between these extremes – users who can trigger a password reset action but cannot redefine connector configuration, for example. Another common need is for users to be allowed to perform actions only for a subset of other users – maybe only those within their organization. The typical term for these sorts of users is “Delegated Administrators” – users who have been granted limited access to perform particular administrative tasks by a more privileged administrator.

There is a way to define new user roles within IDM that have more granular access, but this is fairly limited – you have to write back-end JavaScript code to define what these new roles can do. Also, there is no way to inform the user at runtime about what options they have as a result of these JavaScript-based roles. Instead, you would have to write new UI code which specifically adjusts itself for each of the new roles you define, essentially hard-coding it to match the back-end JavaScript. Depending on the complexity that you need, this can quickly become a big challenge.

The good news is that IDM does not have to do this job by itself – by making use of the other parts of the ForgeRock Identity Platform, you can define very sophisticated authorization logic for each of your users, including the option to delegate administrative tasks to them. The two other products which provide the biggest benefit to IDM here are AM (ForgeRock Access Management) and IG (ForgeRock Identity Gateway).

AM has a very powerful authorization engine that allows you to declare precise rules which govern the requests made by your users, and it also has the very useful option of being able to return the full set of rules which apply for a given user. Take a look at the product documentation here to learn more about this feature of AM: Introducing Authorization.

IG has full support for working with the AM authorization engine as the enforcement point. It is capable of intercepting each request to IDM and evaluating it by calling out to AM for policy evaluation. It can also do additional local evaluation prior to passing the request down to IDM. You can see the full details about IG’s policy enforcement feature in the IG documentation: Enforcing Policy Decisions.

Demo

Before jumping into the details about how all of this can be put together, a short demo video may make it easier to understand exactly what it is I am hoping to accomplish with this setup:

Example Configuration

I have put together a project in the forgeops git repository which installs the whole ForgeRock Identity Platform exactly as I show it in the demo above and how I describe below. Feel free to use this project to explore the fine details which I might gloss over when I explain how all of this works. You can also use this project as the starting point for your own delegated administration project – after all, it is much easier than starting from scratch! Be aware that the code and configuration provided in this sample are not supported – they are just examples of how you might go about using the supported products they build upon. Also be aware that this project is oriented towards demonstrating functionality – it is not hardened for production. Use your own product expertise as you normally would when considering production deployment practices.

https://stash.forgerock.org/projects/CLOUD/repos/forgeops/browse/sample-platform?at=refs%2Fheads%2Frelease%2F5.5.0

Architecture

The basic architecture you would need to make the most of each of these products is as follows:


AM

AM is configured as the authentication provider and the authorization policy decision point. For it to perform this role, it will need a “Policy Set” defined for IDM; this is the collection of policy rules which apply specifically to requests for IDM REST APIs. Since each request to the IDM REST APIs is essentially just a basic HTTP call, you can use the default “URL” resource type provided by AM. You will need to define a policy for each call you expect your REST client to make; for example, if your REST client makes calls like: GET /openidm/info/login you will need to declare a policy which allows this request. Such a policy would look like this:

{
 "data" : {
   "_id" : "info",
   "name" : "info",
   "active" : true,
   "description" : "",
   "resources" : [ "*://*:*/openidm/info/login" ],
   "actionValues" : {
     "GET" : true
   },
   "applicationName" : "openidm",
   "subject" : {
     "type" : "AuthenticatedUsers"
   }
 }
}

This simple policy just states that any authenticated user is allowed to perform a GET action on the “/openidm/info/login” resource (irrespective of the protocol/host/port). You could use the AM Admin UI to define this policy; it would look something like this if you did:


Defining appropriate policies for your IDM needs could vary considerably. Take stock of each IDM REST call you need to make and consider the conditions under which users are allowed to make them. Use this to craft a policy set in AM which aligns with all of those details.

IG

After you have defined your policies in AM, you can start enforcing them with IG. IG needs to be positioned in your network topology so that all HTTP requests made to IDM can be intercepted by the IG reverse proxy, and also so that IG can request policy decisions about those requests from AM. This is a standard deployment model for IG – very little is different about how you would deploy IG to protect IDM compared with how you would use it to protect any other HTTP application.

The main thing you need to configure within IG is the PolicyEnforcementFilter. This is a supported, out-of-the-box filter and can be configured in many ways, all of which are described in the filter documentation. An example of one such configuration:

 {
   "type": "PolicyEnforcementFilter",
   "config": {
     "openamUrl": "${env['OPENAM_INSTANCE']}",
     "cache": {
       "enabled": true,
       "defaultTimeout": "1 hour",
       "maxTimeout": "1 day"
     },
     "pepUsername": "openidm",
     "pepPassword": "openidm",
     "pepRealm": "/",
     "application": "openidm",
     "ssoTokenSubject": "${session.openid.id_token}",
     "environment": {
       "securityContextPath": [
         "${session.idmUserDetails.authorization.component}/${session.idmUserDetails.authorization.id}"
       ],
       "securityContextRoles": "${session.idmUserDetails.authorization.roles}"
     }
   }
 }

This example configuration passes in environment details which are stored in the IG session and relate to the currently-authenticated user, as it is defined in IDM. In particular, it passes in the user’s authorization roles and their IDM-specific REST resource path. Depending on how you decide to define your AM policies, these details may or may not be needed.

Based on the form of delegation that you need for your users, this may be all of the filtering you need to declare in IG. However, if you want to perform more fine-grained access control over the subsets of records a user can modify, then you will need additional filters. Those are described below (under “Scoping Data”).

Assuming the filters allow the request to continue, you will need to be sure that IG provides relevant user details in the request to IDM. The simplest solution is to augment the request by adding a new header value which identifies the user; IDM will read this header from the request and use it as part of its own basic authentication framework.
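For example, IG’s standard HeaderFilter can add such a header on the way to IDM. A sketch (the session expression for the user identifier is an assumption; use whatever attribute holds it in your flow, and note the header name matches the IDM module configuration shown below):

 {
   "type": "HeaderFilter",
   "config": {
     "messageType": "REQUEST",
     "add": {
       "X-ForgeRock-AuthenticationId": [ "${session.idmUserDetails.authorization.id}" ]
     }
   }
 }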

IDM

Since IG has taken responsibility for validating authentication and authorization, IDM simply needs to be configured to no longer attempt to perform these duties itself. There are two main changes that you need to make to the IDM configuration to make this possible:

conf/authentication.json

This configuration entry describes how IDM performs authentication. Every request to an IDM REST endpoint has to have some form of authentication, even if it is very basic; IDM provides several different options for this. The easiest option to use with IG is the TRUSTED_ATTRIBUTE authentication module; see the authentication module documentation and the associated sample documentation for details on how this works. Essentially, a trusted header (X-ForgeRock-AuthenticationId in the example below) contains the name of the user performing the request. It is set by IG, and it is trusted by IDM to be accurate. This trust is why it is so important that IG be the only entry point into IDM – otherwise, an attacker could supply their own header value and pretend to be anyone. Here’s an example module configuration:

 {
   "name" : "TRUSTED_ATTRIBUTE",
   "properties" : {
     "queryOnResource" : "managed/user",
     "propertyMapping" : {
       "authenticationId" : "_id",
       "userRoles" : "authzRoles"
     },
     "defaultUserRoles" : [ "openidm-authorized" ],
     "authenticationIdAttribute" : "X-ForgeRock-AuthenticationId",
     "augmentSecurityContext" : {
       "type" : "text/javascript",
       "file" : "augmentSecurityContext.js",
       "globals" : {
         "authzHeaderName" : "X-Authorization-Map"
       }
     }
   },
   "enabled" : true
 }

conf/router.json

This entry defines scripted filters which apply to every IDM API request, whether from HTTP or from internal API calls. By default, IDM specifies its own authorization logic as the first filter; it is the one which invokes router-authz.js. Since IG and AM are now performing authorization, you can simply remove this filter.
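In a default installation, the filter to remove looks roughly like the following sketch; the exact contents vary by version, so check your own conf/router.json rather than copying this:

 {
   "filters" : [
     {
       "onRequest" : {
         "type" : "text/javascript",
         "source" : "require('router-authz').testAccess()"
       }
     }
   ]
 }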

That is all you need to change in IDM, at least at the REST API level. At this point, the REST API in IDM should be protected by the AM policy engine.

Scoping Data

The request from IG to the AM policy engine only includes certain high-level details about the request. IG asks AM things like “what actions can this user perform on the resource /openidm/managed/user/01234?”. Generally, AM policies are pattern-based; this means that AM can only tell whether the user can perform actions on a set of resources, such as “/openidm/managed/user/*”. If you need rules governing the actions a given user can perform on a subset of resources within that pattern, AM probably does not, and cannot, have enough information about the resources to make a decision on its own. However, there is a way to return information about the user to IG, which IG can use to make its own decision about the request.

Response Attributes

An AM policy can return data along with the results of the policy evaluation in the form of “response attributes”. These attributes can be static values defined within the policy or they can be dynamic values read from the user’s profile. It is up to IG to decide what meaning to impart upon the presence of these attributes. For example, IG can use these response attributes to decide whether a request for a particular resource falls within the subset of resources available to the current user.

To continue building upon the “/openidm/managed/user/01234” question above, you can declare an AM policy indicating that the current user is allowed to perform some actions on the general case of “/openidm/managed/user/*”, and also send back a response attribute, for example, the organization they belong to. IG can then include another filter, running after the PolicyEnforcementFilter, which is configured to look for the presence of this response attribute.
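In AM’s REST representation of a policy, the relevant part might look roughly like the following fragment, where a “User” type response attribute pulls organizationName from the subject’s profile. This is a sketch only, with most of the policy’s fields omitted:

 {
   "name": "openidm-managed-user",
   "resources": [ "*://*:*/openidm/managed/user/*" ],
   "actionValues": { "GET": true, "PATCH": true },
   "resourceAttributes": [
     { "type": "User", "propertyName": "organizationName" }
   ]
 }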

Scripted Filter Based on Policy Response

Before the request is forwarded to IDM, IG can take additional action based on the response attributes received from the policy decision. For example, IG can query IDM to verify that /openidm/managed/user/01234 is within the same organization as the user making the request. If the user is not found in the query results, this filter can simply reject the request before it is sent to IDM.

Here is an example scripted filter which performs this function:

 {
   "name": "ScopeValidation",
   "type": "ScriptableFilter",
   "config": {
     "type": "application/x-groovy",
     "file": "scopeValidation.groovy",
     "args": {
       "scopeResourceQueryFilter": "/organizationName eq \"\\${organizationName}\"",
       "scopingAttribute": "organizationName",
       "failureResponse": "${heap['authzFailureResponse']}"
     }
   }
 }

This “scopeValidation.groovy” example looks for the presence of a “scopingAttribute” as a response attribute from the earlier policy decision and uses it to construct a query filter. That query filter defines the bounds within which a particular user can perform actions on subsets of resources. It is also applied when the user queries resources: whatever filter they supplied is altered to include this filter as well. For example, if they call GET /openidm/managed/user?_queryFilter=userName eq “jfeasel”, then the resulting query passed down to IDM would actually be GET /openidm/managed/user?_queryFilter=((userName eq “jfeasel”) AND (/organizationName eq “example”)). The end result is that the user only sees the subset of resources they are allowed to see.
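As an illustration of the core idea only (this is not the actual forgeops sample), a stripped-down scopeValidation.groovy might look like the following, assuming your IG version exposes the policy decision to scripts as contexts.policyDecision:

 // Stripped-down sketch of the idea behind scopeValidation.groovy.
 // Bindings assumed: contexts, context, request, next, plus the args
 // configured on the filter (scopingAttribute, failureResponse).

 // Response attributes returned by the earlier AM policy decision.
 def values = contexts.policyDecision.attributes[scopingAttribute]

 if (values == null || values.isEmpty()) {
     // The policy returned no scoping attribute for this user:
     // reject the request before it ever reaches IDM.
     return failureResponse.handle(context, request)
 }

 // The real script now builds a query filter such as
 //   /organizationName eq "example"
 // and either verifies that the targeted record matches it, or ANDs
 // it onto the _queryFilter supplied by the user.
 return next.handle(context, request)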

This is just one example of how you could achieve scoping. The power of scripting in IG means that if this example is insufficient for your needs, you can easily alter the Groovy code for this filter to handle whatever variation you need.

Discovery

Having each REST call validated as it is made is the most essential behavior required. That said, another very important aspect of delegated administration is showing users only those options which are actually available to them. It is a very poor user experience to show every option and then respond with “Access Denied” messages when users try something they are not allowed to do.

Fortunately, the AM policy engine has a way for users to discover which features are available to them; see the documentation for “Requesting Policy Decisions For a Tree of Resources”.
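As a rough sketch (the endpoint path, realm, and session header name all depend on your deployment), a request to the evaluateTree action looks something like this:

 POST /openam/json/policies?_action=evaluateTree HTTP/1.1
 Host: am.example.com:8080
 Content-Type: application/json
 iPlanetDirectoryPro: <privileged-sso-token>

 {
   "resource": "*://*:*/openidm/*",
   "application": "openidm",
   "subject": { "ssoToken": "<end-user-sso-token>" }
 }

A request like this returns values like so: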

[
 {
   "advices": {},
   "ttl": 9223372036854776000,
   "resource": "*://*:*/openidm/info/login",
   "actions": {
     "POST": true,
     "GET": true
   },
   "attributes": {}
 },
 {
   "advices": {},
   "ttl": 9223372036854776000,
   "resource": "*://*:*/openidm/managed/user/*",
   "actions": {
     "PATCH": true,
     "GET": true
   },
   "attributes": {
     "organizationName": [
       "example"
     ]
   }
 }
 .....
]

This is the fundamental building block upon which you can build a dynamic UI for your delegated admins. The next step is a means of returning this information to your users. The policy engine endpoints (such as the above call to evaluateTree) are not normally accessible directly by end users; only privileged accounts can access them. The best method for returning this information to your users is to define a new endpoint in IG which uses the same configuration details as the PolicyEnforcementFilter. This new endpoint is a ScriptableHandler instance which is essentially a thin wrapper around the call to evaluateTree.
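A hypothetical IG route for such an endpoint might look like this; the names here are illustrative, and the real implementation is in the forgeops sample mentioned next:

 {
   "name": "policyTree",
   "condition": "${matches(request.uri.path, '^/policyTree')}",
   "handler": {
     "type": "ScriptableHandler",
     "config": {
       "type": "application/x-groovy",
       "file": "policyTree.groovy"
     }
   }
 }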

An example implementation of this endpoint is available within the “sample-platform” folder of the forgeops Git repository; see the two key files there.

With this /policyTree/ endpoint available, your UI can make a simple GET request and find every detail about which REST endpoints are available to the current user. You can then write UI code which relies on this information to render only those features the user is allowed to use. Example code which extends the default IDM Admin UI is also available in the sample-platform folder; take a look at the files under idm/ui/admin/extension for more details.

Request Sequence

Here is a detailed diagram which demonstrates how a series of requests by an authenticated user with the delegated administration role is routed through the various systems:

Diagram available via WebSequenceDiagrams

Next steps

Although I only showed a very simple form of delegation with this example, the pattern described should enable many more complex and powerful use cases. There are many features of these products which, when used together, could make for some very exciting systems.

This is just the beginning – we will be working toward improving product integration and features to make this use case and many others possible. Be sure to let us know about your successes and your struggles in this area, so that we can keep the products growing in the right direction. Thanks!

Enhancing User Privacy with OpenID Connect Pairwise Identifiers

This is a quick post describing how to set up pairwise subject hashing when issuing OpenID Connect id_tokens that require the user’s sub= claim to be pseudonymous. The main use case for this approach is to prevent clients or resource servers from tracking user activity and correlating the same subject’s activity across different applications.

OpenID Connect basically provides two subject identifier types: public and pairwise. With public, the sub= claim is simply the user id or an equivalent for the user. This creates a flow something like the following:

Typical “public” subject identifier OIDC flow

This is just a typical authorization_code flow; the end result is the id_token payload. The sub= claim is in the clear and readable, which allows all of sub=jdoe’s activity to be correlated across applications.

So, what if you want a bit more privacy within your ecosystem? Enter the pairwise subject identifier type. This allows each client to be issued a non-reversible hash of the sub= claim, preventing correlation.

To configure this in ForgeRock Access Management, alter the OIDC provider settings: on the Advanced tab, simply add pairwise as a subject type.

Enabling Pairwise on the provider

 

Next, alter the salt for the hash, also on the provider settings Advanced tab.

Add a salt for the hash
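For reference, the OpenID Connect specification describes an example pairwise algorithm in which the sector identifier, the local subject identifier, and this salt are hashed together. The following Groovy fragment is an illustration of that idea only, not AM’s actual internal implementation:

 import java.security.MessageDigest

 // Illustrative only: the OIDC spec's example algorithm is roughly
 // sub = Base64( SHA-256( sector_identifier || local_account_id || salt ) ).
 // AM's internal implementation may differ in detail.
 def pairwiseSub(String sectorIdentifier, String localAccountId, String salt) {
     def hash = MessageDigest.getInstance("SHA-256")
             .digest((sectorIdentifier + localAccountId + salt).getBytes("UTF-8"))
     return hash.encodeBase64().toString()
 }

 // The same user yields uncorrelatable sub= values for different sectors.
 println pairwiseSub("app.example.com", "jdoe", "some-random-salt")
 println pairwiseSub("other.example.com", "jdoe", "some-random-salt")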

 

Each client profile then needs either a request_uri setting or a sector_identifier_uri, which is basically akin to the redirect_uri whitelist; it is just a mechanism to identify client requests. On the client profile’s Advanced tab, add the necessary sector identifier and change the subject identifier type to “pairwise”.

Client profile settings

Once done, just slightly alter the incoming authorization_code generation request to look something like this:

/openam/oauth2/authorize?response_type=code
&save_consent=0
&decision=Allow
&scope=openid
&client_id=OIDCClient
&redirect_uri=http://app.example.com:8080
&sector_identifier_uri=http://app.example.com:8080
Note the addition of the sector_identifier_uri parameter.  Once you’ve exchanged the authorization_code for an access_token, take a peek inside the associated id_token.  It now contains an opaque sub= claim:
{
  "at_hash": "numADlVL3JIuH2Za4X-G6Q",
  "sub": "lj9/l6hzaqtrO2BwjYvu3NLXKHq46SdorqSgHAUaVws=",
  "auditTrackingId": "f8ca531a-61dd-4372-aece-96d0cea21c21-152094",
  "iss": "http://openam.example.com:8080/openam/oauth2",
  "tokenName": "id_token",
  "aud": "OIDCClient",
  "c_hash": "Pr1RhcSUUDTZUGdOTLsTUQ",
  "org.forgerock.openidconnect.ops": "SJNTKWStNsCH4Zci8nW-CHk69ro",
  "azp": "OIDCClient",
  "auth_time": 1517485644000,
  "realm": "/",
  "exp": 1517489256,
  "tokenType": "JWTToken",
  "iat": 1517485656

}
The overall flow would now look something like this:
OIDC flow with Pairwise

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.