We love feedback on our documentation!

We on the ForgeRock documentation team appreciate the feedback you provide. It helps us improve our work and makes our documentation (docs) more relevant to you, our users.

To that end, we’ve improved our feedback processes.

If you have an issue with a specific document, look for a bug icon in the upper-right corner of our documents. You’ll see the following pop-up text: Report documentation feedback.

Select the bug icon to "Report documentation feedback"

When you select that icon, you’ll see the following feedback form, where you can enter any feedback you have on our docs.

 

When you `Submit` that form, the ForgeRock documentation team receives an email with the following information:

  • Product
  • Version
  • Title of the document
  • Message (entered by you)
  • Email address (if available)
  • Information from the current browser URL, including the current chapter, section, and title

Thanks in advance for your feedback, and we’ll do our best to answer every issue you bring us!

ForgeRock Identity Microservices with Consul on OpenShift

Introduction

OpenShift, a Kubernetes-as-a-PaaS offering, is increasingly being considered as an alternative to managed Kubernetes platforms such as those from Tectonic and Rancher, and to vanilla Kubernetes implementations such as those provided by Google, Amazon, and Azure. Red Hat’s OpenShift is a PaaS that provides a complete platform, including features such as source-to-image build and test management, image management with built-in change detection, and deployment to staging and production.

In this blog, I demonstrate how the ForgeRock Identity Microservices can be deployed into OpenShift Origin (the community project).

OpenShift Origin Overview

OpenShift Origin is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OpenShift adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams.

OpenShift embeds Kubernetes and extends it with security and other integrated concepts. An OpenShift Origin release corresponds to the Kubernetes distribution; for example, OpenShift 1.7 includes Kubernetes 1.7.

Docker Images

I described in this blog post how I built the envconsul layer on top of the forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY image for the Authentication microservice, and similarly for the token-validation and token-exchange microservices. Back then I took the easy route and used the VirtualBox (host) IP to expose the Consul address externally to the rest of the microservice pods.

This time, however, I rebuilt the Docker images using a modified version of the envconsul.sh script, which is used as the ENTRYPOINT, as seen in my GitHub repo. The script uses an environment variable called CONSUL_ADDRESS instead of a hard-coded VirtualBox adapter IP. The latter works nicely for minikube deployments but is not production friendly; with the environment variable, we move much closer to a production-grade deployment.
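
For reference, here is a minimal sketch of what the modified launcher might look like, assuming it otherwise matches the envconsul.sh shown later in the runtime-configuration post (the -prefix value shown applies to the authentication microservice):

#!/bin/sh
# Sketch: read the Consul address from the CONSUL_ADDRESS environment
# variable instead of a hard-coded VirtualBox adapter IP
exec /usr/bin/envconsul -log-level debug \
  -consul="${CONSUL_ADDRESS}" -pristine -prefix=ms-authn \
  /bin/sh -c "$SERVICE_HOME"/bin/docker-entrypoint.sh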

How does OpenShift Origin resolve the environment variable on container startup? Read on!

OpenShift Origin Configuration

I have used minishift for this demonstration. More info about minishift can be found here. I started up OpenShift Origin using these settings:
minishift start --openshift-version=v3.10.0-rc.0
eval $(minishift oc-env)
eval $(minishift docker-env)
minishift enable addon anyuid

The anyuid addon is only necessary when you have containers that must run as root. Consul is not one of those, but most Docker images, including the microservices, rely on root, so I enabled it anyway.

Apps can be deployed to OpenShift (OC) in a number of ways, using the web console or the oc CLI. The main method I used was to deploy existing container images hosted on the ForgeRock Bintray repository, as well as the Consul image on Docker Hub.

The following command creates a new project:

$oc new-project consul-microservices --display-name="Consul based MS" --description="consul powered env vars"

At this point we have a project container for all our OC artifacts, such as deployments, image streams, pods, services and routes.

The new-app command can be used to deploy any image hosted on an external public or private registry. Using this command, OC creates a deployment configuration with a rolling strategy for updates. The command shown here deploys the HashiCorp Consul image from Docker Hub (the default registry OC looks in). OC downloads the image and stores it in an internal image registry; the image is then copied from there to each node in an OC cluster where the app runs.

$oc new-app consul --name=consul-ms

With the deployment configuration created, two triggers are added by default: ConfigChange and ImageChange, which means that each deployment configuration change, or image update on the external repository, automatically triggers a new deployment. A replication controller is created automatically by OC soon after the deployment itself.

Make sure to add a route for the Consul UI running at port 8500 as follows:
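
The route can also be created from the CLI. This is a hedged equivalent; the service and route names (consul-ms, consul-ms-ui) are assumptions based on the route hostname that appears later in this post:

$oc expose svc/consul-ms --port=8500 --name=consul-ms-ui
route "consul-ms-ui" exposed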

I have described the Consul configuration needed for ForgeRock Identity Microservices in my earlier blog post on using envconsul for sourcing environment configuration at runtime. For details on how to get the microservices configuration into Consul as Key-Value pairs please refer to that post. A few snapshots of the UI showing the namespaces for the configuration are presented here:

 

A sample key value used for token exchange is shown here:

Next, we move on to deploying the microservices. In order for envconsul to read the configuration from Consul at runtime, we have the option of passing an --env switch to the oc new-app command indicating what the value of CONSUL_ADDRESS should be, or we can include the environment variable and its value in the deployment definition, as shown in the next section.

OpenShift Deployment Artifacts

If bundling more than one object for import, such as pod and service definitions, via a YAML file, OC expects a strongly typed template configuration, as sketched below for the ms-authn microservice. Note the extra metadata for template identification, and the use of the objects syntax:
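
What follows is a trimmed sketch of such a template; the names and values are illustrative rather than the exact manifest:

apiVersion: v1
kind: Template
metadata:
  name: ms-authn-template
  annotations:
    description: Deploys the ms-authn microservice with an envconsul launcher
objects:
- apiVersion: v1
  kind: Pod
  metadata:
    name: ms-authn-env
    labels:
      name: ms-authn-env
  spec:
    containers:
    - name: ms-authn-env
      image: forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL
      env:
      - name: CONSUL_ADDRESS
        value: consul-ms-ui-consul-microservices.192.168.64.3.nip.io
      ports:
      - containerPort: 8080
- apiVersion: v1
  kind: Service
  metadata:
    name: ms-authn
  spec:
    ports:
    - port: 8080
      targetPort: 8080
    selector:
      name: ms-authn-env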

 

This template, once stored in OC, can be reused to create new deployments of the ms-authn microservice. Notice the use of the environment variable indicating the Consul IP. This informs the envconsul.sh script of the location of Consul and envconsul is able to pull the configuration key-value pairs before spawning the microservice.

I used the oc new-app command in this demonstration instead.

As described in the other blog, copy the configuration export (this could be automated using Git as well) into the Consul Docker container:

$docker cp consul-export.json 5a2e414e548a:/consul-export.json

And then import the configuration using the consul kv command:

$consul kv import @consul-export.json
Imported: ms-authn/CLIENT_CREDENTIALS_STORE
Imported: ms-authn/CLIENT_SECRET_SECURITY_SCHEME
Imported: ms-authn/INFO_ACCOUNT_PASSWORD
Imported: ms-authn/ISSUER_JWK_STORE
Imported: ms-authn/METRICS_ACCOUNT_PASSWORD
Imported: ms-authn/TOKEN_AUDIENCE
Imported: ms-authn/TOKEN_DEFAULT_EXPIRATION_SECONDS
Imported: ms-authn/TOKEN_ISSUER
Imported: ms-authn/TOKEN_SIGNATURE_JWK_BASE64
Imported: ms-authn/TOKEN_SUPPORTED_ADDITIONAL_CLAIMS
Imported: token-exchange/
Imported: token-exchange/EXCHANGE_JWT_INTROSPECT_URL
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_ID
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_URL
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SCOPE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SET_ID
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_URL
Imported: token-exchange/INFO_ACCOUNT_PASSWORD
Imported: token-exchange/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-exchange/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-exchange/ISSUER_JWK_OPENID_URL
Imported: token-exchange/ISSUER_JWK_STORE
Imported: token-exchange/METRICS_ACCOUNT_PASSWORD
Imported: token-exchange/TOKEN_EXCHANGE_POLICIES
Imported: token-exchange/TOKEN_EXCHANGE_REQUIRED_SCOPE
Imported: token-exchange/TOKEN_ISSUER
Imported: token-exchange/TOKEN_SIGNATURE_JWK_BASE64
Imported: token-validation/
Imported: token-validation/INFO_ACCOUNT_PASSWORD
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_ID
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_SECRET
Imported: token-validation/INTROSPECTION_OPENAM_ID_TOKEN_INFO_URL
Imported: token-validation/INTROSPECTION_OPENAM_SESSION_URL
Imported: token-validation/INTROSPECTION_OPENAM_URL
Imported: token-validation/INTROSPECTION_REQUIRED_SCOPE
Imported: token-validation/INTROSPECTION_SERVICES
Imported: token-validation/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-validation/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-validation/ISSUER_JWK_STORE
Imported: token-validation/METRICS_ACCOUNT_PASSWORD

The oc new-app command used to create a deployment:
$oc new-app forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

This creates a new deployment for the authn microservice, and as described in detail in this blog, envconsul is able to reach the Consul server, extract configuration and set up the environment required by the auth microservice.

 

Commands for the token validation microservice are shown here:
$oc new-app forgerock-docker-public.bintray.io/microservice/token-validation:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

$oc expose svc/token-validation
route "token-validation" exposed

The token exchange microservice is set up as follows:

$oc new-app forgerock-docker-public.bintray.io/microservice/token-exchange:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

$oc expose svc/token-exchange
route "token-validation" exposed

Image Streams

OpenShift has a neat CI/CD trigger for Docker images built in to help you manage your pipeline. Whenever the image is updated in the remote public or private repository, the deployment is restarted.
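
One way to wire this up from the CLI is a scheduled re-import of the external tag into an image stream. A hedged example (the image stream name is illustrative):

$oc tag --scheduled=true forgerock-docker-public.bintray.io/microservice/token-exchange:ENVCONSUL token-exchange:ENVCONSUL

With the tag periodically re-imported, the ImageChange trigger on the deployment configuration rolls out a new deployment whenever the remote image changes.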

Details about the token exchange microservice image stream are shown here:

More elaborate Jenkins based pipelines can be built and will be demonstrated in a later blog!

Test

Set up the environment variables in Postman like so:

I have several tests in this GitHub repo. An example of getting a new access token using client credentials from the authn microservice is shown here:

 

This was an introduction to using OpenShift with our microservices, and there is much more to come!

ForgeRock Identity Live Berlin

The second show of the ForgeRock worldwide tour of Identity Live events took place last week in the beautiful city of Berlin.

My colleagues from the Marketing team have already put together a summary of the event, with a highlight video and links to slides and videos of the sessions.

And my photo album of the event is also visible online here:

ForgeRock Identity Live Berlin 2018

See you at the next Identity Live in Sydney or in Singapore in August.

Monitoring the ForgeRock Identity Platform 6.0 using Prometheus and Grafana


All products within the ForgeRock Identity Platform 6.0 release include native support for monitoring product metrics using Prometheus and visualising this information using Grafana.  To illustrate how these product metrics can be used, alongside each product’s binary download on Backstage, you can also find a zip file containing Monitoring Dashboard Samples.  Each download includes a README file which explains how to set up these dashboards.  In this post, we’ll review the AM 6.0.0 Overview dashboard included within the AM Monitoring Dashboard Samples.

As you might expect from the name, the AM 6.0.0 Overview dashboard is a high level dashboard which gives a cluster-wide summary of the load and average response time for key areas of AM.  For illustration purposes, I’ve used Prometheus to monitor three AM instances running on my laptop and exported an interactive snapshot of the AM 6.0.0 Overview dashboard.  The screenshots which follow have all been taken from that snapshot.

Variables

At the top of the dashboard, there are two dropdowns:

AM 6.0.0 Overview dashboard variables

 

  • Instance – Use this to select which AM instances within your cluster you’d like to monitor.  The instances shown within this dropdown are automatically derived from the metric data you have stored within Prometheus.
  • Aggregation Window – Use this to select a period of time over which you’d like to take a count of events shown within the dashboard. For example, how many sessions were started in the last minute, hour, day, etc.

Note. You’ll need to be up and running with your own Grafana instance in order to use these dropdowns as they can’t be updated within the interactive snapshot.

Authentications

AM 6.0.0 Overview dashboard Authentications section

The top row of the Authentications section shows a count of authentications which have started, succeeded, failed, or timed-out across all AM instances selected in the “Instance” variable drop-down over the selected “Aggregation Window”.  The screenshot above shows that a total of 95 authentications were started over the past minute across all three of my AM instances.

The second row of the Authentications section shows the per second rate of authentications starting, succeeding, failing, or timing-out by AM instance.  This set of line graphs can be used to see how behaviour is changing over time and if any AM instance is taking more or less of the load.

All of the presented Authentications metrics are of the summary type.

Sessions

AM 6.0.0 Overview dashboard Sessions section

As with the Authentications section, the top row of the Sessions section shows cluster-wide aggregations while the second row contains a set of line graphs.

In the Sessions section, the metrics being presented are session creation, session termination, and average session lifetime.  Unlike the other metrics presented in the Authentications and Sessions sections, the session lifetime metric is of the timer type.

In both panels, Prometheus is calculating the average session lifetime by dividing am_session_lifetime_seconds_total by am_session_lifetime_count.  Because this calculation happens within Prometheus rather than within each product instance, we have control over the period of time over which the average is calculated, we can choose to include or exclude certain session types by filtering on the session_type tag values, and we can calculate the cluster-wide average.
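
As a concrete illustration, a PromQL expression along these lines yields the cluster-wide average session lifetime over roughly the last five minutes (the window is an arbitrary choice for this sketch):

sum(rate(am_session_lifetime_seconds_total[5m]))
  /
sum(rate(am_session_lifetime_count[5m]))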

When working with any timer metric, it’s also possible to present the 50th, 75th, 95th, 98th, 99th, or 99.9th percentiles.  These are calculated from within the monitored product using an exponential decay so that they are representative of roughly the last five minutes of data [1].  Because percentiles are calculated from within the product, this does mean that it’s not possible to combine the results of multiple similar metrics or to aggregate across the whole cluster.

CTS

AM 6.0.0 Overview dashboard CTS section

The CTS is AM’s storage service for tokens and other data objects.  When a token needs to be operated upon, AM describes the desired operation as a Task object and enqueues this to be handled by an available worker thread when a connection to DS is available.

In the CTS section, we can observe…

  • Task Throughput – how many CTS operations are being requested
  • Tasks Waiting in Queues – how many operations are waiting for an available worker thread and connection to DS
  • Task Queueing Time – the time each operation spends waiting for a worker thread and connection
  • Task Service Time – the time spent performing the requested operation against DS

If you’d like to dig deeper into the CTS behaviour, you can use the AM 6.0.0 CTS dashboard to observe any of the recorded percentiles by operation type and token type.  You can also use the AM 6.0.0 CTS Token Reaper dashboard to monitor AM’s removal of expired CTS tokens.  Both of these dashboards are also included in the Monitoring Dashboard Samples zip file.

OAuth2

AM 6.0.0 Overview dashboard OAuth2 section

In the OAuth2 section, we can monitor OAuth2 grants completed or revoked, and tokens issued or revoked.  As OAuth2 refresh tokens can be long-lived and require state to be stored in CTS (to allow the resource owner to revoke consent), tracking grant issuance vs. revocation can be helpful for CTS capacity planning.

The dashboard currently shows a line per grant type.  You may prefer to use Prometheus’ sum function to show the count of all OAuth2 grants.  You can see examples of the sum function used in the Authentications section.

Note. You may also prefer to filter the grant_type tag to exclude refresh.
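
A hedged sketch of such a query, assuming a grant counter with a grant_type tag (the metric name here is illustrative, so check the names actually exposed by your AM instances):

sum(rate(am_oauth2_grant_total{grant_type!="refresh"}[5m]))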

Policy / Authorization

AM 6.0.0 Overview dashboard Policy / Authorizations section

The Policy section shows a count of policy evaluations requested and the average time it took to perform these evaluations.  As you can see from the screenshots, policy evaluation completed in next to no time on my AM instances as I have no policies defined.

JVM

AM 6.0.0 Overview dashboard JVM section

The JVM section shows how JVM metrics forwarded by AM to Prometheus can be used to track memory usage and garbage collections.  As can be seen in the screenshot, the JVM section is repeated per AM instance.  This is a nice feature of Grafana.

Note. The set of garbage collection metrics available depends on the selected GC algorithm.

Passing thoughts

While I’ve focused heavily on the AM dashboards within this post, all products across the ForgeRock Identity Platform 6.0 release include the same support for Prometheus and Grafana, and you can find sample dashboards for each.  I encourage you to take a look at the sample dashboards for DS, IG and IDM.

References

  1. DropWizard Metrics Exponentially Decaying Reservoirs, https://metrics.dropwizard.io/4.0.0/manual/core.html

This blog post was first published @ https://xennial-bytes.blogspot.com/, included here with permission from the author.

Cyber Security Skills in 2018

Last week I passed the EC-Council Certified Ethical Hacker exam.  Yay to me.  I am a professional penetration tester, right?  Negatory.  I sat the exam more as an exercise to see if I “still had it”.  A boxer returning to the ring.  It is over 10 years since I passed my CISSP: a 6-hour multiple-choice horror of an exam that was still conducted using pencil and paper down at Royal Holloway University.  In honesty, that was a great general information security benchmark and allowed you to go in multiple different directions as an "infosec pro".  So back to the CEH…

There are now a fair few information security related career paths in 2018.  The basic split tends to be something like:

  • Managerial - I don’t always mean managing people; more risk management, compliance management and auditing
  • Technical - here I guess I focus upon penetration testing, cryptography or secure software engineering
  • Operational - thinking security operations centres, log analysis, threat intelligence and the like

So the CEH would fit as an intro-to-intermediate level qualification within the technical sphere.  Is it a useful qualification to have?  Let me come back to that question, by framing it a little.

There is the constant hum that in both the US and UK there is a massive cyber and information security personnel shortage, in both the public and private sectors.  This I agree with, but it also needs some additional framing and qualification.  Which areas, which jobs, and what skill levels are missing or in short supply?  As the cyber security sector has reached a decent level of maturity with regard to job roles and, more importantly, job definitions, we can start to work backwards in understanding how to fulfil demand.

I often hear conversations around cyber education which go down the route of delivering a cyber security curriculum to the under-sixteens or even under-11 age groups.  Whilst this is incredibly important for general Internet safety, I’m not sure it helps the longer-term cyber skills supply problem.  If we look at the omnipresent shortage of medical doctors, we don’t start medical school earlier.  We teach the first principles earlier: maths, biology and chemistry, for example.  With those foundations in place, specialism becomes much easier at, say, eighteen, and again at 21 or 22 when specialist doctor training starts.

Shouldn’t we just apply the same approach to cyber?  A good grounding in mathematics, computing and networking would then provide a strong foundation to build upon, before focusing on cryptography or penetration testing.

The CEH exam (and this isn’t a specific criticism of the EC-Council, simply recent experience) doesn’t necessarily provide you with the skills to become a hacker.  I spent 5 months self-studying for the exam.  A few hours here and there whilst holding down a full-time job with regular travel.  In other words, not a lot of time.  The reason I probably passed the exam was mainly a broad 17-year history in networking, security and access management.  I certainly learned a load of stuff.  Mainly tooling and process, but not necessarily first-principles skills.

Most qualifications are great.  They certainly give the candidate career bounce and credibility and any opportunity to study is a good one.  I do think cyber security training is at a real inflection point though.

Clearly most large organisations are desperately building out teams to protect against and react to security incidents.  Be it for compliance reasons or to build end-user trust, we as an industry need to look at a longer-term, sustainable way to develop, nurture and feed talent.  Going back to basics seems a good step forward.

ForgeRock Identity Live Austin

The season of ForgeRock Identity Live events opened earlier in May with the first of a series of 6 worldwide events in 2018: Identity Live Austin.

With the largest audience since we started these events, this was an absolutely great event, with, as usual, passionate and in-depth discussions with customers and partners.

You can find highlights, session videos and selected decks on the event website.

And here is my summary of the two-day conference in pictures.

The next event will take place in Europe, in Berlin, on June 12-13. There is still time to register, and you can look at the whole agenda of the summits to find one closer to your home. I’m looking forward to meeting you there.

ForgeRock Identity Platform Version 6: Integrating IDM, AM, and DS

For the ForgeRock Identity Platform version 6, integration between our products is easier than ever. In this blog, I’ll show you how to integrate ForgeRock Identity Management (IDM), ForgeRock Access Management (AM), and ForgeRock Directory Services (DS). With integration, you can configure aspects of privacy, consent, trusted devices, and more. This configuration sets up IDM as an OpenID Connect / OAuth 2.0 client of AM, using DS as a common user datastore.

Setting up integration can be a challenge, as it requires you to configure (and read documentation from) three different ForgeRock products. This blog will help you set up that integration. For additional features, refer to the following chapters of the IDM documentation: Integrating IDM with the ForgeRock Identity Platform and Configuring User Self-Service.

While you can find most of the steps in the IDM 6 Samples Guide, this blog collects the information you need to set up integration in one place.

This blog post will guide you through the process. (Here’s the link to the companion blog post for ForgeRock Identity Platform version 5.5.)

Preparing Your System

For the purpose of this blog, I’ve configured all three systems in a single Ubuntu 16.04 VM (8 GB RAM / 40GB HD / 2 CPU).

Install Java 8 on your system. I’ve installed the Ubuntu 16.04-native openjdk-8 packages. In some cases, you may have to include export JAVA_HOME=/usr in your ~/.bashrc or ~/.bash_profile files.

Create separate home directories for each product. For the purpose of this blog, I’m using:

  • /home/idm
  • /home/am
  • /home/ds

Install Tomcat 8 as the web container for AM. For the purpose of this blog, I’ve downloaded Tomcat 8.5.30, and have unpacked it. Activate the executable bit in the bin/ subdirectory:

$ cd apache-tomcat-8.5.30/bin
$ chmod u+x *.sh

As AM requires fully qualified domain names (FQDNs), I’ve set up an /etc/hosts file with FQDNs for all three systems, with the following line:

192.168.0.1 AM.example.com DS.example.com IDM.example.com

(Substitute your IP address as appropriate. You may set up AM, DS, and IDM on different systems.)

If you set up AM and IDM on the same system, make sure they’re configured to connect on different ports. Both products configure default connections on ports 8080 and 8443.

Download AM, IDM, and DS versions 6 from backstage.forgerock.com. For organizational purposes, set them up on their own home directories:

 

Product Download Home Directory
DS DS-6.0.0.zip /home/ds
AM AM-6.0.0.war /home/am
IDM IDM-6.0.0.zip /home/idm

 

Unpack the zip files. For convenience, copy the Example.ldif file from /home/idm/openidm/samples/full-stack/data to the /home/ds directory.

For the purpose of this blog, I’ve downloaded Tomcat 8.5.30 to the /home/am directory.

Configuring ForgeRock Directory Services (DS)

To install DS, navigate to the directory where you unpacked the binary, in this case, /home/ds/opendj. In that directory, you’ll find a setup script. The following command uses that script to start DS as a directory server, with a root DN of “cn=Directory Manager”, with a host name of ds.example.com, port 1389 for LDAP communication, and 4444 for administrative connections:

$ ./setup \
  directory-server \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword password \
  --hostname ds.example.com \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --baseDN dc=com \
  --ldifFile /home/ds/Example.ldif \
  --acceptLicense

Earlier in this blog, you copied the Example.ldif file to /home/ds. Substitute if needed. Once the setup script returns you to the command line, DS is ready for integration.

Installing ForgeRock Access Management (AM)

Use the configured external DS server as a common user store for AM and IDM. For an extended explanation, see the following documentation: Integrating IDM with the ForgeRock Identity Platform. To install AM, use the following steps:

  1. Set up Tomcat for AM. For this blog, I used Tomcat 8.5.30, downloaded from http://tomcat.apache.org/. 
  2. Unzip Tomcat in the /home/am directory.
  3. Make the files in the apache-tomcat-8.5.30/bin directory executable.
  4. Copy the AM-6.0.0.war file from the /home/am directory to apache-tomcat-8.5.30/webapps/openam.war.
  5. Start the Tomcat web container with the startup.sh script in the apache-tomcat-8.5.30/bin directory. This action should unpack the openam.war binary to the apache-tomcat-8.5.30/webapps/openam directory.
  6. Shut down Tomcat, with the shutdown.sh script in the same directory. Make sure the Tomcat process has stopped.
  7. Open the web.xml file in the following directory: apache-tomcat-8.5.30/webapps/openam/WEB-INF/. For an explanation, see the AM 6 Release Notes. Include the following code blocks in that file to support cross-origin resource sharing:
<filter>
      <filter-name>CORSFilter</filter-name>
      <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
      <init-param>
           <param-name>cors.allowed.headers</param-name>
           <param-value>Content-Type,X-OpenIDM-OAuth-Login,X-OpenIDM-DataStoreToken,X-Requested-With,Cache-Control,Accept-Language,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers,X-OpenAM-Username,X-OpenAM-Password,iPlanetDirectoryPro</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.methods</param-name>
           <param-value>GET,POST,HEAD,OPTIONS,PUT,DELETE</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.origins</param-name>
           <param-value>http://am.example.com:8080,https://idm.example.com:9080</param-value>
      </init-param>
      <init-param>
           <param-name>cors.exposed.headers</param-name>
           <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Set-Cookie</param-value>
      </init-param>
      <init-param>
           <param-name>cors.preflight.maxage</param-name>
           <param-value>10</param-value>
      </init-param>
      <init-param>
           <param-name>cors.support.credentials</param-name>
           <param-value>true</param-value>
      </init-param>
 </filter>

 <filter-mapping>
      <filter-name>CORSFilter</filter-name>
      <url-pattern>/json/*</url-pattern>
 </filter-mapping>

Important: Substitute the actual URLs and ports for your AM and IDM deployments where you see http://am.example.com:8080 and http://idm.example.com:9080. (I forgot to make these changes once and couldn’t figure out the problem for a couple of days.)

Configuring AM

  1. If you’ve configured AM on this system before, delete the /home/am/openam directory.
  2. Restart Tomcat with the startup.sh script in the aforementioned apache-tomcat-8.5.30/bin directory.
  3. Navigate to the URL for your AM deployment. In this case, call it http://am.example.com:8080/openam. You’ll create a “Custom Configuration” for OpenAM, and accept the defaults for most cases.
    • When setting up Configuration Data Store Settings, for consistency, use the same root suffix in the Configuration Data Store, i.e. dc=example,dc=com.
    • When setting up User Data Store settings, make sure the entries match what you used when you installed DS. The following table is based on that installation:

      Option Setting
      Directory Name ds.example.com
      Port 1389
      Root Suffix dc=example,dc=com
      Login ID cn=Directory Manager
      Password password

       

  4. When the installation process is complete, you’ll be prompted with a login screen. Log in as the amadmin administrative user with the password you set up during the configuration process. With the following action, you’ll set up an OpenID Connect/OAuth 2.0 service that you’ll configure shortly for a connection to IDM.
    • Select Top-Level Realm -> Configure OAuth Provider -> Configure OpenID Connect -> Create -> OK. This sets up AM as an OIDC authorization server.
  5. Set up IDM as an OAuth 2.0 Client:
    • Select Applications -> OAuth 2.0. Choose Add Client. In the New OAuth 2.0 Client window that appears, set openidm as a Client ID, set changeme as a Client Secret, along with a Redirection URI of http://idm.example.com:9080/oauthReturn/. The scope is openid, which reflects the use of the OpenID Connect standard.
    • Select Create, go to the Advanced Tab, and scroll down. Activate the Implied Consent option.
    • Press Save Changes.
  6. Go to the OpenID Connect tab, and enter the following information in the Post Logout Redirect URIs text box:
    • http://idm.example.com:9080/
    • http://idm.example.com:9080/admin/
    • Press Save Changes.
  7. Select Services -> OAuth2 Provider -> Advanced OpenID Connect:
    • Scroll down and enter openidm in the “Authorized OIDC SSO Clients” text box.
    • Press Save Changes.
  8. Navigate to the Consent tab:
    • Enable the Allow Clients to Skip Consent option.
    • Press Save Changes.

AM is now ready for integration.

 

Configuring IDM

Now you’re ready to configure IDM, using the following steps:

  1. For the purpose of this blog, use the following project subdirectory: /home/idm/openidm/samples/full-stack.
  2. If you haven’t modified the deployment port for AM, modify the port for IDM. To do so, edit the boot.properties file in the openidm/resolver/ subdirectory, and change the port property appropriate for your deployment (openidm.port.http or openidm.port.https). For this blog, I’ve changed the openidm.port.http line to:
    • openidm.port.http=9080
  3. (NEW) You’ll also need to change the openidm.host. By default, it’s set to localhost. For this blog, set it to:
    • openidm.host=idm.example.com
  4. Start IDM using the full-stack project directory:
    • $ cd openidm
    • $ ./startup.sh -p samples/full-stack
      • (If you’re running IDM in a VM, the following command starts IDM and keeps it going after you log out of the system:
        nohup ./startup.sh -p samples/full-stack/ > logs/console.out 2>&1& )
    • As IDM includes pre-configured options for the ForgeRock Identity Platform in the full-stack subdirectory, IDM documentation on the subject frequently refers to the platform as the “Full Stack”.
  5. In a browser, navigate to http://idm.example.com:9080/admin/.
  6. Log in as an IDM administrator:
    • Username: openidm-admin
    • Password: openidm-admin
  7. Reconcile users from the common DS user store to IDM. Select Configure > Mappings. In the page that appears, find the mapping from System/Ldap/Account to Managed/User, and press Reconcile. That will populate the IDM Managed User store with users from the common DS user store.
  8. Select Configure -> Authentication. Choose the ForgeRock Identity Provider option. In the window that appears, scroll down to the configuration details. Based on the instance of AM configured earlier, you’d change:
    Property Entry
    Well-Known Endpoint http://am.example.com:8080/openam/oauth2/.well-known/openid-configuration
    Client ID Matching entry from Step 5 of Configuring AM (openidm)
    Client Secret Matching entry from Step 5 of Configuring AM (changeme)
  9. When you’ve made appropriate changes, press Submit. (You won’t be able to press submit until you’ve entered a valid Well-Known Endpoint.)
    • You’re prompted with the following message:
      • Your current session may be invalid. Click here to logout and re-authenticate.
  10. When you tap on the ‘Click here’ link, you should be taken to http://am.example.com:8080/openam/<some long extension>. Log in with AM administrative credentials:
    • Username: amadmin
    • Password: <what you configured during the AM installation process>

If you see the IDM Admin UI after logging in, congratulations! You now have a working integration between AM, IDM, and DS.

Note: To ensure a rapid response when the AM session expires, the IDM JWT_SESSION timeout has been reduced to 5 seconds. For more information, see the following section of the IDM ForgeRock Identity Platform sample: Changes to Session and Authentication Modules.
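
For reference, a trimmed sketch of what that setting might look like in the sample’s conf/authentication.json; the property names follow the IDM JWT_SESSION session module, but treat the exact layout as illustrative:

"sessionModule" : {
    "name" : "JWT_SESSION",
    "properties" : {
        "maxTokenLifeSeconds" : 5,
        "tokenIdleTimeSeconds" : 5,
        "sessionOnly" : true
    }
}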

 

Building On The ForgeRock Identity Platform

Once you’ve integrated AM, IDM, and DS, you can:

To visualize how this works, review the following diagram. For more information, see the following section of the IDM ForgeRock Identity Platform sample: Authorization Flow.

 

Troubleshooting

If you run into errors, review the following table:

 

Error message Solution
redirect_uri_mismatch Check for typos in the OAuth 2.0 client Redirection URI.
This application is requesting access to your account. Enable “Implied Consent” in the OAuth 2.0 client.

Enable “Allow Clients to Skip Consent” in the OAuth2 Provider.

Upon logout: The redirection URI provided does not match a pre-registered value. Check for typos in the OAuth 2.0 client Post Logout Redirect URIs.
Unable to login using authentication provider, with a redirect to preventAutoLogin=true. Check for typos in the Authorized OIDC SSO Clients list, in the OAuth2 Provider.

Make sure the Client ID and Client Secret in IDM match those configured for the AM OAuth 2.0 Application Client.

Unknown error: Please contact the administrator.

(In the dev console, you might see: Origin ‘http://idm.example.com:9080’ is therefore not allowed access.)

Check for typos in the URLs in your web.xml file.
The IDM Self-Service UI does not appear, but you can connect to the IDM Admin UI. Check for typos in the URLs in your web.xml file.

 

If you see other errors, the problem is likely beyond the scope of this blog.

 

Runtime configuration for ForgeRock Identity Microservices using Consul

The ForgeRock Identity Microservices are able to read all runtime configuration from environment variables. These could be statically provided as hard-coded key-value pairs in a Kubernetes manifest or shell script, but a better solution is to manage the configuration in a centralized configuration store such as HashiCorp Consul.

Managing configuration outside the code has become routine practice for modern service providers such as Google, Azure, and AWS, and for businesses that use both cloud and on-premise software.

In this post, I shall walk you through how Consul can be used to set configuration values, and how envconsul can be used to read these key-value pairs from different namespaces and supply them as environment variables to a subprocess. Envconsul can even be used to trigger automatic restarts on configuration changes without any modifications to the ForgeRock Identity Microservices.

Perhaps even more significantly, this post shows how it is possible to use Docker to layer on top of a base microservice Docker image from the ForgeRock Bintray repo and then use a pre-built Go binary for envconsul to spawn the microservice as a subprocess.

This GitHub repo has all the Docker and Kubernetes artifacts needed to run this integration. I used a single-cluster minikube setup.

Consul

Starting a Pod in Minikube

We start by setting up Consul as a Docker container: a simple docker pull consul followed by kubectl create -f kube-consul.yaml sets up a pod running Consul in our minikube environment. Consul creates a volume called /consul/data, onto which you can store configuration key-value pairs in JSON format; these can be imported directly using consul kv import @file.json, and consul kv export > file.json can be used to export all stored configuration, separated into JSON blobs by namespace.

The consul commands can be run from inside the Consul container by starting a shell in it using docker exec -ti <consul-container-name> /bin/sh.
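
For instance, a typical export/import round trip might look like this (the container name is a placeholder):

$ docker exec -ti <consul-container-name> /bin/sh
/ # consul kv export > /consul/data/file.json
/ # consul kv import @/consul/data/file.json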

Configuration setup using Key Value pairs

The namespaces for each microservice are defined either manually in the UI or via REST calls such as:
consul kv put my-app/my-key my-value

 

Expanding the ms-authn namespace yields the following screen:

A sample configuration set for all three microservices is available in the GitHub repo for this integration here. Use consul kv import @consul-export.json to import it into your Consul server.

Docker Builds

Base Builds

First, I built base images for each of the ForgeRock Identity Microservices without using an ENTRYPOINT instruction. These images are built using the openjdk:8-jre-alpine base image, the smallest Alpine-based OpenJDK image one can find; the openjdk7 and openjdk9 images are larger. Alpine itself is a great choice for a minimalistic Docker base image and provides packaging capabilities using apk, which makes it especially suitable when further layers are to be added on top, as in this demonstration.

An example for the Authentication microservice is:
FROM openjdk:8-jre-alpine
WORKDIR /opt/authn-microservice
ADD . /opt/authn-microservice
EXPOSE 8080

The command docker build -t authn:BASE-ONLY . builds the base image but does not start the authentication listener, since no CMD or ENTRYPOINT instruction was specified in the Dockerfile.

The graphic below shows the layers the microservices docker build added on top of the openjdk:8-jre-alpine base image:

Next, we need to define a launcher script that uses a pre-built envconsul go binary to setup communication with the Consul server and use the key-value namespace for the authentication microservice.

/usr/bin/envconsul -log-level debug -consul=192.168.99.100:31100 -pristine -prefix=ms-authn /bin/sh -c "$SERVICE_HOME"/bin/docker-entrypoint.sh

A lot just happened there!

We instructed the launcher script to start envconsul with an address for the Consul server and the nodePort defined in kube-consul.yaml previously. The -pristine switch tells envconsul not to merge extant environment variables with the ones read from the Consul server. The -prefix switch identifies the application namespace we want to retrieve configuration for, and /bin/sh -c spawns a new process for the docker-entrypoint.sh script, which is the ENTRYPOINT instruction for the authentication microservice tagged as 1.0.0-SNAPSHOT. Note that that tag is not used in our demonstration; instead we built a BASE-ONLY tagged image and layer the envconsul binary and launcher script (shown next) on top of it. Envconsul also tracks changes to configuration values in Consul, and if any are detected, the subprocess is restarted automatically!

Envconsul Build

With the envconsul.sh script ready as shown above, it is time to build a new image using the base-only tagged microservice image in the FROM instruction as shown below:

FROM forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY
MAINTAINER Javed Shah <javed.shah@forgerock.com>
COPY envconsul /usr/bin/
ADD envconsul.sh /usr/bin/envconsul.sh
RUN chmod +x /usr/bin/envconsul.sh
ENTRYPOINT ["/usr/bin/envconsul.sh"]

This Dockerfile includes the COPY instruction to add our pre-built standalone envconsul Go binary to the new image, and also adds the envconsul.sh script shown above to the /usr/bin/ directory. It also sets the +x permission on the script and makes it the ENTRYPOINT for the container.

At this time we used the docker build -t authn:ENVCONSUL . command to tag the new image correctly. This image is available at the ForgeRock Bintray Docker repo as well.

This graphic shows the layers we just added using our Dockerfile instruction set above to the new ENVCONSUL tagged image:

These two images are available for use as needed by the GitHub repo.

Kubernetes Manifest

It is a matter of pulling down the correctly tagged image and specifying one key item, the hostAliases instruction, to point to the OpenAM server running on the host. This server could be anywhere; just update the hostAliases accordingly. Manifests for the three microservices are available in the GitHub repo, and a sketch of the relevant fragment follows.
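
A minimal sketch of that fragment, assuming the OpenAM server is reachable from the pod at 192.168.99.1 (adjust the IP and hostname for your environment):

spec:
  hostAliases:
  - ip: "192.168.99.1"
    hostnames:
    - "openam.example.com"
  containers:
  - name: ms-authn-env
    image: forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL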

Starting the pod with kubectl create -f kube-authn-envconsul.yaml causes the following things to happen, as seen in the logs using kubectl logs ms-authn-env.

This shows that envconsul started up with the address of the Consul server. We deliberately used the older -consul flag for more “haptic” feedback from the pod, if you will. Once envconsul has started, it immediately reaches out to the Consul server, unless a Consul agent was provided in the startup configuration. It fetches the key-value pairs for the ms-authn namespace and makes them available as environment variables to the child process!

The configuration has been received from the server and at this time envconsul is ready to spawn the child process with these key-value pairs available as environment variables:

The Authentication microservice is able to startup and read its configuration from the environment provided to it by envconsul.

Change Detection

Let’s change a value in the Consul UI for one of our keys, CLIENT_CREDENTIALS_STORE, from json to ldap:

This change is immediately detected by envconsul in all running pods:

envconsul then wastes no time in restarting the subprocess microservice:

Pretty cool way to maintain a 12-factor app!

Tests

A quick test for the OAuth2 client credential grant reveals our endeavor has been successful. We receive a bearer access token from the newly started authn-microservice.

The token validation and token exchange microservices were also spun up using the procedures defined above.

Here is an example of validating a stateful access_token generated using the Authorization Code grant flow using OpenAM:

And finally, the Postman call to the token-exchange service, exchanging a user’s id_token for an access token that grants delegation to an “actor” whose id_token is also supplied as input, is shown here:

The resulting token is:

This particular use case is described in its own blog.

What’s coming next?

Watch this space for secrets integration with Vault and envconsul!

Token Exchange and Delegation using the ForgeRock Identity Microservices

In 2017 ForgeRock introduced an Early Access program (aka beta) for the ForgeRock Identity Microservices. In summary, the capabilities offered include token issuance using the OAuth2 client credentials grant, token validation of OAuth2/OIDC tokens (and even ssotokens), and token exchange based on the draft OAuth2 token exchange spec. I wrote a press release here and an introductory community blog post here.

In this article we will discuss how to implement delegation as per the OAuth2 Token Exchange draft specification. An overview of the process is worth discussing at this point.

We basically first want to grab hold of an OAuth2 token for our client to use in an Authorization header when calling the Token Exchange service. Next, we need our Authorization Server, ForgeRock Access Management in our example, to issue our friendly neighbors Alice and Bob their respective OIDC id_tokens. Note that Alice wants to delegate authority to Bob. While we shall show the REST calls to acquire those tokens, the details of setting up the OAuth2 Provider are left out. We shall, however, discuss the custom OIDC Claims Script that sets up the delegation framework for authenticated users, as well as the policy that defines the allowed actors who may be chosen for delegation.

But first things first, so we start with acquiring a bearer authentication token for our client, which will make calls into the Token Exchange service. This can be accomplished by having the Authentication microservice issue an OAuth2 access token with the exchange scope using the client-credentials grant_type. For the example we shall just use a JSON backend as our client repository, although the Authentication microservice supports using Cassandra, MongoDB, or any LDAP v3 directory, including ForgeRock Directory Services.

The repository holds the following data for the client:

{
 "clientId" : "client",
 "clientSecret" : "client",
 "scopes" : [ "exchange", "introspect" ],
 "attributes" : {}
 }

The authentication request is:

POST /service/access_token?grant_type=client_credentials HTTP/1.1
Host: localhost:18080
Content-Type: multipart/form-data
Authorization: Basic ****
Cache-Control: no-cache

After client authenticates, the Authentication microservice issues a token that when decoded looks like this:

{
 "sub": "client",
 "scp": [
     "exchange",
     "introspect"
 ],
 "auditTrackingId": "237cf708-1bde-4800-9bcd-5db5b5f1ca12",
 "iss": "https://fr.tokenissuer.example.com",
 "tokenName": "access_token",
 "token_type": "Bearer",
 "authGrantId": "a55f2bad-cda5-459e-8620-3e826bad9095",
 "aud": "client",
 "nbf": 1523058711,
 "scope": [
     "exchange",
     "introspect"
 ],
 "auth_time": 1523058711,
 "exp": 1523062311,
 "iat": 1523058711,
 "expires_in": 3600,
 "jti": "39b72a84-112d-46f3-a7cb-84ce7513e5bc",
 "cid": "client"
}

Bob, the actor, authenticates with AM and receives id_token:

{
 "at_hash": "Hgjw0D49EfYM6dmB9_K6kg",
 "sub": "user.1",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111930",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "oidcclient",
 "azp": "oidcclient",
 "auth_time": 1523058714,
 "realm": "/",
 "may_act": {},
 "exp": 1523062314,
 "tokenType": "JWTToken",
 "iat": 1523058714
}

Alice, the user also authenticates with AM and receives a special id_token:

{
 "at_hash": "nT_tDxXhcee7zZHdwladcQ",
 "sub": "user.0",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111989",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "myuserclient1",
 "azp": "myuserclient1",
 "auth_time": 1523058716,
 "realm": "/",
 "may_act": {
     "sub": "Bob"
 },
 "exp": 1523062316,
 "tokenType": "JWTToken",
 "iat": 1523058716
}

Why does Alice receive a may_act claim but Bob does not? This has to do with Alice’s group membership in DS. The OIDC Claims script attached to the OAuth2 Provider in AM checks for membership of this group and, if found, retrieves a value for the assigned “actor” or delegate. In this case Alice has designated actor status to Bob (via some out-of-band method, such as on a user portal, for example). The script retrieves the actor and sets a special claim with its value: may_act. The claims script is shown here in part:

def isAdmin = identity.getMemberships(IdType.GROUP)
    .inject(false) { found, group ->
        found || group.getName() == "policyEval"
    }

// ...search for actor... not shown

def mayact = [:]
if (isAdmin) {
    mayact.put("sub", actor)
}

Next, we must define a policy in AM. This policy could be based on OAuth2 scopes or a URI, whatever you choose. I chose to set up an OAuth2 Scopes policy, but whatever scope you choose must match the resource you pass into the TE service. The rest of the policy is pretty standard; for example, here is a summary of the Subject tab of the policy:

{
  "type": "JwtClaim",
  "claimName": "iss",
  "claimValue": "http://localhost:8080/openam/oauth2"
}

I am checking for an acceptable issuer, and that’s it. This could be replaced with a subject condition for Authenticated Users as well, although the client would then have to send in Alice’s “ssotoken” as the subject_token instead of her id_token when invoking the TE service.

This also implies that delegation is “limited” to the case where you are using a JWT for Alice, preferably an OIDC id_token. We cannot do delegation without a JWT, because the TE depends on the presence of a “may_act” claim in Alice’s subject_token; when using only an ssotoken or an access_token for Alice, we cannot pass this claim to the TE service. Hopefully that’s clear as mud now (laughter).

Anyway, back to the policy setup in AM. Keep in mind that the TE service expects certain response attributes: “aud” and “scp” are mandatory. “allowedActors” is optional but needed for our discussion, of course. It contains a list of allowed actors, and this list could be computed based on who the subject is. One subject attribute must also be returned. A simple example of response attributes could be:

aud: images.example.com
scp: read,write
allowedActors: Bob
allowedActors: HelpDeskAdministrator
allowedActors: James

The subject must also be returned from the policy and one way to do that is to parse the id_token and return the sub claim as a response attribute. A scripted policy condition attached to the Policy Environment extracts the sub claim and returns it in the response.

This policy script is attached as an environment condition as follows:
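
The script itself appears only as a screenshot, so here is a minimal Groovy sketch of such an environment condition. The authorized and responseAttributes bindings follow AM’s scripted policy condition conventions, but the way the id_token reaches the script (via the request environment, under a key named jwt) and the lack of signature verification are simplifications of this sketch:

// Sketch: pull the sub claim out of the id_token passed in the request
// environment and return it as a response attribute named uid
def jwts = environment.get("jwt")
if (jwts != null && !jwts.isEmpty()) {
    def payload = jwts.iterator().next().tokenize(".")[1]
    def claims = new groovy.json.JsonSlurper().parseText(
        new String(java.util.Base64.urlDecoder.decode(payload), "UTF-8"))
    responseAttributes.put("uid", [claims.sub])
    authorized = true
} else {
    authorized = false
}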

The extracted sub claim is then included as a response attribute.

At this point we are ready to set up the Token Exchange microservice to invoke the Policy REST endpoint in AM for resource matching, allowed actions, and subject evaluation. Keep in mind that the TE service supports pluggable policy backends, a really powerful feature: one could have specific “pods” of microservices running local JSON-backed policies, other “pods” running Open Policy Agent (OPA)-powered regex policies, and yet another set of “pods” in which the TE service calls out to AM’s policy engine for evaluation.

The setup when the TE service is configured to use AM as the token exchange policy backend looks like this:

{
 // Credentials for an OpenAM "subject" account with permission to access the OpenAM policy endpoint
 "authSubjectId" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_ID|service-account}",
 "authSubjectPassword" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD|****}",

// URL for the OpenAM authentication endpoint, without query parameters
 "authUrl" : "&{EXCHANGE_OPENAM_AUTH_URL|http://localhost:8080/openam/json/authenticate}",

// URL for the OpenAM policy-endpoint, without query parameters
 "policyUrl" : "&{EXCHANGE_OPENAM_POLICY_URL|http://localhost:8080/openam/json/realms/root/policies}",

// OpenAM Policy-Set ID
 "policySetId" : "&{EXCHANGE_OPENAM_POLICY_SET_ID|resource_policies}",

// OpenAM policy-endpoint response-attribute field-name containing "audience" values
 "audienceAttribute" : "&{EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR|aud}",

// OpenAM policy-endpoint response-attribute field-name containing "scope" values
 "scopeAttribute" : "&{EXCHANGE_OPENAM_POLICY_SCOPE_ATTR|scp}",

// OpenAM policy-endpoint response-attribute field-name containing the token's "subject" value
 "subjectAttribute" : "&{EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR|uid}",

// OpenAM policy-endpoint response-attribute field-name containing the allowedActors
 "allowedActorsAttribute" : "&{EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR|may_act}",

// (optional) OpenAM policy-endpoint response-attribute field-name containing "expires-in-seconds" values
 "expiresInSecondsAttribute" : "&{EXCHANGE_OPENAM_POLICY_EXPIRES_IN_SEC_ATTR|}",

// Set to "true" to copy additional OpenAM policy-endpoint response-attributes into generated token claims
 "copyAdditionalAttributes" : "&{EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR|true}"
}

The TE service checks the response attributes shown above in the policy evaluation result in order to set up claims in the signed JWT it returns to the client.

The body of a sample REST Policy call from the TE service to an OAuth2 scope policy defined in AM looks like as follows:

{
 "subject" : {
     "jwt" : "{{AM_ALICE_ID_TOKEN}}"
 },
 "application" : "resource_policies",
 "resources" : [ "delegate-scope" ]
}

Policy response returned to the TE service:

[
 {
 "resource": "delegate-scope",
 "actions": {
     "GRANT": true
 },
 "attributes": {
     "aud": [
         "images.example.com"
     ],
     "scp": [
         "read"
     ],
     "uid": [
         "Alice"
     ],
     "allowedActors": [
          "Bob"
     ]
 },
 "advices": {},
 "ttl": 9223372036854775807
 }
]

Again, whatever resource type you choose for the policy in AM, it must match the resource you pass into the TE service as shown above.
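
Based on the draft OAuth2 token exchange specification, the exchange request itself might look roughly like the following; the endpoint path and port are assumptions of this sketch, and the bearer token is the client access token acquired at the start:

curl -X POST 'http://localhost:18080/service/exchange_token' \
  -H 'Authorization: Bearer <client access_token>' \
  -d 'grant_type=urn:ietf:params:oauth:grant-type:token-exchange' \
  -d 'subject_token=<Alice id_token>' \
  -d 'subject_token_type=urn:ietf:params:oauth:token-type:id_token' \
  -d 'actor_token=<Bob id_token>' \
  -d 'actor_token_type=urn:ietf:params:oauth:token-type:id_token' \
  -d 'resource=delegate-scope'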

We have come far, and if you are still with me, it is time to invoke the TE service and get back a signed, exchanged JWT with the correct “act” claim indicating that delegation was successful. The decoded JWT claims set of the issued token is shown below. The new JWT is issued by the Token Exchange service and intended for consumption by a system entity known by the logical name “images.example.com” any time before its expiration. The subject “sub” of the JWT is the same as the subject of the subject_token used to make the request.

{
 "sub": "Alice",
 "aud": "images.example.com",
 "scp": [
     "read",
     "write"
 ],
 "act": {
     "sub": "Bob"
 },
 "iss": "https://fr.tokenexchange.example.com",
 "exp": 1523087727,
 "iat": 1523084127
}

The decoded claims set shows that the actor “act” of the JWT is the same as the subject of the “actor_token” used to make the request. This indicates delegation and identifies Bob as the current actor to whom authority has been delegated to act on behalf of Alice.