ForgeRock Identity Microservices with Consul on OpenShift

Introduction

OpenShift, a Kubernetes-based PaaS, is increasingly being considered as an alternative to managed Kubernetes platforms such as Tectonic and Rancher, and to the native Kubernetes offerings from Google, Amazon and Azure. RedHat’s OpenShift is a PaaS that provides a complete platform, with features such as source-to-image builds and test management, image management with built-in change detection, and deployment to staging and production.

In this blog, I demonstrate how ForgeRock Identity Microservices could be deployed into OpenShift Origin (community project).

OpenShift Origin Overview

OpenShift Origin is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OpenShift adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams.

OpenShift embeds Kubernetes and extends it with security and other integrated concepts. Each OpenShift Origin release corresponds to a Kubernetes distribution – for example, OpenShift 1.7 includes Kubernetes 1.7.

Docker Images

I described in this blog post how I built the envconsul layer on top of the forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY image for the Authentication microservice, and did the same for the token-validation and token-exchange microservices. Back then I took the easy route and used the vbox (host) IP to expose the Consul address to the rest of the microservices pods.

This time, however, I rebuilt the Docker images using a modified version of the envconsul.sh script, which is used as the ENTRYPOINT, as seen in my github repo. The script now reads an environment variable called CONSUL_ADDRESS instead of a hard-coded VirtualBox adapter IP. The latter works nicely for minikube deployments but is not production friendly; with the environment variable we move much closer to a production-grade deployment.

How does OpenShift Origin resolve the environment variable on container startup? Read on!

OpenShift Origin Configuration

I have used minishift for this demonstration. More info about minishift can be found here. I started up OpenShift Origin using these settings:
minishift start --openshift-version=v3.10.0-rc.0
eval $(minishift oc-env)
eval $(minishift docker-env)
minishift addons enable anyuid

The anyuid addon is only necessary when you have containers that must run as root. Consul is not one of those, but many Docker images, including the microservices, rely on root, so I decided to enable it anyway.

Apps can be deployed to OC in a number of ways, using the web console or the oc CLI. The approach I used here is to deploy existing container images: the microservice images hosted on the ForgeRock Bintray repository and the Consul image on Docker Hub.

The following command creates a new project:

$oc new-project consul-microservices --display-name="Consul based MS" --description="consul powered env vars"

At this point we have a project container for all our OC artifacts, such as deployments, image streams, pods, services and routes.

The new-app command can be used to deploy any image hosted on an external public or private registry. Using this command, OC creates a deployment configuration with a rolling strategy for updates. The command shown here deploys the Hashicorp Consul image from Docker Hub (the default registry OC looks in). OC downloads the image and stores it in an internal image registry; the image is then copied from there to each node in the OC cluster where the app runs.

$oc new-app consul --name=consul-ms

With the deployment configuration created, two triggers are added by default: a ConfigChange trigger and an ImageChange trigger, which means that any change to the deployment configuration, or any image update in the external repository, automatically triggers a new deployment. A replication controller is created automatically by OC soon after the deployment itself.
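You can check the artifacts OC just created (deployment configuration, replication controller and pods) with a quick query; the names below are whatever oc new-app derived from the image name in your project:

$oc get dc,rc,pods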

Make sure to add a route for the Consul UI running at port 8500 as follows:
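The route can be created in the web console, or with a one-liner like the following sketch. The service name is what oc new-app created above, and naming the route consul-ms-ui is what produces the consul-ms-ui-... hostname used as CONSUL_ADDRESS later in this post:

$oc expose svc/consul-ms --port=8500 --name=consul-ms-ui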

I have described the Consul configuration needed for ForgeRock Identity Microservices in my earlier blog post on using envconsul for sourcing environment configuration at runtime. For details on how to get the microservices configuration into Consul as Key-Value pairs please refer to that post. A few snapshots of the UI showing the namespaces for the configuration are presented here:

 

A sample key value used for token exchange is shown here:

Next, we move on to deploying the microservices. In order for envconsul to read the configuration from Consul at runtime, we can either pass an --env switch to the oc new-app command indicating the value of CONSUL_ADDRESS, or include the environment variable and its value in the deployment definition, as shown in the next section.

OpenShift Deployment Artifacts

If bundling more than one object for import, such as pod and service definitions, via a YAML file, OC expects a strongly typed template configuration, as shown below for the ms-authn microservice. Note the extra metadata for template identification, and the use of the objects syntax:
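The full template isn't reproduced here, but a minimal sketch along the following lines conveys the shape. The names, labels and port are illustrative; the ENVCONSUL-tagged image and the CONSUL_ADDRESS variable are the parts that matter. It can be loaded with oc create and instantiated later:

$oc create -f - <<'EOF'
apiVersion: v1
kind: Template
metadata:
  name: ms-authn-template
  annotations:
    description: "Authentication microservice configured via envconsul"
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ms-authn
  spec:
    replicas: 1
    selector:
      app: ms-authn
    template:
      metadata:
        labels:
          app: ms-authn
      spec:
        containers:
        - name: ms-authn
          image: forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL
          ports:
          - containerPort: 8080
          env:
          # Tells envconsul.sh where to find the Consul server
          - name: CONSUL_ADDRESS
            value: consul-ms-ui-consul-microservices.192.168.64.3.nip.io
- apiVersion: v1
  kind: Service
  metadata:
    name: ms-authn
  spec:
    selector:
      app: ms-authn
    ports:
    - port: 8080
      targetPort: 8080
EOF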

 

This template, once stored in OC, can be reused to create new deployments of the ms-authn microservice. Notice the use of the environment variable indicating the Consul IP. This informs the envconsul.sh script of the location of Consul and envconsul is able to pull the configuration key-value pairs before spawning the microservice.

In this demonstration, however, I used the oc new-app command instead.

As described in the other blog, copy the configuration export (could be automated using GIT as well) into the Consul docker container:

$docker cp consul-export.json 5a2e414e548a:/consul-export.json

And then import the configuration using the consul kv command:

$consul kv import @consul-export.json
Imported: ms-authn/CLIENT_CREDENTIALS_STORE
Imported: ms-authn/CLIENT_SECRET_SECURITY_SCHEME
Imported: ms-authn/INFO_ACCOUNT_PASSWORD
Imported: ms-authn/ISSUER_JWK_STORE
Imported: ms-authn/METRICS_ACCOUNT_PASSWORD
Imported: ms-authn/TOKEN_AUDIENCE
Imported: ms-authn/TOKEN_DEFAULT_EXPIRATION_SECONDS
Imported: ms-authn/TOKEN_ISSUER
Imported: ms-authn/TOKEN_SIGNATURE_JWK_BASE64
Imported: ms-authn/TOKEN_SUPPORTED_ADDITIONAL_CLAIMS
Imported: token-exchange/
Imported: token-exchange/EXCHANGE_JWT_INTROSPECT_URL
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_ID
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_URL
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SCOPE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SET_ID
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_URL
Imported: token-exchange/INFO_ACCOUNT_PASSWORD
Imported: token-exchange/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-exchange/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-exchange/ISSUER_JWK_OPENID_URL
Imported: token-exchange/ISSUER_JWK_STORE
Imported: token-exchange/METRICS_ACCOUNT_PASSWORD
Imported: token-exchange/TOKEN_EXCHANGE_POLICIES
Imported: token-exchange/TOKEN_EXCHANGE_REQUIRED_SCOPE
Imported: token-exchange/TOKEN_ISSUER
Imported: token-exchange/TOKEN_SIGNATURE_JWK_BASE64
Imported: token-validation/
Imported: token-validation/INFO_ACCOUNT_PASSWORD
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_ID
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_SECRET
Imported: token-validation/INTROSPECTION_OPENAM_ID_TOKEN_INFO_URL
Imported: token-validation/INTROSPECTION_OPENAM_SESSION_URL
Imported: token-validation/INTROSPECTION_OPENAM_URL
Imported: token-validation/INTROSPECTION_REQUIRED_SCOPE
Imported: token-validation/INTROSPECTION_SERVICES
Imported: token-validation/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-validation/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-validation/ISSUER_JWK_STORE
Imported: token-validation/METRICS_ACCOUNT_PASSWORD

The oc new-app command used to create a deployment:
$oc new-app forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

This creates a new deployment for the authn microservice, and as described in detail in this blog, envconsul is able to reach the Consul server, extract configuration and set up the environment required by the auth microservice.

 

Commands for the token validation microservice are shown here:
$oc new-app forgerock-docker-public.bintray.io/microservice/token-validation:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

$oc expose svc/token-validation
route "token-validation" exposed

The Token Exchange microservice is set up as follows:

$oc new-app forgerock-docker-public.bintray.io/microservice/token-exchange:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

$oc expose svc/token-exchange
route "token-validation" exposed

Image Streams

OpenShift has a neat built-in CI/CD trigger for Docker images to help you manage your pipeline: whenever the image is updated in the remote public or private repository, a new deployment is rolled out.

Details about the token exchange microservice image stream are shown here:
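If you want to inspect the image stream from the CLI, or ask OC to poll the external registry on a schedule rather than waiting for a change to be detected, commands along these lines work (the image stream name matches what oc new-app created above):

$oc describe is/token-exchange
$oc tag forgerock-docker-public.bintray.io/microservice/token-exchange:ENVCONSUL token-exchange:ENVCONSUL --scheduled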

More elaborate Jenkins based pipelines can be built and will be demonstrated in a later blog!

Test

Set up the environment variables in Postman like so:

I have several tests in this github repo. An example of getting a new access token using client credentials from the authn microservice is shown here:
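The same call can be made with curl instead of Postman. A rough sketch is shown below; the hostname is a placeholder for whatever route you exposed for the authn service, and client/client are the sample credentials from the JSON client repository used later in this series:

$curl -u client:client -X POST "http://ms-authn-consul-microservices.192.168.64.3.nip.io/service/access_token?grant_type=client_credentials"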

 

This was an introduction to using OpenShift with our micro services, and there is much more to come!

Monitoring the ForgeRock Identity Platform 6.0 using Prometheus and Grafana


All products within the ForgeRock Identity Platform 6.0 release include native support for monitoring product metrics using Prometheus and visualising this information using Grafana. To illustrate how these product metrics can be used, alongside each product’s binary download on Backstage you can also find a zip file containing Monitoring Dashboard Samples. Each download includes a README file which explains how to set up these dashboards. In this post, we’ll review the AM 6.0.0 Overview dashboard included within the AM Monitoring Dashboard Samples.

As you might expect from the name, the AM 6.0.0 Overview dashboard is a high level dashboard which gives a cluster-wide summary of the load and average response time for key areas of AM.  For illustration purposes, I’ve used Prometheus to monitor three AM instances running on my laptop and exported an interactive snapshot of the AM 6.0.0 Overview dashboard.  The screenshots which follow have all been taken from that snapshot.

Variables

At the top of the dashboard, there are two dropdowns:

AM 6.0.0 Overview dashboard variables

 

  • Instance – Use this to select which AM instances within your cluster you’d like to monitor.  The instances shown within this dropdown are automatically derived from the metric data you have stored within Prometheus.
  • Aggregation Window – Use this to select a period of time over which you’d like to take a count of events shown within the dashboard. For example, how many sessions were started in the last minute, hour, day, etc.

Note. You’ll need to be up and running with your own Grafana instance in order to use these dropdowns as they can’t be updated within the interactive snapshot.

Authentications

AM 6.0.0 Overview dashboard Authentications section

The top row of the Authentications section shows a count of authentications which have started, succeeded, failed, or timed-out across all AM instances selected in the “Instance” variable drop-down over the selected “Aggregation Window”.  The screenshot above shows that a total of 95 authentications were started over the past minute across all three of my AM instances.

The second row of the Authentications section shows the per second rate of authentications starting, succeeding, failing, or timing-out by AM instance.  This set of line graphs can be used to see how behaviour is changing over time and if any AM instance is taking more or less of the load.

All of the presented Authentications metrics are of the summary type.

Sessions

AM 6.0.0 Overview dashboard Sessions section

As with the Authentications section, the top row of the Sessions section shows cluster-wide aggregations while the second row contains a set of line graphs.

In the Sessions section, the metrics being presented are session creation, session termination, and average session lifetime.  Unlike the other metrics presented in the Authentications and Sessions sections, the session lifetime metric is of the timer type.

In both panels, Prometheus is calculating the average session lifetime by dividing am_session_lifetime_seconds_total by am_session_lifetime_count.  Because this calculation is happening within Prometheus rather than within each product instance, we have control over what period of time the average is calculated, we can choose to include or exclude certain session types by filtering on the session_type tag values, and we can calculate the cluster-wide average.
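As a rough sketch (not the exact dashboard query), the same ratio can be checked ad hoc against the Prometheus HTTP API, here computed over a one-hour window and summed across instances; adjust the Prometheus address to your own deployment:

curl -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(increase(am_session_lifetime_seconds_total[1h])) / sum(increase(am_session_lifetime_count[1h]))'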

When working with any timer metric, it’s also possible to present the 50th, 75th, 95th, 98th, 99th, or 99.9th percentiles.  These are calculated from within the monitored product using an exponential decay so that they are representative of roughly the last five minutes of data [1].  Because percentiles are calculated from within the product, this does mean that it’s not possible to combine the results of multiple similar metrics or to aggregate across the whole cluster.

CTS

AM 6.0.0 Overview dashboard CTS section

The CTS is AM’s storage service for tokens and other data objects.  When a token needs to be operated upon, AM describes the desired operation as a Task object and enqueues this to be handled by an available worker thread when a connection to DS is available.

In the CTS section, we can observe…

  • Task Throughput – how many CTS operations are being requested
  • Tasks Waiting in Queues – how many operations are waiting for an available worker thread and connection to DS
  • Task Queueing Time – the time each operation spends waiting for a worker thread and connection
  • Task Service Time – the time spent performing the requested operation against DS

If you’d like to dig deeper into the CTS behaviour, you can use the AM 6.0.0 CTS dashboard to observe any of the recorded percentiles by operation type and token type.  You can also use the AM 6.0.0 CTS Token Reaper dashboard to monitor AM’s removal of expired CTS tokens.  Both of these dashboards are also included in the Monitoring Dashboard Samples zip file.

OAuth2

AM 6.0.0 Overview dashboard OAuth2 section

In the OAuth2 section, we can monitor OAuth2 grants completed or revoked, and tokens issued or revoked. As OAuth2 refresh tokens can be long-lived and require state to be stored in CTS (to allow the resource owner to revoke consent), tracking grant issuance vs revocation can be helpful for CTS capacity planning.

The dashboard currently shows a line per grant type.  You may prefer to use Prometheus’ sum function to show the count of all OAuth2 grants.  You can see examples of the sum function used in the Authentications section.

Note. You may also prefer to filter the grant_type tag to exclude refresh.

Policy / Authorization

AM 6.0.0 Overview dashboard Policy / Authorizations section

The Policy section shows a count of policy evaluations requested and the average time it took to perform these evaluations.  As you can see from the screenshots, policy evaluation completed in next to no time on my AM instances as I have no policies defined.

JVM

AM 6.0.0 Overview dashboard JVM section

The JVM section shows how JVM metrics forwarded by AM to Prometheus can be used to track memory usage and garbage collections.  As can be seen in the screenshot, the JVM section is repeated per AM instance.  This is a nice feature of Grafana.

Note. The set of garbage collection metrics available depends on the selected GC algorithm.

Passing thoughts

While I’ve focused heavily on the AM dashboards within this post, all products across the ForgeRock Identity Platform 6.0 release include the same support for Prometheus and Grafana, and you can find sample dashboards for each. I encourage you to take a look at the sample dashboards for DS, IG and IDM.

References

  1. DropWizard Metrics Exponentially Decaying Reservoirs, https://metrics.dropwizard.io/4.0.0/manual/core.html

This blog post was first published @ https://xennial-bytes.blogspot.com/, included here with permission from the author.

ForgeRock Identity Platform Version 6: Integrating IDM, AM, and DS

For the ForgeRock Identity Platform version 6, integration between our products is easier than ever. In this blog, I’ll show you how to integrate ForgeRock Identity Management (IDM), ForgeRock Access Management (AM), and ForgeRock Directory Services (DS). With integration, you can configure aspects of privacy, consent, trusted devices, and more. This configuration sets up IDM as an OpenID Connect / OAuth 2.0 client of AM, using DS as a common user datastore.

Setting up integration can be a challenge, as it requires you to configure (and read documentation from) three different ForgeRock products. This blog will help you set up that integration. For additional features, refer to the following chapters from the IDM documentation: Integrating IDM with the ForgeRock Identity Platform and Configuring User Self-Service.

While you can find most of the steps in the IDM 6 Samples Guide, this blog collects the information you need to set up integration in one place.

This blog post will guide you through the process. (Here’s the link to the companion blog post for ForgeRock Identity Platform version 5.5.)

Preparing Your System

For the purpose of this blog, I’ve configured all three systems in a single Ubuntu 16.04 VM (8 GB RAM / 40GB HD / 2 CPU).

Install Java 8 on your system. I’ve installed the Ubuntu 16.04-native openjdk-8 packages. In some cases, you may have to include export JAVA_HOME=/usr in your ~/.bashrc or ~/.bash_profile files.

Create separate home directories for each product. For the purpose of this blog, I’m using:

  • /home/idm
  • /home/am
  • /home/ds

Install Tomcat 8 as the web container for AM. For the purpose of this blog, I’ve downloaded Tomcat 8.5.30, and have unpacked it. Activate the executable bit in the bin/ subdirectory:

$ cd apache-tomcat-8.5.30/bin
$ chmod u+x *.sh

As AM requires fully qualified domain names (FQDNs), I’ve set up an /etc/hosts file with FQDNs for all three systems, with the following line:

192.168.0.1 AM.example.com DS.example.com IDM.example.com

(Substitute your IP address as appropriate. You may set up AM, DS, and IDM on different systems.)

If you set up AM and IDM on the same system, make sure they’re configured to connect on different ports. Both products configure default connections on ports 8080 and 8443.

Download AM, IDM, and DS versions 6 from backstage.forgerock.com. For organizational purposes, set them up on their own home directories:

 

Product Download Home Directory
DS DS-6.0.0.zip /home/ds
AM AM-6.0.0.war /home/am
IDM IDM-6.0.0.zip /home/idm

 

Unpack the zip files. For convenience, copy the Example.ldif file from /home/idm/openidm/samples/full-stack/data to the /home/ds directory.

For the purpose of this blog, I’ve downloaded Tomcat 8.5.30 to the /home/am directory.

Configuring ForgeRock Directory Services (DS)

To install DS, navigate to the directory where you unpacked the binary, in this case, /home/ds/opendj. In that directory, you’ll find a setup script. The following command uses that script to start DS as a directory server, with a root DN of “cn=Directory Manager”, with a host name of ds.example.com, port 1389 for LDAP communication, and 4444 for administrative connections:

$ ./setup \
  directory-server \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword password \
  --hostname ds.example.com \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --baseDN dc=com \
  --ldifFile /home/ds/Example.ldif \
  --acceptLicense

Earlier in this blog, you copied the Example.ldif file to /home/ds. Substitute if needed. Once the setup script returns you to the command line, DS is ready for integration.
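Optionally, you can sanity-check the import before moving on with an ldapsearch against the new server; something along these lines should return the sample entries from Example.ldif:

$ /home/ds/opendj/bin/ldapsearch --hostname ds.example.com --port 1389 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --baseDN dc=example,dc=com "(uid=*)" uid cn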

Installing ForgeRock Access Manager (AM)

Use the configured external DS server as a common user store for AM and IDM. For an extended explanation, see the following documentation: Integrating IDM with the ForgeRock Identity Platform. To install AM, use the following steps:

  1. Set up Tomcat for AM. For this blog, I used Tomcat 8.5.30, downloaded from http://tomcat.apache.org/. 
  2. Unzip Tomcat in the /home/am directory.
  3. Make the files in the apache-tomcat-8.5.30/bin directory executable.
  4. Copy the AM-6.0.0.war file from the /home/am directory to apache-tomcat-8.5.30/webapps/openam.war.
  5. Start the Tomcat web container with the startup.sh script in the apache-tomcat-8.5.30/bin directory. This action should unpack the openam.war binary to the
    apache-tomcat-8.5.30/webapps/openam directory.
     
  6. Shut down Tomcat, with the shutdown.sh script in the same directory. Make sure the Tomcat process has stopped.
  7. Open the web.xml file in the following directory: apache-tomcat-8.5.30/webapps/openam/WEB-INF/. For an explanation, see the AM 6 Release Notes. Include the following code blocks in that file to support cross-origin resource sharing:
<filter>
      <filter-name>CORSFilter</filter-name>
      <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
      <init-param>
           <param-name>cors.allowed.headers</param-name>
           <param-value>Content-Type,X-OpenIDM-OAuth-Login,X-OpenIDM-DataStoreToken,X-Requested-With,Cache-Control,Accept-Language,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers,X-OpenAM-Username,X-OpenAM-Password,iPlanetDirectoryPro</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.methods</param-name>
           <param-value>GET,POST,HEAD,OPTIONS,PUT,DELETE</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.origins</param-name>
           <param-value>http://am.example.com:8080,http://idm.example.com:9080</param-value>
      </init-param>
      <init-param>
           <param-name>cors.exposed.headers</param-name>
           <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Set-Cookie</param-value>
      </init-param>
      <init-param>
           <param-name>cors.preflight.maxage</param-name>
           <param-value>10</param-value>
      </init-param>
      <init-param>
           <param-name>cors.support.credentials</param-name>
           <param-value>true</param-value>
      </init-param>
 </filter>

 <filter-mapping>
      <filter-name>CORSFilter</filter-name>
      <url-pattern>/json/*</url-pattern>
 </filter-mapping>

Important: Substitute the actual URLs and ports for your AM and IDM deployments where you see http://am.example.com:8080 and http://idm.example.com:9080. (I forgot to make these substitutions once and couldn’t figure out the problem for a couple of days.)

Configuring AM

  1. If you’ve configured AM on this system before, delete the /home/am/openam directory.
  2. Restart Tomcat with the startup.sh script in the aforementioned apache-tomcat-8.5.30/bin directory.
  3. Navigate to the URL for your AM deployment. In this case, call it http://am.example.com:8080/openam. You’ll create a “Custom Configuration” for OpenAM, and accept the defaults for most cases.
    • When setting up Configuration Data Store Settings, for consistency, use the same root suffix in the Configuration Data Store, i.e. dc=example,dc=com.
    • When setting up User Data Store settings, make sure the entries match what you used when you installed DS. The following table is based on that installation:

      Option Setting
      Directory Name ds.example.com
      Port 1389
      Root Suffix dc=example,dc=com
      Login ID cn=Directory Manager
      Password password

       

  4. When the installation process is complete, you’ll be prompted with a login screen. Log in as the amadmin administrative user with the password you set up during the configuration process. With the following action, you’ll set up an OpenID Connect/OAuth 2.0 service that you’ll configure shortly for a connection to IDM.
    • Select Top-Level Realm -> Configure OAuth Provider -> Configure OpenID Connect -> Create -> OK. This sets up AM as an OIDC authorization server.
  5. Set up IDM as an OAuth 2.0 Client:
    • Select Applications -> OAuth 2.0. Choose Add Client. In the New OAuth 2.0 Client window that appears, set openidm as a Client ID, set changeme as a Client Secret, along with a Redirection URI of http://idm.example.com:9080/oauthReturn/. The scope is openid, which reflects the use of the OpenID Connect standard.
    • Select Create, go to the Advanced Tab, and scroll down. Activate the Implied Consent option.
    • Press Save Changes.
  6. Go to the OpenID Connect tab, and enter the following information in the Post Logout Redirect URIs text box:
    • http://idm.example.com:9080/
    • http://idm.example.com:9080/admin/
    • Press Save Changes.
  7. Select Services -> OAuth2 Provider -> Advanced OpenID Connect:
    • Scroll down and enter openidm in the “Authorized OIDC SSO Clients” text box.
    • Press Save Changes.
  8. Navigate to the Consent tab:
    • Enable the Allow Clients to Skip Consent option.
    • Press Save Changes.

AM is now ready for integration.

 

Configuring IDM

Now you’re ready to configure IDM, using the following steps:

  1. For the purpose of this blog, use the following project subdirectory: /home/idm/openidm/samples/full-stack.
  2. If you haven’t modified the deployment port for AM, modify the port for IDM. To do so, edit the boot.properties file in the openidm/resolver/ subdirectory, and change the port property appropriate for your deployment (openidm.port.http or openidm.port.https). For this blog, I’ve changed the openidm.port.http line to:
    • openidm.port.http=9080
  3. (NEW) You’ll also need to change the openidm.host. By default, it’s set to localhost. For this blog, set it to:
    • openidm.host=idm.example.com
  4. Start IDM using the full-stack project directory:
    • $ cd openidm
    • $ ./startup.sh -p samples/full-stack
      • (If you’re running IDM in a VM, the following command starts IDM and keeps it going after you log out of the system:
        nohup ./startup.sh -p samples/full-stack/ > logs/console.out 2>&1& )
    • As IDM includes pre-configured options for the ForgeRock Identity Platform in the full-stack subdirectory, IDM documentation on the subject frequently refers to the platform as the “Full Stack”.
  5. In a browser, navigate to http://idm.example.com:9080/admin/.
  6. Log in as an IDM administrator:
    • Username: openidm-admin
    • Password: openidm-admin
  7. Reconcile users from the common DS user store to IDM. Select Configure > Mappings. In the page that appears, find the mapping from System/Ldap/Account to Managed/User, and press Reconcile. That will populate the IDM Managed User store with users from the common DS user store.
  8. Select Configure -> Authentication. Choose the ForgeRock Identity Provider option. In the window that appears, scroll down to the configuration details. Based on the instance of AM configured earlier, you’d change:
    Property Entry
    Well-Known Endpoint http://am.example.com:8080/openam/oauth2/.well-known/openid-configuration
    Client ID Matching entry from Step 5 of Configuring AM (openidm)
    Client Secret Matching entry from Step 5 of Configuring AM (changeme)
  9. When you’ve made appropriate changes, press Submit. (You won’t be able to press submit until you’ve entered a valid Well-Known Endpoint.)
    • You’re prompted with the following message:
      • Your current session may be invalid. Click here to logout and re-authenticate.
  10. When you tap on the ‘Click here’ link, you should be taken to http://am.example.com:8080/openam/<some long extension>. Log in with AM administrative credentials:
    • Username: amadmin
    • Password: <what you configured during the AM installation process>

If you see the IDM Admin UI after logging in, congratulations! You now have a working integration between AM, IDM, and DS.

Note: To ensure a rapid response when the AM session expires, the IDM JWT_SESSION timeout has been reduced to 5 seconds. For more information, see the following section of the IDM ForgeRock Identity Platform sample: Changes to Session and Authentication Modules.

 

Building On The ForgeRock Identity Platform

Once you’ve integrated AM, IDM, and DS, you can configure aspects of privacy, consent, trusted devices, and more, as noted in the introduction.

To visualize how this works, review the following diagram. For more information, see the following section of the IDM ForgeRock Identity Platform sample: Authorization Flow.

 

Troubleshooting

If you run into errors, review the following common problems and solutions:

 

  • Error: redirect_uri_mismatch
    Solution: Check for typos in the OAuth 2.0 client Redirection URI.
  • Error: "This application is requesting access to your account."
    Solution: Enable "Implied Consent" in the OAuth 2.0 client, and enable "Allow Clients to Skip Consent" in the OAuth2 Provider.
  • Error (upon logout): "The redirection URI provided does not match a pre-registered value."
    Solution: Check for typos in the OAuth 2.0 client Post Logout Redirect URIs.
  • Error: "Unable to login using authentication provider", with a redirect to preventAutoLogin=true.
    Solution: Check for typos in the Authorized OIDC SSO Clients list in the OAuth2 Provider. Make sure the Client ID and Client Secret in IDM match those configured for the AM OAuth 2.0 application client.
  • Error: "Unknown error: Please contact the administrator." (In the dev console, you might see: Origin 'http://idm.example.com:9080' is therefore not allowed access.)
    Solution: Check for typos in the URLs in your web.xml file.
  • Error: The IDM Self-Service UI does not appear, but you can connect to the IDM Admin UI.
    Solution: Check for typos in the URLs in your web.xml file.

 

If you see other errors, the problem is likely beyond the scope of this blog.

 

Runtime configuration for ForgeRock Identity Microservices using Consul

The ForgeRock Identity Microservices are able to read all runtime configuration from environment variables. These could be statically provided as hard coded key-value pairs in a Kubernetes manifest or shell script, but a better solution is to manage the configuration in a centralized configuration store such as Hashicorp Consul.

Managing configuration outside the code in this way has been routinely adopted as a practice by modern service providers such as Google, Azure and AWS, and by businesses that run both cloud and on-premise software.

In this post, I shall walk you through how Consul can be used to set configuration values, and how envconsul can be used to read these key-values from different namespaces and supply them as environment variables to a subprocess. Envconsul can even trigger automatic restarts on configuration changes, without any modifications needed to the ForgeRock Identity Microservices.

Perhaps even more significantly, this post shows how Docker can be used to layer on top of a base microservice docker image from the ForgeRock Bintray repo, using a pre-built Go binary for envconsul to spawn the microservice as a subprocess.

This github.com repo has all the docker and kubernetes artifacts needed to run this integration. I used a single cluster minikube setup.

Consul

Starting a Pod in Minikube

We start by setting up Consul as a Docker container: a simple docker pull consul followed by kubectl create -f kube-consul.yaml sets up a pod running Consul in our minikube environment. Consul creates a volume called /consul/data in which it stores its data; configuration key-value pairs can be imported directly from a JSON file using consul kv import @file.json, and consul kv export > file.json exports all stored configuration, separated into JSON blobs by namespace.

The consul commands can be run from inside the Consul container by starting a shell in it using docker exec -ti <consul-container-name> /bin/sh.

Configuration setup using Key Value pairs

The namespaces for each microservice are defined either manually in the UI, or via the CLI or REST calls, such as:
consul kv put my-app/my-key my-value
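For example, a couple of the ms-authn keys used later in this post could be seeded from the CLI like this (the values shown are illustrative placeholders only):

consul kv put ms-authn/TOKEN_ISSUER https://fr.tokenissuer.example.com
consul kv put ms-authn/TOKEN_DEFAULT_EXPIRATION_SECONDS 3600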

 

Expanding the ms-authn namespace yields the following screen:

A sample configuration set for all three microservices is available in the github.com repo for this integration here. Use consul kv import @consul-export.json to import them into your consul server.

Docker Builds

Base Builds

First, I built base images for each of the ForgeRock Identity Microservices without using an ENTRYPOINT instruction. These images are built on the openjdk:8-jre-alpine base image, which is the smallest Alpine-based OpenJDK image one can find; the openjdk7 and openjdk9 images are larger. Alpine itself is a great choice for a minimalistic Docker base image and provides packaging capabilities using apk, which makes it especially suitable as a base image to layer on top of, as in this demonstration.

An example for the Authentication microservice is:
FROM openjdk:8-jre-alpine
WORKDIR /opt/authn-microservice
ADD . /opt/authn-microservice
EXPOSE 8080

The command docker build -t authn:BASE-ONLY . builds the base image but does not start the authentication listener, since no CMD or ENTRYPOINT instruction was specified in the Dockerfile.

The graphic below shows the layers the microservices docker build added on top of the openjdk:8-jre-alpine base image:

Next, we need to define a launcher script that uses a pre-built envconsul go binary to setup communication with the Consul server and use the key-value namespace for the authentication microservice.

/usr/bin/envconsul -log-level debug -consul=192.168.99.100:31100 -pristine -prefix=ms-authn /bin/sh -c "$SERVICE_HOME"/bin/docker-entrypoint.sh

A lot just happened there!

We instructed the launcher script to start envconsul with the address of the Consul server and the nodePort defined in kube-consul.yaml previously. The -pristine switch tells envconsul not to merge extant environment variables with the ones read from the Consul server. The -prefix switch identifies the application namespace we want to retrieve configuration for, and the shell spawns a new process for the docker-entrypoint.sh script, which is the ENTRYPOINT for the authentication microservice tagged as 1.0.0-SNAPSHOT. Note that that tag is not used in our demonstration; instead we built a BASE-ONLY tagged image and layer the envconsul binary and launcher script (shown next) on top of it. Envconsul also tracks changes to the configuration values in Consul and, if any are detected, restarts the subprocess automatically!

Envconsul Build

With the envconsul.sh script ready as shown above, it is time to build a new image using the base-only tagged microservice image in the FROM instruction as shown below:

FROM forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY
MAINTAINER Javed Shah <javed.shah@forgerock.com>
COPY envconsul /usr/bin/
ADD envconsul.sh /usr/bin/envconsul.sh
RUN chmod +x /usr/bin/envconsul.sh
ENTRYPOINT ["/usr/bin/envconsul.sh"]

This Dockerfile includes the COPY instruction to add our pre-built envconsul go-binary that can run standalone to our new image and also adds the envconsul.sh script shown above to the /usr/bin/ directory. It also sets the +x permission on the script and makes it the ENTRYPOINT for the container.

At this time we used the docker build -t authn:ENVCONSUL . command to tag the new image correctly. This is available at the ForgeRock Bintray docker repo as well.

This graphic shows the layers we just added using our Dockerfile instruction set above to the new ENVCONSUL tagged image:

These two images are available for use as needed by the github.com repo.

Kubernetes Manifest

It is a matter of pulling down the correctly tagged image and specifying one key item, the hostAliases instruction to point to the OpenAM server running on the host. This server could be anywhere, just update the hostAliases accordingly. Manifests for the three microservices are available in the github.com repo.

Starting the pod with kubectl create -f kube-authn-envconsul.yaml causes the following things to happen, as seen in the logs using kubectl logs ms-authn-env.

This shows that envconsul started up with the address of the Consul server. We deliberately used the older -consul flag for more “haptic” feedback from the pod, if you will. Now that envconsul has started up, it will immediately reach out to the Consul server (unless a Consul agent was provided in the startup configuration), fetch the key-value pairs for the namespace ms-authn, and make them available as environment variables to the child process!

The configuration has been received from the server and at this time envconsul is ready to spawn the child process with these key-value pairs available as environment variables:

The Authentication microservice is able to startup and read its configuration from the environment provided to it by envconsul.

Change Detection

Let’s change a value in the Consul UI for one of our keys, CLIENT_CREDENTIALS_STORE, from json to ldap:

This change is immediately detected by envconsul in all running pods:

envconsul then wastes no time in restarting the subprocess microservice:

Pretty cool way to maintain a 12-factor app!

Tests

A quick test for the OAuth2 client credential grant reveals our endeavor has been successful. We receive a bearer access token from the newly started authn-microservice.

The token validation and token exchange microservices were also spun up using the procedures defined above.

Here is an example of validating a stateful access_token generated using the Authorization Code grant flow using OpenAM:

And finally, here is the Postman call to the token-exchange service, which exchanges a user’s id_token for an access token granting delegation to an “actor” whose id_token is also supplied as input:

The resulting token is:

This particular use case is described in its own blog.

What’s coming next?

Watch this space for secrets integration with Vault and envconsul!

Token Exchange and Delegation using the ForgeRock Identity Microservices

In 2017 ForgeRock introduced an Early Access program (aka beta) for the ForgeRock Identity Microservices. In summary, the capabilities offered include token issuance using the OAuth2 client credentials grant, token validation of OAuth2/OIDC tokens (and even ssotokens), and token exchange based on the draft OAuth2 token exchange spec. I wrote a press release here and an introductory community blog post here.

In this article we will discuss how to implement delegation as per the OAuth2 Token Exchange  draft specification. An overview of the process is worth discussing at this point.

We basically first want to grab hold of an OAuth2 token for our client to use in an Authorization header when calling the Token Exchange service. Next, we need our Authorization Server, ForgeRock Access Management in our example, to issue our friendly neighbors Alice and Bob their respective OIDC id_tokens. Note that Alice wants to delegate authority to Bob. While we shall show the REST calls to acquire those tokens, the details of setting up the OAuth2 Provider are left out. We shall, however, discuss the custom OIDC Claims Script that sets up the delegation framework for authenticated users, as well as the Policy that defines the allowed actors who may be chosen for delegation.

But first things first, so we start with acquiring a bearer authentication token for our client who will make calls into the Token Exchange service. This can be accomplished by having the Authentication microservice issue an OAuth2 access token with the exchange scope using the client-credentials grant_type. For the example we shall just use a JSON backend as our client repository, although the Authentication microservice supports using Cassandra, MongoDB, any LDAP v3 directory including ForgeRock Directory Services.

The repository holds the following data for the client:

{
 "clientId" : "client",
 "clientSecret" : "client",
 "scopes" : [ "exchange", "introspect" ],
 "attributes" : {}
 }

The authentication request is:

POST /service/access_token?grant_type=client_credentials HTTP/1.1
Host: localhost:18080
Content-Type: multipart/form-data
Authorization: Basic ****
Cache-Control: no-cache

After client authenticates, the Authentication microservice issues a token that when decoded looks like this:

{
 "sub": "client",
 "scp": [
     "exchange",
     "introspect"
 ],
 "auditTrackingId": "237cf708-1bde-4800-9bcd-5db5b5f1ca12",
 "iss": "https://fr.tokenissuer.example.com",
 "tokenName": "access_token",
 "token_type": "Bearer",
 "authGrantId": "a55f2bad-cda5-459e-8620-3e826bad9095",
 "aud": "client",
 "nbf": 1523058711,
 "scope": [
     "exchange",
     "introspect"
 ],
 "auth_time": 1523058711,
 "exp": 1523062311,
 "iat": 1523058711,
 "expires_in": 3600,
 "jti": "39b72a84-112d-46f3-a7cb-84ce7513e5bc",
 "cid": "client"
}

Bob, the actor, authenticates with AM and receives an id_token:

{
 "at_hash": "Hgjw0D49EfYM6dmB9_K6kg",
 "sub": "user.1",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111930",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "oidcclient",
 "azp": "oidcclient",
 "auth_time": 1523058714,
 "realm": "/",
 "may_act": {},
 "exp": 1523062314,
 "tokenType": "JWTToken",
 "iat": 1523058714
}

Alice, the user also authenticates with AM and receives a special id_token:

{
 "at_hash": "nT_tDxXhcee7zZHdwladcQ",
 "sub": "user.0",
 "auditTrackingId": "079fda18-c96f-4c78-90f2-fc9be0545b32-111989",
 "iss": "http://localhost:8080/openam/oauth2",
 "tokenName": "id_token",
 "aud": "myuserclient1",
 "azp": "myuserclient1",
 "auth_time": 1523058716,
 "realm": "/",
 "may_act": {
     "sub": "Bob"
 },
 "exp": 1523062316,
 "tokenType": "JWTToken",
 "iat": 1523058716
}

Why does Alice receive a may_act claim but Bob does not? This has to do with Alice’s group membership in DS. The OIDC Claims script attached to the OAuth2 Provider in AM checks for membership of this group and, if found, retrieves the value of the assigned “actor” or delegate. In this case Alice has designated actor status to Bob (via some out-of-band method, such as a user portal, for example). The script retrieves the actor and sets a special claim, may_act, to its value. The claims script is shown here in part:

def isAdmin = identity.getMemberships(IdType.GROUP)
    .inject(false) { found, group ->
        found || group.getName() == "policyEval"
    }

//...search for actor.. not shown

def mayact = [:]
if (isAdmin) {
    mayact.put("sub", actor)
}

Next, we must define a Policy in AM. This policy could be based on OAuth2 scopes or a URI, whatever you choose. I chose to set up an OAuth2 Scopes policy, but whatever scope you choose must match the resource you pass into the TE service. The rest of the policy is pretty standard; for example, here is a summary of the Subject tab of the policy:

{
  "type": "JwtClaim",
  "claimName": "iss",
  "claimValue": "http://localhost:8080/openam/oauth2"
}

I am checking for an acceptable issuer, and that’s it. This could be replaced with a subject condition for Authenticated Users as well, although the client would then have to send in Alice’s “ssotoken” as the subject_token instead of her id_token when invoking the TE service.

This also implies that delegation is “limited” to the case where you are using a JWT for Alice, preferably an OIDC id_token. We cannot do delegation without a JWT because the TE depends on the presence of a “may_act” claim in Alice’s subject_token. When using only an ssotoken or an access_token for Alice, we cannot pass this claim to the TE service. Hopefully that’s clear as mud now (laughter).

Anyway, back to the policy setup in AM. Keep in mind that the TE service expects certain response attributes: “aud” and “scp” are mandatory. “allowedActors” is optional, but needed for our discussion of course. It contains a list of allowed actors, and this list could be computed based on who the subject is. One subject attribute must also be returned. A simple example of the response attributes could be:

aud: images.example.com
scp: read,write
allowedActors: Bob
allowedActors: HelpDeskAdministrator
allowedActors: James

The subject must also be returned from the policy and one way to do that is to parse the id_token and return the sub claim as a response attribute. A scripted policy condition attached to the Policy Environment extracts the sub claim and returns it in the response.

This policy script is attached as an environment condition as follows:

The extracted sub claim is then included as a response attribute.

At this point we are ready to setup the Token Exchange microservice to invoke the Policy REST endpoint in AM for resource-match, allowed actions and subject evaluation. Keep in mind that the TE service supports pluggable policy backends – a really powerful feature – using which one could have specific “pods” of microservices running local JSON-backed policies, other “pods” could have Open Policy Agent (OPA)-powered regex policies and yet another set of “pods” could have the TE service calling out to AM’s policy engine for evaluation.

The setup when the TE is configured to use AM as the token exchange policy backend looks like this:

{
 // Credentials for an OpenAM "subject" account with permission to access the OpenAM policy endpoint
 "authSubjectId" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_ID|service-account}",
 "authSubjectPassword" : "&{EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD|****}",

// URL for the OpenAM authentication endpoint, without query parameters
 "authUrl" : "&{EXCHANGE_OPENAM_AUTH_URL|http://localhost:8080/openam/json/authenticate}",

// URL for the OpenAM policy-endpoint, without query parameters
 "policyUrl" : "&{EXCHANGE_OPENAM_POLICY_URL|http://localhost:8080/openam/json/realms/root/policies}",

// OpenAM Policy-Set ID
 "policySetId" : "&{EXCHANGE_OPENAM_POLICY_SET_ID|resource_policies}",

// OpenAM policy-endpoint response-attribute field-name containing "audience" values
 "audienceAttribute" : "&{EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR|aud}",

// OpenAM policy-endpoint response-attribute field-name containing "scope" values
 "scopeAttribute" : "&{EXCHANGE_OPENAM_POLICY_SCOPE_ATTR|scp}",

// OpenAM policy-endpoint response-attribute field-name containing the token's "subject" value
 "subjectAttribute" : "&{EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR|uid}",

// OpenAM policy-endpoint response-attribute field-name containing the allowedActors
 "allowedActorsAttribute" : "&{EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR|may_act}",

// (optional) OpenAM policy-endpoint response-attribute field-name containing "expires-in-seconds" values
 "expiresInSecondsAttribute" : "&{EXCHANGE_OPENAM_POLICY_EXPIRES_IN_SEC_ATTR|}",

// Set to "true" to copy additional OpenAM policy-endpoint response-attributes into generated token claims
 "copyAdditionalAttributes" : "&{EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR|true}"
}

The TE service checks the response attributes shown above in the policy evaluation result in order to set up claims in the signed JWT it returns to the client.

The body of a sample REST Policy call from the TE service to an OAuth2 scope policy defined in AM looks like as follows:

{
 "subject" : {
     "jwt" : "{{AM_ALICE_ID_TOKEN}}"
 },
 "application" : "resource_policies",
 "resources" : [ "delegate-scope" ]
}

Policy response returned to the TE service:

[
 {
 "resource": "delegate-scope",
 "actions": {
     "GRANT": true
 },
 "attributes": {
     "aud": [
         "images.example.com"
     ],
     "scp": [
         "read"
     ],
     "uid": [
         "Alice"
     ],
     "allowedActors": [
          "Bob"
     ]
 },
 "advices": {},
 "ttl": 9223372036854775807
 }
]

Again, whatever resource type you choose for the policy in AM, it must match the resource you pass into the TE service as shown above.
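For orientation, the exchange request itself looks roughly like the sketch below, built from the draft token-exchange spec parameters. The host, port and /service/exchange path are placeholders for however your TE instance is exposed, the bearer token is the client access token issued earlier, and AM_ALICE_ID_TOKEN / AM_BOB_ID_TOKEN stand in for the two id_tokens obtained from AM:

curl -X POST "http://localhost:18081/service/exchange" \
  -H "Authorization: Bearer $CLIENT_ACCESS_TOKEN" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
  -d "subject_token=$AM_ALICE_ID_TOKEN" \
  -d "subject_token_type=urn:ietf:params:oauth:token-type:id_token" \
  -d "actor_token=$AM_BOB_ID_TOKEN" \
  -d "actor_token_type=urn:ietf:params:oauth:token-type:id_token" \
  -d "resource=delegate-scope"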

We have come far, and if you are still with me, it is time to invoke the TE service and get back a signed and exchanged JWT with the correct “act” claim indicating that delegation was successful. The decoded JWT claims set of the issued token is shown below. The new JWT is issued by the Token Exchange service and intended for consumption by a system entity known by the logical name “images.example.com” any time before its expiration. The subject “sub” of the JWT is the same as the subject of the subject_token used to make the request.

{
 "sub": "Alice",
 "aud": "images.example.com",
 "scp": [
     "read",
     "write"
 ],
 "act": {
     "sub": "Bob"
 },
 "iss": "https://fr.tokenexchange.example.com",
 "exp": 1523087727,
 "iat": 1523084127
}

The decoded claims set shows that the actor “act” of the JWT is the same as the subject of the “actor_token” used to make the request. This indicates delegation and identifies Bob as the current actor to whom authority has been delegated to act on behalf of Alice.

The OAuth2 ForgeRock Identity Microservices

ForgeRock Identity Microservices

ForgeRock released, in Q4 2017, an Early Access (aka beta) program for three key Identity Microservices delivered as a compact, single-purpose code set for consumer-scale deployments. For companies deploying stateless microservices architectures, these microservices offer a micro-gateway style solution providing service trust, policy-enforced identity propagation, and even OAuth2-based delegation. The stateless architecture of the ForgeRock Identity Microservices allows for a sidecar-friendly micro-gateway deployment pattern.

I blogged about it here. The sign up form is here.

The Microservices

OAuth2 Token Issuance

Enables service-to-service authentication using the client credentials grant type. These “bearer” tokens facilitate trust up and down the call chain.

We currently support RSA, EC, and HMAC signing, with the following algorithms:

  • HS256
  • HS384
  • HS512
  • RS256
  • ES256
  • ES384
  • ES512

Token Validation

Supports introspection of OAuth2 / OIDC tokens issued by an OAuth2 AS, as well as native AM tokens. Token validation is performed using either configured keys, or a lookup via the issuer’s JWK URI for issuer and kid verification.

One or more Token Introspection API implementations can be configured by setting the value to a comma-separated list of implementation names. Each implementation is given a chance to handle the token, in given order, until an implementation successfully handles the token. An error response occurs if no implementation successfully introspects a given token.

The following implementations are provided:

  • json : Configured by a JSON file.
  • openam : Proxies introspect calls to ForgeRock Access Management.
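With the environment-variable configuration used elsewhere in this series, selecting both implementations in order could look like the following sketch (json is tried first, then the openam proxy):

export INTROSPECTION_SERVICES=json,openam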

Token Exchange

Supports exchanging stateless OAuth2 access tokens and native AM tokens for hybrid tokens that grant specific entitlements per resource or audience requested. Implements the draft OAuth2 token-exchange specification. Offers no-fuss declarative (“local”) policy enforcement inside the pod without needing to “call home” for identity data. Also provides policy-based entitlement decisions by calling out to ForgeRock Access Management.

The Token-Exchange Policy is a pluggable service which determines whether or not a given token exchange may proceed. The service also determines what audience, scope, and so forth the generated token will have. The following implementations are provided:

  • json (default) : Configured by a JSON file.
  • jwt : Simply returns a JWT for any token that can be introspected.
  • openam : Uses a ForgeRock Access Management Policy Set.

Cloud and Platform

Docker and Kubernetes

A Dockerfile is bundled with the binary. Soon there will be a Docker repository from which evaluators can pull the images.

A Kubernetes manifest is also provided that demonstrates using environment variables to configure a simple demo instance.

Metrics and Prometheus Integration

The microservices are instrumented to publish basic request metrics and a Prometheus endpoint is also provided, which is convenient for monitoring published metrics.

Audit Tracking

Each incoming HTTP request is assigned a transaction ID for audit logging purposes, which by default is a UUID. Like other ForgeRock products, the microservices support propagation of the transaction ID between point-to-point service calls.

Cloud Foundry

The standard binary in Zip format is also directly deployable to Cloud Foundry, and is recognized by the Cloud Foundry Java buildpack as a “dist zip” format.

Client Credential Repository

ForgeRock Identity Microservices support Cassandra, MongoDB, and any LDAP v3 directory, including ForgeRock Directory Services.

ForgeRock welcomes Shankar Raman

Welcome to Shankar Raman, who joins the ForgeRock documentation team today.

Shankar is starting with platform and deployment documentation, where many of you have been asking us to do more.

Shankar comes to the team from curriculum development, having worked for years as an instructor, writer, and course developer at Oracle on everything from the DB, to middleware, to Fusion Applications. Shankar’s understanding of the deployment problem space and what architects and deployers must know to build solutions will help him make your lives, or at least your jobs, a bit easier. Looking forward to that!

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

Using IDM and DS to synchronise hashed passwords

Overview

In this post I will describe a technique for synchronising a hashed password from ForgeRock IDM to DS.

Out of the box, IDM has a Managed User object that encrypts a password using symmetric (reversible) encryption.  One reason for this is that it is sometimes necessary to pass the password, in clear text, to a destination directory so that it can perform its own hashing before storing it.  Therefore the out-of-the-box synchronisation model for IDM is to take the encrypted password from its own store, decrypt it, and pass it in clear text (typically over a secure channel!) to DS for it to hash and store.

You can see this in the samples for IDM.

However, there are times when storing an encrypted, rather than hashed, value for a password is not acceptable.  IDM includes the capability to hash properties (such as passwords), not just encrypt them.  In that scenario, given that password hashes are one-way, it’s not possible to decrypt the password before synchronisation with other systems such as DS.

Fortunately, DS offers the capability of accepting pre-hashed passwords so IDM is able to pass the hash to DS for storage.  DS obviously needs to know this value is a hash, otherwise it will try to hash the hash!

So, what are the steps required?

  1. Ensure that DS is configured to accept hashed passwords.
  2. Ensure the IDM data model uses ‘Hashing’ for the password property.
  3. Ensure the IDM mapping is set up correctly

Ensure DS is configured to accept hashed passwords

This topic is covered excellently by Mark Craig in this article here:
https://marginnotes2.wordpress.com/2011/07/21/opendj-using-pre-encoded-passwords/

I’m using ForgeRock DS v5.0 here, but Mark references the old name for DS (OpenDJ) because this capability has been around for a while.  The key thing to note about the steps in the article is that you need the allow-pre-encoded-passwords advanced password policy property to be set for the appropriate password policy.  I’m only going to be dealing with one password policy – the default one – so Mark’s article covers everything I need.

(I will be using a Salted SHA-512 algorithm so if you want to follow all the steps, including testing out the change of a user’s password, then specify {SSHA512} in the userPassword value, rather than {SSHA}.  This test isn’t necessary for the later steps in this article, but may help you understand what’s going on).
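For reference, the property can be set with a dsconfig command along these lines (a sketch; adjust the host, port, credentials and policy name to your own installation):

$ dsconfig set-password-policy-prop \
  --policy-name "Default Password Policy" \
  --set allow-pre-encoded-passwords:true \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --trustAll --no-prompt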

Ensure IDM uses hashing

Like everything in IDM, you can modify configuration by changing the various config .json files, or the UI (which updates the json config files!)

I’ll use IDM v5.0 here and show the UI.

By default, the Managed User object includes a password property that is defined as ‘Encrypted’:

We need to change this to be Hashed:

And I’m using the SHA-512 algorithm here (which is in fact a salted SHA-512 algorithm).

Note that making this change does not update all the user passwords that exist.  It will only take effect when a new value is saved to the property.
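For example, saving a new password over IDM’s REST interface causes the value to be re-stored using the hashing configuration. A rough sketch, assuming a default local IDM instance; the URL, admin credentials and user id are illustrative:

# save a new password for a managed user so it is re-stored as a hash
curl --request PATCH \
  --header "X-OpenIDM-Username: openidm-admin" \
  --header "X-OpenIDM-Password: openidm-admin" \
  --header "Content-Type: application/json" \
  --header "If-Match: *" \
  --data '[{"operation":"replace","field":"password","value":"NewPassw0rd!"}]' \
  "http://localhost:8080/openidm/managed/user/<user-id>"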

Now the value of a password, when it is saved, is a string representation of a complex JSON object (just like it is when encrypted), and will look something like this:

{"$crypto":

  {"value":

    {"algorithm":"SHA-512","data":"Quxh/PEBXMa2wfh9Jmm5xkgMwbLdQfytGRy9VFP12Bb5I2w4fcpAkgZIiMPX0tcPg8OSo+UbeJRdnNPMV8Kxc354Nj12j0DXyJpzgqkdiWE="},

    "type":"salted-hash"

  }

}

Ensure IDM Mapping is setup correctly

Now we need to configure the mapping.

As you may have noted in the first step, DS is told that the password is pre-hashed by the presence of {SSHA512} at the beginning of the userPassword value.  Therefore we need a transformation script that takes the algorithm and hash value from IDM and concatenates them into the form DS expects.

The script is fairly simple, but does need some logic to convert the IDM algorithm representation (SHA-512) into the DS representation ({SSHA512}).

This is the transformation script (in groovy) I used (which can be extended of course for other algorithms):

String strHash;
if (source.$crypto.value.algorithm == "SHA-512") {
  strHash = "{SSHA512}" + source.$crypto.value.data
}
strHash;

This script replaces the default IDM script that does the decryption of the password.

(You might want to extend the script to cope with both hashed and encrypted password values if you already have data.  Look at functions such as openidm.isHashed and openidm.isEncrypted in the IDM Integrator’s Guide.)

Now when a password is changed, it is stored in hashed form in IDM.  The mapping is then triggered to synchronise to DS, applying the transformation script that passes on the pre-hashed password value.

Now there is no need to store passwords in reversible encryption!

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

Kubernetes: Why won’t that deployment roll?

Kubernetes Deployments provide a declarative way of managing replica sets and pods.  A deployment specifies how many pods to run as part of a replica set, where to place pods, how to scale them and how to manage their availability.

Deployments are also used to perform rolling updates to a service. They can support a number of different update strategies such as blue/green and canary deployments.

The examples provided in the Kubernetes documentation explain how to trigger a rolling update by changing a docker image.  For example, suppose you edit the Docker image tag by using the kubectl edit deployment/my-app command, changing the image tag from acme/my-app:v0.2.3 to acme/my-app:v0.3.0.

When Kubernetes sees that the image tag has changed, a rolling update is triggered according to the declared strategy. This results in the pods with the 0.2.3 image being taken down and replaced with pods running the 0.3.0 image.  This is done in a rolling fashion so that the service remains available during the transition.
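The same change can also be made non-interactively. A sketch, assuming the container in the pod template is also named my-app:

kubectl set image deployment/my-app my-app=acme/my-app:v0.3.0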

If your application is packaged up as a Helm chart, the helm upgrade command can be used to trigger a rollout:

helm upgrade my-release my-chart

Under the covers, Helm is just applying any updates in your chart to the deployment and sending them to Kubernetes. You can achieve the same thing yourself by applying updates using kubectl.
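For instance, you could edit the deployment in place, or apply an updated copy of its manifest yourself (the manifest file name here is just an example):

kubectl edit deployment/openig
# ...or...
kubectl apply -f openig-deployment.yaml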

Now let’s suppose that you want to trigger a rolling update even if the docker image has not changed. A likely scenario is that you want to apply an update to your application’s configuration.

As an example, the ForgeRock Identity Gateway (IG) Helm chart pulls its configuration from a git repository. If the configuration changes in git, we’d like to roll out a new deployment.

My first attempt at this was to perform a helm upgrade on the release, updating the ConfigMap in the chart with the new git branch for the release. Our Helm chart uses the ConfigMap to set an environment variable to the git branch (or commit) that we want to check out:

kind: ConfigMap
data:
  GIT_CHECKOUT_BRANCH: test

After editing the ConfigMap, and doing a helm upgrade, nothing terribly exciting happened.  My deployment did not “roll” as expected with the new git configuration.

As it turns out, Kubernetes needs to see a change in the pod’s spec.template before it triggers a new rollout. Changing the image tag is one way to do that, but any change to the template will work. As I discovered, changes to a ConfigMap *do not* trigger deployment updates.

The solution here is to move the git branch variable out of the ConfigMap and into the pod’s spec.template in the deployment object:

spec:
  initContainers:
  - name: git-init
    env:
    - name: GIT_CHECKOUT_BRANCH
      value: "{{ .Values.global.git.branch }}"

When the IG helm chart is updated (we supply a new value to the template variable above), the template’s spec changes, and Kubernetes will roll out the new deployment.

Here is what it looks like when we put it all together (you can check out the full IG chart here).

# install IG using helm. The git branch defaults to "master"
$ helm install openig
NAME:   hissing-yak

$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
openig    1         1         1            1           4m

# Take a look at our deployment history. So far, there is only one revision:
$ kubectl rollout history deployment
deployments "openig"
REVISION  CHANGE-CAUSE
1

# Let's deploy a different configuration. We can use a commit hash here instead
# of a branch name:
$ helm upgrade --set global.git.branch=d0562faec4b1621ad53b852aa5ee61f1072575cc \
  hissing-yak openig

# Have a look at our deployment. We now see a new revision:
$ kubectl rollout history deploy
deployments "openig"
REVISION  CHANGE-CAUSE
1
2

# Describe the deployment. You should see events where the original
# replica set is scaled down and a new one is scaled up:
$ kubectl describe deployment
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  # 19m ago: the original replica set
  Normal  ScalingReplicaSet  19m   deployment-controller  Scaled up replica set openig-6f6575cdfd to 1
  # 3m ago: the new replica set being brought up
  Normal  ScalingReplicaSet  3m    deployment-controller  Scaled up replica set openig-7b5788659c to 1
  # 3m ago: the old replica set being scaled down
  Normal  ScalingReplicaSet  3m    deployment-controller  Scaled down replica set openig-6f6575cdfd to 0

One of the neat things we can do with deployments is roll back to a previous revision. For example, if we have an error in our configuration, and want to restore the previous release:

$ kubectl rollout undo deployment/openig
deployment "openig" rolled back
$ kubectl describe deployment
Normal  DeploymentRollback  1m  deployment-controller  Rolled back deployment "openig" to revision 1

# We can see the old pod being terminated and the new one already running:
$ kubectl get pod
NAME                      READY     STATUS        RESTARTS   AGE
openig-6f6575cdfd-tvmgj   1/1       Running       0          28s
openig-7b5788659c-h7kcb   0/1       Terminating   0          8m

And that, my friends, is how we roll.

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Introduction to ForgeRock DevOps – Part 3 – Deploying Clusters

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:

http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html
http://identity-implementation.blogspot.co.uk/2017/05/introduction-to-forgerock-devops-part-2.html

I will be using IBM Bluemix here as I have recent experience of it, but nearly all of the concepts will be similar for any other cloud environment.

Deploying Clusters

So now we have Docker images deployed into Bluemix. The next step is to deploy those images into a Kubernetes cluster: first we create the cluster, then we deploy into it. For what we are doing here we need a standard (paid) cluster.

Preparation

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

2. Choose a location. You can view the available locations with:

bx cs locations

3. Choose a machine type. You can view the machine types available in a location with:

bx cs machine-types dal10

4. Check for VLANs. You need to choose both a public and a private VLAN for a standard cluster. You can list them with:

bx cs vlans dal10

If you need to create them, initialise the SoftLayer CLI first:

bx sl init

When prompted, just select Single Sign On (option 2).

You should then be logged in and able to create VLANs:

bx sl vlan create -t public -d dal10 -s 8 -n waynepublic

Note: Your Bluemix account needs permission to create VLANs; if you don’t have it, you will be told and will need to contact support. I believe you get one free public VLAN.

Creating a Cluster

1. Create a cluster:

Assuming you have public and private VLANs you can create a kubernetes cluster:

bx cs cluster-create --location dal10 --machine-type u1c.2x4 --workers 2 --name wbcluster --private-vlan 1638423 --public-vlan 2106869

You *should* also be able to use the Bluemix UI to create clusters.

2. You may need to wait a little while for the cluster to be deployed. You can check the status of it using:

bx cs clusters

During the deployment you will likely receive various emails from Bluemix confirming infrastructure has been provisioned.

3. When the cluster has finished deploying (i.e. the state is no longer pending), set the new cluster as the current context:

bx cs cluster-config wbcluster

The important part of the output is the export statement; copy and paste it back into the terminal to point kubectl at the new cluster.
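The export will look something like the following; use the exact line printed for your cluster, as the path differs per user and cluster:

# example only - paste the line printed by bx cs cluster-config
export KUBECONFIG=/Users/<you>/.bluemix/plugins/container-service/clusters/wbcluster/kube-config-dal10-wbcluster.yml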

4. Now you can run kubectl commands. View the cluster config with:

kubectl config view

See the Kubernetes documentation for the full set of commands you can run; we will only be looking at a few key ones for now.

5. Clone (or download) the ForgeRock Kubernetes repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/fretes/browse

6. Navigate to the fretes directory:

cd /usr/local/DevOps/stash/fretes

7. We need to make a tweak to the fretes/helm/custom.yaml file and add the following:

storageClass: ibmc-file-bronze

This specifies the type of storage we want our deployment to use in Bluemix. If you were deploying to AWS or Azure you would need something similar.

8. From the same terminal window that you have setup kubectl, navigate to the fretes/helm/ directory and run:

helm init

This installs the Helm server-side component (Tiller) into the cluster, ready to process the Helm charts we are going to run.

9. Run the OpenAM Helm script, which will deploy instances of AM, backed by DJ, into our Kubernetes cluster:

/usr/local/DevOps/stash/fretes/helm/bin/openam.sh

This script will take a while, and again will trigger the provisioning of infrastructure, storage and other components, resulting in more emails from Bluemix. While it is running you will see progress output in the terminal.

If you have to re-deploy on subsequent occasions, the storage will not need to be re-provisioned and the whole process will be significantly faster. When it is all done, the script completes and returns you to the prompt.

10. Proxy the Kubernetes dashboard:

kubectl proxy

Navigate to http://127.0.0.1:8001/ui in a browser and you should see the kubernetes console!

Here you can see everything that has been deployed automatically using the helm script!

We have multiple instances of AM and DJ with storage deployed into Bluemix ready to configure!

In the next blog we will take a detailed look at the Kubernetes dashboard to understand exactly what we have done, but for now let’s take a quick look at one of our new AM instances.

11. Log in to AM:

Ctrl-C the proxy command and type the following:

bx cs workers wbcluster

Running this lists the workers in the cluster, along with the public IP each one is exposed on.

Note: There are defined ways of accessing applications in Kubernetes; typically you would use an Ingress or a load balancer rather than going directly to a worker’s public IP. We may look at these in later blogs.

As you probably know, AM expects a fully qualified domain name, so before we can log in we need to edit /etc/hosts and add an entry for openam.example.com pointing at one of the workers.
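Something along these lines, substituting the public IP of one of the workers listed above (the value shown is a placeholder):

# /etc/hosts entry - replace the placeholder with a worker's public IP
<worker-public-ip>   openam.example.com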

Then you can navigate to AM:

http://openam.example.com:30080/openam

You should be able to log in with amadmin/password!

Summary

So far in this series we have created docker containers with the ForgeRock components, uploaded these to Bluemix and run the orchestration helm script to actually deploy instances of these containers into a meaningful architecture. Not bad!

In the next blog we will take a detailed look at the kubernetes console and examine what has actually been deployed.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.