Kubernetes: Why won’t that deployment roll?

Kubernetes Deployments provide a declarative way of managing replica sets and pods.  A deployment specifies how many pods to run as part of a replica set, where to place pods, how to scale them and how to manage their availability.

Deployments are also used to perform rolling updates to a service. They can support a number of different update strategies such as blue/green and canary deployments.

The examples provided in the Kubernetes documentation explain how to trigger a rolling update by changing a docker image.  For example, suppose you edit the Docker image tag by using the kubectl edit deployment/my-app command, changing the image tag from acme/my-app:v0.2.3 to acme/my-app:v0.3.0.

When Kubernetes sees that the image tag has changed, a rolling update is triggered according to the declared strategy. This results in the pods with the 0.2.3 image being taken down and replaced with pods running the 0.3.0 image.  This is done in a rolling fashion so that the service remains available during the transition.
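The "declared strategy" is part of the Deployment spec itself. As a minimal sketch (illustrative values, not taken from any particular chart), a rolling update that never drops below the desired replica count looks something like this:

spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1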

If your application is packaged up as a Helm chart, the helm upgrade command can be used to trigger a rollout:

helm upgrade my-release my-chart

Under the covers, Helm is just applying any updates in your chart to the deployment and sending them to Kubernetes. You can achieve the same thing yourself by applying updates using kubectl.
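For example, assuming your Deployment and its container are both named my-app (adjust to match your manifest), either of the following achieves the same rollout without Helm:

# bump the image tag in place
kubectl set image deployment/my-app my-app=acme/my-app:v0.3.0

# or edit the manifest and re-apply it
kubectl apply -f my-app-deployment.yaml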

Now let’s suppose that you want to trigger a rolling update even if the docker image has not changed. A likely scenario is that you want to apply an update to your application’s configuration.

As an example, the ForgeRock Identity Gateway (IG) Helm chart pulls its configuration from a git repository. If the configuration changes in git, we’d like to roll out a new deployment.

My first attempt at this was to perform a helm upgrade on the release, updating the ConfigMap in the chart with the new git branch for the release. Our Helm chart uses the ConfigMap to set an environment variable to the git branch (or commit) that we want to check out:

kind: ConfigMap
data:
  GIT_CHECKOUT_BRANCH: test

After editing the ConfigMap, and doing a helm upgrade, nothing terribly exciting happened.  My deployment did not “roll” as expected with the new git configuration.

As it turns out, Kubernetes needs to see a change in the pod's spec.template before it triggers a new rollout. Changing the image tag is one way to do that, but any change to the template will work. As I discovered, changes to a ConfigMap *do not* trigger deployment updates.

The solution here is to move the git branch variable out of the ConfigMap and into the pod's spec.template in the deployment object:

spec:
  initContainers:
  - name: git-init
    env:
    - name: GIT_CHECKOUT_BRANCH
      value: "{{ .Values.global.git.branch }}"

When the IG helm chart is updated (we supply a new value to the template variable above), the template’s spec changes, and Kubernetes will roll out the new deployment.
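As an aside, another common Helm pattern for this problem (not the one used here) is to hash the ConfigMap into a pod-template annotation, so that any configuration change also changes the template and therefore triggers a rollout. A sketch, assuming the ConfigMap template lives at templates/configmap.yaml in the chart:

spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}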

Here is what it looks like when we put it all together (you can check out the full IG chart here).

# install IG using helm. The git branch defaults to "master"
$ helm install openig
NAME:   hissing-yak

$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
openig    1         1         1            1           4m

# Take a look at our deployment history. So far, there is only one revision:
$ kubectl rollout history deployment
deployments "openig"
REVISION  CHANGE-CAUSE
1

# Let's deploy a different configuration. We can use a commit hash here instead
# of a branch name:
$ helm upgrade --set global.git.branch=d0562faec4b1621ad53b852aa5ee61f1072575cc \
    hissing-yak openig

# Have a look at our deployment. We now see a new revision:
$ kubectl rollout history deploy
deployments "openig"
REVISION  CHANGE-CAUSE
1
2

# Look at the deployment events. You should see the original
# replica set being scaled down and a new set being scaled up:
$ kubectl describe deployment
Events:
 Type    Reason             Age   From                   Message
 ----    ------             ----  ----                   -------
 ---> Time=19m ago. This is the original RS
 Normal  ScalingReplicaSet  19m   deployment-controller  Scaled up replica set openig-6f6575cdfd to 1
 ---> Time=3m ago. The new RS being brought up
 Normal  ScalingReplicaSet  3m    deployment-controller  Scaled up replica set openig-7b5788659c to 1
 ---> Time=3m ago. The old RS being scaled down
 Normal  ScalingReplicaSet  3m    deployment-controller  Scaled down replica set openig-6f6575cdfd to 0

One of the neat things we can do with deployments is roll back to a previous revision. For example, if we have an error in our configuration, and want to restore the previous release:

$ kubectl rollout undo deployment/openig
deployment "openig" rolled back

$ kubectl describe deployment
Normal  DeploymentRollback  1m  deployment-controller  Rolled back deployment "openig" to revision 1

# We can see the old pod is being terminated and the new one has started:
 $ kubectl get pod
 NAME                      READY     STATUS        RESTARTS   AGE
 openig-6f6575cdfd-tvmgj   1/1       Running       0          28s
 openig-7b5788659c-h7kcb   0/1       Terminating   0          8m

And that my friends, is how we roll.

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Introduction to ForgeRock DevOps – Part 3 – Deploying Clusters

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:

http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html
http://identity-implementation.blogspot.co.uk/2017/05/introduction-to-forgerock-devops-part-2.html

I will be using IBM Bluemix here as I have recent experience of it but nearly all of the concepts will be similar for any other cloud environment.

Deploying Clusters

So now we have docker images deployed into Bluemix. The next step is to actually deploy the images into a Kubernetes cluster. Firstly we need to create a cluster, then we need to actually deploy into it. For what we are doing here we need a standard paid cluster.

Preparation

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

2. Choose a location. You can view locations with:

bx cs locations

3. Choose a machine type. You can view machine types for each location with:

bx cs machine-types dal10

4. Check for VLANs. You need to choose both a public and a private VLAN for a standard cluster. It should look something like this:

bx cs vlans dal10

If you need to create them… init the SoftLayer CLI first:

bx sl init

Just select Single Sign On: (2)

You should be logged in and able to create vlans:

bx sl vlan create -t public -d dal10 -s 8 -n waynepublic

Note: Your Bluemix account needs permission to create VLANs, if you don’t have this you need to contact support. You’ll be told if this is the case. You should get one free public VLAN I believe.

Creating a Cluster

1. Create a cluster:

Assuming you have public and private VLANs you can create a kubernetes cluster:

bx cs cluster-create --location dal10 --machine-type u1c.2x4 --workers 2 --name wbcluster --private-vlan 1638423 --public-vlan 2106869

You *should* also be able to use the Bluemix UI to create clusters.

2. You may need to wait a little while for the cluster to be deployed. You can check the status of it using:

bx cs clusters

During the deployment you will likely receive various emails from Bluemix confirming infrastructure has been provisioned.

3. When the cluster has finished deploying (the state is no longer pending), set the new cluster as the current context:

bx cs cluster-config wbcluster

The export statement in the output is the important bit: copy and paste it back into the terminal to configure the environment for kubectl to run.
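The statement will look something like the following (the exact path depends on your account, cluster name and zone):

export KUBECONFIG=/Users/<you>/.bluemix/plugins/container-service/clusters/wbcluster/kube-config-dal10-wbcluster.yml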

4. Now you can run kubectl commands, view the cluster config with:

kubectl config view

See the Kubernetes documentation for the full set of commands you can run; we will only be looking at a few key ones for now.

5. Clone (or download) the ForgeRock Kubernetes repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/fretes/browse

6. Navigate to the fretes directory:

cd /usr/local/DevOps/stash/fretes

 

7. We need to make a tweak to the fretes/helm/custom.yaml file and add the following:

storageClass: ibmc-file-bronze

This specifies the type of storage we want our deployment to use in Bluemix. If you are on AWS or Azure you may need something similar.
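For reference, if you ever run the charts by hand rather than through the wrapper script used below, a values file like this is passed to Helm with the -f flag (a general Helm convention, not something specific to fretes):

helm install -f custom.yaml <chart-directory>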

8. From the same terminal window that you have setup kubectl, navigate to the fretes/helm/ directory and run:

helm init

This will install the helm component into the cluster ready to process the helm scripts we are going to run.

9. Run the OpenAM helm script, which will deploy instances of AM, backed by DJ, into our Kubernetes cluster:

/usr/local/DevOps/stash/fretes/helm/bin/openam.sh

This script will take a while and again will trigger the provisioning of infrastructure, storage and other components resulting in emails from Bluemix. While this is happening you should see something like this:

If you have to re-deploy on subsequent occasions, the storage will not need to be re-provisioned and the whole process will be significantly faster. When it is all done you should see something like this:

10. Proxy the kube dash:

kubectl proxy

Navigate to http://127.0.0.1:8001/ui in a browser and you should see the kubernetes console!

Here you can see everything that has been deployed automatically using the helm script!

We have multiple instances of AM and DJ with storage deployed into Bluemix ready to configure!

In the next blog we will take a detailed look at the Kubernetes dashboard to understand exactly what we have done, but for now let's take a quick look at one of our new AM instances.

11. Log in to AM:

Ctrl-C the proxy command and type the following:

bx cs workers wbcluster

You can see a list of our workers above, and the IP they have been exposed publicly on.

Note: There are defined ways of accessing applications using Kubernetes, typically you would use an ingress or a load balancer and not go directly using the public IP. We may look at these in later blogs.

As you probably know, AM expects a fully qualified domain name so before we can log in we need to edit /etc/hosts and add the following:
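The entry maps the public IP of one of your workers (from the bx cs workers output above) to the AM FQDN; the IP below is just a placeholder:

169.xx.xx.xx   openam.example.com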

Then you can navigate to AM:

http://openam.example.com:30080/openam

You should be able to login with amadmin/password!

Summary

So far in this series we have created docker containers with the ForgeRock components, uploaded these to Bluemix and run the orchestration helm script to actually deploy instances of these containers into a meaningful architecture. Not bad!

In the next blog we will take a detailed look at the kubernetes console and examine what has actually been deployed.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Storing ForgeRock Directory Services server keys on the Nitrokey HSM

The Nitrokey HSM provides a PKCS#11 hardware security module in the form of a USB key. The design is based on open hardware and open software.

This is a low cost option to familiarize yourself with an actual hardware HSM, and to test your procedures. With it, you can demonstrate that ForgeRock Directory Services servers can in fact use the HSM as a key store.

In addition to the documentation that you can access through https://www.nitrokey.com/start, see https://raymii.org/s/articles/Get_Started_With_The_Nitrokey_HSM.html for a helpful introduction.

The current article demonstrates generating and storing keys and certificates on the Nitrokey HSM, and then using the keys to protect DS server communications. It was tested with a build from the current master branch. Thanks to Fabio Pistolesi and others for debugging advice.

This article does not describe how to install the prerequisite tools and libraries to work with the Nitrokey HSM on your system. The introduction mentioned above briefly describes installation on a couple of Linux distributions, but the software itself seems to be cross-platform.

When you first plug the Nitrokey HSM into a USB slot, it has PINs but no keys. The following examples examine the mostly empty Nitrokey HSM when initially plugged in:

# List devices:
$ opensc-tool --list-readers
# Detected readers (pcsc)
Nr. Card Features Name
0 Yes Nitrokey Nitrokey HSM (010000000000000000000000) 00 00

# List slots, where you notice that the Nitrokey HSM is in slot 0 on this system:
$ pkcs11-tool --list-slots
Available slots:
Slot 0 (0x0): Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
 token label : SmartCard-HSM (UserPIN)
 token manufacturer : www.CardContact.de
 token model : PKCS#15 emulated
 token flags : rng, login required, PIN initialized, token initialized
 hardware version : 24.13
 firmware version : 2.5
 serial num : DENK0100751

The following example initializes the Nitrokey HSM, using the default SO PIN and a user PIN of 648219:

$ sc-hsm-tool --initialize --so-pin 3537363231383830 --pin 648219
Using reader with a card: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
Version : 2.5
Config options :
 User PIN reset with SO-PIN enabled
SO-PIN tries left : 15
User PIN tries left : 3

The following example tests the PIN on the otherwise empty Nitrokey HSM:

$ pkcs11-tool --test --login --pin 648219
Using slot 0 with a present token (0x0)
C_SeedRandom() and C_GenerateRandom():
 seeding (C_SeedRandom) not supported
 seems to be OK
Digests:
 all 4 digest functions seem to work
 MD5: OK
 SHA-1: OK
 RIPEMD160: OK
Signatures (currently only RSA signatures)
Signatures: no private key found in this slot
Verify (currently only for RSA):
 No private key found for testing
Unwrap: not implemented
Decryption (RSA)
No errors

The following example generates a key pair on the Nitrokey HSM:

$ pkcs11-tool \
 --module opensc-pkcs11.so \
 --keypairgen --key-type rsa:2048 \
 --id 10 --label server-cert \
 --login --pin 648219
Using slot 0 with a present token (0x0)
Key pair generated:
Private Key Object; RSA
  label: server-cert
  ID: 10
  Usage: decrypt, sign, unwrap
Public Key Object; RSA 2048 bits
  label: server-cert
  ID: 10
  Usage: encrypt, verify, wrap

The following examples show what is on the Nitrokey HSM:

$ pkcs15-tool --dump
Using reader with a card: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
PKCS#15 Card [SmartCard-HSM]:
 Version : 0
 Serial number : DENK0100751
 Manufacturer ID: www.CardContact.de
 Flags :

PIN [UserPIN]
 Object Flags : [0x3], private, modifiable
 ID : 01
 Flags : [0x812], local, initialized, exchangeRefData
 Length : min_len:6, max_len:15, stored_len:0
 Pad char : 0x00
 Reference : 129 (0x81)
 Type : ascii-numeric
 Path : e82b0601040181c31f0201::
 Tries left : 3

PIN [SOPIN]
 Object Flags : [0x1], private
 ID : 02
 Flags : [0x9A], local, unblock-disabled, initialized, soPin
 Length : min_len:16, max_len:16, stored_len:0
 Pad char : 0x00
 Reference : 136 (0x88)
 Type : bcd
 Path : e82b0601040181c31f0201::
 Tries left : 15

Private RSA Key [server-cert]
 Object Flags : [0x3], private, modifiable
 Usage : [0x2E], decrypt, sign, signRecover, unwrap
 Access Flags : [0x1D], sensitive, alwaysSensitive, neverExtract, local
 ModLength : 2048
 Key ref : 1 (0x1)
 Native : yes
 Auth ID : 01
 ID : 10
 MD:guid : b4212884-6800-34d5-4866-11748bd12289

Public RSA Key [server-cert]
 Object Flags : [0x0]
 Usage : [0x51], encrypt, wrap, verify
 Access Flags : [0x2], extract
 ModLength : 2048
 Key ref : 0 (0x0)
 Native : no
 ID : 10
 DirectValue : 

$ pkcs15-tool --read-public-key 10
Using reader with a card: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlxvdwL6PyRnOs58L0X8d
2z8/WcgA/beoR+p08nymN8KZ4KlWKUo93AKMcFBUW8Bl8zFC80P9ZlNIXM8NSmPr
cBR9Nmpi0nUQDgfTi8vIU51tD84UcYetxX9rSHbh+CKqUmmSk6f7JPIyT6RonrOo
QJQyFmIi4oV9/d0Op8WVCbL7omYaPFwYbdUPetM1MfVyLNpkhzVdvZJE0F46hXF8
Sspqjh4f9KkJWdozIOND8ZTFvxP5Cs1y/kvvuhfjWVAtii52E4LKXRr53SA5Spl2
v1oNu5sqoaEd/SNxjj/52iH6zeGm61I7wbcIgvcHCI5CKONKceSL3PkIYzHeJMu2
SQIDAQAB
-----END PUBLIC KEY-----

The following example self-signs a public key certificate and writes it to the Nitrokey HSM. The example uses openssl, and configures an engine to use the Nitrokey HSM, which implements PKCS#11. The configuration for the OpenSSL engine is stored in a file called hsm.conf. On an Ubuntu 17.04 laptop, the PKCS#11 library installed alongside the tools is /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so as shown below:

$ cat hsm.conf
# PKCS11 engine config
openssl_conf = openssl_def

[openssl_def]
engines = engine_section

[req]
distinguished_name = req_distinguished_name

[req_distinguished_name]
# empty.

[engine_section]
pkcs11 = pkcs11_section

[pkcs11_section]
engine_id = pkcs11
dynamic_path = /usr/lib/x86_64-linux-gnu/openssl-1.0.2/engines/libpkcs11.so
MODULE_PATH = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
PIN = 648219
init = 0

# Check the engine configuration. In this case, the PKCS11 engine loads fine:
$ OPENSSL_CONF=./hsm.conf openssl engine -tt -c
(rdrand) Intel RDRAND engine
 [RAND]
     [ available ]
(dynamic) Dynamic engine loading support
     [ unavailable ]
(pkcs11) pkcs11 engine
 [RSA]
     [ available ]

# Create a self-signed certificate and write it to server-cert.pem.
# Notice that the key is identified using slot-id:key-id:
$ OPENSSL_CONF=./hsm.conf openssl req \
 -engine pkcs11 -keyform engine -new -key 0:10 \
 -nodes -days 3560 -x509 -sha256 -out "server-cert.pem" \
 -subj "/C=FR/O=Example Corp/CN=opendj.example.com"
engine "pkcs11" set.
No private keys found.

The openssl command prints a message, “No private keys found.” Yet, it still returns 0 (success) and writes the certificate file:

$ more server-cert.pem
-----BEGIN CERTIFICATE-----
MIIC/jCCAeYCCQD1SEBmUy8aCzANBgkqhkiG9w0BAQsFADBBMQswCQYDVQQGEwJG
UjEVMBMGA1UECgwMRXhhbXBsZSBDb3JwMRswGQYDVQQDDBJvcGVuZGouZXhhbXBs
ZS5jb20wHhcNMTcwODE0MTEzNzA0WhcNMjcwNTE0MTEzNzA0WjBBMQswCQYDVQQG
EwJGUjEVMBMGA1UECgwMRXhhbXBsZSBDb3JwMRswGQYDVQQDDBJvcGVuZGouZXhh
bXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCXG93Avo/J
Gc6znwvRfx3bPz9ZyAD9t6hH6nTyfKY3wpngqVYpSj3cAoxwUFRbwGXzMULzQ/1m
U0hczw1KY+twFH02amLSdRAOB9OLy8hTnW0PzhRxh63Ff2tIduH4IqpSaZKTp/sk
8jJPpGies6hAlDIWYiLihX393Q6nxZUJsvuiZho8XBht1Q960zUx9XIs2mSHNV29
kkTQXjqFcXxKymqOHh/0qQlZ2jMg40PxlMW/E/kKzXL+S++6F+NZUC2KLnYTgspd
GvndIDlKmXa/Wg27myqhoR39I3GOP/naIfrN4abrUjvBtwiC9wcIjkIo40px5Ivc
+QhjMd4ky7ZJAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAIX0hQadHtv1c0P7ObpF
sDnIWMRVtq0s+NcXvMEwKhLHHvEHrId9ZM3Ywn0P3CFd+WXMWMNSVz51cPn6SKPS
pdN9CVq6B26cFrvzWrsD06ohP6jCkXEBSshlE/k71FcnukBuJHNzj8O6JMwfyWeE
xT53WIvgz9t02B/ObZSYFlUNX+WApPCbILTHazEzYws3AN0hZPmv4Ng1Vt71nNiT
EZskTxvKsmuEuG5E8j79zO0TYvOrGCISzS3PFRrl7G83vNaSyzBIhTYF2Ilt2g7B
jnNc1/k8R/TXwskJR8gL7EFZyakQ6xUiboFDf6PWa4KMLJVNX5HsVGyLvP9FiVkY
82A=
-----END CERTIFICATE-----

The following example writes the certificate to the Nitrokey HSM:

# Transform the certificate to binary format:
$ openssl x509 -in server-cert.pem -out server-cert.der -outform der

# Write the binary format to the Nitrokey HSM, with the label (aka alias) "server-cert":
$ pkcs11-tool \
 --module opensc-pkcs11.so \
 --login --pin 648219 \
 --write-object server-cert.der --type cert \
 --id 10 --label server-cert
Using slot 0 with a present token (0x0)
Created certificate:
Certificate Object, type = X.509 cert
  label:      Certificate
  ID:         10

With the keys and certificate loaded on the Nitrokey HSM, prepare to use it with Java programs. If the Java environment is configured to access the HSM, then you can just use it. In testing, however, where you are trying the HSM, and the Java environment is not configured to use it, you can specify the configuration:

# Edit a configuration file for Java programs to access the Nitrokey HSM:
$ cat /path/to/hsm.conf
name = NitrokeyHSM
description = SunPKCS11 with Nitrokey HSM
library = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
slot = 0

# Verify that the Java keytool command can read the certificate on the Nitrokey HSM:
$ keytool \
  -list \
  -keystore NONE \
  -storetype PKCS11 \
  -storepass 648219 \
  -providerClass sun.security.pkcs11.SunPKCS11 \
  -providerArg /path/to/hsm.conf

Keystore type: PKCS11
Keystore provider: SunPKCS11-NitrokeyHSM

Your keystore contains 1 entry

server-cert, PrivateKeyEntry,
Certificate fingerprint (SHA1): B9:A2:88:5F:69:8E:C6:FB:C2:29:BF:F8:39:51:F6:CC:5A:0C:CC:10

A ForgeRock Directory Services server needs to access the configuration indirectly, as there is no setup parameter to specify the HSM configuration file. Add your own settings to extend the Java environment configuration as in the following example:

$ cat /path/to/java.security
# Security provider for accessing Nitrokey HSM:
security.provider.10=sun.security.pkcs11.SunPKCS11 /path/to/hsm.conf

# Unzip OpenDJ server files and edit the configuration before running setup:
$ cd /path/to && unzip 

# Set the Java args to provide access to the Nitrokey HSM.
# Make opendj/template/config/java.properties writable, and edit.
# This allows the OpenDJ server to start as needed:
$ grep java.security /path/to/opendj/template/config/java.properties
start-ds.java-args=-server -Djava.security.properties=/path/to/java.security

# Set up the server:
$ OPENDJ_JAVA_ARGS="-Djava.security.properties=/path/to/java.security" \
 /path/to/opendj/setup \
 directory-server \
 --rootUserDN "cn=Directory Manager" \
 --rootUserPassword password \
 --hostname opendj.example.com \
 --ldapPort 1389 \
 --certNickname server-cert \
 --usePkcs11keyStore \
 --keyStorePassword 648219 \
 --enableStartTLS \
 --ldapsPort 1636 \
 --httpsPort 8443 \
 --adminConnectorPort 4444 \
 --baseDN dc=example,dc=com \
 --ldifFile /path/to/Example.ldif \
 --acceptLicense

To debug, you can set security options such as the following:

OPENDJ_JAVA_ARGS="-Djava.security.debug=sunpkcs11,pkcs11 -Djava.security.properties=/path/to/java.security"

The following example shows an LDAP search that uses StartTLS to secure the connection:

$ /path/to/opendj/bin/ldapsearch --port 1389 --useStartTLS --baseDN dc=example,dc=com "(uid=bjensen)" cn

Server Certificate:

User DN  : CN=opendj.example.com, O=Example Corp, C=FR
Validity : From 'Mon Aug 14 13:37:04 CEST 2017'
             To 'Fri May 14 13:37:04 CEST 2027'
Issuer   : CN=opendj.example.com, O=Example Corp, C=FR



Do you trust this server certificate?

  1) No
  2) Yes, for this session only
  3) Yes, also add it to a truststore
  4) View certificate details

Enter choice: [2]: 4


[
[
  Version: V1
  Subject: CN=opendj.example.com, O=Example Corp, C=FR
  Signature Algorithm: SHA256withRSA, OID = 1.2.840.113549.1.1.11

  Key:  Sun RSA public key, 2048 bits
modulus:
19075725396235933137769598662661614197862047561628746980441589981485944705910796672312984856468967795133561692335016063740885234669000544938180872617609018349362382746691431903457463096067521727428890407876216335060234859298584617093111442598717549413985534234195585205628275977771336192817217401466821950358077667360760303781781546092776529804134165206111430903307470063770954498312408782707671718644473532565867636087296875111917369665456339790081809729622515754260638402122026793085096606980136589008904235094266835122846140853242190316629669042978441585862504373498978113550866427439699045924980942028978028000841
  public exponent: 65537
  Validity: [From: Mon Aug 14 13:37:04 CEST 2017,
               To: Fri May 14 13:37:04 CEST 2027]
  Issuer: CN=opendj.example.com, O=Example Corp, C=FR
  SerialNumber: [    f5484066 532f1a0b]

]
  Algorithm: [SHA256withRSA]
  Signature:
0000: 85 F4 85 06 9D 1E DB F5   73 43 FB 39 BA 45 B0 39  ........sC.9.E.9
0010: C8 58 C4 55 B6 AD 2C F8   D7 17 BC C1 30 2A 12 C7  .X.U..,.....0*..
0020: 1E F1 07 AC 87 7D 64 CD   D8 C2 7D 0F DC 21 5D F9  ......d......!].
0030: 65 CC 58 C3 52 57 3E 75   70 F9 FA 48 A3 D2 A5 D3  e.X.RW>up..H....
0040: 7D 09 5A BA 07 6E 9C 16   BB F3 5A BB 03 D3 AA 21  ..Z..n....Z....!
0050: 3F A8 C2 91 71 01 4A C8   65 13 F9 3B D4 57 27 BA  ?...q.J.e..;.W'.
0060: 40 6E 24 73 73 8F C3 BA   24 CC 1F C9 67 84 C5 3E  @n$ss...$...g..>
0070: 77 58 8B E0 CF DB 74 D8   1F CE 6D 94 98 16 55 0D  wX....t...m...U.
0080: 5F E5 80 A4 F0 9B 20 B4   C7 6B 31 33 63 0B 37 00  _..... ..k13c.7.
0090: DD 21 64 F9 AF E0 D8 35   56 DE F5 9C D8 93 11 9B  .!d....5V.......
00A0: 24 4F 1B CA B2 6B 84 B8   6E 44 F2 3E FD CC ED 13  $O...k..nD.>....
00B0: 62 F3 AB 18 22 12 CD 2D   CF 15 1A E5 EC 6F 37 BC  b..."..-.....o7.
00C0: D6 92 CB 30 48 85 36 05   D8 89 6D DA 0E C1 8E 73  ...0H.6...m....s
00D0: 5C D7 F9 3C 47 F4 D7 C2   C9 09 47 C8 0B EC 41 59  ..

When using an HSM with a ForgeRock Directory Services server, keep in mind the following caveats:

  • Each time the server needs to access the keys, it accesses the HSM. You can see this with the Nitrokey HSM because it flashes a small red LED when accessed. Depending on the HSM, this could significantly impact performance.
  • The key manager provider supports PKCS#11 as shown. The trust manager provider implementation does not, however, support PKCS#11 at the time of this writing, though there is an RFE for that (OPENDJ-4191).
  • The Crypto Manager stores symmetric keys for encryption using the cn=admin data backend, and the symmetric keys cannot currently be stored in a PKCS#11 module.

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

Open Banking, PSD2 & Screen Scraping

Open Banking & PSD2

PSD2 is due to come into force September 2018, meanwhile the UK is forging ahead with Open Banking which is due to come into force even earlier in January 2018. Both regulations are all about cracking open banking APIs to increase digital competitiveness and improve consumer choice.

The 9 biggest UK banks have been collaborating in the form of the Open Banking Working Group (OBWG) to define the solution for Open Banking in the UK. After much discussion and deliberation the OBWG has determined that Open Banking should be achieved through the use of open standards and specifically the use of the OAuth 2.0 family of standards.

OAuth 2.0

OAuth 2.0 is something I use just about every day and it’s something that all of us have probably used at one time or another though we may not have realised it. OAuth is a standard designed for Delegated Authorization.

We commonly refer to Authentication as proving who you are, whereas Authorization determines what you are allowed to do. Authentication is typically achieved with some sort of username and password (and ideally a second factor). Authorization is generally concerned with the policy and permissions that apply once I have authenticated.

Effectively, Delegated Authorization is a way to permit someone to do something on my behalf. A very common example can be seen with Instagram and Twitter, when a user gives Instagram permission to post to their Twitter feed.

With OAuth 2.0, Instagram will redirect you to Twitter, you will authenticate with Twitter and consent to Instagram posting to your Twitter account. Twitter will then share an authorization code with Instagram that Instagram will exchange for an access token. This access token can only be used to post to your Twitter account; Instagram, for example, could not use it to delete your tweets.

In a world without OAuth 2.0, Instagram would have to know your Twitter username and password in order to post a tweet to your Twitter account. This would allow them to post to your Twitter feed, but it would also enable them to do anything else that you could do after authenticating. More crucially, your username and password have now been shared with a third party whom you have to trust. Propagating passwords is never a good thing for security and is really the very definition of a security anti-pattern. This is how screen scraping works.

Screen Scraping

Up to now there has been no standards-based mechanism for sharing account data. There are services at the moment that can aggregate your financial data in one place. These services are convenient for many, however to use them you have to share your credentials with them. So, if you want the aggregator to be able to report on your bank account, you need to share your banking credentials with the aggregator. You have to trust a third party with your banking credentials.

Putting aside the issues of trust, massive credential leaks are now a weekly occurrence and the more you share your credentials around the more vulnerable those credentials become.

Open Banking aims to put an end to this by using secure, trusted open standards such as OAuth. As a security professional and as a customer I feel very strongly that this is the right way to do Open Banking and it ensures I remain in control of my account data and enables me to revoke third party access at any time.

Right now there is much debate and discussion as to whether screen scraping should be permitted in both PSD2 and Open Banking. There are a number of groups who are right now petitioning for it to remain a valid approach for data sharing under the new regulations.

I can appreciate the difficulties many organisations may face in transitioning from screen scraping to an OAuth 2.0 based model but I cannot in good conscience support the screen scraping approach and I suspect that if it were to be adopted as an acceptable interim solution that it would persist for the longer term and undermine the benefits that an API driven approach to Open Banking would bring.

The Kantara Initiative is a non-profit organisation dedicated to advancing digital identity and data privacy. If you feel as strongly as I do about this, please visit the Kantara Initiative and sign the pledge against screen scraping:

https://kantarainitiative.org/psd2statement/

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Faster docs

One of the things you have asked for is to see large documents load faster on the ForgeRock BackStage docs site. We recently switched from publishing HTML documentation through the BackStage single-page app to publishing separate, static HTML with JavaScript to provide BackStage features.

This allows browsers to use progressive rendering, and start laying out the page before everything has been loaded and styled. The result is that large documents feel faster in your browser.

If you have bookmarks to published HTML, notice that we have dropped the per-chapter view of published docs. Each document is now a single HTML page. So instead of a link to /docs/product/version/book/chapter#section, target /docs/product/version/book/#section.

Also notice that we have consolidated documentation sets to make information easier to find, with only one set per major or minor release. Generally this means that you only have to read one set of release notes, no matter what maintenance version you have right now.

The latest docs are the ones for version 5 of the platform:

We still publish all the same docs as before, including docs for software that is beyond the end of its service life. Please check out the updated site. Open issues there for any problems you notice.

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

ForgeRock Self-Service Custom Stage

Introduction

A while ago I blogged an article describing how to add custom stages to the ForgeRock IDM self-service config.  At the time I used the sample custom stage available from the ForgeRock Commons Self-Service code base.  I left it as a task for the reader to build their own stage!  However, I recently had cause to build a custom stage for a proof of concept I was working on.

It’s for IDM v5 and I’ve detailed the steps here.

Business Logic

The requirement for the stage was to validate that a registering user had ownership of the provided phone number.  The phone number could be either a mobile or a landline.  The approach taken was to use Twilio (a 3rd party) to send out either an SMS to a mobile, or text-to-speech to a landline.  The content of the message is a code based on HOTP.

Get the code for the module

https://stash.forgerock.org/users/andrew.potter/repos/twilio-stage/browse

Building the module

Follow the instructions in README.md

After deploying the .jar file you must restart IDM for the bundle to be correctly recognised.

The module is targeted for IDMv5.  It uses the maven repositories to get the binary dependencies.
See this article in order to access the ForgeRock ‘private-releases’ maven repo:
https://backstage.forgerock.com/knowledge/kb/article/a74096897

It also uses appropriate pom.xml directives to ensure the final .jar file is packaged as an OSGi bundle so that it can be dropped into IDM

Technical details

The code consists of a few files.  The first two in this list are the key files for any stage.  They implement the necessary interfaces for a stage.  The remaining files are the specific business logic for this stage.

  • TwilioStageConfig.java.  This class manages reading the configuration data from the configuration file.  It simply represents each configuration item for the stage as properties of the class.
  • TwilioStage.java.  This is the main orchestration file for the stage.  It copes with both registration and password reset scenarios.  It manages the 'state' of the flow within this stage and generates the appropriate callbacks to the user, but relies on the other classes to do the real code management work.  If you want to learn about the way a 'stage' works then this is the file to consider in detail.
  • HOTPAlgorithm.java.  This is taken from the OATH Initiative work and is unchanged by me.  It is a java class to generate a code based on the HOTP algorithm.
  • TwilioService.java. This class manages the process of sending the code.  It generates the code then decides whether to send it using SMS or TTS.  (In the UK, all mobile phone numbers start 07… so it’s very simple logic for my purpose!)  This class also provides a method to validate the code entered by the user.
  • TwilioUtil.java.  The class provides the utility functions that interact directly with the Twilio APIs for sending either an SMS or TTS

 

Configuration

There are also two sample config files for registration and password reset.  You should include the JSON section relating to this class in your self-service configuration files for IDM.
For example:

        {
            "class" : "org.forgerock.selfservice.twilio.TwilioStageConfig",
            "codeValidityDuration" : "6000",
            "codeLength" : "5",
            "controlUrl" : "http://twimlets.com/message?Message%5B0%5D=Hello%20Please%20enter%20the%20following%20one%20time%20code",
            "fromPhone" : "+441412803033",
            "accountSid" : "<Enter accountSid>",
            "tokenId" : "<Enter tokenId>",
            "telephoneField" : "telephoneNumber",
            "skipSend" : false
        },

Most configuration items should be self explanatory.  However, the ‘skipSend’ option is worthy of special note.  This, when true, will cause the stage to avoid calling the Twilio APIs and instead return the code as part of the callback data.  This means that if you’re using the OOTB UI then the ‘placeholder’ HTML attribute of the input box will tell you the code to enter.  This is really useful for testing this stage if you don’t have access to a Twilio account as this also ignores the Twilio account specific configuration items.

Of course, now you need to deploy it as per my previous article!

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

Save greenbacks on Google Container Engine using autoscaling and preemptible VMs

There is an awesome new feature on Google Container Engine (GKE) that lets you combine autoscaling, node pools and preemptible VMs to save big $!

The basic idea is to create a small cluster with an inexpensive VM type that will run 7×24. This primary node can be used for critical services that should not be rescheduled to another node. A good example would be a Jenkins master server. Here is an example of how to create the cluster:

gcloud alpha container clusters create $CLUSTER \
  --network "default" --num-nodes 1 \
  --machine-type ${small} --zone $ZONE \
  --disk-size 50

Now here is the money saver trick:  A second node pool is added to the cluster. This node pool is configured to auto-scale from one node up to a maximum. This additional node pool uses preemptible VMs. These are VMs that can be taken away at any time if Google needs the capacity, but in exchange you get dirt cheap images. For example, running a 4 core VM with 15GB of RAM for a month comes in under $30.

This second pool is perfect for containers that can survive a restart or migration to a new node. Jenkins slaves would be a good candidate.

Here is an example of adding the node pool to the cluster you created above:

gcloud alpha container node-pools create $NODEPOOL --cluster $CLUSTER --zone $ZONE \
    --machine-type ${medium} --preemptible --disk-size 50 \
    --enable-autoscaling --min-nodes=1 --max-nodes=4

That node pool will scale down to a single VM if the cluster is not busy, and scale up to a maximum of 4 nodes.

If your VM gets preempted (and it will at least once every 24 hours),  the pods running on that node will be rescheduled onto a new node created by the auto-scaler.

Container Engine assigns a label to preemptible nodes which you can use for scheduling. For example, to ensure your Jenkins master does not get put on a preemptible node, you can add the following to your pod spec:

apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    !cloud.google.com/gke-preemptible
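If the selector shorthand above does not work on your Kubernetes version, an explicit alternative is a node-affinity rule that requires the preemptible label to be absent. This is a sketch, assuming the cloud.google.com/gke-preemptible label that Container Engine puts on preemptible nodes:

apiVersion: v1
kind: Pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-preemptible
            operator: DoesNotExist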

See https://cloud.google.com/container-engine/docs/preemptible-vm for the details.

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Introduction to ForgeRock DevOps – Part 2 – Building Docker Containers

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:
http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html

I will be using IBM Bluemix here as I have recent experience of it but nearly all of the concepts will be similar for any other cloud environment.

Building Docker Containers

In this blog we are going to build our docker containers that will contain the ForgeRock platform components, tag them and upload them to the Bluemix registry.

Prerequisites

Install all of the below:

  • Docker: https://www.docker.com
    Used to build, tag and upload Docker containers.
  • Bluemix CLI: http://clis.ng.bluemix.net/ui/home.html
    Used to deploy and configure the Bluemix environment.
  • CloudFoundry CLI: https://github.com/cloudfoundry/cli
    Bluemix dependency.
  • Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
    Used to deploy and manage Kubernetes clusters.

Initial Configuration

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

Note we are using the US instance of Bluemix here as it has support for Kubernetes in beta.

When prompted to select an account, just type 1. Once you are logged in successfully, you can interact with the Bluemix environment just as you might if you were logged in via a browser.

2. Add the Bluemix Docker components:

bx plugin repo-add Bluemix https://plugins.ng.bluemix.net
bx plugin install container-service -r Bluemix
bx plugin install IBM-Containers -r Bluemix

Check they have installed:

bx plugin list

3. Clone (or download) the ForgeRock Docker Repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/docker/browse

4. Download the ForgeRock AM and DS component binaries from backstage:

https://backstage.forgerock.com/downloads

5. Unzip and copy ForgeRock binaries into the Docker build directories:

AM:

unzip AM-5.0.0.zip
cp openam/AM-5.0.0.war /usr/local/DevOps/stash/docker/openam/

DJ:

mv DS-5.0.0.zip /usr/local/DevOps/stash/docker/openam/opendj.zip
cp openam/AM-5.0.0.war /usr/local/DevOps/stash/docker/openam/

Amster:

mv Amster-5.0.0.zip /usr/local/DevOps/stash/docker/amster/amster.zip

For those unfamiliar, Amster is our new RESTful configuration tool for AM in the 5 platform, replacing SSOADM with a far more DevOps-friendly tool. I'll be covering it in a future blog.

Build Containers

We are going to create three containers: AM, DJ & Amster:

1. Build and Tag OpenAM container ( don’t forget the . ) :

cd /usr/local/DevOps/stash/docker/openam
docker build -t wayneblacklockfr/openam .

Note wayneblacklockfr/openam is just a name to tag the container with locally, replace it with whatever you like but keep the /openam.

All being well you will see something like the below:

Congratulations, you have built your first ForgeRock container!

Now we need to get the namespace for tagging, this is usually your username but check using:

bx ic namespace-get

Now let's tag it ready for upload to Bluemix. Use the container ID output at the end of the build process and your namespace:

docker tag d7e1700cfadd registry.ng.bluemix.net/wayneblacklock/openam:14.0.0

Repeat the process for Amster and DS.

2. Build and Tag Amster container:

cd /usr/local/DevOps/stash/docker/amster
docker build -t wayneblacklockfr/amster .
docker tag 54bf5bd46bf1 registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

3. Build and Tag DS container:

cd /usr/local/DevOps/stash/docker/opendj
docker build -t wayneblacklockfr/opendj .
docker tag 19b8a6f4af73 registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. View the containers:

You can take a look at what we have built with: docker images

Push Containers

Finally we want to push our containers up to the Bluemix registry.

1. Login again:

bx login -a https://api.ng.bluemix.net

2. Initiate the Bluemix container service, this may take a moment:

bx ic init

Ignore Option 1 & Option 2, we are not doing either.

3. Push your Docker images up to Bluemix:

docker push registry.ng.bluemix.net/wayneblacklock/openam:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. Confirm your images have been uploaded:

bx ic images

If you login to the Bluemix webapp you should be able to see your containers in the catalog:

Next Time

We will take a look at actually deploying a Kubernetes cluster and everything we have to do to ready our containers for deployment.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Extending OpenAM HOTP module to display OTP delivery details

OpenAM provides an HOTP authentication module which can send an OTP to the user's email address and/or telephone number. By default, OpenAM doesn't display the user's email address and/or telephone number while sending this OTP.

Solution

Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5
One solution is to extend the out-of-the-box OpenAM HOTP module:
  • Extend the HOTP auth module (openam-auth-hotp).
  • Update the below property in the extended amAuthHOTP.properties: send.success=Please enter your One Time Password sent at
  • Extend HOTPService appropriately to retrieve user profile details.
  • Change the extended HOTP module code as per below (both for auto send and on request):

substituteHeader(START_STATE, bundle.getString("send.success") + <Get User contact details from HOTPService>);

Deploy

Register service and module (Note that for OpenAM v12 use amAuthHOTPExt-12.xml) :
$ ./ssoadm create-svc --adminid amadmin --password-file /tmp/pwd.txt --xmlfile ~/softwares/amAuthHOTPExt.xml
$ ./ssoadm register-auth-module --adminid amadmin --password-file /tmp/pwd.txt --authmodule com.sun.identity.authentication.modules.hotp.HOTPExt

UnRegister service and module (in case module needs to be uninstalled) : 
$ ./ssoadm unregister-auth-module --adminid amadmin --password-file /tmp/pwd.txt --authmodule com.sun.identity.authentication.modules.hotp.HOTPExt
$ ./ssoadm delete-svc --adminid amadmin --password-file /tmp/pwd.txt -s sunAMAuthHOTPExtService
  • Configure HOTPExt module with required SMTP server. Enable both SMS and Email.
  • Create a chain(otpChain) with (LDAP:Required, HOTPExt:Required). Set this chain as default for “Organization Authentication”
  • Restart OpenAM
  • Invoke HOTP module and appropriate message is displayed on screen with user’s email address and/or telephone number:

 

This blog post was first published @ theinfinitelooper.blogspot.com, included here with permission.

OpenAM SP SAML Attribute Mapper extension for updating profile attributes

OpenAM can act as both SP and IdP for SAML webSSO flows. OpenAM also provides the ability to dynamically create user profiles.

When OpenAM is acting as a SAML SP and dynamic user profile creation is enabled, if the user profile doesn't exist on OpenAM then OpenAM dynamically creates it from the attributes in the SAML assertion.
The problem comes when the user profile is updated on the IdP side: subsequent SAML webSSO flows don't propagate these changes to the OpenAM SP side. More details here: OPENAM-8340

Solution

Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5

One solution is to extend the OpenAM SP attribute mapper. This extension can simply check whether the user profile exists in the OpenAM SP and update any modified or new attributes in the OpenAM datastore. Some tips for this implementation:

  1. Extend DefaultSPAttributeMapper and override getAttributes()
  2. Get datastore provider from SAML2Utils.getDataStoreProvider()
  3. Check if user exists: dataStoreProvider.isUserExists(userID)
  4. Get existing user attributes: dataStoreProvider.getAttributes()
  5. Compare attributes in SAML assertion with existing user attributes.
  6. Finally persist any new and updated attributes: dataStoreProvider.setAttributes()

Deploy

  • Compile and deploy this extension in OpenAM under  (OpenAM-Tomcat)/webapps/openam/WEB-INF/lib
  • Change SAML attribute setting in OpenAM. Navigate to Federation > Entity Providers > (SP Hosted Entity) > Assertion Processing. Specify ‘org.forgerock.openam.saml2.plugins.examples.UpdateDynamicUserSPAttMapper’ under Attribute Mapper.
  • Restart OpenAM
  • And we are good to go! Any changes in user profile attributes in SAML assertion will now be persisted in OpenAM datastore.

Note that ideally attributes between different sources should be synced by using some tool like OpenIDM 

See Also

Get code: https://github.com/CharanMann/OpenAM-SAMLSP-updateDynamicUser
OpenAM User Profile settings: https://backstage.forgerock.com/docs/openam/13.5/admin-guide#auth-core-realm-attributes
OpenAM SAML configuration: https://backstage.forgerock.com/docs/openam/13.5/admin-guide#chap-federation

This blog post was first published @ theinfinitelooper.blogspot.com, included here with permission.