Kubernetes Namespaces and OpenAM

I have been conducting some experiments running the ForgeRock stack on Kubernetes. I recently stumbled on namespaces.

In a nutshell, Kubernetes (k8s) namespaces provide isolation between instances. The typical use case is to provide separate environments for dev, QA, production and so on.

I had an "Aha!" moment when it occurred to me that namespaces could also provide multi-tenancy on a k8 cluster. How might this work?

Let's create a two node OpenAM cluster using an external OpenDJ instance:

See https://github.com/ForgeRock/fretes for the samples used in this article.

kubectl create -f am-dj-idm/

The above command launches all the containers described in the given directory, wires them together (updating DNS records), and creates a load balancer on GCE.

If I look at my services:

 kubectl get service 

I see something like this:

NAME       LABELS          SELECTOR   IP(S) PORT(S) 
openam-svc name=openam-svc site=site1 80/TCP 

(note: I am eliding a bit of the output here for brevity)

That second IP for openam-svc is the external IP of the load balancer configured by Kubernetes. If you bring up this IP address you will see the OpenAM login page. 

Now, let's switch to another namespace, say "tenant1" (I previously created this namespace):

kubectl config use-context tenant1 

A kubectl get services should come back empty, as we have no services running yet in the tenant1 namespace. 
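For completeness, creating such a namespace and a matching kubectl context might look like the sketch below; the cluster and user names are assumptions copied from whatever your existing context uses:

```shell
# Create the tenant1 namespace (a one-off step).
kubectl create namespace tenant1

# Define a context pinned to that namespace, then switch to it.
# my-cluster and my-user are placeholders for your own cluster/user entries.
kubectl config set-context tenant1 --namespace=tenant1 \
    --cluster=my-cluster --user=my-user
kubectl config use-context tenant1
```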

So let's create some: 

kubectl create -f am-dj-idm/ 

This is the same command we ran before - but this time we are running against a different namespace.  

Looking at our services, we see:

NAME        LABELS            SELECTOR   IP(S) PORT(S) 
openam-svc  name=openam-svc   site=site1 80/TCP 

Pretty cool. We now have two OpenAM instances deployed, with complete isolation, on the same cluster, using only a handful of commands. 

Hopefully this gives you a sense of why I am so excited about Kubernetes.

Device Authorization using OAuth2 and OpenAM

This blog post was first published @ identityrelationshipmanagement.blogspot.co.uk, included here with permission.

IoT and smart device use cases often require a device to be authorized to act on behalf of a user. Common examples are smart TVs, home appliances or wearables that are powerful enough to communicate over HTTPS, and will often access services and APIs on the end user's behalf.

How can that be done securely, without sharing credentials? Well, OAuth2 can come to the rescue. Whilst not part of the ratified standard, several OAuth2 IETF drafts describe how this can be achieved using what's known as the "Device Flow". This flow leverages the same components as the other OAuth2 flows, with a few subtle differences.

Firstly, the device generally doesn't have a UI that can handle decent human interaction, such as logging in or authorizing a consent request. So the consenting aspect needs to be handled on a different device that does have standard UI capabilities. The concept is to have the device trigger a request, before passing the authorization process off to the end user on a different device: basically accessing a URL to "authorize and pair" the device.


From an OpenAM perspective, we create a standard OAuth2 (or OIDC) agent profile with the necessary client identifier and secret (or JWT configuration) and the necessary scope. The device starts the process by sending a POST request to the /oauth2/device/code endpoint, with arguments such as the scope, client ID and nonce in the URL. If the request is successful, the response is a JSON payload containing a verification URL, a device_code and a user_code.
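As a sketch, the initial request might look like the following; the host name, client ID, scope and nonce values are illustrative, and the exact parameter names should be checked against the OpenAM nightly build:

```shell
# Ask OpenAM for a device code and user code.
# openam.example.com, deviceClient and the scope are assumptions.
curl -X POST \
  "http://openam.example.com:8080/openam/oauth2/device/code" \
  -d "client_id=deviceClient" \
  -d "scope=profile" \
  -d "nonce=1234"

# The JSON response then carries the verification URL,
# the device_code and the user_code described above.
```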
The end user views the URL and code (or is perhaps notified via email or app) and, on a separate device, goes to the given URL to enter the code.
This triggers the standard OAuth2 consent screen – showing which scopes the device is trying to access.
Once approved, the end user dashboard in the OpenAM UI shows the authorization – which importantly can be revoked at any time by the end user to “detach” the device.


Once authorized, the device can then call the ../oauth2/device/token endpoint with the necessary client credentials and device_code, to receive the access and refresh token payload, or an OpenID Connect JWT as well.
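The token request could be sketched like this; again the host and client credentials are placeholders, and the device_code is whatever came back from the previous call:

```shell
# Poll the token endpoint with the device_code until the user approves.
# deviceClient/password are illustrative client credentials.
curl -X POST \
  "http://openam.example.com:8080/openam/oauth2/device/token" \
  -d "client_id=deviceClient" \
  -d "client_secret=password" \
  -d "device_code=..."
# On success, the response contains the access token
# (and refresh token) for the device to use.
```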



The device can then start accessing resources on the user's behalf, until the user revokes the bearer token.

NB – this OAuth2 flow is only available in the nightly OpenAM 13.0 build.

DeviceEmulator code that tests the flows is available here.

Managing Account Status Notification in ForgeRock OpenDJ

This blog post was first published @ www.fedji.com, included here with permission.

In a couple of blog posts published in the recent past, one on ForgeRock OpenAM and another on ForgeRock OpenIDM, we had a look at configuring e-mail services in those products. It would be grossly unfair not to touch upon the same topic in ForgeRock's directory services solution: OpenDJ.

Thanks to the following articles/documents:

ForgeRock Documentation on Managing Account Status Notification in OpenDJ
Mark Craig’s blog
Ludo’s Sketches

Thanks also to the authors of the following content for helping with the SMTP server configuration:

Ubuntu Forum Thread
ArchLinux Forum Thread
DigitalOcean Tutorial

And now, on to my ~15-minute video log on Managing Account Status Notification in OpenDJ.
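As a rough sketch of what the video covers, the OpenDJ side boils down to pointing the server at an SMTP relay and enabling an SMTP account status notification handler. The host names, credentials, handler name and template paths below are assumptions; check them against the dsconfig reference and the documentation linked above:

```shell
# Point OpenDJ at an SMTP relay (host:port are illustrative).
opendj/bin/dsconfig set-global-configuration-prop \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --set smtp-server:localhost:25 --no-prompt --trustAll

# Create and enable an SMTP handler that mails out on account events.
opendj/bin/dsconfig create-account-status-notification-handler \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --handler-name "SMTP Handler" --type smtp \
  --set enabled:true \
  --set message-subject:"account-disabled:Your account has been disabled" \
  --set message-template-file:"account-disabled:config/messages/account-disabled.template" \
  --set send-message-without-end-user-address:true \
  --no-prompt --trustAll
```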


ForgeRock at Grenoble Ekiden…

Last Sunday, the ForgeRock team participated in the Ekiden, a marathon organized in Grenoble in favor of various charities, including the "Papillons de Charcot" (literally Charcot's butterflies, helping people with Charcot's disease).

The race is a marathon, but run as a relay of 6 runners covering 5km, 10km, 5km, 10km, 5km and 7.195km respectively.

The ForgeRock team did great and finished in 3 h 28 min 6 s. Congrats to the team, and we're looking forward to next year's edition, to see if we can register 2 teams.


Filed under: General Tagged: charity, ekiden, ForgeRock, grenoble, running, team

Sending Emails from ForgeRock OpenIDM

This blog post was first published @ www.fedji.com, included here with permission.

No one wants to stay logged in to the user interface of a provisioning tool, waiting for approval requests to flood into their queue in order to take appropriate action. We have other things to do in life, and for matters that require our attention we all expect notifications, don't we? Without doubt, one very common and easy channel for notification is email, which some consider to be the first and the last killer app of the Internet. Just like many other configurations, setting up outbound email in ForgeRock OpenIDM is a walk in the park. If you have just over 5 minutes to spare, watch how it's done in the video below:
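For reference, the outbound email settings end up in OpenIDM's external email configuration file. A minimal sketch might look like the following; the property names follow OpenIDM's external.email configuration, but the host, port and credentials are placeholders:

```shell
# Write a minimal outbound email configuration for OpenIDM.
# Assumes we are inside the openidm installation directory;
# all values below are placeholders.
mkdir -p conf
cat > conf/external.email.json <<'EOF'
{
    "host" : "smtp.example.com",
    "port" : "587",
    "username" : "openidm@example.com",
    "password" : "password",
    "mail.smtp.auth" : "true",
    "mail.smtp.starttls.enable" : "true"
}
EOF
```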


Deeply indebted to this section of the ForgeRock Documentation on OpenIDM.

ForgeRock OpenIG as SAML 2.0 Service Provider

This blog post was first published @ www.fedji.com, included here with permission.

This post is based on the ForgeRock Documentation on configuring OpenIG as a SAML 2.0 Service Provider. The video log embedded just below this write-up is a visual representation of what is already in the document mentioned above. For a detailed study, please read through the documentation and then sit back, relax and watch the demonstration in the screen-cast below.

SAML 2.0, as you probably know, is a standard for exchanging information between a SAML authority (an Identity Provider, a.k.a. IDP) and a SAML consumer (a Service Provider, a.k.a. SP). In the demonstration that follows, ForgeRock OpenAM acts as the Identity Provider and ForgeRock OpenIG acts as the Service Provider. So the authentication of a user is done by the IDP, who will then send a piece of information (an Assertion) to the SP that could contain attributes from the user's profile in the IDP data store. The SP will then use the information thus obtained to take further action (like giving the user access).

There are two ways of getting this done:
(i) SP initiated SSO
(ii) IDP initiated SSO

In simple words, in SP initiated SSO the user contacts the Service Provider, who in turn gets in touch with the Identity Provider, who validates the user's credentials and then sends a piece of information (an Assertion), possibly containing user attributes, to the Service Provider. In IDP initiated SSO, the IDP authenticates the user and then sends an unsolicited message (an Assertion) to the SP, who takes further action (like giving the user access).

The following two illustrations might give a rough idea:


In our story (above in the illustration and below in the video), a user authenticates against ForgeRock OpenAM (the IDP), which then sends an assertion (containing the user's mail and employeenumber attributes) to ForgeRock OpenIG (the Service Provider), which applies filters (like extracting the attributes from the assertion and posting them as username and password) to post the user's credentials to a protected application (Minimal HTTP Server).

If you’ve got a vague picture on what’s discussed above, I’d believe it’ll be clearer after watching the video below:


How to set up an OAuth2 provider with ssoadm

The ssoadm command line tool is quite a powerful ally if you want to set up your OpenAM environment without performing any operation through the user interface, i.e. when you just want to script everything.

Whilst the tool itself allows you to do almost anything with the OpenAM configuration, finding the right set of commands for a given task is not always that straightforward… In today's example we will try to figure out which commands to use to set up an OAuth2 provider.

When using the Common Tasks wizard to set up an OAuth2 Provider we can see that there are essentially two things that the wizard does for us:

  • Configure a realm level service for the OAuth2 Provider
  • Set up a policy to control access to the /oauth2/authorize endpoint

Setting up a realm level service

Well, we know that we are looking for something that sets up a service, so let’s see what command could help us:

$ openam/bin/ssoadm | grep -i service -B 1
    add-svc-realm
        Add service to a realm.

Well, we want to add a service to a realm, so add-svc-realm sounds like a good fit. Let’s see what parameters it has:

$ openam/bin/ssoadm add-svc-realm
    --realm, -e
        Name of realm.

    --servicename, -s
        Service Name.

    --attributevalues, -a
        Attribute values e.g. homeaddress=here.

    --datafile, -D
        Name of file that contains attribute values data.

Alright, so the realm is straightforward, but what should we use for servicename and datafile?

Each service in OpenAM has a service schema that describes what kind of attributes the service can contain, and with what syntaxes/formats/etc. Since all the default service definitions can be found in the ~/openam/config/xml directory, let's have a look around and see if there is anything OAuth2 related:

$ ls ~/openam/config/xml
... OAuth2Provider.xml ...

After opening up OAuth2Provider.xml we can find the service name under the name attribute of the Service element (happens to be OAuth2Provider).

So the next question is what attributes should you use to populate the service? All the attributes are defined in the very same service definition XML file, so it’s not too difficult to figure out what to do now:

$ echo "forgerock-oauth2-provider-authorization-code-lifetime=60
forgerock-oauth2-provider-access-token-lifetime=600" > attrs.txt
$ openam/bin/ssoadm add-svc-realm -e / -s OAuth2Provider -u amadmin -f .pass -D attrs.txt
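To double-check that the service was indeed added, ssoadm can list the services assigned to a realm (the show-realm-svcs subcommand name comes from the ssoadm help output):

```shell
# List the services assigned to the top level realm;
# OAuth2Provider should now appear in the output.
openam/bin/ssoadm show-realm-svcs -e / -u amadmin -f .pass
```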

Creating a policy

Creating a policy is a bit more complex since 12.0.0 and the introduction of XACML policies, but let’s see what we can do about that.

Using ssoadm create-xacml

The XACML XML format is not really pleasant on the eyes, so I would say you are better off creating the policy using the policy editor first and then exporting it in XACML format, so that you can automate this flow.
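Assuming the policy was created by hand in the policy editor first, the export step could be sketched as follows; list-xacml dumps the realm's policies in XACML format, and narrowing the output to a single policy may need extra options:

```shell
# Export the realm's policies as XACML so the set-up can be replayed later.
openam/bin/ssoadm list-xacml -e / -u amadmin -f .pass > oauth2-policy.xml
```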

Once you have the policy in XACML format, the ssoadm command itself would look something like this:

$ openam/bin/ssoadm create-xacml -e / -X oauth2-policy.xml -u amadmin -f .pass

Using the REST API

The policy REST endpoints introduced with 12.0.0 are probably a lot more friendly for creating policies, so let’s see how to do that:

$ echo '{
   "name" : "OAuth2ProviderPolicy",
   "active" : true,
   "description" : "",
   "applicationName" : "iPlanetAMWebAgentService",
   "resources" : [
      "http://openam.example.com:8080/openam/oauth2/authorize?*"
   ],
   "actionValues" : {
      "POST" : true,
      "GET" : true
   },
   "subject" : {
      "type" : "AuthenticatedUsers"
   }
}' > policy.json
$ curl -v -H "iplanetdirectorypro: AQIC5wM...*" -H "Content-Type: application/json" -d @policy.json http://openam.example.com:8080/openam/json/policies?_action=create
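To verify that the policy landed, the same REST endpoint can be queried back (the _queryFilter parameter is part of the OpenAM 12 policy endpoints; the session token is the same one used above):

```shell
# List existing policies; OAuth2ProviderPolicy should be in the result set.
curl -H "iplanetdirectorypro: AQIC5wM...*" \
  "http://openam.example.com:8080/openam/json/policies?_queryFilter=true"
```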

Hope you found this useful.

Identity Summit London 2015

Last week, ForgeRock hosted the London edition of the Identity Summit 2015. It was a great event, very successful, with over 200 attendees discussing identity, digital transformation and IoT.

My coworker Markus has published a detailed recap of the Summit, so I leave you with my usual picture gallery. Enjoy!

There will be two other Identity Summits in November this year: one in Amsterdam on November 5, one in Düsseldorf on November 12. If you haven't registered yet, there's still time!

Filed under: Identity Tagged: conference, digitalTransformation, europe, ForgeRock, identity, Identity Relationship Management, iot, London