Dynamic Profiles in OpenAM 13

I recently had cause to play with 'Dynamic Profiles' in OpenAM v13.

If you don't know, 'Dynamic Profile' is a realm-based configuration setting that lets OpenAM dynamically create a user profile in the configured DataStore.  This can be useful in many circumstances where the authentication of a user takes place in a service other than the DataStore.  Examples include using OpenAM as a Social Media/OAuth2 client (allowing users to sign in with Facebook), or as a SAML Service Provider (allowing users to sign in with federated credentials).  Having authenticated the credentials, it might then be useful to maintain a profile of that user for use throughout their interactions with the services protected by OpenAM.

The specific scenario that triggered this investigation (and therefore this blog post!) was one where the user is authenticated against credentials held in Active Directory, but the user profile (DataStore) lives in a separate OpenDJ instance.

Now, before I go any further: it is entirely possible to make AD your DataStore, which keeps things simple.  However, there are many occasions where the schema changes needed in a directory to provide the full range of DataStore capabilities simply cannot be applied to an Active Directory due to business or security policy.  In that case it is necessary to configure a separate DataStore using, say, OpenDJ, while still allowing users to authenticate against AD with their AD credentials.

By default OpenAM configures a realm to ‘Require’ a profile entry in the DataStore for an authenticated user.  Therefore a user profile has to exist in the DataStore in order for authentication to complete.

Now you could provision a set of user profiles into the DataStore using something like OpenIDM to ensure the required profile exists.  Or you can set ‘Dynamic’ profiles in OpenAM.  This causes OpenAM to dynamically create a profile in the configured DataStore if one does not exist.
So, if we’re ‘dynamically’ creating profiles, what data does OpenAM use to populate the DataStore?   Good question…and one that this post is designed to answer!

First, the basics…

  • Authentication Chain
    As described, we want the user to authenticate using their AD credentials.  Therefore we need to define the Authentication Chain to contain an LDAP Authentication Module (or the AD module if you prefer).  Let's assume you can get this working (you may choose to set the profile option to 'Ignored' while you set this up, so that authentication can complete without the need for a profile at all).

  • DataStore
    We’ll be using OpenDJ as the DataStore.  Let’s assume you can setup OpenDJ and configure it as a DataStore.  The OpenAM documentation on Backstage provides guidance on this.

Ok, so now we need to tie these together.  The general idea is that a user logs in with their AD credentials using the Authentication Chain/Module.  OpenAM checks to see if there is an associated user profile record in the DataStore and creates it if not.

Let’s first of all consider the Authentication Module.  There are two key properties here:

  1. Attribute Used to Retrieve User Profile
  2. User Creation Attributes

Attribute Used to Retrieve User Profile

In my experience the description and helptext for this are not entirely accurate.  In the scenario I'm describing, OpenAM will retrieve the value of this attribute from the AD/LDAP.  It will then be placed into memory for use later on.  Let's call this 'AttributeX'.  Typically the attribute you want to retrieve is the unique identifier for the user, so it might be sAMAccountName or uid, for example.  We'll use the value in 'AttributeX' later when we search for and find the user profile in the DataStore.

User Creation Attributes

This is a list of attribute values to be retrieved from the AD/LDAP authentication source that we wish to pass through to the user profile creation step.
You can specify these attributes either as singletons, such as 'sn' or 'mail', or as pipe-separated maps, such as phoneNum|telephoneNumber.
Again, the description of the pipe mapping syntax is confusing.
In actual fact it should be read as:
OpenAM property|AD/LDAP property
i.e. the setting phoneNum|telephoneNumber would take the value of telephoneNumber from AD/LDAP and store it in an OpenAM property called phoneNum.  We can then use that OpenAM property later when we create the record in the DataStore.
Note that the singleton syntax such as 'sn' will essentially be interpreted as 'sn|sn'.
Also note that the list that appears here is explicit: if a property is not in this list then the DataStore will not be able to access it during creation.
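
For example, a User Creation Attributes list (the attribute names here are purely illustrative, not defaults) might look like:

sn
givenName
phoneNum|telephoneNumber

This would pull sn and givenName from the AD/LDAP into OpenAM properties of the same name, and pull telephoneNumber from the AD/LDAP into an OpenAM property called phoneNum.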

Ok, so now we know how to extract information from the AD/LDAP that we authenticate against and store it in OpenAM properties.  Now let’s look at the DataStore configuration.

There are a couple of things to look at here, but they’re mostly in the ‘User Configuration’ section:
1. LDAP Users Search Attribute
2. LDAP People Container Naming Attribute
3. LDAP People Container Value
4. Create User Attribute Mapping
and, in the Authentication Configuration section,
5. Authentication Naming Attribute

LDAP Users Search Attribute

This is the attribute that will contain the value of 'AttributeX', i.e. the unique key for the user.  Typically in OpenDJ this is 'uid', but it could be something else depending on your configuration.  For example, I want the DN of a person record to be something like:
cn=<userid>,cn=users,dc=example,dc=com
whereas the default might be:
uid=<userid>,ou=people,dc=example,dc=com
Therefore I need the search attribute to be ‘cn’ not ‘uid’.

LDAP People Container Naming Attribute

As you can see from my desired DN, the container for the ‘people’ records is ‘users’ named by the ‘cn’ property.  Hence the value I specify here is ‘cn’.  The default (for OpenDJ) is ‘ou’ here.

LDAP People Container Value

Again, from the desired DN you can see that I need to specify 'users' here, whereas the default is 'people'.

These settings are used to both find a user as well as set the DN when the user is dynamically created.
So, with my settings a user will be created thus:
cn=<userid>,cn=users,dc=example,dc=com

(The dc=example,dc=com section is defined as the 'LDAP Organization DN' elsewhere in the DataStore config, but you should already have that set up correctly!)

Create User Attribute Mapping

Now this is the interesting bit!  This is where the values we retrieved from the AD/LDAP Authentication Module and placed into OpenAM properties can be mapped to attributes in the DataStore.
By default the list contains two singletons: cn and sn

But the helptext says you must specify the list as ‘TargetAttributeName=SourceAttributeName’ which the defaults don’t follow.
Remember that ‘AttributeX’ we collected?  Well any singleton attribute in this list will take the value of AttributeX…if it was not explicitly defined in the Authentication Module ‘User Creation Attributes’ map.
i.e. if User Creation Attributes included ‘sn’ then the value of that would be used for the ‘sn’ value here.  If there was no mapping then ‘sn’ here would take the value of AttributeX.

This particular nuance allows you to ensure that attributes defined as 'mandatory' in the DataStore always have a value.  This avoids record creation errors due to missing data constraints defined in the DataStore.

Now, what happens if we use the 'TargetAttributeName=SourceAttributeName' format?  Well, in this case 'TargetAttributeName' refers to the name of the attribute in the DataStore, whereas 'SourceAttributeName' refers to the OpenAM property as populated by the User Creation Attributes setting of the Authentication Module.

Oh, and there’s one extra consideration here…
If the OpenAM property name (as defined in the Authentication Module's User Creation Attributes) exactly matches the name of a DataStore attribute then you don't need to specify it at all in this list!!

For example, say you want to take the value of 'givenName' from the AD/LDAP and place it in the attribute called givenName in the DataStore: all you need to do is specify 'givenName' in the Authentication Module's User Creation Attributes settings.  There is no need to explicitly define it in this list.

However, if you do define it in this list then you must define it as givenName=givenName.
If you were to just specify givenName then its value will be that of the mysterious 'AttributeX'.
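
To pull all of this together, here is a hypothetical end-to-end example (attribute names and values are purely illustrative).  Suppose the Authentication Module is configured with:

Attribute Used to Retrieve User Profile: sAMAccountName
User Creation Attributes: sn, givenName, phoneNum|telephoneNumber

and the DataStore is configured with:

LDAP Users Search Attribute: cn
Create User Attribute Mapping: cn, telephoneNumber=phoneNum

For an AD user with sAMAccountName=jbloggs, sn=Bloggs, givenName=Joe and telephoneNumber=01234 567890, the dynamically created profile would look roughly like:

dn: cn=jbloggs,cn=users,dc=example,dc=com
cn: jbloggs                      (singleton not listed in User Creation Attributes, so it takes the value of AttributeX)
sn: Bloggs                       (OpenAM property name matches the DataStore attribute, so it passes straight through)
givenName: Joe                   (likewise)
telephoneNumber: 01234 567890    (mapped from the OpenAM property phoneNum)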

and finally…

Authentication Naming Attribute

For my configuration I set this to ‘cn’.  This is a bit of conjecture, but I believe this configuration is used when the DataStore Authentication Module is used.  The DataStore Authn Module will take the username entered by the user on the login page and try to find a user record where the ‘cn’ equals it.
Now in my scenario this should never happen…I never intend the DataStore to be used as an authentication source…but, as they say, YMMV.

 

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

A Beginners Guide to OpenIDM – Part 1

Introducing OpenIDM

This is the first in a series of blogs aiming to demystify OpenIDM, the Identity Management component of the ForgeRock platform.

I have actually been really impressed with OpenIDM and how much you can accomplish with it in a short time. It is fair to say though that if you are used to more traditional IDM technologies such as Oracle Identity Manager then it can take a bit of time to get your head around how OpenIDM works and how to get things done.

In the first of this series of blogs I want to walk through a basic installation of OpenIDM and look at the architecture of the product and how everything fits together.

Overview

OpenIDM is primarily concerned with the following functionality:
  • Objects and relationships: Quickly modelling complex objects, schemas and the relationships between them, e.g. for users, devices and things and exposing them as RESTful resources.
  • Data Synchronization: Moving data to and from systems such as Active Directory, databases, web services and others. It makes use of connectors and mappings to:
    • Create and update users and accounts in target systems i.e. pushing data to target systems from OpenIDM.
    • Reconcile users and accounts from target systems i.e. pulling data into OpenIDM from target systems.
    • Move data about users, devices and things to and from any other system.
  • Workflow Engine: Automating processes such as request and approval of access to resources, and much more.
  • Self Service: Enabling end users to easily and securely register accounts, retrieve forgotten passwords and manage their profiles.
  • Task Scheduling: Automating certain processes to run periodically.
All of this is built upon a consistent set of REST APIs, with numerous hooks throughout the platform for scripting behaviors using Groovy or JavaScript.
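
As a small taste of that REST interface, the following call creates a managed user once OpenIDM is up and running (see Getting Started below). The port and credentials assume a default local install, and the attribute values are of course just examples:

curl --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --request POST \
     --data '{"userName":"wblacklock","givenName":"Wayne","sn":"Blacklock","mail":"wayne.blacklock@example.com","password":"Passw0rd1"}' \
     "http://localhost:8080/openidm/managed/user?_action=create"
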
OpenIDM also makes use of a data store into which it reads and writes:
  • Data for users, devices and things: e.g. actual user account data such as first_name=Wayne, last_name=Blacklock for all objects that OpenIDM is managing.
  • Linked account data: “Mirrored data” for the systems that OpenIDM has been integrated with. This enables you to view and manipulate all of a user's account data across all systems from OpenIDM.
  • Various pieces of state relating to workflow, scheduling and other functionality.
Finally, all of OpenIDM's config is stored as .json files locally per deployment.

Logical Architecture

The diagram below aims to give you a bit of an overview of how OpenIDM fits together. We will explore each major component in detail with worked examples over the next few months.

Getting Started

This blog series is intended to be a practical introduction to OpenIDM so the first thing we need to do is download and install it from here:
Note: For now we are going to use the embedded OpenIDM OrientDB database, rather than install an external database. The OrientDB database ships with OpenIDM and is ready to go right from the start; however, please note it is not suitable for production deployments. We will cover the usage of another database for enterprise deployments later in the series.
Download and unzip OpenIDM to a directory. Make sure you have Java installed, configured and available from the command line.
To start up OpenIDM simply type:

Linux:

 ./startup.sh
Windows:
 startup.bat
That's it! By default OpenIDM runs on port 8080. You can then navigate to the interfaces at:
http://localhost.localdomain.com:8080
http://localhost.localdomain.com:8080/admin

You’ll note both pages look similar, but one is for users and one is for admins.

The default username and password for the administrator is openidm-admin / openidm-admin.
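
If you prefer the command line, you can also verify that OpenIDM is up by calling the ping endpoint with those same default credentials (assuming the default port):

curl --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     "http://localhost:8080/openidm/info/ping"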

Log into the administrator interface; once you have logged in you should see the dashboard:

Over the rest of this series we will explore the functionality of OpenIDM in detail.


This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Blocking on Promises (Hard-learned lessons on asynchronous programming)

OpenIG is now 100% asynchronous! In other words, we’re using a lot of Promises. Recently, we faced a strange issue where a thread remained in the WAITING state, waiting for an HTTP response to come.

Here is the thread dump we got:

"I/O dispatcher 1" #13 prio=5 os_prio=31 tid=0x00007f8f930c3000 nid=0x5b03 in Object.wait() [0x000070000185d000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x000000076b155b80> (a org.forgerock.util.promise.PromiseImpl)
	at java.lang.Object.wait(Object.java:502)
	at org.forgerock.util.promise.PromiseImpl.await(PromiseImpl.java:618)
	- locked <0x000000076b155b80> (a org.forgerock.util.promise.PromiseImpl)
	at org.forgerock.util.promise.PromiseImpl.getOrThrow(PromiseImpl.java:144)

Ok, to tell the truth, the code was performing a blocking call on a Promise<Response>, so we got what we deserved, right? Well, that code has been around (in more or less the same form) for a long time, and, AFAIK, nobody had experienced a thread blockage issue.

Here is the code where the blocking call happened:

try {
  Promise<JsonValue, OAuth2ErrorException> promise = registration.getUserInfo(context, session);
  return promise.getOrThrow(); // < - - - - - - block here
} catch (OAuth2ErrorException e) {
  logger.error(...);
} catch (InterruptedException e) {
  logger.error(...);
}

Dead simple, isn't it?

The strangest thing happened when we engaged a timeout on the promise (using getOrThrow(10, SECONDS)). After the timeout expired, the Promise unblocked and we saw a real Response inside (with an associated SocketTimeoutException), just as if it had been there all along, but without the promise ever triggering its callbacks.

How could this be possible? We had a thread waiting for the result of another HTTP request, when the HTTP client library in use (Apache HttpAsyncClient in our case) is supposed to handle threads by itself (and correctly).

Well, we had to dig, but we found the key deep inside the HTTP library:

// Distribute new channels among the workers
final int i = Math.abs(this.currentWorker++ % this.workerCount);
this.dispatchers[i].addChannel(entry);

What is this code doing?

This code is called when an NIO event comes back into the HTTP library (such as the content of a response). The code basically selects one of the worker threads to be responsible for processing the response.

Is this wrong?

Depends on your point of view ;) Initially, I was thinking that it was plain wrong: this code doesn’t know if the thread is busy doing something else or blocked.

After a bit more thought, it's not that obvious: because responses are processed asynchronously, the request and response flows are clearly decoupled, so there is no easy way to know if the requestor thread is the same thread as the response thread.

So what happened?

The scenario is quite simple:

  • Create a CHF HttpClientHandler
  • Send the first HTTP request
  • When the response is there, trigger another HTTP call
  • See the blocked thread

In practice, you probably have to adjust the number of workers until you find a setting where the distribution function re-assigns the response to the requestor's thread. The easiest configuration is to use a single thread :)

Here is a code sample to reproduce the “issue”:

// Create an HTTP Client with a single thread
Options options = Options.defaultOptions()
                         .set(AsyncHttpClientProvider.OPTION_WORKER_THREADS, 1);
HttpClientHandler client = new HttpClientHandler(options);

// Perform a first request
Promise<Response, NeverThrowsException> main;
Request first = new Request().setMethod("GET").setUri("http://forgerock.org");
main = client.handle(new RootContext(), first)
             .then(value -> {
                 // Perform a second request on the thread used to receive the response
                 try {
                     Request second = new Request().setMethod("GET")
                                                   .setUri(URI.create("http://www.apache.org"));
                     return client.handle(new RootContext(), second)
                                  // and block here
                                  .getOrThrow(5, TimeUnit.SECONDS);
                 } catch (InterruptedException e) {
                     return newInternalServerError(e);
                 } catch (TimeoutException e) {
                     return newInternalServerError(e);
                 }
             });

// Get the response on the "main" thread
Response response = main.getOrThrow();
long length = response.getHeaders().get(ContentLengthHeader.class).getLength();
System.out.printf("response size: %d bytes%n", length);

Note that you can clone the sauthieg/blocking-on-promise GitHub repository if you want to play with that code by yourself.

The solution

Avoid the blocking call and use Promise with appropriately typed callbacks in every step of the processing.

Registering callbacks (ResultHandler, Function or AsyncFunction) instead of actively waiting for a result/failure prevents any form of thread blockage.

So now, the caller thread is not blocked. It will be available for its next task after all callbacks are registered on the promise.

Bad code example

try {
  Promise<JsonValue, OAuth2ErrorException> promise = registration.getUserInfo(context, session);
  JsonValue info = promise.getOrThrow(); // < - - - - - - block here
  return new Response(Status.OK).setEntity(info);
} catch (OAuth2ErrorException e) {
  return newInternalServerError(e);
} catch (InterruptedException e) {
  logger.error(...);
}

Good code example

return registration.getUserInfo(context, session)
                   .then((info) -> {
                     // process the result when it becomes available
                     return new Response(Status.OK).setEntity(info);
                   },
                   (e) -> {
                     // Convert exception
                     return newInternalServerError(e);
                   });

The conclusion

Never block any threads when you’re doing asynchronous processing.

The async programming model is designed to maximize use of the machine's resources, and it implicitly requires that there are no blocking calls on the stack. As there should be no threads blocked at any time, any thread can be selected to process a response. That explains why our HTTP library is not even trying to see if the elected thread is busy or not.

More pragmatically, when using our Promise API, you’ll know that you’re in trouble (and a potential victim of that threading issue) if you see code that uses one of the get() method variations on the Promise interface.

In OpenIG, this can be in any Filter / Handler that you write yourself, or in any Groovy script. So take a look at the code you execute in OpenIG: we make a point of writing 100% asynchronous / non-blocking code; what about you?

Exhaustive list of blocking methods in Promise

  • Promise.get() / Promise.get(long, TimeUnit)
  • Promise.getOrThrow() / Promise.getOrThrow(long, TimeUnit)
  • Promise.getOrThrowUninterruptibly() / Promise.getOrThrowUninterruptibly(long, TimeUnit)

Creating an internal CA and signed server certificates for OpenDJ using cfssl, keytool and openssl


Yes, that title is quite a mouthful, and mostly intended to get the Google juice if I need to find this entry again.

I spent a couple of hours figuring out the magical incantations, so thought I would document this here.

The problem: You want OpenDJ to use something other than the default self-signed certificate for SSL connections.   A "real" certificate signed by a CA (Certificate Authority) is expensive and a pain to procure and install.

The next best alternative is to create your own "internal" CA, and  have that CA sign certificates for your services.   In most cases, this is going to work fine for *internal* services that do not need to be trusted by a browser.

You might ask why this is better than just using self-signed certificates.  The idea is that you can import your CA certificate once into the truststore for your various clients, and thereafter those clients will trust any certificate presented that is signed by your CA.

For example, assume I have OpenDJ servers: server-1, server-2 and server-3.  Using only self-signed certificates, I will need to import the certs for each server (three in this case) into my client's truststore. If, instead, I use a CA, I need only import a single CA certificate. The OpenDJ server certificates will be trusted because they are signed by my CA.  Once you start to get a lot of services deployed, using self-signed certificates becomes super painful. Hopefully, that all makes sense...

Now how do you create all these certificates?  Using CloudFlare's open source  cfssl utility, Java keytool, and a little openssl.

I'll spare you the details, and point you to this shell script which you can edit for your environment:

Here is the gist:
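
The gist itself isn't reproduced here, but the overall flow looks roughly like the following sketch. File names, subjects and passwords are placeholders to adapt for your environment, and it assumes cfssl/cfssljson, openssl and keytool are on your path:

# 1. Create the internal CA (ca-csr.json describes the CA subject and key parameters)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# 2. Issue a server certificate signed by that CA (server-csr.json holds the server CN/SANs)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
    -profile=server server-csr.json | cfssljson -bare server

# 3. Bundle the server key and certificate into PKCS12 so keytool can import it
openssl pkcs12 -export -in server.pem -inkey server-key.pem -certfile ca.pem \
    -name server-cert -out server.p12 -passout pass:changeit

# 4. Import the bundle into a keystore for OpenDJ, and trust the CA in the truststore
keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 -srcstorepass changeit \
    -destkeystore keystore -deststorepass changeit
keytool -importcert -trustcacerts -alias internal-ca -file ca.pem \
    -keystore truststore -storepass changeit -noprompt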



How to read and write shared state in the OpenAM Scripted Module

If you’ve used OpenAM for a while, you will probably know that it has a concept of shared state; a map of values that can be passed from one authentication module to the next in an authentication chain. You can use the iplanet-am-auth-store-shared-state-enabled and iplanet-am-auth-shared-state-enabled keywords to direct modules to put credentials into shared state, or read the credentials from shared state and try to use them.
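
These keywords are set as options on the individual modules in the chain configuration. For example (exact placement depends on your chain), the module that captures the credentials would typically carry:

iplanet-am-auth-store-shared-state-enabled=true

while a later module that should try the shared credentials would carry:

iplanet-am-auth-shared-state-enabled=true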

If you have a scripted module in your OpenAM authentication chain,  you may want to pass credentials from the scripted module to other modules in the chain. Or you may want to access credentials that have been set in a preceding authentication module.

To read the username and password entered in a previous module in the authentication chain, you can use the following JavaScript in your server-side authentication script:

//get username and password from shared state
var someUserName = sharedState.get("javax.security.auth.login.name");
var somePassword = sharedState.get("javax.security.auth.login.password");

And to put a username and password into shared state:

//set the username and password for other authentication modules to use
sharedState.put("javax.security.auth.login.name", someUserName);
sharedState.put("javax.security.auth.login.password", somePassword);

This blog post was first published @ http://authntoz.blogspot.no/, included here with permission from the author.

OpenAM JavaScript Wrapper


A long time ago I started writing an example of a small web application that used the OpenAM REST APIs. While working on this task, I realised it was relatively easy to create wrappers around the ForgeRock APIs in any programming language. I decided to start with a JavaScript wrapper.

I am a bit rusty in this area, and since I was doing it only for fun and in my “idle” time, it took a while.  Anyway, here are the openam.js wrapper and the openamUtils.js utility library: https://github.com/ForgeRock/openamjs

This is a work in progress, and also a JavaScript coding exercise. Initially it leverages the Authentication and SSO APIs, but with a little help from the community it can be extended to cover the whole set of APIs, including Authorization, OAuth2, OIDC, UMA, STS, etc. It could also be a starting point for similar wrappers for the OpenIDM and OpenDJ REST APIs. It is NOT supported NOR endorsed by ForgeRock. If you feel it is useful, please contribute.

openam.js is a small library/wrapper of some of the REST APIs of OpenAM. The intention is to provide an easy way to integrate the calls in your Client JavaScript code without needing to implement the REST code yourself.

openamUtils.js is another wrapper to render configurable Login Buttons and Login Boxes. It uses openam.js and the CSS styles contained in the GitHub repository; of course you can adjust the CSS to your needs, but it should work nicely out of the box. This wrapper does not need jQuery, but you can combine it with any other JS UI framework. In the future I would like to create another library combined with Bootstrap.

Several examples are included together with the source code: https://github.com/ForgeRock/openamjs/tree/master/examples

Each example is a single web page.

Before trying the library and examples, be sure to configure your OpenAM to support CORS. See this blog entry for more info on CORS and OpenAM; hint: it involves modifying the web.xml of OpenAM, or configuring your web container (Tomcat, for example, has CORS support).

The documentation is available for both libraries here:

Here are two videos showing how to use the libraries.

  1. OpenAM Configuration of the instance to be used with the examples.

  2. And here is a video showing how the examples included in the GitHub code work:

Give the libraries a try and send us your feedback. Again you are more than welcome to contribute.

How to protect your OpenAM deployment against clickjacking

If you have ever seen a security report for one of your web applications, there is a good chance that you have already seen a big warning about clickjacking. Clickjacking is a kind of attack that essentially allows the attacker to trick a victim into performing an operation that they most likely didn't want to carry out. If you want to learn more about clickjacking then I would recommend having a read of this well detailed page.

The best way to protect against these attacks is actually rather simple: RFC 7034 describes the X-Frame-Options header that needs to be set on the HTTP responses for pages that you wish to prevent from being clickjacked. The X-Frame-Options header has three accepted values:

  • DENY: the browser should never display the contents of the requested content in a frame.
  • SAMEORIGIN: Only display the content in a frame if the enclosing page (or top-level browsing context, see the RFC) is in the same origin as the content itself.
  • ALLOW-FROM: Allows you to specify an origin from which it is allowed to display the contents of the requested resource.

How to configure OpenAM?

Since OpenAM 12.0.1 it is possible to utilize a built-in servlet filter to add arbitrary HTTP headers to responses. The configuration of the filter is quite simple: you just have to add the following snippets to web.xml (obeying the XML schema):

<filter>
  <filter-name>Clickjacking</filter-name>
  <filter-class>org.forgerock.openam.headers.SetHeadersFilter</filter-class>
  <init-param>
    <param-name>X-Frame-Options</param-name>
    <param-value>DENY</param-value>
  </init-param>
</filter>
...
<filter-mapping>
  <filter-name>Clickjacking</filter-name>
  <url-pattern>/XUI/*</url-pattern>
  <url-pattern>/UI/*</url-pattern>
  <url-pattern>/console/*</url-pattern>
  <url-pattern>/oauth2/authorize</url-pattern>
  <dispatcher>FORWARD</dispatcher>
  <dispatcher>REQUEST</dispatcher>
  <dispatcher>INCLUDE</dispatcher>
  <dispatcher>ERROR</dispatcher>
</filter-mapping>

The above url-patterns list is not an exhaustive list of resources that you may wish to protect, however it should serve as a good start. Alternatively you could just change the url-pattern to /* and then you only really need the REQUEST dispatcher in your filter mapping config.

Please keep in mind that there are lots of different ways to set the X-Frame-Options header for your deployment, so feel free to utilize those instead if needed.
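
For example, if a web server or reverse proxy such as Apache httpd sits in front of OpenAM, the same header could be added there instead. A rough sketch, assuming mod_headers is enabled and the deployment URI is /openam:

<Location /openam>
    Header always set X-Frame-Options "DENY"
</Location>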

It’s The Little Things – Part 1

Since I began working with the ForgeRock technology I have been impressed by how much you can do with it in a very short time, and from what I have seen this is really underpinned by a philosophy of developer friendliness. There are lots of small features throughout the platform that are solely there to make life for implementors and developers easier.

In this series I wanted to take the opportunity to call out some of these little things that can make a big difference to your day to day experience.

Simulated Attribute Mapping

One of my favourite features in OpenIDM. Say you need to create provisioning integration with a target system. More often than not you need to manipulate or transform source attributes to achieve this.

Below you can see I have created a mapping to Active Directory in OpenIDM. This is a fairly common requirement that comes up time and again. Among other attributes, when you create an Active Directory account you need to define a Distinguished Name (DN).

I have configured the following script to generate a DN:
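
The script itself appeared as a screenshot in the original post, but a minimal sketch of such an inline transform (the container and base DN are purely illustrative) would be a one-line JavaScript expression along these lines, where source refers to the OpenIDM object being synchronised:

"cn=" + source.userName + ",cn=Users,dc=example,dc=com"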

This is fairly simple: it just takes the userName of the user in OpenIDM and appends the rest of the desired DN as a fixed string. This is all fairly standard stuff. What is really useful is that I can simulate the output of my DN transformation before I actually provision any accounts. To do this you just need to select an existing OpenIDM user using the Sample Source feature:

You can now see what the target output will actually look like for a given user. This is a really handy timesaver if you need to write complex mappings, and it enables you to quickly get a feel for whether or not your transformation is correct before you have to go back and forth with failed provisioning operations against Active Directory.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

ForgeRock OpenAM 13 Installation & Configuration

I was asked if I could cut a quick video on the installation and configuration of ForgeRock OpenAM 13. I had done a similar video on an earlier version of OpenAM, and the procedure by and large remains the same, but I used this opportunity to get over my laziness. Here's the video:

This blog post was first published @ www.fedji.com, included here with permission.

Blockchain For Identity: Access Request Management

This is the first in a series of blogs that will start to look at some use cases for leveraging blockchain technology in the world of identity and access management.  I don't proclaim to be a BC expert, and there are several blogs better equipped to tackle that subject, but a good introductory text is the O'Reilly published "Blockchain: Blueprint for a New Economy".

I want to first look at access request management.  An age-old issue that has developed substantially over the last 30 years into several sub-industries within the IAM world, with specialist vendors, standards and methodologies.

In the Old Days

Embedded/Local Assertion Management

So this is a typical "standalone" model of access management.  An application manages both users and access control list information within its own boundary.  Each application needs a separate login and access control database. The subject is typically a person and the object an application with functions and processes.

Specialism & Economies of Scale

So whilst the first example is the starting point (and still exists in certain environments), specialism quickly occurred, with separate processes for identity assertion management and access control list management.



Externalised Identity & ACL Management

So this could be a typical enterprise web access management paradigm.  An identity provider generates a token or assertion, with a policy enforcement process acting as a gatekeeper down into the protected objects.  This works perfectly well for single-domain scenarios, where identity and resource data can be easily controlled.  Scaling, too, is not really a major issue here, as traditionally this approach would sit within the same LAN, for example.

So far so good.  But today, we are starting to see a much more federated and fragmented landscape. Organisations have complex supply chains, with partners, sub-companies and external users all requiring access to previously internal-only objects.  Employees, too, want to access resources in other domains and from as-a-service providers.


Federated Identities


This then creates a much more federated landscape.  Protocols such as SAML2 and OAuth2/OIDC allow identity data from trusted 3rd parties, not originating from the object's domain, to interact with those resources securely.

Again, from a scaling perspective this tends to work quite well.  The main external interactions tend to be at the identity layer, with access control information still sitting within the object's domain - albeit externalised from the resource itself.

The Mesh and Super-Federation

As the Internet of Things becomes the norm, the increased volume of both subjects and objects creates numerous challenges.  Firstly, the definition of both changes.  A subject will become not just a person, but also a thing and potentially another service.  An object will become not just an application, but an autonomous piece of data, an API or even another subject.  This then creates a multi-point set of interactions, with subjects accessing other subjects, APIs accessing APIs, things accessing APIs and so on.

Enter the Blockchain

So where does the blockchain fit into all this?  Well, the main characteristics that can be valuable in this sort of landscape would be the decentralised, append-only, globally accessible nature of a blockchain.  The blockchain could be used as an access request warehouse.  This warehouse could contain the output from the access request workflow process, such as this sample of pseudo-code:

{"sub":"1234-org2", "obj":"file.dat", "access":"granted", "iss":"tomorrow", "exp":"tomorrow+1", "issuingAuth":"org1", "added":"now"}

This is basic, but it would be hashed and cryptographically secured by a trusted access request manager.  That manager would have the necessary circle-of-trust relationships with the relevant identity and access control managers.

After each access request, an entry would be made to the chain.  Each object would then be able to make a query against the chain, to identify all corresponding entries that map to their object set, unionise all entries and work out the necessary access control result.  For example, this would contain all access granted and access denied results.


A Blockchain-Enabled Access Request Management Workflow

So What?

So we now have another system and process to manage?  Well, possibly, but this could provide a much more scalable and interoperable model with respect to all the access control decisions that would need to take place to allow an IoT- and API-enabled world.

Each object could have access to any BC-enabled node, so there would be massive fault tolerance and elastic scaling.  Each subject would simply present a self-contained assertion.  Today that could be a JWT or a token within a proof-of-possession framework.  They could collect that from any generator they choose.  Things like authentication and identity validation would not be altered.

Access request workflow management would be abstracted: the same asynchronous processes, approvals and trusted interactions would take place.  The blockchain would simply be an externalised, distributed, secure storage mechanism.

From a technology perspective I don't believe this framework exists, and I will be investigating a proof of concept in this area.

Blog originally posted at The Identity Cookbook