OpenIDM Widgets Using the ELK Stack

OpenIDM 4.5 includes support for integration with ElasticSearch. For those not familiar, ElasticSearch is part of what is commonly referred to as the ELK stack:

E – ElasticSearch: Stores and indexes incoming data so that it can be quickly and RESTfully analysed and searched.
L – LogStash: A log processor that takes input data, puts it through a pipeline of operations and outputs it somewhere (usually into ElasticSearch).
K – Kibana: A visualisation engine that uses the data held in ElasticSearch to create visual charts of all different types, which can be embedded into web pages.

Together they enable you to create slick visualisations of data.


In the latest 4.5 release of OpenIDM two new features have been introduced:

  • An ElasticSearch audit handler, that can be used to push audit events into ElasticSearch.
  • Configurable dashboard widgets in the admin interface. There are a number of widgets available OOTB, one of which enables the embedding of Kibana visualisations.
This is incredibly powerful and relatively easy to set up, so I wanted to do a quick blog to explain how to do it.

In this blog we do not actually need LogStash as we will use the OpenIDM audit handler to push data to ElasticSearch. However, ordinarily you may well want to use LogStash too, to process and transform additional logging data (not just audit data).

Installing ElasticSearch & Kibana

The first step is to download and install ElasticSearch from https://www.elastic.co/downloads/elasticsearch

Simply unzip ElasticSearch somewhere and start it up:

cd /usr/local/env/elk/
unzip elasticsearch-2.4.1.zip
mv elasticsearch-2.4.1 elasticsearch

/usr/local/env/elk/elasticsearch/bin/elasticsearch

You can check that it is running by navigating to http://localhost.localdomain.com:9200, and you should see something like this:



Next, download and install Kibana from https://www.elastic.co/products/kibana

Again, just unzip it somewhere:

cd /usr/local/env/elk
tar -xvf kibana-4.6.1-darwin-x86_64.tar.gz

But before starting it up check the following file:


/kibana/config/kibana.yml

And ensure it is pointing at your elasticsearch instance:
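In Kibana 4.6 the relevant setting should be elasticsearch.url (very early 4.x builds named it elasticsearch_url instead); assuming the local ElasticSearch instance we just started, it should look something like:

elasticsearch.url: "http://localhost:9200"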

Then start it up:

/usr/local/env/elk/kibana/bin/kibana

You can navigate to Kibana on http://localhost.localdomain.com:5601


If you can see the Kibana splash page, then everything is on track.

Configuring OpenIDM 4.5 for ElasticSearch Audit

Log into OpenIDM as the administrator and navigate to Configure, System Preferences

Then select the Audit tab


And add an ElasticSearch event handler

Give your handler a name, configure all the events you are interested in, and make sure Enabled is selected


Note: In earlier builds there was a bug where to enable the handler you had to manually edit conf/audit.json. If you follow all the steps and something is not working, double check that the handler is set to "enabled" : true

Scroll down and configure the handler parameters as below:


host: your host
port: 9200 (unless you changed it)
indexName: audit
Press Submit, then on the next page remember to scroll down and press Save.
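For reference, this writes an ElasticSearch event handler entry into conf/audit.json. A trimmed sketch of what it should roughly look like with the values above (the exact structure may vary slightly between builds, so treat this as illustrative and compare against your own file):

{
    "class" : "org.forgerock.audit.handlers.elasticsearch.ElasticsearchAuditEventHandler",
    "config" : {
        "name" : "elasticsearch",
        "enabled" : true,
        "topics" : [ "access", "activity", "authentication", "config" ],
        "connection" : {
            "host" : "localhost",
            "port" : 9200,
            "useSSL" : false
        },
        "indexMapping" : {
            "indexName" : "audit"
        }
    }
}

This is also the place to double check the "enabled" flag mentioned in the note above.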

Now, put your elasticsearch process window somewhere you can see it, then log out of OpenIDM and back in and take a look. You should see some activity:


OpenIDM is now successfully pushing audit data to elasticsearch!

Processing Audit Data in Kibana

Now we need to configure Kibana to process and do things with all that juicy audit data from OpenIDM. Navigate to Kibana again and replace "logstash-*" with "audit", then hit Create:

We now have an audit index, and you should see that the OpenIDM audit attributes have already been catalogued:


And if you click Discover you should be able to browse the data!



Creating a Visualisation

Now with our data, we can quickly create visualisations. There are many blogs on this subject so I won’t go into detail here but I will create a quick visualisation to show failed authentications into OpenIDM over time.

The easiest way to do this is to generate such an event ( try logging in with the wrong password for example ) then take a look at the data.

We should see something like this:


And if we drill down further:


In Kibana's query language (based on Lucene: http://www.lucenetutorial.com/lucene-query-syntax.html), to select these events we want a search filter like:


eventName:"authentication" AND result:"FAILED"

Now we have just the failed authentication events (not the successful ones):



Hit Save and give the search a name:


Now go to Visualize and select Vertical bar chart:


And use our saved search:


As you can guess this isn't quite right, but if we configure the x-axis (a date histogram on the timestamp field) and press Apply (the play button)…

We get something that looks a bit more useful. Now if you try generating some failed authentications and refreshing the page you should see this update.

Save the visualisation.

Adding the Visualisation to OpenIDM

Now it is time to bring it all together. Select Share Visualisation

Examine the first URL ( the Embed URL ).

Copy the URL somewhere, keeping just the http://localhost… part and discarding the surrounding iframe tag, e.g.

http://localhost.localdomain.com:5601/app/kibana#/visualize/edit/Failed-IDM-Authn?embed=true&_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-1h,mode:quick,to:now))&_a=(filters:!(),linked:!t,query:(query_string:(query:'*')),uiState:(),vis:(aggs:!((id:'1',params:(),schema:metric,type:count),(id:'2',params:(customInterval:'2h',extended_bounds:(),field:timestamp,interval:m,min_doc_count:1),schema:segment,type:date_histogram)),listeners:(),params:(addLegend:!t,addTimeMarker:!f,addTooltip:!t,defaultYExtents:!f,mode:stacked,scale:linear,setYExtents:!f,shareYAxis:!t,times:!(),yAxis:()),title:'Failed%20IDM%20Authn',type:histogram))

Navigate to OpenIDM and log in as administrator, then Dashboards and New Dashboard:


Call it whatever you want and press Create:

Add a widget to your new dashboard:


Select Embed Webpage:

Then close the widgets menu (x):


Select Settings:


And enter the following values:


URL: The URL from the Kibana visualisation
Height: You may have to play with this a bit, but 200px is a good start.
Title: Whatever you like.

And press Save.

You should now see your visualisation in OpenIDM!


Summary

In around an hour, we have configured the ELK stack, integrated it with OpenIDM and created a (very) basic visualisation. This is really the tip of the iceberg of what you could do with OpenIDM and ELK, and I plan to explore this in much more detail in future blogs and share any useful visualisations I come up with.
 

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

A Beginners Guide to OpenIDM Part 5 – User Registration

Last time we configured mappings to create user objects in OpenIDM from a CSV file. This is commonly something you might do when populating a new identity system for the first time with your existing users, but what about getting new users into the system?

 

In this blog we will take a look at configuring OpenIDM's user self registration functionality and at how we can perform some post processing on new users. Note: as many of you will be aware, OpenIDM 4.5 has recently been released. For now I am going to continue using OpenIDM 4 for this blog, but everything we talk about here is still applicable to 4.5, albeit with minor differences. I will likely move to OpenIDM 4.5 (or 5 even) when the beginners series is concluded.

User Registration

To turn on User Registration in OpenIDM, first log in as an administrator. Then navigate to Configure, then User Registration.

 

 You will see the User Registration configuration screen:

 

 

Now before we turn it on, log out and take a look at the OpenIDM user login screen: http://localhost.localdomain.com:8080/#login/

 

 

Just try to remember what it looks like right now, as it will change in a moment.

 

By default, User Registration is disabled. Let's enable it now. Click the “Enable User Registration” slider.

 

You can see that we have a number of Registration Steps. These can be turned on and off as you require; they can also be re-ordered by dragging and dropping.

 

Let's briefly talk about what each of these steps involves:
  • reCAPTCHA for Registration: Google’s “I am not a robot” dialog. To prevent automated bots registering.
  • Email Validation: User is sent an email, they must click a link in the email to progress registration. Verifying that they do indeed have access to the email account.
  • User Details: Simply allows you to modify the attribute on the user object associated with the email address. By default this is mail, but you might want to change it if you have made changes to your managed user schema.
  • KBA Stage: Whether or not to enforce the use of Knowledge Based Authentication i.e. challenge questions and answers.
  • Registration Form: Here you can change the Identity Service URL to another managed object.
Note that you cannot configure everything in the UI; we will shortly take a closer look at how you can configure the Email Validation and KBA Stage steps directly if required.

 

So let's make sure that we have turned everything on as in the image above, except for reCAPTCHA. There is no save button; toggling the sliders is sufficient here.

 

Now log out again, and take another look at the user login screen:

 

 

You should see we now have a link to Register. You can take a look, but this won't really work yet as we need to configure the steps, mainly Email Validation. So let's do that now.

 

Configuring Email Validation

We need to configure Email Validation and for this we are going to need an email server. If you have an email server, great. If not, I suggest using a fake SMTP service like https://mailtrap.io

 

Navigate to Configure, then System Preferences.

 

Select the Email tab.

 

 

Toggle the Enable Email slider, then Use SMTP Authentication.

 

 

Enter the host, port and credentials for your email server.

 

 

Then press Save.
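Behind the scenes these settings are saved to conf/external.email.json. A minimal sketch, assuming a mailtrap.io style SMTP server (the hostname, port and credentials here are placeholders, and the field names may differ slightly between OpenIDM versions, so check your own generated file):

{
    "host" : "smtp.mailtrap.io",
    "port" : 2525,
    "auth" : {
        "enable" : true,
        "username" : "your-smtp-username",
        "password" : "your-smtp-password"
    },
    "starttls" : {
        "enable" : true
    }
}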

Testing Registration

Logout of OpenIDM, navigate to the user login page: http://localhost.localdomain.com:8080/#login/ and select the Register link again.

 

Enter a test email address and press SEND.

 

All being well you should receive an email:

 

 

If you click the link, you will continue the registration process in OpenIDM.

 

Press Save, then you can complete your KBA questions:

 

 

When that is done, press Save for the final time and you should see that…

 

 

Registration is complete. If you Return to Login Page you should be able to log in with your new user (by default this is the username we set on the 2nd page, not the email address).

 

 

Customising Registration

As with everything ForgeRock, every step of the registration process can be customised, and you can use the REST APIs if you want to build a completely customised experience.

 

However, there are two customisations which are frequently performed; let's take a quick look at them now.

Customising the Registration Email

This can be easily achieved through the UI: return to Configure, then User Registration, and edit the Email Validation step.

 

 

If you prefer you can also edit this directly via <openidm>/conf/selfservice-registration.json
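Like any OpenIDM configuration file, this one should also be exposed over the config REST endpoint, which is a handy way to inspect the current registration stages without touching the file system. A sketch, assuming the default ports and credentials used in this series (the mapping of selfservice-registration.json to the config id selfservice/registration follows OpenIDM's usual dash-to-slash convention):

curl -H "X-OpenIDM-Username: openidm-admin" \
     -H "X-OpenIDM-Password: openidm-admin" \
     "http://localhost.localdomain.com:8080/openidm/config/selfservice/registration"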

 

Customising KBA Challenge Questions

If you want to edit KBA questions you need to do this directly by editing <openidm>/conf/selfservice.kba.json
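The file is fairly self explanatory; a trimmed sketch of the sort of structure you should find there (the question text here is illustrative — questions are keyed by id, with one entry per locale):

{
    "questions" : {
        "1" : {
            "en" : "What's your favorite color?"
        },
        "2" : {
            "en" : "Who was your first employer?"
        }
    }
}

Add, remove or reword entries as required and the new questions will be offered during registration.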

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Debugging OpenAM and OpenIDM

My background is in what I call the world of legacy identity; as such it took me a little while to get used to the world of ForgeRock, REST APIs and the like.


If you come from that world you may find debugging implementation issues with ForgeRock is a little different so I wanted to write up a short guide to share what I have learned.

Have you checked the logs?

Some things never change and your first port of call to debug any issues should be the log files.

OpenAM

There are actually two places to check when debugging OpenAM:


Note: <openam_install_dir> is the directory in which OpenAM was installed, not the web container.


<openam_install_dir>/openam/debug



This is where you find the debug logs. Generally very detailed logs for all the different components of OpenAM.


<openam_install_dir>/openam/log


This is where you find access and error logs. Detailing access requests and the result of those requests.


For example, if we look at access.csv:



You can see the result of my last login as amadmin.

Configuring log levels

If you don’t see anything, you may need to change the logging configuration.


Navigate to: http://localhost.localdomain.com:18080/openam/Debug.jsp




This interface is fairly straightforward.



In the above example I have set the policy component to Message level.


Just hit confirm and the change will be made immediately without a restart required.

OpenIDM

Again, there are actually two places to check when debugging OpenIDM:


<openidm_install_dir>/openidm/logs


The main OpenIDM log files can be found here. OpenIDM uses JDK logging, and the configuration file can be found here if you need to make changes to logging levels:


<openidm_install_dir>/openidm/conf/logging.properties


There is a helpful guide as to how to do that here: http://www.javapractices.com/topic/TopicAction.do?Id=143
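As a quick example, raising the level for a specific component uses standard JDK logging syntax; something like the following (the package name here is just an illustration — pick whichever component you are actually debugging):

# Turn up logging for a specific OpenIDM package
org.forgerock.openidm.recon.level = FINER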


So far that is all fairly standard, however if you do not find anything in the logs then you may want to examine the REST services.

Debugging REST services

As I have said a few times on this blog, the ForgeRock platform is completely underpinned by REST services.


The User Interfaces for OpenAM and OpenIDM both make extensive use of REST service calls in their functioning.


When debugging issues, especially anything that results in an error visible in the UI, you should take a look at the requests and responses.


I use Firefox Developer Tools to do this but I know there are equivalents for other browsers.


It’s as simple as turning on Developer Tools and examining the network requests whilst using OpenAM and OpenIDM.



So let's try making a new authentication chain in OpenAM.



What we need to find is the POST request for the creation of the chain. If you browse up and down the list you should find it pretty quickly. On the right you can see the Headers in the request, the parameters and importantly the response code:




And the response:


So let’s see what that looks like when we have an error:



Generally you will see something like the above, and if you check the actual response, you should see a more detailed message that can help you debug the issue.
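Once you have identified an interesting request in the browser, it can also be useful to replay it from the command line. For example, the login call the UI makes boils down to something like this (OpenAM 13's documented authenticate endpoint; the credentials are illustrative):

curl -X POST \
  -H "X-OpenAM-Username: amadmin" \
  -H "X-OpenAM-Password: password" \
  -H "Content-Type: application/json" \
  "http://localhost.localdomain.com:18080/openam/json/authenticate"

A successful call returns a tokenId; a failure returns the same sort of detailed error message you would see in the Developer Tools response pane.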

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

OpenAM Text To Speech Authentication Module

For a recent POC I had a requirement to implement two factor authentication for users who may have impaired vision and would struggle to read an OTP or the code from a 2FA application.

To achieve this I cloned the existing OpenAM HOTP module and integrated it with The Twilio service: https://www.twilio.com/

I might do a bit more of a deep dive into how exactly this works later but for now the code is up in Github:

https://github.com/wayneblacklock/ttsAuthModule

And you can see a video of how it works below:

There is definitely room for improvement here; at the moment I am using a Twimlet to encode the voice. In a production deployment you would want to generate TwiML (https://www.twilio.com/docs/api/twiml) in order to have more control over the voice, i.e. introduce pauses etc.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

A Beginners Guide to OpenIDM – Part 4 – Mappings

Overview

Last time we configured a connector to process data from a data source. In this blog we start to bring everything together and will define a mapping that creates objects using the connector.

Mappings define the relationships at the attribute level between data in connector data sources and objects. They also define rules to create attributes where they do not already exist and transform attributes if required.

Mappings

Architecture

In OpenIDM mappings consist of a number of logical components:
  • Properties: Defines attribute mappings between source attributes and target attributes, and scripts to create and transform attribute values. Also defines a link qualifier, required to link a single record in the source to multiple different records in the target, which might be needed in some situations, e.g. event-based HR data feeds where the same user record may appear multiple times.
  • Association: Queries, record validation scripts and association rules that together define how a source account is linked to a target account, i.e. how OpenIDM knows that Anne's account in the target system maps to Anne's account in the source system, and not Bob's account.
  • Behaviors: Defines the rules that tell OpenIDM what to do in different situations e.g. if an account doesn’t exist in the target system, then you might configure a behaviour to create it. This is incredibly powerful and we will explore it further.
  • Scheduling: Schedules for reconciliation operations e.g. run the mapping every hour, day, week etc. This is not the same as livesync (which we touched on last time) which will sync changes between systems as they occur.

Mappings Example

Creating the Mapping

As always with this blog, the best way to learn is to work through an example. We will build upon the existing work done in blogs 1-3 so you will need to complete those first.
Log in as administrator (openidm-admin), navigate to Configure then Mappings:
Then New Mapping:
You should see the following:
A mapping requires a source and a target and data will flow from the source to the target.
You can see there are a number of selectable options on this screen that you can assign as either a source or target:
  • Connectors: Any connectors you have configured, e.g. flat file, database, LDAP.
  • Managed Objects: The managed objects defined in OpenIDM.
So for this example, we want to use the CSV connector configured previously to create new managed users in OpenIDM. Effectively using the CSV data to create user identities.
Click Add to Mapping on the UserLoadCSV connector, this is going to be our source.
Similarly, now click Add to Mapping on the User managed object, this is our target.
You should see the following:
You have probably noticed that the CSV connector has a selectable drop down. There are some connectors where you might be interested in mapping more than one type of object e.g. accounts & groups in the case of LDAP. We have only configured the default __ACCOUNT__ object in this example.
 
Click Create Mapping.
 
 
Mapping Name: You can enter a more friendly mapping name if you like.
Linked Mapping: If you are creating two mappings in either direction, you should use this to link your second mapping to your first.
Click OK and the mapping should be created:

Configuring the Mapping

We have a mapping, now we need to configure it.
Navigate to the Properties tab and examine the Attribute Grid.
 
 
Start by clicking Add Missing Required Properties, this will automatically add those target properties which the connector definition or managed object specify must be populated. Make sure you press Save Properties after doing this.
You should see the required attributes for the managed user schema:
Now we need to configure the source attributes for each of the following target attributes.
Select the userName row:
You should see the following popup:
There are a few tabs here worth spending a moment on:
  • Property List: The source property to map from.
  • Transformation Script: Inline or file based scripts to transform attributes; a common use of this is to generate LDAP distinguished names.
  • Conditional Updates: Advanced logic that allows you to define rules for the conditional update of certain properties on sync. Useful if you want to ensure that a particular property is never overwritten.
  • Default Values: Simply set a fixed default value for the target, e.g. “true”, “active”, whatever.
Navigate to the Property List tab and select __NAME__. If you recall this corresponds to the connector we defined in the last blog. Now press Update.
Do the same with the rest of the mappings, then press Save Properties. You should end up with something like the below.
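Under the covers, the attribute grid is saved into the mapping's entry in conf/sync.json. A trimmed sketch of how this mapping might look (the mapping name is autogenerated, and the CSV column names here are illustrative — use whatever is in your own file):

{
    "name" : "systemUserloadcsvAccount_managedUser",
    "source" : "system/UserLoadCSV/__ACCOUNT__",
    "target" : "managed/user",
    "properties" : [
        { "source" : "__NAME__", "target" : "userName" },
        { "source" : "firstname", "target" : "givenName" },
        { "source" : "lastname", "target" : "sn" },
        { "source" : "email", "target" : "mail" }
    ]
}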
We can quickly check this makes sense before we synchronize: look at Sample source, set the Link Qualifier to default and try entering one of the user IDs from the CSV:
You should see a preview of the mapping result for whichever user you entered.
Next, we need to configure an Association Rule. Navigate to Association then look at Association Rules. Select Correlation Queries
Then Add Correlation Query:
Set the Link Qualifier to default, and look at Expression Builder.
 
 
Press + and select userName:
 
 
Then press Submit, and Save.
Select Save and Reconcile.
You should see the following:
Expand out In Progress, to see the results of reconciliation:
You can see there were 100 Absent results; you can examine the tooltip to see what this means, but briefly it means that the source object has no matching target, i.e. there is no managed user record for any of the 100 lines in the CSV file. Which makes sense, as this is the first time we have run the reconciliation.
You might have expected that those managed users would now have been created. Navigate to Manage, then User:
You should see No Data:
Which might not be what we expected, however let's navigate back to Configure, Mappings and look at the Behaviors tab of our mapping again. Look at Current Policy.
By default, new mappings are set up only to read data and not actually perform any operations, such as user creates.
Look at the policy table:
Now change the Current Policy to Default Actions, and note how the table changes:
Note that the actions have all changed; examine the tooltips to understand what is going on here. I plan to revisit Policies in a later blog because they are incredibly powerful.
For now note that Absent has now changed to Create. So for our situation earlier, we would now expect those 100 Absent users to result in Create operations.
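In conf/sync.json this policy table is simply a list of situation/action pairs on the mapping; the entry that matters for our 100 users looks like this (each of the other situations in the table gets its own entry in the same way):

"policies" : [
    { "situation" : "ABSENT", "action" : "CREATE" }
]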
Make sure you press Save before moving on.
Finally, press Reconcile Now to run another reconciliation:
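(Incidentally, the Reconcile Now button just calls OpenIDM's recon REST endpoint, so you can trigger the same run from the command line, which is handy for scripting. A sketch, with the mapping name taken from the sync.json example above:

curl -X POST \
  -H "X-OpenIDM-Username: openidm-admin" \
  -H "X-OpenIDM-Password: openidm-admin" \
  "http://localhost.localdomain.com:8080/openidm/recon?_action=recon&mapping=systemUserloadcsvAccount_managedUser"
)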
You should see the same results as before, but let's check out our Users again under Manage, User.
 
If we have done everything correctly, you should now see all of the users from the CSV file created as OpenIDM managed users.
That’s all for this blog. Thanks.
 

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

A Beginners Guide to OpenIDM – Part 3 – Connectors

Overview

Previously in this series we have looked at a general overview of OpenIDM and had a detailed look at objects. In this blog I want to explore connectors.
Connectors are the integration glue that enables you to bring data into OpenIDM from all sorts of different systems and data stores. We will take a look at the different types of connectors available in OpenIDM, how they work and end with a practical example of how to actually configure a connector.

Connectors

Architecture

Every identity system that I have ever worked with has a similar concept to a connector. Usually they comprise Java libraries or scripts that perform the actual push and pull of data to and from a target data source.
Standard connector operations in OpenIDM include:
  • Create: Create a new object ( usually an account ) in a target data store.
  • Update: Update an existing object e.g. if a user changes their email address then we may want to update the user record in a target data store.
  • Get: Retrieve a specific instance of an object ( e.g. an account) from a target data store.
  • Search: Query the collection and return a specific set of results.
There are a number of other operations which we will explore in later blogs.
At a high level connectors are comprised of:
  • Provisioner configuration: configuration data defining the connector usually containing:
    • Reference to the underlying Java class that implements the connector. This should be populated automatically when you choose your connector type. You can explore the connector source code if you like but for the most part you shouldn’t need to be concerned with the underlying classes.
    • All of the credentials and configuration needed to access the data store. You need to configure this.
    • The data store schema for the object or account. You need to configure this.
Connectors are configured through the user interface but like all OpenIDM configuration they are also stored ( and can be edited ) locally on the file system. Connector configuration files ( like most OpenIDM configuration files) can be found in openidm/conf and have the following naming convention:
provisioner.openicf-something.json ( where something is whatever you have named your connector ).
Note connector configuration files will not appear unless you have configured a connector using the UI, we will revisit this later.
The logical flow in OpenIDM for utilising connectors is as follows:
  • Data Synchronization engine outputs data and a requested operation e.g. create, delete, update or one of several others
  • Provisioner engine invokes the connector class with the requested operation and the data from the synchronization engine.
  • Connector class uses the configuration parameters from the provisioner file and the data passed in the invocation to actually do the work and push or pull to or from the target.

Connector Example

So now we have a basic understanding of how connectors work, lets try configuring one.
I'm going to use the CSV connector for this example and we are going to read users from a Comma Separated Value file. Ultimately we will be reading this data into the managed user object using a mapping. For this blog though we will just focus on configuring the connector.
Feel free to use any CSV file but if you want to follow along with the example then download the CSV here that I created using Mockaroo.



Copy the file to somewhere on the same file system that OpenIDM has been installed on; it doesn't matter where, so long as OpenIDM can access it. I'm going to use /home/data/users.csv
Then log in to OpenIDM as administrator. Navigate to Configure, then Connectors.


 

Press “New Connector”



You will see the UI for configuring a new connector:



Give your new connector a name (I have used UserLoadCSV above – no spaces permitted), and look at the connector types. These are all the different systems you can integrate with.
Note that with further configuration, more connectors are available, and using the scripted connector you can pretty much integrate with any system that offers a suitable API.
 
Select the “CSV File Connector”. Now we need to complete the “Base Connector Details”, starting with the path to the CSV file we actually want to process.


Now let’s take a look at the next few fields:



They are populated by default but we need to configure these to match our spreadsheet.
Looking at the data:
  • Header UID = id
  • Header Name = username
So in this instance we just need to change the Header UID to match.



You will note there are a few more fields:
  • Header Password: We will not be processing any passwords from this CSV, that might be something you want to do, although typically you will have OpenIDM generate passwords for you ( more on that later ).
  • Quote Character: If you have an unusually formatted CSV, you can change the character that surrounds your data values. This is used by OpenIDM to successfully parse the CSV file.
  • Field Delimiter: Similarly if you are using a delimiter ( the character that splits up data entries ) that is anything other than a “,” you can tell OpenIDM here.
  • Newline String: As above.
  • Sync Retention Count: Todo
Note that these parameters are all unique to the CSV connector. If you were to use another connector, say the database connector, you would have a different set of parameters that must be configured for OpenIDM to successfully connect to the database and query the table.
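For reference, everything entered on this screen ends up in the configurationProperties section of the provisioner file mentioned earlier (conf/provisioner.openicf-UserLoadCSV.json in our case). A trimmed sketch — the property names here are from memory of the 4.x CSV connector, so treat them as illustrative and check your own generated file:

"configurationProperties" : {
    "csvFile" : "/home/data/users.csv",
    "headerUid" : "id",
    "headerName" : "username",
    "fieldDelimiter" : ",",
    "quoteCharacter" : "\""
}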
Ok, with all that done lets add the connector:



All being well you should get a positive confirmation message. Congratulations, you have added a connector! All very well but what can we do with it?
Click on the menu option ( the vertical dots):


Then Data (__ACCOUNT__)



If you have done everything correctly you should see the data from the CSV in OpenIDM!



It is important to understand that at this point the data has not been loaded into OpenIDM; OpenIDM is simply providing a live view of the data in the CSV. This works for any connector and we will revisit it at the end of this blog.
Before that, there are a few things I want to cover. Go back to the Connector screen; you should have a new Connector:



Select it, and select “Object Types”:



Then edit "__ACCOUNT__".




What you should be able to see is a list of all of the attributes in the CSV file. OpenIDM has automatically parsed the CSV and built a schema for interpreting the data. You may also spot "__NAME__". This is a special attribute, and it maps to the Header Name attribute we configured earlier.

Again, the concept of Object Type is universal to all our connectors and sometimes additional configuration of the Object Type may be required in order to successfully process data.


Finally, let’s take a look at Sync:

On this page you can configure LiveSync. LiveSync is a special case of synchronization. Ordinarily synchronization is performed through the mappings interface ( or automatically on a schedule ).

However if the connector and target system support it, then LiveSync can be used. With LiveSync changes are picked up as they occur in the target. Ordinarily with a normal synchronization ( often called reconciliation ) all accounts in the target must be examined against the source for changes. With LiveSync, only accounts in the target that have changed will be processed. For this to work the target must support some form of change log that OpenIDM can read. In systems with large numbers of accounts this is a much more efficient way of keeping data in sync.

Connectors And The REST API

As before, we can make use of the REST API here to query our new connector. We can actually use the API to read or write to the underlying CSV data store. Just take a moment to think about what that means. In an enterprise implementation you might have hundreds of different data stores of every type. Once you have configured connectors to OpenIDM you can query those data stores using a single, consistent and centralised RESTful API via OpenIDM. That really is a very powerful tool.

Let’s take a look at this now. Navigate back to the data accounts page from earlier:




Take a look at the URL:

As before, this corresponds to our REST API. Please fire up Postman again.

Enter the following URL

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryId=query-all-ids

You should see the following result



We have just queried the CSV file using the REST API, and retrieved the list of usernames.
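If you prefer the command line to Postman, the same query works with curl using OpenIDM's REST authentication headers (default credentials assumed):

curl -H "X-OpenIDM-Username: openidm-admin" \
     -H "X-OpenIDM-Password: openidm-admin" \
     "http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryId=query-all-ids"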
Let’s try retrieving the data for a specific user:

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryFilter=/email eq "tgardner0@nsw.gov.au"


Here we are searching for the user with the email address tgardner0@nsw.gov.au.

 



Again, this is just a small sample of what the REST API is capable of; you can learn much more here:
https://forgerock.org/openidm/doc/bootstrap/integrators-guide/index.html#appendix-rest
And more on how queries work here:

https://forgerock.org/openidm/doc/bootstrap/integrators-guide/#constructing-queries

 

Come back next time for a look at mappings where we will join together the managed user and the connector to actually create some users in the system.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

It’s The Little Things – Authentication Chains

Authentication Chains

We have not talked much about OpenAM on the blog. AM has some really great features that make it very simple to use. Perhaps my favourite feature is the authentication chains UI.

Let’s take a quick look at what an authentication chain looks like, then we will talk through it and have a go at creating a brand new one. I assume you are using OpenAM 13.

You can see what an auth chain looks like above. Essentially it is a series of steps (I think of them as Lego-like building blocks) for authentication. Each block represents a different mechanism for authenticating. In addition, each block is assigned one of four authentication behaviors (required, optional, requisite & sufficient) which determine how (and if) one block flows into the next, depending on whether that block succeeds.

As stated above, successful authentication requires at least one pass and no fail flags.

In the above example there are four blocks; let's look at each in turn:

  • DataStore: Basic username and password authentication against the OpenAM data store. If this step is a:
    • FAIL: The user hasn’t even got their username and password right. We definitely are not letting them in, and as such exit the chain with a FAIL.
    • PASS: The username and password are correct. We move to the next block in the chain, DeviceMatch.
  • DeviceMatch: First step of device match authentication (essentially asking the question: has OpenAM seen the user log in from this device before?). If this step is a:
    • CONTINUE: OpenAM has not seen the user log in using this particular laptop or mobile before. This block has failed but, because it is sufficient, this does not equate to a fail flag. We have to be a bit more suspicious and go into the TwoFactor block.
    • PASS: This is a device the user has used before and OpenAM recognises it. At this point the user has authenticated with username and password from a recognised device. We exit the chain with a PASS.
  • TwoFactor: Challenge the user to provide the code from a two factor mobile soft token. This second factor proves that not only does the user have the right username and password, but also that they have the mobile device they originally registered with in their possession. If this step is a:
    • FAIL: The user has failed 2FA. At this point we don't have the confidence this is really the user being claimed and exit with a FAIL.
    • PASS: The user has proven possession of the second factor. We move on to the DeviceSave block.
  • DeviceSave: The last step of device match authentication. We save a record of the device so we can match it next time in the DeviceMatch step. If this step is a:
    • FAIL: The user is not actually being challenged for anything here; authentication is complete and saving the device will not fail.
    • PASS: We have now saved the device. In future, so long as the user continues to use this particular laptop or mobile to log in, they will not have to do the TwoFactor step.

Note that I have chosen the above authentication “blocks” for this particular blog. I could easily have used others. There are many different types of blocks available in OpenAM covering nearly every conceivable authentication requirement.

I think the way OpenAM allows you to quickly use these building blocks to build authentication visually is really neat.

Let’s now try building the above chain in OpenAM.

Building an Authentication Chain

Firstly we need to create the authentication building blocks we want. I am going to assume you have an installation of OpenAM up and running with a Top Level Realm configured (though you can do this in any realm).
Select the realm:
And navigate to Authentication, then Modules.
Out of the box the above modules are configured. We need to configure a few more.
Press Add Module, select “Device Match” from the drop down and give it a name ( I used DeviceMatch earlier ).
Press Create and you should see the configuration screen:
The defaults are fine here, just press Save Changes.
Now repeat the last two steps for the Device Id ( Save ) and ForgeRock Authenticator (OATH) modules.
When this is done you should have the following modules:
Now we need to create a new authentication chain. Navigate to Authentication, then Chains.
Press Add Chain, and give it a name ( I used secureAuthService above ) then press Create, you will now have an empty authentication chain.
Now just press Add Module for each block. You don't have to worry about the order; just add all the modules as in my example at the start of this blog:
If you get the order wrong, don’t worry about it! Just drag and drop authentication blocks to move them around. Ensure you have set the Criteria as follows:
DataStore: Requisite
DeviceMatch: Sufficient
TwoFactor: Requisite
DeviceSave: Required
Save Changes and you are done. That’s all there is to it!
Not quite… there is one additional step I want to do here. By default Two Factor is optional for end users. In some cases that is desirable: it's an additional security control, and if you are a big retailer you don't want to force it on users, but you do want it to be an option for them.
However in this demo I want to make it mandatory. To do so, navigate to Authentication, Settings, then General and check the Two Factor Authentication Mandatory box.
Then Save Changes.

Testing the Authentication Chain

So how do we test the authentication chain? Well, remember we named it secureAuthService? Let’s try logging in using the following URL:
http://localhost.localdomain.com:18080/openam/login?service=secureAuthService
 
Then try entering the standard demo and changeit credentials.
You would normally be logged into OpenAM at this point; however, instead you should see the following:
This is the DeviceMatch module doing its work. Make sure to press Share Location.
 
Note: this is just a default; capturing location is optional.
 
As this is the first time I am logging in using this device, I need to use the ForgeRock authenticator as a second factor.
Note: for this explanation I have already downloaded the ForgeRock authenticator from the Apple App Store (or Google Play). I have also already registered it with OpenAM. The first time you do this you will be asked to register and will need to take a photo of a QR code in OpenAM. This is relatively straightforward, but feel free to leave questions in the comments.
 
 
I now enter the code generated by the ForgeRock authenticator on my phone and, assuming I get that right and press SUBMIT, I am then asked if I want to trust this device (the laptop I am logging in from) and to give it a name:
After which I am successfully logged into OpenAM!
Now, if you try logging out and back in, you won't be challenged for 2FA authentication! So long as you are using the same laptop.
One more thing. If you log in again and navigate to DASHBOARD, you can see the trusted profile for your Laptop and the 2FA token. If you want you can delete the trusted profile, at which point OpenAM no longer knows about your laptop and will challenge you for 2FA again.
Authentication chains are really easy to understand and configure, and incredibly powerful.
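As an aside, you can drive the same chain from the REST API by naming the service in the authenticate call, along these lines (standard OpenAM authIndexType/authIndexValue parameters; blocks that need browser interaction, like DeviceMatch, will come back as REST callbacks to complete):

curl -X POST \
  -H "X-OpenAM-Username: demo" \
  -H "X-OpenAM-Password: changeit" \
  -H "Content-Type: application/json" \
  "http://localhost.localdomain.com:18080/openam/json/authenticate?authIndexType=service&authIndexValue=secureAuthService"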

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

A Beginners Guide to OpenIDM – Part 2 – Objects & Relationships

Overview

At the heart of OpenIDM are managed objects. Out of the box three managed objects are configured:
  • User: User identities, effectively this is your central identity store.
  • Role: An object for modelling roles.
  • Assignment: An object for modelling assignments. Assignments are effectively ways of capturing sets of entitlements across mappings, which can then be associated with roles.
In this blog we will examine the user managed object in detail; roles and assignments will be explored later in the series.
It is important to understand that objects can really be anything and you can create new objects very easily. This is an incredibly powerful way to model all sorts of different things:
  • Users
  • Organisations, divisions, teams or other parts of a business.
  • Devices and hardware.
  • Products and offerings.
  • Anything else you can think of! Managed objects are completely configurable.
Not only can you model things, but you can also model the relationships between things. For example:
  • Which organisations a user belongs to.
  • The devices that a user owns.
  • The products a user has.
  • The teams that belong to an organisation.
  • Anything else you can think of!

Objects

All objects have the following properties:
  • Details: The name and icon that represents the object in the UI.
  • Schema: Properties, their validation rules and their relationships.
  • Scripts: Different hooks for running scripts throughout the object lifecycle e.g. postCreate
  • Properties: Rules for special attribute behaviors e.g. passwords should be encrypted and private.
Let's look at each of these in detail.

Details

Not much to say here. Just the name of your object and you can select a funky icon that will be displayed throughout the interface wherever your object is used.

Schema

The properties that actually comprise your object. Let's take a look at the managed user schema.
On the left, under Schema Properties you can see each property that comprises a user. There are many properties available out of the box and you can easily add or remove properties as required.
Let’s look at a property in detail.
So what does a property consist of:
  • Property Name: The internal name used within the OpenIDM platform to refer to the property; think of it like a variable name, only used internally.
  • Readable Title: The name that will be used to refer to the property in the user interface.
  • Description: Simple description of the attribute that when populated is used throughout the interface as a tooltip.
  • Viewable: Can it be seen in the UI?
  • Searchable: Is it indexed and searchable in the UI?
  • End users allowed to edit: Users are allowed to update the value using self service.
  • Minimum Length: Minimum length of the attribute value.
  • Pattern: Any specific pattern to which the value of the property must adhere, e.g. date formats.
  • Validation Policies: Rules that can be used to define attribute behavior. We will look at these in detail in a moment.
  • Required: Must be populated with a value.
  • Return by Default: If true, will be returned when user details are requested via the API. If false, will only be returned if specifically asked for.
  • Type: The type of the attribute: String, Array, Boolean, Integer, Number, Object or Relationship. We will look at relationships in a moment.

Validation Policies

Validation policies are ways to validate the attribute. The example below checks that the mail attribute is a valid email address. This prevents the user from inputting an invalid email address during self registration or an administrator changing the email incorrectly.
 
Similarly for the password attribute validation policies allow you to enforce password rules, for example:
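In conf/managed.json these validation policies are simply entries on the property definition. A minimal sketch for the two examples above, using policy IDs from OpenIDM's default policy set (the parameters are illustrative):

"mail" : {
    "type" : "string",
    "policies" : [
        { "policyId" : "valid-email-address-format" }
    ]
},
"password" : {
    "type" : "string",
    "policies" : [
        { "policyId" : "minimum-length", "params" : { "minLength" : 8 } },
        { "policyId" : "at-least-X-capitals", "params" : { "numCaps" : 1 } },
        { "policyId" : "at-least-X-numbers", "params" : { "numNums" : 1 } }
    ]
}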

Relationships

Relationships are incredibly powerful and really at the heart of what OpenIDM does. If you have installed OpenIDM in part 1 then I recommend you take a look at the out of the box managed objects to really understand this, however we will briefly discuss it.
The out of the box managed user object defines a relationship between managers and reports.
manager:
reports:
What are we saying here?
  • Users have a manager. This is a Relationship; in fact it is a reverse relationship, as manager A has reports X, Y, Z and reports X, Y, Z have the manager A.
  • Users can also have reports, and may have multiple reports. Note this is an Array of Relationships: A manages X, A manages Y, A manages Z. Likewise this is a reverse relationship.
Relationships let you model relationships between all sorts of types of objects, users, organisations, devices, products, anything.

Scripts

Objects also have lifecycle events which can be used to trigger scripts.
Out of the box, the above scripts are configured:
onCreate: The script that runs when the object is created. In this case, a script used to set the default fields for a user.
onDelete: The script that runs when the object is deleted. In this case, a script is used to cleanup users after deletion.
These scripts are completely configurable and new scripts can easily be added.
If you try to add a new script you will see there are three options:
  1. Inline Script: a script defined within the UI.
  2. Script File Path: a script stored within the OpenIDM configuration directory. This is how the out of the box scripts work. If you navigate to /openidm/bin/defaults/script/ui you can examine these out of the box scripts to see what they do.
  3. Workflow: the event can be used to trigger a workflow.
Note: If you add new scripts, these should be placed somewhere else, usually: /usr/local/env/box/openidm/script
 
Scripting is a great way to do all sorts of things to help you manage objects.

Properties

Properties let you define additional behaviors for attributes.
  • Encrypted: The attribute value is encrypted. This means it can be decrypted and the value retrieved if required. 
  • Private: Restricts HTTP access to sensitive data, if this is true the attribute is not returned when using the REST API.
  • Virtual: The attribute is calculated on the fly, usually from a script.
  • Hashed: The attribute is hashed. Hashing is a one way function and the usual way that passwords should be stored. You hash the password when a user registers for the first time. When they log in subsequently, you hash the password that they enter and compare it against the original password hash; if they match, you know the passwords are the same. Crucially, it is computationally infeasible to take a hash and extract the original password from it.
A common use for virtual attributes is calculating effective roles. Effective roles are dynamically calculated using an out of the box script:
You can examine the script here: /openidm/bin/defaults/script/roles/effectiveRoles.js. 

Managed Objects and the REST API

For the final part of this blog I want to take a look at something I think is pretty cool. The OpenIDM REST API.
All managed objects ( including the ones you can create yourself ) are automatically made available using a REST API.

Using the API you can Create, Read, Update and Delete objects (CRUD) as well as search for and query objects. We will dive into the REST API in a later series but we can do a quick demo just to get a feel for how it works.

I recommend downloading Postman for this. Postman is a plug-in for Chrome that lets you easily invoke REST APIs. You can grab it here: https://www.getpostman.com/
Once you have Postman, log into OpenIDM as administrator, go to Manage, then User, and create a new user:
Press Save. Now look at the URL:
Note the long string of letters and numbers. This is the object id for our new user.
Now if we go to Postman, we can set up a new request:
Make sure you populate the headers as I have above. Set the request to a GET and enter a URL to return. In our case:
How does this break down:
Now, if you press Send, you should retrieve the user we just created:
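The same request with curl looks like this, using OpenIDM's REST authentication headers (substitute the object id from your own URL):

curl -H "X-OpenIDM-Username: openidm-admin" \
     -H "X-OpenIDM-Password: openidm-admin" \
     "http://localhost.localdomain.com:8080/openidm/managed/user/<object-id>"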
This is just a small taster of what the REST API can do and we will explore it in much more detail in later blogs. You can also read all about the REST API here:

 

 

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

A Beginners Guide to OpenIDM – Part 1

Introducing OpenIDM

This is the first in a series of blogs aiming to demystify OpenIDM, the Identity Management component of the ForgeRock platform.

I have actually been really impressed with OpenIDM and how much you can accomplish with it in a short time. It is fair to say though that if you are used to more traditional IDM technologies such as Oracle Identity Manager then it can take a bit of time to get your head around how OpenIDM works and how to get things done.

In the first of this series of blogs I want to walkthrough a basic installation of OpenIDM, look at the architecture of the product and how everything fits together.

Overview

OpenIDM is primarily concerned with the following functionality:
  • Objects and relationships: Quickly modelling complex objects, schemas and the relationships between them, e.g. for users, devices and things and exposing them as RESTful resources.
  • Data Synchronization: Moving data to and from systems such as Active Directory, databases, webservices and others. It makes use of connectors and mappings to:
    • Create and update users and accounts in target systems i.e. pushing data to target systems from OpenIDM.
    • Reconcile users and accounts from target systems i.e. pulling data into OpenIDM from target systems.
    • Move data about users, devices and things to and from any other system.
  • Workflow Engine: Drives processes such as request and approval of access to resources, and much more.
  • Self Service: Enabling end users to easily and securely register accounts, retrieve forgotten passwords and manage their profiles.
  • Task Scheduling: Automating certain processes to run periodically.
All of this is built upon a consistent set of REST APIs, with numerous hooks throughout the platform for scripting behaviors using Groovy or JavaScript.
OpenIDM also makes use of a data store into which it reads and writes:
  • Data for users, devices and things: e.g. actual user account data such as first_name=Wayne, last_name=Blacklock for all objects that OpenIDM is managing.
  • Linked account data: “Mirrored data” for the systems that OpenIDM has been integrated with. This enables you to view and manipulate all of a user's account data across all systems from OpenIDM.
  • Various pieces of state relating to workflow, scheduling and other functionality.
Finally, all of OpenIDM's config is stored as .json files locally per deployment.

Logical Architecture

The diagram below aims to give you a bit of an overview of how OpenIDM fits together. We will explore each major component in detail with worked examples over the next few months.

Getting Started

This blog series is intended to be a practical introduction to OpenIDM so the first thing we need to do is download and install it from here:
Note: For now we are going to use the embedded OpenIDM OrientDB database, rather than install an external database. The OrientDB database ships with OpenIDM and is ready to go right from the start; however, please note it is not suitable for production deployments. We will cover the usage of another database for enterprise deployments later in the series.
Download and unzip OpenIDM to a directory. Make sure you have Java installed, configured and available from the command line.
To start up OpenIDM simply type:

Linux:

 ./startup.sh
Windows:
 startup.bat
That's it! By default OpenIDM runs on port 8080. You can then navigate to the interfaces at:
http://localhost.localdomain.com:8080
http://localhost.localdomain.com:8080/admin

You’ll note both pages look similar, but one is for users and one is for admins.

The default username and password for the administrator is openidm-admin / openidm-admin.

Log into the administrator interface, once you have logged in you should see the dashboard:

Over the rest of this series we will explore the functionality of OpenIDM in detail.


This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

It’s The Little Things – Part 1

Since I began working with the ForgeRock technology I have been impressed by how much you can do with it in a very short time, and from what I have seen this is really underpinned by a philosophy of developer friendliness. There are lots of small features throughout the platform that are solely there to make life for implementors and developers easier.

In this series I wanted to take the opportunity to call out some of these little things that can make a big difference to your day to day experience.

Simulated Attribute Mapping

One of my favourite features in OpenIDM. Say you need to create provisioning integration with a target system. More often than not you need to manipulate or transform source attributes to achieve this.

Below you can see I have created a mapping to Active Directory in OpenIDM. This is a fairly common requirement that comes up time and again. Among other attributes, when you create an Active Directory account you need to define a Distinguished Name (DN).

I have configured the following script to generate a DN:
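The script itself is just a JavaScript one-liner along these lines (the base DN here is illustrative; source is the variable OpenIDM exposes to transformation scripts for the source object):

'cn=' + source.userName + ',cn=users,dc=example,dc=com'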

This is fairly simple: it just takes the userName of the user in OpenIDM and appends the rest of the desired DN as a fixed string. This is all fairly standard stuff. What is really useful is that I can simulate the output of my DN transformation before I actually provision any accounts. To do this you just need to select an existing OpenIDM user using the Sample Source feature:

You can now see what the target output will actually look like for a given user. This is a really handy timesaver if you need to write complex mappings, and enables you to quickly get a feel for whether or not your transformation is correct before you have to go back and forth with failed provisioning operations against Active Directory.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.