A Beginner’s Guide to OpenIDM – Part 4 – Mappings

Overview

Last time we configured a connector to process data from a data source. In this blog we start to bring everything together and define a mapping that creates objects using the connector.

Mappings define the attribute-level relationships between data in connector data sources and objects. They also define rules to create attribute values where they do not already exist, and to transform attribute values where required.

Mappings

Architecture

In OpenIDM mappings consist of a number of logical components:
  • Properties: Defines attribute mappings between source attributes and target attributes, along with scripts to create and transform attribute values. Also defines a link qualifier, required to link a single record in the source to multiple different records in the target, which may be needed in some situations, e.g. event-based HR data feeds where the same user record appears multiple times.
  • Association: Queries, record validation scripts and association rules that together define how a source account is linked to a target account, i.e. how OpenIDM knows that Anne’s account in the source system maps to Anne’s account in the target system, and not Bob’s.
  • Behaviors: Defines the rules that tell OpenIDM what to do in different situations, e.g. if an account doesn’t exist in the target system, you might configure a behaviour to create it. This is incredibly powerful and we will explore it further.
  • Scheduling: Schedules for reconciliation operations, e.g. run the mapping every hour, day or week. This is not the same as livesync (which we touched on last time), which syncs changes between systems as they occur.
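All of these components end up in the mapping configuration, which OpenIDM stores on the file system in conf/sync.json. As a rough sketch of the shape of a mapping entry (the mapping name and property list here are illustrative, not the exact output of this example):

```json
{
  "mappings": [
    {
      "name": "systemUserloadcsvAccount_managedUser",
      "source": "system/UserLoadCSV/__ACCOUNT__",
      "target": "managed/user",
      "properties": [
        { "source": "__NAME__", "target": "userName" },
        { "source": "email", "target": "mail" }
      ],
      "policies": [
        { "situation": "ABSENT", "action": "CREATE" }
      ]
    }
  ]
}
```

We will build exactly this kind of mapping through the UI below.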

Mappings Example

Creating the Mapping

As always with this blog, the best way to learn is to work through an example. We will build upon the existing work done in blogs 1-3 so you will need to complete those first.
Log in as administrator (openidm-admin), navigate to Configure then Mappings:
Then New Mapping:
You should see the following:
A mapping requires a source and a target and data will flow from the source to the target.
You can see there are a number of selectable options on this screen that you can assign as either a source or target:
  • Connectors: Any connectors you have configured, e.g. flat file, database, LDAP.
  • Managed Objects: The managed objects defined in OpenIDM.
So for this example, we want to use the CSV connector configured previously to create new managed users in OpenIDM, effectively using the CSV data to create user identities.
Click Add to Mapping on the UserLoadCSV connector, this is going to be our source.
Similarly, now click Add to Mapping on the User managed object, this is our target.
You should see the following:
You have probably noticed that the CSV connector has a selectable drop-down. There are some connectors where you might be interested in mapping more than one type of object, e.g. accounts and groups in the case of LDAP. We have only configured the default __ACCOUNT__ object in this example.
 
Click Create Mapping.
 
 
Mapping Name: You can enter a more friendly mapping name if you like.
Linked Mapping: If you are creating two mappings, one in each direction, you should use this to link your second mapping to your first.
Click OK and the mapping should be created:

Configuring the Mapping

We have a mapping, now we need to configure it.
Navigate to the Properties tab and examine the Attribute Grid.
 
 
Start by clicking Add Missing Required Properties; this will automatically add the target properties which the connector definition or managed object schema specify must be populated. Make sure you press Save Properties after doing this.
You should see the required attributes for the managed user schema:
Now we need to configure the source attributes for each of the following target attributes.
Select the userName row:
You should see the following popup:
There are a few tabs here worth spending a moment on:
  • Property List: The source property to map from.
  • Transformation Script: Inline or file-based scripts to transform attribute values; a common use is to generate LDAP distinguished names (see the sketch after this list).
  • Conditional Updates: Advanced logic that allows you to define rules for the conditional update of certain properties on sync. Useful if you want to ensure that a particular property is never overwritten.
  • Default Values: Simply sets a fixed default value for the target, e.g. “true” or “active”.
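To give you a feel for what a transformation script looks like in the underlying configuration, here is a hedged sketch of a property mapping with an inline JavaScript transform that builds an LDAP DN. The attribute names and DN pattern are hypothetical:

```json
{
  "source": "username",
  "target": "dn",
  "transform": {
    "type": "text/javascript",
    "source": "'uid=' + source + ',ou=people,dc=example,dc=com'"
  }
}
```

Within the script, source holds the value of the source attribute, and the script’s return value becomes the target attribute.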
Navigate to the Property List tab and select __NAME__. If you recall, this is the special name attribute of the connector we defined in the last blog. Now press Update.
Do the same with the rest of the mappings as below, then press Save Properties. You should end up with something like the below.
We can quickly check this makes sense before we synchronize: look at Sample source, set the Link Qualifier to default and try entering one of the user IDs from the CSV:
You should see a preview of the mapping result for whichever user you entered.
Next, we need to configure an Association Rule. Navigate to Association, then look at Association Rules. Select Correlation Queries.
Then Add Correlation Query:
Set the Link Qualifier to default, and look at Expression Builder.
 
 
Press + and select userName:
 
 
Then press Submit, and Save.
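Behind the scenes the expression builder writes a correlation query into the mapping configuration. It should look roughly like the sketch below; the exact mapping name and generated script file can vary between OpenIDM versions:

```json
{
  "correlationQuery": [
    {
      "linkQualifier": "default",
      "expressionTree": { "any": ["userName"] },
      "mapping": "systemUserloadcsvAccount_managedUser",
      "type": "text/javascript",
      "file": "ui/correlateTreeToQueryFilter.js"
    }
  ]
}
```

In plain terms: for the default link qualifier, a source record is considered a match for a target record when the listed attribute (here, userName) is equal on both sides.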
Select Save and Reconcile.
You should see the following:
Expand out In Progress to see the results of reconciliation:
You can see there were 100 Absent results. You can examine the tooltip to see exactly what this means, but briefly: the source object has no matching target, i.e. there is no managed user record for any of the 100 lines in the CSV file. This makes sense, as this is the first time we have run the reconciliation.
You might expect that those managed users have now been created. Navigate to Manage, then User:
You should see No Data:
This might not be what you expected. Let’s navigate back to Configure, then Mappings, and look at the Behaviors tab of our mapping again. Look at Current Policy.
By default, new mappings are set up only to read data and not actually perform any operations, such as user creates.
Look at the policy table:
Now change the Current Policy to Default Actions, and note how the table changes:
Note that the actions have all changed; examine the tooltips to understand what is going on here. I plan to revisit policies in a later blog because they are incredibly powerful.
For now note that Absent has now changed to Create. So for our situation earlier, we would now expect those 100 Absent users to result in Create operations.
Make sure you press Save before moving on.
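For reference, the policy table is persisted in the mapping entry in conf/sync.json as a list of situation/action pairs. A trimmed, hedged sketch of what the Default Actions policy looks like for a few common situations (consult the tooltips or documentation for the full set):

```json
"policies": [
  { "situation": "CONFIRMED", "action": "UPDATE" },
  { "situation": "FOUND", "action": "UPDATE" },
  { "situation": "ABSENT", "action": "CREATE" },
  { "situation": "MISSING", "action": "EXCEPTION" },
  { "situation": "UNQUALIFIED", "action": "DELETE" }
]
```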
Finally, press Reconcile Now to run another reconciliation:
You should see the same results as before, but let’s check our Users again under Manage, User.
 
If we have done everything correctly, you should now see all of the users from the CSV file created as OpenIDM managed users.
That’s all for this blog. Thanks.
 

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

A Beginner’s Guide to OpenIDM – Part 3 – Connectors

Overview

Previously in this series we have looked at a general overview of OpenIDM and had a detailed look at objects. In this blog I want to explore connectors.
Connectors are the integration glue that enables you to bring data into OpenIDM from all sorts of different systems and data stores. We will take a look at the different types of connectors available in OpenIDM, how they work and end with a practical example of how to actually configure a connector.

Connectors

Architecture

Every identity system that I have ever worked with has a concept similar to a connector. Usually they comprise Java libraries or scripts that perform the actual push and pull of data to and from a target data source.
Standard connector operations in OpenIDM include:
  • Create: Create a new object (usually an account) in a target data store.
  • Update: Update an existing object, e.g. if a user changes their email address then we may want to update the user record in a target data store.
  • Get: Retrieve a specific instance of an object (e.g. an account) from a target data store.
  • Search: Query the collection and return a specific set of results.
There are a number of other operations which we will explore in later blogs.
At a high level connectors are comprised of:
  • Provisioner configuration: configuration data defining the connector, usually containing:
    • A reference to the underlying Java class that implements the connector. This should be populated automatically when you choose your connector type. You can explore the connector source code if you like, but for the most part you shouldn’t need to be concerned with the underlying classes.
    • All of the credentials and configuration needed to access the data store. You need to configure this.
    • The data store schema for the object or account. You need to configure this.
Connectors are configured through the user interface, but like all OpenIDM configuration they are also stored (and can be edited) locally on the file system. Connector configuration files (like most OpenIDM configuration files) can be found in openidm/conf and have the following naming convention:
provisioner.openicf-something.json (where something is whatever you have named your connector).
Note that connector configuration files will not appear until you have configured a connector using the UI; we will revisit this later.
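As a heavily trimmed sketch of the general shape of a provisioner file (the bundle version is a placeholder and property names vary from connector to connector):

```json
{
  "connectorRef": {
    "connectorHostRef": "#LOCAL",
    "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
    "bundleVersion": "1.5.x",
    "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector"
  },
  "configurationProperties": {
    "csvFile": "/home/data/users.csv"
  },
  "objectTypes": {
    "__ACCOUNT__": {}
  }
}
```

The three bullets above map directly onto connectorRef, configurationProperties and objectTypes respectively.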
The logical flow in OpenIDM for utilising connectors is as follows:
  • The data synchronization engine outputs data and a requested operation, e.g. create, delete, update or one of several others.
  • The provisioner engine invokes the connector class with the requested operation and the data from the synchronization engine.
  • The connector class uses the configuration parameters from the provisioner file and the data passed in the invocation to actually do the work and push or pull to or from the target.

Connector Example

So now we have a basic understanding of how connectors work, let’s try configuring one.
I’m going to use the CSV connector for this example and we are going to read users from a Comma Separated Values (CSV) file. Ultimately we will be reading this data into the managed user object using a mapping. For this blog though we will just focus on configuring the connector.
Feel free to use any CSV file but if you want to follow along with the example then download the CSV here that I created using Mockaroo.



Copy the file to somewhere on the same file system that OpenIDM is installed on; it doesn’t matter where, so long as OpenIDM can access it. I’m going to use /home/data/users.csv
Then log in to OpenIDM as administrator. Navigate to Configure, then Connectors.


 

Press “New Connector”



You will see the UI for configuring a new connector:



Give your new connector a name (I have used UserLoadCSV above – no spaces permitted), and look at the connector types. These are all the different systems you can integrate with.
Note that with further configuration, more connectors are available, and using the scripted connector you can pretty much integrate with any system that offers a suitable API.
 
Select the “CSV File Connector”. Now we need to complete the “Base Connector Details”, starting with the path to the CSV file we actually want to process.


Now let’s take a look at the next few fields:



They are populated by default but we need to configure them to match our CSV file.
Looking at the data:
  • Header UID = id
  • Header Name = username
So in this instance we just need to change the Header UID to match.



You will note there are a few more fields:
  • Header Password: We will not be processing any passwords from this CSV. That might be something you want to do, although typically you will have OpenIDM generate passwords for you (more on that later).
  • Quote Character: If you have an unusually formatted CSV, you can change the character that surrounds your data values. This is used by OpenIDM to successfully parse the CSV file.
  • Field Delimiter: Similarly, if you are using a delimiter (the character that separates data entries) that is anything other than a “,” you can tell OpenIDM here.
  • Newline String: As above.
  • Sync Retention Count: The number of historical copies of the CSV file that the connector retains in order to detect changes during synchronization operations.
Note that these parameters are all unique to the CSV connector. If you were to use another connector, say the database connector, you would have a different set of parameters that must be configured for OpenIDM to successfully connect to the database and query the table.
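All of the fields above end up in the configurationProperties block of the provisioner file. A hedged sketch for our example (exact property names can differ between versions of the CSV connector):

```json
"configurationProperties": {
  "csvFile": "/home/data/users.csv",
  "headerUid": "id",
  "headerName": "username",
  "quoteCharacter": "\"",
  "fieldDelimiter": ",",
  "newlineString": "\n"
}
```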
OK, with all that done, let’s add the connector:



All being well you should get a positive confirmation message. Congratulations, you have added a connector! All very well but what can we do with it?
Click on the menu option (the vertical dots):


Then Data (__ACCOUNT__)



If you have done everything correctly you should see the data from the CSV in OpenIDM!



It is important to understand that at this point the data has not been loaded into OpenIDM; OpenIDM is simply providing a live view of the data in the CSV. This works for any connector and we will revisit it at the end of this blog.
Before that, there are a few things I want to cover. Go back to the Connector screen; you should have a new Connector:



Select it, and select “Object Types”:



Then edit “__ACCOUNT__”.




What you should be able to see is a list of all of the attributes in the CSV file. OpenIDM has automatically parsed the CSV and built a schema for interpreting the data. You may also spot “__NAME__”. This is a special attribute, and it maps to the Header Name attribute we configured earlier.

Again, the concept of Object Type is universal to all our connectors and sometimes additional configuration of the Object Type may be required in order to successfully process data.
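Under the hood, this screen edits the objectTypes section of the provisioner file, where each CSV column becomes a schema entry. A trimmed sketch showing just two attributes (the email attribute comes from the sample CSV):

```json
"objectTypes": {
  "__ACCOUNT__": {
    "$schema": "http://json-schema.org/draft-03/schema",
    "id": "__ACCOUNT__",
    "type": "object",
    "nativeType": "__ACCOUNT__",
    "properties": {
      "__NAME__": {
        "type": "string",
        "nativeName": "__NAME__",
        "nativeType": "string"
      },
      "email": {
        "type": "string",
        "nativeName": "email",
        "nativeType": "string"
      }
    }
  }
}
```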


Finally, let’s take a look at Sync:

On this page you can configure LiveSync. LiveSync is a special case of synchronization. Ordinarily synchronization is performed through the mappings interface (or automatically on a schedule).

However, if the connector and target system support it, then LiveSync can be used. With LiveSync, changes are picked up as they occur in the target. Ordinarily with a normal synchronization (often called reconciliation) all accounts in the target must be examined against the source for changes. With LiveSync, only accounts in the target that have changed will be processed. For this to work the target must support some form of change log that OpenIDM can read. In systems with large numbers of accounts this is a much more efficient way of keeping data in sync.
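LiveSync itself is driven by a schedule file in openidm/conf. A minimal sketch, assuming a file named schedule-livesync.json and an illustrative cron expression that polls every 15 seconds:

```json
{
  "enabled": true,
  "type": "cron",
  "schedule": "0/15 * * * * ?",
  "persisted": true,
  "invokeService": "provisioner",
  "invokeContext": {
    "action": "liveSync",
    "source": "system/UserLoadCSV/__ACCOUNT__"
  }
}
```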

Connectors And The REST API

As before, we can make use of the REST API here to query our new connector. We can actually use the API to read from or write to the underlying CSV data store. Just take a moment to think about what that means: in an enterprise implementation you might have hundreds of different data stores of every type. Once you have configured connectors in OpenIDM, you can query all of those data stores using a single, consistent and centralised RESTful API. That really is a very powerful tool.

Let’s take a look at this now. Navigate back to the data accounts page from earlier:




Take a look at the URL:

As before, this corresponds to our REST API. Please fire up Postman again.

Enter the following URL

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryId=query-all-ids

You should see the following result



We have just queried the CSV file using the REST API, and retrieved the list of usernames.
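The response body follows the standard OpenIDM query format, roughly like the sketch below (the _id values come from the Header UID column; placeholders shown here):

```json
{
  "result": [
    { "_id": "1" },
    { "_id": "2" }
  ],
  "resultCount": 100,
  "pagedResultsCookie": null,
  "remainingPagedResults": -1
}
```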
Let’s try retrieving the data for a specific user:

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryFilter=/email eq "tgardner0@nsw.gov.au"


Here we are searching for the user with the email address tgardner0@nsw.gov.au.
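This time the result should contain the full record rather than just the IDs. A hedged sketch of the shape of the response; the exact field names depend on the columns in your CSV:

```json
{
  "result": [
    {
      "_id": "1",
      "__NAME__": "tgardner0",
      "email": "tgardner0@nsw.gov.au"
    }
  ],
  "resultCount": 1
}
```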

 



Again, this is just a small sample of what the REST API is capable of. You can learn much more here:
https://forgerock.org/openidm/doc/bootstrap/integrators-guide/index.html#appendix-rest
And more on how queries work here:

https://forgerock.org/openidm/doc/bootstrap/integrators-guide/#constructing-queries

 

Come back next time for a look at mappings, where we will join together the managed user and the connector to actually create some users in the system.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.