OpenDJ Pets on Kubernetes


Stateless “12-factor” applications are all the rage, but there are some kinds of services that are inherently stateful. Good examples are things like relational databases (Postgres, MySQL) and NoSQL databases (Cassandra, etc).

These services are difficult to containerize, because the default docker model favours ephemeral containers where the data disappears when the container is destroyed.

These services also have a strong need for identity. A database “primary” server is different than the “slave”. In Cassandra, certain nodes are designated as seed nodes, and so on.

OpenDJ is an open source LDAP directory server from ForgeRock. LDAP servers are inherently “pet like” insomuch as the directory data must persist beyond the container lifetime. OpenDJ nodes also replicate data between themselves to provide high-availability and therefore need some kind of stable network identity.

Kubernetes 1.3  introduces a feature called “Pet Sets” that is designed specifically for these kinds of stateful applications.   A Kubernetes PetSet provides applications with:

  • Permanent hostnames that persist across restarts
  • Automatically provisioned persistent disks per container that live beyond the life of a container
  • Unique identities in a group to allow for clustering and leader election
  • Initialization containers which are critical for starting up clustered applications

These features are exactly what we need to deploy OpenDJ instances.  If you want to give this a try, read on…

You will need access to a Kubernetes 1.3 environment. Using minikube is the recommended way to get started on the desktop.

You will need to fork and clone the ForgeRock Docker repository to build the OpenDJ base image. The repository is on our stash server:

 https://stash.forgerock.org/projects/DOCKER/repos/docker/browse

To build the OpenDJ image, you will do something like:

cd opendj
docker build -t forgerock/opendj:latest .

If you are using minikube,  you should connect your docker client to the docker daemon running in your minikube cluster (use minikube docker-env).  Kubernetes will not need to “pull” the image from a registry – it will already be loaded.  For development this approach will speed things up considerably.

Take a look at the README for the OpenDJ image. There are a few environment variables that the container uses to determine how it is bootstrapped and configured.  The most important ones:

  • BOOTSTRAP: Path to a shell script that will initialize OpenDJ. This is only executed if the data/config directory is empty. Defaults to /opt/opendj/bootstrap/setup.sh
  • BASE_DN: The base DN to create. Used in setup and replication
  • DJ_MASTER_SERVER: If set, run.sh will call bootstrap/replicate.sh to enable replication to this master. This only happens if the data/config directory does not exist

There are sample bootstrap setup.sh scripts provided as part of the container, but you can override these and provide your own script.
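In a Kubernetes deployment these variables are supplied through the container spec. As a rough sketch (the variable names follow the README above, but the values and surrounding structure here are illustrative; the manifests in the fretes repository are authoritative):

```yaml
# Illustrative fragment of a pet template -- values are examples only.
containers:
- name: opendj
  image: forgerock/opendj:latest
  env:
  - name: BASE_DN
    value: "dc=example,dc=com"
  - name: DJ_MASTER_SERVER
    value: "opendj-0.opendj"   # the first pet acts as the replication master
```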

Next,  fork and clone the ForgeRock Kubernetes project here:
https://stash.forgerock.org/projects/DOCKER/repos/fretes/browse

The opendj directory contains the Pet Set example.  You must edit the files to suit your needs, but as provided, the artifacts do the following:

  • Configures two OpenDJ servers (opendj-0 and opendj-1) in a pet set.
  • Runs the  cts/setup.sh script provided as part of the docker image to configure OpenDJ as an OpenAM CTS server.
  • Optionally assigns persistent volumes to each pet, so the data will live across restarts
  • Assigns “opendj-0” as the master.  The replicate.sh script provided as part of the Docker image will replicate each node to this master.  The script ignores any attempt by the master to replicate to itself.  As each pet is added (Kubernetes creates them in order) replication will be configured between that pet and the opendj-0 master.
  • Creates a Kubernetes service to access the OpenDJ instances. Instances can be addressed by their unique name (opendj-1), or by a service name (opendj) which will go through a primitive load balancing function (at this time round robin).  Applications can also perform DNS lookup on the opendj SRV record to obtain a list of all the OpenDJ instances in the cluster.

The replication topology is quite simple. We simply replicate each OpenDJ instance to opendj-0. This is going to work fine for small OpenDJ clusters. For more complex installations you will need to enhance this example.

To create the petset:

kubectl create -f opendj/

If you bring up the minikube dashboard:

minikube dashboard

You should see the two pets being created (be patient, this takes a while).

Take a look at the pod logs using the dashboard or:

kubectl logs opendj-0 -f

Now try scaling up your PetSet. In the dashboard, edit the Pet Set object, and change the number of replicas from 2 to 3:

You should see a new OpenDJ instance being created. If you examine the logs for that instance, you will see it has joined the replication topology.

Note: Scaling down the Pet Set is not implemented at this time. Kubernetes will remove the pod, but the OpenDJ instances will still think the scaled down node is in the replication topology.

This blog post was first published @ blog.warrenstrange.com, included here with permission.

Identity Disorder Podcast, Episode 1

I’m excited to introduce a new podcast series hosted by Daniel Raskin and myself. The series will focus on (what we hope are!) interesting identity topics, news about ForgeRock, events, and much more. Take a listen to the debut episode below where we discuss why and how to get rid of passwords, how stateless OAuth2 tokens work, and some current events, too!

-Chris

It’s The Little Things – Authentication Chains

Authentication Chains

We have not talked much about OpenAM on the blog. AM has some really great features that make it very simple to use. Perhaps my favourite feature is the authentication chains UI.

Let’s take a quick look at what an authentication chain looks like, then we will talk through it and have a go at creating a brand new one. I assume you are using OpenAM 13.

You can see what an auth chain looks like above. Essentially it is a series of steps ( I think of them as Lego-like building blocks ) for authentication. Each block represents a different mechanism for authenticating. In addition, each block is assigned one of four authentication behaviors (required, optional, requisite & sufficient) which determine how ( and if ) one block flows into the next, depending on whether that block succeeds.

As stated above, successful authentication requires at least one pass and no fail flags.

In the above example there are four blocks, lets look at each in turn:

  • DataStore: Basic username and password authentication against the OpenAM data store. If this step is a:
    • FAIL: The user hasn’t even got their username and password right. We definitely are not letting them in, and as such exit the chain with a FAIL.
    • PASS: The username and password is correct. We move to the next block in the chain DeviceMatch.
  • DeviceMatch: First step of device match authentication ( essentially asking the question: has OpenAM seen the user log in from this device before? ). If this step is a:
    • CONTINUE: OpenAM has not seen the user log in using this particular laptop or mobile before. This block has failed but, because it is sufficient this does not equate to a fail flag. We have to be a bit more suspicious and go into the TwoFactor block.
    • PASS: This is a device the user has used before and OpenAM recognises it. At this point the user has authenticated with username and password from a recognised device. We exit the chain with a PASS. 
  • TwoFactor: Challenge the user to provide the code from a two factor mobile soft token. This second factor proves that not only does the user have the right username and password, but also that they have the mobile device they originally registered with in their possession. If this step is a:
    • FAIL: The user has failed 2FA. At this point we don’t have the confidence this is really the user being claimed and exit with a FAIL.
    • PASS: The user has passed 2FA. We move on to the final block in the chain, DeviceSave.
  • DeviceSave: The last step of device match authentication. We save a record of the device so we can match it next time in the DeviceMatch step. If this step is a:
    • FAIL: The user is not actually being challenged for anything here; authentication is complete. We just need to save the device, which will not fail.
    • PASS: We have now saved the device, in future, so long as the user continues to use this particular laptop or mobile to login. They will not have to do the TwoFactor step.

Note that I have chosen the above authentication “blocks” for this particular blog. I could easily have used others. There are many different types of blocks available in OpenAM covering nearly every conceivable authentication requirement.
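The walkthrough above can be condensed into a rough evaluation model. The sketch below is illustrative only — it simplifies OpenAM's actual behaviour, and is just a way to reason about how the four criteria interact:

```python
# Illustrative model of authentication chain criteria -- not OpenAM product code.
def evaluate_chain(blocks):
    """blocks: list of (criteria, passed) tuples in chain order. Returns overall result."""
    failed_required = False
    passed_one = False
    for criteria, passed in blocks:
        if passed:
            passed_one = True
        if criteria == "requisite" and not passed:
            return False                       # exit the chain immediately with FAIL
        if criteria == "required" and not passed:
            failed_required = True             # chain continues, but must fail overall
        if criteria == "sufficient" and passed and not failed_required:
            return True                        # exit the chain immediately with PASS
        # "optional" blocks never affect the outcome
    return passed_one and not failed_required

# The chain from this post: a known device short-circuits at the sufficient block.
print(evaluate_chain([("requisite", True), ("sufficient", True)]))     # True
print(evaluate_chain([("requisite", False)]))                          # False
print(evaluate_chain([("requisite", True), ("sufficient", False),
                      ("requisite", True), ("required", True)]))       # True
```

The second call models a bad password (DataStore is requisite), and the third models the full unknown-device path through TwoFactor and DeviceSave.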

I think the way OpenAM allows you to quickly use these building blocks to build authentication visually is really neat.

Let’s now try building the above chain in OpenAM.

Building an Authentication Chain

Firstly we need to create the authentication building blocks we want. I am going to assume you have an installation of OpenAM up and running with a Top Level Realm configured ( though you can do this in any realm ).
Select the realm:
And navigate to Authentication, then Modules.
Out of the box the above modules are configured. We need to configure a few more.
Press Add Module, select “Device Match” from the drop down and give it a name ( I used DeviceMatch earlier ).
Press Create and you should see the configuration screen:
The defaults are fine here, just press Save Changes.
Now repeat the last two steps for the Device Id ( Save ) and ForgeRock Authenticator (OATH) modules.
When this is done you should have the following modules:
Now we need to create a new authentication chain. Navigate to Authentication, then Chains.
Press Add Chain, and give it a name ( I used secureAuthService above ) then press Create, you will now have an empty authentication chain.
Now just press Add Module for each module. You don't have to worry about the order; just add all the modules as in my example at the start of this blog:
If you get the order wrong, don’t worry about it! Just drag and drop authentication blocks to move them around. Ensure you have set the Criteria as follows:
DataStore: Requisite
DeviceMatch: Sufficient
TwoFactor: Requisite
DeviceSave: Required
Save Changes and you are done. That’s all there is to it!
Not quite… there is one additional step I want to do here. By default, Two Factor is optional for end users. In some cases that is desirable: it is an additional security control, and if you are a big retailer you don't want to force it on users, but you do want it to be an option for them.
However, in this demo I want to make it mandatory. To do so, navigate to Authentication, Settings, then General and check the Two Factor Authentication Mandatory box.
Then Save Changes.

Testing the Authentication Chain

So how do we test the authentication chain? Well, remember we named it secureAuthService? Let’s try logging in using the following URL:
http://localhost.localdomain.com:18080/openam/login?service=secureAuthService
 
Then try entering the standard demo and changeit credentials.
You would normally be logged into OpenAM at this point, however instead, you should see the following:
This is the DeviceMatch module doing its work. Make sure to press Share Location.
 
Note: this is just a default and that capturing location is optional.
 
As this is the first time I am logging in using this device, I need to use the ForgeRock Authenticator as a second factor.
Note: for this explanation I have already downloaded the ForgeRock Authenticator from the Apple App Store ( or Google Play ). I have also already registered it with OpenAM. The first time you do this you will be asked to register and will need to take a photo of a QR code in OpenAM. This is relatively straightforward, but feel free to leave questions in the comments.
 
 
I now enter the code generated by the ForgeRock Authenticator on my phone and, assuming I get that right, press SUBMIT. I am then asked if I want to trust this device ( the laptop I am logging in from ) and to give it a name:
After which I am successfully logged into OpenAM!
Now, if you try logging out and back in, you won't be challenged for 2FA again, so long as you are using the same laptop.
One more thing. If you log in again and navigate to DASHBOARD, you can see the trusted profile for your laptop and the 2FA token. If you want, you can delete the trusted profile, at which point OpenAM no longer knows about your laptop and will challenge you for 2FA again.
Authentication chains are really easy to understand and configure, and incredibly powerful.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

A Beginners Guide to OpenIDM – Part 2 – Objects & Relationships

Overview

At the heart of OpenIDM are managed objects. Out of the box three managed objects are configured:
  • User: User identities, effectively this is your central identity store.
  • Role: An object for modelling roles.
  • Assignment: An object for modelling assignments. Assignments are effectively ways of capturing sets of entitlements across mappings, which can then be associated with roles.
In this blog we will examine the user managed object in detail, roles and assignments will be explored later in the series.
It is important to understand that objects can really be anything and you can create new objects very easily. This is an incredibly powerful way to model all sorts of different things:
  • Users
  • Organisations, divisions, teams or other parts of a business.
  • Devices and hardware.
  • Products and offerings.
  • Anything else you can think of! Managed objects are completely configurable.
Not only can you model things, but you can also model the relationships between things. For example:
  • Which organisations a user belongs to.
  • The devices that a user owns.
  • The products a user has.
  • The teams that belong to an organisation.
  • Anything else you can think of!

Objects

All objects have the following properties:
  • Details: The name and icon that represents the object in the UI.
  • Schema: Properties, their validation rules and their relationships.
  • Scripts: Different hooks for running scripts throughout the object lifecycle e.g. postCreate
  • Properties: Rules for special attribute behaviors e.g. passwords should be encrypted and private.
Let's look at each of these in detail.

Details

Not much to say here. Just the name of your object, plus a funky icon that will be displayed throughout the interface wherever your object is used.

Schema

The properties that actually comprise your object. Let's take a look at the managed user schema.
On the left, under Schema Properties you can see each property that comprises a user. There are many properties available out of the box and you can easily add or remove properties as required.
Let’s look at a property in detail.
So what does a property comprise?
  • Property Name: The internal name used within the OpenIDM platform to refer to the property; think of it like a variable name, only used internally.
  • Readable Title: The name that will be used to refer to the property in the user interface.
  • Description: Simple description of the attribute that when populated is used throughout the interface as a tooltip.
  • Viewable: Can it be seen in the UI?
  • Searchable: Is it indexed and searchable in the UI?
  • End users allowed to edit: Users are allowed to update the value using self service.
  • Minimum Length: Minimum length of the attribute value.
  • Pattern: Any specific pattern to which the value of the property must adhere. e.g. date formats.
  • Validation Policies: Rules that can be used to define attribute behavior. We will look at these in detail in a moment.
  • Required: Must be populated with a value.
  • Return by Default: If true, will be returned when user details are requested via the API. If false, will only be returned if specifically asked for.
  • Type: Type of the attribute: String, Array, Boolean, Integer, Number, Object or Relationship. We will look at relationships in a moment.
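Concretely, these settings end up as JSON in the managed objects configuration (conf/managed.json). A hedged sketch of what a single property definition looks like — field names inferred from the list above; check your own managed.json for the authoritative shape:

```json
"mail": {
    "title": "Email Address",
    "description": "Email Address",
    "viewable": true,
    "searchable": true,
    "userEditable": true,
    "policies": [
        { "policyId": "valid-email-address-format" }
    ],
    "required": true,
    "returnByDefault": true,
    "type": "string"
}
```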

Validation Policies

Validation policies are ways to validate the attribute. The example below checks that the mail attribute is a valid email address. This prevents the user from inputting an invalid email address during self registration or an administrator changing the email incorrectly.
 
Similarly, for the password attribute, validation policies allow you to enforce password rules, for example:
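A validation policy is essentially just a predicate over the attribute value. A minimal illustrative sketch of the two policies just discussed — the regex and the rule names below are simplifications of my own, not OpenIDM's actual policy implementations:

```python
import re

# Simplified email-format check -- real-world email validation is more involved.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def valid_email_address_format(value):
    """Rough check that the value looks like an email address."""
    return bool(EMAIL_PATTERN.match(value))

def minimum_length(value, min_length=8):
    """Minimum-length password rule."""
    return len(value) >= min_length

print(valid_email_address_format("user@example.com"))  # True
print(valid_email_address_format("not-an-email"))      # False
print(minimum_length("changeit"))                      # True
```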

Relationships

Relationships are incredibly powerful and really at the heart of what OpenIDM does. If you have installed OpenIDM in part 1 then I recommend you take a look at the out of the box managed objects to really understand this, however we will briefly discuss it.
The out of the box managed user object defines a relationship between managers and reports.
manager:
reports:
What are we saying here?
  • User’s have a manager. This is a Relationship. It is in fact a reverse relationship. As manager A, has reports X,Y,Z and reports X,Y,Z have the manager A.
  • User’s can also have reports. They may have multiple reports. Note this is an Array of Relationships: A manages X, A manages Y, A manages Z. Likewise this is a reverse relationship.
Relationships let you model relationships between all sorts of types of objects, users, organisations, devices, products, anything.

Scripts

Objects also have lifecycle events which can be used to trigger scripts.
Out of the box, the above scripts are configured:
onCreate: The script that runs when the object is created. In this case, a script used to set the default fields for a user.
onDelete: The script that runs when the object is deleted. In this case, a script is used to cleanup users after deletion.
These scripts are completely configurable and new scripts can easily be added.
If you try to add a new script you will see there are three options:
  1. Inline Script: a script defined within the UI.
  2. File Path: a script stored within the OpenIDM configuration directory. This is how the out of the box scripts work. If you navigate to /openidm/bin/defaults/script/ui you can examine these out of the box scripts to see what they do.
  3. Workflow: the event can be used to trigger a workflow.
Note: If you add new scripts, these should be placed somewhere else, usually: /usr/local/env/box/openidm/script
 
Scripting is a great way to do all sorts of things to help you manage objects.

Properties

Properties let you define additional behaviors for attributes.
  • Encrypted: The attribute value is encrypted. This means it can be decrypted and the value retrieved if required. 
  • Private: Restricts HTTP access to sensitive data, if this is true the attribute is not returned when using the REST API.
  • Virtual: The attribute is calculated on the fly, usually from a script.
  • Hashed: The attribute is hashed. Hashing is a one-way function and the usual way that passwords should be stored. You hash the password when a user registers for the first time. When they log in subsequently, you hash the password they enter and compare it against the original hash. If the hashes match, you know the passwords are the same. Crucially, it is impossible to take a hash and extract the original password from it.
A common use for virtual attributes is calculating effective roles. Effective roles are dynamically calculated using an out of the box script:
You can examine the script here: /openidm/bin/defaults/script/roles/effectiveRoles.js. 
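The register-then-verify flow described in the Hashed bullet above can be sketched with Python's standard library. PBKDF2 here is illustrative only; OpenIDM's actual hashing algorithm is configurable and may differ:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a random salt using PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-hash the supplied password and compare digests in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

# Registration stores only (salt, digest); login re-hashes and compares.
salt, digest = hash_password("changeit")
print(verify_password("changeit", salt, digest))   # True
print(verify_password("wrong", salt, digest))      # False
```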

Managed Objects and the REST API

For the final part of this blog I want to take a look at something I think is pretty cool. The OpenIDM REST API.
All managed objects ( including the ones you can create yourself ) are automatically made available using a REST API.

Using the API you can Create, Read, Update and Delete objects ( CRUD ), as well as search for and query objects. We will dive into the REST API later in the series, but we can do a quick demo just to get a feel for how it works.

I recommend downloading Postman for this. Postman is a plug-in for Chrome that lets you easily invoke REST APIs. You can grab it here: https://www.getpostman.com/
Once you have Postman. Log into OpenIDM as administrator and go to Manage, then User and create a new user:
Press Save. Now look at the URL:
Note the long string of letters and numbers. This is the object id for our new user.
Now if we go to Postman, we can setup a new request:
Make sure you populate the headers as I have above. Set the request to a GET and enter a URL to return. In our case:
How does this break down:
Now, if you press Send, you should retrieve the user we just created:
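If you would rather script this than use Postman, the same GET can be built with plain HTTP tooling. A sketch using Python's standard library — the host, port and object id are placeholders you must substitute with your own values; X-OpenIDM-Username / X-OpenIDM-Password are OpenIDM's authentication headers:

```python
import urllib.request

# Placeholder values -- substitute your OpenIDM host and the object id from the UI.
OPENIDM = "http://localhost:8080"
OBJECT_ID = "some-object-id"

request = urllib.request.Request(
    f"{OPENIDM}/openidm/managed/user/{OBJECT_ID}",
    headers={
        "X-OpenIDM-Username": "openidm-admin",
        "X-OpenIDM-Password": "openidm-admin",
    },
)
# urllib.request.urlopen(request) would perform the GET and return the user as JSON.
print(request.get_full_url())
```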
This is just a small taster of what the REST API can do and we will explore it in much more detail in later blogs. You can also read all about the REST API here:


This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Custom Stages to User Self-Service

Commons Project

One of the great features of the OpenAM 13 and OpenIDM 4 releases is actually not a feature of those products at all.  I'm talking about the Self-Service capability, which is actually part of the 'COMMONS' project at ForgeRock.  As the name suggests, functionality in 'Commons' may be found in more than one of the final products.  Commons includes capabilities such as audit logging, the CREST framework, the UI framework, and the User Self-Service functionality.

Self-Service Configuration

Now there is lots of good documentation about how to make use of the User Self-Service functionality as exposed through the OpenAM and OpenIDM products.
For example, for OpenAM: https://backstage.forgerock.com/#!/docs/openam/13/admin-guide#chap-usr-selfservices, and for OpenIDM: https://backstage.forgerock.com/#!/docs/openidm/4/integrators-guide#ui-configuring.

Whilst the end-user functionality is the same, the way you configure the functionality in OpenAM and OpenIDM is slightly different.  This is the OpenIDM configuration view:

One thing you might notice from the documentation is that the User Self-Service flows (Registration, Forgotten Password, and Forgotten Username) have the ability to use custom ‘plugins’ – sometimes called ‘stages’.  However, another thing you might notice is a distinct lack of documentation on how to go about creating such a plugin/stage and adding it to your configuration.

Note that there is an outstanding JIRA logged for ForgeRock to provide documentation (https://bugster.forgerock.org/jira/browse/OPENIDM-5630) but, in the meantime, this post attempts to serve as an interim.  But, I’m only going to use OpenIDM as the target here, not OpenAM.

Fortunately, the JIRA linked above highlights that within the code base there already exists the code for a custom module, and a sample configuration.  So, in this post I’ll explain the steps required to build, deploy, and configure that pre-existing sample.

The easiest way to do this is to get the code of the Standalone Self-Service application!

Get the Standalone Self-Service code

You’ll need to head over the ‘self-service’ repository in the ‘COMMONS’ project of the ForgeRock Stash server: https://stash.forgerock.org/projects/COMMONS/repos/forgerock-selfservice/browse
(You may need to register, but go ahead, it’s free!)

If you’re following the instructions in this post, and are targeting OpenIDMv4  (as opposed to any later releases) then you’ll specifically want v1.0.3 of this SelfService repository.
i.e
https://stash.forgerock.org/projects/COMMONS/repos/forgerock-selfservice/browse?at=refs%2Ftags%2F1.0.3

Now download the code to your local drive so we can build it.

Compile and run it

You can see that the ‘readme.txt’ provides the details of how to compile this project.  Note that this will compile the full ‘Commons Self-Service’ functionality, including the custom stage, in a standalone harness.

Once it’s built you can browse to the web site and play with the functionality.  Any users registered are held in memory of this harness, and therefore flushed each time you stop the jetty container.

It’s also worth noting that the readme.txt instructs you to enter email username and password.  These are used to connect to the configured email service of this test harness in order to actually send registration emails.  (The implementations in OpenAM and OpenIDM will use the configured email services for those products).  By default, the configured email service is gmail.  And, by default, gmail stops this type of activity unless you change your gmail account settings.  However, you may instead choose to run a dummy SMTP service to capture the sent emails.  One such utility that I’ll use here is FakeSMTP: https://nilhcem.github.io/FakeSMTP/
So, once you have an accessible SMTP service, you might now need to change the email service config of the User Self-Service functionality.  You find this for the test harness – assuming the mvn build has worked successfully – here:

forgerock-selfservice-example/target/classes/config.json

If you’re running FakeSMTP on port 2525, then this might look like:

{
  "mailserver": {
    "host": "localhost",
    "port": "2525"
  }
}

Now when you run the example webapp, emails will be delivered to FakeSMTP (and whatever you enter for username and password will be ignored).

So, go ahead, register a user. The first thing you should see is a “Math Problem” stage.  Eh? What? Where did that come from? Well, that’s the custom stage!!  Yes, this standalone version of Self-Service includes the custom stage!

Step 1. Math Problem

Assuming you can add 12 and 4 together complete the ‘puzzle’.  Then follow the remaining steps of the registration (noting that the email gets delivered to FakeSMTP, where you can open it and click the link to complete the registration).

Step 2. Email Verification
Email link
Step 3. Register details
Step 4. KBA
Success!

Inspect the configuration

Now, if we take a look at the ‘Registration Stage’ configuration for this example app, which we can find here:

forgerock-selfservice-example/target/classes/registration.json

we will see it begins like this:

{
  "stageConfigs": [
    {
      "class" : "org.forgerock.selfservice.custom.MathProblemStageConfig",
      "leftValue" : 12,
      "rightValue" : 4
    },
    {
      "name" : "emailValidation",
      "emailServiceUrl": "/email",
      "from": "info@admin.org", 
...

Brilliant!  That first item in the stageConfigs array is the "Math Problem" with its configuration (i.e. which two numbers to sum!).  The remaining array items use 'name' to reference the native/in-built modules, along with their necessary configuration.
So, what we’ve achieved so far is:

  • Compiled the custom stage (Math Problem)
  • Compiled the standalone version of Common User Self-Service
  • Tested a Registration flow that includes the custom stage, along with some native stages.

And what’s left to do?

  • Deploy and configure the custom stage in OpenIDM

 

OpenIDM Deployment

Simply copy the custom stage JAR file to the ‘bundle’ directory of your OpenIDM deployment
e.g. 
cp forgerock-selfservice-custom-stage/target/forgerock-selfservice-custom-stage-1.0.3.jar <openidm>/bundle
And update the ‘selfservice-registration.json’ file in your ‘conf’ folder.
This is OpenIDM’s name for the ‘registration.json’ file of the standalone SelfService app, so use the same configuration snippet you saw in the standalone app.
It seems that this method will not allow you to see the custom stage in the Admin UI for OpenIDM. Happily, changes made within the Admin UI do not remove the custom stage from the file. Be warned, though, that if you disable and then re-enable Self-Service Registration, there is no guarantee that the custom stage will be added back into the re-created ‘selfservice-registration.json’ file in the correct place.
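If you want a safety net for that, a small script can check the re-created file and put the custom stage back. A minimal sketch in Python (the built-in stage entries below are illustrative, not an exact OpenIDM listing, and inserting at the front simply mirrors the standalone example):

```python
import json

# Hypothetical stageConfigs as OpenIDM might re-create them after
# disabling/re-enabling registration, with the custom stage missing.
# (The built-in stage entries here are illustrative.)
recreated = {
    "stageConfigs": [
        {"name": "emailValidation", "emailServiceUrl": "/email"},
        {"name": "kbaSecurityAnswerDefinitionStage"},
    ]
}

math_stage = {
    "class": "org.forgerock.selfservice.custom.MathProblemStageConfig",
    "leftValue": 12,
    "rightValue": 4,
}

stages = recreated["stageConfigs"]

# Put the custom stage back at the front if the UI dropped it.
if not any(s.get("class", "").endswith("MathProblemStageConfig") for s in stages):
    stages.insert(0, math_stage)

print(json.dumps(recreated, indent=2))
```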

So, with User Self Registration enabled, and the custom stage included in the config, when a user tries to register they will be prompted for the calculation at the appropriate point in the flow.

Custom Stage in OpenIDM flow!

Exercise for the Reader!

As you can see, I have used the pre-supplied custom stage, showing one approach to building, deploying and configuring it.  If you need a custom stage to do something different, then you’ll have to investigate the interfaces that need implementing in order to develop your own.
 

 

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

OpenDJ: Configuration over REST

In recent builds of OpenDJ directory server, the REST to LDAP configuration changed quite a bit… for the better.

The draft release notes tell only part of the story:

The changes let you configure multiple endpoints each with multiple versions, resource type inheritance, subresource definitions, and protection with OAuth 2.0. This version of REST to LDAP also brings many minor improvements.

A really cool “minor” improvement is that you can now configure OpenDJ directory server over HTTP. In the draft Admin Guide, you can also find a procedure titled, To Set Up REST Access to Administrative Data.

tl;dr—Directory administrators can configure the server over REST through the /admin/config endpoint, and can read monitoring info under the /admin/monitor endpoint.
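Under the hood this is plain HTTP with basic authentication, so any client works. A minimal sketch in Python that only constructs the request without sending it (hostname, port, and the kvaughan credentials match the curl example further down):

```python
import base64
import urllib.request

# Hostname, port, and credentials match the curl example in this post.
base = "http://opendj.example.com:8080"
user, password = "kvaughan", "bribery"

# Build the Basic auth header by hand so it's clear what curl --user does.
token = base64.b64encode(f"{user}:{password}".encode()).decode()
request = urllib.request.Request(
    base + "/admin/config/password-policies",
    headers={"Authorization": "Basic " + token},
)

# Nothing is sent here; urlopen(request) would perform the read.
print(request.get_method(), request.full_url)
```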

Important note: Before you go wild writing a whole new OpenDJ web-based console as a single-page app, keep in mind that the REST to LDAP implementation is still an Evolving interface, so incompatible changes can happen even in minor releases.

Here’s one example using /admin/config:

#
# This example demonstrates 
# using the /admin/config endpoint
# to create a password policy
# as a directory administrator
# who is also a regular user.
# 
# This requires a nightly build or release 
# from no earlier than late June 2016.
# 
# In order to get this working,
# first set up OpenDJ directory server
# with data from Example.ldif,
# and enable the HTTP connection handler.
#

#
# Give Kirsten Vaughan the right
# to read/write the server configuration.
# This command updates privileges, 
# which are explained in the Admin Guide:
#
/path/to/opendj/bin/ldapmodify \
 --port 1389 \
 --bindDN "cn=Directory Manager" \
 --bindPassword password
dn: uid=kvaughan,ou=People,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: config-read
-
add: ds-privilege-name
ds-privilege-name: config-write

#
# Give Kirsten access to write password policies.
# This command adds a global ACI.
# Global ACIs are explained in the Admin Guide:
#
/path/to/opendj/bin/dsconfig \
 set-access-control-handler-prop \
 --port 4444 \
 --hostname opendj.example.com \
 --bindDN "cn=Directory Manager" \
 --bindPassword "password" \
 --add global-aci:"(target=\"ldap:///cn=Password Policies,cn=config\")(targetscope=\"subtree\")(targetattr=\"*\")(version 3.0; acl \"Manage password policies\"; allow (all) userdn=\"ldap:///uid=kvaughan,ou=People,dc=example,dc=com\";)" \
 --trustAll \
 --no-prompt

#
# Server config-based password policies
# are under the container entry
# /admin/config/password-policies.
# This corresponds to 
# cn=Password Policies,cn=config in LDAP.
#
# The following are standard common REST operations.
# Common REST is explained and demonstrated
# in the OpenDJ Server Dev Guide.
#
# In production, of course,
# use HTTPS (as described in the Admin Guide).
#

#
# Add a new password policy:
#
curl \
 --user kvaughan:bribery \
 --request POST \
 --header "Content-Type: application/json" \
 --data '{
    "_id": "New Account Password Policy",
    "_schema": "password-policy",
    "password-attribute": "userPassword",
    "force-change-on-add": true,
    "default-password-storage-scheme": "Salted SHA-1"
}' http://opendj.example.com:8080/admin/config/password-policies

#
# Read the new password policy:
#
curl --user kvaughan:bribery http://opendj.example.com:8080/admin/config/password-policies/New%20Account%20Password%20Policy

#
# An exercise for the reader:
# Figure out how to set a user's pwd policy over REST.
#

If you have not yet learned how to use Common REST and OpenDJ REST to LDAP, have a look at the Server Dev Guide chapter, Performing RESTful Operations.