Running OpenAM and OpenDJ on Kubernetes with Google Container Engine



Still quite experimental, but if you are adventurous, have a look at:

https://github.com/ForgeRock/frstack/tree/master/docker/k8


This will set up a two-node Kubernetes cluster running OpenAM and OpenDJ. It uses images on Docker Hub that provide nightly builds of OpenAM and OpenDJ.

I will be presenting this at the ForgeRock IRM Summit this Thursday. Fingers crossed that the demo gods smile down on me!


Dart’s Async / Await is here. The Future starts now




The latest development release of the Dart Editor includes experimental support for async / await. Check out Gilad's article for an introduction. In the Editor, go to Preferences -> Experimental to enable this feature.

async / await is "syntactic sugar" for what can be accomplished using Futures, Completers and a whack of nested then() closures. But this sugar is oh so sweet (and calorie free!). Your code will be much easier to understand and debug.


Here is a little before-and-after example using async/await. In this example, we need to perform three async LDAP operations in sequence. Using Futures and nested then() closures, we get something like this:



// add mickey to directory
ldap.add(dn, attrs).then(expectAsync((r) {
  expect(r.resultCode, equals(0));
  // modify mickey's sn
  var m = new Modification.replace("sn", ["Sir Mickey"]);
  ldap.modify(dn, [m]).then(expectAsync((result) {
    expect(result.resultCode, equals(0));
    // finally delete mickey
    ldap.delete(dn).then(expectAsync((result) {
      expect(result.resultCode, equals(0));
    }));
  }));
}));


Kinda ugly, isn't it? And hard to type (did you miss a closing brace somewhere?). The sequence of operations is just not easy to see, and if we add error handling for each Future operation, it gets even worse.


So let's rewrite using async/await:



// add mickey
var result = await ldap.add(dn, attrs);
expect(result.resultCode, equals(0));

// modify mickey's sn
var m = new Modification.replace("sn", ["Sir Mickey"]);
result = await ldap.modify(dn, [m]);
expect(result.resultCode, equals(0));

// finally delete mickey
result = await ldap.delete(dn);
expect(result.resultCode, equals(0));


The intent of the async/await version is much easier to follow, and we can eliminate the expectAsync unit-test weirdness.

If any of the Futures throws an exception, we can handle that with a nice vanilla try / catch block.  Here is a sample unit test where the Future is expected to throw an error:



test('Bind to a bad DN', () async {
  try {
    await ldap.bind("cn=foofoo", "password");
    fail("Should not be able to bind to a bad DN");
  } catch (e) {
    expect(e.resultCode, equals(ResultCode.INVALID_CREDENTIALS));
  }
});

Note the use of the "async" keyword before the curly brace. You must mark a function as async in order to use await.
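For example, here is a tiny self-contained sketch, where the delayed Future just stands in for a real async call such as an LDAP operation:

import 'dart:async';

// Marking greet() as async lets us use await in its body.
// The function then implicitly returns a Future.
Future<String> greet() async {
  // Stand-in for a real async operation.
  var name = await new Future.delayed(
      new Duration(milliseconds: 100), () => "Mickey");
  return "Hello, $name";
}

main() async {
  print(await greet());
}

Calling greet() returns a Future immediately; the await in main() suspends until it completes.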

Yet another reason to love Dart!





ForgeRock OpenIG 3.0 - OIDC authentication example


My colleague Simon Moffat has written a nice introductory article on some of the new features in OpenIG 3.0.

OpenIG is a Java-based reverse proxy server with a focus on solving identity management challenges. The 3.0 release adds support for scripting in Groovy and JavaScript, along with new authentication and authorization filters for OpenID Connect and OAuth 2.

I like to describe OpenIG as the Swiss Army knife of identity proxy servers. It can perform arbitrary transformations on HTTP requests and broker them to a number of backend services.
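To give a flavour of the new scripting support, a Groovy scriptable filter is just a script that can inspect or modify the exchange before handing it to the next handler in the chain. A minimal sketch (see the OpenIG docs for the exact script bindings; the header name here is made up):

// Tag each request with a header, then pass the exchange
// down the chain to the next handler.
exchange.request.headers["X-Forwarded-By"] = ["OpenIG"]
next.handle(exchange)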


If you want a "ready to run" sample OpenIG project that demonstrates the new OpenID Connect filter, have a look at example1 in https://github.com/wstrange/openig_examples


Hopefully the README.md clearly explains how this all works, but if not, drop me a note and I will improve the documentation.


If you have any OpenIG samples that you would like to share please feel free to send a pull request.





Systemd is the Cat’s Pyjamas



I have been converting some of the startup scripts for my Open Identity Stack project to use systemd. Systemd is now available on Fedora, CentOS and Red Hat, and is coming soon to Debian and Ubuntu (you can actually get it now in Debian testing).

What strikes me is how dead simple it is to create init services that just work. Here is an example openidm.service that leverages the start/stop scripts that come with OpenIDM:




[Unit]
Description=OpenIDM
After=remote-fs.target nss-lookup.target

[Service]
Type=simple
ExecStart=/opt/ois/openidm/startup.sh
ExecStop=/opt/ois/openidm/shutdown.sh
User=fr
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target

* The only tricky thing above is the SuccessExitStatus. Exit code 143 is 128 + 15 (SIGTERM): a Java process stopped with SIGTERM exits with this code, so we tell systemd to treat it as a clean shutdown rather than a failure.

Copy the above to /etc/systemd/system/openidm.service and you are good to go:

systemctl start openidm.service
systemctl stop openidm.service
systemctl status openidm.service
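If systemd does not notice the new unit file right away, reload it; and to have OpenIDM start automatically at boot, enable it:

systemctl daemon-reload
systemctl enable openidm.service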

To see the output for the unit use:

journalctl -u openidm.service

Scroll to the end by typing "G".


This service file was dead simple to write, and "it just works". Systemd takes care of tracking the process and any spawned children.

By way of contrast, I will leave you with the old openidm init script that I spent considerable time hacking to get the correct start/stop behavior. I could never get this init script to reliably execute the shell scripts that came with OpenIDM.


#!/bin/sh
# chkconfig: 345 95 5
# description: start/stop openidm

# clean up left over pid files if necessary
cleanupPidFile() {
    if [ -f $OPENIDM_PID_FILE ]; then
        rm -f "$OPENIDM_PID_FILE"
    fi
    trap - EXIT
    exit
}

JAVA_BIN={{java_home}}/bin/java

OPENIDM_HOME={{install_root}}/openidm
OPENIDM_USER={{fr_user}}
OPENIDM_PID_FILE=$OPENIDM_HOME/.openidm.pid
OPENIDM_OPTS="-Xmx1024m -Dfile.encoding=UTF-8"

cd ${OPENIDM_HOME}

# Set JDK Logger config file if it is present and an override has not been issued
if [ -z "$LOGGING_CONFIG" ]; then
    if [ -r "$OPENIDM_HOME"/conf/logging.properties ]; then
        LOGGING_CONFIG="-Djava.util.logging.config.file=$OPENIDM_HOME/conf/logging.properties"
    else
        LOGGING_CONFIG="-Dnop"
    fi
fi

CLASSPATH="$OPENIDM_HOME/bin/*:$OPENIDM_HOME/framework/*"
START_CMD="nohup $JAVA_BIN $LOGGING_CONFIG $JAVA_OPTS $OPENIDM_OPTS \
    -Djava.endorsed.dirs=$JAVA_ENDORSED_DIRS \
    -classpath $CLASSPATH \
    -Dopenidm.system.server.root=$OPENIDM_HOME \
    -Djava.awt.headless=true \
    org.forgerock.commons.launcher.Main -c $OPENIDM_HOME/bin/launcher.json > $OPENIDM_HOME/logs/server.out 2>&1 &"

case "${1}" in
    start)
        su $OPENIDM_USER -c "$START_CMD eval echo \$\! > $OPENIDM_PID_FILE"
        exit ${?}
        ;;
    stop)
        ./shutdown.sh > /dev/null
        exit ${?}
        ;;
    restart)
        ./shutdown.sh > /dev/null
        su $OPENIDM_USER -c "$START_CMD eval echo \$\! > $OPENIDM_PID_FILE"
        exit ${?}
        ;;
    *)
        echo "Usage: openidm { start | stop | restart }"
        exit 1
        ;;
esac

Will it blend? Configure OpenAM to use Ping’s OIDC RP module



OpenAM can be configured as an OpenID Connect provider. Ping provides an open source relying party (RP) module for Apache that supports OIDC. This module is an Apache filter that protects pages and requires the user to authenticate with an OIDC provider. The module asserts the user's identity to proxied applications by setting HTTP headers.
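For example, once a user has authenticated, the application behind the proxy typically sees the user's claims as request headers. By default the module prefixes claim headers with OIDC_CLAIM_; the request and values below are purely illustrative:

GET /index.html HTTP/1.1
Host: www.example.com
OIDC_CLAIM_sub: mickey
OIDC_CLAIM_email: mickey@example.com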

Prerequisites:
  • A recent OpenAM 12 build. Subscription customers can contact ForgeRock to get the required functionality in OpenAM 11.x
  • The Ping OIDC module from here https://github.com/pingidentity/mod_auth_openidc
  • Configure OpenAM as an OIDC provider
  • Create an Agent for the Ping module (Realm -> Agents -> OAuth2 -> new agent)

The Apache configuration details will depend on your O/S distribution. Create an Apache .conf file for the OIDC module and include it in your configuration. Here is an example:

OIDCProviderIssuer https://openam.example.com:8443/openam
OIDCProviderAuthorizationEndpoint https://openam.example.com:8443/openam/oauth2/authorize
OIDCProviderTokenEndpoint https://openam.example.com:8443/openam/oauth2/access_token
OIDCProviderTokenEndpointAuth client_secret_post
OIDCProviderUserInfoEndpoint https://openam.example.com:8443/openam/oauth2/userinfo
OIDCSSLValidateServer Off
OIDCOAuthSSLValidateServer Off

OIDCClientID apache
OIDCClientSecret password
OIDCScope "openid email profile"
OIDCRedirectURI https://www.example.com:1443/openam/redirect_uri
OIDCCryptoPassphrase password

<Location /openam/>
    AuthType openid-connect
    Require valid-user
</Location>


The OIDC configuration will depend on the details of your OpenAM installation. Things to watch out for:

  • Add the redirect URI to OpenAM's agent configuration. In the above example the Apache server is available at www.example.com. The redirect_uri above is not a real web resource (you will not find a page that corresponds to that URL); the Ping module intercepts requests to this URL to handle the OAuth protocol dance.
  • The Location directive (/openam) protects pages at that root with the OIDC module. This is just an example - you do not need to use /openam. 

Ansible roles to install ForgeRock’s OpenDJ LDAP server



Ansible is a really nice "dev-ops" automation tool in the spirit of Chef, Puppet, etc. Its virtues are simplicity, an "agentless" installation model, and a very active and growing community.

One of the neat features of Ansible is the concept of "roles". These are reusable chunks of dev-ops code that perform a specific task. Ansible "Playbooks" orchestrate a number of roles together to perform software installation and configuration.


Roles by themselves are not sufficient to drive reusability. We need a way to collaborate and share roles. Enter Ansible Galaxy, the central repository for Ansible roles.

If you have ever used apt or yum, Galaxy will feel quite familiar. For example, to install and use the "opendj" role, you issue the following command:

$ ansible-galaxy install warren.strange.opendj

(Roles are prefixed with a contributor name to avoid name collisions).


If you want to install ForgeRock's OpenDJ server, here are two new Ansible roles:


  • opendj - downloads and installs the OpenDJ server
  • opendj-replication - sets up replication between two OpenDJ instances



Here is a sample Ansible playbook that installs two instances on a single host and replicates between them:


---
# Example of installing two OpenDJ instances on the same host (different ports)
# and enabling replication between them.
# Most of the variables here are defaulted (see the role's opendj/defaults/main.yml).
- remote_user: fr
  sudo: yes
  hosts: ois
  roles:
    - { role: warren.strange.opendj, install_root: "/opt/a" }
    - { role: warren.strange.opendj, install_root: "/opt/b", opendj_admin_port: 1444,
        opendj_ldap_port: 2389, opendj_ldaps_port: 2636, opendj_jmx_port: 2689,
        opendj_service_name: "opendj2" }
    - { role: warren.strange.opendj-replication, install_root: "/opt/a",
        opendj_host2: localhost, opendj_admin_port2: 1444 }
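To run the playbook against your inventory (the inventory and playbook file names here are just placeholders):

$ ansible-playbook -i hosts opendj.yml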



This is my first attempt at an Ansible role. Feedback is most welcome! 
	

Logstash configuration for collecting OpenAM and OpenIDM logs


Following on from my previous post, here is a Logstash configuration that collects logs from both OpenAM and OpenIDM and feeds them into Elasticsearch:



input {
  file {
    type => "idmRecon"
    start_position => "beginning"
    path => "/opt/openidm/audit/recon.csv"
  }
  file {
    type => "idmActivity"
    start_position => "beginning"
    path => "/opt/openidm/audit/activity.csv"
  }
  file {
    type => "amAccess"
    # start_position => "beginning"
    path => "/opt/openam/openam-config/openam/log/amAuthentication.*"
  }
}

filter {
  if [type] == "idmRecon" {
    csv {
      columns => [
        "idX", "action", "actionId", "ambiguousTargetObjectIds", "entryType",
        "message", "reconciling", "reconId", "rootActionId", "situation",
        "sourceObjectId", "status", "targetObjectId", "timestamp"
      ]
    }
    date {
      match => ["timestamp", "ISO8601"]
    }
  }
  if [type] == "idmActivity" {
    csv {
      columns => [
        "_id", "action", "activityId", "after", "before", "changedFields",
        "message", "objectId", "parentActionid", "passwordChanged", "requester",
        "rev", "rootActionId", "status", "timestamp"
      ]
    }
    date {
      match => ["timestamp", "ISO8601"]
    }
  }
  if [type] == "amAccess" {
    csv {
      columns => [
        "time", "Data", "LoginID", "ContextID", "IPAddr", "LogLevel",
        "Domain", "LoggedBy", "MessageID", "ModuleName", "NameID", "HostName"
      ]
      separator => " "
    }
    date {
      match => ["time", "yyyy-MM-dd HH:mm:ss"]
    }
    geoip {
      database => "/usr/share/GeoIP/GeoIP.dat"
      source => ["IPAddr"]
    }
  }
}

output {
  # Use stdout in debug mode to see what logstash makes of each event.
  stdout {
    debug => true
    codec => rubydebug
  }
  elasticsearch { embedded => true }
}



Now we can issue Elasticsearch queries across all of the data sets. Here is a very simple Kibana dashboard showing events over time and their source:

[Kibana dashboard screenshot: OpenAM and OpenIDM events over time, by source]
While this configuration is quite basic, it allows us to find and correlate events of interest across OpenAM and OpenIDM.

Try searching for a sample user "fred" by entering the string into the top search box. You will see all OpenAM and OpenIDM events that contain this string in any field. You can of course build more specific queries, but the default free-form search does an excellent job.
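You can also query Elasticsearch directly over its REST API. Assuming the embedded instance is listening on the default port 9200:

$ curl 'http://localhost:9200/_search?q=fred&pretty=true'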