How to configure social authentication with LinkedIn

When trying to configure Social Authentication with OpenAM 12 you may notice that out of the box OpenAM only supports Microsoft, Google and Facebook. The reasoning behind this is that at the time of implementation these providers supported OpenID Connect (well, Facebook supports Facebook Connect, but that’s close enough). If you would like to set up social authentication with other providers, that is still possible, just a bit tricky. In this article I’m going to show how social authentication can be configured with, for example, LinkedIn (which currently only supports OAuth2, not OIDC).

Create an OAuth2 app at LinkedIn

In order to be able to obtain OAuth2 access tokens from LinkedIn, you will need to register your OpenAM as a LinkedIn application by filling out some silly forms. The second page of this wizard gets a bit more interesting, so here are a couple of things that you should do:

  • Take a note of the Client ID and Client Secret displayed.
  • Make sure that OpenAM’s Redirect URI is added to the list of valid OAuth 2.0 Authorized Redirect URLs; by default that would look something like:
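The OAuth 2.0 / OpenID Connect authentication module handles the OAuth2 callback via its OAuthProxy.jsp endpoint under the OpenAM deployment URI. Assuming OpenAM is deployed at openam.example.com:8080 (hostname and port are placeholders), the redirect URL would be:

```
http://openam.example.com:8080/openam/oauth2c/OAuthProxy.jsp
```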

Configure OpenAM for Social authentication

To simply configure LinkedIn for OAuth2 based authentication, you just need to create a new authentication module instance with OAuth 2.0 / OpenID Connect type. With ssoadm that would look something like:

$ openam/bin/ssoadm create-auth-instance -e / -m linkedin -t OAuth -u amadmin -f .pass

This just configures an OAuth2 authentication module with the default settings, so now let’s update those settings to actually match up with LinkedIn:

$ openam/bin/ssoadm update-auth-instance -e / -m linkedin -u amadmin -f .pass -D

Where the data file referenced by -D contains:
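As a hedged sketch of what such a data file could contain: the attribute names below match my recollection of OpenAM 12’s OAuth module (amAuthOAuth), and the LinkedIn endpoints reflect their OAuth2 API of the era, so treat every line as an assumption to verify against your own deployment:

```
# Hypothetical sketch -- verify attribute names against your OpenAM version
iplanet-am-auth-oauth-client-id=<your LinkedIn Client ID>
iplanet-am-auth-oauth-client-secret=<your LinkedIn Client Secret>
iplanet-am-auth-oauth-auth-service=https://www.linkedin.com/uas/oauth2/authorization
iplanet-am-auth-oauth-token-service=https://www.linkedin.com/uas/oauth2/accessToken
iplanet-am-auth-oauth-user-profile-service=https://api.linkedin.com/v1/people/~
iplanet-am-auth-oauth-scope=r_basicprofile
```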


At this stage you should be able to authenticate with LinkedIn by simply opening up /openam/XUI/#login/&module=linkedin.

To set up this OAuth2 module for social authentication you just need to do a few more things:
Add the authentication module to a chain (social authentication uses authentication chains to allow more complex authentication flows):

$ openam/bin/ssoadm create-auth-cfg -e / -m linkedinChain -u amadmin -f .pass
$ openam/bin/ssoadm add-auth-cfg-entr -e / -m linkedinChain -o linkedin -c REQUIRED -u amadmin -f .pass

Now to enable the actual social authentication icon on the login pages, just add the Social authentication service to your realm:

$ openam/bin/ssoadm add-svc-realm -e / -s socialAuthNService -u amadmin -f .pass -D social.txt

Where social.txt contains:
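As a hedged sketch (the Social Authentication Implementations service attribute names below are from memory of OpenAM 12 and should be verified against your version; the icon URL is a placeholder), social.txt could look like:

```
# Hypothetical sketch -- verify attribute names against your OpenAM version
socialAuthNEnabled=linkedin
socialAuthNDisplayName=[linkedin]=LinkedIn
socialAuthNAuthChain=[linkedin]=linkedinChain
socialAuthNIcon=[linkedin]=https://example.com/linkedin-logo.png
```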


Please keep in mind that OAuth2 is primarily an authorization protocol; for authentication you should really use OpenID Connect. As the social authentication implementation is quite generic, you should actually be able to configure any kind of authentication mechanism and display it with a pretty logo on the login page if you’d like.

Some links I’ve found useful when writing up this post:
OpenAM 12 – Social Authentication
LinkedIn OAuth2 docs

LDAPS or StartTLS? That is the question…

Due to the various security issues around the different SSL implementations, I’ve seen an increasing demand for OpenAM’s StartTLS support, even though OpenAM already supports LDAPS perfectly well. In this post I’m going to show you how to set up both StartTLS and LDAPS in a dummy OpenDJ 2.6.2 environment, and then I’ll attempt to compare them from a security point of view.

NOTE: the instructions provided here for setting up secure connections are by no means the most accurate ones, as I only provide them for demonstration purposes. You should always consult the product documentation for much more detailed, and precise information.

Common Steps

Both LDAPS and StartTLS require a private key; I’m just going to assume you know how to generate/convert one (or obtain one from a trusted CA).
Once you have your JKS file ready, first make sure that it contains a PrivateKeyEntry:

$ keytool -list  -keystore server.jks 
Enter keystore password:  

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

1, 2015.04.22., PrivateKeyEntry, 
Certificate fingerprint (SHA1): AA:30:0D:8E:DE:4C:F9:AB:AC:FA:61:E7:B4:F5:56:EF:3E:F4:F6:4A

After verifying the keystore, copy it into the config folder with the name “keystore” and also create a file that contains the keystore’s password.

In order to make this JKS available for OpenDJ you’ll need to run the following dsconfig command (for this demo’s purpose we are going to reuse the existing Key/Trust Manager Providers):

dsconfig set-key-manager-provider-prop \
          --provider-name JKS \
          --set enabled:true \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ******

Since key management always goes hand-in-hand with trust management, we need to set up the Trust Management Provider as well. For this, you’ll need to create a JKS keystore which only contains the public certificate and place it into the config folder with the name “truststore“, and again, you’ll need to create a file containing the truststore’s password.

dsconfig set-trust-manager-provider-prop \
          --provider-name JKS \
          --set enabled:true \
          --set trust-store-pin-file:config/ \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ******


LDAPS

From a protocol point of view LDAPS is actually not too different from HTTPS: in order to establish a connection to the directory, the client MUST perform an SSL/TLS handshake with the server first, hence all LDAP protocol messages are transported over a secure, encrypted channel.
Configuring and enabling the LDAPS Connection Handler in OpenDJ doesn’t really take too much effort:

dsconfig set-connection-handler-prop \
          --handler-name "LDAPS Connection Handler" \
          --set enabled:true \
          --set listen-port:1636 \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ******

After this you will need to restart the directory server, during the startup you should see the following message:

[22/04/2015:20:47:26 +0100] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAPS Connection Handler port 1636

To test the connection, you can just run a simple ldapsearch command:

$ bin/ldapsearch -Z -h localhost -p 1636 -D "cn=Directory Manager" -w ****** -b dc=example,dc=com "uid=user.0" "*"
The server is using the following certificate: 
    Subject DN:,, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Issuer DN:,, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Validity:  Wed Apr 22 19:43:22 BST 2015 through Thu Apr 21 19:43:22 BST 2016
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":

As you can see I’ve been prompted to accept my self-signed certificate, and after entering “yes”, I could see the entry I’ve been looking for.


StartTLS

StartTLS for LDAP is slightly different from LDAPS, the main difference being that the client first needs to establish an unencrypted connection with the directory server. At any point after establishing the connection (as long as there are no outstanding LDAP operations on it), the client sends the StartTLS extended operation to the server. Once a successful extended operation response has been received, the client can initiate the TLS handshake over the existing connection. Once the handshake is done, all future LDAP operations are transmitted over the now secure, encrypted channel.
Personally, my concerns with StartTLS are:

  • You must have a plain LDAP port open on the network.
  • Even after a client connects to the directory there is absolutely nothing preventing the user from sending BIND or any other kind of requests on the unencrypted channel before actually performing the StartTLS extended operation.

Now let’s see how to set up StartTLS:

dsconfig set-connection-handler-prop \
          --handler-name "LDAP Connection Handler" \
          --set allow-start-tls:true \
          --set key-manager-provider:JKS \
          --set trust-manager-provider:JKS \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ******

Restart the server, and then verify that the connection works, run:

$ bin/ldapsearch -q -h localhost -p 1389 -D "cn=Directory Manager" -w ****** -b dc=example,dc=com "uid=user.0" "*"
The server is using the following certificate: 
    Subject DN:,, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Issuer DN:,, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Validity:  Wed Apr 22 19:43:22 BST 2015 through Thu Apr 21 19:43:22 BST 2016
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":

Again, you can see that the entry is returned just fine after accepting the server certificate. For the sake of testing you can remove the “-q” (–useStartTLS) parameter from the ldapsearch command and you should still see the entry being returned, but this time around the connection was not encrypted at all.

So how does one prevent clients from using the connection without actually performing the StartTLS extended operation?
There is no real solution for this (based on my limited understanding of ACIs), because I couldn’t really find anything in the available list of permissions that would match BIND operations. I actually tried to set up an ACI like this:

aci: (target="ldap:///dc=example,dc=com")(version 3.0;acl "Prevent plain LDAP operations"; deny (all)(ssf<="1");)

but the BIND operations were still successful over plain LDAP. Whilst it was good that I couldn't really perform other LDAP operations, I think the worst had already happened: the user's password was transferred over an insecure network connection.
For more details on ssf by the way, feel free to check out the documentation. ;)

Chris Ridd let me know that there is a way to enforce secure connections for BIND operations as well by configuring the password policy. To set up the password policy just run the following command:

dsconfig set-password-policy-prop \
          --policy-name "Default Password Policy" \
          --set require-secure-authentication:true \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ******

Future BIND operations over unsecured LDAP connections will result in the following error:

[23/04/2015:10:04:27 +0100] BIND RES conn=2 op=0 msgID=1 result=49 authFailureID=197124 authFailureReason="Rejecting a simple bind request because the password policy requires secure authentication" authDN="uid=user.0,ou=people,dc=example,dc=com" etime=1

The problem is though that again, nothing actually prevented the user from sending the password over the unsecured connection...

Common misconceptions

I think the following misconceptions are causing the most problems around security:

  • StartTLS is more secure, because it has TLS in the name: WRONG! StartTLS just as well allows the use of SSL(v2/v3) protocols; it is definitely not limited to TLSv1.x by any means! Hopefully my explanation above makes it clearer that StartTLS is probably less secure than LDAPS.
  • LDAPS is less secure, because it has the ugly S (thinking it stands for SSL, when actually it stands for Secure): WRONG! As always, the actual security you can gain by using LDAPS connections is all a matter of configuration. A badly configured LDAPS setup can still result in unsafe communication, yes, but LDAPS can just as well leverage the (currently considered safe) TLSv1.2 protocol and be perfectly safe.

I think I just can't emphasize this enough: use LDAPS if possible.

Understanding the login process

As authentication is pretty much the core functionality of OpenAM, I believe it is helpful to have a good understanding of how it works really. For starters let’s have a look at the different concepts around authentication.

Authentication modules

Authentication modules are simple pieces of functionality meant to identify the user by some means. Depending on your requirements an authentication module could verify user credentials, or perform some kind of two-factor verification process. For more complex use-cases you could use the auth module to collect information about the end-user and, with the help of a fraud-detection system, determine if the current login attempt is “risky”.

In any case, the authentication modules are performing some (customizable) logic, and at the end of the module processing you can either ignore the current authentication module (neither a success nor a failure), OR succeed with a logged in user, OR just simply fail (invalid credentials, etc).

The authentication module implementations are JAAS-based (with some abstraction on top of plain JAAS), so it is all based on callbacks (think of callbacks as input fields) that need to be “handled” and submitted. When the callbacks are submitted, the AMLoginModule’s #process method gets invoked with them. This is when the authentication module can start to process the submitted data and determine whether the authentication attempt was successful. Since an authentication process can involve multiple steps (more than one set of callbacks to submit: for example requesting a username, and then some verification code), the #process method needs to return a number that represents the next state (there are special values like ISAuthConstants.LOGIN_SUCCEED that represent a successful authentication result, i.e. no further need to present callbacks), which is then used to determine the next set of callbacks to display on the UI. Assuming that the authentication finished successfully, we need to return the magic LOGIN_SUCCEED state.
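As a toy illustration of this state machine (plain Python, not OpenAM code: real modules extend AMLoginModule in Java, and the state values and credentials here are made up), a two-step process function could look like:

```python
# Illustrative state values only -- not OpenAM's real ISAuthConstants
LOGIN_START = 1
VERIFY_CODE = 2
LOGIN_SUCCEED = -1

def process(callbacks: dict, state: int) -> int:
    """Return the next state for the submitted set of callbacks."""
    if state == LOGIN_START:
        # First set of callbacks: username and password
        if callbacks.get("username") and callbacks.get("password") == "secret":
            return VERIFY_CODE  # another set of callbacks to present
        raise ValueError("invalid credentials")
    if state == VERIFY_CODE:
        # Second set of callbacks: the verification code
        if callbacks.get("code") == "123456":
            return LOGIN_SUCCEED  # authentication finished successfully
        raise ValueError("invalid code")
    raise ValueError("unknown state")
```

Each submission of callbacks advances the state until the success sentinel is returned, mirroring how #process drives the UI from one callback screen to the next.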

So how does OpenAM really know who the user is?

Once the authentication is successful, the auth framework will call AMLoginModule#getPrincipal, which needs to return the authenticated user’s principal. #getPrincipal has a key role in the authentication process, so make sure it’s implemented correctly (or, when using built-in modules, make sure they are configured correctly).

Authentication chains

The next sensible building blocks are the authentication chains. Auth chains can be considered combinations of various authentication modules that present a single authentication procedure to end-users. Following the previous example, one could think that checking a verification code after providing a username and password may not actually be the job of a single authentication module, and should probably be implemented separately. In that case one could implement one module for username/password login, and then another module for code verification. To make sure the user logs in using both auth modules, one could create an authentication chain that includes both of them, and then the user just needs to authenticate against that chain.

Since the modules are JAAS-based, it makes sense to set up the chain configurations similarly to JAAS as well, but I’m not going to go into too much details on that front, instead you should just read about JAAS a bit more (especially about the “flags”).

Profile lookup

Once the user has successfully authenticated, there is a thing called “profile lookup”. This bit is all about trying to find the logged in user in one of the configured data stores, and then ensure that the user-specific settings (things like custom idle/max timeout values, or session quota even) are all applied for the just-to-be-created session. There are other additional checks as well, like ensuring that the logged in user actually has an active status in the system (e.g. doesn’t have a locked account).

To make things a bit clearer let’s talk about the User Profile Mode now (Access Control – realm – Authentication – All Core Settings). The user profile mode tells what should happen at the profile lookup stage of the authentication; these are the possible modes:

  • Required – this is the default mode, which just means that the user profile MUST exist in the configured data store.
  • Ignored – the user profile does not have to exist, the user profile will not be looked up as part of the authentication process.
  • Dynamic – the profile will be looked up, but if it doesn’t already exist, it will be dynamically created in the data store.
  • Dynamic with User Alias – this is similar to Dynamic, but it also appears to store user alias attributes in the newly created entries (I must admit I don’t fully understand this mode yet).

I believe it is important to stop here a bit and emphasize the following:
The authentication module may interact with arbitrary external components during the authentication phase; however, when it comes to profile lookup, that will always be performed against the configured data stores. If you are running into the infamous “User has no profile in this organization” error message, that means the authentication was successful, but the profile lookup failed, since OpenAM was unable to find the user in the configured data stores.

The profile lookup itself is performed based on the return value of #getPrincipal, so this is why it is really important to ensure that the module works correctly. The returned principal can be simply a username like “helloworld”, but it can also have a DN format like “uid=helloworld,ou=people,dc=example,dc=com” (see LDAP module’s Return User DN to DataStore setting). When the returned value is a DN, then the RDN value will be used for the data store search (so helloworld), hence it is important to ensure that the data store has been configured to search users based on the attribute that is expected to have the value of helloworld.
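A toy sketch of that lookup-value derivation (plain Python for illustration; OpenAM's real implementation parses DNs properly, including attribute-value escaping, which this simple split does not handle):

```python
def profile_search_value(principal: str) -> str:
    """Derive the data store search value from #getPrincipal's return value."""
    if "=" in principal:
        # DN form: take the value of the first (leftmost) RDN
        first_rdn = principal.split(",", 1)[0]   # e.g. "uid=helloworld"
        return first_rdn.split("=", 1)[1]
    # Plain username: used as-is
    return principal
```

So both "helloworld" and "uid=helloworld,ou=people,dc=example,dc=com" end up searching the data store for "helloworld", which is why the data store's search attribute must be the one carrying that value.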

The idea behind all of this is that the username returned from #getPrincipal should be unique across the user base, so even if, say, you allow someone to authenticate as “John Smith”, you should still return a more meaningful username (like jsmith123) to the backend. That way you can ensure that when you ask for “John Smith”’s user details, you will get the right set of values.

Post authentication actions

After a successful profile lookup there are various additional things that OpenAM does, but I’m not going to go into the nitty-gritty details of those. Here’s a small list of things that normally happen:

  • Check if user account is active.
  • Check if the account is locked (using OpenAM’s built-in Account Lockout feature).
  • Check if there are user-specific session settings configured for the user, and apply those values for the newly created session.
  • When the user session is created, check if the session quota has been exhausted and run the corresponding quota exhaustion action if yes.
  • Execute the Post Authentication Processing plugins.
  • Determine the user’s success login URL and also ensure that the goto URL is validated.


Whilst we only discussed portions of the actual authentication process, I think the main concepts for authentication are laid out, so hopefully when you need to configure OpenAM the next time around, you can reuse the things learned here. :)

Cross-Domain Single Sign-On

A couple of blog posts ago I detailed how regular cookie-based Single Sign-On (SSO) works. I believe now it is time to have a look at this again, but make it a bit more difficult: let’s see what happens if you want to achieve SSO across different domains.

So let’s say that you have an OpenAM instance on one domain and the goal is to achieve SSO with a site on a different domain. As you can see right away, the domains are different, which means that regular SSO won’t be enough here (if you don’t see why, have a read of my SSO post).

Cross-Domain Single Sign-On

We already know now that in the regular SSO flow the point is that the application is in the same domain as OpenAM, hence it has access to the session cookie, which in turn allows the application to identify the user. Since in the cross-domain scenario the session cookie isn’t accessible, our goal is going to be to somehow get the session cookie from the OpenAM domain to the application’s domain, more precisely:
We need to have a mechanism that is implemented by both OpenAM and the application, which allows us to transfer the session cookie from one place to the other.
Whilst this sounds great, it would be quite cumbersome for all the different applications to implement this cookie sharing mechanism, so what else can we do?

Policy Agents to the rescue

For CDSSO OpenAM has its own proprietary implementation to transfer the session ID across domains. It’s proprietary, as it does not follow any SSO standard (for example SAML), but it does attempt to mimic them to some degree.
OpenAM has the concept of Policy Agents (PA) which are (relatively) easily installable modules for web servers (or Java EE containers) that can add extra security to your applications. It does so by ensuring that the end-users are always properly authenticated and authorized to view your protected resources.
As the Policy Agents are OpenAM components, they implement this proprietary CDSSO mechanism in order to simplify SSO integrations.

Without further ado, let’s have a look at the CDSSO sequence diagram:
CDSSO sequence

A bit more detail about each step:

  1. The user attempts to access a resource that is protected by the Policy Agent.
  2. The PA is unable to find the user’s session cookie, so it redirects the user to…
  3. …the cdcservlet. (The Cross-Domain Controller Servlet is the component that will eventually share the session ID value with the PA.)
  4. When the user accesses the cdcservlet, OpenAM is able to detect whether the user has an active session (the cdcservlet is on OpenAM’s domain, hence a previously created session cookie should be visible there), and…
  5. when the token is invalid…
  6. we redirect to…
  7. …the sign in page,
  8. which will be displayed to the user…
  9. …and then the user submits her credentials.
  10. If the credentials were correct, we redirect the user back to…
  11. …the cdcservlet, which will…
  12. …ensure…
  13. …that the user’s session ID is actually valid, and then…
  14. …displays a self-submitting HTML form to the user, which contains a huge Base64 encoded XML that holds the user’s session ID.
  15. The user then auto-submits the form to the PA, where…
  16. …the PA checks the validity…
  17. …of the session ID extracted from the POST payload…
  18. …and then performs the necessary authorization checks…
  19. …to ensure that the user is allowed to access the protected resource…
  20. …and then the PA creates the session cookie for the application’s domain, and the user either sees the requested content or an HTTP 403 Forbidden page. For subsequent requests the PA will see the session cookie on the application domain, hence this whole authentication / authorization process will become much simpler.

An example CDSSO LARES (Liberty Alliance RESponse) response that gets submitted to PA (like in step 15 above) looks just like this:

<lib:AuthnResponse xmlns:lib="" xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol" xmlns:ds="" xmlns:xsi="" ResponseID="sb976ed48177fd6c052e2241229cca5dee6b62617"  InResponseTo="s498ed3a335122c67461c145b2349b68e5e08075d" MajorVersion="1" MinorVersion="0" IssueInstant="2014-05-22T20:29:46Z"><samlp:Status>
<samlp:StatusCode Value="samlp:Success">
<saml:Assertion  xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" xmlns:xsi=""  xmlns:lib=""  id="s7e0dad257500d7b92aca165f258a88caadcc3e9801" MajorVersion="1" MinorVersion="0" AssertionID="s7e0dad257500d7b92aca165f258a88caadcc3e9801" Issuer="" IssueInstant="2014-05-22T20:29:46Z" InResponseTo="s498ed3a335122c67461c145b2349b68e5e08075d" xsi:type="lib:AssertionType">
<saml:Conditions  NotBefore="2014-05-22T20:29:46Z" NotOnOrAfter="2014-05-22T20:30:46Z" >
<saml:AuthenticationStatement  AuthenticationMethod="vir" AuthenticationInstant="2014-05-22T20:29:46Z" ReauthenticateOnOrAfter="2014-05-22T20:30:46Z" xsi:type="lib:AuthenticationStatementType"><saml:Subject   xsi:type="lib:SubjectType"><saml:NameIdentifier NameQualifier="">AQIC5wM2LY4SfcxADqjyMPRB8ohce%2B6kH4VGD408SnVyfUI%3D%40AAJTSQACMDEAAlNLABQtMzk5MDEwMTM3MzUzOTY5MTcyOQ%3D%3D%23</saml:NameIdentifier>
<lib:IDPProvidedNameIdentifier  NameQualifier="" >AQIC5wM2LY4SfcxADqjyMPRB8ohce%2B6kH4VGD408SnVyfUI%3D%40AAJTSQACMDEAAlNLABQtMzk5MDEwMTM3MzUzOTY5MTcyOQ%3D%3D%23</lib:IDPProvidedNameIdentifier>
</saml:Subject><saml:SubjectLocality  IPAddress="" DNSAddress="localhost" /><lib:AuthnContext><lib:AuthnContextClassRef></lib:AuthnContextClassRef><lib:AuthnContextStatementRef></lib:AuthnContextStatementRef></lib:AuthnContext></saml:AuthenticationStatement></saml:Assertion>

If you look carefully you can see the important bit right in the middle:

<saml:NameIdentifier NameQualifier="">AQIC5wM2LY4SfcxADqjyMPRB8ohce%2B6kH4VGD408SnVyfUI%3D%40AAJTSQACMDEAAlNLABQtMzk5MDEwMTM3MzUzOTY5MTcyOQ%3D%3D%23</saml:NameIdentifier>

As you can see the value of the NameIdentifier element is the user’s session ID. Once the PA creates the session cookie on the application’s domain you’ve pretty much achieved single sign-on across domains, well done! ;)
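The extraction the PA performs can be sketched roughly like this (a hedged Python illustration with a made-up toy payload, not the agent's actual code; real agents also validate the response and its timestamps rather than just pattern-matching):

```python
import base64
import re
from urllib.parse import unquote

def extract_session_id(lares_b64: str) -> str:
    """Pull the NameIdentifier value (the session ID) out of a LARES payload."""
    xml = base64.b64decode(lares_b64).decode("utf-8")
    match = re.search(r"<saml:NameIdentifier[^>]*>([^<]+)</saml:NameIdentifier>", xml)
    if match is None:
        raise ValueError("no NameIdentifier found in LARES document")
    # The token value is URL-encoded inside the XML
    return unquote(match.group(1))

# Toy payload for illustration only
toy_xml = '<saml:NameIdentifier NameQualifier="">AQICtoken%3D%23</saml:NameIdentifier>'
toy_lares = base64.b64encode(toy_xml.encode()).decode()
```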

P.S.: If you are looking for a guide on how to set up Policy Agents for CDSSO, check out the documentation.

Certificate authentication over REST

A little bit of background

Amongst the various authentication mechanisms that OpenAM supports, there is one particular module that always proves difficult to get working correctly: client certificate authentication, or the Certificate authentication module as it is called in OpenAM. The setup is complex mainly due to the technology (SSL/TLS) itself, and quite frankly, in most cases the plain concept of SSL is simply not well understood by users.

Disclaimer: I have to admit I’m certainly not an expert on SSL, so I’m not going to deep dive into the details of how client certificate authentication itself works, instead, I’m just going to try to highlight the important bits that everyone should know who wants to set up a simple certificate based authentication.

The main thing to understand is that client cert authentication happens as part of the SSL handshake. That is it… It will NOT work if you access your site over HTTP. The authentication MUST happen at the network component that provides HTTPS for the end users.
Again, due to SSL’s complexity there are several possibilities: it could be that SSL is provided by the web container itself, but it is also possible that there is a network component (like a load balancer or a reverse proxy) where SSL is terminated. In the latter case it is quite a common thing for these components to embed the authenticated certificate in a request header for the underlying application (remember: the client cert authentication is part of the SSL handshake, so by the time OpenAM is hit, authentication WAS already performed by the container).

Now this is all nice, but how do you actually authenticate using your client certificate over REST?

Setting it all up

Now some of this stuff may look a bit familiar to you, but for the sake of simplicity let me repeat the exact steps of setting this up:

  • Go to Access Control – realm – Authentication page and Add a new Module Instance called cert with type Certificate
  • Open the Certificate module configuration, and make sure the LDAP Server Authentication User/Password settings are correct.
  • Generate a new self signed certificate by following this guide, but make sure that in the CSR you set the CN to “demo”. The resulting certificate and private key for me were client.crt and client.key respectively.
  • Create PKCS#12 keystore for the freshly generated private key and certificate:
    openssl pkcs12 -export -inkey client.key -in client.crt -out client.p12
  • Install the PKCS#12 keystore in your browser (Guide for Firefox)
  • Enable Client Authentication on the container. For example on Tomcat 7.0.53 you would have to edit conf/server.xml and set up the SSL connector like this:
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
     maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
     keystoreFile="/Users/aldaris/server.jks" keystorePass="changeit"
     truststoreFile="/Users/aldaris/trust.jks" truststorePass="changeit"
     clientAuth="true" sslProtocol="TLS" />
  • Add your user’s public certificate into the truststore (since the container performs the authentication, it needs to trust the client):
    keytool -import -keystore /Users/aldaris/trust.jks -file client.crt -alias demo

    In cases when the truststoreFile attribute is missing from the Connector settings, Tomcat by default falls back to the JVM’s truststore, so make sure you update the correct truststore.

  • Restart the container

Now if you did everything correctly, you should be able to go to /openam/UI/Login?module=cert and the browser should ask for the client certificate. After selecting the correct certificate you should get access to OpenAM right away.

It’s time to test it via REST

For my tests I’m going to use curl. I must say though that curl on OSX Mavericks (7.30) isn’t really capable of doing client authentication, hence I would suggest installing curl from MacPorts (7.36) instead.
To perform the login via REST one would run:

$ curl -X POST -v -k --cert-type pem --cert client.crt --key ""

And if you want to authenticate into a subrealm:

$ curl -X POST -v -k --cert-type pem --cert client.crt --key ""

Simple as that. ;)

One-Time Passwords – HOTP and TOTP

About One-Time Passwords in general

One-Time Passwords (OTP) are pretty much what their name says: passwords that can only be used one time. Compared to regular passwords, an OTP is considered safer since the password keeps on changing, meaning that it isn’t vulnerable to replay attacks.

When it comes to authentication mechanisms, OTP is usually used as an additional authentication mechanism (hence OTP is commonly referred to as two factor authentication/second factor authentication/step-up authentication). The main/first authentication step still uses regular passwords, so in order for your authentication to be successful you need to prove two things: knowledge of a secret (your password) and possession of one (your token).

Since an OTP is only usable once, memorizing them isn’t quite simple. There are two main ways of acquiring these One-Time Passwords:

  • hardware tokens: for example YubiKey devices, which you can plug into your USB port and which will automatically type in the OTP code for you.
  • software tokens: like Google Authenticator, in this case a simple Android application displays the OTP code which you can enter on your login form.

There are two main standards for generating One-Time Passwords: HOTP and TOTP, both of which are governed by the Initiative For Open Authentication (OATH). In the following we will discuss the differences between these algorithms, and finally we will attempt to use these authentication mechanisms with OpenAM.

HMAC-based One-Time Password algorithm

This algorithm relies on two basic things: a shared secret and a moving factor (a.k.a. counter). As part of the algorithm an HmacSHA1 hash (to be precise, a hash-based message authentication code) of the moving factor is generated using the shared secret. This algorithm is event-based, meaning that whenever a new OTP is generated, the moving factor is incremented, hence the subsequently generated passwords should be different each time.
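A minimal sketch of the HOTP computation (RFC 4226) in Python, illustrative rather than OpenAM's implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter, then dynamic truncation."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low 4 bits of the last byte pick the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 appendix D test vectors for the ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # -> 755224
print(hotp(b"12345678901234567890", 1))  # -> 287082
```

Note how the only input that varies between generations is the counter, which is why both sides must keep it in sync.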

Time-based One-Time Password algorithm

This algorithm works similarly to HOTP: it also relies on a shared secret and a moving factor, however the moving factor works a bit differently. In case of TOTP, the moving factor constantly changes based on the time passed since an epoch. The HmacSHA1 is calculated in the same way as with HOTP.
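In code, TOTP boils down to feeding a time-derived counter into an HOTP computation — again just an illustrative Python sketch, not OpenAM’s implementation:

```python
# TOTP (RFC 6238) is HOTP where the counter is derived from time:
# the number of 30-second steps elapsed since the Unix epoch.
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30):
    # The moving factor changes every `step` seconds instead of per event
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))

# At T=59 seconds the time counter is 1, so TOTP == HOTP(counter=1)
print(totp(b"12345678901234567890", at=59))  # → 287082
```

Note that the 30-second step size and the epoch are parameters of the scheme; both sides have to agree on them.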

Which one is better?

The main difference between HOTP and TOTP is that HOTP passwords can remain valid for an unknown amount of time, while TOTP passwords keep changing and are only valid for a short window in time. Because of this difference, TOTP is generally considered the more secure One-Time Password solution.

So how does OpenAM implement OTP?

In OpenAM there are currently two authentication modules offering One-Time Password capabilities: HOTP and OATH. So let’s see what the difference is between them and how to use them in practice.

HOTP authentication module

The HOTP module – as you might guess – tries to implement the HMAC-based One-Time Password algorithm, but it does so by circumventing some portions of the specification. Let me try to summarize exactly what’s different:

  • The shared secret isn’t really shared between the user and OpenAM; it is actually just a freshly generated random number, so each HOTP code is based on a different shared secret.
  • The moving factor is statically held in OpenAM, and it just keeps on incrementing by each user performing HOTP based authentications.
  • A given HOTP code is only valid for a limited amount of time (so it’s a bit similar to TOTP in this sense).

These differences also mean that the module does not work with hardware/software tokens, since the OTP code generation depends on OpenAM’s internal state and not on shared information. So in order for the generated OTP code to be usable, OpenAM has to share it with the user somehow: this can be done via SMS or e-mail or both.
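The behaviour described above could be sketched like this in Python — a purely hypothetical illustration (the function and setting names are mine, not OpenAM’s):

```python
# Hypothetical sketch (not OpenAM's actual code) of the HOTP module's
# behaviour: each attempt gets a code from fresh randomness, remembered
# with an expiry, and delivered out of band instead of via a token app.
import secrets
import time

CODE_VALIDITY_SECONDS = 300  # stands in for the module's validity setting

def issue_code(store, user, digits=6):
    # A fresh random code per attempt -- no secret is shared with a token
    code = "".join(secrets.choice("0123456789") for _ in range(digits))
    store[user] = (code, time.time() + CODE_VALIDITY_SECONDS)
    return code  # in OpenAM this would go out via SMS and/or e-mail

def verify_code(store, user, submitted):
    code, expires = store.get(user, (None, 0.0))
    return submitted == code and time.time() < expires

store = {}
otp = issue_code(store, "demo")
print(verify_code(store, "demo", otp))     # → True
print(verify_code(store, "demo", "nope"))  # → False
```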

So let’s create an example HOTP module using ssoadm:

openam/bin/ssoadm create-auth-instance --realm / --name HOTP --authtype HOTP --adminid amadmin --password-file .pass

Now we have an HOTP module in our root realm, let’s configure it:

openam/bin/ssoadm update-auth-instance --realm / --name HOTP --adminid amadmin --password-file .pass -D

Where the data file contains:


NOTE: The names of the service attributes can be found in the OPENAM_HOME/config/xml/amAuthHOTP.xml file.

At this stage I’ve modified the built-in demo user’s e-mail address, so I actually receive the OTP codes for my tests.

Last step is to create an authentication chain with the new HOTP module:

openam/bin/ssoadm create-auth-cfg --realm / --name hotptest --adminid amadmin --password-file .pass
openam/bin/ssoadm add-auth-cfg-entr --realm / --name hotptest --modulename DataStore --criteria REQUISITE --adminid amadmin --password-file .pass
openam/bin/ssoadm add-auth-cfg-entr --realm / --name hotptest --modulename HOTP --criteria REQUIRED --adminid amadmin --password-file .pass

Here the first command creates the chain itself; the other two commands add the necessary modules to the chain.

We can test this now by visiting /openam/UI/Login?service=hotptest . Since I don’t like to take screenshots, you’ll just have to believe me that the authentication was successful and that I received my HOTP code via e-mail, which was in turn accepted by the HOTP module. :)

OATH authentication module

The OATH module is a more recent addition to OpenAM which implements both HOTP and TOTP as they are defined in the corresponding RFCs. This is how they work under the hood:

  • Both OTP modes retrieve the shared secret from the user profile (e.g. it is defined in an LDAP attribute of the user’s entry).
  • HOTP retrieves the moving factor from the user profile.
  • TOTP does not let a user authenticate twice within the same TOTP window (since a given TOTP may be valid for several time windows), hence it stores the last login date in the user profile.
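The TOTP replay protection from the last bullet can be sketched as follows (a simplified illustration; in OpenAM the last login time lives in the user profile, not in a dict):

```python
# Sketch of the TOTP replay guard: remember the last time window a user
# authenticated in and refuse another login from the same (or an
# earlier) window, even when the submitted code itself is valid.
def accept_totp_window(profile, user, window):
    last = profile.get(user, -1)
    if window <= last:
        return False  # already used in this window -- possible replay
    profile[user] = window  # OpenAM persists this in the user profile
    return True

profile = {}
print(accept_totp_window(profile, "demo", 49202733))  # → True  (first use)
print(accept_totp_window(profile, "demo", 49202733))  # → False (replay)
print(accept_totp_window(profile, "demo", 49202734))  # → True  (next window)
```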

For the following tests I’ve been using the Google Authenticator Android app to generate my OTP codes.

Testing OATH + HOTP

First set up the authentication module instance:

openam/bin/ssoadm create-auth-instance --realm / --name OATH --authtype OATH --adminid amadmin --password-file .pass
openam/bin/ssoadm update-auth-instance --realm / --name OATH --adminid amadmin --password-file .pass -D

Where the data file contains:


NOTE: here I’m using the givenName and sn attributes for my personal tests, but in a real environment I would of course use dedicated attributes for these.

Again let’s create a test authentication chain:

openam/bin/ssoadm create-auth-cfg --realm / --name oathtest --adminid amadmin --password-file .pass
openam/bin/ssoadm add-auth-cfg-entr --realm / --name oathtest --modulename DataStore --criteria REQUISITE --adminid amadmin --password-file .pass
openam/bin/ssoadm add-auth-cfg-entr --realm / --name oathtest --modulename OATH --criteria REQUIRED --adminid amadmin --password-file .pass

In order to actually test this setup I’m using a QR code generator to generate a scannable QR code for the Google Authenticator with the following value:


NOTE: The Google Authenticator needs the secret key in Base32 encoded format, but the module needs the key to be in HEX encoded format ("48656c6c6f576f726c644f664f415448") in the user profile.
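If you need to convert a secret between the two encodings, the Python standard library can help — a small sketch (the helper names are made up):

```python
# Converting the shared secret between encodings: Google Authenticator
# wants Base32, while the OATH module reads hex from the user profile.
import base64

def hex_to_base32(hex_secret):
    return base64.b32encode(bytes.fromhex(hex_secret)).decode("ascii")

def base32_to_hex(b32_secret):
    return base64.b32decode(b32_secret).hex()

hex_secret = "48656c6c6f576f726c644f664f415448"  # "HelloWorldOfOATH"
b32 = hex_to_base32(hex_secret)
# Round-trip check: decoding the Base32 form yields the hex form again
print(base32_to_hex(b32) == hex_secret)  # → True
```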

After all these changes bring up /openam/UI/Login?service=oathtest and simply ask for a new HOTP code on your phone; you should see something like this:

Google Authenticator - HOTP

Enter the same code on the login screen and you are in, yay. :)

Testing OATH + TOTP

For testing TOTP I’ll be reusing the authentication modules and chains from the OATH HOTP test:

openam/bin/ssoadm update-auth-instance --realm / --name OATH --adminid amadmin --password-file .pass -D

Where the data file contains:


We need to generate a new QR code now with the following content:


Again, let’s test it at /openam/UI/Login?service=oathtest and this time we should have a slightly different UI on the Google Authenticator:

Google Authenticator - TOTP

As you can see there is a nice indicator now showing us how long this TOTP code will remain valid.

I think we now know more than enough about OTP codes; I hope you’ve found this article useful. ;)

Single Sign-On – the basic concepts

What is Single Sign-On?

A good buzzword at least, but on top of that it’s a solution which lets users authenticate in one place and then use that same user session at many completely different applications without reauthenticating over and over again.
To implement SSO, OpenAM (like most other SSO solutions) uses HTTP cookies (RFC 6265 [1]) to track the user’s session. Before we go into any further details on the how, let’s step back and get a good understanding of cookies first. Please bear with me here; believe me when I say that being familiar with cookies will pay off in the end.

Some important things to know about cookies

By using cookies an application can store information at the user-agent (browser) across multiple different HTTP requests, where the data is stored in a basic name=value format. From the Cookie RFC we can see that there are many different extra fields in a cookie, but out of those you will mostly only run into these:

  • Domain – the domain the cookie can be sent to. In case the domain is not present, it will be handled as a host-based cookie, and browsers will only send it to that _exact_ domain (so no subdomains! Also beware that IE may behave differently… [2]).
  • Max-Age – the amount of time the cookie should be valid. Of course IE doesn’t support this field, so everyone uses “Expires” instead with a GMT timestamp.
  • Path – a given URL path where the cookie applies to (for example /openam)
  • Secure – when used, the cookie can be only transferred through HTTPS, regular HTTP requests won’t include this cookie.
  • HttpOnly – when used, the cookie won’t be accessible through JavaScript, giving you some protection against XSS attacks

Let’s go into a bit more detail:

  • If you create a cookie with Domain=.example.com, then that cookie will be available to an application sitting at app.example.com as well, and basically at all other subdomains, BUT that very same cookie won’t be available at example.net nor at badexample.com, because those do not match the cookie domain. Browsers will only send cookies with requests made to the corresponding domains. Moreover, browsers will only set cookies for domains the response actually came from (i.e. at example.net the browser will discard Set-Cookie headers with Domain=.example.com).
  • Browsers will discard cookies created by applications on TLDs (Top Level Domains, like .com), and the same happens with cookies created for IP addresses, or for invalid domains (like “localhost”, or “myserver”): the Domain has to be a valid FQDN (Fully Qualified Domain Name) if present.
  • If you do not specify Max-Age/Expires, the cookie will be valid until the browser is closed.
  • In order to clear out/remove a cookie you need to create a cookie with the same name (value can be anything), and set the Expires property to a date in the past.
  • In case you request a page, but a Set-Cookie is coming out of a subsequent request (i.e. resource on the page – frame/iframe/etc), then that cookie is considered as a third party cookie. Browsers may ignore third party cookies based on their security settings, so watch out.
  • Containers tend to handle the cookie spec a bit differently when it comes to special characters in the cookie value. By default, when you create a cookie value with an “=” sign in it for example, the cookie value should be enclosed in quotes (“). This should be done by the container when it generates the Set-Cookie header in the response; likewise, when there is an incoming request with a quoted value, the container should remove the quotes and only provide the unquoted value through the Java EE API. This does not always work as expected (you may get back only a portion of the actual value, due to the illegal characters being stripped out), hence you should stick to allowed characters in cookie values.
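To tie these attributes together, here is what they look like on an actual Set-Cookie header, generated with Python’s standard library (the cookie value and domain are made-up examples):

```python
# The attributes above on an actual Set-Cookie header, built with the
# standard library (the token value and domain are made-up examples).
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["iPlanetDirectoryPro"] = "AQIC5wM2token"
morsel = cookie["iPlanetDirectoryPro"]
morsel["domain"] = ".example.com"  # sent to every *.example.com host
morsel["path"] = "/"
morsel["secure"] = True            # HTTPS only
morsel["httponly"] = True          # hidden from JavaScript
morsel["max-age"] = 3600           # seconds; older IE wants Expires instead

header = morsel.OutputString()
print("Set-Cookie: " + header)
```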

Funky IE problems

  • IE does not like cookies created on domain names that do not follow the URI RFC [3]
  • IE9 may not save cookies if that setting happened on a redirect. [4]

How is all of this related to SSO?

Well, first of all, as mentioned previously, OpenAM uses a cookie to track the OpenAM session, so when you do regular SSO, the sequence flow would be something like this:

SSO Authentication Flow

And here is a quick outline for the above diagram:

  • User visits application A at app.example.com, where the agent or other component realizes that there is no active session yet.
  • User gets redirected to OpenAM at openam.example.com, where quite cleverly the cookie domain was set to .example.com (i.e. the created cookie will be visible at the application domain as well).
  • User logs in; as configured, OpenAM will create a cookie for the .example.com domain.
  • OpenAM redirects back to the application at app.example.com, where the app/policy agent will see the session cookie created; it will then validate the session and, upon success, show the protected resource, since there is no need to log in any more.

Oh no, a redirect loop!

In my previous example there are some bits that could go wrong and essentially result in a redirect loop. A redirect loop happens 90+% of the time because of cookies and domains – i.e. misconfiguration. So what happens is that OpenAM creates the cookie for its domain, but when the user is redirected back to the application, the app/agent won’t be able to find the cookie in the incoming request; authentication is still required, so it redirects back to OpenAM again. But wait, AM already has a valid session cookie on its domain, no need for authentication, let’s redirect back to the app – and this goes on and on and on. The most common reasons for redirect loops:

  • The cookie domain for OpenAM does not match at all with the protected application.
  • The cookie domain is set to openam.example.com instead of .example.com, and hence the cookie is not available at app.example.com.
  • The cookie is using Secure flag on the AM side, but the protected application is listening on HTTP.
  • Due to some customization possibly, the AM cookie has a path “/openam” instead of the default “/”, so even if the cookie domain is matching the path won’t match at the application.
  • You run into one of the previously listed IE quirks. :)
  • Your cookie domain for OpenAM isn’t actually an FQDN.

So how does OpenAM deal with applications running in different domains? The answer is CDSSO (Cross Domain Single Sign-On). But let’s discuss that one in the next blog post instead. Hopefully this post will give people a good understanding of the very basic SSO concepts, and in future posts we can dive into the more complicated use-cases.


Custom Auth module – Configuration basics

If you want to create a custom OpenAM auth module or service, then you’re probably going to end up writing a configuration XML. This XML describes to OpenAM what kind of UI elements to render on the admin console, and what values should be stored in the configuration store for the given module.
Let’s take a look at the following sample XML:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ServicesConfiguration
PUBLIC "=//iPlanet//Service Management Services (SMS) 1.0 DTD//EN"
"jar://com/sun/identity/sm/sms.dtd">
<ServicesConfiguration>
    <Service name="sunAMAuthMyModuleService" version="1.0">
        <Schema serviceHierarchy="/DSAMEConfig/authentication/sunAMAuthMyModuleService"
                i18nFileName="amAuthMyModule">
            <Organization>
                <AttributeSchema name="sunAMAuthMyModuleAuthLevel"
                                 type="single" syntax="number_range" rangeStart="0"
                                 i18nKey="a500"/>
                <SubSchema name="serverconfig" inheritance="multiple">
                    <AttributeSchema name="sunAMAuthMyModuleAuthLevel"
                                     type="single" syntax="number_range" rangeStart="0"
                                     i18nKey="a500"/>
                </SubSchema>
            </Organization>
        </Schema>
    </Service>
</ServicesConfiguration>

What you should know about authentication module service XMLs:

  • According to the OpenAM source, the service name HAS to start with either iPlanetAMAuth or sunAMAuth, and HAS to end with Service (like sunAMAuthMyModuleService in our case)
  • the serviceHierarchy attribute is /DSAMEConfig/authentication/<service-name>
  • in the i18nFileName attribute you need to add the name of a properties file which is on the OpenAM classpath (like openam.war!WEB-INF/classes). This internationalization file will be used by OpenAM to look up the i18nKeys for the various items in the XML.
  • You should put the module options into an Organization Schema, this will make sure that the module will be configurable per realm.
  • All the attribute schema definitions should be also listed under a SubSchema element (this will allow you to set up two module instances based on the same module type with different configurations).

The AttributeSchema element contains information about a single configuration item (what name OpenAM should use to store the parameter in the configstore, what kind of UI element needs to be rendered, what restrictions the attribute has, etc.).

Available types for an attribute:

  • single -> singlevalued attribute
  • list -> multivalued attribute
  • single_choice -> radio choice, only one selectable value
  • multiple_choice -> checkboxes, where multiple items can be selected
  • signature -> unknown, possibly not used
  • validator -> if you want to use a custom validator for your attribute, you need to include the validator itself (beware of OPENAM-974).

Available uitypes for an attribute:

  • radio -> radiobutton mostly for displaying yes-no selections
  • link -> this is going to be rendered as a link, where the href will be the value of the propertiesViewBeanURL attribute
  • button -> unknown, not used
  • name_value_list -> generates a table with add/delete buttons (see Globalization settings for example)
  • unorderedlist -> a multiple choice field in which you can dynamically add and remove values. The values are stored unordered.
  • orderedlist -> a multiple choice field in which you can dynamically add and remove values. The values are stored ordered.
  • maplist -> a multiple choice field in which you can add/remove key-value pairs
  • globalmaplist -> same as maplist, but it allows the key to be empty.
  • addremovelist -> basically a palette where you can select items from the left list and move them to the right list

Available syntaxes for an attribute:

  • boolean -> can be true or false
  • string -> any kind of string
  • paragraph -> multilined text
  • password -> this tells the console that it should mask the value when it’s displayed
  • encrypted_password -> same as the password syntax
  • dn -> valid LDAP DN
  • email
  • url
  • numeric -> its value can only contain numbers
  • percent
  • number
  • decimal_number
  • number_range -> see rangeStart and rangeEnd attributes
  • decimal_range -> see rangeStart and rangeEnd attributes
  • xml
  • date

NOTE: Some of these syntaxes aren’t really used within the product, so choose wisely.

Other than these, there is also the i18nKey attribute, which basically points to i18n keys in the referred i18nFile configured for the service. This is used when the config is displayed on the admin console.
This should cover the basics for authentication module service configuration I think. ;)

Implementing remember me functionality – part 2

In my last post we were trying to use the built-in persistent cookie mechanisms to implement remember me functionality. This post tries to go beyond that, so we are going to implement our own persistent cookie solution using a custom authentication module and a post authentication processing hook. We need these hooks, because:

  • The authentication module verifies that the value of the persistent cookie is correct and figures out the username that the session should be created with.
  • The post authentication processing class makes sure that when an authentication attempt is successful, a persistent cookie is created. It will also clear the persistent cookie when the user logs out.

In order to demonstrate this implementation, I’ve created a sample project on Github, so it’s easier to explain, the full source is available at:
You most likely want to open up the source files as I’m going through them in order to see what I’m referring to. ;)

Let’s start with the Post Authentication Processing (PAP) class, as that is the one that actually creates the persistent cookie. In the PAP onLoginSuccess method, I first check whether the request is available (for REST/ClientSDK authentications it might not be!), then I try to retrieve the “pCookie” cookie from the request. If the cookie is not present in the request, then I start to create a string that holds the following information:

  • username – good to know who the user actually is
  • realm – in which realm did the user actually authenticate (to prevent accepting persistence cookies created for users in other realms)
  • current time – the current timestamp makes the content a bit more dynamic, and it also gives a means to make sure that an expired cookie cannot be used to reauthenticate

After constructing such a cookie value (the separator is ‘%’), I encrypt the whole content using OpenAM’s symmetric key and create the cookie for all the configured domains. The created cookie will follow the cookie security settings, so if you’ve enabled Secure/HttpOnly cookies, then the created cookie will adhere to these settings as well.
In the onLogout method of the PAP I make sure that the persistent cookie gets cleared, so this way logged-out users truly get logged out.

On the other hand, the authentication module’s job is to figure out whether the incoming request contains an already existing “pCookie”, and if yes, whether its value is valid. In order to do that, again, we check whether the request is available, then try to retrieve the cookie. If there is no cookie, then there is nothing to talk about; otherwise we decrypt the cookie value using OpenAM’s symmetric key.
The decrypted value is then tokenized based on the “%” character; we first check whether the current realm matches the cookie’s realm. If yes, then we check the validity interval against the stored timestamp. If things don’t add up, then this is still a failed authentication attempt. However, if everything is alright, then we can safely say that the user is authenticated, and the username comes from the decrypted content.
In case there was some issue with the cookie, we simply remove the “pCookie”, so hopefully we won’t run into it again.
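Putting the two halves together, here is a hypothetical Python sketch of the pCookie lifecycle. Note that the real module encrypts the value with OpenAM’s symmetric key; to stay standard-library-only, this illustration merely HMAC-signs the username%realm%timestamp payload instead:

```python
# Hypothetical pCookie sketch: OpenAM's module encrypts with its
# symmetric key; here we only HMAC-sign the payload so the example
# needs nothing beyond the standard library.
import hashlib
import hmac
import time

SECRET = b"replace-with-a-deployment-secret"  # stand-in for OpenAM's key
VALIDITY_SECONDS = 14 * 24 * 3600             # hardcoded, like the example module

def create_pcookie(username, realm):
    payload = "%".join([username, realm, str(int(time.time()))])
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "%" + sig

def validate_pcookie(cookie, expected_realm):
    parts = cookie.split("%")
    if len(parts) != 4:
        return None
    username, realm, ts, sig = parts
    payload = "%".join([username, realm, ts])
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered content
    if realm != expected_realm:
        return None  # issued for a different realm
    if time.time() - int(ts) > VALIDITY_SECONDS:
        return None  # expired -- remove the cookie and fail the attempt
    return username  # authenticated as this user

cookie = create_pcookie("demo", "/")
print(validate_pcookie(cookie, "/"))          # → demo
print(validate_pcookie(cookie, "/subrealm"))  # → None
```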


There are a couple of limitations with this example module though:

  • when the PAP is part of the authentication process, it will always create a persistent cookie for every single user (but only when the cookie doesn’t already exist).
  • the validity interval and the cookie name are hardcoded; moreover, every single realm will use the same cookie, which can be a problem in certain deployments.

If you are looking for installation details, then check out the Github project README ;)

Implementing remember me functionality – part 1

It’s quite a common requirement to have long-running sessions with OpenAM, so I’m going to try to enumerate the different ways to achieve “persistent” sessions and provide some comparison.
There are two main ways to achieve long-running sessions:

  • Using built-in functionality
  • Implementing custom authentication module + post processing class

Today we are only going to deal with the built-in persistent cookie features, the next article will describe the more user-specific solution.

Setting Expires attribute on session cookie

By default OpenAM issues session cookies without an Expires attribute, meaning the cookie is only stored until the browser is stopped/restarted. Setting the Expires attribute is especially handy if you have large session timeout values set; in any other case your session cookie would be saved in the browser for a good while even though the underlying session could have already timed out.
Now there are two ways to enable this mode:

  • Enable persistent cookies globally for all sessions:
    Go to Configuration -> Servers and Sites -> Default Server Config -> Advanced tab, then add the following properties:

  • After doing so you will probably need to restart the container for the changes to take effect. Any new sessions created afterwards will set the Expires attribute on the session cookie.
    When using DAS you need to set these properties in the DAS configuration file instead.

  • Allow to have persistent sessions on-demand:
    Go to Configuration -> Servers and Sites -> Default Server Config -> Advanced tab, then add the following properties:


    In this case to get a persistent cookie you need to include the openam.session.persist_am_cookie=true parameter on the Login URL (so you can’t actually put this on the Login form unfortunately)

  • When using DAS you need to set these properties in the DAS configuration file instead.

NOTE: the timeToLive value is in minutes.
NOTE 2: these properties only fully work on the core servers if you have OPENAM-1280 integrated, but if you use DAS these should work just fine.
NOTE 3: the timeToLive value only tells how long the cookie should be stored by the browser, it has absolutely NO influence to the actual session timeout intervals.

Creating a long living non-session cookie

When using this option OpenAM will create a second “DProPCookie” cookie next to the iPlanetDirectoryPro cookie with encrypted content. This encrypted cookie stores the information necessary to “recreate” a session when a user comes to OpenAM without a valid session cookie but with a valid persistent cookie. OpenAM then decrypts the cookie and, based on the stored information, creates a new session.
In order to enable this persistent cookie mode you need to go to the Access Control -> realm -> Authentication -> All Core Settings page and do the following:

  • Enable the “Persistent Cookie Mode” option
  • Set “Persistent Cookie Maximum Time” option

After this when you include the “iPSPCookie=Yes” parameter on the login URL, or you have an actual checkbox on your Login.jsp submitting the exact same parameter during login, OpenAM will issue the DProPCookie persistent cookie with the configured timeout.
This method does not seem to be implemented for DAS.
NOTE: OpenAM will delete the persistent cookie as well, when the user logs out.
NOTE 2: due to a possible bug (OPENAM-1777) OpenAM may not invoke Post Authentication Processing classes when it recreates the user session.


As you probably noticed, the Expires-on-session-cookie method is only really useful when you have large timeouts; it is also a global option, meaning that once you enable it, it applies to all the different realms.
On the other hand DProPCookie is configurable per realm, but it does not seem to work when using DAS.

That’s all folks, this is how the built-in persistent cookies work, if these do not seem to match your requirements, then stay tuned for the next post.