How to protect your OpenAM deployment against clickjacking

If you have ever seen a security report for one of your web applications, there is a good chance that you have already run into a big warning about clickjacking. Clickjacking is a kind of attack that essentially allows the attacker to trick a victim into performing an operation that they most likely didn’t want to carry out. If you want to learn more about clickjacking, then I would recommend having a read of this well-detailed page.

The best way to protect against these attacks is actually rather simple: RFC 7034 describes the X-Frame-Options header that needs to be set on the HTTP responses for pages that you wish to prevent from being clickjacked. The X-Frame-Options header has three accepted values:

  • DENY: the browser should never display the requested content in a frame.
  • SAMEORIGIN: only display the content in a frame if the enclosing page (/top level browsing context, see the RFC) is in the same origin as the content itself.
  • ALLOW-FROM: allows you to specify an origin that is permitted to display the contents of the requested resource.

How to configure OpenAM?

Since OpenAM 12.0.1 it is possible to utilize a built-in servlet filter to add arbitrary HTTP headers to our responses. The configuration of the filter is quite simple: you just have to add the following snippets to web.xml (obeying the XML schema):

<filter>
  <filter-name>Clickjacking</filter-name>
  <filter-class>org.forgerock.openam.headers.SetHeadersFilter</filter-class>
  <init-param>
    <param-name>X-Frame-Options</param-name>
    <param-value>DENY</param-value>
  </init-param>
</filter>
...
<filter-mapping>
  <filter-name>Clickjacking</filter-name>
  <url-pattern>/XUI/*</url-pattern>
  <url-pattern>/UI/*</url-pattern>
  <url-pattern>/console/*</url-pattern>
  <url-pattern>/oauth2/authorize</url-pattern>
  <dispatcher>FORWARD</dispatcher>
  <dispatcher>REQUEST</dispatcher>
  <dispatcher>INCLUDE</dispatcher>
  <dispatcher>ERROR</dispatcher>
</filter-mapping>

The above url-pattern list is not an exhaustive list of resources that you may wish to protect, but it should serve as a good start. Alternatively, you could just change the url-pattern to /*, in which case you only really need the REQUEST dispatcher in your filter mapping config.
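
Once the filter is in place, a quick way to verify the deployment is to check the response headers with curl (the URL here assumes the usual openam.example.com:8080 demo deployment, so adjust it to your own):

$ curl -s -o /dev/null -D - http://openam.example.com:8080/openam/XUI/ | grep -i x-frame-options
X-Frame-Options: DENY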

Please keep in mind that there are lots of different ways to set the X-Frame-Options header for your deployment, so feel free to utilize those instead if needed.

How to boost OAuth2 performance in OpenAM 13

One of the unfortunate issues with OpenAM 13 is that there is a performance problem when performing OAuth2 operations, namely OPENAM-8023. Whilst the underlying root cause appears to be a rather complex problem deep in the SMS framework, there is a quite simple but very effective way to work around this issue.

You’ll need to run the following ssoadm commands for all the realms where you are using OAuth2:

$ openam/bin/ssoadm add-svc-realm -e  -s ScriptingService -u amadmin -f .pass -D file
$ openam/bin/ssoadm create-sub-cfg -s ScriptingService -g scriptConfigurations -u amadmin -f .pass -D file -e 

UPDATE: I forgot to detail a very important thing: what the input file should contain for these commands… The answer is: absolutely nothing. It needs to be a completely empty file (created with “touch file”, for example).
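
To make this concrete, here is what the whole workaround could look like for the top level realm (the realm and the file name are just examples, adjust them to your deployment):

$ touch empty.txt
$ openam/bin/ssoadm add-svc-realm -e / -s ScriptingService -u amadmin -f .pass -D empty.txt
$ openam/bin/ssoadm create-sub-cfg -s ScriptingService -g scriptConfigurations -u amadmin -f .pass -D empty.txt -e /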

Common sense: Please note that you only need to run these commands on versions that are affected by OPENAM-8023.

Installing WebSphere on Linux

In order to test/resolve certain WebSphere specific OpenAM bugs, I decided to install this lovely container on a brand new Ubuntu VM. Now I must say, I’m slightly biased towards open source containers, as they tend to be actually usable and aren’t as overcomplicated as their enterprise competitors (yes, I’m talking about you, WebSphere and WebLogic). So keeping that in mind, let’s see what kind of suffering one has to go through to get to a running WebSphere instance. NB: this is mostly a rant; the actually useful info can be found at the bottom of this post. 🙂

How not to do it

I started by searching for “download ibm websphere 8.5.0”, and after a few clicks I figured out a few things:

  • There are service packs (or something of the sort) for each release, and apparently 8.5.0.2 is the latest for 8.5.0
  • There is also a release called 8.5.5
  • According to Wikipedia 8.5.5 can run on Java 8 as well

Since I like shiny new things, and one can only hope that new is always better, I decided that I wanted to install 8.5.5. After visiting the 8.5.5 pages I realized that installing 8.5.5 has the prerequisite of having 8.5.0 installed (enterprise software, eh), so let’s go back to the 8.5.0.2 downloads again…
The descriptions of the downloads are rather dubious, so I ended up downloading the first downloadable thing and hoped for the best. Of course at this point I didn’t even wonder why the hell I needed to download 2.4 GiB worth of ZIP files for an application server…
So I unzipped the 2 downloaded files and had to conclude that, simply put, there is no binary file that you can actually run. It turns out that the files I just downloaded can only be utilized by an IBM Installation Manager (facepalm). Of course there is little to no information about whether all the IM versions are actually able to install all existing IBM software, but who knows, maybe I’ll get lucky.

Nope. The IBM Installation Manager doesn’t really allow you to install anything by default: you need to add repositories manually to get it working (why it doesn’t come with a default set of repositories that allows installing anything is beyond me). So I end up trying to point IM towards my unzipped 8.5.0.2 files, and it seems to pick up that repository.config file just fine. Trying to install anything still yields the same error though, about not having any repositories present or the ones configured having nothing installable. Just great.
The problem must be that I’m using the wrong version of the IM, and maybe an older version will be able to work with my downloaded 8.5.0.2, right? So I start to look for an IM version that is recommended for 8.5.0.2; after some random Google hits I find some documentation that mentions version 1.5.3 of IM, so I attempt to download it, but now I hit a new problem.

Apparently writing 64 bit software for Linux seems to be a difficult thing to do for IBM, and the only downloadable items are for 32 bit Linux and some very weird 64 bit platforms that I’ve never heard of before. Downloading the 32 bit version of IM and then installing the following packages on Ubuntu helped a little bit:

apt-get install lib32z1 libgcc1:i386

But even with this old version of IBM IM I’m unable to install 8.5.0.2. At this point I just start to search a lot more, and finally I get to the holy grail.

How to install WebSphere

Search for WebSphere Express Trial, go through some silly registration process, and make sure you uncheck all the options about contacting you. If you did everything right, you will get access to a WebSphere 8.5.5.8 installer (somehow downloading the Express edition does not have the prerequisite of installing 8.5.0 first, no comment).

Once it is installed using the IBM Installation Manager, you should realize that the IBM JDK shipped with WebSphere is a 32 bit only application, so make sure you’ve run the above apt-get command beforehand.

Create a custom profile

I still don’t really know what a profile is meant to be, but I had this strong urge to create one, as everything in WebSphere seems to rely on these things. So I came up with the following commands:

$ bin/manageprofiles.sh -create -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/default -profileName default -profilePath /opt/IBM/WebSphere/AppServer/profiles/default
$ bin/manageprofiles.sh -setDefaultName -profileName default

Making my profile the default one should mean that I don’t have to use the -profileName default all the time when interacting with the CLI, so hopefully that will make my life easier in the long run.

At this stage I should mention that if for some reason the profile creation just hangs and doesn’t want to finish at all, then apparently your problem is that your /bin/sh does not map to /bin/bash! I mean what the hell…
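
In case you run into this, checking and changing the /bin/sh symlink on Ubuntu could look something like the following (dpkg-reconfigure will ask whether dash should remain the default system shell; answer No to point /bin/sh at bash; the ls output is just an illustration):

$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Apr 22 2015 /bin/sh -> dash
$ sudo dpkg-reconfigure dash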

Using IBM JDK7

Foolishly I thought that a standalone IBM JDK7 could be utilized by WebSphere, but of course I couldn’t really get that to work (not sure how to make managesdk aware of an external JDK installation), so I had to follow the official guide of downloading a WebSphere specific IBM JDK7, and then I had to use the managesdk utility to ensure that my default profile would use IBM JDK7:

$ bin/managesdk.sh -setNewProfileDefault -sdkName 1.7_32

That’s it, after all this you should be able to run WebSphere with the following command:

$ bin/startServer.sh server1 # No idea where that server1 comes from

Once WebSphere is running you can access the admin console at http://localhost:9060/ibm/console (if you only enter localhost:9060, you are greeted with an error message, just to be absolutely user friendly), and your applications should theoretically be under port 9080 (because who needs to be consistent with every other container and port 8080).

Next time I’ll blog about OpenAM instead, promise. 🙂

How to configure social authentication with LinkedIn

When trying to configure Social Authentication with OpenAM 12 you may notice that out of the box OpenAM only supports Microsoft, Google and Facebook. The reasoning behind this is that at the time of the implementation these providers supported OpenID Connect (well, Facebook supports Facebook Connect, but that’s close enough). In case you would like to set up social authentication with other providers, that is still possible, but a bit tricky. In this article I’m going to show how social authentication can be configured with, for example, LinkedIn (which currently only supports OAuth2, not OIDC).

Create an OAuth2 app at LinkedIn

In order to be able to obtain OAuth2 access tokens from LinkedIn, you will need to register your OpenAM as a LinkedIn application by filling out some silly forms. The second page of this wizard gets a bit more interesting, so here are a couple of things that you should do:

  • Take a note of the Client ID and Client Secret displayed.
  • Make sure that OpenAM’s Redirect URI is added as a valid OAuth 2.0 Authorized Redirect URL; by default that would look something like:
    http://openam.example.com:8080/openam/oauth2c/OAuthProxy.jsp
    

Configure OpenAM for Social authentication

To simply configure LinkedIn for OAuth2 based authentication, you just need to create a new authentication module instance with OAuth 2.0 / OpenID Connect type. With ssoadm that would look something like:

$ openam/bin/ssoadm create-auth-instance -e / -m linkedin -t OAuth -u amadmin -f .pass

This just configures an OAuth2 authentication module with the default settings, so now let’s update those settings to actually match up with LinkedIn:

$ openam/bin/ssoadm update-auth-instance -e / -m linkedin -u amadmin -f .pass -D linkedin.properties

Where linkedin.properties contains:

iplanet-am-auth-oauth-client-id=
iplanet-am-auth-oauth-client-secret=
iplanet-am-auth-oauth-auth-service=https://www.linkedin.com/uas/oauth2/authorization
iplanet-am-auth-oauth-token-service=https://www.linkedin.com/uas/oauth2/accessToken
iplanet-am-auth-oauth-scope=r_basicprofile
iplanet-am-auth-oauth-user-profile-service=https://api.linkedin.com/v1/people/~?format=json
org-forgerock-auth-oauth-account-mapper-configuration=id=uid
org-forgerock-auth-oauth-attribute-mapper-configuration=lastName=sn
org-forgerock-auth-oauth-attribute-mapper-configuration=firstName=givenName
org-forgerock-auth-oauth-attribute-mapper-configuration=id=uid
org-forgerock-auth-oauth-prompt-password-flag=false

At this stage you should be able to authenticate with LinkedIn by simply opening up /openam/XUI/#login/&module=linkedin.

To set up this OAuth2 module for social authentication you just need to do a few more things:
Add the authentication module to a chain (social authentication uses authentication chains to allow more complex authentication flows):

$ openam/bin/ssoadm create-auth-cfg -e / -m linkedinChain -u amadmin -f .pass
$ openam/bin/ssoadm add-auth-cfg-entr -e / -m linkedinChain -o linkedin -c REQUIRED -u amadmin -f .pass

Now to enable the actual social authentication icon on the login pages, just add the Social authentication service to your realm:

$ openam/bin/ssoadm add-svc-realm -e / -s socialAuthNService -u amadmin -f .pass -D social.txt

Where social.txt contains:

socialAuthNDisplayName=[LinkedIn]=LinkedIn
socialAuthNAuthChain=[LinkedIn]=linkedinChain
socialAuthNIcon=[LinkedIn]=https://static.licdn.com/scds/common/u/images/logos/linkedin/logo_in_nav_44x36.png
socialAuthNEnabled=LinkedIn

Please keep in mind that OAuth2 is primarily meant for authorization purposes; for authentication you should really utilize OpenID Connect as the protocol. As the social authentication implementation is quite generic, you should actually be able to configure any kind of authentication mechanism and display it with a pretty logo on the login page if you’d like.

Some links I’ve found useful when writing up this post:
OpenAM 12 – Social Authentication
LinkedIn OAuth2 docs
LinkedIn REST API

How to set up an OAuth2 provider with ssoadm

The ssoadm command line tool is a quite powerful ally if you want to set up your OpenAM environment without performing any operation through the user interface, i.e. when you just want to script everything.

Whilst the tool itself allows you to do almost anything with the OpenAM configuration, finding the right set of commands for performing a certain task is not always that straightforward… In today’s example we will try to figure out which commands to use to set up an OAuth2 provider.

When using the Common Tasks wizard to set up an OAuth2 Provider we can see that there are essentially two things that the wizard does for us:

  • Configure a realm level service for the OAuth2 Provider
  • Set up a policy to control access to the /oauth2/authorize endpoint

Setting up a realm level service

Well, we know that we are looking for something that sets up a service, so let’s see what command could help us:

$ openam/bin/ssoadm | grep -i service -B 1
...
--
    add-svc-realm
        Add service to a realm. 
--
...

Well, we want to add a service to a realm, so add-svc-realm sounds like a good fit. Let’s see what parameters it has:

$ openam/bin/ssoadm add-svc-realm
...
Options:
    --realm, -e
        Name of realm.

    --servicename, -s
        Service Name.

    --attributevalues, -a
        Attribute values e.g. homeaddress=here.

    --datafile, -D
        Name of file that contains attribute values data.

Alright, so the realm is straightforward, but what should we use for servicename and datafile?

Each service in OpenAM has a service schema that describes what kind of attributes that service can contain, and with what syntaxes/formats/etc. Since all the default service definitions can be found in the ~/openam/config/xml directory, let’s have a look around and see if there is anything OAuth2 related:

$ ls ~/openam/config/xml
... OAuth2Provider.xml ...

After opening up OAuth2Provider.xml we can find the service name under the name attribute of the Service element (it happens to be OAuth2Provider).
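
If you don’t feel like opening the file in an editor, a quick grep can confirm the service name just as well (the output shown is what I would expect, assuming the usual service definition format):

$ grep -o 'Service name="[^"]*"' ~/openam/config/xml/OAuth2Provider.xml
Service name="OAuth2Provider"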

So the next question is: what attributes should you use to populate the service? All the attributes are defined in the very same service definition XML file, so it’s not too difficult to figure out what to do now:

$ echo "forgerock-oauth2-provider-authorization-code-lifetime=60
forgerock-oauth2-provider-refresh-token-lifetime=600
forgerock-oauth2-provider-access-token-lifetime=600" > attrs.txt
$ openam/bin/ssoadm add-svc-realm -e / -s OAuth2Provider -u amadmin -f .pass -D attrs.txt

Creating a policy

Creating a policy is a bit more complex since 12.0.0 and the introduction of XACML policies, but let’s see what we can do about that.

Using ssoadm create-xacml

The XACML XML format is not really pleasant for the eyes, so I would say that you are better off creating the policy using the policy editor first, and then exporting it in XACML format, so that you can automate this flow.
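
If I remember correctly, the export counterpart of create-xacml is the list-xacml command, so dumping the policies of a realm for later reuse would look something like this:

$ openam/bin/ssoadm list-xacml -e / -u amadmin -f .pass > oauth2-policy.xml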

Once you have the policy in XACML format, the ssoadm command itself would look something like this:

$ openam/bin/ssoadm create-xacml -e / -X oauth2-policy.xml -u amadmin -f .pass

Using the REST API

The policy REST endpoints introduced with 12.0.0 are probably a lot more friendly for creating policies, so let’s see how to do that:

$ echo '{
   "resources" : [
      "http://openam.example.com:8080/openam/oauth2/authorize?*"
   ],
   "subject" : {
      "type" : "AuthenticatedUsers"
   },
   "active" : true, 
   "name" : "OAuth2ProviderPolicy",
   "description" : "", 
   "applicationName" : "iPlanetAMWebAgentService",
   "actionValues" : {
      "POST" : true, 
      "GET" : true
   }
}' > policy.json
$ curl -v -H "iplanetdirectorypro: AQIC5wM...*" -H "Content-Type: application/json" -d @policy.json http://openam.example.com:8080/openam/json/policies?_action=create
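
In case you are wondering where the iplanetdirectorypro cookie value comes from: you can obtain one by authenticating over REST first (the password here is just a placeholder of course):

$ curl -X POST -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: secret12" -H "Content-Type: application/json" "http://openam.example.com:8080/openam/json/authenticate"
{"tokenId":"AQIC5wM...*","successUrl":"/openam/console"}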

Hope you found this useful.

Sessions

Sessions are one of the key components of OpenAM, so it can be quite beneficial to have a good understanding of how they are modelled, stored and used. In the following I will try to do a deep dive on how sessions work, what crosstalk means, and how session failover with CTS works.

Originally this post was also meant to discuss how the CTS reduced crosstalk mode changes things, but unfortunately I couldn’t finish all the necessary tests that would let me write it up 100% accurately… Hopefully this will be a good read on sessions regardless 🙂

Sessions

Firstly let’s clarify that OpenAM does NOT use HttpSessions (i.e. the session that the container maintains through the JSESSIONID cookie) for session tracking purposes; instead OpenAM uses its own session storage system with its own session ID format. The sessions are stored in memory in a Hashtable by default. This means that if you have more than one OpenAM server in a deployment, each of the servers is going to have a different set of sessions stored in memory. How can one server then validate another server’s session?

Crosstalk

Since each session is essentially bound to an OpenAM server, validating a session in a multi-node deployment can only be done by validating the session at the server that actually owns it. The cross-server session validation is done by one server making an HTTP request to the other server (i.e. to the server that is meant to own the session), and this is what we call crosstalk in the OpenAM world.

Before going into any further details though, we should talk a bit about the format of the SessionID:

AQIC5wM2LY4Sfcwww8u5l2MYyuEyGXUR0JX1RIS-NSxCyRI.*AAJTSQACMDIAAlNLABQtNDQxMDI2NzQ5NjQ5NDMxMTg3NgACUzEAAjAx*

The first part of the session ID up until the .* contains the random identifier of the session, and the value between the .* and the trailing * is the extension part of the session. After correctly decoding the extension, one can see that the following information is stored in there:

  • Server ID: 01
  • Site ID: 02
  • Storage Key: -4410267496494311876

In short, the Server ID and the Site ID allow the OpenAM servers to figure out which server actually owns a given session; the Storage Key will become important when we look into session failover in more detail. In case you want to read more about sessions, I would strongly suggest checking out Bill Nelson’s blog post about them.
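
If you want to peek inside the extension yourself, it appears to be plain Base64-encoded binary data, so a quick sketch for decoding the example above would be (strings -n 2 is needed because the field markers are only two characters long):

$ echo 'AAJTSQACMDIAAlNLABQtNDQxMDI2NzQ5NjQ5NDMxMTg3NgACUzEAAjAx' | base64 -d | strings -n 2
SI
02
SK
-4410267496494311876
S1
01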

Now that we know how OpenAM servers can determine who owns a given session, it’s time to demonstrate how crosstalk really works. Let’s assume that we have the following (simplified) deployment diagram:

[Diagram: simplified deployment network]

Scenario 1

There is a session validation request received by Server 03 for a session that is actually owned by Server 01.

Server 03 will first look for the session locally, then check which server is actually meant to own the session. Since the session in this case is owned by Server 01, Server 03 will perform an HTTP request against Server 01’s sessionservice PLL (Platform Low Level, essentially an XML over HTTP protocol) endpoint. At this point Server 01 will check the session locally, and if it exists, it will be returned to Server 03 in an XML format (called SessionInfo). Server 03 then processes the returned SessionInfo (or error) and is able to answer the question of whether the session is actually valid. The SessionInfo retrieved from Server 01 will then be stored on Server 03 for caching purposes.

A sequence diagram to demonstrate this would look something like (the session listener part is explained a bit further down):

[Sequence diagram: crosstalk between the servers]

Scenario 2

In this scenario there is a session validation request received by Server 04 for a session that is actually owned by Server 01.

In this case the flow is slightly different, since Server 04 will actually send the sessionservice request to Site 02. Depending on LB configuration/stickiness/luck there is a chance that the request will be routed to Server 03, in which case Scenario 1 kicks in.

Moral of these scenarios

One of the most important things to understand when it comes to crosstalk is that all these HTTP requests sent between the different instances are blocking calls. For example, in the second scenario both Server 04 and (potentially) Server 03 will have one request processing thread waiting for Server 01 (and Server 03) to respond, which means that a single user request kept three different request processing threads busy. This is why, when it comes to sizing a deployment, it is key to understand how much crosstalk is expected in the environment (which will very much depend on how sticky your LB is).

Since we are talking about HTTP requests, it’s also key to ensure that the HTTP connect and read timeout settings are configured to sensible values. With improper settings, there is a chance that an unresponsive instance kills another (otherwise perfectly running) instance, because the good instance ends up waiting 30 seconds (or potentially more) on the bad server. During this time period the request processing threads will be occupied with user requests (which in turn are waiting on the crosstalk responses), essentially making the container inaccessible for other users. The connect and read timeout settings are advanced server properties (stored under Configuration > Servers and Sites > Default Server Settings > Advanced tab), and they should probably look something like:

com.sun.identity.url.readTimeout=5000
org.forgerock.openam.url.connectTimeout=2000

Or something similar. This is really a tuning parameter, so please take this example only as a guideline, and make sure you tune/test these settings in your environment before changing anything.

Caching

After all of this, you are probably wondering how this scales, and what prevents the OpenAM servers from just querying each other for the same sessions over and over again. As I’ve already briefly mentioned in Scenario 1, the retrieved SessionInfo is cached on the server that made the crosstalk request. How long this SessionInfo is cached is controlled by the Maximum Caching Time setting under Configuration > Global > Session. Within this interval OpenAM will not update the idle timeout of the session, and it will also just use the cached SessionInfo instead of asking for the same information again. Once the interval passes and there is an incoming request for the corresponding session, OpenAM will obtain the SessionInfo again from the authoritative server and cache it again.

Performing non-read operations (like setting a session property, or a logout) will result in a crosstalk request regardless of the caching setting, as changes to sessions can only be made by the authoritative server.

In order to prevent the cached session data from becoming stale, the servers also register notification listeners. This way, when a session gets updated, the authoritative server can send out notifications about the changes to all the interested OpenAM servers.

Session Failover

As the purpose of a multi-node environment is usually to achieve high availability of the service, we should think about what happens when an OpenAM instance goes down. Since OpenAM stores the sessions in memory, once the JVM shuts down, all the sessions hosted by that OpenAM instance will be lost, forcing end-users to reauthenticate.

In certain deployments this behavior may not be acceptable. To make this more seamless for end-users, OpenAM’s session failover solution can be enabled. Session failover essentially means the following three things:

  • Storing session information in a persistent storage system.
  • Monitoring the other servers in the deployment for availability.
  • Recovering sessions if the host server is down.

To be able to discuss these, first let’s assume that we have the following slightly more complex deployment:

[Diagram: deployment with session failover enabled]

Storing sessions

Since OpenAM 11.0.0, sessions are stored by the Core Token Service (CTS) into an embedded/external OpenDJ instance as directory entries.

When it comes to session failover, we are talking about the storage and retrieval of session tokens. These session tokens are converted into a generic token format, and then stored in the directory server. An example session would look something like this in OpenDJ:

dn: coreTokenId=474806826517738981,ou=famrecords,ou=openam-session,ou=tokens,dc=openam,dc=forgerock,dc=org
objectClass: top
objectClass: frCoreToken
coreTokenString02: AQIC5wM2LY4SfcyiG2X5Sgn2teO8deDdR6NUPHxmITkXNhg.*AAJTSQACMDIAAlNLABI0NzQ4MDY4MjY1MTc3Mzg5ODEAAlMxAAIwMQ..*
coreTokenType: SESSION
coreTokenId: 474806826517738981
coreTokenString03: shandle:AQIC5wM2LY4Sfcw2KnP6-SuDR0OueZg90KT938-gFM6jWY4.*AAJTSQACMDIAAlMxAAIwMQACU0sAEjQ3NDgwNjgyNjUxNzczODk4MQ..*
coreTokenUserId: id=demo,ou=user,dc=openam,dc=forgerock,dc=org
coreTokenExpirationDate: 20140728160134+0100
coreTokenString01: 1406557594
coreTokenObject: <long JSON blob representing a session>

The most important part of the above token is the coreTokenObject field, which is what really represents the full session object, and which can be restored at a later time if necessary.

It is good to keep in mind that the CTS stores these entries in a directory server, more specifically in OpenDJ. Amongst many other things, this means that the tokens need to be transferred over the network from OpenAM to one of the OpenDJ instances using the LDAP protocol. A regular session token’s size can vary between 3k and 10k depending on deployment size and session usage, and all this data needs to get transferred over the network for persistence.

In a multi-node OpenDJ deployment (where replication has been set up), each of the token related (ADD/MODIFY/DELETE) LDAP operations gets tracked in the changelog, which means all the token information gets persisted essentially twice (note that the changelog gets purged periodically, see replication-purge-delay). By reducing the token size you will firstly write less data to the disk (less disk IO), and secondly the network traffic should be smaller as well (for both the initial operation and for the replication itself afterwards).

The CTS framework implements two kinds of compression mechanisms for the coreTokenObject field in order to reduce the token size:

  • attribute compression: in this case the JSON object itself is compressed by essentially shortening the field names, for example sessionProperties -> sP (com.sun.identity.session.repository.enableAttributeCompression advanced server property).
  • compression: GZIP compression (com.sun.identity.session.repository.enableCompression advanced server property).

Both of these approaches should reduce the size of the session tokens, but they also increase the processing time on the OpenAM side. Personally I would strongly recommend enabling GZIP compression to reduce the token size in deployments where session failover is enabled.
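
Enabling GZIP compression with ssoadm would look something like this (a sketch using update-server-cfg against the default server configuration; you will most likely need to restart the servers for the change to take effect):

$ openam/bin/ssoadm update-server-cfg -s default -u amadmin -f .pass -a com.sun.identity.session.repository.enableCompression=true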

Monitoring servers

As we already know, under normal circumstances OpenAM uses crosstalk when requests get misrouted, and this is something that does not change once session failover is enabled. Although the sessions are stored in OpenDJ, OpenAM will continue to send crosstalk requests to the other instances (in essence, this behavior is what the reduced crosstalk mode attempts to change).

In case of session failover it doesn’t really help if AM waits on a crosstalk response from a node that may actually be unavailable. In order to prevent this scenario, there is a component that gets automatically enabled once session failover has been enabled: the ClusterStateService (or CSS as I like to call it). The ClusterStateService essentially tries to monitor the servers within the current site (and since 12 it also monitors remote site URLs), so that if there is a need to do a crosstalk, OpenAM first checks with the ClusterStateService whether the given node is available before sending any kind of crosstalk request.
To see what kind of settings are there to control CSS, have a look at the Admin Guide.

If the server that is meant to host the session is down, then OpenAM will recover the session.

Recovering sessions

Before the session can be recovered from CTS, OpenAM first needs to figure out who will own the session in the absence of its current owner. Each OpenAM server has a thing called the PermutationGenerator, which essentially ensures that all the OpenAM servers within the same deployment will always generate numbers in the exact same order. The owner of the session is determined by the ClusterStateService and the PermutationGenerator, i.e. the first server that is available and is next in line for the current session will have the responsibility of hosting the session for the currently unavailable server.

The recovery itself is quite simple: the session gets retrieved from CTS, and a session object gets created based on the information stored in the token.

That’s it for now

Hopefully you found this post helpful.

Once I finally get some time to test reduced crosstalk in a real agent-enabled environment there will be a new post coming your way. 😉

LDAPS or StartTLS? That is the question…

Due to the various security issues around the different SSL implementations, I’ve seen an increasing demand for OpenAM’s StartTLS support, even though OpenAM has always supported LDAPS perfectly well. In this post I’m going to show you how to set up both StartTLS and LDAPS in a dummy OpenDJ 2.6.2 environment, and then I’ll attempt to compare them from a security point of view.

NOTE: the instructions provided here for setting up secure connections are by no means the most accurate ones, as I only provide them for demonstration purposes. You should always consult the product documentation for much more detailed, and precise information.

Common Steps

Both LDAPS and StartTLS require a private key; I’m just going to assume you know how to generate/convert one (or obtain one from a trusted CA).
Once you have your JKS file ready, first make sure that it contains a PrivateKeyEntry:

$ keytool -list  -keystore server.jks 
Enter keystore password:  

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

1, 2015.04.22., PrivateKeyEntry, 
Certificate fingerprint (SHA1): AA:30:0D:8E:DE:4C:F9:AB:AC:FA:61:E7:B4:F5:56:EF:3E:F4:F6:4A

After verifying the keystore, copy it into the config folder with the name “keystore” and also create a keystore.pin file that contains the keystore’s password.
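
Something along these lines should do (the paths and the password are just examples):

$ cp server.jks /path/to/opendj/config/keystore
$ echo -n 'changeit' > /path/to/opendj/config/keystore.pin
$ chmod 600 /path/to/opendj/config/keystore.pin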

In order to make this JKS available for OpenDJ you’ll need to run the following dsconfig command (for this demo’s purpose we are going to reuse the existing Key/Trust Manager Providers):

dsconfig set-key-manager-provider-prop \
          --provider-name JKS \
          --set enabled:true \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ****** \
          --no-prompt

Since key management always goes hand-in-hand with trust management, we need to set up the Trust Manager Provider as well. For this, you’ll need to create a JKS keystore which only contains the public certificate and place it into the config folder with the name “truststore”, and again, you’ll need to create a truststore.pin file containing the truststore’s password.
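
Creating that truststore from the keystore we made earlier could look something like this (the export alias is whatever your PrivateKeyEntry is stored under, which happened to be “1” in the listing above; the import alias and the password are just illustrations):

$ keytool -exportcert -alias 1 -keystore server.jks -file server.crt
$ keytool -importcert -alias server-cert -file server.crt -keystore /path/to/opendj/config/truststore
$ echo -n 'changeit' > /path/to/opendj/config/truststore.pin

With the truststore in place, the trust manager provider can be enabled: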

dsconfig set-trust-manager-provider-prop \
          --provider-name JKS \
          --set enabled:true \
          --set trust-store-pin-file:config/truststore.pin \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ****** \
          --no-prompt

LDAPS

From a protocol point of view LDAPS is actually not too different from HTTPS: in order to establish a connection to the directory, the client MUST perform an SSL/TLS handshake with the server first, hence all the LDAP protocol messages are transported over a secure, encrypted channel.
Configuring and enabling the LDAPS Connection Handler in OpenDJ doesn’t really take too much effort:

dsconfig set-connection-handler-prop \
          --handler-name "LDAPS Connection Handler" \
          --set enabled:true \
          --set listen-port:1636 \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ****** \
          --no-prompt

After this you will need to restart the directory server, during the startup you should see the following message:

[22/04/2015:20:47:26 +0100] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAPS Connection Handler 0.0.0.0 port 1636

To test the connection, you can just run a simple ldapsearch command:

$ bin/ldapsearch -Z -h localhost -p 1636 -D "cn=Directory Manager" -w ****** -b dc=example,dc=com "uid=user.0" "*"
The server is using the following certificate: 
    Subject DN:  EMAILADDRESS=peter.major@forgerock.com, CN=aldaris.sch.bme.hu, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Issuer DN:  EMAILADDRESS=peter.major@forgerock.com, CN=aldaris.sch.bme.hu, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Validity:  Wed Apr 22 19:43:22 BST 2015 through Thu Apr 21 19:43:22 BST 2016
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":

As you can see, I was prompted to accept my self-signed certificate, and after entering “yes”, I could see the entry I was looking for.

StartTLS

StartTLS for LDAP is slightly different from LDAPS, the main difference being that the client first needs to establish an unencrypted connection with the directory server. At any point after establishing the connection (as long as there are no outstanding LDAP operations on it), the StartTLS extended operation can be sent across to the server. Once a successful extended operation response has been received, the client can initiate the TLS handshake over the existing connection. Once the handshake is done, all future LDAP operations will be transmitted over the now secure, encrypted channel.
Personally, my concerns with StartTLS are:

  • You must have a plain LDAP port open on the network.
  • Even after a client connects to the directory, there is absolutely nothing preventing the user from sending BIND or any other kind of request on the unencrypted channel before actually performing the StartTLS extended operation.

Now let’s see how to set up StartTLS:

dsconfig set-connection-handler-prop \
          --handler-name "LDAP Connection Handler" \
          --set allow-start-tls:true \
          --set key-manager-provider:JKS \
          --set trust-manager-provider:JKS \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ****** \
          --no-prompt

Restart the server, and then, to verify that the connection works, run:

$ bin/ldapsearch -q -h localhost -p 1389 -D "cn=Directory Manager" -w ****** -b dc=example,dc=com "uid=user.0" "*"
The server is using the following certificate: 
    Subject DN:  EMAILADDRESS=peter.major@forgerock.com, CN=aldaris.sch.bme.hu, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Issuer DN:  EMAILADDRESS=peter.major@forgerock.com, CN=aldaris.sch.bme.hu, OU=Sustaining, O=ForgeRock Ltd, L=Bristol, C=GB
    Validity:  Wed Apr 22 19:43:22 BST 2015 through Thu Apr 21 19:43:22 BST 2016
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":

Again, you can see that the entry is returned just fine after accepting the server certificate. For the sake of testing, you can remove the “-q” (--useStartTLS) parameter from the ldapsearch command, and you should still see the entry being returned, but this time around the connection is not encrypted at all.

So how does one prevent clients from using the connection without actually performing the StartTLS extended operation?
There is no real solution for this (based on my limited understanding of ACIs), because I couldn’t really find anything in the available list of permissions that would match BIND operations. I’ve actually tried to set up an ACI like this:

aci: (target="ldap:///dc=example,dc=com")(version 3.0;acl "Prevent plain LDAP operations"; deny (all)(ssf<="1");)

but the BIND operations were still successful over plain LDAP. Whilst it was good that I couldn’t really perform other LDAP operations, I think the worst had already happened: the user’s password was transferred over an insecure network connection.
For more details on ssf, by the way, feel free to check out the documentation. 😉

UPDATE!
Chris Ridd let me know that there is a way to enforce secure connections for BIND operations as well, by configuring the password policy. To set up the password policy just run the following command:

dsconfig set-password-policy-prop \
          --policy-name "Default Password Policy" \
          --set require-secure-authentication:true \
          --hostname localhost \
          --port 4444 \
          --trustStorePath /path/to/opendj/config/admin-truststore \
          --bindDN "cn=Directory Manager" \
          --bindPassword ****** \
          --no-prompt

Future BIND operations on unsecured LDAP connections will result in the following error:

[23/04/2015:10:04:27 +0100] BIND RES conn=2 op=0 msgID=1 result=49 authFailureID=197124 authFailureReason="Rejecting a simple bind request because the password policy requires secure authentication" authDN="uid=user.0,ou=people,dc=example,dc=com" etime=1

The problem though is that, again, nothing actually prevented the user from sending the password over the unsecured connection...

Common misconceptions

I think the following misconceptions are causing the most problems around security:

  • StartTLS is more secure, because it has TLS in the name: WRONG! StartTLS just as well allows the usage of the SSL (v2/v3) protocols; it is definitely not limited to TLS v1.x by any means! Hopefully my explanation above makes it clearer that StartTLS is probably less secure than LDAPS.
  • LDAPS is less secure, because it has the ugly S in the name (thinking it stands for SSL, when it actually stands for Secure): WRONG! As always, the actual security you can gain by using LDAPS connections is all a matter of configuration. A badly configured LDAPS setup can still result in unsafe communication, yes, but LDAPS can just as well leverage the (currently considered safe) TLSv1.2 protocol and be perfectly safe.

I think I just can't emphasize this enough: use LDAPS if possible.

Understanding the login process

As authentication is pretty much the core functionality of OpenAM, I believe it is helpful to have a good understanding of how it really works. For starters, let’s have a look at the different concepts around authentication.

Authentication modules

Authentication modules are simple pieces of functionality that are meant to identify the user by some means. Depending on your requirements, an authentication module could verify user credentials, or perform some kind of two factor verification process. For more complex use-cases you could use the auth module to collect information about the end-user and, with the help of a fraud-detection system, determine whether the current login attempt is “risky”.

In any case, the authentication modules perform some (customizable) logic, and at the end of the module processing you can either ignore the current authentication module (neither a success nor a failure), OR succeed with a logged in user, OR just simply fail (invalid credentials, etc.).

The authentication module implementations are JAAS based (with some abstraction on top of plain JAAS), so everything is based on callbacks (think of callbacks as input fields) that need to be “handled” and submitted. When the callbacks are submitted, the AMLoginModule’s #process method gets invoked with the callbacks. This is when the authentication module can start to process the submitted data and determine whether the authentication attempt was successful. Since an authentication process can involve multiple steps (more than one set of callbacks to submit: for example requesting a username, and then some verification code), the #process method needs to return a number that represents the next state (there are special numbers like ISAuthConstants.LOGIN_SUCCEED that represent a successful authentication result, i.e. no further need to present callbacks), which will then be used to determine the next set of callbacks to display on the UI. Assuming that the authentication finished successfully, we need to return the magic LOGIN_SUCCEED state.

So how does OpenAM really know who the user is?

Once the authentication is successful, the auth framework will call AMLoginModule#getPrincipal, which needs to return the authenticated user’s principal. #getPrincipal has a key role in the authentication process, so make sure it’s implemented correctly (or when using built-in modules, make sure they are configured correctly).

Authentication chains

The next sensible building blocks are the authentication chains. An auth chain can be considered a combination of various authentication modules, presenting a single authentication procedure to the end-users. Following the previous example, one could think that checking some verification code after providing a username and password may not actually be the job of a single authentication module, and should probably be implemented separately. In that case one could implement one module for username/password login, and then implement another module for code verification. To make sure the user logs in using both auth modules, one could create an authentication chain that includes both of them, and then the user will just need to authenticate against that chain.

Since the modules are JAAS-based, it makes sense to set up the chain configurations similarly to JAAS as well, but I’m not going to go into too much detail on that front; instead you should just read about JAAS a bit more (especially about the “flags”).

Profile lookup

Once the user has successfully authenticated, there is a thing called “profile lookup”. This bit is all about trying to find the logged in user in one of the configured data stores, and then ensuring that the user-specific settings (things like custom idle/max timeout values, or even session quota) are all applied to the just-to-be-created session. There are other additional checks as well, like ensuring that the logged in user actually has an active status in the system (e.g. doesn’t have a locked account).

To make things a bit clearer, let’s talk about the User Profile mode now (Access Control > realm > Authentication > All Core Settings). The user profile mode tells OpenAM what should happen at the profile lookup stage of the authentication; these are the possible modes (an ssoadm example follows the list):

  • Required – this is the default mode, which just means that the user profile MUST exist in the configured data store.
  • Ignored – the user profile does not have to exist, the user profile will not be looked up as part of the authentication process.
  • Dynamic – the profile will be looked up, but if it doesn’t already exist, it will be dynamically created in the data store.
  • Dynamic with User Alias – this is similar to Dynamic, but it also appears to store user alias attributes in the newly created entries (I must admit I don’t fully understand this mode yet).
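
For example, switching the top level realm to the Ignored mode with ssoadm would be something along these lines (the iplanet-am-auth-dynamic-profile-creation attribute name and its ignore value are from memory, so double check them against the iPlanetAMAuthService definition before relying on this):

$ openam/bin/ssoadm set-svc-attrs -e / -s iPlanetAMAuthService -u amadmin -f .pass -a iplanet-am-auth-dynamic-profile-creation=ignore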

I believe it is important to stop here a bit and emphasize the following:
The authentication module may interact with arbitrary external components during the authentication phase; however, when it comes to profile lookup, that will always be performed against the configured data stores. If you are running into the infamous “User has no profile in this organization” error message, it means that the authentication was successful, but the profile lookup failed, since OpenAM was unable to find the user in the configured data stores.

The profile lookup itself is performed based on the return value of #getPrincipal, which is why it is really important to ensure that the module works correctly. The returned principal can be simply a username like “helloworld”, but it can also have a DN format like “uid=helloworld,ou=people,dc=example,dc=com” (see the LDAP module’s Return User DN to DataStore setting). When the returned value is a DN, the RDN value will be used for the data store search (so helloworld), hence it is important to ensure that the data store has been configured to search for users based on the attribute that is expected to have the value helloworld.

The idea behind all of this is that the username returned from #getPrincipal should be unique across the user base, so even if, let’s say, you allow someone to authenticate as “John Smith”, you should still return a more meaningful username (like jsmith123) to the backend. That way you can ensure that when you ask for “John Smith”’s user details, you will get the right set of values.

Post authentication actions

After a successful profile lookup there are various additional things that OpenAM does, but I’m not really going to go into the nifty details of those. Here’s a small list of things that normally happen:

  • Check if user account is active.
  • Check if the account is locked (using OpenAM’s built-in Account Lockout feature).
  • Check if there are user-specific session settings configured for the user, and apply those values for the newly created session.
  • When the user session is created, check if the session quota has been exhausted, and if so, run the corresponding quota exhaustion action.
  • Execute the Post Authentication Processing plugins.
  • Determine the user’s success login URL and also ensure that the goto URL is validated.

Summary

Whilst we only discussed portions of the actual authentication process, I think the main concepts of authentication are laid out, so hopefully the next time you need to configure OpenAM, you can reuse the things learned here. 🙂

Jenkins and the mysterious Accept timed out error

Not so long ago I was trying to set up a new Jenkins job on a freshly created Jenkins slave. Unfortunately for me, the first attempt at running the Maven build resulted in a failure:

ERROR: Aborted Maven execution for InterruptedIOException
java.net.SocketTimeoutException: Accept timed out
 at java.net.PlainSocketImpl.socketAccept(Native Method)
 at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
 at java.net.ServerSocket.implAccept(ServerSocket.java:530)
 at java.net.ServerSocket.accept(ServerSocket.java:498)
 at hudson.maven.AbstractMavenProcessFactory$SocketHandler$AcceptorImpl.accept(AbstractMavenProcessFactory.java:211)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:309)
 at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:290)
 at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:249)
 at hudson.remoting.UserRequest.perform(UserRequest.java:118)
 at hudson.remoting.UserRequest.perform(UserRequest.java:48)
 at hudson.remoting.Request$2.run(Request.java:328)
 at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)

Jenkins normally kicks off Maven builds by forking a Maven process on the system and then using Maven extensions to become part of the build process. So let’s have a look at an example command Jenkins runs for a Maven build:

[OpenAM] $ /jdk8/bin/java -cp /jenkins/maven31-agent.jar:/apache-maven-3.2.3/boot/plexus-classworlds-2.5.1.jar:/apache-maven-3.2.3/conf/logging jenkins.maven3.agent.Maven31Main /apache-maven-3.2.3 /jenkins/slave.jar /jenkins/maven31-interceptor.jar /jenkins/maven3-interceptor-commons.jar 57186

Here, that last parameter, 57186, looks suspiciously like a random port number. Could it be that Jenkins tries to connect to this local port, and this connection fails, resulting in the above stacktrace? Well, that’s easy to test; let’s run the following iptables command on the box:

iptables -I INPUT -i lo -j ACCEPT

Try to rerun the build, and voilà, the build progresses further. Lesson learned. 😉
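
One small addendum: a rule added like this will not survive a reboot. On Ubuntu the iptables-persistent package can save it for you (just a pointer; check your distribution’s preferred way of persisting firewall rules):

apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4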

How to determine NSS/NSPR versions on Linux

Reverse engineering is a quite important skill to have when working with OpenAM, and this is even more the case for the web policy agents. Determining the version of the NSS and NSPR libraries may prove important when trying to build the agents, so here is a trick I’ve used in the past to determine the version of the bundled libraries.

To determine the version for NSPR, create nspr.c with the following content:

#include <dlfcn.h>
#include <stdio.h>

int main() {
        /* load the bundled NSPR library at runtime */
        void* lib = dlopen("/opt/web_agents/apache24_agent/lib/libnspr4.so", RTLD_NOW);
        /* look up the version reporting function by its exported symbol name */
        const char* (*func)() = dlsym(lib, "PR_GetVersion");
        printf("%s\n", func());

        dlclose(lib);
        return 0;
}

Compile it using the following command (of course, make sure the path to the library is actually correct), then run the resulting binary:

$ gcc nspr.c -ldl
$ ./a.out
4.10.6

Here is the equivalent source for NSS, saved under nss.c:

#include <dlfcn.h>
#include <stdio.h>

int main() {
        /* load the bundled NSS library at runtime */
        void* lib = dlopen("/opt/web_agents/apache24_agent/lib/libnss3.so", RTLD_NOW);
        /* look up the version reporting function by its exported symbol name */
        const char* (*func)() = dlsym(lib, "NSS_GetVersion");
        printf("%s\n", func());

        dlclose(lib);
        return 0;
}

And an example output would look like:

$ gcc nss.c -ldl
$ ./a.out
3.16.3 Basic ECC

To determine the correct symbol names I’ve been using the following command:

nm -D libns{s,p}*.so | grep -i version
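
The output would look something like this (the addresses are just an illustration, and the exact symbol list will vary between versions):

$ nm -D libnspr4.so | grep -i version
000273e0 T PR_GetVersion
$ nm -D libnss3.so | grep -i version
00069f40 T NSS_GetVersion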

NOTE: While this code works nicely on Linux, for Windows and Solaris you will probably need a few adjustments, or there are potentially other/better ways to get information on the libraries.