Happy Christmas (This isn’t a Scam)

It really isn't - just a simple note to wish all the Infosec Pro readers a relaxing festive break, for yourself, friends and family.

2013 has been an interesting year yet again in the Infosec world.  Connectivity has been the buzz, with topics such as the 'Internet of Things', 'Relationship Management' and 'Social Graphing' all producing great value and enhanced user experiences, but they have brought with them some tough challenges with regards to authentication, context-aware security and privacy.

The government surveillance initiatives on both sides of the Atlantic have brought home the seemingly omnipresent nature of snooping, hacking and eavesdropping.  Whilst not new (anyone read Spycatcher?), the once private and encrypted world of email, SMS and telephony may now never be seen in the same light again.

Snowden continues to grab the headlines, playing an elusive game of cat and mouse between the Russians and the United States.  If, as is believed, he has released only 1% of the material he has access to, 2014 could certainly be more interesting.

But what will 2014 bring?  Certainly the same corporate issues that have faced many organisations for the last 3 or 4 years have not been solved.

BYOD, identity assurance and governance, SIEM management, context-aware authentication and the ever-present 'big security data' challenges still exist.  I can only see these issues becoming more important, more complex and more costly to solve.  The increasingly connected nature of individuals, things and consumers is bringing organisations closer to their respective market audiences, but requires interesting platforms that bring together data warehousing, identity management, authentication and RESTful interfaces.  2014 may just be the year where security goes agile.  We can hope.

By Simon Moffatt

Experimenting with OpenDJ in CoreOS / Docker

CoreOS is a new minimal Linux-based OS designed to run applications in containers.  The design concept is similar to Joyent's SmartOS (aside: I would love to see the CoreOS team adopt ZFS. It has so many compelling features for hosting containers. But I digress...)

CoreOS uses Docker lightweight containers, which are in turn based on Linux LXC containers. You will want to check out the excellent getting started guide, but the Reader's Digest summary is that Docker images are built up incrementally and inherit from their parent images.  Each new image contains only the deltas from its parent - making it possible to distribute a small incremental feature set.

When you run a Docker container, you are running only the processes that are needed for your service (for example, OpenDJ). You are not running an entire copy of the OS, making these containers super lightweight (OpenSolaris fans have had this feature for years in the form of zones).

For my OpenDJ experiments, I started with a Fedora 20 docker image:

docker pull mattdm/fedora:f20 

That command pulls down the image from the Docker repository.  You can start a shell on that image by running:

docker run -i -t mattdm/fedora:f20 /bin/bash

In the shell we can install additional packages using yum. For example:

yum install java-1.7.0-openjdk-devel.x86_64
curl -O http://download.forgerock.org/downloads/opendj/20131204020001/opendj-2.7.0-1.20131204.noarch.rpm
yum install opendj-2.7.0-1.20131204.noarch.rpm
cd /opt/opendj
.... continue OpenDJ install...

If we exit our bash shell at this point, we will find that our changes vanish.  To persist these changes we must commit the container to a new image. In a second CoreOS shell, we use:

docker commit 4ee0ae2b1bb1 wstrange/f20dj

This command tells Docker to create a new image. The container id above came from the "docker ps" command and identifies our (still) running container. The new image is saved as wstrange/f20dj. 

This new image will provide the JDK and OpenDJ.  Under the covers, it requires only the incremental bits that we added on top of the original Fedora 20 image. 
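You can see this layering for yourself with the "docker history" command, which lists the layers behind an image (a quick sketch, assuming the image name committed above):

```shell
# List the layers that make up our committed image. The topmost
# layer holds only the JDK + OpenDJ deltas we added; everything
# below it is inherited from the mattdm/fedora:f20 base image.
docker history wstrange/f20dj
```

The size column makes the incremental-delta point concrete: the base Fedora layers dominate, and our commit adds only what we installed.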

Now that we have a new image, we can fire up an instance of OpenDJ:

docker run -p 3389:389 -p 4444:4444 -p 8989:8989  -d wstrange/f20dj /opt/opendj/bin/start-ds -N

This tells Docker to start the container and run the "start-ds" command to start OpenDJ. 

Docker containers are NATed by default. The -p option is used to redirect local ports to container ports. In the above example, DJ listens on port 389. This is wired to port 3389 on our host. We set up redirection for 4444 (management) and 8989 (replication) as well. 

At this point you should be able to fire up your favourite LDAP browser and connect to port 3389.
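If you prefer the command line to an LDAP browser, a quick sanity check against the mapped port could look like this (a sketch, assuming OpenDJ's "cn=Directory Manager" account with the password chosen during setup):

```shell
# Read the root DSE over the redirected host port to confirm
# that the containerised DJ instance is answering.
ldapsearch -h localhost -p 3389 \
  -D "cn=Directory Manager" -w password \
  -b "" -s base "(objectclass=*)" vendorName vendorVersion
```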

Now let's have some fun with replication. To replicate we need a second DJ installation. That turns out to be quite easy:

docker run -p 4389:389 -p 4446:4444 -p 8988:8989  -d wstrange/f20dj /opt/opendj/bin/start-ds -N

You will note that we simply fire up a second instance of our DJ container - changing the port numbers so that we do not collide with our first instance.  We now have two instances running:

docker ps
CONTAINER ID        IMAGE                   COMMAND                CREATED             STATUS              PORTS                                                              NAMES
7ce9a470f422        wstrange/f20dj:latest   /opt/opendj/bin/star   46 minutes ago      Up 46 minutes>389/tcp,>4444/tcp,>8989/tcp   berserk_darwin
8c1c4160f01f        wstrange/f20dj:latest   /opt/opendj/bin/star   46 minutes ago      Up 46 minutes>389/tcp,>4444/tcp,>8989/tcp   romantic_engelbart

And sure enough, if you connect your LDAP browser to port 4389, you can browse the second DJ server. 

But wouldn't it be nice to enable replication between those instances?   Let's create another interactive container so we can get access to the OpenDJ commands:

docker run  -i -t wstrange/f20dj /bin/bash  

Now we enable replication between the two instances:

# This is the host-only network address for our containers
export HOST=
# enable replication
bin/dsreplication enable --host1  $HOST --port1 4444\
  --bindDN1 "cn=directory manager" \
  --bindPassword1 password --replicationPort1 8989 \
  --host2 $HOST --port2 4446 --bindDN2 "cn=directory manager" \
  --bindPassword2 password --replicationPort2 8988 \
  --adminUID admin --adminPassword password --baseDN "dc=example,dc=com" -X -n

# initialize replication
bin/dsreplication initialize --baseDN "dc=example,dc=com" \
  --adminUID admin --adminPassword password \
  --hostSource $HOST --portSource 4444 \
  --hostDestination $HOST --portDestination 4446 -X -n

[Note: For reasons that escape me, the dsreplication command must be issued twice. You will get an error on the first try, but the second attempt will work.]

You now have two OpenDJ instances that are replicating to each other. Try this out by changing an entry on one instance and verifying that it is updated on the other (DJ uses multi-master replication, so it does not matter where you make the change). 
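On the command line, that replication check could look something like the following (a sketch: uid=user.0 is a hypothetical entry under the sample "dc=example,dc=com" suffix, and the passwords are the ones used earlier):

```shell
# Modify an entry via the first instance (host port 3389)...
ldapmodify -h $HOST -p 3389 -D "cn=Directory Manager" -w password <<EOF
dn: uid=user.0,ou=people,dc=example,dc=com
changetype: modify
replace: description
description: hello from instance one
EOF

# ...then read the same entry back from the second instance
# (host port 4389) to confirm the change was replicated.
ldapsearch -h $HOST -p 4389 -D "cn=Directory Manager" -w password \
  -b "uid=user.0,ou=people,dc=example,dc=com" "(objectclass=*)" description
```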

The next step in this experiment will be to automate more of the above using Ansible.  A task for another day...

Setting up Java Fedlet with Shibboleth IdP

The Java Fedlet is basically a lightweight SAML Service Provider (SP) implementation that can be used to add SAML support to existing Java EE applications. Today we are going to try to set up the fedlet sample application with a Shibboleth IdP (available at testshib.org).

Preparing the fedlet

There are two kinds of Java fedlet in general: configured and unconfigured. The configured fedlet is what you can generate from the OpenAM admin console; it comes preconfigured to use the hosted OpenAM IdP instance, and it also sets up the necessary SP settings. The unconfigured fedlet, on the other hand, is more like starting from scratch (as the name itself suggests :) ) and you have to perform all the configuration steps manually. To simplify things, today we are going to use the configured fedlet for our little demo.

To get a configured fedlet you first have to install OpenAM, of course. Once you have OpenAM set up, create a new dummy Hosted IdP (to generate a fedlet it is currently required to also have a hosted IdP):

  • On the Common Tasks page click on Create Hosted Identity Provider.
  • Leave the entity ID as the default value.
  • For the name of the New Circle Of Trust enter cot.
  • Click on the Configure button.

Now to generate the configured fedlet let’s go back to the Common Tasks page and click on Create Fedlet option.

  • Here you can set the Name to any arbitrary string; this will be the fedlet’s entity ID. For the sake of simplicity let’s use the fedlet’s URL as the entity ID, e.g., http://fedlet.example.com:18080/fedlet.
  • The Destination URL of the Service Provider which will include the Fedlet setting, on the other hand, needs to be the exact URL of the fedlet, so for me this is just a matter of copy-paste: http://fedlet.example.com:18080/fedlet.
  • Click on the Create button.

This will generate a fedlet that should be available under the OpenAM configuration directory (in my case it was under ~/openam/myfedlets/httpfedletexamplecom18080fedlet/Fedlet.zip); let’s grab this file and unzip it to a convenient location. Now we need to edit the contents of fedlet.war itself and modify the files under the conf folder. As a first step, open sp.xml and remove the RoleDescriptor and XACMLAuthzDecisionQueryDescriptor elements from the end of the XML.

At this point we have everything we need to register our fedlet on the testshib.org site, so let’s head there and upload our metadata (sp.xml). In order to prevent clashes with other entity configurations, we should first rename the sp.xml file to something more unique, like fedlet.example.com.xml.

After successful registration the next step is to grab the testshib IdP metadata and add it to the fedlet as idp.xml, but there are some small changes we need to make to the metadata to make it actually work with the fedlet:

  • Remove the EntitiesDescriptor wrapping element, but make sure you copy its xmlns* attributes over to the EntityDescriptor elements.
  • Now that the XML has two EntityDescriptor root elements, keep only the one made for the IdP (i.e. the one with the “https://idp.testshib.org/idp/shibboleth” entityID) and remove the other.

Next, we need to update the idp-extended.xml file by replacing the entityID attribute’s value in the EntityConfig element with the actual entity ID of the testshib instance, which should be https://idp.testshib.org/idp/shibboleth.

After all of this we should have the standard and extended metadata files sorted, so the last thing to sort out is the Circle Of Trust between the remote IdP and the fedlet. To do that we need to edit the fedlet.cot file and update the sun-fm-trusted-providers property to include the correct IdP entity ID:
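The updated property could look something like the following (a sketch: sun-fm-trusted-providers holds a comma-separated list of the trusted entity IDs in the COT, here the testshib IdP plus our fedlet's entity ID):

```
sun-fm-trusted-providers=https://idp.testshib.org/idp/shibboleth,http://fedlet.example.com:18080/fedlet
```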


It’s time to start testing now, so let’s repackage the WAR (so it contains all the updated configuration files) and deploy it to an actual web container. After deploying the WAR, let’s access it at http://fedlet.example.com:18080/fedlet. Since there is no fedlet home directory yet, the fedlet suggests clicking on a link to create one based on the configuration in the WAR file, so let’s click on it and hope for the best. :)

Testing the fedlet

If we did everything correctly, we finally end up on a page showing some details about the fedlet configuration, along with some links to initiate the authentication process. As a test, let’s click on the Run Fedlet (SP) initiated Single Sign-On using HTTP POST binding link, and we should now be facing the testshib login page, where you can provide one of the suggested credentials. After performing the login, an error message is shown by the fedlet saying Invalid Status code in Response. Investigating a bit further, the debug logs under ~/fedlet/debug/libSAML2 tell us what’s going on:

SPACSUtils.getResponse: got response=<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ID="_0b33f19185348a26fffe9c3a1aa6e652" InResponseTo="s2be040cf929456a64f444527dfc1d7413ce178531" Version="2.0" IssueInstant="2013-12-04T18:39:32Z" Destination="http://agent.sch.bme.hu:18080/fedlet/fedletapplication"><saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp.testshib.org/idp/shibboleth</saml:Issuer><samlp:Status xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
<samlp:StatusCode xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Value="urn:oasis:names:tc:SAML:2.0:status:Responder"/>
<samlp:StatusMessage xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">Unable to encrypt assertion</samlp:StatusMessage>
</samlp:Status></samlp:Response>

Now we can also look into the testshib logs (see the TEST tab), and they tell us what the real problem was:

Could not resolve a key encryption credential for peer entity: http://fedlet.example.com:18080/fedlet

So this tells us that the Shibboleth IdP tries to generate an encrypted assertion for our fedlet instance, but fails to do so because it is unable to determine the public certificate of the fedlet. This happens because the basic fedlet metadata does not include a certificate by default. To remedy this, let’s do the following:

  • Acquire the PEM encoded certificate for the default OpenAM certificate:
    $ keytool -exportcert -keystore ~/openam/openam/keystore.jks -alias test -file openam.crt -rfc
  • Drop the BEGIN and END CERTIFICATE lines from the cert, so you only have the PEM encoded data, and then add some extra XML around it to look something like this:

    <KeyDescriptor>
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>
            ...the PEM encoded data...
          </ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </KeyDescriptor>
  • Add the KeyDescriptor under the SPSSODescriptor as a first element.
  • Upload the updated SP metadata with the same filename (fedlet.example.com.xml) again at the testshib site.

Since the decryption process requires the private key of the certificate, we need to ensure that the private key is available, so let’s do the following:

  • Copy the ~/openam/openam/keystore.jks file to the fedlet home directory
  • Visit the http://fedlet.example.com:18080/fedlet/fedletEncode.jsp page and enter changeit (the password of both the default keystore and the private key).
  • Grab the encrypted value and create ~/fedlet/.storepass and ~/fedlet/.keypass files containing only the encrypted password.
  • Open up ~/fedlet/sp-extended.xml and ensure that the encryptionCertAlias setting has the value of test.
  • Restart the container, so the changes are picked up.
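The non-interactive parts of the list above boil down to a few shell commands (a sketch; AQIC... stands in for the encoded password value you copied from fedletEncode.jsp):

```shell
# Make the keystore (and hence the private key) available to the fedlet
cp ~/openam/openam/keystore.jks ~/fedlet/

# Save the encoded password (copied from fedletEncode.jsp) for
# both the keystore and the private key
echo 'AQIC...' > ~/fedlet/.storepass
echo 'AQIC...' > ~/fedlet/.keypass
```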

At this stage we should retry the login process, and if nothing went wrong you can see all the nice details of the SAML assertion received from testshib.org! It works! :)