Using your phone with a mobile OpenAM demo environment

(Another blog which is a memo-to-self)

Here's the problem:

  • My OpenAM server is running on Tomcat on my Mac
  • My Mac (which is a client machine really) moves with me across different networks, getting different network addresses as it goes
  • My phone needs to connect to my Mac using a DNS name
  • And for a bonus point, in order to demo upcoming Push Authentication:
    • the Mac needs to be connected to the Internet;
    • the phone needs to be connected to a data connection.
So I need a setup like this:

DNS Server

The key to getting this setup to work is to run a DNS server on the Mac. I used the excellent dnsmasq, which by default uses the /etc/hosts file on the Mac as its source of information.
So in my /etc/hosts I have a line mapping my Mac's current wireless IP address to the hostname ahall.
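For example, the entry might look like this (the address shown is purely illustrative; use whatever IP your Mac currently has on the Wi-Fi network):

```
# /etc/hosts on the Mac - dnsmasq serves this entry to the phone
192.168.1.20   ahall
```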

Phone Settings

Then I configured my iPhone (which has to be on the same WiFi) to point to the Mac as a DNS server. In the WiFi settings, tap the "i" next to the network and add the Mac's IP address as a DNS server, ahead of the usual DNS servers you may use.

While trying to get this to work, I found that occasionally I had to stop and start dnsmasq:
# sudo launchctl stop homebrew.mxcl.dnsmasq
# sudo launchctl start homebrew.mxcl.dnsmasq
...especially after making changes to /etc/hosts.

(You may also find that Dyn Dig is a useful tool to have at hand. It is a mobile app version of the DNS resolution tool dig.)
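On the Mac itself, plain dig is enough to check that dnsmasq is answering. A quick sketch (this assumes dnsmasq is listening locally and uses the example hostname ahall from above):

```shell
# Query the local dnsmasq directly for the test hostname "ahall";
# the short timeout means this fails fast if dnsmasq isn't running
if command -v dig >/dev/null 2>&1; then
  dig @127.0.0.1 ahall +short +time=1 +tries=1 || echo "no answer from dnsmasq"
else
  echo "dig not installed; skipping check"
fi
```

If the entry resolves, dig prints the IP address from /etc/hosts.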

On the Move

One caveat: whenever your Mac moves to a different WiFi network, or in general gets a different IP address, you will need to update /etc/hosts and the phone's DNS settings again. So it is not a perfect solution.
But it does mean I can test OpenAM from my phone:


A Framework for Dynamic Roles and Assignments in OpenIDM

This solution article demonstrates how to add users to and remove users from LDAP groups, both statically and dynamically, using custom mappings in reconciliations. It presents a framework in OpenIDM for setting up a simple entitlement model in which roles are automatically attached to assignments, making RBAC easier to implement.
This blog article uses sample2b and refers specifically to the following doc link:!/docs/openidm/4/samples-guide%23provrole-add-assignments
Let us begin by laying the groundwork and describing how to create roles and assignments manually in OpenIDM. This solution article assumes you have gone through the doc link above and are familiar with sample2b.

Static Assignments

The key idea is to create an assignment, and add the group you want assigned under “Attributes” as shown below:
Inline image 1
And then of course, “attach” this assignment to the Provisioning Role you reconciled from DJ:
Inline image 2
Now, it is a matter of assigning the Provisioning Role to the user in order to have the LDAP group “cn=impersonation” assigned to the user in OpenDJ.
Inline image 3
DJ’s ldapsearch will validate the correct group was assigned:

./ldapsearch --port 1389 --hostname localhost --baseDN "dc=forgerock,dc=com" --bindDN "cn=directory manager" --bindPassword xxx --searchscope sub "(uid=user.10)" dn uid isMemberOf

dn: uid=user.10,ou=People,dc=forgerock,dc=com

uid: user.10

isMemberOf: cn=impersonation,ou=groups,dc=forgerock,dc=com

Once you remove the role, ldapsearch will validate that the group was deleted in DJ:

Inline image 4

./ldapsearch --port 1389 --hostname localhost --baseDN "dc=forgerock,dc=com" --bindDN "cn=directory manager" --bindPassword xxx --searchscope sub "(uid=user.10)" dn uid isMemberOf

dn: uid=user.10,ou=People,dc=forgerock,dc=com

uid: user.10


Note that each of these IDM UI steps can be performed over REST as well.
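For instance, the assignment could be created over REST along these lines (a sketch only: the default admin credentials, the endpoint URL and the sample2b mapping name managedUser_systemLdapAccounts are assumptions):

```shell
# Hypothetical REST equivalent of creating the "impersonation"
# assignment in the UI; the attributes entry mirrors what you would
# put in the Attributes grid
PAYLOAD='{
  "name": "impersonation",
  "mapping": "managedUser_systemLdapAccounts",
  "attributes": [{
    "name": "ldapGroups",
    "value": ["cn=impersonation,ou=groups,dc=forgerock,dc=com"],
    "assignmentOperation": "mergeWithTarget",
    "unassignmentOperation": "removeFromTarget"
  }]
}'
curl --user openidm-admin:openidm-admin \
  --header "Content-Type: application/json" \
  --request POST --data "$PAYLOAD" \
  "http://localhost:8080/openidm/managed/assignment?_action=create" \
  || echo "OpenIDM not reachable"
```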

Dynamic Assignments

Creating assignments dynamically from incoming LDAP groups can also be done by setting "managed/assignment" and "managed/role" as recon targets, with the help of some scripting in the attribute grid.
Begin by creating the following two mappings:
system/ldap/group to managed/assignment
Screen Shot 2016-05-25 at 5.15.43 PM
and, system/ldap/group to managed/role
Screen Shot 2016-05-25 at 5.17.15 PM
The idea here is to automatically create two managed entities from the LDAP Groups container in OpenDJ.
The first managed entity (managed/role) is created to hold the named LDAP group objects, deemed Provisioning Roles in our use case. These are the LDAP groups you want users to be automatically added to (that is, provisioned to in OpenDJ) whenever the same-named provisioning role is assigned to a user. Each of these provisioning roles happens to be attached to a same-named assignment object (not quite magically, but via the transform scripting shown below).
The second managed entity (managed/assignment) is created to hold the same-named assignment object, which you will set up in such a way that it references (internally) the same-named provisioning role.
This will become clearer as you read on.

Mapping called sourceLdapGroup_managedRole

Screen Shot 2016-05-25 at 5.29.05 PM
This mapping is simple: it creates "named" provisioning roles inside OpenIDM from the LDAP group objects in OpenDJ. It cannot get simpler than that in this use case, but keep your seat belts fastened and read on!


Sidebar for the advanced OpenIDM user: under the reconciliation behavior policies configuration, set the Missing condition to Unlink in order to "reset the state", so that if you accidentally delete a provisioning role in OpenIDM, the next LDAP recon on managed/role can recreate it. Just remember to run recon twice!

Mapping called sourceLdapGroup_managedAssignment

Screen Shot 2016-05-25 at 5.20.52 PM

The picture above shows which attributes to map for this particular mapping definition.

For the /attributes map, use the following transformation script:

([{
    assignmentOperation: 'mergeWithTarget',
    unassignmentOperation: 'removeFromTarget',
    name: 'ldapGroups',
    value: [ source.dn ]
}])

This transformation script sets up the incoming group object as an OpenIDM assignment and also sets up the value of the “ldapGroups” attribute to the DN of the incoming group object.

For /roles, use this one:

([{
    _ref: 'managed/role/' + (openidm.query('managed/role',
        { '_queryFilter': '/name eq "' + source.cn + '"' }).result[0]._id)
}])
This script queries the managed/role container in OpenIDM for a "named" provisioning role whose CN equals the CN of the incoming group object (hence source.cn in the filter). It assumes that you reconciled the managed/role objects first, and this is the only dependency in this use case: you cannot search for a provisioning role's CN value before you have reconciled it from LDAP. After retrieving the correct role object, the script sets the _ref property of the assignment to that particular provisioning role. This is what you would do if you were manually "attaching" a provisioning role to an assignment, as shown in the Static Assignments section above.
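You can reproduce the script's lookup by hand over REST, which is handy for debugging (a sketch; default admin credentials are assumed, and the query filter must be URL-encoded):

```shell
# Query managed/role for the provisioning role named "impersonation";
# FILTER decodes to: /name eq "impersonation"
FILTER='%2Fname%20eq%20%22impersonation%22'
curl --user openidm-admin:openidm-admin \
  "http://localhost:8080/openidm/managed/role?_queryFilter=$FILTER" \
  || echo "OpenIDM not reachable"
```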


Now it is a matter of running reconciliation on sourceLdapGroup_managedRole first, followed by running a reconciliation on sourceLdapGroup_managedAssignment. You should see roles such as:
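The two reconciliations can also be kicked off over REST, in the required order (a sketch; default admin credentials are assumed, and the mapping names are the ones defined above):

```shell
# Role recon must run before assignment recon, because the assignment
# transform script looks up the managed role by name
MAPPINGS="sourceLdapGroup_managedRole sourceLdapGroup_managedAssignment"
for MAPPING in $MAPPINGS; do
  curl --user openidm-admin:openidm-admin --request POST \
    "http://localhost:8080/openidm/recon?_action=recon&mapping=$MAPPING" \
    || echo "recon request for $MAPPING failed (is OpenIDM running?)"
done
```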

Screen Shot 2016-05-25 at 5.38.38 PM

And you should see assignments as well. These are "named" after the LDAP group objects, each with an ldapGroups attribute set up and an "attachment" to a provisioning role:

Screen Shot 2016-05-25 at 5.40.17 PM


For example, let's look at the impersonation assignment in detail.

This picture shows the mapping for this assignment object, and the description reconciled from the OpenDJ groups org unit.


Screen Shot 2016-05-25 at 5.40.59 PM


The picture below shows how the DN of the incoming LDAP group object was mapped to an attribute called “ldapGroups”.

Screen Shot 2016-05-25 at 5.41.05 PM


The picture below shows the provisioning role, with the same name of course, that is “attached” to this assignment object.

Screen Shot 2016-05-25 at 5.41.16 PM


Now we have an "entitlement" framework for automatically creating assignment and role objects in OpenIDM from one or more "source" LDAP group containers, and for attaching those provisioning role objects to assignments. We have demonstrated a one-provisioning-role-to-one-assignment mapping in this use case, but far more complex mappings, such as many-to-many or many-to-one, are possible. This framework can serve as the foundation for an RBAC-type scenario in your deployments! Good luck.





Introducing ForgeRock’s New Cloud Foundry Service Broker

As originally posted on our blog, we’re announcing a preview of a new identity service broker for the Cloud Foundry platform. An extension of the OpenAM project, the new service broker will allow externally deployed ForgeRock solutions to protect applications and microservices running on any iteration of Cloud Foundry.

This new Cloud Foundry service broker will enable developers to create persistent identities that are portable across clouds. It marks the first time that a cloud offering is universally available through the open source OpenAM project.

What is Cloud Foundry?

Cloud Foundry is an open source cloud computing platform as a service (PaaS) that is available as freeware, and also as commercial offerings from Pivotal Software, IBM Bluemix, Swisscom, HP and several other vendors. All of these iterations of Cloud Foundry offer a collection of platform elements that enable developers to create and host production versions of online services and applications. These platform elements include features for monitoring, logging, messaging, authentication, traffic routing and other tasks. Learn more about Cloud Foundry here.

Where can I access the ForgeRock Cloud Foundry service broker?

The open source code for the service broker preview is accessible through GitHub, and ForgeRock welcomes feedback on the project. The service broker preview and IAM for cloud deployments will be discussed at ForgeRock’s upcoming UnSummit, taking place in San Francisco on June 1st. More information on the ForgeRock Identity Summit Series is accessible here.

OpenAM Security Advisory #201604

Security vulnerabilities have been discovered in OpenAM components. These issues may be present in versions of OpenAM including 13.0.0, 12.0.x, 11.0.x, 10.1.0-Xpress, 10.0.x, 9.x, and possibly previous versions.

This advisory provides guidance on how to ensure your deployments can be secured. Workarounds or patches are available for all of the issues.

The maximum severity of issues in this advisory is Critical. Deployers should take steps as outlined in this advisory and apply the relevant update(s) at the earliest opportunity.

The recommendation is to deploy the relevant patches. Patch bundles are available for the following versions (in accordance with ForgeRock’s Maintenance and Patch availability policy):

  • 11.0.3
  • 12.0.1
  • 12.0.2
  • 13.0.0

Customers can obtain these patch bundles from BackStage.

Issue #201604-01: User Impersonation via OAuth2 access tokens

Product: OpenAM
Affected versions: 11.0.0-11.0.3, 12.0.1-12.0.2, 13.0.0
Fixed versions: 12.0.3
Component: Core Server, Server Only
Severity: Critical

A specific type of request to the /openam/oauth2/access_token endpoint can result in obtaining an OAuth2 access token on behalf of any user in the current realm.

Ensure that the com.sun.identity.saml.checkcert advanced server property is set to on (the default) so that basic certificate validation is carried out. Additionally, you must verify that the OpenAM keystore does not contain expired and/or untrusted certificates.

If unsure, block all access to the /openam/oauth2/access_token endpoint.

Deploy the relevant patch bundle. Note that as part of the resolution several additional checks have been implemented for the SAML2 OAuth2 grant. After installing a patch you will need to perform the following additional steps:

  • The issuer of the assertion must be configured as a remote IdP
  • The audience of the assertion must be configured as a hosted SP
  • The hosted SP and the remote IdP must be in the same Circle Of Trust
  • The assertion parameter value MUST be Base64url encoded

Issue #201604-02: Open Redirect

Product: OpenAM
Affected versions: 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.2, 13.0.0
Fixed versions: 12.0.3
Component: Core Server, Server Only
Severity: High

The following endpoint does not correctly validate redirect URLs allowing an attacker to redirect an end-user to a site they control:

  • /openam/idm/EndUser

Block all access to the /openam/idm/EndUser endpoint.

Deploy the relevant patch bundle and ensure that at least one whitelist URL is defined for the redirection validation to be applied.

Issue #201604-03: Cross Site Scripting

Product: OpenAM
Affected versions: 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.2, 13.0.0
Fixed versions: 12.0.3
Component: Core Server, Server Only, DAS
Severity: High

OpenAM is vulnerable to cross-site scripting (XSS) attacks which could lead to session hijacking or phishing.
The following endpoint was found vulnerable:

  • /openam/cdcservlet

Block all access to the /openam/cdcservlet endpoint.

Deploy the relevant patch bundle.

Issue #201604-04: Insufficient Authorization

Product: OpenAM
Affected versions: 11.0.0-11.0.3, 12.0.0-12.0.2, 13.0.0
Fixed versions: 12.0.3
Component: Core Server, Server Only
Severity: High

Due to insufficient authorization checks it is possible to modify arbitrary user attributes for a personal account when using the /json/users endpoint.

Disable the forgotten password feature in all realms:

  • Disable Forgot Password for Users under Legacy User Self Service service (13.0.0)
  • Disable Forgot Password for Users under User Self Service service (12.0.x)
  • Disable Forgot Password for Users under REST Security service (11.0.x)

Deploy the relevant patch bundle.

Issue #201604-05: Information Leakage via Account Lockout

Product: OpenAM
Affected versions: 13.0.0 (and versions with #201601 security patch applied)
Fixed versions: 12.0.3
Component: Core Server, Server Only
Severity: Medium

OpenAM can leak information about password correctness even when OpenAM’s Account Lockout feature is enabled, allowing brute-force attackers to guess passwords for end-users.

Disable Account Lockout in OpenAM, and utilize the underlying Data Store’s account locking capabilities.

Deploy the relevant patch bundle.

Issue #201604-06: Information Leakage

Product: OpenAM
Affected versions: 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.2, 13.0.0
Fixed versions: 12.0.3
Component: Core Server, Server Only
Severity: Medium

OpenAM can leak details about the home directory of the user running the OpenAM container.

Remove the /openam/nowritewarning.jsp file from the OpenAM WAR file.

Deploy the relevant patch bundle and delete the nowritewarning.jsp file from the OpenAM deployment.

Federated Authorization Using 3rd Party JWTs

Continuing the theme of authorization from recent blogs, I’ve seen several emerging requirements for what you could describe as federated authorization using an offline assertion.  The "offline" component refers to the fact that the policy decision point (PDP) has no prior or post knowledge of the calling user: all of the subject information and context are self-contained in the PDP evaluation request, for example a request carrying a JSON Web Token.

A common illustration is where distinct domains or operational boundaries exist between the assertion issuer and the protected resources. An example would be posting a tweet on Twitter using only your Facebook account, with no Twitter profile at all.

A neat feature of OpenAM is the ability to perform policy decisions without prior knowledge of the subject, or indeed without the subject having a profile in the AM user store.  Doing this requires a few neat steps.

Firstly, let me create a resource type. For interest, I’ll make a non-URL-based resource, modelled on gaining access to a meeting room.

For my actions, I’ll add in some activities you could perform within a meeting room…

The next step is to add a policy set for my Meeting Room #1, and a policy to allow my External Users access to it.

My subjects tab for my policy is the first slight difference from a normal OpenAM policy.  The users accessing the meeting are external, so they will not have a session or an entry in the OpenAM profile store. So instead of looking for authenticated users, I switch to checking for presented claims.  I add in three claims: one to check the issuer (obviously only trusted issuers matter to me… but at this step we’re not verifying the issuer, that comes later), one for the audience, and a claim called Role.  Note that the claims checks here are simple string comparisons, not wildcards, and no signature checks have been done yet.

I next add in some actions that my external users can perform against my meeting room.  As managers, I add in the ability to order food, but they can’t use the white board!

So far, pretty simple.  However, there is one big thing we haven’t done: verify the presented JWT.  The JWT should be signed by the 3rd party IdP in order to provide authenticity of the initial authentication.  For further info on JWT structure see RFC 7519, but basically there are three components: a header, a payload and a signature.  The header contains algorithm and data-structure information, the payload contains the user claims, and the signature is a crypto element.  Together they form a base64url-encoded, dot-delimited token.  However, we need to verify that the JWT is from the issuer we trust.  To do this I create a scripted policy condition that verifies the signature.

This simply calls either a Groovy or JavaScript script that I create in the OpenAM UI or upload over REST.

The script basically checks that a JWT is present in the REST PDP call, strips out the various components, and recreates the signature using a shared secret.  If the reconstructed signature matches the submitted JWT signature, we’re in business.
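As a rough sketch of that check done outside OpenAM with openssl (the HS256 shared secret and the claims below are made up for illustration; this mirrors the recompute-and-compare approach, not the actual policy script):

```shell
# Build a toy HS256 JWT, then verify it the same way the policy script
# does: recompute HMAC-SHA256 over header.payload and compare with the
# submitted signature. Secret and claims are examples only.
SECRET='demo-shared-secret'
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '{"iss":"trusted-idp","aud":"openam","Role":"manager"}' | b64url)
SIG=$(printf '%s' "$HEADER.$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
JWT="$HEADER.$PAYLOAD.$SIG"

# Verification: strip the signature, recompute it, compare
SIGNING_INPUT="${JWT%.*}"
CHECK=$(printf '%s' "$SIGNING_INPUT" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
[ "$CHECK" = "${JWT##*.}" ] && echo "signature OK" || echo "signature mismatch"
# prints: signature OK
```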

The script calls in the ForgeRock JSON, JSE, JWS and JWT libraries that are already being used throughout the product, so we’re not having to recreate anything new here.

To test the entire flow, you need to create a JWT with the appropriate claims from a 3rd party IDP. There are lots of online generators that can do this.  I used this one to build my JWT.

Note the selection of the algorithm and key.  The key is needed in the script on the AM side.

I can now take my newly minted JWT and make the appropriate REST call into OpenAM.

The call sends a request to ../json/policies?_action=evaluate with my payload of the resource I’m trying to access and my JWT (note this is currently submitted both within the subject.jwt attribute and also the environment map, due to OPENAM-8893).  In order to make the call (remember, my subject doesn’t have a session within OpenAM) I create a service account called policyEvaluator that I use to call the REST endpoint with the appropriate privileges.
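A hedged sketch of what that call could look like with curl (the hostname, policy set name, resource string and token variable are all illustrative; the double JWT placement follows the note above):

```shell
# Hypothetical PDP evaluation request. POLICY_EVALUATOR_TOKEN would be
# the SSO token of the policyEvaluator service account; the JWT value
# below is just a placeholder string.
JWT="header.payload.signature"
REQUEST='{
  "resources": ["room://meeting-room-1"],
  "application": "MeetingRoom1",
  "subject": {"jwt": "'"$JWT"'"},
  "environment": {"jwt": ["'"$JWT"'"]}
}'
curl --request POST \
  --header "Content-Type: application/json" \
  --header "iPlanetDirectoryPro: $POLICY_EVALUATOR_TOKEN" \
  --data "$REQUEST" \
  "http://openam.example.com:8080/openam/json/policies?_action=evaluate" \
  || echo "OpenAM not reachable"
```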

A successful call results in access to the meeting room, once my JWT has been verified correctly:

If the signature verification fails I am given an advice message:

Code for the policy script is available here.

This blog post was first published @, included here with permission.

KuppingerCole’s Latest Access Management and Federation Leadership Compass – It’s ForgeRock all the way!

In KuppingerCole’s 2016 Access Management and Federation Leadership Compass, ForgeRock makes it to the top of the list in each of the report’s four categories: Product, Market, Innovation and Overall.

Read the official press release here. To get access to the report, try this link.

This blog post was first published @, included here with permission.

Deploying #OpenAM instances in #Docker

Deploying services with Docker has become pretty popular in the DevOps world (understatement).

I want to demonstrate how to deploy an instance of ForgeRock’s OpenAM and OpenDJ using Docker.

Essentially this is my ForgeRock Docker Cheat Sheet

I am running this on a virtual Ubuntu instance in Virtualbox on my laptop. You can run Docker on both Windows and OS X too … I just personally prefer Linux.

Step 1: Install Docker:

Step 2: Clone ForgeRock Docker Files:

cd /home/brad/Dev/

Use git to clone from:

This will create a directory called “docker” in the above path.

Step 3: Build Files:

cd /home/brad/Dev/docker
make clean

At this point a few images have been created on your local host. To view the images:

docker images


OpenDJ Instance:
Note: the first time you run an instance you need to create the “dj” directory first (persistent storage)

cd /home/brad
mkdir dj   # just run this once; the first time you launch an instance on this host
docker run -d -p 1389:389 -v `pwd`/dj:/opt/opendj/instances/instance1 -t 9f332a0fbb88

To enable a persistent store you can use Docker’s volume capability. In the command above, “-v `pwd`/dj:/opt/opendj/instances/instance1” tells Docker to mount the host directory `pwd`/dj at /opt/opendj/instances/instance1 inside the container, so the instance data lives on the Docker host. You can then kill this instance and launch a new one referring to the same volume.

To view the running docker instances:

docker ps

Now when we launch OpenAM, we’ll want to allow it to access the OpenDJ container. By default Docker does not set up this networking, but we can create a link (see the run command below). Using the link parameter, Docker will edit the /etc/hosts file in the OpenAM container and create a “link” to the OpenDJ server.

OpenAM Instance:

cd /home/brad
mkdir am   # just run this once; the first time you launch an instance on this host
docker run -d -p 8080:8080 -v `pwd`/am:/root/openam --link dreamy_hypatia:opendj -t c02f00f42e18

As we did with OpenDJ, we tell Docker to create a volume on the Docker host and keep the OpenAM configuration there. This allows us to launch a new instance without having to reconfigure OpenAM.
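If you want to see what the link actually did, you can peek inside the running OpenAM container (a sketch; the container id is an example taken from docker ps output):

```shell
# The --link flag should have produced an "opendj" entry in the OpenAM
# container's /etc/hosts ("1a2b3c4d" stands in for your container id)
docker exec 1a2b3c4d grep opendj /etc/hosts \
  || echo "container not running (or docker unavailable)"
```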

Next Steps:
There are a lot of things that I did not cover in this post, specifically running multiple instances for scalability. OpenDJ would need to be configured for replication and OpenAM would need to be configured to join a Site. I plan on covering these things in a future post.

Also, I didn’t cover Docker best practices (specifically security). In your environment, treat your container ids as you would passwords.

Lastly, I plan on exploring other options for persistent storage, in future posts. I am pretty sure there are better alternatives than storing this data on the Docker host’s filesystem. Possibly looking at creating another Docker container specifically for storage.

Thanks to:

Warren Strange (ForgeRock) … he’s constantly producing awesome work, and developed a lot (probably most) of the capability around the ForgeRock Docker instances

My friends at GoodDogLabs for mentoring me on all things Docker

Also, I have been gleaning a lot of Docker tips from @frazelledazzell … she drops a ton of Docker knowledge via Twitter and her blog.


This blog post was first published @, included here with permission.