node-openam-agent: Your App’s New Friend

This blog is posted on behalf of Zoltan Tarcsay.

As you may know, the purpose of OpenAM Policy Agents is to enforce authentication and authorization for web resources. But while OpenAM itself has become ever more feature-rich and easy to use over the years, the Policy Agents have stayed roughly the same. The ways that web resources are built and accessed today demand new enforcement strategies. The openam-agent module for Node.js takes a new approach to addressing these concerns.

The Old Ways

It sometimes feels like Policy Agents are remnants of an era when all that people had for web content was static (or server generated) HTML pages with fixed URLs, and possibly some SOAP web services.

There are two things that a web policy agent can do (OK, 3):

  • Enforce the validity of a user’s SSO session ID (which is sent in a Cookie header)
  • Enforce authorization for requested URLs served by the web container.
  • In addition, Java agents allow you to use JAAS and the OpenAM client SDK in your Java application.

If you’ve ever tried to use the OpenAM client SDK for Java, you will probably agree that it’s somewhat complicated and time consuming. Also, it doesn’t give you much control over the agent itself (think of caching, event handling, communication with OpenAM). And if you ever tried to use an OpenAM client SDK with anything other than Java, you probably found that there isn’t one (OK, there’s one for C).

So for those whose websites are powered by JavaScript, Ruby, Python, PHP or anything else, there are two options:

  • Having a web agent on a web proxy server which enforces URL policies
  • Integrating with OpenAM directly by writing custom code (i.e. a policy agent)

Good news: it turns out that writing a policy agent is not so difficult. It has to do three things:

  • Intercept requests when some resource is being accessed
  • Get an access control decision based on the request (from OpenAM)
  • Throw an error or let the request pass
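
The three steps above can be sketched as a single Express-style middleware. This is a hand-rolled illustration of the idea, not the openam-agent API; the cookie name and the injected `validateSession` function are assumptions standing in for a real call to OpenAM.

```javascript
// A minimal hand-rolled "policy agent" as Express-style middleware.
// validateSession stands in for a real call to OpenAM's session
// validation endpoint (an assumption for illustration).
function makeAgent(validateSession) {
    return function (req, res, next) {
        // 1. Intercept the request and pull out the SSO token cookie
        //    (iPlanetDirectoryPro is OpenAM's default cookie name)
        var token = (req.cookies || {}).iPlanetDirectoryPro;

        // 2. Get an access control decision based on the request
        validateSession(token, function (err, valid) {
            // 3. Throw an error or let the request pass
            if (err || !valid) {
                res.status(401).send('Unauthorized');
            } else {
                next();
            }
        });
    };
}
```

Everything else an agent does (caching, notifications, identity lookups) is layered on top of this basic intercept-decide-pass loop.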

Now that we know that agents are not that big of a deal, it seems a little unreasonable that the existing ones are so opinionated about how people should use them. I mean, they can’t even be extended with custom functionality, unless you add some C code and recompile them…

Your New Friend

What you are about to see is a new approach to how agents should behave, most importantly, from the developer’s point of view. This groundbreaking new idea is that, instead of being an arch enemy, the policy agent should be the developer’s friend.

As an experiment, a JavaScript policy agent for Node.js was born. It is meant to be a developer-friendly, hackable, light-weight, transparent utility that acts as your app’s spirit guide to OpenAM. Everything about it is extensible and all of its functionality is exposed to your Node.js code through public APIs. It also comes with some handy features like OAuth2 token validation or pluggable backends for caching session data.

It consists of the following parts:

  • OpenAMClient
    • This is a class that knows how to talk to OpenAM
  • PolicyAgent
    • Talks to OpenAM through a pluggable OpenAMClient to get decisions, identity data, etc.
    • Has its own identity and session
    • Receives notifications from OpenAM (e.g. about sessions)
    • Has a pluggable cache for storing stuff (e.g. identity information)
  • Can intercept requests and run them through pluggable enforcement strategies (i.e. Shields)
    • You can have as many as you want (more on this later)
  • Shield
    • A particular enforcement strategy (e.g. checking an OAuth2 access_token)
    • Gets a request, runs a check, then fails or succeeds
    • Can be used with any agent within the app
  • Cache
    • An interface to some backend where the agent can store its session data
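
To make the Cache part concrete, here is a sketch of a minimal pluggable in-memory cache. The method names (put/get/remove) and the expiry option are assumptions for illustration; check the project wiki for the actual Cache contract.

```javascript
// A sketch of a pluggable in-memory cache for the agent.
// Method names and options are assumptions, not the real contract.
function InMemoryCache(options) {
    this.expireAfterSeconds = (options && options.expireAfterSeconds) || 300;
    this.store = {};
}

// Store a value with an expiry timestamp
InMemoryCache.prototype.put = function (key, value) {
    this.store[key] = {
        value: value,
        expires: Date.now() + this.expireAfterSeconds * 1000
    };
};

// Return the value, or undefined if missing or expired
InMemoryCache.prototype.get = function (key) {
    var entry = this.store[key];
    if (!entry || entry.expires < Date.now()) {
        return undefined;
    }
    return entry.value;
};

InMemoryCache.prototype.remove = function (key) {
    delete this.store[key];
};
```

A production backend (Redis, MongoDB, etc.) would implement the same interface against its own storage.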

Getting Started

OK, let’s look at some code.

First, create a new Node.js project and install the dependencies:

mkdir my-app && cd my-app
npm init -y
npm install --save express openam-agent
touch index.js

Next, let’s add some code to index.js:

var express = require('express'),
    openamAgent = require('openam-agent'),
    app = express(),
    agent = openamAgent({openamUrl: ''}); // set this to your OpenAM deployment URL

app.get('/', agent.shield(openamAgent.cookieShield({getProfiles: true})), function (req, res) {
    res.send('Hello, ' + req.session.userName);
});

app.listen(3000);

Done, you have a web application with a single route that is protected by a cookie shield (it checks your session cookie). The cookie shield also puts the user’s profile data into the req object, so you can use it in your own middleware.


It’s important to note here that openam-agent currently only works with the Express framework, but the plan is to make it work with just regular Node.js requests and responses as well.

In the example above, the variable app will be your Express application. An Express app is a collection of routes (URL paths) and middleware (functions that handle requests sent to those routes). One route can have multiple middleware, i.e. a request can be sent through a chain of middleware functions before a response is sent.

The agent fits beautifully into this architecture: the agent’s agent.shield(someShield) function returns a middleware function for Express to handle the request, which means that you can use any enforcement strategy with any agent on any route, as you see fit.


You can do things like this:

var policyShieldFoo = openamAgent.policyShield({application: 'foo'}),
    policyShieldBar = openamAgent.policyShield({application: 'bar'});

app.get('/my/awesome/api/foo', agent.shield(policyShieldFoo));
app.get('/my/awesome/api/foo/oof', function (req, res) {
    // this is a sub-resource, so it's protected by the foo shield
});

app.get('/my/awesome/api/bar', agent.shield(policyShieldBar));
app.get('/my/awesome/api/bar', function (req, res) {
    // this middleware is called after the bar shield on the same path, so it's protected
});

In this case you have two Shields, each using a different application (or policy set) in OpenAM; you can then use one for one route, and the other for the other. Whether a policy shield applies to an incoming request is determined by the path and the order in which you mounted your middleware functions.

Note that the agent needs special privileges for getting policy decisions from OpenAM, so it will need some credentials (typically an agent profile) in OpenAM:

var agent = openamAgent({
    openamUrl: '', // set this to your OpenAM deployment URL
    username: 'my-agent',
    password: 'secret12'
});

When the agent tries to get a policy decision for the first time, it will create a session in OpenAM for itself.

Note that a policy decision needs a subject, so the request will need to contain a valid session ID.


This is how you enforce a valid OAuth2 token:

app.use('/my/mobile/content', agent.shield(openamAgent.oauth2Shield()), function (req, res) {
    // the OAuth2 token info is attached to the request by the shield;
    // if you wanted to check the scopes against something, you could write a shield to do it
});
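
For instance, a scope check could live in its own shield. The sketch below follows the evaluate/success/fail shape of the Shield extensibility API described later in this post; where the token info lives on the request (req.tokenInfo here) is an assumption for illustration.

```javascript
// A sketch of a scope-enforcing shield. The req.tokenInfo property
// is an assumed location for the token info, for illustration only.
function ScopeShield(requiredScope) {
    this.requiredScope = requiredScope;
}

ScopeShield.prototype.evaluate = function (req, success, fail) {
    var info = req.tokenInfo || {},
        scopes = info.scope || [];

    // succeed only if the token carries the required scope
    if (scopes.indexOf(this.requiredScope) !== -1) {
        success();
    } else {
        fail(new Error('missing scope: ' + this.requiredScope));
    }
};
```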

Notifications and CDSSO

There are cases when the agent needs to be able to accept data from OpenAM. One example is notifications (e.g. when a user logs out, OpenAM can notify the agents so they can clear that session from their cache). node-openam-agent lets you mount a notification route on your app like this:

var agent = openamAgent({notificationsEnabled: true});

CDSSO is also possible (although it becomes tricky when your original request is anything other than GET, because of the redirects):

var agent = openamAgent({notificationsEnabled: true});

Note: OpenAM needs to know that you want to use the cdcservlet after you log in (this servlet creates a SAML1.1 assertion containing the user’s session ID, which is then POSTed to the agent through the client’s browser). For this, you will need to create a web agent profile and enable CDSSO.


The current features add some extra functionality to the classic agent behavior, but there is much more that can be done; some of it will be very specific to each application and to how people use OpenAM.

Extensibility is at the heart of this agent, and it is meant to be very simple. Here’s an example of a custom Shield.

First, extend the Shield class:

var util = require('util'),
    Shield = require('openam-agent').Shield;

/**
 * @constructor
 */
function UnicornShield(options) {
    this.options = options;
}

util.inherits(UnicornShield, Shield);

UnicornShield.prototype.evaluate = function (req, success, fail) {
    // check if this request has a unicorn in it
    // (we could also use this.agent to talk to OpenAM)
    if (req.headers.unicorn) {
        success();
    } else {
        fail();
    }
};

And then use it in your app:

app.use(agent.shield(new UnicornShield({foo: true})));

There’s all sorts of docs (API and otherwise) in the wiki if you’re interested in extending the agent.

More stuff

There is much more to show and tell about this agent, especially when it comes to specific use cases, but it doesn’t all fit in one blog post. Stay tuned for more stuff!


node-openam-agent is a community-driven open source project on GitHub, and it is not owned or sponsored by ForgeRock. The software comes with an MIT license and is free to use without any restrictions, but it comes without any warranty or official support. Contributions are most welcome: please read the wiki, open issues, and feel free to submit pull requests.

OpenAM as a SAMLv2 IdP for the AWS Administration console. 2nd Part


In a previous blog post, we defined a fixed role for the users federating to AWS. If we want to add flexibility and allow each user to map to a different role, depending on the type of user, we can assign the role using a profile attribute.

Here are a couple of configuration examples/use cases.

a) Users wanting to have multiple federated logins for different roles in the same AWS account

In this case the AWS account will be the same, but the IdP account will differ depending on the role that the user wants to use. For example, a user demo-dev with an attribute holding a “Dev” role can map to an account in AWS with certain “development” permissions, while another account demo-prod can hold a different “Production” role in the same attribute and map to the same AWS account with “Production” permissions.

In this case we have two OpenAM IdP login names for the same user account in AWS, each with a different role. In our example we have “demo-dev” and “demo-prod”:


We are going to use one of the user’s profile attributes to set the role for the user. In this example we will use the “User Alias List” attribute, but in a real-world scenario you might want to set up your own specific attribute for it, for example “AWSRole”.

For this example (as in the previous post) the attribute “mail” will be used to map the AWS user account. The attribute “User Alias List” will be used to map the AWS Role and it contains the Role value:

  • mail:
  • iplanet-am-user-alias-list: arn:aws:iam::123456789012:role/sso-Dev,arn:aws:iam::123456789012:saml-provider/forgerocklabs


For the “demo-prod” user we will use a different Role:

  • mail:
  • iplanet-am-user-alias-list: arn:aws:iam::123456789012:role/sso-Production,arn:aws:iam::123456789012:saml-provider/forgerocklabs


Now we need to map the attributes of the user profile to the attributes in the SAMLv2 assertion; this is easy. In the Federation tab, select the AWS Service Provider, go to the “Assertion Processing” sub tab (see the previous post for more info on how to get there), and set the Attribute Mapper:

So it looks like this:


That’s it. Now if you use IdP-initiated SSO and log in as demo-dev in OpenAM, you will be logged in to the AWS console with the sso-Dev role, and if you log in to OpenAM as demo-prod you will be mapped to the sso-Production role.
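
Under the hood, the alias values shown above follow the format AWS expects for the SAML Role attribute: a role ARN and the SAML provider ARN, joined by a comma. A quick sketch of that pairing, using the values from this post:

```javascript
// Split an AWS SAML Role attribute value into its two ARNs.
// AWS expects "role-arn,provider-arn" as a comma-separated pair.
function parseAwsRoleAttribute(value) {
    var parts = value.split(',');
    return {
        roleArn: parts[0],     // the IAM role to assume
        providerArn: parts[1]  // the SAML identity provider in IAM
    };
}
```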

Now the other case.

b) Users wanting to have one federated login for different roles in the same AWS account

In this case the AWS account will be the same, and the IdP account will have multiple roles assigned. With a configuration like this, the user will need to select at login time which role to use.

The configuration in OpenAM is the same, except that the user account should contain all the possible roles it can use. In this example we have the user “demo” with both roles sso-Dev and sso-Production assigned to the multivalued attribute “User Alias List” (as mentioned before, this is just for the sake of the example).


The AWS SP configuration remains the same, but now when using the IdP-initiated SSO URL and logging in as “demo”, the user will see a selection menu in the AWS console, indicating that they need to select which role to use:


There are more options to configure OpenAM to federate with the AWS console. OpenAM’s flexibility allows you to define custom SP attribute mappers (plug-ins) to, for example, construct values out of user profile attributes, session attributes or even external (third-party) attributes. So depending on your needs, there is always a solution using OpenAM.

Introducing ForgeRock’s New Cloud Foundry Service Broker

As originally posted on our blog, we’re announcing a preview of a new identity service broker for the Cloud Foundry platform. An extension of the OpenAM project, the new service broker will allow externally deployed ForgeRock solutions to protect applications and microservices running on any iteration of Cloud Foundry.

This new Cloud Foundry service broker will enable developers to create persistent identities that are portable across clouds. It marks the first time that a cloud offering is universally available through the open source OpenAM project.

What is Cloud Foundry?

Cloud Foundry is an open source cloud computing platform as a service (PaaS) that is available as freeware, and also as commercial offerings from Pivotal Software, IBM Bluemix, Swisscom, HP and several other vendors. All of these iterations of Cloud Foundry offer a collection of platform elements that enable developers to create and host production versions of online services and applications. These platform elements include features for monitoring, logging, messaging, authentication, traffic routing and other tasks. Learn more about Cloud Foundry here.

Where can I access the ForgeRock Cloud Foundry service broker?

The open source code for the service broker preview is accessible through GitHub, and ForgeRock welcomes feedback on the project. The service broker preview and IAM for cloud deployments will be discussed at ForgeRock’s upcoming UnSummit, taking place in San Francisco on June 1st. More information on the ForgeRock Identity Summit Series is accessible here.

In case you missed it…. ForgeRock’s Identity Management Solution

Back on January 22nd, ForgeRock changed the world (well it certainly changed for me!). We released the update to the entire catalog of ForgeRock offerings – as well as a nomenclature change. For years, I’ve been the Product Manager for ForgeRock’s OpenIDM. Now, I’m the Product Manager for ForgeRock’s Identity Management Solution.

You’re probably saying “Sounds like a bunch of marketing to me…” and on the one hand, you wouldn’t be wrong. However, we want people to think in terms of the functionality they need from the ForgeRock Platform rather than the features of a specific product. In this way we can ensure that people are open to discussing requirements, and we can look at fulfilling requirements rather than trying to get you to install this product or that.

So what does this release provide that’s new in terms of identity management and provisioning functionality? Let me provide you with a short list, and we can discuss the items you’re interested in in the comments section of the forum.

First off, UI: there’s a new, Bootstrap-based UI model that includes dashboards, widgets, theming, expanded extensibility and more. We also separated all administrative functionality and put it in the Admin UI, while placing all self-service capabilities under what we call (obviously) the Self-Service portal.
As a sub-point to the new Self-Service portal, we have a new (self-service) Registration, Password Reset and Forgotten User Name experience that’s configurable, and common across the ForgeRock platform. I know this will generate a lot of interest, so if you’d like to learn more, let me know!

Second, Roles and Relationships: we created and utilized the new intrinsic Relationship model to expand on our Role model and functionality. That means there is a reverse relationship behind the Role model we have in Identity Management, and (like most things in the Identity Management area) it’s configurable and extensible to fit almost any situation: Parent<->Child, Owner<->Device, User<->Group, Thing<->Sensor, whatever you can think of (and you’re not limited to one relationship… you can model whatever use case you may need)! Roles are also fully managed from the Admin UI; that means Create, Read, Update, Delete (and more) can be done visually from the UI (and of course from API/REST-based interfaces).

Third, Multi-Account Linking: have you had to maintain separate accounts for users because they have different personas (like administrator AND regular user, or multiple bank accounts, but just one login)? The new Linked Qualifier allows you to have multiple personas (and any specific policies or attributes for those personas) while maintaining a single user account in a resource (like a database, Active Directory, LDAP or whatever). Internally, we handle the reconciliation of the account and each persona-specific value or policy.

Fourth, Passwords: until we can get rid of them, they’re here to stay. Therefore, we make it easier to manage passwords with multiple password policies (even conditional policies) and hashing (rather than encrypting) of passwords (and, to be honest, any attribute you want hashed). You can even authenticate using ForgeRock’s Access Management solution (we used to call this OpenAM) rather than intrinsic accounts.

Finally, there’s a new upgrade and patching framework that guides you through maintaining the Identity Management solution. As you receive updates from support (in the form of patches) or the product team (in the form of product updates or upgrades), we have service-oriented, or UI-driven, ways of informing you what the change is, pausing (or more properly, putting into maintenance) Identity Management services, backing up the existing configuration and files, updating/upgrading the services, and reporting back all the changes that were made (while maintaining your old configuration and files). This makes the solution much easier to manage and maintain.

Of course there are lots more items I could talk about: documentation has been updated (and several new guides are included), added support for more repository technologies, ForgeRock Commons Audit, and of course connectors! This was just intended to give you a taste of what’s new in OpenIDM 4…err, the ForgeRock Identity Management Solution (give me a little time to adjust :^) )

You can find all of the newest releases at and!


ForgeRock doc tools 3.1.0 released

ForgeRock doc tools 3.1.0 are out.

This is a minor release, compatible with 3.0.0. See the release notes for details.

ForgeRock doc tools 3.1.0 includes the following components:

  • forgerock-doc-maven-plugin
  • forgerock-doc-common-content
  • forgerock-doc-default-branding
  • forgerock-doc-maven-archetype

This release adds a few improvements and resolves a number of bugs.

One of the improvements is initial support for AsciiDoc. The doc build plugin generates DocBook from AsciiDoc source, and then processes the resulting output in the same way as other documents. At this time the doc build plugin does not allow you to mix AsciiDoc and DocBook in the same document. For details, see the README.

Thanks to Peter Major for providing a new release of docbook-linktester, improving the link check usability with a more human-readable report, better supporting <olink> elements, and troubleshooting an issue related to throttling that affected link checks for some documents.

Thanks again to Chris Lee for a number of improvements to Bootstrap HTML output, and for fixing inter-document links in PDF (depends on the renderer, seen to work with Adobe Acrobat).

Thanks also to Lana Frost, Chris Clifton, David Goldsmith, Gene Hirayama, and Mike Jang for testing and bug reports.

ForgeRock doc tools 3.0.0 released

ForgeRock doc tools 3.0.0 is finally done!

This is a major release, and the build plugin configuration has changed. See the release notes for details.

ForgeRock doc tools 3.0.0 includes the following components:

  • forgerock-doc-maven-plugin
  • forgerock-doc-common-content
  • forgerock-doc-default-branding
  • forgerock-doc-maven-archetype

This release resolves 92 issues, with dozens of new features, fixes, and improvements.

Hats off to Chris Lee for his work to provide much better HTML, styled with Bootstrap, and to Gene Hirayama for his many improvements to PDFs.

Thanks also to Lana Frost, David Goldsmith, and Mike Jang for testing and bug reports.

Special thanks to Peter Major for docbook-linktester 1.3.0.

See the README for more about how to use the doc tools, and for details on the new features.

OpenAM 12 and Social Authentication

Many OpenAM deployments are consumer-facing where organizations are looking to deliver a great service to their existing, and new, customers. Earlier, we talked about how self-service registration in OpenAM 12 makes it easy for new customers to sign up, but even a simple web form is too much trouble for some people (myself included).

So the arrival of Social Authentication in OpenAM 12 is warmly welcomed. This means that administrators can quickly roll out support for social identities, from the likes of Google, Facebook and Microsoft, and customers or users get a great new way to sign in by simply clicking on the social Identity Provider (IDP) logo.
No more registration forms, just easy and rapid access to your OpenAM protected service.

Here's how it works:


The OpenAM administrator needs an account with the relevant IDP but then he simply:
  1. Registers the OpenAM server deployment as a Client App with the Social IDP;
  2. Configures OpenAM using these newly created Client App ID details at the IDP;
  3. That's it! Users can now login using their Google/Facebook/Microsoft credentials.


(In this example we'll use Google but the same basic procedure is used with all the IDPs.)
Firstly, I go to my Social IDP registration page. At the time of writing these are:
...and create a project or app.

With Google it goes like this (click on the screenshots to zoom in):
(1) Create a Project:

(1a) For Google, we also need to enable the Google+ API:

(2) In a separate browser window, open the OpenAM Administration Console, go to the Common Tasks pane and click on the appropriate IDP, Google in our case:

(3) Copy the pre-filled Redirect URL from OpenAM:
(4) Now return to the Google developer console browser window and create a new Client ID:

(5) Paste the previously copied Redirect URL to associate it with this Client ID:

(6) Now copy the Google Client ID and Secret and paste them back into OpenAM:

(7) On clicking Create, OpenAM uses this information to automatically configure:

  1. An OAuth2/OpenID Connect authentication module;
  2. An authentication chain containing this authentication module;
  3. A social service which can be queried by the OpenAM user interface or other REST clients to get information about the configured social authentication providers.

User Experience

Now we'll look at the user experience...
(1) When the login page is reached, the new OpenAM 12 XUI, which is a smart JavaScript client, queries the REST endpoint of the social authentication service to discover what is available. This endpoint provides a logo which is displayed as part of the login dialog:

(2) When the user clicks on this logo, she is redirected to the social authentication page:

(3) The first time the user does this a consent page is displayed:

(4) and on Accepting this, the user is logged in to OpenAM:

OpenAM can optionally create new accounts based on data gleaned from the social IDP so that services using OpenAM can identify and provide a rich experience to returning social users.


Social Authentication in OpenAM 12 takes only a few minutes for administrators to configure.
For sites looking to make life as easy as possible for new customers or users, Social Authentication is a great option.

- FB

The Persistent Cookie Authentication Module

How long should a session last?

I guess we all get fed up when we are asked to log in to a web site over and over again. So when protecting your resources, it is important that an admin chooses the "right" session length.

But what is "right" depends on your resources and how tightly you'd like to protect them. Some people might want a short session length because the content may be extremely valuable and you want the user to prove that he is still the same guy.

But for other sites and resources, you may want to err on the side of convenience for the user and have a long session lifetime, maybe even stretching beyond the lifetime of a browser session. For example, the "Remember me for a week" feature seen here...

If you want to provide a long-lived session, you might want to consider the OpenAM 12 Persistent Cookie authentication module.

Here's how it could work:

Administrator experience

  1. Create a new Persistent Cookie authentication module and set the length of your session. (Note to self - Remember that if you use a Secure Cookie, the session must be over https else the browser will not submit it.)
  2. Create an authentication chain (here called "test") with the Persistent Cookie module first and mark it as sufficient.
  3. On the same Authentication Chain page, add the Persistent Cookie auth module to the Post authentication processing class for this chain. 
  4. Assign the chain to the realm you want this to work with.

User Experience

  1. The first time the user hits your site there will be no persistent cookie so the chain means you'll fall through to login using the usual method (whatever that is).
  2. On a successful login, the Post Authentication processing will store a persistent cookie. You can examine this using cookie tools like the Cookies app for Chrome or Firefox.
  3. Close the browser, logout of your PC, make a cup of tea, whatever.
  4. When you next start up the browser and return to the site you will get straight to your resources and this will continue to happen for the configured lifetime of the Persistent Cookie.
In case you're interested, the Persistent Cookie is a JWT and by default is called "session-jwt", and you can also mess around with idle times too, but this is left as an exercise to the reader ;-)

Hope this helps someone out there.