node-openam-agent: Your App’s New Friend

This blog is posted on behalf of Zoltan Tarcsay.


As you may know, the purpose of OpenAM Policy Agents is to enforce authentication and authorization for web resources. But while OpenAM itself has grown ever more feature-rich and easier to use over the years, the Policy Agents have stayed roughly the same. The ways that web resources are built and accessed today demand new enforcement strategies. The openam-agent module for Node.js takes a new approach to addressing these concerns.

The Old Ways

It sometimes feels like Policy Agents are remnants of an era when all that people had for web content was static (or server generated) HTML pages with fixed URLs, and possibly some SOAP web services.

There are two things that a web policy agent can do (OK, 3):

  • Enforce the validity of a user’s SSO session ID (which is sent in a Cookie header)
  • Enforce authorization for requested URLs served by the web container.
  • In addition, Java agents allow you to use JAAS and the OpenAM client SDK in your Java application.

If you’ve ever tried to use the OpenAM client SDK for Java, you will probably agree that it’s somewhat complicated and time-consuming. Also, it doesn’t give you much control over the agent itself (think of caching, event handling, communication with OpenAM). And if you’ve ever tried to use an OpenAM client SDK with anything other than Java, you probably found that there isn’t one (OK, there’s one for C).

So for those whose websites are powered by JavaScript, Ruby, Python, PHP or anything else, there are two options:

  • Having a web agent on a web proxy server which enforces URL policies
  • Integrating with OpenAM directly by writing custom code (i.e. a policy agent)

Good news: it turns out that writing a policy agent is not so difficult. It has to do three things:

  • Intercept requests when some resource is being accessed
  • Get an access control decision based on the request (from OpenAM)
  • Throw an error or let the request pass
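Those three steps map naturally onto a single middleware-style function. Here is a minimal sketch; the `getPolicyDecision` function is a hypothetical placeholder for a call to OpenAM's policy REST endpoint:

```javascript
// A minimal hand-rolled "policy agent" as Express-style middleware.
// getPolicyDecision(req) is a placeholder: it would call OpenAM and
// resolve to true (allow) or false (deny).
function policyAgent(getPolicyDecision) {
    return function (req, res, next) {
        // 1. intercept the request; 2. get a decision; 3. pass or fail
        getPolicyDecision(req)
            .then(function (allowed) {
                if (allowed) {
                    next();                          // let the request pass
                } else {
                    next(new Error('Forbidden'));    // throw an error
                }
            })
            .catch(next);
    };
}
```

Everything else an agent does (caching, sessions, notifications) is an optimization around this core loop.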

Now that we know that agents are not that big of a deal, it seems a little unreasonable that the existing ones are so opinionated about how people should use them. I mean, they can’t even be extended with custom functionality, unless you add some C code and recompile them…

Your New Friend

What you are about to see is a new approach to how agents should behave, most importantly, from the developer’s point of view. This groundbreaking new idea is that, instead of being an arch enemy, the policy agent should be the developer’s friend.

As an experiment, a JavaScript policy agent for Node.js was born. It is meant to be a developer-friendly, hackable, light-weight, transparent utility that acts as your app’s spirit guide to OpenAM. Everything about it is extensible and all of its functionality is exposed to your Node.js code through public APIs. It also comes with some handy features like OAuth2 token validation or pluggable backends for caching session data.

It consists of the following parts:

  • OpenAMClient
    • This is a class that knows how to talk to OpenAM
  • PolicyAgent
    • Talks to OpenAM through a pluggable OpenAMClient to get decisions, identity data, etc.
    • Has its own identity and session
    • Receives notifications from OpenAM (e.g. about sessions)
    • Has a pluggable cache for storing stuff (e.g. identity information)
    • Can intercept requests and run them through pluggable enforcement strategies (i.e. Shields)
    • You can have as many as you want (more on this later)
  • Shield
    • A particular enforcement strategy (e.g. checking an OAuth2 access_token)
    • Gets a request, runs a check, then fails or succeeds
    • Can be used with any agent within the app
  • Cache
    • An interface to some backend where the agent can store its session data
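To make the Cache part concrete, the contract is essentially get/put/remove with expiry. Here is an illustrative in-memory implementation; the method names and signatures are assumptions for the sketch, so check the module's docs for the exact interface before plugging one in:

```javascript
// Illustrative in-memory cache with per-entry expiry. The real
// openam-agent Cache interface may differ; this only shows the idea.
function SimpleCache(ttlSeconds) {
    this.ttl = ttlSeconds * 1000;
    this.store = {};
}

SimpleCache.prototype.put = function (key, value) {
    this.store[key] = {value: value, expires: Date.now() + this.ttl};
};

SimpleCache.prototype.get = function (key) {
    var entry = this.store[key];
    if (!entry || entry.expires < Date.now()) {
        delete this.store[key];   // evict expired entries lazily
        return null;
    }
    return entry.value;
};

SimpleCache.prototype.remove = function (key) {
    delete this.store[key];
};
```

A backend like Redis or memcached would implement the same contract, which is exactly why the cache is pluggable.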

Getting Started

OK, let’s look at some code.

First, create a new Node.js project and install the dependencies:

mkdir my-app && cd my-app
npm init -y
npm install --save express openam-agent
touch index.js

Next, let’s add some code to index.js:

var express = require('express'),
    openamAgent = require('openam-agent'),
    app = express(),
    agent = openamAgent({openamUrl: 'https://openam.example.com/openam'});

app.get('/', agent.shield(openamAgent.cookieShield({getProfiles: true})), function (req, res) {
    res.send('Hello, ' + req.session.userName);
});

app.listen(1337);

Done: you now have a web application with a single route that is protected by a cookie shield (it checks your session cookie). The cookie shield also puts the user’s profile data into the req object, so you can use it in your own middleware.

Express

It’s important to note here that openam-agent currently only works with the Express framework, but the plan is to make it work with just regular Node.js requests and responses as well.

In the example above, the variable app is your Express application. An Express app is a collection of routes (URL paths) and middleware (functions that handle requests sent to those routes). One route can have multiple middleware, i.e. a request can be sent through a chain of middleware functions before a response is sent.

The agent fits beautifully into this architecture: the agent’s agent.shield(someShield) function returns a middleware function for Express to handle the request, which means that you can use any enforcement strategy with any agent on any route, as you see fit.
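For the avoidance of doubt about what "chain of middleware" means, here is roughly what Express does with a stack of middleware mounted on one route (heavily simplified, no error handling): each function either hands control on with next() or ends the chain.

```javascript
// Simplified sketch of how Express runs a middleware chain for one route.
function runChain(middlewares, req, res) {
    function dispatch(i) {
        if (i < middlewares.length) {
            middlewares[i](req, res, function next() {
                dispatch(i + 1);
            });
        }
    }
    dispatch(0);
}

// A shield-like check followed by a handler:
runChain([
    function shield(req, res, next) {
        if (req.headers.cookie) {
            next();                // check passed: continue to the handler
        } else {
            res.statusCode = 401;  // check failed: stop the chain
        }
    },
    function handler(req, res) {
        res.body = 'Hello';
    }
], {headers: {cookie: 'session=abc'}}, {});
```

agent.shield(someShield) simply produces a function shaped like the shield above, so it slots into any position in any chain.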

Policies

You can do things like this:

var policyShieldFoo = openamAgent.policyShield({application: 'foo'}),
    policyShieldBar = openamAgent.policyShield({application: 'bar'});

app.get('/my/awesome/api/foo', agent.shield(policyShieldFoo));
app.get('/my/awesome/api/foo/oof', function (req, res) {
    // this is a sub-resource, so it's protected by the foo shield
});

app.get('/my/awesome/api/bar', agent.shield(policyShieldBar));
app.get('/my/awesome/api/bar', function (req, res) {
    // this middleware is called after the bar shield on the same path, so it's protected
});

In this case you have two Shields, each using a different application (or policy set) in OpenAM; you can then use one for one route, and the other for the other. Whether a policy shield applies to an incoming request is determined by the path and the order in which you mounted your middleware functions.

Note that the agent needs special privileges for getting policy decisions from OpenAM, so it will need some credentials (typically an agent profile) in OpenAM:

var agent = openamAgent({
    openamUrl: 'https://openam.example.com/openam',
    username: 'my-agent',
    password: 'secret12'
});

When the agent tries to get a policy decision for the first time, it will create a session in OpenAM for itself.

Note that a policy decision needs a subject, so the request will need to contain a valid session ID.

OAuth2

This is how you enforce a valid OAuth2 token:

app.use('/my/mobile/content', agent.shield(openamAgent.oauth2Shield()), function (req, res) {
    // the OAuth2 token info is in req.session.data
    // if you wanted to check the scopes against something, you could write a shield to do it
});
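Taking up that last comment: a scope check could be its own shield, following the same evaluate(req, success, fail) shape that custom shields use (see the UnicornShield example below). The req.session.data layout here is an assumption based on the comment above, and ScopeShield is a hypothetical name:

```javascript
// Sketch of a shield that requires a given OAuth2 scope. Assumes the
// token info in req.session.data carries a space-separated "scope" string.
function ScopeShield(requiredScope) {
    this.requiredScope = requiredScope;
}

ScopeShield.prototype.evaluate = function (req, success, fail) {
    var data = (req.session && req.session.data) || {};
    var scopes = (data.scope || '').split(' ');
    if (scopes.indexOf(this.requiredScope) !== -1) {
        success();   // required scope present
    } else {
        fail();      // missing scope: reject the request
    }
};
```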

Notifications and CDSSO

There are cases when the agent needs to be able to accept data from OpenAM. One example is notifications (e.g. when a user logs out, OpenAM can notify the agents so they can clear that session from their cache). node-openam-agent lets you mount a notification route in your app like this:

var agent = openamAgent({notificationsEnabled: true});
app.use(agent.notifications('/some/path/to/the/notifications/endpoint'));

CDSSO is also possible (although it becomes tricky when your original request is anything other than GET, because of the redirects):

var agent = openamAgent({notificationsEnabled: true});
app.use(agent.cdsso('/some/path/to/the/cdsso/client/endpoint'));

Note: OpenAM needs to know that you want to use the cdcservlet after you log in (this servlet creates a SAML1.1 assertion containing the user’s session ID, which is then POSTed to the agent through the client’s browser). For this, you will need to create a web agent profile and enable CDSSO.

Extensions

The current features add some extra functionality to the classic agent behavior, but there is much more that can be done; some of it will be very specific to each application and how people use OpenAM.

Extensibility is at the heart of this agent, and it is meant to be very simple. Here’s an example of a custom Shield.

First, extend the Shield class:

var util = require('util'),
    Shield = require('openam-agent').Shield;

/**
 * @constructor
 */
function UnicornShield(options) {
    this.options = options;
}

util.inherits(UnicornShield, Shield);

UnicornShield.prototype.evaluate = function (req, success, fail) {
    // check if this request has a unicorn in it
    // (we could also use this.agent to talk to OpenAM)
    if (this.options.foo && req.headers.unicorn) {
        success();
    } else {
        fail();
    }
};

And then use it in your app:

app.use(agent.shield(new UnicornShield({foo: true})));

There’s all sorts of docs (API and otherwise) in the wiki if you’re interested in extending the agent.

More stuff

There is much more to show and tell about this agent, especially when it comes to specific use cases, but it doesn’t all fit in one blog post. Stay tuned for more stuff!

Contributions

node-openam-agent is a community-driven open source project on GitHub; it is not owned or sponsored by ForgeRock. The software comes with an MIT license and is free to use without restriction, but it comes without any warranty or official support. Contributions are most welcome: please read the wiki, open issues, and feel free to submit pull requests.

Using Push Notifications for Passwordless Authentication and Easy MFA

This blog post by the OpenAM product manager was first published @ thefatblokesings.blogspot.com, included here with permission.


There is often a trade-off between the convenience of an authentication system and the strength of security around it. Oftentimes, the stronger the security, the more tedious it can be for the end user. But now that (almost) everyone has a smartphone, can we somehow use this magical device as an authenticator?

The mid-year release of the ForgeRock Identity Platform introduced some exciting new Access Management technology, namely Push Authentication. When a user wants to login, they simply identify themselves (e.g. username or email) and the system sends them a Push Notification message asking if they want to authorize the login. This message is fielded by the ForgeRock Authenticator App (iPhone or Android) and the user can use swipe or TouchId to agree to the authentication attempt, or Cancel to deny it. Cool stuff, let's check it out...

We'll look at:

  • The User experience of logging in using Push Auth
  • The Architecture underpinning this
  • The Admin experience of setting this up
  • Customizing the experience

User Experience

Before you can use Push you'll need to register your phone to your account so you'll typically login in the traditional way...


...before being presented with a QR code...

Using the ForgeRock Authenticator app on your phone you can scan this to create an account for that IDP...

Now when the user wants to login, they can simply enter their username...

...and their phone buzzes and displays something like this...



The user decides if this is a login attempt by them and, if so, uses TouchId (or swipe if TouchId not present or enabled) to get logged in.

The Architecture

The players in this dance are:

  • The user on their primary device (say laptop, but could be phone too, see later);
  • The ForgeRock AM server;
  • The Push Service in the Cloud;
  • The phone.



How to set it up (The administrator's experience)

To set this up we'll need:

  • ForgeRock Access Management (AM) version 13.5;
  • We'll create 2 new authentication module instances
    • ForgeRock Authenticator (Push) Registration - used to link phone to account;
    • ForgeRock Authenticator (Push) - used when logging in;
  • We'll create a new realm-based Push Notification Service - this is how AM talks to the Cloud push service;

Authentication Modules and Chains

First, in the AM Admin Console, create the 2 new authentication modules (let's call them Push-Reg and Push-Auth) and use the default values....
 
They will look something like this...




Now create 2 Authentication Chains, also called Push-Auth and Push-Reg.

For Push-Reg we'll use a simple Datastore (username/password) module to identify the user during registration, followed by the Push-Reg Authentication module. To keep things simple, let's just use the Push-Auth module in the Push-Auth chain...



So now we have 2 new chains...



At this point you can test these chains out by visiting
<deployment-url>/XUI/#login/&service=Push-Auth
where Push-Auth is the chain name.

But this won't work yet because we need to tell AM how to send Push Notifications by creating the Push Notification Service.

Push Notification Service

The Admin Console has changed a bit in 13.5 in the Services area and is now much easier to configure. First, create a New Service of type Push Notification Service...



Once created, we want to configure this. This is slightly tricky but not too hard for people who have read this far ;-)

At the time of writing, ForgeRock use AWS Simple Notification Service for sending Push Notifications to Android and Apple phones. And ForgeRock have provided a convenient way for customers to generate credentials to configure this Service.




Go to Backstage, log in and navigate to Projects. If you haven't registered a Project before, create one, and an Environment within it. Then simply press the big button marked "Set Up Push Auth Credentials". This will generate some credentials which you can use to populate the Push Notification Service on your AM deployment.



Providing your phone can reach your AM server, your users should now be able to register and login using Push Notifications.

Customizing the IDPs

Say you now want to customize the IDP to have your corporate logo and color scheme.
Return to the Push-Reg Auth Module and you'll see that you can configure the Issuer Name, background color and Logo.

And in the Push-Auth Module you can tailor the message that is presented to the user.

This all means that on your phone you can deliver an experience like this....



Summary

This was a simple "getting you going" blog entry, and I hope we've now achieved that.

In internet facing deployments you may want to use more of the capability of AM's Authentication Chains to use Push as a super-easy 2FA offering, or if you want to deliver a Passwordless experience, put more intelligence around detecting the identity of the user attempting to login to prevent unsolicited Push messages being sent to a user.



Introducing our introductory video series – What is Identity

Whilst many of the visitors to this site are well versed in the finer details of the identity and access management space, a fair few are still coming up to speed. Furthermore, we know that sometimes one needs to explain at a higher level what it is we do on a day-to-day basis; whether it’s to explain to a colleague why identity management is important, or to explain to one’s mother what it is we do for a living.

To facilitate the further education around identity and access management related topics, we’ve put together a comprehensive video series (19 videos currently) that provides high-level overviews of a variety of identity-related topics.

Some examples include:

What is Identity Management?

 

What is Authentication?

and some more in-depth topics like…

Machine to Machine Identity

You can see the full series (all 19 videos) here.

Share and enjoy!

Fun with OpenAM13 Authz Policies over REST – the ‘jwt’ parameter of the ‘Subject’

Summary

I’ve previously blogged about the ‘claims’ and ‘ssoToken’ parameters of the ‘subject’ item used in the REST call to evaluate a policy for a resource. These articles are:

Now we’re going to look at the ‘jwt’ parameter.
For reference, the REST call we’ll be using is documented in the developer guide, here:

The ‘JWT’ Parameter

The documentation describes the ‘jwt’ parameter as:

The value is a JWT string

What does that mean?
Firstly, it’s worth understanding the JWT specification: RFC7519
To summarise, a JWT is a URL-safe encoded, signed (and possibly encrypted) representation of a ‘JWT Claims Set’. The JWT specification defines the ‘JWT Claims Set’ as:

A JSON object that contains the claims conveyed by the JWT.

Where ‘claims’ are name/value pairs about the ‘subject’ of the JWT. Typically a ‘subject’ might be an identity representing a person, and the ‘claims’ might be attributes about that person such as their name, email address, and phone number.

So a JWT is a generic way of representing a subject’s claims.
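Concretely, a JWT is three base64url-encoded segments (header, claims set, signature) joined by dots, so reading the claims (without verifying anything) takes only a few lines of Node.js:

```javascript
// Decode (but do NOT verify) the claims set of a JWT.
function decodeJwtPayload(jwt) {
    var payload = jwt.split('.')[1]   // middle segment holds the claims
        .replace(/-/g, '+')           // base64url -> base64
        .replace(/_/g, '/');
    return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
}
```

Anyone holding the token can do this, which is why the signature (covered below) matters.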

OpenID Connect (OIDC)

OIDC makes use of the JWT specification by stating that the id_token must be a JWT. It also defines a set of claims that must be present within the JWT when generated by an OpenID Provider. See: http://openid.net/specs/openid-connect-core-1_0.html#IDToken

The specification also says that additional claims may be present in the token.  Just hang on to that thought for the moment…we’ll come back to it.

OpenAM OIDC configuration

For the purposes of investigating the ‘jwt’ parameter, let’s configure OpenAM to generate OIDC id_tokens. I’m not going to cover that here, but we’ll assume you’ve followed the wizard to set up an OIDC provider for the realm. We’ll also assume you’ve created/updated the OAuth2/OIDC Client Agent profile to allow the ‘profile’ and ‘openid’ scopes. I’m also going to use an ‘invoices’ scope so the config must allow me to request that too.

Now I can issue:

curl --request POST --user "apiclient:password" --data "grant_type=password&username=bob&password=password&scope=invoices openid profile" http://as.uma.com:8080/openam/oauth2/access_token?realm=ScopeAz

Note the request for the openid and profile scopes in order to ensure I get the OpenID Connect response.

And I should get something similar to the following:

{
  "access_token":"0d0cbd2a-c99c-478a-84c9-78463ec16ad4",
  "scope":"invoices openid profile",
  "id_token":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ.MS6jnMoeQ19y1DQky4UdD3Mqp28T0JYigNQ0d0tdm04HjicQb4ha818qdaErSxuKyXODaTmtqkGbBnELyrckkl7m2aJki9akbJ5vXVox44eaRMmQjdm4EcC9vmdNZSVORKi1gK6uNGscarBBmFOjvJWBBBPhdeOPKApV0lDIzX7xP8JoAtxCr8cnNAngmle6MyTnVQvhFGWIFjmEyumD6Bsh3TZz8Fjkw6xqOyYSwfCaOrG8BxsH4BQTCp9FgsEjI52dZd7J0otKLIk0EVmZIkI4-hgRIcrM1Rfiz9LMHvjAWY97JBMcGBciS8fLHjWWiLDqMHEE0Wn5haYkMSsHYg",
  "token_type":"Bearer",
  "expires_in":3599
}

Note the lengthy id_token field. This is the OIDC JWT made up according to the specification. Also note that, by default, OpenAM will sign this JWT with the 1024-bit ‘test’ certificate using the RS256 algorithm. I’ve updated my instance to use a new 2048-bit certificate called ‘test1’, so my response will be longer than the default. I’ve used a 2048-bit certificate because I want to use this tool to inspect the JWT and its signature: http://kjur.github.io/jsjws/tool_jwt.html. This tool only seems to support 2048-bit certificates, which is probably due to the JWS specification. (I could have used jwt.io to inspect the JWT, but it does not support verification of RSA-based signatures.)

So, in the JWT tool linked above you can paste the full value of the id_token field into ‘Step 3’, then click the ‘Just Decode JWT’ button. You should see the decoded JWT claims in the ‘Payload’ box:

You can also see that the header field shows how the signature was generated, allowing clients to verify it. To get this tool to verify the signature, you need the PEM-formatted public key of the signing certificate, i.e. ‘test1’ in my case. I’ve exported this from the KeyStore Explorer tool, and now I can paste it into the ‘Step 4’ box, using the ‘X.509 certificate for RSA’ option. Now I can click ‘Verify It’:

The tool tells me the signature is valid, and also decodes the token as before. If I were to change the content of the message or the signature of the JWT, the tool would tell me that the signature is not valid. For example, changing one character of the message would return this:

Note that the message box says that the signature is *Invalid*, as well as the Payload now being incorrect.

The ‘jwt’ Parameter

So now we’ve understood that the id_token field of the OIDC response is a JWT, we can use this as the ‘jwt’ parameter of the ‘subject’ field in the policy evaluation call.

For example, a call like this:
 curl --request POST --header "iPlanetDirectoryPro: AQIC5wM2LY4Sfcx-sATGr4BojcF5viQOrP-1IeLDz2Un8VM.*AAJTSQACMDEAAlNLABQtMjUwMzE4OTQxMDA1NDk1MTAyNwACUzEAAA..*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ.MS6jnMoeQ19y1DQky4UdD3Mqp28T0JYigNQ0d0tdm04HjicQb4ha818qdaErSxuKyXODaTmtqkGbBnELyrckkl7m2aJki9akbJ5vXVox44eaRMmQjdm4EcC9vmdNZSVORKi1gK6uNGscarBBmFOjvJWBBBPhdeOPKApV0lDIzX7xP8JoAtxCr8cnNAngmle6MyTnVQvhFGWIFjmEyumD6Bsh3TZz8Fjkw6xqOyYSwfCaOrG8BxsH4BQTCp9FgsEjI52dZd7J0otKLIk0EVmZIkI4-hgRIcrM1Rfiz9LMHvjAWY97JBMcGBciS8fLHjWWiLDqMHEE0Wn5haYkMSsHYg"}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

might return:

[
  {
    "ttl":9223372036854775807,
    "advices":{},
    "resource":"invoices",
    "actions":{"permit":true},
    "attributes":{"hello":["world"]}
  }
]

This assumes the following policy definition:

Note that in this case I am using the ‘iss’ claim within the token in order to ensure I trust the issuer of the token when evaluating the policy condition.

As mentioned in previous articles, it is imperative that the id_token claims include a ‘sub’ field. Fortunately, the OIDC specification makes this claim mandatory, so using an OIDC token here will work just fine.

It's also worth noting that OpenAM does *not* verify the signature of the id_token submitted in 'jwt' field.  This means that you could shorten the 'curl' call above to remove the signature component of the 'jwt'. For example, this works just the same as above:
 curl --request POST --header "iPlanetDirectoryPro: AQIC5wM2LY4Sfcx-sATGr4BojcF5viQOrP-1IeLDz2Un8VM.*AAJTSQACMDEAAlNLABQtMjUwMzE4OTQxMDA1NDk1MTAyNwACUzEAAA..*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ."}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

Note that the ‘jwt’ string needs to have two dots ‘.’ in it to conform to the JWT specification. The content following the second dot is the signature, which has been removed entirely in this second curl example; i.e. this is an unsigned JWT, which is completely valid.
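To illustrate, here is a quick shell sketch (using a made-up, hypothetical token value) of stripping the signature segment while keeping the two dots the specification requires:

```shell
# Hypothetical signed JWT in compact serialisation: header.payload.signature
SIGNED_JWT='eyAidHlwIjogIkpXVCIgfQ.eyAic3ViIjogImJvYiIgfQ.fakeSignature'

# Remove everything after the last dot, but keep the dot itself so the
# string still contains two dots, as the JWT specification requires.
UNSIGNED_JWT="${SIGNED_JWT%.*}."

echo "$UNSIGNED_JWT"
# → eyAidHlwIjogIkpXVCIgfQ.eyAic3ViIjogImJvYiIgfQ.
```

The `%.*` parameter expansion deletes the shortest suffix matching `.*`, i.e. the final dot and the signature after it.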

But, just to prove that OpenAM does *not* validate signed JWTs, you could attempt a curl call that includes garbage for the signature.  For example:

curl --request POST --header "iPlanetDirectoryPro: AQIC5wM2LY4Sfcx-sATGr4BojcF5viQOrP-1IeLDz2Un8VM.*AAJTSQACMDEAAlNLABQtMjUwMzE4OTQxMDA1NDk1MTAyNwACUzEAAA..*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ.garbage!!"}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

…would still successfully be authorised.
It’s also worth noting that the id_token claims of an OIDC token include an ‘exp’ field signifying the expiry time of the id_token. OpenAM does not evaluate this field in this call.

Signature Verification

You might be wondering if it is possible to verify the signature and other aspects, such as the ‘exp’ field. Yes, it is! With a little bit of clever scripting – of course!

The first thing is to ensure that the JWT can be parsed by a script. Unfortunately, simply passing it in the ‘jwt’ parameter does not permit this. But we can *also* pass the JWT in the ‘environment’ field of the policy decision request. I’ll shorten the JWTs in the following curl command to make them easier to read, but you should supply the full signed JWT in the ‘environment’ field:

curl --request POST --header "iPlanetDirectoryPro: AQIC....*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlw...MyNTYiIH0.eyAiYXRfa...MTMgfQ.MS6jn...sHYg"},"environment":{"jwt":["eyAidHlw...MyNTYiIH0.eyAiYXRfa...MTMgfQ.MS6jn...sHYg"]}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

Note in this that the ‘environment’ field now includes a ‘jwt’ field whose data can be utilised in a script.  And what would such a policy condition script look like?

Head over to https://github.com/smof/openAM_scripts and take a look at the ‘ExternalJWTVerifier.groovy’ script. The associated blog post from my colleague, Simon Moffatt, sets this script in context: http://identityrelationshipmanagement.blogspot.co.uk/2016/05/federated-authorization-using-3rd-party.html. It will validate either an HMAC-signed JWT, if you enter the appropriate shared secret, or an RS256-signed OIDC JWT, if you specify the jwk_uri for the OpenID Connect Provider.
And, now that the claims are accessible to the scripting engine, you can apply pretty much any form of logic to them to validate the token – including validating the ‘exp’ field.
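As a rough illustration of what such validation involves, here is a plain-shell sketch (with a made-up payload, not the Groovy script itself) of decoding a base64url JWT payload and comparing its ‘exp’ claim against the current time:

```shell
# Build a sample base64url payload for illustration (in practice this is
# the middle segment of the real id_token, between the two dots).
PAYLOAD_JSON='{ "sub": "bob", "exp": 1469031113 }'
PAYLOAD=$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '=\n' | tr '+/' '-_')

# base64url -> base64: restore the standard alphabet and the '=' padding.
B64=$(printf '%s' "$PAYLOAD" | tr '_-' '/+')
while [ $(( ${#B64} % 4 )) -ne 0 ]; do B64="${B64}="; done

DECODED=$(printf '%s' "$B64" | base64 -d)
EXP=$(printf '%s' "$DECODED" | sed 's/.*"exp": *\([0-9]*\).*/\1/')

# The sample 'exp' is in 2016, so this reports the token as expired.
if [ "$(date +%s)" -lt "$EXP" ]; then
  echo "token still valid"
else
  echo "token expired"
fi
```

The same decode-then-compare logic is what a policy condition script would apply to the JWT passed in the ‘environment’ field.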

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

A Beginners Guide to OpenIDM – Part 3 – Connectors

Overview

Previously in this series we have looked at a general overview of OpenIDM and had a detailed look at objects. In this blog I want to explore connectors.
Connectors are the integration glue that enables you to bring data into OpenIDM from all sorts of different systems and data stores. We will take a look at the different types of connectors available in OpenIDM, how they work and end with a practical example of how to actually configure a connector.

Connectors

Architecture

Every identity system that I have ever worked with has a concept similar to a connector. Usually connectors comprise Java libraries or scripts that perform the actual push and pull of data to and from a target data source.
Standard connector operations in OpenIDM include:
  • Create: Create a new object (usually an account) in a target data store.
  • Update: Update an existing object, e.g. if a user changes their email address then we may want to update their record in a target data store.
  • Get: Retrieve a specific instance of an object (e.g. an account) from a target data store.
  • Search: Query the collection and return a specific set of results.
There are a number of other operations which we will explore in later blogs.
At a high level connectors are comprised of:
  • Provisioner configuration: configuration data defining the connector usually containing:
    • Reference to the underlying Java class that implements the connector. This should be populated automatically when you choose your connector type. You can explore the connector source code if you like but for the most part you shouldn’t need to be concerned with the underlying classes.
    • All of the credentials and configuration needed to access the data store. You need to configure this.
    • The data store schema for the object or account. You need to configure this.
Connectors are configured through the user interface but, like all OpenIDM configuration, they are also stored (and can be edited) locally on the file system. Connector configuration files (like most OpenIDM configuration files) can be found in openidm/conf and have the following naming convention:
provisioner.openicf-something.json (where something is whatever you have named your connector).
Note that connector configuration files will not appear until you have configured a connector using the UI; we will revisit this later.
The logical flow in OpenIDM for utilising connectors is as follows:
  • Data Synchronization engine outputs data and a requested operation e.g. create, delete, update or one of several others
  • Provisioner engine invokes the connector class with the requested operation and the data from the synchronization engine.
  • Connector class uses the configuration parameters from the provisioner file and the data passed in the invocation to actually do the work and push or pull to or from the target.
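To make the provisioner file more concrete, here is an illustrative sketch of what one might contain. The exact property names vary between OpenIDM/OpenICF versions, so treat every key below as an assumption and check a UI-generated file in openidm/conf for the authoritative names:

```shell
# Write an illustrative provisioner file; all key names here are
# assumptions modelled on a typical OpenICF CSV connector configuration.
cat > /tmp/provisioner.openicf-something.json <<'EOF'
{
  "name" : "something",
  "connectorRef" : {
    "connectorName" : "org.forgerock.openicf.csvfile.CSVFileConnector"
  },
  "configurationProperties" : {
    "csvFile" : "/home/data/users.csv",
    "headerUid" : "id",
    "headerName" : "username"
  }
}
EOF

echo "wrote /tmp/provisioner.openicf-something.json"
```

Note how the three parts described above are visible: the connector class reference, the credentials/configuration for the data store, and (in a real file) the schema for the object types.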

Connector Example

So now that we have a basic understanding of how connectors work, let’s try configuring one.
I’m going to use the CSV connector for this example, and we are going to read users from a Comma-Separated Values (CSV) file. Ultimately we will read this data into the managed user object using a mapping. For this blog, though, we will just focus on configuring the connector.
Feel free to use any CSV file but if you want to follow along with the example then download the CSV here that I created using Mockaroo.



Copy the file to somewhere on the same file system that OpenIDM is installed on; it doesn’t matter where, so long as OpenIDM can access it. I’m going to use /home/data/users.csv.
Then log in to OpenIDM as administrator. Navigate to Configure, then Connectors.


 

Press “New Connector”



You will see the UI for configuring a new connector:



Give your new connector a name (I have used UserLoadCSV above – no spaces permitted), and look at the connector types. These are all the different systems you can integrate with.
Note that with further configuration, more connectors are available, and using the scripted connector you can pretty much integrate with any system that offers a suitable API.
 
Select the “CSV File Connector”. Now we need to complete the “Base Connector Details”, starting with the path to the CSV file we actually want to process.


Now let’s take a look at the next few fields:



They are populated by default, but we need to configure them to match our spreadsheet.
Looking at the data:
  • Header UID = id
  • Header Name = username
So in this instance we just need to change the Header UID to match.



You will note there are a few more fields:
  • Header Password: We will not be processing any passwords from this CSV. That might be something you want to do, although typically you will have OpenIDM generate passwords for you (more on that later).
  • Quote Character: If you have an unusually formatted CSV, you can change the character that surrounds your data values. This is used by OpenIDM to successfully parse the CSV file.
  • Field Delimiter: Similarly, if you are using a delimiter (the character that splits up data entries) that is anything other than “,” you can tell OpenIDM here.
  • Newline String: As above.
  • Sync Retention Count: Todo
Note that these parameters are all unique to the CSV connector. If you were to use another connector, say the database connector, you would have a different set of parameters that must be configured for OpenIDM to successfully connect to the database and query the table.
OK, with all that done, let’s add the connector:



All being well you should get a positive confirmation message. Congratulations, you have added a connector! All very well but what can we do with it?
Click on the menu option (the vertical dots):


Then Data (__ACCOUNT__)



If you have done everything correctly you should see the data from the CSV in OpenIDM!



It is important to understand that at this point the data has not been loaded into OpenIDM; OpenIDM is simply providing a live view of the data in the CSV. This works for any connector, and we will revisit it at the end of this blog.
Before that, there are a few things I want to cover. Go back to the Connector screen; you should have a new connector:



Select it, and select “Object Types”:



Then edit “__ACCOUNT__”.




What you should see is a list of all of the attributes in the CSV file. OpenIDM has automatically parsed the CSV and built a schema for interpreting the data. You may also spot “__NAME__”. This is a special attribute that maps to the Header Name attribute we configured earlier.

Again, the concept of Object Types is universal to all connectors, and sometimes additional configuration of the Object Type may be required in order to successfully process data.


Finally, let’s take a look at Sync:

On this page you can configure LiveSync. LiveSync is a special case of synchronization. Ordinarily synchronization is performed through the mappings interface ( or automatically on a schedule ).

However, if the connector and target system support it, LiveSync can be used. With LiveSync, changes are picked up as they occur in the target. With a normal synchronization (often called reconciliation), all accounts in the target must be examined against the source for changes; with LiveSync, only accounts in the target that have changed are processed. For this to work, the target must support some form of change log that OpenIDM can read. In systems with large numbers of accounts this is a much more efficient way of keeping data in sync.

Connectors And The REST API

As before, we can make use of the REST API to query our new connector. We can actually use the API to read from, or write to, the underlying CSV data store. Just take a moment to think about what that means. In an enterprise implementation you might have hundreds of different data stores of every type. Once you have configured connectors in OpenIDM, you can query those data stores using a single, consistent and centralised RESTful API via OpenIDM. That really is a very powerful tool.

Let’s take a look at this now. Navigate back to the data accounts page from earlier:




Take a look at the URL:

As before, this corresponds to our REST API. Please fire up Postman again.

Enter the following URL

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryId=query-all-ids

You should see the following result



We have just queried the CSV file using the REST API, and retrieved the list of usernames.
Let’s try retrieving the data for a specific user:

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryFilter=/email eq "tgardner0@nsw.gov.au"


Here we are searching for the user with the email address tgardner0@nsw.gov.au.
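One practical note: when issuing this query from a tool like curl rather than the browser, the filter value must be URL-encoded, since spaces and quotes are not legal in a URL. A minimal shell sketch of building the encoded URL (same host and connector name as above):

```shell
FILTER='/email eq "tgardner0@nsw.gov.au"'

# Percent-encode the characters that appear in this particular filter
# (a general-purpose encoder would cover more characters).
ENCODED=$(printf '%s' "$FILTER" | sed -e 's/ /%20/g' -e 's/"/%22/g')

URL="http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryFilter=$ENCODED"
echo "$URL"
# → http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryFilter=/email%20eq%20%22tgardner0@nsw.gov.au%22
```

Tools like Postman perform this encoding for you, which is why pasting the human-readable form there works.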

 



Again, this is just a small sample of what the REST API is capable of; you can learn much more here:
https://forgerock.org/openidm/doc/bootstrap/integrators-guide/index.html#appendix-rest
And more on how queries work here:

https://forgerock.org/openidm/doc/bootstrap/integrators-guide/#constructing-queries

 

Come back next time for a look at mappings where we will join together the managed user and the connector to actually create some users in the system.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Identity Disorder Podcast, Episode 2

Identity Disorder, Episode 2: It’s a DevOps World, We Just Live In It


In the second episode of Identity Disorder, join Daniel and me as we chat with ForgeRock’s resident DevOps guru Warren Strange. Topics include why DevOps and elastic environments are a bit like herding cattle, how ForgeRock works in a DevOps world, more new features in the mid-year 2016 ForgeRock Identity Platform release, the Pokémon training center next to Daniel’s house, and if Canada might also consider withdrawing from its neighbors.

Episode Links:

Learn more about ForgeRock DevOps and cloud resources: https://wikis.forgerock.org/confluence/display/DC/ForgeRock+DevOps+and+Cloud+Resources

Videos of the new features in the mid-year 2016 ForgeRock Identity Platform release:
https://vimeo.com/album/4053949

Information on the 2016 Sydney Identity Summit and Sydney Identity Unconference (August 9-10, 2016):
https://summits.forgerock.com/sydney/

All upcoming ForgeRock events:
https://www.forgerock.com/about-us/events/

 

Fun with OpenAM13 Authz Policies over REST – the ‘ssoToken’ parameter of the ‘Subject’

I recently blogged about using the ‘claims’ parameter of the subject item in a REST call for policy evaluation in OpenAM 13 (see http://yaunap.blogspot.co.uk/2016/07/fun-with-openam13-authz-policies-over.html). In that article I blithely stated that using the ‘ssoToken’ parameter was fairly obvious. However, I thought I’d take the time to explore this in a little more detail to ensure my understanding is complete. This is partly because I started thinking about OIDC JWT tokens, and the fact that OpenAM stateless sessions (nothing to do with OIDC) also use JWT tokens.

Let’s first ensure we understand stateful and stateless sessions.
(This is documented in the Admin Guide: https://backstage.forgerock.com/#!/docs/openam/13.5/admin-guide#chap-session-state)

Stateful sessions are your typical OpenAM sessions. When a user successfully authenticates with OpenAM they establish a session. A stateful session means that all the details about that session are held by the server-side OpenAM services. By default this is in memory, but sessions can be persisted to an OpenDJ instance in order to support high availability and scalability across geographically dispersed datacentres. The client of the authentication request receives a session identifier, typically stored by a web application as a session cookie, which is passed back to the OpenAM servers so that the session details can be retrieved. It’s called ‘stateful’ because the server needs to maintain the state of the session.
A session identifier for a stateful session might look something like this:
AQIC5wM2LY4Sfcw4EfByyKNoSnml3Ngk0bxcJa-LD-qrwSc.*AAJTSQACMDEAAlNLABM3NzI1Nzk4NDU0NTIyMTczODA2AAJTMQAA*
Basically, it’s just a unique key to the session state.

Stateless sessions are new in OpenAM 13. These alleviate the need for servers to maintain and store state, which avoids the need to replicate persisted state across multiple datacentres. Of course, there is still session ‘state’…it’s just no longer stored on the server. Instead, all state information is packaged up into a JWT and passed to the client to maintain. Now, on each request, the client can send the complete session information back to an OpenAM server for processing. OpenAM does not need to look up the session information from the stateful repository because all the information is right there in the JWT. This means that, for a realm configured to operate with stateless sessions, the client will receive a much bigger token on successful authentication.
Therefore, a stateless session token might look something like:

AQIC5wM2LY4Sfcx_OSZ6Qe07K0NShFK6hZ2LWb6Pn2jNBTs.*AAJTSQACMDEAAlNLABMzMjQ1MDI5NDA0OTk0MjQyMTY0AAJTMQAA*eyAidHlwIjogIkpXVCIsICJhbGciOiAiSFMyNTYiIH0.eyAic2VyaWFsaXplZF9zZXNzaW9uIjogIntcInNlY3JldFwiOlwiM2M0NzczYzQtM2ZkZS00MjI2LTk4YzctMzNiZGQ5OGY2MjU0XCIsXCJleHBpcnlUaW1lXCI6MTQ2ODg2MTk3NTE0OCxcImxhc3RBY3Rpdml0eVRpbWVcIjoxNDY4ODU0Nzc1MTQ4LFwic3RhdGVcIjpcInZhbGlkXCIsXCJwcm9wZXJ0aWVzXCI6e1wiQ2hhclNldFwiOlwiVVRGLThcIixcIlVzZXJJZFwiOlwiYm9iXCIsXCJGdWxsTG9naW5VUkxcIjpcIi9vcGVuYW0vVUkvTG9naW4_cmVhbG09U2NvcGVBelwiLFwic3VjY2Vzc1VSTFwiOlwiL29wZW5hbS9jb25zb2xlXCIsXCJjb29raWVTdXBwb3J0XCI6XCJ0cnVlXCIsXCJBdXRoTGV2ZWxcIjpcIjVcIixcIlNlc3Npb25IYW5kbGVcIjpcInNoYW5kbGU6QVFJQzV3TTJMWTRTZmN3bG9wOHFRNFpydmZfY2N1am85VlZCLWxJU1ltR3FvdjQuKkFBSlRTUUFDTURFQUFsTkxBQk0yTlRreU9URXdPVFl6T1RjNU5qSTJNVEF3QUFKVE1RQUEqXCIsXCJVc2VyVG9rZW5cIjpcImJvYlwiLFwibG9naW5VUkxcIjpcIi9vcGVuYW0vVUkvTG9naW5cIixcIlByaW5jaXBhbHNcIjpcImJvYlwiLFwiU2VydmljZVwiOlwibGRhcFNlcnZpY2VcIixcInN1bi5hbS5Vbml2ZXJzYWxJZGVudGlmaWVyXCI6XCJpZD1ib2Isb3U9dXNlcixvPXNjb3BlYXosb3U9c2VydmljZXMsZGM9b3BlbmFtLGRjPWZvcmdlcm9jayxkYz1vcmdcIixcImFtbGJjb29raWVcIjpcIjAxXCIsXCJPcmdhbml6YXRpb25cIjpcIm89c2NvcGVheixvdT1zZXJ2aWNlcyxkYz1vcGVuYW0sZGM9Zm9yZ2Vyb2NrLGRjPW9yZ1wiLFwiTG9jYWxlXCI6XCJlbl9VU1wiLFwiSG9zdE5hbWVcIjpcIjEyNy4wLjAuMVwiLFwiQXV0aFR5cGVcIjpcIkRhdGFTdG9yZVwiLFwiSG9zdFwiOlwiMTI3LjAuMC4xXCIsXCJVc2VyUHJvZmlsZVwiOlwiQ3JlYXRlXCIsXCJBTUN0eElkXCI6XCI0OTVjNmVjN2ZjNmQyMWU4MDFcIixcImNsaWVudFR5cGVcIjpcImdlbmVyaWNIVE1MXCIsXCJhdXRoSW5zdGFudFwiOlwiMjAxNi0wNy0xOFQxNToxMjo1NVpcIixcIlByaW5jaXBhbFwiOlwiaWQ9Ym9iLG91PXVzZXIsbz1zY29wZWF6LG91PXNlcnZpY2VzLGRjPW9wZW5hbSxkYz1mb3JnZXJvY2ssZGM9b3JnXCJ9LFwiY2xpZW50SURcIjpcImlkPWJvYixvdT11c2VyLG89c2NvcGVheixvdT1zZXJ2aWNlcyxkYz1vcGVuYW0sZGM9Zm9yZ2Vyb2NrLGRjPW9yZ1wiLFwic2Vzc2lvbklEXCI6bnVsbCxcImNsaWVudERvbWFpblwiOlwibz1zY29wZWF6LG91PXNlcnZpY2VzLGRjPW9wZW5hbSxkYz1mb3JnZXJvY2ssZGM9b3JnXCIsXCJzZXNzaW9uVHlwZVwiOlwidXNlclwiLFwibWF4SWRsZVwiOjMwLFwibWF4Q2FjaGluZ1wiOjMsXCJuZXZlckV4cGlyaW5nXCI6ZmFsc2UsXCJtYXhUaW1lXCI6MTIwfSIgfQ.FSmj5Sn-ibGoqWTCerGBZ-IYVp1V54HVGj5A53Td8Ao

Obviously, this is much larger and looks more complex.  This token is essentially made up of two parts:
1. a fake stateful session identifier
2. a JWT
OpenAM always prepends a fake stateful session identifier to this JWT for backwards compatibility. So, the actual JWT starts *after* the second asterisk (*).  i.e. from the bit that begins eyAidH… right through to the end.
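A quick shell sketch (with a shortened, made-up token) of pulling the JWT portion out of a stateless session token:

```shell
# Made-up stateless session token: fake stateful identifier, then a tail
# between two asterisks, then the JWT itself.
TOKEN='AQIC5wFakeStatefulPart.*AAJTSQFakeTail*eyAidHlwIjogIkpXVCIgfQ.eyAic3ViIjogImJvYiIgfQ.sig'

# The JWT is everything after the second asterisk.
JWT=$(printf '%s' "$TOKEN" | cut -d'*' -f3-)

echo "$JWT"
# → eyAidHlwIjogIkpXVCIgfQ.eyAic3ViIjogImJvYiIgfQ.sig
```

The extracted string is a standard compact-serialisation JWT that the tools below can decode.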

You can use tools like jwt.io and jwtinspector.io to unpack and read this JWT. For example, for the JWT above, you can see the payload data, which is how OpenAM represents the session information:

Now, turning our attention to the policy evaluation REST calls we see that there is an option to use ‘ssoToken’ as a parameter to the ‘subject’ item.

In a realm that uses the default ‘stateful’ sessions, any policy evaluation REST call that uses the ‘ssoToken’ parameter should use a stateful session identifier. The policy will then have full access to the session information as well as the profile data of the user identified by the session.

A stateless realm works in exactly the same way. You now need to provide the *full* stateless token (including the ‘fake’ stateful identifier along with the JWT component), and the policy will have access to the state information from the JWT as well as information about the user from the datastore (such as group membership).

For example:

curl --request POST --header "iPlanetDirectoryPro: AQIC5wM2LY4SfcxxJaG7LFOia1TVHZuJ4_OVm9lq5Ih5uXA.*AAJTSQACMDEAAlNLABQtMjU4MDgxNTIwMzk1NzA5NDg0MwACUzEAAA..*" --header "Content-Type: application/json" --data '{"resources":["orders"],"application":"api","subject":{"ssoToken":"AQIC5wM2LY4SfcyRBqm_r02CEJ5luC4k9A6HPqDitS9T5-0.*AAJTSQACMDEAAlNLABQtNTc4MzI5MTk2NjQzMjUxOTc2MAACUzEAAA..*eyAidHlwIjogIkpXVCIsICJhbGciOiAiSFMyNTYiIH0.eyAic2VyaWFsaXplZF9zZXNzaW9uIjogIntcInNlY3JldFwiOlwiN2RiODdhMjQtMjk5Ni00YzkxLTkyNTUtOGIwNzdmZDEyYmFkXCIsXCJleHBpcnlUaW1lXCI6MTQ2ODkzNTgyODUyNSxcImxhc3RBY3Rpdml0eVRpbWVcIjoxNDY4OTI4NjI4NTI1LFwic3RhdGVcIjpcInZhbGlkXCIsXCJwcm9wZXJ0aWVzXCI6e1wiQ2hhclNldFwiOlwiVVRGLThcIixcIlVzZXJJZFwiOlwiYm9iXCIsXCJGdWxsTG9naW5VUkxcIjpcIi9vcGVuYW0vVUkvTG9naW4_cmVhbG09U2NvcGVBelwiLFwic3VjY2Vzc1VSTFwiOlwiL29wZW5hbS9jb25zb2xlXCIsXCJjb29raWVTdXBwb3J0XCI6XCJ0cnVlXCIsXCJBdXRoTGV2ZWxcIjpcIjVcIixcIlNlc3Npb25IYW5kbGVcIjpcInNoYW5kbGU6QVFJQzV3TTJMWTRTZmN3Y3YzMFFJTGF0Z3E3d3NJMWM4RThqRmZkTDMzTlZVQjAuKkFBSlRTUUFDTURFQUFsTkxBQk15TVRNME9USTRPVFk0TmpBNE1qSTFNelF3QUFKVE1RQUEqXCIsXCJVc2VyVG9rZW5cIjpcImJvYlwiLFwibG9naW5VUkxcIjpcIi9vcGVuYW0vVUkvTG9naW5cIixcIlByaW5jaXBhbHNcIjpcImJvYlwiLFwiU2VydmljZVwiOlwibGRhcFNlcnZpY2VcIixcInN1bi5hbS5Vbml2ZXJzYWxJZGVudGlmaWVyXCI6XCJpZD1ib2Isb3U9dXNlcixvPXNjb3BlYXosb3U9c2VydmljZXMsZGM9b3BlbmFtLGRjPWZvcmdlcm9jayxkYz1vcmdcIixcImFtbGJjb29raWVcIjpcIjAxXCIsXCJPcmdhbml6YXRpb25cIjpcIm89c2NvcGVheixvdT1zZXJ2aWNlcyxkYz1vcGVuYW0sZGM9Zm9yZ2Vyb2NrLGRjPW9yZ1wiLFwiTG9jYWxlXCI6XCJlbl9VU1wiLFwiSG9zdE5hbWVcIjpcIjEyNy4wLjAuMVwiLFwiQXV0aFR5cGVcIjpcIkRhdGFTdG9yZVwiLFwiSG9zdFwiOlwiMTI3LjAuMC4xXCIsXCJVc2VyUHJvZmlsZVwiOlwiQ3JlYXRlXCIsXCJBTUN0eElkXCI6XCI2MzE2MDI4YjcyYWU5MWMyMDFcIixcImNsaWVudFR5cGVcIjpcImdlbmVyaWNIVE1MXCIsXCJhdXRoSW5zdGFudFwiOlwiMjAxNi0wNy0xOVQxMTo0Mzo0OFpcIixcIlByaW5jaXBhbFwiOlwiaWQ9Ym9iLG91PXVzZXIsbz1zY29wZWF6LG91PXNlcnZpY2VzLGRjPW9wZW5hbSxkYz1mb3JnZXJvY2ssZGM9b3JnXCJ9LFwiY2xpZW50SURcIjpcImlkPWJvYixvdT11c2VyLG89c2NvcGVheixvdT1zZXJ2aWNlcyxkYz1vcGVuYW0sZGM9Zm9yZ2Vyb2NrLGRjPW9yZ1wiLFwic2Vzc2lvbklEXCI6bnVsbCxcImNsaWVudERvbWFpblwiOlwibz1zY29wZWF6LG91PXNlcnZpY2VzLGRjPW9wZW5hbSxkYz1mb3JnZXJvY2ssZGM9b3JnXCIsXCJzZXNzaW9uVHlwZVwiOlwidXNlclwiLFwibWF4SWRsZVwiOjMwLFwibWF4Q2FjaGluZ1wiOjMsXCJuZXZlckV4cGlyaW5nXCI6ZmFsc2UsXCJtYXhUaW1lXCI6MTIwfSIgfQ.Dnjk-9MgANmhX4jOez12HcYAW9skck-HFuTPnzEmIq8"}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate
might return:
[{"advices":{},"ttl":9223372036854775807,"resource":"orders","actions":{"permit":true},"attributes":{}}]
Assuming the policy looks something like this:
…and, in this specific case, that the authentication level for the ‘subject’ of the ssoToken is set to two or greater, as well as the ‘subject’ being a member of the ‘api_order’ group in the datastore.
Next up, we’ll look at using OIDC tokens in the subject parameter of the REST call.

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.