OpenAM Security Advisory #201605

Security vulnerabilities have been discovered in OpenAM components. These issues may be present in versions of OpenAM including 13.0.x, 12.0.x, 11.0.x, 10.1.0-Xpress, 10.0.x, 9.x, and possibly previous versions.

This advisory provides guidance on how to ensure your deployments can be secured. Workarounds or patches are available for all of the issues.

The maximum severity of issues in this advisory is Critical. Deployers should take steps as outlined in this advisory and apply the relevant update(s) at the earliest opportunity.

The recommendation is to deploy the relevant patches. Patch bundles are available for the following versions (in accordance with ForgeRock’s Maintenance and Patch availability policy):

  • 11.0.3
  • 12.0.2-12.0.3
  • 13.0.0

Customers can obtain these patch bundles from BackStage.

Issue #201605-01: Credential Forgery

Product: OpenAM
Affected versions: 11.0.0-11.0.3, 12.0.0-12.0.3, 13.0.0
Fixed versions: 13.5.0
Component: Core Server, Server Only
Severity: Critical

The Persistent Cookie authentication module is vulnerable to credential forgery. In some configurations this may allow an attacker unauthorized access to the system as any user.

Workaround:
Disable Persistent Cookie authentication module instances and require manual authentication, or combine the module with a mandatory second factor.

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle.

Issue #201605-02: Insufficient Authorization

Product: OpenAM
Affected versions: 13.0.0
Fixed versions: 13.5.0
Component: Core Server, Server Only
Severity: Critical

Insufficient authorization on a query endpoint allows a non-privileged user to access details of other users on the system.

Workaround:
No workaround available.

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle.

Issue #201605-03: Authentication Bypass

Product: OpenAM
Affected versions: 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.3, 13.0.0
Fixed versions: 13.5.0
Component: Core Server, Server Only
Severity: High

In some configurations a user may be able to bypass additional authentication requirements and login with just username and password.

Workaround:
Ensure all authorization mechanisms and policies enforce that all chain/module/service/role requirements have been met after authentication, for example by using OpenAM’s “Authenticated by Module Chain”, “Authenticated by Module Instance” or “Authenticated to Realm” environment conditions in conjunction with a policy agent.

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle and apply the workaround.

Issue #201605-04: Cross-Site Request Forgery (CSRF)

Product: OpenAM
Affected versions: 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.3, 13.0.0
Fixed versions: 13.5.0
Component: Core Server, Server Only
Severity: High

The OAuth2 consent page is vulnerable to a CSRF attack.

Workaround:
No workaround available.

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle and update any customized authorize.ftl template files based on the patch.

Issue #201605-05: Cross Site Scripting (XSS)

Product: OpenAM
Affected versions: 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.3, 13.0.0
Fixed versions: 13.5.0
Component: Core Server, Server Only
Severity: High

OpenAM is vulnerable to cross-site scripting (XSS) attacks which could lead to session hijacking or phishing. The following endpoints are vulnerable:

  • /openam/cdcservlet
  • /openam/SAMLPOSTProfileServlet

Workaround:
Protect the listed endpoints with the container (for example using the mod_security Apache module) or filter external requests until a patch is deployed.
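
For example, a minimal sketch of blocking these endpoints outright at an Apache httpd 2.4 reverse proxy in front of OpenAM (whether to deny outright or merely filter external requests is a deployment decision; this illustrative snippet simply denies):

<LocationMatch "^/openam/(cdcservlet|SAMLPOSTProfileServlet)">
    Require all denied
</LocationMatch>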

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle.

Issue #201605-06: Credentials appear in CTS access log

Product: OpenAM
Affected versions: 13.0.0
Fixed versions: 13.5.0
Component: Core Server, Server Only
Severity: Medium

OAuth 2 client requests using HTTP Basic authentication may result in the base64-encoded credentials being recorded in the CTS access logs.
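
Remember that HTTP Basic authentication merely base64-encodes the credentials rather than encrypting them, so anyone who can read the logged value can trivially recover the password. For example, with illustrative credentials:

echo -n 'apiclient:password' | base64
YXBpY2xpZW50OnBhc3N3b3Jk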

Workaround:
Use alternative authentication mechanisms for OAuth2 clients, or protect the OpenDJ access logs for the CTS store.

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle.

Issue #201605-07: Content Spoofing Vulnerability

Product: OpenAM
Affected versions: 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.3, 13.0.0
Fixed versions: 13.5.0
Component: Core Server, Server Only
Severity: Low

Using a carefully crafted request an attacker can cause an alternative image and title text to be displayed on an admin console page.

Workaround:
Block access to the following endpoint:

  • /openam/ccversion/Version

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle.

If You’re Coming to Our Sydney Unconference…

Our very first Unconference in Australia is happening later today (Wednesday, August 10th) at the Museum of Contemporary Art Australia in Sydney. We’re on Level 6, the same location as our Identity Summit from yesterday, but today we’re in the Quayside Room rather than the Harbourside Room.

If you haven’t been to a ForgeRock Unconference previously, you might be curious as to what exactly takes place. Basically, it’s a gathering of identity geeks, coming together to share knowledge, demonstrate new ForgeRock features and functions, and discuss the latest industry developments. We’ve got a list of topics for discussion – see below – but the agenda will be determined by the attendees, so we need your participation. ForgeRock product management, customer success and engineering teams will facilitate the day and will share their expertise through hands-on demonstrations (50 minute sessions), technical guidance, and deep dives on design, implementation, provisioning and management.

Topics For Discussion

  • Mobile Push Notification
  • Stateless Sessions
  • Policy and Authorisation
  • Pragmatic Application of Advanced Federation
  • Standards 101 + UMA 101
  • Customising Self-Service
  • Social Registration Preview
  • Inside IDM: APIs, Tuning, Object Model
  • Cloud Readiness
  • DevOps
  • Platform Commons: REST, Audit, UI, Scripting
  • All About Authentication, MFA, Biometrics
  • University: Access Mgmt 101
  • University: IDM Mgmt 101
  • University: Identity Gateway 101
  • University: Directory Server 101

Looking forward to seeing you in Sydney!

Setting up an Active Directory domain for evaluating the ForgeRock stack

This post walks through setting up a single Windows machine that you can use for testing various parts of the ForgeRock stack that integrate with Microsoft products. It is aimed at those who are tech-savvy but new to Microsoft Active Directory.

By the end of the walk through, you should have:

  1. A Windows Active Directory domain
  2. An Active Directory DNS server
  3. An Active Directory LDAP service, running on SSL
  4. Active Directory Certificate Services – a CA.
  5. A kerberos realm.
  6. A vaguely realistic directory layout with sample users.
  7. PowerShell scripts for configuring the above.

The above items should allow you to test:

  1. OpenAM – Active Directory authentication, DataStores, self-service features and Behera password policy support.
  2. OpenAM – Integrated Windows Authentication (which for some reason we call Windows Desktop SSO in ForgeRock).
  3. OpenAM – SmartCard authentication.
  4. OpenAM – ADFS federation with WS-Fed, SAML2 and OIDC.
  5. OpenIDM – Password Synchronisation.

This post will follow the following high level steps:

  1. Setup a Windows VM or cloud instance.
  2. Give the computer an appropriate hostname
  3. Run Windows Update.
  4. Install Active Directory Domain Services (ADDS, also known as “promotion to a domain controller”)
  5. Install Active Directory Certificate Services (ADCS). Amongst other things this is a quick way of installing a certificate on the LDAPS port of a Windows domain controller.
  6. Allow all users to log on locally.
  7. Create a sample AD structure in PowerShell.

Set up a Windows Machine

To get hold of a Windows Server instance that you can start playing with, you can either build an instance yourself on your local or on-prem virtualisation software, or rent an instance from the likes of AWS and Azure.

At the time of writing, the most up-to-date edition of Windows Server is 2012 R2, which is what I will use for the remainder of this post.

I find it easiest to perform most of my testing on a local VM on my laptop. If you don’t have an MSDN license or a licensed copy of Windows Server to hand, Microsoft give away a fully functional 180-day trial of Windows Server 2012 R2.

For testing, a Windows 2012 R2 server will scrape by on 2GB of RAM, but I would give it at least 4GB if you can. And while it will install happily on a 20GB hard disk, don’t expect there to be much room for Windows updates or any other software you may want to install. Some cloud providers offer 20GB images; I would avoid these if you can afford to, and go with at least 40GB so you’re not constantly having to juggle things around.

Setting up a VM and automating it

Install your copy of Windows Server from the ISO file. I won’t detail going through the installation steps on the first few screens; they simply ask for things like localisation, keyboard layout and the password of the administrator account. I find that the Datacenter edition covers all the features I need.

If you want to skip through all of this, you can set up an autounattend.xml file and add it to the root of your Windows Server ISO image. Here is one that I made which works with the 180-day trial images of Server 2012 R2. I made this using the Windows ADK, but you can also use a third-party generator website, or just start with another one and manually edit it yourself.

If you choose to use my autounattend.xml, it is set up with the following:

username: administrator
password: Cangetinwin1
hostname: svr1
IE ESC: disabled

I have disabled IE ESC (aka that setting that prevents Internet Explorer from doing anything at all on Windows Server) above for testing purposes, but in production I would avoid doing this.

Using a cloud instance

A Windows machine in the cloud will likely boot straight to the desktop, so there isn’t much you need to do. Make sure you can remotely access it via Remote Desktop (TCP port 3389) from your IP address, and make sure you have a Remote Desktop client handy. Windows obviously comes with one (mstsc.exe), and there are some great ones for Mac and Linux too.

Choose your weapon – server manager or PowerShell

When you first boot to a Windows 2012 R2 desktop, you’ll see four icons on the taskbar. First is the Windows 8 start menu, which I would avoid.

Second is Server Manager. This useful tool provides quick access to almost everything you need to administer Windows Server. I will describe using it to access various tools in the remainder of this post.

Third is PowerShell. I’ll also describe how to do most things in this post with PowerShell.

The fourth icon is good old Windows Explorer, which gives you access to the file system.

Run Windows Update

In production, your Windows updates would be carefully managed via group policy and possibly a private Windows Server Update Services (WSUS) Server. For test and evaluation, it’s up to you whether you want to get updates from Microsoft. For testing, I would run it once and then turn it off.

To access the Windows Update settings in server manager, navigate to Local Server > Windows Update.

Give the computer an appropriate hostname

You can skip this part if you used my autounattend.xml file above, your hostname will be svr1. Otherwise, in server manager, navigate to Local Server > Computer Name
You can also set the computer name with the Rename-Computer cmdlet in PowerShell:
Rename-Computer -NewName svr1

Install Active Directory Domain Services

Installing Active Directory Domain Services (previously known as promoting the server to a domain controller) is a two-step process. First, you install the services required for ADDS; second, you configure it. As usual, you can do both with either PowerShell or Server Manager.

In the following steps, the following will be configured:

  1. A brand new Active Directory forest containing a single domain called windom.example.com
  2. A single domain controller, running an Active Directory DNS server and LDAP service.
  3. A Kerberos realm called windom.example.com

ADDS Using Server Manager

This may seem like a lengthy process, but it mostly consists of clicking next in the installation and configuration wizards. If you are familiar with PowerShell, you may wish to skip to the end of this section and use the PowerShell commands which achieve the same thing.
  1. Open up Server Manager and select “Add roles and Features”

  2. Click Next

  3. You aren’t doing an RDS installation, so just click Next.

  4. The local server should be selected, so click Next.

  5. Select the “Active Directory Domain Services” role. This will pop up a dialogue asking you to add some features. Go with the defaults, making sure to install the management tools, then click next. Important: Don’t install certificate services at this point, this should be done AFTER domain services has been installed.
  6. You probably don’t want to install any other features right now, so click next.

  7. There is some general info here about ADDS. Click next.

  8. You can click install at the confirmation stage.

  9. Now the service has been installed, you have to configure it. Note that you can export an XML file here which contains the options you have specified so far. This is useful if you want to script an unattended deployment. Click “promote this server to a domain controller” to continue with configuration.

  10. As we’re just creating an AD setup for test and evaluation, click “Add a new forest”. Specify a domain name. If you are testing, make sure you either use a domain that you own or a valid test domain, such as example.com, example.org or example.net. It’s bad practice to use a publicly available DNS domain for Active Directory, so choose a subdomain, such as windom.example.com.

  11. Use the default forest and domain functional levels and make sure that you are installing a DNS server. Specify the DSRM password – this is only needed if there is a serious problem with your domain controller that prevents it from booting.

  12. Don’t worry about DNS delegation, you are using a locally installed DNS server.

  13. Use the default netBIOS domain name.

  14. Use the default filesystem locations.

  15. You can now review what you have done before you start the configuration. Note that you can export a pre-made PowerShell script at this point containing what you have configured.

  16. The pre-requisites check should pass with some warnings about default security settings and DNS. This build is just for evaluation, so click install.

  17. After a couple of minutes the machine will reboot.

At this point you should have a working Active Directory domain controller. You will be able to connect to it using LDAP on port 389, but LDAPS is not available yet.
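
If you want a quick smoke test of the LDAP service, something like the following should work (this assumes an LDAP client such as OpenDJ’s ldapsearch on a machine that can reach the server, plus the hostname, domain and administrator password used earlier in this post):

ldapsearch -h svr1.windom.example.com -p 389 \
 -D "cn=administrator,cn=users,dc=windom,dc=example,dc=com" -w Cangetinwin1 \
 -b "dc=windom,dc=example,dc=com" "(sAMAccountName=administrator)" dn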

ADDS Using PowerShell

Here is some PowerShell used to configure everything in the screenshots above. The first command uses an XML file that was generated from the “add roles” wizard above. The second command was generated by the configuration wizard.

Install-WindowsFeature -ConfigurationFilePath .\ADDS-DeploymentConfigTemplate.xml

Import-Module ADDSDeployment
Install-ADDSForest `
-CreateDnsDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2012R2" `
-DomainName "windom.example.com" `
-DomainNetbiosName "WINDOM" `
-ForestMode "Win2012R2" `
-InstallDns:$true `
-LogPath "C:\Windows\NTDS" `
-NoRebootOnCompletion:$false `
-SysvolPath "C:\Windows\SYSVOL" `
-Force:$true

Installing Active Directory Certificate Services

The following section walks through installing Active Directory Certificate Services (ADCS). This is the enterprise grade PKI infrastructure offering from Microsoft, which you can use to generate certificates for strong authentication, for example when implementing SmartCard authentication.

One nice thing about ADCS is that if you install it on a domain controller, it will automatically issue a certificate for LDAPS and configure the domain controller to use it. The default policy that is enabled on Active Directory prevents changes to any object from occurring over plain text LDAP. Therefore, if you want products like OpenAM and OpenIDM to write anything to Active Directory, for example when using self-service or account provisioning, then you need to be using LDAPS.

ADCS Using Server Manager

  1. Open server manager again and select “add roles and features”. Select “Active Directory Certificate Services”, accept the suggested features and management tools, then click next.

  2. You don’t need any other features right now, so click next.

  3. There is some useful information here.

  4. For now, just install the certificate authority. You can come back and install further services later on if you wish.

  5. Once again, it’s time to install the service. Just like with domain services, you can export an XML file for automating these steps.

  6. Once installed, click “configure active directory certificate services” to begin the configuration process.

  7. For our testing, using the default domain administrator account is fine.

  8. Select certificate authority and click next.

  9. We want our CA to be integrated with Active Directory so that it can automatically issue certificates to services like LDAP. Select Enterprise CA and click next.

  10. Create a new private key.

  11. I’ve increased the default hashing algorithm here from SHA1 to SHA256, as many applications consider SHA1 to be obsolete.

  12. The distinguished name and CN for the CA are set here. These cannot be changed, so you may want to consider what they should be for your testing. For my testing with the ForgeRock stack, the defaults are sufficient.

  13. You can increase the certificate validity period here if you wish.

  14. For testing, stick with the default file system locations.

  15. Now it’s time to apply the configuration. Unlike installing services like ADDS and ADFS, there is no option here to generate a PowerShell script of your options. I have created a PowerShell command with these options in the next section.

  16. The installation should complete successfully. Now reboot your server. On the next boot you’ll find that you can connect to your active directory server using LDAPS on port 636.
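
A quick way to check the LDAPS listener and inspect the certificate it serves (assuming OpenSSL is available on a machine that can reach the server):

openssl s_client -connect svr1.windom.example.com:636 -showcerts < /dev/null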

ADCS Using PowerShell

The following two commands will apply the above configuration. The first command requires an XML file that was generated from the “add roles and features” wizard.

Install-WindowsFeature -ConfigurationFilePath .\ADCS-DeploymentConfigTemplate.xml

Install-AdcsCertificationAuthority `
-AllowAdministratorInteraction `
-CAType EnterpriseRootCA `
-HashAlgorithmName SHA256 `
-KeyLength 2048 `
-ValidityPeriod Years `
-ValidityPeriodUnits 10

Allow all users to log on locally

This is something that you should absolutely not do on a production domain controller.

By default, Windows Server allows two simultaneous sessions from different users without having to enable the full Remote Desktop Services role. That means you can log on to the machine as the administrator and remote desktop in as another user for testing purposes. Windows Server usually runs regular desktop applications just fine (it has to for RDS), so it can be ideal for testing services such as Office 365.

The default policies on a domain controller prevent normal users from logging on. Here, we are going to change that.

Open up Server Manager and navigate to Local Server > Remote Desktop. This will open the “System” control panel applet (sysdm.cpl), where you can configure Remote Desktop. To let ordinary users log on, you will also need to grant them the “Allow log on locally” user right, which on a domain controller is set via Group Policy under User Rights Assignment.

Creating a sample AD structure in PowerShell

I’ve put together a script which generates a predictable list of any number of users and a fairly typical directory layout. The script is largely based on this one by SharePointRyan, only it does a few extra things.
If you wish to use it, copy the script to your machine (you can use remote desktop to copy files), then execute it. You should see output indicating that 200 users have been added (this number is adjustable in the script).
The script evenly distributes the users between three OUs representing world regions. It also creates a fairly common directory layout under the OU “windomcorp” (the script uses netbios name + “corp”).
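
If you would rather roll your own, the following PowerShell sketch shows the general shape of such a script. The region names, user name and password here are purely illustrative, not what the referenced script uses:

# Illustrative sketch: create three region OUs and a test user.
Import-Module ActiveDirectory
$base = (Get-ADDomain).DistinguishedName
foreach ($region in "AMER","EMEA","APAC") {
    New-ADOrganizationalUnit -Name $region -Path $base
}
New-ADUser -Name "testuser1" -SamAccountName "testuser1" `
    -Path "OU=AMER,$base" `
    -AccountPassword (ConvertTo-SecureString "Cangetinwin1" -AsPlainText -Force) `
    -Enabled $true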

Conclusion

That’s it. I will use this configuration as a basis for some future blog posts that I have in the works. Next up is integrating OpenAM with Office 365, then I’ll do a technical deep dive into supporting Integrated Windows Authentication with OpenAM.

This blog post was first published @ http://authntoz.blogspot.no/, included here with permission from the author.

Data Confidentiality with OpenDJ LDAP Directory Services

Directory servers have been used and continue to be used to store and retrieve identity information, including some data that is sensitive and should be protected. OpenDJ LDAP Directory Services, like many directory servers, has an extensive set of features to protect the data, from securing network connections and communications, to authenticating users, to access controls and privileges. However, in the last few years, the way LDAP directory services are deployed and managed has changed significantly as they move to the “Cloud”. Many ForgeRock customers are already deploying OpenDJ servers on Amazon or MS Azure, and the requirements for data confidentiality are increasing, especially as the file system and disk management are no longer under their control. For that reason, we’ve recently introduced a new feature in OpenDJ, giving administrators the ability to encrypt all or part of the directory data before writing to disk.

The OpenDJ Data Confidentiality feature can be enabled on a per database backend basis to encrypt LDAP entries before they are stored to disk. Optionally, indexes can also be protected individually. An administrator may choose to protect all indexes, or only those that contain data that should remain confidential, such as cn (common name) or sn (surname). Additionally, confidentiality can be enabled for the replication logs, in which case it applies to all changes of all database backends. Note that if data confidentiality is enabled on an equality index, that index can no longer be used for ordering, and thus cannot serve initial substring or sorted requests.

Example of command to enable data confidentiality for the userRoot backend:

dsconfig set-backend-prop \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -n -X \
 --backend-name userRoot --set confidentiality-enabled:true
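
Indexes are protected with a similar dsconfig command against the backend index. For example, a sketch enabling confidentiality on the cn index of the userRoot backend (same connection options as above):

dsconfig set-backend-index-prop \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -n -X \
 --backend-name userRoot --index-name cn --set confidentiality-enabled:true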

Data confidentiality is a dynamic feature: it can be enabled or disabled without stopping the server. When it is enabled on a backend, only entries that are subsequently updated or created will be encrypted, so if there is existing data that needs confidentiality, it is better to export and re-import the data. With index data confidentiality, the behaviour is different: when changing the data confidentiality of an index, you must rebuild the index before it can be used with search requests.
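
For example, a sketch of the corresponding online rebuild for a cn index, assuming the same connection options and a base DN of dc=example,dc=com:

rebuild-index \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -X \
 -b "dc=example,dc=com" --index cn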

When enabling data confidentiality, you can select the cipher algorithm and the key length, again on a per database backend basis. The encryption key is generated on the server and securely distributed to all replicated servers through replication of the Admin Backend (“cn=admin data”), so it is never exposed to any administrator. Should a key get compromised, we provide a way to mark it as such and generate a new key. Also, a backup of an encrypted database backend can be restored on any server with the same configuration, as long as the server still has its configuration and its Admin Backend intact. Restoring such a backend backup to a fresh new server requires that the server is configured for replication first.

The Data Confidentiality feature can be tested with the OpenDJ nightly builds. It is also available to ForgeRock customers as part of our latest update of the ForgeRock Identity Platform.



node-openam-agent: Your App’s New Friend

This blog is posted on behalf of Zoltan Tarcsay.


As you may know, the purpose of OpenAM Policy Agents is to enforce authentication and authorization for web resources. But while OpenAM itself has become ever more feature-rich and easy to use over the years, the Policy Agents have stayed roughly the same. The ways that web resources are built and accessed today demand new enforcement strategies. The openam-agent module for Node.js takes a new approach to addressing these concerns.

The Old Ways

It sometimes feels like Policy Agents are remnants of an era when all that people had for web content was static (or server generated) HTML pages with fixed URLs, and possibly some SOAP web services.

There are two things that a web policy agent can do (OK, 3):

  • Enforce the validity of a user’s SSO session ID (which is sent in a Cookie header)
  • Enforce authorization for requested URLs served by the web container.
  • In addition, Java agents allow you to use JAAS and the OpenAM client SDK in your Java application.

If you’ve ever tried to use the OpenAM client SDK for Java, you will probably agree that it’s somewhat complicated and time consuming. Also, it doesn’t give you much control over the agent itself (think of caching, event handling, communication with OpenAM). And if you ever tried to use an OpenAM client SDK with anything other than Java, you probably found that there isn’t one (OK, there’s one for C).

So for those whose websites are powered by JavaScript, Ruby, Python, PHP or anything else, there are two options:

  • Having a web agent on a web proxy server which enforces URL policies
  • Integrating with OpenAM directly by writing custom code (i.e. a policy agent)

Good news: it turns out that writing a policy agent is not so difficult. It has to do three things:

  • Intercept requests when some resource is being accessed
  • Get an access control decision based on the request (from OpenAM)
  • Throw an error or let the request pass

Now that we know that agents are not that big of a deal, it seems a little unreasonable that the existing ones are so opinionated about how people should use them. I mean, they can’t even be extended with custom functionality, unless you add some C code and recompile them…

Your New Friend

What you are about to see is a new approach to how agents should behave, most importantly, from the developer’s point of view. This groundbreaking new idea is that, instead of being an arch enemy, the policy agent should be the developer’s friend.

As an experiment, a JavaScript policy agent for Node.js was born. It is meant to be a developer-friendly, hackable, light-weight, transparent utility that acts as your app’s spirit guide to OpenAM. Everything about it is extensible and all of its functionality is exposed to your Node.js code through public APIs. It also comes with some handy features like OAuth2 token validation or pluggable backends for caching session data.

It consists of the following parts:

  • OpenAMClient
    • This is a class that knows how to talk to OpenAM
  • PolicyAgent
    • Talks to OpenAM through a pluggable OpenAMClient to get decisions, identity data, etc.
    • Has its own identity and session
    • Receives notifications from OpenAM (e.g. about sessions)
    • Has a pluggable cache for storing stuff (e.g. identity information)
    • Can intercept requests and run it through pluggable enforcement strategies (i.e. Shields)
    • You can have as many as you want (more on this later)
  • Shield
    • A particular enforcement strategy (e.g. checking an OAuth2 access_token)
    • Gets a request, runs a check, then fails or succeeds
    • Can be used with any agent within the app
  • Cache
    • An interface to some backend where the agent can store its session data

Getting Started

OK, let’s look at some code.

First, create a new Node.js project and install the dependencies:

mkdir my-app && cd my-app
npm init -y
npm install --save express openam-agent
touch index.js

Next, let’s add some code to index.js:

var express = require('express'),
    openamAgent = require('openam-agent'),
    app = express(),
    agent = openamAgent({openamUrl: 'https://openam.example.com/openam'});

app.get('/', agent.shield(openamAgent.cookieShield({getProfiles: true})), function (req, res) {
    res.send('Hello, ' + req.session.userName);
});

app.listen(1337);

Done: you have a web application with a single route that is protected by a cookie shield (it checks your session cookie). The cookie shield also puts the user’s profile data into the req object, so you can use it in your own middleware.

Express

It’s important to note here that openam-agent currently only works with the Express framework, but the plan is to make it work with just regular Node.js requests and responses as well.

In the example above, the variable app will be your Express application. An Express app is a collection of routes (URL paths) and middleware (functions that handle requests sent to the routes). One route can have multiple middleware, i.e. a request can be sent through a chain of middleware functions before a response is sent.

The agent fits beautifully in this architecture: the agent’s agent.shield(someShield) function returns a middleware function for Express to handle the request. Which means that you can use any enforcement strategy with any agent with any route, as you see fit.

Policies

You can do things like this:

var policyShieldFoo = openamAgent.policyShield({application: 'foo'}),
    policyShieldBar = openamAgent.policyShield({application: 'bar'});

app.get('/my/awesome/api/foo', agent.shield(policyShieldFoo));
app.get('/my/awesome/api/foo/oof', function (req, res) {
    // this is a sub-resource, so it's protected by the foo shield
});

app.get('/my/awesome/api/bar', agent.shield(policyShieldBar));
app.get('/my/awesome/api/bar', function (req, res) {
    // this middleware is called after the bar shield on the same path, so it's protected
});

In this case you have two Shields, each using a different application (or policy set) in OpenAM; you can then use one for one route, and the other for the other route. Whether a policy shield applies to an incoming request is determined by the path and the order in which you mounted your middleware functions.

Note that the agent needs special privileges for getting policy decisions from OpenAM, so it will need some credentials (typically an agent profile) in OpenAM:

var agent = openamAgent({
    openamUrl: 'https://openam.example.com/openam',
    username: 'my-agent',
    password: 'secret12'
});

When the agent tries to get a policy decision for the first time, it will create a session in OpenAM for itself.

Note that a policy decision needs a subject, so the request will need to contain a valid session ID.

OAuth2

This is how you enforce a valid OAuth2 token:

app.use('/my/mobile/content', agent.shield(openamAgent.oauth2Shield()), function (req, res) {
    // the OAuth2 token info is in req.session.data
    // if you wanted to check the scopes against something, you could write a shield to do it
});

Notifications and CDSSO

There are cases when the agent needs to be able to accept data from OpenAM. One example is notifications (e.g. when a user logs out, OpenAM can notify the agents so they can clear that session from their cache). The node-openam-agent lets you mount a notification route to your app as such:

var agent = openamAgent({notificationsEnabled: true});
app.use(agent.notifications('/some/path/to/the/notifications/endpoint'));

CDSSO is also possible (although it becomes tricky when your original request is anything other than GET, because of the redirects):

var agent = openamAgent({notificationsEnabled: true});
app.use(agent.cdsso('/some/path/to/the/cdsso/client/endpoint'));

Note: OpenAM needs to know that you want to use the cdcservlet after you log in (this servlet creates a SAML1.1 assertion containing the user’s session ID, which is then POSTed to the agent through the client’s browser). For this, you will need to create a web agent profile and enable CDSSO.

Extensions

The current features add some extra functionality to the classic agent behavior, but there is so much more that can be done, some of it will be very specific to each application and how people use OpenAM.

Extensibility is at the heart of this agent, and it is meant to be very simple. Here’s an example of a custom Shield.

First, extend the Shield class:

var util = require('util'),
    Shield = require('openam-agent').Shield;

/**
 * A custom Shield that lets requests through only if they carry a unicorn.
 * @constructor
 */
function UnicornShield(options) {
    this.options = options;
}

util.inherits(UnicornShield, Shield);

UnicornShield.prototype.evaluate = function (req, success, fail) {
    // check if this request has a unicorn in it
    // (we could also use this.agent to talk to OpenAM)
    if (this.options.foo && req.headers.unicorn) {
        success();
    } else {
        fail();
    }
};

And then use it in your app:

app.use(agent.shield(new UnicornShield({foo: true})));

There are all sorts of docs (API and otherwise) in the wiki if you’re interested in extending the agent.

More stuff

There is much more to show and tell about this agent, especially when it comes to specific use cases, but it doesn’t all fit in one blog post. Stay tuned for more stuff!

Contributions

node-openam-agent is a community driven open source project on GitHub, and it is not owned or sponsored by ForgeRock. The software comes with an MIT license and is free to use without any restrictions but comes without any warranty or official support. Contributions are most welcome, please read the wiki, open issues and feel free to submit pull requests.

Using Push Notifications for Passwordless Authentication and Easy MFA

This blog post by the OpenAM product manager was first published @ thefatblokesings.blogspot.com, included here with permission.

There is often a trade-off between the convenience of an authentication system and the strength of security around it. Oftentimes, the stronger the security, the more tedious it can be for the end user. But now that (almost) everyone has a smartphone, can we somehow use this magical device as an authenticator?

The mid-year release of the ForgeRock Identity Platform introduced some exciting new Access Management technology, namely Push Authentication. When a user wants to login, they simply identify themselves (e.g. username or email) and the system sends them a Push Notification message asking if they want to authorize the login. This message is fielded by the ForgeRock Authenticator App (iPhone or Android) and the user can use swipe or TouchId to agree to the authentication attempt, or Cancel to deny it. Cool stuff, let’s check it out…

We’ll look at:

  • The User experience of logging in using Push Auth
  • The Architecture underpinning this
  • The Admin experience of setting this up
  • Customizing the experience

User Experience

Before you can use Push you’ll need to register your phone to your account, so you’ll typically log in the traditional way…

…before being presented with a QR code…

Using the ForgeRock Authenticator app on your phone you can scan this to create an account for that IDP…

Now when the user wants to log in, they can simply enter their username…

…and their phone buzzes and displays something like this…

The user decides if this is a login attempt by them and, if so, uses TouchId (or swipe if TouchId is not present or enabled) to get logged in.

The Architecture

The players in this dance are:
  1. The user on their primary device (say laptop, but could be phone too, see later);
  2. The ForgeRock AM server;
  3. The Push Service in the Cloud;
  4. The phone.

How to set it up (The administrator’s experience)

To set this up we’ll need:
  • ForgeRock Access Management (AM) version 13.5;
  • We’ll create 2 new authentication module instances
    • ForgeRock Authenticator (Push) Registration – used to link phone to account;
    • ForgeRock Authenticator (Push) – used when logging in;
  • We’ll create a new realm-based Push Notification Service – this is how AM talks to the Cloud push service;

Authentication Modules and Chains

First, in the AM Admin Console, create the 2 new authentication modules (let’s call them Push-Reg and Push-Auth) and use the default values…

They will look something like this…

Now create 2 Authentication Chains, also called Push-Auth and Push-Reg.

For Push-Reg we’ll use a simple Datastore (username/password) module to identify the user during registration, followed by the Push-Reg authentication module…

…and to keep things simple, let’s just use the Push-Auth module in the Push-Auth chain…

So now we have 2 new chains…

At this point you can test these chains out by visiting
<deployment-url>/XUI/#login/&service=Push-Auth
where Push-Auth is the chain name.
But this won’t work yet because we need to tell AM how to send Push Notifications by creating the Push Notification Service.

Push Notification Service

The Admin Console has changed a bit in 13.5 in the Services area and is now much easier to configure. First, create a New Service of type Push Notification Service…

Once created, we want to configure this. This is slightly tricky, but not too hard for people who have read this far ;-)

At the time of writing, ForgeRock use the AWS Simple Notification Service for sending Push Notifications to Android and Apple phones, and ForgeRock have provided a convenient way for customers to generate credentials to configure this Service.

Go to Backstage, log in and navigate to Projects. If you haven’t registered a Project before, create one, and also an Environment within the Project. Then simply press the big button marked “Set Up Push Auth Credentials”.

This will generate some credentials which you can use to populate the Push Notification Service on your AM deployment.

Providing your phone can reach your AM server, your users should now be able to register and login using Push Notifications.

Customizing the IDPs

Say you now want to customize the IDP to have your corporate logo and color scheme.

Return to the Push-Reg Auth Module and you’ll see that you can configure the Issuer Name, background color and Logo. In the Push-Auth Module you can tailor the message that is presented to the user. This all means that on your phone you can deliver an experience like this…

Summary

This was a simple “getting you going” blog entry.

In internet-facing deployments you may want to use more of the capability of AM’s Authentication Chains, either to use Push as a super-easy 2FA offering or, if you want to deliver a passwordless experience, to put more intelligence around detecting the identity of the user attempting to log in, preventing unsolicited Push messages being sent to a user.

Introducing our introductory video series – What is Identity

Whilst many of the visitors to this site are well versed in the finer details of the identity and access management space, there are still a fair few who are coming up to speed as well. Furthermore, we know that sometimes one needs to explain at a higher level what it is we do on a day-to-day basis, whether it’s to explain to a colleague why identity management is important, or to explain to one’s mother what it is we do for a living.

To facilitate the further education around identity and access management related topics, we’ve put together a comprehensive video series (19 videos currently) that provides high-level overviews of a variety of identity-related topics.

Some examples include:

What is Identity Management?

What is Authentication?

and some more in-depth topics like…

Machine to Machine Identity

You can see the full series (all 19 videos) here.

Share and enjoy!

Fun with OpenAM13 Authz Policies over REST – the ‘jwt’ parameter of the ‘Subject’

Summary

I’ve previously blogged about the ‘claims’ and ‘ssoToken’ parameters of the ‘subject’ item used in the REST call to evaluate a policy for a resource.

Now we’re going to look at the ‘jwt’ parameter.
For reference, the REST call we’ll be using is documented in the developer guide.

The ‘JWT’ Parameter

The documentation describes the ‘jwt’ parameter as:

The value is a JWT string

What does that mean?
Firstly, it’s worth understanding the JWT specification: RFC 7519.
To summarise, a JWT is a URL-safe encoded, signed (and possibly encrypted) representation of a ‘JWT Claims Set’. The JWT specification defines the ‘JWT Claims Set’ as:

A JSON object that contains the claims conveyed by the JWT.

Where ‘claims’ are name/value pairs about the ‘subject’ of the JWT. Typically a ‘subject’ might be an identity representing a person, and the ‘claims’ might be attributes about that person, such as their name, email address and phone number.

So a JWT is a generic way of representing a subject’s claims.
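
If you want to peek inside a JWT from the command line, the claims are the middle, base64url-encoded part of the token. A quick sketch, assuming the token is in an environment variable called ID_TOKEN and that python is available to handle the base64url padding:

echo "$ID_TOKEN" | cut -d '.' -f 2 | python -c 'import base64,sys; p=sys.stdin.read().strip(); print(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4)).decode())'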

OpenID Connect (OIDC)

OIDC makes use of the JWT specification by stating that the id_token must be a JWT. It also defines a set of claims that must be present within the JWT when generated by an OpenID Provider. See: http://openid.net/specs/openid-connect-core-1_0.html#IDToken

The specification also says that additional claims may be present in the token.  Just hang on to that thought for the moment…we’ll come back to it.

OpenAM OIDC configuration

For the purposes of investigating the ‘jwt’ parameter, let’s configure OpenAM to generate OIDC id_tokens. I’m not going to cover that here, but we’ll assume you’ve followed the wizard to set up an OIDC provider for the realm. We’ll also assume you’ve created/updated the OAuth2/OIDC Client Agent profile to allow the ‘profile’ and ‘openid’ scopes. I’m also going to use an ‘invoices’ scope, so the config must allow me to request that too.

Now I can issue:
 curl --request POST --user "apiclient:password" --data "grant_type=password&username=bob&password=password&scope=invoices openid profile" http://as.uma.com:8080/openam/oauth2/access_token?realm=ScopeAz

Note the request for the openid and profile scopes in order to ensure I get the OpenID Connect response.

And I should get something similar to the following:

{
  "access_token":"0d0cbd2a-c99c-478a-84c9-78463ec16ad4",
  "scope":"invoices openid profile",
  "id_token":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ.MS6jnMoeQ19y1DQky4UdD3Mqp28T0JYigNQ0d0tdm04HjicQb4ha818qdaErSxuKyXODaTmtqkGbBnELyrckkl7m2aJki9akbJ5vXVox44eaRMmQjdm4EcC9vmdNZSVORKi1gK6uNGscarBBmFOjvJWBBBPhdeOPKApV0lDIzX7xP8JoAtxCr8cnNAngmle6MyTnVQvhFGWIFjmEyumD6Bsh3TZz8Fjkw6xqOyYSwfCaOrG8BxsH4BQTCp9FgsEjI52dZd7J0otKLIk0EVmZIkI4-hgRIcrM1Rfiz9LMHvjAWY97JBMcGBciS8fLHjWWiLDqMHEE0Wn5haYkMSsHYg",
  "token_type":"Bearer",
  "expires_in":3599
}

Note the lengthy id_token field. This is the OIDC JWT, made up according to the specification. Also note that, by default, OpenAM will sign this JWT with the 1024-bit ‘test’ certificate using the RS256 algorithm. I’ve updated my instance to use a new 2048-bit certificate called ‘test1’, so my response will be longer than the default. I’ve used a 2048-bit certificate because I want to use this tool to inspect the JWT and its signature: http://kjur.github.io/jsjws/tool_jwt.html. This tool only seems to support 2048-bit certificates, which is probably due to the JWS specification. (I could have used jwt.io to inspect the JWT, but this does not support verification of RSA based signatures.)

So, in the JWT tool linked above you can paste the full value of the id_token field into ‘Step 3’, then click the ‘Just Decode JWT’ button. You should see the decoded JWT claims in the ‘Payload’ box:

You can also see that the header field shows how the signature was generated, in order to allow clients to verify this signature. To get this tool to verify the signature, you need the PEM formatted version of the public key of the signing certificate, i.e. ‘test1’ in my case. I’ve got this from the KeyStore Explorer tool, and now I can paste it into the ‘Step 4’ box, using the ‘X.509 certificate for RSA’ option. Now I can click ‘Verify It’:

The tool tells me the signature is valid, and also decodes the token as before. If I were to change the content of the message, or the signature, of the JWT then the tool would tell me that the signature is not valid. For example, changing one character of the message would return this:

Note that the message box says that the signature is *Invalid*, as well as the Payload now being incorrect.

The ‘jwt’ Parameter

Now that we’ve understood that the id_token field of the OIDC response is a JWT, we can use it as the ‘jwt’ parameter of the ‘subject’ field in the policy evaluation call.

For example, a call like this:
 curl --request POST --header "iPlanetDirectoryPro: AQIC5wM2LY4Sfcx-sATGr4BojcF5viQOrP-1IeLDz2Un8VM.*AAJTSQACMDEAAlNLABQtMjUwMzE4OTQxMDA1NDk1MTAyNwACUzEAAA..*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ.MS6jnMoeQ19y1DQky4UdD3Mqp28T0JYigNQ0d0tdm04HjicQb4ha818qdaErSxuKyXODaTmtqkGbBnELyrckkl7m2aJki9akbJ5vXVox44eaRMmQjdm4EcC9vmdNZSVORKi1gK6uNGscarBBmFOjvJWBBBPhdeOPKApV0lDIzX7xP8JoAtxCr8cnNAngmle6MyTnVQvhFGWIFjmEyumD6Bsh3TZz8Fjkw6xqOyYSwfCaOrG8BxsH4BQTCp9FgsEjI52dZd7J0otKLIk0EVmZIkI4-hgRIcrM1Rfiz9LMHvjAWY97JBMcGBciS8fLHjWWiLDqMHEE0Wn5haYkMSsHYg"}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

might return:

[
  {
    "ttl":9223372036854775807,
    "advices":{},
    "resource":"invoices",
    "actions":{"permit":true},
    "attributes":{"hello":["world"]}
  }
]

This assumes the following policy definition:

Note that in this case I am using the ‘iss’ claim within the token in order to ensure I trust the issuer of the token when evaluating the policy condition.

As mentioned in previous articles, it is imperative that the id_token claims include a ‘sub’ field. Fortunately, the OIDC specification makes this mandatory, so using an OIDC token here will work just fine.

It's also worth noting that OpenAM does *not* verify the signature of the id_token submitted in 'jwt' field.  This means that you could shorten the 'curl' call above to remove the signature component of the 'jwt'. For example, this works just the same as above:
 curl --request POST --header "iPlanetDirectoryPro: AQIC5wM2LY4Sfcx-sATGr4BojcF5viQOrP-1IeLDz2Un8VM.*AAJTSQACMDEAAlNLABQtMjUwMzE4OTQxMDA1NDk1MTAyNwACUzEAAA..*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ."}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

Note that the ‘jwt’ string needs to have two dots ‘.’ in it to conform to the JWT specification. The content following the second dot is the signature, which has been removed entirely in this second curl example, i.e. this is an unsigned JWT, which is completely valid.
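
In shell terms, producing that unsigned form from a full token is just a matter of dropping everything after the last dot (assuming the signed token is in $ID_TOKEN):

UNSIGNED_JWT="${ID_TOKEN%.*}."

Note the trailing dot, which preserves the two-dot structure the specification requires.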

But, just to prove that OpenAM does *not* validate signed JWTs, you could attempt a curl call that includes garbage for the signature.  For example:

curl --request POST --header "iPlanetDirectoryPro: AQIC5wM2LY4Sfcx-sATGr4BojcF5viQOrP-1IeLDz2Un8VM.*AAJTSQACMDEAAlNLABQtMjUwMzE4OTQxMDA1NDk1MTAyNwACUzEAAA..*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlwIjogIkpXVCIsICJraWQiOiAidWFtVkdtRktmejFaVGliVjU2dXlsT2dMOVEwPSIsICJhbGciOiAiUlMyNTYiIH0.eyAiYXRfaGFzaCI6ICJVbmFHMk0ydU5kS1JZMk5UOGlqcFRRIiwgInN1YiI6ICJib2IiLCAiaXNzIjogImh0dHA6Ly9hcy51bWEuY29tOjgwODAvb3BlbmFtL29hdXRoMi9TY29wZUF6IiwgInRva2VuTmFtZSI6ICJpZF90b2tlbiIsICJhdWQiOiBbICJhcGljbGllbnQiIF0sICJvcmcuZm9yZ2Vyb2NrLm9wZW5pZGNvbm5lY3Qub3BzIjogIjhjOWNhNTU3LTk0OTgtNGU2Yy04ZjZmLWY2ZjYwZjNlOWM4NyIsICJhenAiOiAiYXBpY2xpZW50IiwgImF1dGhfdGltZSI6IDE0NjkwMjc1MTMsICJyZWFsbSI6ICIvU2NvcGVBeiIsICJleHAiOiAxNDY5MDMxMTEzLCAidG9rZW5UeXBlIjogIkpXVFRva2VuIiwgImlhdCI6IDE0NjkwMjc1MTMgfQ.garbage!!"}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

…would still successfully be authorised.
It’s also worth noting that the id_token claims of an OIDC token include an ‘exp’ field signifying the ‘expiry time’ of the id_token. OpenAM does not evaluate this field in this call.

Signature Verification

You might be wondering if it is possible to verify the signature and other aspects, such as the ‘exp’ field. Yes, it is! With a little bit of clever scripting, of course!

The first thing is that we need to ensure the jwt token can be parsed by a script. Unfortunately, simply passing it in the jwt parameter does not permit this. But we can *also* pass the jwt token in the ‘environment’ field of the policy decision request. I’ll shorten the jwt tokens in the following curl command to make it easier to read, but you should supply the full signed jwt in the ‘environment’ field:

curl --request POST --header "iPlanetDirectoryPro: "AQIC....*" --header "Content-Type: application/json" --data '{"resources":["invoices"],"application":"api","subject":{"jwt":"eyAidHlw...MyNTYiIH0.eyAiYXRfa...MTMgfQ.MS6jn...sHYg"},"environment":{"jwt":["eyAidHlw...MyNTYiIH0.eyAiYXRfa...MTMgfQ.MS6jn...sHYg"]}}' http://as.uma.com:8080/openam/json/ScopeAz/policies?_action=evaluate

Note in this that the ‘environment’ field now includes a ‘jwt’ field whose data can be utilised in a script.  And what would such a policy condition script look like?

Well, head over to https://github.com/smof/openAM_scripts and take a look at the ‘ExternalJWTVerifier.groovy’ script. The associated blog post from my colleague, Simon Moffatt, sets this script in context: http://identityrelationshipmanagement.blogspot.co.uk/2016/05/federated-authorization-using-3rd-party.html. It will validate either an HMAC-signed JWT, if you enter the appropriate shared secret, or an RS256-signed OIDC JWT, if you specify the jwk_uri of the OpenID Connect provider.
And now that the claims are accessible to the scripting engine, you can apply pretty much any form of logic to them to validate the token, including checking the ‘exp’ field.
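If you also want to sanity-check an RS256 signature outside OpenAM altogether, here is a rough openssl sketch. It assumes you have already exported the provider’s signing key from its jwk_uri into a local pubkey.pem; JWT_SIGNED is again a placeholder:

# The signature covers the "header.payload" portion of the token.
SIGNING_INPUT="${JWT_SIGNED%.*}"
# base64url-decode the signature segment into a binary file.
SIG=$(echo "$JWT_SIGNED" | cut -d '.' -f 3 | tr '_-' '/+')
while [ $(( ${#SIG} % 4 )) -ne 0 ]; do SIG="${SIG}="; done
echo "$SIG" | base64 -d > sig.bin
# Verify the RSA-SHA256 signature against the provider's public key.
printf '%s' "$SIGNING_INPUT" | openssl dgst -sha256 -verify pubkey.pem -signature sig.bin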

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.

A Beginner’s Guide to OpenIDM – Part 3 – Connectors

Overview

Previously in this series we have looked at a general overview of OpenIDM and taken a detailed look at objects. In this blog I want to explore connectors.
Connectors are the integration glue that enables you to bring data into OpenIDM from all sorts of different systems and data stores. We will look at the different types of connectors available in OpenIDM and how they work, and end with a practical example of configuring a connector.

Connectors

Architecture

Every identity system that I have ever worked with has a concept similar to a connector. Usually connectors comprise Java libraries or scripts that perform the actual push and pull of data to and from a target data source.
Standard connector operations in OpenIDM include:
  • Create: Create a new object (usually an account) in a target data store.
  • Update: Update an existing object, e.g. if a user changes their email address then we may want to update their record in a target data store.
  • Get: Retrieve a specific instance of an object (e.g. an account) from a target data store.
  • Search: Query the collection of objects and return a specific set of results.
There are a number of other operations which we will explore in later blogs.
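As a taster of where this ends up, once a connector is configured these operations surface over OpenIDM’s /system REST endpoint. A hedged sketch of a create, where the connector name, the attributes and the default admin credentials are all assumptions for illustration:

# Hypothetical example: create an account through a connector named "mycsv".
curl --request POST \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --data '{"__NAME__": "jbloggs", "email": "jbloggs@example.com"}' \
     "http://localhost:8080/openidm/system/mycsv/__ACCOUNT__?_action=create"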
At a high level, connectors comprise:
  • Provisioner configuration: configuration data defining the connector, usually containing:
    • Reference to the underlying Java class that implements the connector. This should be populated automatically when you choose your connector type. You can explore the connector source code if you like but for the most part you shouldn’t need to be concerned with the underlying classes.
    • All of the credentials and configuration needed to access the data store. You need to configure this.
    • The data store schema for the object or account. You need to configure this.
Connectors are configured through the user interface, but like all OpenIDM configuration they are also stored (and can be edited) locally on the file system. Connector configuration files (like most OpenIDM configuration files) can be found in openidm/conf and follow the naming convention:
provisioner.openicf-something.json (where something is whatever you have named your connector).
Note that connector configuration files will not appear until you have configured a connector using the UI; we will revisit this later.
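Incidentally, the same provisioner configuration is also exposed over OpenIDM’s config REST endpoint, so you can inspect it without touching the file system. A hedged sketch, substituting your own connector name and assuming the default admin credentials:

# Read a connector's provisioner configuration over REST.
curl --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     "http://localhost:8080/openidm/config/provisioner.openicf/something"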
The logical flow in OpenIDM for utilising connectors is as follows:
  • The data synchronization engine outputs data and a requested operation, e.g. create, delete, update or one of several others.
  • The provisioner engine invokes the connector class with the requested operation and the data from the synchronization engine.
  • The connector class uses the configuration parameters from the provisioner file, together with the data passed in the invocation, to do the actual work of pushing to or pulling from the target.

Connector Example

So now that we have a basic understanding of how connectors work, let’s try configuring one.
I’m going to use the CSV connector for this example, and we are going to read users from a Comma Separated Values (CSV) file. Ultimately we will read this data into the managed user object using a mapping; for this blog, though, we will just focus on configuring the connector.
Feel free to use any CSV file, but if you want to follow along with the example then download the CSV here that I created using Mockaroo.
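For reference, the first few rows of such a file might look like the illustrative sample below; the exact columns of your download may differ, but make sure it has an id and a username column, as we will rely on those shortly:

id,username,firstname,lastname,email
1,tgardner0,Toby,Gardner,tgardner0@nsw.gov.au
2,jsmith1,Jane,Smith,jsmith1@example.com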



Copy the file to somewhere on the same file system that OpenIDM is installed on; it doesn’t matter where, so long as OpenIDM can access it. I’m going to use /home/data/users.csv.
Then log in to OpenIDM as administrator and navigate to Configure, then Connectors.


 

Press “New Connector”



You will see the UI for configuring a new connector:



Give your new connector a name (I have used UserLoadCSV above; no spaces are permitted) and look at the connector types. These are all the different systems you can integrate with.
Note that with further configuration more connectors are available, and using the scripted connector you can integrate with pretty much any system that offers a suitable API.
 
Select the “CSV File Connector”. Now we need to complete the “Base Connector Details”, starting with the path to the CSV file we actually want to process.


Now let’s take a look at the next few fields:



They are populated by default, but we need to configure them to match our CSV file.
Looking at the data:
  • Header UID = id
  • Header Name = username
So in this instance we just need to change the Header UID to match.



You will note there are a few more fields:
  • Header Password: We will not be processing any passwords from this CSV. That might be something you want to do, although typically you will have OpenIDM generate passwords for you (more on that later).
  • Quote Character: If you have an unusually formatted CSV, you can change the character that surrounds your data values. This is used by OpenIDM to successfully parse the CSV file.
  • Field Delimiter: Similarly, if you are using a delimiter (the character that splits up data entries) that is anything other than a “,”, you can tell OpenIDM here.
  • Newline String: As above, the string that separates one row from the next.
  • Sync Retention Count: The number of historical copies of the CSV file the connector retains in order to detect changes for synchronization.
Note that these parameters are all unique to the CSV connector. If you were to use another connector, say the database connector, you would have a different set of parameters that must be configured for OpenIDM to successfully connect to the database and query the table.
Ok, with all that done, let’s add the connector:



All being well you should get a positive confirmation message. Congratulations, you have added a connector! All very well but what can we do with it?
Click on the menu option (the vertical dots):


Then Data (__ACCOUNT__)



If you have done everything correctly you should see the data from the CSV in OpenIDM!



It is important to understand that at this point the data has not been loaded into OpenIDM; OpenIDM is simply providing a live view of the data in the CSV. This works for any connector, and we will revisit it at the end of this blog.
Before that, there are a few things I want to cover. Go back to the Connector screen; you should have a new connector:



Select it, and select “Object Types”:



Then edit “__ACCOUNT__”.




What you should be able to see is a list of all of the attributes in the CSV file. OpenIDM has automatically parsed the CSV and built a schema for interpreting the data. You may also spot “__NAME__”. This is a special attribute that maps to the Header Name attribute we configured earlier.

Again, the concept of Object Type is universal to all of our connectors, and sometimes additional configuration of the Object Type may be required in order to successfully process data.


Finally, let’s take a look at Sync:

On this page you can configure LiveSync. LiveSync is a special case of synchronization. Ordinarily synchronization is performed through the mappings interface (or automatically on a schedule).

However, if the connector and target system support it, then LiveSync can be used. With LiveSync, changes are picked up as they occur in the target. Ordinarily with a normal synchronization (often called reconciliation), all accounts in the target must be examined against the source for changes. With LiveSync, only accounts in the target that have changed will be processed. For this to work, the target must support some form of change log that OpenIDM can read. In systems with large numbers of accounts this is a much more efficient way of keeping data in sync.
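Although LiveSync is normally driven from a schedule, it can also be triggered manually over REST, which is handy for testing. A hedged sketch against our UserLoadCSV connector, assuming the default admin credentials:

# Manually trigger a LiveSync pass for the connector's account object type.
curl --request POST \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     "http://localhost:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_action=liveSync"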

Connectors And The REST API

As before, we can make use of the REST API here to query our new connector. We can actually use the API to read from or write to the underlying CSV data store. Just take a moment to think about what that means. In an enterprise implementation you might have hundreds of different data stores of every type. Once you have configured connectors in OpenIDM, you can query those data stores using a single, consistent, centralised RESTful API via OpenIDM. That really is a very powerful tool.

Let’s take a look at this now. Navigate back to the data accounts page from earlier:




Take a look at the URL:

As before, this corresponds to our REST API. Please fire up Postman again.

Enter the following URL

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryId=query-all-ids
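If you prefer the command line to Postman, the equivalent curl would look like this (OpenIDM’s standard authentication headers, with the default admin credentials assumed):

# Query all account ids through the UserLoadCSV connector.
curl --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     "http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryId=query-all-ids"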

You should see the following result



We have just queried the CSV file using the REST API, and retrieved the list of usernames.
Let’s try retrieving the data for a specific user:

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryFilter=/email eq "tgardner0@nsw.gov.au"


Here we are searching for the user with the email address tgardner0@nsw.gov.au.
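Note that the spaces and quotes in the filter must be URL-encoded on the wire. Postman takes care of this for you; with curl you can let --data-urlencode do the work (default admin credentials assumed again):

# --get turns the urlencoded data into a query string on the URL.
curl --get \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --data-urlencode '_queryFilter=/email eq "tgardner0@nsw.gov.au"' \
     "http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__"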

 



Again, this is just a small sample of what the REST API is capable of; you can learn much more here:
https://forgerock.org/openidm/doc/bootstrap/integrators-guide/index.html#appendix-rest
And more on how queries work here:

https://forgerock.org/openidm/doc/bootstrap/integrators-guide/#constructing-queries

 

Come back next time for a look at mappings, where we will join the managed user and the connector together to actually create some users in the system.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.