Most database technologies (cloud DB-as-a-Service offerings, traditional DBs, LDAP services, etc.) typically run in a single primary mode, with multiple secondary nodes to ensure high availability. The main rationale is that it is the only surefire way to ensure data consistency and integrity are maintained.
If an all-active topology were enabled, replication delay (the amount of time it takes for a data WRITE operation on one node to propagate to a peer) could cause the following to occur:
The client executes a WRITE operation on node 1.
The client then executes a READ operation soon after the WRITE.
In an all-active topology this READ operation may target node 2, but because the data has not yet been replicated (due to load, network latency, etc.) you get a data miss, and application-level chaos ensues.
Another scenario is lock counters:
User A is an avid Manchester UTD football fan (alas, more of a curse than a blessing nowadays!) and is keen to watch the game. In haste, they try to log in but supply an incorrect password. The lock counter on User A's profile on node 1 increments, moving from 0 to 1.
User A, desperate to catch the game, then quickly tries to log in again, but again supplies an incorrect password.
This time, if replication is not quick enough, node 2 may be targeted, and its lock counter moves from 0 to 1 instead of from 1 to 2. Fail!!!
These scenarios and others like them mandate a single primary topology, which for high-load environments results in high cost: the primary needs to be vertically scaled to handle all of the load (plus headroom), and compute resource is wasted as the secondaries (same spec as the primary) sit idle, costing $$$ for no gain.
Tada — Roll up Affinity Based Load Balancing
ForgeRock Directory Services (DS) is the high-performance, high-scale, LDAP-based persistence-layer product within the ForgeRock Identity Platform. Any DS instance can take both WRITE and READ operations at scale; for many customers, enabling an all-active infrastructure without Affinity Based Load Balancing is viable.
However, for high-scale customers and/or those who need to guarantee absolute consistency of data, Affinity Based Load Balancing, a technology unique to the ForgeRock Identity Platform, is the key to enabling an all-active persistence layer. Nice!
Affinity what now?
It is a load balancing algorithm built into the DS SDK which is part of both the ForgeRock Directory Proxy product and the ForgeRock Access Management (AM) product.
It works like this:
For each inbound LDAP request which contains a distinguished name (DN) like uid=Darinder,ou=People,o=ForgeRock, the SDK takes a hash of the DN and allocates the result to a specific DS instance. In the case of AM, all servers in the pool compute the same hash and thus send all READ/MODIFY/DELETE requests for uid=Darinder to, say, DS Node 1 (the origin node).
A request with a different DN (e.g. uid=Ronaldo,ou=People,o=ForgeRock) is again hashed but may be sent to DS Node 2; all READ/MODIFY/DELETE operations for uid=Ronaldo then target that origin node, and so on. This means all READ/MODIFY/DELETE operations for a specific DN always target the same DS instance, eliminating issues caused by replication delay and solving the scenarios (and others) described in the problem statement above. Sweet!
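Conceptually, the routing behaves like the toy sketch below. This is purely illustrative (the real DS SDK has its own hashing, distribution and failover logic):

import java.util.List;

// Toy illustration of affinity routing: the same DN always hashes to the
// same index, so every READ/MODIFY/DELETE for that entry targets one node.
public class AffinityRouterSketch {

    private final List<String> dsNodes;

    public AffinityRouterSketch(List<String> dsNodes) {
        this.dsNodes = dsNodes;
    }

    public String route(String dn) {
        // Normalise the DN, hash it, and map the hash onto the node pool;
        // floorMod keeps the index positive for negative hash codes
        int index = Math.floorMod(dn.toLowerCase().hashCode(), dsNodes.size());
        return dsNodes.get(index);
    }
}

With this scheme, uid=Darinder always routes to the same node, while uid=Ronaldo may land elsewhere, exactly as described above.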
The following topology depicts this architecture:
What else does this trick DS SDK do then?
Well… The SDK also makes sure ADD requests are spread evenly across all DS nodes in the pool, so as not to overload one DS node while the others remain idle.
Also, as a bit of icing on top, the SDK is instance aware: if the origin DS node becomes unavailable (in our example, say DS Node 1 for uid=Darinder), the SDK detects this and re-routes all requests for uid=Darinder to another DS node in the pool, and then (here's the cherry) ensures all further requests remain sticky to this new DS node (it becomes the new origin node). Assuming data has been replicated in time, there will be no functional impact.
Oh, and when the original DS node comes back online, all requests fail back for any DNs where it was the origin server (so, in our case, uid=Darinder would flip back to DS Node 1). Booom!
Which components of the ForgeRock Platform support Affinity Based Load Balancing?
ForgeRock AM’s DS Core Token Service (CTS)
ForgeRock AM’s DS User / Identity Store
ForgeRock AM’s App and Policy Stores
Note: the AM Configuration Store does not support affinity, but this is intentional, as AM configuration will soon move to file-based configuration (FBC); in the interim, customers can look to deploy the immutable pattern described in the next section.
What are the advantages of Affinity?
As the title says, Affinity Based Load Balancing enables an all-active persistent storage layer.
Instead of having a single, massively vertically scaled primary DS instance, DS can be horizontally scaled so all nodes are primary, increasing throughput and maximising compute resource.
As the topology is all active, smaller (read: cheaper) instances can be used; thus, significantly reducing costs, especially in a Cloud environment.
Eliminates functional, data integrity, and data consistency issues caused by replication delay.
To learn more about how to configure ForgeRock AM for Affinity Based Load Balancing, see the ForgeRock AM documentation.
Immutable Deployment Pattern for ForgeRock Access Management (AM) Configuration without File Based Configuration (FBC)
The standard production-grade deployment pattern for ForgeRock AM is to use replicated sets of Configuration Directory Server instances to store all of AM's configuration. This pattern has worked well in the past, but is less suited to the immutable, DevOps-enabled environments of today.
This blog presents an alternative view of how an immutable deployment pattern could be applied to AM in lieu of the upcoming full File Based Configuration (FBC) for AM in version 7.0 of the ForgeRock Platform. This pattern could also support easier transition to FBC.
Current Common Deployment Pattern
Currently most customers deploy AM with externalised Configuration, Core Token Service (CTS) and UserStore instances.
The following diagram illustrates such a topology spread over two sites; the focus is on the DS Config Stores, hence the CTS and DS Userstore connections and replication topology have been simplified. Note this blog is still applicable to single-site deployments.
In this topology AM uses connection strings to the DS Config stores to enable an all-active Config store architecture, with each AM targeting one DS Config store as primary and the second as failover per site. Note in this model there is no cross-site failover for AM to Config store connections (possible but discouraged). The DS Config stores do replicate across sites to create a full mesh, as do the User and CTS stores.
A slight divergence from this model, and one applicable to cloud environments, is to use a load balancer between AM and its DS Config Stores; however, we have observed many customers experience problems with features such as persistent searches failing due to dropped connections. Hence, where possible, Consulting Services recommends the use of AM connection strings.
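For illustration, AM connection strings use a pipe-delimited, comma-separated syntax that pins specific AM server IDs to specific DS servers; the exact property this value goes into depends on the store being configured, so treat this example as an assumption to verify for your AM version:

ds1.example.com:1636|01,ds2.example.com:1636|02

Here the AM instance with server ID 01 prefers ds1.example.com, and the AM instance with server ID 02 prefers ds2.example.com, giving the per-site primary/failover behaviour described above.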
This model has worked well in the past; the DS Config stores contain all the stuff AM needs to boot and operate plus a handful of runtime entries.
However, times are a-changing!
The advent of Open Banking introduces potentially hundreds of thousands of OAuth2 clients, AM policy entry counts are ever increasing, and with UMA thrown in for good measure, the previously small, minimal-footprint, and fairly static DS Config Stores are suddenly much more dynamic and contain many thousands of entries. Managing the stuff AM needs to boot and operate alongside all this runtime data suddenly becomes much more complex.
TADA! Roll up the new DS App and Policy Stores. These new data stores address this by separating the stuff AM needs to boot and operate from long-lived, environment-specific data such as policies, OAuth2 clients, SAML entities, etc. Nice!
However, one problem still remains: it is difficult to do stack-by-stack, blue/green, or rolling deployments, or to support immutable-style deployments, as DS Config Store replication is in place and needs to be very carefully managed during deployment scenarios.
Some common issues:
Making a change to one AM can quite easily ripple through DS replication and impact and/or impair the other AM nodes, whether within the same site or remote. This behaviour can make customers hesitant to introduce patches, config, or code changes.
In a dual-site environment the typical deployment pattern is to: stop cross-site replication, force traffic to site B, disable site A, upgrade site A, test it in isolation, force traffic back to the newly deployed site A, ensure production is functional, disable traffic to site B, push replication from site A to site B and re-enable replication, then upgrade site B before finally returning to normal service.
Complexity increases further if App and Policy stores are not in use, as the in-service DS Config stores may have new OAuth2 clients, UMA data, etc. created during the transition which need to be preserved. So in the above scenario, an LDIF export of such data from site B's DS Config Stores needs to be taken and imported into site A prior to site A going live (to catch changes made while site A's deployment was in progress), and after site B is disabled another LDIF export needs to be taken from B and imported into A to catch any last-minute changes between the first LDIF export and the switch-over. Sheesh!
Even in a single-site deployment model, managing replication as well as the AM upgrade/deployment itself introduces risk and several potential break points.
New Deployment Model
The real enabler for a new deployment model for AM is the introduction of App and Policy stores, which will be replicated across sites. They enable full separation of the stuff AM needs to boot and run from environmental runtime data. In such a model the DS Config stores return to a minimal footprint, containing only AM boot data, with the App and Policy Stores containing the long-lived environmental runtime data which is typically subject to zero-loss SLAs and long-term preservation.
Another enabler is a different configuration pattern for AM, where each AM effectively has the same FQDN and serverId, allowing AM to be built once and then cloned into an image. This allows rapid expansion and contraction of the AM farm without having to interact with the DS Config Store to add/delete instances or go through the build process again and again.
Finally, the last key component of this model is Affinity Based Load Balancing for the Userstore, CTS, App, and Policy stores, which both simplifies the configuration and enables an all-active datastore architecture immune to data misses caused by replication delay.
Affinity is a unique feature of the ForgeRock platform and is used extensively by many customers. For more on Affinity, see the Affinity Based Load Balancing section above.
The proposed topology below illustrates this new deployment model and is applicable to both active-active and active-standby deployments. Note cross-site replication for the User, App, and CTS stores is depicted, but may well not be required for global/isolated deployments.
As the DS Config store footprint will be minimal, the proposal is to move the DS Config Stores local to AM, with each AM built with exactly the same FQDN and serverId; this enables immutable configuration and massively simplifies stack-by-stack/blue-green/rolling deployments. Each local DS Config Store lives in isolation, and replication is not enabled between these stores.
In order to provision each DS Config Store in lieu of replication, either the same build script can be executed on each host, or a quicker and more optimised approach is to build one AM-DS Config store instance/Pod in full, clone it, and deploy the complete image for each new AM-DS instance. The latter approach removes the need to interact with Amster to build additional instances and, for example, Git to pull configuration artefacts. With this model any new configuration change requires a new package/Docker image/AMI, etc., i.e. an immutable build.
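As a purely hypothetical sketch of that flow (the script name, registry, and tags are invented placeholders):

# Build one AM instance plus its local DS Config Store, then bake an image
./build-am-ds-config.sh    # assumed one-off build script (Amster + DS setup)
docker build -t registry.example.com/am-ds-config:1.0.0 .
docker push registry.example.com/am-ds-config:1.0.0

# Scale the AM farm by running more copies of the same immutable image;
# any config change means a new tag, never an in-place edit
docker run -d registry.example.com/am-ds-config:1.0.0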
At boot time AM uses its local address to connect to its DS Config Store, and affinity to connect to the User Store, CTS, and the App/Policy stores.
Advantages of this model:
As the DS Config Stores are not replicated, most AM configuration and code-level changes can be implemented or rolled back (using a new image or similar) without impacting any of the other AM instances and without the complexity of managing replication. Blue/green, rolling, and stack-by-stack deployments and upgrades are massively simplified, as is rollback.
Enables simplified expansion and contraction of the AM pool, especially if an image/clone of a full AM instance and associated DS Config instance is used. This cloning approach also protects against configuration changes in Git or other code repositories inadvertently rippling to new AM instances; the same code and configuration base is deployed everywhere.
Promotes the cattle vs pets paradigm: for any new configuration, deploy a new image/package.
This approach does not require any additional instances; the existing DS Config Stores are repurposed as App/Policy stores and the DS Config Stores are hosted locally to AM (or in a small Container in the same Pod as AM).
The existing DS Config Stores can be quickly repurposed as App/Policy Stores; no new instances or data-level deployment steps are required other than tuning up the JVM and potentially uprating storage, enabling rapid switching from DS Config to App/Policy Stores.
Enabler for FBC: when FBC becomes available, the local DS Config stores are simply stopped in favour of FBC. And if the transition to FBC becomes problematic, rollback is easy: fire up the local DS Config stores and revert back.
Disadvantages of this model:
No DS Config Store failover: if the local DS Config Store fails, the AM instance connected to it also fails and does not recover. However, this fits well with the pets vs cattle paradigm; if a local component fails, kill the whole instance and instantiate a new one.
Any log systems which have logic based on individual AM FQDNs (Splunk, etc.) would need their configuration modified to account for each AM now having the same FQDN.
This deployment pattern is only suitable for customers who have mature DevOps processes. The expectation is that no changes are made in production; instead, a new release/build is produced and promoted to production. If, for example, a customer makes changes directly via REST or the UI, then these changes will not be replicated to the other AM instances in the cluster, which would severely impair performance and stability.
This suggested model would significantly improve a customer’s ability to take on new configuration/code changes and potentially rollback without impacting other AM servers in the pool, makes effective use of the App/Policy stores without additional kit, allows easy transition to FBC and enables DevOps style deployments.
It’s a relatively common requirement to need to integrate the products that make up the ForgeRock Identity Platform. The IDM Samples Guide contains a good working example of just how to do this. Each version of the ForgeRock stack has slight differences, both in the products themselves and in the integrations. As such, this blog will focus on version 6.5 of the products and will endeavour to include as much useful information as possible to speed integration for readers, including sample configuration files, REST calls, etc.
In this integration IDM acts as an OIDC Relying Party, talking to AM as the OIDC Provider using the OAuth 2.0 authorization grant. The following sequence diagram illustrates successful processing from the authorization request, through grant of the authorization code, and ID token from the authorization provider, AM. You can find more details in the IDM Samples Guide.
Log in to AM as an administrator and browse to the Realm you’re configuring. As already mentioned, I’m using the Top Level Realm for simplicity.
Once you’ve browsed to the Realm, on the Dashboard, select Configure OAuth Provider.
Now select Configure OpenID Connect.
If you need to modify any values, like the Realm, then do so; I’ll just press Create.
You’ll get a success message.
AM is now configured as an OP, let’s set some required values. Browse to Services, then click on OAuth2 Provider.
On the Consent tab, check the box next to Allow Clients to Skip Consent, then press Save.
Now browse to the Advanced OpenID Connect tab and set openidm as the value for Authorized OIDC SSO Clients (this is the name of the Relying Party / Client which we will create next). Press Save.
Configure Relying Party / Client for IDM
You can do these steps manually, but let’s call AM’s REST interface; if you save these calls in a tool like Postman (see HERE for the Postman collection), you can replicate this step with the click of a button. I am using a simple cURL command from the AM server.
Firstly we’ll need an AM administrator session to create the client, so call the /authenticate endpoint. (I’ve used jq for better formatting; I’ll leave that for you to add in if required.)
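For example (the amadmin password here is a placeholder for whatever you configured):

$ curl -X POST \
    -H "Content-Type: application/json" \
    -H "X-OpenAM-Username: amadmin" \
    -H "X-OpenAM-Password: password" \
    -H "Accept-API-Version: resource=2.0, protocol=1.0" \
    "http://am.example.com:8080/openam/json/realms/root/authenticate" | jq .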
The SSO Token / Session will be returned as tokenId, so save this value.
Now we have a session, we can use it in the next call to create the client. Again this will be done via REST, though you can do it manually if you prefer. Substitute the tokenId value from above for the value of the iPlanetDirectoryPro header.
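A sketch of that call is below. The realm-config endpoint and field names follow my reading of the AM 6.5 REST schema, so verify them against your version; the values match the RP settings used elsewhere in this blog:

$ curl -X PUT \
    -H "Content-Type: application/json" \
    -H "iPlanetDirectoryPro: <tokenId>" \
    -d '{
          "coreOAuth2ClientConfig": {
            "userpassword": "changeme",
            "redirectionUris": ["http://idm.example.com:9080/oauthReturn/"],
            "scopes": ["openid"]
          },
          "advancedOAuth2ClientConfig": {
            "isConsentImplied": true
          }
        }' \
    "http://am.example.com:8080/openam/json/realms/root/realm-config/agents/OAuth2Client/openidm"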
AM is now configured as an OIDC provider and has an OIDC Relying Party for IDM to use, so now we can configure the final step, that is, tell IDM to outsource authentication to AM.
Feel free to modify and copy this authentication.json file directly into your ~/openidm/samples/full-stack/conf folder, or follow these steps to configure it.
Browse to Configure, then Authentication.
Authentication will show as configured to Local; select ForgeRock Identity Provider.
After you click above, the Configure ForgeRock Identity Provider page will pop up.
Set the appropriate values for the Well-Known Endpoint, Client ID, and Client Secret.
Note that the common datastore is set to the DS shared repository; leave this as is.
You can change the others to match your environment, but be careful, as the values must match those set in the AM OP/RP configuration above. You can also refer to the sample authentication.json. Once completed, press Submit. You will be asked to re-authenticate.
Testing the integrated environment
Everything is now configured so we are ready to test end to end.
For the ForgeRock Identity Platform version 6, integration between our products is easier than ever. In this blog, I’ll show you how to integrate ForgeRock Identity Management (IDM), ForgeRock Access Management (AM), and ForgeRock Directory Services (DS). With integration, you can configure aspects of privacy, consent, trusted devices, and more. This configuration sets up IDM as an OpenID Connect / OAuth 2.0 client of AM, using DS as a common user datastore.
(Substitute your IP address as appropriate. You may set up AM, DS, and IDM on different systems.)
If you set up AM and IDM on the same system, make sure they’re configured to connect on different ports. Both products configure default connections on ports 8080 and 8443.
Download AM, IDM, and DS versions 6 from backstage.forgerock.com. For organizational purposes, set them up on their own home directories:
Unpack the zip files. For convenience, copy the Example.ldif file from /home/idm/openidm/samples/full-stack/data to the /home/ds directory.
For the purpose of this blog, I’ve downloaded Tomcat 8.5.30 to the /home/am directory.
Configuring ForgeRock Directory Services (DS)
To install DS, navigate to the directory where you unpacked the binary, in this case, /home/ds/opendj. In that directory, you’ll find a setup script. The following command uses that script to start DS as a directory server, with a root DN of “cn=Directory Manager”, with a host name of ds.example.com, port 1389 for LDAP communication, and 4444 for administrative connections:
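Based on those values, the command looks something like this (the rootUserPassword is a placeholder, and the --ldifFile flag assumes you want to seed the dc=example,dc=com suffix with the Example.ldif copied earlier):

$ cd /home/ds/opendj
$ ./setup directory-server \
    --rootUserDN "cn=Directory Manager" \
    --rootUserPassword password \
    --hostname ds.example.com \
    --ldapPort 1389 \
    --adminConnectorPort 4444 \
    --baseDN dc=example,dc=com \
    --ldifFile /home/ds/Example.ldif \
    --acceptLicense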
Set up Tomcat for AM. For this blog, I used Tomcat 8.5.30, downloaded from http://tomcat.apache.org/.
Unzip Tomcat in the /home/am directory.
Make the files in the apache-tomcat-8.5.30/bin directory executable.
Copy the AM-6.0.0.war file from the /home/am directory to apache-tomcat-8.5.30/webapps/openam.war.
Start the Tomcat web container with the startup.sh script in the apache-tomcat-8.5.30/bin directory. This action should unpack the openam.war binary to the apache-tomcat-8.5.30/webapps/openam directory.
Shut down Tomcat with the shutdown.sh script in the same directory. Make sure the Tomcat process has stopped.
Open the web.xml file in the following directory: apache-tomcat-8.5.30/webapps/openam/WEB-INF/. For an explanation, see the AM 6 Release Notes. Include the following code blocks in that file to support cross-origin resource sharing:
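The blocks below are representative of the CORS filter described in the AM 6 documentation; check the Release Notes or the IDM Samples Guide for the exact block for your version:

<filter>
   <filter-name>CORSFilter</filter-name>
   <filter-class>org.forgerock.openam.cors.CORSFilter</filter-class>
   <init-param>
      <param-name>methods</param-name>
      <param-value>POST,GET,PUT,DELETE,PATCH,OPTIONS</param-value>
   </init-param>
   <init-param>
      <param-name>origins</param-name>
      <param-value>http://am.example.com:8080,http://idm.example.com:9080</param-value>
   </init-param>
   <init-param>
      <param-name>allow-credentials</param-name>
      <param-value>true</param-value>
   </init-param>
   <init-param>
      <param-name>headers</param-name>
      <param-value>X-OpenAM-Username,X-OpenAM-Password,X-Requested-With,Accept,iPlanetDirectoryPro</param-value>
   </init-param>
</filter>

<filter-mapping>
   <filter-name>CORSFilter</filter-name>
   <url-pattern>/json/*</url-pattern>
</filter-mapping>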
Important: Substitute the actual URLs and ports for your AM and IDM deployments where you see http://am.example.com:8080 and http://idm.example.com:9080. (I forgot to make these changes once and couldn’t figure out the problem for a couple of days.)
If you’ve configured AM on this system before, delete the /home/am/openam directory.
Restart Tomcat with the startup.sh script in the aforementioned apache-tomcat-8.5.30/bin directory.
Navigate to the URL for your AM deployment. In this case, call it http://am.example.com:8080/openam. You’ll create a “Custom Configuration” for OpenAM, and accept the defaults for most cases.
When setting up Configuration Data Store Settings, for consistency, use the same root suffix in the Configuration Data Store, i.e. dc=example,dc=com.
When setting up User Data Store settings, make sure the entries match what you used when you installed DS. The following table is based on that installation:
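Reconstructed from the DS installation above (the password is whatever you supplied as the DS root user password):

Directory Name: ds.example.com
Port: 1389
Root Suffix: dc=example,dc=com
Login ID: cn=Directory Manager
Password: (your DS root user password)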
When the installation process is complete, you’ll be prompted with a login screen. Log in as the amadmin administrative user with the password you set up during the configuration process. With the following action, you’ll set up an OpenID Connect/OAuth 2.0 service that you’ll configure shortly for a connection to IDM.
Select Top-Level Realm -> Configure OAuth Provider -> Configure OpenID Connect -> Create -> OK. This sets up AM as an OIDC authorization server.
Set up IDM as an OAuth 2.0 Client:
Select Applications -> OAuth 2.0. Choose Add Client. In the New OAuth 2.0 Client window that appears, set openidm as a Client ID, set changeme as a Client Secret, along with a Redirection URI of http://idm.example.com:9080/oauthReturn/. The scope is openid, which reflects the use of the OpenID Connect standard.
Select Create, go to the Advanced Tab, and scroll down. Activate the Implied Consent option.
Press Save Changes.
Go to the OpenID Connect tab, and enter the following information in the Post Logout Redirect URIs text box:
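Based on the IDM URL used throughout this blog, that value is presumably the IDM base URL (an assumption worth checking against the IDM Samples Guide):

http://idm.example.com:9080/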
Scroll down and enter openidm in the “Authorized OIDC SSO Clients” text box.
Press Save Changes.
Navigate to the Consent tab:
Enable the Allow Clients to Skip Consent option.
Press Save Changes.
AM is now ready for integration.
Now you’re ready to configure IDM, using the following steps:
For the purpose of this blog, use the following project subdirectory: /home/idm/openidm/samples/full-stack.
If you haven’t modified the deployment port for AM, modify the port for IDM. To do so, edit the boot.properties file in the openidm/resolver/ subdirectory, and change the port property appropriate for your deployment (openidm.port.http or openidm.port.https). For this blog, I’ve changed the openidm.port.http line to:
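openidm.port.http=9080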
(NEW) You’ll also need to change the openidm.host. By default, it’s set to localhost. For this blog, set it to:
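openidm.host=idm.example.com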
Start IDM using the full-stack project directory:
$ cd openidm
$ ./startup.sh -p samples/full-stack
(If you’re running IDM in a VM, the following command starts IDM and keeps it going after you log out of the system: nohup ./startup.sh -p samples/full-stack/ > logs/console.out 2>&1& )
As IDM includes pre-configured options for the ForgeRock Identity Platform in the full-stack subdirectory, IDM documentation on the subject frequently refers to the platform as the “Full Stack”.
In a browser, navigate to http://idm.example.com:9080/admin/.
Log in as an IDM administrator (by default, openidm-admin, with password openidm-admin):
Reconcile users from the common DS user store to IDM. Select Configure > Mappings. In the page that appears, find the mapping from System/Ldap/Account to Managed/User, and press Reconcile. That will populate the IDM Managed User store with users from the common DS user store.
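If you prefer REST over the UI, the same reconciliation can be triggered with a call along these lines (the mapping name systemLdapAccounts_managedUser is the one used by the full-stack sample; verify it against your conf/sync.json):

$ curl -X POST \
    -H "X-OpenIDM-Username: openidm-admin" \
    -H "X-OpenIDM-Password: openidm-admin" \
    "http://idm.example.com:9080/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser"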
Select Configure -> Authentication. Choose the ForgeRock Identity Provider option. In the window that appears, scroll down to the configuration details. Based on the instance of AM configured earlier, you’d change:
Client ID: openidm (the matching entry from Step 5 of Configuring AM)
Client Secret: changeme (the matching entry from Step 5 of Configuring AM)
When you’ve made appropriate changes, press Submit. (You won’t be able to press submit until you’ve entered a valid Well-Known Endpoint.)
You’re prompted with the following message:
Your current session may be invalid. Click here to logout and re-authenticate.
When you tap on the ‘Click here’ link, you should be taken to http://am.example.com:8080/openam/<some long extension>. Log in with AM administrative credentials:
User Name: amadmin
Password: <what you configured during the AM installation process>
If you see the IDM Admin UI after logging in, congratulations! You now have a working integration between AM, IDM, and DS.
Note: To ensure a rapid response when the AM session expires, the IDM JWT_SESSION timeout has been reduced to 5 seconds. For more information, see the following section of the IDM ForgeRock Identity Platform sample: Changes to Session and Authentication Modules.
Both AM and IG support UMA 1.0.1 where AM acts as UMA Authorization Server (AS) and IG as UMA Resource Server (RS).
Currently there are some limitations in UMA support in IG. One of the most important: the PAT is stored in IG memory and is not persisted, so if IG is restarted, the resource owner must perform the entire share process again.
OpenAM provides “Account Lockout” functionality which can be used to configure various lockout parameters, such as failure count and lockout interval.
Note that OpenDJ also provides Account Lockout functionality; this article is based on OpenAM Account Lockout policies. Refer to this KB article for more on the differences between OpenAM and OpenDJ lockout policies.
Using OpenAM “Account Lockout” policies, users may get locked out after invalid login attempts. OpenAM offers both Memory and Physical lockouts. With memory lockout, users get unlocked automatically after a specified duration.
Many deployments use “Physical lockout” due to security requirements. When this lockout mode is used, there should be some self-service flow so that users can unlock themselves. Why not use the OpenAM forgot password self-service flow?
The OpenAM forgot password flow allows a user to reset their password after successfully completing various stages (such as KBA, email confirmation, reCaptcha, etc.). Unfortunately, the account is not unlocked when this flow is used. There is already an open RFE for this issue.
Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5
One solution is to extend the out-of-the-box OpenAM forgot password self-service flow by adding a custom stage to unlock the user’s account:
Implement ForgottenPasswordConfigProviderExt to include account unlock stage.
Implement the unlock custom stage (the core of which is sketched after this list).
Extend selfServiceExt.xml to include custom provider.
Build the custom stage by using maven.
Delete all instances of User Self-Service from all realms.
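The heart of the unlock stage is simply re-activating the account. A minimal sketch, assuming AM’s physical lockout behaviour of setting the user’s inetuserstatus attribute to Inactive; the surrounding self-service stage wiring is omitted:

import com.sun.identity.idm.AMIdentity;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class AccountUnlockHelper {

    // Re-activate a user locked out by OpenAM's physical lockout
    public static void unlock(AMIdentity user) throws Exception {
        Map<String, Set<String>> attrs = new HashMap<>();
        attrs.put("inetuserstatus", Collections.singleton("Active"));
        user.setAttributes(attrs); // stage the attribute change
        user.store();              // persist it to the datastore
    }
}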
ForgeRock AM 5.0 ships with Amster, a lightweight command line tool and interactive shell that allows for the automation of many management and configuration tasks.
A common task often associated with SAML2 identity provider configs is the updating of certificates that are used for signing and the possible encryption of assertions. A feature added in OpenAM 13.0 was the ability to have multiple certificates within an IDP config. This is useful for overcoming the age-old challenge of how to handle certificate expiration. An invalid cert can break integrations with service providers, and the process of removing then adding a new certificate would require any entities within the circle of trust to retrieve new metadata into their configs, creating downtime, so the timing of this is often an issue. The ability to have multiple certificates in the config allows service providers to pull down metadata at a known date, instead of exactly when certificates expire.
Here we see the basic admin view of the IDP config… showing the list of certs available. These certs are stored in the JCEKS keystore in AM 5.0 (previously the JKS keystore).
So the config contains am1 and am2 certs; an export of the metadata (from the ../openam/saml2/jsp/exportmetadata.jsp?entityid=idp endpoint) will list both certs that could be used for signing:
The first certificate listed in the config is the one that is used to sign. When it expires, just remove it from the list and the second certificate is then used. As the service provider already has both certs in their originally downloaded metadata, there should be no break in service.
Anyway… back to automation. Amster can manage the SAML2 entities, either via the shell or a script. This allows admins to operationally create, edit, and update entities, and a regular task could be to add new certificates to the IDP list as necessary.
To do this I created a script. It’s a basic bash script that utilises Amster to read, edit, then re-import the entity as a JSON-wrapped XML object.
OpenAM provides an HOTP authentication module which can send an OTP to the user’s email address and/or telephone number. By default, OpenAM doesn’t display the user’s email address and/or telephone number when sending this OTP.
Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5
One solution is to extend the out-of-the-box OpenAM HOTP module:
Extend HOTP auth module (openam-auth-hotp).
Update the below property in the extended amAuthHOTP.properties: send.success=Please enter your One Time Password sent at
Extend HOTPService appropriately to retrieve user profile details.
Change the extended HOTP module code as per below (both for auto send and on request):
substituteHeader(START_STATE, bundle.getString("send.success") + <Get User contact details from HOTPService>);
OpenAM can act as both SP and IdP for SAML webSSO flows. OpenAM also provides the ability to dynamically create user profiles.
When OpenAM is acting as SAML SP and Dynamic user profile creation is enabled, if a user profile doesn’t exist on OpenAM, then OpenAM dynamically creates it from the attributes in the SAML assertion.
The problem arises when the user profile is updated on the IdP side: subsequent SAML webSSO flows don’t propagate these changes to the OpenAM SP side. More details here: OPENAM-8340.
Versions used for this implementation: OpenAM 13.5, OpenDJ 3.5
One solution is to extend the OpenAM SP Attribute Mapper. This extension need only check whether the user profile exists in the OpenAM SP and update any modified or new attributes in the OpenAM datastore. Some tips for this implementation (a rough sketch follows the steps below):
Extend DefaultSPAttributeMapper and override getAttributes()
Get datastore provider from SAML2Utils.getDataStoreProvider()
Check if user exists: dataStoreProvider.isUserExists(userID)
Get existing user attributes: dataStoreProvider.getAttributes()
Compare attributes in SAML assertion with existing user attributes.
Finally persist any new and updated attributes: dataStoreProvider.setAttributes()
Compile and deploy this extension in OpenAM under (OpenAM-Tomcat)/webapps/openam/WEB-INF/lib
Change the SAML attribute mapper setting in OpenAM. Navigate to Federation > Entity Providers > (SP Hosted Entity) > Assertion Processing. Specify ‘org.forgerock.openam.saml2.plugins.examples.UpdateDynamicUserSPAttMapper’ under Attribute Mapper.
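Putting the tips together, a rough sketch might look like the following. The method signature follows OpenAM 13.5’s DefaultSPAttributeMapper as I understand it, and the merge logic is deliberately simplistic, so treat it as a starting point rather than production code:

package org.forgerock.openam.saml2.plugins.examples;

import com.sun.identity.plugin.datastore.DataStoreProvider;
import com.sun.identity.saml2.assertion.Assertion;
import com.sun.identity.saml2.common.SAML2Exception;
import com.sun.identity.saml2.common.SAML2Utils;
import com.sun.identity.saml2.plugins.DefaultSPAttributeMapper;

import java.util.List;
import java.util.Map;
import java.util.Set;

public class UpdateDynamicUserSPAttMapper extends DefaultSPAttributeMapper {

    @Override
    public Map<String, Set<String>> getAttributes(List<Assertion> assertions,
            String userID, String hostEntityID, String remoteEntityID,
            String realm) throws SAML2Exception {

        // Let the default mapper extract the attributes from the assertion
        Map<String, Set<String>> assertionAttrs = super.getAttributes(
                assertions, userID, hostEntityID, remoteEntityID, realm);
        try {
            DataStoreProvider provider = SAML2Utils.getDataStoreProvider();
            if (provider.isUserExists(userID)) {
                // Compare the asserted attributes with what is stored...
                Map<String, Set<String>> existing =
                        provider.getAttributes(userID, assertionAttrs.keySet());
                // ...and persist only when something is new or has changed
                if (!assertionAttrs.equals(existing)) {
                    provider.setAttributes(userID, assertionAttrs);
                }
            }
        } catch (Exception e) {
            // Deliberately swallowed: don't fail SSO because the profile
            // update failed; log this in a real implementation
        }
        return assertionAttrs;
    }
}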
And we are good to go! Any changes in user profile attributes in SAML assertion will now be persisted in OpenAM datastore.
Note that ideally attributes between different sources should be kept in sync using a tool like OpenIDM.
Yubico is a manufacturer of multi-factor authentication devices that are typically just USB dongles. They provide a range of different MFA options, including traditional static password linking, one-time password generation, and integration using FIDO (Fast Identity Online) Universal 2nd Factor (U2F).
I want to quickly show the route to integrating your Yubico Yubikey with ForgeRock Access Management. ForgeRock and Yubico have had integrations for the last 6 years, but I thought it was good to have a simple update on integration using the OATH-compliant OTP.
First of all you need a Yubikey. I’m using a Yubikey Nano, which couldn’t be any smaller if it tried. Just make sure you don’t lose it… The Yubikey needs configuring first to generate one-time passwords. This is done using the Yubico personalisation tool, a simple util that works on Mac, Windows, and Linux. Download the tool from Yubico and install it. Setting up the Yubikey for OTP generation is a 3-minute job. There’s even a nice Vimeo on how to do it, if you can’t be bothered to RTFM.
This setup process basically generates a secret that is bound to the Yubikey, along with some config. If you want to use your own secret, just fill in the field… but don’t forget it :-)
The next step is to set up ForgeRock AM (aka OpenAM) to use the Yubikey during login.
Access Management has shipped with an OATH-compliant authentication module for years, even since the Sun OpenSSO days. This module works with any Open Authentication compliant device.
Create a new module instance and add in the fields where you will store the secret and counter against the user’s profile. For quickness (and laziness) I just used employeeNumber and telephoneNumber, as they are already shipped in the profile schema and weren’t being used. In the “real world” you would just add two specific attributes to the profile schema.
Make sure you then copy the secret that the Yubikey personalisation tool created into the user record, within the employeeNumber field…
Next, just add the module to a chain that contains your data store module first. The data store module isn’t essential, but you do need a way to identify the user first, in order to look up their OTP seed in the profile store, so username and password authentication seems the quickest; alternatively, you could use a persistent cookie if the user had authenticated previously, or maybe even just a username module.
Done. Next, to use your new authentication service, just augment the authentication URL with the name of the service, in this case yubikeyOTPService. E.g.: ../openam/XUI/#login/&authIndexType=service&authIndexValue=yubikeyOTPService. This first asks me for my username and password…
…then my OTP.
At this point, I just insert my Yubikey Nano into my USB drive, then touch it for 3 seconds to auto-generate the 6-digit OTP and log me in. Note the 3 seconds bit is important: most Yubikeys have two configuration slots, and slot 1 is often configured for the Yubico Cloud Service, activated if you touch the key for only 1 second. To activate the second configuration, and in our case the OTP, just hold a little longer…