Immutable Deployment Pattern for ForgeRock Access Management (AM) Configuration without File Based Configuration (FBC)

Introduction

The standard Production Grade deployment pattern for ForgeRock AM is to use replicated sets of Configuration Directory Server instances to store all of AM’s configuration. This deployment pattern has worked well in the past, but is less suited to the immutable, DevOps enabled environments of today.

This blog presents an alternative view of how an immutable deployment pattern could be applied to AM in lieu of the upcoming full File Based Configuration (FBC) for AM in version 7.0 of the ForgeRock Platform. This pattern could also support easier transition to FBC.

Current Common Deployment Pattern

Currently most customers deploy AM with externalised Configuration, Core Token Service (CTS) and UserStore instances.

The following diagram illustrates such a topology spread over two sites; the focus is on the DS Config Stores, hence the CTS and DS Userstore connections and replication topology have been simplified. Note this blog is still applicable to single site deployments.

Dual site AM deployment pattern. Focus is on the DS Configuration stores

In this topology AM uses connection strings to the DS Config stores to enable an all active Config store architecture, with each AM targeting one DS Config store as primary and the second as failover per site. Note in this model there is no cross site failover for AM to Config stores connections (possible but discouraged). The DS Config stores do communicate across site for replication to create a full mesh as do the User and CTS stores.

A slight divergence from this model, and one applicable to cloud environments, is to use a load balancer between AM and its DS Config Stores; however, we have observed many customers experience problems with features such as Persistent Searches failing due to dropped connections. Hence, where possible, Consulting Services recommends the use of AM Connection Strings.

It should be noted that AM Connection Strings specific to each AM can only be used if each AM has a unique FQDN — for example: https://openam1.example.com:8443/openam, https://openam2.example.com:8443/openam and so on.
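
As an illustration, and assuming the host:port|serverID connection string syntax described in the AM documentation (hostnames here are hypothetical), a shared connection string for two AM servers with server IDs 01 and 02 might look like the following, with each AM only using the entries tagged with its own server ID, in the order listed:

dsconfig1.example.com:1636|01,dsconfig2.example.com:1636|01,dsconfig2.example.com:1636|02,dsconfig1.example.com:1636|02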

For more on AM Connection Strings click here

Problem Statement

This model has worked well in the past; the DS Config stores contain all the stuff AM needs to boot and operate plus a handful of runtime entries.

However, times are a changing!

The advent of Open Banking introduces potentially hundreds of thousands of OAuth2 clients, AM policy entry numbers are ever increasing, and UMA is thrown in for good measure; the previously small, minimal footprint and fairly static DS Config Stores are suddenly much more dynamic and contain many thousands of entries. Managing the stuff AM needs to boot and operate alongside all this runtime data suddenly becomes much more complex.

TADA! Roll up the new DS App and Policy Stores. These new data stores address this by separating the stuff AM needs to boot and operate from long lived, environment specific data such as policies, OAuth2 clients, SAML entities etc. Nice!

However, one problem still remains; it is still difficult to do stack by stack deployments, blue/green type deployments, rolling deployments and/or support immutable style deployments as DS Config Store replication is in place and needs to be very carefully managed during deployment scenarios.

Some common issues:

  • Making a change to one AM can quite easily have a ripple effect through DS replication, which impacts and/or impairs the other AM nodes both within the same site and at the remote site. This behaviour can make customers more hesitant to introduce patches, config or code changes.
  • In a dual site environment the typical deployment pattern is to stop cross site replication, force traffic to site B, disable site A, upgrade site A, test it in isolation, force traffic back to the newly deployed site A, ensure production is functional, disable traffic to site B, push replication from site A to site B and re-enable replication, upgrade site B before finally returning to normal service.
  • Complexity is further increased if App and Policy stores are not in use, as the in-service DS Config stores may have new OAuth2 clients, UMA data etc. created during the transition which need to be preserved. So in the above scenario an LDIF export of site B’s DS Config Stores for such data needs to be taken and imported into site A prior to site A going live (to catch changes made while site A’s deployment was in progress), and after site B is disabled another LDIF export needs to be taken from B and imported into A to catch any last minute changes between the first LDIF export and the switch over. Sheesh!
  • Even in a single site deployment model managing replication as well as managing the AM upgrade/deployment itself introduces risk and several potential break points.

New Deployment Model

The real enabler for a new deployment model for AM is the introduction of App and Policy stores, which will be replicated across sites. They enable full separation of the stuff AM needs to boot and run from environmental runtime data. In such a model the DS Config stores return to a minimal footprint, containing only AM boot data, with the App and Policy Stores containing the long lived environmental runtime data which is typically subject to zero loss SLAs and long term preservation.

Another enabler is a different configuration pattern for AM, where each AM effectively has the same FQDN and serverId allowing AM to be built once and then cloned into an image to allow rapid expansion and contraction of the AM farm without having to interact with the DS Config Store to add/delete new instances or go through the build process again and again.

Finally, the last key component of this model is Affinity Based Load Balancing for the Userstore, CTS, App and Policy stores. Affinity both simplifies the configuration and enables an all-active datastore architecture immune to data misses caused by replication delay, and is central to this new model.

Affinity is a unique feature of the ForgeRock platform and is used extensively by many customers. For more on Affinity click here.

The proposed topology below illustrates this new deployment model and is applicable to both active-active deployments and active-standby. Note cross site replication for the User, App and CTS stores is depicted, but for global/isolated deployments may well not be required.

Localised DS Config Store for each AM with replication disabled

As the DS Config store footprint will be minimal, to enable immutable configuration and massively simplify step-by-step/blue green/rolling deployments the proposal is to move the DS Config Stores local to AM with each AM built with exactly the same FQDN and serverId. Each local DS Config Store lives in isolation and replication is not enabled between these stores.

In order to provision each DS Config Store in lieu of replication, either the same build script can be executed on each host, or, for a quicker and more optimised approach, one AM-DS Config Store instance/Pod can be built in full, cloned, and the complete image deployed whenever a new AM-DS instance is needed. The latter approach removes the need to interact with Amster to build additional instances, or with Git, for example, to pull configuration artefacts. With this model any new configuration change requires a new package/docker image/AMI, etc., i.e. an immutable build.
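
As a minimal sketch of that immutable build cycle, assuming Docker is the packaging technology (registry and image names below are purely illustrative):

# build AM plus its local DS Config Store into a single immutable image
docker build -t registry.example.com/am-localconfig:1.0.0 .
docker push registry.example.com/am-localconfig:1.0.0
# a configuration change means building and rolling out a new tag, never modifying a running instance
docker build -t registry.example.com/am-localconfig:1.0.1 .
docker push registry.example.com/am-localconfig:1.0.1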

At boot time AM uses its local address to connect to its DS Config Store and Affinity to connect to the user Store, CTS and the App/Policy stores.

Advantages of this model:

  • As the DS Config Stores are not replicated most AM configuration and code level changes can be implemented or rolled back (using a new image or similar) without impacting any of the other AM instances and without the complexity of managing replication. Blue/green, rolling and stack by stack deployments and upgrades are massively simplified as is rollback.
  • Enables simplified expansion and contraction of the AM pool, especially if an image/clone of a full AM instance and associated DS Config instance is used. This cloning approach also protects against configuration changes in Git or other code repositories inadvertently rippling to new AM instances; the same code and configuration base is deployed everywhere.
  • Promotes the cattle vs pet paradigm, for any new configuration deploy a new image/package.
  • This approach does not require any additional instances; the existing DS Config Stores are repurposed as App/Policy stores and the DS Config Stores are hosted locally to AM (or in a small Container in the same Pod as AM).
  • The existing DS Config Store can be quickly repurposed as App/Policy Stores; no new instances or data level deployment steps are required other than tuning up the JVM and potentially uprating storage, enabling rapid switching from DS Config to App/Policy Stores.
  • Enabler for FBC; when FBC becomes available the local DS Config stores are simply stopped in favour of FBC. Also if transition to FBC becomes problematic, rollback is easy — fire up the local DS Config stores and revert back.

Disadvantages of this model:

  • No DS Config Store failover; if the local DS Config Store fails the AM connected to it would also fail and not recover. However, this fits well with the pets vs cattle paradigm; if a local component fails, kill the whole instance and instantiate a new one.
  • Any log systems which have logic based on individual FQDNs for AM (Splunk, etc) would need their configuration to be modified to take into account each AM now has the same FQDN.
  • This deployment pattern is only suitable for customers who have mature DevOps processes. The expectation is no changes are made in production, instead a new release/build is produced and promoted to production. If for example a customer makes changes via REST or the UI directly then these changes will not be replicated to all other AM instances in the cluster, which would severely impair performance and stability.

Conclusions

This suggested model would significantly improve a customer’s ability to take on new configuration/code changes and potentially rollback without impacting other AM servers in the pool, makes effective use of the App/Policy stores without additional kit, allows easy transition to FBC and enables DevOps style deployments.

This blog post was first published @ https://medium.com/@darinder.shokar included here with permission.

Integrating ForgeRock Identity Platform 6.5

Integrating The ForgeRock Identity Platform 6.5

It’s a relatively common requirement to need to integrate the products that make up the ForgeRock Identity Platform. The IDM Samples Guide contains a good working example of just how to do this. Each version of the ForgeRock stack has slight differences, both in the products themselves and in the integrations. As such, this blog will focus on version 6.5 of the products and will endeavour to include as much useful information as possible to speed up integrations, including sample configuration files, REST calls etc.

In this integration IDM acts as an OIDC Relying Party, talking to AM as the OIDC Provider using the OAuth 2.0 authorization grant. The following sequence diagram illustrates successful processing from the authorization request, through grant of the authorization code, and ID token from the authorization provider, AM. You can find more details in the IDM Samples Guide.

Full Stack Authorization Code Flow

Sample files

For this integration I’ve included configured sample files, which can be found at the link below and either used as an example or dropped straight into your test environment after modification. It should not have to be said but just in case…DO NOT deploy these to production without appropriate testing / hardening: https://stash.forgerock.org/users/mark.nienaber_forgerock.com/repos/fullstack6.5/.

Postman Collection

For any REST calls made in this blog you’ll find the Postman collection available here: https://documenter.getpostman.com/view/5814408/SVfJVBYR?version=latest

Sample Scripts

We will set up some basic vanilla instances of our products to get started. I’ve provided some scripts to install both DS as well as AM.

Products used in this integration

Documentation can be found https://backstage.forgerock.com/docs/.

Note: In this blog I install the products under my home directory, this is not best practice, but keep in mind the focus is on the integration, and not meant as a detailed install guide.

Setup new DS instances

On Server 1 (AM Server) install a fresh DS 6.5 instance for an external AM config store (optional) and a DS repository to be shared with IDM.

Add the following entries to the hosts file:

sudo vi /etc/hosts
127.0.0.1       localhost opendj.example.com amconfig1.example.com openam.example.com

Before running the DS install script you’ll need to copy the Example.ldif file from the full-stack IDM sample to the DS/AM server. You can do this manually or use SCP from the IDM server.

scp ~/openidm/samples/full-stack/data/Example.ldif fradmin@opendj.example.com:~/Downloads

Modify the sample file installDSFullStack6.5.sh including:

  • server names
  • port numbers
  • location of DS-6.5.0.zip file
  • location of the Example.ldif from the IDM full-stack sample.

Once completed, run the script to install both an external AM Config store as well as the DS shared repository, i.e.

script_dir=`pwd`
ds_zip_file=~/Downloads/DS/DS-6.5.0.zip
ds_instances_dir=~/opends/ds_instances
ds_config=${ds_instances_dir}/config
ds_fullstack=${ds_instances_dir}/fullstack
#IDMFullStackSampleRepo
ds_fullstack_server=opendj.example.com
ds_fullstack_server_ldapPort=5389
ds_fullstack_server_ldapsPort=5636
ds_fullstack_server_httpPort=58081
ds_fullstack_server_httpsPort=58443
ds_fullstack_server_adminConnectorPort=54444
#Config
ds_amconfig_server=amconfig1.example.com
ds_amconfig_server_ldapPort=3389
ds_amconfig_server_ldapsPort=3636
ds_amconfig_server_httpPort=38081
ds_amconfig_server_httpsPort=38443
ds_amconfig_server_adminConnectorPort=34444

Now let’s run the script on Server 1, the AM / DS server, to create the new DS instances.

chmod +x ./installdsFullStack.sh
./installdsFullStack.sh
Install DS instances

You now have 2 DS servers installed and configured, let’s install AM.

Setup new AM server

On Server 1 we will install a fresh AM 6.5.2 on Tomcat using the provided Amster script.

Assuming the Tomcat instance is started, drop the AM WAR file under the webapps directory, renaming the context to secure (change this as you wish).

cp ~/Downloads/AM-6.5.2.war ~/tomcat/webapps/secure.war

Copy the amster script install_am.amster into your amster 6.5.2 directory and make any modifications as required.

install-openam \
--serverUrl http://openam.example.com:8080/secure \
--adminPwd password \
--acceptLicense \
--cookieDomain example.com \
--cfgStoreDirMgr 'uid=am-config,ou=admins,ou=am-config' \
--cfgStoreDirMgrPwd password \
--cfgStore dirServer \
--cfgStoreHost amconfig1.example.com \
--cfgStoreAdminPort 34444 \
--cfgStorePort 3389 \
--cfgStoreRootSuffix ou=am-config \
--userStoreDirMgr 'cn=Directory Manager' \
--userStoreDirMgrPwd password \
--userStoreHost opendj.example.com \
--userStoreType LDAPv3ForOpenDS \
--userStorePort 5389 \
--userStoreRootSuffix dc=example,dc=com
:exit

Now run amster and pass it this script to install AM. You can do this manually if you like, but scripting will make your life easier and allow you to repeat it later on.

cd amster6.5.2/
./amster install_am.amster
Install AM

At the end of this you’ll have AM installed and configured to point to the DS instances you set up previously.

AM Successfully installed

Setup new IDM server

On Server 2 install/unzip a new IDM 6.5.0.1 instance.

Make sure the IDM server can reach AM and DS servers by adding an entry to the hosts file.

sudo vi /etc/hosts
Hosts file entry

Modify the hostname and port of the IDM instance in the boot.properties file. An example boot.properties file can be found HERE.

cd ~/openidm
vi resolver/boot.properties

Set appropriate openidm.host and openidm.port.http/s.
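
For example, using the IDM hostname from this blog (the HTTPS port below is just an illustrative assumption):

openidm.host=openidm.example.com
openidm.port.http=8080
openidm.port.https=8443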

openidm/resolver/boot.properties

We now have the basic AM / IDM and DS setup and are ready to configure each of the products.

Configure IDM to point to shared DS repository

Modify the IDM LDAP connector file to point to the shared repository. An example file can be found HERE.

vi ~/openidm/samples/full-stack/conf/provisioner.openicf-ldap.json

Change “host” and “port” to match shared repo configured above.
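
Using the shared repository details from the DS install above, the two properties in question would look something like the following (the exact layout of the connector file may differ slightly between versions, so use the sample file as your reference):

"host" : "opendj.example.com",
"port" : 5389,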

provisioner.openicf-ldap.json

Configure AM to point to shared DS repository

Login as amadmin and browse to Identity Stores in the realm you’re configuring. For simplicity I’m using the Top Level Realm; DO NOT do this in production!

Set the correct values for your environment, then select Load Schema and Save.

Shared DS Identity Store

Browse to Identities and you should now see the two identities that were imported from Example.ldif when you set up the DS shared repository.

List of identities

We will set up another user to ensure AM is configured to talk to DS correctly. Select + Add Identity, fill in the values, then press Create.

Create new identity

Modify some of the user’s values and press Save Changes.

Modify attributes

Prepare IDM

We will start IDM now and check that it’s connected to the DS shared repository.

Start the IDM full-stack sample

Enter the following commands to start the full-stack sample.

cd ~/openidm/
./startup.sh -p samples/full-stack
Start full-stack sample

Login to IDM

Login to the admin interface as openidm-admin.

http://openidm.example.com:8080/admin/

Login as openidm-admin

Reconcile DS Shared Repository

We will now run a reconciliation to pull users from the DS shared repository into IDM.

Browse to Configure, then Mappings, then on the System/ldap/account→Managed/user mapping select Reconcile.
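
If you prefer REST, the same reconciliation can be triggered via the recon endpoint. The mapping name used below (systemLdapAccounts_managedUser) is the one defined in the full-stack sample’s sync.json; verify it matches your configuration before running:

curl -X POST \
'http://openidm.example.com:8080/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser' \
-H 'X-OpenIDM-Username: openidm-admin' \
-H 'X-OpenIDM-Password: openidm-admin'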

Reconcile DS shared repository

The users should now exist under Manage, Users.

User list

Let’s Login with our newly created testuser1 to make sure it’s all working.

End User UI Login

You should see the welcome screen.

Welcome Page

IDM is now ready for integration with AM.

Configure AM for integration

Now we will set up AM for integration with IDM.

Setup CORS Filter

Set up a CORS filter in AM to allow IDM as an origin. An example web.xml can be found HERE.

vi tomcat/webapps/secure/WEB-INF/web.xml

Add the appropriate CORS filter. (See sample file above)

</filter-mapping> <!-- closes an existing filter-mapping in web.xml; add the entries below after it -->
<filter-mapping>
    <filter-name>CORSFilter</filter-name>
    <url-pattern>/json/*</url-pattern>
</filter-mapping>
<filter-mapping>
    <filter-name>CORSFilter</filter-name>
    <url-pattern>/oauth2/*</url-pattern>
</filter-mapping>
<filter>
    <filter-name>CORSFilter</filter-name>
    <filter-class>org.forgerock.openam.cors.CORSFilter</filter-class>
    <init-param>
        <param-name>methods</param-name>
        <param-value>POST,PUT,OPTIONS</param-value>
    </init-param>
    <init-param>
        <param-name>origins</param-name>
        <param-value>http://openidm.example.com:8080,http://openam.example.com:8080,http://localhost:8000,http://localhost:8080,null,file://</param-value>
    </init-param>
    <init-param>
        <param-name>allowCredentials</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <param-name>headers</param-name>
        <param-value>X-OpenAM-Username,X-OpenAM-Password,X-Requested-With,Accept,iPlanetDirectoryPro</param-value>
    </init-param>
    <init-param>
        <param-name>expectedHostname</param-name>
        <param-value>openam.example.com:8080</param-value>
    </init-param>
    <init-param>
        <param-name>exposeHeaders</param-name>
        <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Set-Cookie</param-value>
    </init-param>
    <init-param>
        <param-name>maxAge</param-name>
        <param-value>1800</param-value>
    </init-param>
</filter>

Configure AM as an OIDC Provider

Login to AM as an administrator and browse to the Realm you’re configuring. As already mentioned, I’m using the Top Level Realm for simplicity.

Once you’ve browsed to the Realm, on the Dashboard, select Configure OAuth Provider.

Configure OAuth Provider

Now select Configure OpenID Connect.

Configure OIDC

If you need to modify any values, like the Realm, then do so, but I’ll just press Create.

OIDC Server defaults

You’ll get a success message.

Success

AM is now configured as an OP, let’s set some required values. Browse to Services, then click on OAuth2 Provider.

AM Services

On the Consent tab, check the box next to Allow Clients to Skip Consent, then press Save.

Consent settings

Now browse to the Advanced OpenID Connect tab and set openidm as the value for Authorized OIDC SSO Clients (this is the name of the Relying Party / Client which we will create next). Press Save.

Authorized SSO clients

Configure Relying Party / Client for IDM

You can do these steps manually but let’s call AM’s REST interface; you can save these calls and easily replicate this step with the click of a button if you use a tool like Postman (see HERE for the Postman collection). I am using a simple cURL command from the AM server.

Firstly we’ll need an AM administrator session to create the client so call the /authenticate endpoint. (I’ve used jq for better formatting, so I’ll leave that for you to add in if required)

curl -X POST \
http://openam.example.com:8080/secure/json/realms/root/authenticate \
-H 'Accept-API-Version: resource=2.0, protocol=1' \
-H 'Content-Type: application/json' \
-H 'X-OpenAM-Password: password' \
-H 'X-OpenAM-Username: amadmin' \
-H 'cache-control: no-cache' \
-d '{}'

The SSO Token / Session will be returned as tokenId, so save this value.
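
If you do have jq installed, you can capture the tokenId straight into a shell variable, for example:

tokenId=$(curl -s -X POST \
http://openam.example.com:8080/secure/json/realms/root/authenticate \
-H 'Accept-API-Version: resource=2.0, protocol=1' \
-H 'Content-Type: application/json' \
-H 'X-OpenAM-Password: password' \
-H 'X-OpenAM-Username: amadmin' \
-d '{}' | jq -r .tokenId)
echo ${tokenId}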

Authenticate

Now we have a session we can use that in the next call to create the client. Again this will be done via REST, however you can do this manually if you want. Substitute the tokenId value from above for the value of the iPlanetDirectoryPro header.

curl -X POST \
'http://openam.example.com:8080/secure/json/realms/root/agents/?_action=create' \
-H 'Accept-API-Version: resource=3.0, protocol=1.0' \
-H 'Content-Type: application/json' \
-H 'iPlanetDirectoryPro: djyfFX3h97jH2P-61auKig_i23o.*AAJTSQACMDEAAlNLABxFc2d0dUs2c1RGYndWOUo0bnU3dERLK3pLY2c9AAR0eXBlAANDVFMAAlMxAAA.*' \
-d '{
"username": "openidm",
"userpassword": "openidm",
"realm": "/",
"AgentType": ["OAuth2Client"],
"com.forgerock.openam.oauth2provider.grantTypes": [
"[0]=authorization_code"
],
"com.forgerock.openam.oauth2provider.scopes": [
"[0]=openid"
],
"com.forgerock.openam.oauth2provider.tokenEndPointAuthMethod": [
"client_secret_basic"
],
"com.forgerock.openam.oauth2provider.redirectionURIs": [
"[0]=http://openidm.example.com:8080/"
],
"isConsentImplied": [
"true"
],
"com.forgerock.openam.oauth2provider.postLogoutRedirectURI": [
"[0]=http://openidm.example.com:8080/",
"[1]=http://openidm.example.com:8080/admin/"
]
}'
Create Relying Party

The OAuth Client should be created.

RP / Client created

Integrate IDM and AM

AM is now configured as an OIDC provider and has an OIDC Relying Party for IDM to use, so now we can configure the final step, that is, tell IDM to outsource authentication to AM.

Feel free to modify and copy this authentication.json file directly into your ~/openidm/samples/full-stack/conf folder or follow these steps to configure.

Browse to Configure, then Authentication.

Configure authentication

Authentication will currently be set to Local; select ForgeRock Identity Provider.

Outsource authentication to AM

After you click above, the Configure ForgeRock Identity Provider page will pop up.

Set the appropriate values for

  • Well-Known Endpoint
  • Client ID
  • Client Secret
  • Note that the common datastore is set to the DS shared repository, leave this as is.

You can change the others to match your environment but be careful as the values must match those set in the AM OP/RP configuration above. You can also refer to the sample authentication.json. Once completed press Submit. You will be asked to re-authenticate.
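
For reference, with the AM deployment and the openidm client created above, the values would look something like the following (the well-known URL assumes the Top Level Realm; adjust the path if you used a different realm):

Well-Known Endpoint: http://openam.example.com:8080/secure/oauth2/.well-known/openid-configuration
Client ID: openidm
Client Secret: openidm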

IDM authentication settings

Testing the integrated environment

Everything is now configured so we are ready to test end to end.

Test End User UI

Browse to the IDM End User UI.

http://openidm.example.com:8080/

You will be directed to AM for login, so let’s log in with the test user we created earlier.

Login as test user

After authentication you will be directed to IDM End User UI welcome page.

Login success

Congratulations you did it!

Test IDM Admin UI

In this test you’ll login to AM as amadmin and IDM will convert this to a session for the IDM administrator openidm-admin.

This is achieved through the following script:

~/openidm/bin/defaults/script/auth/amSessionCheck.js

Specifically this code snippet is used:

if (security.authenticationId.toLowerCase() === "amadmin") {
    security.authorization = {
        "id" : "openidm-admin",
        "component" : "internal/user",
        "roles" : ["internal/role/openidm-admin", "internal/role/openidm-authorized"],
        "moduleId" : security.authorization.moduleId
    };
}

In a live environment you should not use amadmin or openidm-admin, but create your own delegated administrators; however, in this case we will stick with the sample.

Browse to the IDM Admin UI.

http://openidm.example.com:8080/admin/

You will be directed to AM for login; log in with amadmin.

Login as administrator

After authentication you will be directed to IDM End User UI welcome page.

Note: This may seem strange as you requested the Admin UI but it is expected as the redirection URI is set to this page (don’t change this as you can just browse to admin URL after authentication).

Login success

On the top right click the drop down and select Admin, or alternatively just hit the IDM Admin UI URL again.

Switch to Admin UI

You’ll now be directed to the Admin UI and as an IDM administrator you should be able to browse around as expected.

IDM Admin UI success

Congratulations you did it!

Finished.

References

  1. https://backstage.forgerock.com/docs/idm/6.5/samples-guide/#chap-full-stack
  2. https://forum.forgerock.com/2018/05/forgerock-identity-platform-version-6-integrating-idm-ds/
  3. https://forum.forgerock.com/2016/02/forgerock-full-stack-configuration/

This blog post was first published @ https://medium.com/@marknienaber included here with permission.

Deploying the ForgeRock platform on Kubernetes using Skaffold and Kustomize

If you are following along with the ForgeOps repository, you will see some significant changes in the way we deploy the ForgeRock IAM platform to Kubernetes.  These changes are aimed at dramatically simplifying the workflow to configure, test and deploy ForgeRock Access Manager, Identity Manager, Directory Services and the Identity Gateway.

To understand the motivation for the change, let’s recap the current deployment approach:

  • The Kubernetes manifests are maintained in one git repository (forgeops), while the product configuration is another (forgeops-init).
  • At runtime, Kubernetes init containers clone the configuration from git and make it available to the component using a shared volume.

The advantage of this approach is that the docker container for a product can be (relatively) stable. Usually it is the configuration that is changing, not the product binary.

This approach seemed like a good idea at the time, but in retrospect it created a lot of complexity in the deployment:

  • The runtime configuration is complex, requiring orchestration (init containers) to make the configuration available to the product.
  • It creates a runtime dependency on a git repository being available. This isn’t a show stopper (you can create a local mirror), but it is one more moving part to manage.
  • The helm charts are complicated. We need to weave git repository information throughout the deployment. For example, putting git secrets and configuration into each product chart. We had to invent a mechanism to allow the user to switch to a different git repo or configuration – adding further complexity. Feedback from users indicated this was a frequent source of errors.
  • Iterating on configuration during development is slow. Changes need to be committed to git and the deployment rolled to test out a simple configuration change.
  • Kubernetes rolling deployments are tricky. The product container version must be in sync with the git configuration. A mistake here might not get caught until runtime.

It became clear that it would be *much* simpler if the products could just bundle the configuration in the docker container so that it is “ready to run” without any complex orchestration or runtime dependency on git.

[As an aside, we often get asked why we don’t store configuration in ConfigMaps. The short answer is: We do – for top level configuration such as domain names and global environment variables. Products like AM have large and complex configurations (~1000 json files for a full AM export). Managing these in ConfigMaps gets to be cumbersome. We also need a hierarchical directory structure – which is an outstanding ConfigMap RFE]

The challenge with the “bake the configuration in the docker image” approach is that it creates *a lot* of docker containers. If each configuration change results in a new (and unique) container, you quickly realize that automation is required to be successful.
About a year ago, one of my colleagues happened to stumble across a new tool from Google called skaffold.  From the documentation:

“Skaffold handles the workflow for building, pushing and deploying your application.
So you can focus more on application development”
To some extent skaffold is syntactic sugar on top of this workflow:

docker build; docker tag; docker push;
kustomize build | kubectl apply -f -

Calling it syntactic sugar doesn’t really do it justice, so do read through their excellent documentation.

There isn’t anything that skaffold does that you can’t accomplish with other tools (or a whack of bash scripts), but skaffold focuses on smoothing out and automating this basic workflow.
A key element of Skaffold is its tagging strategy. Skaffold will apply a unique tag to each docker image (the tagging strategy is pluggable, but is generally a sha256 hash, or a git commit). This is essential for our workflow where we want to ensure that the combination of the product (say AM) and a specific configuration is guaranteed to be unique. By using a git commit tag on the final image, we can be confident that we know exactly how a container was built including its configuration. This also makes rolling deployments much more tractable, as we can update a deployment tag and let Kubernetes spin down the older container and replace it with the new one.

If it isn’t clear from the above, the configuration for the product lives inside the docker image, and that in turn is tracked in a git repository. If for example you check out the source for the IDM container: https://github.com/ForgeRock/forgeops/tree/master/docker/idm

You will see that the Dockerfile COPYs the configuration into the final image. When IDM runs, its configuration will be right there, ready to go.
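
As a purely illustrative sketch of that pattern (this is not the actual forgeops Dockerfile; the base image name and paths are assumptions):

# base product image - illustrative name only
FROM forgerock/openidm:6.5
# bake the configuration into the image so it is ready to run
COPY conf /opt/openidm/conf
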
Skaffold has two major modes of operation. The “run” mode is a one shot build, tag, push and deploy. You will typically use skaffold run as part of a CD pipeline: watch for a git commit, and invoke skaffold to deploy the change. Again – you can do this with other tools, but Skaffold just makes it super convenient.

Where Skaffold really shines is in “dev” mode. If you run skaffold dev, it will run a continual loop watching the file system for changes, and rebuilding and deploying as you edit files.

This diagram (lifted from the skaffold project) shows the workflow:

This process is really snappy. We find that we can deploy changes within 20-30 seconds (most of that is just container restarts). When pushing to a remote GKE cluster, the first deployment is a little slower as we need to push all those containers to gcr.io, but subsequent updates are fast as you are pushing configuration deltas that are just a few KB in size.

Note that git commits are not required during development. A developer will iterate on the desired configuration, and only when they are happy will they commit the changes to git and create a pull request. At this stage a CD process will pick up the commit and deploy the change to a QA environment. We have a simple CD sample using Google Cloudbuild.
At this point we haven’t said anything about helm and why we decided to move to Kustomize.

Once our runtime deployments became simpler (no more git init containers, simpler ConfigMaps, etc.), we found ourselves questioning the need for complex helm templates. There was some internal resistance from our developers on using golang templates (they *are* pretty ugly when combined with yaml), and the security issues raised by Helm’s Tiller component raised additional red flags.

Suffice to say, there was no compelling reason to stick with Helm, and transitioning to Kustomize was painless. A shout out to the folks at Replicated – who have a very nifty tool called ship, that will convert your helm charts to Kustomize. The “port” from Helm to Kustomize took a couple of days. We might look at Helm 3 in the future, but for now our requirements are being met by Kustomize. One nice side effect that we noticed is that Kustomize deployments with skaffold are really fast.

This work is being done on the master branch of forgeops (targeting the 7.0 release), but if you would like to try out this new workflow with the current (6.5.2) products, you are in luck!  We have a preview branch that uses the current products.
The following should just work(TM) on minikube:

cd forgeops
git checkout skaffold-6.5
skaffold dev

There are some prerequisites that you need to install. See the README-skaffold.

The initial feedback on this workflow has been very positive. We’d love for folks to try it out and let us know what you think. Feel free to reach out to me at my ForgeRock email (warren dot strange at forgerock.com).

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

Use ForgeRock Access Manager to provide MFA to Linux using PAM Radius

Use ForgeRock Access Manager to provide Multi-Factor Authentication to Linux

Introduction

Our aim is to set up an integration to provide Multi-Factor Authentication (MFA) to the Linux (Ubuntu) platform using ForgeRock Access Manager. The integration uses a pluggable authentication module (PAM) to point to a RADIUS server. In this case AM is configured as a RADIUS server.

We achieve the following:

  1. Outsource Authentication of Linux to ForgeRock Access Manager.
  2. Provide an MFA solution to the Linux Platform.
  3. Configure ForgeRock Access Manager as a RADIUS Server.
  4. Configure PAM on the Linux server to point to our new RADIUS Server.

Setup

  • ForgeRock Access Manager 6.5.2 Installed and configured.
  • OS — Ubuntu 16.04.
  • PAM exists on your server (this is common these days and you’ll find PAM here: /etc/pam.d/).

Configuration Steps

Configure a chain in AM

Firstly we configure a simple Authentication Chain in Access Manager with two modules.

a. First module – DataStore.

b. Second Module – HOTP. Email configured to point to local fakesmtp server.

Simple Authentication Chain

Configure ForgeRock AM as a RADIUS Server

Now we configure AM as a RADIUS Server.

a. Follow steps here: https://backstage.forgerock.com/docs/am/6.5/radius-server-guide/#chap-radius-implementation

RADIUS Server setup

Secondary Configuration — i.e. RADIUS Client

We have to configure a trusted RADIUS client, our Linux server.

a. Enter the IP address of the client (Linux Server).

b. Set the Client Secret.

c. Select your Realm — I used the top level realm (don’t do this in production!).

d. Select your Chain.

Configure RADIUS Client

Configure pam_radius on Linux Server (Ubuntu)

Following these instructions, configure pam_radius on your Linux server:

https://www.wikidsystems.com/support/how-to/how-to-configure-pam-radius-in-ubuntu/

a. Install pam_radius.

sudo apt-get install libpam-radius-auth

b. Configure pam_radius to talk to RADIUS server (In this case AM).

sudo vim /etc/pam_radius_auth.conf

i.e. <AM Server VM>:1812 secret 1

Point pam_radius to your RADIUS server

Tell SSH to use pam_radius for authentication.

a. Add this line to the top of the /etc/pam.d/sshd file.

auth sufficient pam_radius_auth.so debug

Note: debug is optional and has been added for testing, do not do this in production.

pam_radius is sufficient for authentication

Enable Challenge response for MFA

Tell your sshd config to allow challenge/response and use PAM.

a. Set the following values in your /etc/ssh/sshd_config file.

ChallengeResponseAuthentication yes

UsePAM yes

Create a local user on your Linux server and in AM

In this simple use case you will require a separate account on your Linux server and in AM.

a. Create a Linux user.

sudo adduser test

Note: Make sure the user has a different password than the user in AM to ensure you’re not authenticating locally. Users may have no password if your system allows it, but in this demo I set the password to some random string.

Create Linux User

b. Ensure the user is created in AM with an email address.

User exists in AM with an email address for OTP

Test Authentication to Unix via SSH

It’s now time to put it all together.

a. I recommend you tail the auth log file.

tail -f /var/log/auth.log

b. SSH to your server using:

ssh test@<server name>

c. You should be authenticating to the first module in your AM chain so enter your AM Password.

d. You should be prompted for your OTP, check your email.

OTP generated and sent to mail attribute on user

e. Enter your OTP and press enter then enter again (the UI i.e. challenge/response is not super friendly here).

f. If successfully entered you should be logged in.

You can follow the Auth logs, as well as the AM logs i.e. authentication.audit.json to view the process.

The End.

This blog post was first published @ https://medium.com/@marknienaber included here with permission.

Next Generation Distributed Authorization

Many of today’s security models spend a lot of time focusing upon network segmentation and authentication.  Both of these concepts are critical in building out a baseline defensive security posture.  However, there is a major area that is often overlooked, or at least simplified to a level of limited use.  That of authorization.  Working out what, a user, service, or thing, should be able to do within another service.  The permissions.  Entitlements.  The access control entries.  I don’t want to give an introduction into the many, sometimes academic acronyms and ideas around authorization (see RBAC, MAC, DAC, ABAC, PDP/PEP amongst others). I want to spend a page delving into the some of the current and future requirements surrounding distributed authorization.

New Authorization Requirements

Classic authorization modelling, tends to have a centralised policy decision point (PDP) – a central location where applications, agents and other SDK’s call, in order to get a decision regarding a subject/object/action combination.  The PDP contains signatures (or policies) that map the objects and actions to a bunch of users and services.

That call out process is now a bottleneck, for several reasons.  Firstly the number of systems being protected is rapidly increasing, with the era of microservices, API’s and IoT devices all needing some sort of access control.  Having them all hitting a central PDP doesn’t seem a good use of network bandwidth or central processing power.  Secondly, that increase in objects, also gives way to a more mesh and federated set of interactions such as the following.

Distributed Enforcement

This gives way to a more distributed enforcement requirement.  How can the protected object perform an access control evaluation without having to go back to the mother ship?  There are a few things that could help.

Firstly we need to probably achieve three things.  Work out what we need to identify the calling user or service (aka authentication token), map that to what that identity can do, before finally making sure that actually happens.  The first part, is often completed using tokens – and in the distributed world a token that has been cryptographically signed by a central authority.  JSON Web Tokens (JWTs) are popular, but not the only approach.

The second part – working out what they can do – could be handled in two slightly different ways.  One, is the calling subject brings with them what they can do.  They could do this by having the cryptographically signed token, contain their access control entries.  This approach, would require the service that issues tokens, to also know what the calling user or service could do, so would need to have knowledge of the access control entries to use.  That list of entries, would also need things like governance, audit, version control and so on, but that is needed regardless of where those entries are stored.

So here, a token gets issued and the objects being protected, have a method to cryptographically validate the presented token, extract the access control entries (ACE) and enforce what is being asked.

Having a token that contains the actual ACE, is not that new.  Capability Based Access Control (CBAC) follows this concept, where the token could contain the object and associated actions.  It could also contain the subject identifier, or perhaps that could be delivered as a separate token.  A similar practical implementation is described in Google’s Macaroons project.
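
As a purely illustrative sketch (the entitlements claim name and structure below are hypothetical, not a standard), a signed token carrying its own access control entries might have a payload along these lines:

{
  "iss": "https://as.example.com",
  "sub": "bob",
  "exp": 1735689600,
  "entitlements": [
    { "object": "meeting-room", "actions": ["open"] }
  ]
}
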
What we’ve achieved here, is to basically remove the access control logic from the object or service, but equally, removed the need to perform a call back to a policy mother ship.

A subtly different approach, is to pass the access control logic back down to the object – but instead of it originating within the service itself – it is still owned and managed by a central authority – just distributed to the edges.

This allows for local enforcement, but central governance and management.  Modern distribution technologies like web sockets, or even flat file formats like JSON and YAML, allow for repave and replace approaches as policy definitions change, which fits nicely into devops deployment models.

The object itself, would still need to know a few things to make the enforcement complete – a token representing the user or service and some context to help validate the request.

Contextual Integration

Access control decisions generally require the subject, the object and any associated actions.  For example, subject=Bob could perform actions=open on object=Meeting Room.  Another dimension that is now required, especially within zero trust based approaches, is that of context.  In Bob’s example, context may include time of day, day of the week, or even the project he is working on.  They could all impact the decision.  Previous access control requests and decisions could also come into play here.  For example, say Bob was just given access to the Safe Room where the gold bullion was stored, perhaps his request to gain access to the Back Door is denied.

The capturing of context, both during authentication time and during authorization evaluation time is now critical, as it allows the object to have a much clearer understanding of how to handle access requests.

ML – Defining Normal

I’ve talked a lot so far about access control logic and where that should sit.  Well, how do we know what that access control logic looks like?  I spent many a year, designing role based access control systems (wow, that was 10+ years ago), using a system known as role mining.  Big data crunching before machine learning was in vogue.  Taking groups of users and trying to understand what access control patterns existed, and trying to shoe horn the results into business and technical roles.

Today, there are loads of great SaaS based machine learning systems, that can take user activity logs (logs that describe user to application interactions) and provide views on whether their activity levels are “normal” – normal for them, normal for their peers, their business unit, location, purchasing patterns and so on.  The output of that process, can be used to help define the initial baseline policies.

Enforcing access based on policies though is not enough.  It is time consuming and open to many, many avenues of circumvention.  Machine learning also has a huge role to play within the enforcement aspect too, especially as the idea of context and what is valid or not, becomes a highly complicated and ever changing question.

One of the key issues of modern authorization, is the distinction between access control logic, enforcement and the vehicles used to deliver the necessary parts to the protected services.

If at all possible, they should be as modular as possible, to allow for future proofing and the ability to design a secure system that is flexible enough to meet business requirements, scale out to millions of transactions a second and integrate thousands of services simultaneously.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

Implementing JWT Profile for OAuth2 Access Tokens

There is a new IETF draft stream called JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens.  This is a very early version 0 draft that looks to describe the format of OAuth2 issued access_tokens.

Access tokens, are typically bearer tokens, but the OAuth2 spec, doesn’t really describe what format they should be.  They typically end up being two high level types – stateless and stateful.  Stateful just means “by reference”, with a long opaque random string being issued to the requestor, which resource servers can then send back into the authorization service, in order to introspect and validate.  On their own, stateful or reference tokens, don’t really provide the resource servers with any detail.

The alternative is to use a stateless token – namely a JSON Web Token (JWT).  This new spec, aims to standardise what the content and format should be.

From a ForgeRock AM perspective, this is good news.  AM has delivered JWT based tokens (web session, OIDC id_tokens and OAuth2 access_tokens) for a long time.  The format and content of the access_tokens, out of the box, generally look something like the following:

The out of the box header (using RS256 signing):

The out of the box payload:

Note there is a lot of stuff in that access_token.  Note the cnf claim (confirmation key).  This is used for proof of possession support which is of course optional, so you can easily reduce the size by not implementing that.  There are several claims, that are specific to the AM authorization service, which may not always be needed in a stateless JWT world, where perhaps the RS is performing offline validation away from the AS.

In AM 6.5.2 and above, new functionality allows for the ability to rapidly customize the content of the access_token.  You can add custom claims, remove out of the box fields and generally build token formats that suit your deployment.  We do this, by the addition of scriptable support.  Within the settings of the OAuth2 provider, note the new field for OAuth2 Access Token Modification Script.

The scripting ability, was already in place for OIDC id_tokens.  Similar concepts now apply.

The draft JWT profile spec, basically mandates iss, exp, aud, sub and client_id, with auth_time and jti as optional.  The AM token already contains those claims.  The perhaps only differing component, is that the JWT Profile spec –  section 2.1 – recommends the header typ value be set to “at+JWT” – meaning access token JWT, so the RS does not confuse the token as an id_token.  The FR AM scripting support, does not allow for changes to the typ, but the payload already contains a tokenName claim (value as access_token) to help this distinction.

If we add a couple of lines to the out of the box script, namely the following, we cut back the token content to the recommended JWT Profile:

accessToken.removeField("cts");
accessToken.removeField("expires_in");
accessToken.removeField("realm");
accessToken.removeField("grant_type");
accessToken.removeField("nbf");
accessToken.removeField("authGrantId");
accessToken.removeField("cnf");

The new token payload is now much more slimmed down:

The accessToken.setField("name", "value") method allows simple extension and alteration of standard claims.
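
For example, to add a custom claim (the claim name and value here are purely illustrative):

accessToken.setField("tenant", "emea");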

For further details see the following documentation on scripted token content – https://backstage.forgerock.com/docs/am/6.5/oauth2-guide/#modifying-access-tokens-scripts

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Directory Services – Docker, Kubernetes: Friends or Foes?

Two weeks ago, at the ForgeRock Identity Live conference, I did a talk about ForgeRock Directory Services (DS) in the Docker/Kubernetes (K8S) world, trying to answer the question whether DS and Docker/K8S were friends or foes.

Before I dive into the question, let me say that it’s obvious that our whole industry is moving to the Cloud, and that Docker/Kubernetes are becoming the standard way to deploy software in the Cloud, in any Cloud. Therefore whether DS and K8S are ultimately friends or foes is not the right question. I believe it is unavoidable and that in the near future we will deploy and fully support Directory Services in K8S. But is it a good idea to do it today? Let’s examine why we are questioning this today, what are the benefits of using Kubernetes to deploy software, what are the constraints of deploying the current version of Directory Services (6.5) in Kubernetes, and what ForgeRock is working on to improve DS in K8S. Finally I will highlight why Directory Services is a good solution to persist data, whether it’s on premise or in the Cloud. 

Why the discussion about DS and K8S?

The main reason we are having this discussion is due to the nature of Directory Services. DS is not the usual stateless web application. Directory Services is both a stateful application and a distributed one. These are two main aspects that require special care when trying to deploy in containers. First Directory Services is a stateful application because it is the place where one can store the state for all these stateless web-applications. In our platform, we use DS to store ForgeRock Access Management data, whether it’s runtime configuration data, tokens and user identities. Second Directory Services is a distributed application because instances need to talk with each other so that the data is replicated and consistent. Because databases and distributed applications require stronger orchestration and coordination between elements of the system, they are implemented as Stateful Sets in the Kubernetes world, and make use of Persistent Volumes (PV). Therefore our Cloud Deployment Model of ForgeRock Directory Services is also implemented this way.

It’s worth noting that Persistent Volume is a Kubernetes API and there are several types of volumes and many different providers implementations. Some of the PV types are very recent and still beta versions. So, when using Kubernetes for applications that persist data, you should have a good understanding of the characteristics and the performance of the Persistent Volumes choices that are available in your environment.

Benefits of Containers and Kubernetes

Developers are making a great use of containers because it simplifies focus on what they have to build and test. Instead of spending hours figuring how to install and configure a database, and build a monitoring platform to validate their work, they can pull one or more docker images that will automate this task.

When going into production, the automation is a key aspect. Kubernetes and its family of tools, allow administrators to describe their target architectures, automate deployment, monitoring and incident response. Typically in a Kubernetes cluster, if the administrator requires at least 3 instances of an application, Kubernetes will react to the disappearance of an instance and will restart a new one immediately. Another key benefit of Kubernetes is auto-scalability. The Kubernetes deployment can react to monitoring alerts or external signals to add or remove instances of an application in order to support a greater or smaller workload. This optimises the cost of running the solution, balancing the capacity to absorb peak loads with the cost of running at normal or low usage levels.

Directory Services 6.5 constraints in K8S

But auto-scaling is not something that is suitable to all applications, and typically Directory Services, like most of the databases, does not scale automatically by adding more running instances. Because databases have state and data, and expect exclusive access to the files, adding a new replica is a costly operation. The data needs to be duplicated in order to let another instance using it. Also, adding a Directory Services instance only helps to scale read operations. A write operation on any server will need to be replicated to all other servers. So all servers will have the same write throughput and the same amount of disk I/Os. In the world of databases, the only way to scale write operations is to distribute (shard) the data to multiple servers. Such capability is not yet available in Directory Services, but it’s planned for future releases. (Note that Directory Proxy Services 6.5 already has support for sharding, but with some constraints. And the proxy is not yet part of the Cloud Deployment Model).

Another constraint of Directory Services 6.5 is how replication works. The DS replication feature was designed years ago when customers would deploy servers and would not touch them unless they were broken. Servers had stable hostnames or IP addresses and would know all of their peers. In the container world, the address of an instance is only known after the instance is started. And sometimes you want to start several instances at the same time. The current ForgeRock Cloud Deployment Model and the Directory Services docker images that we propose, work around the design limitation of replication management, by pre-configuring replication for a fixed (and small) maximum number of replicas. It’s not possible to dynamically add another replica after that. Also, the “dsreplication” utility cannot be used in Kubernetes. Luckily, monitoring replication and more importantly its latency is possible with Prometheus which is the default monitoring technology in Kubernetes.

Coming Improvements in Directory Services

For the past year, we’ve been working hard on redesigning how we manage and bootstrap replication between Directory Services instances. Our main challenge with that work has been to do it in a way that allows us to continue to replicate with previous versions. Interoperability and compatibility of replication between different versions of Directory Services has been and will remain a key value of the product, allowing customers to roll out new versions with zero downtime of the service. We’re moving towards using full CA-based certificates and mutual TLS authentication for establishing trust between replicas. Configuring a new replica will no longer require updating all servers in the topology, and replicas that are uninstalled or stopped for some time will be automatically removed from the topology (and so will be their associated change logs and meta-data). When starting a new replica, it will only need to know of one other running replica (or be told that it is the first one). These changes will make automating the deployment of new replica much simpler and remove the limit to the number of replicas. We are also improving the way we are doing backup and restore of a database backend or the whole server, allowing to directly use cloud buckets such as S3 or GCS. All of these things are planned for the next major release due in the first half of 2020. Most of these features will be used by our own ForgeRock Identity Platform as a Service offering that will go in stages of Early Access and Beta later this year.

Once we have the ability to fully automate the deployment and the upgrade of a cluster of Directory Services instances, in one or more data-centres, we will start working on horizontal scalability for Directory Services, and provide a way to scale the number of servers as the data stored grows, allowing a consistent level of write throughput. All of this fully automated to be deployed in the Cloud using Kubernetes.

Benefits of using Directory Services as a data store

Often people ask me why they should use ForgeRock Directory Services rather than a “real” database. First of all, Directory Services is a database. It’s a specialised database, built on a standard data model and a standard access protocol: the Lightweight Directory Access Protocol, aka LDAP. Several people have pointed out that LDAP may well have been the first successful NoSQL database! 🙂 Furthermore, Directory Services also exposes all of the data through a REST/JSON API, while still providing the same security and fine-grained access control mechanisms as LDAP. But the main value of Directory Services is that you can achieve very high availability of the data (in the five nines), using standard systems (whether bare metal, virtual hosts or containers), even with worldwide geographic distribution. We have many customers that have deployed a single directory service distributed across 3 to 6 data centres around the globe. The LDAP data model has a flexible schema that can be extended and customised without rebuilding the database or even restarting the servers. The data can even be exposed through versioned APIs using our REST API. Finally, the combination of a flexible, extensible schema with fine-grained access controls allows multiple applications to access the data, while giving great control over which application can read or write which data. This results in a single identity and set of credentials per user, but multiple sets of attributes that can be shared between applications or restricted to a single one: a single, central view of the user that is easier and more cost effective to manage.
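As an illustration of that dual LDAP/REST access (a sketch only: the REST endpoint mapping, hostnames, ports and credentials below are assumptions and depend on how the REST-to-LDAP service is configured), the same user attribute can be read either over LDAP or over HTTP:

$ ldapsearch -h ds.example.com -p 1389 -D "uid=bjensen,ou=People,dc=example,dc=com" \
    -w hifalutin -b "dc=example,dc=com" "(uid=bjensen)" mail

$ curl --user bjensen:hifalutin \
    "https://ds.example.com:8443/api/users/bjensen?_fields=mail&_prettyPrint=true"

Both requests are subject to the same access controls, so an application only ever sees the attributes it has been granted.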

Conclusion

Coming back to Kubernetes: because of the constraints of the current Directory Services Cloud Deployment Model with version 6.5, we recommend that you keep your Directory Services deployed in VMs or on bare metal. With the next release, which underpins the ForgeRock Cloud offering, we will fully support deploying Directory Services on Docker/Kubernetes, and we will continue to invest in the product to support auto-scaling (using data sharding) in subsequent releases. Building these solutions is not extremely difficult, but we need time to prove that they are 100% reliable in all conditions, because in the end, the most wanted and appreciated feature of ForgeRock Directory Services is its reliability.

This blog post was first published @ ludopoitou.com, included here with permission.

How To Build An Authentication Platform

Today’s authentication requirements go way beyond hooking into a database or directory and challenging every user and service for an ID and password.  Authentication, and the login experience, is the application entry point and can make or break your security posture and end user experience.

Authentication is typically about identifying, to a certain degree of assurance, who or what you are interacting with.  Authorization is about identifying and controlling what that person or thing can do.  This blog is focused on the former, but I might stray into the latter from time to time.

There are numerous use cases that a modern enterprise needs to fulfil if authentication services are to deliver value.  These can include:

  • Authentication for a service or API
  • Device authentication
  • Metrics, timing and analytics of flows
  • Threat intelligence integration
  • Anonymous to known authentication profiling
  • Contextual analysis
In addition to the basic functional requirements, there are several non-functional basics too.  These are going to include:
  • Simple customisation
  • Being highly available
  • Stateless and elastic
  • Simple integrations
  • API first
I’m going to take some of these key requirements and describe them in a little more detail.

Non-Identity Intelligence

From a feature perspective, the new requirements consistently rely upon intelligence: the new buzz in the cyber security world.  Every week a new, more consolidated threat intelligence tool comes to market.  Organisations up and down the land are rapidly building out Security Operations Centres (SOCs), with wily ex-military veterans creating strategies and starry-eyed graduates analysing SIEM and NIDS logs.  We need data.  We have data.  What we need is information.  Actionable intelligence.  Intelligence can be rapidly integrated into any number of different security architecture components.
Intelligence here is basically a focus upon non-identity data signals: sources of malware, malicious IP addresses, app assurance ratings, breached credentials data and so on.
The vast breadth and depth of cyber threat intelligence (CTI) sources is staggering.  Free, chargeable, subscription based, cloud based: you name it, it’s available.  A common factor must be simplicity of integration – ideally via something like a REST/JSON-based API that developers are familiar with.  Long-tail integration must be avoided too, with the ability to swap sources out and a zero barrier to exit being important.  This last point is extremely important: you need to be able to future-proof your data inputs.
Whatever you want to integrate today will be out of date tomorrow.
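To make that concrete with a purely hypothetical sketch (the service, URL and token below are invented for illustration only), the kind of integration an authentication platform should make trivial is a single HTTPS call per login attempt, for instance to score the originating IP address:

$ curl -s -H "Authorization: Bearer $CTI_API_TOKEN" \
    "https://intel.example.com/v1/ip-reputation/203.0.113.42"

If a provider with better data or a friendlier contract comes along, swapping that call out should be a configuration change, not a re-integration project.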

Integration

Integration is not just limited to threat intelligence sources.  This is really just a non-functional requirement, but I want to spend some time on it.  It is quite common to find that legacy (I hate this word; let’s call them “classic” or initial) authentication products are generally difficult to integrate against and extend.
Many systems integrators (SIs) – and many do excellent jobs in highly challenging environments – will work tirelessly, and at considerable cost, to add different authentication modalities, customize one-time password options, integrate with difficult LDAP account lockout options, mobile-ise and more.  These “integration” steps are often described as non-BAU: they require change control and are charged via a time and materials or scope-creep premium model.  Integration costs in a modern system really need to be minimized, if not removed.  Authentication is becoming so fluid that changes, including new authentication factors, data sources, UI flows and so on, should be a standard operator journey.

Roadmapping

So why is integration such an issue?  A common problem of historical authentication deployments has been a lack of foresight. In honesty, foresight and robust road mapping have never been real requirements for a login system.  Login using usernames and passwords, and occasionally MFA, was pretty much it.  Like it or lump it.  Well, in today’s digitised ecosystems, new requirements pop up daily.  Think of the following basic scenarios that will impact an authentication system:
  • New go-to-market initiatives requiring localization
  • A new product that requires new APIs and apps
  • A merger resulting in differing regulatory compliance requirements
  • New attack patterns and vector discovery
  • Competitive innovations
  • Commodity innovations
If you look at your authentication services library and compare it to the applications and users consuming those services, do you know their functional and non-functional requirements, business objectives and challenges for the next 12-18 months?  Some will, so the underlying authentication service needs to a) have a road map and b) be able to accommodate new requirements and demands in an agile and iterative fashion.
Part of this is technical and part of it is operational management.  The business owners of an authentication platform need to engage with the new stakeholders in the login journey.  The login process is basically the application from an end user perspective.  It needs to uphold security whilst improving the user experience.  Requirements gathering must be a fully integrated process, not just for application development, but for identity and authentication services too.

Platform versus Product

I purposefully chose the word platform in the title, as opposed to service or product.  Modern authentication is a platform.  It powers transformation by supporting the APIs, applications and services that allow organisations to create value-driven software.  It becomes the wiring in the hotel that allows all of the auxiliary products and shiny things to flourish.
Many point authentication products exist, and I am not discrediting them by any means.  Best-of-breed point solutions for biometry, mobile SDK integration, device operating system or behaviour profiling exist and will need integrating with the underlying platform.  They are integration points.  Cogs inside a bigger machine.
The glue that drives the business value, however, will be the authentication platform, capable of delivering a range of services to different applications, user communities, geographies and customers.  A single product is unlikely to be able to achieve this.
In summary, authentication has become a critical component, not only for securing user- and data-centric integrations, but also for helping to deliver continuous modernization of the enterprise.
It has become a foundational component that requires a wide breadth of coverage, coupled with agility and extensibility.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

ForgeRock DS and the LDAP Relax Rules Control

In ForgeRock Directory Services 6.5, we’ve added support for the LDAP Relax Rules Control, both on the server and in our client tools. One of my colleagues, involved with customer deployments, asked me why we added the control and what it should be used for.

The LDAP Relax Rules Control is an LDAP extension that allows a directory user agent (a client) to request that the directory service temporarily relax enforcement of various data and service model rules. The internet-draft is explicit about which rules can and cannot be relaxed. Typically it can be used to allow a client to write specific operational attributes that would otherwise be read-only and managed by the server.

Starting with OpenDJ 3.0, we removed the ability to bulk import LDIF data to a server while preserving the existing data (the “append mode”). First, performing an import-ldif in append mode was breaking replication: the import needed to be applied to all replicas, while no changes could happen to the new data. The process was cumbersome, especially with multiple data centres. Removing this feature also allowed us to have a more generic interface and to implement multiple backends using different underlying key-value stores.

But we have a few customers that occasionally need to bulk load a large set of users into their directory service. In DS 6.0, we added an option to speed up bulk operations using ldapmodify or ldapdelete: --numConnections. Instead of serialising all the updates or adds contained in an LDIF file, the tool runs them in parallel across multiple connections, while also handling dependencies between changes. With this option, some of our customers have added several million users to their replicated directory services in minutes. By controlling the number of connections, one can also balance the need for fast bulk loading against the need to keep bandwidth for regular client applications.

Doing bulk updates over LDAP is now fast, but some customers used the import process to also carry over attributes that are usually managed by the directory server and are thus read-only, such as createTimestamp and creatorsName.

And this is specifically what the Relax Rules Control is meant to allow.

So, if you need to bulk load a large set of data, or synchronise data over LDAP from another server, and need to preserve some of the operational attributes, you can use the Relax Rules Control as illustrated below. Note that the OID for the control is 1.3.6.1.4.1.4203.666.5.12, but ForgeRock DS tools also recognise the RelaxRules string alias.

$ ldapmodify -p 1389 -D "cn=Directory Manager" -w secret12 \
    -J RelaxRules:true --numConnections 4 ../50Kusers.ldif
...
ADD operation successful for DN uid=user.10021,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10022,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10001,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10020,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10026,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10025,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10024,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10005,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10033,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10029,ou=People,dc=example,dc=com
...

Note that because the Relax Rules Control allows clients to override some of the rules normally enforced by the server, it’s important to control and restrict which clients or users are allowed to make use of it. In ForgeRock DS, you would use ACIs (global or not) to define who has permission to use the control. Out of the box, only Directory Manager can, because it has the bypass access controls privilege. Check the “Use Control or Extended Operation” section of the Administration Guide for details on how to allow a user to use a control.
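For illustration only, a global ACI of roughly the following shape could grant a dedicated provisioning account the right to use the control; the host, port, credentials and account DN are assumptions, and you should follow the Administration Guide for the exact syntax supported by your version:

$ dsconfig set-access-control-handler-prop \
    --hostname ds.example.com --port 4444 \
    --bindDN "cn=Directory Manager" --bindPassword secret12 \
    --add global-aci:'(targetcontrol="1.3.6.1.4.1.4203.666.5.12")(version 3.0; acl "Allow Relax Rules control"; allow(read) userdn="ldap:///uid=provisioning,ou=People,dc=example,dc=com";)' \
    --trustAll --no-prompt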

This blog post was first published @ ludopoitou.com, included here with permission.

Explaining index-entry-limit in ForgeRock Directory Services / OpenDJ

A few years ago, I explained the various resource limits in OpenDJ, the open source LDAP and REST directory server. A few months ago, someone read the post and asked on Twitter about the index-entry-limit:

[Screenshot of the tweet asking about the index-entry-limit setting]

The index-entry-limit is probably the least understood parameter in the OpenDJ directory server, as was the AllIDThreshold in Sun Directory Server (and its siblings: Netscape Directory, Red Hat Directory, Oracle DSEE…). So before I dive into explaining what this parameter is, how it’s used and how it can be tuned, let me start by answering the question: how does index-entry-limit relate to other administrative limits?

Answer: it doesn’t! The index-entry-limit is an internal limit and does not really limit the results returned to clients. It just limits the resources consumed when processing indexes.

A Directory Server is a very specialized data-store based on the LDAP standard, and its primary goal was to be able to search and return user information such as email addresses or names and phone numbers, very quickly and for a large number of different clients. For that, the directory servers were designed to favor reads over writes, and read optimization was achieved through the use of indexes.

In LDAP, a search request (which can be used to read an entry or search for one or more through the whole database) contains a search filter. The filter may be simple or complex, and composed of one or more attribute value assertions.

A simple filter can be “(sn=Smith)”. Complex filters combine operators and different attributes: “(&(objectclass=Person)(|(sn=Smith)(cn=*Smith*)))” – find a person whose surname is Smith or whose common name contains Smith.
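For illustration, such a filter is simply sent as part of an LDAP search request, for example with the ldapsearch tool shipped with DS (the host, port and credentials below are placeholders):

$ ldapsearch -h ds.example.com -p 1389 -D "cn=Directory Manager" -w secret12 \
    -b "dc=example,dc=com" "(&(objectclass=Person)(|(sn=Smith)(cn=*Smith*)))" cn mail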

When ForgeRock Directory Server / OpenDJ receives a search request, it processes it in two phases. In the first phase, it analyses the search filter to identify which attributes are indexed, and then uses those indexes to build a list of candidate entries to return. If there are no indexed attributes, or the list is too large, the server decides that the list is actually the whole database. Such a search request is tagged as “unindexed”, and the server verifies that the authenticated user has the “unindexed-search” privilege before continuing. In the second phase, it reads all the candidates from the database and evaluates the full filter to decide whether to return each entry to the client or not (subject to access controls).
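As a side note, and as a sketch only (the target DN is an assumption): privileges in ForgeRock DS / OpenDJ are granted by adding values of the ds-privilege-name operational attribute to a user entry, so allowing an administrator to run unindexed searches could look like this:

$ ldapmodify -h ds.example.com -p 1389 -D "cn=Directory Manager" -w secret12
dn: uid=admin,ou=People,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: unindexed-search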

ForgeRock DS / OpenDJ implements attribute indexes as reverse indexes. That is, for a specific attribute, we keep, for each unique value, a list of the entries that contain that value. Because maintaining a large list of entries for each value of every indexed attribute can have a big cost, both in terms of memory usage and disk I/O (consider that when you add an entry to the directory, all of its indexed attributes need to be updated), we introduced a limit on the number of entries that an index record can contain: the index-entry-limit. For example, if the number of entries that contain the objectClass person exceeds the limit, then we mark the key as “full” and consider that the list of candidates is actually the whole set of directory entries. This saves us from updating and reading a very long record, and allocating lots of data, only to end up iterating through almost all entries. You might ask, why have an index for objectClass then? Well, in a directory server that contains millions of users, there are in fact very few entries that are not persons. Those entries will have their objectClass values indexed, and searching for them will be very efficient thanks to the index.

The index-entry-limit is a limit on the number of entries that can be contained in a single index record, per value of an attribute index. Its default value is 4000, which works for most medium to large scale deployments. So, why is it a configurable parameter, and when should you change it?

Because ForgeRock DS is used in many different environments, with various use cases and a wide range of entry counts (some of our customers have over 100 million entries in a directory service), we know that one size doesn’t fit all. But the default value works for most index usages. Also, the index-entry-limit can be set for each individual index, or for the whole backend (in which case the value applies to all indexes that don’t have a specific value). It is highly recommended that you only change the index-entry-limit of specific indexes, and not the backend default value.

In no case should you increase the index-entry-limit to a value close to the total number of entries in the directory. This will undermine the performance of both searches and updates, and significantly increase the footprint of the data stored on disk.

There are a few known cases where the index-entry-limit value should be changed (and equally, cases where increasing the value will only consume more resources for no performance gain). Also keep in mind that when you change the index-entry-limit, you need to rebuild the indexes for which the limit was changed. So it’s not something that you want to do often, and definitely not something that you need to adjust constantly.

Groups. When the server starts, it issues an internal search to find all group entries and caches them for better performance. The search is based on the objectClass attribute. If there are more than 4000 groups of one kind (the search covers GroupOfNames, GroupOfUniqueNames, GroupOfEntries, DynamicGroup and ds-virtual-static-group), the search will be unindexed and can take a long time to complete. In that case, you should increase the index-entry-limit for the objectClass attribute to a value just above the number of groups.

Members (or uniqueMembers). If you have more than 4000 static groups, and you know that some users are likely to be members of more than 4000 groups, then you should also increase the index-entry-limit for the member attribute (or uniqueMember) to a value just above the maximum number of groups a user can be in, especially if you have enabled the Referential Integrity plugin (which removes a user from groups when their entry is deleted).
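For example, raising the limit on the objectClass index and rebuilding it could look like the following; the administration port, credentials and the new limit value are illustrative, and the exact command options may differ between versions:

$ dsconfig set-backend-index-prop \
    --hostname ds.example.com --port 4444 \
    --bindDN "cn=Directory Manager" --bindPassword secret12 \
    --backend-name userRoot --index-name objectClass \
    --set index-entry-limit:6000 --trustAll --no-prompt

$ rebuild-index -h ds.example.com -p 4444 -D "cn=Directory Manager" -w secret12 \
    --baseDN dc=example,dc=com --index objectClass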

Another typical use case for increasing the index-entry-limit is when you have millions of entries and an attribute doesn’t have a flat distribution of values. Think about users’ surnames: in a wide population, there are probably more “Smith” or “Lee” than “Washington”. Within 10M users, would there be more than 4000 “Lee”? If that is possible, and the server receives searches with filters such as “(sn=Lee)”, then you should consider increasing the limit for the sn attribute.

Backendstat is the tool you want to use to verify the state of the indexes and whether some records have reached the index-entry-limit. For some attributes, such as objectClass, it is normal that the limit is reached. For others, such as sn, it’s probably something you want to check regularly.

The backendstat tool requires exclusive access to the database, and thus can only run against a server that is stopped (or a backup).

To list the indexes, use backendstat list-indexes:

$ backendstat list-indexes -b dc=example,dc=com -n userRoot

Index Name Raw DB Name Type Record Count
dn2id /dc=com,dc=example/dn2id DN2ID 10002
id2entry /dc=com,dc=example/id2entry ID2Entry 10002
referral /dc=com,dc=example/referral DN2URI 0
id2childrencount /dc=com,dc=example/id2childrencount ID2ChildrenCount 3
state /dc=com,dc=example/state State 18
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch MatchingRuleIndex 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 MatchingRuleIndex 31232
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match MatchingRuleIndex 10000
aci.presence /dc=com,dc=example/aci.presence MatchingRuleIndex 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch MatchingRuleIndex 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch MatchingRuleIndex 8605
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 19629
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 MatchingRuleIndex 73235
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch MatchingRuleIndex 10000
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch MatchingRuleIndex 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch MatchingRuleIndex 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch MatchingRuleIndex 10002
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch MatchingRuleIndex 10000
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 32217
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch MatchingRuleIndex 10000
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 86040
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch MatchingRuleIndex 6
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch MatchingRuleIndex 10000

Total: 23

To check the status of the indexes and see which keys are full (i.e. have exceeded the index-entry-limit), use backendstat show-index-status. Warning: this may take a long time.

$ backendstat show-index-status -b dc=example,dc=com -n userRoot
Index Name Raw DB Name Valid Confidential Record Count Over Entry Limit 95% 90% 85%
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch true false 0 0 0 0 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 true false 31232 12 0 0 0
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match true false 10000 0 0 0 0
aci.presence /dc=com,dc=example/aci.presence true false 0 0 0 0 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch true false 0 0 0 0 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch true false 8605 0 0 0 0
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 true false 19629 0 0 0 0
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 true false 73235 0 0 0 0
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch true false 10000 0 0 0 0
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch true false 0 0 0 0 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch true false 0 0 0 0 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch true false 10002 0 0 0 0
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch true false 10000 0 0 0 0
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 true false 32217 0 0 0 0
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch true false 10000 0 0 0 0
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 true false 86040 0 0 0 0
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch true false 6 4 0 0 0
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch true false 10000 0 0 0 0
Total: 18
Index: /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6
Over index-entry-limit keys: [.com] [@examp] [ample.] [com] [e.com] [exampl] [le.com] [m] [mple.c] [om] [ple.co] [xample]
Index: /dc=com,dc=example/objectClass.objectIdentifierMatch
Over index-entry-limit keys: [inetorgperson] [organizationalperson] [person] [top]

I hope this long article helps you better understand and tune your ForgeRock Directory Servers for search performance. Please let me know how it goes.

This blog post was first published @ ludopoitou.com, included here with permission.