Principles of Usable Security

I want to talk about the age-old trade-off between the simplicity of a website or app and the level of friction, restriction and inhibition associated with applying security controls. There has always been a tendency to place security at the opposite end of the cool-and-usable spectrum. If it was secure, it was ugly. If it was easy to use and cool, it was likely full of exploitable vulnerabilities. Is that still true?

In recent years, there have been significant attempts – certainly by vendors – but also by designers and architects, to meet somewhere in the middle – and deliver usable yet highly secure and robust systems. But how to do it? I want to try and capture some of those points here.

Most Advanced Yet Acceptable


I first want to introduce the concept of MAYA: Most Advanced Yet Acceptable. MAYA was a concept created by the famous industrial design genius Raymond Loewy [1] in the 1950s. The premise was that to create successful innovative products, you had to reach a point of inflexion between novelty and familiarity. If something was unusual in its nature or use, it would only appeal to a small audience. An element of familiarity had to anchor the viewer or user in order to allow incremental changes to take the product in new directions.

Observation


When it comes to designing – or redesigning – a product or piece of software, it is often the case that observation is the best ingredient. Attempting to design in isolation can often lead to solutions looking for problems, or in the case of MAYA, something so novel that it is not fit for purpose. A key focus of Loewy's modus operandi was to observe users of the product he was aiming to improve, be it a car, a locomotive engine or a copying machine. He wanted to see how it was being broken: the good things, the bad, the obstacles, the areas which required no explanation and the areas not being used at all. The same applies when improving a software flow.

Take the classic sign-up and sign-in flows seen on nearly every website and mobile application. To the end user, these flows are the application. If they fail, create unnecessary friction, or are difficult to understand, the end user will become so frustrated that they are likely to project that frustration onto the entire service or product they are trying to access, and go to the nearest competitor.

In order to improve, there needs to be a mechanism to view, track and observe how the typical end user will use and interact with the flow. Capture clicks, drop-outs and the time it takes to perform certain operations. All of these measurements provide invaluable input into how to create a more optimal set of interactions. These observations, of course, need comparing against a baseline or some sort of acceptable SLA.


Define Usable?


But why are we observing, and how do we define usable in the first place? Security can be a relatively simple metric. Define some controls. Compare the process or software against said controls. Apply metrics. Rinse and repeat. Simple, right? But how much usability is required? And where does that usability get counted?

Usable for the End User

The most obvious standpoint is usability for the end user. If we continue with the sign-up and sign-in flows, they would need to be simply labelled and responsive, altering their presentation dynamically depending on the device type, and perhaps the location, the end user is accessing from.

End user choice is also critical: empowering the end user without overloading them with options and overly complex decisions. Assumptions are powerful, but only if enough information is available to the back-end system to allow for the creation of a personalised experience.

Usable for the Engineer

But the end user is only one part of the end-to-end delivery cycle for a product. The engineering team needs usability too. Complexity in code design is the enemy of security. Modularity, clean interfaces and good levels of cohesion allow for agile and rapid feature development that reduces the impact on unrelated areas. Simplicity in code design makes testing simpler and helps reduce attack vectors.

Usable for the Support Team

The other main area to think about is that of the post-sales teams. How do teams support, repair and patch existing systems that are in use? Does that process inhibit either end user happiness or the underlying security posture of the system? Does it allow for enhancements, or just fixes?

Reduction


A classic theme of Loewy's designs, if you look at them over time, is that of reduction: reduction in the components, features, lines and angles involved in the overall product. By reducing the number of fields, buttons, screens and steps, the end user has fewer decisions to make. Fewer decisions result in fewer mistakes. Fewer mistakes result in less friction. Less friction seems a good design choice when it comes to usability.

Fewer components should also reduce the attack surface and the support complexity.


Incremental Change


But change needs to be incremental. End users do not like shocks. A premise of MAYA is to think to the future, but observe and provide value today. Making radical changes will reduce usability, as features and concepts become too alien and too novel.

Develop constructs that benefit the end user immediately and instil familiarity, which allows trust in the incremental changes that will follow, all whilst keeping those security controls in mind.


[1] - https://en.wikipedia.org/wiki/Raymond_Loewy

Immutable Deployment Pattern for ForgeRock Access Management (AM) Configuration without File Based Configuration (FBC)

Introduction

The standard production-grade deployment pattern for ForgeRock AM is to use replicated sets of Configuration Directory Server instances to store all of AM's configuration. This deployment pattern has worked well in the past, but is less suited to the immutable, DevOps-enabled environments of today.

This blog presents an alternative view of how an immutable deployment pattern could be applied to AM in lieu of the upcoming full File Based Configuration (FBC) for AM in version 7.0 of the ForgeRock Platform. This pattern could also support easier transition to FBC.

Current Common Deployment Pattern

Currently most customers deploy AM with externalised Configuration, Core Token Service (CTS) and UserStore instances.

The following diagram illustrates such a topology spread over two sites; the focus is on the DS Config Stores, hence the CTS and DS Userstore connections and replication topology have been simplified. Note this blog is still applicable to single site deployments.

Dual site AM deployment pattern. Focus is on the DS Configuration stores

In this topology AM uses connection strings to the DS Config stores to enable an all-active Config store architecture, with each AM targeting one DS Config store as primary and the second as failover per site. Note in this model there is no cross-site failover for AM to Config store connections (possible but discouraged). The DS Config stores do communicate across sites for replication to create a full mesh, as do the User and CTS stores.

A slight divergence from this model, and one applicable to cloud environments, is to use a load balancer between AM and its DS Config Stores; however, we have observed many customers experience problems with features such as Persistent Searches failing due to dropped connections. Hence, where possible, Consulting Services recommends the use of AM Connection Strings.

It should be noted that the use of AM Connection Strings specific to each AM can only be used if each AM has a unique FQDN — for example: https://openam1.example.com:8443/openam, https://openam2.example.com:8443/openam and so on.

For more on AM Connection Strings click here

Problem Statement

This model has worked well in the past; the DS Config stores contain all the stuff AM needs to boot and operate plus a handful of runtime entries.

However, times are a changing!

The advent of Open Banking introduces potentially hundreds of thousands of OAuth2 clients, AM policy entry numbers are ever increasing, and with UMA thrown in for good measure, the previously small, minimal-footprint and fairly static DS Config Stores are suddenly much more dynamic and contain many thousands of entries. Managing the stuff AM needs to boot and operate and all this runtime data suddenly becomes much more complex.

TADA! Roll up the new DS App and Policy Stores. These new data stores address this by allowing separation of the stuff AM needs to boot and operate from long-lived, environment-specific data such as policies, OAuth2 clients, SAML entities etc. Nice!

However, one problem still remains; it is still difficult to do stack by stack deployments, blue/green type deployments, rolling deployments and/or support immutable style deployments as DS Config Store replication is in place and needs to be very carefully managed during deployment scenarios.

Some common issues:

  • Making a change to one AM can quite easily have a ripple effect through DS replication, which impacts and/or impairs the other AM nodes both within the same site or remote. This behaviour can make customers more hesitant to introduce patches, config or code changes.
  • In a dual site environment the typical deployment pattern is to stop cross site replication, force traffic to site B, disable site A, upgrade site A, test it in isolation, force traffic back to the newly deployed site A, ensure production is functional, disable traffic to site B, push replication from site A to site B and re-enable replication, upgrade site B before finally returning to normal service.
  • Complexity is further increased if App and Policy stores are not in use, as the in-service DS Config stores may have new OAuth2 clients, UMA data etc. created during transition which needs to be preserved. So in the above scenario an LDIF export of site B's DS Config Stores for such data needs to be taken and imported into site A prior to site A going live (to catch changes made while site A's deployment was in progress), and after site B is disabled another LDIF export needs to be taken from B and imported into A to catch any last minute changes between the first LDIF export and the switch over. Sheesh!
  • Even in a single site deployment model managing replication as well as managing the AM upgrade/deployment itself introduces risk and several potential break points.

New Deployment Model

The real enabler for a new deployment model for AM is the introduction of App and Policy stores, which will be replicated across sites. They enable full separation of the stuff AM needs to boot and run from environmental runtime data. In such a model the DS Config stores return to a minimal footprint, containing only AM boot data, with the App and Policy Stores containing the long-lived environmental runtime data which is typically subject to zero loss SLAs and long term preservation.

Another enabler is a different configuration pattern for AM, where each AM effectively has the same FQDN and serverId allowing AM to be built once and then cloned into an image to allow rapid expansion and contraction of the AM farm without having to interact with the DS Config Store to add/delete new instances or go through the build process again and again.

The last key component of this model is Affinity Based Load Balancing for the Userstore, CTS, App and Policy stores, which both simplifies the configuration and enables an all-active datastore architecture immune to data misses resulting from replication delay. Affinity is central to this new model.

Affinity is a unique feature of the ForgeRock platform and is used extensively by many customers. For more on Affinity click here.

The proposed topology below illustrates this new deployment model and is applicable to both active-active deployments and active-standby. Note cross site replication for the User, App and CTS stores is depicted, but for global/isolated deployments may well not be required.

Localised DS Config Store for each AM with replication disabled

As the DS Config store footprint will be minimal, to enable immutable configuration and massively simplify stack-by-stack/blue-green/rolling deployments, the proposal is to move the DS Config Stores local to AM, with each AM built with exactly the same FQDN and serverId. Each local DS Config Store lives in isolation and replication is not enabled between these stores.

In order to provision each DS Config Store in lieu of replication, either the same build script can be executed on each host, or a quicker and more optimised approach is to build one AM-DS Config Store instance/Pod in full, clone it, and deploy the complete image to create each new AM-DS instance. The latter approach removes the need to interact with Amster to build additional instances and, for example, Git to pull configuration artefacts. With this model any new configuration change requires a new package/docker image/AMI, etc., i.e. an immutable build.
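A minimal sketch of the second approach, assuming Docker-based packaging (the registry, image name and tag below are purely illustrative):

# Build the combined AM + local DS Config Store image once, with the AM
# configuration (applied via Amster) and the DS config data baked in.
docker build -t registry.example.com/am-localconfig:2019.10.01 .
docker push registry.example.com/am-localconfig:2019.10.01
# Every AM instance in the farm then runs this same image; a configuration
# change means building, pushing and rolling out a new tag, never editing a
# live instance.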

At boot time AM uses its local address to connect to its DS Config Store and Affinity to connect to the user Store, CTS and the App/Policy stores.

Advantages of this model:

  • As the DS Config Stores are not replicated most AM configuration and code level changes can be implemented or rolled back (using a new image or similar) without impacting any of the other AM instances and without the complexity of managing replication. Blue/green, rolling and stack by stack deployments and upgrades are massively simplified as is rollback.
  • Enables simplified expansion and contraction of the AM pool, especially if an image/clone of a full AM instance and associated DS Config instance is used. This cloning approach also protects against configuration changes in Git or other code repositories inadvertently rippling to new AM instances; the same code and configuration base is deployed everywhere.
  • Promotes the cattle vs. pets paradigm: for any new configuration, deploy a new image/package.
  • This approach does not require any additional instances; the existing DS Config Stores are repurposed as App/Policy stores and the DS Config Stores are hosted locally to AM (or in a small Container in the same Pod as AM).
  • The existing DS Config Stores can be quickly repurposed as App/Policy Stores; no new instances or data-level deployment steps are required other than tuning up the JVM and potentially uprating storage, enabling rapid switching from DS Config to App/Policy Stores.
  • Enabler for FBC; when FBC becomes available the local DS Config stores are simply stopped in favour of FBC. Also if transition to FBC becomes problematic, rollback is easy — fire up the local DS Config stores and revert back.

Disadvantages of this model:

  • No DS Config Store failover; if the local DS Config Store fails the AM connected to it would also fail and not recover. However, this fits well with the pets vs cattle paradigm; if a local component fails, kill the whole instance and instantiate a new one.
  • Any log systems which have logic based on individual FQDNs for AM (Splunk, etc) would need their configuration to be modified to take into account each AM now has the same FQDN.
  • This deployment pattern is only suitable for customers who have mature DevOps processes. The expectation is no changes are made in production, instead a new release/build is produced and promoted to production. If for example a customer makes changes via REST or the UI directly then these changes will not be replicated to all other AM instances in the cluster, which would severely impair performance and stability.

Conclusions

This suggested model would significantly improve a customer's ability to take on new configuration/code changes and potentially roll back without impacting other AM servers in the pool, makes effective use of the App/Policy stores without additional kit, allows easy transition to FBC and enables DevOps-style deployments.

This blog post was first published @ https://medium.com/@darinder.shokar included here with permission.

Integrating ForgeRock Identity Platform 6.5

Integrating The ForgeRock Identity Platform 6.5

It’s a relatively common requirement to need to integrate the products that make up the ForgeRock Identity Platform. The IDM Samples Guide contains a good working example of just how to do this. Each version of the ForgeRock stack has slight differences, both in the products themselves and in the integrations. As such, this blog will focus on version 6.5 of the products and will endeavour to include as much useful information as possible to speed up integration for readers of this blog, including sample configuration files, REST calls, etc.

In this integration IDM acts as an OIDC Relying Party, talking to AM as the OIDC Provider using the OAuth 2.0 authorization code grant. The following sequence diagram illustrates successful processing from the authorization request, through grant of the authorization code, and ID token from the authorization provider, AM. You can find more details in the IDM Samples Guide.

Full Stack Authorization Code Flow

Sample files

For this integration I’ve included configured sample files which can be found by accessing the link below, modified, and either used as an example or just dropped straight into your test environment. It should not have to be said but just in case…DO NOT deploy these to production without appropriate testing / hardening: https://stash.forgerock.org/users/mark.nienaber_forgerock.com/repos/fullstack6.5/.

Postman Collection

For any REST calls made in this blog you’ll find the Postman collection available here: https://documenter.getpostman.com/view/5814408/SVfJVBYR?version=latest

Sample Scripts

We will set up some basic vanilla instances of our products to get started. I’ve provided some scripts to install both DS as well as AM.

Products used in this integration

Documentation can be found at https://backstage.forgerock.com/docs/.

Note: In this blog I install the products under my home directory. This is not best practice, but keep in mind the focus is on the integration; this is not meant as a detailed install guide.

Setup new DS instances

On Server 1 (the AM server) install fresh DS 6.5 instances for an external AM config store (optional) and a DS repository to be shared with IDM.

Add the following entries to the hosts file:

sudo vi /etc/hosts
127.0.0.1       localhost opendj.example.com amconfig1.example.com openam.example.com

Before running the DS install script you’ll need to copy the Example.ldif file from the full-stack IDM sample to the DS/AM server. You can do this manually or use SCP from the IDM server.

scp ~/openidm/samples/full-stack/data/Example.ldif fradmin@opendj.example.com:~/Downloads

Modify the sample file installDSFullStack6.5.sh including:

  • server names
  • port numbers
  • location of DS-6.5.0.zip file
  • location of the Example.ldif from the IDM full-stack sample.

Once completed, run the script to install both an external AM Config store as well as the DS shared repository, i.e.

script_dir=`pwd`
ds_zip_file=~/Downloads/DS/DS-6.5.0.zip
ds_instances_dir=~/opends/ds_instances
ds_config=${ds_instances_dir}/config
ds_fullstack=${ds_instances_dir}/fullstack
#IDMFullStackSampleRepo
ds_fullstack_server=opendj.example.com
ds_fullstack_server_ldapPort=5389
ds_fullstack_server_ldapsPort=5636
ds_fullstack_server_httpPort=58081
ds_fullstack_server_httpsPort=58443
ds_fullstack_server_adminConnectorPort=54444
#Config
ds_amconfig_server=amconfig1.example.com
ds_amconfig_server_ldapPort=3389
ds_amconfig_server_ldapsPort=3636
ds_amconfig_server_httpPort=38081
ds_amconfig_server_httpsPort=38443
ds_amconfig_server_adminConnectorPort=34444

Now let’s run the script on Server 1, the AM / DS server, to create the new DS instances.

chmod +x ./installdsFullStack.sh
./installdsFullStack.sh
Install DS instances

You now have 2 DS servers installed and configured; let’s install AM.

Setup new AM server

On Server 1 we will install a fresh AM 6.5.2 on Tomcat using the provided Amster script.

Assuming the Tomcat instance is started, drop the AM WAR file under the webapps directory, renaming the context to secure (change this as you wish).

cp ~/Downloads/AM-6.5.2.war ~/tomcat/webapps/secure.war

Copy the amster script install_am.amster into your amster 6.5.2 directory and make any modifications as required.

install-openam 
--serverUrl http://openam.example.com:8080/secure 
--adminPwd password 
--acceptLicense 
--cookieDomain example.com 
--cfgStoreDirMgr 'uid=am-config,ou=admins,ou=am-config' 
--cfgStoreDirMgrPwd password 
--cfgStore dirServer 
--cfgStoreHost amconfig1.example.com 
--cfgStoreAdminPort 34444 
--cfgStorePort 3389 
--cfgStoreRootSuffix ou=am-config 
--userStoreDirMgr 'cn=Directory Manager' 
--userStoreDirMgrPwd password 
--userStoreHost opendj.example.com  
--userStoreType LDAPv3ForOpenDS 
--userStorePort 5389 
--userStoreRootSuffix dc=example,dc=com
:exit

Now run amster and pass it this script to install AM. You can do this manually if you like, but scripting will make your life easier and allow you to repeat it later on.

cd amster6.5.2/
./amster install_am.amster
Install AM

At the end of this you’ll have AM installed and configured to point to the DS instances you set up previously.

AM Successfully installed

Setup new IDM server

On Server 2 install/unzip a new IDM 6.5.0.1 instance.

Make sure the IDM server can reach AM and DS servers by adding an entry to the hosts file.

sudo vi /etc/hosts
Hosts file entry

Modify the hostname and port of the IDM instance in the boot.properties file. An example boot.properties file can be found HERE.

cd ~/openidm
vi resolver/boot.properties

Set appropriate openidm.host and openidm.port.http/s.

openidm/resolver/boot.properties
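For reference, the relevant keys look something like the following (the values are illustrative for this walkthrough; keep the rest of the generated boot.properties as-is):

openidm.host=openidm.example.com
openidm.port.http=8080
openidm.port.https=8443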

We now have the basic AM / IDM and DS setup and are ready to configure each of the products.

Configure IDM to point to shared DS repository

Modify the IDM LDAP connector file to point to the shared repository. An example file can be found HERE.

vi ~/openidm/samples/full-stack/conf/provisioner.openicf-ldap.json

Change “host” and “port” to match shared repo configured above.

provisioner.openicf-ldap.json
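The fields in question sit under configurationProperties. A trimmed sketch, showing only the two values you need to change, using the host and port from the DS install script above:

"configurationProperties" : {
    "host" : "opendj.example.com",
    "port" : 5389,
    ...
}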

Configure AM to point to shared DS repository

Log in as amadmin and browse to Identity Stores in the realm you’re configuring. For simplicity I’m using the Top Level Realm; DO NOT do this in production!

Set the correct values for your environment, then select Load Schema and Save.

Shared DS Identity Store

Browse to Identities and you should now see the two identities that were imported from Example.ldif when you set up the DS shared repository.

List of identities

We will set up another user to ensure AM is configured to talk to DS correctly. Select + Add Identity, fill in the values, then press Create.

Create new identity

Modify some of the user’s values and press Save Changes.

Modify attributes

Prepare IDM

We will start IDM now and check that it’s connected to the DS shared repository.

Start the IDM full-stack sample

Enter the following commands to start the full-stack sample.

cd ~/openidm/
./startup.sh -p samples/full-stack
Start full-stack sample

Login to IDM

Login to the admin interface as openidm-admin.

http://openidm.example.com:8080/admin/

Login as openidm-admin

Reconcile DS Shared Repository

We will now run a reconciliation to pull users from the DS shared repository into IDM.

Browse to Configure, then Mappings, and on the System/ldap/account → Managed/user mapping select Reconcile.

Reconcile DS shared repository

The users should now exist under Manage, Users.

User list

Let’s log in with our newly created testuser1 to make sure it’s all working.

End User UI Login

You should see the welcome screen.

Welcome Page

IDM is now ready for integration with AM.

Configure AM for integration

Now we will set up AM for integration with IDM.

Setup CORS Filter

Set up a CORS filter in AM to allow IDM as an origin. An example web.xml can be found HERE.

vi tomcat/webapps/secure/WEB-INF/web.xml

Add the appropriate CORS filter. (See sample file above)

</filter-mapping>
<filter-mapping>
  <filter-name>CORSFilter</filter-name>
  <url-pattern>/json/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>CORSFilter</filter-name>
  <url-pattern>/oauth2/*</url-pattern>
</filter-mapping>
<filter>
  <filter-name>CORSFilter</filter-name>
  <filter-class>org.forgerock.openam.cors.CORSFilter</filter-class>
  <init-param>
    <param-name>methods</param-name>
    <param-value>POST,PUT,OPTIONS</param-value>
  </init-param>
  <init-param>
    <param-name>origins</param-name>
    <param-value>http://openidm.example.com:8080,http://openam.example.com:8080,http://localhost:8000,http://localhost:8080,null,file://</param-value>
  </init-param>
  <init-param>
    <param-name>allowCredentials</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>headers</param-name>
    <param-value>X-OpenAM-Username,X-OpenAM-Password,X-Requested-With,Accept,iPlanetDirectoryPro</param-value>
  </init-param>
  <init-param>
    <param-name>expectedHostname</param-name>
    <param-value>openam.example.com:8080</param-value>
  </init-param>
  <init-param>
    <param-name>exposeHeaders</param-name>
    <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Set-Cookie</param-value>
  </init-param>
  <init-param>
    <param-name>maxAge</param-name>
    <param-value>1800</param-value>
  </init-param>
</filter>

Configure AM as an OIDC Provider

Login to AM as an administrator and browse to the Realm you’re configuring. As already mentioned, I’m using the Top Level Realm for simplicity.

Once you’ve browsed to the Realm, on the Dashboard, select Configure OAuth Provider.

Configure OAuth Provider

Now select Configure OpenID Connect.

Configure OIDC

If you need to modify any values, such as the Realm, then do so; I’ll just press Create.

OIDC Server defaults

You’ll get a success message.

Success

AM is now configured as an OP; let’s set some required values. Browse to Services, then click on OAuth2 Provider.

AM Services

On the Consent tab, check the box next to Allow Clients to Skip Consent, then press Save.

Consent settings

Now browse to the Advanced OpenID Connect tab and set openidm as the value for Authorized OIDC SSO Clients (this is the name of the Relying Party / Client which we will create next). Press Save.

Authorized SSO clients

Configure Relying Party / Client for IDM

You can do these steps manually, but let’s call AM’s REST interface; you can save these calls and easily replicate this step with the click of a button if you use a tool like Postman (see HERE for the Postman collection). I am using a simple curl command from the AM server.

First we’ll need an AM administrator session to create the client, so call the /authenticate endpoint. (I’ve used jq for better formatting, so I’ll leave that for you to add in if required.)

curl -X POST \
http://openam.example.com:8080/secure/json/realms/root/authenticate \
-H 'Accept-API-Version: resource=2.0, protocol=1' \
-H 'Content-Type: application/json' \
-H 'X-OpenAM-Password: password' \
-H 'X-OpenAM-Username: amadmin' \
-H 'cache-control: no-cache' \
-d '{}'

The SSO Token / Session will be returned as tokenId, so save this value.

Authenticate

Now that we have a session we can use it in the next call to create the client. Again this will be done via REST; however, you can do this manually if you want. Substitute the tokenId value from above as the value of the iPlanetDirectoryPro header.

curl -X POST \
'http://openam.example.com:8080/secure/json/realms/root/agents/?_action=create' \
-H 'Accept-API-Version: resource=3.0, protocol=1.0' \
-H 'Content-Type: application/json' \
-H 'iPlanetDirectoryPro: djyfFX3h97jH2P-61auKig_i23o.*AAJTSQACMDEAAlNLABxFc2d0dUs2c1RGYndWOUo0bnU3dERLK3pLY2c9AAR0eXBlAANDVFMAAlMxAAA.*' \
-d '{
"username": "openidm",
"userpassword": "openidm",
"realm": "/",
"AgentType": ["OAuth2Client"],
"com.forgerock.openam.oauth2provider.grantTypes": [
"[0]=authorization_code"
],
"com.forgerock.openam.oauth2provider.scopes": [
"[0]=openid"
],
"com.forgerock.openam.oauth2provider.tokenEndPointAuthMethod": [
"client_secret_basic"
],
"com.forgerock.openam.oauth2provider.redirectionURIs": [
"[0]=http://openidm.example.com:8080/"
],
"isConsentImplied": [
"true"
],
"com.forgerock.openam.oauth2provider.postLogoutRedirectURI": [
"[0]=http://openidm.example.com:8080/",
"[1]=http://openidm.example.com:8080/admin/"
]
}'
Create Relying Party

The OAuth Client should be created.

RP / Client created

Integrate IDM and AM

AM is now configured as an OIDC provider and has an OIDC Relying Party for IDM to use, so now we can configure the final step, that is, tell IDM to outsource authentication to AM.

Feel free to modify and copy this authentication.json file directly into your ~/openidm/samples/full-stack/conf folder or follow these steps to configure.

Browse to Configure, then Authentication.

Configure authentication

Authentication will currently be configured as Local; select ForgeRock Identity Provider.

Outsource authentication to AM

After you click above, the Configure ForgeRock Identity Provider page will pop up.

Set the appropriate values for

  • Well-Known Endpoint
  • Client ID
  • Client Secret
  • Note that the common datastore is set to the DS shared repository, leave this as is.

You can change the others to match your environment, but be careful as the values must match those set in the AM OP/RP configuration above. You can also refer to the sample authentication.json. Once completed press Submit. You will be asked to re-authenticate.

IDM authentication settings

Testing the integrated environment

Everything is now configured so we are ready to test end to end.

Test End User UI

Browse to the IDM End User UI.

http://openidm.example.com:8080/

You will be directed to AM for login; let’s log in with the test user we created earlier.

Login as test user

After authentication you will be directed to the IDM End User UI welcome page.

Login success

Congratulations you did it!

Test IDM Admin UI

In this test you’ll login to AM as amadmin and IDM will convert this to a session for the IDM administrator openidm-admin.

This is achieved through the following script:

~/openidm/bin/defaults/script/auth/amSessionCheck.js

Specifically this code snippet is used:

if (security.authenticationId.toLowerCase() === "amadmin") {
    security.authorization = {
        "id" : "openidm-admin",
        "component" : "internal/user",
        "roles" : ["internal/role/openidm-admin", "internal/role/openidm-authorized"],
        "moduleId" : security.authorization.moduleId
    };
}

In a live environment you should not use amadmin or openidm-admin, but create your own delegated administrators; however, in this case we will stick with the sample.

Browse to the IDM Admin UI.

http://openidm.example.com:8080/admin/

You will be directed to AM for login; log in with amadmin.

Login as administrator

After authentication you will be directed to the IDM End User UI welcome page.

Note: This may seem strange as you requested the Admin UI but it is expected as the redirection URI is set to this page (don’t change this as you can just browse to admin URL after authentication).

Login success

On the top right click the drop down and select Admin, or alternatively just hit the IDM Admin UI URL again.

Switch to Admin UI

You’ll now be directed to the Admin UI and as an IDM administrator you should be able to browse around as expected.

IDM Admin UI success

Congratulations you did it!

Finished.

References

  1. https://backstage.forgerock.com/docs/idm/6.5/samples-guide/#chap-full-stack
  2. https://forum.forgerock.com/2018/05/forgerock-identity-platform-version-6-integrating-idm-ds/
  3. https://forum.forgerock.com/2016/02/forgerock-full-stack-configuration/

This blog post was first published @ https://medium.com/@marknienaber included here with permission.

Deploying the ForgeRock platform on Kubernetes using Skaffold and Kustomize

If you are following along with the ForgeOps repository, you will see some significant changes in the way we deploy the ForgeRock IAM platform to Kubernetes.  These changes are aimed at dramatically simplifying the workflow to configure, test and deploy ForgeRock Access Manager, Identity Manager, Directory Services and the Identity Gateway.

To understand the motivation for the change, let’s recap the current deployment approach:

  • The Kubernetes manifests are maintained in one git repository (forgeops), while the product configuration is in another (forgeops-init).
  • At runtime, Kubernetes init containers clone the configuration from git and make it available to the component using a shared volume.

The advantage of this approach is that the docker container for a product can be (relatively) stable. Usually it is the configuration that is changing, not the product binary.

This approach seemed like a good idea at the time, but in retrospect it created a lot of complexity in the deployment:

  • The runtime configuration is complex, requiring orchestration (init containers) to make the configuration available to the product.
  • It creates a runtime dependency on a git repository being available. This isn’t a show stopper (you can create a local mirror), but it is one more moving part to manage.
  • The helm charts are complicated. We need to weave git repository information throughout the deployment. For example, putting git secrets and configuration into each product chart. We had to invent a mechanism to allow the user to switch to a different git repo or configuration, adding further complexity. Feedback from users indicated this was a frequent source of errors.
  • Iterating on configuration during development is slow. Changes need to be committed to git and the deployment rolled to test out a simple configuration change.
  • Kubernetes rolling deployments are tricky. The product container version must be in sync with the git configuration. A mistake here might not get caught until runtime.

It became clear that it would be *much* simpler if the products could just bundle the configuration in the docker container so that it is “ready to run” without any complex orchestration or runtime dependency on git.

[As an aside, we often get asked why we don’t store configuration in ConfigMaps. The short answer is: We do – for top level configuration such as domain names and global environment variables. Products like AM have large and complex configurations (~1000 json files for a full AM export). Managing these in ConfigMaps gets to be cumbersome. We also need a hierarchical directory structure – which is an outstanding ConfigMap RFE]

The challenge with the “bake the configuration in the docker image” approach is that it creates *a lot* of docker containers. If each configuration change results in a new (and unique) container, you quickly realize that automation is required to be successful.
About a year ago, one of my colleagues happened to stumble across a new tool from Google called skaffold. From the documentation:

“Skaffold handles the workflow for building, pushing and deploying your application.
So you can focus more on application development”

To some extent skaffold is syntactic sugar on top of this workflow:

docker build; docker tag; docker push;
kustomize build | kubectl apply -f -

Calling it syntactic sugar doesn’t really do it justice, so do read through their excellent documentation. There isn’t anything that skaffold does that you can’t accomplish with other tools (or a whack of bash scripts), but skaffold focuses on smoothing out and automating this basic workflow.
A key element of Skaffold is its tagging strategy. Skaffold will apply a unique tag to each docker image (the tagging strategy is pluggable, but is generally a sha256 hash, or a git commit). This is essential for our workflow, where we want to ensure that the combination of the product (say AM) and a specific configuration is guaranteed to be unique. By using a git commit tag on the final image, we can be confident that we know exactly how a container was built, including its configuration. This also makes rolling deployments much more tractable, as we can update a deployment tag and let Kubernetes spin down the older container and replace it with the new one.

If it isn’t clear from the above, the configuration for the product lives inside the docker image, and that in turn is tracked in a git repository. If for example you check out the source for the IDM container: https://github.com/ForgeRock/forgeops/tree/master/docker/idm

You will see that the Dockerfile COPYs the configuration into the final image. When IDM runs, its configuration will be right there, ready to go.
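The shape of that Dockerfile is roughly the following. This is a simplified sketch rather than the exact contents of the repository; the base image name and paths are illustrative:

# Illustrative sketch only: bake the IDM configuration into the image at build time.
# The base image name and directory layout here are hypothetical.
FROM my-registry/openidm-base:6.5.0
# With the configuration copied in here, the container starts "ready to run",
# with no init containers and no runtime git clone.
COPY conf/ /opt/openidm/conf/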
Skaffold has two major modes of operation. The “run” mode is a one shot build, tag, push and deploy. You will typically use skaffold run as part of a CD pipeline: watch for a git commit, and invoke skaffold to deploy the change. Again, you can do this with other tools, but Skaffold just makes it super convenient.

Where Skaffold really shines is in “dev” mode. If you run skaffold dev, it will run a continual loop, watching the file system for changes, and rebuilding and deploying as you edit files.

This diagram (lifted from the skaffold project) shows the workflow:

This process is really snappy. We find that we can deploy changes within 20-30 seconds (most of that is just container restarts). When pushing to a remote GKE cluster, the first deployment is a little slower as we need to push all those containers to gcr.io, but subsequent updates are fast as you are pushing configuration deltas that are just a few KB in size.

Note that git commits are not required during development. A developer will iterate on the desired configuration, and only when they are happy will they commit the changes to git and create a pull request. At this stage a CD process will pick up the commit and deploy the change to a QA environment. We have a simple CD sample using Google Cloudbuild.

At this point we haven’t said anything about helm and why we decided to move to Kustomize.

Once our runtime deployments became simpler (no more git init containers, simpler ConfigMaps, etc.), we found ourselves questioning the need for complex helm templates. There was some internal resistance from our developers on using golang templates (they *are* pretty ugly when combined with yaml), and the security issues raised by Helm’s Tiller component raised additional red flags.

Suffice to say, there was no compelling reason to stick with Helm, and transitioning to Kustomize was painless. A shout out to the folks at Replicated – who have a very nifty tool called ship, that will convert your helm charts to Kustomize. The “port” from Helm to Kustomize took a couple of days. We might look at Helm 3 in the future, but for now our requirements are being met by Kustomize. One nice side effect that we noticed is that Kustomize deployments with skaffold are really fast.

This work is being done on the master branch of forgeops (targeting the 7.0 release), but if you would like to try out this new workflow with the current (6.5.2) products, you are in luck! We have a preview branch that uses the current products.
The following should just work(TM) on minikube:

cd forgeops
git checkout skaffold-6.5
skaffold dev

There are some prerequisites that you need to install. See the README-skaffold.

The initial feedback on this workflow has been very positive. We’d love for folks to try it out and let us know what you think. Feel free to reach out to me at my ForgeRock email (warren dot strange at forgerock.com).

This blog post was first published @ warrenstrange.blogspot.ca, included here with permission.

1H2019 Identity Management Funding Analysis

As the first half of 2019 has been and gone, I've taken a quick look at the funding rounds that have taken place so far this year within the identity and access management space, and attempted some coarse-grained analysis.  The focus is global, and the sector definition is quite broad, based on the categories Crunchbase uses.

Key Facts January to June 2017 / 2018 / 2019

Funding increased 261% year on year for the first half of 2019, compared to the same period in 2018.  There were some pretty large, later-stage investments, which look to have skewed the number somewhat.

The number of organisations actually receiving funding dropped, as did the percentage of organisations receiving seed funding.  This is pretty typical as the market matures and stabilises in some sub-categories.



1H2019
  • ~$604million overall funding
  • Seed funding accounted for 20%
  • Median announcement date March 27th
  • 35 companies funded

1H2018
  • ~$231million overall funding
  • Seed funding accounted for 45.3%
  • Median announcement date March 30th
  • 53 companies funded

1H2017

  • ~$237million overall funding
  • Seed funding accounted for 32.1%
  • Median announcement date April 7th
  • 56 companies funded


1H2019 Company Analysis

A coarse-grained analysis of the 2019 numbers shows a pretty typical geographic spread, with North America, as ever, the major centre, not just for funded companies but for the funding entities too.



The types of companies funded are also interesting.  The categories are based on what Crunchbase curates and maps against company descriptions, so there might be some overlap or ambiguity when performing detailed analysis on that.



It's certainly interesting to see a range covering B2C fraud, payments and analytics, typically because the return on investment on such products is very tangible.

The stage of funding provides a broad and distributed range.  Seed is the clear leader, but the long tail indicates a maturing market, with many investors expecting strong returns.


1H2019 Top 10 Companies By Funding Amounts

The following is a simple top-down list of the companies that received the highest funding, and at what stage that funding was received:
  1. Dashlane ($110m, Series D) - http://www.dashlane.com/ (https://techcrunch.com/2019/05/30/dashlane-series-d/)
  2. Auth0 ($103m, Series E) - https://auth0.com/ (https://auth0.com/blog/auth0-closes-103m-in-funding-passes-1b-valuation/)
  3. OneLogin ($100m, Series D) - http://onelogin.com/ (https://venturebeat.com/2019/01/10/onelogin-raises-100-million-to-help-enterprises-manage-access-and-identity/)
  4. Onfido ($50m, Series C) - http://www.onfido.com/ (https://venturebeat.com/2019/04/03/onfido-raises-50-million-for-ai-powered-identity-verification/)
  5. Socure ($30m, Series C) - http://www.socure.com/ (https://www.socure.com/about/news/socure-raises-30-million-in-additional-financing-to-identify-the-human-race)
  6. Dashlane ($30m, Debt Financing) - http://www.dashlane.com/
  7. Payfone ($24m, Series G) - http://www.payfone.com/ (https://www.alleywatch.com/2019/05/nyc-startup-funding-top-largest-april-2019-vc/7/)
  8. Evident ($20m, Series B) - https://www.evidentid.com/ (http://www.finsmes.com/2019/05/evident-raises-20m-in-series-b-funding.html)
  9. Bamboocloud ($15m, Series B) - http://www.bamboocloud.cn/ (https://www.volanews.com/portal/article/index/id/1806.html)
  10. Proxy ($13.6m, Series A) - https://proxy.com/ (https://www.globenewswire.com/news-release/2019/03/27/1774258/0/en/Proxy-Raises-13-6M-in-Series-A-Funding-Led-by-Kleiner-Perkins-Emerges-from-Stealth-to-Launch-Its-Universal-Identity-Signal-for-Frictionless-Access-to-Everything-in-the-Physical-Wor.html)

NB - all data and reporting done via Crunchbase.

Use ForgeRock Access Manager to provide MFA to Linux using PAM Radius

Use ForgeRock Access Manager to provide Multi-Factor Authentication to Linux

Introduction

Our aim is to set up an integration to provide Multi-Factor Authentication (MFA) to the Linux (Ubuntu) platform using ForgeRock Access Manager. The integration uses the pluggable authentication module (PAM) framework to point to a RADIUS server; in this case AM is configured as the RADIUS server.

We achieve the following:

  1. Outsource Authentication of Linux to ForgeRock Access Manager.
  2. Provide an MFA solution to the Linux Platform.
  3. Configure ForgeRock Access Manager as a RADIUS Server.
  4. Configure PAM on the Linux server to point to our new RADIUS Server.

Setup

  • ForgeRock Access Manager 6.5.2 Installed and configured.
  • OS — Ubuntu 16.04.
  • PAM exists on your server (this is common these days and you’ll find PAM here: /etc/pam.d/).

Configuration Steps

Configure a chain in AM

Firstly we configure a simple Authentication Chain in Access Manager with two modules.

a. First module – DataStore.

b. Second Module – HOTP. Email configured to point to local fakesmtp server.

Simple Authentication Chain

Configure ForgeRock AM as a RADIUS Server

Now we configure AM as a RADIUS Server.

a. Follow steps here: https://backstage.forgerock.com/docs/am/6.5/radius-server-guide/#chap-radius-implementation

RADIUS Server setup

Secondary Configuration — i.e. RADIUS Client

We have to configure a trusted RADIUS client, our Linux server.

a. Enter the IP address of the client (Linux Server).

b. Set the Client Secret.

c. Select your Realm — I used top level realm (don’t do this in production!).

d. Select your Chain.

Configure RADIUS Client

Configure pam_radius on Linux Server (Ubuntu)

Following these instructions, configure pam_radius on your Linux server:

https://www.wikidsystems.com/support/how-to/how-to-configure-pam-radius-in-ubuntu/

a. Install pam_radius.

sudo apt-get install libpam-radius-auth

b. Configure pam_radius to talk to RADIUS server (In this case AM).

sudo vim /etc/pam_radius_auth.conf

i.e <AM Server VM>:1812 secret 1

Point pam_radius to your RADIUS server
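For reference, each line in pam_radius_auth.conf follows the standard pam_radius layout of server, shared secret and timeout, so the entry above reads as follows (the secret must match the Client Secret configured in AM earlier):

# server[:port]          shared_secret       timeout (seconds)
<AM Server VM>:1812      secret              1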

Tell SSH to use pam_radius for authentication.

a. Add this line to the top of the /etc/pam.d/sshd file.

auth sufficient pam_radius_auth.so debug

Note: debug is optional and has been added for testing, do not do this in production.

pam_radius is sufficient for authentication

Enable Challenge response for MFA

Tell your sshd config to allow challenge/response and use PAM.

a. Set the following values in your /etc/ssh/sshd_config file.

ChallengeResponseAuthentication yes

UsePAM yes

Create a local user on your Linux server and in AM

In this simple use case you will require a separate account on your Linux server and in AM.

a. Create a Linux user.

sudo adduser test

Note: Make sure the user has a different password than the user in AM to ensure you’re not authenticating locally. Users may have no password if your system allows it, but in this demo I set the password to some random string.

Create Linux User

b. Ensure the user is created in AM with an email address.

User exists in AM with an email address for OTP

Test Authentication to Unix via SSH

It’s now time to put it all together.

a. I recommend you tail the auth log file.

tail -f /var/log/auth.log

b. SSH to your server using.

ssh test@<server name>

c. You should be authenticating to the first module in your AM chain so enter your AM Password.

d. You should be prompted for your OTP, check your email.

OTP generated and sent to mail attribute on user

e. Enter your OTP and press Enter, then Enter again (the challenge/response UI is not super friendly here).

f. If successfully entered you should be logged in.

You can follow the auth logs, as well as the AM logs (i.e. authentication.audit.json), to view the process.

The End.

This blog post was first published @ https://medium.com/@marknienaber included here with permission.

Next Generation Distributed Authorization

Many of today’s security models spend a lot of time focusing upon network segmentation and authentication.  Both of these concepts are critical in building out a baseline defensive security posture.  However, there is a major area that is often overlooked, or at least simplified to a level of limited use: authorization.  Working out what a user, service or thing should be able to do within another service.  The permissions.  The entitlements.  The access control entries.  I don’t want to give an introduction to the many, sometimes academic, acronyms and ideas around authorization (see RBAC, MAC, DAC, ABAC and PDP/PEP amongst others).  I want to spend a page delving into some of the current and future requirements surrounding distributed authorization.

New Authorization Requirements

Classic authorization modelling tends to have a centralised policy decision point (PDP) – a central location that applications, agents and other SDKs call in order to get a decision regarding a subject/object/action combination.  The PDP contains signatures (or policies) that map the objects and actions to a set of users and services.
That call-out process is now a bottleneck, for several reasons.  Firstly, the number of systems being protected is rapidly increasing, with the era of microservices, APIs and IoT devices all needing some sort of access control.  Having them all hit a central PDP doesn’t seem a good use of network bandwidth or central processing power.  Secondly, that increase in objects also gives way to a more meshed and federated set of interactions, where microservices and IoT devices increasingly talk to each other directly.

Distributed Enforcement

This gives way to a more distributed enforcement requirement.  How can the protected object perform an access control evaluation without having to go back to the mother ship?  There are a few things that could help.  
Firstly, we probably need to achieve three things: work out how to identify the calling user or service (aka an authentication token), map that to what the identity can do, and finally make sure that actually happens.  The first part is often completed using tokens – and in the distributed world, a token that has been cryptographically signed by a central authority.  JSON Web Tokens (JWTs) are popular, but not the only approach.
The second part – working out what they can do – could be handled in two slightly different ways.  One is that the calling subject brings with them what they can do.  They could do this by having the cryptographically signed token contain their access control entries.  This approach would require the service that issues tokens to also know what the calling user or service can do, so it would need knowledge of the access control entries to use.  That list of entries would also need things like governance, audit and version control, but that is needed regardless of where the entries are stored.
So here, a token gets issued and the objects being protected have a method to cryptographically validate the presented token, extract the access control entries (ACE) and enforce what is being asked.
Having a token that contains the actual ACE is not that new.  Capability Based Access Control (CBAC) follows this concept, where the token could contain the object and associated actions.  It could also contain the subject identifier, or perhaps that could be delivered as a separate token.  A similar practical implementation is described in Google’s Macaroons project.
What we’ve achieved here is to remove the access control logic from the object or service, while also removing the need to perform a call back to a policy mother ship.
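As a rough sketch of that first approach – using hypothetical claim names and the PyJWT library, not any particular product’s API – the issuer embeds the caller’s entitlements in a signed token and the protected service verifies and enforces locally.  A shared HS256 secret is used purely for brevity; in practice the central authority would sign with a private key and edge services would verify with its public key.

import time
import jwt  # PyJWT

SECRET = "demo-secret"  # stand-in for the issuer signing key

# Issuer side: embed the caller's access control entries (ACE) in the token.
token = jwt.encode(
    {
        "sub": "order-service",
        "exp": int(time.time()) + 300,
        "entitlements": [  # hypothetical claim name
            {"object": "invoice", "actions": ["read", "create"]},
        ],
    },
    SECRET,
    algorithm="HS256",
)

# Protected service side: validate the token locally, no call back to a central PDP.
def is_allowed(presented_token, obj, action):
    try:
        claims = jwt.decode(presented_token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    return any(
        e["object"] == obj and action in e["actions"]
        for e in claims.get("entitlements", [])
    )

print(is_allowed(token, "invoice", "read"))    # True
print(is_allowed(token, "invoice", "delete"))  # False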
A subtly different approach is to pass the access control logic back down to the object – but instead of it originating within the service itself, it is still owned and managed by a central authority, just distributed out to the edges.
This allows for local enforcement, but central governance and management.  Modern distribution technologies like web sockets, or even flat file formats like JSON and YAML, allow for "repave and replace" approaches as policy definitions change, which fits nicely into devops deployment models.
The object itself would still need to know a few things to make the enforcement complete – a token representing the user or service, and some context to help validate the request.
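
A minimal sketch of that second approach – an illustrative file layout, not a specific product’s policy format – where policy definitions are shipped to the service as a flat JSON document and evaluated locally:

import json

# Policy document distributed from the central authority (for example baked into
# the container image, or pushed over a web socket). Inlined here for brevity.
POLICY_DOC = """
{
  "version": "2019-06-01",
  "policies": [
    {"subject": "role:engineer", "object": "build-server", "actions": ["ssh", "deploy"]},
    {"subject": "role:auditor",  "object": "build-server", "actions": ["read-logs"]}
  ]
}
"""

policies = json.loads(POLICY_DOC)["policies"]

def evaluate(subject, obj, action):
    # Local enforcement: no call back to a central PDP at request time.
    return any(
        p["subject"] == subject and p["object"] == obj and action in p["actions"]
        for p in policies
    )

print(evaluate("role:engineer", "build-server", "deploy"))  # True
print(evaluate("role:auditor", "build-server", "deploy"))   # False

When the central authority updates the policy set, the file is simply replaced and the service reloaded – the repave and replace model described above.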

Contextual Integration

Access control decisions generally require the subject, the object and any associated actions.  For example, subject=Bob could perform action=open on object=Meeting Room.  Another dimension that is now required, especially within zero trust based approaches, is that of context.  In Bob’s example, context may include the time of day, the day of the week, or even the project he is working on.  Any of these could impact the decision.  Previous access control requests and decisions could also come into play.  For example, say Bob was just given access to the Safe Room where the gold bullion is stored; perhaps his request two minutes later to open the Back Door is denied.  If that first request had not occurred, perhaps the Back Door request would be legitimate and permitted.
The capturing of context, both at authentication time and at authorization evaluation time, is now critical, as it allows the object to have a much clearer understanding of how to handle access requests.
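
Continuing the Bob example, a small sketch (entirely illustrative names and rules) of how contextual signals such as time of day and recent grants might feed into a local enforcement decision:

from datetime import datetime, time as dtime

# Recent grants per subject; in practice sourced from an audit or event stream.
recent_grants = {"Bob": ["Safe Room"]}

def allow_with_context(subject, obj, action, now):
    # Base rule: Bob may open the Meeting Room and the Back Door.
    base = subject == "Bob" and action == "open" and obj in ("Meeting Room", "Back Door")
    if not base:
        return False
    # Context 1: only during working hours.
    if not dtime(8, 0) <= now.time() <= dtime(18, 0):
        return False
    # Context 2: deny the Back Door if the subject was just granted the Safe Room.
    if obj == "Back Door" and "Safe Room" in recent_grants.get(subject, []):
        return False
    return True

print(allow_with_context("Bob", "Meeting Room", "open", datetime(2019, 6, 3, 10, 30)))  # True
print(allow_with_context("Bob", "Back Door", "open", datetime(2019, 6, 3, 10, 30)))     # False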

ML – Defining Normal

I’ve talked a lot so far about access control logic and where it should sit.  But how do we know what that access control logic looks like?  I spent many a year designing role based access control systems (wow, that was 10+ years ago), using a technique known as role mining: big data crunching before machine learning was in vogue.  It involved taking groups of users, trying to understand what access control patterns existed, and trying to shoehorn the results into business and technical roles.
Today, there are loads of great SaaS based machine learning systems that can take user activity logs (logs that describe user-to-application interactions) and provide views on whether activity levels are “normal”: normal for that user, for their peers, their business unit, location, purchasing patterns and so on.  The output of that process can be used to help define the initial baseline policies.
Enforcing access based on static policies alone, though, is not enough.  It is time consuming and open to many avenues of circumvention.  Machine learning also has a huge role to play within the enforcement aspect, especially as the idea of context, and what is valid or not, becomes a highly complicated and ever changing question.
One of the key issues of modern authorization, is the distinction between access control logic, enforcement and the vehicles used to deliver the necessary parts to the protected services.
Wherever possible, these should be kept modular, to allow for future proofing and the ability to design a secure system that is flexible enough to meet business requirements, scale out to millions of transactions a second and integrate thousands of services simultaneously.

This blog post was first published @ www.infosecprofessional.com, included here with permission.
