1H2019 Identity Management Funding Analysis

With the first half of 2019 now behind us, I've taken a quick look at the funding rounds that have taken place so far this year within the identity and access management space, and attempted some coarse-grained analysis.  The focus is global, and the sector definition is quite broad, based on the categories Crunchbase uses.

Key Facts: January to June 2017 / 2018 / 2019

Funding increased roughly 161% year on year for the first half of 2019 compared with the same period in 2018 (~$604m versus ~$231m, about 2.6x).  There were some pretty large later-stage investments, which look to have skewed that number somewhat.

The number of organisations actually receiving funding dropped, as did the percentage of organisations receiving seed funding.  This is pretty typical as the market matures and stabilises in some sub-categories.



1H2019
  • ~$604 million overall funding
  • Seed funding accounted for 20%
  • Median announcement date March 27th
  • 35 companies funded

1H2018
  • ~$231 million overall funding
  • Seed funding accounted for 45.3%
  • Median announcement date March 30th
  • 53 companies funded

1H2017
  • ~$237 million overall funding
  • Seed funding accounted for 32.1%
  • Median announcement date April 7th
  • 56 companies funded


1H2019 Company Analysis

A coarse-grained analysis of the 2019 numbers shows a pretty typical geographic spread, with North America, as ever, the major centre, not just for funded companies but for the funding entities too.



The types of companies funded are also interesting.  The categories are based on what Crunchbase curates and maps against company descriptions, so there may be some overlap or ambiguity when performing detailed analysis on them.



It's certainly interesting to see a range covering B2C fraud, payments and analytics, typically because the return on investment on such products is very tangible.

The stages of funding cover a broad and distributed range.  Seed is the clear leader, but the long tail indicates a maturing market, with many investors expecting strong returns.


1H2019 Top 10 Companies By Funding Amounts

The following is a simple top-down list of the companies that received the highest funding, and at what stage that funding was received:
  1. Dashlane ($110m, Series D) - http://www.dashlane.com/ (https://techcrunch.com/2019/05/30/dashlane-series-d/)
  2. Auth0 ($103m, Series E) - https://auth0.com/ (https://auth0.com/blog/auth0-closes-103m-in-funding-passes-1b-valuation/)
  3. OneLogin ($100m, Series D) - http://onelogin.com/ (https://venturebeat.com/2019/01/10/onelogin-raises-100-million-to-help-enterprises-manage-access-and-identity/)
  4. Onfido ($50m, Series C) - http://www.onfido.com/ (https://venturebeat.com/2019/04/03/onfido-raises-50-million-for-ai-powered-identity-verification/)
  5. Socure ($30m, Series C) - http://www.socure.com/ (https://www.socure.com/about/news/socure-raises-30-million-in-additional-financing-to-identify-the-human-race)
  6. Dashlane ($30m, Debt Financing) - http://www.dashlane.com/
  7. Payfone ($24m, Series G) - http://www.payfone.com/ (https://www.alleywatch.com/2019/05/nyc-startup-funding-top-largest-april-2019-vc/7/)
  8. Evident ($20m, Series B) - https://www.evidentid.com/ (http://www.finsmes.com/2019/05/evident-raises-20m-in-series-b-funding.html)
  9. Bamboocloud ($15m, Series B) - http://www.bamboocloud.cn/ (https://www.volanews.com/portal/article/index/id/1806.html)
  10. Proxy ($13.6m, Series A) - https://proxy.com/ (https://www.globenewswire.com/news-release/2019/03/27/1774258/0/en/Proxy-Raises-13-6M-in-Series-A-Funding-Led-by-Kleiner-Perkins-Emerges-from-Stealth-to-Launch-Its-Universal-Identity-Signal-for-Frictionless-Access-to-Everything-in-the-Physical-Wor.html)

NB - all data and reporting done via Crunchbase.


Use ForgeRock Access Manager to provide MFA to Linux using PAM Radius

Introduction

Our aim is to set up an integration that provides Multi-Factor Authentication (MFA) to the Linux (Ubuntu) platform using ForgeRock Access Manager. The integration uses a pluggable authentication module (PAM) pointing to a RADIUS server; in this case, AM is configured as the RADIUS server.

We achieve the following:

  1. Outsource Authentication of Linux to ForgeRock Access Manager.
  2. Provide an MFA solution to the Linux Platform.
  3. Configure ForgeRock Access Manager as a RADIUS Server.
  4. Configure PAM on the Linux server to point to our new RADIUS server.

Setup

  • ForgeRock Access Manager 6.5.2 installed and configured.
  • OS: Ubuntu 16.04.
  • PAM exists on your server (this is common these days; you'll find the PAM configuration under /etc/pam.d/).

Configuration Steps

Configure a chain in AM

Firstly we configure a simple Authentication Chain in Access Manager with two modules.

a. First module – DataStore.

b. Second module – HOTP, with email configured to point to a local fakesmtp server.

Simple Authentication Chain

Configure ForgeRock AM as a RADIUS Server

Now we configure AM as a RADIUS Server.

a. Follow steps here: https://backstage.forgerock.com/docs/am/6.5/radius-server-guide/#chap-radius-implementation

RADIUS Server setup

Secondary Configuration — i.e. RADIUS Client

Next we configure a trusted RADIUS client: our Linux server.

a. Enter the IP address of the client (the Linux server).

b. Set the Client Secret.

c. Select your Realm. I used the top level realm (don't do this in production!).

d. Select your Chain.

Configure RADIUS Client

Configure pam_radius on Linux Server (Ubuntu)

Following these instructions, configure pam_radius on your Linux server:

https://www.wikidsystems.com/support/how-to/how-to-configure-pam-radius-in-ubuntu/

a. Install pam_radius.

sudo apt-get install libpam-radius-auth

b. Configure pam_radius to talk to the RADIUS server (in this case AM).

sudo vim /etc/pam_radius_auth.conf

e.g. <AM Server VM>:1812 secret 1

Point pam_radius to your RADIUS server
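
For illustration, a complete /etc/pam_radius_auth.conf entry might look like the following (openam.example.com is a placeholder hostname, and the shared secret must match the one set in AM; the columns are server[:port], shared secret, and timeout in seconds):

# server[:port]            shared secret    timeout (s)
openam.example.com:1812    secret           1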

Tell SSH to use pam_radius for authentication.

a. Add this line to the top of the /etc/pam.d/sshd file.

auth sufficient pam_radius_auth.so debug

Note: debug is optional and has been added for testing; do not use it in production.

pam_radius set as sufficient for authentication
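
For reference, the top of /etc/pam.d/sshd would then look something like this (the @include line is Ubuntu's stock entry; other distributions will differ):

auth sufficient pam_radius_auth.so debug
# existing distribution defaults follow, e.g. on Ubuntu:
@include common-auth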

Enable Challenge response for MFA

Tell your sshd config to allow challenge/response and use PAM.

a. Set the following values in your /etc/ssh/sshd_config file.

ChallengeResponseAuthentication yes

UsePAM yes
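
After editing, restart the SSH daemon so the changes take effect (on Ubuntu the service is typically named ssh; on other distributions it may be sshd):

sudo systemctl restart ssh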

Create a local user on your Linux server and in AM

In this simple use case you will require a separate account on your Linux server and in AM.

a. Create a Linux user.

sudo adduser test

Note: Make sure the user has a different password than the user in AM to ensure you’re not authenticating locally. Users may have no password if your system allows it, but in this demo I set the password to some random string.

Create Linux User
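
If you want to script that throwaway random password, one possible approach (assuming openssl is available) is:

echo "test:$(openssl rand -base64 18)" | sudo chpasswd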

b. Ensure the user is created in AM with an email address.

User exists in AM with an email address for OTP

Test Authentication to Unix via SSH

It’s now time to put it all together.

a. I recommend you tail the auth log file.

tail -f /var/log/auth.log

b. SSH to your server:

ssh test@<server name>

c. You should be authenticating against the first module in your AM chain, so enter your AM password.

d. You should then be prompted for your OTP; check your email.

OTP generated and sent to mail attribute on user

e. Enter your OTP and press Enter, then Enter again (the challenge/response UI is not super friendly here).

f. If entered successfully, you should be logged in.

You can follow the auth logs, as well as the AM audit logs (e.g. authentication.audit.json), to view the process.
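
For example (the exact audit log path depends on where your AM configuration directory lives):

tail -f ~/openam/var/audit/authentication.audit.json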

The End.

This blog post was first published @ https://medium.com/@marknienaber included here with permission.

Next Generation Distributed Authorization

Many of today's security models spend a lot of time focusing upon network segmentation and authentication.  Both of these concepts are critical in building out a baseline defensive security posture.  However, there is a major area that is often overlooked, or at least simplified to a level of limited use: authorization.  Working out what a user, service, or thing should be able to do within another service.  The permissions.  Entitlements.  The access control entries.  I don't want to give an introduction to the many, sometimes academic, acronyms and ideas around authorization (see RBAC, MAC, DAC, ABAC and PDP/PEP amongst others).  I want to spend a page delving into some of the current and future requirements surrounding distributed authorization.

New Authorization Requirements

Classic authorization modelling tends to have a centralised policy decision point (PDP): a central location that applications, agents and other SDKs call in order to get a decision regarding a subject/object/action combination.  The PDP contains signatures (or policies) that map the objects and actions to a set of users and services.

That call-out process is now a bottleneck, for several reasons.  Firstly, the number of systems being protected is rapidly increasing, with the era of microservices, APIs and IoT devices all needing some sort of access control.  Having them all hit a central PDP doesn't seem a good use of network bandwidth or central processing power.  Secondly, that increase in objects also gives way to a more meshed and federated set of interactions, where microservices and IoT are more common.
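
To make the shape of that interaction concrete, a PDP call can be sketched as a hypothetical JSON request and response (the field names are illustrative, not any particular product's API):

Request:  { "subject": "bob", "object": "meeting-room", "action": "open" }
Response: { "decision": "Permit" }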

Distributed Enforcement

This gives way to a more distributed enforcement requirement.  How can the protected object perform an access control evaluation without having to go back to the mother ship?  There are a few things that could help.  
Firstly, we probably need to achieve three things: work out how to identify the calling user or service (the authentication token), map that identity to what it can do, and finally make sure that actually happens.  The first part is often completed using tokens, and in the distributed world, a token that has been cryptographically signed by a central authority.  JSON Web Tokens (JWTs) are popular, but not the only approach.
The second part, working out what they can do, could be handled in two slightly different ways.  One is that the calling subject brings with them what they can do.  They could do this by having the cryptographically signed token contain their access control entries.  This approach would require the service that issues tokens to also know what the calling user or service can do, so it would need knowledge of the access control entries to use.  That list of entries would also need things like governance, audit and version control, but that is needed regardless of where the entries are stored.
So here, a token gets issued and the objects being protected have a method to cryptographically validate the presented token, extract the access control entries (ACEs) and enforce what is being asked.
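
As an illustrative sketch (the entitlements claim is hypothetical, not a registered JWT claim), the payload of such a token might look like:

{
  "iss": "https://idp.example.com",
  "sub": "service-42",
  "exp": 1767225600,
  "entitlements": [
    { "object": "orders-api", "actions": ["read", "create"] }
  ]
}

The signature over this payload is what lets the protected object trust the entries without calling home.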
Having a token that contains the actual ACEs is not that new.  Capability-Based Access Control (CBAC) follows this concept, where the token could contain the object and associated actions.  It could also contain the subject identifier, or perhaps that could be delivered as a separate token.  A similar practical implementation is described in Google's Macaroons project.
What we've achieved here is to remove the access control logic from the object or service while equally removing the need to perform a call back to a policy mother ship.
A subtly different approach is to push the access control logic down to the object, but instead of it originating within the service itself, it is still owned and managed by a central authority, just distributed to the edges.
This allows for local enforcement, but central governance and management.  Modern distribution technologies like web sockets could be useful for this.  In addition, even flat file formats like JSON and YAML allow for a "repave and replace" approach as policy definitions change, which fits nicely into devops deployment models.
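
To make that concrete, a policy definition pushed to the edge as a flat YAML file might look like the following (a hypothetical format, purely for illustration):

# policy bundle distributed to the service at deploy time;
# repaved and replaced whenever the definitions change
policies:
  - id: meeting-room-open
    subjects: ["group:facilities"]
    object: "meeting-room"
    actions: ["open"]
    conditions:
      time_of_day: "08:00-18:00"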
The object itself would still need to know a few things to make the enforcement complete: a token representing the user or service, and some context to help validate the request.

Contextual Integration

Access control decisions generally require the subject, the object and any associated actions.  For example, subject=Bob could perform action=open on object=Meeting Room.  Another dimension that is now required, especially within zero trust based approaches, is context.  In Bob's example, context may include the time of day, the day of the week, or even the project he is working on.  Any of these could impact the decision.

Previous access control requests and decisions could also come into play.  For example, say Bob was just given access to the Safe Room where the gold bullion is stored; maybe his request two minutes later to open the Back Door is denied.  If that first request hadn't occurred, perhaps the request to open the Back Door would be legitimate and permitted.
Capturing context, both at authentication time and at authorization evaluation time, is now critical, as it gives the object a much clearer understanding of how to handle access requests.
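
Sketching Bob's denied request as a hypothetical context-enriched evaluation payload (again, the field names are illustrative):

{
  "subject": "bob",
  "object": "back-door",
  "action": "open",
  "context": {
    "time": "02:13",
    "day_of_week": "Sunday",
    "recent_grants": ["safe-room"]
  }
}

Here the recent grant of Safe Room access is exactly the kind of prior decision the evaluating object can now weigh.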

ML – Defining Normal

I've talked a lot so far about access control logic and where it should sit.  Well, how do we know what that access control logic looks like?  I spent many a year designing role-based access control systems (wow, that was 10+ years ago) using a process known as role mining.  Big data crunching before machine learning was in vogue: taking groups of users, trying to understand what access control patterns existed, and trying to shoehorn the results into business and technical roles.
Today, there are loads of great SaaS-based machine learning systems that can take user activity logs (logs that describe user-to-application interactions) and provide views on whether activity levels are "normal": normal for the user, for their peers, their business unit, location, purchasing patterns and so on.  The typical "access path analytics".  The output of that process can be used to help define the initial baseline policies.
Enforcing access based on static policies alone, though, is not enough.  It is time consuming and open to many avenues of circumvention.  Machine learning also has a huge role to play in the enforcement aspect, especially as the question of context, and what is valid or not, becomes highly complicated and ever changing.
One of the key issues of modern authorization is the distinction between the access control logic, the enforcement, and the vehicles used to deliver the necessary parts to the protected services.
Wherever possible, these should be kept modular, to allow for future-proofing and the ability to design a secure system that is flexible enough to meet business requirements, scale out to millions of transactions a second and integrate thousands of services simultaneously.

This blog post was first published @ www.infosecprofessional.com, included here with permission.
