Next Generation Distributed Authorization

Many of today's security models spend a lot of time focusing on network segmentation and authentication.  Both of these concepts are critical in building out a baseline defensive security posture.  However, there is a major area that is often overlooked, or at least simplified to the point of limited use: authorization.  Working out what a user, service, or thing should be able to do within another service.  The permissions.  The entitlements.  The access control entries.  I don't want to give an introduction to the many, sometimes academic, acronyms and ideas around authorization (see RBAC, MAC, DAC, ABAC and PDP/PEP, amongst others).  I want to spend a page delving into some of the current and future requirements surrounding distributed authorization.

New Authorization Requirements

Classic authorization modelling tends to rely on a centralised policy decision point (PDP) - a central location that applications, agents and other SDKs call in order to get a decision for a given subject/object/action combination.  The PDP contains signatures (or policies) that map objects and actions to sets of users and services.
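
To make the shape of that interaction concrete, the sketch below shows what a call-out to a central PDP often looks like.  It is a minimal, hypothetical illustration in Python: the endpoint URL, payload fields and use of the requests library are assumptions, not any particular product's API.

    import requests  # assumed HTTP client for the sketch

    # Hypothetical central PDP endpoint - purely illustrative.
    PDP_URL = "https://pdp.example.com/v1/decision"

    def is_permitted(subject, action, obj):
        # Every protected service calls back to the central PDP with the
        # subject/object/action triple it needs a decision for.
        response = requests.post(
            PDP_URL,
            json={"subject": subject, "action": action, "object": obj},
            timeout=2,
        )
        response.raise_for_status()
        # The PDP evaluates its policies and returns permit or deny.
        return response.json().get("decision") == "permit"

    print(is_permitted("bob", "open", "meeting-room"))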

That call-out process is now a bottleneck, for several reasons.  Firstly, the number of systems being protected is rapidly increasing, with the era of microservices, APIs and IoT devices all needing some sort of access control.  Having them all hit a central PDP doesn't seem a good use of network bandwidth or central processing power.  Secondly, that increase in objects also gives way to a more meshed and federated set of interactions, in which microservices and IoT devices increasingly call one another directly.


Distributed Enforcement


This gives rise to a more distributed enforcement requirement.  How can the protected object perform an access control evaluation without having to go back to the mother ship?  There are a few things that could help.

Firstly, we probably need to achieve three things: work out how to identify the calling user or service (typically via an authentication token), map that identity to what it can do, and finally make sure that actually happens.  The first part is often handled using tokens - and in the distributed world, a token that has been cryptographically signed by a central authority.  JSON Web Tokens (JWTs) are popular, but not the only approach.
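
As a minimal sketch of that first part, here is how a protected service might verify a signed JWT locally, assuming RS256 signing, the PyJWT library and an issuer public key already distributed to the service.  The file name and audience value are made up for illustration.

    import jwt  # PyJWT - one common library for verifying signed tokens

    # Public key of the central token-issuing authority, distributed to each
    # protected service out of band.  Name and location are illustrative.
    ISSUER_PUBLIC_KEY = open("issuer_public_key.pem").read()

    def identify_caller(token):
        # Verify the signature locally - no call back to the issuer needed.
        claims = jwt.decode(
            token,
            ISSUER_PUBLIC_KEY,
            algorithms=["RS256"],
            audience="inventory-service",  # assumed audience value
        )
        # The subject claim identifies the calling user or service.
        return claims["sub"]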

The second part - working out what they can do - could be handled in two slightly different ways.  One is that the calling subject brings with them what they can do: the cryptographically signed token contains their access control entries.  This approach requires the service that issues tokens to also know what the calling user or service can do, so it needs knowledge of the access control entries to use.  That list of entries also needs governance, audit, version control and so on, but that is required regardless of where the entries are stored.



So here, a token gets issued, and the objects being protected have a method to cryptographically validate the presented token, extract the access control entries (ACEs) and enforce what is being asked.
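
Continuing the JWT sketch from earlier, enforcement at the protected object could look something like the following.  The "entitlements" claim name and its layout are assumptions made for the example.

    import jwt  # PyJWT, as in the earlier sketch

    ISSUER_PUBLIC_KEY = open("issuer_public_key.pem").read()

    def enforce(token, requested_action, requested_object):
        # Validate the signature, then pull out the embedded entitlements.
        claims = jwt.decode(
            token,
            ISSUER_PUBLIC_KEY,
            algorithms=["RS256"],
            audience="inventory-service",
        )
        # Assumed claim layout: [{"object": "...", "actions": ["..."]}, ...]
        for entry in claims.get("entitlements", []):
            if entry["object"] == requested_object and requested_action in entry["actions"]:
                return True
        return False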

Having a token that contains the actual ACEs is not that new.  Capability-based access control (CBAC) follows this concept, where the token contains the object and its associated actions.  It could also contain the subject identifier, or perhaps that could be delivered as a separate token.  A similar practical implementation is described in Google's Macaroons project.
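
As a rough illustration of the capability idea, the pymacaroons library lets an issuer mint a token whose caveats describe what the bearer may do, and lets the protected object verify those caveats locally.  The location, identifier, secret and caveat strings below are all made up for the sketch.

    from pymacaroons import Macaroon, Verifier

    SECRET = "issuer-signing-secret"  # illustrative only

    # The issuer mints a capability: the token itself carries what it permits.
    m = Macaroon(location="https://issuer.example.com",
                 identifier="session-42",
                 key=SECRET)
    m.add_first_party_caveat("object = meeting-room")
    m.add_first_party_caveat("action = open")

    # The protected object verifies the capability without calling home.
    v = Verifier()
    v.satisfy_exact("object = meeting-room")
    v.satisfy_exact("action = open")
    print(v.verify(m, SECRET))  # True if every caveat is satisfied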

What we've achieved here is to remove the access control logic from the object or service, while equally removing the need to perform a call back to a policy mother ship.

A subtly different approach is to push the access control logic back down to the object - but instead of it originating within the service itself, it is still owned and managed by a central authority and simply distributed to the edges.



This allows for local enforcement with central governance and management.  Modern distribution technologies like WebSockets could be useful for this.  In addition, even flat file formats like JSON and YAML could support a "repave and replace" approach as policy definitions change, which fits nicely into DevOps deployment models.
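
As an illustration of centrally authored but locally enforced policy, a service might ship with a small policy file that is simply repaved whenever the central authority publishes a new version.  The file layout and field names below are assumptions for the sketch.

    import json

    # Policy file pushed out (and repaved) by the central authority.
    # Assumed contents, e.g.:
    # [{"subject": "bob", "object": "meeting-room", "actions": ["open"]}]
    with open("policy.json") as f:
        POLICY = json.load(f)

    def is_permitted(subject, action, obj):
        # Enforcement happens locally; only the policy file is centrally managed.
        for rule in POLICY:
            if rule["subject"] == subject and rule["object"] == obj \
                    and action in rule["actions"]:
                return True
        return False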

The object itself would still need to know a few things to make the enforcement complete - a token representing the user or service, and some context to help validate the request.

Contextual Integration

Access control decisions generally require the subject, the object and any associated actions.  For example, subject=Bob could perform action=open on object=Meeting Room.  Another dimension that is now required, especially within zero trust based approaches, is context.  In Bob's example, context may include the time of day, the day of the week, or even the project he is working on.  Any of these could impact the decision.

Previous access control requests and decisions could also come into play here.  For example, say Bob was just given access to the Safe Room where the gold bullion is stored.  Maybe his request two minutes later to gain access to the Back Door is denied.  If that first request had not occurred, perhaps his request to open the Back Door would be legitimate and permitted.

Capturing context, both at authentication time and at authorization evaluation time, is now critical, as it gives the object a much clearer understanding of how to handle access requests.
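
To make this concrete, a context-aware check might layer a short-lived decision history on top of the basic subject/object/action rule.  Everything below - the in-memory history, the five-minute window, the room names - is illustrative only.

    from datetime import datetime, timedelta

    # Recent grants per subject.  In practice this would be a shared,
    # short-lived store; a dict keeps the sketch self-contained.
    recent_grants = {}

    def context_aware_decision(subject, action, obj, now=None):
        now = now or datetime.utcnow()
        history = recent_grants.get(subject, [])

        # Contextual rule: a fresh Safe Room grant blocks the Back Door.
        if obj == "back-door":
            for granted_obj, granted_at in history:
                if granted_obj == "safe-room" and now - granted_at < timedelta(minutes=5):
                    return False

        # (The basic subject/object/action evaluation would sit here.)
        recent_grants.setdefault(subject, []).append((obj, now))
        return True

    print(context_aware_decision("bob", "open", "safe-room"))  # True
    print(context_aware_decision("bob", "open", "back-door"))  # False - context denies it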

ML - Defining Normal

I've talked a lot so far about access control logic and where it should sit.  Well, how do we know what that access control logic looks like?  I spent many a year designing role based access control systems (wow, that was 10+ years ago), using a technique known as role mining.  Big data crunching before machine learning was in vogue.  Taking groups of users, trying to understand what access control patterns existed, and trying to shoehorn the results into business and technical roles.

Today, there are loads of great SaaS based machine learning systems that can take user activity logs (logs that describe user-to-application interactions) and provide views on whether activity levels are "normal" - normal for that user, normal for their peers, their business unit, location, purchasing patterns and so on.  The typical "access path analytics".  The output of that process can be used to help define the initial baseline policies.
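
A very rough sketch of that baselining step: mine the activity log for subject/action/object combinations that occur often enough to look "normal", and treat those as candidate starting policies.  The log format and threshold are assumptions.

    from collections import Counter

    # Each log entry: (subject, action, object) - format assumed for the sketch.
    activity_log = [
        ("bob", "open", "meeting-room"),
        ("bob", "open", "meeting-room"),
        ("alice", "read", "payroll"),
    ]

    def mine_baseline(log, min_occurrences=2):
        counts = Counter(log)
        # Combinations seen repeatedly become candidate baseline policies;
        # rare one-offs are left for manual review.
        return [
            {"subject": s, "action": a, "object": o}
            for (s, a, o), n in counts.items()
            if n >= min_occurrences
        ]

    print(mine_baseline(activity_log))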

Enforcing access based on policies alone is not enough, though.  It is time consuming and open to many avenues of circumvention.  Machine learning also has a huge role to play in the enforcement aspect, especially as the question of context, and of what is valid or not, becomes highly complicated and ever changing.

One of the key issues in modern authorization is the distinction between access control logic, enforcement, and the vehicles used to deliver the necessary parts to the protected services.

These should be kept as modular as possible, to allow for future proofing and for designing a secure system that is flexible enough to meet business requirements, scale out to millions of transactions a second and integrate thousands of services simultaneously.

How To Build An Authentication Platform

Today's authentication requirements go way beyond hooking into a database or directory and challenging every user and service for an ID and password.  Authentication and the login experience are the application entry point, and can make or break both your security posture and the end user experience.

Authentication is typically associated with identifying, to a certain degree of assurance, who or what you are interacting with.  Authorization is typically about identifying and allowing what that person or thing can do.  This blog is focused on the former, but I might stray into the latter from time to time.

There are numerous use cases that a modern enterprise needs to fulfil, if authentication services are to deliver value.  These can include:

  • Authentication for a service or API
  • Device authentication
  • Metrics, timing and analytics of flows
  • Threat intelligence integration
  • Anonymous to known authentication profiling
  • Contextual analysis

In addition to the basic functional requirements, there are several non-functional basics too.  These are going to include:

  • Simple customisation
  • Being highly available
  • Stateless and elastic
  • Simple integrations
  • API first

I'm going to take some of these key requirements and describe them in a little more detail.

Non-Identity Intelligence

From a feature perspective, the new requirements consistently rely upon intelligence: the new buzzword in the cyber security world.  Every week a new, more consolidated threat intelligence tool comes to market.  Organisations up and down the land are rapidly building out Security Operations Centres (SOCs), with wily ex-military veterans creating strategies and starry-eyed graduates analysing SIEM and NIDS logs.  We need data.  We have data.  What we need is information.  Actionable intelligence.  Intelligence can be rapidly integrated into any number of different security architecture components.

Intelligence here is basically a focus on non-identity data signals: sources of malware, malicious IP addresses, app assurance ratings, breached credential data and so on.

The vast breadth and depth of cyber threat intelligence (CTI) sources is staggering.  Free, chargeable, subscription based, cloud based - you name it, it's available.  A common factor must be simplicity of integration, ideally via something like a REST/JSON based API that developers are familiar with.  Long-tail integration effort must be avoided too, with the ability to swap sources out and a zero barrier to exit being important.  This last point matters enormously: you need to be able to future proof your data inputs.
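
For a sense of what simple integration can look like, here is a minimal sketch of an IP reputation lookup folded into a login flow.  The endpoint, authorization header and response fields are entirely hypothetical - the point is that swapping providers should only mean changing the URL and the parsing of one response.

    import requests

    # Hypothetical CTI provider - illustrative values only.
    CTI_URL = "https://intel.example.com/v1/ip"
    CTI_API_KEY = "replace-me"

    def ip_is_risky(ip_address):
        response = requests.get(
            f"{CTI_URL}/{ip_address}",
            headers={"Authorization": f"Bearer {CTI_API_KEY}"},
            timeout=2,
        )
        response.raise_for_status()
        # Assumed response shape: {"risk_score": 0-100}
        return response.json().get("risk_score", 0) > 75

    # During login, a risky source IP might trigger step-up MFA rather than
    # an outright block.
    if ip_is_risky("203.0.113.10"):
        print("step-up authentication required")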

Whatever you want to integrate today will be out of date tomorrow.

Integration

Integration is not just limited to threat intelligence sources.  This is really a non-functional requirement, but I want to spend some time on it.  It is quite common to find that legacy (I hate this word - let's call them "classic" or initial) authentication products are difficult to integrate against and extend.

Many systems integrators (SIs) - and many do excellent jobs in highly challenging environments - will work tirelessly, and at considerable cost, to add different authentication modalities, customize one-time password options, integrate with awkward LDAP account lockout options, mobile-ise and more.  These "integration" steps are often described as non-BAU.  They require change control and are charged via a time-and-materials or scope-creep premium model.  Integration costs in a modern system really need to be minimized, if not removed.  Authentication is becoming so fluid that changes - new authentication factors, data sources, UI flows and so on - should be a standard operator journey.

Roadmapping

So why is integration such an issue?  A common problem with historical authentication deployments has been a lack of foresight.  In honesty, foresight and robust road mapping have never been a real requirement for a login system.  Logging in with usernames and passwords, and occasionally MFA, was pretty much it.  Like it or lump it.  Well, in today's digitised ecosystems, new requirements pop up daily.  Think of the following basic scenarios that will impact an authentication system:

  • New go-to-market initiatives requiring localization
  • A new product that requires new APIs and apps
  • A merger resulting in differing regulatory compliance requirements
  • New attack patterns and vector discovery
  • Competitive innovations
  • Commodity innovations

If you looked at your authentication services library and compared it to the applications and users consuming those services, would you know their functional and non-functional requirements, business objectives and challenges for the next 12-18 months?  Some will, so the underlying authentication service needs to a) have a road map and b) be able to accommodate new requirements and demands in an agile and iterative fashion.

Part of this is technical and part is operational management.  The business owners of an authentication platform need to engage with the new stakeholders in the login journey.  The login process is basically the application, from an end user perspective.  It needs to uphold security whilst improving the user experience.  Requirements gathering must be a fully integrated process, not just for application development but for identity and authentication services too.


Platform versus Product

I purposefully chose the word platform in the title, as opposed to service or product.  Modern authentication is a platform.  It powers transformation by supporting APIs, applications and services that allow organisations to create value-driven software.  It becomes the wiring in the hotel that allows all of the auxiliary products and shiny things to flourish.

Many point authentication products exist, and I am not discrediting them by any means.  Best-of-breed point solutions for biometrics, mobile SDK integration, and device or behaviour profiling exist, and will need integrating into the underlying platform.  They are integration points.  Cogs inside a bigger machine.

The glue that drives the business value, however, will be the authentication platform, capable of delivering a range of services to different applications, user communities, geographies and customers.  A single product is unlikely to be able to achieve this.

In summary, authentication has become a critical component, not only for securing user- and data-centric integrations, but also for helping to deliver continuous modernization of the enterprise.

It has become a foundational component that requires a wide breadth of coverage, coupled with agility and extensibility.


