Next Generation Distributed Authorization

Many of today’s security models spend a lot of time focusing upon network segmentation and authentication.  Both of these concepts are critical in building out a baseline defensive security posture.  However, there is a major area that is often overlooked, or at least simplified to a level of limited use: authorization.  Working out what a user, service, or thing should be able to do within another service.  The permissions.  The entitlements.  The access control entries.  I don’t want to give an introduction to the many, sometimes academic, acronyms and ideas around authorization (see RBAC, MAC, DAC, ABAC and PDP/PEP amongst others). I want to spend a page delving into some of the current and future requirements surrounding distributed authorization.

New Authorization Requirements

Classic authorization modelling tends to have a centralised policy decision point (PDP) – a central location that applications, agents and other SDKs call in order to get a decision regarding a subject/object/action combination.  The PDP contains the policies that map objects and actions to users and services.
That call-out process is now a bottleneck, for several reasons.  Firstly, the number of systems being protected is rapidly increasing, with the era of microservices, APIs and IoT devices all needing some sort of access control.  Having them all hit a central PDP doesn’t seem a good use of network bandwidth or central processing power.  Secondly, that increase in objects also gives way to a more meshed and federated set of interactions.

Distributed Enforcement

This gives way to a more distributed enforcement requirement.  How can the protected object perform an access control evaluation without having to go back to the mother ship?  There are a few things that could help.
Firstly, we probably need to achieve three things: work out how to identify the calling user or service (aka an authentication token), map that identity to what it can do, and finally make sure that is what actually happens.  The first part is often completed using tokens – and in the distributed world, a token that has been cryptographically signed by a central authority.  JSON Web Tokens (JWTs) are popular, but not the only approach.
The second part – working out what they can do – could be handled in two slightly different ways.  One is for the calling subject to bring with them what they can do.  They could do this by having the cryptographically signed token contain their access control entries.  This approach would require the service that issues tokens to also know what the calling user or service can do, so it would need knowledge of the access control entries to use.  That list of entries would also need things like governance, audit and version control, but that is needed regardless of where those entries are stored.
So here, a token gets issued and the objects being protected have a method to cryptographically validate the presented token, extract the access control entries (ACEs) and enforce what is being asked.
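A minimal sketch of that enforcement step, assuming Python with the PyJWT library and an illustrative "entitlements" claim shape (neither is mandated by anything above):

import jwt  # PyJWT

def is_allowed(token, issuer_public_key, obj, action):
    # Validate the centrally signed token locally - no call back to a PDP.
    try:
        claims = jwt.decode(token, issuer_public_key, algorithms=["RS256"],
                            audience="meeting-room-service")
    except jwt.InvalidTokenError:
        return False
    # Illustrative ACE shape: {"object": "meeting-room", "actions": ["open"]}
    for ace in claims.get("entitlements", []):
        if ace.get("object") == obj and action in ace.get("actions", []):
            return True
    return False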
Having a token that contains the actual ACEs is not that new.  Capability Based Access Control (CBAC) follows this concept, where the token could contain the object and associated actions.  It could also contain the subject identifier, or perhaps that could be delivered as a separate token.  A similar practical implementation is described in Google’s Macaroons project.
What we’ve achieved here is to remove the access control logic from the object or service, while equally removing the need to perform a call back to a policy mother ship.
A subtly different approach is to push the access control logic back down to the object – but instead of it originating within the service itself, it is still owned and managed by a central authority, and just distributed to the edges.
This allows for local enforcement, but central governance and management.  Modern distribution technologies like web sockets, or even flat file formats like JSON and YAML, allow for repave-and-replace approaches as policy definitions change, which fits nicely into devops deployment models.
The object itself would still need to know a few things to make the enforcement complete – a token representing the user or service, and some context to help validate the request.
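A minimal sketch of that second approach, assuming a centrally authored policy shipped to the edge as a flat JSON file (the file name and policy shape are illustrative assumptions):

import json

def load_policies(path="policies.json"):
    # Centrally governed policy, repaved/replaced at the edge on each deployment,
    # e.g. [{"role": "facilities", "object": "meeting-room", "actions": ["open"]}]
    with open(path) as f:
        return json.load(f)

def local_decision(policies, subject_roles, obj, action):
    # Pure local evaluation - no runtime call back to the central authority.
    return any(p["object"] == obj
               and action in p["actions"]
               and p["role"] in subject_roles
               for p in policies)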

Contextual Integration

Access control decisions generally require the subject, the object and any associated actions.  For example, subject=Bob could perform action=open on object=Meeting Room.  Another dimension that is now required, especially within zero trust based approaches, is that of context.  In Bob’s example, context may include the time of day, the day of the week, or even the project he is working on.  They could all impact the decision.  Previous access control requests and decisions could also come into play here.  For example, say Bob was just given access to the Safe Room where the gold bullion is stored; perhaps his request to gain access to the Back Door is then denied.
The capturing of context, both at authentication time and at authorization evaluation time, is now critical, as it allows the object to have a much clearer understanding of how to handle access requests.
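Continuing the sketches above, context could be folded into the local decision along these lines (the context keys and rules here are purely illustrative assumptions, using Bob’s example):

from datetime import datetime, timezone

def context_allows(decision, context):
    if not decision:
        return False
    # Only allow during working hours.
    hour = context.get("time", datetime.now(timezone.utc)).hour
    if not 8 <= hour < 18:
        return False
    # Bob was just granted the Safe Room, so deny the Back Door request.
    if "safe-room" in context.get("recently_granted", []):
        return False
    return True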

ML – Defining Normal

I’ve talked a lot so far about access control logic and where it should sit.  Well, how do we know what that access control logic looks like?  I spent many a year designing role based access control systems (wow, that was 10+ years ago), using a process known as role mining.  Big data crunching before machine learning was in vogue.  Taking groups of users, trying to understand what access control patterns existed, and trying to shoehorn the results into business and technical roles.
Today, there are loads of great SaaS based machine learning systems that can take user activity logs (logs that describe user to application interactions) and provide views on whether activity levels are “normal” – normal for them, normal for their peers, their business unit, location, purchasing patterns and so on.  The output of that process can be used to help define the initial baseline policies.
Enforcing access based on policies alone, though, is not enough.  It is time consuming and open to many avenues of circumvention.  Machine learning also has a huge role to play within the enforcement aspect, especially as the idea of context and what is valid or not becomes a highly complicated and ever-changing question.
One of the key issues of modern authorization is the distinction between access control logic, enforcement, and the vehicles used to deliver the necessary parts to the protected services.
These should be kept as modular as possible, to allow for future proofing and the ability to design a secure system that is flexible enough to meet business requirements, scale out to millions of transactions a second and integrate thousands of services simultaneously.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

Implementing JWT Profile for OAuth2 Access Tokens

There is a new IETF draft called JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens.  This is a very early -00 version that looks to describe the format of OAuth2-issued access_tokens.

Access tokens are typically bearer tokens, but the OAuth2 spec doesn’t really describe what format they should be.  They typically end up being one of two high level types – stateless and stateful.  Stateful just means “by reference”, with a long opaque random string being issued to the requestor, which resource servers can then send back to the authorization service in order to introspect and validate.  On their own, stateful or reference tokens don’t really provide the resource servers with any detail.

The alternative is to use a stateless token – namely a JSON Web Token (JWT).  This new spec aims to standardise what the content and format should be.

From a ForgeRock AM perspective, this is good news.  AM has delivered JWT based tokens (web session, OIDC id_tokens and OAuth2 access_tokens) for a long time.  The format and content of the out of the box access_tokens generally look something like the following:

The out of the box header (using RS256 signing):
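An illustrative example (the key ID is a placeholder; exact values depend on the signing key and deployment):

{
  "typ": "JWT",
  "kid": "<signing-key-id>",
  "alg": "RS256"
}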

The out of the box payload:
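An illustrative example using placeholder values – the claim names are the ones discussed below, but the exact content varies by version and configuration:

{
  "sub": "demo",
  "cts": "<internal-token-type>",
  "auth_time": 1550000000,
  "iss": "https://openam.example.com/openam/oauth2",
  "tokenName": "access_token",
  "authGrantId": "<grant-id>",
  "aud": "exampleClient",
  "nbf": 1550000000,
  "grant_type": "authorization_code",
  "scope": ["openid", "profile"],
  "realm": "/",
  "exp": 1550003600,
  "expires_in": 3600,
  "jti": "<token-id>",
  "client_id": "exampleClient",
  "cnf": { "jwk": { "kty": "RSA", "e": "AQAB", "n": "<public-key-modulus>" } }
}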

Note there is a lot of stuff in that access_token.  Note the cnf claim (confirmation key).  This is used for proof of possession support, which is of course optional, so you can easily reduce the size by not implementing that.  There are several claims that are specific to the AM authorization service, which may not always be needed in a stateless JWT world, where perhaps the RS is performing offline validation away from the AS.

In AM 6.5.2 and above, new functionality allows you to rapidly customize the content of the access_token.  You can add custom claims, remove out of the box fields and generally build token formats that suit your deployment.  We do this through the addition of scriptable support.  Within the settings of the OAuth2 provider, note the new field OAuth2 Access Token Modification Script.

The scripting ability was already in place for OIDC id_tokens.  Similar concepts now apply.

The draft JWT Profile spec basically mandates iss, exp, aud, sub and client_id, with auth_time and jti as optional.  The AM token already contains those claims.  Perhaps the only differing component is that the JWT Profile spec – section 2.1 – recommends the header typ value be set to “at+JWT” – meaning access token JWT – so the RS does not confuse the token with an id_token.  The FR AM scripting support does not allow for changes to the typ, but the payload already contains a tokenName claim (with the value access_token) to help with this distinction.

If we add a couple of lines to the out of the box script – namely the following – we cut the token content back down to the recommended JWT Profile:

accessToken.removeField("cts");
accessToken.removeField("expires_in");
accessToken.removeField("realm");
accessToken.removeField("grant_type");
accessToken.removeField("nbf");
accessToken.removeField("authGrantId");
accessToken.removeField("cnf");

The new token payload is now much more slimmed down:
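Under the same illustrative values as before, the remaining claims would look roughly like:

{
  "sub": "demo",
  "auth_time": 1550000000,
  "iss": "https://openam.example.com/openam/oauth2",
  "tokenName": "access_token",
  "aud": "exampleClient",
  "scope": ["openid", "profile"],
  "exp": 1550003600,
  "jti": "<token-id>",
  "client_id": "exampleClient"
}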

The accessToken.setField("name", "value") method allows simple extension and alteration of standard claims.
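For example, adding a purely illustrative custom claim:

accessToken.setField("tenant", "emea");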

For further details see the following documentation on scripted token content – https://backstage.forgerock.com/docs/am/6.5/oauth2-guide/#modifying-access-tokens-scripts

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

2019 Digital Identity Progress Report

School's out for summer?  Well, not quite.  Unless you're living on the east coast of Australia, it's looking decidedly bleak weather-wise for most of Europe and the American east coast.  But I digress.  Is it looking bleak for your digital identity driven projects?  What's been a success, where are we heading and what should we look out for?

Where We Are Today

Passwordless - (Report says B-)

Over the last 24 months, there have been some pretty big themes that many organisations embarking on digital identity and security related projects have been trying to succeed at.  First up, the age-old chestnut of passwordless authentication.  The password is dead, long live the password!  We are definitely making progress though.  Many of the top public sites (Facebook, LinkedIn, Twitter et al) provide multi-factor authentication options at least.  Passwords are still required as the first step, but end user education and familiarity with something other than a password during login must surely be the first step towards getting rid of them entirely.  2018 also saw the rise of WebAuthn - the W3C standards based approach for crypto based challenge-response authentication.  Could this hopefully accelerate the move to a password-free world?

API Protection - (Report says C+)

APIs will eat the world?  Well, digital disruption needs speed, agility and mashups.  APIs help organisations achieve those basic aims, but where are we with respect to the protection of those APIs?  API management platforms are now common in most enterprise architectures.  They help to perform API provisioning, versioning and life cycle management, but what about security?  Many use cases fall under the API security bandwagon, such as service to service authentication, least privilege authorization, token exchange and contextual throttling.  Most API services are now sitting comfortably behind basic authentication, but fine-grained controls and basic use cases such as token revocation and rotation are still in their infancy.  Report says "we must do better".

Microservices Protection - (Report says B-)

Not all APIs are microservices, but many net new additions to projects will leverage this approach.  Microservices infrastructures, though, bring many new security challenges as well as benefits.  Service versioning, same-service load balancing, high throughputs and fine-grained access controls have created some new emerging security patterns.  Both the sidecar and the inflight/proxy approach for traffic introspection and security enforcement have appeared.  Microservices, by their design, normally mean very high transactions per second, as well as fine-grained access control - with each service performing only a single task.  Stateless OAuth2 seems to fit the bill for many projects, but consistency around high-scale token introspection and scope design seems immature.

IoT Security - (Report says C-)

Many digital disruption projects are embracing smart device (HTTP-able) infrastructures.  Pairing those devices to real people seems a winner for many industries, from retail and insurance to finance.  But, and there's always a but, the main interest for many organisations is not the device, but the data the device is either collecting or generating.  Device protection is often lacking - default credentials, hard-coded keys, un-upgradable firmware, the inability to use HTTPS and the inability to store access tokens are all very common.  There are costs and usability issues with increased device security, and no emerging patterns are consistent.  Several regulations and security best practice documents now exist, but adoption is still low.

User Consent Management - (Report says B-)

GDPR has probably had the biggest impact, from an awareness perspective, of any piece of regulation relating to consent.  The consumer, from a pure economic buyer perspective at least, has never been so powerful.  One click away from a competitor.  From a data perspective, however, it seems the capitalist corporate machine is holding all the cards.  Marketing analytics, usage tracking, location tracking - you name it, the service provider wants that data, either to improve your service or to improve their ability to market new services.  Many organisations are not stupid.  They realise that by offering basic consent management functionality (contact preferences, the ability to be removed, data exportation, activity viewing) they are not only ticking the compliance check box, but can actually create a competitive advantage by giving their user community the image of being a trusted partner to do business with.  But will the end user ever be truly in control of their data?

What's Coming

The above four topics are not going away any time soon.  Knowledge, standards maturity and technology advances should all allow each of those areas to bounce up a grade within the next 18-24 months.  But what other concerns are on the horizon?

Well, skills immediately spring to mind.  Cyber security in general is known to have a basic skills shortage.  Digital identity seems to fall into that general trend, and some of these topics are niches within a niche.  Getting the right skill set to design microservices security or consent management systems will not be trivial.

What about new threats?  They are emerging every day.  Bot protection - at both registration and login time - not only helps improve the security posture of an organisation, but also helps improve user analytics, removes opportunities for false advertising and provides a clearer picture of a service's real organic user community.  How will things like ML/AI help here - and do they provide another skills challenge or management black hole?

The final topic to mention is usability.  Security can be simple in many respects, but usability can make or break a service.  As underlying ecosystems become more complex, with a huge supply chain of APIs, cross-boundary federations and devices, how can the end user be both protected and offered a seamless registration and login experience? Dedicated user experience teams exist today, but their skill set will need to be sharpened and focused on the security aspects of any new service.


Renewable Security: Steps to Save The Cyber Security Planet

Actually, this has nothing to do with being green.  Although that is a passion of mine.  This is more to do with a paradigm that is becoming more popular in security architectures: being able to re-spin particular services to a known “safe” state after a breach, or even as a preventative measure before a breach or vulnerability has been exploited.

Triple R's of Security


This falls into what is known as the “3 R’s of Security”.  A quick Google on that topic will return a fair few decent explanations of what it can mean.  The TL;DR is basically: rotate (credentials), repair (vulnerabilities) and repave (services and servers to a known good state).  This approach is gaining popularity mainly due to devops deployment models.  Or “secdevops”.  Or is it “devsecops”?  Containerization and highly automated “code to prod” pipelines make it a lot easier to get stuff into production, iterate and go again.  So how does security play into this?

Left-Shifting 


Well, I want to back track a little and tackle the age-old issue of why security is generally applied as a post-live concern.  Security practitioners often evangelise the “left shifting” of security: getting security higher up the production line, earlier in the software design life cycle and less as an audit/afterthought/pen testing exercise.  Why isn’t this really happening?  Well, anecdotally, just look at the audit, pen testing and testing contractor rates.  They’re high and growing.  Sure, lots of dev teams and organisations are incorporating security architecture practices earlier in the dev cycle, but many find this too slow, expensive or inhibitive.  Many simply ship insecure software and assume external auditors will find the issues.

This, I would say, has resulted in variations of R3: dev as normal, and simply flatten and rebuild in production in order to either prevent vulnerabilities from being exploited, or recover from them faster.  Is this the approach many organisations are applying to newer architectures such as microservices, serverless and IoT?

IoT, Microservices and Server-less


There are not many mature design patterns or vendors for things like microservices security or even IoT security.  Yes, there are some interesting ideas, but the likes of Forrester, Gartner and other industry analysts don’t, to my knowledge, describe security for these areas as a known market size or a level of repeatable maturity.  So what are the options?  These architectures ship without security? Well, being a security guy, I would hope not.  So what is the next best approach?  Maybe the triple R model is the next best thing.  Assume you’re going to be breached – which CISOs should be doing anyway – and focus on a remediation plan.

The triple R approach does assume a few things though.  The main one is that you have a known-safe place.  Whether that is focused on images, virtual machines or new credentials, there needs to be a position you can roll back (or forward) to that is believed to be more secure than the version before.  That safe place also needs to evolve.  There is no point in that safe place being unable to deliver the services needed to keep end users happy.

Options, Options, Options...


The main benefit of the triple R approach is that you have options – either as a response to a breach or vulnerability exposure, or as a preventative shortcut. It can bring other, more pragmatic, issues however.  If we’re referring to things like IoT security – how can devices in the field, and potentially away from Internet connectivity, be hooked, rebuilt and re-keyed?  Can this be done in a hot-swappable model too, without interruptions to service?  If you need to rebuild a smart meter, you can’t possibly interrupt the electricity supply to the property whilst that completes.

So the R3 model is certainly a powerful tool in the security architecture kit bag.  Is it suitable for all scenarios?  Probably not.  Is it a good “get out of jail” card in environments with highly optimized devops-esque processes?  Absolutely.

12 Steps to Zero Trust Success

A Google search for “zero trust” returns roughly 195 million results.  Pretty sure some are not necessarily related to access management and cyber security, but a few probably are.  Zero Trust is a term coined by the analyst group Forrester back in 2010, and it has gained popularity since Google started using the concept with their employee access management project called BeyondCorp.


It was originally focused on network segmentation but has now come to include other aspects of user focused security management.

Below is a hybrid set of concepts that tries to cover all the current approaches.  Please comment below so we can iterate and add more to this over time.


  1. Assign unique, non-reusable identifiers to all subjects [1], objects [2] and network devices [3]
  2. Authenticate every subject
  3. Authenticate every device
  4. Inspect, verify and validate every object access request
  5. Log every object access request
  6. Authentication should require 2 of: something you have, something you are, something you know
  7. Successful authentication should result in a revocable credential [4]
  8. Credentials should be scoped and follow least privilege [5]
  9. Credentials should be bound to a user, device, transaction tuple [6]
  10. Network communications should be encrypted [7]
  11. Assume all services, APIs and applications are accessible from the Internet [8]
  12. Segment processes and network traffic in logical and operational groups


[1] – Users of systems, including employees, partners, customers and other user-interactive service accounts
[2] – APIs, services, web applications and unique data sources
[3] – User devices (such as laptops, mobiles, tablets, virtual machines), service devices (such as printers, faxes) and network management devices (such as switches, routers)
[4] – Such as a cookie, tokenId or access token which is cryptographically secure.  Revocable shouldn't necessarily be limited to being time bound, e.g. revocation/black lists etc.
[5] – Credential exchange may be required where access traverses network or object segmentation.  For example, an issued credential for subject 1 to access object 1 may require object 1 to contact object 2 to fulfil the request.  The credential presented to object 2 may differ from that presented to object 1.
[6] – A token binding approach, such as signature based access tokens or TLS binding (see the sketch below)
[7] – Using, for example, standards based protocols such as TLS 1.3 or similar, e.g. Google's ALTS
[8] – Assume perimeter based networking (either software defined or network defined) is incomplete and trust cannot be placed simply on the origin of a request
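To make steps 7–9 and notes [4]–[6] a little more concrete, here is a minimal sketch, in Python with the PyJWT library, of how a resource server might validate a revocable, scoped, bound credential.  The claim names, audience, revocation store and certificate-bound confirmation check are illustrative assumptions, not a prescribed implementation.

import base64
import hashlib
import jwt  # PyJWT

REVOKED_TOKEN_IDS = set()  # revocation list, e.g. fed from the issuer [4]

def authorize(token, issuer_public_key, required_scope, client_cert_der):
    # Verify the issuer's signature, expiry and audience.
    claims = jwt.decode(token, issuer_public_key, algorithms=["RS256"],
                        audience="orders-api")
    if claims.get("jti") in REVOKED_TOKEN_IDS:
        return False  # credential has been revoked [4]
    if required_scope not in claims.get("scope", "").split():
        return False  # scoped, least-privilege check [5]
    # Bind the credential to the presenting device via a certificate
    # thumbprint carried in a confirmation claim (cnf / x5t#S256 style) [6]
    expected = claims.get("cnf", {}).get("x5t#S256")
    digest = hashlib.sha256(client_cert_der).digest()
    actual = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return expected == actual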




Below is a list of companies referencing “zero trust” in public documentation:

  • Akamai - https://www.akamai.com/uk/en/solutions/zero-trust-security-model.jsp
  • Palo Alto - https://www.paloaltonetworks.com/cyberpedia/what-is-a-zero-trust-architecture
  • Centrify - https://www.centrify.com/zero-trust-security/
  • Cisco - https://blogs.cisco.com/security/why-has-forresters-zero-trust-cybersecurity-framework-become-such-a-hot-topic
  • Microsoft - https://cloudblogs.microsoft.com/microsoftsecure/2018/06/14/building-zero-trust-networks-with-microsoft-365/
  • ScaleFT - https://www.scaleft.com/zero-trust-security/
  • zscaler - https://www.zscaler.com/blogs/corporate/google-leveraging-zero-trust-security-model-and-so-can-you
  • Okta - https://www.okta.com/resources/whitepaper-zero-trust-with-okta-modern-approach-to-secure-access/
  • ForgeRock  - https://www.forgerock.com/blog/zero-trust-importance-identity-centered-security-program
  • Duo Security - https://duo.com/blog/to-trust-or-zero-trust
  • Google’s Beyond Corp - https://beyondcorp.com/
  • Fortinet - https://www.fortinet.com/demand/gated/Forrester-Market-Overview-NetworkSegmentation-Gateways.html

Cyber Security Skills in 2018

Last week I passed the EC-Council Certified Ethical Hacker exam.  Yay to me.  I am a professional penetration tester, right?  Negatory.  I sat the exam more as an exercise to see if I “still had it”.  A boxer returning to the ring.  It is over 10 years since I passed my CISSP.  The 6-hour multiple-choice horror of an exam that was still being conducted using pencil and paper down at Royal Holloway University.  In honesty, that was a great general information security benchmark and allowed you to go in multiple different directions as an "infosec pro".  So back to the CEH…

There are now a fair few information security related career paths in 2018.  The basic split tends to be something like:

  • Managerial - I don’t always mean managing people; more risk management, compliance management and auditing
  • Technical - here I guess I focus upon penetration testing, cryptography or secure software engineering
  • Operational - thinking this is more security operations centres, log analysis, threat intelligence and the like

So the CEH would fit as an intro to intermediate level qualification within the technical sphere.  Is it a useful qualification to have?  Let me come back to that question by framing it a little.

There is the constant hum that in both the US and UK there is a massive cyber and information security personnel shortage, in both the public and private sectors.  This I agree with, but it also needs some additional framing and qualification.  Which areas, what jobs, what skill levels are missing or in short supply?  As the cyber security sector has reached a decent level of maturity with regard to job roles, and more importantly job definitions, we can start to work backwards to understand how to fulfil demand.

I often hear conversations around cyber education which go down the route of delivering a cyber security curriculum to the under-sixteens, or even under-11 age groups.  Whilst this is incredibly important for general Internet safety, I’m not sure it helps the longer term cyber skills supply problem.  If we look at the omnipresent shortage of medical doctors, we don’t start medical school earlier.  We teach the first principles earlier: maths, biology and chemistry, for example.  With those foundations in place, specialism becomes much easier at, say, eighteen, and again at 21 or 22 when specialist doctor training starts.

Shouldn’t we just apply the same approach to cyber?  A good grounding in mathematics, computing and networking would then provide a strong foundation to build upon, before focusing on cryptography or penetration testing.

The CEH exam (and this isn’t a specific criticism of the EC-Council, simply recent experience) doesn’t necessarily provide you with the skills to become a hacker.  I spent 5 months self-studying for the exam.  A few hours here and there, whilst holding down a full time job with regular travel.  Aka, not a lot of time.  The reason I probably passed the exam was mainly due to a broad 17-year history in networking, security and access management.  I certainly learned a load of stuff.  Mainly tooling and process, but not necessarily first-principles skills.

Most qualifications are great.  They certainly give the candidate a career bounce and credibility, and any opportunity to study is a good one.  I do think cyber security training is at a real inflection point though.

Clearly, most large organisations are desperately building out teams to protect against and react to security incidents.  Be it for compliance reasons or to build end user trust, we as an industry need to look at a longer term and sustainable way to develop, nurture and feed talent.  Going back to basics seems a good step forward.
