2019 Digital Identity Progress Report

School's out for summer?  Well, not quite.  Unless you're on the east coast of Australia, it's looking decidedly bleak, weather-wise, for most of Europe and the American east coast.  But I digress.  Is it looking bleak for your digital identity driven projects?  What's been a success, where are we heading and what should we look out for?

Where We Are Today

Passwordless - (Report says B-)

Over the last 24 months there have been some pretty big themes that many organisations embarking on digital identity and security related projects have been trying to succeed at.  First up, the age-old chestnut of passwordless authentication.  The password is dead, long live the password!  We are definitely making progress though.  Many of the top public sites (Facebook, LinkedIn, Twitter et al) at least provide multi-factor authentication options.  Passwords are still required as the first step, but end user education and familiarity with something other than a password during login must surely be the first steps towards getting rid of them entirely.  2018 also saw the rise of WebAuthn - the W3C standards-based approach to crypto-based challenge-response authentication.  Could this accelerate adoption of a password-free world?
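
To make the WebAuthn flow a little more concrete, below is a minimal, illustrative sketch of a browser-side registration call using the standard navigator.credentials API. The relying party details, user handle and algorithm choices are assumptions for the sake of the example; in a real deployment the challenge and user identifiers come from the server, which also verifies and stores the returned attestation.

    // A minimal sketch of WebAuthn registration in the browser.
    // Values below are illustrative placeholders, not a production configuration.
    async function registerPasswordlessCredential(): Promise<Credential | null> {
      const publicKey: PublicKeyCredentialCreationOptions = {
        // Normally a server-generated random challenge (placeholder bytes here)
        challenge: crypto.getRandomValues(new Uint8Array(32)),
        rp: { name: "Example Relying Party" },
        user: {
          id: crypto.getRandomValues(new Uint8Array(16)), // opaque user handle
          name: "alice@example.com",
          displayName: "Alice",
        },
        // Ask for an ES256 (-7) credential, with RS256 (-257) as a fallback
        pubKeyCredParams: [
          { type: "public-key", alg: -7 },
          { type: "public-key", alg: -257 },
        ],
        authenticatorSelection: { userVerification: "preferred" },
        timeout: 60000,
      };

      // The browser prompts the user (platform authenticator or security key)
      // and returns an attestation response for the server to verify and store.
      return navigator.credentials.create({ publicKey });
    }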

API Protection - (Report says C+)

APIs will eat the world?  Well, digital disruption needs speed, agility and mashups.  APIs help organisations achieve those basic aims, but where are we with respect to protecting those APIs?  API management platforms are now common in most enterprise architectures.  They help with API provisioning, versioning and life cycle management, but what about security?  Many use cases fall under the API security banner, such as service-to-service authentication, least-privilege authorization, token exchange and contextual throttling.  Most API services are now sitting comfortably behind basic authentication, but fine-grained controls and basic use cases such as token revocation and rotation are still in their infancy.  Report says "we must do better".
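
As one concrete illustration, below is a minimal sketch of OAuth2 token introspection (RFC 7662) from a resource server - the standard mechanism that makes revocation checkable at request time. The endpoint URL and the client credentials are illustrative assumptions; substitute whatever your authorisation server exposes.

    // A minimal sketch of RFC 7662 token introspection from a resource server.
    interface IntrospectionResult {
      active: boolean;
      scope?: string;
      sub?: string;
      exp?: number;
    }

    async function introspectToken(accessToken: string): Promise<IntrospectionResult> {
      const response = await fetch("https://as.example.com/oauth2/introspect", {
        method: "POST",
        headers: {
          "Content-Type": "application/x-www-form-urlencoded",
          // The resource server authenticates itself to the introspection endpoint
          "Authorization": "Basic " + Buffer.from("rs-client:rs-secret").toString("base64"),
        },
        body: new URLSearchParams({ token: accessToken }),
      });
      return (await response.json()) as IntrospectionResult;
    }

    // A resource server would reject the request if result.active is false,
    // or if the scope it requires is missing from result.scope.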

Microservices Protection - (Report says B-)

Not all APIs are microservices, but many net new additions to projects will leverage this approach.  Microservices infrastructures bring many new security challenges as well as benefits.  Service versioning, same-service load balancing, high throughputs and fine-grained access controls have created some new emerging security patterns.  Both the sidecar and the in-flight proxy approach to traffic introspection and security enforcement have appeared.  Microservices, by design, normally mean very high transactions per second as well as fine-grained access control, with each service performing only a single task.  Stateless OAuth2 seems to fit the bill for many projects, but consistency around high-scale token introspection and scope design seems immature.
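
Below is a minimal sketch of the stateless pattern, assuming JWT-formatted access tokens and the "jose" library: each service validates the token locally against the authorisation server's published keys and enforces its own scope, so no per-request introspection call is needed - which is what makes this attractive at high transaction rates. The issuer, audience, JWKS URL and scope name are assumptions.

    import { createRemoteJWKSet, jwtVerify } from "jose";

    // Stateless access token validation inside a microservice (illustrative).
    const jwks = createRemoteJWKSet(new URL("https://as.example.com/oauth2/jwks"));

    async function authorise(token: string, requiredScope: string): Promise<void> {
      const { payload } = await jwtVerify(token, jwks, {
        issuer: "https://as.example.com",
        audience: "orders-service",
      });

      // Enforce least privilege: the token must carry the scope this service needs
      const scopes = typeof payload.scope === "string" ? payload.scope.split(" ") : [];
      if (!scopes.includes(requiredScope)) {
        throw new Error("insufficient_scope");
      }
    }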

IoT Security - (Report says C-)

Many digital disruption projects are embracing smart device (HTTP-able) infrastructures.  Pairing those devices to real people seems a winner for many industries, from retail and insurance to finance.  But, and there's always a but, the main interest for many organisations is not the device, but the data the device is collecting or generating.  Device protection is often lacking - default credentials, hard-coded keys, un-upgradable firmware, the inability to use HTTPS and the inability to store access tokens are all very common.  There are cost and usability issues with increased device security, and no consistent patterns have yet emerged.  Several regulations and security best practice documents now exist, but adoption is still low.

User Consent Management - (Report says B-)

GDPR has probably had a bigger impact, from an awareness perspective, than any other piece of regulation relating to consent.  The consumer, from a pure economic buyer perspective at least, has never been so powerful - one click away from a competitor.  From a data perspective, however, it seems the capitalist corporate machine is holding all the cards.  Marketing analytics, usage tracking, location tracking - you name it, the service provider wants that data, either to improve your service or to improve their ability to market new services.  Many organisations are not stupid.  They realise that by offering basic consent management functionality (contact preferences, the ability to be removed, data exportation, activity viewing) they are not only ticking the compliance check box, but can actually create a competitive advantage by giving their user community the image of being a trusted partner to do business with.  But will the end user ever be truly in control of their data?
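
For illustration only, the sketch below shows one possible (hypothetical) shape for a stored consent record covering those basic capabilities. The field names and purpose values are assumptions, not a reference to any particular product or regulation.

    // An illustrative consent record shape - field names are assumptions.
    interface ConsentRecord {
      subjectId: string;            // the user the consent belongs to
      purpose: "marketing" | "analytics" | "service-improvement";
      granted: boolean;
      grantedAt?: string;           // ISO 8601 timestamp
      expiresAt?: string;           // consent should not be open-ended
      channel?: "email" | "sms" | "phone";
      source: string;               // where the consent was captured (web form, app, call centre)
    }

    // Example: a user opting in to marketing email until the end of 2019
    const example: ConsentRecord = {
      subjectId: "user-123",
      purpose: "marketing",
      granted: true,
      grantedAt: "2019-01-02T09:30:00Z",
      expiresAt: "2019-12-31T23:59:59Z",
      channel: "email",
      source: "preferences-page",
    };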

What's Coming

The above four topics are not going away any time soon.  Knowledge, standards maturity and technology advances should all allow each of those areas to bounce up a grade within the next 18-24 months.  But what other concerns are on the horizon?

Well, skills immediately spring to mind.  Cyber security in general is known to have a basic skills shortage.  Digital identity seems to follow that general trend, and some of these topics are niches within a niche.  Getting the right skill set to design microservices security or consent management systems will not be trivial.

What about new threats?  They are emerging every day.  Bot protection - at both registration and login time - not only helps improve the security posture of an organisation, but also helps improve user analytics, removes opportunities for false advertising and provides a clearer picture of a service's real organic user community.  How will things like ML/AI help here - and does that provide another skills challenge or management black hole?

The final topic to mention is usability.  Security can be simple in many respects, but usability can make or break a service.  As underlying ecosystems become more complex, with a huge supply chain of APIs, cross-boundary federations and devices, how can the end user be both protected and offered a seamless registration and login experience?  Dedicated user experience teams exist today, but their skill set will need to be sharpened and focused on the security aspects of any new service.


Renewable Security: Steps to Save The Cyber Security Planet

Actually, this has nothing to do with being green, although that is a passion of mine.  This is more to do with a paradigm that is becoming more popular in security architectures: being able to re-spin particular services to a known “safe” state after a breach, or even as a preventative measure before a breach or vulnerability has been exploited.

Triple R's of Security


This falls into what is known as the “3 R’s of Security”.  A quick Google on that topic will return a fair few decent explanations of what it can mean.  The TL;DR is basically: rotate (credentials), repair (vulnerabilities) and repave (services and servers to a known good state).  This approach is gaining popularity mainly due to devops deployment models.  Or “secdevops”.  Or is it “devsecops”?  Containerization and highly automated “code to prod” pipelines make it a lot easier to get stuff into production, iterate and go again.  So how does security play into this?
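
To make the three R's slightly more concrete, here is a minimal, purely illustrative sketch. The state shape and functions are assumptions, not a real platform API; the point is simply how rotate, repair and repave relate to one another.

    import { randomBytes } from "node:crypto";

    // Illustrative service state: a known-good image and a current credential.
    interface ServiceState {
      imageDigest: string;   // immutable, known-good image to repave from
      secretVersion: number;
      secret: string;
    }

    function rotate(state: ServiceState): ServiceState {
      // Rotate: issue a fresh credential and retire the old one
      return {
        ...state,
        secretVersion: state.secretVersion + 1,
        secret: randomBytes(32).toString("hex"),
      };
    }

    function repave(state: ServiceState, knownGoodDigest: string): ServiceState {
      // Repave: throw the running instance away and redeploy the known-good image,
      // rotating credentials at the same time
      return { ...rotate(state), imageDigest: knownGoodDigest };
    }

    // "Repair" is then the act of updating knownGoodDigest itself - patching the
    // base image and promoting the new digest as the position to roll forward to.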

Left-Shifting 


Well, I want to backtrack a little and tackle the age-old issue of why security is generally applied as a post-live concern.  Security practitioners often evangelise the “left shifting” of security: getting security higher up the production line, earlier in the software design life cycle, and less as an audit/afterthought/pen testing exercise.  Why isn’t this really happening?  Well, anecdotally, just look at audit, pen testing and testing contractor rates.  They’re high and growing.  Sure, lots of dev teams and organisations are incorporating security architecture practices earlier in the dev cycle, but many find this too slow, expensive or inhibitive.  Many simply ship insecure software and assume external auditors will find the issues.

This, I would say, has resulted in variations of R3: dev as normal, and simply flatten and rebuild in production in order to either prevent vulnerabilities being exploited or recover from them faster.  Is this the approach many organisations are applying to newer architectures such as microservices, serverless and IoT?

IoT, Microservices and Server-less


There are not many mature design patterns or vendors for things like microservices security or even IoT security.  Yes, there are some interesting ideas, but the likes of Forrester, Gartner and other industry analysts don’t, to my knowledge, describe security for these areas as a known market size or a level of repeatable maturity.  So what are the options?  Should these architectures ship without security?  Well, being a security guy, I would hope not.  So what is the next best approach?  Maybe the triple R model is the next best thing.  Assume you’re going to be breached – which CISOs should be doing anyway – and focus on a remediation plan.

The triple R approach does assume a few things though.  The main one is that you have a known-safe place.  Whether that is focused on images, virtual machines or new credentials, there needs to be a position you can roll back (or forward) to that is believed to be more secure than the version before.  That safe place also needs to evolve.  There is no point in a safe place that is unable to deliver the services needed to keep end users happy.

Options, Options, Options...


The main benefit of the triple R approach is that you have options – either as a response to a breach or vulnerability exposure, or as a preventative shortcut.  It can bring other, more pragmatic issues however.  If we’re referring to things like IoT security – how can devices in the field, potentially away from Internet connectivity, be hooked, rebuilt and re-keyed?  Can this be done in a hot-swappable model too, without interruptions to service?  If you need to rebuild a smart meter, you can’t possibly interrupt electricity supply to the property whilst that completes.

So the R3 model is certainly a powerful tool in the security architecture kit bag.  Is it suitable for all scenarios?  Probably not.  Is it a good “get out of jail” card in environments with highly optimized devops-esque processes?  Absolutely.


12 Steps to Zero Trust Success

A Google search for “zero trust” returns roughly 195 million results.  Pretty sure some are not necessarily related to access management and cyber security, but a few probably are.  Zero Trust was a term coined by the analyst group Forrester back in 2010 and has gained popularity since Google started applying the concept in their employee access project, BeyondCorp.


It was originally focused on network segmentation but has now come to include other aspects of user focused security management.

Below is a hybrid set of concepts that tries to cover all the current approaches.  Please comment below so we can iterate and add more to this over time.


  1. Assign unique, non-reusable identifiers to all subjects [1], objects [2] and network devices [3]
  2. Authenticate every subject
  3. Authenticate every device
  4. Inspect, verify and validate every object access request
  5. Log every object access request
  6. Authentication should require at least two of: something you have, something you are and something you know
  7. Successful authentication should result in a revocable credential [4] (a minimal sketch of issuing one follows the notes below)
  8. Credentials should be scoped and follow least privilege [5]
  9. Credentials should be bound to a user, device and transaction tuple [6]
  10. Network communications should be encrypted [7]
  11. Assume all services, APIs and applications are accessible from the Internet [8]
  12. Segment processes and network traffic into logical and operational groups


[1] – Users of systems, including employees, partners, customers and other user-interactive service accounts
[2] – APIs, services, web applications and unique data sources
[3] – User devices (such as laptops, mobiles, tablets, virtual machines), service devices (such as printers, faxes) and network management devices (such as switches, routers)
[4] – Such as a cookie, tokenId or access token that is cryptographically secure.  Revocation shouldn't necessarily be limited to being time bound, e.g. revocation/deny lists can be used.
[5] – Credential exchange may be required where access traverses network or object segmentation.  For example, an issued credential for subject 1 to access object 1 may require object 1 to contact object 2 to fulfil the request.  The credential presented to object 2 may differ from that presented to object 1.
[6] – A token binding approach, such as signature-based access tokens or TLS token binding
[7] – Using, for example, standards-based protocols such as TLS 1.3, or similar approaches such as Google's ALTS
[8] – Assume perimeter-based networking (either software defined or network defined) is incomplete and trust cannot be placed simply on the origin of a request
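
As referenced in step 7, below is a minimal sketch of issuing a short-lived, scoped, revocable credential as a signed JWT, assuming the "jose" library.  The claim names, scope value and device-binding claim are illustrative assumptions; the jti provides the handle a revocation/deny list would key on.  In practice the signing key would be the issuer's long-lived key rather than one generated per call.

    import { SignJWT, generateKeyPair } from "jose";
    import { randomUUID } from "node:crypto";

    // Illustrative: issue a scoped, revocable, short-lived credential after
    // successful authentication (steps 7 and 8 above).
    async function issueCredential(subject: string, deviceId: string): Promise<string> {
      // For the sketch only - a real issuer would reuse its own signing key
      const { privateKey } = await generateKeyPair("ES256");

      return new SignJWT({
        scope: "orders:read",        // least privilege: only what this request needs
        deviceId,                    // illustrative claim binding credential to the device
      })
        .setProtectedHeader({ alg: "ES256" })
        .setSubject(subject)
        .setJti(randomUUID())        // unique id, so the credential can be revoked
        .setIssuedAt()
        .setExpirationTime("10m")
        .sign(privateKey);
    }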




Below is a list of companies referencing “zero trust” in public documentation:

  • Akamai - https://www.akamai.com/uk/en/solutions/zero-trust-security-model.jsp
  • Palo Alto - https://www.paloaltonetworks.com/cyberpedia/what-is-a-zero-trust-architecture
  • Centrify - https://www.centrify.com/zero-trust-security/
  • Cisco - https://blogs.cisco.com/security/why-has-forresters-zero-trust-cybersecurity-framework-become-such-a-hot-topic
  • Microsoft - https://cloudblogs.microsoft.com/microsoftsecure/2018/06/14/building-zero-trust-networks-with-microsoft-365/
  • ScaleFT - https://www.scaleft.com/zero-trust-security/
  • zscaler - https://www.zscaler.com/blogs/corporate/google-leveraging-zero-trust-security-model-and-so-can-you
  • Okta - https://www.okta.com/resources/whitepaper-zero-trust-with-okta-modern-approach-to-secure-access/
  • ForgeRock  - https://www.forgerock.com/blog/zero-trust-importance-identity-centered-security-program
  • Duo Security - https://duo.com/blog/to-trust-or-zero-trust
  • Google’s Beyond Corp - https://beyondcorp.com/
  • Fortinet - https://www.fortinet.com/demand/gated/Forrester-Market-Overview-NetworkSegmentation-Gateways.html


Cyber Security Skills in 2018

Last week I passed the EC-Council Certified Ethical Hacker exam.  Yay to me.  I am a professional penetration tester, right?  Negatory.  I sat the exam more as an exercise to see if I “still had it”.  A boxer returning to the ring.  It is over 10 years since I passed my CISSP - the 6-hour multiple-choice horror of an exam that was still being conducted using pencil and paper down at Royal Holloway University.  In honesty, that was a great general information security benchmark and allowed you to go in multiple different directions as an "infosec pro".  So back to the CEH…

There are now a fair few information security related career paths in 2018.  The basic split tends to be something like:

  • Managerial - I don’t always mean managing people; more risk management, compliance management and auditing
  • Technical - here I guess I focus upon penetration testing, cryptography or secure software engineering
  • Operational - thinking this is more security operations centres, log analysis, threat intelligence and the like

So the CEH would fit as an intro-to-intermediate level qualification within the technical sphere.  Is it a useful qualification to have?  Let me come back to that question by framing it a little.

There is the constant hum that in both the US and UK there is a massive cyber and information security personnel shortage, in both the public and private sectors.  This I agree with, but it also needs some additional framing and qualification.  Which areas, what jobs, what skill levels are missing or in short supply?  As the cyber security sector has reached a decent level of maturity with regard to job roles and, more importantly, job definitions, we can start to work backwards in understanding how to fulfil demand.

I often hear conversations around cyber education which go down the route of delivering a cyber security curriculum to under-sixteens or even under-elevens.  Whilst this is incredibly important for general Internet safety, I’m not sure it helps the longer term cyber skills supply problem.  If we look at the omnipresent shortage of medical doctors, we don’t start medical school earlier.  We teach the first principles earlier: maths, biology and chemistry, for example.  With those foundations in place, specialism becomes much easier at, say, eighteen, and again at 21 or 22 when specialist doctor training starts.

Shouldn’t we just apply the same approach to cyber?  A good grounding in mathematics, computing and networking would then provide a strong foundation to build upon, before focusing on cryptography or penetration testing.

The CEH exam (and this isn’t a specific criticism of the EC-Council, simply recent experience) doesn’t necessarily provide you with the skills to become a hacker.  I spent 5 months self-studying for the exam - a few hours here and there whilst holding down a full time job with regular travel.  Aka, not a lot of time.  The reason I probably passed the exam was mainly due to a broad 17-year history in networking, security and access management.  I certainly learned a load of stuff - mainly tooling and process, but not necessarily first-principles skills.

Most qualifications are great.  They certainly give the candidate career bounce and credibility and any opportunity to study is a good one.  I do think cyber security training is at a real inflection point though.

Clearly most large organisations are desperately building out teams to protect against and react to security incidents.  Be it for compliance reasons or to build end user trust, we as an industry need to look at a longer term, sustainable way to develop, nurture and feed talent.  Going back to basics seems a good step forward.


The Role Of Mobile During Authentication

Nearly all the big player social networks now provide a multi-factor authentication option – either an SMS-delivered code or perhaps a key-derived one-time password, accessible via a mobile app.  Examples include Google’s Authenticator, Facebook’s options for MFA (including their Code Generator, built into their mobile app) or LinkedIn’s two-step verification.  There are lots more examples, but the main component is using the user’s mobile phone as an out-of-band authentication channel.

Phone as a Secondary Device - “Phone-as-a-Token”

The common term for this is “phone-as-a-token”.  Depending on the statistics, basic mobile phones are now so ubiquitous that the ability to leverage at least SMS-delivered one-time passwords (OTPs) for users who do not have either data plans or smart phones is common.  This is an initial step in moving away from the traditional username and password based login.  However, since the National Institute of Standards and Technology (NIST) released their view that SMS-based OTP delivery is insecure, there have been constant innovations around how best to integrate phone-based out-of-band authentication.  Push notifications are one; local or native biometry is another, often coupled with FIDO (Fast Identity Online) for secure application integration.
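
For context, the key-derived OTPs that authenticator apps generate typically follow TOTP (RFC 6238).  Below is a minimal sketch of the algorithm, assuming a shared secret has already been provisioned to both the app and the server (usually via a QR code).  The parameters shown (30-second step, 6 digits, SHA-1) are the common defaults.

    import { createHmac } from "node:crypto";

    // A minimal sketch of a time-based OTP (RFC 6238 / RFC 4226).
    function totp(secret: Buffer, timeStepSeconds = 30, digits = 6): string {
      // Counter = number of time steps since the Unix epoch
      const counter = Math.floor(Date.now() / 1000 / timeStepSeconds);
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));

      const hmac = createHmac("sha1", secret).update(msg).digest();

      // Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble
      const offset = hmac[hmac.length - 1] & 0x0f;
      const binary =
        ((hmac[offset] & 0x7f) << 24) |
        (hmac[offset + 1] << 16) |
        (hmac[offset + 2] << 8) |
        hmac[offset + 3];

      // Reduce to the requested number of digits, left-padded with zeros
      return String(binary % 10 ** digits).padStart(digits, "0");
    }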

EMM and Device Authentication

But using a phone as an out-of-band authentication device often overlooks the credibility and assurance of the device itself.  If push-based notification apps are used, whilst the security and integrity of those apps can be guaranteed to a certain degree, the device the app is installed upon cannot necessarily be attested to the same level.  What about environments where BYOD (Bring Your Own Device) is used?  What about the potential for jailbroken operating systems, or low-assurance or, worse still, malware-based apps running in parallel to the push authentication app?  Does that impact credibility and assurance?  Could that result in the app being compromised in some way?

In the internal identity space, Enterprise Mobility Management (EMM) software often comes to the rescue here – perhaps issuing and distributing certs or key pairs to devices in order to perform device validation before accepting the out-of-band verification step.  This can often be coupled with app assurance checks and OS baseline versioning.  However, this is often time-consuming and complex, and isn’t always possible in the consumer or digital identity space.

Multi-band to Single-band Login

Whilst you can achieve a user authentication, device authentication and out-of-band authentication nirvana, let’s spin forward and imagine a world where the majority of interactions are solely via a mobile device.  We no longer have an “out of band” authentication vehicle: the main application login occurs on the mobile.  So what does that really mean?  Well, we lose the secondary binding.  But if the initial application authentication leverages the mechanics of the original out-of-band step (aka local biometry, crypto/FIDO integration), is there anything to worry about?  Well, the initial device-to-user binding is still an avenue that requires further investigation.  I guess by removing an out-of-band process, we are reducing the number of signals or factors.  Also, unless a biometric local authentication process is used, the risk of credential theft increases substantially.

Leave your phone on the train with a basic local PIN-based authentication that allows access to refresh_tokens or private keys, and we’re back to the “keys to the castle” scenario.


User, Device & Contextual Analysis

So we’re back to a situation where we need to augment what is in fact a single-factor login journey.

The physical identity is bound to a digital device.  How can we have a continuous level of assurance for the user-to-app interaction?  We need to add additional signals – commonly known as context.

This “context” could well include environmental data such as geo-location, time and network addressing, or more behavioural signals such as movement or gait analysis and app usage patterns.  The main driver is a movement away from the big bang login event, where assurance is very high at the point of login with a long, slow tail-off as time goes by.  This correlates to the adage of short-lived sessions or access_tokens – mainly because assurance cannot be guaranteed as the time since the authentication event increases.

This “context” is then used to perform lots of smaller micro-authentication events – perhaps checking at every use of an access_token, or whenever a session is presented to an access control event.
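
As an illustration of what such a micro-authentication check might look like, here is a minimal, hypothetical sketch.  The signals, thresholds and risk weights are assumptions, intended only to show the shape of the decision, not a real scoring model.

    // Illustrative context check that could run on each access_token use.
    interface RequestContext {
      minutesSinceLogin: number;
      country: string;           // e.g. from geo-IP or device location
      lastKnownCountry: string;
      newDevice: boolean;
    }

    type Decision = "allow" | "step-up" | "deny";

    function evaluateContext(ctx: RequestContext): Decision {
      let risk = 0;
      if (ctx.minutesSinceLogin > 60) risk += 1;        // assurance decays over time
      if (ctx.country !== ctx.lastKnownCountry) risk += 2;
      if (ctx.newDevice) risk += 2;

      if (risk >= 4) return "deny";
      if (risk >= 2) return "step-up";                  // e.g. prompt for local biometry
      return "allow";
    }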

So once a mobile user has “logged in” to the app, in the background there is a lot more activity looking for changes in context (either environmental or behavioural).  No more out of band, just a lot of micro-steps.

As authentication becomes more transparent or passive, the real effort then moves to physical to digital binding or user proofing...
