5 Indicators of Cyber Security Market Failure

6 Minute Read. By Simon Moffatt.


Let us start with some brief definitions to get us all on the same page. Firstly – what is meant by the term “market failure”? A textbook description would be something like the “inefficient distribution of goods and services in a free market”. But how do we decide whether the distribution is inefficient or not? Perhaps we should look at how "efficient" is described first, then work backwards. An efficient market would display a scenario where goods and services are made, priced and distributed in a manner which cannot be improved upon, with waste minimised.

This requires analysing two distinct parties – the consumer of the good and the maker of the good. The consumer wants to purchase at the lowest price that maximises their “utility”, or satisfaction. The maker, on the other hand, wants to maximise profits whilst simultaneously minimising costs.

If we treat the "good" as the manufacture and procurement of cyber security software, services and consulting, are we confident we are operating at maximum efficiency? I would argue we are not. I am going to pick five high level topics to dig into a little deeper.

1) Labour Shortages

The 2019 (ISC)² Cybersecurity Workforce Study identified a staggering 4.07 million unfilled cyber security positions – up from 2.93 million in 2018. The report highlighted this as a global problem too, with APAC sitting on a 2.6 million backlog of unfilled roles. There are numerous other reports and Google search nuggets to back up the claim that cyber security is one of the toughest skill sets to recruit for within technology in 2020.

But what does this prove? Mismatches in labour demand and supply are common in numerous professions – medical doctors being an obvious one. An excess of demand over supply can obviously create wage inflation, amongst other inefficiencies, but what about triggers from the supply side?

The classic causes of labour market imperfection are many – but some seem to apply readily to cyber. The inelastic supply of suitable candidates is a good starting place.


Inelasticity of the supply of cyber security candidates

In this basic example, the supply of cyber candidates is described as being highly inelastic – a change in salary does not result in a proportional change in the supply of candidates. Why is this? Clearly training has a part to play. Skilled cyber practitioners are likely to require strong computer science, network and infrastructure skills before being able to embark on more specialised training. These can take many years to obtain, effectively acting as barriers to entry for new and willing candidates.
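
To put illustrative numbers on it: if average cyber salaries rose by 20% whilst the pool of suitable candidates grew by only 4%, the wage elasticity of supply would be 4 ÷ 20 = 0.2 – any value below 1 means supply responds less than proportionally to price. The figures are hypothetical, but they capture the shape of the problem.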

As with many labour markets, immobility and a lack of vacancy information may also hinder skills investment, especially if candidates are not aware of the potential benefits that long term training can bring. The increasingly common practice of remote working, however, is certainly helping to reduce the geographical immobility issues which often hamper more traditional industries.

The cyber security industry is still very much in its infancy too, which can contribute to a lack of repeatable candidate development. Only in 2019 did the UK’s Chartered Institute of Information Security receive its royal charter. Compare that to the likes of the Soap Makers Company (1638), Needlemakers Company (1656), Coachmakers Company (1677), Fanmakers (1709) and the Royal Medical Society (1773), and the profession’s relative immaturity is palpable.

This could be amplified by a lack of consistency surrounding certifications, curricula and job role descriptions. Only within the last 3 months has the industry seen CyBOK – the Cyber Security Body of Knowledge – published. This may go a little way towards formalising the training and certification of candidates globally.

2) Regulation

An interesting by-product of perceived market failure is government intervention. External intervention can take many forms and is often used to simulate competition (eg the likes of Ofcom, Ofwat or the Office of Rail and Road in the UK) where monopolistic or quasi-public sector run industries would not necessarily deliver optimum allocative efficiency if left to their own devices.

Whilst the cyber security sector is not a monopolistic supplier or employer, it has seen numerous pieces of governmental regulation. A few basic examples in Europe would include the General Data Protection Regulation (GDPR) and the Network and Information Systems (NIS) Directive. In the United States, at a state level at least, the California Consumer Privacy Act (CCPA) came to fruition, with further amendments towards the end of 2019.

I am blurring the line between security and privacy with some of those regulations, but an interesting aspect is the consumer protection angle of the likes of the GDPR and CCPA. The premise is that, left to its own devices, the market would not protect the consumer of third-party online goods and services to a satisfactory level. The regulatory intervention is not there to rectify a negative externality affecting third parties, but rather to protect the first-party user. During the exchange of goods and services with the individual, the level of privacy and security delivered does not appear to maximise the welfare of the user base as a whole. A major aim of current cyber legislation is to overcome the information asymmetries that exist when a user signs up for a service or makes an online purchase or interaction.

With respect to consumer privacy, a form of information asymmetry known as adverse selection may exist – the buyer is not fully aware of the use and value of their personal data to the supplier of a good or service, who may use that data in ways not fully understood by, or even disclosed to, the user.

The likes of the NIS Directive seem more focused upon reducing the impact of an externality – basically, a negative impact on a wider group of users, perhaps due to a data breach, service disruption or degradation that occurred through a lack of cyber security controls. A simple example could be the loss of power to an entire town if a nuclear power station is knocked offline by a malware attack.

3) Product Hyper Augmentation

The cyber security market is broad, complex and ever evolving. The number of product categories grows daily. Gartner has at least 20 security-related Magic Quadrants. CISOs and CIOs have to make incredibly complex decisions regarding product and service procurement.
Certainly, there is another set of information asymmetries at play here, but those are likely to exist in other complex software markets too. With respect to cyber, there seems to be an accelerated augmentation of features and messaging. When does a next-generation web application firewall become a dedicated API security gateway? When does a security orchestration, automation and response platform become an integrated, event-driven identity and access management workflow? Is there space for such niches upon niches, or are we entering a phase of largely merged and consolidated markets, where buyer procurement decisions are based simply on non-features such as brand affiliation and procurement ease?


Product direction via vertical and horizontal augmentation

Many mature markets reach a position where suppliers augment products into mediocrity, consumer apathy and confusion. Youngme Moon of Harvard Business School articulates this concept extremely well in her book Different, which focuses on competitive strategies. The market for cyber security products especially (maybe less so for services and consultancy) seems to be moving rapidly to a position where core products are blurred by augmentation, add-on services and proprietary market descriptions. This creates difficulties when it comes to weighing product purchases against their return on investment.

4) Breach Increase

A continuation of the purchase/RoI analysis is to ask what "success" looks like for the many kinds of cyber investment. Whether those investments are people, process or technology related, most procurement decisions end up being mapped to success criteria. Value for money. Return on investment. Call it what you will, but many organisations will look to describe what success looks like when it comes to security investments.

Is it a reduction in data breaches? Is it having fewer installed products with known CVEs (Common Vulnerabilities and Exposures), due to faster patch roll-out? Is it having more end users signing up for second-factor authentication? This ties neatly into the controls-versus-outcomes discussion, where risk, security and privacy management for an organisation needs to identify tangible, SMART (specific, measurable, assignable, realistic, time-bound) metrics for any implied cyber investment. The ultimate goal of cyber is to support the CIA (confidentiality, integrity, availability) triad, either singularly or collectively.

A major source of cyber investment success discussion is associated with data breach trends. There are numerous pieces of data to support the claim that breaches are increasing – in volume (annual reported breaches rising from 157 in 2005 to 783 in 2014), breadth and complexity. Many of the articles could admittedly be FUD raised by vendors to accelerate product adoption, but there is no denying the popularity of sites like HaveIBeenPwned, where the number of breached credentials is substantial and increasing. If cyber investment were efficient, shouldn't these metrics be falling?

This suggests two possibilities: either buyers are buying and using the wrong products, or those products are not providing a decent return on investment.

5) Corporate Information Failure

But are products really to blame? The entire thread of this article is to discuss market failure points. Information is a key component of effective free market development, and many information barriers seem to exist within the cyber security sector. Think of the following:
  • RoI on cyber product investment
  • Cost of personal data protection
  • The societal impact of critical infrastructure failures
  • Risk management success criteria
  • Cyber security certification benefit to corporations
There are likely several other angles to take on this, but full information with regards to upholding the confidentiality, availability and integrity of data is unlikely to occur. Many private sector organisations have undergone digital transformation over the last 10 years. These "corporation.next" operations have created new data protection challenges, spanning data relating to customers, employees, intellectual property, products, suppliers and transactions.

But how do organisations a) know what to protect, b) know how to protect it, and c) innovate and manage investment strategies with respect to that protection?

There are many strategies used to manage corporate cyber investment, ranging from those driven by vendor FUD – aka breach threat – right through to modern risk management strategies driven by mapping information protection to a higher level corporate strategy.

If the corporate strategy is known and well communicated, it becomes easier to overlay the information protection decisions that business owners are willing to accept, monitor and iterate against. Risk transparency can help provide a deeper understanding of which investments should be made, and whether those investments are personnel, training or product related.

Summary

Cyber security is a growing, complex and multi-faceted market. Many aspects are emerging, with new vendors, design patterns and attack vectors appearing monthly. Other aspects, such as risk management and the core protection of critical assets, are relatively mature and well understood, given the youth of the computing age.

The investment and usage patterns associated with cyber security technology, however, are seemingly plagued with numerous information failures, resulting in complex procurement and a misalignment of skills and personnel.

A value-driven approach is needed, where explicit investment decisions (on the skills provider, procurer and end user sides alike) are weighed against short and long term returns.

2H2019 Identity Management Funding Analysis

Back in July, I wrote an article taking a brief look at venture capital funding patterns within the identity and access management space for the first half of 2019.  I am going to revisit that topic, but for the second half of the year.

Key Facts July to December 2017 / 2018 / 2019

Funding for the second half of 2019 came in at roughly 3.1 times that of the same period in 2018 – a 209% year-on-year increase.  Taking a 3 year look, it seems that perhaps 2018, and not 2019, was the unusual year.


The number of organisations receiving funding has reduced every year since 2017.  The drop between 2018 and 2019 was about 15%; between 2017 and 2018, roughly a 36% decline.  As per the first half numbers, you could infer that the identity industry in general is maturing, stabilising and seeing the number of organisations needing funding start to slow.  Approximately 30% of the funding in the second half of 2019 was classified as seed, which may support that claim.

2H2019
  • ~$532 million overall funding
  • Seed funding accounted for 30.3%
  • Median announcement date Sep 26th
  • 33 companies funded

2H2018
  • ~$172 million overall funding
  • Seed funding accounted for 23.1%
  • Median announcement date Aug 29th
  • 39 companies funded

2H2017
  • ~$523 million overall funding
  • Seed funding accounted for 32.8%
  • Median announcement date Oct 3rd
  • 61 companies funded

2H2019 Company Analysis

A coarse-grained analysis of the 2019 numbers shows a pretty balanced geographic spread – between EMEA and North America at least.  Whilst most funding originates within the US, the location of the companies receiving it seems quite balanced.  In the first half of 2019, by contrast, the focus was much more on organisations based out of North America.



2H2019 Top 10 Companies By Funding Amounts

The following is a simple top-down list of the companies that received the highest funding, and at what stage that funding was received:


1Password ($200m, Series A) - https://pulse2.com/1password-200-million-funding/

AU10TIX ($60m, PE) - https://www.biometricupdate.com/201907/au10tix-receives-60m-investment-to-pay-off-debt-and-fund-growth-initiatives

Trulioo ($52m, Series C) - https://www.geekwire.com/2019/vancouver-startup-truiloo-raises-52m-identity-verification-tech/

ForgeRock Identity Day Paris (2019)

On Thursday 21 November, ForgeRock Identity Day took place in Paris: a half-day briefing on our company and our products, aimed at our customers, prospects and partners.

Hosted by Christophe Badot, VP for France, Benelux and Southern Europe, the event opened with a presentation by Alexander Laurie, VP Global Solution Architecture, on market trends and the ForgeRock vision – delivered in French, with a fine English accent.

We heard testimonials from our customers CNP Assurances, GRDF and the Renault-Nissan-Mitsubishi Alliance. Thank you to them for sharing their requirements and the solution ForgeRock provided.

Léonard Moustacchis and Stéphane Orluc, Solutions Architects at ForgeRock, gave a live demonstration of the strength of the ForgeRock Identity Platform, through a web and mobile banking application. And I had the honour of closing the day with a presentation of the product roadmap and, above all, of ForgeRock Identity Cloud, our SaaS offering available since the end of October.

The afternoon ended with a cocktail reception, which allowed us to talk in more detail with the attendees. All the photos from the event can be seen in the album on my Flickr account.


And now the shorter summary version:

On Thursday, November 21st, we hosted ForgeRock Identity Day in Paris, a half-day event for our customers, prospective customers and partners. We presented our vision of the identity landscape, our products and the roadmap. Three of our French customers – CNP Assurances, GRDF and the Renault-Nissan-Mitsubishi Alliance – presented how ForgeRock has helped them with their digital transformation and identity needs. My colleagues from the Solutions Architect team ran a live demo of our web and mobile sample banking applications to illustrate the power of the ForgeRock Identity Platform. And I closed the day with a presentation of the product roadmap, and especially of ForgeRock Identity Cloud, our solution as a service. As usual, all my photos are visible in this public Flickr album.

This blog post was first published @ ludopoitou.com, included here with permission.

Configuring ForgeRock AM Active/Active Deployment Routing Using IG

Introduction

The standard deployment pattern for the ForgeRock Identity Platform is to deploy the entire platform in multiple data centers/cloud regions. This ensures the availability of services in case of an outage in one data center. This approach also provides performance benefits, as load can be distributed among multiple data centers. Below is an example diagram for an Active/Active deployment:

Problem Statement

AM provides both stateful/CTS-based and stateless/client-based sessions. Global deployment use cases require a seamless single sign-on (SSO) experience across all applications, with the following constraints:

  • Certain deployments have distributed applications, such as App-A, deployed only in Data Center-A, and App-B, deployed only in Data Center-B.
  • The end user may travel to different locations, such as from the East Coast to the West Coast in the U.S. This means that application access requests will be handled by different data centers.

To achieve these use cases, CTS replication has to be enabled across multiple data centers/cloud regions.

In some situations, a user may try to access an application hosted in a specific data center before their corresponding sessions have been replicated. This can result in the user being prompted to re-authenticate, thereby degrading the user experience:

Note: This problem may be avoided if client-based sessions are leveraged, but many deployments have to use CTS-based sessions due to current limitations in client-based sessions. Also, when CTS-based sessions are used, the impact of CTS replication is much greater than with client-based sessions.

In this article, we leverage IG to intelligently route session validation requests to a single data center, irrespective of the application being accessed.

Solution

IG can route session validation requests to a specific data center/region, depending on an additional site cookie generated during the user’s authentication.

This approach ensures that the AM data center that issued the user’s session is used for the corresponding session validation calls. This also means that CTS replication is not required across multiple data centers/cloud regions:

Configure AM

  • Install AM 6.5.x and the corresponding DS stores, Amster, and others. The following is a sample Amster install command:
install-openam --serverUrl http://am-A.example.com:8094/am --adminPwd cangetinam --acceptLicense --userStoreDirMgr "cn=Directory Manager" --userStoreDirMgrPwd "cangetindj" --userStoreHost uds1.example.com --userStoreType LDAPv3ForOpenDS --userStorePort 1389 --userStoreRootSuffix dc=example,dc=com --cfgStoreAdminPort 18092 --cfgStorePort 28092 --cfgStoreJmxPort 38092 --cfgStoreSsl SIMPLE --cfgStoreHost am-A.example.com --cfgDir /home/forgerock/am11 --cookieDomain example.com
am> connect http://am-A.example.com:8094/am -i
Sign in
User Name: amadmin
Password: **********
amster am-A.example.com:8094> import-config --path /home/forgerock/work/amster
Importing directory /home/forgerock/work/amster
Imported /home/forgerock/work/amster/global/Realms/root-employees.json
Imported /home/forgerock/work/amster/realms/root-employees/CoookieSetterNode/e4c11a8e-6c3b-455d-a875-4a1c29547716.json
Imported /home/forgerock/work/amster/realms/root-employees/DataStoreDecision/6bc90a3d-d54d-4857-a226-fb99df08ff8c.json
Imported /home/forgerock/work/amster/realms/root-employees/PasswordCollector/013d8761-2267-43cf-9e5e-01a794bd6d8d.json
Imported /home/forgerock/work/amster/realms/root-employees/UsernameCollector/31ce613e-a630-4c64-84ee-20662fb4e15e.json
Imported /home/forgerock/work/amster/realms/root-employees/PageNode/55f2d83b-724b-4e3a-87cc-247570c7020e.json
Imported /home/forgerock/work/amster/realms/root-employees/AuthTree/LDAPTree.json
Imported /home/forgerock/work/amster/realms/root/J2eeAgents/IG.json
Import completed successfully
The import:
  • Creates /root realm aliases am-A.example.com and am-B.example.com.
  • Creates the AM agent to be used by IG in the /root realm.
  • Creates LDAPTree, which sets a cookie after authentication. Update the cookie value to DC-A or DC-B, depending on the data center being used.
  • Repeat the previous steps to configure AM in all data centers.

Configure IG

  • frProps.json specifies the AM primary and secondary DC endpoints. Refer to frProps-DC-A for DC-A, and frProps-DC-B for DC-B.
  • config.json declares the primary and secondary AmService objects.
  • 01-pep-dc-igApp.json routes session validation to a specific data center, depending on the “DataCenterCookie” value (see the sketch after this list).
  • Repeat the previous steps for deploying IG in all data centers.
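
The following is a minimal sketch of the routing idea behind 01-pep-dc-igApp.json, using IG's DispatchHandler to pick a target AM deployment from the DataCenterCookie value. It is an illustration only – the handler choice, base URIs and condition expressions are assumptions, not the exact configuration referenced above:

{
  "name": "session-validation-by-dc",
  "handler": {
    "type": "DispatchHandler",
    "config": {
      "bindings": [
        {
          "condition": "${request.cookies['DataCenterCookie'][0].value == 'DC-A'}",
          "handler": "ReverseProxyHandler",
          "baseURI": "http://am-A.example.com:8094"
        },
        {
          "condition": "${request.cookies['DataCenterCookie'][0].value == 'DC-B'}",
          "handler": "ReverseProxyHandler",
          "baseURI": "http://am-B.example.com:8094"
        }
      ]
    }
  }
}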

Test the use cases

The user accesses an application deployed in DC-A first

  1. The user accesses app1.example.com, deployed in DC-A.
  2. IG, deployed in DC-A, redirects the request to AM, deployed in DC-A for authentication.
  3. A DataCenterCookie is issued with a DC-A value.
  4. The user accesses app2.example.com, deployed in DC-B.
  5. IG, deployed in DC-B, redirects the request to AM, deployed in DC-A, for session validation.

The user accesses an application deployed in DC-B first

  1. The user accesses app2.example.com deployed in DC-B.
  2. IG, deployed in DC-B, redirects the request to AM deployed in DC-B, for authentication.
  3. A DataCenterCookie is issued with a DC-B value.
  4. The user accesses app1.example.com, deployed in DC-A.
  5. IG, deployed in DC-A, redirects the request to AM, deployed in DC-B, for session validation.

Extend AM to OAuth/OIDC use cases

OAuth: AM 6.5.2 provides the option to modify access tokens using scripts. This allows additional metadata, such as dataCenter, to be carried in stateless OAuth tokens. This information can be leveraged by the IG OAuth resource server to invoke the appropriate data center’s AmService objects for the tokenInfo/introspection endpoints:

{
  "sub": "user.88",
  "cts": "OAUTH2_STATELESS_GRANT",
  "auth_level": 0,
  "iss": "http://am6521.example.com:8092/am/oauth2/employees",
  ...
  "dataCenter": "DC-A"
}
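
As a sketch of how such a field can be added: AM's access token modification script (Groovy) exposes the token as accessToken, and a single call can stamp the issuing data center into it. The field name and hard-coded value below are illustrative assumptions, with each data center's script carrying its own identifier:

// Illustrative access token modification script fragment (Groovy).
// Each data center's AM instance hard-codes its own identifier,
// so every issued token records which DC minted it.
accessToken.setField("dataCenter", "DC-A")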

OIDC: AM allows additional claims in OIDC tokens using scripts. This information can be leveraged by IG to invoke the appropriate data center’s AmService objects.


IDM Deployment Patterns — Centralized Repo-Based vs. Immutable File-Based

Introduction

I recently blogged about how customers can architect ForgeRock Access Management to support an immutable, DevOps style deployment pattern — see link. In this post, we’ll take a look at how to do this for ForgeRock Identity Management (IDM).

IDM is a modern OSGi-based application, with its configuration stored as a set of JSON files. This lends itself well to either a centralized, repository (repo) based deployment pattern, or a file-based, immutable pattern. This blog explores both options and summarizes the advantages and disadvantages of each.

IDM Architecture

Before delving into the deployment patterns, it is useful to summarize the IDM architecture. IDM provides centralized, simple management, and synchronization of users, devices, and things. It is a highly flexible product, and caters to a multitude of different identity management use cases, from provisioning, self-service, password management, synchronization, and reconciliation, to workflow, relationships, and task execution. For more on IDM’s architecture, check out the following link.

IDM’s core deployment architecture is split between a web application running in an OSGi framework within a Jetty Web Server, and a supported repo. See this link for a list of supported repos.

Within the IDM web application, the following components are stored:

  • Apache Felix and Jetty Web Server hosting the IDM binaries
  • Secrets
  • Configuration and scripts — this is the topic of this blog.
  • Policy scripts
  • Audit logs (optional)
  • Workflow BAR files
  • Bundles and application JAR files (connectors, dependencies, repo drivers, etc)
  • UI files

Within the IDM repo the following components are stored:

  • Centralized copies of configuration and policies — again, the topic of this blog
  • Cluster configuration
  • Managed and system objects
  • Audit logs (optional)
  • Scheduled tasks and jobs
  • Workflow
  • Relationships and link data

Notice that configuration is listed twice, both on the IDM node’s filesystem, and within the IDM repo. This is the focus of this blog, and how manipulation of this can either support a centralized repository deployment pattern, or a file-based immutable configuration deployment pattern.

Centralized, Repo-Based Deployment Pattern

This is the out-of-the-box (OOTB) deployment pattern for IDM. In this model, all IDM nodes share the same repository, pulling down their configuration from it on startup and, if necessary, overwriting their local files. Any configuration changes made through the UI or over REST (REST ensures consistency) are pushed to the repo, and then down to each IDM node via the cluster service. The JSON configuration files within the ../conf directory on each IDM node are still present, but should not be manipulated directly, as this can lead to inconsistencies between the local file system and the authoritative repo configuration.
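
As a concrete illustration, configuration in this pattern is read and changed over REST rather than by editing files. For example, the audit configuration object can be fetched as follows (the host, port and credentials shown are IDM's defaults and should be adjusted for a real deployment):

curl --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     "http://localhost:8080/openidm/config/audit"

A PUT to the same endpoint updates the object; the change is persisted to the repo and propagated to the other nodes via the cluster service.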

The following component-level diagram illustrates this deployment pattern:

Immutable, File-Based Deployment Pattern

The key difference in this model is that IDM’s configuration is not stored in the repository. Instead, IDM pulls the configuration from the local filesystem and stores it in memory. The repo is still the authoritative source for all other IDM components (cluster configuration, schedules, and optionally audit logs, system and managed objects, links, relationships, and others).

The following component level diagram illustrates this deployment pattern:

Configuration Settings

The main configuration items for a multi-instance, immutable, file-based deployment pattern are:

  • The ../resolver/boot.properties file — This file stores IDM boot specifics like the IDM host, ports, SSL settings, and more. The key configuration item in this file for this blog post is openidm.node.id, which needs to be a string unique to each IDM node to let the cluster service identify each host.
  • The ../conf folder — This contains all JSON configuration files, which in this pattern IDM reads directly from the filesystem on startup. As a best practice (see link), the OOTB ../conf directory should not be used. Instead, a project folder containing the contents of the ../conf and ../script directories should be created, and IDM started with the “-p </path/to/my/project/location>” flag. This ensures OOTB and custom configurations are kept separate, to ease version control, upgrades, backouts, and others.
  • The ../<my_project>/conf/system.properties file — This file contains 2 key settings:
openidm.fileinstall.enabled=true

This setting can either be left commented (it is true by default) or uncommented and explicitly set to true, so that IDM reads its configuration from your project’s directory (for example, ../conf and ../script). It works in combination with the setting below:

openidm.config.repo.enabled=false

This setting needs to be uncommented to ensure IDM does not read the configuration from the repo, or push the configuration to the repo.

  • The ../<my_project>/conf/config.properties file. The key setting in this file is:
felix.fileinstall.enableConfigSave=false 

This setting needs to be uncommented. This means any changes made via REST or the UI are not pushed down to the local IDM filesystem. This effectively makes the IDM configuration read-only, which is key to immutability.
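
Putting these settings together, each node in this pattern is started against a version-controlled project directory. A minimal sketch, with illustrative paths:

cd /path/to/openidm
./startup.sh -p /path/to/projects/my-idm-project

Each node's ../resolver/boot.properties then supplies its unique openidm.node.id.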

Note: Direct manipulation of configuration files and promotion to other IDM environments can fail if the JSON files contain crypto material. See the following KB article for information on how to handle this. You can also use the IDM configexport tool (IDM version 6.5 and above).

The following presents key advantages and disadvantages of this deployment pattern:

Advantages

  • Follows core DevOps patterns for immutable configuration: push the configuration into a source control repo like Git, parameterize it, and promote it up to production. A customer knows without a doubt which configuration is running in production.
  • This pattern offers the ability to pre-bake the configuration into an image (such as a Docker image, an Amazon Machine Image, and others) for auto-deployment of IDM configuration using orchestration tools.
  • Supports “stack by stack” deployments, as configuration changes can be made to a single node without impacting the others. Rollback is also far simpler—restore the previous configuration.
  • The IDM configuration is set to read-only, meaning accidental UI or REST-based configuration changes cannot alter the configuration and potentially go on to impact functionality.

Disadvantages

  • As each IDM node holds its own configuration, the UI cannot be used to make configuration changes. This could present a challenge to customers new to IDM.
  • The customer must put processes in place to ensure all IDM nodes run from exactly the same configuration. This requires strong DevOps methodologies and experience.
  • Limited benefit for customers who do not modify their IDM configuration often.

Summary of Configuration Parameters

The following table summarizes the key configuration parameters used in the centralized, repo-based and immutable, file-based deployment patterns (the centralized column reflects IDM's OOTB defaults):

Parameter                            | Centralized (repo-based) | Immutable (file-based)
openidm.fileinstall.enabled          | true (default)           | true
openidm.config.repo.enabled          | true (default)           | false
felix.fileinstall.enableConfigSave   | true (default)           | false

Conclusion

There you have it: two different deployment patterns — the centralized, repo-based pattern for customers who wish to go with the OOTB configuration and/or do not update the IDM configuration often, and the immutable, file-based deployment pattern for those customers who demand it and/or are well-versed in DevOps methodologies and wish to treat IDM configuration like code.

5 Minute Briefing: Designing for Security Outcomes

This is the first in a set of blogs focused on high level briefings - typically 5 minute reads, covering design patterns and meta trends relating to security architecture and design.

When it comes to cyber security design, there have been numerous ways of attempting to devise the investment profile and allocative efficiency metric.  Should we protect a $10 bike with a $6 lock, if the chance of loss is 10%? (The expected loss there is 10% × $10 = $1, so the lock costs six times the risk it offsets.) That sort of thing.  I don’t want to tackle the measurement process per se.

I want to focus upon the generic business concept of outcomes, alongside some of the uncertainty that is often associated with complex security and access control investments.

I guess, to start with, a few definitions to get us all on the same page.  Firstly, what are outcomes?  In a simple business context, an outcome is really just a forward looking statement – where do we want to get to?  What do we want to achieve?  In the objectives, strategy and tactics (OST) model of analysis, the outcome would likely fall somewhere in the objective, and possibly the strategy, blocks.

A really basic example of an OST breakdown could be the following:

    • Objective: fit in to my wedding dress by July 1st
    • Strategy: eat less and exercise more
    • Tactic: don’t eat snacks between meals and walk to work



So how does this fit into security?  Well, cyber is typically seen as a business cost – with risk management another parallel cost centre, used to manage the implementation of cyber security investment and the subsequent returns, or associated loss reduction.

The end result of traditional information security management is something resembling a control – essentially a pretty fine grained and repeatable step that can be measured.  Maybe something like “run anti-virus version x or above on all managed desktops”.

But how does something so linear, and in some cases pretty abstract, flow back to the business objectives?  I think in general it doesn’t (or even can’t), which results in the inertia associated with security investment – the overall security posture is compromised, and business investment in those controls is questioned.

The security control could be seen as a tactic – but is often not associated with any strategy or IT objective – and certainly very rarely associated with a business objective.  The business wants to sell widgets, not worry about security controls and quite rightly so.

Improve Security Design


So how are outcomes better than simple controls?  I think there are two aspects to this.  The first is about security design; the second is about security communications.

If we take the AV control previously described – what is that trying to achieve?  Maybe a broader brush outcome is that malware isn’t great and should be avoided.  Why should malware be avoided?  Perhaps metrics can attribute an 18% firewall performance reduction to malware call-home activity, which in turn reduces the business’s ability to uphold a 5 minute response SLA for customer support calls?

Or that 2 recent data breaches were attributable to a botnet miner browser plug-in, which resulted in 20,000 identity records being leaked at a cost of $120 per record in fines – $2.4 million in fines alone?

Does a security outcome such as “a 25% reduction in malware activity” result in a more productive, accountable and business understandable work effort?  

It would certainly require multiple different strategies and tactics to make it successful, covering many different aspects of people, process and technology.  Perhaps one of the tactics involved is indeed running up to date AV.  The outcome can act both as a modular umbrella and as a future proofed, autonomous way of identifying the most value driven technical control.

Perhaps outcomes really are more about reporting and accountability?


Improve Security Communications


Accountability and communications are often a major weakness of security design and risk management.  IT often doesn’t understand the nuance of certain security requirements – anyone heard of devsecops (secdevops)?

Business understanding is vitally important when it comes to security design and that really is what “security communications” is all about.  I’m not talking about TLS (nerd joke alert), but more about making sure both the business and IT functions not only use a common language, but also work towards common goals. 

Security controls tend to be less effective when seen as checkbox exercises, powered by internal and external audit processes (audit functions tend to exist in areas of market failure, where the equilibrium state of the market results in externalities… but I won’t go there here).

Controls are often abstracted away from business objectives via a risk management layer, and can lose their overall effectiveness – and in turn, business confidence.  Controls also tend to be implicitly out of date by the time they are designed, and certainly by the time they are implemented.

If controls are emphasised less and security outcomes more – with those outcomes tied more closely to business objectives – an alignment on accountability, and in turn investment profiles, can be made.

Summary


So what are we trying to say?  At a high level, try to move away from controls and encourage more goals and outcomes based design when it comes to security.  By leveraging an outcomes based model, procurement and investment decisions can be crystallised and made more accountable.

Business objectives can be contributed towards, and security can essentially become more effective – resulting in fewer data breaches, better returns on investment and greater clarity on where investment should be made.
