IDM: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment

Introduction

This article is a continuation of the previous article on a Zero Downtime Upgrade Strategy Using a Blue/Green Deployment for AM. Traditionally, ForgeRock Identity Management (IDM) upgrades are handled either in-place or by leveraging the migration service. As many deployments have constraints around this approach (zero downtime, immutable infrastructure, etc.), a parallel deployment approach, or blue/green strategy, can be leveraged for upgrading ForgeRock IDM servers.

This article provides a high-level approach for using a blue/green methodology for updating ForgeRock IDM servers.

This corresponds to Unit 4: IDM Upgrade in our overall ForgeRock approach to upgrading.

Unit 4: IDM Upgrade
Blue/Green Deployment

Determining the correct approach

IDM 6.5 is bundled with the migration service, which lets you migrate data from a blue environment to a green environment, table by table, and row by row. This leads to identical data on both sides. The constraint is that update traffic must be cut off in the blue environment to ensure consistent data in the green environment after the migration completes. It is not a zero downtime migration.

The method described here shows you how to perform a zero downtime migration.

The approach in this article should be seen as a foundation on which to build your migration strategy. Because IDM dependencies are elaborate and varied, it is very difficult to design a perfect strategy that will work with all kinds of deployments.

The table below shows the major differences between the migration service method and the scripted CREST method:

Before using the methodology described in this article, please confirm that you really need zero downtime. It may be that the amount of data is too large to migrate within an overnight window, which prevents you from using the migration service. This constraint may be overcome with the following measures:

  1. Verify that the blue system nodes are performance-optimized to ensure the lowest possible response time.
  2. Allocate more nodes in the green environment, and distribute the migration between the nodes by partitioning the mappings.

If neither of the above options is possible, then please read on.

Prerequisites/Assumptions

1. This approach assumes that your infrastructure processes have the ability to install a parallel deployment for upgrade, or you are already using blue/green deployment.

2. In the above diagram, the blue cluster represents an existing IDM deployment (like a 4.5.x version) and the green cluster represents a new IDM deployment (like a 6.5.x version).

3. Review the Release Notes for all IDM versions between the existing and target IDM deployments for new features, deprecated features, bug fixes, and so on. For an IDM 4.5 to IDM 6.5 upgrade, review the Release Notes for IDM 5.0, 5.5, 6.0, and 6.5.

4. Verify that the deployment upgrade fits with a blue/green deployment, since this approach requires copying repository data over. When data transformation is not required, such as when upgrading from 6.0 to 6.5, sharing the repository, which is not a pure blue/green deployment, might be the best fit.

5. When external systems are the source of truth for data, the IDM repository can be rebuilt through a full reconciliation. Therefore, the methodology described in this article is not relevant for that case.

Upgrade Process Using the CREST Scripted Connector

1. Prepare the migration:

  • Clone the crestidmmigration project from Bitbucket, and read the project description so that you can decide whether this method fits the requirements. While implicit sync always occurs in the blue environment, decide which environment you will use to initiate the reconciliation.
  • Decide which strategy to employ for external resources (such as a ForgeRock Directory Server). As described in the project, you can migrate the repository links table using the migration service, as the CREST migration preserves the managed object IDs. Alternatively, you may perform a full reconciliation after the migration. In all cases, turn off implicit sync for all mappings in the green environment to avoid unnecessary (and perhaps conflicting) traffic to the external provisioning systems (see the sketch following this list).
  • Follow the instructions in the Installation Guide to prepare for a migration, except that you are not going to perform a migration.
  • Provide all property mappings in sync.json. For encrypted values such as “password”, step 2 in the Installation Guide ensures that both environments share the same encryption keys. Provide all the necessary configuration information to optimize the reconciliation (in particular, paged reconciliation). However, do not alter the correlationQuery.
  • Edit the onCreate and onUpdate scripts to add processing for all relationships. These scripts ensure that relationship properties are propagated to the target, but also filter out any relationships to resources that are not yet provisioned. This prevents duplicate entries.
  • Edit provisioner.openicf-scriptedcrest.json to include all properties that will be migrated. Be careful not to change the configuration for the _id property, and examine how relationship properties are configured (See “devices” for multivalued, and “owner” for single value).
  • Perform a blue/green upgrade to deploy the CREST scripted connector for the 4.x or 5.x nodes. Ensure that implicit sync is disabled for all mappings involving the CREST connectors. The resulting green environment becomes the blue environment in the next migration phase.
  • Deploy the green environment (for example, 6.5) prepared for migration (including the CREST connector, if reconciliation will be launched from the green nodes).
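Following up on the note above about turning off implicit sync in the green environment: one way to toggle it for every mapping at once is through the mapping-level enableSync property over IDM's config endpoint. This is a minimal sketch only; the hostname and credentials are placeholders, it assumes jq is available, and the enableSync behavior should be confirmed against the documentation for your IDM version.

# Fetch the current sync configuration from the green environment and disable implicit sync for every mapping
curl -s -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" \
  "https://green-idm.example.com/openidm/config/sync" \
  | jq '.mappings |= map(. + {enableSync: false})' > sync-disabled.json

# Replace the sync configuration with the modified version
curl -s -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" \
  -H "Content-Type: application/json" -X PUT --data @sync-disabled.json \
  "https://green-idm.example.com/openidm/config/sync"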

2. Turn on implicit sync in blue, and launch the reconciliation of each scripted CREST mapping, waiting for each one to complete before starting the next (so they are not running concurrently), as shown in the sketch below.
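As a rough illustration, each reconciliation can be launched and monitored over IDM's REST interface from whichever environment hosts the scripted CREST mappings. The mapping name, hostname, and credentials below are placeholders, and the recon endpoint parameters and response fields should be checked against the documentation for your IDM version.

# Launch a reconciliation for a single scripted CREST mapping (placeholder mapping name)
curl -s -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" \
  -X POST "https://idm.example.com/openidm/recon?_action=recon&mapping=managedUser_scriptedCrestManagedUser"

# Check the reconciliation runs before starting the next mapping; wait until the state is no longer ACTIVE
curl -s -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" \
  "https://idm.example.com/openidm/recon" | jq '.reconciliations[] | {mapping, state}'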

3. You may use the migration service to migrate other data, such as internal roles, internal users, and links.

4. Switch over to the new deployment:

  • After validating that the new deployment is working correctly, switch the load balancer from blue to green, and turn off implicit sync to external provisioning systems in blue. Then, turn on implicit sync to external provisioning systems in green, and perform a reconciliation with the external provisioning systems, or use a custom method if provisioning is not performed through reconciliation.
  • If there are any issues, you can always roll back to the blue deployment.

Note: Any managed object or configuration change made after switchover should be applied to both blue and green deployments so that no change is lost during rollback.

Post Go-Live

  1. Stop the migration service on the green deployment.
  2. Stop the blue IDM servers.
  3. De-provision the blue deployment.

Conclusion

Although a blue/green deployment requires a high level of deployment maturity, this approach provides an elegant way to minimize downtime for ForgeRock deployment upgrades. It is always advisable to practice an upgrade strategy in lower environments, like dev and stage, before moving to the production environment.

Depending on the complexity of your deployment, multiple things may need to be considered for these upgrades, such as customizations, new FR features, etc. It is always recommended to break the entire upgrade process into multiple releases like “base upgrade”, followed by “leveraging new features” and so on.

When deploying IDM for the first time, it is always advisable to incorporate the upgrade strategy early on in the project, so that any designed feature allows for seamless migration in the future. Also, syncing to the green environment will impact the blue update performance, as implicit syncs are all executed synchronously on the request’s thread (on 4.x, 5.x, and 6.0). Fortunately, this no longer applies to 6.5 when queued sync is enabled. The impact will be noticeable with high relationship cardinality, as the process requires querying the target system for each relationship before propagating it. This is why planning the upgrade strategy well in advance is important.

The provided CREST scripts and configuration are a starting point, as well as a proof of concept, which you can use as the basis to build your own upgrade based on your deployment requirements. The details are described in the crestidmmigration project. Note that a second alternative solution is proposed there; one that preserves relationships and lets you run the migration service from the green environment (CREST+MIGRATION folder). However, this may contribute to a higher performance hit on the blue environment. Please note that crestidmmigration is an evolving project, and as such, some other variants could be proposed in the future, so stay tuned!

DS: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment

Introduction

This is the continuation of the previous blog about a Zero Downtime Upgrade Strategy Using a Blue/Green Deployment for AM. Traditionally, ForgeRock Directory Server (DS) upgrades are handled via a rolling upgrade strategy using an in-place update. As many deployments have constraints around this approach (zero downtime, immutable, etc.), a parallel deployment approach, also known as a blue/green strategy, can be leveraged for upgrading ForgeRock DS servers.

This blog provides a high-level approach for using a blue/green methodology for updating ForgeRock DS-UserStores.

This corresponds to Unit 3: DS-UserStores in our overall ForgeRock upgrade approach.

ForgeRock Upgrade Units
Unit 3: DS-Userstores Upgrade Process

Prerequisites/Assumptions

1. This approach assumes that your infrastructure processes have the ability to install a parallel deployment for an upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing DS deployment (like a 3.5.x version), and the green reflects a new DS deployment (like a 6.5.x version).

3. There are N+1 DS servers deployed in your existing deployment. N servers are used for your production workload and one server is reserved for maintenance activities like backup, upgrades, etc. If there is no maintenance server, then you may need to remove one server from the production cluster (thereby reducing production load capacity) or install an additional DS server node for this upgrade strategy.

4. Review the Release Notes for all DS versions between the existing and target DS deployments for new features, deprecated features, bug fixes, and so on. For a DS 3.5 to DS 6.5 upgrade, review the Release Notes for DS 5.0, 5.5, 6.0, and 6.5.

Upgrade Process

1. Unconfigure replication for the DS-3 user store. Doing so ensures that the upgrade doesn’t impact your existing DS deployment.

2. Upgrade DS-3 in place using the DS upgrade process.

3. Create a backup from DS-3 using the DS backup utility (see the command sketch after step 11).

4. Configure green RS-1’s replication with the existing blue replication topology.

5. Configure green RS-2’s replication with the existing blue replication topology.

6. Install green DS-1 and restore data from backup using the DS restore utility.

7. Install green DS-2 and restore data from backup using the DS restore utility.

8. Install green DS-3 and restore data from backup using the DS restore utility.

9. Configure green DS-1’s replication with green RS-1.

10. Configure green DS-2’s replication with green RS-1.

11. Configure green DS-3’s replication with green RS-1.
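The backup and restore steps (3, and 6 through 8) can be scripted with the DS command-line tools, as sketched below. Treat this strictly as an outline: the hostnames and backend ID are placeholders, the backup must be taken after DS-3 has been upgraded to the target version, and option names differ between DS releases (for instance, replication is managed with dsreplication disable/enable on 3.5 but dsreplication unconfigure/configure on later versions), so confirm each command against the tools reference for your target version.

# Step 3: back up the user data from the upgraded DS-3 (placeholder host, port, and backend ID)
backup --hostname blue-ds3.example.com --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --backendID userRoot --backupDirectory /backup/ds3

# Steps 6-8: restore that backup into each newly installed green DS server
restore --hostname green-ds1.example.com --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --backupDirectory /backup/ds3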

Switch Over to the New Deployment

12. After validating that the new deployment is working correctly, switch the load balancer from blue to green. This can also be done incrementally. If any issues occur, you can always roll back to the blue deployment.

If direct hostnames are used by DS clients, such as AM, IDM, etc., then those configurations need to be updated to leverage new green hostnames.

Post Go-Live

13. Unconfigure the blue RS1 replication server to remove this server from blue’s replication topology.

14. Unconfigure the blue RS2 replication server to remove this server from blue’s replication topology.

15. Stop the blue DS servers.

16. Stop the blue RS servers.

17. De-provision the blue deployment.

Conclusion

Although a blue/green deployment requires a high level of deployment maturity, this approach provides an elegant way to minimize downtime for ForgeRock deployment upgrades. It is always advisable to try an upgrade strategy in lower environments, like dev and stage, before moving to a production environment.

Depending on the complexity of your deployment, there can be multiple things to be considered for these upgrades, such as customizations, new FR features, etc. It is always recommended to break the entire upgrade process into multiple releases like “base upgrade” followed by “leveraging new features”, and so on.

AM and IG: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment

Introduction

The standard deployment for the ForgeRock Identity Platform consists of multiple ForgeRock products such as IG, AM, IDM, and DS. As newer ForgeRock versions are released, deployments using older versions need to be migrated before they reach their end of life. Also, newer versions of ForgeRock products provide features such as intelligent authentication and the latest OAuth standards, which help businesses implement complex use cases.

ForgeRock Deployment Components

Problem Statement

Traditionally, ForgeRock upgrades are handled via a rolling upgrade strategy using an in-place update. This strategy doesn’t suit all deployments due to the following constraints:

  • Many deployments don’t allow any downtime. This means production servers can’t be stopped for upgrade purposes.
  • Some deployments follow an immutable instances approach. This means no modification is allowed on the current running servers.

To resolve these constraints, a parallel deployment approach, or blue/green strategy, can be leveraged for upgrading ForgeRock servers.

Solution

This article provides a high-level approach for using a blue/green methodology for updating ForgeRock AM servers and related components like DS-ConfigStore, DS-CTS, AM-Agents, and IG servers. We plan to cover similar strategies for DS-UserStores and IDM in future articles.

In order to upgrade a ForgeRock deployment, we first need to analyze the dependencies between the various ForgeRock products and their impact on the upgrade process.

Given the dependencies between ForgeRock products, it is generally advisable to upgrade AM before upgrading DS, AM agents, and others, as new versions of AM support older versions of DS and AM agents, but the converse may not be true.

Note: There can be some exceptions to this rule. For example:

  • Web policy agents 4.x are compatible with AM 6.0, but not with AM 6.5. This means the order of upgrade shall be existing version to AM 6.0 => AM Agent 4.x to 5.x => AM 6.0 to AM 6.5.x
  • If an AM-IDM integration is used, then both AM and IDM need to be upgraded at the same time.

Upgrade Units

ForgeRock Upgrade Units

A ForgeRock Identity Platform deployment can be divided into four units, so that the upgrade of each unit can be handled individually:

  • Unit 1: AM and its related stores (DS-Config and DS-CTS)
  • Unit 2: AM-Agents/IG
  • Unit 3: DS-UserStores
  • Unit 4: IDM and its datastore

The order of upgrade used by our approach shall be Unit 1 => Unit 2 => Unit 3 => Unit 4.

Unit 1: AM Upgrade

Unit 1: AM Upgrade

Prerequisites/Assumptions

1. This approach assumes that your infrastructure processes have the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing AM deployment (like the 13.5.x version) and the green cluster reflects a new AM deployment (like the 6.5.x version).

3. There are N+1 AM servers and corresponding config stores deployed in your existing deployment. This means N servers are used for production load, and one server is reserved for maintenance activities like backup, upgrades, and others. If there is no such maintenance server, then you may need to remove one server from the production cluster (thereby reducing production load capacity) or install an additional node (AM server and corresponding config store) for this upgrade.

4. No sessions in CTS servers are replicated during the blue/green switch; therefore, users are expected to re-authenticate after this migration. If your business use cases require users to remain authenticated, then these sessions (like OAuth refresh tokens) need to be synced from the old to the new deployment. Mechanisms like LDIF export/import or the IDM synchronization engine can be leveraged for syncing selective tokens from the old to the new deployment. Also, refer to the AM Release Notes on session compatibility across AM versions.

5. Review the Release Notes for all AM versions between the existing and target AM deployments for new features, deprecated features, bug fixes, and so on. For an OpenAM 13.5 to AM 6.5 upgrade, review the Release Notes for AM 5.0, 5.5, 6.0, and 6.5.

Upgrade Process

1. Unconfigure replication for the DS-3 Config store. This ensures that the upgrade doesn’t impact an existing AM deployment.

2. Upgrade AM-3 in-place using the AM upgrade process. Note: You may need to handle new AM features in this process like AM 6.5 secrets, and others.

3. Export Amster configs from AM-3.

4. Transform the Amster export so that it is aligned with the new green deployment (for example, DS hostname:port values); see the sketch after step 5.

5. Install AM, DS-Config, and DS-CTS servers. Import the Amster export into a new green cluster. Note: For certain deployment patterns, such as ForgeRock immutable deployment, the Amster import needs to be executed for each AM node. If a shared config store is used, then the Amster import needs to be executed only once, and other nodes are required to be added to the existing AM site.
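Steps 3 through 5 can be driven with Amster and standard shell tools. The fragment below is a sketch under assumed hostnames and paths; in particular, the connection options depend on how Amster authentication is set up in your deployment, and the sed substitution is only an example of the kind of transformation step 4 may require.

# Step 3: export the blue AM-3 configuration with Amster (placeholder hosts, key, and paths)
cat > /tmp/export.amster << 'EOF'
connect -k /path/to/amster_rsa https://blue-am3.example.com/openam
export-config --path /tmp/amster-export
:quit
EOF
./amster /tmp/export.amster

# Step 4: align the export with the green deployment, for example by rewriting DS hostname:port values
grep -rl 'blue-ds.example.com:1636' /tmp/amster-export | \
  xargs sed -i 's/blue-ds.example.com:1636/green-ds.example.com:1636/g'

# Step 5: import the transformed configuration into the new green cluster
cat > /tmp/import.amster << 'EOF'
connect -k /path/to/amster_rsa https://green-am1.example.com/openam
import-config --path /tmp/amster-export
:quit
EOF
./amster /tmp/import.amster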

Switch Over to the New Deployment

6. After validating that the new deployment is working correctly, switch the load balancer from blue to green. This can also be done incrementally. If any issues occur, we can always roll back to the blue deployment.

Note: Any configuration changes made after the blue cluster’s Amster export should be applied to both blue and green deployments so that no configuration change is lost during switchover or rollback.

Post Go-Live

7. Stop the AM servers in the blue deployment.

8. Stop the Config and CTS DS servers in the blue deployment.

9. De-provision the blue deployment.

Unit 2: AM-Agent/IG Upgrade

Unit 2: AM-Agent/IG Upgrade Process

AM-Agent

Prerequisites/Assumptions

1. This approach assumes that your deployment (including applications protected by agents) has the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing AM-Agent deployment and the green cluster reflects the new AM-Agent deployment.

3. A parallel base green deployment for protected app servers has already been created.

4. Create new Agent profiles for green deployment on AM servers.

5. This approach assumes both old and new AM-Agent versions are supported by the AM deployment version.

6. Refer to the Release Notes for latest and deprecated features in the new AM-Agent/IG version, such as the AM-Agent 5.6 Release Notes.

Upgrade Process

1. Install AM-Agents in the green deployment. Update agent profiles on the AM server (created in #4 above) for new agents deployed in the green deployment to match configurations used in agent profiles from the blue deployment. For certain AM versions, this process can be automated by retrieving existing Agent profiles and using these results to create new Agent profiles.
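On AM versions that expose agent profiles through the realm-config REST endpoints, this copy can be scripted roughly as follows. Everything here is an assumption to validate against the API explorer of your AM version: the endpoint path, the required headers, the profile names, and which properties need adjusting for the green app servers.

# Read an existing blue web agent profile (placeholder names; pass an admin SSO token in the session header)
curl -s -H "iPlanetDirectoryPro: $ADMIN_SSO_TOKEN" \
  "https://am.example.com/openam/json/realms/root/realm-config/agents/WebAgent/blue-agent-01" \
  > blue-agent-01.json

# Strip read-only fields, adjust any URLs that point at blue app servers, then create the green profile
jq 'del(._id, ._rev)' blue-agent-01.json > green-agent-01.json
curl -s -H "iPlanetDirectoryPro: $ADMIN_SSO_TOKEN" -H "Content-Type: application/json" \
  -X PUT --data @green-agent-01.json \
  "https://am.example.com/openam/json/realms/root/realm-config/agents/WebAgent/green-agent-01"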

Switch Over to the New Deployment

2. After validating that the new deployment is working properly, switch the load balancer from blue to green.

Post Go-Live

3. Stop the app servers in the blue deployment.

4. Remove the blue agent profiles from AM deployment.

5. De-provision the blue deployment.

IG

Prerequisites/Assumptions

1. This approach assumes that your deployment (including applications protected by agents) has the ability to install a parallel deployment for upgrade, or you are already using a blue/green deployment.

2. In the above diagram, the blue cluster reflects an existing IG deployment and the green reflects the new IG deployment.

3. This approach assumes both old and new IG versions are supported by the AM deployment version.

4. Create the new Agent profiles required by the IG servers for the green deployment on the AM servers.

5. Refer to the Release Notes for the latest and deprecated features in the new IG version, like IG 6.5 Release Notes.

Upgrade Process

1. Update the IG configs in the git repository as per the changes in the new version. You may create a separate branch in your repository for this purpose (see the sketch after these steps).

2. Deploy the new green IG deployment by leveraging updated configurations.
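A minimal sketch of step 1, assuming the IG configuration (routes and config) is kept in a git repository; the repository URL, branch name, and directory layout are placeholders:

# Create an upgrade branch for the new IG version's configuration
git clone https://git.example.com/ig-config.git
cd ig-config
git checkout -b ig-6.5-upgrade

# Update routes and config for settings that changed or were deprecated in the new version, then commit
git add .
git commit -m "Update IG configuration for the 6.5 green deployment"
git push -u origin ig-6.5-upgrade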

Switch Over to the New Deployment

3. After validating that the new deployment is working fine, switch the load balancer from blue to green.

Post Go-Live

4. Stop the IG servers in the blue deployment.

5. De-provision the blue deployment.

Conclusion

Although a blue/green deployment requires a high level of deployment maturity, this approach provides an elegant way to minimize downtime for ForgeRock deployment upgrades. It is always advisable to practice an upgrade strategy in lower environments, like dev and stage, before moving to a production environment.

Depending on the complexity of your deployment, there can be multiple things to be considered for these upgrades, such as customizations, new FR features, migration to containers, and others. It is always recommended to break the entire upgrade process into multiple releases, like “base upgrade” followed by “leveraging new features”, and so on.

5 Indicators of Cyber Security Market Failure

6 Minute Read. By Simon Moffatt.


Let us start with some brief definitions to get us all on the same page. Firstly – what is meant by the term “market failure”? A textbook description would be something that articulated the “inefficient distribution of goods and services in a free market”. But how do we decide whether the distribution is inefficient or not? Perhaps, let us look at how "efficient" is described first, then work backwards.  An efficient market would probably display a scenario where goods and services are distributed, priced and made, in a manner which can not be improved upon, with the amount of waste minimised.

This requires analysing two distinct parties – the consumer of the good and the maker of the good. The consumer wants to purchase at the lowest price that maximises their “utility”, or satisfaction. The maker, on the other hand, wants to maximise profits whilst simultaneously minimising costs.

If we start looking at the "good", as the manufacturing and procurement of cyber security software, services and consulting, are we confident we are operating at maximum efficiency? I would argue we are not.  I am going to pick five high level topics in which to dig a little deeper.

1) Labour Shortages

The 2019 ISC2 Cyber Workforce Study identified a staggering 4.07 million unfilled cyber security positions – up from 2.93 million in 2018. The report highlighted this as a global problem too – with APAC sitting on a 2.6 million backlog of unfilled roles. There are probably numerous other reports and Google search nuggets to back up the claim that cyber security is one of the toughest skill sets to recruit for within technology in 2020.

But what does this prove? Mismatches in labour demand and supply are common in numerous professions – medical doctors being an obvious one. An excess in demand over supply can obviously create wage inflation, amongst other inefficiencies, but what about triggers from the supply side?

The classic causes of labour market imperfection are many – but some seem to easily apply to cyber. The inelastic supply of suitable candidates is a good starting place.


In-elasticity of the supply of cyber security candidates

In this basic example, the supply of cyber candidates is described as being highly inelastic – for example, a change in salary does not result in a proportional change in the supply of candidates. Why is this? Clearly training has a part to play. Skilled cyber practitioners are likely to require strong computer science, network and infrastructure skills before being able to embark on more specialised training. This can take many years to obtain, effectively acting like a barrier to entry for new and willing candidates.

As with many labour markets, immobility and lack of vacancy information may also hinder skills investment, especially if the candidate is not aware of the potential benefits the long term training can bring. The more common practice of remote working however, is certainly helping to reduce geographical immobility issues which often hamper more traditional industries.

The cyber security industry is still very much in its infancy too, which can contribute to a lack of repeatable candidate development. Only in 2019, did the UK’s Chartered Institute of Information Security receive its royal warrant. Compare that to the likes of the Soap Makers Company (1638), Needlemakers Company (1656), Coachmakers Company (1677), Fanmakers (1709) and the Royal Medical Society (1773) and there is a palpable level of professional immaturity to understand. 

This could be amplified by a lack of consistency surrounding certifications, curriculum and job role descriptions. Only within the last 3 months has the industry seen CyBOK – the Cyber Security Body of Knowledge – published. This may go a little way in attempting to formalise training and certification of candidates globally.

2) Regulation

An interesting by-product of perceived market failure is government intervention. External intervention can take many forms and is often used to simulate competition (e.g., the likes of OfCom, OfWat or OfRail in the UK) where monopolistic or quasi-public sector run industries would not necessarily deliver optimum allocative efficiency if left to their own devices.

Whilst the cyber security sector is not a monopolistic supplier or employer, it has seen numerous pieces of governmental regulation. A few basic examples in Europe would include the General Data Protection Regulation (GDPR) and the Network and Information Systems Directive (NIS). In the United States, at a state level at least, the California Consumer Privacy Act (CCPA) came to fruition with further amendments towards the end of 2019.

I am blurring the line between security and privacy with some of those regulations, but an interesting aspect is the consumer protection angle of the likes of the GDPR and CCPA. If the market were left to its own devices, the consumer of 3rd party online goods and services is not deemed to be protected to a satisfactory level. The regulatory intervention is not to rectify a negative externality affecting 3rd parties, but more to protect the first party user. During the exchange of goods and services to the individual, it seems the requisite level of privacy and security that benefits the group of users as a whole is not utilitarian. A major aim that current cyber legislation is trying to achieve is to overcome the information asymmetries that exist when a user signs up for a service or makes an online purchase or interaction.

With respect to consumer privacy, a concept of information asymmetry known as adverse selection may exist - where the buyer is not fully aware of the use and value of their personal data in relation to the supplier of a good or service, who may use their data in ways not fully understood or even disclosed to the user.

The likes of the NIS directive seem more focused upon reducing the impact of an externality – basically a negative impact on a wider group of users – perhaps due to a data breach, service disruption or degradation that may have occurred due to a lack of cyber security controls. A simple example could be the lack of power generation to an entire town if a nuclear power station is knocked offline due to a malware attack.

3) Product Hyper Augmentation

The cyber security market is broad, complex and ever evolving. The number of product categories grows daily. Gartner has at least 20 security related magic quadrants. CISOs and CIOs have to make incredibly complex decisions regarding product and service procurement.

Certainly, there is another set of information asymmetries at play here, but those are likely to exist in other complex software markets. With respect to cyber, there seems to be an accelerated augmentation of features and messaging. When does a next generation web application firewall become a dedicated API security gateway? When does a security orchestration, automation and response platform become an integrated event driven identity and access management workflow? Is there space for such niche upon niches, or are we entering a phase of largely merged and consolidated markets, where buyer procurement decision making is simply based on non-features such as brand affiliation and procurement ease?


Product direction via vertical and horizontal augmentation

Many mature markets often reach a position where suppliers augment products to a position of mediocrity and consumer apathy and confusion. Youngme Moon from Harvard Business School articulates this concept extremely well in her book Different - which focuses on competitive strategies. It seems the market for cyber security products especially (maybe less so for services and consultancy) is rapidly moving to a position, where core products are being blurred via augmentation, add on services and proprietary market descriptions. This is creating difficulties when it comes to calculating product purchase versus return on investment reporting.

4) Breach Increase

A continuation of the purchase/RoI analysis pattern is to analyse what "success" looks like for numerous cyber investments. Whether those investments are people, process or technology related, most procurement decisions end up being mapped to success criteria. Value for money. Return on Investment. Call it what you will, but many organisations will look to describe what success looks like when it comes to security investments.

Is it a reduction in data breaches? Is it having fewer installed products with known CVE (common vulnerability & exposures) due to faster patch roll out? Is it having more end users signing up with second factor authentication? This can tie neatly into the controls -v- outcomes discussion where risk, security and privacy management for an organisation needs to identify tangible and SMART (specific measurable assignable realistic time-bound) metrics for implied cyber investment. The ultimate goal of cyber is to support the CIA (confidentiality integrity availability) triad, either singularly or collectively.

A major source of cyber investment success discussion is associated with data breach trends. There are numerous pieces of data to support the claim that breaches are increasing – increasing in volume (from 157 in 2005 to 783 in 2014), breadth and complexity. Many of the articles could admittedly be FUD raised by vendors to accelerate product adoption, but there is no denying the popularity of sites like HaveIBeenPwned, where the number of breached credentials is substantial and increasing. If cyber investment were efficient, shouldn't these metrics be reducing?

This starts to suggest two possibilities: either buyers are buying and using the wrong products, or those products are not providing a decent return on investment.

5) Corporate Information Failure

But are products really to blame? The entire thread of this article is to discuss market failure points. Information is a key component of effective free market development. Many information barriers seem to exist within the cyber security sector. Think of the following:
  • RoI on cyber product investment
  • Cost of personal data protection
  • The societal impact of critical infrastructure failures
  • Risk management success criteria
  • Cyber security certification benefit to corporations
There are likely several other angles to take on this, but full information with regard to upholding the confidentiality, availability and integrity of data is unlikely to occur. Many private sector organisations have undergone digital transformation over the last 10 years. These "corporation.next" operations have created new challenges with respect to data protection: data relating to customers, employees, intellectual property, products, suppliers and transactions.

But how do organisations a) know what to protect b) know how to protect it and c) innovate and manage investment strategies with respect to the protection?

There are many strategies used to manage cyber corporate investment. Some are driven by vendor FUD - aka breach threat - right through to modern risk management strategies, driven by mapping information protection to a higher level corporate strategy. 

If the corporate strategy is known and well communicated, it can become easier to overlay information protection decisions that the business owners are willing to accept, monitor and iterate against. Risk transparency can help to provide a deeper understanding of what investments should be made and whether those investments are personnel, training or product related.

Summary

Cyber security is a growing, complex and multi faceted market. Many aspects are emerging, with new vendors, design patterns and attack vectors being created monthly. Other aspects, such as risk management and core protection of critical assets are relatively mature and well understood, in comparison to the computational age.

The investment and usage patterns associated with cyber security technology however, are seemingly plagued with numerous information failures, resulting in complex procurement, skills and personnel misalignment.

A value driven approach is needed, where explicit investment decisions (on the skills provider, procurer and end user sides) are weighed against short and long term returns.

Easily Share Authentication Trees

Originally published on Mr. Anderson’s Musings

A New World

A new world of possibilities was born with the introduction of authentication trees in ForgeRock’s Access Management (AM). Limiting login sequences of the past were replaced with flexible, adaptive, and contextual authentication journeys.

ForgeRock chose the term Intelligent Authentication to capture this new set of capabilities. Besides offering a shiny new browser-based design tool to visually create and maintain authentication trees, Intelligent Authentication also rang in a new era of atomic extensibility.

Authentication Tree

While ForgeRock’s Identity Platform has always been known for its developer-friendliness, authentication trees took it to the next level: Trees consist of a number of nodes, which are connected with each other like in a flow diagram or decision tree. Each node is an atomic entity, taking a single input and providing one or more outputs. Nodes can be implemented in Java, JavaScript, or Groovy.

A public marketplace allows the community to share custom nodes. An extensive network of technology partners provides nodes to integrate with their products and services.

A New Challenge

With the inception of authentication trees, a spike of collaboration between individuals, partners, and customers occurred. At first the sharing happened on a node basis as people would exchange cool custom node jar files with instructions on how to use those nodes. But soon it became apparent that the sharing of atomic pieces of functionality wasn’t quite cutting it. People wanted to share whole journeys, processes, trees.

A New Tool – amtree.sh

A fellow ForgeRock solution architect in the UK, Jon Knight, created the first version of a tool that allowed the easy export and import of trees. I was so excited about the little utility that I forked his repository and extended its functionality to make it even more useful. Shortly thereafter, another fellow solution architect from the Bay Area, Jamie Morgan, added even more capabilities.

The tool is implemented as a shell script, which exports authentication trees from any AM realm to standard output or a file and imports trees into any realm from standard input or a file. The tool automatically includes required decision node scripts for authentication trees (JavaScript and Groovy) and requires curl, jq, and uuidgen to be installed and available on the host where it is to be used. Here are a few ideas and examples for how to use the tool:
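Because the script shells out to those utilities, a quick pre-flight check can save some head-scratching. This loop is just a convenience and not part of the tool itself:

for cmd in curl jq uuidgen; do
  command -v "$cmd" > /dev/null || echo "missing dependency: $cmd"
done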

Backup/Export

I do a lot of POCs or create little point solutions for customer or prospect use cases or build demos to show off technology partner integrations or our support for the latest open standards. No matter what I do, it often involves authentication trees of various complexity and usually those trees take some time designing and testing and thus are worthy of documentation and preservation and sharing. The first step to achieve any of these things is to extract the trees’ configuration into a reusable format, or simply speaking: backing them up or exporting them.

Before performing an export, it can be helpful to just produce a list of all the authentication trees in a realm. That way we get an idea of what’s available and can decide whether we want to export individual trees or all the trees in a realm. The tool provides an option to list all trees in a realm. It lists the trees in their natural order (order of creation). To get an alphabetically ordered list, we can pipe the output into the sort shell command.

List Trees
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -l | sort
email
push
push_reg
push_reg_2fa
risk
select
simple
smart
solid
trusona
webauthn
webauthn_reg
webauthn_reg_2fa 

Now that we have a list of trees, it is time to think about what it is we want to do. The amtree.sh tool offers us 3 options:

  1. Export a single tree into a file or to standard out: -e
  2. Export all trees into individual files: -S
  3. Export all trees into a single file or to standard out: -E

The main reason to choose one of these options over another is whether your trees are independent (have no dependency on other trees) or not. Authentication trees can reference other trees, which then act like subroutines in a program. These subroutines are called inner trees. Independent trees do not contain inner trees. Dependent trees contain inner trees.

Options 1 and 2 are great for independent trees as they put a single tree into a single file. Those trees can then easily be imported again. Option 2 generates the same output as if running option 1 for every tree in the realm.

Dependent trees require the other trees to already be available or to be imported before the dependent tree is imported; otherwise, the AM APIs will complain and the tool will not be able to complete the import.

Option 3 is best suited for highly interdependent trees. It puts all the trees of a realm into the same file and on import of that file, the tool will always have all the required dependencies available.

Option 2: Export All Trees To Individual Files
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -S
 Export all trees to files
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............
Option 3: Export All Trees To Single File
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E -f authn_all.json
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............

After running both of those commands, we should find the expected files in our current directory:

ls -1
 authn_all.json
 email.json
 push.json
 push_reg.json
 push_reg_2fa.json
 risk.json
 select.json
 simple.json
 smart.json
 solid.json
 trusona.json
 webauthn.json
 webauthn_reg.json
 webauthn_reg_2fa.json

The second command (option 3) produced the single authn_all.json file as indicated by the -f parameter. The first command (option 2) generated individual files per tree.

Restore/Import

Import is just as simple as export. The tool brings in required scripts and resolves dependencies to inner trees, which means it orders trees on import to satisfy dependencies.

Exports omit secrets of all kinds (passwords, API keys, etc.), which may be stored in node configuration properties. Therefore, if we exported a tree whose configuration contains secrets, the imported tree will lack those secrets. If we want to more easily reuse trees (like I do in my demo/lab environments), we can edit the exported tree files and manually insert the secrets. Fields containing secrets are exported as null values. Once we manually add those secrets to our exports, they will import as expected.

{
  "origin": "003232731275e50c2770b3de61675fca",
  "innernodes": {},
  "nodes": {
    ...
    "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
      "_id": "B56DB408-E26D-4FBA-BF86-339799ED8C45",
      "hostName": "smtp.gmail.com",
      "password": null,
      "sslOption": "SSL",
      "hostPort": 465,
      "emailAttribute": "mail",
      "smsGatewayImplementationClass": "com.sun.identity.authentication.modules.hotp.DefaultSMSGatewayImpl",
      "fromEmailAddress": "vscheuber@gmail.com",
      "username": "vscheuber@gmail.com",
      "_type": {
        "_id": "OneTimePasswordSmtpSenderNode",
        "name": "OTP Email Sender",
        "collection": true
      }
    },
    ...
  },
  "scripts": {
    ...
  },
  "tree": {
    "_id": "email",
    "nodes": {
      ...
      "B56DB408-E26D-4FBA-BF86-339799ED8C45": {
        "displayName": "Email OTP",
        "nodeType": "OneTimePasswordSmtpSenderNode",
        "connections": {
          "outcome": "08211FF9-9F09-4688-B7F1-5BCEB3984624"
        }
      },
      ...
    },
    "entryNodeId": "DF68B2B8-0F10-4FF3-9F2C-622DA16BA4B7"
  }
}

The JSON code snippet above shows excerpts from the email tree. One of the nodes is responsible for sending a one-time password (OTP) via email to the user, thus needing SMTP gateway configuration. The export does not include the value of the password property in the node configuration. To make this export file reusable, we could replace null with the actual password. Depending on the type of secret, this might be acceptable or not.
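For example, the SMTP password for the OTP email sender node above could be re-inserted with jq (which amtree.sh already requires); the node ID comes from the export, while the secret value is obviously a placeholder:

jq '.nodes."B56DB408-E26D-4FBA-BF86-339799ED8C45".password = "my-smtp-app-password"' \
  email.json > email.json.tmp && mv email.json.tmp email.json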

Importing individual trees requires us to make sure all the dependencies are met. Amtree.sh provides a nice option, -d, to describe a tree export file. That will tell us if a tree has any dependencies we need to meet before we can import that single tree. Let’s take the select tree as an example. The select tree offers the user a choice of which second factor they want to use to log in. Each choice then evaluates another tree, which implements the chosen method.

Running amtree.sh against the exported select.json file gives us a good overview of what the select tree is made of, which node types it uses, which scripts (if any) it references, and what other trees (inner trees) it depends on:

../amtree.sh -d -f select.json
 Tree: select
 ============

 Nodes:
 -----
 - ChoiceCollectorNode
 - InnerTreeEvaluatorNode 

 Scripts:
 -------
 None

 Dependencies:
 ------------
 - email
 - push
 - trusona
 - webauthn 

From the output of the -d option we can derive useful information:

  • Which nodes will we need to have installed in our AM instance? ChoiceCollectorNode and InnerTreeEvaluatorNode.
  • Which scripts will the tree export file install in our AM instance? None.
  • Which trees does this tree depend on? The use of the InnerTreeEvaluatorNode already gave away that there would be dependencies. This list simply breaks them down: email, push, trusona, and webauthn.

Ignoring the dependencies, we can try to import the file into an empty realm and see what amtree.sh tells us:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i select -f select.json 

Importing select…Error importing node InnerTreeEvaluatorNode (D21C798F-D4E9-400A-A038-0E1A883348EB): {"code":400,"reason":"Bad Request","message":"Data validation failed for the attribute, Tree Name"}
{
  "_id": "D21C798F-D4E9-400A-A038-0E1A883348EB",
  "tree": "email",
  "_type": {
    "_id": "InnerTreeEvaluatorNode",
    "name": "Inner Tree Evaluator",
    "collection": true
  }
}

The error message confirms that dependencies are not met. This leaves us with three options:

  1. Instruct amtree.sh to import the four dependencies using the -i option before trying to import select.json (see the sketch after this list). Of course, that bears the risk that any or all of the four inner trees have dependencies of their own.
  2. Instruct amtree.sh to import authn_all.json using the -I option. The tool will bring in all the trees in the right order, but there is no easy way to prevent any of the many trees in the file from being imported.
  3. Instruct amtree.sh to import all the .json files in the current directory using the -s option. The tool will bring in all the trees in the right order. Any trees we don't want to import can be moved into a subfolder, and amtree.sh will ignore them.
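For illustration only, option 1 could look roughly like the sketch below, assuming each dependency was exported to a file named after its tree. It would likely still trip over nested dependencies of the four inner trees:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i email -f email.json
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i push -f push.json
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i trusona -f trusona.json
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i webauthn -f webauthn.json
../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -i select -f select.json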

Let’s see how option 3 works out. To avoid errors, we need to move the authn_all.json file containing all the trees into a subfolder (named ignore in my case). Then we are good to go:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -s
Import all trees in the current directory
Determining installation order..............................................
Importing email.........
Importing push_reg..............
Importing push_reg_2fa...............
Importing simple............
Importing trusona....
Importing webauthn.............
Importing webauthn_reg............
Importing webauthn_reg_2fa.............
Importing push..........
Importing risk........
Importing select.......
Importing smart..........
Importing solid......

No errors reported this time. You can see the tool spent quite a few cycles determining the proper import order (the more dots, the more cycles). We would likely have run into nested dependencies had we tried option 1 and manually imported the four known dependencies.

A word of caution: Imports overwrite trees of the same name without any warning. Be mindful of that fact when importing into a realm with existing trees.

Migrate/Copy

Amtree.sh supports stdin and stdout for input and output. That allows us to pipe the output of an export command (-e or -E) to an import command (-i or -I) without storing anything on disk. That’s a pretty slick way to migrate trees from one realm to another in the same AM instance or across instances. The -s and -S options do not support stdin and stdout, thus they won’t work for this scenario.

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E | ../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /empty -I
 Exporting push ..........
 Exporting simple ............
 Exporting trusona ....
 Exporting risk ........
 Exporting smart ..........
 Exporting webauthn .............
 Exporting select .......
 Exporting solid ......
 Exporting webauthn_reg ............
 Exporting webauthn_reg_2fa .............
 Exporting email .........
 Exporting push_reg ..............
 Exporting push_reg_2fa ...............
 Determining installation order.............................
 Importing email.........
 Importing push..........
 Importing push_reg..............
 Importing push_reg_2fa...............
 Importing risk........
 Importing select.......
 Importing simple............
 Importing smart..........
 Importing solid......
 Importing trusona....
 Importing webauthn.............
 Importing webauthn_reg............
 Importing webauthn_reg_2fa.............

The above command copies all the trees from one realm to another. Nothing is ever written to disk.
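The same pipe should also work across AM instances; here is a sketch in which the second host, https://am2.example.com/openam, and its credentials are purely hypothetical:

../amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -E | ../amtree.sh -h https://am2.example.com/openam -u amadmin -p ******** -r /authn -I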

Prune

Trees consist of different configuration artifacts in the AM configuration store. When managing trees through the AM REST APIs, it is easy to forget to remove unused artifacts. Even when using the AM Admin UI, dead configuration is left behind every time a tree is deleted. The UI doesn’t give an admin any option to remove those dead artifacts, nor even a way to see them. Over time, they will grow to an uncomfortable number and clutter the results of API calls.

Amtree.sh prunes those orphaned configuration artifacts when the -P parameter is supplied. I regularly delete all the default trees in a new realm, which leaves me with 33 orphaned configuration artifacts right out of the gate. To be clear: Those orphaned configuration artifacts don’t cause any harm. It’s a desire for tidiness that makes me want them gone.

./amtree.sh -h https://am.example.com/openam -u amadmin -p ******** -r /authn -P
Analyzing authentication nodes configuration artifacts…

Total:    118
Orphaned: 20

Do you want to prune (permanently delete) all the orphaned node instances? (N/y): y
Pruning....................
Done.

Wrap-Up & Resources

Amtree.sh is a big improvement for the handling and management of authentication trees in the ForgeRock Identity Platform. It is hardly the final solution, though: the implementation as a shell script limits both the supported platforms and the functionality. My fingers are itching to rewrite it in a better-suited language. Now there’s a goal for 2020!

If you want to explore the examples in this post, here are all the resources I used:

2H2019 Identity Management Funding Analysis

Back in July, I wrote an article taking a brief look at venture capital funding patterns within the identity and access management space for the first half of 2019. I am going to revisit that topic here, but for the second half of the year.

Key Facts July to December 2017 / 2018 / 2019

Funding for the second half of 2019 was just over three times (309% of) the amount raised in the same period of 2018. Taking a three-year view, it seems that perhaps 2018, and not 2019, was the unusual year.


The number of organisations receiving funding has fallen every year since 2017. The drop between 2018 and 2019 was about 15%; between 2017 and 2018, it was about 34%. As with the first-half numbers, you could infer that the identity industry in general is maturing and stabilising, with the number of organisations needing funding starting to slow. Approximately 30% of the funding in the second half of 2019 was classified as seed, which may support that claim.

2H2019
  • ~$532 million overall funding
  • Seed funding accounted for 30.3%
  • Median announcement date Sep 26th
  • 33 companies funded

2H2018
  • ~$172 million overall funding
  • Seed funding accounted for 23.1%
  • Median announcement date Aug 29th
  • 39 companies funded

2H2017
  • ~$523 million overall funding
  • Seed funding accounted for 32.8%
  • Median announcement date Oct 3rd
  • 61 companies funded

2H2019 Company Analysis

A coarse-grained analysis of the 2H2019 numbers shows a pretty balanced geographic spread, between EMEA and North America at least. Whilst most funding originates within the US, the companies receiving it seem quite evenly spread. For the first half of 2019, by contrast, a much larger share went to organisations based in North America.



2H2019 Top 10 Companies By Funding Amounts

The following is a simple top-down list of the companies that received the highest funding, and the stage at which that funding was received:


1Password ($200m, Series A) - https://pulse2.com/1password-200-million-funding/

AU10TIX ($60m, PE) - https://www.biometricupdate.com/201907/au10tix-receives-60m-investment-to-pay-off-debt-and-fund-growth-initiatives

Trulioo ($52m, Series C) - https://www.geekwire.com/2019/vancouver-startup-truiloo-raises-52m-identity-verification-tech/

ForgeRock Identity Day Paris (2019)

On Thursday, November 21st, ForgeRock Identity Day was held in Paris: a half-day of information about our company and our products, aimed at our customers, prospects, and partners.

Hosted by Christophe Badot, VP for the France, Benelux, and Southern Europe region, the event opened with a presentation by Alexander Laurie, VP Global Solution Architecture, on market trends and the ForgeRock vision, delivered in French with a fine English accent.

We also heard testimonials from our customers CNP Assurances, GRDF, and the Renault-Nissan-Mitsubishi Alliance. Thank you to them for sharing their needs and the solution ForgeRock provided.

Léonard Moustacchis and Stéphane Orluc, Solutions Architects at ForgeRock, gave a live demonstration of the power of the ForgeRock Identity Platform through a web and mobile banking application. And I had the honour of closing the day with a presentation of the product roadmap, in particular ForgeRock Identity Cloud, our SaaS offering available since the end of October.

The afternoon ended with a cocktail reception that gave us the chance to talk in more detail with the attendees. All the photos from the event are visible in the album on my Flickr account.


And now the shorter version:

On Thursday, November 21st, we hosted ForgeRock Identity Day in Paris, a half-day event for our customers, prospective customers, and partners. We presented our vision of the identity landscape, our products, and the roadmap. Three of our French customers, CNP Assurances, GRDF, and the Renault-Nissan-Mitsubishi Alliance, presented how ForgeRock has helped them with their digital transformation and identity needs. My colleagues from the Solutions Architect team ran a live demo of our web and mobile sample banking applications to illustrate the power of the ForgeRock Identity Platform. And I closed the day with a presentation of the product roadmap, especially ForgeRock Identity Cloud, our solution as a service. As usual, all my photos are visible in this public Flickr album.

This blog post was first published @ ludopoitou.com, included here with permission.