ForgeRock welcomes Shankar Raman

Welcome to Shankar Raman, who joins the ForgeRock documentation team today.

Shankar is starting with platform and deployment documentation, where many of you have been asking us to do more.

Shankar comes to the team from curriculum development, having worked for years as an instructor, writer, and course developer at Oracle on everything from the database to middleware to Fusion Applications. Shankar’s understanding of the deployment problem space, and of what architects and deployers must know to build solutions, will help him make your lives, or at least your jobs, a bit easier. Looking forward to that!

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

How Information Security Can Drive Innovation

Information Security and Innovation: often at two different ends of an executive team’s business strategy. The non-CIO ‘C’-level folks want to discuss revenue generation, efficiency and growth – three areas often immeasurably enhanced by a strong and clear innovation management framework. The CIO’s objectives are often focused on technical delivery, compliance, upholding SLAs and, more recently, privacy enablement and data breach prevention. So how can the two worlds combine to create a perfect storm for trusted and secure economic growth?

Innovation Management 

But first, how do organisations actually become innovative? Innovation is a buzzword that is thrown around at will, yet many organisations fail to build out the teams and processes needed for it to succeed. Innovation focuses on the ability to create both incremental and radically different products, processes and services, with the aim of developing net-new revenue streams. But can this process be managed?

Or are companies and individuals just “born” creative? Simply, no. Creativity can be managed, fostered and encouraged. Basic creative-thinking concepts include “design thinking”, where the focus is on emphasising customer needs, prototyping, iterating and testing again. This is then combined with different thinking types: open (a problem felt directly), closed (via a third party), internal (a value-add contribution) and external (creativity as part of a job role). The “idea factory” can then be categorised into HR-led ideas (those from existing staff that lead to incremental changes), R&D-led ideas (the generation of radical concepts that lead to entirely new products) and, finally, Marketing-led ideas (those that capture customer feedback).

Business Management 

Once the idea-machine has been designed, it needs feeding with business strategy. That “food” helps to define what the idea-machine should focus upon and prioritise, and can be articulated in the form of what the business wants to achieve. If it is revenue maximisation, does this take the form of product standardisation, volume or distribution changes? This business analysis needs to identify unmet customer needs, tied neatly into industry or global trends (a nice review of the latter is “The Fourth Industrial Revolution” by Klaus Schwab).

Information Security Management 


There is a great quote by Amit & Zott that goes along the lines of: as an organisation, you’re always one innovation away from being wiped out. Very true. But the same can be said of being “one data breach” from being wiped out – through irreparable brand damage, or perhaps the theft of intellectual property. So how can we pivot from the focus on business change and forward thinking to information security, which has typically been retrospective, restrictive and seen as an IT cost centre?

Well, there are similarities, believe it or not, and when designed in the right way, the overlay of application, data and identity-led security can drive faster, more efficient and more trustworthy services. One of the common misconceptions regarding security management and implementation is that it is applied retrospectively: an application or infrastructure is created, then audits, penetration tests or code analysis take place; security vulnerabilities are identified, triaged and fixed in future releases.

Move Security to the Left 

It is much more cost effective, and more secure, to apply security processes at the very beginning of any project, be it the creation of a net-new application or a new infrastructure design – the classic “security by design” approach. For example, developers should have a basic understanding of security concepts: cryptography 101, when to hash versus encrypt, which algorithms to use, how to protect against unnecessary information disclosure, identity protection and so on. Exit criteria within epic and story creation should state that, as a minimum, the security posture must not be weakened. Functional tests should include static and dynamic code analysis. All of these incremental changes move “security to the left” of the development pipeline, closer to the project start than the end.
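
To make the hash-versus-encrypt point concrete, here is a minimal sketch in plain Java (illustrative only, not tied to any product): hash when you only ever need to verify a value, such as a password; encrypt when you must recover the original, such as a credential you replay to another service.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class HashVsEncrypt {
    public static void main(String[] args) throws Exception {
        // Hash when you only need to verify (e.g. passwords): one-way, salted.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        md.update(salt);
        byte[] digest = md.digest("secret12".getBytes(StandardCharsets.UTF_8));
        System.out.println("hashed   : " + Base64.getEncoder().encodeToString(digest));

        // Encrypt when you must get the original back (e.g. a token you replay):
        // two-way, so the key itself now needs protecting.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] cipherText = cipher.doFinal("secret12".getBytes(StandardCharsets.UTF_8));
        System.out.println("encrypted: " + Base64.getEncoder().encodeToString(cipherText));
    }
}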

Agile -v- Stage-Gate Analysis

Within the innovation management framework, stage-gate analysis is often used to triage creative ideas, helping to identify what to move forward with and what to abandon. A stage is a piece of work, followed by a gate. A gate has exit criteria, with outcomes such as “kill”, “stop”, “go back” and “go forward”. Each idea flows through this process, the aim being to kill weak ideas early and reduce cost, as the cost of implementation clearly increases the further an idea travels through the stage-gate process. This approach is very similar to the agile methodology for building complex software: lots of unknowns; baby steps, iteration, feedback and behaviour alteration. So there is a definite overlap in mindset between creating the ideas that feed service and application creation, and how those applications are made real.

Security Innovation and IP Protection

A key theme of information security attack vectors over the last five years has been the speed of change. Whether we are discussing malware, ransomware, nation-state attacks or zero-day notifications, there is constant change. Attack vectors do not stay still. The infosec industry is growing annually as both the private sector and nation states ramp up defence mechanisms using skilled personnel, machine learning and dedicated practices. Those “external” changes require organisations to respond in innovative and agile ways when it comes to security.

Security is no longer a compliance exercise. The ability to deliver a secure and trusted application or service is a competitive differentiator that can build long-lasting, sticky customer relationships. A more direct relationship between innovation and information security is the simple protection of the intellectual property – the new practices, ideas, patents and other value created by the innovation framework. That IP needs protecting from external malicious attacks, disgruntled insiders and so on.

Summary 

Overall, organisations are going through the digital transformation exercise at rapid speed and scale. That transformation process requires smart innovation, neatly tied into the business strategy. Security management, however, is no longer a retrospective, compliance-driven exercise. The processes, personnel and speed of change the infosec industry sees can provide a great breeding ground for altering the application development process, reducing internal boundaries and delivering the secure, trusted, privacy-preserving services that allow organisations to grow.

The Simple Way to Create an AM Authentication Node Project

ForgeRock’s Identity Platform Access Management introduced Authentication Trees for preview in version 5.5. Version 6.0 will see Authentication Trees and Nodes become an integral part of the product. This blog post will help you quickly and easily create an Authentication Tree Node project so that you can develop your own authentication node.

About Authentication Trees and Nodes

Authentication trees provide fine-grained authentication by allowing multiple paths and decision points throughout the authentication flow.

Authentication trees are made up of authentication nodes, which define actions taken during authentication, similar to authentication modules within chains. Authentication nodes are more granular than modules, with each node performing a single task such as collecting a username or making a simple decision. Unlike authentication modules, authentication nodes can have multiple outcomes rather than just success or failure.

You can create complex yet customer-friendly authentication experiences by linking nodes together, creating loops, and nesting nodes within a tree.

You can read more about Authentication Trees and Nodes in the ForgeRock documentation. Note that the linked documentation is for v5.5; there may be newer versions available.

Creating an Authentication Node

Because Authentication Nodes are fine-grained, you can end up writing lots of them to build a flexible custom authentication suite. The creation of the Maven project for each node can become an overhead, but fear not! There is a Maven archetype to help you set up a skeleton independent auth node project.

Using the Maven Archetype

The Maven archetype lives in the ForgeRock Maven repository. In order to use it, you will need to set up your Maven environment to authenticate to that repository. To do that, you will need a ForgeRock Backstage account associated with either a customer subscription or partner status.
To set up Maven, download a preconfigured settings.xml file as explained in this Backstage Knowledge Base article.
Note: if you have previously downloaded your settings.xml file, it could still be worth downloading it again, as the `profile` section of the settings.xml file required to access the archetype did not exist before mid-December 2017.

I’m set up. Let’s do this!

OK! Create your project:

mvn archetype:generate \
-DgroupId=<my-group-id> \
-DartifactId=<my-artefact-id> \
-Dversion=<my-version> \
-Dpackage=<my-package-id> \
-DauthNodeName=<my-auth-node-class-name> \
-DarchetypeGroupId=org.forgerock.am \
-DarchetypeArtifactId=auth-tree-node-archetype \
-DarchetypeVersion=5.5.0 \
-DinteractiveMode=false

Where you need to substitute values for groupId, artifactId, version, package and authNodeName to suit your project.
groupId, artifactId and version are all pretty self-evident and are used in the generation of the POMs for your project.
package defines the Java package in which your auth tree node classes will be generated.
authNodeName is used to name the generated classes and in the generation of a README.md file, etc.

What does this create for me?

Assuming we run a command something like this:

mvn archetype:generate \
-DgroupId=com.boho-software \
-DartifactId=super-auth-tree-node \
-Dversion=1.0.0-SNAPSHOT \
-Dpackage=com.bohosoftware.supernode \
-DauthNodeName=SuperNode \
-DarchetypeGroupId=org.forgerock.am \
-DarchetypeArtifactId=auth-tree-node-archetype \
-DarchetypeVersion=5.5.0 \
-DinteractiveMode=false

We will get a project with the following structure:

super-auth-tree-node
  + README.md
  + example.png
  + legal
    + CDDL-1.0.txt
  + pom.xml
  + src
    + main
      + java
      | + com
      |   + bohosoftware
      |     + supernode
      |       + SuperNode.java
      |       + SuperNodePlugin.java
      + resources
        + META-INF
        | + services
        |   + org.forgerock.openam.plugins.AmPlugin
        + com
          + bohosoftware
            + supernode
              + SuperNode.properties

Which I’m sure you’ll agree, saves a lot of project set up time!
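
For orientation, the heart of the generated project is the node class itself. The following is a minimal sketch of roughly what SuperNode.java looks like, based on the public AM 5.5 node SPI; the archetype’s actual output may differ in detail.

package com.bohosoftware.supernode;

import javax.inject.Inject;

import org.forgerock.openam.auth.node.api.Action;
import org.forgerock.openam.auth.node.api.Node;
import org.forgerock.openam.auth.node.api.SingleOutcomeNode;
import org.forgerock.openam.auth.node.api.TreeContext;

import com.google.inject.assistedinject.Assisted;

/** A single-outcome node; real nodes put their decision logic in process(). */
@Node.Metadata(outcomeProvider = SingleOutcomeNode.OutcomeProvider.class,
               configClass = SuperNode.Config.class)
public class SuperNode extends SingleOutcomeNode {

    /** Node settings exposed in the AM console; empty in this skeleton. */
    public interface Config {
    }

    private final Config config;

    @Inject
    public SuperNode(@Assisted Config config) {
        this.config = config;
    }

    @Override
    public Action process(TreeContext context) {
        // Inspect context.sharedState here, then route to the next node.
        return goToNext().build();
    }
}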

Once it’s built…

put it in the Backstage Marketplace! There, you can build a community around your auth tree node, share it with others, find help maintaining it and, if it becomes popular, it could be accepted into the AM project as a fully supported node.

This blog was originally published at http://www.boho-software.com/2017/12/the-simple-way-to-create-am.html

December Auth Node Roundup

The goose is getting fat.  Presents are wrapped.  Tree is up.  AND, some new authentication nodes have been added to the Backstage Marketplace.

Threat Management

The entire bot protection, threat intelligence and DDoS awareness space has grown massively over the last few years. Instead of relying solely on network-level throttling, the auth tree fabric makes it really simple to augment the login journey with third-party systems.

Two pretty simple nodes that were added include the OpenThreatIntelligenceNode and the HaveIBeenPwnedNode. The open threat intelligence node calls out to the https://cymon.io site, sending a SHA256 hash of the inbound client IP address. The response verifies whether the IP has been involved in any botnets or malicious software attacks.

Have I Been Pwned is a simple free site that takes your email address and checks whether it has been involved in any big data breaches. If so, it might be prudent to prompt the user in your system either to use MFA or perhaps to change their password too.

Threat Focused Auth Tree

Another addition in this space was the Google reCaptcha node, to protect against bot attacks.

SLA, Metrics and Timing

Two recent additions come from ForgeRock’s very own Craig McDonnell. Craig has built two interesting nodes for monitoring time and metering. First up is the MeterAuthTreeNode. This node can be dropped into any part of a tree and adds a configurable string to the DropWizard meter registry within AM. So, for example, if you’re tracking which browsers users log in from using the BrowserCheckerNode, you could simply drop in a couple of meter nodes to add incremental counters that are updated every time a specific browser is seen during the login journey. This becomes massively important when building out user-experience analytics projects. The metrics can then be viewed using something like JConsole, and retrieved into nice dashboards over JMX or pushed to a Graphite server.
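
To make the metering mechanism concrete, here is a hedged sketch using the plain DropWizard Metrics 4 library outside of AM; the meter name and the JMX reporter wiring are illustrative assumptions, not the node’s actual internals.

import java.util.concurrent.TimeUnit;

import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.jmx.JmxReporter;

public class MeterSketch {
    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();
        // Expose everything in the registry over JMX, for JConsole and friends.
        JmxReporter.forRegistry(registry).build().start();

        // The configurable string becomes the meter name.
        Meter chromeLogins = registry.meter("authentication.browser.chrome");
        chromeLogins.mark(); // one increment per matching login

        TimeUnit.MINUTES.sleep(5); // keep the JVM alive so you can attach JConsole
    }
}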

Cousins of the meter node are the TimeAuthTreeNodes. Similar to the meter, they can time each part of an auth tree. As trees are likely to include upwards of 20 data signals, plus third-party processing, it may be essential to understand response times and SLA impacts. The timer nodes come as a pair – a start node and a stop node. The time elapsed between the two is sent to the same registry as the meter and can be viewed using JConsole.

Monitoring Response Time of a 3rd Party Call Out During Login

Device Analysis

From a security perspective it’s quite common to pair a trusted user with their login device. But what about verifying whether that device itself has the correct browser or operating system? The OSCollectorNode and its sister node, the BrowserCollectorNode, allow basic analysis of the incoming client request. This can extend right down to version level, allowing for redirects, blocking, or perhaps a more personalised experience. If a service provider knows you are logging in from a mobile device on 4G at 8am, it can likely infer you are commuting and respond with different content. The information collected can simply be added into session properties and delivered to downstream protected applications.

OS and Browser Analysis During Login

A few other noteworthy additions this month include an updated IPAddressDecisionNode, a KBAAuthenticationNode, and a ClientSideScriptingNode that allows JavaScript to be delivered down to the client machine.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

2020: Machine Learning, Post Quantum Crypto & Zero Trust

Welcome to a digital identity project in 2020! You’ll be expected to have a plan for post-quantum cryptography. Your network will be littered with “zero trust” buzzwords that will make you suspect everyone, everything and every transaction. Add to that, “machines” will be learning everything, from how you like your coffee through to every network, authentication and authorisation decision. OK, are you ready?

Machine Learning

I’m not going to do an entire blog on machine learning (ML) and artificial intelligence (AI). Firstly, I’m not qualified enough on the topic, and secondly, I want to focus on the security implications. Needless to say, within three years most organisations will have relatively experienced teams handling big data capture from an identity, access management and network perspective.

That data will be fed into ML platforms, either on-premises or via cloud services. Leveraging either supervised or unsupervised learning, data from events such as logins (authentication) for end users and devices, as well as authorization decisions, can be analysed in order not only to increase assurance and security, but also to improve the user experience. How? Well, if the output from ML can be used to update existing signatures (a bit legacy, but still) while simultaneously identifying the less risky logins, end-user journeys can be made less intrusive.

Step one is finding the correct data sources to feed into the ML “model”. What data is available, especially within the sign-up, sign-in and authorization flows? Clearly, general auditing data will capture ML “tasks” such as successful sign-ins and any other metadata associated with them – time, location, IP, device data, behavioural biometrics and so on. Having vast amounts of this data available is the first step, as it can be used to “feed” the ML engine. Other data points would be needed too. What resources, applications and API calls are being made to complete certain business processes? Can patterns be identified and tied to “typical” behaviour across user and device communities? Being able to identify and track critical data, and the services that process that data, would be a first step, before extracting task-based data samples to help distinguish trusted from untrusted activities.

Post Quantum Crypto

Quantum computing is coming. Which is great. It might not be ready even in 2020, but you need to be ready for it. But, and there’s always a but, the main concern is that the super power of quantum will blow away the ability of existing encryption and hashing algorithms to remain secure. Why? Well, quantum computing ushers in the paradigm of “qubits” – a superpositional state in between the classic binary 1 and 0. Ultimately, that means that the “solutioneering” of complex problems can be completed in a much more efficient and non-sequential way.

The quantum boxes can basically solve certain problems faster – the mathematics behind cryptography being one of those problems. A basic estimate for the future effectiveness of something like AES-256 drops to 128 bits: Grover’s algorithm lets a quantum computer search 2^256 keys in roughly √(2^256) = 2^128 operations. Scary stuff. Commonly used approaches today for key exchange rely on protocols such as Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH). Encryption and signing are then handled by the likes of Rivest-Shamir-Adleman (RSA) or the Elliptic Curve Digital Signature Algorithm (ECDSA).

In the post-quantum (PQ) world they’re basically broken.  Clearly, the material impact on your organisation or services will largely depend on impact assessment.  There’s no point putting a $100 lock on a $20 bike.  But everyone wants encryption right?  All that data that will be flying around is likely to need even more protection whilst in transit and at rest.

Some of the potentially “safe” PQ algorithms include XMSS and SPHINCS for hashing – the former going through IETF standardization. Ring Learning With Errors (RLWE) is basically an enhanced public-key cryptosystem that alters the structure of the private key. It is currently under research, and no weaknesses have yet been found. NTRU is another algorithm for the PQ world, using a hefty 12881-bit key. NTRU is also already standardized by the IEEE, which helps with the maturity aspect.

But how to decide? There is a nice body called the PQCRYPTO Consortium that provides guidance on current research. Clearly you’re not going to build your own alternatives, but information assurance and crypto specialists within your organisation will need to start data impact assessments, in order to understand where cryptography is currently used – for transport, identification and data-at-rest protection – and any future potential exposures.

Zero Trust Identities

“Zero Trust” (ZT) networking has been around for a while. The concept of organisations having a “safe” internal network versus the untrusted, “hostile” public network, separated by a firewall, is long gone. Organisations are perimeter-less.

Assume every device, identity and transaction is hostile until proven otherwise. ZT for identity, especially, will look to bind not only a physical identity to a digital representation (session ID, token, JWT), but also that representation to a vehicle – a mobile, tablet or other device. In turn, every transaction that tuple interacts with is then verified, checking for changes – either contextual or behavioural – that could indicate malicious intent. That introduces a lot of complexity to transaction, data and application protection.

Every transaction potentially requires introspection or validation. Add to this mix an increased number of devices and data flows, and the way is paved for distributed authorization coupled with continuous session validation.

How will that look? Well, we’re starting to see the use of things like stateless JSON Web Tokens (JWTs) as a means of hyper-scale assertion issuance, along with token binding to sessions and devices. Couple that with fine-grained authentication processes that use 20+ data signals to identify a user or thing, and we’re starting to see the foundations of ZT identity infrastructures. Microservice or hyper-mesh related application infrastructures are going to need rapid introspection and re-validation on every call, so the likes of distributed authorization look likely.
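
As a hedged illustration of why stateless JWTs suit this model, here is a sketch using the Nimbus JOSE+JWT library (an assumed library choice; the HMAC setup is illustrative). The point is that any service holding the key can validate the assertion locally, on every call, with no round trip to the issuer.

import java.util.Date;

import com.nimbusds.jose.JWSVerifier;
import com.nimbusds.jose.crypto.MACVerifier;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class StatelessCheck {
    // Returns true only if the token's signature verifies and it has not expired.
    static boolean isTrusted(String token, byte[] sharedSecret) {
        try {
            SignedJWT jwt = SignedJWT.parse(token);
            JWSVerifier verifier = new MACVerifier(sharedSecret); // HMAC; needs a >= 256-bit key
            if (!jwt.verify(verifier)) {
                return false;
            }
            JWTClaimsSet claims = jwt.getJWTClaimsSet();
            Date expiry = claims.getExpirationTime();
            return expiry != null && new Date().before(expiry);
        } catch (Exception e) {
            return false; // malformed tokens are hostile until proven otherwise
        }
    }
}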

So the future is now. As always. We know that secure identity and access management functions have never, in the last 20 years, been more needed, popular or advanced. The next three to five years will be critical in defining a backbone of security services that can nimbly be applied to users, devices, data and the billions of transactions that will result.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

Using IDM and DS to synchronise hashed passwords

Overview

In this post I will describe a technique for synchronising a hashed password from ForgeRock IDM to DS.

Out of the box, IDM has a Managed User object that protects a password with symmetric (reversible) encryption. One reason for this is that it is sometimes necessary to pass the password, in clear text, to a destination directory so that it can perform its own hashing before storing it. Therefore the out-of-the-box synchronisation model for IDM is to take the encrypted password from its own store, decrypt it, and pass it in clear text (typically over a secure channel!) to DS for it to hash and store.

You can see this in the samples for IDM.

However, there are times when storing an encrypted, rather than hashed, value for a password is not acceptable. IDM includes the capability to hash properties (such as passwords), not just encrypt them. In that scenario, given that password hashes are one-way, it’s not possible to decrypt the password before synchronisation with other systems such as DS.

Fortunately, DS offers the capability of accepting pre-hashed passwords, so IDM is able to pass the hash to DS for storage. DS obviously needs to know this value is a hash, otherwise it will try to hash the hash!

So, what are the steps required?

  1. Ensure that DS is configured to accept pre-hashed passwords.
  2. Ensure the IDM data model uses hashing for the password property.
  3. Ensure the IDM mapping is set up correctly.

Ensure DS is configured to accept hashed passwords

This topic is covered excellently by Mark Craig in this article:
https://marginnotes2.wordpress.com/2011/07/21/opendj-using-pre-encoded-passwords/

I’m using ForgeRock DS v5.0 here, but Mark references the old name for DS (OpenDJ), because this capability has been around for a while. The key thing to note in the article is that the allow-pre-encoded-passwords advanced password policy property must be set for the appropriate password policy. I’m only dealing with one password policy – the default one – so Mark’s article covers everything I need.
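
For reference, here is a sketch of the dsconfig command that sets the property, assuming a default DS 5 install (administration port 4444, cn=Directory Manager; adjust to your environment):

dsconfig set-password-policy-prop \
 --policy-name "Default Password Policy" \
 --set allow-pre-encoded-passwords:true \
 --hostname localhost --port 4444 \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --trustAll --no-prompt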

(I will be using a salted SHA-512 algorithm, so if you want to follow all the steps, including testing out the change of a user’s password, then specify {SSHA512} in the userPassword value rather than {SSHA}. This test isn’t necessary for the later steps in this article, but may help you understand what’s going on.)
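
Such a test might look like the following sketch; the entry DN and connection details are assumptions for a sample DS install, and the hash value is the example one used later in this post:

ldapmodify --hostname localhost --port 1389 \
 --bindDN "cn=Directory Manager" --bindPassword password <<EOF
dn: uid=bjensen,ou=People,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: {SSHA512}Quxh/PEBXMa2wfh9Jmm5xkgMwbLdQfytGRy9VFP12Bb5I2w4fcpAkgZIiMPX0tcPg8OSo+UbeJRdnNPMV8Kxc354Nj12j0DXyJpzgqkdiWE=
EOF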

Ensure IDM uses hashing

Like everything in IDM, you can modify configuration by changing the various JSON config files, or via the UI (which updates the JSON config files!).

I’ll use IDM v5.0 here and show the UI.

By default, the Managed User object includes a password property that is defined as ‘Encrypted’:

We need to change this to be Hashed:

I’m using the SHA-512 algorithm here (which is, in fact, a salted SHA-512 algorithm).

Note that making this change does not update all the user passwords that exist.  It will only take effect when a new value is saved to the property.

Now the value of a password, when it is saved, is a string representation of a complex JSON object (just as it is when encrypted), and will look something like this:

{"$crypto":

  {"value":

    {"algorithm":"SHA-512","data":"Quxh/PEBXMa2wfh9Jmm5xkgMwbLdQfytGRy9VFP12Bb5I2w4fcpAkgZIiMPX0tcPg8OSo+UbeJRdnNPMV8Kxc354Nj12j0DXyJpzgqkdiWE="},

    "type":"salted-hash"

  }

}

Ensure IDM Mapping is setup correctly

Now we need to configure the mapping.

As you may have noted in the first step, DS is told that the password is pre-hashed by the presence of {SSHA512} at the beginning of the password hash value. Therefore we need a transformation script that takes the algorithm and hash value from IDM and concatenates them in the form DS expects.

The script is fairly simple, but does need some logic to convert the IDM algorithm representation (SHA-512) into the DS representation ({SSHA512}).

This is the transformation script (in Groovy) I used, which can of course be extended for other algorithms:

String strHash;

// Map IDM's algorithm name onto the DS pre-encoded scheme prefix,
// then prepend it to the base64 salted-hash value produced by IDM.
if (source.$crypto.value.algorithm == "SHA-512") {
  strHash = "{SSHA512}" + source.$crypto.value.data
}

strHash;
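
For the example JSON object shown earlier, the script therefore produces:

{SSHA512}Quxh/PEBXMa2wfh9Jmm5xkgMwbLdQfytGRy9VFP12Bb5I2w4fcpAkgZIiMPX0tcPg8OSo+UbeJRdnNPMV8Kxc354Nj12j0DXyJpzgqkdiWE=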

This script replaces the default IDM script that does the decryption of the password.

(You might want to extend the script to cope with both hashed and encrypted password values if you already have data. Look at functions such as openidm.isHashed and openidm.isEncrypted in the IDM Integrator’s Guide; a sketch of such an extension follows.)
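
This is a sketch only, assuming the openidm script functions behave as the Integrator’s Guide describes; openidm.decrypt covers the legacy encrypted case by reverting to the clear-text model described at the start of this post:

String strHash;

if (openidm.isHashed(source)) {
  // Pre-hashed value: add the DS scheme prefix so DS stores it as-is.
  if (source.$crypto.value.algorithm == "SHA-512") {
    strHash = "{SSHA512}" + source.$crypto.value.data
  }
} else if (openidm.isEncrypted(source)) {
  // Legacy encrypted value: decrypt it and let DS hash it on store.
  strHash = openidm.decrypt(source)
}

strHash;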

Now, when a password is changed, it is stored in hashed form in IDM. The mapping is then triggered to synchronise to DS, applying the transformation script that passes the pre-hashed password value.

Now there is no need to store passwords with reversible encryption!

This blog post was first published @ yaunap.blogspot.no, included here with permission from the author.