How Information Security Can Drive Innovation

Information Security and Innovation often sit at two different ends of an executive team’s business strategy. The non-CIO ‘C’-level folks want to discuss revenue generation, efficiency and growth – three areas often immeasurably enhanced by a strong and clear innovation management framework. The CIO’s objectives are often focused on technical delivery, compliance, upholding SLAs and, more recently, privacy enablement and data breach prevention. So how can the two worlds combine to create a perfect storm for trusted and secure economic growth?

Innovation Management 

But firstly, how do organisations actually become innovative? Innovation is a buzzword thrown around at will, but many organisations fail to build out the necessary teams and processes to allow it to succeed. Innovation basically focuses on the ability to create both incremental and radically different products, processes and services, with the aim of developing net-new revenue streams. But can this process be managed?

Or are companies and individuals just “born” to be creative? Simply put, no. Creativity can be managed, fostered and encouraged. Some basic creative thinking concepts include “design thinking”, where the focus is on empathising with customer needs, prototyping, iterating and testing again. This is then combined with different thinking types – open (a problem felt directly), closed (via a 3rd party), internal (a value-add contribution) and external (creativity as part of a job role). The “idea factory” can then be categorised into something like HR-led ideas (those from existing staff that lead to incremental changes), R&D-led ideas (the generation of radical concepts that lead to entirely new products) and finally Marketing-led ideas (those that capture customer feedback).

Business Management 

Once the idea-machine has been designed, it needs feeding with business strategy. That “food” helps to define what the idea-machine should focus upon and prioritise, and can be articulated in the form of what the business wants to achieve. If it is revenue maximisation, does this take the form of product standardisation, volume or distribution changes? This business analysis needs to focus on identifying unmet customer needs, tied neatly into industry or global trends (a nice review of the latter is “The Fourth Industrial Revolution” by Klaus Schwab).

Information Security Management

There is a great quote by Amit & Zott that goes along the lines of: as an organisation, you’re always one innovation away from being wiped out. Very true. But the analogy also works as being “one data breach” away from being wiped out – through irreparable brand damage, or perhaps the theft of intellectual property. So how can we move from the focus on business change and forward thinking to information security, which has typically been retrospective, restrictive and seen as an IT cost centre?

Well, believe it or not, there are similarities – and, when designed in the right way, the overlay of application, data and identity-led security can drive faster, more efficient and more trustworthy services. One of the common misconceptions regarding security management and implementation is that it is applied retrospectively: an application or infrastructure is created, then audits, penetration tests or code analysis take place. Security vulnerabilities are identified, triaged and fixed in future releases.

Move Security to the Left 

It is much more cost effective and secure to apply security processes at the very beginning of any project, be it the creation of net-new applications or a new infrastructure design – the classic “security by design” approach. For example, developers should have a basic understanding of security concepts: cryptography 101, when to hash versus encrypt, which algorithms to use, how to protect against unnecessary information disclosure, identity protection and so on. Exit criteria within epic and story creation should state that, as a minimum, the security posture must not be weakened. Functional tests should include static and dynamic code analysis. All of these incremental changes move “security to the left” of the development pipeline, closer to the project start than the end.
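
To make the “when to hash versus encrypt” point concrete, here is a minimal Node.js sketch (the values and variable names are illustrative, not from any particular project): hashing is one-way and suits integrity checks, while encryption is reversible and suits data that must be read back.

const crypto = require('crypto');

// Hashing: one-way and fixed-length. Good for integrity checks; note that
// passwords should use a slow KDF (scrypt/bcrypt) rather than a bare hash.
const digest = crypto.createHash('sha256').update('my-secret-value').digest('hex');

// Encryption: reversible. Good for data you must read back later.
const key = crypto.randomBytes(32);   // AES-256 key
const iv = crypto.randomBytes(12);    // GCM nonce
const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('my-secret-value', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag();

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');

console.log(digest);     // irreversible – there is no "decrypt" for a hash
console.log(plaintext);  // 'my-secret-value' – round-trips with the key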

Agile -v- Stage-Gate Analysis 

Within the innovation management framework, stage-gate analysis is often used to triage creative ideas, helping to identify what to move forward with and what to abandon. A stage is a piece of work, followed by a gate. A gate basically has exit criteria, with outcomes such as “kill”, “stop”, “back”, “go forward” and so on. Each idea flows through this process in order to kill early and reduce cost – as an idea progresses through the stage-gate process, the cost of implementation clearly increases. This approach is very similar to the agile methodology for building complex software: lots of unknowns, baby steps, iteration, feedback and behaviour alteration. So there is a definite overlap in mindset between creating the ideas that feed into service and application creation and how those applications are made real.

Security Innovation and IP Protection

A key theme of information security over the last five years has been the speed of change. Whether we are discussing malware, ransomware, nation-state attacks or zero-day notifications, there is constant change – attack vectors do not stay still. The infosec industry is growing annually as both the private sector and nation states ramp up defence mechanisms using skilled personnel, machine learning and dedicated practices. Those “external” changes require organisations to respond in innovative and agile ways when it comes to security.

Security is no longer a compliance exercise. The ability to deliver a secure and trusted application or service is a competitive differentiator that can build long-lasting, sticky customer relationships. A more direct relationship between innovation and information security is the simple protection of intellectual property: the new practices, ideas, patents and other value created through those innovation frameworks. That IP needs protecting from external malicious attacks, disgruntled insiders and so on.

Summary 

Overall, organisations are going through the digital transformation exercise at rapid speed and scale. That transformation process requires smart innovation, neatly tied into the business strategy. Security management, however, is no longer a retrospective, compliance-driven exercise. The process, personnel and speed of change the infosec industry sees can provide a great breeding ground for altering the application development process, reducing internal boundaries and delivering the secure, trusted, privacy-preserving services that allow organisations to grow.

2020: Machine Learning, Post Quantum Crypto & Zero Trust

Welcome to a digital identity project in 2020! You’ll be expected to have a plan for post-quantum cryptography. Your network will be littered with “zero trust” buzzwords that will make you suspect everyone, everything and every transaction. Add to that, “machines” will be learning everything, from how you like your coffee through to every network, authentication and authorisation decision. OK, are you ready?

Machine Learning

I’m not going to do an entire blog on machine learning (ML) and artificial intelligence (AI). Firstly, I’m not qualified enough on the topic, and secondly, I want to focus on the security implications. Needless to say, within three years most organisations will have relatively experienced teams handling big data capture from an identity, access management and network perspective.

That data will be fed into ML platforms, either on-premises or via cloud services. Leveraging either supervised or unsupervised learning, data from events such as login (authentication) for end users and devices, as well as authorization decisions, can be analysed in order to not only increase assurance and security, but also to improve the user experience. How? Well, the output from ML can be used to update existing signatures (a bit legacy, but still) whilst simultaneously identifying the less risky logins, so that end-user journeys can be made less intrusive.

Step one is finding the correct data sources to feed into the ML “model”. What data is available, especially within the sign-up, sign-in and authorization flows? Clearly, general auditing data will capture ML “tasks” such as successful sign-ins and any other metadata associated with them – time, location, IP, device data, behavioural biometrics and so on. Having vast amounts of this data available is the starting point, which in turn can be used to “feed” the ML engine. Other data points would be needed too. What resources, applications and API calls are being made to complete certain business processes? Can patterns be identified and tied to “typical” behaviour across user and device communities? Being able to identify and track critical data and the services that process that data would be a first step, before extracting task-based data samples to help separate trusted from untrusted activities.
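
As a purely illustrative sketch (the field names and signals here are assumptions, not from any particular product), an authentication event might be flattened into a simple feature vector before being fed to a model:

// Hypothetical shape of a login event captured for ML analysis.
const authEvent = {
  username: 'smoff',
  timestamp: '2020-01-14T09:12:33Z',
  sourceIp: '198.51.100.23',
  device: { os: 'iOS', fingerprint: 'ab12...' },
  outcome: 'SUCCESS'
};

// Turn the raw event into a numeric feature vector against a known profile.
// Real pipelines would use far richer encodings; this just shows the idea.
function toFeatures(evt, profile) {
  return [
    evt.outcome === 'SUCCESS' ? 1 : 0,
    profile.knownIps.includes(evt.sourceIp) ? 0 : 1,              // new IP = riskier
    profile.knownDevices.includes(evt.device.fingerprint) ? 0 : 1, // new device = riskier
    new Date(evt.timestamp).getUTCHours()                          // time-of-day signal
  ];
}

const profile = { knownIps: ['198.51.100.23'], knownDevices: ['ab12...'] };
console.log(toFeatures(authEvent, profile)); // [1, 0, 0, 9]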

 

Post Quantum Crypto

Quantum computing is coming. Which is great. Even in 2020 it might not be ready, but you need to be ready for it. But, and there’s always a but, the main concern is that the super power of quantum will blow away the ability of existing encryption and hashing algorithms to remain secure. Why? Well, quantum computing ushers in the paradigm of “qubits” – a superpositional state in between the classic binary 1 and 0. Ultimately, that means the “solutioneering” of certain complex problems can be completed in a much more efficient and non-sequential way.

The quantum boxes can basically solve certain problems faster – the mathematics behind cryptography being one of those problems. A basic estimate for the future effective strength of something like AES-256 drops to 128 bits. Scary stuff. Commonly used approaches for key exchange today rely on protocols such as Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH). Asymmetric encryption and signing are then handled by the likes of Rivest-Shamir-Adleman (RSA) or the Elliptic Curve Digital Signature Algorithm (ECDSA).
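
The halving comes from Grover’s algorithm, which searches an unstructured keyspace quadratically faster: a brute-force attack on an n-bit symmetric key drops from roughly 2^n classical trials to about 2^(n/2) quantum iterations, so AES-256 falls from 2^256 to 2^128 – hence the 128-bit figure, and the common advice to simply double symmetric key lengths. Shor’s algorithm is the bigger problem for the asymmetric schemes above, solving factoring and discrete logarithms in polynomial time, which is why RSA, DH and the elliptic curve variants are considered broken rather than merely weakened.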

In the post-quantum (PQ) world, they’re basically broken. Clearly, the material impact on your organisation or services will largely depend on an impact assessment – there’s no point putting a $100 lock on a $20 bike. But everyone wants encryption, right? All the data that will be flying around is likely to need even more protection, both in transit and at rest.

Some of the potentially “safe” PQ algorithms include XMSS and SPHINCS for hash-based signatures – the former going through IETF standardization. Ring Learning With Errors (RLWE) is basically an enhanced public key cryptosystem that alters the structure of the private key; it is still under research, but no weaknesses have yet been found. NTRU is another algorithm for the PQ world, using a hefty 12881-bit key. NTRU is also already standardized by the IEEE, which helps with the maturity aspect.

But how to decide? There is a nice body called the PQCRYPTO Consortium that provides guidance on current research. Clearly, you’re not going to build your own alternatives, but information assurance and crypto specialists within your organisation will need to start data impact assessments in order to understand where cryptography is currently used – for transport, identification and data-at-rest protection – and any future potential exposures.

Zero Trust Identities

“Zero Trust” (ZT) networking has been around for a while. The concept of organisations having a “safe” internal network versus the untrusted “hostile” public network, separated by a firewall, is long gone. Organisations are perimeter-less.

Assume every device, identity and transaction is hostile until proven otherwise. ZT for identity especially will look to bind not only a physical identity to a digital representation (session ID, token, JWT), but also that representation to a vehicle – aka a mobile, tablet or other device. In turn, every transaction that tuple is involved in is then verified – checked for contextual or behavioural changes that could indicate malicious intent. That introduces a lot of complexity to transaction, data and application protection.

Every transaction potentially requires introspection or validation. Add to this mix an increased number of devices and data flows, and you pave the way for distributed authorization coupled with continuous session validation.

How will that look? Well, we’re starting to see the use of things like stateless JSON Web Tokens (JWTs) as a means for hyper-scale assertion issuance, along with token binding to sessions and devices. Couple that with fine-grained authentication processes that use 20+ signals of data to identify a user or thing, and we’re starting to see the foundations of ZT identity infrastructures. Microservice or hyper-mesh application infrastructures are going to need rapid introspection and re-validation on every call, so the likes of distributed authorization look likely.
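
As a rough sketch of what per-transaction validation can look like (using the widely available jsonwebtoken npm module; the device-binding claim name here is made up for illustration – real deployments would use something like the RFC 7800 “cnf” confirmation claim):

const jwt = require('jsonwebtoken');
const crypto = require('crypto');

const SECRET = process.env.JWT_SECRET || 'demo-only-secret';

// Issued at login: bind the token to the device via a hash of its fingerprint.
function issueToken(username, deviceFingerprint) {
  const deviceHash = crypto.createHash('sha256').update(deviceFingerprint).digest('hex');
  return jwt.sign({ sub: username, deviceHash }, SECRET, { algorithm: 'HS256', expiresIn: '5m' });
}

// Run on *every* call in a zero-trust model: signature, expiry, device binding.
function verifyTransaction(token, deviceFingerprint) {
  const claims = jwt.verify(token, SECRET, { algorithms: ['HS256'] }); // throws if invalid or expired
  const expected = crypto.createHash('sha256').update(deviceFingerprint).digest('hex');
  if (claims.deviceHash !== expected) throw new Error('token not bound to this device');
  return claims;
}

const t = issueToken('alice', 'device-abc');
console.log(verifyTransaction(t, 'device-abc').sub); // 'alice'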

So the future is now. As always. Secure identity and access management has never been more needed, popular or advanced than in the last 20 years. The next 3-5 years will be critical in defining a backbone of security services that can nimbly be applied to users, devices, data and the billions of transactions that will result.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

Creating Personal Access Tokens in ForgeRock AM

Personal Access Tokens (PATs) are scoped, self-managed access credentials that can be used to give trusted systems and services access to act on a user’s behalf.

Similar to OAuth tokens, they often don’t have an expiration and are used conceptually in place of passwords. A PAT could be used in combination with a username when performing basic authentication.

For example, see the https://github.com/settings/tokens page within GitHub, which allows scoped tokens to be created for services that need to access your GitHub profile.

PAT Creation

The PAT can be an opaque string – perhaps a SHA-256 hash. Using a hash seems the most sensible approach, to avoid collisions and create a fixed-length, portable string. A hash without a key of course won’t provide any creator assurance/verification function, but since the hash will be stored against the user profile and not treated like a session/token, this shouldn’t be an issue.

An example PAT value could be:

f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c:{"resource1" : [ "read", "write", "execute" ] }

a011286605ff6a5de51f4d46eb511a9e8715498fca87965576c73b8fd27246fe:{"resource2" : [ "read", "write"]}

The key was simply created by running the resource and the associated permissions through sha256sum on Linux. How you create the hash is beyond the scope of this blog, but it could easily be handled by, say, ForgeRock IDM and a custom endpoint in a few lines of JavaScript.
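
As a purely illustrative sketch of that generation step in Node.js-style JavaScript (the function and its shape are assumptions, not part of the IDM API):

const crypto = require('crypto');

// Hypothetical PAT generator: hash the resource plus its permissions to get
// an opaque, fixed-length token, then pair it with the permissions JSON.
function createPat(resource, permissions) {
  const grant = { [resource]: permissions };
  const hash = crypto.createHash('sha256').update(JSON.stringify(grant)).digest('hex');
  return hash + ':' + JSON.stringify(grant);
}

console.log(createPat('resource1', ['read', 'write', 'execute']));
// <64 hex chars>:{"resource1":["read","write","execute"]}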

PAT Storage

The important aspect is where to store the PAT once it has been created. Ideally this needs to be stored against the user’s profile record in DJ. I’d recommend creating a new multi-valued schema attribute dedicated to PATs. The user can then update their PATs over REST the same as any other profile attribute.

For this demo I used the existing attribute called “iplanet-am-user-alias-list” for speed, as this was already multi-valued, and added a self-created PAT for my fake resource.

Using a multi-valued attribute allows me to create any number of PATs. As they don’t have an expiration, they might last for some time in the user store.

PAT Usage

Once stored, PATs could be used in a variety of ways to provide “access” within applications. The simplest way is to leverage the AM authorization engine as a decision point, verifying that a PAT exists and what permissions it maps to.

Once the PAT has been created and stored, the end user can hand it to another user or service that they want to act on their behalf. That service or user presents the username:PAT combination to the service it wishes to gain access to, and that service calls the AM authorization APIs to check whether the username:PAT combination is valid.

The protected service would call {{OpenAM}}/openam/json/policies?_action=evaluate with a payload similar to:
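
A representative payload, using the resource format and policy set from this post (exact fields vary by AM version; the request is sent with the policyeval account’s SSO token in the iPlanetDirectoryPro header), would look something like:

{
  "resources": [
    "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c"
  ],
  "application": "PATValidator"
}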

Here I am calling the ../policies endpoint with a dedicated account called “policyeval”, which has the ability to read the REST endpoint and also to read realm users, which we will need later on. Grant the necessary privileges via the Privileges tab within the admin console.

If the PAT exists within the user profile of “smoff”, AM returns an access=true message, along with the resource and associated permissions, which can be used by the calling application:
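
The response is shaped along these lines (the “access” action name and the response-attribute layout depend on how the policy and script are configured):

[
  {
    "resource": "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c",
    "actions": { "access": true },
    "attributes": { "resource1": [ "read", "write", "execute" ] },
    "advices": {}
  }
]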

So what needs setting up in the background to allow AM to make these decisions? Well, it’s all pretty simple really.

Create Authorization Resource Type for PAT’s

Firstly, create a resource type that matches the pat://*:* format (or any format you prefer):

Next we need to add a policy set that will contain our access policies:

The PATValidator policy set contains only one policy, called AllPATs, which is just a wildcard match for pat://*:*. This will allow any user:PAT combination to be submitted for validation:

Make sure to set the subject condition to “NOT Never Match”, as we are not analysing user session data here. The logic for the analysis is handled by a simple script.

PAT Authorization Script

The script is available here.

At a high level, it does the following (a rough standalone sketch of the same logic follows the list):

  1. Captures the submitted username and PAT that form part of the authorization request
  2. As the user will not have a local session, makes a native REST call to look up the user
  3. Does this by first generating a session for our policyeval user
  4. Uses that session to call the ../json/users endpoint and search for the user’s PATs
  5. Compares the submitted PAT against any PATs found on the user profile
  6. If a match is found, pulls out the assigned permissions and sends them back as a response attribute array to the calling application
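
For illustration only, here is a rough standalone Node.js sketch of that flow against AM-style REST endpoints (the hostname, realm paths, API version headers and the reuse of “iplanet-am-user-alias-list” are assumptions based on this demo – the real script linked above runs inside AM’s scripting engine rather than standalone):

const AM = 'https://openam.example.com/openam';

// Step 3: create a session for the policyeval service account.
async function amLogin(user, pass) {
  const res = await fetch(`${AM}/json/realms/root/authenticate`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-OpenAM-Username': user,
      'X-OpenAM-Password': pass
    }
  });
  return (await res.json()).tokenId;
}

async function validatePat(username, submittedPat) {
  const session = await amLogin('policyeval', process.env.POLICYEVAL_PASSWORD);

  // Step 4: read the user's profile, including the multi-valued PAT attribute.
  const res = await fetch(
    `${AM}/json/realms/root/users/${encodeURIComponent(username)}?_fields=iplanet-am-user-alias-list`,
    { headers: { iPlanetDirectoryPro: session } }
  );
  const profile = await res.json();
  const pats = profile['iplanet-am-user-alias-list'] || [];

  // Steps 5-6: compare the hash portion; on a match, return the permissions part.
  for (const stored of pats) {
    const hash = stored.slice(0, 64);   // 64 hex chars of SHA-256
    const perms = stored.slice(65);     // everything after the ':' separator
    if (hash === submittedPat) return JSON.parse(perms);
  }
  return null; // no match - not authorized
}

validatePat('smoff', 'f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c')
  .then(perms => console.log(perms));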

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.