The Role of Identity Management in the GDPR

Unless you have been living in a darkened room for a long time, you will know the countdown to the EU's General Data Protection Regulation is coming to a head.  May 2018 is when the regulation takes effect, and organisations are racing to put plans, processes and personnel in place in order to comply.

Whilst many organisations are looking at appointing a Data Protection Officer (DPO), reading through all the legalese and developing data analytics and tagging processes, many also need to understand how their consumer identity and access management platform can and should be used in this new regulatory setting.

My intention in this blog isn't to list every single article and what it means - there are plenty of other sites that can help with that.  I want to highlight some of the more identity-related components of the GDPR and what needs to be done.

Personal Data

On the personal data front, organisations are collecting more data, more frequently, than ever before.  Some of it is explicit - the first name, last name and date of birth you enter when registering for a service, for example - and some of it more subtle, such as location, history and preference details.  The GDPR focuses on making sure personal data is processed lawfully and kept only for as long as necessary, with an end user interface that allows the data subject to keep their data up to date and accurate.

It goes without saying that this personal data needs the necessary confidentiality, integrity and availability controls applied to it.  That means least-privilege administrative access and data-at-rest protection, such as hashing or encryption.
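As a minimal sketch of the data-at-rest side, a personal identifier could be salted and hashed before being persisted, using only the Python standard library.  The example value and iteration count below are purely illustrative, not recommendations.

import hashlib
import os

def hash_identifier(value: str) -> str:
    """Salt and hash a personal identifier before persisting it."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", value.encode("utf-8"), salt, 200_000)
    # Store the salt alongside the digest so the value can be re-verified later.
    return salt.hex() + "$" + digest.hex()

def verify_identifier(value: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", value.encode("utf-8"), bytes.fromhex(salt_hex), 200_000
    )
    return candidate.hex() == digest_hex

record = hash_identifier("jane.doe@example.com")
print(verify_identifier("jane.doe@example.com", record))  # True

Note that hashing is one-way, so it suits values you only ever need to verify; attributes that need to be read back would be encrypted instead.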

Lawful Processing

Ah, the word law! That must be one for the legal team, or the newly appointed DPO - surely not a security, identity or technology issue.  Partially correct.  But lawful processing also carries a significant requirement around the capture and management of consent.  So what is explicit consent? The data subject - that's Joe Bloggs, whose data has been snaffled - needs to be fully aware of what data has been captured, why it was captured and who has access to it.

The service provider also needs to capture consent explicitly - not an implicit "the end user can opt out", but an explicit "the end user must opt in" before their data is used and processed.  This will require a transparent, user-driven consent system, with sharing and, more importantly, timely revocation of access.  Protocols such as User-Managed Access (UMA) may come in useful here.
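To make the opt-in model concrete, a consent record needs, at minimum, who consented, for what purpose, which attributes it covers, and when it was granted or revoked.  A minimal sketch follows; the field names and in-memory storage are assumptions for illustration only.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ConsentRecord:
    subject_id: str          # the data subject giving consent
    purpose: str             # why the data is being processed
    scopes: List[str]        # which attributes the consent covers
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        """Timely revocation: mark the consent as withdrawn."""
        self.revoked_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.revoked_at is None

consent = ConsentRecord("user-123", "insurance-quote", ["name", "dob", "postcode"])
consent.revoke()
print(consent.active)  # False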

Individuals Right to be Informed

Lawful processing flows neatly into the area of keeping the end user informed.  The end user needs to be in a position to make informed decisions around data sharing, service registration, data revocation and more.  The days of 10-page terms and conditions thrust onto the end user's screen at service startup are over.

Non-technical language is now a must, with clear explanations of why data has been captured and which 3rd parties - if any - have access to it.  This again flows into the consent model: for the data subject to make consent decisions, they need simple-to-understand information.  Registration flows will need to become much more progressive - only collecting data when it is needed, with a clear explanation of why it is needed and what processing will be done with it.  The 20-attribute registration form is dead.

Individuals Right to Rectification, Export and Erasure

There are certainly some new requirements here.  If you are a service provider, can your end users clearly see what data you have captured about them, via a simple end user dashboard where they can make changes and keep it up to date?  What about the ability for the data subject to export that data in a machine-readable, standard format such as CSV or JSON?
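A minimal sketch of such an export, assuming the profile is already available as a dictionary; the attribute names are hypothetical.

import csv
import io
import json

profile = {"id": "user-123", "first_name": "Jane", "last_name": "Doe", "dob": "1980-01-01"}

# JSON export - machine readable and easy to re-import elsewhere.
json_export = json.dumps(profile, indent=2)

# CSV export - one header row plus one data row.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=profile.keys())
writer.writeheader()
writer.writerow(profile)
csv_export = buffer.getvalue()

print(json_export)
print(csv_export)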

Right to erasure is also interesting - do you know where your end user data resides?  Which systems, which attributes, and what correlations or translations have taken place?  Could you issue a de-provisioning request to delete, clean or anonymise that data? If not, you may need to investigate why, and what can be done to remediate it.


Conclusion

The GDPR is big.  It contains over 90 articles, full of legalese and fine print.  Don't just assume the legal team or the newly appointed DPO will cover your company's ass.  Full-platform data analytics and tagging will be needed, along with a modern consumer identity and access management design pattern.  End user dashboards, registration journeys and consent frameworks will all need updating.

The interesting aspect is that privacy is becoming a competitive differentiator.  The GDPR should not be seen as just an internal compliance exercise.  It could actually be a launch pad for building closer, more trusted relationships with your end user community.

Why Tim Berners-Lee Is Right On Privacy

Last week, the "father" of the Internet, Tim Berners-Lee, gave a series of interviews to mark the 28th anniversary of the submission of his original proposal for the World Wide Web.

The interviews focused on the phenomenal success of the web, along with a stark warning describing three key areas we need to change in order to "save" the Internet as we know it.

The three points were:

  1. We’ve lost control of our personal data
  2. It’s too easy for misinformation to spread on the web
  3. Political advertising online needs transparency and understanding

I want to primarily discuss the first point – personal data, privacy and our lack of control.

As nearly every private, non-profit and public sector organisation on the planet either has a digital presence or is in the process of transforming itself into a digital force, the transfer of personal data to service providers is growing at an unprecedented rate.

Every time we register for a service - be it for an insurance quote, to submit a tax return, to download an app on our smartphones, to register at the local leisure centre, to join a new dentist or to buy a fitness wearable - we are sharing an ever-growing list of personal information or providing access to our own personal data.

The terms and conditions associated with such registration flows are often so full of "legalese", or the app permissions or "scopes" so large and complex, that the end user has no real control or choice over the type, quality and duration of the information they share.  It is generally an "all or nothing" data exchange: provide the details the service provider is asking for, or don't sign up to the service. There are no alternatives.

This throws up several important questions surrounding data privacy, ownership and control.

  1. What is the data being used for?
  2. Who has access to the data, including 3rd parties?
  3. Can I revoke access to the data?
  4. How long will the service provider have access to the data?
  5. Can the end user amend the data?
  6. Can the end user remove the data from the service provider – aka right to erasure?

Many service providers are likely unable to provide an identity framework that can answer those sorts of questions.

The interesting news is that there are alternatives, and things are likely to change pretty soon.  The EU General Data Protection Regulation (GDPR) provides a regulatory framework for how organisations should collect and manage personal data.  The wide-ranging regulation covers things like how consent from the end user is captured and managed, how breach notifications are handled, and how the reasons for data capture are explained to the end user.

The GDPR isn't a choice either - it's mandatory for any organisation (regardless of location) that handles the data of European Union citizens.

Coupled with that, new technology standards such as User-Managed Access (UMA), developed by a working group run by the Kantara Initiative, look to give end users more control over, and consent around, data exchanges.  This will open doors for organisations that want to deliver personalised services, but do so in a more privacy-preserving and user-friendly way.

So, whilst the Internet certainly has some major flaws - and data protection and user privacy are big ones currently - there are some green shoots of recovery from an end user perspective.  It will be interesting to see what the Internet looks like another 28 years from now.

This blog post was first published @ www.infosecprofessional.com, included here with permission.


Top 5 Digital Identity Predictions for 2017

2016 is drawing to an end, the goose is getting fat, the lights and decorations are adorning many a fireplace, and other such cold weather clichés.  However, attention must turn back to identity management and what the future may or may not hold.

Digital identity, or consumer-based identity and access management (CIAM), has taken a few big steps forward in the last two years.  Numerous industry analysts, such as Gartner, Forrester and KuppingerCole, have carved out CIAM as a new sub-topic of IAM that requires its own market and vendor analysis.  I think this is a valuable process, as CIAM projects tend to have very different requirements and implementation steps to traditional internal or employee-based IAM.

From a predictions perspective, I see the following top 5 topics becoming key components of any digital identity platform for the next 12-18 months.


1 - Device Pairing Becomes a Base Requirement for IoT


Everyone knows about IoT.  It's going to save the planet, increase personalisation, create loads of data and bring most CISOs and network security managers to their knees.  Beyond that, "smart devices" - devices that can talk at least HTTP (hopefully HTTPS) - become much more powerful and useful when paired to a physical personal identity: the classic "pin and pair" style use case.  Take, for example, a smart TV or a healthcare wearable.  By tying the device to an individual, the device can not only access cloud services and APIs on the owner's behalf, but can in turn receive information to make the user experience more personalised.

A simple way to achieve this is via a draft IETF standard that leverages the popular authorization protocol OAuth2.  This allows the device to receive a scoped OAuth2 access token that can be used to represent the real person to other services.  More importantly, the token can be revoked just like any other OAuth2 access token when the device is sold or lost.
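The draft in question is most likely the OAuth2 device authorization grant (draft-ietf-oauth-device-flow, later RFC 8628).  Under that assumption, and with hypothetical endpoint URLs and client id, the pairing dance looks roughly like this sketch:

import time
import requests

AS_BASE = "https://as.example.com/oauth2"   # hypothetical authorization server
CLIENT_ID = "smart-tv-123"                  # hypothetical client id

# 1. The device asks the authorization server for a device code and user code.
resp = requests.post(f"{AS_BASE}/device_authorization",
                     data={"client_id": CLIENT_ID, "scope": "profile media"}).json()
print("Visit", resp["verification_uri"], "and enter code", resp["user_code"])

# 2. The owner approves the pairing on another device (phone, laptop).
# 3. The device polls the token endpoint until the approval completes.
while True:
    token = requests.post(f"{AS_BASE}/access_token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": resp["device_code"],
        "client_id": CLIENT_ID,
    }).json()
    if "access_token" in token:
        break
    time.sleep(resp.get("interval", 5))

# The scoped access token now represents the owner and can be revoked centrally.
print(token["access_token"])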


2 - OAuth2 Token Protection Becomes Mainstream


So what does this mean?  OAuth2 and OpenID Connect are now the de facto methods for application owners to integrate 3rd party authorization, identity assertions and other authX-style use cases.  OAuth2 issues an access_token and refresh_token pair that are used to gain access to profile data or APIs, for example.  OpenID Connect extends this concept slightly by also issuing an id_token, which can basically act like a SAML2 identity assertion.

However... the access tokens are bearer tokens.  What does that mean? Well, if you are in possession of the token, you basically have access - assuming the token is valid, of course.  This opens up the possibility that tokens can be stolen (think insecure communications channels or man-in-the-middle attacks) and then reused maliciously.  Resource servers, by design, only really check that the access token is valid and has the correct scopes/permissions - they don't check that the person, application or device presenting the token is its rightful owner.  Bad times.

Another draft IETF standard focuses on generating tokens that basically can't be reused if stolen.  Each issued token contains a little piece of the requester - namely their public key.  This allows the resource server to extract the public key from the access token and run a challenge-response dance with the requester, to see if they are in fact the holder of the corresponding private key.  If they are, great, access is granted.  If not, access is denied, as they are not the original token owner.
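As a simplified illustration of that challenge-response idea (not the wire format of any specific draft), the resource server can ask the presenter to sign a fresh nonce with the private key matching the public key baked into the token.  This sketch uses the Python cryptography package and a throwaway key pair:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Client side: the key pair whose public half was baked into the access token.
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
cnf_public_key = client_key.public_key()   # what the resource server extracts

# Resource server side: issue a random nonce as the challenge.
nonce = os.urandom(32)

# Client side: prove possession by signing the nonce with the private key.
signature = client_key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

# Resource server side: verify the response against the key from the token.
try:
    cnf_public_key.verify(signature, nonce, padding.PKCS1v15(), hashes.SHA256())
    print("Proof of possession OK - access granted")
except InvalidSignature:
    print("Presenter does not hold the private key - access denied")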

3 - Social Signup Default


Social sign-up and sign-in ("Sign in with Facebook...") is so omnipresent in the applications and consumer services world that enterprise service providers - be it the public sector delivering government services or the private sector delivering banking, insurance or retail services - cannot ignore the end user benefits it can bring.

Not only does it speed up the user registration process, it also reduces the overhead for the service provider, in that they no longer need to handle password storage.  The user is authenticating with a 3rd party, so the service provider effectively outsources password storage to Google, Facebook, Microsoft or whomever.

The flip side of using a 3rd party is that you have to trust their vetting, registration and data storage capabilities.  Social networks are notorious for having fake accounts, or accounts that no longer map to the correct owner.  If you are a service provider leveraging social sign-in, your application and data assurance standards need to align, adding extra levels of assurance or verification as necessary.


4 - Push Authentication Default


What is push authentication? I thought one-time passwords (OTP) were going to save the world? Well, OTPs are certainly not going away any time soon, but many consumer-facing sites, and indeed social networks, are now introducing push authentication.  This basically occurs via a mobile app that receives a notification at login time.  The device and app are previously registered to the user.  At login time, the end user performs a simple action (generally a fingerprint scan or a swipe) to confirm they are the user logging in.  Push is certainly becoming the standard mechanism amongst the under-30s and no doubt will replace OTP for enterprise multi-factor authentication soon.


5 - Stateless Tokens & Microservices: a Match Made in Heaven


Microservice architectures seem to be everywhere.  Out with monolithic apps, which often have long delivery cycles and lots of fragility, and in with tiny, often single-function applications that are loosely coupled and can be delivered and updated continuously.

However, that introduces new challenges and requirements around authentication and authorization in a microservices world.  Here, OAuth2 again tends to come to the rescue, as many microservice or single-function systems are generally just exposed APIs sitting behind a routing and throttling mechanism.  Add into that mix stateless access tokens (that is, an access token that is a JSON Web Token, carrying all of the access, validity and permissions data with it in one place) and you can start to support multi-million-transaction infrastructures.

Microservice infrastructures tend to get hit hard.  Very hard.  Millions of requests per day, performing GETs to retrieve data or POSTs to update it, with each transaction perhaps hitting 10, 20 or 100 tiny independent services.  Being able to pass an access token down within an HTTP Authorization header is powerful and flexible, and coupling that with a stateless token provides the necessary scaling backbone.

But why is stateless so interesting here? A stateless access token allows local introspection before access is given. That allows a microservice API to verify and look inside the presented JWT (which will appear in the Authorization header) without making a call back to the authorization service that issued the token. This reduction in hops can be pretty useful in high volume ecosystems - albeit the microservice will need the public key of the authorization service to verify the tokens and some extra code to verify and then introspect attributes like the exp, aud, scopes etc.
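A minimal sketch of that local introspection step, assuming the token is an RS256-signed JWT and using the PyJWT library; the audience value and scope claim name are assumptions for illustration:

import jwt  # PyJWT - pip install pyjwt[crypto]

def locally_introspect(token: str, as_public_key_pem: str, required_scope: str) -> dict:
    """Verify a stateless JWT access token without calling the authorization server.

    Signature, exp and aud are checked by PyJWT; the scope check is manual.
    """
    claims = jwt.decode(
        token,
        as_public_key_pem,          # public key of the issuing authorization service
        algorithms=["RS256"],
        audience="https://api.example.com",
    )
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError(f"missing scope: {required_scope}")
    return claims

# claims = locally_introspect(bearer_token, AS_PUBLIC_KEY_PEM, "profile")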

It will be interesting to see where we are come this time in 2017...

Using OpenAM as a Trusted File Authorization Engine

A common theme in the DevOps world, or any containerization-style infrastructure, may be the need to verify which executables (or files in general) can be installed, run, updated or deleted within a particular environment, image or container.  There are numerous ways this could be done. Consider a use case where exes, Android APKs or other 3rd party compiled files […]

Protect Bearer Tokens Using Proof of Possession

Bearer tokens are the cash of the digital world.  They need to be protected.  Whoever gets hold of them can, well, basically use them as if they were you - pretty much the same as cash.  The shop owner only really checks the cash is real; they don't check that the £5 note you produced from your wallet is actually your £5 note.

This has been an age-old issue in web access management technologies, for both stateless and stateful token types - OAuth2 access and refresh tokens, as well as OpenID Connect id tokens.

In the hyper connected Consumer Identity & Access Management (CIAM) and Internet (Identity) of Things worlds, this can become a big problem.

Token misuse, perhaps via MITM (man in the middle) attacks, or even resource server misconfiguration, could result in considerable data compromise.

However, there are some newer standards that look to add some binding ability to the tokens – that is, glue them to a particular user or device based on some simple crypto.

The unstable nightly source and build of OpenAM has added the proof of possession capability to the OAuth2 provider service. (Perhaps the first vendor to do so? Email me if you see other implementations..).

The idea is that the client makes a normal request for an access_token from the authorization service (AS), but adds another parameter to the request containing some crypto material the client has access to - basically the public key of an asymmetric key pair.

This key, which could be ephemeral for that request, is then baked into the access_token.  If the access_token is a JWT, the JWT contains this public key and is then signed by the authorization service.  If using a stateful access_token, the AS token introspection endpoint can relay the public key back to the resource server at lookup time.

This basically gives the resource server (RS) the option to run a challenge-response style interaction with the client, to see if they are in possession of the corresponding private key - thus proving they are the correct recipient of the originally issued access_token!

 

The basic flow sees the addition of a new parameter to the access_token request to the OpenAM authorization service, under the name "cnf_key".  This is a confirmation key that the client is in possession of.  In this example, it is a base64-encoded JSON Web Key representation of a public key.

So, for example, a POST request to the endpoint ../openam/oauth2/access_token would now take the parameters grant_type, scope and also cnf_key, with an authorization header containing the OAuth2 client id and secret as normal.  A cnf_key could look something like this:

eyJqd2siOnsKICAiYWxnIjogIlJTMjU2IiwKICAiZSI6ICJBUUFCIiwKICAibiI6ICJ2TDM0UXh5bXdId1dEOVpWTDljaU42Yk5ybk91NTI0cjdZMzRvUlJXRkpjWjc3S1dXaHB1Si1iSlZXVVNUd3ZKTGdWTWlDZmFxSTZEWnIwNWQ2VGdONTNfMklVWmtHLXgzNnBFbDZZRWs1d1ZnX1ExelFkeEZHZkRoeFBWajJ3TWNNcjFyR0h1UUFEeC1qV2JHeGRHLTJXMXFsVEdQT253SklqYk9wVm1RYUJjNHhSYndqenNsdG1tcndzMmZNTUtNTDVqbnFwR2RoeWRfdXlFTU0wdHpNTGFNSVN2M2lmeFM2UUw3c2tpZTZ5ajJxamxUTUd3QjA4S29ZUEQ2QlVPaXd6QWxkUmJfM3k4bVA2TXY5cDdvQXBheTZCb25pWU8yaVJySzMxUlRaLVlWUHRleTllSWZ1d0ZFc0RqVzNES0JBS21rMlhGY0NkTHEyU1djVWFOc1EiLAogICJrdHkiOiAiUlNBIiwKICAidXNlIjogInNpZyIsCiAgImtpZCI6ICJzbW9mZi1rZXkiCn19Cg==

Running that through base64 -d on bash, or via an online base64 decoder, shows something like the following: (NB this JWK was created using an online tool for simple testing)

{
  "jwk": {
    "alg": "RS256",
    "e": "AQAB",
    "n": "vL34QxymwHwWD9ZVL9ciN6bNrnOu524r7Y34oRRWFJcZ77KWWhpuJ-bJVWUSTwvJLgVMiCfaqI6DZr05d6TgN53_2IUZkG-x36pEl6YEk5wVg_Q1zQdxFGfDhxPVj2wMcMr1rGHuQADx-jWbGxdG-2W1qlTGPOnwJIjbOpVmQaBc4xRbwjzsltmmrws2fMMKML5jnqpGdhyd_uyEMM0tzMLaMISv3ifxS6QL7skie6yj2qjlTMGwB08KoYPD6BUOiwzAldRb_3y8mP6Mv9p7oApay6BoniYO2iRrK31RTZ-YVPtey9eIfuwFEsDjW3DKBAKmk2XFcCdLq2SWcUaNsQ",
    "kty": "RSA",
    "use": "sig",
    "kid": "smoff-key"
  }
}
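Putting that together, a token request carrying cnf_key might look like the following sketch (Python requests; the OpenAM host, client credentials, grant type and scope are placeholders, and the JWK modulus is elided):

import base64
import json
import requests

# The JWK shown above, with the modulus elided for brevity.
jwk = {"jwk": {"alg": "RS256", "e": "AQAB", "n": "...", "kty": "RSA", "use": "sig", "kid": "smoff-key"}}
cnf_key = base64.b64encode(json.dumps(jwk).encode("utf-8")).decode("ascii")

resp = requests.post(
    "https://openam.example.com/openam/oauth2/access_token",  # placeholder host
    auth=("myClientId", "myClientSecret"),                    # OAuth2 client id / secret (HTTP Basic)
    data={
        "grant_type": "client_credentials",   # assumed grant type for this sketch
        "scope": "profile",
        "cnf_key": cnf_key,
    },
)
print(resp.json())   # normal access_token payload, with the public key bound to it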

The authorization service should then return the normal access_token payload.  If using stateless OAuth2 access_tokens, the access_token will contain a new embedded cnf_key attribute holding the originally submitted public key.  The resource server can then leverage that public key to perform an out-of-band challenge-response with the client when the client comes to present the access_token later.

If using the more traditional stateful access_tokens, the RS can call the ../oauth2/introspect endpoint to find the public key.

The powerful use case is to validate that the client submitting the access_token is in fact the original recipient to whom the access_token was issued.  This can help reduce MITM and other basic token misuse scenarios.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Blockchain For Identity: Access Request Management

This is the first in a series of blogs that will start to look at some use cases for leveraging blockchain technology in the world of identity and access management.  I don't proclaim to be a blockchain expert, and there are several blogs better equipped to tackle that subject, but a good introductory text is the O'Reilly-published "Blockchain: Blueprint for a New Economy".

I want to first look at access request management - an age-old issue that has developed substantially in the last 30 years into several sub-industries within the IAM world, with specialist vendors, standards and methodologies.

In the Old Days

Embedded/Local Assertion Management

So this is a typical "standalone" model of access management.  An application manages both users and access control list information within its own boundary.  Each application needs a separate login and access control database. The subject is typically a person and the object an application with functions and processes.

Specialism & Economies of Scale

So whilst the first example is the starting point - and still exists in certain environments - specialism quickly occurred, with separate processes for identity assertion management and access control list management.



Externalised Identity & ACL Management

So this could be a typical enterprise web access management paradigm.  An identity provider generates a token or assertion, with a policy enforcement process acting as a gatekeeper down into the protected objects.  This works perfectly well for single-domain scenarios, where identity and resource data can be easily controlled.  Scaling, too, is not really a major issue here as, traditionally, this approach would sit within the same LAN, for example.

So far so good.  But today, we are starting to see a much more federated and fragmented landscape. Organisations have complex supply chains, with partners, sub-companies and external users all requiring access into previously internal-only objects.  Employees, too, want to access resources in other domains and from as-a-service providers.


Federated Identities


This then creates a much more federated landscape.  Protocols such as SAML2 and OAuth2/OIDC allow identity data from trusted 3rd parties - data not originating from the object's domain - to interact with those resources securely.

Again, from a scaling perspective this tends to work quite well.  The main external interactions tend to be at the identity layer, with access control information still sitting within the object's domain - albeit externalised from the resource itself.

The Mesh and Super-Federation

As the Internet of Things becomes the norm, the increased volume of both subjects and objects creates numerous challenges.  Firstly, the definition of both changes.  A subject will be not just a person, but also a thing, and potentially another service.  An object will be not just an application, but an autonomous piece of data, an API or even another subject.  This creates a multi-point set of interactions, with subjects accessing other subjects, APIs accessing APIs, things accessing APIs and so on.

Enter the Blockchain

So where does the blockchain fit into all this?  Well, the main characteristics that are valuable in this sort of landscape are the decentralised, append-only, globally accessible nature of a blockchain.  Blockchain technology could be used as an access request warehouse.  This warehouse could contain the output from the access request workflow process, such as this sample of pseudo code:

{"sub":"1234-org2", "obj":"file.dat", "access":"granted", "iss":"tomorrow", "exp":"tomorrow+1", "issuingAuth":"org1", "added":"now"}

This is basic, but it would be hashed and cryptographically secured by a trusted access request manager.  That manager would have the necessary circle-of-trust relationships with the relevant identity and access control managers.

After each access request, an entry would be added to the chain.  Each object would then be able to query the chain to identify all entries that map to its object set, unionise those entries and work out the necessary access control result - for example, covering both access granted and access denied results.
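As a toy illustration of that warehouse idea (no particular blockchain platform assumed), an append-only, hash-linked list of access request entries that an object can later query might look like this:

import hashlib
import json

class AccessRequestChain:
    """Toy append-only, hash-linked warehouse of access request decisions."""

    def __init__(self):
        self.blocks = []

    def append(self, entry: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(entry, sort_keys=True)
        block = {
            "entry": entry,
            "prev_hash": prev_hash,
            "hash": hashlib.sha256((prev_hash + payload).encode("utf-8")).hexdigest(),
        }
        self.blocks.append(block)
        return block

    def entries_for_object(self, obj: str) -> list:
        """Union of all entries that map to a given object."""
        return [b["entry"] for b in self.blocks if b["entry"].get("obj") == obj]

chain = AccessRequestChain()
chain.append({"sub": "1234-org2", "obj": "file.dat", "access": "granted",
              "iss": "tomorrow", "exp": "tomorrow+1", "issuingAuth": "org1"})
chain.append({"sub": "5678-org3", "obj": "file.dat", "access": "denied",
              "issuingAuth": "org1"})
print(chain.entries_for_object("file.dat"))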


A Blockchain-Enabled Access Request Management Workflow

So What?

So we now have another system and process to manage?  Well, possibly, but this could provide a much more scalable and interoperable model with respect to all the access control decisions that would need to take place to enable an IoT- and API-driven world.

Each object could have access to any blockchain-enabled node - so there would be massive fault tolerance and elastic scaling.  Each subject would simply present a self-contained assertion.  Today that could be a JWT or a token within a proof-of-possession framework.  They could collect that from any generator they choose.  Things like authentication and identity validation would not be altered.

Access request workflow management would be abstracted - the same asynchronous processes, approvals and trusted interactions would take place.  The blockchain would simply be an externalised, distributed, secure storage mechanism.

From a technology perspective I don't believe this framework exists, and I will be investigating a proof of concept in this area.

Blog originally posted at The Identity Cookbook


Delegated RBAC CRUD Via Workflow

OpenIDM provides a powerful delegated administration model, covering both REST endpoint access and workflow process access.

A simple way to provide scoped access to IDM functions is to wrap a workflow process around them and then delegate access to that workflow to a certain group of users.

A basic example could be role-based access control administration: the create, read, update and delete tasks often associated with object management - RBAC-CRUD, to save a few letters.

Each CRUD function can be wrapped into a workflow, with access to those workflows then given to members of the rbac-admins internal authorization role.

I created five workflows: four for the role-admins and one for the end user.

role-admins: createRole.bar

 

A simple wrapper that takes two arguments and runs an openidm.create() to create the role.
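For reference, the same create operation can also be driven over OpenIDM's REST interface.  A rough sketch follows, assuming a default local instance, default admin credentials and an illustrative role payload - adjust all of these to your deployment:

import requests

# Hypothetical local OpenIDM instance and admin credentials.
OPENIDM = "http://localhost:8080/openidm"
HEADERS = {
    "X-OpenIDM-Username": "openidm-admin",
    "X-OpenIDM-Password": "openidm-admin",
}

role = {"name": "contractors", "description": "Temporary contractor access"}

# Roughly equivalent to the workflow's openidm.create() call, driven over REST.
resp = requests.post(f"{OPENIDM}/managed/role?_action=create", headers=HEADERS, json=role)
print(resp.status_code, resp.json())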

role-admins: deleteRole.bar

Opposite of create... it does a lookahead, using some JS stored within the form HTML, to get a list of roles that can be deleted.  Before the openidm.delete() function is called, it clears down the members list first.

role-admins: addRoleToUserTemporal.bar

So we have a role; now we want to add some users.  Again, it does a lookahead to create a dynamic select drop-down, then free text to add a username.  You could add some checking logic here to make sure the user exists before submission, but I wrap a conditional check in the workflow before patching the role anyway.

The other attribute is a timer - this is just based on the Activiti timer element, and I've set it to take just a time.  In reality you would accept a date, but for demos a time is much easier.  So, after the time has passed, the initial role-to-user association is reversed, taking the role away.

role-admins: removeRoleFromUser.bar

A simple manual process to remove a role from a user.  Note that all the patches in the workflows work against managed/role.  Whilst you can add and remove roles via managed/user/_id, by using the managed/role endpoint I can more accurately restrict the access the role-admins get via access.js.

openidm-authorized:requestRole.bar

We then have one workflow left, which is available to any user - i.e. it's a standard end user workflow, this time for an access request.

This again does a lookahead and performs an approval step before provisioning the role to the user.  The default manager approval is in the workflow, commented out, alongside the ability to use any member of the role-admins authorization role, so you can flip between the two approval journeys.

The use of role-admins leverages the Activiti candidate users attribute - e.g. role-admins could contain 10 users; the approval goes to all 10, and the first one to claim the task can approve.

A couple of points on access.  Workflow access is governed by the ../conf/process-access.json file.  In there, add the pattern of the role _id along with the internal authorization roles that should have access - note that these are internal roles, not just managed/role.

The access.js file in the ../script directory also needs updating, to give the role-admins users full control over the managed/role endpoint.

Code for this set is available here.

Note: thanks should also go to Marek Detko and some code cribbed from his role collection example.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.