Creating Personal Access Tokens in ForgeRock AM

Personal Access Tokens (PATs) provide scoped, self-managed access credentials that trusted systems and services can use to act on a user's behalf.

Similar to OAuth tokens, they often have no expiration and are used conceptually in place of a password. A PAT could be used in combination with a username when performing basic authentication.
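As a quick illustration (a minimal Node.js sketch with placeholder values, not tied to any particular client library), the basic authentication header would simply carry the base64-encoded username:PAT pair:

// Minimal sketch: build a basic auth header with a PAT in place of a password.
// The username and PAT values are placeholders taken from the examples later in this post.
const username = 'smoff';
const pat = 'f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c';

const header = 'Authorization: Basic ' + Buffer.from(`${username}:${pat}`).toString('base64');
console.log(header);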

For example, see the https://github.com/settings/tokens page within GitHub, which allows scoped tokens to be created for services that need access to your GitHub profile.

PAT Creation

The PAT can be an opaque string – perhaps a SHA-256 hash.  Using a hash seems the most sensible approach to avoid collisions and create a fixed-length, portable string.  A hash without a key of course won't provide any creator assurance or verification function, but since the hash will be stored against the user profile and not treated like a session or token, this shouldn't be an issue.

Example PAT values could be:

f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c:{"resource1" : [ "read", "write", "execute" ]}

a011286605ff6a5de51f4d46eb511a9e8715498fca87965576c73b8fd27246fe:{"resource2" : [ "read", "write" ]}

The key was simply created by running the resource and its associated permissions through sha256sum on Linux.  How you create the hash is beyond the scope of this blog, but it could easily be handled by, say, ForgeRock IDM and a custom endpoint in a few lines of JavaScript.
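As a rough sketch of that idea (written here for Node.js rather than an IDM endpoint, with illustrative names – this is not the author's code):

// Derive an opaque, fixed-length PAT key by hashing the permissions document,
// then store it alongside the permissions it maps to as "<sha256>:<json>".
const crypto = require('crypto');

function createPat(permissions) {
  const json = JSON.stringify(permissions);
  const hash = crypto.createHash('sha256').update(json).digest('hex');
  return `${hash}:${json}`;
}

console.log(createPat({ resource1: ['read', 'write', 'execute'] }));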

PAT Storage

The important aspect is where to store the PAT once it has been created.  Ideally this should be stored against the user's profile record in DJ.  I'd recommend creating a new, dedicated schema attribute for PATs that is multi-valued.  The user can then update their PATs over REST the same as any other profile attribute.

For this demo I used the existing attribute “iplanet-am-user-alias-list” for speed, as it is multi-valued, and added a self-created PAT for my fake resource.

Using a multi-valued attribute allows me to create any number of PATs.  As they don't have an expiration, they might last for some time in the user store.
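A sketch of what that self-service update could look like over REST is below; the endpoint path, attribute name and header usage are assumptions based on a default AM deployment, so adjust them for your realm and version:

// Illustrative only: write a new PAT into the multi-valued profile attribute used in this demo.
const AM = 'https://openam.example.com/openam';   // placeholder base URL

async function storePat(username, ssoToken, pat) {
  const response = await fetch(`${AM}/json/users/${username}`, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      'iPlanetDirectoryPro': ssoToken             // session token of the user (or an admin)
    },
    body: JSON.stringify({
      'iplanet-am-user-alias-list': [pat]         // multi-valued, so several PATs can coexist
    })
  });
  return response.json();
}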

PAT Usage

Once stored, PATs could be used in a variety of ways to provide “access” within applications. The simplest way is to leverage the AM authorization engine as a decision point to verify that a PAT exists and which permissions it maps to.

Once the PAT is created and stored, the end user can hand it to another user or service that will use it on their behalf.  That service or user presents the username:PAT combination to the service they wish to gain access to, and that service calls the AM authorization APIs to check whether the user:PAT combination is valid.

The protected service would call {{OpenAM}}/openam/json/policies?_action=evaluate with a payload similar to:
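The payload would be along the following lines (illustrative: the application name matches the policy set created below, the resource embeds the username:PAT pair, and the policyeval account's SSO token is supplied in the request header):

{
  "resources": [
    "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c"
  ],
  "application": "PATValidator"
}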

Here I am calling the ../policies endpoint with a dedicated account called “policyeval”, which has the ability to read the REST endpoint and also to read realm users, which we will need later on.  Set the necessary privileges on the Privileges tab within the admin console.

If the PAT exists within the user profile of “smoff”, AM returns an access=true message, along with the resource and associated permissions, which can be used by the calling application:
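The response would look something like the following (illustrative: the attributes block carries the permissions mapped to the PAT, and the exact action name – access here – depends on how the resource type's actions were defined):

[
  {
    "resource": "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c",
    "actions": { "access": true },
    "attributes": {
      "resource1": ["read", "write", "execute"]
    },
    "advices": {}
  }
]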

So what needs setting up in the background to allow AM to make these decisions? It is all pretty simple really.

Create an Authorization Resource Type for PATs

Firstly create a resource type that matches the pat://*.* format (or any format you prefer):

Next we need to add a policy set that will contain our access policies:

The PATValidator policy set contains only one policy, called AllPATs, which is just a wildcard match for pat://*:*.  This will allow any combination of user:PAT to be submitted for validation:

Make sure to set the subjects condition to “NOT Never Match”, as we are not analysing user session data here.  The logic for the analysis is handled by a simple script.

PAT Authorization Script

The script is available here.

At a high level it does the following (a condensed sketch follows the list):

  1. Captures the submitted username and PAT that form part of the authorization request
  2. As the user will not have a local session, we need to make a native REST call to look up the user
  3. We do this by first generating a session for our policyeval user
  4. We use that session to call the ../json/users endpoint to search for the user's PATs
  5. We compare the submitted PAT against any PATs found on the user profile
  6. If a match is found, we pull out the assigned permissions and send them back as a response attribute array to the calling application
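The following is a condensed, illustrative sketch of that logic rather than the published script itself.  It assumes the standard scripted policy condition bindings (resourceURI, httpClient, authorized, responseAttributes), that org.forgerock.http.protocol.Request is whitelisted for scripts, and placeholder values for the AM base URL and the policyeval credentials – check the linked script and your AM version's scripting API before reusing any of it.

// Illustrative sketch only – see the linked script for the working version.
var AM = "http://openam.example.com/openam";      // placeholder base URL

function call(method, uri, headers) {
  var request = new org.forgerock.http.protocol.Request();
  request.setMethod(method);
  request.setUri(uri);
  for (var name in headers) {
    request.getHeaders().add(name, headers[name]);
  }
  var response = httpClient.send(request).get();  // blocking call from the script
  return JSON.parse(response.getEntity().getString());
}

// 1. The requested resource arrives as pat://<username>:<PAT>
var parts = resourceURI.replace("pat://", "").split(":");
var username = parts[0];
var submittedPat = parts[1];

// 2/3. No user session is present, so authenticate the policyeval service account
var tokenId = call("POST", AM + "/json/authenticate", {
  "X-OpenAM-Username": "policyeval",
  "X-OpenAM-Password": "<password>",              // placeholder – never hard-code credentials
  "Content-Type": "application/json"
}).tokenId;

// 4. Read the user's stored PATs from the profile attribute used earlier
var profile = call("GET", AM + "/json/users/" + username, {
  "iPlanetDirectoryPro": tokenId
});
var storedPats = profile["iplanet-am-user-alias-list"] || [];

// 5/6. Compare, and on a match hand the permissions back as response attributes
authorized = false;
for (var i = 0; i < storedPats.length; i++) {
  var stored = String(storedPats[i]);
  if (stored.indexOf(submittedPat + ":") === 0) {
    var permissions = JSON.parse(stored.substring(stored.indexOf(":") + 1));
    for (var resource in permissions) {
      responseAttributes.put(resource, permissions[resource]);  // e.g. resource1 -> ["read", "write", "execute"]
    }
    authorized = true;
  }
}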

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

A Design for Modern Authentication

The password is dead. Long live the password! I have lost count of how many articles and blogs I have seen with regards to the weaknesses, the management, the flexibility, security, insecurity and overall usage of passwords when it comes to user authentication. We all use them and they’re not going anywhere any time soon. OK, so next step. What else can and should we be using for our user and device based authentication and login journeys?

Where We Are Now – The Sticking Plaster of MFA

So we accept that the traditional combo of user name and password is bad for our (system) health. Step forward multi-factor authentication. Or 2FA. Take your pick. This generally saw the introduction of something you have in the form of a token, phone-as-a-token or some other out-of-band mechanism that would create a one-time password. Traditionally the “out of band” mechanism was either an email or SMS to a preregistered address or phone number, containing a 6-digit passcode. Internal or employee systems would often leverage a hard token – either a USB dongle or a small tag with a tiny display that would show a rotating pin. These concepts were certainly better from a security perspective, but a) were not unbreakable and b) often created a disjointed user login experience with lots of interruptions and user interaction.

Basic MFA factors

Where We Are Moving To – Deca-factor Authentication!

OK, so user names and passwords are not great. MFA is simple and pretty cheap to implement, but means either the end user needs to carry something around (a bit 2006) or has an interrupted login journey by constantly being asked for a one-time password. What we need is not two-factor authentication but deca-factor authentication! More factors. At least 10 to be precise. Increase the factors and aim to reduce the material impact of a single factor compromise whilst simultaneously reducing the number of user interrupts. These 10 factors (it could be 8, it could be 15, you get the idea) are all about introducing a broad-spectrum analysis for the login journey.

Each factor is much more cohesive and modular, analysing a single piece of the login journey. The login journey could still leverage pretty static profile-related data such as a user name, but is augmented with much more context – the location, time and device origin of the request, plus comparison factors that look at previous login requests to determine patterns or abnormalities.

Breaking authentication down? “Deca-factors”

Some of these factors could “pass” and some could “fail” during the login journey, but the process is much more about accumulating and analysing risk, and therefore being able to respond to high risk more accurately. Applying 2FA to every user login does not reduce risk per se; it simply applies a blanket risk to every actor.

Wouldn’t it be much better to allow login variation for genuine users who do regularly change machine, location, network and time zone? Wouldn’t it be better to give users more choice over their login journeys and provide numerous options if and when high-risk scenarios do occur?

Another key area I think authentication is moving towards is that of transparency. “Frictionless”, “effortless” or “zero-effort” logins are all the buzz. If, as an end user, I enrol, sacrifice some privacy regarding a device fingerprint and maybe download an OTP or push app, why can’t I just “login” without having my experience interrupted? The classic security/convenience paradox. By introducing more factors and “gluing” those factors together with processing logic, the user authentication system can be much more responsive – perhaps mimicking a state machine, designed in a non-deterministic fashion, where any given factor could have multiple outcomes.

Where We Want To Get To – Transparent Pre-Authentication

So I guess the sci-fi end goal is to just turn up at work/coffee shop/door/car/website/application (delete as applicable) and simply present oneself. The service would not only “know” who you were, but also trust that it is you. A bit like the Queen. Every time you presented yourself, transparent background checks would continually evaluate every part of the interaction, looking for changes and identifying risk.

Session + Bind + Usage – increasing transparency?

The closest we are to that today, in the web world at least, is the exchange of the authentication process for a cookie/session/tokenId/access_token. Whether that is stateless or stateful, it’s something to represent the user when they attempt to gain access to the service again. Couple that token to some kind of binding (either to a PKI key pair or a TLS session) to reduce the impact of token theft and there is some kind of repeatable access use case. However, change is all around, and the token presentation must therefore be coupled with all the usage, context, resource and transaction data that the token is attempting to access, to allow the authentication machine to loop through the necessary deca-factors, either individually or collectively, to identify risk or change.

Authentication is moving on.  A more modern system must accommodate a broad spectrum of signals when analysing who is instigating a transaction, coupled with mechanisms that increase transparency and pre-identify risk without unnecessary and obtrusive interruptions.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

SAML2 IDP Automated Certificate Management in FR AM

ForgeRock AM 5.0 ships with Amster, a lightweight command line tool and interactive shell that allows many management and configuration tasks to be automated.

A common task associated with SAML2 identity provider configs is updating the certificates used for signing and, possibly, encrypting assertions.  A feature added in OpenAM 13.0 was the ability to have multiple certificates within an IDP config.  This is useful for overcoming the age-old challenge of handling certificate expiration.  An invalid cert can break integrations with service providers, and the process of removing then adding a new certificate would require every entity within the circle of trust to retrieve new metadata into their configs – and thus create downtime – so the timing of this is often an issue.  Having multiple certificates in the config allows service providers to pull down metadata at a known date, instead of only when certificates expire.

Here we see the basic admin view of the IDP config, showing the list of certs available.  These certs are stored in the JCEKS keystore in AM 5.0 (previously the JKS keystore).

So the config contains the am1 and am2 certs – an export of the metadata (from the ../openam/saml2/jsp/exportmetadata.jsp?entityid=idp endpoint) will list both certs that could be used for signing:
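For illustration, the relevant fragment of the exported IDP metadata would look something like the following, with one KeyDescriptor per signing certificate (certificate values elided, surrounding elements simplified):

<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <KeyDescriptor use="signing">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data>
        <ds:X509Certificate>...am1 certificate...</ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>
  </KeyDescriptor>
  <KeyDescriptor use="signing">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data>
        <ds:X509Certificate>...am2 certificate...</ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>
  </KeyDescriptor>
  <!-- SSO service endpoints etc. -->
</IDPSSODescriptor>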

The first certificate listed in the config is the one used to sign.  When it expires, just remove it from the list and the second certificate is then used.  As the service provider already has both certs in its originally downloaded metadata, there should be no break in service.

Anyway, back to automation.  Amster can manage the SAML2 entities, either via the shell or a script.  This allows admins to operationally create, edit and update entities, and a regular task could be to add new certificates to the IDP list as necessary.

To do this I created a script that does just that.  It's a basic bash script that utilises Amster to read, edit and then re-import the entity as a JSON-wrapped XML object.

The script is available here.

For more information on IDP certificate management see the docs here.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

The Role of Identity Management in the GDPR

Unless you have been living in a darkened room for a long time, you will know the countdown to the EU's General Data Protection Regulation is dramatically coming to a head.  May 2018 is when the regulation really takes hold, and organisations are acting fast to put plans, processes and personnel in place in order to comply.

Whilst many organisations are looking at employing a Data Privacy Officer (DPO), reading through all the legalese and developing data analytics and tagging processes, many also need to embrace and understand the requirements around how their consumer identity and access management platform can and should be used in this new regulatory setting.

My intention in this blog isn't to list every single article and what it means - there are plenty of other sites that can help with that.  I really want to highlight some of the more identity-related components of the GDPR and what needs to be done.

Personal Data

On the personal data front, organisations are collecting more data, more frequently than ever before.  Some data is explicit - such as the first name, last name and date of birth you enter when registering for a service - through to the more subtle: location, history and preference details, amongst others. The GDPR focuses on making sure personal data is processed legally and kept only for as long as necessary, with a full end user interface that lets people make sure their data is up to date and accurate.

It goes without saying that this personal data needs the necessary security, confidentiality, integrity and availability constraints applied to it.  This will require least-privileged administrative controls and data persistence security, such as hashing or encryption where appropriate.

Lawful Processing

Ah, the word law! That must be one for the legal team. Or the newly appointed DPO. It can't be a security, identity or technology issue.  Partially correct. But lawful processing also has a significant requirement surrounding the capture and management of consent.  So what is this explicit consent? The data owner - that's Joe Bloggs, whose data has been snaffled - needs to be fully aware of the data that has been captured, why it is captured and who has access to it.

The service provider also needs to capture consent explicitly - not an implicit "the end user needs to opt out", but rather the end user needs to "opt in" for their data to be used and processed.  This will require a transparent, user-driven consent system, with sharing and, more importantly, timely revocation of access.  Protocols such as User-Managed Access (UMA) may come in useful here.

Individuals Right to be Informed

The lawful processing aspect flows neatly into the entire area of the end user being informed.  The end user needs to be in a position to make informed decisions around data sharing, service registration, data revocation and more.  The days of 10-page terms and conditions thrust onto the end user's screen at service start-up are over.

Non-technical language is now a must, with clear explanations of why data has been captured and which third parties - if any - have access to it.  This again flows into the consent model: for the data owner to make consent decisions, they need simple-to-understand information.  Registration flows will now need to be much more progressive - only collecting data when it is needed, with a clear explanation of why it is needed and what processing will be done with it.  20-attribute registration forms are dead.

Individuals Right to Rectification, Export and Erasure

There are certainly some new requirements here.  If you are a service provider, can you let your end users clearly see what data you have captured about them, and provide that data in a simple end user dashboard where they can make changes and keep it up to date?  What about the ability for the data owner to export that data in a machine-readable, standard format such as CSV or JSON?

The right to erasure is also interesting - do you know where your end user data resides?  Which systems, what attributes, what correlations or translations have taken place?  Could you issue a de-provisioning request to delete, clean or anonymise that data? If not, you may need to investigate why, and what can be done to remediate that.


Conclusion

The GDPR is big.  It contains over 90 articles, full of legalese and fine print.  Don't just assume the legal team or the newly appointed DPO will cover your company's ass.  Full platform data analytics and tagging will be needed, along with a modern consumer identity and access management design pattern.  End user dashboards, registration journeys and consent frameworks will need updating.

The interesting aspect is that privacy is now becoming a competitive differentiator.  The GDPR should not be seen as just an internal compliance exercise.  It could actually be a launch pad for building closer, more trusted relationships with your end user community.
