Creating Personal Access Tokens in ForgeRock AM

Personal Access Tokens (PATs) are scoped, self-managed access credentials that can be used to give trusted systems and services access to act on a user's behalf.

Similar to OAuth tokens, they often don't have an expiration and are conceptually used in place of passwords.  A PAT could be used in combination with a username when performing basic authentication.

For example, see the https://github.com/settings/tokens page within GitHub, which allows scoped tokens to be created for services that need to access your GitHub profile.

PAT Creation

The PAT can be an opaque string – perhaps a SHA256 hash.  Using a hash seems the most sensible approach to avoid collisions and create a fixed-length, portable string.  A hash without a key of course won't provide any creator assurance/verification function, but since the hash will be stored against the user profile and not treated like a session/token this shouldn't be an issue.

An example PAT value could be:

f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c:{"resource1" : [ "read", "write", "execute" ] }

a011286605ff6a5de51f4d46eb511a9e8715498fca87965576c73b8fd27246fe:{"resource2" : [ "read", "write"]}

The key was simply created by running the resource and the associated permissions through sha256sum on Linux.  How you create the hash is beyond the scope of this blog, but it could easily be handled by, say, ForgeRock IDM and a custom endpoint in a few lines of JavaScript.
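For illustration, a one-liner along these lines would do it (a sketch – the resource/permission string is just the example above, and echo -n avoids a trailing newline so the digest stays stable):

# hash the resource + permissions to get a fixed-length, opaque PAT key
echo -n '{"resource1" : [ "read", "write", "execute" ] }' | sha256sum | awk '{print $1}'

The 64-character hex digest that comes back is then prefixed to the permissions JSON, as in the examples above.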

PAT Storage

The important aspect is where to store the PAT once it has been created.  Ideally this needs to be stored against the user's profile record in DJ.  I'd recommend creating a new, dedicated schema attribute for PATs that is multi-valued.  The user can then update their PATs over REST the same as any other profile attribute.

For this demo I used the existing attribute called "iplanet-am-user-alias-list" for speed, as it is already multi-valued.  I added in a self-created PAT for my fake resource:
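As a sketch of that update over REST (host, session token and paths are illustrative here – the user, or an admin, needs a valid SSO token, and endpoint paths vary slightly by AM version):

# write the PAT into the multi-valued attribute on the user's profile
curl --request PUT \
  --header "iPlanetDirectoryPro: ${TOKEN}" \
  --header "Content-Type: application/json" \
  --data '{"iplanet-am-user-alias-list": ["f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c:{\"resource1\" : [ \"read\", \"write\", \"execute\" ] }"]}' \
  "https://openam.example.com/openam/json/users/smoff"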

Using a multi-valued attribute allows me to create any number of PATs.  As they don't have an expiration, they might last for some time in the user store.

PAT Usage

Once stored, they could be used in a variety of ways to provide "access" within applications. The simplest way is to leverage the AM authorization engine as a decision point, to verify that a PAT exists and what permissions it maps to.

Once the PAT is created and stored, the end user can hand it to another user or service that they want to act on their behalf.  That service or user presents the username:PAT combination to the service they wish to gain access to, and that service calls the AM authorization APIs to check whether the user:PAT combination is valid.

The protected service would call {{OpenAM}}/openam/json/policies?_action=evaluate with a payload similar to:
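Something along these lines – the resource is the username:PAT pair under our pat:// scheme and the application is the policy set we create below (field names may vary slightly between AM versions):

{
  "resources": ["pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c"],
  "application": "PATValidator"
}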

Here I am calling the ../policies endpoint as a dedicated account called "policyeval", which has the ability to read the REST endpoint and also read realm users, which we will need later on.  Edit the necessary privileges via the Privileges tab within the Admin console.

If the PAT exists within the user profile of “smoff”, AM returns an access=true message, along with the resource and associated permissions that can be used within the calling application:
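A sketch of the kind of decision array that comes back when the PAT matches – the exact attribute keys depend on what the policy script sets as response attributes:

[
  {
    "resource": "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c",
    "actions": {},
    "attributes": {
      "resource1": ["read", "write", "execute"]
    },
    "advices": {}
  }
]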

So what needs setting up in the background to allow AM to make these decisions? Well, it's all pretty simple really.

Create an Authorization Resource Type for PATs

Firstly create a resource type that matches the pat://*.* format (or any format you prefer):

Next we need to add a policy set that will contain our access policies:

The PATValidator only contains one policy called AllPATs, which is just a wildcard match for pat://*:*.  This will allow any combination of user:pat to be submitted for validation:

Make sure to set the subject condition to "NOT Never Match" as we are not analysing user session data here.  The logic for analysis is handled by a simple script.

PAT Authorization Script

The script is available here.

At a high level it does the following:

  1. Captures the submitted username and PAT that form part of the authorization request
  2. As the user will not have a local session, we need to make a native REST call to look up the user
  3. We do this by first generating a session for our policyeval user
  4. We use that session to call the ../json/users endpoint to perform a search for the user's PATs (both REST calls are sketched below)
  5. We do a comparison between the submitted PAT and any PATs found against the user profile
  6. If a match is found, we pull out the assigned permissions and send them back as a response attribute array to the calling application
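The script itself runs inside the policy engine, but the two REST calls it relies on (steps 3 and 4) can be sketched from the command line as follows – the hostname, the policyeval credentials and the attribute name are the demo values from this post, and paths may differ on your AM version:

# step 3: create a session for the policyeval service account
EVAL_TOKEN=$(curl --silent --request POST \
  --header "X-OpenAM-Username: policyeval" \
  --header "X-OpenAM-Password: password" \
  --header "Content-Type: application/json" \
  "https://openam.example.com/openam/json/authenticate" | jq -r '.tokenId')

# step 4: use that session to read the PATs stored against the user's profile
curl --silent \
  --header "iPlanetDirectoryPro: ${EVAL_TOKEN}" \
  "https://openam.example.com/openam/json/users/smoff?_fields=iplanet-am-user-alias-list"

Step 5 is then just a string comparison between the submitted PAT and each value returned.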

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

A Design for Modern Authentication

The password is dead. Long live the password! I have lost count of how many articles and blogs I have seen with regards to the weaknesses, the management, the flexibility, security, insecurity and overall usage of passwords when it comes to user authentication. We all use them and they’re not going anywhere any time soon. OK, so next step. What else can and should we be using for our user and device based authentication and login journeys?

Where We Are Now – The Sticking Plaster of MFA

So we accept that the traditional combo of usernames and passwords is bad for our (system) health. Step forward multi-factor authentication. Or 2FA. Take your pick. This generally saw the introduction of something you have in the form of a token, phone-as-a-token or some other out-of-band mechanism that would create a one-time-password. Traditionally the "out of band" mechanism was either an email or SMS to a preregistered address or phone number that contained a 6-digit passcode. Internal or employee systems would often leverage a hard token – either a USB dongle or a small tag with a tiny display that would show a rotating pin. These concepts were certainly better from a security perspective, but a) were not unbreakable and b) often created a disjointed user login experience with lots of interruptions and user interaction.

Basic MFA factors

Where We Are Moving To – Deca-factor Authentication!

OK, so usernames and passwords are not great. MFA is simple, pretty cheap to implement, but means either the end user needs to carry something around (a bit 2006) or has an interrupted login journey by constantly being asked for a one-time-password. What we need is not two-factor-authentication but deca-factor-authentication! More factors. At least 10 to be precise. Increase the factors and aim to reduce the material impact of a single-factor compromise whilst simultaneously reducing the number of user interrupts. These 10 factors (it could be 8, it could be 15, you get the idea) are all about introducing a broad-spectrum analysis for the login journey.

Each factor is much more cohesive and modular, analysing a single piece of the login journey. The login journey could still leverage pretty static profile-related data such as a user name, but is augmented with much more context – the location, time and device origin of the request, plus comparison factors that look at previous login requests to determine patterns or abnormalities.

Breaking authentication down? “Deca-factors”

Some of these factors could "pass" and some could "fail" during the login journey, but the process is much more about accumulating and analysing risk, and therefore being able to respond to high risk more accurately. Applying 2FA to every user login does not reduce risk per se; it simply applies a blanket risk to every actor.

Wouldn't it be much better to allow login variation for genuine users who do regularly change machine, location, network and time zone? Wouldn't it be better to give users more choice over their login journeys and provide numerous options if and when high-risk scenarios do occur?

Another key area I think authentication is moving towards is that of transparency. "Frictionless", "effortless" or "zero-effort" logins are all the buzz. If, as an end user, I enrol, sacrifice some privacy regarding a device fingerprint, and maybe download an OTP or push app, why can't I just "log in" without having my experience interrupted? The classic security/convenience paradox. By introducing more factors and "gluing" those factors together with processing logic, the user authentication system can be much more responsive – perhaps mimicking a state machine, designed in a non-deterministic fashion, where any given factor could have multiple outcomes.

Where We Want To Get To – Transparent Pre-Authentication

So I guess the sci-fi end goal is to just turn up at work/coffee shop/door/car/website/application (delete as applicable) and just present oneself. The service would not only "know" who you were, but also trust that it is you. A bit like the Queen. Every time you presented yourself, transparent background checks would continually evaluate every part of the interaction, looking for changes and identifying risk.

Session + Bind + Usage – increasing transparency?

The closest we are to that today, in the web world at least, is the exchange of the authentication process for a cookie/session/tokenId/access_token. Whether that is stateless or stateful, it's something to represent the user when they attempt to gain access to the service again. Couple that token with some kind of binding (either to a PKI key pair, or a TLS session) to reduce the impact of token theft, and there is some kind of repeatable access use case. However, change is all around, and the token presentation must therefore be coupled with all the usage, context, resource and transaction data that the token is attempting to access, to allow the authentication machine to loop through the necessary deca-factors, either individually or collectively, to identify risk or change.

Authentication is moving on.  A more modern system must accommodate a broad spectrum of signals when it comes to analysing who is instigating a transaction, coupled with mechanisms that increase transparency and pre-identification of risk without unnecessary and obtrusive interruptions.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

SAML2 IDP Automated Certificate Management in FR AM

ForgeRock AM 5.0 ships with Amster, a lightweight command line tool and interactive shell that allows for the automation of many management and configuration tasks.

A common task often associated with SAML2 identity provider configs is the updating of certificates that are used for signing and the possible encryption of assertions.  A feature added in OpenAM 13.0 was the ability to have multiple certificates within an IDP config.  This is useful to overcome the age-old challenge of how to handle certificate expiration.  An invalid cert can break integrations with service providers.  The process of removing, then adding a new certificate would require any entities within the circle of trust to retrieve new metadata into their configs – and thus create downtime – so the timing of this is often an issue.  The ability to have multiple certificates in the config allows service providers to pull down metadata at a known date, instead of specifically when certificates expire.

Here we see the basic admin view of the IDP config…showing the list of certs available.  These certs are stored in the JCEKS keystore in AM5.0 (previously the JKS keystore).

So the config contains the am1 and am2 certs – an export of the metadata (from the ../openam/saml2/jsp/exportmetadata.jsp?entityid=idp endpoint) will list both certs that could be used for signing:
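In the exported XML, that shows up as two signing KeyDescriptor elements under the IDPSSODescriptor – roughly like this (certificates truncated, namespace prefixes may differ):

<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <KeyDescriptor use="signing">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data>
        <ds:X509Certificate>MIIC...(am1 cert)...</ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>
  </KeyDescriptor>
  <KeyDescriptor use="signing">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data>
        <ds:X509Certificate>MIIC...(am2 cert)...</ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>
  </KeyDescriptor>
  ...
</IDPSSODescriptor>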

The first certificate listed in the config is the one that is used to sign.  When that expires, just remove it from the list and the second certificate is then used.  As the service provider already has both certs in their originally downloaded metadata, there should be no break in service.

Anyway…back to automation.  Amster can manage the SAML2 entities, either via the shell or a script.  This allows admins to operationally create, edit and update entities…and a regular task could be to add new certificates to the IDP list as necessary.

To do this, I created a basic bash script that utilises Amster to read, edit and then re-import the entity as a JSON-wrapped XML object.

The script is available here.
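For reference, the rough shape of such a script is below.  This is only an outline of the approach – the Amster connection options, entity names and flags here are assumptions that should be checked against the Amster reference for your AM version, and the actual editing of the JSON-wrapped XML is left out:

#!/usr/bin/env bash
# Outline: export the hosted IDP entity with Amster, splice the new signing
# cert alias into the exported config, then import it back.
AMSTER=/opt/amster/amster
AM_URL=https://openam.example.com/openam

# 1. export the SAML2 entity config to disk
printf '%s\n' \
  "connect ${AM_URL} -k /path/to/amster_rsa" \
  "export-config --path /tmp/am-config --realmEntities 'Saml2Entity'" \
  ":quit" > /tmp/export.amster
${AMSTER} /tmp/export.amster

# 2. ...edit the exported Saml2Entity JSON here (e.g. with jq/sed) to add the new cert alias...

# 3. re-import the amended entity
printf '%s\n' \
  "connect ${AM_URL} -k /path/to/amster_rsa" \
  "import-config --path /tmp/am-config" \
  ":quit" > /tmp/import.amster
${AMSTER} /tmp/import.amster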

For more information on IDP certificate management see the docs here.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Integrating Yubikey OTP with ForgeRock Access Management

Yubico is a manufacturer of multi-factor authentication devices that are typically just USB dongles. They can provide a range of different MFA options, including traditional static password linking, one-time-password generation and integration using FIDO (Fast Identity Online) Universal 2nd Factor (U2F).

I want to quickly show the route to integrating your Yubico Yubikey with ForgeRock Access Management.  ForgeRock and Yubico have had integrations for the last 6 years, but I thought it was worth a simple update on integration using the OATH-compliant OTP.

First of all you need a Yubikey.  I'm using a Yubikey Nano, which couldn't be any smaller if it tried. Just make sure you don't lose it… The Yubikey first needs configuring to generate one-time passwords.  This is done using the Yubico personalisation tool, a simple utility that works on Mac, Windows and Linux.  Download the tool from Yubico and install it.  Setting up the Yubikey for OTP generation is a 3-minute job.  There's even a nice Vimeo on how to do it, if you can't be bothered to RTFM.

This setup process basically generates a secret that is bound to the Yubikey, along with some config.  If you want to use your own secret, just fill in the field…but don't forget it :-)

Next step is to setup ForgeRock AM (aka OpenAM), to use the Yubikey during login.

Access Management has shipped with an OATH-compliant authentication module for years – ever since the Sun OpenSSO days.  This module works with any Open Authentication compliant device.

Create a new module instance and add in the fields where you will store the secret and counter against the user's profile.  For quickness (and laziness) I just used employeeNumber and telephoneNumber, as they are already shipped in the profile schema and weren't being used.  In the "real world" you would just add two specific attributes to the profile schema.

Make sure you then copy the secret that the Yubikey personalisation tool created into the user record, within the employeeNumber field…

Next, just add the module to a chain that contains your data store module first.  The data store module isn't essential, but you do need a way to identify the user first in order to look up their OTP seed in the profile store, so username and password authentication seems the quickest – albeit you could just use a persistent cookie if the user had authenticated previously, or maybe even just a username module.

Done.  Next, to use your new authentication service, just augment the authentication URL with the name of the service – in this case yubikeyOTPService. Eg:

../openam/XUI/#login/&authIndexType=service&authIndexValue=yubikeyOTPService

This first asks me for my username and password…

…then my OTP.

At this point, I just put my Yubikey Nano into a USB port, then touch it for 3 seconds to auto-generate the 6-digit OTP and log me in.  Note the 3 seconds bit is important.  Most Yubikeys have 2 configuration slots; slot 1 is often configured for the Yubico Cloud Service and is activated if you touch the key for only 1 second.  To activate the second configuration – in our case the OTP – just hold a little longer…
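Incidentally, the same service can also be exercised over REST via the ../json/authenticate endpoint, which is handy for testing – a sketch, with the host a placeholder; AM responds with the username/password and then OTP callbacks to complete:

curl --request POST \
  --header "Content-Type: application/json" \
  "https://openam.example.com/openam/json/authenticate?authIndexType=service&authIndexValue=yubikeyOTPService"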

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Why Tim Berners-Lee Is Right On Privacy

Last week, the "father" of the Internet, Tim Berners-Lee, did a series of interviews to mark 28 years since he submitted his original proposal for the worldwide web.

The interviews were focused on the phenomenal success of the web, along with a macabre warning describing 3 key areas we need to change in order to “save” the Internet as we know it.

The three points were:

  1. We’ve lost control of our personal data
  2. It’s too easy for misinformation to spread on the web
  3. Political advertising online needs transparency and understanding

I want to primarily discuss the first point – personal data, privacy and our lack of control.

As nearly every private, non-profit and public sector organisation on the planet either has a digital presence or is in the process of transforming itself into a digital force, the transfer of personal data to service providers is growing at an unprecedented rate.

Every time we register for a service – be it for an insurance quote, to submit a tax return, when we download an app on our smartphones, register at the local leisure centre, join a new dentist's or buy a fitness wearable – we are sharing an ever-growing list of personal information or providing access to our own personal data.

The terms and conditions associated with such registration flows are often so full of "legalese", or the app permissions or "scope" so large and complex, that the end user literally has no control or choice over the type, quality and duration of the information they share.  It is generally an "all or nothing" type of data exchange.  Provide the details the service provider is asking for, or don't sign up to the service. There are no alternatives.

This throws up several important questions surrounding data privacy, ownership and control.

  1. What is the data being used for?
  2. Who has access to the data, including 3rd parties?
  3. Can I revoke access to the data?
  4. How long will the service provider have access to the data for?
  5. Can the end user amend the data?
  6. Can the end user remove the data from the service provider – aka right to erasure?

Many service providers are likely unable to provide an identity framework that can answer those sorts of questions.

The interesting news is that there are alternatives, and things are likely to change pretty soon.  The EU General Data Protection Regulation (GDPR) provides a regulatory framework around how organisations should collect and manage personal data.  The wide-ranging regulation covers things like how consent from the end user is managed and captured, how breach notifications are handled and how information pertaining to the reasons for data capture is explained to the end user.

The GDPR isn't a choice either – it's mandatory for any organisation (regardless of its location) that handles the data of European Union citizens.

Coupled with that, new technology standards such as User Managed Access (a working group run by the Kantara Initiative), which look to empower end users with more control and consent over data exchanges, will open doors for organisations that want to deliver personalised services in a more privacy-preserving and user-friendly way.

So, whilst the Internet certainly has some major flaws, and data protection and user privacy is a big one currently, there are some green shoots of recovery from an end user perspective.  It will be interesting to see what the Internet will look like another 28 years from now.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

What’s New in ForgeRock Access Management


If you’re interested in hearing what’s coming up for ForgeRock Access Management, have a look at the replay of a webinar Andy Hall and I did yesterday. In it, we discuss how the ForgeRock Identity Platform addresses the challenges of customer identity relationship management, and the new features coming up in ForgeRock Access Management in our next platform release.

The Future is Now: What’s New in ForgeRock Access Management webinar replay

Or you can flip through slides over on SlideShare.

Hope you enjoy it!

Top 5 Digital Identity Predictions for 2017

2016 is drawing to an end, the goose is getting fat, the lights and decorations are adorning many a fireplace and other such cold-weather clichés.  However, the attention must turn back to identity management and what the future may or may not hold. Digital identity or consumer based identity and access management (CIAM) has taken a few big […]

Using OpenAM as a Trusted File Authorization Engine

A common theme in the DevOps world, or any containerization-style infrastructure, may be the need to verify which executables (or files in general) can be installed, run, updated or deleted within a particular environment, image or container.  There are numerous ways this could be done.  Consider a use case where exes, Android APKs or other 3rd-party compiled files […]

Protect Bearer Tokens Using Proof of Possession

Bearer tokens are the cash of the digital world.  They need to be protected.  Whoever gets hold of them can, well, basically use them as if they were you. Pretty much the same as cash.  The shop owner only really checks the cash is real; they don't check that the £5 note you produced from your wallet is actually your £5 note.

This has been an age-old issue in web access management technologies, both for stateless and stateful token types, OAuth2 access and refresh tokens, as well as OpenID Connect ID tokens.

In the hyper-connected Consumer Identity & Access Management (CIAM) and Internet (Identity) of Things worlds, this can become a big problem.

Token misuse, perhaps via MITM (man in the middle) attacks, or even resource server misconfiguration, could result in considerable data compromise.

However, there are some newer standards that look to add some binding ability to the tokens – that is, glue them to a particular user or device based on some simple crypto.

The unstable nightly source and build of OpenAM has added proof of possession capability to the OAuth2 provider service. (Perhaps the first vendor to do so? Email me if you see other implementations.)

The idea is that the client makes a normal request for an access_token from the authorization service (AS), but also adds another parameter to the request that contains some crypto material the client has access to – basically the public key of an asymmetric key pair.

This key, which could be ephemeral for that request, is then baked into the access_token.  If the access_token is a JWT, the JWT contains this public key and is then signed by the authorization service.  If using a stateful access_token, the AS token introspection endpoint can relay the public key back to the resource server at lookup time.
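For the stateless case, the decoded access_token payload would then carry the submitted key as a confirmation-style claim – something along these lines (a sketch only; the sub and exp values are illustrative, the nesting follows the cnf_key structure described in this post, and RFC 7800 describes the equivalent standardised "cnf" claim):

{
  "sub": "myClientID",
  "exp": 1489571008,
  "cnf_key": {
    "jwk": {
      "kty": "RSA",
      "alg": "RS256",
      "use": "sig",
      "kid": "smoff-key",
      "e": "AQAB",
      "n": "vL34QxymwHwWD9ZVL9ciN6bNrnOu524r7Y34oRRWFJcZ..."
    }
  }
}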

This basically gives the RS an option to then issue a challenge-response style interaction with the client to see if they are in possession of the corresponding private key – thus proving they are the correct recipient of the originally issued access_token!

 

The basic flow sees the addition of a new parameter to the access_token request to the OpenAM authorization service, under the name of "cnf_key".  This is a confirmation key that the client is in possession of.  In this example, it would be a base64-encoded JSON Web Key representation of a public key.

So for example, a POST request to the endpoint ../openam/oauth2/access_token would now take the parameters grant_type, scope and also cnf_key, with an authorization header containing the OAuth2 client id and secret as normal.  A cnf_key could look something like this:

eyJqd2siOnsKICAiYWxnIjogIlJTMjU2IiwKICAiZSI6ICJBUUFCIiwKICAibiI6ICJ2TDM0UXh5bXdId1dEOVpWTDljaU42Yk5ybk91NTI0cjdZMzRvUlJXRkpjWjc3S1dXaHB1Si1iSlZXVVNUd3ZKTGdWTWlDZmFxSTZEWnIwNWQ2VGdONTNfMklVWmtHLXgzNnBFbDZZRWs1d1ZnX1ExelFkeEZHZkRoeFBWajJ3TWNNcjFyR0h1UUFEeC1qV2JHeGRHLTJXMXFsVEdQT253SklqYk9wVm1RYUJjNHhSYndqenNsdG1tcndzMmZNTUtNTDVqbnFwR2RoeWRfdXlFTU0wdHpNTGFNSVN2M2lmeFM2UUw3c2tpZTZ5ajJxamxUTUd3QjA4S29ZUEQ2QlVPaXd6QWxkUmJfM3k4bVA2TXY5cDdvQXBheTZCb25pWU8yaVJySzMxUlRaLVlWUHRleTllSWZ1d0ZFc0RqVzNES0JBS21rMlhGY0NkTHEyU1djVWFOc1EiLAogICJrdHkiOiAiUlNBIiwKICAidXNlIjogInNpZyIsCiAgImtpZCI6ICJzbW9mZi1rZXkiCn19Cg==

Running that through base64 -d in bash, or via an online base64 decoder, shows something like the following (NB this JWK was created using an online tool for simple testing):

{
  "jwk": {
    "alg": "RS256",
    "e": "AQAB",
    "n": "vL34QxymwHwWD9ZVL9ciN6bNrnOu524r7Y34oRRWFJcZ77KWWhpuJ-bJVWUSTwvJLgVMiCfaqI6DZr05d6TgN53_2IUZkG-x36pEl6YEk5wVg_Q1zQdxFGfDhxPVj2wMcMr1rGHuQADx-jWbGxdG-2W1qlTGPOnwJIjbOpVmQaBc4xRbwjzsltmmrws2fMMKML5jnqpGdhyd_uyEMM0tzMLaMISv3ifxS6QL7skie6yj2qjlTMGwB08KoYPD6BUOiwzAldRb_3y8mP6Mv9p7oApay6BoniYO2iRrK31RTZ-YVPtey9eIfuwFEsDjW3DKBAKmk2XFcCdLq2SWcUaNsQ",
    "kty": "RSA",
    "use": "sig",
    "kid": "smoff-key"
  }
}
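Putting that together, the token request itself might look like the following – a sketch, where the host, client credentials, grant type and scope are placeholders and CNF_KEY holds the base64-encoded JWK above:

CNF_KEY="eyJqd2siOnsKICAiYWxnIjogIlJTMjU2IiwK..."   # the full base64 value shown earlier

curl --request POST \
  --user "myClientID:password" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "scope=profile" \
  --data-urlencode "cnf_key=${CNF_KEY}" \
  "https://openam.example.com/openam/oauth2/access_token"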

The authorization service should then return the normal access_token payload.  If using stateless OAuth2 access_tokens, the access_token will contain the new embedded cnf_key attribute, containing the originally submitted public key.  The resource server can then leverage the public key to perform some out-of-band challenge-response questions of the client when the client comes to present the access_token later.

If using the more traditional stateful access_tokens, the RS can call the ../oauth2/introspect endpoint to find the public key.
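That introspection call is the standard one – a sketch, with the resource server authenticating using its own client credentials and ACCESS_TOKEN holding the token presented by the client:

curl --request POST \
  --user "resourceServer:password" \
  --data-urlencode "token=${ACCESS_TOKEN}" \
  "https://openam.example.com/openam/oauth2/introspect"

The response should include the confirmation key that was bound to the token at issue time, alongside the usual active/scope/expiry fields.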

The powerful use case is to validate that the client submitting the access_token is in fact the same as the original recipient to whom the access_token was issued.  This can help reduce MITM and other basic token misuse scenarios.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.