2020: Machine Learning, Post Quantum Crypto & Zero Trust

Welcome to a digital identity project in 2020! You’ll be expected to have a plan for post-quantum cryptography.  Your network will be littered with “zero trust” buzzwords that will make you suspect everyone, everything and every transaction.  Add to that, “machines” will be learning everything, from how you like your coffee through to every network, authentication and authorisation decision. OK, are you ready?

Machine Learning

I’m not going to do an entire blog on machine learning (ML) and artificial intelligence (AI).  Firstly, I’m not qualified enough on the topic and secondly, I want to focus on the security implications.  Needless to say, within 3 years most organisations will have relatively experienced teams handling big data capture from an identity, access management and network perspective.

That data will be fed into ML platforms, either on-premise or via cloud services.  Leveraging either structured or unstructured learning, data from events such as login (authentication) for end users and devices, as well as authorization decisions, can be analysed in order not only to increase assurance and security, but also to improve the user experience.  How?  Well, if the output from ML can be used to update existing signatures (a bit legacy, but still) whilst simultaneously working out which logins are less risky, end user journeys can be made less intrusive.

Step one is finding the correct data sources to be entered into the ML “model”.  What data is available, especially within the sign up, sign in and authorization flows?  Clearly, general auditing data will look to capture ML “tasks” such as successful sign ins and any other metadata associated with them – such as time, location, IP, device data, behavioural biometrics and so on.  Having vast amounts of this data available is the first step, which in turn can be used to “feed” the ML engine.  Other data points would be needed too.  What resources, applications and API calls are being made to complete certain business processes?  Can patterns be identified and tied to “typical” behaviour and to user and device communities?  Being able to identify and track critical data and the services that process that data would be a first step, before being able to extract task-based data samples to help identify trusted and untrusted activities.

 

Post Quantum Crypto

Quantum computing is coming.  Which is great.  It might not be ready even in 2020, but you need to be ready for it.  But, and there’s always a but, the main concern is that the super power of quantum will blow away the ability of existing encryption and hashing algorithms to remain secure.  Why?  Well, quantum computing ushers in the paradigm of “qubits” – a superposition of the classic binary 1 and 0.  Ultimately, that means the “solutioneering” of certain complex problems can be completed in a much more efficient and non-sequential way.

The quantum boxes can basically solve certain problems faster.  The mathematics behind cryptography is one of those problems.  A basic estimate for the future effective strength of something like AES-256 drops to 128 bits.  Scary stuff.  Commonly used approaches today for key exchange rely on protocols such as Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH).  Encryption and signing are then handled by things like Rivest-Shamir-Adleman (RSA) or the Elliptic Curve Digital Signature Algorithm (ECDSA).

In the post-quantum (PQ) world they’re basically broken.  Clearly, the material impact on your organisation or services will largely depend on impact assessment.  There’s no point putting a $100 lock on a $20 bike.  But everyone wants encryption right?  All that data that will be flying around is likely to need even more protection whilst in transit and at rest.

Some of the potentially “safe” PQ algorithms include XMSS and SPHINCS for hash-based signatures – the former going through IETF standardization.  Ring Learning With Errors (RLWE) is basically an enhanced public key cryptosystem that alters the structure of the private key.  It is currently under research, but no weaknesses have yet been found.  NTRU is another algorithm for the PQ world, using a hefty 12881 bit key.  NTRU is also already standardized by the IEEE, which helps with the maturity aspect.

But how to decide?  There is a nice body called the PQCRYPTO Consortium that is providing guidance on current research.  Clearly you’re not going to build your own alternatives, but information assurance and crypto specialists within your organisation will need to start data impact assessments, in order to understand where cryptography is currently used for transport, identification and data-at-rest protection, and so understand any potential future exposures.

Zero Trust Identities

“Zero Trust” (ZT) networking has been around for a while.  The concept of organisations having a “safe” internal network versus the untrusted, “hostile” public network, separated by a firewall, is long gone. Organisations are perimeter-less.

Assume every device, identity and transaction is hostile until proven otherwise.  ZT for identity especially will look to bind not only a physical identity to a digital representation (session Id, token, JWT), but also that representation to a vehicle – aka a mobile, tablet or device.  In turn, every transaction that tuple interacts with is then verified – checking for contextual or behavioural changes that could indicate malicious intent.  That introduces a lot of complexity to transaction, data and application protection.

Every transaction potentially requires introspection or validation.  Add to this mix an increased number of devices and data flows, which would pave the way for distributed authorization, coupled with continuous session validation.

How will that look?  Well, we’re starting to see the use of things like stateless JSON Web Tokens (JWTs) as a means for hyper-scale assertion issuance, along with token binding to sessions and devices.  Couple that with fine-grained authentication processes that use 20+ signals of data to identify a user or thing, and we’re starting to see the foundations of ZT identity infrastructures.  Microservice or hyper-mesh related application infrastructures are going to need rapid introspection and re-validation on every call, so the likes of distributed authorization look likely.

So the future is now.  As always.  We know that secure identity and access management functions have never, in the last 20 years, been more needed, popular or advanced.  The next 3-5 years will be critical in defining a backbone of security services that can nimbly be applied to users, devices, data and the billions of transactions that will result.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

Creating Personal Access Tokens in ForgeRock AM

Personal Access Tokens (PATs) are scoped, self-managed access credentials that can be used to give trusted systems and services access to act on a user’s behalf.

Similar to OAuth tokens, they often don’t have an expiration and are used conceptually instead of passwords.  A PAT could be used in combination with a username when performing basic authentication.

For example, see the https://github.com/settings/tokens page within GitHub, which allows scoped tokens to be created for services to access your GitHub profile.
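
As a sketch of that pattern (the endpoint and token value below are purely illustrative), a service holding a PAT could authenticate with it in place of a password:

    # Hypothetical example: the PAT takes the place of the password in basic auth
    curl -u smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c \
         https://api.example.com/profile

The API behind the call then decides what the token is allowed to do, based on the scopes attached to it.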

PAT Creation

The PAT can be an opaque string – perhaps a SHA256 hash.  Using a hash seems the most sensible approach to avoid collisions and create a fixed-length, portable string.  A hash without a key of course won’t provide any creator assurance/verification function, but since the hash will be stored against the user profile and not treated like a session/token, this shouldn’t be an issue.

Example PAT values could be:

f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c:{"resource1" : [ "read", "write", "execute" ] }

a011286605ff6a5de51f4d46eb511a9e8715498fca87965576c73b8fd27246fe:{"resource2" : [ "read", "write"]}

The key was simply created by running the resource and the associated permissions through sha256sum on Linux.  How you create the hash is beyond the scope of this blog, but this could be easily handled by say ForgeRock IDM and a custom endpoint in a few lines of JavaScript.
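
As a rough sketch, assuming the JSON permissions document is used as the input, something like this would produce a similar fixed-length value:

    # Hash the resource/permissions document to get an opaque, fixed-length token value
    echo -n '{"resource1" : [ "read", "write", "execute" ]}' | sha256sum | cut -d' ' -f1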

PAT Storage

The important aspect is where to store the PAT once it has been created.  Ideally this needs to be stored against the user’s profile record in DJ.  I’d recommend creating a new multi-valued schema attribute dedicated to PATs.  The user can then update their PATs over REST in the same way as any other profile attribute.

For this demo I used the existing attribute called “iplanet-am-user-alias-list” for speed as this was multi-valued.  I added in a self-created PAT for my fake resource:

Using a multi-valued attribute allows me to create any number of PATs.  As they don’t have an expiration, they might last for some time in the user store.
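
A sketch of how that self-service update might look over AM’s REST interface, assuming the host name, session token and the borrowed iplanet-am-user-alias-list attribute from above:

    # Store the PAT (hash plus permissions) as a value on the user's profile attribute
    curl -X PUT \
         --header "iPlanetDirectoryPro: <user-or-admin-session-token>" \
         --header "Content-Type: application/json" \
         --data '{"iplanet-am-user-alias-list": ["a011286605ff6a5de51f4d46eb511a9e8715498fca87965576c73b8fd27246fe:{\"resource2\" : [ \"read\", \"write\"]}"]}' \
         "https://openam.example.com/openam/json/users/smoff"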

PAT Usage

Once stored, they could be used in a variety of ways to provide “access” within applications. The simplest way is to leverage the AM authorization engine as a decision point, to verify that a PAT exists and what permissions it maps to.

Once the PAT is created and stored, the end user can hand it to another user or service that they want to act on their behalf.  That service or user presents the username:PAT combination to the service they wish to gain access to.  That service calls the AM authorization APIs to see if the user:PAT combination is valid.

The protected service would call {{OpenAM}}/openam/json/policies?_action=evaluate with a payload similar to:
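
A representative request might look like the following sketch (the host name, policy set name and header values are assumptions based on the set-up described below):

    curl -X POST \
         --header "iPlanetDirectoryPro: <policyeval-session-token>" \
         --header "Content-Type: application/json" \
         --data '{
           "resources": [ "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c" ],
           "application": "PATValidator",
           "subject": { "ssoToken": "<policyeval-session-token>" }
         }' \
         "https://openam.example.com/openam/json/policies?_action=evaluate"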

Here I am calling the ../policies endpoint with a dedicated account called “policyeval”, which has the ability to read the REST endpoint and also to read realm users, which we will need later on.  Edit the necessary privileges via the Privileges tab within the Admin console.

If the PAT exists within the user profile of “smoff”, AM returns an access=true message, along with the resource and associated permissions that can be used within the calling application:
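
The policy evaluation response comes back as an array of decisions; the exact shape depends on the configured actions and response attributes, but a positive result might look something like this sketch:

    [{
        "resource": "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c",
        "actions": { "GRANT": true },
        "attributes": { "resource1": [ "read", "write", "execute" ] },
        "advices": {}
    }]

Here “GRANT” is just an illustrative action name; the permissions returned in the attributes map are what the calling application would consume.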

So what needs setting up in the background to allow AM to make these decisions? Well all pretty simple really.

Create Authorization Resource Type for PAT’s

Firstly create a resource type that matches the pat://*.* format (or any format you prefer):

Next we need to add a policy set that will contain our access policies:

The PATValidator only contains one policy called AllPATs, which is just a wildcard match for pat://*:*.  This will allow any combination of user:pat to be submitted for validation:

Make sure to set the subjects condition to “NOT Never Match” as we are not analysing user session data here.  The logic for analysis is being handled by a simple script.

PAT Authorization Script

The script is available here.

At a high level it does the following (a rough sketch of the equivalent REST calls follows the list):

  1. Captures the submitted username and PAT that is part of the authorization request
  2. As the user will not have a local session, we need to make a native REST call to look up the user
  3. We do this by first generating a session for our policyeval user
  4. We use that session to call the ../json/users endpoint to perform a search for the user’s PATs
  5. We do a comparison between the submitted PAT and any PATs found against the user profile
  6. If a match is found, we pull out the assigned permissions and send back as a response attribute array to the calling application
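
As a sketch, the middle steps map onto REST calls along these lines (the host name and attribute name are assumptions; in practice this logic lives inside the authorization script itself):

    # Authenticate the policyeval service account to obtain a session token
    TOKEN=$(curl -s -X POST \
         --header "X-OpenAM-Username: policyeval" \
         --header "X-OpenAM-Password: <password>" \
         --header "Content-Type: application/json" \
         "https://openam.example.com/openam/json/authenticate" | jq -r .tokenId)

    # Use that session to read the PATs stored on the submitting user's profile
    curl -s --header "iPlanetDirectoryPro: $TOKEN" \
         "https://openam.example.com/openam/json/users/smoff?_fields=iplanet-am-user-alias-list"

    # The script then compares the submitted PAT with the returned values and, on a
    # match, returns the associated permissions as a response attribute array.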

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

A Design for Modern Authentication

The password is dead. Long live the password! I have lost count of how many articles and blogs I have seen with regards to the weaknesses, the management, the flexibility, security, insecurity and overall usage of passwords when it comes to user authentication. We all use them and they’re not going anywhere any time soon. OK, so next step. What else can and should we be using for our user and device based authentication and login journeys?

Where We Are Now – The Sticking Plaster of MFA

So we accept that the traditional combo of user name and password is bad for our (system) health. Step forward multi-factor authentication. Or 2FA. Take your pick. This generally saw the introduction of something you have, in the form of a token, phone-as-a-token or some other out-of-band mechanism that would create a one-time password. Traditionally the “out of band” mechanism was either an email or SMS to a preregistered address or phone number, containing a 6 digit pass code. Internal or employee systems would often leverage a hard token – either a USB dongle or a small tag with a tiny display that would show a rotating pin. These concepts were certainly better from a security perspective, but a) were not unbreakable and b) often created a disjointed user login experience with lots of interruptions and user interaction.

Basic MFA factors

Where We Are Moving To – Deca-factor Authentication!

OK, so user name and passwords are not great. MFA is simple, pretty cheap to implement, but means either the end user needs to carry something around (a bit 2006) or has an interrupted login journey by constantly being asked for a one-time-password. What we need is not two-factor-authentication but deca-factor-authentication! More factors. At least 10 to be precise. Increase the factors and aim to reduce the material impact of a single factor compromise whilst simultaneously reducing the number of user interrupts. These 10 factors (it could be 8, it could be 15, you get the idea) are all about introducing a broad spectrum analysis for the login journey.

Each factor is much more cohesive and modular, analysing a single piece of the login journey.  The login journey could still leverage pretty static profile-related data such as a user name, but is augmented with much more context – the location, time and device origin of the request, and comparison factors that look at previous login requests to determine patterns or abnormalities.

Breaking authentication down? “Deca-factors”

Some of these factors could “pass” and some could “fail” during the login journey, but the process is much more about accumulating and analysing risk, and therefore being able to respond to high risk more accurately. Applying 2FA to every user login does not reduce risk per se; it simply applies a blanket risk to every actor.

Wouldn’t it be much better to allow login variation for genuine users who do regularly change machine, location, network and time zone? Wouldn’t it be better to give users more choice over their login journeys and provide numerous options if and when high-risk scenarios do occur?

Another key area I think authentication is moving towards is that of transparency. “Frictionless”, “effortless” or “zero-effort” logins are all the buzz. If, as an end user, I enrol, sacrifice the privacy regarding a device fingerprint and maybe download an OTP or push app, why can’t I just “log in” without having my experience interrupted? The classic security/convenience paradox. By introducing more factors and “gluing” those factors together with processing logic, the user authentication system can be much more responsive – perhaps mimicking a state machine, designed in a non-deterministic fashion, where any given factor could have multiple outcomes.

Where We Want To Get To – Transparent Pre-Authentication

So I guess the sci-fi end goal is to just turn up at work/coffee shop/door/car/website/application (delete as applicable) and just present oneself. The service would not only “know” who you were, but also trust that it is you. A bit like the Queen. Every time you presented yourself, transparent background checks would continually evaluate every part of the interaction, looking for changes and identifying risk.

Session + Bind + Usage – increasing transparency?

The closest we are to that today, in the web world at least, is the exchange of the authentication process for a cookie/session/tokenId/access_token. Whether that is stateless or stateful, it’s something to represent the user when they attempt to gain access to the service again. Couple that token with some kind of binding (either to a PKI key pair or a TLS session) to reduce the impact of token theft, and there is some kind of repeatable access use case. However, change is all around, and the token presentation must therefore be coupled with all the usage, context, resource and transaction data that the token is attempting to access, to allow the authentication machine to loop through the necessary deca-factors, either individually or collectively, to identify risk or change.

Authentication is moving on.  A more modern system must accommodate a broad spectrum of signals when it comes to analysing who is instigating a transaction, coupled with mechanisms that increase transparency and pre-identify risk without unnecessary and obtrusive interruptions.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

SAML2 IDP Automated Certificate Management in FR AM

ForgeRock AM 5.0 ships with Amster, a lightweight command-line tool and interactive shell that allows for the automation of many management and configuration tasks.

A common task often associated with SAML2 identity provider configs is the updating of certificates that are used for signing and the possible encryption of assertions.  A feature added in OpenAM 13.0 was the ability to have multiple certificates within an IDP config.  This is useful to overcome the age-old challenge of how to handle certificate expiration.  An invalid cert can break integrations with service providers.  The process of removing and then adding a new certificate would require any entities within the circle of trust to retrieve new metadata into their configs – and thus create downtime – so the timing of this is often an issue.  The ability to have multiple certificates in the config allows service providers to pull down metadata at a known date, instead of specifically when certificates expire.

Here we see the basic admin view of the IDP config…showing the list of certs available.  These certs are stored in the JCEKS keystore in AM5.0 (previously the JKS keystore).

So the config contains the am1 and am2 certs – an export of the metadata (from the ../openam/saml2/jsp/exportmetadata.jsp?entityid=idp endpoint) will list both certs that could be used for signing:
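
In the exported XML, that shows up as two signing KeyDescriptor elements, roughly along these lines (a sketch of standard SAML2 metadata with the certificate values omitted):

    <KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate><!-- am1 certificate (base64) --></ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </KeyDescriptor>
    <KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate><!-- am2 certificate (base64) --></ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </KeyDescriptor>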

The first certificate listed in the config is the one used to sign.  When that expires, just remove it from the list and the second certificate is then used.  As the service provider already has both certs in their originally downloaded metadata, there should be no break in service.

Anyway….back to automation.  Amster can manage the SAML2 entities, either via the shell or a script.  This allows admins to operationally create, edit and update entities…and a regular task could be to add new certificates to the IDP list as necessary.

To do this I created a script that does just that.  It’s a basic bash script that utilises Amster to read, edit and then re-import the entity as a JSON-wrapped XML object.

The script is available here.

For more information on IDP certificate management see the docs here.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Integrating Yubikey OTP with ForgeRock Access Management

Yubico is a manufacturer of multi-factor authentication devices that are typically just USB dongles. They can provide a range of different MFA options, including traditional static password linking, one-time password generation and integration using FIDO (Fast Identity Online) Universal 2nd Factor (U2F).

I want to quickly show the route to integrating your Yubico Yubikey with ForgeRock Access Management.  ForgeRock and Yubico have had integrations for the last 6 years, but I thought it was good to have a simple update on integration using the OATH-compliant OTP.

First of all you need a Yubikey.  I’m using a Yubikey Nano, which couldn’t be any smaller if it tried. Just make sure you don’t lose it… The Yubikey then needs configuring to generate one-time passwords.  This is done using the Yubico personalisation tool, a simple util that works on Mac, Windows and Linux.  Download the tool from Yubico and install it.  Setting up the Yubikey for OTP generation is a 3 minute job.  There’s even a nice Vimeo on how to do it, if you can’t be bothered to RTFM.

This setup process basically generates a secret that is bound to the Yubikey, along with some config.  If you want to use your own secret, just fill in the field…but don’t forget it :-)

Next step is to setup ForgeRock AM (aka OpenAM), to use the Yubikey during login.

Access Management has shipped with an OATH compliant authentication module for years – ever since the Sun OpenSSO days.  This module works with any Open Authentication compliant device.

Create a new module instance and add in the fields where you will store the secret and counter against the user’s profile.  For quickness (and laziness) I just used employeeNumber and telephoneNumber, as they are already shipped in the profile schema and weren’t being used.  In the “real world” you would just add two specific attributes to the profile schema.

Make sure you then copy the secret that the Yubikey personalisation tool created, into the user record within the employeeNumber field…

Next, just add the module to a chain that contains your data store module first.  The data store module isn’t essential, but you do need a way to identify the user first, in order to look up their OTP seed in the profile store, so user name and password authentication seems the quickest – albeit you could just use a persistent cookie if the user had authenticated previously, or maybe even just a username module.

Done.  Next, to use your new authentication service, just augment the authentication URL with the name of the service – in this case yubikeyOTPService. Eg:

../openam/XUI/#login/&authIndexType=service&authIndexValue=yubikeyOTPService

This first asks me for my username and password…

…then my OTP.

At this point, I just add my Yubikey Nano into my USB port, then touch it for 3 seconds to auto-generate the 6 digit OTP and log me in.  Note the 3 seconds bit is important.  Most Yubikeys have 2 configuration slots; the first slot is often configured for the Yubico Cloud Service and is activated if you touch the key for only 1 second.  To activate the second configuration, and in our case the OTP, just hold a little longer…
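
For completeness, the same chain can also be driven over the REST authenticate endpoint; a rough sketch (the host name is an assumption) would start the journey like this:

    curl -X POST \
         --header "Content-Type: application/json" \
         "https://openam.example.com/openam/json/authenticate?authIndexType=service&authIndexValue=yubikeyOTPService"

    # AM responds with an authId and callbacks; the client fills in the username,
    # password and OTP values and POSTs the callbacks back to the same endpoint.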

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Why Tim Berners-Lee Is Right On Privacy

Last week, the “father” of the Internet, Tim Berners-Lee, did a series of interviews to mark the 28th anniversary of the submission of his original proposal for the worldwide web.

The interviews were focused on the phenomenal success of the web, along with a macabre warning describing 3 key areas we need to change in order to “save” the Internet as we know it.

The three points were:

  1. We’ve lost control of our personal data
  2. It’s too easy for misinformation to spread on the web
  3. Political advertising online needs transparency and understanding

I want to primarily discuss the first point – personal data, privacy and our lack of control.

As nearly every private, non-profit and public sector organisation on the planet either has a digital presence, or is in the process of transforming itself into a digital force, the transfer of personal data to service providers is growing at an unprecedented rate.

Every time we register for a service – be it for an insurance quote, to submit a tax return, when we download an app on our smart phones, register at the local leisure centre, join a new dentist or buy a fitness wearable – we are sharing an ever-growing list of personal information or providing access to our own personal data.

The terms and conditions associated with such registration flows are often so full of “legalese”, or the app permissions or “scope” so large and complex, that the end user literally has no control or choice over the type, quality and duration of the information they share.  It is generally an “all or nothing” type of data exchange.  Provide the details the service provider is asking for, or don’t sign up to the service. There are no alternatives.

This throws up several important questions surrounding data privacy, ownership and control.

  1. What is the data being used for?
  2. Who has access to the data, including 3rd parties?
  3. Can I revoke access to the data?
  4. How long will the service provider have access to the data for?
  5. Can the end user amend the data?
  6. Can the end user remove the data from the service provider – aka right to erasure?

Many service providers are likely unable to provide an identity framework that can answer those sorts of questions.

The interesting news is that there are alternatives, and things are likely to change pretty soon.  The EU General Data Protection Regulation (GDPR) provides a regulatory framework around how organisations should collect and manage personal data.  The wide-ranging regulation covers things like how consent from the end user is managed and captured, how breach notifications are handled and how the reasons for data capture are explained to the end user.

The GDPR isn’t a choice either – it’s mandatory for any organisation (regardless of their location) that handles the data of European Union citizens.

Coupled with that, new technology standards such as User Managed Access – a working group run by the Kantara Initiative – which look to empower end users to have more control over, and consent to, data exchanges, will open doors for organisations who want to deliver personalised services, but do so in a more privacy-preserving and user-friendly way.

So, whilst the Internet certainly has some major flaws, and data protection and user privacy is a big one currently, there are some green shoots of recovery from an end user perspective.  It will be interesting to see what the Internet will look like another 28 years from now.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

What’s New in ForgeRock Access Management


If you’re interested in hearing what’s coming up for ForgeRock Access Management, have a look at the replay of a webinar Andy Hall and I did yesterday. In it, we discuss how the ForgeRock Identity Platform addresses the challenges of customer identity relationship management, and the new features coming up in ForgeRock Access Management in our next platform release.

The Future is Now: What’s New in ForgeRock Access Management webinar replay

Or you can flip through slides over on SlideShare.

Hope you enjoy it!

Top 5 Digital Identity Predictions for 2017

2016 is drawing to an end, the goose is getting fat, and the lights and decorations are adorning many a fireplace and other such cold-weather clichés.  However, the attention must turn back to identity management and what the future may or may not hold. Digital identity or consumer-based identity and access management (CIAM) has taken a few big […]