The OAuth2 ForgeRock Identity Microservices

ForgeRock Identity Microservices

In Q4 2017, ForgeRock released an Early Access (beta) program for three key Identity Microservices, delivered as a compact, single-purpose code set for consumer-scale deployments. For companies deploying stateless microservices architectures, these microservices offer a micro-gateway enabled solution that provides service trust, policy-enforced identity propagation and even OAuth2-based delegation. The stateless architecture of the ForgeRock Identity Microservices also lends itself to a sidecar-friendly micro-gateway deployment pattern.

I blogged about it here. The sign up form is here.

The Microservices

OAuth2 Token Issuance

Enables service-to-service authentication using the client credentials grant type. These “bearer” tokens facilitate trust up and down the call chain.
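As a rough illustration only (the token endpoint URL and client credentials below are placeholders, not the microservice's actual configuration), a client credentials request from one service might look like this in Python:

# Hedged sketch: obtaining a service-to-service bearer token via the
# client credentials grant. Endpoint, client_id and client_secret are
# placeholders, not the microservice's real configuration.
import requests

TOKEN_ENDPOINT = "https://token-service.example.com/token"  # hypothetical

resp = requests.post(
    TOKEN_ENDPOINT,
    data={"grant_type": "client_credentials", "scope": "service-a"},
    auth=("my-client-id", "my-client-secret"),  # placeholder credentials
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The bearer token is then passed down the call chain on each hop.
headers = {"Authorization": f"Bearer {access_token}"}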

We currently support RSA, EC, and HMAC signing, with the following algorithms:

  • HS256
  • HS384
  • HS512
  • RS256
  • ES256
  • ES384
  • ES512

Token Validation

Supports introspection of OAuth2/OIDC tokens issued by an OAuth2 authorization server, as well as native AM tokens. Token validation is performed using either configured keys or a lookup via the issuer's JWK URI, with issuer and kid verification.
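As a sketch of what local validation of a stateless token involves – signature verification against the issuer's JWK set plus issuer and audience checks – here is a minimal example using the PyJWT library; the JWKS URL, issuer and audience are placeholder values, and the microservice's own implementation will differ:

# Hedged sketch: verifying a stateless token against the issuer's JWK set.
# URLs, issuer and audience values are placeholders.
import jwt
from jwt import PyJWKClient

TOKEN = "eyJ..."  # the access token under validation (placeholder)

jwks_client = PyJWKClient("https://issuer.example.com/jwks.json")  # hypothetical JWK URI
signing_key = jwks_client.get_signing_key_from_jwt(TOKEN)          # selects the key by kid

claims = jwt.decode(
    TOKEN,
    signing_key.key,
    algorithms=["RS256", "ES256"],        # restrict to expected algorithms
    issuer="https://issuer.example.com",  # iss verification
    audience="resource-server",           # placeholder audience
)
print(claims.get("scope"), claims.get("exp"))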

One or more Token Introspection API implementations can be configured as a comma-separated list of implementation names. Each implementation is given a chance to handle the token, in the given order, until one successfully handles it. An error response is returned if no implementation successfully introspects a given token.

The following implementations are provided:

  • json : Configured by a JSON file.
  • openam : Proxies introspect calls to ForgeRock Access Management.
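The ordered hand-off between implementations is essentially a chain of handlers: each is tried in turn and the first successful result wins. A product-agnostic sketch of that behaviour (illustrative only, not the microservice's source):

# Illustrative only: ordered introspection handlers, first success wins.
from typing import Callable, List, Optional

def introspect_json(token: str) -> Optional[dict]:
    """Look the token up via a locally configured JSON file (stubbed)."""
    return None  # stub: not found locally

def introspect_openam(token: str) -> Optional[dict]:
    """Proxy the introspect call to ForgeRock Access Management (stubbed)."""
    return {"active": True, "scope": "read"}  # stub response

HANDLERS: List[Callable[[str], Optional[dict]]] = [introspect_json, introspect_openam]

def introspect(token: str) -> dict:
    for handler in HANDLERS:      # each implementation gets a chance, in order
        result = handler(token)
        if result is not None:    # first successful handler wins
            return result
    raise ValueError("no introspection implementation could handle the token")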

Token Exchange

Supports exchanging stateless OAuth2 access tokens and native AM tokens for hybrid tokens that grant specific entitlements per resource or audience requested. Implements the draft OAuth2 token-exchange specification. Offers no-fuss declarative (“local”) policy enforcement inside the pod without needing to “call home” for identity data. Also provides policy-based entitlement decisions by calling out to ForgeRock Access Management.
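For context, a token exchange request under the draft specification is a simple form POST against the exchange endpoint. The endpoint, tokens and audience below are placeholder values, and the exact parameters the microservice accepts may differ:

# Hedged sketch of a draft OAuth2 token-exchange request.
# Endpoint, tokens, audience and client credentials are placeholders.
import requests

resp = requests.post(
    "https://exchange.example.com/token",  # hypothetical exchange endpoint
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": "eyJ...",  # the access token (or AM token) being exchanged
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": "orders-service",  # scope the new token to this audience
        "scope": "orders:read",
    },
    auth=("client-id", "client-secret"),  # placeholder client credentials
)
exchanged_token = resp.json().get("access_token")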

The Token-Exchange Policy is a pluggable service which determines whether or not a given token exchange may proceed. The service also determines what audience, scope, and so forth the generated token will have. The following implementations are provided:

  • json (default) : Configured by a JSON file.
  • jwt : Simply returns a JWT for any token that can be introspected.
  • openam : Uses a ForgeRock Access Management Policy Set.

Cloud and Platform

Docker and Kubernetes

A Dockerfile is bundled with the binary. A Docker repository from which evaluators can pull pre-built images is coming soon.

A Kubernetes manifest is also provided that demonstrates using environment variables to configure a simple demo instance.

Metrics and Prometheus Integration

The microservices are instrumented to publish basic request metrics, and a Prometheus endpoint is provided, which is convenient for monitoring the published metrics.

Audit Tracking

Each incoming HTTP request is assigned a transaction ID for audit logging purposes, which by default is a UUID. Like other ForgeRock products, the microservices support propagation of the transaction ID across point-to-point service calls.

Cloud Foundry

The standard binary zip distribution can also be deployed directly to Cloud Foundry, where the Cloud Foundry Java buildpack recognises it as a “dist zip”.

Client Credential Repository

ForgeRock Identity Microservices support Cassandra, MongoDB and any LDAPv3 directory, including ForgeRock Directory Services.

Enhancing User Privacy with OpenID Connect Pairwise Identifiers

This is a quick post describing how to set up pairwise subject hashing when issuing OpenID Connect id_tokens that require the user's sub= claim to be pseudonymous.  The main use case for this approach is to prevent clients or resource servers from tracking user activity and correlating the same subject's activity across different applications.

OpenID Connect basically provides two subject identifier types: public or pairwise.  With public, the sub= claim is simply the user id or equivalent for the user.  This creates a flow something like the below:

Typical “public” subject identifier OIDC flow

This is just a typical authorization_code flow – the end result is the id_token payload.  The sub= claim is simply clear and readable.  This allows the possibility of correlating all of sub=jdoe's activity.

So, what if you want a bit more privacy within your ecosystem?  Well here comes the Pairwise Subject Identifier type.  This allows each client to be basically issued with a non-reversible hash of the sub= claim, preventing correlation.
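The OpenID Connect specification suggests deriving the pairwise identifier by hashing the client's sector identifier together with the local subject identifier and a salt. A rough sketch of that calculation (AM's internal implementation and encoding may differ):

# Hedged sketch of the pairwise algorithm suggested in the OIDC spec:
# sub = hash(sector_identifier || local_sub || salt).
import base64
import hashlib

def pairwise_sub(sector_identifier: str, local_sub: str, salt: str) -> str:
    digest = hashlib.sha256(
        (sector_identifier + local_sub + salt).encode("utf-8")
    ).digest()
    return base64.b64encode(digest).decode("ascii")

# Different sector identifiers yield uncorrelatable sub values for the same user.
print(pairwise_sub("app.example.com", "jdoe", "some-salt"))
print(pairwise_sub("other.example.com", "jdoe", "some-salt"))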

To configure in ForgeRock Access Management, alter the OIDC provider settings.  On the advanced tab, simply add pairwise as a subject type.

Enabling Pairwise on the provider

 

Next alter the salt for the hash, also on the provider settings advanced tab.

Add a salt for the hash

 

Each client profile then needs either a request_uri setting or a sector_identifier_uri, basically akin to the redirect_uri whitelist.  This is just a mechanism to identify client requests.  On the client profile, add in the necessary sector identifier and change the subject identifier to be “pairwise”.  This is on the client profile Advanced tab.

Client profile settings

Once done, just slightly alter the incoming authorization_code generation request to look something like this:

/openam/oauth2/authorize?response_type=code
&save_consent=0
&decision=Allow
&scope=openid
&client_id=OIDCClient
&redirect_uri=http://app.example.com:8080
&sector_identifier_uri=http://app.example.com:8080
Note the addition of the sector_identifier_uri parameter.  Once you’ve exchanged the authorization_code for an access_token, take a peek inside the associated id_token.  This now contains an opaque sub= claim:
{
  "at_hash": "numADlVL3JIuH2Za4X-G6Q",
  "sub": "lj9/l6hzaqtrO2BwjYvu3NLXKHq46SdorqSgHAUaVws=",
  "auditTrackingId": "f8ca531a-61dd-4372-aece-96d0cea21c21-152094",
  "iss": "http://openam.example.com:8080/openam/oauth2",
  "tokenName": "id_token",
  "aud": "OIDCClient",
  "c_hash": "Pr1RhcSUUDTZUGdOTLsTUQ",
  "org.forgerock.openidconnect.ops": "SJNTKWStNsCH4Zci8nW-CHk69ro",
  "azp": "OIDCClient",
  "auth_time": 1517485644000,
  "realm": "/",
  "exp": 1517489256,
  "tokenType": "JWTToken",
  "iat": 1517485656
}
The overall flow would now look something like this:
OIDC flow with Pairwise

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Enhancing OAuth2 introspection with a Policy Decision Point

OAuth2 protection of resource server content is typically done either via a call to the authorization server (AS) ../introspect endpoint for stateful access_tokens, or, in deployments where stateless access_tokens are used, via “local” introspection on the resource server (RS), if it has access to the necessary AS signing material.  All good.  The RS would validate scope values, token expiration time and so on.

Contrast that with the typical externalised authorization model, with a policy enforcement point (PEP) and policy decision point (PDP).  Something being protected sends a request to a central PDP.  That request is likely to contain the object descriptor, a token representing the subject and some contextual data.  The PDP will have a load of pre-built signatures or policies that are looked up and processed.  The net-net is that the PDP sends back a deny/allow style decision which the PEP (either in the form of an SDK or a policy agent) complies with.

So what is this blog about?  Well it’s the juxtaposition of the typical OAuth2 construct, with externalised PDP style authorization.

So the first step is to set up a basic policy within ForgeRock Access Management that protects a basic web URL – http://app.example.com:8080/index.html.  In honesty the thing being protected could be a URL, button, image, physical object or any other schema you see fit.

Out of the box authorization policy summary

To call the PDP, an application would create a REST payload looking something like the following:

REST request payload to PDP

The request would be a POST to the ../openam/json/policies?_action=evaluate endpoint.  This is a protected endpoint, meaning it requires authentication against the AM instance.  In a normal, non-OAuth2-integrated scenario, this would be handled via the iPlanetDirectoryPro session cookie, which would also be used within the PDP decision.  In this case, though, we don’t have an iPlanetDirectoryPro cookie, just the access_token.

Application Service Account

So, there are a couple of extra steps to take.  Firstly, we need to give our calling application its own service account.  Simply add a new group and an associated application service user.  This account could then authenticate via shared secret, JWT, x509 or any other configured authentication method. Make sure to give the group the account is in privileges to call the REST PDP endpoint.  So back to the use case…

This REST PDP request is the same as any other.  We have the resource being protected which maps into the policy and the OAuth2 access_token that was generated out of band, presented to the PDP as an environment variable.
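Putting that together, the evaluation call looks roughly like the sketch below. The service account session token, the policy set name and the environment attribute name are assumptions for illustration; check your AM version's policy evaluation API for the exact shape:

# Hedged sketch of the policy evaluation call. The service-account session
# token, the policy set name and the environment attribute name are placeholders.
import requests

AM = "http://openam.example.com:8080/openam"
service_account_sso_token = "AQIC5..."  # obtained by authenticating the service account

payload = {
    "resources": ["http://app.example.com:8080/index.html"],
    "application": "iPlanetAMWebAgentService",  # assumed default policy set
    "environment": {
        # the out-of-band OAuth2 access_token, handed to the PDP as an
        # environment attribute for the scripted condition to inspect
        "access_token": ["2a4f0e51-82d5-4fd8-a7a3-bf8c7d6a2f10"]
    },
}

resp = requests.post(
    f"{AM}/json/policies?_action=evaluate",
    json=payload,
    headers={"iPlanetDirectoryPro": service_account_sso_token,
             "Content-Type": "application/json"},
)
print(resp.json())  # allow/deny actions plus any advice attributes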

OAuth2 Validation Script

The main validation is now happening in a simple Policy Condition script.  The script does a few things: it performs a call to the AM ../introspect endpoint for basic validation – is the token AM-issued, valid, within its exp and so on.  In addition there are two switches – perform auth_level validation and perform scope validation.  Each of these functions takes a configurable setting.  If performAuthLevelCheck is true, make sure to set the acceptableAuthLevel value.  As of AM 5.5, the issued OAuth2 access_token contains a value called “auth_level”.  This value ties in the authentication assurance level that has been in AM since the OpenSSO days.  This numeric value is useful to differentiate how a user was validated during OAuth2 issuance. The script basically provides a simple way to enforce a minimum acceptable value.

The other configurable switch is the performScopeCheck boolean.  If true, the script checks to make sure that the submitted access_token is associated with at least a minimum set of required scopes.  The access_token may have more scopes, but it must, as a minimum, have the ones configured in the acceptableScopes attribute.
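The real condition script runs inside AM's scripting engine and is linked at the end of this post, but its decision logic boils down to something like this Python rendering, with performAuthLevelCheck, acceptableAuthLevel, performScopeCheck and acceptableScopes as the configurable settings:

# Hedged Python rendering of the scripted condition's logic; the real
# script runs inside AM and is linked below. Hosts and credentials are placeholders.
import requests

def token_is_acceptable(access_token: str,
                        perform_auth_level_check: bool, acceptable_auth_level: int,
                        perform_scope_check: bool, acceptable_scopes: set) -> bool:
    # 1. Basic validation via the AM ../introspect endpoint
    resp = requests.post(
        "http://openam.example.com:8080/openam/oauth2/introspect",
        data={"token": access_token},
        auth=("introspect-client", "secret"),  # placeholder client credentials
    )
    info = resp.json()
    if not info.get("active", False):
        return False  # not AM issued, revoked or expired

    # 2. Optional minimum authentication assurance level check
    if perform_auth_level_check and int(info.get("auth_level", 0)) < acceptable_auth_level:
        return False

    # 3. Optional minimum scope check: the token may carry more scopes,
    #    but must include every acceptable one
    if perform_scope_check:
        token_scopes = set(info.get("scope", "").split())
        if not acceptable_scopes.issubset(token_scopes):
            return False

    return True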

Validation Responses

Once the script is in place, let’s run through some examples where access is denied.  The first simple one is if the auth_level of the access_token is less than the configured acceptable value.

acceptable_auth_level_not_met advice message

Next up, is the situation where the scopes associated with the submitted access_token fall short of what is required.  There are two advice payloads that could be sent back here.  Firstly, if the number of scopes is fundamentally too small, the following advice is sent back:

acceptable_scopes_not_met – submitted scopes too few

A second response, associated with mismatched scopes, is if the number of scopes is OK, but the actual values don’t contain the acceptable ones.  The following is seen:

acceptable_scopes_not_met – scope entry missing

That’s all there is to it.  A few things to know.  The TTL of the policy decision has been set to the exp of the access_token.  Clearly this is overridable, but it seemed sensible to tie it to the access_token lifespan.

All being well though, a successful response back would look something like the following – depending on what actions you had configured in your policy:

Successful PDP response

Augmenting with Additional Environmental Conditions

So we have an OAuth2-compatible PDP.  Cool!  But what else can we do?  Well, we can augment the scripted decision making with a couple of other conditions, namely the time-based, IP-address-based and LDAP-based conditions.

IP and Time based augmentation of access_token validation

The above just shows a simple example of tying the decision making to only allow valid access_token usage between 8:30am and 5:30pm Monday-Friday from a valid IP range.  The other condition worth a mention is the LDAP filter one.

Note that any of the environmental conditions requiring session validation will fail: the script isn’t linking the access_token to an AM session at this point, and in some cases (depending on how the access_token was generated) there may never have been a session associated with it.  So be aware they will not work.

The code for the scripted condition is available here.

 

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

How Information Security Can Drive Innovation

Information Security and Innovation: often at two different ends of an executive team’s business strategy. The non-CIO ‘C’ level folks want to discuss revenue generation, efficiency and growth – three areas often immeasurably enhanced by having a strong and clear innovation management framework. The CIO’s objectives are often focused on technical delivery, compliance, upholding SLAs and, more recently, on privacy enablement and data breach prevention.

So how can the two worlds combine, to create a perfect storm for trusted and secure economic growth?

Innovation Management

But firstly how do organisations actually become innovative? It is a buzzword that is thrown around at will, but many organisations fail to build out the necessary teams and processes to allow innovation to succeed. Innovation basically focuses on the ability to create both incremental and radically different products, processes and services, with the aim of developing net-new revenue streams.

But can this process be managed? Or are companies and individuals just “born” to be creative? Well, simply, no. Creativity can be managed, fostered and encouraged. Some basic creative thinking concepts include “design thinking” – where the focus is on emphasising customer needs, prototyping, iterating and testing again. This is then combined with different thinking types – open (problem felt directly), closed (via a 3rd party), internal (a value-add contribution) and external (creativity as part of a job role).

The “idea-factory” can then be categorised into something like HR-led ideas – those from existing staff that lead to incremental changes – R&D ideas – the generation of radical concepts that lead to entirely new products – and finally Marketing-led ideas – those that capture customer feedback.

Business Management

Once the idea-machine has been designed, it needs feeding with business strategy. That “food” helps to define what the idea-machine should focus upon and prioritise. This can be articulated in the form of what the business wants to achieve. If it is revenue maximisation, does this take the form of product standardization, volume or distribution changes? This business analysis needs to focus on identifying unmet customer needs, tied neatly into industry or global trends (a nice review on the latter is the “Fourth Industrial Revolution” by Klaus Schwab).

Information Security Management

There is a great quote by Amit & Zott that goes along the lines of: as an organisation, you’re always one innovation away from being wiped out. Very true. But that analogy can also be applied to being “one data breach” from being wiped out – through irreparable brand damage, or perhaps via the theft of intellectual property. So how can we accelerate from the focus of business change and forward thinking to information security, which has typically been retrospective, restrictive and seen as an IT cost centre?

Well, believe it or not, there are similarities, and when designed in the right way, the overlay of application, data and identity-led security can drive faster, more efficient and more trustworthy services.

One of the common misconceptions regarding security management and implementation is that it is applied retrospectively. An application or infrastructure is created, then audits, penetration tests or code analysis take place. Security vulnerabilities are identified, triaged and fixed in future releases.

Move Security to the Left

It is much more cost effective, and secure, to apply security processes at the very beginning of any project, be it the creation of net-new applications or a new infrastructure design – the classic “security by design” approach. For example, developers should have a basic understanding of security concepts – cryptography 101, when to hash versus encrypt, what algorithms to use, how to protect against unnecessary information disclosure, identity protection and so on. Exit criteria within epic and story creation should reference how the security posture should, as a minimum, not be degraded. Functional tests should include static and dynamic code analysis. All of these incremental changes really move “security to the left” of the development pipeline, getting closer to the project start than the end.

Agile -v- Stage Gate Analysis

Within the innovation management framework, stage-gate analysis is often used to triage creative idea processing, helping to identify what to move forward with and what to abandon.

A stage is a piece of work, followed by a gate. A gate basically has exit criteria, with outcomes such as “kill”, “stop”, “back”, “go forward” and so on. Each idea flows through this process with the aim of killing early and reducing cost. As an idea flows through the stage-gate process, the cost of implementation clearly increases. This approach is very similar to the agile methodology of building complex software. Lots of unknowns. Baby steps, iteration, feedback and behaviour alteration and so on.

So there is a definitive mindset duplication between creating ideas that feed into service and application creation and how those applications are made real.

Security Innovation and IP Protection

A key theme of information security attack vectors over the last 5 years has been the speed of change. Whether we are discussing malware, ransomware, nation-state attacks or zero-day notifications, there is constant change. Attack vectors do not stay still. The infosec industry is growing annually as both the private sector and nation states ramp up defence mechanisms using skilled personnel, machine learning and dedicated practices. Those “external” changes require organisations to respond in innovative and agile ways when it comes to security.

Security is no longer a compliance exercise. The ability to deliver a secure and trusted application or service is a competitive differentiator that can build long lasting, sticky customer relationships.

A more direct relationship between innovation and information security is the simple protection of the intellectual property that relates to the new practices, ideas, patents and other value created by innovative frameworks. That IP needs protecting from external malicious attacks, disgruntled insiders and so on.

Summary

Overall, organisations are going through the digital transformation exercise at rapid speed and scale. That transformation process requires smart innovation, which should be neatly tied into the business strategy. Security management, however, is no longer a retrospective, compliance-driven exercise. The process, personnel and speed of change the infosec industry sees can provide a great breeding ground for helping to alter the application development process, reduce internal boundaries and deliver secure, trusted, privacy-preserving services that allow organisations to grow.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

December Auth Node Roundup

The goose is getting fat.  Presents are wrapped.  Tree is up.  AND, some new authentication nodes have been added to the Backstage Marketplace.

Threat Management

The entire bot protection, threat intelligence and DDoS awareness space has grown massively over the last few years.  Instead of just relying on network-related throttling, the auth tree fabric really makes it simple to augment third-party systems into the login journey.

Two pretty simple nodes that were added include the OpenThreatIntelliegenceNode and the HaveIBeenPwnedNode.  The open threat intelligence node basically calls out to the https://cymon.io site, sending a SHA256 hash of the inbound client IP address.  The response is basically a verification of whether the IP has been involved in any botnets or malicious software attacks.

Have I Been Pwned is a simple free site that takes your email address and checks if it has been involved in any big data breaches.  If so, it might then be prudent to prompt the user in your system to either use MFA or perhaps change their password too.

Threat Focused Auth Tree

Another addition in this space was the Google reCaptcha node to protect against bot attacks.

SLA, Metrics and Timing

Two recent additions come from ForgeRock’s very own Craig McDonnell.  Craig has built out two interesting nodes for monitoring time and metering.  First up is the MeterAuthTreeNode.  This node can be dropped into any part of the tree and basically adds a configurable string to the DropWizard meter registry within AM.  So, for example, if you’re tracking which browser users are logging in from using the BrowserCheckerNode, you could simply drop in a couple of meter nodes to add incremental counters that are updated every time that specific browser is seen during the login journey.  This becomes massively important when building out user experience analytics projects.  The metrics can then be viewed using something like JConsole and retrieved into nice dashboards via JMX or pushed to a Graphite server.

A cousin of the meter node are the TimeAuthTreeNodes.  Similar to the meter, they can basically time each part of the auth tree.  As trees are likely to include upwards of 20 data signals and 3rd party processing, it may be essential to understand response times and SLA impacts.  The timer nodes come as a pair – a starter node and a stop node.  The time calculated in between is then sent to the same registry as the meter and can be viewed using JConsole.

Monitoring Response Time of a 3rd Party Call Out During Login

Device Analysis

From a security perspective it’s quite common to pair a trusted user to their login device.  But what about verifying whether that device itself has the correct browser or operating system?  The OSCollectorNode and its sister node, the BrowserCollectorNode, basically allow analysis of the incoming client request.  This can be extended right down to the version, to allow for redirects, blocking or perhaps a more personalised experience.  If a service provider knows you are logging in from a mobile device on 4G at 8am, they can likely infer you are commuting and may respond with different content.  The information collected can simply be added into session properties and delivered to downstream protected applications.

OS and Browser Analysis During Login

A few other noteworthy additions this month include an updated IPAddressDecisionNode, the KBAAuthenticationNode and the ClientSideScriptingNode, which allows for the delivery of JavaScript down to the client machine.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

2020: Machine Learning, Post Quantum Crypto & Zero Trust

Welcome to a digital identity project in 2020! You’ll be expected to have a plan for post-quantum cryptography.  Your network will be littered with “zero trust” buzzwords that will make you suspect everyone, everything and every transaction.  Add to that, “machines” will be learning everything, from how you like your coffee through to every network, authentication and authorisation decision. OK, are you ready?

Machine Learning

I’m not going to do an entire blog on machine learning (ML) and artificial intelligence (AI).  Firstly, I’m not qualified enough on the topic, and secondly, I want to focus on the security implications.  Needless to say, within 3 years most organisations will have relatively experienced teams handling big data capture from an identity, access management and network perspective.

That data will be fed into ML platforms, either on-premise or via cloud services.  Leveraging either structured or unstructured learning, data from events such as login (authentication) for end users and devices, as well as authorization decisions, can be analysed in order not only to increase assurance and security, but also to improve user experience.  How?  Well, if the output from ML can be used to update existing signatures (a bit legacy, but still) whilst simultaneously working out the less risky logins, end user journeys can be made less intrusive.

Step one is finding the correct data sources to enter into the ML “model”.  What data is available, especially within the sign up, sign in and authorization flows?  Clearly general auditing data will look to capture ML “tasks” such as successful sign ins and any other metadata associated with that – such as time, location, IP, device data, behavioural biometry and so on.  Having vast amounts of this data available is the first step, which in turn can be used to “feed” the ML engine.  Other data points would be needed too.  What resources, applications and API calls are being made to complete certain business processes?  Can patterns be identified and tied to “typical” behaviour within user and device communities?  Being able to identify and track critical data and the services that process that data would be a first step, before being able to extract task-based data samples to help identify trusted and untrusted activities.

 

Post Quantum Crypto

Quantum computing is coming.  Which is great.  Even in 2020 it might not be ready, but you need to be ready for it.  But, and there’s always a but, the main concern is that the super power of quantum will blow away the ability of existing encryption and hashing algorithms to remain secure.  Why?  Well, quantum computing ushers in a paradigm of “qubits” – a superpositional state in between the classic binary 1 and 0.  Ultimately, that means that the “solutioneering” of complex problems can be completed in a much more efficient and non-sequential way.

The quantum boxes can basically solve certain problems faster, the mathematics behind cryptography being one of those problems.  A basic estimate for the future effectiveness of something like AES-256 drops to 128 bits.  Scary stuff.  Commonly used approaches today for key exchange rely on protocols such as Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH).  Encryption and signing are then handled by things like Rivest-Shamir-Adleman (RSA) or the Elliptic Curve Digital Signature Algorithm (ECDSA).

In the post-quantum (PQ) world they’re basically broken.  Clearly, the material impact on your organisation or services will largely depend on impact assessment.  There’s no point putting a $100 lock on a $20 bike.  But everyone wants encryption right?  All that data that will be flying around is likely to need even more protection whilst in transit and at rest.

Some of the potentially “safe” PQ algorithms include XMSS and SPHINCS for hash-based signatures – the former going through IETF standardization.  Ring Learning With Errors (RLWE) is basically an enhanced public key cryptosystem that alters the structure of the private key.  It is currently under research, but no weaknesses have yet been found.  NTRU is another algorithm for the PQ world, using a hefty 12881 bit key.  NTRU is also already standardized by the IEEE, which helps with the maturity aspect.

But how to decide?  There is a nice body called the PQCRYPTO Consortium that is providing guidance on current research.  Clearly you’re not going to build your own alternatives, but information assurance and crypto specialists within your organisation will need to start data impact assessments, in order to understand where cryptography is currently used for transport, identification and data-at-rest protection, and to understand any future potential exposures.

Zero Trust Identities

“Zero Trust” (ZT) networking has been around for a while.  The days of organisations having a “safe” internal network versus an untrusted, “hostile” public network, separated by a firewall, are long gone. Organisations are perimeter-less.

Assume every device, identity and transaction is hostile until proven otherwise.  ZT for identity especially will look to bind not only a physical identity to a digital representation (session id, token, JWT), but also that representation to a vehicle – a mobile, tablet or other device.  In turn, every transaction that tuple interacts with is then verified – checking for contextual or behavioural changes that could indicate malicious intent.  That introduces a lot of complexity to transaction, data and application protection.

Every transaction potentially requires introspection or validation.  Add to this mix an increased number of devices and data flows, which would pave the way for distributed authorization, coupled with continuous session validation.

How will that look?  Well, we’re starting to see the use of things like stateless JSON Web Tokens (JWTs) as a means for hyper-scale assertion issuance, along with token binding to sessions and devices.  Couple that with fine-grained authentication processes that use 20+ signals of data to identify a user or thing, and we’re starting to see the foundations of ZT identity infrastructures.  Microservice or hyper-mesh related application infrastructures are going to need rapid introspection and re-validation on every call, so the likes of distributed authorization look likely.

So the future is now.  As always.  We know that secure identity and access management functions have never been more needed, popular or advanced in the last 20 years.  The next 3-5 years will be critical in defining a backbone of security services that can nimbly be applied to users, devices, data and the billions of transactions that will result.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

Creating Personal Access Tokens in ForgeRock AM

Personal Access Tokens (PAT’s) provide scoped, self-managed access credentials that can be used to give trusted systems and services access to act on a user’s behalf.

Similar to OAuth tokens, they often don’t have an expiration and are used conceptually instead of passwords.  A PAT could be used in combination with a username when performing basic authentication.

For example see the https://github.com/settings/tokens page within Github which allows scope-related tokens to be created for services to access your Github profile.

PAT Creation

The PAT can be an opaque string – perhaps a SHA256 hash.  Using a hash seems the most sensible approach to avoid collisions and create a fixed-length, portable string.  A hash without a key of course won’t provide any creator assurance/verification function, but since the hash will be stored against the user profile and not treated like a session/token, this shouldn’t be an issue.

An example PAT value could be:


f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c:{"resource1" : [ "read", "write", "execute" ] }

a011286605ff6a5de51f4d46eb511a9e8715498fca87965576c73b8fd27246fe:{"resource2" : [ "read", "write"]}

The key was simply created by running the resource and the associated permissions through sha256sum on Linux.  How you create the hash is beyond the scope of this blog, but this could be easily handled by say ForgeRock IDM and a custom endpoint in a few lines of JavaScript.
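For completeness, the same idea in a few lines of Python – note that the exact hash value depends on how the permissions are serialised before hashing, so this won't reproduce the example strings above byte-for-byte:

# Hedged sketch: deriving a PAT value as a SHA-256 hash of the resource
# and its permissions, mirroring the sha256sum example above.
import hashlib
import json

permissions = {"resource1": ["read", "write", "execute"]}
material = json.dumps(permissions, separators=(",", ":"))
pat = hashlib.sha256(material.encode("utf-8")).hexdigest()

# Stored against the profile as hash:permissions
print(f"{pat}:{material}")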

PAT Storage

The important aspect is where to store the PAT once it has been created.  Ideally this needs to be stored against the user’s profile record in DJ.  I’d recommend creating a new, dedicated, multivalued schema attribute for PAT’s.  The user can then update their PAT’s over REST the same as any other profile attribute.

For this demo I used the existing attribute called “iplanet-am-user-alias-list” for speed as this was multi-valued.  I added in a self-created PAT for my fake resource:

Using a multi-valued attribute allows me to create any number of PAT’s.  As they don’t have an expiration they might last for some time in the user store.

PAT Usage

Once stored, they could be used in a variety of ways to provide “access” within applications. The simplest way is to leverage the AM authorization engine as a decision point to verify that a PAT exists and what permissions it maps to.

Once the PAT is stored and created, the end user can present it to another user/service that they want to be able to use the PAT.  That service or user presents the username:PAT combination to the service they wish to gain access to.  That service calls the AM authorization API’s to see if the user:PAT combination is valid.

The protected service would call {{OpenAM}}/openam/json/policies?_action=evaluate with a payload similar to:
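The original payload screenshot isn't reproduced here, but based on the resource format and policy set described below, it would look roughly like this (the username, PAT value and policy set name are the example values used in this post):

# Hedged reconstruction of the evaluate call; the policyeval session token
# is a placeholder, and the resource encodes the username:PAT combination.
import requests

AM = "http://openam.example.com:8080/openam"
policyeval_sso_token = "AQIC5..."  # session for the policyeval service account

payload = {
    "resources": ["pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c"],
    "application": "PATValidator",  # the policy set created later in this post
}

resp = requests.post(
    f"{AM}/json/policies?_action=evaluate",
    json=payload,
    headers={"iPlanetDirectoryPro": policyeval_sso_token,
             "Content-Type": "application/json"},
)
print(resp.json())  # access decision plus the permissions as response attributes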

Here I am calling the ../policies endpoint with a dedicated account called “policyeval”, which has the ability to read the REST endpoint and also read realm users, which we will need later on.  Edit the necessary privileges on the Privileges tab within the Admin console.

If the PAT exists within the user profile of “smoff”, AM returns an access=true message, along with the resource and associated permissions that can be used within the calling application:

So what needs setting up in the background to allow AM to make these decisions? Well all pretty simple really.

Create Authorization Resource Type for PAT’s

Firstly create a resource type that matches the pat://*.* format (or any format you prefer):

Next we need to add a policy set that will contain our access policies:

The PATValidator only contains one policy called AllPATs, which is just a wildcard match for pat://*:*.  This will allow any combination of user:pat to be submitted for validation:

Make sure to set the subjects condition to “NOT Never Match” as we are not analysing user session data here.  The logic for analysis is being handled by a simple script.

PAT Authorization Script

The script is available here.

At a high level it does the following (a rough sketch follows the list):

  1. Captures the submitted username and PAT that is part of the authorization request
  2. As the user will not have a local session, we need to make a native REST call to look up the user
  3. We do this by first generating a session for our policyeval user
  4. We use that session to call the ../json/users endpoint to perform a search for the user’s PATs
  5. We do a compare between the submitted PAT and any PAT’s found against the user profile
  6. If a match is found, we pull out the assigned permissions and send back as a response attribute array to the calling application
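The published script is the source of truth, but as a rough Python outline of those steps (the endpoint paths, the policyeval credentials and the iplanet-am-user-alias-list attribute are per this demo setup):

# Hedged outline of the authorization script's steps; the real script runs
# inside AM's scripting engine and is linked above. Host and credentials are placeholders.
from typing import Optional
import requests

AM = "http://openam.example.com:8080/openam"

def validate_pat(username: str, submitted_pat: str) -> Optional[dict]:
    # 3. Authenticate the policyeval service account to obtain a session token
    auth = requests.post(
        f"{AM}/json/authenticate",
        headers={"X-OpenAM-Username": "policyeval",
                 "X-OpenAM-Password": "changeit",  # placeholder credential
                 "Content-Type": "application/json"},
    )
    sso_token = auth.json()["tokenId"]

    # 4. Search the user's profile for stored PATs
    #    (this demo used the multi-valued iplanet-am-user-alias-list attribute)
    user = requests.get(
        f"{AM}/json/users/{username}?_fields=iplanet-am-user-alias-list",
        headers={"iPlanetDirectoryPro": sso_token},
    ).json()
    stored_pats = user.get("iplanet-am-user-alias-list", [])

    # 5. Compare the submitted PAT against each stored hash:permissions entry
    for entry in stored_pats:
        pat_hash, _, permissions = entry.partition(":")
        if pat_hash == submitted_pat:
            # 6. On a match, return the permissions for the response attributes
            return {"permissions": permissions}
    return None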

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

A Design for Modern Authentication

The password is dead. Long live the password! I have lost count of how many articles and blogs I have seen with regards to the weaknesses, the management, the flexibility, security, insecurity and overall usage of passwords when it comes to user authentication. We all use them and they’re not going anywhere any time soon. OK, so next step. What else can and should we be using for our user and device based authentication and login journeys?

Where We Are Now – The Sticking Plaster of MFA

So we accept that the traditional combo of user name and password is bad for our (system) health. Step forward multi-factor authentication. Or 2FA. Take your pick. This generally saw the introduction of something you have, in the form of a token, phone-as-a-token or some other out-of-band mechanism that would create a one-time password. Traditionally the “out of band” mechanism was either an email or SMS to a preregistered address or phone number, containing a 6 digit pass code. Internal or employee systems would often leverage a hard token – either a USB dongle or a small tag with a tiny display showing a rotating pin. These concepts were certainly better from a security perspective, but a) were not unbreakable and b) often created a disjointed user login experience with lots of interruptions and user interaction.

Basic MFA factors

Where We Are Moving To – Deca-factor Authentication!

OK, so user name and passwords are not great. MFA is simple, pretty cheap to implement, but means either the end user needs to carry something around (a bit 2006) or has an interrupted login journey by constantly being asked for a one-time-password. What we need is not two-factor-authentication but deca-factor-authentication! More factors. At least 10 to be precise. Increase the factors and aim to reduce the material impact of a single factor compromise whilst simultaneously reducing the number of user interrupts. These 10 factors (it could be 8, it could be 15, you get the idea) are all about introducing a broad spectrum analysis for the login journey.

Each factor is much more cohesive and modular, analysing a single piece of the login journey. The login journey could still leverage pretty static profile-related data such as a user name, but is augmented with much more context – the location, time and device origin of the request, and comparison factors that look at previous login requests to determine patterns or abnormalities.

Breaking authentication down? “Deca-factors”

Some of these factors could “pass” and some could “fail” during the login journey, but the process is much more about accumulating and analysing risk and therefore being able to respond to high risk more accurately. Applying 2FA to every user login does not reduce risk per se; it simply applies a blanket risk to every actor.

Wouldn’t it be much better to allow login variation for genuine users who do regularly change machine, location, network and time zone? Wouldn’t it be better to give users more choice over their login journeys and provide numerous options if and when high-risk scenarios do occur?

Another key area I think authentication is moving towards is that of transparency. “Frictionless”, “effortless” or “zero-effort” logins are all the buzz. If, as an end user, I enrol, sacrifice some privacy regarding a device fingerprint, and maybe download an OTP or push app, why can’t I just “login” without having my experience interrupted? The classic security/convenience paradox. By introducing more factors and “gluing” those factors together with processing logic, the user authentication system can be much more responsive – perhaps mimicking a state machine, designed in a non-deterministic fashion, where any given factor could have multiple outcomes.

Where We Want To Get To – Transparent Pre-Authentication

So I guess the sci-fi end goal is to just turn up at work/coffee shop/door/car/website/application (delete as applicable) and just present oneself. The service would not only “know” who you were, but also trust that it is you. A bit like the Queen. Every time you presented yourself, transparent background checks would continually evaluate every part of the interaction, looking for changes and identifying risk.

Session + Bind + Usage – increasing transparency?

The closest we are to that today, in the web world at least, is the exchange of the authentication process for a cookie/session/tokenId/access_token. Whether that is stateless or stateful, it’s something to represent the user when they attempt to gain access to the service again. Couple that token to some kind of binding (either to a PKI key pair or a TLS session) to reduce the impact of token theft, and there is some kind of repeatable access use case. However, change is all around, and the token presentation must therefore be coupled with all the usage, context, resource and transaction data that the token is attempting to access, to allow the authentication machine to loop through the necessary deca-factors, either individually or collectively, to identify risk or change.

Authentication is moving on.  A more modern system must accommodate a broad spectrum of signals when it comes to analysing who is instigating a transaction, coupled with mechanisms that increase transparency and pre-identification of risk without unnecessary and obtrusive interruptions.

This blog post was first published @ www.infosecprofessional.com, included here with permission.