Next Generation Distributed Authorization

Many of today’s security models spend a lot of time focusing on network segmentation and authentication.  Both of these concepts are critical in building out a baseline defensive security posture.  However, there is a major area that is often overlooked, or at least simplified to the point of limited use: authorization.  Working out what a user, service, or thing should be able to do within another service.  The permissions.  The entitlements.  The access control entries.  I don’t want to give an introduction to the many, sometimes academic acronyms and ideas around authorization (see RBAC, MAC, DAC, ABAC and PDP/PEP amongst others).  I want to spend a page delving into some of the current and future requirements surrounding distributed authorization.

New Authorization Requirements

Classic authorization modelling tends to have a centralised policy decision point (PDP) – a central location that applications, agents and other SDKs call in order to get a decision regarding a subject/object/action combination.  The PDP contains signatures (or policies) that map the objects and actions to sets of users and services.
That call-out process is now a bottleneck, for several reasons.  Firstly, the number of systems being protected is rapidly increasing, with the era of microservices, APIs and IoT devices all needing some sort of access control.  Having them all hitting a central PDP doesn’t seem a good use of network bandwidth or central processing power.  Secondly, that increase in objects also gives rise to a more meshed and federated set of interactions, such as the following.

Distributed Enforcement

This gives way to a more distributed enforcement requirement.  How can the protected object perform an access control evaluation without having to go back to the mother ship?  There are a few things that could help.  
We probably need to achieve three things: identify the calling user or service (via an authentication token), map that identity to what it is allowed to do, and finally make sure that is what actually happens.  The first part is often completed using tokens – and in the distributed world, a token that has been cryptographically signed by a central authority.  JSON Web Tokens (JWTs) are popular, but not the only approach.
The second part – working out what they can do – could be handled in two slightly different ways.  One is that the calling subject brings with them what they can do.  They could do this by having the cryptographically signed token contain their access control entries.  This approach would require the service that issues tokens to also know what the calling user or service can do, so it would need knowledge of the access control entries to use.  That list of entries would also need things like governance, audit and version control, but that is needed regardless of where those entries are stored.
So here, a token gets issued and the objects being protected have a method to cryptographically validate the presented token, extract the access control entries (ACE) and enforce what is being asked.
Having a token that contains the actual ACEs is not that new.  Capability Based Access Control (CBAC) follows this concept, where the token could contain the object and associated actions.  It could also contain the subject identifier, or perhaps that could be delivered as a separate token.  A similar practical implementation is described in Google’s Macaroons project.
What we’ve achieved here is basically to remove the access control logic from the object or service, while equally removing the need to perform a call back to a policy mother ship.
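As a rough sketch of this pattern (not a specific product implementation – the claim name "entitlements", the key handling and the resource/action names are all illustrative assumptions), a protected service could validate a presented token locally and enforce the embedded entries along these lines, using Python and the PyJWT library:

import jwt  # PyJWT

# Public key of the central token issuer, obtained out of band.
ISSUER_PUBLIC_KEY = open("issuer_public.pem").read()

def is_allowed(presented_token, resource, action):
    # 1. Cryptographically validate the token signed by the central authority.
    try:
        claims = jwt.decode(presented_token, ISSUER_PUBLIC_KEY,
                            algorithms=["RS256"], audience="meeting-room-service")
    except jwt.InvalidTokenError:
        return False
    # 2. Extract the access control entries carried inside the token.
    entitlements = claims.get("entitlements", [])
    # 3. Enforce: allow only if an entry matches the requested resource and action.
    return any(entry.get("resource") == resource and action in entry.get("actions", [])
               for entry in entitlements)

# Example: can the caller open the meeting room?
# is_allowed(token, resource="meeting-room", action="open")

No call back to the central PDP is needed at request time – only the issuer’s public key has to be distributed ahead of time.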
A subtly different approach is to push the access control logic back down to the object – but instead of it originating within the service itself, it is still owned and managed by a central authority – just distributed to the edges.
This allows for local enforcement, but central governance and management.  Modern distribution technologies like web sockets, or even flat file formats like JSON and YAML, allow for repave-and-replace approaches as policy definitions change, which fits nicely into DevOps deployment models.
The object itself would still need to know a few things to make the enforcement complete – a token representing the user or service, and some context to help validate the request.
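As a sketch of what that centrally managed, edge-distributed policy might look like (names and structure are purely illustrative), a small JSON document pushed out to the service could be enough for local evaluation:

{
  "policyVersion": "42",
  "object": "meeting-room",
  "rules": [
    { "subjects": ["role:facilities"], "actions": ["open", "close"] },
    { "subjects": ["user:bob"], "actions": ["open"],
      "context": { "daysOfWeek": ["Mon", "Tue", "Wed", "Thu", "Fri"] } }
  ]
}

On each policy change, the central authority simply redistributes the file and the edge service reloads it – the repave-and-replace model described above.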

Contextual Integration

Access control decisions generally require the subject, the object and any associated actions.  For example, subject=Bob could perform action=open on object=Meeting Room.  Another dimension that is now required, especially within zero trust based approaches, is that of context.  In Bob’s example, context may include time of day, day of the week, or even the project he is working on.  They could all impact the decision.  Previous access control requests and decisions could also come into play here.  For example, say Bob was just given access to the Safe Room where the gold bullion is stored; perhaps his request to gain access to the Back Door is then denied.
The capturing of context, both during authentication time and during authorization evaluation time is now critical, as it allows the object to have a much clearer understanding of how to handle access requests.

ML – Defining Normal

I’ve talked a lot so far about access control logic and where it should sit.  Well, how do we know what that access control logic looks like?  I spent many a year designing role based access control systems (wow, that was 10+ years ago), using a technique known as role mining.  Big data crunching before machine learning was in vogue.  Taking groups of users, trying to understand what access control patterns existed, and trying to shoehorn the results into business and technical roles.
Today, there are loads of great SaaS-based machine learning systems that can take user activity logs (logs that describe user-to-application interactions) and provide views on whether that activity is “normal” – normal for the user, for their peers, their business unit, location, purchasing patterns and so on.  The output of that process can be used to help define the initial baseline policies.
Enforcing access based on policies alone, though, is not enough.  It is time consuming and open to many avenues of circumvention.  Machine learning also has a huge role to play within the enforcement aspect, especially as the idea of context – and what is valid or not – becomes a highly complicated and ever-changing question.
One of the key issues of modern authorization is the distinction between the access control logic, the enforcement, and the vehicles used to deliver the necessary parts to the protected services.
They should be kept as modular as possible, to allow for future proofing and the ability to design a secure system that is flexible enough to meet business requirements, scale out to millions of transactions a second and integrate thousands of services simultaneously.

This blog post was first published @ www.infosecprofessional.com, included here with permission.

Implementing JWT Profile for OAuth2 Access Tokens

There is a new IETF draft called JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens.  This is a very early version 0 draft that looks to describe the format of OAuth2-issued access_tokens.

Access tokens are typically bearer tokens, but the OAuth2 spec doesn’t really describe what format they should be.  They typically end up being one of two high level types – stateless and stateful.  Stateful just means “by reference”, with a long opaque random string being issued to the requestor, which resource servers can then send back to the authorization service in order to introspect and validate.  On their own, stateful or reference tokens don’t really provide the resource servers with any detail.

The alternative is to use a stateless token – namely a JSON Web Token (JWT).  This new spec aims to standardise what the content and format should be.

From a ForgeRock AM perspective, this is good news.  AM has delivered JWT based tokens (web session, OIDC id_tokens and OAuth2 access_tokens) for a long time.  The format and content of the access_tokens, out of the box, generally look something like the following:

The out of the box header (using RS256 signing):
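For illustration only (exact values vary by deployment), it looks something like:

{
  "typ": "JWT",
  "kid": "<signing-key-id>",
  "alg": "RS256"
}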

The out of the box payload:
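Again purely for illustration – the values below are placeholders and the exact set varies by deployment – the payload mixes standard OAuth2/JWT claims with AM-specific ones along these lines:

{
  "sub": "demo",
  "cts": "<token-store-reference>",
  "auditTrackingId": "<uuid>",
  "iss": "https://openam.example.com/openam/oauth2",
  "tokenName": "access_token",
  "token_type": "Bearer",
  "authGrantId": "<grant-id>",
  "aud": "myClient",
  "nbf": 1549000000,
  "grant_type": "authorization_code",
  "scope": ["profile"],
  "auth_time": 1549000000,
  "realm": "/",
  "exp": 1549003600,
  "iat": 1549000000,
  "expires_in": 3600,
  "jti": "<token-id>",
  "cnf": { "jwk": { "…": "…" } },
  "client_id": "myClient"
}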

There is a lot of stuff in that access_token.  Note the cnf claim (confirmation key).  This is used for proof of possession support, which is of course optional, so you can easily reduce the size by not implementing that.  There are several claims that are specific to the AM authorization service, which may not always be needed in a stateless JWT world, where perhaps the RS is performing offline validation away from the AS.

In AM 6.5.2 and above, new functionality allows you to rapidly customise the content of the access_token.  You can add custom claims, remove out of the box fields and generally build token formats that suit your deployment.  We do this by the addition of scriptable support.  Within the settings of the OAuth2 provider, note the new field for OAuth2 Access Token Modification Script.

The scripting ability, was already in place for OIDC id_tokens.  Similar concepts now apply.

The draft JWT profile spec basically mandates iss, exp, aud, sub and client_id, with auth_time and jti as optional.  The AM token already contains those claims.  Perhaps the only differing component is that the JWT Profile spec – section 2.1 – recommends the header typ value be set to “at+JWT” – meaning access token JWT – so the RS does not confuse the token with an id_token.  The FR AM scripting support does not allow changes to the typ, but the payload already contains a tokenName claim (with the value access_token) to help make this distinction.

If we add a couple of lines to the out of the box script, namely the following, we cut back the token content to the recommended JWT Profile:

accessToken.removeField("cts");
accessToken.removeField("expires_in");
accessToken.removeField("realm");
accessToken.removeField("grant_type");
accessToken.removeField("nbf");
accessToken.removeField("authGrantId");
accessToken.removeField("cnf");

The new token payload is now much more slimmed down:
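Again with placeholder values, the result is close to the minimal claim set the draft profile describes:

{
  "sub": "demo",
  "auditTrackingId": "<uuid>",
  "iss": "https://openam.example.com/openam/oauth2",
  "tokenName": "access_token",
  "token_type": "Bearer",
  "aud": "myClient",
  "scope": ["profile"],
  "auth_time": 1549000000,
  "exp": 1549003600,
  "iat": 1549000000,
  "jti": "<token-id>",
  "client_id": "myClient"
}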

The accessToken.setField("name", "value") method allows simple extension and alteration of standard claims.
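For example, a hypothetical custom claim (the claim name here is purely illustrative) can be added with one extra line in the same script:

accessToken.setField("preferred_region", "emea");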

For further details see the following documentation on scripted token content – https://backstage.forgerock.com/docs/am/6.5/oauth2-guide/#modifying-access-tokens-scripts

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Directory Services – Docker, Kubernetes: Friends or Foes?

Two weeks ago, at the ForgeRock Identity Live conference, I did a talk about ForgeRock Directory Services (DS) in the Docker/Kubernetes (K8S) world, trying to answer the question whether DS and Docker/K8S were friends or foes.

Before I dive into the question, let me say that it’s obvious that our whole industry is moving to the Cloud, and that Docker/Kubernetes are becoming the standard way to deploy software in the Cloud, in any Cloud. Therefore whether DS and K8S are ultimately friends or foes is not the right question. I believe it is unavoidable and that in the near future we will deploy and fully support Directory Services in K8S. But is it a good idea to do it today? Let’s examine why we are questioning this today, what are the benefits of using Kubernetes to deploy software, what are the constraints of deploying the current version of Directory Services (6.5) in Kubernetes, and what ForgeRock is working on to improve DS in K8S. Finally I will highlight why Directory Services is a good solution to persist data, whether it’s on premise or in the Cloud. 

Why the discussion about DS and K8S?

The main reason we are having this discussion is due to the nature of Directory Services.  DS is not the usual stateless web application.  Directory Services is both a stateful application and a distributed one.  These are the two main aspects that require special care when trying to deploy in containers.  First, Directory Services is a stateful application because it is the place where one can store the state for all those stateless web applications.  In our platform, we use DS to store ForgeRock Access Management data, whether it’s runtime configuration data, tokens or user identities.  Second, Directory Services is a distributed application because instances need to talk with each other so that the data is replicated and consistent.  Because databases and distributed applications require stronger orchestration and coordination between elements of the system, they are implemented as StatefulSets in the Kubernetes world, and make use of Persistent Volumes (PV).  Therefore our Cloud Deployment Model of ForgeRock Directory Services is also implemented this way.

It’s worth noting that Persistent Volume is a Kubernetes API, and there are several types of volumes and many different provider implementations.  Some of the PV types are very recent and still in beta.  So, when using Kubernetes for applications that persist data, you should have a good understanding of the characteristics and the performance of the Persistent Volume choices that are available in your environment.

Benefits of Containers and Kubernetes

Developers make great use of containers because they simplify focusing on what they have to build and test.  Instead of spending hours figuring out how to install and configure a database, and building a monitoring platform to validate their work, they can pull one or more Docker images that automate these tasks.

When going into production, the automation is a key aspect. Kubernetes and its family of tools, allow administrators to describe their target architectures, automate deployment, monitoring and incident response. Typically in a Kubernetes cluster, if the administrator requires at least 3 instances of an application, Kubernetes will react to the disappearance of an instance and will restart a new one immediately. Another key benefit of Kubernetes is auto-scalability. The Kubernetes deployment can react to monitoring alerts or external signals to add or remove instances of an application in order to support a greater or smaller workload. This optimises the cost of running the solution, balancing the capacity to absorb peak loads with the cost of running at normal or low usage levels.

Directory Services 6.5 constraints in K8S

But auto-scaling is not something that suits all applications, and typically Directory Services, like most databases, does not scale automatically by adding more running instances.  Because databases have state and data, and expect exclusive access to the files, adding a new replica is a costly operation.  The data needs to be duplicated in order to let another instance use it.  Also, adding a Directory Services instance only helps to scale read operations.  A write operation on any server will need to be replicated to all other servers.  So all servers will have the same write throughput and the same amount of disk I/O.  In the world of databases, the only way to scale write operations is to distribute (shard) the data to multiple servers.  Such a capability is not yet available in Directory Services, but it’s planned for future releases.  (Note that Directory Proxy Services 6.5 already has support for sharding, but with some constraints.  And the proxy is not yet part of the Cloud Deployment Model.)

Another constraint of Directory Services 6.5 is how replication works.  The DS replication feature was designed years ago, when customers would deploy servers and not touch them unless they were broken.  Servers had stable hostnames or IP addresses and would know all of their peers.  In the container world, the address of an instance is only known after the instance is started.  And sometimes you want to start several instances at the same time.  The current ForgeRock Cloud Deployment Model and the Directory Services Docker images that we propose work around this design limitation of replication management by pre-configuring replication for a fixed (and small) maximum number of replicas.  It’s not possible to dynamically add another replica after that.  Also, the “dsreplication” utility cannot be used in Kubernetes.  Luckily, monitoring replication – and more importantly its latency – is possible with Prometheus, which is the default monitoring technology in Kubernetes.

Coming Improvements in Directory Services

For the past year, we’ve been working hard on redesigning how we manage and bootstrap replication between Directory Services instances.  Our main challenge with that work has been to do it in a way that allows us to continue to replicate with previous versions.  Interoperability and compatibility of replication between different versions of Directory Services has been and will remain a key value of the product, allowing customers to roll out new versions with zero downtime of the service.  We’re moving towards using full CA-based certificates and mutual TLS authentication for establishing trust between replicas.  Configuring a new replica will no longer require updating all servers in the topology, and replicas that are uninstalled or stopped for some time will be automatically removed from the topology (and so will their associated change logs and metadata).  When starting a new replica, it will only need to know of one other running replica (or be told that it is the first one).  These changes will make automating the deployment of new replicas much simpler and remove the limit on the number of replicas.  We are also improving the way we do backup and restore of a database backend or the whole server, allowing direct use of cloud buckets such as S3 or GCS.  All of these things are planned for the next major release due in the first half of 2020.  Most of these features will be used by our own ForgeRock Identity Platform as a Service offering that will go through stages of Early Access and Beta later this year.

Once we have the ability to fully automate the deployment and the upgrade of a cluster of Directory Services instances, in one or more data centres, we will start working on horizontal scalability for Directory Services, and provide a way to scale the number of servers as the data stored grows, allowing a consistent level of write throughput.  All of this fully automated, to be deployed in the Cloud using Kubernetes.

Benefits of using Directory Services as a data store

Often people ask me why they should use ForgeRock Directory Services rather than a “real” database.  First of all, Directory Services is a database.  It’s a specialised database, built on a standard data model and a standard access protocol: the Lightweight Directory Access Protocol, aka LDAP.  Several people in the past have pointed out that LDAP might even have been the first successful NoSQL database! 🙂  Furthermore, Directory Services also exposes all of the data through a REST/JSON API, while still providing the same security and fine-grained access control mechanisms as through LDAP.  But the main value of Directory Services is that you can achieve very high availability of the data (in the five nines), using standard systems (whether they are bare metal systems, virtual hosts or containers), even with worldwide geographic distribution.  We have many customers that have deployed a single directory service distributed across 3 to 6 data centres around the globe.  The LDAP data model has a flexible schema that can be extended and customised without having to rebuild the database or even restart the servers.  The data can even be exposed through versioned APIs using our REST API.  Finally, the combination of a flexible, extensible schema with fine-grained access controls allows multiple applications to access the data, but with great control over which application can read or write which data.  This results in a single identity and set of credentials for a user, but multiple sets of attributes that can be shared by applications or restricted to a single one: a single central view of the user that is then easier and more cost effective to manage.

Conclusion

Back to the topic of Kubernetes: because of the constraints of the current Directory Services Cloud Deployment Model with version 6.5, we would recommend that you keep your Directory Services deployed in VMs or on bare metal.  But with the next release, which underpins the ForgeRock Cloud offering, we will fully support deploying Directory Services on Docker/Kubernetes.  We will continue our investment in the product to be able to support auto-scaling (using data sharding) in subsequent releases.  Building these solutions is not extremely difficult, but we need time to prove that it’s 100% reliable in all conditions, because in the end, the most wanted and appreciated feature of ForgeRock Directory Services is its reliability.

This blog post was first published @ ludopoitou.com, included here with permission.

ForgeRock DS and the LDAP Relax Rules Control

In ForgeRock Directory Services 6.5, we’ve added support for the LDAP Relax Rules Control, both in the server and in our clients.  One of my colleagues, involved with customers’ deployments, asked me why we added the control and what it should be used for.

The LDAP Relax Rules Control is an LDAP extension that allows a directory user agent (a client) to request the directory service to temporarily relax enforcement of various data and service model rules. The internet-draft is explicit about which rules can be relaxed or not. But typically it can be used to allow a client to write specific operational attributes that should be read-only and managed by the server.

Starting with OpenDJ 3.0, we removed the ability to bulk import LDIF data to a server while preserving the existing data (the “append mode”).  First, performing an import-ldif in append mode was breaking replication.  The import needed to be applied to all replicas, while no change was to happen on the new data.  The process was cumbersome, especially when having multiple data centres.  But also, removing this feature allowed us to have a more generic interface and implement multiple backends using different underlying key-value stores.

But we have a few customers that occasionally need to bulk load a large set of users to their directory service.  In DS 6.0, we added an option to speed up bulk operations using ldapmodify or ldapdelete: --numConnections.  Instead of serialising all updates or adds contained in an LDIF file, the tool will run them in parallel across multiple connections, while also controlling dependencies between changes.  With this option, some of our customers have added several million users to their replicated directory services in minutes.  By controlling the number of connections, one can also balance the need for speed when bulk loading data against the need to keep bandwidth for the regular client applications.

Doing bulk updates over LDAP is now fast, but some customers used the import process to also carry over attributes that are usually managed by the directory server and thus read-only, such as createTimestamp and creatorsName.

And this is specifically what the Relax Rules Control is meant to allow.

So, if you need to bulk load a large set of data, or synchronise data over LDAP from another server, and need to preserve some of the operational attributes, you can use the Relax Rules Control as illustrated below.  Note that the OID for the control is 1.3.6.1.4.1.4203.666.5.12, but ForgeRock DS tools also recognise the RelaxRules string alias.

$ ldapmodify -p 1389 -D "cn=directory manager" -w secret12 \
-J RelaxRules:true --numConnections 4 ../50Kusers.ldif
...
ADD operation successful for DN uid=user.10021,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10022,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10001,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10020,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10026,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10025,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10024,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10005,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10033,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10029,ou=People,dc=example,dc=com
...

Note that because the Relax Rules Control allows overriding some of the rules normally enforced by the server, it’s important to control and restrict which clients or users are allowed to make use of it.  In ForgeRock DS, you would use ACIs (global or not) to define who has permission to use the control.  Out of the box, only Directory Manager can, because it has the bypass access controls privilege.  Check the “Use Control or Extended Operation” section of the Administration Guide for the details on how to allow a user to use a control.
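As a sketch of such an ACI (the group DN is an assumption – substitute whatever group holds your bulk load or synchronisation accounts), granting use of the control looks something like:

(targetcontrol="1.3.6.1.4.1.4203.666.5.12")
(version 3.0; acl "Allow Relax Rules Control";
 allow(read) groupdn="ldap:///cn=Bulk Load Accounts,ou=Groups,dc=example,dc=com";)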

This blog post was first published @ ludopoitou.com, included here with permission.

Explaining index-entry-limit in ForgeRock Directory Services / OpenDJ

A few years ago, I explained the various resource limits in OpenDJ, the open source LDAP and REST directory server.  A few months ago, someone read the post and asked on Twitter about the index-entry-limit.


The index-entry-limit is probably the least understood parameter in the OpenDJ directory server, as was the AllIDThreshold in Sun Directory Server (and its siblings: Netscape Directory, Red Hat Directory, Oracle DSEE…).  So before I dive into explaining what this parameter is, how it’s used and how it can be tuned, let me start by answering the question: how does index-entry-limit relate to other administrative limits?

Answer: it doesn’t!  The index-entry-limit is an internal limit and does not really limit the results returned to clients.  It just limits the resources consumed when processing indexes.

A Directory Server is a very specialized data-store based on the LDAP standard, and its primary goal was to be able to search and return user information such as email addresses or names and phone numbers, very quickly and for a large number of different clients. For that, the directory servers were designed to favor reads over writes, and read optimization was achieved through the use of indexes.

In LDAP, a search request (which can be used to read an entry or search for one or more through the whole database) contains a search filter. The filter may be simple or complex, and composed of one or more attribute value assertions.

A simple filter can be “(sn=Smith)”.  Complex filters combine operators and different attributes: “(&(objectclass=Person)(|(sn=Smith)(cn=*Smith*)))” – find a person whose surname is Smith or whose common name contains Smith.

When the ForgeRock Directory Server / OpenDJ receives a search request, it processes it in 2 phases.  In the first phase, it analyzes the search filter to identify which attributes are indexed, and then uses these indexes to build a list of possible candidates to return.  If there are no indexed attributes or the list is too large, the server decides that the list is actually the whole database.  Such a search request is tagged as “unindexed” and the server verifies that the authenticated user has the “unindexed-search” privilege before continuing.  In the second phase, it reads all the candidates from the database, and assesses the full filter to decide whether to return each entry to the client or not (subject to access controls).

ForgeRock DS / OpenDJ implements attribute indexes as reverse indexes.  Meaning that for a specific attribute, we keep, for each unique value, a list of the entries that contain that value.  Because maintaining a large list of entries for each value of all indexed attributes can have a big cost, both in terms of memory usage and disk I/O (consider that when you add an entry in the Directory, all of its indexed attributes will need to be updated), we introduced a limit to the number of entries that an index record can contain: the index-entry-limit.  For example, if the number of entries that contain the objectClass person exceeds the limit, then we mark the key as “full” and we consider that the list of candidates is actually the whole set of directory entries.  This saves us from updating and reading a very long record, allocating lots of data, only to end up iterating through almost all entries.  You might ask, so why have an index for objectClass then?  Well, in a directory server that contains millions of users, there are in fact very few entries that are not persons.  These entries will have their objectClass values indexed, and searching for those entries will be very efficient thanks to the index.

The index-entry-limit is a limit of the number of entries that are contained in a single index record, per value of an attribute index. Its default value is 4000 and works for most medium to large scale deployments. So, why is it a configurable parameter, and when should you change it?

Because ForgeRock DS is used in many different environments, with various use cases and a great range of entry counts (some of our customers have over 100 million entries in a directory service), we know that one size doesn’t fit all.  But the default value works for most index usages.  Also, the index-entry-limit can be set for each individual index, or for the whole backend (in which case the value applies to all indexes that don’t have a specific value).  It is highly recommended that you only try to change the index-entry-limit of specific indexes, and not the backend default value.

In no case should you increase the index-entry-limit to a value close to the total number of entries in the directory.  This would undermine the performance of both searches and updates, and significantly increase the footprint of the data stored on disk.

There are a few known cases where the index-entry-limit value should be changed (and equally, cases where increasing the value will only consume more resources for no performance gain).  Also keep in mind that when you change the index-entry-limit, you need to rebuild the indexes for which the limit was changed.  So it’s not something that you want to do too often, and definitely not something that you need to adjust constantly.

Groups.  When the server starts, it issues an internal search to find all group entries and caches them for better performance.  The search is based on the objectClass attribute.  If there are more than 4000 groups of one kind (the search is for GroupOfNames, GroupOfUniqueNames, GroupOfEntries, DynamicGroup and ds-virtual-static-group), the search will be unindexed and can take a long time to complete.  In that case, you should increase the index-entry-limit for the objectClass attribute to a value just above the number of groups.
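A sketch of that change (assuming the default userRoot backend, a new limit of 5000 and local connection settings – adjust everything for your deployment), followed by the required index rebuild:

$ dsconfig set-backend-index-prop \
   --hostname localhost --port 4444 \
   --bindDN "cn=Directory Manager" --bindPassword secret12 \
   --backend-name userRoot --index-name objectClass \
   --set index-entry-limit:5000 --trustAll --no-prompt

$ rebuild-index --hostname localhost --port 4444 \
   --bindDN "cn=Directory Manager" --bindPassword secret12 \
   --baseDN dc=example,dc=com --index objectClass --trustAll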

Members (or uniqueMembers).  If you have more than 4000 static groups, and you know that some users are likely to be members of more than 4000 groups, then you should also increase the index-entry-limit for the member attribute (or uniqueMember) to a value just above the maximum number of groups a user can be in, especially if you have enabled the Referential Integrity Plugin (which removes a user from groups when their entry is deleted).

Another typical use case for increasing the index-entry-limit is when you have millions of entries, and an attribute doesn’t have a flat distribution of values. Think about the surname of users. In a wide range of population, there are probably more “Smith” or “Lee” than “Washington”. Within 10M users, would there be more than 4000 “Lee”? If it’s possible, and the server receives searches with filters such as “(sn=Lee)”, then you should consider increasing the limit for the sn attribute.

Backendstat is the tool you want to use to verify the state of the index and whether some records have reached the index-entry-limit. For some attributes, such as ObjectClass, it is normal that the limit is reached. For others, such as sn, it’s probably something you want to check regularly.

The backendstat tool requires exclusive access to the database, and thus can only run against a server that is stopped (or a backup).

To list the indexes, use backendstat list-indexes:

$ backendstat list-indexes -b dc=example,dc=com -n userRoot

Index Name Raw DB Name Type Record Count
dn2id /dc=com,dc=example/dn2id DN2ID 10002
id2entry /dc=com,dc=example/id2entry ID2Entry 10002
referral /dc=com,dc=example/referral DN2URI 0
id2childrencount /dc=com,dc=example/id2childrencount ID2ChildrenCount 3
state /dc=com,dc=example/state State 18
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch MatchingRuleIndex 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 MatchingRuleIndex 31232
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match MatchingRuleIndex 10000
aci.presence /dc=com,dc=example/aci.presence MatchingRuleIndex 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch MatchingRuleIndex 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch MatchingRuleIndex 8605
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 19629
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 MatchingRuleIndex 73235
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch MatchingRuleIndex 10000
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch MatchingRuleIndex 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch MatchingRuleIndex 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch MatchingRuleIndex 10002
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch MatchingRuleIndex 10000
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 32217
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch MatchingRuleIndex 10000
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 86040
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch MatchingRuleIndex 6
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch MatchingRuleIndex 10000

Total: 23

To check the status of the indexes and see which keys are full (i.e. exceeded the index-entry-limit), use backendstat show-index-status. Warning, this may take a long time.

$ backendstat show-index-status -b dc=example,dc=com -n userRoot
Index Name Raw DB Name Valid Confidential Record Count Over Entry Limit 95% 90% 85%
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch true false 0 0 0 0 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 true false 31232 12 0 0 0
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match true false 10000 0 0 0 0
aci.presence /dc=com,dc=example/aci.presence true false 0 0 0 0 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch true false 0 0 0 0 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch true false 8605 0 0 0 0
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 true false 19629 0 0 0 0
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 true false 73235 0 0 0 0
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch true false 10000 0 0 0 0
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch true false 0 0 0 0 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch true false 0 0 0 0 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch true false 10002 0 0 0 0
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch true false 10000 0 0 0 0
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 true false 32217 0 0 0 0
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch true false 10000 0 0 0 0
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 true false 86040 0 0 0 0
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch true false 6 4 0 0 0
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch true false 10000 0 0 0 0
Total: 18
Index: /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6
Over index-entry-limit keys: [.com] [@examp] [ample.] [com] [e.com] [exampl] [le.com] [m] [mple.c] [om] [ple.co] [xample]
Index: /dc=com,dc=example/objectClass.objectIdentifierMatch
Over index-entry-limit keys: [inetorgperson] [organizationalperson] [person] [top]

I hope this long article will help you better understand and tune your ForgeRock Directory Servers for search performance.  Please let me know how it goes.

This blog post was first published @ ludopoitou.com, included here with permission.

Better index troubleshooting with ForgeRock DS / OpenDJ

Many years ago, I wrote about troubleshooting indexes and search performance, explaining the magic “debugSearchIndex” operational attribute, which allows an administrator to get information from the server about the processing of indexes for a specific search query.

The returned value provides insights on the indexes that were used for a particular search, how they were used and how the resulting set of candidates was built, allowing an administrator to understand whether indexes are used optimally or need to be tailored better for specific search queries and filters, in combination with access logs and other tools such as backendstat.

In DS 6.5, we’ve made some improvements to the search filter processing and we’ve changed the format of the debugSearchIndex value to provide better reporting of how indexes are used.

The new format is now JSON based, which gives it more structure and means it can be processed programmatically.  Here are a few examples of the output of the new debugSearchIndex attribute values.

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "dc=example,dc=com" "(&(cn=*Den*)(mail=user.19*))" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"intersection":[{"index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"ser.19","candidates":111,"retained":111},{"index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"user.1","candidates":1111,"retained":111},
{"filter":"(cn=*Den*)", "index":"cn.caseIgnoreSubstringsMatch:6",
"range":"[den,deo[","candidates":103,"retained":5}], "candidates":5},"final":5}

Let’s look at the debugSearchIndex value and interpret it:

{
   "filter": {
     "intersection": [
       {
         "index": "mail.caseIgnoreIA5SubstringsMatch:6",
         "exact": "ser.19",
         "candidates": 111,
         "retained": 111
       },
       {
         "index": "mail.caseIgnoreIA5SubstringsMatch:6",
         "exact": "user.1",
         "candidates": 1111,
         "retained": 111
       },
       {
         "filter": "(cn=*Den*)",
         "index": "cn.caseIgnoreSubstringsMatch:6",
         "range": "[den,deo[",
         "candidates": 103,
         "retained": 5
       }
     ],
     "candidates": 5
   },
   "final": 5
 }

The filter had 2 components: (cn=*Den*) and (mail=user.19*).  Because the whole filter is an AND, the result set is an intersection of several index lookups.  Both are substring filters, but one is a substring of 3 characters and the second one a substring of 7 characters.  By default, substring indexes are built with substrings of 6 characters.  So the filters are treated differently.  The server optimises the processing of indexes so that it tries to use the most effective queries first.  In the case above, the filter (mail=user.19*) is preferred.  2 records are read from the index, and that results in a list of 111 candidates.  Then, the server uses the remaining filter to narrow the result list.  Because the string Den is shorter than the indexed substrings, the server scans a range of keys in the index, starting from the first key matching “den” and stopping before the key that matches “deo”.  This results in 103 candidates, but only 5 are retained because they were part of the previous result set.  So the result is 5 entries matching these filters.

Note the [den,deo[ notation is similar to mathematical Set representation where [ and ] indicate whether a set includes or excludes the boundaries.

Let’s take an example with an OR filter:

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "dc=example,dc=com" "(|(cn=*Denice*)(uid=user.19))" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"union":[{"filter":"(cn=*Denice*)", "index":"cn.caseIgnoreSubstringsMatch:6","exact":"denice","candidates":1}, {"filter":"(uid=user.19)", "index":"uid.caseIgnoreMatch","exact":"user.19","candidates":1}],"candidates":2},"final":2}

As you can see, the result is now a union of 2 exact matches (i.e. reads of index keys), each resulting in 1 candidate.

Finally here’s another example, where the scope is used to attempt to reduce the candidate list:

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "ou=people,dc=example,dc=com" -s one "(mail=user.1)" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"filter":"(mail=user.1)","index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"user.1","candidates":1111},"scope":{"type":"one","candidates":"[NOT-INDEXED]","retained":1111},"final":1111}

You can find more information and details about the debugsearchindex attribute in the ForgeRock Directory Services 6.5 Administration Guide.

This blog post was first published @ ludopoitou.com, included here with permission.

ForgeRock Directory Services 6.5 is Available

The ForgeRock Identity Platform was released and publicly announced early December this year (also here).

As you may guess from the announcement, an important part of the new features has to do with DevOps, running in Docker, automated with Kubernetes.

The underlying datastore for the ForgeRock Identity Platform is ForgeRock Directory Services, and the new 6.5 release comes with a set of new features and improvements that are detailed in the Release Notes, but here are some highlights:

Ease of use has always been important for us, and DS 6.5 brings it to a new level for customers that are deploying other ForgeRock products.  Starting with this version, you can now select, at installation time, one or more profiles.  A profile contains the complete configuration for a specific use: base DN, backend, indexes, schema, specific configuration parameters, administrative users, ACIs and privileges.  Out of the box, we are delivering 3 profiles for ForgeRock Access Management: Identity Store, Configuration Store and the Core Token Service Store; 1 profile for ForgeRock Identity Management: Managed Object Store; and 1 profile for Directory Services evaluation, which contains the data and configuration used throughout our documentation, and allows you to copy and paste the command examples from the guides and replay them against a running server.

To learn more about profiles, get DS 6.5 and run:

setup --help-profiles

To learn about a specific profile, you can run:

setup --help-profile am-cts:6.5.0

With regards to DevOps, containers and automation in the cloud, we’ve continued the efforts that we had started with previous releases.

  • DS 6.5 now supports a method to run post upgrade tasks to the data, such as rebuilding indexes.
  • The server has 2 new HTTP endpoints to check its status.  /isReady indicates that the server is up and running.  /isHealthy indicates whether its current state is optimal, or if there are some temporary limitations, such as a database backend being offline for maintenance, or replication lagging too much (with “too much” being fully configurable).
  • The Grafana sample dashboard has been updated.
  • Like all ForgeRock Identity Platform products, DS comes with a Common Audit handler that publishes log messages to stdout, a common practice when working with Docker containers.

Directory Proxy Server 6.5 now supports “sharding”, i.e. distributing data into multiple discrete replicated directory services.  Such deployments make very large amounts of data easier to manage and give better write scalability.  In this version, the number of “shards” is fixed, but we are working on making the service scale dynamically as the data grows, in future versions.

Directory Services 6.5 now supports limiting the number of connections that can be opened from a single client application. By IP address, a client may be denied, fully allowed or restricted in its number of opened connections, offering a greater protection against misbehaving applications.

The product also now supports the LDAP Relax Rules Control, which allows an administrator to add or modify attributes that are normally read-only.  This feature can be used when having to synchronise data between different LDAP products, so that they have the same timestamps for their creation or modification dates.

We’ve made the “cn=Changelog” suffix and data available on servers that are only acting as Replication hubs (RS), since they are persisting all the changes to replicate them.

We’ve added a couple of troubleshooting tools with the release.  One tool, changelogstat, allows you to list and dump the content of the replication changelog databases.  The supportextract tool allows an administrator to capture the state and logs of a Directory Services instance and make the file available to ForgeRock support quickly.

Java 11 is now fully supported, both Oracle JVM and OpenJDK builds (from Oracle, Red-Hat or Azul Systems).

Finally, like with all releases of Directory Services, we have enhanced the performance and the reliability of the server in many areas. But most importantly, we have fully tested that you can upgrade to 6.5 without any service interruption: from 2.6 to 6.0, you can upgrade an instance and let it replicate with the other instances, then start upgrading the next one, until all instances are on the latest and greatest version. If you use VMs or containers, you can stop an existing instance and replace it with a new one. Or add a new one and then stop an old one… Your choice, but both scenarios are supported.

For further details, read the complete Release Notes. I’m looking forward to your feedback on the features and improvements of the Directory Services 6.5 release!

This blog post was first published @ ludopoitou.com, included here with permission.

The OAuth2 ForgeRock Identity Microservices

ForgeRock Identity Microservices

In Q4 2017, ForgeRock released an Early Access (aka beta) program for three key Identity Microservices, delivered as a compact, single-purpose code set for consumer-scale deployments.  For companies deploying stateless microservices architectures, these microservices offer a micro-gateway enabled solution providing service trust, policy-enforced identity propagation and even OAuth2-based delegation.  The stateless architecture of the ForgeRock Identity Microservices allows for a sidecar-friendly micro-gateway deployment pattern.

I blogged about it here. The sign up form is here.

The Microservices

OAuth2 Token Issuance

Enables service-to-service authentication using the client credentials grant type.  These “bearer” tokens facilitate trust up and down the call chain.

We currently support RSA, EC, and HMAC signing, with the following algorithms:

  • HS256
  • HS384
  • HS512
  • RS256
  • ES256
  • ES384
  • ES512

Token Validation

Supports introspection of OAuth2 / OIDC tokens issued by an OAuth2 AS, as well as native AM tokens.  Token validation is performed using either configured keys, or a lookup via the JWK URI for issuer and kid verification.

One or more Token Introspection API implementations can be configured by setting the value to a comma-separated list of implementation names. Each implementation is given a chance to handle the token, in given order, until an implementation successfully handles the token. An error response occurs if no implementation successfully introspects a given token.

The following implementations are provided:

  • json : Configured by a JSON file.
  • openam : Proxies introspect calls to ForgeRock Access Management.

Token Exchange

Supports exchanging stateless OAuth2 access tokens and native AM tokens for hybrid tokens that grant specific entitlements per resource or audience requested.  Implements the draft OAuth2 token-exchange specification.  Offers no-fuss declarative (“local”) policy enforcement inside the pod without needing to “call home” for identity data.  Also provides policy based entitlement decisions by calling out to ForgeRock Access Management.

The Token-Exchange Policy is a pluggable service which determines whether or not a given token exchange may proceed. The service also determines what audience, scope, and so forth the generated token will have. The following implementations are provided:

  • json (default) : Configured by a JSON file.
  • jwt : Simply returns a JWT for any token that can be introspected.
  • openam : Uses a ForgeRock Access Management Policy Set.

Cloud and Platform

Docker and Kubernetes

A Dockerfile is bundled with the binary.  Soon there will be a Docker repository from which evaluators can pull the images.

A Kubernetes manifest is also provided that demonstrates using environment variables to configure a simple demo instance.

Metrics and Prometheus Integration

The microservices are instrumented to publish basic request metrics and a Prometheus endpoint is also provided, which is convenient for monitoring published metrics.

Audit Tracking

Each incoming HTTP request is assigned a transaction ID, for audit logging purposes, which by default is a UUID. ForgeRock Microservices like other products support propagation of a transaction ID between point-to-point service calls.

Cloud Foundry

The standard binary in Zip format is also directly deploy-able to Cloud Foundry, and will get recognized by the Cloud Foundry Java build pack as a “dist zip” format.

Client Credential Repository

ForgeRock Identity Microservices support Cassandra, MongoDB, any LDAP v3 including ForgeRock Directory Services.

Enhancing User Privacy with OpenID Connect Pairwise Identifiers

This is a quick post to describe how to set up pairwise subject hashing, when issuing OpenID Connect id_tokens that require the user’s sub= claim to be pseudonymous.  The main use case for this approach is to prevent clients or resource servers from being able to track user activity and correlate the same subject’s activity across different applications.

OpenID Connect basically provides two subject identifier types: public or pairwise.  With public, the sub= claim is simply the user id or equivalent for the user.  This creates a flow something like the below:

Typical “public” subject identifier OIDC flow

This is just a typical authorization_code flow – end result is the id_token payload.  The sub= claim is simply clear and readable.  This allows the possibility of correlating all of sub=jdoe activity.

So, what if you want a bit more privacy within your ecosystem?  Well here comes the Pairwise Subject Identifier type.  This allows each client to be basically issued with a non-reversible hash of the sub= claim, preventing correlation.
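Under the hood, the OpenID Connect core spec suggests deriving the pairwise value as a salted hash over the sector identifier and the local subject.  A minimal sketch of that calculation follows (Python; the salt and sector values are placeholders, and AM’s exact implementation details may differ):

import base64
import hashlib

def pairwise_sub(sector_identifier, local_sub, salt):
    # sub = SHA-256( sector_identifier || local_sub || salt ), as suggested
    # in OpenID Connect Core section 8.1.
    digest = hashlib.sha256(
        (sector_identifier + local_sub + salt).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# The same local user produces different sub values for different sectors.
print(pairwise_sub("app.example.com", "jdoe", "some-salt"))
print(pairwise_sub("otherapp.example.com", "jdoe", "some-salt"))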

To configure in ForgeRock Access Management, alter the OIDC provider settings.  On the advanced tab, simply add pairwise as a subject type.

Enabling Pairwise on the provider

 

Next alter the salt for the hash, also on the provider settings advanced tab.

Add a salt for the hash

 

Each client profile, then needs either a request_uri setting or a sector_identifier_uri.  Basically akin to the redirect_uri whitelist.  This is just a mechanism to identify client requests.  On the client profile, add in the necessary sector identifier and change the subject identifier to be “pairwise”.  This is on the client profile Advanced tab.

Client profile settings

Once done, just slightly alter the incoming authorization_code generation request to look something like this:

/openam/oauth2/authorize?response_type=code
&save_consent=0
&decision=Allow
&scope=openid
&client_id=OIDCClient
&redirect_uri=http://app.example.com:8080
&sector_identifier_uri=http://app.example.com:8080

Note the addition of the sector_identifier_uri parameter.  Once you’ve exchanged the authorization_code for an access_token, take a peek inside the associated id_token.  This now contains an opaque sub= claim:
{
  "at_hash": "numADlVL3JIuH2Za4X-G6Q",
  "sub": "lj9/l6hzaqtrO2BwjYvu3NLXKHq46SdorqSgHAUaVws=",
  "auditTrackingId": "f8ca531a-61dd-4372-aece-96d0cea21c21-152094",
  "iss": "http://openam.example.com:8080/openam/oauth2",
  "tokenName": "id_token",
  "aud": "OIDCClient",
  "c_hash": "Pr1RhcSUUDTZUGdOTLsTUQ",
  "org.forgerock.openidconnect.ops": "SJNTKWStNsCH4Zci8nW-CHk69ro",
  "azp": "OIDCClient",
  "auth_time": 1517485644000,
  "realm": "/",
  "exp": 1517489256,
  "tokenType": "JWTToken",
  "iat": 1517485656

}
The overall flow would now look something like this:
OIDC flow with Pairwise

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

Enhancing OAuth2 introspection with a Policy Decision Point

OAuth2 protection of resource server content is typically done either via a call to the authorization service (AS) and its ../introspect endpoint for stateful access_tokens, or, in deployments where stateless access_tokens are used, the resource server (RS) could perform “local” introspection, if it has access to the necessary AS signing material.  All good.  The RS would validate scope values, the token expiration time and so on.

Contrast that to the typical externalised authorization model, with a policy enforcement point (PEP) and policy decision point (PDP).  Something being protected, sends in a request to a central PDP.  That request is likely to contain the object descriptor, a token representing the subject and some contextual data.  The PDP will have a load of pre-built signatures or policies that would be looked up and processed.  The net-net is the PDP sends back a deny/allow style decision which the PEP (either in the form of an SDK or a policy agent) complies with.

So what is this blog about?  Well it’s the juxtaposition of the typical OAuth2 construct, with externalised PDP style authorization.

So the first step is to set up a basic policy within ForgeRock Access Management that protects a basic web URL – http://app.example.com:8080/index.html.  In honesty the thing being protected could be a URL, button, image, physical object or any other schema you see fit.

Out of the box authorization policy summary

To call the PDP, an application would create a REST payload looking something like the following:

REST request payload to PDP

The request would be a POST to the ../openam/json/policies?_action=evaluate endpoint.  This is a protected endpoint, meaning it requires authentication against an AM instance.  In a normal, non-OAuth2-integrated scenario, this would be handled via the iPlanetDirectoryPro header that would be used within the PDP decision.  Now in this case, we don’t have an iPlanetDirectoryPro cookie, simply the access_token.

Application Service Account

So, there are a couple of extra steps to take.  Firstly, we need to give our calling application its own service account.  Simply add a new group and an associated application service user.  This account could then authenticate via shared secret, JWT, x509 or any other authentication method configured.  Make sure to give the group the account is in the privileges to call the REST PDP endpoint.  So back to the use case…

This REST PDP request is the same as any other.  We have the resource being protected, which maps into the policy, and the OAuth2 access_token that was generated out of band, presented to the PDP as part of the request environment.
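As a sketch, the evaluate call carries a payload along these lines (assuming the default policy set; the environment attribute name is simply whatever the condition script expects, and is shown here as an assumption):

POST /openam/json/policies?_action=evaluate

{
  "resources": ["http://app.example.com:8080/index.html"],
  "application": "iPlanetAMWebAgentService",
  "environment": {
    "access_token": ["eyJ0eXAiOiJKV1QiLCJhbGciOi..."]
  }
}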

OAuth2 Validation Script

The main validation is now happening in a simple Policy Condition script.  The script does a few things: performs a call to the AM ../introspect endpoint to perform basic validation – is the token AM issued, valid, within exp and so on.  In addition there are two switches – perform auth_level validation and also perform scope_validation.  Each of these functions takes a configurable setting.  If performAuthLevelCheck is true, make sure to set the acceptableAuthLevel value.  As of AM 5.5, the issued OAuth2 access_token now contains a value called “auth_level”.  This value just ties in the authentication assurance level that has been in AM since the OpenSSO days.  This numeric value is useful to differentiate how a user was validated during OAuth2 issuance. The script basically allows a simple way to perform a minimum acceptable value check.

The other configurable switch is the performScopeCheck boolean.  If true, the script checks to make sure that the submitted access_token is associated with at least a minimum set of required scopes.  The access_token may have more scopes, but it must, as a minimum, have the ones configured in the acceptableScopes attribute.

Validation Responses

Once the script is in place lets run through some examples where access is denied.  The first simple one is if the auth_level of the access_token is less than the configured acceptable value.

acceptable_auth_level_not_met advice message

Next up, is the situation where the scopes associated with the submitted access_token fall short of what is required.  There are two advice payloads that could be sent back here.  Firstly, if the number of scopes is fundamentally too small, the following advice is sent back:

acceptable_scopes_not_met – submitted scopes too few

A second response, associated with mismatched scopes, is if the number of scopes is OK, but the actual values don’t contain the acceptable ones.  The following is seen:

acceptable_scopes_not_met – scope entry missing

That’s all there is to it.  A few things to know.  The TTL of the policy has been set to the exp of the access_token.  Clearly this can be overridden, but it seemed sensible to tie it to the access_token lifespan.

All being well though, a successful response back would look something like the following – depending on what actions you had configured in your policy:

Successful PDP response

Augmenting with Additional Environmental Conditions

So we have an OAuth2-compatible PDP.  Cool!  But what else can we do?  Well, we can augment the scripted decision making with a couple of other conditions – namely the time based, IP address based and LDAP based conditions.

IP and Time based augmentation of access_token validation

The above just shows a simple example of tying the decision making to only allow valid access_token usage between 8:30am and 5:30pm Monday-Friday from a valid IP range.  The other condition worth a mention is the LDAP filter one.

Note: any of the environmental conditions that require session validation would fail, as the script isn’t linking the access_token to an AM session at this point – in some cases (depending on how the access_token was generated) there may never be an associated session.  So beware: they will not work.

The code for the scripted condition is available here.

 

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.