2019 Digital Identity Progress Report

School's out for summer? Well, not quite. Unless you're living on the east coast of Australia, it's looking decidedly bleak weather-wise for most of Europe and the American east coast. But I digress. Is it looking bleak for your digital identity driven projects? What's been a success, where are we heading and what should we look out for?

Where We Are Today

Passwordless - (Report says B-)

Over the last 24 months, there have been some pretty big themes that many organisations embarking on digital identity and security related projects have been trying to succeed at. First up, the age-old chestnut of passwordless authentication. The password is dead, long live the password! We are definitely making progress though. Many of the top public sites (Facebook, LinkedIn, Twitter et al) provide multi-factor authentication options at least. Passwords are still required as the first step, but end user education and familiarity with something other than a password during login must surely be the first steps towards getting rid of them entirely. 2018 also saw the rise of WebAuthn - the W3C standards-based approach for crypto-based challenge-response authentication. Could this accelerate adoption of a password-free world?

API Protection - (Report says C+)

APIs will eat the world? Well, digital disruption needs speed, agility and mashups. APIs help organisations achieve those basic aims, but where are we with respect to the protection of those APIs? API management platforms are now common in most enterprise architectures. They help to perform API provisioning, versioning and life cycle management, but what about security? Many use cases fall under the API security bandwagon, such as service-to-service authentication, least privilege authorization, token exchange and contextual throttling. Most API services are now sitting comfortably behind basic authentication, but fine-grained controls and basic use cases such as token revocation and rotation are still in their infancy. Report says "we must do better".

Microservices Protection - (Report says B-)

Not all APIs are microservices, but many net new additions to projects will leverage this approach. Microservices infrastructures bring many new security challenges as well as benefits. Service versioning, same-service load balancing, high throughputs and fine-grained access controls have given rise to some new emerging security patterns. Both the sidecar and the in-flight/proxy approach to traffic introspection and security enforcement have appeared. Microservices, by design, normally mean very high transactions per second as well as fine-grained access control - with each service performing only a single task. Stateless OAuth2 seems to fit the bill for many projects, but consistency around high-scale token introspection and scope design seems immature.

IoT Security - (Report says C-)

Many digital disruption projects are embracing smart device (HTTP-able) infrastructures. Pairing those devices to real people seems a winner for many industries, from retail and insurance to finance. But, and there's always a but, the main interest for many organisations is not the device, but the data the device is collecting or generating. Device protection is often lacking - default credentials, hard-coded keys, un-upgradable firmware, the inability to use HTTPS and the inability to store access tokens are all very common. There are costs and usability issues with increased device security, and no emerging patterns are consistent. Several regulations and security best practice documents now exist, but adoption is still low.

User Consent Management - (Report says B-)

GDPR has probably had a bigger impact, from an awareness perspective, than any other piece of regulation relating to consent. The consumer, from a pure economic buyer perspective at least, has never been so powerful - one click away from a competitor. From a data perspective, however, it seems the capitalist corporate machine is holding all the cards. Marketing analytics, usage tracking, location tracking - you name it, the service provider wants that data, either to improve your service or to improve their ability to market new services. Many organisations are not stupid. They realise that by offering basic consent management functionality (contact preferences, the ability to be removed, data exportation, activity viewing) they are not only ticking the compliance check box, but can actually create a competitive advantage by giving their user community the image of being a trusted partner to do business with. But will the end user ever truly be in control of their data?

What's Coming

The above 4 topics are not going away any time soon. Knowledge, standards maturity and technology advances should all allow each of those areas to bounce up a grade within the next 18-24 months. But what other concerns are on the horizon?

Well, skills immediately spring to mind. Cyber security in general is known to have a basic skills shortage. Digital identity seems to follow that general trend, and some of these topics are niches within a niche. Getting the right skill set to design microservices security or consent management systems will not be trivial.

What about new threats? They are emerging every day. Bot protection - at both registration and login time - not only helps improve the security posture of an organisation, but also helps improve user analytics, removes opportunities for false advertising and provides a clearer picture of a service's real organic user community. How will things like ML/AI help here - and do they provide another skills challenge or management black hole?

The final topic to mention is usability. Security can be simple in many respects, but usability can make or break a service. As underlying ecosystems become more complex, with a huge supply chain of APIs, cross-boundary federations and devices, how can the end user be both protected and offered a seamless registration and login experience? Dedicated user experience teams exist today, but their skill set will need to be sharpened and focused on the security aspects of any new service.


ForgeRock DS and the LDAP Relax Rules Control

In ForgeRock Directory Services 6.5, we’ve added support for the LDAP Relax Rules Control, both on the server and in our clients. One of my colleagues, involved with customer deployments, asked me why we’ve added the control and what it should be used for.

The LDAP Relax Rules Control is an LDAP extension that allows a directory user agent (a client) to request the directory service to temporarily relax enforcement of various data and service model rules. The internet-draft is explicit about which rules can be relaxed or not. But typically it can be used to allow a client to write specific operational attributes that should be read-only and managed by the server.

Starting with OpenDJ 3.0, we’ve removed the ability to bulk import LDIF data to a server while preserving the existing data (the “append mode”). First, performing an import-ldif in append mode was breaking replication: the import needed to be applied to all replicas, while no change was to happen on the new data. The process was cumbersome, especially when having multiple data-centers. But removing this feature also allowed us to have a more generic interface and implement multiple backends using different underlying key-value stores.

But we have a few customers that occasionally need to bulk load a large set of users into their directory service. In DS 6.0, we’ve added an option to speed up bulk operations using ldapmodify or ldapdelete: --numConnections. Instead of serialising all updates or adds contained in an LDIF file, the tool runs them in parallel across multiple connections, while also controlling dependencies between changes. With this option, some of our customers have added several million users to their replicated directory services in minutes. By controlling the number of connections, one can also balance the need for speed when bulk loading data against the need to keep bandwidth for the regular client applications.

Doing bulk updates over LDAP is now fast, but some customers used the import process to also carry over attributes that are usually managed by the directory server and thus read-only, such as createTimestamp and creatorsName.

And this is specifically what the Relax Rules Control is meant to allow.

So, if you need to bulk load a large set of data, or synchronise data over LDAP from another server, and need to preserve some of the operational attributes, you can use the Relax Rules Control as illustrated below. Note that the OID for the control is 1.3.6.1.4.1.4203.666.5.12, but ForgeRock DS tools also recognise the RelaxRules string alias.

$ ldapmodify -p 1389 -D cn=directory\ manager -w secret12 \
  -J RelaxRules:true --numConnections 4 ../50Kusers.ldif
...
ADD operation successful for DN uid=user.10021,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10022,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10001,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10020,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10026,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10025,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10024,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10005,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10033,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10029,ou=People,dc=example,dc=com
...

Note that because the Relax Rules Control allows clients to override some of the rules normally enforced by the server, it’s important to control and restrict which clients or users are allowed to make use of it. In ForgeRock DS, you would use ACIs (global or not) to define who has permission to use the control. Out of the box, only Directory Manager can, because it has the bypass access controls privilege. Check the “Use Control or Extended Operation” section of the Administration Guide for the details on how to allow a user to use a control.
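
As a sketch, granting use of the control to a hypothetical synchronisation account could be done with a global ACI like the one below, added with dsconfig. The host, port, credentials and the uid=sync account are assumptions for illustration:

$ dsconfig set-access-control-handler-prop \
    --hostname localhost --port 4444 \
    --bindDN "cn=Directory Manager" --bindPassword secret12 \
    --add global-aci:'(targetcontrol="1.3.6.1.4.1.4203.666.5.12") (version 3.0; acl "Allow Relax Rules Control"; allow(read) userdn="ldap:///uid=sync,ou=Apps,dc=example,dc=com";)' \
    --trustAll --no-prompt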

Explaining index-entry-limit in ForgeRock Directory Services / OpenDJ

A few years ago, I explained the various resource limits in OpenDJ, the open source LDAP and REST directory server. A few months ago, someone read the post and asked on Twitter about the index-entry-limit.

The index-entry-limit is probably the least understood parameter in the OpenDJ directory server, as was the AllIDThreshold in Sun Directory Server (and its siblings: Netscape Directory, Red Hat Directory, Oracle DSEE…). So before I dive into explaining what this parameter is, how it’s used and how it can be tuned, let me start by answering the question: how does index-entry-limit relate to other administrative limits?

Answer: it doesn’t! The index-entry-limit is an internal limit and does not really limit the results returned to clients. It just limits the resources consumed when processing indexes.

A Directory Server is a very specialized data-store based on the LDAP standard, and its primary goal was to be able to search and return user information such as email addresses or names and phone numbers, very quickly and for a large number of different clients. For that, the directory servers were designed to favor reads over writes, and read optimization was achieved through the use of indexes.

In LDAP, a search request (which can be used to read an entry or search for one or more through the whole database) contains a search filter. The filter may be simple or complex, and composed of one or more attribute value assertions.

A simple filter can be “(sn=Smith)”. Complex filters combine operators and different attributes: “(&(objectclass=Person)(|(sn=Smith)(cn=*Smith*)))” – find a person whose surname is Smith or whose common name contains Smith.

When the ForgeRock Directory Server / OpenDJ receives a search request, it processes it in 2 phases. In the first phase, it analyzes the search filter to identify which attributes are indexed, and then uses these indexes to build a list of possible candidates to return. If there are no indexed attributes or the list is too large, the server decides that the list is actually the whole database. Such a search request is tagged as “unindexed” and the server verifies that the authenticated user has the “unindexed-search” privilege before continuing. In the second phase, it reads all the candidates from the database and evaluates the full filter to decide whether or not to return each entry to the client (subject to access controls).
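
As an aside, if an application account legitimately needs to run such searches, the privilege can be granted on its entry. Here is a sketch with a hypothetical reporting account (connection details as in the other examples):

$ ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w secret12 << EOF
dn: uid=reporting,ou=Apps,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: unindexed-search
EOF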

ForgeRock DS / OpenDJ implements attribute indexes as reverse indexes: for a specific attribute, we keep each unique value paired with the list of entries that contain that value. Because maintaining a large list of entries for each value of every indexed attribute may have a big cost, both in terms of memory usage and disk I/O (remember that when you add an entry to the directory, all of its indexed attributes need to be updated), we introduced a limit to the number of entries that an index record can contain: the index-entry-limit. For example, if the number of entries that contain the objectClass person exceeds the limit, then we mark the key as “full” and we consider that the list of candidates is actually the whole set of directory entries. This saves us from updating and reading a very long record, and allocating lots of data, only to end up iterating through almost all entries. You might ask, why have an index for objectClass then? Well, in a directory server that contains millions of users, there are in fact very few entries that are not persons. These entries will have their objectClass values indexed, and searching for those entries will be very efficient thanks to the index.

The index-entry-limit is a limit on the number of entries that can be contained in a single index record, per value of an attribute index. Its default value is 4000, which works for most medium to large scale deployments. So, why is it a configurable parameter, and when should you change it?

Because ForgeRock DS is used in many different environments, with various use cases and a wide range of entry counts (some of our customers have over 100 million entries in a directory service), we know that one size doesn’t fit all. But the default value works for most index usages. Also, the index-entry-limit can be set for each individual index, or for the whole backend (in which case the value applies to all indexes that don’t have a specific value). It is highly recommended that you only change the index-entry-limit of specific indexes, and not the backend default value.

In no case should you increase the index-entry-limit to a value close to the total number of entries in the directory. This would undermine the performance of both searches and updates, and significantly increase the footprint of the data stored on disk.

There are a few known cases where the index-entry-limit value should be changed (and equally cases where increasing the value will only consume more resources for no performance gain). Also keep in mind that when you change the index-entry-limit, you need to rebuild the indexes for which the limit was changed. So it’s not something that you want to do too often, and definitely not something that you need to adjust constantly.

Groups. When the server starts, it issues an internal search to find all group entries and caches them for better performance. The search is based on the objectClass attribute. If there are more than 4000 groups of one kind (the search covers GroupOfNames, GroupOfUniqueNames, GroupOfEntries, DynamicGroup and ds-virtual-static-group), the search will be unindexed and can take a long time to complete. In that case, you should increase the index-entry-limit for the objectClass attribute to a value just above the number of groups.
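
For illustration, here is roughly what that change could look like with dsconfig and rebuild-index. The connection details, the userRoot backend name and the value of 6000 are assumptions to adapt to your deployment:

$ dsconfig set-backend-index-prop \
    --hostname localhost --port 4444 \
    --bindDN "cn=Directory Manager" --bindPassword secret12 \
    --backend-name userRoot --index-name objectClass \
    --set index-entry-limit:6000 \
    --trustAll --no-prompt

# Rebuild the index so the new limit is taken into account
$ rebuild-index \
    --hostname localhost --port 4444 \
    --bindDN "cn=Directory Manager" --bindPassword secret12 \
    --baseDN dc=example,dc=com --index objectClass \
    --trustAll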

Members (or uniqueMembers). If you have more than 4000 static groups, and you know that some users are likely to be members of more than 4000 groups, then you should also increase the index-entry-limit for the member attribute (or uniqueMember) to a value just above the maximum number of groups a user can be in, especially if you have enabled the Referential Integrity Plugin (which removes a user from groups when their entry is deleted).

Another typical use case for increasing the index-entry-limit is when you have millions of entries and an attribute doesn’t have a flat distribution of values. Think about the surnames of users: in a wide population, there are probably more “Smith” or “Lee” than “Washington”. Within 10M users, would there be more than 4000 “Lee”? If that’s possible, and the server receives searches with filters such as “(sn=Lee)”, then you should consider increasing the limit for the sn attribute.

Backendstat is the tool you want to use to verify the state of the indexes and whether some records have reached the index-entry-limit. For some attributes, such as objectClass, it is normal that the limit is reached. For others, such as sn, it’s probably something you want to check regularly.

The backendstat tool requires exclusive access to the database, and thus can only run against a server that is stopped (or a backup).

To list the indexes, use backendstat list-indexes:

$ backendstat list-indexes -b dc=example,dc=com -n userRoot

Index Name Raw DB Name Type Record Count
dn2id /dc=com,dc=example/dn2id DN2ID 10002
id2entry /dc=com,dc=example/id2entry ID2Entry 10002
referral /dc=com,dc=example/referral DN2URI 0
id2childrencount /dc=com,dc=example/id2childrencount ID2ChildrenCount 3
state /dc=com,dc=example/state State 18
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch MatchingRuleIndex 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 MatchingRuleIndex 31232
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match MatchingRuleIndex 10000
aci.presence /dc=com,dc=example/aci.presence MatchingRuleIndex 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch MatchingRuleIndex 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch MatchingRuleIndex 8605
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 19629
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 MatchingRuleIndex 73235
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch MatchingRuleIndex 10000
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch MatchingRuleIndex 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch MatchingRuleIndex 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch MatchingRuleIndex 10002
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch MatchingRuleIndex 10000
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 32217
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch MatchingRuleIndex 10000
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 86040
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch MatchingRuleIndex 6
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch MatchingRuleIndex 10000

Total: 23

To check the status of the indexes and see which keys are full (i.e. have exceeded the index-entry-limit), use backendstat show-index-status. Warning: this may take a long time.

$ backendstat show-index-status -b dc=example,dc=com -n userRoot
Index Name Raw DB Name Valid Confidential Record Count Over Entry Limit 95% 90% 85%
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch true false 0 0 0 0 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 true false 31232 12 0 0 0
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match true false 10000 0 0 0 0
aci.presence /dc=com,dc=example/aci.presence true false 0 0 0 0 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch true false 0 0 0 0 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch true false 8605 0 0 0 0
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 true false 19629 0 0 0 0
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 true false 73235 0 0 0 0
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch true false 10000 0 0 0 0
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch true false 0 0 0 0 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch true false 0 0 0 0 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch true false 10002 0 0 0 0
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch true false 10000 0 0 0 0
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 true false 32217 0 0 0 0
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch true false 10000 0 0 0 0
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 true false 86040 0 0 0 0
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch true false 6 4 0 0 0
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch true false 10000 0 0 0 0
Total: 18
Index: /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6
Over index-entry-limit keys: [.com] [@examp] [ample.] [com] [e.com] [exampl] [le.com] [m] [mple.c] [om] [ple.co] [xample]
Index: /dc=com,dc=example/objectClass.objectIdentifierMatch
Over index-entry-limit keys: [inetorgperson] [organizationalperson] [person] [top]

I hope this long article will help you better understand and tune your ForgeRock Directory Servers for search performance. Please let me know how it goes.

Better index troubleshooting with ForgeRock DS / OpenDJ

Many years ago, I wrote about troubleshooting indexes and search performance, explaining the magic “debugSearchIndex” operational attribute, which allows an administrator to get information from the server about the processing of indexes for a specific search query.

The returned value provides insights on the indexes that were used for a particular search, how they were used and how the resulting set of candidates was built, allowing an administrator to understand whether indexes are used optimally or need to be tailored better for specific search queries and filters, in combination with access logs and other tools such as backendstat.

In DS 6.5, we’ve made some improvements in the search filter processing and we’ve changed the format of the debugSearchIndex value to provide a better reporting of how indexes are used.

The new format is JSON-based, which gives it more structure and means it can be processed programmatically. Here are a few examples of the new debugSearchIndex attribute values.

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "dc=example,dc=com" "(&(cn=*Den*)(mail=user.19*))" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"intersection":[{"index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"ser.19","candidates":111,"retained":111},{"index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"user.1","candidates":1111,"retained":111},
{"filter":"(cn=*Den*)", "index":"cn.caseIgnoreSubstringsMatch:6",
"range":"[den,deo[","candidates":103,"retained":5}], "candidates":5},"final":5}

Let’s look at the debugSearchIndex value and interpret it:

{
"filter": {
"intersection": [
{
"index": "mail.caseIgnoreIA5SubstringsMatch:6",
"exact": "ser.19",
"candidates": 111,
"retained": 111
},
{
"index": "mail.caseIgnoreIA5SubstringsMatch:6",
"exact": "user.1",
"candidates": 1111,
"retained": 111
},
{
"filter": "(cn=*Den*)",
"index": "cn.caseIgnoreSubstringsMatch:6",
"range": "[den,deo[",
"candidates": 103,
"retained": 5
}
],
"candidates": 5
},
"final": 5
}

The filter had 2 components: (cn=*Den*) and (mail=user.19*). Because the whole filter is an AND, the result set is an intersection of several index lookups. Both components are substring filters, but one is a substring of 3 characters and the other a substring of 7 characters. By default, substring indexes are built with substrings of 6 characters, so the two filters are treated differently. The server optimises the processing of indexes so that it tries the most effective lookups first. In the case above, the filter (mail=user.19*) is preferred: 2 records are read from the index, resulting in a list of 111 candidates. Then, the server uses the remaining filter to narrow the result list. Because the string Den is shorter than the indexed substrings, the server scans a range of keys in the index, starting from the first key matching “den” and stopping before the key that matches “deo”. This results in 103 candidates, but only 5 are retained because they were part of the previous result set. So the result is 5 entries that match these filters.

Note that the [den,deo[ notation is similar to the mathematical representation of intervals, where [ and ] indicate whether the boundaries are included or excluded.

Let’s take an example with an OR filter:

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "dc=example,dc=com" "(|(cn=*Denice*)(uid=user.19))" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"union":[{"filter":"(cn=*Denice*)", "index":"cn.caseIgnoreSubstringsMatch:6","exact":"denice","candidates":1}, {"filter":"(uid=user.19)", "index":"uid.caseIgnoreMatch","exact":"user.19","candidates":1}],"candidates":2},"final":2}

As you can see, the result is now a union of 2 exact matches (i.e. reads of index keys), each resulting in 1 candidate.

Finally here’s another example, where the scope is used to attempt to reduce the candidate list:

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "ou=people,dc=example,dc=com" -s one "(mail=user.1)" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"filter":"(mail=user.1)","index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"user.1","candidates":1111},"scope":{"type":"one","candidates":"[NOT-INDEXED]","retained":1111},"final":1111}

You can find more information and details about the debugsearchindex attribute in the ForgeRock Directory Services 6.5 Administration Guide.

Intelligent Authn and more

ForgeRock Access Management (AM) 6.5 brings many new features and improvements: support for standard Web Authentication (WebAuthn), more built-in intelligent authentication nodes, support for secret stores including keystores, file-based stores, and HSMs, as well as CTS and OAuth 2.0/OpenID Connect enhancements.

The AM 6.5 docs are the best yet. Highlights:

  • The new Authentication Node Developer’s Guide shows you how to develop and maintain your own intelligent authentication nodes in Java for use alongside built-in nodes and third-party nodes from the marketplace. (New to authentication nodes and trees? In a nutshell, AM 6 and later let you use decision trees to create authentication journeys that best fit any use case. For more, start with this blog.)
  • The OAuth 2.0 Guide for 6.5 has improved a lot, making it easier to understand and use OAuth 2.0 features in AM (even if you haven’t read all the RFCs ;-). The guide now helps you decide quickly which flow to use for your case. The descriptions and instructions for flows have been reworked for you to find what you need fast.
  • The AM 6.5 docs release includes 40 improvements and new features and over 100 fixes and updates, many in response to questions from readers. So please continue to send your feedback, which you can do directly from the docs as you read them. (Click at the top right to start.)

ForgeRock Directory Services 6.5 is Available

The ForgeRock Identity Platform was released and publicly announced early December this year (also here).

As you may guess from the announcement, an important part of the new features has to do with DevOps, running in Docker, automated with Kubernetes.

The underlying datastore for the ForgeRock Identity Platform is ForgeRock Directory Services, and the new 6.5 release comes with a set of new features and improvements that are detailed in the Release Notes. Here are some highlights:

Ease of use has always been important for us, and DS 6.5 brings it to a new level for customers that are deploying other ForgeRock products. Starting with this version, you can select, at installation time, one or more profiles. A profile contains the complete configuration for a specific use: base DN, backend, indexes, schema, specific configuration parameters, administrative users, ACIs and privileges. Out of the box, we deliver 3 profiles for ForgeRock Access Management (Identity Store, Configuration Store and Core Token Service Store), 1 profile for ForgeRock Identity Management (Managed Object Store), and 1 profile for Directory Services evaluation, which contains the data and configuration used throughout our documentation and lets you copy and paste the command examples from the guides and replay them against a running server.

To learn more about profiles, get DS 6.5 and run:

setup --help-profiles

To learn about a specific profile, you can run:

setup --help-profile am-cts:6.5.0
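
As an illustration, installing a server with one of these profiles could look roughly like the sketch below. The option values (ports, password, hostname) and the exact profile argument are assumptions for this example; run setup --help-profiles against your version to confirm the options:

$ ./setup directory-server \
    --rootUserDN "cn=Directory Manager" --rootUserPassword secret12 \
    --hostname ds1.example.com \
    --ldapPort 1389 --adminConnectorPort 4444 \
    --profile am-cts:6.5.0 \
    --acceptLicense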

With regards to DevOps, containers and automation in the cloud, we’ve continued the efforts that we had started with previous releases.

  • DS 6.5 now supports a method to run post-upgrade tasks on the data, such as rebuilding indexes.
  • The server has 2 new HTTP endpoints to check its status. /isReady indicates whether the server is up and running. /isHealthy indicates whether its current state is optimal, or whether there are some temporary limitations, such as a database backend being offline for maintenance, or the replication lagging too much (with "too much" being fully configurable). See the sketch after this list.
  • The Grafana sample dashboard has been updated.
  • Like all ForgeRock Identity Platform products, DS comes with a Common Audit handler that publishes log messages to stdout, a common practice when working with Docker containers.
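
The new status endpoints mentioned in the list above are plain HTTP resources, so a load balancer or container orchestrator can probe them directly. A minimal sketch, assuming the HTTP connection handler is enabled on port 8080 of ds1.example.com:

# Print only the HTTP status code returned by each endpoint
$ curl --silent --output /dev/null --write-out "%{http_code}\n" http://ds1.example.com:8080/isReady
$ curl --silent --output /dev/null --write-out "%{http_code}\n" http://ds1.example.com:8080/isHealthy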

Directory Proxy Server 6.5 now supports “sharding”, i.e. distributing data into multiple discrete replicated directory services. Such deployments make very large amounts of data easier to manage and give better write scalability. In this version, the number of “shards” is fixed, but we are working on making the service scale dynamically as the data grows in future versions.

Directory Services 6.5 now supports limiting the number of connections that can be opened from a single client application. By IP address, a client may be denied, fully allowed or restricted in its number of open connections, offering greater protection against misbehaving applications.

The product also now supports the LDAP Relax Rules Control, which allows an administrator to add or modify attributes that are normally read-only. This feature can be used when synchronising data between different LDAP products, so that entries keep the same timestamps for their creation or modification dates.

We’ve made the “cn=Changelog” suffix and its data available on servers that are only acting as replication hubs (RS), since they persist all the changes in order to replicate them.

We’ve added a couple of troubleshooting tools with the release. One tool, changelogstat, allows you to list and dump the content of the replication changelog databases. The supportextract tool allows an administrator to capture the state and logs of a Directory Services instance and quickly make the file available to ForgeRock support.

Java 11 is now fully supported, for both the Oracle JVM and OpenJDK builds (from Oracle, Red Hat or Azul Systems).

Finally, as with all releases of Directory Services, we have enhanced the performance and reliability of the server in many areas. Most importantly, we have fully tested that you can upgrade to 6.5 without any service interruption: from any version between 2.6 and 6.0, you can upgrade an instance and let it replicate with the other instances, then start upgrading the next one, until all instances are on the latest and greatest version. If you use VMs or containers, you can stop an existing instance and replace it with a new one, or add a new one and then stop an old one. Your choice; both scenarios are supported.

For further details, read the complete Release Notes. I’m looking forward to your feedback on the features and improvements of the Directory Services 6.5 release!

How to build an SSO client for your REST APIs with OIDC

The Problem

REST APIs are everywhere now, and for good reason. Say you are a savvy service provider – you don’t want to only offer a web application that requires direct user interaction. You know that business value comes from integration and partnerships, and so you publish your services as a REST API. By providing this API, you enable your services to be used by many different applications, expanding your reach and making more money as a result.

Security is critical for every form of service you offer, including your REST API. At the same time, you know that for an API to be successful, it has to be easy to work with. You balance security and usability by relying on standards – specifically, OAuth2. This is a great choice, because there is a broad ecosystem available to support the publication and consumption of OAuth2-based APIs (see “The ForgeRock Platform” for details).  There are many options to choose from in the market for client software; these are used to request access tokens and then supply them as part of the REST API calls. If you are unsure how OAuth2 works, take a look at my past article “The OAuth2 Apartment Building“.

Your own web applications are still very important – direct user interaction with your services is absolutely needed. This presents you with a choice – build your own web applications as OAuth2-based clients that use your REST APIs similar to how third-parties would use them, or build something proprietary which does not use those APIs. It may initially be easier to build an application without relying on the REST API; designing a complete REST API that is suitable for all use-cases can be hard. The problem with this option is that now you are faced with maintaining multiple points of entry into your system – the REST API used by third-parties and your proprietary web application. Inevitably, one of those will be less robust than the other, and you will have essentially doubled your work to maintain them. The best approach for the long term is to build your web applications using the same OAuth2-based REST APIs that third-parties use.

Presenting a single sign-on experience for your users is as important as ever. When they login anywhere within your platform, they have a reasonable expectation that the session they started will be good everywhere they browse within your platform. Likewise, they have an expectation that whenever they logout from anywhere within your platform, their session will be terminated everywhere within your platform. Since your web application is being built on top of an OAuth2-based REST API, you will need to find a way to provide this kind of seamless session experience in that environment.

The Solution

OpenID Connect (OIDC) is the fundamental technique required to achieve single sign-on in this context. OIDC is a standard means to allow an “OpenID Provider” (OP) to handle authentication for a user on behalf of a “relying party” (RP) application (for a high-level overview of how OIDC works, take a look at my previous article “The OpenID Connect Neighborhood“).  Using OIDC, you can obtain the two things you need: a valid OAuth2 access token to submit to the REST API and information about the user that is currently logged in.  The RP needs the authenticated user’s identity details and gets them in the form of an id token from the OP.  By virtue of being an extension to OAuth2, logging in with OIDC will also allow the RP to obtain the access token. ForgeRock Access Management is built to operate as an OpenID Provider.

Initially logging in is a well-established part of the OIDC process. The first time a user needs to login to any of your web applications they will be redirected to the OP to do so. The OP is responsible for authenticating them and then maintaining session details about that authentication. The OP will set a cookie in the user’s browser that identifies this session, which is only readable by the OP. The OP will then redirect the user back to the RP so that it can fetch the tokens it needs.

Keeping tokens current is the main challenge that your web applications need to solve for single sign-on. By “current” I mean that they need to stay in sync with the session established within the OP – when that session is valid, then the RP tokens should be valid. When that session ends, the tokens used by the RP should be revoked. The problem is that this session identifier lives in the user’s browser under the OP domain and is completely unavailable to the RP. This makes monitoring for changes to that session difficult for the RP. Fortunately, there is a trick that browsers can use to overcome these limitations.

Hidden iframes within a web application provide a very powerful means of monitoring for OP session status. These are basically non-interactive browser contexts which can leverage redirection to pass messages between the OP and the RP. By embedding an iframe within your web application you can periodically use it to call the OP’s authorization endpoint with the “prompt=none” URL parameter. When the iframe loads this URL, it will include the OP session cookie as part of the request; this is how the OP will be able to recognize the session. The authorization endpoint will always respond by redirecting the frame to your specified “redirect_uri” location, along with additional parameters which tell you details about the state of the user’s session. If the session is still valid, the parameters will allow you to determine the currently logged-in user from the id token details. If the session has expired, then the response will include error messages such as “interaction_required“. When the iframe has these messages available, it can pass information about them back to the parent web application frame using the browser’s postMessage API.
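
To make that concrete, the request the hidden iframe loads looks roughly like the following. The AM host, realm path and client_id are illustrative assumptions; the query parameters themselves are standard OIDC, and the exact response_type depends on how your RP is registered:

https://openam.example.com/openam/oauth2/authorize
    ?client_id=my-spa
    &response_type=id_token
    &scope=openid
    &nonce=<random-value>
    &redirect_uri=https://app.example.com/session-check.html
    &prompt=none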

Deciding when to check the session at the OP using the hidden iframe is a topic of debate. The draft specification for OpenID Connect Session Management states:

[I]t is possible to repeat the Authentication Request with prompt=none. However, this causes network traffic and this is problematic on the mobile devices that are becoming increasingly popular.

The specification then goes on to describe a technique involving multiple iframes which exist in both the RP and the OP domains. The spec suggests that the OP-based iframe should use JavaScript to poll for changes to a cookie value that represents the current state of the session on the OP; upon a change detection, only then use the prompt=none technique. The theory here is that polling for cookie value changes avoids the cost of network traffic, which on mobile devices is especially concerning. While that is a laudable goal in theory, there are several practical problems with this approach.

One problem is the simple fact that many security practices suggest setting session cookies to be “httpOnly“; this makes them impossible to read from JavaScript, which makes them impossible to use in the way described by the spec. This issue could be overcome if the OP sets an additional cookie that is not marked as httpOnly and is intended exclusively for the use in the OP iframe. This cookie value would have to be maintained so that it changed every time the main session value changed, but also must not give away anything about the main session value that might undermine the security provided by the httpOnly flag.

Another problem is simply adoption – the value to compare against when monitoring for changes in the OP frame is an extra return parameter from the authorization response (called session_state). Most OIDC RP libraries do not recognize this parameter, and so would have to be altered to support it.

The most critical problem with this specification is the concern it has regarding mobile traffic; this was first expressed in August 2012 as part of draft 08, back when mobile networks were merely “increasingly popular”. Now they have become much more robust and ubiquitous. The relative cost of a periodic network request to check session status at the OP is much lower, whereas the cost of the specification in terms of complexity remains high.

The simplest solution would be to just use a setInterval call to re-initiate the authentication request every few seconds (or minutes, if so desired). This would happen regardless of user interaction in the RP, and it would be triggered at whatever frequency you configure. This approach, while easy, does have the downside of causing more load on the system – both in terms of network overhead and work for the OP.

Another option is to use an event-based strategy – basically, only check the session status when the user does something within the RP. This could be page transitions, network requests, key presses or mouse clicks. Whatever events you choose to monitor, you will want to be sure not to issue more than one session check request within a particular window of time (say, every few seconds at the minimum). This is probably the best option, since it won’t overwhelm the network or the OP, and yet still provides very quick feedback to the user when their session expires.

Handling an expired session within the RP is pretty straightforward. Just revoke the access token using the mechanism described in the OAuth 2.0 Token Revocation specification and call the end session endpoint with the id token. Afterwards, simply delete the tokens from your RP memory and redirect the user to a page that they can access without credentials (this might end up just being the OP login page).
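
As a rough sketch of those two calls, assuming AM's default OAuth2 endpoints under https://openam.example.com/openam and a public client identified as my-spa (the token values are placeholders):

# Revoke the access token (an RFC 7009 style revocation request)
$ curl --request POST \
    --data "token=<access_token>&client_id=my-spa" \
    https://openam.example.com/openam/oauth2/token/revoke

# End the OP session, passing the id token as a hint
$ curl "https://openam.example.com/openam/oauth2/connect/endSession?id_token_hint=<id_token>"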

Sequence Diagram

The general pattern of requests for this process looks like this:

The ForgeRock Platform

You can use the ForgeRock Identity Platform to build the infrastructure needed for this environment. ForgeRock Access Management operates as the OpenID Provider (OP) – it offers strong authentication, maintains sessions and issues both id and access tokens. ForgeRock Identity Gateway can protect your REST APIs by acting as the Resource Server (RS). It does this by intercepting requests to your APIs; before forwarding the request along, it will read the token from the request and perform token introspection via a call to AM. If AM indicates that the access token has the necessary scopes, the request will be forwarded to the underlying REST API.
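
For reference, the introspection call that the gateway makes to AM is essentially a standard RFC 7662 request. A sketch, with an assumed AM base URL, resource server credentials and a placeholder token:

$ curl --request POST \
    --user ig-resource-server:rs-secret \
    --data "token=<access_token>" \
    https://openam.example.com/openam/oauth2/introspect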

An example of this configuration which uses ForgeRock Identity Management’s REST API as the API being protected is available to try out, here: Platform OAuth2 Sample.

The Ideal Relying Party

This approach to session management could be used for any type of OIDC RP that uses the browser as a user agent. The only real technical requirement is that it can embed a non-interactive browser context (such as a hidden iframe) with access to the OP session cookie. However, when your web application is designed to work exclusively with your OAuth2 REST APIs as the back-end, you will probably find that the best choice for building your RP is a pure front-end technology such as a Single-Page Application (i.e. “SPA”). This is because doing so will allow you to avoid having to run an additional application server that is only operating as a proxy to your REST API.

The SPA pattern is a popular modern technique for building web applications, and it does not require the use of an application server. Instead, SPAs execute entirely within a web browser using JavaScript. This JavaScript application code is completely public – any user familiar with how web pages operate can easily read all of it. As a result, these applications cannot have anything sensitive (like passwords) included within their code. This means that SPAs are classified as “public” clients.

Public clients have two types of grants available to implement – Authorization Code and Implicit. Based on the descriptions in the specification, it may appear that a SPA should be built using the implicit grant; however, industry trends and best current practices that have emerged since the initial spec was written suggest that this is not the best choice after all. Instead, use of the authorization code grant as a public client is considered more secure (most notably by avoiding presence of the access token in browser URL history).

PKCE is an extension to OAuth2 that is designed specifically to make the authorization code grant even more secure for public clients. Essentially, PKCE prevents a malicious third party from using a public client’s authorization code to obtain an access token. While it should be very difficult to intercept an authorization code served over HTTPS, using PKCE provides a valuable additional layer of protection.
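
To illustrate the mechanics, here is a minimal sketch of how a code_verifier and its S256 code_challenge relate, using openssl on the command line purely for illustration (in a SPA this is handled by the client library):

# Generate a random code_verifier (43-128 unreserved characters)
$ code_verifier=$(openssl rand -base64 60 | tr -d '=+/' | cut -c1-43)
# Derive the code_challenge: base64url(SHA-256(code_verifier)) without padding
$ code_challenge=$(printf '%s' "$code_verifier" | openssl dgst -sha256 -binary | openssl base64 | tr '+/' '-_' | tr -d '=')

The authorization request carries the code_challenge (with code_challenge_method=S256), while the token request carries the original code_verifier, so only the client that started the flow can redeem the authorization code.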

To summarize, the ideal relying party for your REST APIs will be a single-page app that uses PKCE to obtain the tokens and periodically checks that the session at the OP is still valid.

The Code

ForgeRock has published two open source libraries to help make building this sort of RP as easy as possible:

AppAuth Helper is a wrapper around the AppAuth-JS library that makes it easy to use in the context of a Single-Page App. AppAuth-JS is designed to support PKCE, which makes it a good choice for obtaining tokens in a public client. This helper automates the usage of AppAuth-JS  for token acquisition and renewal, providing a very seamless login and token management experience.

OIDC Session Check is a library designed to make it easy to add the session-checking hidden iframe into your web application RP. It can be added into any web-based app (SPA or not) – the only requirement is that you supply the username of the user that is currently logged in. Then you only have to decide when to check the session – based on a regular poll and/or based on various events.

For an example showing how to use these two together, take a look at this OAuth2 client that was designed to work with the ForgeRock Platform OAuth2 Sample: IDM 6.0 End-User UI with AppAuth

DevOps docs leap forward

The ForgeRock DevOps docs for 6.5 add a lot beyond version 6. Not only do the 6.5 DevOps Developer’s Guide (formerly DevOps Guide) and Quick Start Guide cover everything they addressed in 6, you now get much more guidance:

  • The Start Here roadmap gives you an overview of all docs.
  • The Release Notes bring you up to date quickly from the previous release.
  • The CDM Cookbooks bring you the Cloud Deployment Model, a recipe for common use of the ForgeRock Identity Platform in a DevOps environment. At present, ForgeRock publishes cookbooks for Google’s cloud and Amazon’s cloud, relying on Kubernetes for orchestration in both clouds. Make sure you read through to the Benchmarking chapter, where you will learn what it cost ForgeRock to run sample deployments in the real world.
  • The Site Reliability Guides cover how to customize and run the deployments in the cloud of your choice.

Congratulations to everyone in the cloud deployment team on an impressive release, and especially to Gina, David, and Shankar for a great doc set!

Brokering Identity Services Into Pivotal Cloud Foundry

Introduction

Pivotal Cloud Foundry (PCF) deployments are maturing across the corporate landscape. PCF’s out-of-the-box identity and access management (IAM) tool, UAA (User Accounts and Authentication), provides basic user management functions and OAuth 2.0/OIDC 1.0 support. UAA has come a long way since its inception and provides a solid foundation of IAM services for an isolated application ecosystem running on the Pivotal platform. As organizations experience ever more demanding requirements pushed on their applications, they start realizing the need for a full IAM platform that provides identity services beyond what UAA can offer. Integrating applications running on Pivotal with applications running outside the platform, providing strong and adaptive authentication journeys, managing identities across applications, enforcing security policies and more requires a full-service IAM platform like ForgeRock’s Identity Platform.

ForgeRock provides a Pivotal service broker implementation, the ForgeRock Service Broker. It runs as a small service inside Pivotal and brokers two services into the PCF platform: An OAuth 2.0 AM Service and an IG Route Service. While the OAuth 2.0 AM Service provides similar capabilities to UAA on the OAuth/OIDC side, the IG Route Service is based on IG (Identity Gateway) and can broker the full spectrum of services of the ForgeRock Identity Platform. PCF applications bound to the IG Route Service can seamlessly consume any of the countless services the ForgeRock Identity Platform provides: Intelligent authentication, authorization, federation, user-managed access, identity synchronization, user self-service, workflow, social identity, directory services, API gateway services and more.

This article provides an easy-to-follow path to:

  • Set up a PCF development environment (PCF Dev)
  • Install and configure IG in that environment
  • Install and configure the ForgeRock Service Broker in that environment
  • Deploy, integrate and protect a number of PCF sample applications using the IG Route Service and IG

Additionally, the guide provides steps for running IG on PCF. If you have access to a full PCF instance, you can skip the PCF Dev part and dive right into the Service Broker deployment and configuration. You also need access to a ForgeRock Access Management instance, version 5.0 or newer.

1. Preparing a PCF Dev environment

As mentioned, if you have access to a full PCF instance, you can skip this part and go straight to the Service Broker deployment and configuration.

1.1. Installing CF CLI

Before you install the server side of the PCF Dev environment, you must first install the Cloud Foundry Command Line Interface (CF CLI) utility, which is the main way you will interact with PCF throughout this process.

Follow the Pivotal documentation to install the flavor of the CLI you need for your workstation OS:

https://docs.run.pivotal.io/cf-cli/install-go-cli.html
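
If you are on macOS and use Homebrew, one possible shortcut (an assumption, not part of the Pivotal guide) is the Cloud Foundry Homebrew tap:

# Install the CF CLI via the Cloud Foundry tap, then confirm it is on your PATH
brew install cloudfoundry/tap/cf-cli
cf --version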

1.2. Installing PCF Dev

Now that you are ready to roll with the CF CLI, it is time to download and install the PCF Dev components. This article is based on PCF Dev v0.30.0 for PCF 1.11.0. This version is based on VirtualBox and has a number of default services installed, some of which you will need later on.

PCF Dev – PAS 2.0.18.0 is an alpha release of the NextGen PCF Dev. It uses the native OS hypervisor, doubles the minimum memory requirement from 4 GB to 8 GB, installs only a few PCF services by default, and can take up to an hour to start. It does, however, include a full BOSH Director, which lets you manage “Tiles” in PCF graphically instead of having to use the CLI. Once this version is a bit more stable and bundles more services like the old one did, it may be worth upgrading. For now, make sure you select and download v0.30.0:

https://network.pivotal.io/products/pcfdev

In order to use your own IP address and DNS name (the -i and -d parameters of the cf dev start command), you need to set up a wildcard DNS record. In my case I set up *.pcfdev.mytestrun.com pointing to the IP address of the workstation where I am running PCF Dev.
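
How you create that record depends on your DNS setup. As a rough sketch, assuming the workstation IP used below, a wildcard record looks like this in a BIND-style zone file, or as the dnsmasq equivalent:

; BIND-style zone entry: resolve every PCF Dev hostname to the workstation running PCF Dev
*.pcfdev.mytestrun.com.   300   IN   A   192.168.37.73

# dnsmasq equivalent
address=/pcfdev.mytestrun.com/192.168.37.73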

Follow the command log below to install and start PCF Dev:

unzip pcfdev-v0.30.0_PCF1.11.0-osx.zip
./pcfdev-v0.30.0+PCF1.11.0-osx
cf dev start -i 192.168.37.73 -d pcfdev.mytestrun.com -m 6144
Warning: the chosen PCF Dev VM IP address may be in use by another VM or device.
Using existing image.
Allocating 6144 MB out of 16384 MB total system memory (6591 MB free).
Importing VM...
Starting VM...
Provisioning VM...
Waiting for services to start...
7 out of 58 running
7 out of 58 running
7 out of 58 running
7 out of 58 running
40 out of 58 running
56 out of 58 running
58 out of 58 running
 _______  _______  _______    ______   _______  __   __
|       ||       ||       |  |      | |       ||  | |  |
|    _  ||       ||    ___|  |  _    ||    ___||  |_|  |
|   |_| ||       ||   |___   | | |   ||   |___ |       |
|    ___||      _||    ___|  | |_|   ||    ___||       |
|   |    |     |_ |   |      |       ||   |___  |     |
|___|    |_______||___|      |______| |_______|  |___|
is now running.
To begin using PCF Dev, please run:
   cf login -a https://api.pcfdev.mytestrun.com --skip-ssl-validation
Apps Manager URL: https://apps.pcfdev.mytestrun.com
Admin user => Email: admin / Password: admin
Regular user => Email: user / Password: pass

1.3. Logging In to PCF Dev

Log in to your fresh PCF Dev instance and select the org you want to work with. Use pcfdev-org:

cf login -a https://api.pcfdev.mytestrun.com/ --skip-ssl-validation
API endpoint: https://api.pcfdev.mytestrun.com/
Email> admin
Password>
Authenticating...
OK
Select an org (or press enter to skip):
1. pcfdev-org
2. system
Org> 1
Targeted org pcfdev-org
Targeted space pcfdev-space

API endpoint:  https://api.pcfdev.mytestrun.com (API version: 2.82.0)
User:          admin
Org:            pcfdev-org
Space:          pcfdev-space

Authenticate using admin/admin if using PCF Dev or a Pivotal admin user if using a real PCF instance.
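
If you prefer a non-interactive login, the same credentials, org, and space can be passed as flags (the values below are the PCF Dev defaults):

cf login -a https://api.pcfdev.mytestrun.com --skip-ssl-validation -u admin -p admin -o pcfdev-org -s pcfdev-space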

2. Install Sample Applications

To test the Service Broker and inter-application SSO, install a few sample applications:

2.1. Spring Music

git clone https://github.com/cloudfoundry-samples/spring-music
cd spring-music

Modify the manifest to reduce memory and avoid random route names:

vi manifest.yml

Enter or copy & paste the following content:

---
applications:
- name: music
  memory: 768M
  random-route: false
  path: build/libs/spring-music-1.0.jar
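
The manifest points at a locally built jar, so if build/libs is empty the app must be built first. The Spring Music sample ships with a Gradle wrapper, so something along these lines should produce the jar (an assumption about the build step, not part of the original walkthrough):

# Build the jar referenced by the manifest before pushing
./gradlew clean assemble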

Push the app:

cf push

Waiting for app to start...
name:           music
requested state:   started
instances:      1/1
usage:          768M x 1 instances
routes:            music.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 15:28:24 CDT 2018
stack:          cflinuxfs2
buildpack:      container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.13-offline-https://github.com/cloudfoundry/java-buildpack.git#03b493f
                java-main open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10...
start command:  CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE
                -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100%
                -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR
                -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY
                -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks
                -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec
                $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher

     state    since                  cpu      memory          disk          details
#0   running   2018-05-22T20:29:00Z   226.8% 530.4M of 768M   168M of 512M

Note the routes: music.pcfdev.mytestrun.com

That’s the URL at which your application can be reached. You should be able to resolve the generated DNS name (the app name plus your wildcard domain), and you should be able to hit the URL in a web browser.
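
A quick command-line smoke test (using the hostname from the route above) confirms both DNS resolution and that the app answers:

curl -i http://music.pcfdev.mytestrun.com/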

Retrieve application logs:

cf logs music --recent

Live-tail application logs:

cf logs music

2.2. Cloud Foundry Sample NodeJS App

git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs.git
cd cf-sample-app-nodejs

Modify the manifest to reduce memory and avoid random route names:

vi manifest.yml

---
applications:
- name: node
  memory: 512M
  instances: 1
  random-route: false

Push the app:

cf push

Waiting for app to start...
name:           node
requested state:   started
instances:      1/1
usage:          512M x 1 instances
routes:            node.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 15:46:02 CDT 2018
stack:          cflinuxfs2
buildpack:      node.js 1.5.32
start command:  npm start

     state    since                  cpu    memory      disk        details
#0   running   2018-05-22T20:46:35Z   0.0% 0 of 512M 0 of 512M   

Note the routes: node.pcfdev.mytestrun.com

That’s the URL at which your application can be reached. You should be able to resolve the generated DNS name, and you should be able to hit the URL in a web browser.

Retrieve application logs:

cf logs node --recent

Live-tail application logs:

cf logs node

2.3. Create Your Own JSP Headers App

Create your own small sample application that displays the HTTP request headers it receives. This will come in handy for later experiments with the IG Route Service.

mkdir headers
cd headers
mkdir WEB-INF
vi index.jsp

<%@ page import="java.util.*" %>
<html>
<head>
<title><%= application.getServerInfo() %></title>
</head>
<body>
<h1>HTTP Request Headers Received</h1>
<table border="1" cellpadding="3" cellspacing="3">
<%
    Enumeration eNames = request.getHeaderNames();
    while (eNames.hasMoreElements()) {
        String name = (String) eNames.nextElement();
        String value = normalize(request.getHeader(name));
%>
<tr><td><%= name %></td><td><%= value %></td></tr>
<%
    }
%>
</table>
</body>
</html>
<%!
    private String normalize(String value)
    {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            sb.append(c);
            if (c == ';')
                sb.append("<br>");
        }
        return sb.toString();
    }
%>
Push the app:

cf push headers


Waiting for app to start...
name:           headers
requested state:   started
instances:      1/1
usage:          256M x 1 instances
routes:            headers.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 16:24:26 CDT 2018
stack:          cflinuxfs2
buildpack:      container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.13-offline-https://github.com/cloudfoundry/java-buildpack.git#03b493f
                open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE tomcat-access-logging-support=2.5.0_RELEAS...
start command:  CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE
                -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100%
                -stackThreads=300 -totMemory=$MEMORY_LIMIT) &&  JAVA_HOME=$PWD/.java-buildpack/open_jdk_jre JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR
                -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY
                -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks
                -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password -Djava.endorsed.dirs=$PWD/.java-buildpack/tomcat/endorsed
                -Daccess.logging.enabled=false -Dhttp.port=$PORT" exec $PWD/.java-buildpack/tomcat/bin/catalina.sh run

     state    since                  cpu    memory        disk            details
#0   running   2018-05-22T21:24:48Z   0.0% 600K of 256M 84.6M of 512M  

2.4. More Sample Apps

git clone https://github.com/cloudfoundry-samples/cf-ex-php-info
git clone https://github.com/cloudfoundry-samples/cf-sample-app-rails.git
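
These can be pushed the same way as the earlier apps. For example, to get the short “rails” hostname that appears in the route listing later on, you could push the Rails sample under that name (a hypothetical invocation; adjust its manifest as you prefer):

cd cf-sample-app-rails
cf push rails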

3. Running IG in Pivotal Cloud Foundry

You can run IG absolutely anywhere you want, but since you are going to use it inside PCF, running it in PCF may be a logical choice.

3.1. Install, Deploy, and Configure IG in PCF

The steps below describe an opinionated deployment model for IG in PCF. Your specific environment may require you to make different choices to achieve an ideal configuration and behavior.

3.1.1. Download IG

Download IG 6 from https://backstage.forgerock.com/downloads/browse/ig/latest to a preferred working location. Log in using your Backstage credentials.

unzip IG-6.1.0.war
cf push ig --no-start

3.1.2. Enable Development Mode

cf set-env ig IG_RUN_MODE development

3.1.3. Create And Use Persistent Volume For Configuration Data

IG is configured using JSON files. This section shows an easy way to create a shared storage volume that persists your IG configuration across restarts. If you run IG with its default configuration, it loses all of its configuration every time it restarts because the app is reset. Externalizing the config lets the configuration reside outside the app and persist between restarts. In a real PCF environment (as opposed to PCF Dev) you would probably use a different shared storage service such as NFS, but for development purposes a local-volume works great.

(https://github.com/cloudfoundry/local-volume-release)

cf create-service local-volume free-local-disk local-volume-instance
cf bind-service ig local-volume-instance -c '{"mount":"/var/openig"}'
cf set-env ig IG_INSTANCE_DIR '/var/openig'
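
Before starting IG, it is worth confirming that the volume service is bound and the environment variables are set (both are standard cf commands):

cf services     # local-volume-instance should show as bound to the ig app
cf env ig       # IG_INSTANCE_DIR and IG_RUN_MODE should appear in the user-provided environment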

3.1.4. Start IG, Applying the Configuration Changes Made So Far

cf start ig

3.1.5. Logs

cf logs ig --recent

3.1.6. Apply Required Configuration

3.1.6.1. SSH into your IG instance

cf ssh ig
cd /var/openig
mkdir config
vi config/config.json

3.1.6.2. Apply configuration

Create /var/openig/config/config.json and populate it with the default configuration as documented here:

https://backstage.forgerock.com/docs/forgerock-service-broker/2/forgerock-service-broker-guide/#implementation-setting-up-openig

{
  "heap": [
    {
      "name": "ClientHandler",
      "type": "ClientHandler",
      "config": {
        "hostnameVerifier": "ALLOW_ALL",
        "trustManager": {
          "type": "TrustAllManager"
        }
      }
    },
    {
      "name": "_router",
      "type": "Router",
      "config": {
        "defaultHandler": {
          "type": "StaticResponseHandler",
          "config": {
            "status": 404,
            "reason": "Not Found",
            "headers": {
              "Content-Type": [
                "application/json"
              ]
            },
            "entity": "{ \"error\": \"Something went wrong, contact the sys admin\"}"
          }
        }
      }
    },
    {
      "type": "Chain",
      "name": "CloudFoundryProxy",
      "config": {
        "filters": [
          {
            "type": "ScriptableFilter",
            "name": "CloudFoundryRequestRebaser",
            "comment": "Rebase the request based on the CloudFoundry provided headers",
            "config": {
              "type": "application/x-groovy",
              "source": [
                "Request newRequest = new Request(request);",
                "org.forgerock.util.Utils.closeSilently(request);",
                "newRequest.uri = URI.create(request.headers['X-CF-Forwarded-Url'].firstValue);",
                "newRequest.headers['Host'] = newRequest.uri.host;",
                "logger.info('Receive request : ' + request.uri + ' forwarding to ' + newRequest.uri);",
                "Context newRoutingContext = org.forgerock.http.routing.UriRouterContext.uriRouterContext(context).originalUri(newRequest.uri.asURI()).build();",
                "return next.handle(newRoutingContext, newRequest);"
              ]
            }
          }
        ],
        "handler": "_router"
      },
      "capture": [
        "request",
        "response"
      ]
    }
  ],
  "handler": {
    "type": "DispatchHandler",
    "name": "Dispatcher",
    "config": {
      "bindings": [
        {
          "condition": "${not empty request.headers['X-CF-Forwarded-Url']}",
          "handler": "CloudFoundryProxy"
        },
        {
          "handler": {
            "type": "StaticResponseHandler",
            "config": {
              "status": 400,
              "entity": "Bad request : expecting a header X-CF-Forwarded-Url"
            }
          }
        }
      ]
    }
  }
}

Then:

exit
cf restart ig
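
Once IG has restarted with this configuration, a quick check from the command line (not part of the official guide, but following directly from the config above) shows the dispatcher at work:

# Without the X-CF-Forwarded-Url header, the DispatchHandler answers 400
curl -i http://ig.pcfdev.mytestrun.com/

# With the header set, the request is rebased; since no routes exist yet,
# the _router's default handler answers 404 with the JSON error entity
curl -i http://ig.pcfdev.mytestrun.com/ -H "X-CF-Forwarded-Url: https://music.pcfdev.mytestrun.com/"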

3.1.7. Access IG Studio

http://ig.pcfdev.mytestrun.com/openig/studio/

4. Install ForgeRock Service Broker

Download and install the service broker following the instructions in the doc:

https://backstage.forgerock.com/docs/forgerock-service-broker/2/forgerock-service-broker-guide/#implementation-installing-into-cloud-foundry

4.1. Deploy and Configure the Service Broker App

cf push forgerockbroker-app -p service-broker-servlet-2.0.1.war
cf set-env forgerockbroker-app SECURITY_USER_NAME f8Q7hyHKgz
cf set-env forgerockbroker-app SECURITY_USER_PASSWORD n3BpjwKW4m
cf set-env forgerockbroker-app OPENAM_BASE_URI https://idp.mytestrun.com/openam/
cf set-env forgerockbroker-app OPENAM_USERNAME CloudFoundryAgentAdmin
cf set-env forgerockbroker-app OPENAM_PASSWORD KZDJhN7Vr4
cf set-env forgerockbroker-app OAUTH2_SCOPES profile
cf set-env forgerockbroker-app OPENIG_BASE_URI https://ig.pcfdev.mytestrun.com
cf restage forgerockbroker-app

Note that OPENIG_BASE_URI is specified as https, not http! When it was specified as http, the following error occurred when binding the IG route service to an application:

cf bind-route-service pcfdev.mytestrun.com igrs --hostname spring-music-chatty-quokka
Binding route spring-music-chatty-quokka.pcfdev.mytestrun.com to service instance igrs in org pcfdev-org / space pcfdev-space as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker returned an invalid response for the request to http://forgerockbroker-app.pcfdev.mytestrun.com/v2/service_instances/4aa37a88-afc0-4e75-9474-d5e2ed3e7876/service_bindings/c8da2445-6689-4824-afd1-125795e2a848. Status Code: 201 Created, Body: {"route_service_url":"http://ig.pcfdev.mytestrun.com/4aa37a88-afc0-4e75-9474-d5e2ed3e7876/c8da2445-6689-4824-afd1-125795e2a848"}

To see the service broker app’s environment:

cf env forgerockbroker-app

To see the service broker app’s details:

cf app forgerockbroker-app

Create service broker:

cf create-service-broker forgerockbroker f8Q7hyHKgz n3BpjwKW4m http://forgerockbroker-app.pcfdev.mytestrun.com
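
You can verify that the broker registered correctly and review the access state of its services:

cf service-brokers
cf service-access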

Enable the services you plan on using. The ForgeRock Service Broker supports OAuth and IG; you can enable either or both.

cf enable-service-access forgerock-ig-route-service
cf enable-service-access forgerock-am-oauth2

Create the service instance(s) you will be using for your apps. You should only need one instance per service to handle any number of applications:

cf create-service forgerock-ig-route-service shared igrs
cf create-service forgerock-am-oauth2 shared amrs

4.2. Bind IG Route Service to the Sample Apps

Note how no apps are bound to the IG Route Service (igrs):

cf routes
Getting routes for org pcfdev-org / space pcfdev-space as admin ...
space          host                  domain                port  path  type  apps                  service
pcfdev-space  music                 pcfdev.mytestrun.com                     music
pcfdev-space  node                  pcfdev.mytestrun.com                     node
pcfdev-space  rails                 pcfdev.mytestrun.com                     rails
pcfdev-space  headers               pcfdev.mytestrun.com                     headers
pcfdev-space  ig                    pcfdev.mytestrun.com                     ig
pcfdev-space  forgerockbroker-app   pcfdev.mytestrun.com                     forgerockbroker-app

Bind the Route Service to the apps:

cf bind-route-service pcfdev.mytestrun.com igrs --hostname music
cf bind-route-service pcfdev.mytestrun.com igrs --hostname node
cf bind-route-service pcfdev.mytestrun.com igrs --hostname rails
cf bind-route-service pcfdev.mytestrun.com igrs --hostname headers

Now the sample apps are bound to our IG Route Service:

cf routes
Getting routes for org pcfdev-org / space pcfdev-space as admin ...
space          host                              domain                port  path  type  apps                  service
pcfdev-space  music                 pcfdev.mytestrun.com                     music              igrs
pcfdev-space  node                  pcfdev.mytestrun.com                     node               igrs
pcfdev-space  rails                 pcfdev.mytestrun.com                     rails              igrs
pcfdev-space  headers               pcfdev.mytestrun.com                     headers            igrs
pcfdev-space  ig                    pcfdev.mytestrun.com                     ig
pcfdev-space  forgerockbroker-app   pcfdev.mytestrun.com                     forgerockbroker-app

5. Define IG Routes for the Sample Apps

By default, no routes are defined in IG for our sample apps, and the default behavior (defined in the config.json you created earlier) is to deny access to everything. The next, very important, step is therefore to define routes that re-enable access to our sample applications. Once the basic routes are defined, we can add authentication and authorization per application as we see fit:

  • Point your browser to the IG Studio: http://ig.pcfdev.mytestrun.com/openig/studio/
  • Select “Protect an Application” from the Studio home screen, then select “Structured.”
  • Select “Advanced options” and enter the app URL from the step where you pushed the app to PCF.
    • Since PCF does hostname-based routing (rather than path-based), you have to change the Condition that selects your route accordingly. In the Condition field, select “Expression” and enter:
      ${matches(request.uri.host, '^app-url')}
      For example:
      ${matches(request.uri.host, '^music.pcfdev.mytestrun.com')}
    • Pick a descriptive name and a unique ID for the application
    • Select “Create route”

  • Deploy your route.
  • You have now created a route with the default configuration, which simply proxies requests through IG to the app. That means your app is available again, just as it was before you introduced IG and the Service Broker (a rough hand-written equivalent of such a route is sketched below). The next step is to add value to your route, such as authentication or authorization.
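
For reference, a hand-written route roughly equivalent to what Studio generates might look like the following sketch. This is an illustration only and an assumption on my part: the Studio-generated route contains additional metadata, and ReverseProxyHandler here refers to the handler available in IG's default heap.

{
  "name": "music",
  "condition": "${matches(request.uri.host, '^music.pcfdev.mytestrun.com')}",
  "handler": "ReverseProxyHandler"
}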

5.1. Prepare for Authentication and Authorization

As a preparatory step to authentication and authorization, create an AM Service for your route, which is a piece of configuration pointing to your ForgeRock Access Management instance. Select “AM service” from the left side menu and provide the details of your AM instance:

You won’t need the agent section populated for the use cases here.

5.2. Broker Authentication to an Application

  • To add authentication to your route, select “Authentication” from the left side menu and move the slider “Enable authentication” to the right, then select “Single Sign-On” as your authentication option.
  • In the configuration dialog that pops up, select your AM service, then select “Save”.
  • Deploy your route.
  • Point your browser to your app URL, e.g. https://music.pcfdev.mytestrun.com/
  • Notice how you will be redirected to your Access Management login page for authentication. Provide valid login credentials and your sample app should load.
  • Repeat with the other apps. Note how you can now SSO between all the apps!
  • Now let’s add authorization to one of the routes and only allow members of a certain group access to that application. For that, we need some additional prep work in AM:
    • Create a J2EE agent IG can use to evaluate AM policies:

    • Create a new policy set with the name “PCF” or a name and ID of your liking:

      Add URL as the resource type.
    • Create a policy and name it after the application you are protecting. Specify your app URL as the resource, allow GET as an action, and set the subject condition to require a group membership. In this example, membership in the “Engineering” group is required for access to the “headers” application:

      Your policy summary page should look something like this:

  • Now come back to IG Studio and select the route of the app you created your policy for, in our case the “headers” app. Select “Authorization” from the left side bar, move the slider “Enable authorization” to the right, then select “AM Policy Enforcement” as your way to authorize users.
    • Select your AM service, specify your realm, and provide the name and password of the J2EE agent you created in an earlier step. In the policy endpoint section, specify the name of your policy set and the expression used to retrieve your SSO token; the default should work: ${contexts.ssoToken.value}

  • Save and deploy your route.
  • Point your browser to the protected app and login using a user who is a member of the group you configured to control access. Notice how the app loads after logging in.
  • Now remove the user from the group and refresh the app. Notice how the page goes blank because the user is no longer authorized.

Conclusion

With this setup, applications can now be integrated, protected, SSO-enabled, and identity-infused within minutes. Provide profile self-service, password reset, strong and step-up authentication, continuous authentication, authorization, and risk evaluation to any application in the Pivotal Cloud Foundry ecosystem.