Integrating The ForgeRock Identity Platform 6.5

It’s a relatively common requirement to integrate the products that make up the ForgeRock Identity Platform. The IDM Samples Guide contains a good working example of just how to do this. Each version of the ForgeRock stack has slight differences, both in the products themselves and in the integrations. As such, this blog will focus on version 6.5 of the products and will endeavour to include as much useful information as possible to speed up integrations for readers of this blog, including sample configuration files, REST calls, etc.

In this integration IDM acts as an OIDC Relying Party, talking to AM as the OIDC Provider using the OAuth 2.0 authorization code grant. The following sequence diagram illustrates successful processing from the authorization request, through the grant of the authorization code, to the issue of the ID token by the authorization provider, AM. You can find more details in the IDM Samples Guide.

Full Stack Authorization Code Flow

Sample files

For this integration I’ve included pre-configured sample files, which can be found at the link below. They can be modified and either used as an example or just dropped straight into your test environment. It should not have to be said, but just in case…DO NOT deploy these to production without appropriate testing / hardening: https://stash.forgerock.org/users/mark.nienaber_forgerock.com/repos/fullstack6.5/.

Postman Collection

For any REST calls made in this blog you’ll find the Postman collection available here: https://documenter.getpostman.com/view/5814408/SVfJVBYR?version=latest

Sample Scripts

We will set up some basic vanilla instances of our products to get started. I’ve provided some scripts to install both DS as well as AM.

Products used in this integration

Documentation can be found at https://backstage.forgerock.com/docs/.

Note: In this blog I install the products under my home directory. This is not best practice, but keep in mind the focus is on the integration; this is not meant as a detailed install guide.

Setup new DS instances

On Server 1 (AM Server) install a fresh DS 6.5 instance for an external AM config store (optional) and a DS repository to be shared with IDM.

Add the following entries to the hosts file:

sudo vi /etc/hosts
127.0.0.1       localhost opendj.example.com amconfig1.example.com openam.example.com

Before running the DS install script you’ll need to copy the Example.ldif file from the full-stack IDM sample to the DS/AM server. You can do this manually or use SCP from the IDM server.

scp ~/openidm/samples/full-stack/data/Example.ldif fradmin@opendj.example.com:~/Downloads

Modify the sample file installDSFullStack6.5.sh, including:

  • server names
  • port numbers
  • location of DS-6.5.0.zip file
  • location of the Example.ldif from the IDM full-stack sample.

Once completed, the script will install both an external AM config store and the DS shared repository. The key variables look like this:

script_dir=`pwd`
ds_zip_file=~/Downloads/DS/DS-6.5.0.zip
ds_instances_dir=~/opends/ds_instances
ds_config=${ds_instances_dir}/config
ds_fullstack=${ds_instances_dir}/fullstack
#IDMFullStackSampleRepo
ds_fullstack_server=opendj.example.com
ds_fullstack_server_ldapPort=5389
ds_fullstack_server_ldapsPort=5636
ds_fullstack_server_httpPort=58081
ds_fullstack_server_httpsPort=58443
ds_fullstack_server_adminConnectorPort=54444
#Config
ds_amconfig_server=amconfig1.example.com
ds_amconfig_server_ldapPort=3389
ds_amconfig_server_ldapsPort=3636
ds_amconfig_server_httpPort=38081
ds_amconfig_server_httpsPort=38443
ds_amconfig_server_adminConnectorPort=34444

Now let’s run the script on Server 1, the AM / DS server, to create the new DS instances.

chmod +x ./installdsFullStack.sh
./installdsFullStack.sh
Install DS instances

You now have two DS instances installed and configured; let’s install AM.

Setup new AM server

On Server 1 we will install a fresh AM 6.5.2 on Tomcat using the provided Amster script.

Assuming the Tomcat instance is started, drop the AM WAR file under the webapps directory, renaming the context to secure (change this as you wish).

cp ~/Downloads/AM-6.5.2.war ~/tomcat/webapps/secure.war

Copy the amster script install_am.amster into your amster 6.5.2 directory and make any modifications as required.

install-openam 
--serverUrl http://openam.example.com:8080/secure 
--adminPwd password 
--acceptLicense 
--cookieDomain example.com 
--cfgStoreDirMgr 'uid=am-config,ou=admins,ou=am-config' 
--cfgStoreDirMgrPwd password 
--cfgStore dirServer 
--cfgStoreHost amconfig1.example.com 
--cfgStoreAdminPort 34444 
--cfgStorePort 3389 
--cfgStoreRootSuffix ou=am-config 
--userStoreDirMgr 'cn=Directory Manager' 
--userStoreDirMgrPwd password 
--userStoreHost opendj.example.com  
--userStoreType LDAPv3ForOpenDS 
--userStorePort 5389 
--userStoreRootSuffix dc=example,dc=com
:exit

Now run amster and pass it this script to install AM. You can do this manually if you like, but scripting will make your life easier and allow you to repeat it later on.

cd amster6.5.2/
./amster install_am.amster
Install AM

At the end of this you’ll have AM installed and configured to point to the DS instances you set up previously.

AM Successfully installed

Setup new IDM server

On Server 2 install/unzip a new IDM 6.5.0.1 instance.

Make sure the IDM server can reach the AM and DS servers by adding an entry to the hosts file.

sudo vi /etc/hosts
Hosts file entry

Modify the hostname and port of the IDM instance in the boot.properties file. An example boot.properties file can be found HERE.

cd ~/openidm
vi resolver/boot.properties

Set appropriate openidm.host and openidm.port.http/s.

openidm/resolver/boot.properties
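
For the environment in this blog, the relevant properties would look something like this (illustrative values; match your own hostname and ports):

openidm.host=openidm.example.com
openidm.port.http=8080
openidm.port.https=8443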

We now have the basic AM / IDM and DS setup and are ready to configure each of the products.

Configure IDM to point to shared DS repository

Modify the IDM LDAP connector file to point to the shared repository. An example file can be found HERE.

vi ~/openidm/samples/full-stack/conf/provisioner.openicf-ldap.json

Change “host” and “port” to match the shared repository configured above.

provisioner.openicf-ldap.json
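
After the change, the relevant fragment of the connector configuration should look roughly like this (a sketch; host and port values come from the shared repository installed earlier):

"configurationProperties" : {
    "host" : "opendj.example.com",
    "port" : 5389,
    ...
},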

Configure AM to point to shared DS repository

Login as amadmin and browse to Identity Stores in the realm you’re configuring. For simplicity I’m using the Top Level Realm. DO NOT do this in production!

Set the correct values for your environment, then select Load Schema and Save.

Shared DS Identity Store

Browse to Identities and you should now see the two identities that were imported from Example.ldif when you set up the DS shared repository.

List of identities

We will set up another user to ensure AM is configured to talk to DS correctly. Select + Add Identity, fill in the values, then press Create.

Create new identity

Modify some of the user’s values and press Save Changes.

Modify attributes

Prepare IDM

We will start IDM now and check that it’s connected to the DS shared repository.

Start the IDM full-stack sample

Enter the following commands to start the full-stack sample.

cd ~/openidm/
./startup.sh -p samples/full-stack
Start full-stack sample

Login to IDM

Login to the admin interface as openidm-admin.

http://openidm.example.com:8080/admin/

Login as openidm-admin

Reconcile DS Shared Repository

We will now run a reconciliation to pull users from the DS shared repository into IDM.

Browse to Configure, then Mappings, then on the System/ldap/account→Managed/user mapping select Reconcile.

Reconcile DS shared repository
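
If you prefer REST, the same reconciliation can be triggered from the command line (a sketch; the mapping name below is the one conventionally defined in the full-stack sample’s sync.json):

curl -X POST \
  -H 'X-OpenIDM-Username: openidm-admin' \
  -H 'X-OpenIDM-Password: openidm-admin' \
  'http://openidm.example.com:8080/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser'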

The users should now exist under Manage, Users.

User list

Let’s log in with our newly created testuser1 to make sure it’s all working.

End User UI Login

You should see the welcome screen.

Welcome Page

IDM is now ready for integration with AM.

Configure AM for integration

Now we will set up AM for integration with IDM.

Setup CORS Filter

Set up a CORS filter in AM to allow IDM as an origin. An example web.xml can be found HERE.

vi tomcat/webapps/secure/WEB-INF/web.xml

Add the appropriate CORS filter. (See sample file above)

</filter-mapping>
<filter-mapping>
    <filter-name>CORSFilter</filter-name>
    <url-pattern>/json/*</url-pattern>
</filter-mapping>
<filter-mapping>
    <filter-name>CORSFilter</filter-name>
    <url-pattern>/oauth2/*</url-pattern>
</filter-mapping>
<filter>
    <filter-name>CORSFilter</filter-name>
    <filter-class>org.forgerock.openam.cors.CORSFilter</filter-class>
    <init-param>
        <param-name>methods</param-name>
        <param-value>POST,PUT,OPTIONS</param-value>
    </init-param>
    <init-param>
        <param-name>origins</param-name>
        <param-value>http://openidm.example.com:8080,http://openam.example.com:8080,http://localhost:8000,http://localhost:8080,null,file://</param-value>
    </init-param>
    <init-param>
        <param-name>allowCredentials</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <param-name>headers</param-name>
        <param-value>X-OpenAM-Username,X-OpenAM-Password,X-Requested-With,Accept,iPlanetDirectoryPro</param-value>
    </init-param>
    <init-param>
        <param-name>expectedHostname</param-name>
        <param-value>openam.example.com:8080</param-value>
    </init-param>
    <init-param>
        <param-name>exposeHeaders</param-name>
        <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Set-Cookie</param-value>
    </init-param>
    <init-param>
        <param-name>maxAge</param-name>
        <param-value>1800</param-value>
    </init-param>
</filter>

Configure AM as an OIDC Provider

Login to AM as an administrator and browse to the Realm you’re configuring. As already mentioned, I’m using the Top Level Realm for simplicity.

Once you’ve browsed to the Realm, on the Dashboard, select Configure OAuth Provider.

Configure OAuth Provider

Now select Configure OpenID Connect.

Configure OIDC

If you need to modify any values, such as the Realm, then do so; I’ll just press Create.

OIDC Server defaults

You’ll get a success message.

Success

AM is now configured as an OP, let’s set some required values. Browse to Services, then click on OAuth2 Provider.

AM Services

On the Consent tab, check the box next to Allow Clients to Skip Consent, then press Save.

Consent settings

Now browse to the Advanced OpenID Connect tab and set openidm as the value for Authorized OIDC SSO Clients (this is the name of the Relying Party / Client we will create next). Press Save.

Authorized SSO clients

Configure Relying Party / Client for IDM

You can do these steps manually, but let’s call AM’s REST interface instead. If you use a tool like Postman, you can save these calls and easily replicate this step with the click of a button (see HERE for the Postman collection). I am using a simple curl command from the AM server.

Firstly we’ll need an AM administrator session to create the client, so call the /authenticate endpoint. (I’ve used jq for better formatting; I’ll leave that for you to add in if required.)

curl -X POST \
  http://openam.example.com:8080/secure/json/realms/root/authenticate \
  -H 'Accept-API-Version: resource=2.0, protocol=1' \
  -H 'Content-Type: application/json' \
  -H 'X-OpenAM-Password: password' \
  -H 'X-OpenAM-Username: amadmin' \
  -H 'cache-control: no-cache' \
  -d '{}'

The SSO Token / Session will be returned as tokenId, so save this value.

Authenticate
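
If you have jq installed, you can capture the tokenId straight into a shell variable for the next step (a convenience sketch based on the call above; not part of the original Postman collection):

tokenId=$(curl -s -X POST \
  http://openam.example.com:8080/secure/json/realms/root/authenticate \
  -H 'Accept-API-Version: resource=2.0, protocol=1' \
  -H 'Content-Type: application/json' \
  -H 'X-OpenAM-Password: password' \
  -H 'X-OpenAM-Username: amadmin' \
  -d '{}' | jq -r .tokenId)
echo $tokenId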

Now that we have a session, we can use it in the next call to create the client. Again, this will be done via REST, however you can do it manually if you want. Substitute the tokenId value from above as the value of the iPlanetDirectoryPro header.

curl -X POST \
  'http://openam.example.com:8080/secure/json/realms/root/agents/?_action=create' \
  -H 'Accept-API-Version: resource=3.0, protocol=1.0' \
  -H 'Content-Type: application/json' \
  -H 'iPlanetDirectoryPro: djyfFX3h97jH2P-61auKig_i23o.*AAJTSQACMDEAAlNLABxFc2d0dUs2c1RGYndWOUo0bnU3dERLK3pLY2c9AAR0eXBlAANDVFMAAlMxAAA.*' \
-d '{
"username": "openidm",
"userpassword": "openidm",
"realm": "/",
"AgentType": ["OAuth2Client"],
"com.forgerock.openam.oauth2provider.grantTypes": [
"[0]=authorization_code"
],
"com.forgerock.openam.oauth2provider.scopes": [
"[0]=openid"
],
"com.forgerock.openam.oauth2provider.tokenEndPointAuthMethod": [
"client_secret_basic"
],
"com.forgerock.openam.oauth2provider.redirectionURIs": [
"[0]=http://openidm.example.com:8080/"
],
"isConsentImplied": [
"true"
],
"com.forgerock.openam.oauth2provider.postLogoutRedirectURI": [
"[0]=http://openidm.example.com:8080/",
"[1]=http://openidm.example.com:8080/admin/"
]
}'
Create Relying Party

The OAuth Client should be created.

RP / Client created

Integrate IDM and AM

AM is now configured as an OIDC provider and has an OIDC Relying Party for IDM to use, so now we can configure the final step, that is, tell IDM to outsource authentication to AM.

Feel free to modify and copy this authentication.json file directly into your ~/openidm/samples/full-stack/conf folder, or follow these steps to configure it.

Browse to Configure, then Authentication.

Configure authentication

Authentication will currently be configured to Local; select ForgeRock Identity Provider.

Outsource authentication to AM

After you click above, the Configure ForgeRock Identity Provider page will pop up.

Set the appropriate values for

  • Well-Known Endpoint
  • Client ID
  • Client Secret
  • Note that the common datastore is set to the DS shared repository, leave this as is.
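
For the deployment built in this blog, the first three values would be along these lines (the well-known path assumes AM deployed at the secure context with the Top Level Realm, and the openidm client created earlier):

Well-Known Endpoint: http://openam.example.com:8080/secure/oauth2/.well-known/openid-configuration
Client ID: openidm
Client Secret: openidm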

You can change the others to match your environment, but be careful: the values must match those set in the AM OP/RP configuration above. You can also refer to the sample authentication.json. Once completed press Submit. You will be asked to re-authenticate.

IDM authentication settings

Testing the integrated environment

Everything is now configured so we are ready to test end to end.

Test End User UI

Browse to the IDM End User UI.

http://openidm.example.com:8080/

You will be directed to AM for login; let’s log in with the test user we created earlier.

Login as test user

After authentication you will be directed to the IDM End User UI welcome page.

Login success

Congratulations you did it!

Test IDM Admin UI

In this test you’ll log in to AM as amadmin, and IDM will convert this to a session for the IDM administrator openidm-admin.

This is achieved through the following script:

~/openidm/bin/defaults/script/auth/amSessionCheck.js

Specifically this code snippet is used:

if (security.authenticationId.toLowerCase() === "amadmin") {
    security.authorization = {
        "id" : "openidm-admin",
        "component" : "internal/user",
        "roles" : ["internal/role/openidm-admin", "internal/role/openidm-authorized"],
        "moduleId" : security.authorization.moduleId
    };
}

In a live environment you should not use amadmin or openidm-admin, but create your own delegated administrators; in this case, however, we will stick with the sample.

Browse to the IDM Admin UI.

http://openidm.example.com:8080/admin/

You will be directed to AM for login; log in with amadmin.

Login as administrator

After authentication you will be directed to the IDM End User UI welcome page.

Note: This may seem strange, as you requested the Admin UI, but it is expected: the redirection URI is set to this page (don’t change this; you can just browse to the admin URL after authentication).

Login success

At the top right, click the drop-down and select Admin, or alternatively just hit the IDM Admin UI URL again.

Switch to Admin UI

You’ll now be directed to the Admin UI and as an IDM administrator you should be able to browse around as expected.

IDM Admin UI success

Congratulations you did it!

Finished.

References

  1. https://backstage.forgerock.com/docs/idm/6.5/samples-guide/#chap-full-stack
  2. https://forum.forgerock.com/2018/05/forgerock-identity-platform-version-6-integrating-idm-ds/
  3. https://forum.forgerock.com/2016/02/forgerock-full-stack-configuration/

This blog post was first published @ https://medium.com/@marknienaber included here with permission.

Use ForgeRock Access Manager to provide Multi-Factor Authentication to Linux

Introduction

Our aim is to set up an integration to provide Multi-Factor Authentication (MFA) to the Linux (Ubuntu) platform using ForgeRock Access Manager. The integration uses a pluggable authentication module (PAM) to point to a RADIUS server; in this case, AM is configured as the RADIUS server.

We achieve the following:

  1. Outsource Authentication of Linux to ForgeRock Access Manager.
  2. Provide an MFA solution to the Linux Platform.
  3. Configure ForgeRock Access Manager as a RADIUS Server.
  4. Configure PAM on the Linux server to point to our new RADIUS Server.

Setup

  • ForgeRock Access Manager 6.5.2 Installed and configured.
  • OS — Ubuntu 16.04.
  • PAM exists on your server (this is common these days; you’ll find PAM here: /etc/pam.d/).

Configuration Steps

Configure a chain in AM

Firstly we configure a simple Authentication Chain in Access Manager with two modules.

a. First module – DataStore.

b. Second module – HOTP, with email configured to point to a local fakesmtp server.

Simple Authentication Chain

Configure ForgeRock AM as a RADIUS Server

Now we configure AM as a RADIUS Server.

a. Follow steps here: https://backstage.forgerock.com/docs/am/6.5/radius-server-guide/#chap-radius-implementation

RADIUS Server setup

Secondary Configuration — i.e. RADIUS Client

We have to configure a trusted RADIUS client, our Linux server.

a. Enter the IP address of the client (Linux Server).

b. Set the Client Secret.

c. Select your Realm — I used top level realm (don’t do this in production!).

d. Select your Chain.

Configure RADIUS Client

Configure pam_radius on Linux Server (Ubuntu)

Following these instructions, configure pam_radius on your Linux server:

https://www.wikidsystems.com/support/how-to/how-to-configure-pam-radius-in-ubuntu/

a. Install pam_radius.

sudo apt-get install libpam-radius-auth

b. Configure pam_radius to talk to the RADIUS server (in this case, AM).

sudo vim /etc/pam_radius_auth.conf

i.e. <AM Server VM>:1812 secret 1

Point pam_radius to your RADIUS server

Tell SSH to use pam_radius for authentication.

a. Add this line to the top of the /etc/pam.d/sshd file.

auth sufficient pam_radius_auth.so debug

Note: debug is optional and has been added for testing, do not do this in production.

pam_radius is sufficient for authentication

Enable Challenge response for MFA

Tell your sshd config to allow challenge/response and use PAM.

a. Set the following values in your /etc/ssh/sshd_config file.

ChallengeResponseAuthentication yes

UsePAM yes
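
After editing sshd_config, you’ll likely need to restart the SSH daemon for the changes to take effect, e.g. on Ubuntu:

sudo service ssh restart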

Create a local user on your Linux server and in AM

In this simple use case you will require a separate account on your Linux server and in AM.

a. Create a Linux user.

sudo adduser test

Note: Make sure the user has a different password than the user in AM to ensure you’re not authenticating locally. Users may have no password if your system allows it, but in this demo I set the password to some random string.

Create Linux User

b. Ensure the user is created in AM with an email address.

User exists in AM with an email address for OTP

Test Authentication to Unix via SSH

It’s now time to put it all together.

a. I recommend you tail the auth log file.

tail -f /var/log/auth.log

b. SSH to your server using.

ssh test@<server name>

c. You should be authenticating to the first module in your AM chain, so enter your AM password.

d. You should be prompted for your OTP, check your email.

OTP generated and sent to mail attribute on user

e. Enter your OTP and press Enter, then Enter again (the challenge/response UI is not super friendly here).

f. If successfully entered you should be logged in.

You can follow the auth logs, as well as the AM logs (i.e. authentication.audit.json), to view the process.

The End.


This blog post was first published @ https://medium.com/@marknienaber included here with permission.

How To Build An Authentication Platform

Today's authentication requirements go way beyond hooking into a database or directory and challenging every user and service for an ID and password. Authentication, and the login experience, is the application entry point and can make or break your security posture and end user experience.

Authentication is typically associated with identifying, to a certain degree of assurance, who or what you are interacting with.  Authorization is typically identifying and allowing what that person or thing can do.  This blog is focused on the former, but I might stray in to the latter from time to time.

There are numerous use cases that a modern enterprise needs to fulfil, if authentication services are to deliver value.  These can include:

  • Authentication for a service or API
  • Device authentication
  • Metrics, timing and analytics of flows
  • Threat intelligence integration
  • Anonymous to known authentication profiling
  • Contextual analysis

In addition to the basic functional requirements, there are several non-functional basics too.  These are going to include:

  • Simple customisation
  • Being highly available
  • Stateless and elastic
  • Simple integrations
  • API first

I'm going to take some of these key requirements and describe them in a little more detail.

Non Identity Intelligence

From a feature perspective, the new requirements consistently rely upon intelligence: the new buzz in the cyber security world. Every week a new, more consolidated threat intelligence tool comes to market. Organisations up and down the land are rapidly building out Security Operations Centres (SOCs), with wily ex-military veterans creating strategies and starry-eyed graduates analysing SIEM and NIDS logs. We need data. We have data. What we need is information. Actionable intelligence. Intelligence can be rapidly integrated into any number of different security architecture components.

Intelligence here, is basically a focus upon non-identity data signals.  Sources of malware, malicious IP addresses, app assurance ratings, breached credentials data and so on.

The vast breadth and depth of cyber threat intelligence (CTI) sources is staggering. Free, chargeable, subscription based, cloud based, you name it, it's available. A common factor must be simplicity of integration - ideally via something like a REST/JSON based API that developers are familiar with. Long tail integration must be avoided too, with the ability to swap out and have a zero barrier to exit being important. This last point is extremely important. You need to be able to future-proof your data inputs.

Whatever you want to integrate today, will be out of date tomorrow.  

Integration

Integration is not just limited to threat intelligence sources. This is really just a non-functional requirement, but I want to spend some time on it. It is quite common to find that legacy (I hate this word, let's call them "classic" or initial) authentication systems are generally difficult to integrate against and extend.

Many systems integrators (SIs), many of whom do excellent jobs in highly challenging environments, will work tirelessly, and at some considerable cost, to add different authentication modalities, customize one time password options, integrate with difficult LDAP account lockout options, mobile-ise and more. These "integration" steps are often described as non-BAU. They require change control and are charged via a time-and-materials or scope-creep premium model. Integration costs in a modern system really need to be minimized, if not removed. Authentication is becoming so fluid that changes, including new authentication factors, data sources, UI flows and so on, should be a standard operator journey.

Roadmapping

So why is integration such an issue? A common problem of historical authentication deployments has often been a lack of foresight. In honesty, foresight and robust road mapping have never been a real requirement for a login system. Login using user names and passwords, and occasionally an MFA, was pretty much it. Like it or lump it. Well, in today's digitised ecosystems, new requirements pop up daily. Think of the following basic scenarios that will impact an authentication system:

  • New go to markets requiring localization
  • A new product that requires new API's and apps
  • A merger resulting in differing regulatory compliance requirements
  • New attack patterns and vector discovery
  • Competitive innovations
  • Commodity innovations

If you look at your authentication services library and compare it to the applications and users consuming those services, do you know their functional and non-functional requirements, business objectives and challenges for the next 12-18 months? Some will, so the underlying authentication service needs to a) have a road map and b) be able to accommodate new requirements and demands in an agile and iterative fashion.

Part of this is technical and part of that is operational management.  The business owners of an authentication platform, need to have interactions with the new stakeholders to the login journey.  The login process is basically the application from an end user perspective.  It needs to uphold security, whilst improving the user experience.  Requirements gathering must be a fully integrated process not just for application development, but for identity and authentication services too.


Platform versus Product

I purposefully chose the word platform in the title as opposed to service or product. Modern authentication is a platform. It powers transformation by supporting API's, applications and services that allow organisations to create value driven software. It becomes the wiring in the hotel that allows all of the auxiliary products and shiny things to flourish.

Many point authentication products exist. I am not discrediting them by any means.  Best of breed point solutions for biometry, mobile SDK integration, device operating or behaviour profiling exist and will need integrating to the underlying platform.  They are integration points.  Cogs inside a bigger machine.

The glue that drives the business value however, will be the authentication platform, capable of delivering a range of services to different applications, user communities, geographies and customers.  A single product is unlikely to be able to achieve this.

In summary, authentication has become a critical component, not only for securing user and data centric integrations, but also for helping to deliver continuous modernization of the enterprise.  

It has become a foundational component, that requires a wide breadth of coverage, coupled with agility and extensibility.



This blog post was first published @ www.infosecprofessional.com, included here with permission.

Brokering Identity Services Into Pivotal Cloud Foundry

Introduction

Pivotal Cloud Foundry (PCF) deployments are maturing across the corporate landscape. PCF’s out-of-the-box identity and access management (IAM) tool, UAA (User Accounts and Authentication), provides basic user management functions and OAuth 2.0/OIDC 1.0 support. UAA has come a long way since its inception and provides a solid foundation of IAM services for an isolated application ecosystem running on the Pivotal platform. As organizations experience ever more demanding requirements pushed on their applications, they start realizing the need for a full IAM platform that provides identity services beyond what UAA can offer. Integrating applications running on Pivotal with applications running outside the platform, providing strong and adaptive authentication journeys, managing identities across applications, enforcing security policies and more requires a full-service IAM platform like ForgeRock’s Identity Platform.

ForgeRock provides a Pivotal service broker implementation, the ForgeRock Service Broker. It runs as a small service inside Pivotal and brokers two services into the PCF platform: An OAuth 2.0 AM Service and an IG Route Service. While the OAuth 2.0 AM Service provides similar capabilities to UAA on the OAuth/OIDC side, the IG Route Service is based on IG (Identity Gateway) and can broker the full spectrum of services of the ForgeRock Identity Platform. PCF applications bound to the IG Route Service can seamlessly consume any of the countless services the ForgeRock Identity Platform provides: Intelligent authentication, authorization, federation, user-managed access, identity synchronization, user self-service, workflow, social identity, directory services, API gateway services and more.

This article provides an easy-to-follow path to:

  • Set up a PCF development environment (PCF Dev)
  • Install and configure IG in that environment
  • Install and configure the ForgeRock Service Broker in that environment
  • Deploy, integrate and protect a number of PCF sample applications using the IG Route Service and IG

Additionally, this guide provides steps on how to run IG on PCF. If you have access to a full PCF instance, you can skip the PCF Dev part and dive right into the Service Broker deployment and configuration. You also need access to a ForgeRock Access Management instance, version 5.0 or newer.

1. Preparing a PCF Dev environment

As mentioned, if you have access to a full PCF instance, you can skip this part and go straight  to the Service Broker deployment and configuration.

1.1. Installing CF CLI

Before you install the server side of the PCF Dev environment, you must first install the Cloud Foundry Command Line Interface (CF CLI) utility, which is the main way you will interact with PCF throughout this process.

Follow the Pivotal documentation to install the flavor of the CLI you need for your workstation OS:

https://docs.run.pivotal.io/cf-cli/install-go-cli.html
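
Once installed, a quick sanity check confirms the CLI is on your path:

cf --version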

1.2. Installing PCF Dev

Now that you are ready to roll with the CF CLI, it is time to download and install the PCF Dev components. This article is based on PCF Dev v0.30.0 for PCF 1.11.0. This version is based on VirtualBox and has a number of default services installed, some of which you will need later on.

PCF Dev – PAS 2.0.18.0 is an alpha release of the next-generation PCF Dev that uses the native OS hypervisor. It doubles the minimum memory requirement from 4G to 8G, has only a few PCF services installed by default, and takes up to an hour to start. It does, however, include a full BOSH Director, the graphical UI for managing “Tiles” in PCF, versus having to use the CLI. As soon as this version is a bit more stable and bundles more services like the old one did, it might be worth upgrading. But for now, make sure you select and download v0.30.0:

https://network.pivotal.io/products/pcfdev

In order to use your own IP address and DNS name (the -i and -d parameters of the cf dev start command), you need to set up a wildcard DNS record. In my case I set up *.pcfdev.mytestrun.com pointing to the IP address of the workstation where I am running PCF Dev.

Follow the command log below to install and start PCF Dev:

unzip pcfdev-v0.30.0_PCF1.11.0-osx.zip
./pcfdev-v0.30.0+PCF1.11.0-osx
cf dev start -i 192.168.37.73 -d pcfdev.mytestrun.com -m 6144
Warning: the chosen PCF Dev VM IP address may be in use by another VM or device.
Using existing image.
Allocating 6144 MB out of 16384 MB total system memory (6591 MB free).
Importing VM...
Starting VM...
Provisioning VM...
Waiting for services to start...
7 out of 58 running
7 out of 58 running
7 out of 58 running
7 out of 58 running
40 out of 58 running
56 out of 58 running
58 out of 58 running
 _______  _______  _______    ______   _______  __   __
|       ||       ||       |  |      | |       ||  | |  |
|    _  ||       ||    ___|  |  _    ||    ___||  |_|  |
|   |_| ||       ||   |___   | | |   ||   |___ |       |
|    ___||      _||    ___|  | |_|   ||    ___||       |
|   |    |     |_ |   |      |       ||   |___  |     |
|___|    |_______||___|      |______| |_______|  |___|
is now running.
To begin using PCF Dev, please run:
   cf login -a https://api.pcfdev.mytestrun.com --skip-ssl-validation
Apps Manager URL: https://apps.pcfdev.mytestrun.com
Admin user => Email: admin / Password: admin
Regular user => Email: user / Password: pass

1.3 Logging in to PCF Dev

Login to your fresh PCF Dev instance and select the org you want to work with. Use the pcfdev-org:

cf login -a https://api.pcfdev.mytestrun.com/ --skip-ssl-validation
API endpoint: https://api.pcfdev.mytestrun.com/
Email> admin
Password>
Authenticating...
OK
Select an org (or press enter to skip):
1. pcfdev-org
2. system
Org> 1
Targeted org pcfdev-org
Targeted space pcfdev-space

API endpoint:  https://api.pcfdev.mytestrun.com (API version: 2.82.0)
User:          admin
Org:            pcfdev-org
Space:          pcfdev-space

Authenticate using admin/admin if using PCF Dev or a Pivotal admin user if using a real PCF instance.

2. Install Sample Applications

To test the Service Broker and inter-application SSO, install 2 sample applications:

2.1. Spring Music

git clone https://github.com/cloudfoundry-samples/spring-music
cd spring-music

Modify manifest to reduce memory and avoid random route names:

vi manifest.yml

Enter or copy & paste the following content:

---
applications:
- name: music
  memory: 768M
  random-route: false
  path: build/libs/spring-music-1.0.jar

Push the app:

cf push

Waiting for app to start...
name:           music
requested state:   started
instances:      1/1
usage:          768M x 1 instances
routes:            music.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 15:28:24 CDT 2018
stack:          cflinuxfs2
buildpack:      container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.13-offline-https://github.com/cloudfoundry/java-buildpack.git#03b493f
                java-main open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10...
start command:  CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE
                -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100%
                -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR
                -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY
                -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks
                -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec
                $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher

     state    since                  cpu      memory          disk          details
#0   running   2018-05-22T20:29:00Z   226.8% 530.4M of 768M   168M of 512M

Note the routes: music.pcfdev.mytestrun.com

That’s the URL at which your application can be reached. You should be able to resolve the dynamically generated DNS name. You should also be able to hit the URL in a web browser.

Retrieve application logs:

cf logs music --recent

Live-tail application logs:

cf logs music

2.2. Cloud Foundry Sample NodeJS App

git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs.git
cd cf-sample-app-nodejs

Modify manifest to reduce memory and avoid random route names:

vi manifest.yml

---
applications:
- name: node
  memory: 512M
  instances: 1
  random-route: false

Push the app:

cf push

Waiting for app to start...
name:           node
requested state:   started
instances:      1/1
usage:          512M x 1 instances
routes:            node.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 15:46:02 CDT 2018
stack:          cflinuxfs2
buildpack:      node.js 1.5.32
start command:  npm start

     state    since                  cpu    memory      disk        details
#0   running   2018-05-22T20:46:35Z   0.0% 0 of 512M 0 of 512M   

Note the routes: node.pcfdev.mytestrun.com

That’s the URL at which your application can be reached. You should be able to resolve the dynamically generated DNS name. You should also be able to hit the URL in a web browser.

Retrieve application logs:

cf logs node --recent

Live-tail application logs:

cf logs node

2.3. Create Your Own JSP Headers App

Create your very own useful sample application to display headers. This will come in handy for future experiments with the IG Route Service.

mkdir headers
cd headers
mkdir WEB-INF
vi index.jsp

<%@ page import="java.util.*" %>
<html>
<head>
<title><%= application.getServerInfo() %></title>
</head>
<body>
<h1>HTTP Request Headers Received</h1>
<table border="1" cellpadding="3" cellspacing="3">
<%
    Enumeration eNames = request.getHeaderNames();
    while (eNames.hasMoreElements()) {
        String name = (String) eNames.nextElement();
        String value = normalize(request.getHeader(name));
%>
<tr><td><%= name %></td><td><%= value %></td></tr>
<%
    }
%>
</table>
</body>
</html>
<%!
    // Insert an HTML line break after each semicolon so long header
    // values (e.g. cookies) wrap inside the table cell.
    private String normalize(String value)
    {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            sb.append(c);
            if (c == ';')
                sb.append("<br>");
        }
        return sb.toString();
    }
%>
Push the app:

cf push headers

Waiting for app to start...
name:           headers
requested state:   started
instances:      1/1
usage:          256M x 1 instances
routes:            headers.pcfdev.mytestrun.com
last uploaded:  Tue 22 May 16:24:26 CDT 2018
stack:          cflinuxfs2
buildpack:      container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.13-offline-https://github.com/cloudfoundry/java-buildpack.git#03b493f
                open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE tomcat-access-logging-support=2.5.0_RELEAS...
start command:  CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE
                -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100%
                -stackThreads=300 -totMemory=$MEMORY_LIMIT) &&  JAVA_HOME=$PWD/.java-buildpack/open_jdk_jre JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR
                -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY
                -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks
                -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password -Djava.endorsed.dirs=$PWD/.java-buildpack/tomcat/endorsed
                -Daccess.logging.enabled=false -Dhttp.port=$PORT" exec $PWD/.java-buildpack/tomcat/bin/catalina.sh run

     state    since                  cpu    memory        disk            details
#0   running   2018-05-22T21:24:48Z   0.0% 600K of 256M 84.6M of 512M  

2.4. More Sample Apps

git clone https://github.com/cloudfoundry-samples/cf-ex-php-info
git clone https://github.com/cloudfoundry-samples/cf-sample-app-rails.git

3. Running IG in Pivotal Cloud Foundry

You can run IG absolutely anywhere you want, but since you are going to use it inside PCF, running it in PCF may be a logical choice.

3.1. Install, Deploy, and Configure  IG in PCF

The steps below describe an opinionated deployment model for IG in PCF. Your specific environment may require you to make different choices to achieve an ideal configuration and behavior.

3.1.1. Download IG

Download IG 6 from https://backstage.forgerock.com/downloads/browse/ig/latest to a preferred working location. Login using your backstage credentials.

unzip IG-6.1.0.war
cf push ig --no-start

3.1.2. Enable Development Mode

cf set-env ig IG_RUN_MODE development

3.1.3. Create And Use Persistent Volume For Configuration Data

IG is configured using JSON files. This section describes an easy way to create a shared storage volume that can persist your IG configuration between restarts. If you run IG using its default configuration, it will lose its configuration every time it restarts because the app is reset. Externalizing the config allows the configuration to reside outside the app and persist between restarts. In a real PCF environment (vs a PCF Dev environment) you would probably use a different shared storage option, like an NFS service or the like. But for development purposes, a local-volume will work great.

(https://github.com/cloudfoundry/local-volume-release)

cf create-service local-volume free-local-disk local-volume-instance
cf bind-service ig local-volume-instance -c '{"mount":"/var/openig"}'
cf set-env ig IG_INSTANCE_DIR '/var/openig'
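
You can check that the volume service instance exists and is bound to ig with:

cf services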

3.1.4. Start IG applying all the configuration changes we have made

cf start ig

3.1.5. Logs

cf logs ig --recent

3.1.6. Apply Required Configuration

3.1.6.1. SSH into your IG instance

cf ssh ig
cd /var/openig
mkdir config
vi config/config.json

3.1.6.2. Apply configuration

Create /var/openig/config/config.json and populate with default configuration as documented here:

https://backstage.forgerock.com/docs/forgerock-service-broker/2/forgerock-service-broker-guide/#implementation-setting-up-openig

{
  "heap": [
     {
       "name": "ClientHandler",
       "type": "ClientHandler",
       "config": {
         "hostnameVerifier": "ALLOW_ALL",
         "trustManager": {
           "type": "TrustAllManager"
         }
       }
    },
    {
      "name": "_router",
      "type": "Router",
      "config": {
        "defaultHandler": {
          "type": "StaticResponseHandler",
          "config": {
            "status": 404,
            "reason": "Not Found",
            "headers": {
              "Content-Type": [
                "application/json"
              ]
            },
            "entity": "{ \"error\": \"Something went wrong, contact the sys admin\"}"
          }
        }
      }
    },
    {
      "type": "Chain",
      "name": "CloudFoundryProxy",
      "config": {
        "filters": [
          {
            "type": "ScriptableFilter",
            "name": "CloudFoundryRequestRebaser",
            "comment": "Rebase the request based on the CloudFoundry provided headers",
            "config": {
              "type": "application/x-groovy",
              "source": [
                "Request newRequest = new Request(request);",
                "org.forgerock.util.Utils.closeSilently(request);",
                "newRequest.uri = URI.create(request.headers['X-CF-Forwarded-Url'].firstValue);",
                "newRequest.headers['Host'] = newRequest.uri.host;",
                "logger.info('Receive request : ' + request.uri + ' forwarding to ' + newRequest.uri);",
                "Context newRoutingContext = org.forgerock.http.routing.UriRouterContext.uriRouterContext(context).originalUri(newRequest.uri.asURI()).build();",
                "return next.handle(newRoutingContext, newRequest);"
              ]
            }
          }
        ],
        "handler": "_router"
      },
      "capture": [
        "request",
        "response"
      ]
    }
  ],
  "handler": {
    "type": "DispatchHandler",
    "name": "Dispatcher",
    "config": {
      "bindings": [
        {
          "condition": "${not empty request.headers['X-CF-Forwarded-Url']}",
          "handler": "CloudFoundryProxy"
        },
        {
          "handler": {
            "type": "StaticResponseHandler",
            "config": {
              "status": 400,
              "entity": "Bad request : expecting a header X-CF-Forwarded-Url"
            }
          }
        }
      ]
    }
  }
}

Then:

exit
cf restart ig
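
As a quick smoke test of the DispatchHandler configuration above, a direct request without the X-CF-Forwarded-Url header should come back with the 400 response defined in the second binding:

curl -i http://ig.pcfdev.mytestrun.com/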

3.1.7. Access IG Studio

http://ig.pcfdev.mytestrun.com/openig/studio/

4. Install ForgeRock Service Broker

Download and install the service broker following the instructions in the doc:

https://backstage.forgerock.com/docs/forgerock-service-broker/2/forgerock-service-broker-guide/#implementation-installing-into-cloud-foundry

4.1. Deploy and Configure the Service Broker App

cf push forgerockbroker-app -p service-broker-servlet-2.0.1.war
cf set-env forgerockbroker-app SECURITY_USER_NAME f8Q7hyHKgz
cf set-env forgerockbroker-app SECURITY_USER_PASSWORD n3BpjwKW4m
cf set-env forgerockbroker-app OPENAM_BASE_URI https://idp.mytestrun.com/openam/
cf set-env forgerockbroker-app OPENAM_USERNAME CloudFoundryAgentAdmin
cf set-env forgerockbroker-app OPENAM_PASSWORD KZDJhN7Vr4
cf set-env forgerockbroker-app OAUTH2_SCOPES profile
cf set-env forgerockbroker-app OPENIG_BASE_URI https://ig.pcfdev.mytestrun.com
cf restage forgerockbroker-app

Note that OPENIG_BASE_URI is specified as https, not http! If specified as http, the following error occurs when binding the ig route service to an application:

cf bind-route-service pcfdev.mytestrun.com igrs --hostname spring-music-chatty-quokka
Binding route spring-music-chatty-quokka.pcfdev.mytestrun.com to service instance igrs in org pcfdev-org / space pcfdev-space as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker returned an invalid response for the request to http://forgerockbroker-app.pcfdev.mytestrun.com/v2/service_instances/4aa37a88-afc0-4e75-9474-d5e2ed3e7876/service_bindings/c8da2445-6689-4824-afd1-125795e2a848. Status Code: 201 Created, Body: {"route_service_url":"http://ig.pcfdev.mytestrun.com/4aa37a88-afc0-4e75-9474-d5e2ed3e7876/c8da2445-6689-4824-afd1-125795e2a848"}

To see the service broker app’s environment:

cf env forgerockbroker-app

To see the service broker app’s details:

cf app forgerockbroker-app

Create service broker:

cf create-service-broker forgerockbroker f8Q7hyHKgz n3BpjwKW4m http://forgerockbroker-app.pcfdev.mytestrun.com
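Confirm the broker was registered:

cf service-brokers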

Enable the service(s) you plan to use. The ForgeRock Service Broker supports OAuth2 and IG; you can enable either or both.

cf enable-service-access forgerock-ig-route-service
cf enable-service-access forgerock-am-oauth2

Create the service instance(s) you will be using for your apps. You should only need one instance per service to handle any number of applications:

cf create-service forgerock-ig-route-service shared igrs
cf create-service forgerock-am-oauth2 shared amrs

4.2. Bind IG Route Service to the Sample Apps

Note how no apps are bound to the IG Route Service (igrs):

cf routes
Getting routes for org pcfdev-org / space pcfdev-space as admin ...
space          host                  domain                port  path  type  apps                  service
pcfdev-space  music                 pcfdev.mytestrun.com                     music
pcfdev-space  node                  pcfdev.mytestrun.com                     node
pcfdev-space  rails                 pcfdev.mytestrun.com                     rails
pcfdev-space  headers               pcfdev.mytestrun.com                     headers
pcfdev-space  ig                    pcfdev.mytestrun.com                     ig
pcfdev-space  forgerockbroker-app   pcfdev.mytestrun.com                     forgerockbroker-app

Bind the Route Service to the apps:

cf bind-route-service pcfdev.mytestrun.com igrs --hostname music
cf bind-route-service pcfdev.mytestrun.com igrs --hostname node
cf bind-route-service pcfdev.mytestrun.com igrs --hostname rails
cf bind-route-service pcfdev.mytestrun.com igrs --hostname headers

Now the sample apps are bound to our IG Route Service:

cf routes
Getting routes for org pcfdev-org / space pcfdev-space as admin ...
space          host                              domain                port  path  type  apps                  service
pcfdev-space  music                 pcfdev.mytestrun.com                     music              igrs
pcfdev-space  node                  pcfdev.mytestrun.com                     node               igrs
pcfdev-space  rails                 pcfdev.mytestrun.com                     rails              igrs
pcfdev-space  headers               pcfdev.mytestrun.com                     headers            igrs
pcfdev-space  ig                    pcfdev.mytestrun.com                     ig
pcfdev-space  forgerockbroker-app   pcfdev.mytestrun.com                     forgerockbroker-app

5. Define IG Routes for the Sample Apps

By default, no routes are defined in IG for our sample apps, and IG’s default behavior (defined in the config.json you created earlier) is to deny access to everything. The next, and very important, step is therefore to define routes that re-enable access to our sample applications. Once the basic routes are defined, we can add authentication and authorization per application as we see fit:

  • Point your browser to the IG Studio: http://ig.pcfdev.mytestrun.com/openig/studio/
  • Select “Protect an Application” from the Studio home screen, then select “Structured.”
  • Select “Advanced options” and enter the app URL from the step where you pushed the app to PCF.
    • Since PCF does hostname-based routing (vs path-based) you have to change the Condition that selects your route accordingly. In the Condition field, select “Expression” and enter:
      ${matches(request.uri.host, '^app-url')}
      E.g. (see the route sketch after this list):
      ${matches(request.uri.host, '^music.pcfdev.mytestrun.com')}
    • Pick a descriptive name and a unique ID for the application
    • Select “Create route”

  • Deploy your route.
  • You have now created a route with default configuration, which simply proxies requests through IG to the app. That means your app is available again like it was before you implemented IG and the Service Broker. The next step is to add value to your route like authentication or authorization.
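For reference, the route Studio deploys is just another IG JSON file. A minimal hand-written equivalent for the music app might look roughly like the sketch below; the name and condition are illustrative, and the handler simply reuses the ClientHandler defined in config.json, since the CloudFoundryRequestRebaser has already pointed the request at the app:

{
  "name": "music",
  "condition": "${matches(request.uri.host, '^music.pcfdev.mytestrun.com')}",
  "handler": "ClientHandler"
}

The exact JSON Studio generates for your IG version will differ, so treat this only as a mental model of what “Deploy your route” produces.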

5.1. Prepare for Authentication and Authorization

As a preparatory step to authentication and authorization, create an AM Service for your route, which is a piece of configuration pointing to your ForgeRock Access Management instance. Select “AM service” from the left side menu and provide the details of your AM instance:

You won’t need the agent section populated for the use cases here.

5.2. Broker Authentication to an Application

  • To add authentication to your route, select “Authentication” from the left side menu and move the slider “Enable authentication” to the right, then select “Single Sign-On” as your authentication option.
  • In the configuration dialog popping up, select your AM service:

    Then select “Save”.
  • Deploy your route.
  • Point your browser to your app URL, e.g. https://music.pcfdev.mytestrun.com/
  • Notice how you will be redirected to your Access Management login page for authentication. Provide valid login credentials and your sample app should load.
  • Repeat with the other apps. Note how you can now SSO between all the apps!
  • Now let’s add authorization to one of the routes and only allow members of a certain group access to that application. For that, we need some additional prep work in AM:
    • Create a J2EE agent IG can use to evaluate AM policies:

    • Create a new policy set with the name “PCF” or a name and ID of your liking:

      Add URL as the resource type.
    • Create a policy and name it after the application you are protecting. Specify your app URL as the resource, allow GET as an action, and specify the subject condition to require a group membership. In this example, we require membership in the “Engineering” group for access to the “headers” application:

      Your policy summary page should look something like this:

  • Now go back to IG Studio and select the route of the app you created your policy for (in our case the “headers” app), select “Authorization” from the left side bar, move the slider “Enable authorization” to the right, then select “AM Policy Enforcement” as your way to authorize users.
  • Select your AM service, specify your realm, and provide the name and password of the J2EE agent you created in an earlier step. In the policy endpoint section, specify the name of your policy set and the expression to retrieve your SSO token; the default should work: ${contexts.ssoToken.value}

  • Save and deploy your route.
  • Point your browser to the protected app and login using a user who is a member of the group you configured to control access. Notice how the app loads after logging in.
  • Now remove the user from the group and refresh the app. Notice how the page goes blank because the user is no longer authorized.

Conclusion

With this setup, applications can now be integrated, protected, SSO-enabled, and identity-infused within minutes. Provide profile self-service, password reset, strong and step-up authentication, continuous authentication, authorization, and risk evaluation to any application in the Pivotal Cloud Foundry ecosystem.

The Role Of Mobile During Authentication

Nearly all the big social networks now provide a multi-factor authentication option – either an SMS-delivered code or a key-derived one-time password accessible via a mobile app. Examples include Google’s Authenticator, Facebook’s options for MFA (including their Code Generator, built into their mobile app) and LinkedIn’s two-step verification. There are lots more examples, but the common component is using the user’s mobile phone as an out of band authentication channel.

Phone as a Secondary Device - “Phone-as-a-Token”

The common term for this is “phone-as-a-token”. Basic mobile phones are now so ubiquitous that leveraging at least SMS-delivered one-time passwords (OTP), for users who have neither data plans nor smartphones, is commonplace. This is an initial step in moving away from traditional username-and-password login. However, since the National Institute of Standards and Technology (NIST) released its view that SMS-based OTP delivery is insecure, there has been constant innovation around how best to integrate phone-based out of band authentication. Push notifications are one approach; local or native biometry is another, often coupled with FIDO (Fast Identity Online) for secure application integration.

EMM and Device Authentication

But using a phone as an out of band authentication device often overlooks the credibility and assurance of the device itself. If push-based notification apps are used, the security and integrity of those apps can be guaranteed to a certain degree, but the device the app is installed upon cannot necessarily be attested to the same level. What about environments where BYOD (Bring Your Own Device) is used? What about the potential for jailbroken operating systems, or for low-assurance or, worse still, malware-based apps running alongside the push authentication app? Does that impact credibility and assurance? Could it result in the app being compromised in some way?

In the internal identity space, Enterprise Mobility Management (EMM) software often comes to the rescue here – perhaps issuing and distributing certificates or key pairs to devices in order to perform device validation before accepting the out of band verification step. This can be coupled with app assurance checks and OS baseline versioning. However, this is often time-consuming and complex, and isn’t always possible in the consumer or digital identity space.

Multi-band to Single-band Login

Whilst you can achieve a nirvana of user authentication, device authentication and out of band authentication, let’s spin forward and imagine a world where the majority of interactions occur solely via a mobile device. We then no longer have an “out of band” authentication vehicle: the main application login occurs on the mobile itself. So what does that really mean? We lose the secondary binding. But if the initial application authentication leverages the mechanics of the original out of band step (local biometry, crypto/FIDO integration), is there anything to worry about? The initial device-to-user binding is still an avenue that requires further investigation. By removing an out of band process, we reduce the number of signals or factors. And unless a local biometric authentication process is used, the risk of credential theft increases substantially.

Leave your phone on the train with only a basic local PIN protecting access to refresh_tokens or private keys, and we’re back to the “keys to the castle” scenario.


User, Device & Contextual Analysis

So we’re back to a situation where we need to augment what is, in fact, a single-factor login journey.

The physical identity is bound to a digital device. How can we maintain a continuous level of assurance for the user-to-app interaction? We need to add additional signals – commonly known as context.

This “context” could include environmental data such as geo-location, time and network addressing, or more behavioural signals such as movement or gait analysis and app usage patterns. The main driver is a move away from the big-bang login event, where assurance is very high at the moment of login and then tails off slowly as time goes by. This correlates with the adage of short-lived sessions or access_tokens – mainly because assurance cannot be guaranteed as the time since the authentication event increases.

This “context” is then used to perform lots of smaller micro-authentication events – perhaps checking at every use of an access_token, or whenever a session is presented to an access control point.

So once a mobile user has “logged in” to the app, there is a lot more activity in the background looking for changes in context (either environmental or behavioural). No more out of band, just a lot of micro-steps.

As authentication becomes more transparent or passive, the real effort then moves to physical to digital binding or user proofing...
