About REST APIs and API Descriptors

This post briefly describes the types of HTTP APIs available through the ForgeRock platform, and which ones come with live reference documentation.

The following categories of HTTP APIs are available in the ForgeRock platform:

ForgeRock Common REST (CREST) APIs

ForgeRock Common REST provides a framework for HTTP APIs. Each of the component products in the platform uses CREST to build APIs that perform create, read, update, delete, patch, action, and query (CRUDPAQ) operations in the same ways.
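For example, a CREST query uses the same request parameters no matter which platform component serves it. The following sketch (assuming a local IDM instance with the default openidm-admin account and port) queries a managed object endpoint:

$ curl --header "X-OpenIDM-Username: openidm-admin" \
       --header "X-OpenIDM-Password: openidm-admin" \
       "http://localhost:8080/openidm/managed/user?_queryFilter=true&_fields=userName"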

ForgeRock platform component products generate live reference documentation in a standard format (Swagger, which has been standardized as OpenAPI) for CREST APIs. This is done through a mechanism referred to as API descriptors. You can use this documentation to try out the CREST APIs.
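For example, on an IDM instance you can retrieve the OpenAPI-format descriptor for an endpoint by appending the _api parameter to a request (a sketch, assuming default admin credentials and port):

$ curl --header "X-OpenIDM-Username: openidm-admin" \
       --header "X-OpenIDM-Password: openidm-admin" \
       "http://localhost:8080/openidm/info/ping?_api"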

Standard HTTP APIs such as OAuth 2.0

Standard HTTP APIs are defined by organizations like the IETF for OAuth 2.0, the Kantara Initiative for UMA, and the OpenID Connect Working Group. These APIs have their own implementations and do not use CREST. They are documented where they are used in the product documentation.

The canonical documentation is the specifications for the standards. At present, the ForgeRock platform components do not generate live documentation for these standard APIs.

Non-RESTful, Challenge-Response HTTP APIs

Some APIs, such as the authentication API used in ForgeRock AM and the user self-service API used in ForgeRock IDM, are not fully RESTful. Instead, they use challenge-response mechanisms that have the developer return to the same endpoint with different payloads during a session.

These APIs are documented in the product documentation.

The ForgeRock API reference documentation published with the product docs is, necessarily, abstract. It does not provide you with a sandbox for trying out the APIs. Unlike a SaaS, with its fixed configuration, the ForgeRock platform components are highly configurable. ForgeRock HTTP APIs depend on how you decide to configure each service.

Live Reference Documentation

It is your software deployment or SaaS, built with the ForgeRock platform, that publishes concrete APIs.

You can capture the OpenAPI-format docs and edit them to correspond to the APIs you actually want to publish. A browser-based, third-party application, Swagger UI, makes it easy to set up a front end to a sandbox service so your developers can try out your APIs.

Note that you still need to protect the endpoints. In particular, prevent developers from using the endpoints you do not want to publish.

The following posts in this series will look at how to work with the APIs when developing your configuration, and how to get the OpenAPI-format descriptors to publish live API documentation for your developers.

Using IG to Protect IDM For Secure and Standards-Based Integration

ForgeRock Identity Management (IDM) has a rich set of REST APIs capable of performing many actions; this is one of the great values that IDM has to offer. However, with such a broad set of APIs available out of the box, it is reasonable to want to limit which of those REST APIs are directly available to the public. The most common way to enforce that limit is to use an API gateway to act as a reverse proxy. In this way, only the requests which the gateway has been configured to allow will be sent to IDM.

In addition to filtering requests, API gateways can offer many powerful options for authentication and authorization which are not available directly within IDM. One important example of this need is for OAuth2 clients to be able to request IDM REST APIs using an access token that was obtained from ForgeRock Access Management (AM). In this way, the IDM endpoints can be treated as standard OAuth2 resource server endpoints; the gateway can validate the presence of the access token and introspect it to ensure that there are appropriate scopes for the request. Being able to rely on an access token as the means by which clients work with the IDM REST API is often easier, more secure, and more interoperable than other means available.

In this article I’m going to focus on how you could configure ForgeRock Identity Gateway (IG) to accomplish both of these goals. You will be able to use this configuration as a pattern for deploying IDM functionality in a secure and standards-based way, making it easy to integrate with your own applications. Some basic knowledge about installation and typical use for each product is assumed; please review the product documentation if you are unfamiliar with either product.

Securing the connection between IDM and IG

When IG is responsible for securing the requests to IDM, it is important to configure IDM so that it will only accept connections which originate from IG. One way to accomplish this is to configure your network topology and firewall rules to prevent anything besides IG from being able to make a connection. Doing so will be specific to your environment, so I won’t cover how to do that here; ask your network administrator for help. In addition to the network level security, you should also secure the connection with an encrypted channel using client SSL certificates. Both methods used together will provide the most robust security.

Configuring IDM to accept connections from IG

Since IG will be handling the direct end-user authentication, IDM only needs to handle the authentication of the requests originating from IG. As they are separate processes, there must be a standard HTTPS interface between them (IG proxying the client requests to the IDM server). The main security challenge is to ensure that only IG is able to make these direct HTTPS requests to IDM. Essentially, IDM needs a way to first authenticate IG itself so that it can then operate on behalf of the actual user. There are a few details that need to be considered in order to do this.

Static Users

First is the simple fact that IG is not a “user” in the sense that IDM normally expects – it is more of a service, and as such will probably not have a corresponding record available on the IDM router (in the way that users normally do).  Because authentication in IDM requires a route for every authenticated subject, we have to find a way to supply one. One simple way to do this is to create a custom endpoint that provides a sort of pseudo-route. For example, you can create a file named “conf/endpoint-staticuser.json” with this content:

{
    "type" : "text/javascript",
    "context" : "endpoint/static/user/*",
    "source" : "request.method === 'query' ? ([{_id: request.additionalParameters._id}]) : ({_id: context.security.authenticationId})"
}

This handy endpoint is designed to work with the authentication service. It simply reflects back the id provided (either as a query parameter or from the security context). Using this, we can define authentication modules for “users” which don’t have any other router entry, such as the case with IG.
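As a quick sanity check, you can call the endpoint directly. A read (rather than a query) takes the second branch of the ternary and reflects back the id of the authenticated subject; a hedged example, assuming default admin credentials and the default IDM port:

$ curl --header "X-OpenIDM-Username: openidm-admin" \
       --header "X-OpenIDM-Password: openidm-admin" \
       "http://localhost:8080/openidm/endpoint/static/user/check"

This should return {"_id":"openidm-admin"}, the id taken from the security context.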

Client Certificate Authentication

The next step is to actually configure the authentication module that IG will be using. In this case, we are going to use the “CLIENT_CERT” authentication module, following a similar process to the one described in “Configuring Client Certificate Authentication“. Example openssl commands and more background detail can be found there.
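For reference, here is a minimal sketch of generating such a certificate and trusting it in IDM (the file names and store paths are hypothetical; adjust key sizes, validity, and locations to your environment):

# Generate a self-signed client certificate for IG with the expected subject
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/O=forgerock/CN=ig" \
    -keyout ig-key.pem -out ig-cert.pem

# Import the certificate into the IDM truststore so that IDM trusts it directly
$ keytool -importcert -alias ig -file ig-cert.pem \
    -keystore /path/to/openidm/security/truststore -storepass changeit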

This module requires that the “client” (in this case, IG) has a particular SSL certificate that it is able to present to IDM when it connects via HTTPS. Using our above “static/user” endpoint, here’s the most basic example JSON configuration showing how this could be done:

{
    "name": "CLIENT_CERT",
    "properties": {
        "queryOnResource": "endpoint/static/user",
        "defaultUserRoles": [
            "openidm-cert"
        ],
        "allowedAuthenticationIdPatterns": [
            "CN=ig, O=forgerock"
        ]
    },
    "enabled": true
}

This declaration authenticates any request which has supplied a client certificate that meets two conditions:

  1. It is trusted in terms of SSL (either directly imported in the IDM truststore or signed by a CA which is within the IDM truststore).
  2. It has a “subject” which matches one of the patterns listed under “allowedAuthenticationIdPatterns”. In this example, the subject needs to exactly match “CN=ig, O=forgerock”.

If both of those conditions are met, then the request will be authenticated, and it will be recognized as a “static/user” type of entity with the “openidm-cert” role. The intent behind this configuration is that requests from IG are recognized within this context.

Run As

The final configuration necessary within IDM is to allow IG to make requests on behalf of the user it has authenticated. In IDM 6.0, there is a new feature available for every authentication module which supports this – “RunAs Authentication“. This feature allows privileged clients to supply an “X-OpenIDM-RunAs” header in order to specify the name of the user they would like to operate on behalf of. This is obviously a highly-sensitive type of operation – only the most trusted clients (such as IG) should be allowed to operate under the guise of other users. You will need to configure IDM in order to allow IG to do this. For example, you can expand the CLIENT_CERT authentication module configuration like so (the added values are the “runAsProperties” block):

{
    "name": "CLIENT_CERT",
    "properties": {
        "queryOnResource": "endpoint/static/user",
        "defaultUserRoles": [
            "openidm-cert"
        ],
        "allowedAuthenticationIdPatterns": [
            "CN=ig, O=forgerock"
        ],
        "runAsProperties": {
            "adminRoles": [
                "openidm-cert"
            ],
            "disallowedRunAsRoles": [ ],
            "queryOnResource": "managed/user",
            "propertyMapping": {
                "authenticationId" : "userName",
                "userRoles": "authzRoles"
            },
            "defaultUserRoles" : [
                "openidm-authorized"
            ]
        }
    },
    "enabled": true
}

By using this configuration, any authenticated request which comes from IG should include an “X-OpenIDM-RunAs” header that identifies the “userName” value for the associated “managed/user” record. IG will be the only client capable of making this sort of request, because it is the only entity which has the private key associated with the trusted certificate.
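To illustrate what IG does on each request, here is a hedged curl equivalent (hypothetical host, user, and file names), presenting the client certificate along with the RunAs header directly to IDM:

$ curl --cert ig-cert.pem --key ig-key.pem --cacert idm-cert.pem \
    --header "X-OpenIDM-RunAs: bjensen" \
    "https://idm.example.com:8444/openidm/info/login"

A successful response should show the request authenticated as the managed user named in the header, with the “openidm-authorized” default role.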

Configuring IG to accept connections for IDM

Now that IDM is prepared to accept connections, IG needs to be configured to make them correctly.

Trusting IDM

Review the chapter in the IG configuration guide called “Configuring IG for HTTPS (client-side)“. If IDM is using a self-signed certificate, you will need to import that into IG’s truststore as described there.
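If IDM’s certificate is self-signed, a hedged keytool sketch for that import (hypothetical alias and file name; the keystore path matches the ClientHandler example below):

$ keytool -importcert -alias idm -file idm-cert.pem \
    -keystore /var/openig/keystore.jks -storepass changeit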

Supplying a Client Certificate

All requests made by IG to a back-end are via a “ClientHandler“.  By default, there is no special behavior associated with a ClientHandler – it simply acts as a generic HTTP/S client. The best option for us is to define a new ClientHandler on the IG “heap” that is configured to perform client certificate authentication. The most important declaration for this use is the “keyManager“; this is how you tell the ClientHandler where to get the keys to use when prompted for client certificates. Here is an example of a configured client that is ready to trust IDM’s certificate and supply its own:

{
    "name": "IDMClient",
    "type": "ClientHandler",
    "config": {
        "hostnameVerifier": "ALLOW_ALL",
        "sslContextAlgorithm": "TLSv1.2",
        "keyManager": {
            "type": "KeyManager",
            "config": {
                "keystore": {
                    "type": "KeyStore",
                    "config": {
                        "url": "file:///var/openig/keystore.jks",
                        "password": "changeit"
                    }
                },
                "password": "changeit"
            }
        },
        "trustManager": {
            "type": "TrustManager",
            "config": {
                "keystore": {
                    "type": "KeyStore",
                    "config": {
                        "url": "file:///var/openig/keystore.jks",
                        "password": "changeit"
                    }
                }
            }
        }
    }
}

In this example, the /var/openig/keystore.jks file contains both the public certificate needed to trust IDM and the private key needed to authenticate IG as a client. Make sure that all IG routes which forward requests to IDM use this ClientHandler.

Including the RunAs Header

Before the request is forwarded to IDM via the IDMClient, it needs to be modified to include the X-OpenIDM-RunAs header. This allows IDM to find the user that is actually making the request, via IG.

There are several ways to modify the request before it is sent to IDM, but all of them involve a filter on the route. The simplest example is to use the “HeaderFilter“, like so:

{
     "type": "HeaderFilter",
     "config": {
         "messageType": "REQUEST",
         "add": {
             "X-OpenIDM-RunAs": [ "${contexts.oauth2.accessToken.info.sub}" ]
         }
     }
}

If you have more complex logic around your user behavior, you might want to use a ScriptableFilter, within which you could set the header with code like so:

// Extract the subject claim from the introspected access token
String sub = contexts.oauth2.accessToken.info.sub
// Add the RunAs header so that IDM operates on behalf of this user
request.getHeaders().add('X-OpenIDM-RunAs', sub)
// Continue processing the filter chain
return next.handle(context, request)

Operating as an OAuth2 resource server

While there are many potential benefits provided by IG, the main one explored in this article is to operate as an OAuth2 resource server. The core feature is well documented – see both the IG gateway guide and the configuration reference. The main detail to consider in this specific context is how you want to manage scopes. In order for requests to successfully pass through the OAuth2ResourceServerFilter, the token must have the correct scopes associated with it. What constitutes a “correct” scope value for any given request is up to you to decide. You will need to consider which IDM endpoints and methods you want to make available and how to express that availability in terms of a scope. For example, if you want to allow OAuth2 clients to be able to make calls to the “/openidm/endpoint/usernotifications” endpoint, you could define a scope called “notifications” and require it with a route like so:

{
    "name": "notificationsRoute",
    "baseURI": "https://idm.example.com:8444",
    "condition": "${matches(request.uri.path, '^/openidm/endpoint/usernotifications')}",
    "handler": {
        "type": "Chain",
        "config": {
            "filters": [
                {
                    "type": "OAuth2ResourceServerFilter",
                    "config": {
                        "scopes": [
                            "notifications"
                        ],
                        "accessTokenResolver": "AccessTokenResolver"
                    }
                },
                {
                     "type": "HeaderFilter",
                     "config": {
                         "messageType": "REQUEST",
                         "add": {
                             "X-OpenIDM-RunAs": [ "${contexts.oauth2.accessToken.info.sub}" ]
                         }
                     }
                }
            ],
            "handler": "IDMClient"
        }
    },
    "heap": [
        {
            "name": "AccessTokenResolver",
            "type": "TokenIntrospectionAccessTokenResolver",
            "config": {
                "endpoint": "https://am.example.com/openam/oauth2/introspect",
                "providerHandler": {
                    "type": "Chain",
                    "config": {
                        "filters": [
                            {
                                "type": "HeaderFilter",
                                "config": {
                                    "messageType": "request",
                                    "add": {
                                        "Authorization": [
                                            "Basic ${encodeBase64('openidmClient:openidmClient')}"
                                        ]
                                    }
                                }
                            }
                        ],
                        "handler": "ClientHandler"
                    }
                }
            }
        },
        {
            "name": "IDMClient",
            "type": "ClientHandler",
            "config": {
                "hostnameVerifier": "ALLOW_ALL",
                "sslContextAlgorithm": "TLSv1.2",
                "keyManager": {
                    "type": "KeyManager",
                    "config": {
                        "keystore": {
                            "type": "KeyStore",
                            "config": {
                                "url": "file:///var/openig/keystore.jks",
                                "password": "changeit"
                            }
                        },
                        "password": "changeit"
                    }
                },
                "trustManager": {
                    "type": "TrustManager",
                    "config": {
                        "keystore": {
                            "type": "KeyStore",
                            "config": {
                                "url": "file:///var/openig/keystore.jks",
                                "password": "changeit"
                            }
                        }
                    }
                }
            }
        }
    ]
}

The details about how to validate the access token are shown here via the “AccessTokenResolver” heap object. There are a lot of legitimate configuration options worth exploring for the “OAuth2ResourceServerFilter” filter; see the reference documentation for the full range. In this example, it is calling out to a standard OAuth2 token introspection endpoint provided by AM, authenticated with a registered client called “openidmClient”. See the documentation regarding how to configure AM to operate as a token introspection service. Be sure that the client you register for the IG resource server filter has the “am-introspect-all-tokens” scope included within it.
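Under the covers, the resolver makes a standard (RFC 7662) token introspection call. A hedged curl equivalent, with a placeholder for the token value:

$ curl --user openidmClient:openidmClient \
    --data "token=<access_token>" \
    "https://am.example.com/openam/oauth2/introspect"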

You will want to define similar route entries as this for each scope requirement you want to declare. To reduce the amount of duplicated route configuration, I suggest you move the common parts between each route (such as the IDMClient and AccessTokenResolver heap objects) into the global IG “config.json” file. See more details about route definition in the IG gateway guide section on the subject.

Complete configuration example

There is a full sample configuration available as part of the “ForgeOps” project which follows these patterns. This sample brings up AM configured to operate as an OAuth2 AS, providing token creation and introspection. It configures IG to act as an OAuth2 RS, and configures IDM to accept connections exclusively from IG. By reviewing the “rs” and “idm” sub-folders included within that sample, you should recognize many of the points described in this article. See the included README.md file within that sample for details regarding how to start it up. This sample configuration will be used as the basis for subsequent articles which will discuss how you can use specific OAuth2 client libraries to make requests to IDM endpoints.

Go forth and build your secure, standards-based applications, integrated with the ForgeRock Identity Platform!

We love feedback on our documentation!

We as the ForgeRock documentation team appreciate the feedback that you provide. It helps us improve our work, and it helps us make our documentation (docs) more relevant to you, our users.

To that end, we’ve improved our feedback processes.

If you have an issue with a specific document, look for a bug icon in the upper-right corner of our documents. You’ll see the following text pop-up: Report documentation feedback.

Select the bug icon to "Report documentation feedback"

When you select that icon, you’ll see the following feedback form, where you can enter any and all desired feedback on our docs.

 

When you submit that form, we as the ForgeRock documentation team receive an email with the following information:

  • Product
  • Version
  • Title of the document
  • Message (entered by you)
  • Email address (if available)
  • Information from the current browser URL, including the current chapter, section, and title

Thanks in advance for your feedback, and we’ll do our best to answer every issue you bring us!

ForgeRock Identity Microservices with Consul on OpenShift

Introduction

OpenShift, a Kubernetes-as-a-PaaS service, is increasingly being considered as an alternative to managed Kubernetes platforms such as those from Tectonic and Rancher, and to vanilla native Kubernetes implementations such as those provided by Google, Amazon, and even Azure. Red Hat’s OpenShift is a PaaS that provides a complete platform, including features such as source-2-image build and test management, image management with built-in change detection, and deployment to staging / production.

In this blog, I demonstrate how ForgeRock Identity Microservices could be deployed into OpenShift Origin (community project).

OpenShift Origin Overview

OpenShift Origin is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OpenShift adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams.

OpenShift embeds Kubernetes and extends it with security and other integrated concepts. An OpenShift Origin release corresponds to a Kubernetes distribution – for example, OpenShift 1.7 includes Kubernetes 1.7.

Docker Images

I described in this blog post how I built the envconsul layer on top of the forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY image for the Authentication microservice, and similarly for the token-validation and token-exchange microservices. Back then, I took the easy route and used the vbox (host) IP for exposing the Consul IP address externally to the rest of the microservices pods.

This time, however, I rebuilt the docker images using a modified version of the envconsul.sh script which is, of course, used as the ENTRYPOINT, as seen in my github repo. I use an environment variable called CONSUL_ADDRESS in the script instead of a hard-coded VirtualBox adapter IP. The latter works nicely for minikube deployments but is not production friendly. With the env variable, we move much closer to a production-grade deployment.
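The relevant part of the script amounts to something like the following sketch (envconsul flag syntax varies across versions, and the key prefix and launch command here are placeholders):

# envconsul.sh (sketch): resolve Consul from the environment instead of a fixed IP,
# pull the key-value prefix for this service, then launch the microservice
exec envconsul -consul="${CONSUL_ADDRESS}" ms-authn /opt/microservice/run.sh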

How does OpenShift Origin resolve the environment variable on container startup? Read on!

OpenShift Origin Configuration

I have used minishift for this demonstration. More info about minishift can be found here. I started up OpenShift Origin using these settings:

minishift start --openshift-version=v3.10.0-rc.0
eval $(minishift oc-env)
eval $(minishift docker-env)
minishift addons enable anyuid

The anyuid addon is only necessary when you have containers that must be run as root. Consul is not one of those, but most Docker images rely on root, including the microservices, so I decided to set it anyway.

Apps can be deployed to OC in a number of ways, using the web console or the oc CLI. The main method I used was to deploy existing container images: the microservice images hosted on the ForgeRock Bintray repository, as well as the Consul image on Docker Hub.

The following command creates a new project:

$oc new-project consul-microservices --display-name="Consul based MS" --description="consul powered env vars"

At this point we have a project container for all our OC artifacts, such as deployments, image streams, pods, services and routes.

The new-app command can be used to deploy any image hosted on an external public or private registry. Using this command, OC will create a deployment configuration with a rolling strategy for updates. The command shown here will deploy the Hashicorp Consul image from Docker Hub (the default repository OC looks for). OC will download the image and store it in an internal image registry. The image is then copied from there to each node in an OC cluster where the app is run.

$oc new-app consul --name=consul-ms

With the deployment configuration created, two triggers are added by default: a ConfigChange and an ImageChange, which means that each deployment configuration change, or image update on the external repository, will automatically trigger a new deployment. A replication controller is created automatically by OC soon after the deployment itself.

Make sure to add a route for the Consul UI running at port 8500 as follows:
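A hedged sketch of such a route (the service and route names are assumptions, chosen to match the route host used later in this post):

$oc expose svc/consul-ms --port=8500 --name=consul-ms-ui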

I have described the Consul configuration needed for ForgeRock Identity Microservices in my earlier blog post on using envconsul for sourcing environment configuration at runtime. For details on how to get the microservices configuration into Consul as Key-Value pairs please refer to that post. A few snapshots of the UI showing the namespaces for the configuration are presented here:

 

A sample key value used for token exchange is shown here:

Next, we move to deploying the microservices. In order for envconsul to read the configuration from Consul at runtime, we can either pass an --env switch to the oc new-app command indicating the value of CONSUL_ADDRESS, or include the environment variable and a value for it in the deployment definition, as shown in the next section.

OpenShift Deployment Artifacts

If bundling more than one object for import, such as pod and service definitions, via a yaml file, OC expects a strongly typed template configuration, as sketched below for the ms-authn microservice. Note the extra metadata for template identification, and the use of the objects syntax:
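A minimal hedged reconstruction of such a template, in the JSON form OC also accepts (object names, labels, and the replica count are assumptions on my part):

{
    "apiVersion": "v1",
    "kind": "Template",
    "metadata": { "name": "ms-authn-template" },
    "objects": [
        {
            "apiVersion": "v1",
            "kind": "DeploymentConfig",
            "metadata": { "name": "ms-authn" },
            "spec": {
                "replicas": 1,
                "selector": { "app": "ms-authn" },
                "template": {
                    "metadata": { "labels": { "app": "ms-authn" } },
                    "spec": {
                        "containers": [
                            {
                                "name": "ms-authn",
                                "image": "forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL",
                                "env": [
                                    {
                                        "name": "CONSUL_ADDRESS",
                                        "value": "consul-ms-ui-consul-microservices.192.168.64.3.nip.io"
                                    }
                                ]
                            }
                        ]
                    }
                }
            }
        }
    ]
}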

 

This template, once stored in OC, can be reused to create new deployments of the ms-authn microservice. Notice the use of the environment variable indicating the Consul IP. This informs the envconsul.sh script of the location of Consul and envconsul is able to pull the configuration key-value pairs before spawning the microservice.

I used the oc new-app command in this demonstration instead.

As described in the other blog, copy the configuration export (this could be automated using Git as well) into the Consul docker container:

$docker cp consul-export.json 5a2e414e548a:/consul-export.json

And then import the configuration using the consul kv command:

$consul kv import @consul-export.json
Imported: ms-authn/CLIENT_CREDENTIALS_STORE
Imported: ms-authn/CLIENT_SECRET_SECURITY_SCHEME
Imported: ms-authn/INFO_ACCOUNT_PASSWORD
Imported: ms-authn/ISSUER_JWK_STORE
Imported: ms-authn/METRICS_ACCOUNT_PASSWORD
Imported: ms-authn/TOKEN_AUDIENCE
Imported: ms-authn/TOKEN_DEFAULT_EXPIRATION_SECONDS
Imported: ms-authn/TOKEN_ISSUER
Imported: ms-authn/TOKEN_SIGNATURE_JWK_BASE64
Imported: ms-authn/TOKEN_SUPPORTED_ADDITIONAL_CLAIMS
Imported: token-exchange/
Imported: token-exchange/EXCHANGE_JWT_INTROSPECT_URL
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_ID
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_URL
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SCOPE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SET_ID
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_URL
Imported: token-exchange/INFO_ACCOUNT_PASSWORD
Imported: token-exchange/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-exchange/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-exchange/ISSUER_JWK_OPENID_URL
Imported: token-exchange/ISSUER_JWK_STORE
Imported: token-exchange/METRICS_ACCOUNT_PASSWORD
Imported: token-exchange/TOKEN_EXCHANGE_POLICIES
Imported: token-exchange/TOKEN_EXCHANGE_REQUIRED_SCOPE
Imported: token-exchange/TOKEN_ISSUER
Imported: token-exchange/TOKEN_SIGNATURE_JWK_BASE64
Imported: token-validation/
Imported: token-validation/INFO_ACCOUNT_PASSWORD
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_ID
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_SECRET
Imported: token-validation/INTROSPECTION_OPENAM_ID_TOKEN_INFO_URL
Imported: token-validation/INTROSPECTION_OPENAM_SESSION_URL
Imported: token-validation/INTROSPECTION_OPENAM_URL
Imported: token-validation/INTROSPECTION_REQUIRED_SCOPE
Imported: token-validation/INTROSPECTION_SERVICES
Imported: token-validation/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-validation/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-validation/ISSUER_JWK_STORE
Imported: token-validation/METRICS_ACCOUNT_PASSWORD

The oc new-app command used to create a deployment:
$oc new-app forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

This creates a new deployment for the authn microservice, and as described in detail in this blog, envconsul is able to reach the Consul server, extract configuration and set up the environment required by the auth microservice.

 

Commands for the token validation microservice are shown here:
$oc new-app forgerock-docker-public.bintray.io/microservice/token-validation:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

$oc expose svc/token-validation
route "token-validation" exposed

The token exchange microservice is set up as follows:

$oc new-app forgerock-docker-public.bintray.io/microservice/token-exchange:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io

$oc expose svc/token-exchange
route "token-exchange" exposed

Image Streams

OpenShift has a neat CI/CD trigger for Docker images built in, to help you manage your pipeline. Whenever the image is updated in the remote public or private repository, the deployment is restarted.

Details about the token exchange microservice image stream are shown here:
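The same details can also be inspected from the CLI; a hedged example:

$oc describe imagestream token-exchange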

More elaborate Jenkins based pipelines can be built and will be demonstrated in a later blog!

Test

Set up the env variables in Postman like so:

I have several tests in this GitHub repo. An example of getting a new access token using client credentials from the authn microservice is shown here:

 

This was an introduction to using OpenShift with our microservices, and there is much more to come!

ForgeRock Identity Live Berlin

The second show of the ForgeRock worldwide tour of Identity Live events took place last week in the beautiful city of Berlin.

My colleagues from the Marketing team have already put together a summary of the event, with a highlight video and links to slides and videos of the sessions.

And my photo album of the event is also visible online here:

ForgeRock Identity Live Berlin 2018

See you at the next Identity Live in Sydney or in Singapore in August.

Monitoring the ForgeRock Identity Platform 6.0 using Prometheus and Grafana


All products within the ForgeRock Identity Platform 6.0 release include native support for monitoring product metrics using Prometheus and visualising this information using Grafana.  To illustrate how these product metrics can be used, alongside each product’s binary download on Backstage, you can also find a zip file containing Monitoring Dashboard Samples.  Each download includes a README file which explains how to set up these dashboards.  In this post, we’ll review the AM 6.0.0 Overview dashboard included within the AM Monitoring Dashboard Samples.

As you might expect from the name, the AM 6.0.0 Overview dashboard is a high level dashboard which gives a cluster-wide summary of the load and average response time for key areas of AM.  For illustration purposes, I’ve used Prometheus to monitor three AM instances running on my laptop and exported an interactive snapshot of the AM 6.0.0 Overview dashboard.  The screenshots which follow have all been taken from that snapshot.

Variables

At the top of the dashboard, there are two dropdowns:

AM 6.0.0 Overview dashboard variables

 

  • Instance – Use this to select which AM instances within your cluster you’d like to monitor.  The instances shown within this dropdown are automatically derived from the metric data you have stored within Prometheus.
  • Aggregation Window – Use this to select a period of time over which you’d like to take a count of events shown within the dashboard. For example, how many sessions were started in the last minute, hour, day, etc.

Note. You’ll need to be up and running with your own Grafana instance in order to use these dropdowns as they can’t be updated within the interactive snapshot.

Authentications

AM 6.0.0 Overview dashboard Authentications section

The top row of the Authentications section shows a count of authentications which have started, succeeded, failed, or timed-out across all AM instances selected in the “Instance” variable drop-down over the selected “Aggregation Window”.  The screenshot above shows that a total of 95 authentications were started over the past minute across all three of my AM instances.

The second row of the Authentications section shows the per second rate of authentications starting, succeeding, failing, or timing-out by AM instance.  This set of line graphs can be used to see how behaviour is changing over time and if any AM instance is taking more or less of the load.

All of the presented Authentications metrics are of the summary type.

Sessions

AM 6.0.0 Overview dashboard Sessions section

As with the Authentications section, the top row of the Sessions section shows cluster-wide aggregations while the second row contains a set of line graphs.

In the Sessions section, the metrics being presented are session creation, session termination, and average session lifetime.  Unlike the other metrics presented in the Authentications and Sessions sections, the session lifetime metric is of the timer type.

In both panels, Prometheus is calculating the average session lifetime by dividing am_session_lifetime_seconds_total by am_session_lifetime_count.  Because this calculation happens within Prometheus rather than within each product instance, we have control over the period of time over which the average is calculated, we can choose to include or exclude certain session types by filtering on the session_type tag values, and we can calculate the cluster-wide average.
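As a sketch, a cluster-wide average over a one-minute window could be expressed with a PromQL query along these lines (adjust the window, and add any session_type filtering you need):

sum(increase(am_session_lifetime_seconds_total[1m]))
  /
sum(increase(am_session_lifetime_count[1m]))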

When working with any timer metric, it’s also possible to present the 50th, 75th, 95th, 98th, 99th, or 99.9th percentiles.  These are calculated from within the monitored product using an exponential decay so that they are representative of roughly the last five minutes of data [1].  Because percentiles are calculated from within the product, this does mean that it’s not possible to combine the results of multiple similar metrics or to aggregate across the whole cluster.

CTS

AM 6.0.0 Overview dashboard CTS section

The CTS is AM’s storage service for tokens and other data objects.  When a token needs to be operated upon, AM describes the desired operation as a Task object and enqueues this to be handled by an available worker thread when a connection to DS is available.

In the CTS section, we can observe…

  • Task Throughput – how many CTS operations are being requested
  • Tasks Waiting in Queues – how many operations are waiting for an available worker thread and connection to DS
  • Task Queueing Time – the time each operation spends waiting for a worker thread and connection
  • Task Service Time – the time spent performing the requested operation against DS

If you’d like to dig deeper into the CTS behaviour, you can use the AM 6.0.0 CTS dashboard to observe any of the recorded percentiles by operation type and token type.  You can also use the AM 6.0.0 CTS Token Reaper dashboard to monitor AM’s removal of expired CTS tokens.  Both of these dashboards are also included in the Monitoring Dashboard Samples zip file.

OAuth2

AM 6.0.0 Overview dashboard OAuth2 section

In the OAuth2 section, we can monitor OAuth2 grants completed or revoked, and tokens issued or revoked.  As OAuth2 refresh tokens can be long-lived and require state to be stored in CTS (to allow the resource owner to revoke consent), tracking grant issuance vs. revocation can be helpful to aid with CTS capacity planning.

The dashboard currently shows a line per grant type.  You may prefer to use Prometheus’ sum function to show the count of all OAuth2 grants.  You can see examples of the sum function used in the Authentications section.

Note. You may also prefer to filter the grant_type tag to exclude refresh.

Policy / Authorization

AM 6.0.0 Overview dashboard Policy / Authorizations section

The Policy section shows a count of policy evaluations requested and the average time it took to perform these evaluations.  As you can see from the screenshots, policy evaluation completed in next to no time on my AM instances as I have no policies defined.

JVM

AM 6.0.0 Overview dashboard JVM section

The JVM section shows how JVM metrics forwarded by AM to Prometheus can be used to track memory usage and garbage collections.  As can be seen in the screenshot, the JVM section is repeated per AM instance.  This is a nice feature of Grafana.

Note. The set of garbage collection metrics available is dependent on the selected GC algorithm.

Passing thoughts

While I’ve focused heavily on the AM dashboards within this post, all products across the ForgeRock Identity Platform 6.0 release include the same support for Prometheus and Grafana, and you can find sample dashboards for each.  I encourage you to take a look at the sample dashboards for DS, IG and IDM.

References

  1. DropWizard Metrics Exponentially Decaying Reservoirs, https://metrics.dropwizard.io/4.0.0/manual/core.html

This blog post was first published @ https://xennial-bytes.blogspot.com/, included here with permission from the author.

Cyber Security Skills in 2018

Last week I passed the EC-Council Certified Ethical Hacker exam.  Yay to me.  I am a professional penetration tester right?  Negatory.  I sat the exam more as an exercise to see if I “still had it”.  A boxer returning to the ring.  It is over 10 years since I passed my CISSP.  The 6-hour multi-choice horror of an exam, that was still being conducted using pencil and paper down at the Royal Holloway University.  In honesty, that was a great general information security benchmark and allowed you to go in multiple different directions as an "infosec pro".  So back to the CEH…

There are now a fair few information security related career paths in 2018.  The basic split tends to be something like:

  • Managerial  - I don’t always mean managing people, more risk management, compliance management and auditing
  • Technical - here I guess I focus upon penetration testing, cryptography or secure software engineering
  • Operational - thinking this is more security operation centres, log analysis, threat intelligence and the like

So the CEH would fit as an intro-to-intermediate level qualification within the technical sphere.  Is it a useful qualification to have?  Let me come back to that question, by framing it a little.

There is the constant hum that in both the US and UK, there is a massive cyber and information security personnel shortage, in both the public and private sectors.  This I agree with, but it also needs some additional framing and qualification.  Which areas, what jobs, what skill levels are missing or in short supply?  As the cyber security sector has reached a decent level of maturity with regard to job roles and, more importantly, job definitions, we can start to work backwards in understanding how to fulfil demand.

I often hear conversations around cyber education, which go down the route of delivering cyber security curriculum to the under-sixteens or even under-11 age groups.  Whilst this is incredibly important for general Internet safety, I’m not sure it helps the longer term cyber skills supply problem.  If we look at the omnipresent shortage of medical doctors, we don’t start medical school earlier.  We teach the first principles earlier: maths, biology and chemistry for example.  With those foundations in place, specialism becomes much easier at say eighteen, and again at 21 or 22 when specialist doctor training starts.

Shouldn’t we just apply the same approach to cyber?  A good grounding in mathematics, computing and networking would then provide a strong foundation to build upon, before focusing on cryptography or penetration testing.

The CEH exam (and this isn’t a specific criticism of the EC Council, simply recent experience) doesn’t necessarily provide you with the skills to become a hacker.  I spent 5 months self-studying for the exam.  A few hours here and there whilst holding down a full time job with regular travel.  Aka not a lot of time.  The reason I probably passed the exam was mainly due to a broad 17-year history in networking, security and access management.  I certainly learned a load of stuff.  Mainly tooling and process, but not necessarily first-principles skills.

Most qualifications are great.  They certainly give the candidate career bounce and credibility and any opportunity to study is a good one.  I do think cyber security training is at a real inflection point though.

Clearly most large organisations are desperately building out teams to protect and react to security incidents.  Be it for compliance reasons, or to build end user trust, but we as an industry need to look at a longer term and sustainable way to develop, nurture and feed talent.  Going back to basics seems a good step forward.


ForgeRock Identity Live Austin

The season for the ForgeRock Identity Live events opened earlier in May with the first of a series of 6 worldwide events in 2018, the Identity Live Austin.

With the largest audience since we started these events, this was an absolutely great event with, as usual, passionate and in-depth discussions with customers and partners.

You can find highlights, session videos and selected decks on the event website.

And here is my summary of the 2 days conference in pictures.

The next event will take place in Europe, in Berlin on June 12-13. There is still time to register, and you can look at the whole agenda of the summits to find one closer to your home. I’m looking forward to meeting you there.

ForgeRock Identity Platform Version 6: Integrating IDM, AM, and DS

For the ForgeRock Identity Platform version 6, integration between our products is easier than ever. In this blog, I’ll show you how to integrate ForgeRock Identity Management (IDM), ForgeRock Access Management (AM), and ForgeRock Directory Services (DS). With integration, you can configure aspects of privacy, consent, trusted devices, and more. This configuration sets up IDM as an OpenID Connect / OAuth 2.0 client of AM, using DS as a common user datastore.

Setting up integration can be a challenge, as it requires you to configure (and read documentation from) three different ForgeRock products. This blog will help you set up that integration. For additional features, refer to the following chapters from IDM documentation: Integrating IDM with the ForgeRock Identity Platform and Configuring User Self-Service.

While you can find most of the steps in the IDM 6 Samples Guide, this blog collects the information you need to set up integration in one place.

This blog post will guide you through the process. (Here’s the link to the companion blog post for ForgeRock Identity Platform version 5.5.)

Preparing Your System

For the purpose of this blog, I’ve configured all three systems in a single Ubuntu 16.04 VM (8 GB RAM / 40GB HD / 2 CPU).

Install Java 8 on your system. I’ve installed the Ubuntu 16.04-native openjdk-8 packages. In some cases, you may have to include export JAVA_HOME=/usr in your ~/.bashrc or ~/.bash_profile files.

Create separate home directories for each product. For the purpose of this blog, I’m using:

  • /home/idm
  • /home/am
  • /home/ds

Install Tomcat 8 as the web container for AM. For the purpose of this blog, I’ve downloaded Tomcat 8.5.30, and have unpacked it. Activate the executable bit in the bin/ subdirectory:

$ cd apache-tomcat-8.5.30/bin
$ chmod u+x *.sh

As AM requires fully qualified domain names (FQDNs), I’ve set up an /etc/hosts file with FQDNs for all three systems, with the following line:

192.168.0.1 am.example.com ds.example.com idm.example.com

(Substitute your IP address as appropriate. You may set up AM, DS, and IDM on different systems.)

If you set up AM and IDM on the same system, make sure they’re configured to connect on different ports. Both products configure default connections on ports 8080 and 8443.

Download AM, IDM, and DS versions 6 from backstage.forgerock.com. For organizational purposes, set them up on their own home directories:

 

Product   Download        Home Directory
DS        DS-6.0.0.zip    /home/ds
AM        AM-6.0.0.war    /home/am
IDM       IDM-6.0.0.zip   /home/idm

 

Unpack the zip files. For convenience, copy the Example.ldif file from /home/idm/openidm/samples/full-stack/data to the /home/ds directory.
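For example:

$ cp /home/idm/openidm/samples/full-stack/data/Example.ldif /home/ds/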

For the purpose of this blog, I’ve downloaded Tomcat 8.5.30 to the /home/am directory.

Configuring ForgeRock Directory Services (DS)

To install DS, navigate to the directory where you unpacked the binary, in this case, /home/ds/opendj. In that directory, you’ll find a setup script. The following command uses that script to start DS as a directory server, with a root DN of “cn=Directory Manager”, with a host name of ds.example.com, port 1389 for LDAP communication, and 4444 for administrative connections:

$ ./setup \
  directory-server \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword password \
  --hostname ds.example.com \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --baseDN dc=com \
  --ldifFile /home/ds/Example.ldif \
  --acceptLicense

Earlier in this blog, you copied the Example.ldif file to /home/ds. Substitute if needed. Once the setup script returns you to the command line, DS is ready for integration.
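To verify that the sample entries loaded, you can run a quick search (a sketch; bjensen is one of the users in Example.ldif):

$ /home/ds/opendj/bin/ldapsearch --hostname ds.example.com --port 1389 \
    --bindDN "cn=Directory Manager" --bindPassword password \
    --baseDN dc=example,dc=com "(uid=bjensen)" uid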

Installing ForgeRock Access Management (AM)

Use the configured external DS server as a common user store for AM and IDM. For an extended explanation, see the following documentation: Integrating IDM with the ForgeRock Identity Platform. To install AM, use the following steps:

  1. Set up Tomcat for AM. For this blog, I used Tomcat 8.5.30, downloaded from http://tomcat.apache.org/. 
  2. Unzip Tomcat in the /home/am directory.
  3. Make the files in the apache-tomcat-8.5.30/bin directory executable.
  4. Copy the AM-6.0.0.war file from the /home/am directory to apache-tomcat-8.5.30/webapps/openam.war.
  5. Start the Tomcat web container with the startup.sh script in the apache-tomcat-8.5.30/bin directory. This action should unpack the openam.war binary to the apache-tomcat-8.5.30/webapps/openam directory.
  6. Shut down Tomcat, with the shutdown.sh script in the same directory. Make sure the Tomcat process has stopped.
  7. Open the web.xml file in the following directory: apache-tomcat-8.5.30/webapps/openam/WEB-INF/. For an explanation, see the AM 6 Release Notes. Include the following code blocks in that file to support cross-origin resource sharing:
<filter>
      <filter-name>CORSFilter</filter-name>
      <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
      <init-param>
           <param-name>cors.allowed.headers</param-name>
           <param-value>Content-Type,X-OpenIDM-OAuth-Login,X-OpenIDM-DataStoreToken,X-Requested-With,Cache-Control,Accept-Language,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers,X-OpenAM-Username,X-OpenAM-Password,iPlanetDirectoryPro</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.methods</param-name>
           <param-value>GET,POST,HEAD,OPTIONS,PUT,DELETE</param-value>
      </init-param>
      <init-param>
           <param-name>cors.allowed.origins</param-name>
           <param-value>http://am.example.com:8080,https://idm.example.com:9080</param-value>
      </init-param>
      <init-param>
           <param-name>cors.exposed.headers</param-name>
           <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Set-Cookie</param-value>
      </init-param>
      <init-param>
           <param-name>cors.preflight.maxage</param-name>
           <param-value>10</param-value>
      </init-param>
      <init-param>
           <param-name>cors.support.credentials</param-name>
           <param-value>true</param-value>
      </init-param>
 </filter>

 <filter-mapping>
      <filter-name>CORSFilter</filter-name>
      <url-pattern>/json/*</url-pattern>
 </filter-mapping>

Important: Substitute the actual URL and ports for your AM and IDM deployments, where you see http://am.example.com:8080 and http://idm.example.com:9080. (I forgot to make these changes once and couldn’t figure out the problem for a couple of days.)

Configuring AM

  1. If you’ve configured AM on this system before, delete the /home/am/openam directory.
  2. Restart Tomcat with the startup.sh script in the aforementioned apache-tomcat-8.5.30/bin directory.
  3. Navigate to the URL for your AM deployment. In this case, call it http://am.example.com:8080/openam. You’ll create a “Custom Configuration” for OpenAM, and accept the defaults for most cases.
    • When setting up Configuration Data Store Settings, for consistency, use the same root suffix in the Configuration Data Store, i.e. dc=example,dc=com.
    • When setting up User Data Store settings, make sure the entries match what you used when you installed DS. The following table is based on that installation:

      Option Setting
      Directory Name ds.example.com
      Port 1389
      Root Suffix dc=example,dc=com
      Login ID cn=Directory Manager
      Password password

       

  4. When the installation process is complete, you’ll be prompted with a login screen. Log in as the amadmin administrative user with the password you set up during the configuration process. With the following action, you’ll set up an OpenID Connect/OAuth 2.0 service that you’ll configure shortly for a connection to IDM.
    • Select Top-Level Realm -> Configure OAuth Provider -> Configure OpenID Connect -> Create -> OK. This sets up AM as an OIDC authorization server.
  5. Set up IDM as an OAuth 2.0 Client:
    • Select Applications -> OAuth 2.0. Choose Add Client. In the New OAuth 2.0 Client window that appears, set openidm as a Client ID, set changeme as a Client Secret, along with a Redirection URI of http://idm.example.com:9080/oauthReturn/. The scope is openid, which reflects the use of the OpenID Connect standard.
    • Select Create, go to the Advanced Tab, and scroll down. Activate the Implied Consent option.
    • Press Save Changes.
  6. Go to the OpenID Connect tab, and enter the following information in the Post Logout Redirect URIs text box:
    • http://idm.example.com:9080/
    • http://idm.example.com:9080/admin/
    • Press Save Changes.
  7. Select Services -> OAuth2 Provider -> Advanced OpenID Connect:
    • Scroll down and enter openidm in the “Authorized OIDC SSO Clients” text box.
    • Press Save Changes.
  8. Navigate to the Consent tab:
    • Enable the Allow Clients to Skip Consent option.
    • Press Save Changes.

AM is now ready for integration.

 

Configuring IDM

Now you’re ready to configure IDM, using the following steps:

  1. For the purpose of this blog, use the following project subdirectory: /home/idm/openidm/samples/full-stack.
  2. If you haven’t modified the deployment port for AM, modify the port for IDM. To do so, edit the boot.properties file in the openidm/resolver/ subdirectory, and change the port property appropriate for your deployment (openidm.port.http or openidm.port.https). For this blog, I’ve changed the openidm.port.http line to:
    • openidm.port.http=9080
  3. (NEW) You’ll also need to change the openidm.host. By default, it’s set to localhost. For this blog, set it to:
    • openidm.host=idm.example.com
  4. Start IDM using the full-stack project directory:
    • $ cd openidm
    • $ ./startup.sh -p samples/full-stack
      • (If you’re running IDM in a VM, the following command starts IDM and keeps it going after you log out of the system:
        nohup ./startup.sh -p samples/full-stack/ > logs/console.out 2>&1& )
    • As IDM includes pre-configured options for the ForgeRock Identity Platform in the full-stack subdirectory, IDM documentation on the subject frequently refers to the platform as the “Full Stack”.
  5. In a browser, navigate to http://idm.example.com:9080/admin/.
  6. Log in as an IDM administrator:
    • Username: openidm-admin
    • Password: openidm-admin
  7. Reconcile users from the common DS user store to IDM. Select Configure > Mappings. In the page that appears, find the mapping from System/Ldap/Account to Managed/User, and press Reconcile. That will populate the IDM Managed User store with users from the common DS user store.
  8. Select Configure -> Authentication. Choose the ForgeRock Identity Provider option. In the window that appears, scroll down to the configuration details. Based on the instance of AM configured earlier, you’d change:
    Property Entry
    Well-Known Endpoint http://am.example.com:8080/openam/oauth2/.well-known/openid-configuration
    Client ID Matching entry from Step 5 of Configuring AM (openidm)
    Client Secret Matching entry from Step 5 of Configuring AM (changeme)
  9. When you’ve made appropriate changes, press Submit. (You won’t be able to press submit until you’ve entered a valid Well-Known Endpoint.)
    • You’re prompted with the following message:
      • Your current session may be invalid. Click here to logout and re-authenticate.
  10. When you tap on the ‘Click here’ link, you should be taken to http://am.example.com:8080/openam/<some long extension>. Log in with AM administrative credentials:
    • Username: amadmin
    • Password: <what you configured during the AM installation process>

If you see the IDM Admin UI after logging in, congratulations! You now have a working integration between AM, IDM, and DS.

Note: To ensure a rapid response when the AM session expires, the IDM JWT_SESSION timeout has been reduced to 5 seconds. For more information, see the following section of the IDM ForgeRock Identity Platform sample: Changes to Session and Authentication Modules.

 

Building On The ForgeRock Identity Platform

Once you’ve integrated AM, IDM, and DS, you can configure aspects of privacy, consent, trusted devices, and more, as noted at the start of this post.

To visualize how this works, review the authorization flow diagram in the following section of the IDM ForgeRock Identity Platform sample: Authorization Flow.

 

Troubleshooting

If you run into errors, review the following table:

 

  • Error: redirect_uri_mismatch
    Solution: Check for typos in the OAuth 2.0 client Redirection URI.

  • Error: This application is requesting access to your account.
    Solution: Enable “Implied Consent” in the OAuth 2.0 client. Enable “Allow Clients to Skip Consent” in the OAuth2 Provider.

  • Error (upon logout): The redirection URI provided does not match a pre-registered value.
    Solution: Check for typos in the OAuth 2.0 client Post Logout Redirect URIs.

  • Error: Unable to login using authentication provider, with a redirect to preventAutoLogin=true.
    Solution: Check for typos in the Authorized OIDC SSO Clients list, in the OAuth2 Provider. Make sure the Client ID and Client Secret in IDM match those configured for the AM OAuth 2.0 Application Client.

  • Error: Unknown error: Please contact the administrator. (In the dev console, you might see: Origin ‘http://idm.example.com:9080’ is therefore not allowed access.)
    Solution: Check for typos in the URLs in your web.xml file.

  • Error: The IDM Self-Service UI does not appear, but you can connect to the IDM Admin UI.
    Solution: Check for typos in the URLs in your web.xml file.

 

If you see other errors, the problem is likely beyond the scope of this blog.