ForgeRock Directory Services 6.5 is Available

The ForgeRock Identity Platform was released and publicly announced in early December this year.

As you may guess from the announcement, an important part of the new features has to do with DevOps, running in Docker, automated with Kubernetes.

The underlying datastore for the ForgeRock Identity Platform is ForgeRock Directory Services, and the new 6.5 release comes with a set of new features and improvements that are detailed in the Release Notes. Here are some highlights:

Ease of use has always been important for us, and DS 6.5 brings it to a new level for customers that are deploying other ForgeRock products. Starting with this version, you can select one or more profiles at installation time. A profile contains the complete configuration for a specific use: base DN, backend, indexes, schema, specific configuration parameters, administrative users, ACIs and privileges. Out of the box, we deliver three profiles for ForgeRock Access Management (Identity Store, Configuration Store and Core Token Service Store), one profile for ForgeRock Identity Management (Managed Object Store), and one profile for Directory Services evaluation, which contains the data and configuration used throughout our documentation and lets you copy, paste and replay the command examples from the guides against a running server.

To learn more about profiles, get DS 6.5, and run

setup --help-profiles

To learn about a specific profile, you can run

setup --help-profile am-cts:6.5.0
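
To put a profile to use, pass it to setup at installation time. Here is a minimal sketch of a non-interactive install using the AM CTS profile; the host name, ports and passwords are placeholders, and option names can vary between versions, so check setup --help on your own build:

# A sketch of a non-interactive install using the am-cts profile.
# Host, ports and passwords below are placeholders.
./setup \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword password \
  --hostname ds.example.com \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --profile am-cts:6.5.0 \
  --acceptLicense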

With regards to DevOps, containers and automation in the cloud, we’ve continued the efforts that we had started with previous releases.

  • DS 6.5 now supports running post-upgrade tasks on the data, such as rebuilding indexes.
  • The server has two new HTTP endpoints to check its status. /isReady indicates that the server is up and running. /isHealthy indicates whether its current state is optimal, or whether there are temporary limitations, such as a database backend being offline for maintenance or replication lagging too much (with “too much” being fully configurable). See the example requests after this list.
  • The Grafana sample dashboard has been updated.
  • Like all ForgeRock Identity Platform products, DS comes with a Common Audit handler that publishes log messages to stdout, a common practice when working with Docker containers.
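
As a quick sketch of how a load balancer or orchestrator might probe these endpoints, assuming the HTTP connection handler is enabled and listening on port 8080 (adjust the host and port to your deployment):

curl --silent --fail http://ds.example.com:8080/isReady
curl --silent --fail http://ds.example.com:8080/isHealthy
# With --fail, curl returns a non-zero exit code when the server reports a problem,
# so the instance can be taken out of rotation.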

Directory Proxy Server 6.5 now supports “sharding”, i.e. distributing data across multiple discrete replicated directory services. Such deployments make very large amounts of data easier to manage and give better write scalability. In this version the number of “shards” is fixed, but we are working on letting the service scale dynamically as the data grows in future versions.

Directory Services 6.5 now supports limiting the number of connections that can be opened from a single client application. By IP address, a client may be denied, fully allowed or restricted in its number of open connections, offering greater protection against misbehaving applications.

The product now also supports the LDAP Relax Rules Control, which allows an administrator to add or modify attributes that are normally read-only. This feature can be used when synchronising data between different LDAP products, so that entries keep the same timestamps for their creation and modification dates.
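
Here is a sketch of what that can look like with ldapmodify, using the Relax Rules control OID from the draft (1.3.6.1.4.1.4203.666.5.12); the host, port, credentials and entry are placeholders, and the bind account needs the appropriate privileges:

# A sketch: set a normally read-only operational attribute with the Relax Rules control.
ldapmodify --hostname ds.example.com --port 1389 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  -J 1.3.6.1.4.1.4203.666.5.12:true << EOF
dn: uid=bjensen,ou=People,dc=example,dc=com
changetype: modify
replace: createTimestamp
createTimestamp: 20180101120000Z
EOF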

We’ve made the “cn=Changelog” suffix and its data available on servers that act only as replication hubs (RS), since they already persist all the changes in order to replicate them.

We’ve added a couple of troubleshooting tools with the release. One tool, changelogstat, lists and dumps the content of the replication changelog databases. The supportextract tool allows an administrator to capture the state and logs of a Directory Services instance and quickly make the resulting file available to ForgeRock support.

Java 11 is now fully supported, with both the Oracle JVM and OpenJDK builds (from Oracle, Red Hat or Azul Systems).

Finally, as with all releases of Directory Services, we have enhanced the performance and the reliability of the server in many areas. But most importantly, we have fully tested that you can upgrade to 6.5 without any service interruption: starting from any version from 2.6 to 6.0, you can upgrade one instance and let it replicate with the other instances, then upgrade the next one, until all instances are on the latest and greatest version. If you use VMs or containers, you can stop an existing instance and replace it with a new one. Or add a new one and then stop an old one… Your choice, but both scenarios are supported.

For further details, read the complete Release Notes. I’m looking forward to your feedback on the features and improvements of the Directory Services 6.5 release!

This blog post was first published @ ludopoitou.com, included here with permission.

Monitoring the ForgeRock Identity Platform 6.0 using Prometheus and Grafana


All products within the ForgeRock Identity Platform 6.0 release include native support for monitoring product metrics using Prometheus and visualising this information using Grafana. To illustrate how these product metrics can be used, alongside each product’s binary download on Backstage you can also find a zip file containing Monitoring Dashboard Samples. Each download includes a README file which explains how to set up these dashboards. In this post, we’ll review the AM 6.0.0 Overview dashboard included within the AM Monitoring Dashboard Samples.

As you might expect from the name, the AM 6.0.0 Overview dashboard is a high level dashboard which gives a cluster-wide summary of the load and average response time for key areas of AM.  For illustration purposes, I’ve used Prometheus to monitor three AM instances running on my laptop and exported an interactive snapshot of the AM 6.0.0 Overview dashboard.  The screenshots which follow have all been taken from that snapshot.

Variables

At the top of the dashboard, there are two dropdowns:

AM 6.0.0 Overview dashboard variables

 

  • Instance – Use this to select which AM instances within your cluster you’d like to monitor.  The instances shown within this dropdown are automatically derived from the metric data you have stored within Prometheus.
  • Aggregation Window – Use this to select a period of time over which you’d like to take a count of events shown within the dashboard. For example, how many sessions were started in the last minute, hour, day, etc.

Note. You’ll need to be up and running with your own Grafana instance in order to use these dropdowns as they can’t be updated within the interactive snapshot.

Authentications

AM 6.0.0 Overview dashboard Authentications section

The top row of the Authentications section shows a count of authentications which have started, succeeded, failed, or timed-out across all AM instances selected in the “Instance” variable drop-down over the selected “Aggregation Window”.  The screenshot above shows that a total of 95 authentications were started over the past minute across all three of my AM instances.

The second row of the Authentications section shows the per second rate of authentications starting, succeeding, failing, or timing-out by AM instance.  This set of line graphs can be used to see how behaviour is changing over time and if any AM instance is taking more or less of the load.

All of the presented Authentications metrics are of the summary type.

Sessions

AM 6.0.0 Overview dashboard Sessions section

As with the Authentications section, the top row of the Sessions section shows cluster-wide aggregations while the second row contains a set of line graphs.

In the Sessions section, the metrics being presented are session creation, session termination, and average session lifetime.  Unlike the other metrics presented in the Authentications and Sessions sections, the session lifetime metric is of the timer type.

In both panels, Prometheus is calculating the average session lifetime by dividing am_session_lifetime_seconds_total by am_session_lifetime_count. Because this calculation happens within Prometheus rather than within each product instance, we have control over the period of time over which the average is calculated, we can choose to include or exclude certain session types by filtering on the session_type tag values, and we can calculate the cluster-wide average.
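
If you want to experiment with this kind of expression outside Grafana, you can send it straight to the Prometheus HTTP API. The following is only a sketch: the Prometheus host and port and the five-minute range are placeholders, and you may want to add a matcher on the instance or session_type labels.

# A sketch: cluster-wide average session lifetime over the last five minutes.
curl -G 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(am_session_lifetime_seconds_total[5m])) / sum(rate(am_session_lifetime_count[5m]))'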

When working with any timer metric, it’s also possible to present the 50th, 75th, 95th, 98th, 99th, or 99.9th percentiles.  These are calculated from within the monitored product using an exponential decay so that they are representative of roughly the last five minutes of data [1].  Because percentiles are calculated from within the product, this does mean that it’s not possible to combine the results of multiple similar metrics or to aggregate across the whole cluster.

CTS

AM 6.0.0 Overview dashboard CTS section

The CTS is AM’s storage service for tokens and other data objects.  When a token needs to be operated upon, AM describes the desired operation as a Task object and enqueues this to be handled by an available worker thread when a connection to DS is available.

In the CTS section, we can observe…

  • Task Throughput – how many CTS operations are being requested
  • Tasks Waiting in Queues – how many operations are waiting for an available worker thread and connection to DS
  • Task Queueing Time – the time each operation spends waiting for a worker thread and connection
  • Task Service Time – the time spent performing the requested operation against DS

If you’d like to dig deeper into the CTS behaviour, you can use the AM 6.0.0 CTS dashboard to observe any of the recorded percentiles by operation type and token type.  You can also use the AM 6.0.0 CTS Token Reaper dashboard to monitor AM’s removal of expired CTS tokens.  Both of these dashboards are also included in the Monitoring Dashboard Samples zip file.

OAuth2

AM 6.0.0 Overview dashboard OAuth2 section

In the OAuth2 section, we can monitor OAuth2 grants completed or revoked, and tokens issued or revoked.  As OAuth2 refresh tokens can be long-lived and require state to be stored in CTS (to allow the resource owner to revoke consent) tracking grant issuance vs revocation can be helpful to aid with CTS capacity planning.

The dashboard currently shows a line per grant type.  You may prefer to use Prometheus’ sum function to show the count of all OAuth2 grants.  You can see examples of the sum function used in the Authentications section.

Note. You may also prefer to filter the grant_type tag to exclude refresh, as in the sketch below.
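
A query along the following lines would plot one cluster-wide series and leave refresh grants out. It is only a sketch: the metric name am_oauth2_grant_total is illustrative rather than the documented name, so check the metric names your AM instances actually expose to Prometheus, and adjust the host and range as needed.

# Illustrative only: substitute the real OAuth2 grant metric name from your deployment.
curl -G 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(am_oauth2_grant_total{grant_type!="refresh"}[5m]))'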

Policy / Authorization

AM 6.0.0 Overview dashboard Policy / Authorizations section

The Policy section shows a count of policy evaluations requested and the average time it took to perform these evaluations.  As you can see from the screenshots, policy evaluation completed in next to no time on my AM instances as I have no policies defined.

JVM

AM 6.0.0 Overview dashboard JVM section

The JVM section shows how JVM metrics forwarded by AM to Prometheus can be used to track memory usage and garbage collections.  As can be seen in the screenshot, the JVM section is repeated per AM instance.  This is a nice feature of Grafana.

Note. The set of garbage collection metrics available depends on the selected GC algorithm.

Passing thoughts

While I’ve focused heavily on the AM dashboards within this post, all products across the ForgeRock Identity Platform 6.0 release include the same support for Prometheus and Grafana, and you can find sample dashboards for each. I encourage you to take a look at the sample dashboards for DS, IG and IDM.

References

  1. DropWizard Metrics Exponentially Decaying Reservoirs, https://metrics.dropwizard.io/4.0.0/manual/core.html

This blog post was first published @ https://xennial-bytes.blogspot.com/, included here with permission from the author.

Runtime configuration for ForgeRock Identity Microservices using Consul

The ForgeRock Identity Microservices are able to read all runtime configuration from environment variables. These could be statically provided as hard-coded key-value pairs in a Kubernetes manifest or shell script, but a better solution is to manage the configuration in a centralized configuration store such as HashiCorp Consul.

Managing configuration outside the code has become routine practice for modern service providers such as Google, Azure and AWS, and for businesses that use both cloud and on-premise software.

In this post, I shall walk you through how Consul can be used to set configuration values, and how envconsul can be used to read these key-values from different namespaces and supply them as environment variables to a subprocess. Envconsul can even be used to trigger automatic restarts on configuration changes, without any modifications needed to the ForgeRock Identity Microservices.

Perhaps even more significantly, this post shows how it is possible to use Docker to layer on top of a base microservice docker image from the ForgeRock Bintray repo and then use a pre-built go binary for envconsul to spawn the microservice as a subprocess.

This github.com repo has all the docker and kubernetes artifacts needed to run this integration. I used a single cluster minikube setup.

Consul

Starting a Pod in Minikube

We must start with setting up Consul as a docker container, which is just a simple docker pull consul followed by a kubectl create -f kube-consul.yaml that sets up a pod running Consul in our minikube environment. Consul creates a volume called /consul/data onto which you can store configuration key-value pairs in JSON format; these can be imported directly using consul kv import @file.json, and consul kv export > file.json can be used to export all stored configuration, separated into JSON blobs by namespace.

The consul commands can be run from inside the Consul container by starting a shell in it using docker exec -ti <consul-container-name> /bin/sh.

Configuration setup using Key Value pairs

The namespaces for each microservice are defined either manually in the UI or via REST calls such as:
consul kv put my-app/my-key my-value
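
For example, to seed the CLIENT_CREDENTIALS_STORE key used later in this post under the ms-authn namespace, and then list everything stored under that prefix (a quick sketch you can run from the Consul container shell):

consul kv put ms-authn/CLIENT_CREDENTIALS_STORE json
consul kv get -recurse ms-authn/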

 

Expanding the ms-authn namespace yields the following screen:

A sample configuration set for all three microservices is available in the github.com repo for this integration here. Use consul kv import @consul-export.json to import them into your consul server.

Docker Builds

Base Builds

First, I built base images for each of the ForgeRock Identity Microservices without using an ENTRYPOINT instruction. These images are built using the openjdk:8-jre-alpine base image, which is the smallest alpine-based openjdk image one can find. The openjdk7 and openjdk9 images are larger. Alpine itself is a great choice for a minimalistic docker base image and provides packaging capabilities using apk. It is a great choice for a base image, especially when further layers are to be added on top of it, as in this demonstration.

An example for the Authentication microservice is:
FROM openjdk:8-jre-alpine
WORKDIR /opt/authn-microservice
ADD . /opt/authn-microservice
EXPOSE 8080

The command docker build -t authn:BASE-ONLY . builds the base image but does not start the authentication listener, since no CMD or ENTRYPOINT instruction was specified in the Dockerfile.

The graphic below shows the layers the microservices docker build added on top of the openjdk:8-jre-alpine base image:

Next, we need to define a launcher script that uses a pre-built envconsul go binary to setup communication with the Consul server and use the key-value namespace for the authentication microservice.

/usr/bin/envconsul -log-level debug -consul=192.168.99.100:31100 -pristine -prefix=ms-authn /bin/sh -c "$SERVICE_HOME"/bin/docker-entrypoint.sh

A lot just happened there!

We instructed the launcher script to start envconsul with the address of the Consul server and the nodePort defined earlier in kube-consul.yaml. The -pristine switch tells envconsul not to merge extant environment variables with the ones read from the Consul server. The -prefix switch identifies the application namespace we want to retrieve configuration for, and /bin/sh -c spawns a new process for the docker-entrypoint.sh script, which is the ENTRYPOINT instruction of the authentication microservice image tagged 1.0.0-SNAPSHOT. Note that that image is not used in our demonstration; instead we built a BASE-ONLY tagged image and layer the envconsul binary and launcher script (shown next) on top of it. Envconsul also keeps watching the configuration values in Consul, and if a change is detected the subprocess is restarted automatically!

Envconsul Build

With the envconsul.sh script ready as shown above, it is time to build a new image using the base-only tagged microservice image in the FROM instruction as shown below:

FROM forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY
MAINTAINER Javed Shah <javed.shah@forgerock.com>
COPY envconsul /usr/bin/
ADD envconsul.sh /usr/bin/envconsul.sh
RUN chmod +x /usr/bin/envconsul.sh
ENTRYPOINT ["/usr/bin/envconsul.sh"]

This Dockerfile includes the COPY instruction to add our pre-built envconsul go-binary that can run standalone to our new image and also adds the envconsul.sh script shown above to the /usr/bin/ directory. It also sets the +x permission on the script and makes it the ENTRYPOINT for the container.

At this time we used the docker build -t authn:ENVCONSUL . command to tag the new image correctly. This is available at the ForgeRock Bintray docker repo as well.

This graphic shows the layers we just added using our Dockerfile instruction set above to the new ENVCONSUL tagged image:

These two images are available for use as needed by the github.com repo.

Kubernetes Manifest

It is a matter of pulling down the correctly tagged image and specifying one key item, the hostAliases instruction to point to the OpenAM server running on the host. This server could be anywhere, just update the hostAliases accordingly. Manifests for the three microservices are available in the github.com repo.

Starting the pod with kubectl create -f kube-authn-envconsul.yaml causes the following things to happen, as seen in the logs via kubectl logs ms-authn-env.

This shows that envconsul started up with the address of the Consul server. We deliberately used the older -consul flag for more “haptic” feedback from the pod, if you will. Now that envconsul has started up, it will immediately reach out to the Consul server unless a Consul agent was provided in the startup configuration. It will fetch the key-value pairs for the namespace ms-authn and make them available as environment variables to the child process!

The configuration has been received from the server and at this time envconsul is ready to spawn the child process with these key-value pairs available as environment variables:

The Authentication microservice is able to start up and read its configuration from the environment provided to it by envconsul.
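
If you want to double-check that the values really are visible to the microservice, remember that envconsul injects them only into the environment of the child process it spawns, not into the container as a whole. A sketch of one way to inspect that child environment, assuming the busybox pgrep applet is present in the image and the service runs as a java process:

kubectl exec ms-authn-env -- sh -c 'tr "\0" "\n" < /proc/$(pgrep java)/environ | sort'
# The ms-authn keys fetched from Consul should appear here as plain environment variables.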

Change Detection

Let’s change a value in the Consul UI for one of our keys, CLIENT_CREDENTIALS_STORE, from json to ldap:

This change is immediately detected by envconsul in all running pods:

envconsul then wastes no time in restarting the subprocess microservice:

Pretty cool way to maintain a 12-factor app!

Tests

A quick test of the OAuth2 client credentials grant reveals our endeavor has been successful. We receive a bearer access token from the newly started authn-microservice.
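
If you want to reproduce that test, a request along these lines should return a bearer token. Everything in it is a placeholder: the NodePort, the client id and secret, and the /token path are illustrative values rather than fixed properties of the microservice, so check your manifest and the microservice documentation for the real ones.

# All values below are placeholders: port, client credentials, and endpoint path.
curl -s -X POST "http://$(minikube ip):30080/token" \
  -u test-client:password \
  -d 'grant_type=client_credentials'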

The token validation and token exchange microservices were also spun up using the procedures defined above.

Here is an example of validating a stateful access_token generated using the Authorization Code grant flow using OpenAM:

And finally the postman call to the token-exchange service for exchanging a user’s id_token with an access token granting delegation for an “actor” whose id_token is also supplied as input is shown here:

The resulting token is:

This particular use case is described in its own blog.

What’s coming next?

Watch this space for secrets integration with Vault and envconsul!

Faster docs

One of the things you have asked for is to see large documents load faster on the ForgeRock BackStage docs site. We recently switched from publishing HTML documentation through the BackStage single-page app to publishing separate, static HTML with JavaScript to provide BackStage features.

This allows browsers to use progressive rendering, and start laying out the page before everything has been loaded and styled. The result is that large documents feel faster in your browser.

If you have bookmarks to published HTML, notice that we have dropped the per-chapter view of published docs. Each document is now a single HTML page. So instead of a link to /docs/product/version/book/chapter#section, target /docs/product/version/book/#section.

Also notice that we have consolidated documentation sets to make information easier to find, with only one set per major or minor release. Generally this means that you only have to read one set of release notes, no matter what maintenance version you have right now.

The latest docs are the ones for version 5 of the platform.

We still publish all the same docs as before, including docs for software that is beyond the end of its service life. Please check out the updated site. Open issues there for any problems you notice.

This blog post was first published @ marginnotes2.wordpress.com, included here with permission.

What’s New in ForgeRock Access Management 5?

ForgeRock this week released version 5 of the ForgeRock Identity Platform of which ForgeRock Access Management 5 is a major component. So, what’s new in this release?

New Name

The eagle-eyed amongst you may be asking yourselves about that name and version. ForgeRock Access Management is actually the new name for what was previously known as OpenAM. And as all components of the Platform are now at the same version, this becomes AM 5.0 rather than OpenAM 14.0 (though you may still see remnants of the old versioning under the hood).

Cloud Friendly

AM5 is focused on being a great identity platform for Consumer IAM and IoT, and one of the shared characteristics of these markets is high, unpredictable scale. So one of the design goals of AM5 was to become more cloud-friendly, enabling a more elastic architecture. This has resulted in a simpler architectural model where individual AM servers no longer need to know about, or address, other AM servers in a cluster; they can act autonomously. If you need more horsepower, simply spin up new instances of AM.

DevOps Friendly

To assist with the casual “spin up new instances” statement above, AM5 has become more DevOps friendly. Firstly, the configuration APIs for AM are now available over REST, meaning configuration can be done remotely. Secondly, there’s a great new tool called Amster.

Amster is a lightweight command-line tool which can run in interactive shell mode, or be scripted.

A typical Amster script looks like this:

connect http://www.example.com:8080/openam -k /Users/demo/keyfile
import-config --path /Users/demo/am-config
:exit

This example connects to the remote AM5 instance, authenticating using a key, then imports configuration from the filesystem/git repo, before exiting.
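
To run such a script non-interactively, save it to a file and pass it to the amster command; a sketch, with the script path as a placeholder:

./amster /Users/demo/deploy.amster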

Amster is separately downloadable and has its own documentation too.

Developer Friendly

AM5 comes with new interactive documentation via the API Explorer. This is a Swagger-like interface describing all of the AM CREST (Common REST) APIs, and it means it is now easier than ever for devs to understand how to use these APIs. Not only are the Request parameters fully documented along with the Response results, but devs can “Try it out” there and then.
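
To give a flavour of what those CREST calls look like from the command line, here is a sketch of the authentication endpoint, assuming a server at openam.example.com and the stock demo user; adjust the URL, realm and credentials for your deployment:

curl -X POST \
  -H 'X-OpenAM-Username: demo' \
  -H 'X-OpenAM-Password: changeit' \
  -H 'Content-Type: application/json' \
  -H 'Accept-API-Version: resource=2.0, protocol=1.0' \
  'http://openam.example.com:8080/openam/json/authenticate'
# On success, the response contains a tokenId (SSO token) to use in later calls.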

Secure OAuth2 Tokens

OAuth2 is great, and used everywhere. Mobile apps, Web apps, Micro-services and, more and more, in IoT.
But one of the problems with OAuth2 access tokens is that they are bearer tokens. This means that if someone steals one, they can use it to gain access to the services it grants access to.

One way to prevent this is to adopt a new industry-standard approach called “Proof of Possession” (PoP).

With PoP, the client provides something unique to it, which is baked into the token when it is issued by AM. This is usually the public key of the client. The Resource Server, when presented with such a token, can use the confirmation claim/key to challenge the client, knowing that only the true client can successfully answer the challenge.

Splunk Audit Handler

Splunk is one of the cool kids so it makes sense that our pluggable Audit Framework supports a native Splunk handler.

There are a tonne of other improvements to AM5 that we don’t have time to cover, but you can read about some of the others in the Release Notes, or download it from Backstage now and give it a whirl.

This blog post by the Access Management product manager was first published @ thefatblokesings.blogspot.com, included here with permission.

Identity Disorder Podcast, Episode 2

Identity Disorder, Episode 2: It’s a DevOps World, We Just Live In It


In the second episode of Identity Disorder, join Daniel and me as we chat with ForgeRock’s resident DevOps guru Warren Strange. Topics include why DevOps and elastic environments are a bit like herding cattle, how ForgeRock works in a DevOps world, more new features in the mid-year 2016 ForgeRock Identity Platform release, the Pokémon training center next to Daniel’s house, and if Canada might also consider withdrawing from its neighbors.

Episode Links:

Learn more about ForgeRock DevOps and cloud resources: https://wikis.forgerock.org/confluence/display/DC/ForgeRock+DevOps+and+Cloud+Resources

Videos of the new features in the mid-year 2016 ForgeRock Identity Platform release:
https://vimeo.com/album/4053949

Information on the 2016 Sydney Identity Summit and Sydney Identity Unconference (August 9-10, 2016):
https://summits.forgerock.com/sydney/

All upcoming ForgeRock events:
https://www.forgerock.com/about-us/events/

 

OpenIG 4.0 is now available

This blog post was first published @ sauthieg.github.io, included here with permission.

January’s release of the ForgeRock Identity Platform includes OpenIG 4. This release brings new API gateway features, better integration with OpenAM, extended support for standards, and increased performance.

OpenIG 4’s new audit framework now handles audit events in a common way across the whole ForgeRock platform. For example, OpenIG 4 can track interactions across OpenAM, OpenDJ, and OpenIDM. Audit logs can be centralized and transactions can be traced across the platform. Additionally, the audit framework supports logging to files, databases, and the UNIX system log.

Improved monitoring data for the servers, applications, and APIs provides a better view of how OpenIG 4 and its routes are used. Delivered through REST endpoints, data includes request and response statistics, such as the number of requests, time to respond, and throughput.

The new throttling feature limits access to applications and APIs, increasing security and fairness. Throttling can enforce flexible rate limits for a variety of use cases, such as to limit the number of requests per minute from clients at the same network address.

Several new features improve integration with OpenAM:

  • A new policy enforcement filter allows only authorized access to protected resources. You can now use OpenIG instead of an OpenAM agent for authorization, and centralize all your access control policies in OpenAM.
  • SSO and federation capabilities for applications have been extended with a token transformation filter for use with the OpenAM REST Security Token Service. By using the filter, a mobile app with an OpenID Connect token can now access resources held by a federated service provider.
  • A new password replay filter simplifies the configuration for replaying credentials in common use cases.

Support for standards has been extended:

  • OpenID Connect Discovery makes it possible for users themselves, instead of system administrators, to select identity providers.
  • Initial support is available for a User Managed Access resource server, where users can control who accesses their resources, when, and under what conditions.

Behind the scenes, OpenIG 4 internals have been refactored to improve scalability – because we are no longer blocking threads, a single deployment can handle more requests at the same time.

These are just some of the changes in OpenIG 4. Check the Release Notes for a full list of what’s new in this release, and download the software from ForgeRock’s BackStage.

We love your feedback. Please feel free to ask questions, make suggestions, and tell us what you think of OpenIG by joining the community and getting on the forum and mailing list.

New version of ForgeRock Identity Platform™

This week, we have announced the release of the new version of the ForgeRock Identity Platform, which brings new services in the following areas:

  • Continuous Security at Scale
  • Security for Internet of Things (IoT)
  • Enhanced Data Privacy Controls

FRPlatform

This is also the first identity management solution to fully implement the User-Managed Access (UMA) standard, making it possible for organizations to address expanding privacy regulations and establish trusted digital relationships. See the article that Eve Maler, VP of Innovation at ForgeRock and Chief UMAnitarian, posted to explain UMA and what it can do for you.

A more in-depth description of the new features of the ForgeRock Identity Platform has also been posted.

The ForgeRock Identity Platform is available for download now at https://www.forgerock.com/downloads/

In future posts, I will detail what is new in the Directory Services part, built on the OpenDJ project.



New Version of the ForgeRock Identity Platform

This week we announced the new version of the ForgeRock Identity Platform™.

FRPlatform

The ForgeRock Identity Platform is now able to continuously and contextually evaluate the authenticity of users, devices and things.

This new version is also the first solution to support the “User-Managed Access” (UMA) standard, which lets individuals selectively share, control, authorize and revoke access to their data, and so gives businesses an open, standards-based solution to protect and control the privacy of their customers’ and employees’ data. These privacy and consent-management requirements are becoming important in healthcare, connected objects, and even the financial services sector.

To better understand UMA and the services offered by the ForgeRock Identity Platform, I suggest watching this short video (in English).

The ForgeRock Identity Platform is available for download right now at: https://www.forgerock.com/downloads/

The details of what is new in this version are on the ForgeRock website.

