2020: Machine Learning, Post Quantum Crypto & Zero Trust

Welcome to a digital identity project in 2020! You'll be expected to have a plan for post-quantum cryptography. Your network will be littered with "zero trust" buzzwords that will make you suspect everyone, everything and every transaction. Add to that, "machines" will be learning everything, from how you like your coffee through to every network, authentication and authorisation decision. OK, are you ready?

Machine Learning

I'm not going to do an entire blog on machine learning (ML) and artificial intelligence (AI). Firstly, I'm not qualified enough on the topic, and secondly, I want to focus on the security implications. Needless to say, within 3 years most organisations will have relatively experienced teams handling big data capture from an identity, access management and network perspective.

That data will be fed into ML platforms, either on-premise or via cloud services. Leveraging either supervised or unsupervised learning, data from events such as logins (authentication) for end users and devices, as well as authorization decisions, can be analysed not only to increase assurance and security, but also to improve user experience. How? Well, if the output from ML can be used to update existing signatures (a bit legacy, but still) whilst simultaneously identifying the less risky logins, end user journeys can be made less intrusive.

Step one is finding the correct data sources to feed into the ML "model". What data is available, especially within the sign-up, sign-in and authorization flows? Clearly, general auditing will capture events such as successful sign-ins and any other metadata associated with them – time, location, IP, device data, behavioural biometrics and so on. Having vast amounts of this data available is the first step, which in turn can be used to "feed" the ML engine. Other data points would be needed too. What resources, applications and API calls are being made to complete certain business processes? Can patterns be identified and tied to "typical" behaviour for user and device communities? Being able to identify and track critical data and the services that process that data comes first, before task-based data samples can be extracted to help distinguish trusted from untrusted activities.
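As a concrete (and very much hypothetical) sketch of that first step, here's how flattened authentication events could be fed to an off-the-shelf anomaly detector such as scikit-learn's IsolationForest – the CSV layout and feature names are invented for illustration:

import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical export of audit events, already flattened into numeric features.
events = pd.read_csv("auth_events.csv")
features = events[["hour_of_day", "geo_distance_km", "failed_attempts_24h", "is_new_device"]]

# Unsupervised model: learn what "typical" login behaviour looks like.
model = IsolationForest(n_estimators=100, contamination=0.01, random_state=42)
model.fit(features)

# -1 = anomalous (consider step-up authentication), 1 = low risk (reduce friction).
events["risk_flag"] = model.predict(features)
print(events[events["risk_flag"] == -1].head())

Low-risk logins could then skip a second factor, while the anomalous tail gets stepped up.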


Post Quantum Crypto

Quantum computing is coming. Which is great. Even in 2020 it might not be ready, but you need to be ready for it. But, and there's always a but, the main concern is that the super power of quantum will blow away the ability of existing encryption and hashing algorithms to remain secure. Why? Well, quantum computing ushers in the paradigm of "qubits" – bits that can exist in a superposition of the classic binary 1 and 0. Ultimately, that means the "solutioneering" of certain complex problems can be completed in a much more efficient and non-sequential way.

Quantum machines can basically solve certain problems faster – the mathematics behind cryptography being one of those problems. A basic estimate, based on Grover's algorithm halving the effective key length, is that something like AES-256 drops to 128 bits of effective strength. Scary stuff. Commonly used approaches today for key exchange rely on protocols such as Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH). Public key encryption and signing are then handled by the likes of Rivest-Shamir-Adleman (RSA) or the Elliptic Curve Digital Signature Algorithm (ECDSA).
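As a back-of-the-envelope illustration of that halving (Grover's algorithm gives a quadratic speedup on brute-force key search):

# Rough rule of thumb only: an n-bit symmetric key offers roughly n/2 bits
# of security against a quantum adversary running Grover's algorithm.
for name, bits in {"AES-128": 128, "AES-192": 192, "AES-256": 256}.items():
    print(f"{name}: ~{bits // 2} bits effective post-quantum")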

In the post-quantum (PQ) world they're basically broken – Shor's algorithm makes the underlying factoring and discrete logarithm problems tractable on a quantum computer. Clearly, the material impact on your organisation or services will largely depend on an impact assessment. There's no point putting a $100 lock on a $20 bike. But everyone wants encryption, right? All that data flying around is likely to need even more protection, both in transit and at rest.

Some of the potentially "safe" PQ algorithms include XMSS and SPHINCS for hash-based signatures – the former going through IETF standardization. Ring Learning With Errors (RLWE) is basically an enhanced public key cryptosystem that alters the structure of the private key. It is still under research, but no weaknesses have yet been found. NTRU is another algorithm for the PQ world, using a hefty 12881-bit key. NTRU is also already standardized by the IEEE, which helps with the maturity aspect.

But how to decide? There is a nice body called the PQCRYPTO Consortium that is providing guidance on current research. Clearly you're not going to build your own alternatives, but information assurance and crypto specialists within your organisation will need to start data impact assessments, in order to understand where cryptography is currently used for transport, identification and data-at-rest protection, and so identify any potential future exposures.
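As a small, hedged example of where such an assessment could start, here's a sketch that reports the TLS protocol version and cipher suite each endpoint actually negotiates – the host names are placeholders, and a real inventory would of course also cover data at rest and internal traffic:

import socket
import ssl

def probe(host: str, port: int = 443) -> None:
    # Report what the endpoint actually negotiates today.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            name, protocol, bits = tls.cipher()
            print(f"{host}: {tls.version()} {name} ({bits}-bit)")

for host in ("login.example.com", "api.example.com"):  # placeholder hosts
    probe(host)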


Zero Trust Identities

"Zero Trust" (ZT) networking has been around for a while. The concept of organisations having a "safe" internal network versus the untrusted, "hostile" public network, separated by a firewall, is long gone. Organisations are perimeter-less.

Assume every device, identity and transaction is hostile until proven otherwise. ZT for identity, especially, will look to bind not only a physical identity to a digital representation (session ID, token, JWT), but also that representation to a vehicle – a mobile, tablet or other device. In turn, every transaction that tuple takes part in is then verified – checked for contextual or behavioural changes that could indicate malicious intent. That introduces a lot of complexity to transaction, data and application protection.

Every transaction potentially requires introspection or validation. Add to this mix an increased number of devices and data flows, and the way is paved for distributed authorization, coupled with continuous session validation.

How will that look? Well, we're starting to see the use of things like stateless JSON Web Tokens (JWTs) as a means for hyper-scale assertion issuance, along with token binding to sessions and devices. Couple that with fine-grained authentication processes using 20+ signals of data to identify a user or thing, and we're starting to see the foundations of ZT identity infrastructures. Microservice or hyper-mesh application infrastructures are going to need rapid introspection and re-validation on every call, so the likes of distributed authorization look likely.
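To make the JWT part concrete, here's a minimal sketch of per-request validation, assuming the PyJWT library and an RS256-signed token carrying an RFC 7800-style cnf (confirmation) claim – the audience value and thumbprint plumbing are invented for illustration:

import jwt  # PyJWT

def validate(token: str, public_key: str, presented_thumbprint: str) -> dict:
    claims = jwt.decode(
        token,
        key=public_key,
        algorithms=["RS256"],  # pin the algorithm; never trust the token header
        audience="https://api.example.com",  # hypothetical audience
    )
    # Token binding: the caller must prove possession of the bound certificate.
    bound = claims.get("cnf", {}).get("x5t#S256")
    if bound is not None and bound != presented_thumbprint:
        raise PermissionError("token is not bound to the presenting client")
    return claims

Every microservice hop could run a check like this cheaply, without a round trip to a central session store.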


So the future is now. As always. Secure identity and access management has never, in the last 20 years, been more needed, popular or advanced. The next 3-5 years will be critical in defining a backbone of security services that can nimbly be applied to users, devices, data and the billions of transactions that will result.

ForgeRock UnSummit in Bristol – March 2nd.

Allan Foster, VP Global Partner Enablement, master of ceremonies of the 2016 San Francisco UnSummit.

On March 2nd, ForgeRock will be hosting an UnSummit, a free and open-to-all event, in Bristol. In an "unconference" format, join us at ForgeRock's Bristol offices at Queen's Square for a day of discussions and presentations with users, deployers and developers of the ForgeRock Identity Platform.

 

Top 5 reasons why you (or your team) should join us:

  1. It's a day for techies and nothing like a regular conference
  2. If you’re interested in identity or working on an identity project – it’s a must!
  3. There will be 30+ sessions to choose from during the day
  4. It’s a great opportunity to visit Bristol – one of Britain’s leading “Smart Cities”
  5. It's complimentary, so there's no charge to attend

You can register and find more details on the ForgeRock website. And if you're still hesitating, please check what TechSpark wrote about the coming UnSummit.

I’ll be attending the UnSummit and hope to see you there.

 


Filed under: General, Identity Tagged: Bristol, conference, ForgeRock, iam, identity, innovation, unconference

Paris Identity Summit, 15 November 2016

The French edition of the Identity Summit will take place on Tuesday, 15 November, in Paris, at the Cercle National des Armées.

The Identity Summit is the event for understanding how digital identity sits at the heart of security, digital transformation and the connected-things revolution. It is also an opportunity to hear real-world feedback on the ForgeRock Identity Platform, to meet other customers and share your needs and experience, to talk with the partners who carry out the implementations, and to get a glimpse of the upcoming evolutions of the ForgeRock solution…

To register, go to https://summits.forgerock.com/paris/ and enjoy a 50% discount with the code Summit50.

I hope to see you there!

Filed under: Identity, InFrench Tagged: conference, ForgeRock, identité, IdentitySummit, Paris

This blog post was first published @ ludopoitou.com, included here with permission.

London Identity Summit 2016

Yesterday, ForgeRock hosted the London Identity Summit, 2016 Series.

Mike Ellis, ForgeRock CEO, launching the London Identity Summit.

Attended by more than 300 customers, prospects and partners, the event was a great success. Highlights, presentations, etc. will all be available shortly at https://summits.forgerock.com/london/. Meanwhile, all my photos of the event are available here, and you can get a feel for the pulse of the event through the Twitter stream (hashtag #IdentitySummit).

The next Identity Summit will be held in Paris on November 15th. I hope to see you there.

 


Filed under: Identity Tagged: 2016, conference, ForgeRock, IdentitySummit, photos, summit

This blog post was first published @ ludopoitou.com, included here with permission.

Data Confidentiality with OpenDJ LDAP Directory Services

Directory servers have been used and continue to be used to store and retrieve identity information, including some data that is sensitive and should be protected. OpenDJ LDAP Directory Services, like many directory servers, has an extensive set of features to protect the data, from securing network connections and communications, to authenticating users, to access controls and privileges… However, in the last few years, the way LDAP directory services are deployed and managed has changed significantly, as they move to the "Cloud". Already, many ForgeRock customers are deploying OpenDJ servers on Amazon or MS Azure, and the requirements for data confidentiality are increasing, especially as the file system and disk management are no longer under their control. For that reason, we've recently introduced a new feature in OpenDJ, giving administrators the ability to encrypt all or part of the directory data before writing it to disk.

The OpenDJ Data Confidentiality feature can be enabled on a per database backend basis to encrypt LDAP entries before they are stored to disk. Optionally, indexes can also be protected, individually: an administrator may choose to protect all indexes, or only those that contain data that should remain confidential, like cn (common name) or sn (surname)… Additionally, confidentiality can be enabled for the replication logs, in which case it applies to all changes from all database backends. Note that if data confidentiality is enabled on an equality index, that index can no longer be used for ordering, and thus not for initial substring or sorted requests.

Example of command to enable data confidentiality for the userRoot backend:

dsconfig set-backend-prop \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -n -X \
 --backend-name userRoot --set confidentiality-enabled:true
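
Indexes are protected individually in a similar way. As a sketch – the exact subcommand and property names should be verified against your OpenDJ version – enabling confidentiality on the cn index could look like this:

dsconfig set-backend-index-prop \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -n -X \
 --backend-name userRoot --index-name cn --set confidentiality-enabled:true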

Data confidentiality is a dynamic feature that can be enabled and disabled without stopping the server. When enabling it on a backend, only entries that are subsequently updated or created will be encrypted; if existing data needs confidentiality, it is better to export and re-import the data. With index data confidentiality, the behaviour is different: when changing the data confidentiality setting on an index, you must rebuild the index before it can be used with search requests.
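For example, after toggling confidentiality on the cn index, a rebuild along these lines would be needed (again, check the options against your version's tools):

rebuild-index \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -X \
 -b dc=example,dc=com -i cn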

Key Management - Photo adapted from https://www.flickr.com/people/ecossystems/

When enabling data confidentiality, you can select the cipher algorithm and the key length, again on a per database backend basis. The encryption key is generated on the server itself and securely distributed to all replicated servers through the replication of the Admin Backend ("cn=admin data"), so it is never exposed to any administrator. Should a key get compromised, we provide a way to mark it as such and generate a new key. Also, a backup of an encrypted database backend can be restored on any server with the same configuration, as long as the server still has its configuration and its Admin Backend intact. Restoring such a backend backup to a fresh new server requires that the server be configured for replication first.

The Data Confidentiality feature can be tested with the OpenDJ nightly builds. It is also available to ForgeRock customers as part of our latest update of the ForgeRock Identity Platform.


Filed under: Directory Services Tagged: confidentiality, data-confidentiality, directory-server, encryption, ForgeRock, identity, java, ldap, opendj, opensource, security

Blockchain for Identity: Access Request Management

This is the first in a series of blogs that will start to look at some use cases for leveraging blockchain technology in the world of identity and access management. I don't proclaim to be a blockchain expert, and there are several blogs better equipped to tackle that subject, but a good introductory text is the O'Reilly published "Blockchain: Blueprint for a New Economy".

I want to first look at access request management: an age-old issue that has, over the last 30 years, developed substantially into several sub-industries within the IAM world, with specialist vendors, standards and methodologies.

In the Old Days

 

Embedded/Local Assertion Management

So this is a typical "standalone" model of access management. An application manages both users and access control list information within its own boundary. Each application needs a separate login and access control database. The subject is typically a person, and the object an application with functions and processes.
Specialism & Economies of Scale

So whilst the first example is the starting point – and still exists in certain environments – specialism quickly occurred, with separate processes for identity assertion management and access control list management.
Externalised Identity & ACL Management

So this could be a typical enterprise web access management paradigm. An identity provider generates a token or assertion, with a policy enforcement process acting as a gatekeeper down into the protected objects. This works perfectly well for single-domain scenarios, where identity and resource data can be easily controlled. Scaling is not really a major issue here either, as traditionally this approach would sit within the same LAN, for example.
So far so good. But today we are starting to see a much more federated and fragmented landscape. Organisations have complex supply chains, with partners, sub-companies and external users all requiring access to once internal-only objects. Employees, too, want to access resources in other domains and at as-a-service providers.
Federated Identities

This then creates a much more federated landscape. Protocols such as SAML2 and OAuth2/OIDC allow identity data from trusted 3rd parties, not originating from the object's domain, to interact with those resources securely.

Again, from a scaling perspective this tends to work quite well.  The main external interactions tend to be at the identity layer, with access control information still sitting within the object’s domain – albeit externalised from the resource itself.

The Mesh and Super-Federation

As the Internet of Things becomes normality, the increased volume of both subjects and objects creates numerous challenges. Firstly, the definition of both changes. A subject will become not just a person, but also a thing and potentially another service. An object will become not just an application, but an autonomous piece of data, an API or even another subject. This creates a multi-point set of interactions, with subjects accessing other subjects, APIs accessing APIs, things accessing APIs and so on.

Enter the Blockchain

So where does the blockchain fit into all this? Well, the main characteristics of value in this sort of landscape would be the decentralised, append-only, globally accessible nature of a blockchain. The blockchain could be used as an access request warehouse. This warehouse could contain the output from the access request workflow process, such as this sample of pseudocode:

{"sub": "1234-org2", "obj": "file.dat", "access": "granted", "iss": "tomorrow", "exp": "tomorrow+1", "issuingAuth": "org1", "added": "now"}

This is basic, but the entry would be hashed and cryptographically secured by a trusted access request manager. That manager would have circle-of-trust relationships with the relevant identity and access control managers.

After each access request, an entry would be made on the chain. Each object would then be able to query the chain, identify all entries that map to its object set, take the union of those entries and work out the necessary access control result – for example, combining all access-granted and access-denied results.
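To make the idea concrete, here's a toy, purely illustrative sketch of that append-and-union pattern – an in-memory hash chain rather than a real blockchain client, with the entry fields borrowed from the pseudocode above:

import hashlib
import json

chain = []  # each block: {"entry": ..., "prev": ..., "hash": ...}

def append_entry(entry: dict) -> None:
    # Chain each block to the previous one by hashing entry + previous hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True) + prev
    chain.append({"entry": entry, "prev": prev,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def decisions_for(obj: str) -> dict:
    # Union of all entries for this object; the latest entry per subject wins.
    result = {}
    for block in chain:
        entry = block["entry"]
        if entry["obj"] == obj:
            result[entry["sub"]] = entry["access"]
    return result

append_entry({"sub": "1234-org2", "obj": "file.dat", "access": "granted",
              "iss": "tomorrow", "exp": "tomorrow+1", "issuingAuth": "org1"})
print(decisions_for("file.dat"))  # {'1234-org2': 'granted'}

A real deployment would add signatures from the access request manager and distribute the chain across nodes, but the query pattern stays the same.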

 

A Blockchain-Enabled Access Request Management Workflow
 
So What?
 
So we now have another system and process to manage? Well, possibly, but this could provide a much more scalable and interoperable model with respect to all the access control decisions needed to enable an IoT- and API-driven world.
Each object could have access to any blockchain-enabled node, so there would be massive fault tolerance and elastic scaling. Each subject would simply present a self-contained assertion. Today that could be a JWT or a token within a proof-of-possession framework. They could collect that from any generator they choose. Things like authentication and identity validation would not be altered.
Access request workflow management would be abstracted – the same asynchronous processes, approvals and trusted interactions would take place. The blockchain would simply be an externalised, distributed, secure storage mechanism.
From a technology perspective I don’t believe this framework exists, and I will be investigating a proof of concept in this area.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

What’s new in OpenDJ 3.0, Part III

In the previous posts, I talked about the new PDB Backend in OpenDJ 3.0, and about the other changes to backends, replication and the changelog.

In this last article about OpenDJ 3.0, I’m presenting the most important new features and enhancements in this major release:

Certificate Matching Rules

OpenDJ now implements the CertificateExactMatch matching rule in compliance with "Lightweight Directory Access Protocol (LDAP) Schema Definitions for X.509 Certificates" (RFC 4523), and implements the schema and syntax for certificates, certificate lists and certificate pairs.

It’s now possible to search a directory to find an entry with a specific certificate, using a filter such as below:

(userCertificate={ serialNumber 13233831500277100508, issuer rdnSequence:"CN=Babs Jensen,OU=Product Development,L=Cupertino,C=US" })

Password Storage Schemes

The PKCS5S2 password storage scheme has been added to the list of supported storage schemes. While it is less secure and flexible than PBKDF2, it allows some of our customers to migrate from systems that use the PKCS5S2 algorithm. Other password storage schemes have been enhanced to support arbitrary salt lengths, helping with other migrations (without requiring all users to set a new password).
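For readers unfamiliar with the general scheme, here's a tiny illustration of PBKDF2 itself, using Python's standard library rather than OpenDJ's storage format: a random salt plus many HMAC iterations, so identical passwords never produce identical stored values.

import hashlib
import os

salt = os.urandom(16)  # arbitrary salt length, as mentioned above
derived = hashlib.pbkdf2_hmac("sha256", b"secret12", salt, 10_000)
print(salt.hex(), derived.hex())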

Disk Space Monitoring

In previous releases, each backend had its own disk space monitoring function, regardless of the filesystems or disks used. In OpenDJ 3.0, we've created a disk space monitoring service with which backends, replication and log services register. This allows the server to optimise the resources consumed by monitoring, while ensuring that all disks containing writable data are monitored and that alerts are raised when a low threshold is reached.
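The shape of that service is the classic register-and-poll pattern; a toy sketch (not OpenDJ's actual code, and with invented paths) might look like this:

import shutil

class DiskSpaceMonitor:
    # One shared service; backends, replication and logs register their paths.
    def __init__(self, low_threshold_bytes: int) -> None:
        self.low = low_threshold_bytes
        self.paths = {}  # path -> consumer name

    def register(self, consumer: str, path: str) -> None:
        self.paths[path] = consumer

    def check(self) -> None:
        for path, consumer in self.paths.items():
            free = shutil.disk_usage(path).free
            if free < self.low:
                print(f"ALERT: {consumer} on {path}: only {free} bytes free")

monitor = DiskSpaceMonitor(low_threshold_bytes=1 << 30)  # alert below 1 GiB
monitor.register("userRoot backend", "/var/opendj/db")
monitor.register("replication changelog", "/var/opendj/changelogDb")
monitor.check()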

Improvements

There are many improvements in many areas of the server: in the REST to LDAP services and gateway, optimisations on indexes, dsconfig batch mode, DSML Gateway supporting SOAP 1.2, native packages… For the complete details, please read the Release Notes.

As always, the best way to really see and feel the difference is by downloading and installing the OpenDJ server, and playing with it. We’re providing a Zip installation, an RPM and a Debian Package, the DSML Gateway and the REST to LDAP Gateway as war files.

Over the course of the development of OpenDJ 3.0, we've received many contributions, in the form of code, issues raised in our JIRA, documentation… We address our deepest thanks to all the contributors and developers:

Auke Schrijnen, Ayami Tyndal, Brad Tumy, Bruno Lavit, Bernhard Thalmayr, Carole Forel, Chris Clifton, Chris Drake, Chris Ridd, Christian Ohr, Christophe Sovant, Cyril Grosjean, Darin Perusich, David Goldsmith, Dennis Demarco, Edan Idzerda, Fabio Pistolesi, Gaétan Boismal, Gary Williams, Gene Hirayama, Hakon Steinø, Ian Packer, Jaak Pruulmann-Vengerfeldt, James Phillpotts, Jeff Blaine, Jean-Noël Rouvignac, Jens Elkner, Jonathan Thomas, Kevin Fahy, Lana Frost, Lee Trujillo, Li Run, Ludovic Poitou, Manuel Gaupp, Mark Craig, Mark De Reeper, Markus Schulz, Matthew Swift, Matt Miller, Muzzol Oliba, Nicolas Capponi, Nicolas Labrot, Ondrej Fuchsik, Patrick Diligent, Peter Major, Quentin Cassel, Richard Kolb, Robert Wapshott, Sébastien Bertholet, Shariq Faruqi, Stein Myrseth, Sunil Raju, Tomasz Jędrzejewski, Travis Papp, Tsoi Hong, Violette Roche-Montané, Wajih Ahmed, Warren Strange, Yannick Lecaillez. (I’m sorry if I missed anyone…)


Filed under: Directory Services Tagged: directory, directory-server, ForgeRock, identity, java, ldap, opendj, opensource, release