Directory Services – Docker, Kubernetes: Friends or Foes?

Two weeks ago, at the ForgeRock Identity Live conference, I gave a talk about ForgeRock Directory Services (DS) in the Docker/Kubernetes (K8S) world, trying to answer the question of whether DS and Docker/K8S are friends or foes.

Before I dive into the question, let me say that it's obvious that our whole industry is moving to the Cloud, and that Docker/Kubernetes are becoming the standard way to deploy software in the Cloud, any Cloud. Therefore, whether DS and K8S are ultimately friends or foes is not the right question. I believe the move is unavoidable, and that in the near future we will deploy and fully support Directory Services in K8S. But is it a good idea to do it today? Let's examine why we are questioning this today, what the benefits of using Kubernetes to deploy software are, what the constraints of deploying the current version of Directory Services (6.5) in Kubernetes are, and what ForgeRock is working on to improve DS in K8S. Finally, I will highlight why Directory Services is a good solution to persist data, whether on premises or in the Cloud.

Why the discussion about DS and K8S?

The main reason we are having this discussion is the nature of Directory Services. DS is not the usual stateless web application: it is both a stateful application and a distributed one. These are the two main aspects that require special care when deploying in containers. First, Directory Services is a stateful application because it is the place where one stores the state for all those stateless web applications. In our platform, we use DS to store ForgeRock Access Management data, whether it's runtime configuration, tokens, or user identities. Second, Directory Services is a distributed application because instances need to talk to each other so that the data is replicated and consistent. Because databases and distributed applications require stronger orchestration and coordination between the elements of the system, they are implemented as StatefulSets in the Kubernetes world and make use of Persistent Volumes (PV). Our Cloud Deployment Model of ForgeRock Directory Services is therefore implemented this way too.
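
To make that concrete, here is a minimal sketch of a StatefulSet with per-instance persistent storage. This is not the actual Cloud Deployment Model manifest; the image name, port and storage size are illustrative.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ds
spec:
  serviceName: ds              # headless service giving each pod a stable DNS name (ds-0, ds-1, ...)
  replicas: 3
  selector:
    matchLabels:
      app: ds
  template:
    metadata:
      labels:
        app: ds
    spec:
      containers:
      - name: ds
        image: example/ds:6.5          # illustrative image name
        ports:
        - containerPort: 1389          # LDAP
        volumeMounts:
        - name: data
          mountPath: /opt/opendj/data  # illustrative data path
  volumeClaimTemplates:                # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```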

It's worth noting that Persistent Volume is a Kubernetes API, and that there are several volume types and many different provider implementations. Some of the PV types are very recent and still in beta. So, when using Kubernetes for applications that persist data, you should have a good understanding of the characteristics and performance of the Persistent Volume choices available in your environment.
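
For example, on Google Cloud you could define a StorageClass backed by SSD persistent disks and reference it from your volume claims. This is just a sketch; the right provisioner and parameters depend entirely on your environment.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/gce-pd   # GCE persistent disks; other clouds use other provisioners
parameters:
  type: pd-ssd                      # SSD-backed volumes for predictable I/O performance
reclaimPolicy: Retain               # keep the disk (and its data) even if the claim is deleted
```

A StatefulSet's volumeClaimTemplates can then request this class with storageClassName: fast-ssd.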

Benefits of Containers and Kubernetes

Developers make great use of containers because containers let them focus on what they have to build and test. Instead of spending hours figuring out how to install and configure a database, and building a monitoring platform to validate their work, they can pull one or more Docker images that automate these tasks.

When going into production, automation is a key aspect. Kubernetes and its family of tools allow administrators to describe their target architectures and automate deployment, monitoring, and incident response. Typically, in a Kubernetes cluster, if the administrator requires at least 3 instances of an application, Kubernetes will react to the disappearance of an instance and start a new one immediately. Another key benefit of Kubernetes is auto-scaling: the deployment can react to monitoring alerts or external signals to add or remove instances of an application in order to support a greater or smaller workload. This optimises the cost of running the solution, balancing the capacity to absorb peak loads against the cost of running at normal or low usage levels.
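
As an illustration, here is what auto-scaling a stateless application can look like with a HorizontalPodAutoscaler; the names and thresholds are illustrative.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                      # illustrative deployment name
  minReplicas: 3                       # never fewer than 3 instances
  maxReplicas: 10                      # cap the cost of absorbing peak load
  targetCPUUtilizationPercentage: 70   # add or remove pods around 70% average CPU
```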

Directory Services 6.5 constraints in K8S

But auto-scaling is not suitable for all applications, and Directory Services, like most databases, does not scale automatically by adding more running instances. Because databases have state and data, and expect exclusive access to their files, adding a new replica is a costly operation: the data needs to be duplicated in order for another instance to use it. Also, adding a Directory Services instance only helps to scale read operations. A write operation on any server needs to be replicated to all other servers, so every server ends up with the same write throughput and the same amount of disk I/O. In the world of databases, the only way to scale write operations is to distribute (shard) the data across multiple servers. That capability is not yet available in Directory Services, but it's planned for future releases. (Note that Directory Proxy Services 6.5 already has support for sharding, with some constraints, and the proxy is not yet part of the Cloud Deployment Model.)

Another constraint of Directory Services 6.5 is how replication works. The DS replication feature was designed years ago, when customers would deploy servers and not touch them unless they were broken. Servers had stable hostnames or IP addresses and knew all of their peers. In the container world, the address of an instance is only known after the instance has started, and sometimes you want to start several instances at the same time. The current ForgeRock Cloud Deployment Model and the Directory Services Docker images that we propose work around this design limitation by pre-configuring replication for a fixed (and small) maximum number of replicas; it's not possible to dynamically add another replica after that. Also, the "dsreplication" utility cannot be used in Kubernetes. Luckily, monitoring replication, and more importantly its latency, is possible with Prometheus, the default monitoring technology in Kubernetes.
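
As a sketch, a Prometheus alerting rule for replication delay could look like the following. The metric name here is an assumption for illustration; use whatever names your DS version actually exposes on its Prometheus endpoint.

```yaml
groups:
- name: ds-replication
  rules:
  - alert: DSReplicationDelayHigh
    # ds_replication_delay_seconds is an illustrative metric name, not necessarily
    # the one exposed by DS 6.5; adapt it to your server's Prometheus output.
    expr: ds_replication_delay_seconds > 5
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Replication delay above 5s on {{ $labels.instance }} for 5 minutes"
```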

Coming Improvements in Directory Services

For the past year, we've been working hard on redesigning how we manage and bootstrap replication between Directory Services instances. Our main challenge with that work has been to do it in a way that allows us to continue to replicate with previous versions. Interoperability and compatibility of replication between different versions of Directory Services has been, and will remain, a key value of the product, allowing customers to roll out new versions with zero downtime of the service. We're moving towards using full CA-based certificates and mutual TLS authentication for establishing trust between replicas. Configuring a new replica will no longer require updating all servers in the topology, and replicas that are uninstalled or stopped for some time will be automatically removed from the topology (along with their associated change logs and metadata). When starting, a new replica will only need to know of one other running replica (or be told that it is the first one). These changes will make automating the deployment of new replicas much simpler and remove the limit on the number of replicas. We are also improving the way we do backup and restore of a database backend or the whole server, allowing direct use of cloud storage buckets such as S3 or GCS. All of this is planned for the next major release, due in the first half of 2020. Most of these features will be used by our own ForgeRock Identity Platform as a Service offering, which will go through Early Access and Beta stages later this year.

Once we have the ability to fully automate the deployment and upgrade of a cluster of Directory Services instances, in one or more data centres, we will start working on horizontal scalability for Directory Services, providing a way to scale the number of servers as the data stored grows while maintaining a consistent level of write throughput. All of this will be fully automated for deployment in the Cloud using Kubernetes.

Benefits of using Directory Services as a data store

Often people ask me why they should use ForgeRock Directory Services rather than a "real" database. First of all, Directory Services is a database. It's a specialised database, built on a standard data model and a standard access protocol: the Lightweight Directory Access Protocol, aka LDAP. Several people have pointed out in the past that LDAP might even have been the first successful NoSQL database! 🙂 Furthermore, Directory Services also exposes all of the data through a REST/JSON API, while still providing the same security and fine-grained access control mechanisms as LDAP. But the main value of Directory Services is that you can achieve very high availability of the data (five nines), using standard systems (whether bare metal, virtual hosts, or containers), even with worldwide geographic distribution. We have many customers that have deployed a single directory service distributed across 3 to 6 data centres around the globe. The LDAP data model has a flexible schema that can be extended and customised without rebuilding the database or even restarting the servers. The data can also be exposed through versioned APIs using our REST API. Finally, the combination of a flexible, extensible schema with fine-grained access controls allows multiple applications to access the data, with tight control over which application can read or write which data. This results in a single identity and set of credentials for a user, with multiple sets of attributes that can be shared by applications or restricted to a single one: a single central view of the user that is easier and more cost-effective to manage.
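
As an illustration of that flexibility, extending the schema is an online LDAP modification against the cn=schema entry, with no rebuild and no restart. Here is a sketch; the OID and attribute name are made up for the example:

```ldif
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'loyaltyTier'
  DESC 'Illustrative custom attribute'
  EQUALITY caseIgnoreMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
  SINGLE-VALUE )
```

Applying this with a standard ldapmodify client makes the new attribute immediately available to applications, subject to the access controls you define.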

Conclusion

Coming back to Kubernetes: because of the constraints of the current Directory Services Cloud Deployment Model with version 6.5, we recommend that you keep your Directory Services deployed in VMs or on bare metal for now. But with the next release, which underpins the ForgeRock Cloud offering, we will fully support deploying Directory Services on Docker/Kubernetes. We will continue our investment in the product to support auto-scaling (using data sharding) in subsequent releases. Building these solutions is not extremely difficult, but we need time to prove that they are 100% reliable in all conditions, because in the end, the most wanted and appreciated feature of ForgeRock Directory Services is its reliability.

This blog post was first published @ ludopoitou.com, included here with permission.

Renewable Security: Steps to Save The Cyber Security Planet

Actually, this has nothing to do with being green, although that is a passion of mine. It is more to do with a paradigm that is becoming popular in security architectures: being able to re-spin particular services to a known "safe" state after a breach, or even as a preventative measure before a breach or vulnerability has been exploited.

Triple R's of Security


This falls into what is known as the "3 R's of Security". A quick Google on that topic will turn up a fair few decent explanations of what it can mean. The TL;DR is basically: rotate (credentials), repair (vulnerabilities) and repave (services and servers to a known good state). This approach is gaining popularity mainly due to devops deployment models. Or "secdevops". Or is it "devsecops"? Containerization and highly automated "code to prod" pipelines make it a lot easier to get stuff into production, iterate and go again. So how does security play into this?
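
In a containerised environment, the three R's map loosely onto everyday operations. Here is a rough kubectl sketch, with illustrative resource names:

```bash
# Rotate: replace a credential with a freshly generated one
kubectl create secret generic api-creds \
  --from-literal=password="$(openssl rand -base64 24)" \
  --dry-run=client -o yaml | kubectl apply -f -

# Repair: roll out an image rebuilt on a patched base
kubectl set image deployment/web-app web-app=registry.example.com/web-app:patched

# Repave: recreate every pod from the known-good image
kubectl rollout restart deployment/web-app
```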

Left-Shifting 


Well, I want to backtrack a little and tackle the age-old issue of why security is generally applied as a post-live concern. Security practitioners often evangelise the "left shifting" of security: getting security higher up the production line, earlier in the software design life cycle, and less as an audit/afterthought/pen-testing exercise. Why isn't this really happening? Well, anecdotally, just look at the audit, pen-testing and testing contractor rates. They're high and growing. Sure, lots of dev teams and organisations are incorporating security architecture practices earlier in the dev cycle, but many find this too slow, expensive or inhibiting. Many simply ship insecure software and assume external auditors will find the issues.

This, I would say, has resulted in variations of R3: dev as normal, and simply flatten and rebuild in production in order to either prevent vulnerabilities being exploited or recover from them faster. Is this the approach many organisations are applying to newer architectures such as micro-services, server-less and IoT?

IoT, Microservices and Server-less


There are not many mature design patterns or vendors for things like micro-services security or even IoT security. Yes, there are some interesting ideas, but the likes of Forrester, Gartner and other industry analysts don't, to my knowledge, describe security for these areas as a known market size or a level of repeatable maturity. So what are the options? Do these architectures ship without security? Well, being a security guy, I would hope not. So, what is the next best approach? Maybe the triple R model is the next best thing. Assume you're going to be breached – which CISOs should be doing anyway – and focus on a remediation plan.

The triple R approach does assume a few things though. The main one is that you have a known-safe place. Whether that is focused on images, virtual machines or new credentials, there needs to be a position you can roll back (or forward) to that is believed to be more secure than the version before. That safe place also needs to evolve: there is no point in a safe place that can't deliver the services needed to keep end users happy.

Options, Options, Options...


The main benefit of the triple R approach is that you have options – either as a response to a breach or vulnerability exposure, or as a preventative shortcut. It can bring other, more pragmatic issues however. If we're referring to things like IoT security, how can devices in the field, and potentially away from Internet connectivity, be hooked, rebuilt and re-keyed? Can this be done in a hot-swappable model too, without interruptions to service? If you need to rebuild a smart meter, you can't possibly interrupt the electricity supply to the property whilst that completes.

So the R3 model is certainly a powerful tool in the security architecture kit bag. Is it suitable for all scenarios? Probably not. Is it a good "get out of jail" card in environments with highly optimised devops-esque processes? Absolutely.
