Immutable Deployment Pattern for ForgeRock Access Management (AM) Configuration without File Based Configuration (FBC)

Introduction

The standard production-grade deployment pattern for ForgeRock AM is to use replicated sets of Configuration Directory Server instances to store all of AM’s configuration. This pattern has worked well in the past, but is less suited to the immutable, DevOps-enabled environments of today.

This blog presents an alternative view of how an immutable deployment pattern could be applied to AM ahead of the full File Based Configuration (FBC) support arriving in version 7.0 of the ForgeRock Platform. This pattern could also make the eventual transition to FBC easier.

Current Common Deployment Pattern

Currently most customers deploy AM with externalised Configuration, Core Token Service (CTS) and UserStore instances.

The following diagram illustrates such a topology spread over two sites; the focus is on the DS Config Stores, so the CTS and DS Userstore connections and replication topology have been simplified. Note this blog is still applicable to single-site deployments.

Dual site AM deployment pattern. Focus is on the DS Configuration stores

In this topology AM uses connection strings to the DS Config stores to enable an all-active Config store architecture, with each AM targeting one DS Config store as primary and the second as failover within its site. Note that in this model there is no cross-site failover for AM-to-Config-store connections (possible, but discouraged). The DS Config stores do replicate across sites in a full mesh, as do the User and CTS stores.

A slight divergence from this model, and one applicable to cloud environments, is to use a load balancer between AM and its DS Config Stores. However, we have observed many customers experience problems with features such as persistent searches failing due to dropped connections. Hence, where possible, Consulting Services recommends the use of AM connection strings.

It should be noted that AM connection strings specific to each AM can only be used if each AM has a unique FQDN, for example: https://openam1.example.com:8443/openam, https://openam2.example.com:8443/openam and so on.
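As an illustration only, and assuming the host:port|serverID form for server-specific connection strings described in the AM documentation (the hostnames, port and server IDs 01/02 below are hypothetical), a per-server Config store connection string might look something like this, pinning each AM server to a preferred Config store:

# Hypothetical: AM server 01 targets dsconfig1, AM server 02 targets dsconfig2
dsconfig1.example.com:1636|01,dsconfig2.example.com:1636|02

Check the AM connection string documentation linked below for the exact syntax and failover behaviour.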

For more on AM Connection Strings click here

Problem Statement

This model has worked well in the past; the DS Config stores contain all the stuff AM needs to boot and operate plus a handful of runtime entries.

However, times are a changing!

The advent of Open Banking introduces potentially hundreds of thousands of OAuth2 clients, AM policy entry counts are ever increasing, and UMA is thrown in for good measure; the previously small, minimal-footprint and fairly static DS Config Stores are suddenly much more dynamic and contain many thousands of entries. Managing the stuff AM needs to boot and operate alongside all this runtime data suddenly becomes much more complex.

TADA! Roll up the new DS App and Policy Stores. These new data stores address this by separating the stuff AM needs to boot and operate from long-lived, environment-specific data such as policies, OAuth2 clients, SAML entities etc. Nice!

However, one problem remains: stack-by-stack, blue/green, rolling and immutable-style deployments are still difficult, because DS Config Store replication is in place and needs to be very carefully managed during any deployment.

Some common issues:

  • Making a change to one AM can quite easily have a ripple effect through DS replication, which impacts and/or impairs the other AM nodes, both within the same site and remotely. This behaviour can make customers more hesitant to introduce patches, config or code changes.
  • In a dual-site environment the typical deployment pattern is to stop cross-site replication, force traffic to site B, disable site A, upgrade site A, test it in isolation, force traffic back to the newly deployed site A, ensure production is functional, disable traffic to site B, push replication from site A to site B and re-enable replication, and finally upgrade site B before returning to normal service.
  • Complexity is further increased if App and Policy stores are not in use, as the in-service DS Config stores may have new OAuth2 clients, UMA data etc. created during the transition which need to be preserved. So in the above scenario an LDIF export of such data from site B’s DS Config Stores needs to be taken and imported into site A before site A goes live (to catch changes made while site A’s deployment was in progress), and after site B is disabled another LDIF export needs to be taken from B and imported into A to catch any last-minute changes between the first LDIF export and the switchover. Sheesh!
  • Even in a single-site deployment model, managing replication as well as the AM upgrade/deployment itself introduces risk and several potential break points.

New Deployment Model

The real enabler for a new deployment model for AM is the introduction of App and Policy stores, which will be replicated across sites. They enable full separation of the stuff AM needs to boot and run from environmental runtime data. In such a model the DS Config stores return to a minimal footprint, containing only AM boot data, with the App and Policy Stores holding the long-lived environmental runtime data which is typically subject to zero-loss SLAs and long-term preservation.

Another enabler is a different configuration pattern for AM, where each AM effectively has the same FQDN and serverId, allowing AM to be built once and then cloned into an image. This allows rapid expansion and contraction of the AM farm without having to interact with the DS Config Store to add/delete instances or go through the build process again and again.

Finally, the last key component of this model is Affinity Based Load Balancing for the Userstore, CTS, App and Policy stores. Affinity both simplifies the configuration and enables an all-active datastore architecture that is immune to data misses caused by replication delay, and it is central to this new model.

Affinity is a unique feature of the ForgeRock platform and is used extensively by many customers. For more on Affinity click here.

The proposed topology below illustrates this new deployment model and is applicable to both active-active and active-standby deployments. Note that cross-site replication for the User, App and CTS stores is depicted, but may well not be required for global/isolated deployments.

Localised DS Config Store for each AM with replication disabled

As the DS Config store footprint will be minimal, to enable immutable configuration and massively simplify stack-by-stack/blue-green/rolling deployments the proposal is to move the DS Config Stores local to AM, with each AM built with exactly the same FQDN and serverId. Each local DS Config Store lives in isolation and replication is not enabled between these stores.

To provision each DS Config Store in lieu of replication, either the same build script can be executed on each host, or, as a quicker and more optimised approach, one AM/DS Config store instance/Pod can be built in full, cloned, and the resulting image deployed each time a new AM/DS instance is needed. The latter approach removes the need to interact with Amster to build additional instances, or with Git to pull configuration artefacts. With this model any new configuration change requires a new package/Docker image/AMI, etc., i.e. an immutable build.
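As a minimal sketch of the clone-and-deploy approach, assuming a Docker-based pipeline (the registry, image name and tags below are purely illustrative and not part of any ForgeRock tooling):

# Build the combined AM plus local DS Config store image once, from a fully configured instance
docker build -t registry.example.com/am-with-config:release-1 .
docker push registry.example.com/am-with-config:release-1

# Any configuration change means building and rolling out a new tag; running nodes are never edited in place
docker build -t registry.example.com/am-with-config:release-2 .
docker push registry.example.com/am-with-config:release-2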

At boot time AM uses its local address to connect to its DS Config Store, and Affinity to connect to the User Store, CTS and the App/Policy stores.

Advantages of this model:

  • As the DS Config Stores are not replicated, most AM configuration and code-level changes can be implemented or rolled back (using a new image or similar) without impacting any of the other AM instances and without the complexity of managing replication. Blue/green, rolling and stack-by-stack deployments and upgrades are massively simplified, as is rollback.
  • Enables simplified expansion and contraction of the AM pool, especially if an image/clone of a full AM instance and associated DS Config instance is used. This cloning approach also protects against configuration changes in Git or other code repositories inadvertently rippling to new AM instances; the same code and configuration base is deployed everywhere.
  • Promotes the cattle-vs-pets paradigm: for any new configuration, deploy a new image/package.
  • This approach does not require any additional instances; the existing DS Config Stores are repurposed as App/Policy stores, and the new minimal DS Config Stores are hosted locally to AM (or in a small container in the same Pod as AM).
  • The existing DS Config Stores can be quickly repurposed as App/Policy Stores; no new instances or data-level deployment steps are required other than tuning up the JVM and potentially uprating storage, enabling rapid switching from DS Config to App/Policy Stores.
  • Enabler for FBC; when FBC becomes available the local DS Config stores are simply stopped in favour of FBC. Also, if the transition to FBC becomes problematic, rollback is easy: fire up the local DS Config stores and revert.

Disadvantages of this model:

  • No DS Config Store failover; if the local DS Config Store fails, the AM connected to it would also fail and not recover. However, this fits well with the pets-vs-cattle paradigm; if a local component fails, kill the whole instance and instantiate a new one.
  • Any log systems which have logic based on individual FQDNs for AM (Splunk, etc.) would need their configuration modified to take into account that each AM now has the same FQDN.
  • This deployment pattern is only suitable for customers who have mature DevOps processes. The expectation is that no changes are made directly in production; instead, a new release/build is produced and promoted to production. If, for example, a customer makes changes via REST or the UI directly, these changes will not be replicated to the other AM instances in the cluster, which would severely impair performance and stability.

Conclusions

This suggested model would significantly improve a customer’s ability to take on new configuration/code changes, and to roll them back, without impacting other AM servers in the pool; it makes effective use of the App/Policy stores without additional kit, allows an easy transition to FBC and enables DevOps-style deployments.

This blog post was first published @ https://medium.com/@darinder.shokar and is included here with permission.

Renewable Security: Steps to Save The Cyber Security Planet

Actually, this has nothing to do with being green. Although, that is a passion of mine. This is more to do with a paradigm that is becoming more popular in security architectures: that of being able to re-spin particular services to a known “safe” state after a breach, or even as a preventative measure before a breach or vulnerability has been exploited.

Triple R's of Security


This falls into what is known as the “3 R’s of Security”. A quick Google on that topic will return a fair few decent explanations of what it can mean. The TL;DR is basically: rotate (credentials), repair (vulnerabilities) and repave (services and servers to a known good state). This approach is gaining popularity mainly due to devops deployment models. Or “secdevops”. Or is it “devsecops”? Containerization and highly automated “code to prod” pipelines make it a lot easier to get stuff into production, iterate and go again. So how does security play into this?

Left-Shifting 


Well, I want to backtrack a little and tackle the age-old issue of why security is generally applied as a post-live concern. Security practitioners often evangelise the “left shifting” of security: getting security higher up the production line, earlier in the software design life cycle, and less as an audit/afterthought/pen-testing exercise. Why isn’t this really happening? Well, anecdotally, just look at the audit, pen testing and testing contractor rates. They’re high and growing. Sure, lots of dev teams and organisations are incorporating security architecture practices earlier in the dev cycle, but many find this too slow, expensive or inhibitive. Many simply ship insecure software and assume external auditors will find the issues.

This, I would say, has resulted in variations of R3: dev as normal, and simply flatten and rebuild in production in order to either prevent vulnerabilities being exploited, or recover from them faster. Is this the approach many organisations are applying to newer architectures such as micro-services, server-less and IoT?

IoT, Microservices and Server-less


There are not many mature design patterns or vendors for things like micro-services security or even IoT security. Yes, there are some interesting ideas, but the likes of Forrester, Gartner and other industry analysts don’t, to my knowledge, describe security for these areas as a known market size, or a level of repeatable maturity. So what are the options? Do these architectures ship without security? Well, being a security guy, I would hope not. So, what is the next best approach? Maybe the triple R model is the next best thing. Assume you’re going to be breached (which CISOs should be doing anyway) and focus on a remediation plan.

The triple R approach does assume a few things though. The main one is that you have a known-safe place. Whether that is focused on images, virtual machines or new credentials, there needs to be a position which you can roll back or forward to that is believed to be more secure than the version before. That safe place also needs to evolve. There is no point in that safe place being unable to deliver the services needed to keep end users happy.

Options, Options, Options...


The main benefit of the triple R approach is that you have options, either as a response to a breach or vulnerability exposure, or as a preventative shortcut. It can bring other, more pragmatic issues, however. If we’re referring to things like IoT security, how can devices in the field, and potentially away from Internet connectivity, be hooked, rebuilt and re-keyed? Can this be done in a hot-swappable model too, without interruptions to service? If you need to rebuild a smart meter, you can’t possibly interrupt the electricity supply to the property whilst that completes.

So the R3 model is certainly a powerful tool in the security architecture kit bag. Is it suitable for all scenarios? Probably not. Is it a good “get out of jail” card in environments with highly optimised devops-esque processes? Absolutely.

Introduction to ForgeRock DevOps – Part 3 – Deploying Clusters

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:

http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html
http://identity-implementation.blogspot.co.uk/2017/05/introduction-to-forgerock-devops-part-2.html

I will be using IBM Bluemix here as I have recent experience of it but nearly all of the concepts will be similar for any other cloud environment.

Deploying Clusters

So now we have docker images deployed into Bluemix. The next step is to actually deploy the images into a Kubernetes cluster. Firstly we need to create a cluster, then we need to actually deploy into it. For what we are doing here we need a standard paid cluster.

Preparation

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

2. Choose a location. You can view locations with:

bx cs locations

3. Choose a machine type. You can view machine types for a location with:

bx cs machine-types dal10

4. Check for VLANs. You need to choose both a public and a private VLAN for a standard cluster. It should look something like this:

bx cs vlans dal10

If you need to create them… init the SoftLayer CLI first:

bx sl init

Just select Single Sign On (option 2):

You should be logged in and able to create VLANs:

bx sl vlan create -t public -d dal10 -s 8 -n waynepublic

Note: Your Bluemix account needs permission to create VLANs; if you don’t have this you will need to contact support (you’ll be told if this is the case). You should get one free public VLAN, I believe.

Creating a Cluster

1. Create a cluster:

Assuming you have public and private VLANs you can create a kubernetes cluster:

bx cs cluster-create --location dal10 --machine-type u1c.2x4 --workers 2 --name wbcluster --private-vlan 1638423 --public-vlan 2106869

You *should* also be able to use the Bluemix UI to create clusters.

2. You may need to wait a little while for the cluster to be deployed. You can check the status of it using:

bx cs clusters

During the deployment you will likely receive various emails from Bluemix confirming infrastructure has been provisioned.

3. When the cluster has finished deploying (the state is no longer pending), set the new cluster as the current context:

bx cs cluster-config wbcluster

The export statement in the command output is the important bit; copy and paste that export back into the terminal to configure the environment so kubectl points at your new cluster.
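For illustration only (the exact file name and path are generated by the CLI and will differ in your environment), the export looks something like this:

export KUBECONFIG=/Users/you/.bluemix/plugins/container-service/clusters/wbcluster/kube-config-dal10-wbcluster.yml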

4. Now you can run kubectl commands. View the cluster config with:

kubectl config view

See the kubernetes documentation for the full set of commands you can run; we will only be looking at a few key ones for now.

5. Clone (or download) the ForgeRock Kubernetes repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/fretes/browse

6. Navigate to the fretes directory:

cd /usr/local/DevOps/stash/fretes

 

7. We need to make a tweak to the fretes/helm/custom.yaml file and add the following:

storageClass: ibmc-file-bronze

This specifies the type of storage we want our deployment to use in Bluemix. If it were AWS or Azure you would need something similar.
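If you want to check which storage classes your cluster actually offers before picking one (an optional sanity check, not part of the original steps), you can list them with kubectl:

kubectl get storageclasses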

8. From the same terminal window in which you set up kubectl, navigate to the fretes/helm/ directory and run:

helm init

This will install the Helm server-side component (Tiller) into the cluster, ready to process the helm scripts we are going to run.
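To confirm Helm is ready before running the scripts (an optional check, assuming Helm 2, where helm init deploys Tiller into the kube-system namespace):

helm version
kubectl get pods --namespace kube-system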

9. Run the OpenAM helm script, which will deploy instances of AM, backed by DJ, into our kubernetes cluster:

/usr/local/DevOps/stash/fretes/helm/bin/openam.sh

This script will take a while and again will trigger the provisioning of infrastructure, storage and other components resulting in emails from Bluemix. While this is happening you should see something like this:

If you have to re-deploy on subsequent occasions, the storage will not need to be re-provisioned and the whole process will be significantly faster. When it is all done you should see something like this:
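If you want to watch progress yourself while the script runs (optional, not in the original post), the pods and persistent volume claims can be monitored with:

kubectl get pods
kubectl get pvc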

10. Proxy the kube dash:

kubectl proxy

Navigate to http://127.0.0.1:8001/ui in a browser and you should see the kubernetes console!

Here you can see everything that has been deployed automatically using the helm script!

We have multiple instances of AM and DJ with storage deployed into Bluemix ready to configure!

In the next blog we will take a detailed look at the kubernetes dashboard to understand exactly what we have done, but for now let’s take a quick look at one of our new AM instances.

11. Log in to AM:

Ctrl-C the proxy command and type the following:

bx cs workers wbcluster

You can see a list of our workers above, along with the IPs they have been publicly exposed on.

Note: There are defined ways of accessing applications using Kubernetes; typically you would use an ingress or a load balancer rather than going directly to the public IP. We may look at these in later blogs.

As you probably know, AM expects a fully qualified domain name, so before we can log in we need to edit /etc/hosts and add an entry mapping one of the worker public IPs to openam.example.com:
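For example (the IP below is purely illustrative; substitute the public IP of one of your own workers from the bx cs workers output):

203.0.113.10 openam.example.com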

Then you can navigate to AM:

http://openam.example.com:30080/openam

You should be able to login with amadmin/password!

Summary

So far in this series we have created docker containers with the ForgeRock components, uploaded these to Bluemix and run the orchestration helm script to actually deploy instances of these containers into a meaningful architecture. Not bad!

In the next blog we will take a detailed look at the kubernetes console and examine what has actually been deployed.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Introduction to ForgeRock DevOps – Part 2 – Building Docker Containers

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:
http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html

I will be using IBM Bluemix here as I have recent experience of it but nearly all of the concepts will be similar for any other cloud environment.

Building Docker Containers

In this blog we are going to build our docker containers that will contain the ForgeRock platform components, tag them and upload them to the Bluemix registry.

Prerequisites

Install all of the below:

Docker: https://www.docker.com
Used to build, tag and upload docker containers.
Bluemix CLI: http://clis.ng.bluemix.net/ui/home.html
Used to deploy and configure the Bluemix environment.
CloudFoundry CLI: https://github.com/cloudfoundry/cli
Bluemix dependency.
Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Used to deploy and manage Kubernetes clusters.

Initial Configuration

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

Note we are using the US instance of Bluemix here as it has support for Kubernetes in beta.

When prompted to select an account, just type 1. Once you are logged in successfully you can interact with the Bluemix environment just as you might if you were logged in via a browser.

2. Add the Bluemix Docker components:

bx plugin repo-add Bluemix https://plugins.ng.bluemix.net
bx plugin install container-service -r Bluemix
bx plugin install IBM-Containers -r Bluemix

Check they have installed:

bx plugin list

3. Clone (or download) the ForgeRock Docker Repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/docker/browse

4. Download the ForgeRock AM and DS component binaries from backstage:

https://backstage.forgerock.com/downloads

5. Unzip and copy ForgeRock binaries into the Docker build directories:

AM:

unzip AM-5.0.0.zip
cp openam/AM-5.0.0.war /usr/local/DevOps/stash/docker/openam/

DJ:

mv DS-5.0.0.zip /usr/local/DevOps/stash/docker/opendj/opendj.zip

Amster:

mv Amster-5.0.0.zip /usr/local/DevOps/stash/docker/amster/amster.zip

For those unfamiliar, Amster is our new RESTful configuration tool for AM in the 5 platform, replacing SSOADM with a far more DevOps-friendly tool; I’ll be covering it in a future blog.

Build Containers

We are going to create three containers: AM, DJ & Amster:

1. Build and tag the OpenAM container (don’t forget the trailing dot):

cd /usr/local/DevOps/stash/docker/openam
docker build -t wayneblacklockfr/openam .

Note wayneblacklockfr/openam is just a name to tag the container with locally; replace it with whatever you like, but keep the /openam.

All being well you will see something like the below:

Congratulations, you have built your first ForgeRock container!

Now we need to get the namespace for tagging; this is usually your username, but check using:

bx ic namespace-get

Now let’s tag it ready for upload to Bluemix. Use the container ID output at the end of the build process and your namespace:

docker tag d7e1700cfadd registry.ng.bluemix.net/wayneblacklock/openam:14.0.0

Repeat the process for Amster and DS.

2. Build and Tag Amster container:

cd /usr/local/DevOps/stash/docker/amster
docker build -t wayneblacklockfr/amster .
docker tag 54bf5bd46bf1 registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

3. Build and Tag DS container:

cd /usr/local/DevOps/stash/docker/opendj
docker build -t wayneblacklockfr/opendj .
docker tag 19b8a6f4af73 registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. View the containers:

You can take a look at what we have built with: docker images

Push Containers

Finally we want to push our containers up to the Bluemix registry.

1. Login again:

bx login -a https://api.ng.bluemix.net

2. Initiate the Bluemix container service; this may take a moment:

bx ic init

Ignore Option 1 & Option 2, we are not doing either.

3. Push your Docker images up to Bluemix:

docker push registry.ng.bluemix.net/wayneblacklock/openam:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. Confirm your images have been uploaded:

bx ic images

If you login to the Bluemix webapp you should be able to see your containers in the catalog:

Next Time

We will take a look at actually deploying a Kubernetes cluster and everything we have to do to ready our containers for deployment.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Introduction to ForgeRock DevOps – Part 1

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

As always with this blog I am going to step through a fully worked example. In this case I am using IBM Bluemix; however, it could just as easily have been AWS, Azure, GKE or any service that supports Kubernetes. By the end of this blog you will have containerised instances of ForgeRock Access Management and Directory Services running on Bluemix, deployed using Kubernetes. First off, we will cover the basics.

DevOps Basics

There are many tutorials out there introducing DevOps that do a great job, so I am not going to repeat those here. Instead I will point you towards the excellent ForgeRock Platform 5 DevOps guide, which takes you through DevOps deployment step by step into Minikube or GKE:

https://backstage.forgerock.com/docs/platform/5/devops-guide

What I want to do briefly is touch on some of the key ideas that really helped me to understand DevOps. I do not claim to be an expert but I think I am beginning to piece it all together:

12 Factor Applications: Best practices for developing applications, superbly summarised here; this is why we need containers and DevOps.

Docker: Technology for building, deploying and managing containers.

Containers: A minimal operating system and the components necessary to host an application. Traditionally we host apps in virtual machines with full-blown operating systems, whereas containers cut all of that down to just what you need for the application you are going to run.

In Docker, containers are built from Dockerfiles, which are effectively recipes for building containers from different components, e.g. a recipe for a container running Tomcat.
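As a tiny, hypothetical sketch of such a recipe (this is not the ForgeRock Dockerfile; the base image tag and war file name are made up), a Tomcat-based container could be defined and built like this:

cat > Dockerfile <<'EOF'
# Start from the official Tomcat base image and drop a web application into its webapps directory
FROM tomcat:8.5
COPY myapp.war /usr/local/tomcat/webapps/myapp.war
EOF
docker build -t example/tomcat-myapp .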

Container Registry: A place where built containers can be uploaded to, managed, downloaded and deployed from. You could have a registry running locally; cloud environments will also typically have registries they use to retrieve containers at deployment time.

Kubernetes: An engine for orchestrating the deployment of containers. Because containers are very minimal, they need extra elements provisioned, such as volume storage, secrets storage and configuration. In addition, when you deploy any application you need load balancing and numerous other considerations. Kubernetes is a language for defining all of these requirements and an engine for implementing them all.

In cloud environments such as AWS, Azure and IBM Bluemix that support Kubernetes, this effectively means that Kubernetes will manage the configuration of the cloud infrastructure for you, in effect abstracting away all of the environment-specific configuration you would otherwise have to do.

Storage is a good example: in Kubernetes you can define persistent volume claims, which are effectively a way of asking for storage. With Kubernetes you do not need to be concerned with the specifics of how this storage is provisioned; Kubernetes will do that for you regardless of whether you deploy onto AWS, Azure or IBM Bluemix.
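As a minimal sketch of such a claim (the claim name and size are hypothetical; ibmc-file-bronze is the Bluemix storage class used in Part 3 of this series), asking the cluster for 1Gi of storage looks roughly like this:

kubectl apply -f - <<'EOF'
# "Give me 1Gi of this class of storage"; the cluster works out how to provide it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibmc-file-bronze
EOF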

This enables automated and simplified deployment of your application to any environment that supports Kubernetes! If you want to move from one environment to another, just point your script at that environment! What’s more, Kubernetes gives you a consistent deployment management and monitoring dashboard across all of these environments!

Helm: An engine for scripting Kubernetes deployments and operations. The ForgeRock platform uses this for DevOps deployment. It simply enables scripting of Kubernetes functionality and configuration of things like environment variables that may change between deployments.
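For a flavour of what that scripting looks like (a hedged sketch only; the chart path, release name and value below are hypothetical and not the ForgeRock charts), Helm 2 lets you install a packaged set of Kubernetes resources and override per-environment values from the command line or a values file:

helm install ./my-chart --name my-release --set domain=example.com -f custom.yaml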

The above serves as a very brief introduction to the world of DevOps and helps to set the scene for our deployment.

If you want to follow along with this guide, please get yourself a paid IBM Bluemix account; alternatively, if you want to use GKE or Minikube (for local deployment), take a look at the superb ForgeRock DevOps Guide. I will likely cover Azure and AWS deployment in later blogs; however, everything we talk about here will still be relevant for those and other cloud environments, as after all that is the whole point of Kubernetes!

In Part 2 we will get started by installing some prerequisites and building our first docker containers.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Using OpenAM as a Trusted File Authorization Engine

A common theme in the DevOps world, or any containerization-style infrastructure, may be the need to verify which executables (or files in general) can be installed, run, updated or deleted within a particular environment, image or container. There are numerous ways this could be done. Consider a use case where exe’s, Android APKs or other 3rd-party compiled files […]

Ansible roles for OpenDJ

My colleague Warren, who I had the pleasure of working with at Sun and again at ForgeRock, has been playing with Ansible and has produced two roles: one to install OpenDJ and one to configure replication. Check Warren’s blog post for the details, or go directly to Ansible Galaxy.

