Introduction to ForgeRock DevOps – Part 2 – Building Docker Containers

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

Catch up with previous entries in the series:
http://identity-implementation.blogspot.co.uk/2017/04/introduction-to-forgerock-devops-part-1.html

I will be using IBM Bluemix here as I have recent experience of it but nearly all of the concepts will be similar for any other cloud environment.

Building Docker Containers

In this blog we are going to build the Docker containers that hold the ForgeRock platform components, tag them, and upload them to the Bluemix registry.

Prerequisites

Install all of the below (a quick sanity check follows the list):

Docker: https://www.docker.com
Used to build, tag and upload docker containers.
Bluemix CLI: http://clis.ng.bluemix.net/ui/home.html
Used to deploy and configure the Bluemix environment.
CloudFoundry CLI: https://github.com/cloudfoundry/cli
Bluemix dependency.
Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Used to deploy and manage Kubernetes clusters.
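
Before going any further it is worth confirming that everything is installed and on your PATH. A quick read-only sanity check (the exact version flags may vary slightly between CLI releases):

# Confirm each prerequisite is installed and on the PATH
docker version
bx --version
cf --version
kubectl version --client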

Initial Configuration

1. Log in to the Bluemix CLI using your Bluemix account credentials:

bx login -a https://api.ng.bluemix.net

Note we are using the US instance of Bluemix here, as it has support for Kubernetes in beta.

When prompted to select an account, just type 1. If you are logged in successfully you can now interact with the Bluemix environment just as you would if you were logged in via a browser.

2. Add the Bluemix Docker components:

bx plugin repo-add Bluemix https://plugins.ng.bluemix.net
bx plugin install container-service -r Bluemix
bx plugin install IBM-Containers -r Bluemix

Check they have installed:

bx plugin list

3. Clone (or download) the ForgeRock Docker Repo to somewhere local:

https://stash.forgerock.org/projects/DOCKER/repos/docker/browse

4. Download the ForgeRock AM and DS component binaries from backstage:

https://backstage.forgerock.com/downloads

5. Unzip and copy ForgeRock binaries into the Docker build directories:

AM:

unzip AM-5.0.0.zip
cp openam/AM-5.0.0.war /usr/local/DevOps/stash/docker/openam/

DJ:

mv DS-5.0.0.zip /usr/local/DevOps/stash/docker/opendj/opendj.zip

Amster:

mv Amster-5.0.0.zip /usr/local/DevOps/stash/docker/amster/amster.zip

For those unfamiliar, Amster is our new RESTful configuration tool for AM in the 5 platform, replacing SSOADM with a far more DevOps-friendly tool. I’ll be covering it in a future blog, but a brief sketch of how it is used follows.
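
Purely as a hedged taster (the server URL, key path and export directory below are placeholders, and command syntax may differ between Amster releases), a typical session looks roughly like this:

unzip amster.zip -d amster && cd amster
./amster
# Inside the interactive Amster shell (placeholders throughout):
#   connect http://openam.example.com/openam -k /path/to/amster_rsa
#   export-config --path /tmp/am-config
# ...then exit the shell.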

Build Containers

We are going to create three containers: AM, DJ & Amster:

1. Build and tag the OpenAM container (don’t forget the trailing dot, which tells Docker to use the current directory as the build context):

cd /usr/local/DevOps/stash/docker/openam
docker build -t wayneblacklockfr/openam .

Note: wayneblacklockfr/openam is just a name to tag the container with locally; replace it with whatever you like, but keep the /openam suffix.

All being well, Docker will step through the Dockerfile and report that the image was built successfully.

Congratulations, you have built your first ForgeRock container!

Now we need to get the namespace for tagging. This is usually your username, but check using:

bx ic namespace-get

Now let’s tag it ready for upload to Bluemix. Use the image ID output at the end of the build process and your namespace:

docker tag d7e1700cfadd registry.ng.bluemix.net/wayneblacklock/openam:14.0.0
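
If you would rather not copy the namespace around by hand, here is a small hedged variation on the same step (assuming bash, and that namespace-get prints only the namespace itself):

NAMESPACE=$(bx ic namespace-get)
# Substitute the image ID from your own docker build output
docker tag d7e1700cfadd registry.ng.bluemix.net/${NAMESPACE}/openam:14.0.0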

Repeat the process for Amster and DS.

2. Build and Tag Amster container:

cd /usr/local/DevOps/stash/docker/amster
docker build -t wayneblacklockfr/amster .
docker tag 54bf5bd46bf1 registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

3. Build and Tag DS container:

cd /usr/local/DevOps/stash/docker/opendj
docker build -t wayneblacklockfr/opendj .
docker tag 19b8a6f4af73 registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. View the containers:

You can take a look at what we have built with:

docker images

Push Containers

Finally we want to push our containers up to the Bluemix registry.

1. Login again:

bx login -a https://api.ng.bluemix.net

2. Initiate the Bluemix container service; this may take a moment:

bx ic init

Ignore Option 1 and Option 2; we are not doing either.

3. Push your Docker images up to Bluemix:

docker push registry.ng.bluemix.net/wayneblacklock/openam:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/amster:14.0.0

docker push registry.ng.bluemix.net/wayneblacklock/opendj:4.0.0

4. Confirm your images have been uploaded:

bx ic images

If you log in to the Bluemix web app, you should be able to see your containers in the catalog.

Next Time

We will take a look at actually deploying a Kubernetes cluster and everything we have to do to ready our containers for deployment.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Introduction to ForgeRock DevOps – Part 1

We have just launched Version 5 of the ForgeRock Identity Platform with numerous enhancements for DevOps friendliness. I have been meaning to jump into the world of DevOps for some time so the new release afforded a great opportunity to do just that.

As always with this blog I am going to step through a fully worked example. In this case I am using IBM Bluemix, however it could just as easily have been AWS, Azure, GKE or any service that supports Kubernetes. By the end of this blog you will have a containerised instance of ForgeRock Access Management and Directory Services running on Bluemix, deployed using Kubernetes. First off we will cover the basics.

DevOps Basics

There are many tutorials out there introducing DevOps that do a great job, so I am not going to repeat them here. Instead I will point you towards the excellent ForgeRock Platform 5 DevOps guide, which also takes you through DevOps deployment step by step into Minikube or GKE:

https://backstage.forgerock.com/docs/platform/5/devops-guide

What I want to do briefly is touch on some of the key ideas that really helped me to understand DevOps. I do not claim to be an expert but I think I am beginning to piece it all together:

12 Factor Applications: Best practices for developing applications, superbly summarised here. This is why we need containers and DevOps.

Docker: Technology for building, deploying and managing containers.

Containers: A minimal operating system plus the components necessary to host an application. Traditionally we host apps in virtual machines with full-blown operating systems, whereas containers cut all of that down to just what you need for the application you are going to run.

In Docker, containers are built from Dockerfiles, which are effectively recipes for building containers from different components, e.g. a recipe for a container running Tomcat; a minimal sketch follows.
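
For illustration only, this is not the ForgeRock Dockerfile, but roughly what such a recipe looks like, written out and built from the shell (the base image tag and the myapp.war file are assumptions):

# Write a throwaway Dockerfile for a Tomcat container and build it
cat > Dockerfile <<'EOF'
FROM tomcat:8.5
COPY myapp.war /usr/local/tomcat/webapps/myapp.war
EXPOSE 8080
CMD ["catalina.sh", "run"]
EOF
docker build -t example/tomcat-app .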

Container Registry: A place where built containers can be uploaded, managed, downloaded and deployed from. You could have a registry running locally; cloud environments will also typically have registries they use to retrieve containers at deployment time.

Kubernetes: An engine for orchestrating the deployment of containers. Because containers are very minimal, they need extra elements provisioned for them, such as volume storage, secrets storage and configuration. In addition, when you deploy any application you need load balancing and numerous other considerations. Kubernetes is both a language for defining all of these requirements and an engine for implementing them.

In cloud environments such as AWS, Azure and IBM Bluemix that support Kubernetes, this effectively means that Kubernetes will manage the configuration of the cloud infrastructure for you, abstracting away all of the usual environment-specific configuration.

Storage is a good example. In Kubernetes you can define persistent volume claims, which are effectively a way of asking for storage. With Kubernetes you do not need to be concerned with the specifics of how that storage is provisioned; Kubernetes will do it for you regardless of whether you deploy onto AWS, Azure or IBM Bluemix. A sketch of such a claim follows.
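
As a hedged illustration (the claim name and size are placeholders), a persistent volume claim can be fed straight to kubectl from the shell:

# Ask Kubernetes for 1Gi of storage without caring how it is provisioned
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF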

This enables automated and simplified deployment of your application to any environment that supports Kubernetes! If you want to move from one environment to another, just point your script at that environment! What’s more, Kubernetes gives you a consistent deployment management and monitoring dashboard across all of these environments!

Helm: An engine for scripting Kubernetes deployments and operations. The ForgeRock platform uses this for DevOps deployment. It simply enables scripting of Kubernetes functionality and configuration of things like environment variables that may change between deployments; an example of the flavour of this follows.
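
A hedged example of the kind of command Helm enables (the chart, release name and value are placeholders, using the Helm 2 syntax current at the time of writing):

# Install a chart, overriding an environment-specific value at deploy time
helm install ./my-chart --name my-release --set domain=example.com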

The above serves as a very brief introduction to the world of DevOps and helps to set the scene for our deployment.

If you want to follow along with this guide, please get yourself a paid IBM Bluemix account. Alternatively, if you want to use GKE or Minikube (for local deployment), take a look at the superb ForgeRock DevOps Guide. I will likely cover Azure and AWS deployment in later blogs, however everything we talk about here will still be relevant for those and other cloud environments; after all, that is the whole point of Kubernetes!

In Part 2 we will get started by installing some prerequisites and building our first docker containers.

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

Deploying #OpenAM instances in #Docker

Deploying services with Docker has become pretty popular in the DevOps world (understatement).

I want to demonstrate how to deploy an instance of ForgeRock’s OpenAM and OpenDJ using Docker.

Essentially this is my ForgeRock Docker Cheat Sheet

Setup:
I am running this on a virtual Ubuntu instance in Virtualbox on my laptop. You can run Docker on both Windows and OS X too … I just personally prefer Linux.

Step 1: Install Docker:
https://docs.docker.com/engine/installation/linux/ubuntulinux/

Step 2: Clone ForgeRock Docker Files:

cd /home/brad/Dev/

Use git to clone from: https://stash.forgerock.org/projects/DOCKER/repos/docker/browse

This will create a directory called “docker” in the above path.

Step 3: Build Files:

cd /home/brad/Dev/docker
make clean
make

At this point a few images have been created on your local host. To view them:

docker images


OpenDJ Instance:
Note: the first time you run an instance you need to create the “dj” directory first (persistent storage), e.g.:

cd /home/brad
mkdir dj   # run this once, the first time you launch an instance on this host
docker run -d -p 1389:389 -v `pwd`/dj:/opt/opendj/instances/instance1 -t 9f332a0fbb88

To enable a persistent store you can use Docker’s volume capability. In the command above, “-v `pwd`/dj:/opt/opendj/instances/instance1” tells Docker to mount `pwd`/dj on the Docker host at /opt/opendj/instances/instance1 inside the running instance, so everything the instance writes there lands on the host. You can then kill this instance and launch a new one that refers to the same volume, as sketched below.
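
To see the persistence in action, a quick hedged sketch (the container ID placeholder and the image ID are from the steps above and will differ on your host):

docker ps                  # note the OpenDJ container ID
docker stop <container-id>
# Relaunch against the same host directory; the data is still there
docker run -d -p 1389:389 -v `pwd`/dj:/opt/opendj/instances/instance1 -t 9f332a0fbb88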

To view the running docker instances:

docker ps

Now when we launch OpenAM, we’ll want to allow it to access the OpenDJ container. By default Docker does not set up this networking, but we can create a link (see the run command below). Using the link parameter, Docker will edit the /etc/hosts file in the OpenAM container and create a “link” to the OpenDJ server.

OpenAM:

cd /home/brad
mkdir am   # run this once, the first time you launch an instance on this host
docker run -d -p 8080:8080 -v `pwd`/am:/root/openam --link dreamy_hypatia:opendj -t c02f00f42e18

Note: dreamy_hypatia is the auto-generated name of the running OpenDJ container on my host; substitute the name yours was given (check docker ps).

As we did with OpenDJ, we tell Docker to mount a directory from the Docker host, where OpenAM will write its configuration. This allows us to launch a new instance without having to reconfigure OpenAM. You can confirm the link worked with the quick check below.
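
A hedged verification (replace the placeholder with your OpenAM container’s name or ID from docker ps):

docker exec <openam-container> cat /etc/hosts   # should show an entry for "opendj"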

Next Steps:
There are a lot of things that I did not cover in this post, specifically running multiple instances for scalability. OpenDJ would need to be configured for replication and OpenAM would need to be configured to join a Site. I plan on covering these things in a future post.

Also, I didn’t cover Docker best practices (specifically security). In your environment, treat your container ids as you would passwords.

Lastly, I plan on exploring other options for persistent storage in future posts. I am pretty sure there are better alternatives to storing this data on the Docker host’s filesystem, possibly a Docker container dedicated to storage; one such pattern is sketched below.
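
One such alternative, hedged and untested here, is the classic data volume container pattern: create a container whose only job is to own the volume, then point the real instances at it (names and paths are placeholders based on the commands above):

# Create a data-only container owning the DJ instance directory
docker create -v /opt/opendj/instances/instance1 --name dj-data busybox
# Launch OpenDJ using that container's volume instead of a host directory
docker run -d -p 1389:389 --volumes-from dj-data -t 9f332a0fbb88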

Acknowledgements:
Warren Strange (ForgeRock) … he’s constantly producing awesome work and developed a lot (probably most) of the capability around the ForgeRock Docker instances.

My friends at GoodDogLabs for mentoring me on all things Docker

Also, I have been gleaning a lot of Docker tips from @frazelledazzell … she drops a ton of Docker knowledge via Twitter and her blog.


This blog post was first published @ http://tumy-tech.com, included here with permission.