In this instalment of our DevOps consulting series, we look at how Docker can be used to virtualise a development environment, and how the simplification this containerisation tool brings speeds up delivery.
In essence, virtualization makes it possible for multiple operating systems to run on a single machine. That is hugely helpful for software testing, but it leads to high processor loads and excessive usage of RAM and hard-disk space, and it means unique configurations cannot be replicated and re-used. All of which slows down the development process.
As a result, virtualization historically often wasn’t a viable option when speedy app deployment was a strategic requirement.
That started to change when HashiCorp introduced Vagrant in 2010. A command-line front end for virtualization software, Vagrant lets you run commands like [create virtual machine] and generate complicated configurations. As a result, you can type a single [vagrant up] line in your terminal when you want to, say, run your project with PHP isolated in Ubuntu.
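As a sketch, that workflow looks like this (the box name here is an assumption for illustration):

```shell
vagrant init ubuntu/trusty64   # generate a Vagrantfile for an assumed Ubuntu base box
vagrant up                     # create and boot the virtual machine from that file
vagrant ssh                    # drop into the isolated environment
```

The generated Vagrantfile captures the machine’s configuration, so teammates can reproduce the same VM with the same single [vagrant up].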
Vagrant made virtualization tasks much simpler for developers. But the approach itself remained too resource-intensive, so in 2013 Solomon Hykes introduced Docker.
Docker is an open-source platform created for rapid deployment, and it brings a number of benefits.
These benefits are made possible by Docker’s container virtualization platform, with processes and utilities managing RAM, disk, CPU, and so on. No matter what piece of software you might want to run inside Docker — a Node.js web app, a Selenium server, a Java application or a Python script — each microservice will run in isolation inside its own Docker container.
And you can create as many containers on a single machine as you need!
In Docker, three main components do all the magic:
— Linux Containers (LXC), an OS-level virtualization technology that allows several isolated Linux instances to run on one host
— Cgroups, a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes
— Linux Namespaces, a lightweight process-virtualization feature used to organize the isolated spaces we call Docker containers
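You meet cgroups and namespaces directly through the flags of [docker run]. A hedged sketch (the image name, container name, and limit values are arbitrary, chosen for illustration):

```shell
# The memory and CPU caps are enforced by cgroups; the container's own
# process and network view comes from namespaces.
docker run -d --name worker --memory=256m --cpus=0.5 ubuntu sleep 300
```

Inside that container, [ps] shows only the container’s own processes, and the kernel will not let them exceed the 256 MB memory cap.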
Containers are similar to directories: they hold everything an application may require to work. Each container is created from an image, a read-only template usually stored in a Docker registry (either private or public).
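The image/container relationship maps onto everyday commands. A sketch (the image and container names are assumptions):

```shell
docker pull ubuntu                           # fetch a read-only image from the registry
docker run -d --name app1 ubuntu sleep 300   # start one container from that image
docker run -d --name app2 ubuntu sleep 300   # start a second, fully isolated container
docker ps                                    # both containers appear, built from one template
```

One image, any number of containers — which is what makes the approach so cheap compared with full virtual machines.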
Here is a Dockerfile that creates an image with Ruby, Sass, Bower, and Gulp installed, ready for a container to be launched on top of it:
FROM node:6.1-onbuild

# Install gem sass for grunt-contrib-sass
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get install -y ruby
RUN gem install sass
RUN npm install -g bower gulp
RUN bower install --config.interactive=false --allow-root

ENV NODE_ENV development

# Port 3000 for server
# Port 35729 for livereload
EXPOSE 3000 35729
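Building the image from that Dockerfile and launching a container on top of it takes two commands (the [myapp] tag is an assumption for illustration):

```shell
docker build -t myapp .                          # build the image from the Dockerfile
docker run -d -p 3000:3000 -p 35729:35729 myapp  # run a container, publishing the exposed ports
```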
Sounds great, but in practice, we usually need to launch a few containers at once to build a complete development or testing environment for an application. What to do then?
Docker Compose is a tool for defining and running multi-container applications. To configure your app’s services, you create a Compose file. Then, using a single [docker-compose up] command, you set up and start all services required for your configuration.
Example of a docker-compose.yml file:
app:
  build: ./app/
  ports:
    - "3000:3000"
    - "35729:35729"
  volumes:
    - ./app/config:/usr/src/app/config
    - ./app/modules:/usr/src/app/modules
    - ./app/public:/usr/src/app/public
    - ./app/.jshintrc:/usr/src/app/.jshintrc
    - ./app/.csslintrc:/usr/src/app/.csslintrc
  environment:
    MOODLE_URL: http://192.168.99.100:8080
    MOODLE_SERVICE: appname_mobile_app
    ANGULAR_EXPIRATION_HOURS_AUTH: 24
moodle:
  build: ./dockerfiles/moodle/
  ports:
    - "8080:80"
  links:
    - db
  volumes:
    - ./dockerfiles/appname/foreground.sh:/etc/apache2/foreground.sh
db:
  image: centurylink/mysql
  expose:
    - "3306"
  environment:
    MYSQL_DATABASE: moodle
    MYSQL_USER: moodle
    MYSQL_PASSWORD: moodle
docker-compose up -d
The file above runs an app split across three Docker containers: app (a Node.js application), moodle (PHP scripts behind the Apache web server), and db (a MySQL database).
While Docker does have its own orchestration tool, Docker Swarm, for scaling container clusters in more complex cloud-native applications, Kubernetes has become the industry standard – a fact even recognised by Docker Enterprise’s new owner, Mirantis. For more information on how Docker and Kubernetes work together in a cloud-native DevOps architecture, see our article Kubernetes vs Docker: a partnership not a competition.
Docker allows you to run any platform with its own configuration on top of your infrastructure without overloading its resources the way virtual machines do. It lets you capture your environment and configuration as code and deploy it.
As a result, Docker lets our developers spin up consistent environments in seconds, share configuration as code, and run each service in isolation.
If you need a software development team or team augmentation with Docker and DevOps expertise for a current or upcoming project, please do get in touch! We’d be delighted to hear the details.