First look: Docker is a better way to deploy your apps

Peter Wayner | March 6, 2014
A long time ago, a computer program was a stack of punch cards, and moving the program from computer to computer was easy as long as you didn't drop the box. Every command, instruction, and subroutine was in one big, fat deck. Editors, compilers, and code repositories have liberated us from punch cards, but somehow deploying software has grown more complicated. Moving a program from the coding geniuses to the production team is fraught with errors, glitches, and hassles. There's always some misconfiguration, and it's never as simple as carrying that deck down the hall.

Docker containers are built out of text written in the Dockerfile, the rough equivalent of a makefile. There's not much to the syntax. Most of the lines in a Dockerfile begin with RUN, which hands the rest of the line to a shell running inside the container. These are usually lines that say things like RUN sudo apt-get install.... Much of a Dockerfile is, in effect, a shell script for building your machine and installing the software you need.
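To make that concrete, here is a minimal sketch of the kind of Dockerfile the article is describing. The base image, package, paths, and port are my own illustrative choices, not taken from the article:

    # Dockerfile: a minimal, illustrative sketch (image, package, and path names are assumptions)
    # Start from a prebuilt base image pulled from the public repository
    FROM ubuntu:12.04
    # Refresh the package lists and install the software the machine needs
    RUN apt-get update
    RUN apt-get install -y nodejs
    # Copy the current directory into the container as /src
    ADD . /src
    # Declare the TCP port the app will listen on
    EXPOSE 8080
    # The command to run when the container starts
    CMD ["node", "/src/server.js"]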

The real action occurs when you start playing with the other commands that poke holes in the container's walls. The command ADD . /src takes your current directory and makes it appear inside the container as the directory /src. I used it to put some Web pages into the version of Node.js I fired up inside a container. My Web page appeared to be both outside and inside the virtual world at the same time, but this is an illusion: Docker is really zipping up your files and passing in a copy. You will also poke holes in the container for the TCP/IP ports, mapping ports on the host machine to ports inside the container.
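Here is roughly how that port mapping looks from the command line, building and running the illustrative image sketched above. The image name and port numbers are placeholders of my own:

    # Build an image from the Dockerfile in the current directory and tag it
    docker build -t my-node-app .
    # Run it in the background, mapping port 8080 on the host to port 8080 inside the container
    docker run -d -p 8080:8080 my-node-app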

Two clever tricks kick in the moment you ask Docker to build the machine. First, you can draw on previously built images from Docker's repositories. Most of the standard distros are there, as well as a number of common configurations with tools like MongoDB. You can name one of these slices in your Dockerfile (the base image goes in a FROM line) and it will be downloaded to your new machine. The basic repository is public, but the company behind the Docker project is looking into building private repositories for enterprise work.
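Pulling one of those prebuilt slices down is a one-line affair. The image names here are just examples:

    # Search the public repository for prebuilt images
    docker search mongodb
    # Download a prebuilt base image to the local machine
    docker pull ubuntu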

The second trick is the way the new machine is built up in slices, much like a cold-cut sandwich. Docker is clever enough to keep the changes in layers, potentially saving space and complexity. The changes you make are stored separately as diffs between the layers. These diffs are also mobile, and it's possible to juggle them to deploy your software. Your developers create the container with all the right libraries, then hand it over to the ops staff, which treats it like a little box that just needs to run.
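A couple of commands make the sandwich visible. The image and container names here are placeholders:

    # List the layers (slices) that make up an image, one per Dockerfile step
    docker history my-node-app
    # Show the files a running container has added, changed, or deleted
    # relative to the image it started from
    docker diff my-running-container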

For all of the cleverness, though, it's important to recognize that the software is very new and some parts are being redesigned as I type this. The Docker website says, "Please note Docker is currently under heavy development. It should not be used in production (yet)." The project plans to put out an official new release each month. It also notes that the master branch of the open source repository is the current release candidate. You can get it and build it yourself.

 
