
First look: Docker is a better way to deploy your apps

Peter Wayner | March 6, 2014
A long time ago, a computer program was a stack of punch cards, and moving the program from computer to computer was easy as long as you didn't drop the box. Every command, instruction, and subroutine was in one big, fat deck. Editors, compilers, and code repositories have liberated us from punch cards, but somehow deploying software has grown more complicated. Moving a program from the coding geniuses to the production team is fraught with errors, glitches, and hassles. There's always some misconfiguration, and it's never as simple as carrying that deck down the hall.

Build once, run anywhere

Several people I've spoken with sounded a bit leery when they heard about yet another virtual machine solution promising code that runs almost anywhere. They've lived through the waves of interest in Pascal, Java, and the rest. The difference is that Docker is much more narrowly focused: It packages the Linux machines that act as the backbone of the Internet. There are no pretenses of taking over the desktop or any other part of the computing world. Docker doesn't want to translate some neutral byte code into local binaries; it wants to package x86 code that works with the Linux kernel. These are simpler goals.

Docker began as a tool to help the developer package up a Linux application, and even after all the hype, it remains just that: a container-building tool that works efficiently and cleverly. Will it sweep through data centers? Many Linux developers will love it. They'll be able to build up nice machines on their desk and ship them off to the cloud without having to waste extra time figuring out how to reconfigure their cloud. Docker shifts the focus to the most important part of the equation: the app. Instead of buying multiple machine instances, they'll be buying compute time. It's entirely possible that many of the clouds will morph into farms for running Docker containers.
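To make that workflow concrete, here is a minimal sketch of the kind of recipe a developer writes; the base image, runtime, and application file are hypothetical stand-ins, but the shape is what matters, because the whole environment travels with the app:

    # Dockerfile: declare everything the app needs in one portable recipe
    # (the image, runtime, and app below are hypothetical examples)
    FROM ubuntu:12.04
    # install the runtime the app depends on
    RUN apt-get update && apt-get install -y python
    # copy the application code into the image
    ADD . /app
    # the process the container runs when it starts
    CMD ["python", "/app/server.py"]

From there, shipping the finished machine to the cloud is a few commands:

    docker build -t myuser/myapp .   # build the image on the desk
    docker push myuser/myapp         # push it to a shared registry
    docker run -d myuser/myapp       # on the cloud host: pull it and start it

The image that ran on the desktop runs unchanged on the server, which is exactly the shift from buying machine instances to buying compute time.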

There's no doubt that the ease and simplicity of Docker mean that many will start incorporating it into their stacks. It will become one of the preferred ways to ship around code. But for all of its promise, I still feel like everything is a bit too new.

Toward the end of the process, I started wondering about this entire operation. It's entirely possible to put a Docker container inside a Vagrant or VirtualBox VM that is itself sitting on the operating system. If it's a cloud machine, that operating system could in turn be sitting on some hypervisor. There's plenty of virtualization going on. If this were a mystery thriller, the protagonist would be peeling off masks again and again and again.
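A rough sketch of that stack, with a hypothetical image name, shows the masks piling up:

    vagrant up                    # layer 1: a VirtualBox VM booted on the host OS
    vagrant ssh                   # log in to the guest Linux
    docker run -d myuser/myapp    # layer 2: a container inside that VM

On a cloud machine, the guest operating system itself may sit on yet another hypervisor beneath all of this.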

At its root, Docker is solving a problem caused by a failure of operating system design. The old ideas of isolating users and jobs in an operating system aren't good enough. Somehow the developers and the operations staff need another, more powerful force field to stop the software in one package from messing with the software in another. The success of Docker is one step toward this redesign, but it's clearly more of a Band-Aid than the kind of unifying vision that the operating system world needs.
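For now, that force field is the container boundary itself. A hedged sketch, with hypothetical image names, of how two apps get walled off on one box using flags Docker already offers:

    # two apps sharing one kernel, each in its own container
    docker run -d -m 256m --name app1 myuser/app1
    docker run -d -m 256m --name app2 myuser/app2
    # each gets its own filesystem and process table, plus a memory cap

Neither app can see the other's processes or files, even though the old OS abstractions of users and jobs would have let them collide.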

Who knows when this newer, better, cleaner model will emerge, but until it does, Docker is one of the simplest ways to use some virtual duct tape to wall off applications from one another. The issues with ghost containers and disk space will be solved. The tool will become less command-line driven. Anyone building software to run in production on Linux boxes will love the flexibility it brings, and that will drive plenty of interest over the next five years.
