Containers 101 – What do you need to know?


This post was originally published here by Shaane Syed.

Thanks to Docker, containers are now the future of web development. According to DataDog, 15% of hosts run Docker, up significantly from the 6% of hosts running it at this point in 2015 (and the 0% of hosts running it before its release in March 2013). LinkedIn has also seen a 160% increase in profile references to Docker in the past year alone, a sign that knowing something about Docker is becoming much more important when looking for work.

What exactly are containers? And why are they so rapidly grabbing developer market share from virtual machines?

To answer these questions, it’s helpful to consider containers in contrast to VMs.

A virtual machine is an emulation of an entire operating system managed by a hypervisor. A virtual machine may run on top of another OS or directly on the hardware. In either case, one VM can (and usually will) run alongside other VMs, each allocated its own fixed share of space and resources by the hypervisor, with each VM acting as its own independent computer.

A container is a self-contained (it’s right there in the name) execution environment with its own isolated network resources. At a quick glance, a container may appear very similar to a VM. The key difference is that a container does not emulate a separate OS. Containers instead create separate, independent user spaces that have their own bins and libs, but that share the host operating system’s kernel with all other containers running on a machine. This being the case, containers do not need to be assigned their own set amount of RAM and other resources; they simply make use of whatever they need while they’re running.

In short: a virtual machine virtualizes the hardware, while a container virtualizes the OS.
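One quick way to see that shared kernel in action (a sketch, assuming a Linux host with Docker installed): a container reports the host's kernel version, because it has no kernel of its own.

```shell
# Both commands print the same kernel release: the Alpine container
# runs its own userland but borrows the host's kernel.
$ uname -r
$ docker run --rm alpine uname -r
```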

This means containers are significantly more lightweight than VMs. They can be spun up in seconds instead of minutes, and you can fit as many as 8x more of them on a single machine. And since the OS has been abstracted away, they can be easily moved from one machine to another.

What is contained in a container?

A container is built from a container image that bundles up the code and all its dependencies. The image is itself composed of layers: one layer is the app itself, while the others hold the libraries and binaries needed to run that app. Together these layers form an image template that can be reused across multiple hosts.

It may sound like a lot of effort to assemble all the necessary images, but your images are stored in and pulled from a registry. If your application needs PHP 7.0, Apache, and Ubuntu to run, then you'll reference these in your config file and your container manager will pull them from the registry (assuming they're there).
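As a sketch of what such a config file can look like, here is a hypothetical Dockerfile for the stack above (the base image tag and package names are illustrative):

```dockerfile
# Hypothetical Dockerfile: start from Ubuntu, add Apache and PHP 7.0,
# then copy the app into Apache's web root.
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y apache2 php7.0 libapache2-mod-php7.0
COPY . /var/www/html/
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Running `docker build` against this file pulls `ubuntu:16.04` from the registry if it isn't already cached locally, then stacks the remaining layers on top of it.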

Where does Docker come into all of this?

Containers are nothing new; their roots go back to Unix's chroot, which predates Linux itself. But to run them conveniently, you need a container manager like the one referenced above. Docker is by far the most popular of these (it's nearly synonymous with containers at this point) and has been at the forefront of the rapid surge in container usage. What sets it apart?

The Docker Hub – This hub is not only the registry where your own images are stored (publicly or privately), it’s also a rich ecosystem of public images built by other Docker users that you can pull down and use for your own projects. Why do all the grunt work when there are other people out there who have already done it?
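Pulling a public image from the Hub is a one-liner (a sketch, assuming Docker is installed; `nginx` here is the official public web-server image):

```shell
$ docker search nginx               # browse public images on Docker Hub
$ docker pull nginx                 # download the official nginx image
$ docker run -d -p 8080:80 nginx    # serve it locally on port 8080
```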

Easy Version History and Rollbacks – Docker images are read-only once created. That doesn’t mean you can’t make changes; it means that any changes you make inside a running container are captured into a new image when you run the “docker commit” command. That new image can then be run just like the original. If an alteration leads to problems in a new image, you can simply go back to the previous one.
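The commit-and-rollback workflow might look like this (a sketch; the container name `web` and the image tags `myapp:v1`/`myapp:v2` are hypothetical):

```shell
# Start a container, make changes inside it, then snapshot the result.
$ docker run -it --name web myapp:v1 bash   # ...edit files, then exit...
$ docker commit web myapp:v2                # new image built from the changes
$ docker run -it myapp:v2 bash              # run the new image
# If v2 misbehaves, simply run the previous image again:
$ docker run -it myapp:v1 bash
```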

Portability – Containers are already portable just by the nature of their design, but Docker guarantees the environment will be exactly the same when moving an image from one Docker host to another so you can build once and run anywhere.

Docker is open source and its technology is the basis of the Open Containers Initiative, which is a Linux Foundation initiative focused on creating “industry standards around container formats and runtime.” Google, Amazon, Microsoft and other industry leaders are also part of the OCI.

Are containers available only on Linux?

Until very recently the answer was yes, but Microsoft added container support to Windows Server 2016. This can be managed using Docker for Windows.

Will containers replace virtual machines?

Though containers will absolutely continue to rise in popularity, it’s incredibly unlikely they’ll replace VMs. More likely the two will be used in concert with each other, each being put to use where most appropriate. Occasionally, containers might even be run within virtual machines, warping the space time continuum and confusing partisans of both approaches. We truly live in the future.

There are particular concerns about containers and security, as the varied images provide more points of entry for attackers, and a container’s direct access to the OS kernel creates a larger attack surface than would be found in a hypervisor-controlled virtual machine. As a security company, we’ll certainly have a lot more to say about that in due time.

Photo: Pinterest
