Docker containers can be a quick and effective way to keep project dependencies isolated. They are faster and less resource intensive than virtual machines. Docker lets you share a local desktop directory with your container using the --mount option. Doing so gives your container access to your local source code, and any changes you make to it are immediately available within the container.
In this blog post I explain how to set up a Docker container to run your source code, keeping its dependencies separated from anything installed on your desktop, and use your favorite desktop editor to work on your code.
The dependency problem
It can be painful keeping your development environments configured with the correct versions of the languages and dependencies you need for a specific project. If you’re managing code for more than a handful of websites or client projects, the dependencies are bound to diverge. Throw in databases and other language-independent tools, like MySQL or Elasticsearch, and the complexity gets even deeper.
Some languages have their own tools to help manage these differences: Ruby has rvm and Python has venv, to name two. But that still leaves you, the developer, managing those environments and making sure they all co-exist peacefully on your desktop. Not impossible, but time consuming and a major potential distraction from actually getting things done.
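A quick sketch of the Python side, for example; the `.venv` directory name is just a convention:

```shell
# Create an isolated Python environment in a per-project .venv directory.
python3 -m venv .venv

# Use the environment's interpreter; packages installed with its pip
# stay inside .venv instead of the system Python.
.venv/bin/python -c 'import sys; print(sys.prefix)'
```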
Virtual machines are one solution
Before containers came along, virtual machines (VMs) offered a pretty good solution. Tools like Vagrant made it relatively easy to start and provision a virtual machine on your desktop. Keeping the dependencies within the VM meant you didn’t have to worry about conflicts. An added bonus of the VM is the ability to run your code on the actual operating system it will use in production. An application being deployed to Ubuntu 16 can be developed on an Ubuntu 16 VM.
As good as VMs are at keeping dependencies isolated, they have their drawbacks. Configuring and provisioning VMs is not trivial. You can easily download the image for a Linux VM and bring it up, but if you don’t automate the provisioning you’ll need to reinstall the dependencies every time you rebuild the VM. I’ve worked in companies where it was a rite of passage for new developers to get their VMs configured and running properly.
Another drawback to VMs is their size and speed. They offer a hefty solution if all you’re trying to do is avoid dependency conflicts. Not only do they bring up all your project’s dependencies, they also bring up an entire copy of the operating system.
All that weight means a VM takes more system resources and is relatively slow to come online. I’m working on a PHP project now that takes 8 minutes to start and provision. Thankfully, once it’s been provisioned I can usually halt it and just restart it when I need it again. But even that takes 15-20 seconds.
Virtual machines still have their place but they aren’t the necessity they once were. If you’re running applications outside containers it may make sense to stick with the VM solution. Then again you still might benefit from containers.
Docker container overview
Docker containers have been around for several years and they continue to grow in popularity for good reason. According to Docker, there have been 50 billion container downloads and there are 2 million applications on Docker Hub.
Containers create a virtual environment to run your code but unlike VMs, containers don’t replicate the entire operating system. In fact a best practice with containers is to only include those components you actually need to run your application. Why include bash in your container if your application is only going to run a Ruby script?
In comparison to VMs, containers are extremely fast to come online. We’re talking seconds rather than minutes. Since they don’t replicate the entire operating system, they are also much less resource intensive. There’s an Ubuntu VM I use frequently. It starts in about 15 seconds, which isn’t too bad, but my Docker Alpine container will be running in about 1 second.
Your development container
When you think about using a container as part of your development environment you need to ask yourself the following questions:
- What dependencies does my project have?
- Is my target production environment a container or something else?
- Do I want to use the integrated development environment (IDE) on my desktop?
What dependencies does my project have?
The answer to this question will help you decide which container image to start with and what additional components to add to it. The project that led me to write this post is a small Python project I’m developing. I wanted to use Python 3.7 but I’m developing on a MacBook Pro and Apple ships OS X with Python 2.7 so I immediately had a conflict.
Other than Python I don’t have many dependencies for this project yet. As I get deeper into it I’m sure they will start to emerge and I’ll deal with them in my container rather than on my MacBook. For now I really just want Python 3.7.
Additionally I wanted to use a Makefile to run my tests (yes, I’m old school) so I knew I would need make in whatever container image I decided to use.
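As a concrete sketch, the sort of Makefile I have in mind is tiny; the target name and the tests/ layout here are illustrative assumptions, not part of the project:

```make
# Hypothetical Makefile: run the unit tests found under tests/.
test:
	python -m unittest discover -s tests
```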
Is my target production environment a container?
If you know you’ll deploy your code into a container then you’re all set. One of the beauties of containers is they ship everything you need to run your code. So in most cases your development container and your production container will either be identical or very similar.
If you’re deploying outside of containers you might want to consider using the same OS in your container as you will be using in your production environment. Developing in an Alpine container and deploying into CentOS, for example, will introduce its own set of project risks.
Do I want to use the IDE on my desktop?
Since most containers don’t have IDEs built into them you’re probably going to want to run your code inside the container but work on the code on your own desktop. That’s easy to do with containers. You’ll just need to map your source code directory to a mount point within your container and your code will be accessible in both places.
Now that I’ve given you a taste of what’s possible with a Docker container, let’s walk through an example. We’ll take the previously mentioned Python project.
Our project has the following dependencies:
- Python 3.7
As I’m writing this post I’m not sure how, or even if, I’m going to deploy my code. It will either run on my MacBook on demand, run on an Ubuntu server through a cron job, run in a container, or run in a serverless environment such as Amazon Web Services (AWS) Lambda.
At this point all options are on the table, so I’m not going to worry much about the target production environment. If I end up just running it on my MacBook, I’ll use my container as the runtime environment.
When I ask myself if I’m going to use the IDE on my desktop the answer is a resounding “Yes”. I’ve been writing code for over 25 years and spent enough time using vi as a development environment. I’ll stick with modern, GUI tools thank you very much.
Creating the container
If you want to follow along, before you go any further you’ll need to have Docker installed on your desktop. If you don’t already have it visit https://www.docker.com to find the latest downloads and installation instructions.
To use Docker effectively you’ll need to keep two concepts straight: images and containers.
You can think of images as containers that have been built but are not running. They have everything they need at the ready, but they aren’t actually in use. If you’re familiar with Amazon Machine Images (AMIs) in AWS, the concept is very similar. A Docker image is a pre-built, configured entity that can be used to run a container.
A Docker container is essentially a running instance of an image. It’s actually doing something. It’s important to remember that, unless you’ve specifically configured persistent storage, when your container stops running any changes you’ve made to it are gone.
If you’re familiar with VMs you can think of the image as the provisioned VM (except it’s not running) and the container as the running VM. Stopping the container is like destroying (not halting) the VM, except you don’t have to re-provision it the next time you want to start it.
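From the command line the distinction looks like this (illustrative; the output depends on what you’ve built and run):

```shell
# List images: built or pulled, but not running.
docker images

# List running containers; add -a to include stopped ones.
docker ps
```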
How to create your container
Creating your container is a two-step process. First you will create the image. Then you will run the container from the image.
Create your image
To create your image you’ll need to create a Dockerfile at the root of your source code directory.
All Dockerfiles start with a basic image. You can find them at https://store.docker.com/. The official images are pretty well vetted and safe to use.
Since we want to run Python 3.7 code I decided to use the python:3.7-alpine image. That’s a relatively small, Linux-based, official Python image, so it felt like a good fit for my needs.
The first line in your Dockerfile tells Docker which image it should start with to build your image. Since we’re using python:3.7-alpine, our Dockerfile starts with these lines. As in many scripting languages, Dockerfile lines beginning with # are comments.
```dockerfile
# Let's start with the official Python 3.7 Alpine image.
FROM python:3.7-alpine
```
Next we are going to update our image and install make, so the following lines are added to the Dockerfile.
```dockerfile
# Upgrade packages and install make.
RUN apk update
RUN apk upgrade
RUN apk add make
```
Alpine is a Linux distribution that’s stripped down to minimize its footprint, and it has its own package manager, apk. So those lines update the apk package definitions, upgrade any packages that have updates available, and then install the make package.
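One design note: each RUN instruction adds a layer to the image, so you’ll often see those three commands chained into a single RUN. The result is the same; this is just a common image-size and build-caching optimization:

```dockerfile
# Same effect as the three separate RUN lines, in one layer.
RUN apk update && apk upgrade && apk add make
```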
Next we’re going to create a directory for our code and add a user account so we don’t have to run our code as root.
```dockerfile
# Create a directory for our code.
RUN mkdir -p /usr/src/app

# Create a user to run our code so we don't need to run as root.
RUN addgroup -S appgroup
RUN adduser appuser -D -h /usr/src/app -G appgroup
USER appuser
```
Now our Dockerfile is almost ready. It tells Docker which image to start with, which packages to install, and it creates a user to run things. The last step is to tell Docker what command it should run when we start the container. Since we want to be able to log in to the Alpine container and execute our tests manually, we’re simply going to have Docker run a shell.
The last line of our Dockerfile looks like this.
```dockerfile
# Run the shell and execute the user profile.
CMD ["/bin/sh", "-l"]
```
Putting it all together, our complete Dockerfile looks like this:
```dockerfile
# Let's start with the official Python 3.7 Alpine image.
FROM python:3.7-alpine

# Upgrade packages and install make.
RUN apk update
RUN apk upgrade
RUN apk add make

# Create a directory for our code.
RUN mkdir -p /usr/src/app

# Create a user to run our code so we don't need to run as root.
RUN addgroup -S appgroup
RUN adduser appuser -D -h /usr/src/app -G appgroup
USER appuser

# Run the shell and execute the user profile.
CMD ["/bin/sh", "-l"]
```
There are still a few missing pieces to the puzzle.
- Where do we define the appuser’s .profile?
- How do we map our local source code directory to a mount point in the container?
Those two items will both be addressed outside the Dockerfile.
To build the image on my MacBook I open a terminal window, cd to my source code directory, and run the following command.
```shell
$ docker build -t python-project:0.1 .
```
That command tells docker to build an image with the name “python-project” and the tag “0.1”, using my current directory as the context. Think of the tag as a version number.
Once the image is built I can run
```shell
$ docker images
```
And I should see something like the following output:

```
REPOSITORY       TAG          IMAGE ID       CREATED       SIZE
python-project   0.1          d2085ad59a69   2 days ago    82MB
python           3.7-alpine   a5f497d596f5   10 days ago   79.4MB
```
Run your container
Now that the image is available, you can use the following command to run it.
```shell
$ docker run -it --mount type=bind,source="$(pwd)",target=/usr/src/app python-project:0.1
```
At this point you should be shelled into your container!
That last command is long and looks complicated so let’s break it down.
- The “-it” runs the container interactively (-i) and allocates a pseudo-terminal (-t). Essentially “-it” allows you to see the output and interact with the container in your terminal window.
- “--mount” maps a volume or host directory into the container. This is part of the magic because it allows the Docker container to access the source code on our desktop. The values following “--mount” provide Docker with the details it needs.
- “type=bind” tells Docker to bind-mount a host directory rather than use a Docker-managed volume.
- “source=$(pwd)” specifies the desktop directory we want to share with the container.
- Finally, “target=/usr/src/app” indicates where the desktop directory should appear within the container.
- The last part of the run command, “python-project:0.1”, simply uses the “python-project” image with the tag “0.1” to run our container.
Now you can edit your source code using your favorite desktop editor and run your code from the command line in the Docker container. When you exit the Docker container it will stop running.
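A typical session inside the container then looks something like this (the file names are illustrative):

```shell
/usr/src/app $ ls
Dockerfile  Makefile  main.py  tests
/usr/src/app $ make test
python -m unittest discover -s tests
/usr/src/app $ exit
```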
Here are a few things to keep in mind about this setup.
- Using “--mount type=bind” allows the container to read and write to the local disk on your desktop. If you delete or modify a file within the container, the change will take place on your desktop too. Likewise, if you edit files on your desktop the changes will be immediately available within your container.
- If you want the appuser to have a .profile within the container you can simply put it in the top level of the directory you’re sharing with the container. That works because we specified the appuser’s home directory as /usr/src/app within our Dockerfile.
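For example, a minimal .profile dropped at the top of the project might look like this; its contents are an illustrative assumption, not from the project:

```shell
# /usr/src/app/.profile -- read by "sh -l" when the container starts.
export PATH="/usr/src/app/bin:$PATH"
alias t='make test'
```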
- If you need to add additional packages to your container you should do that in the Dockerfile and then rebuild the image. Otherwise each time you run the container you’ll need to reinstall the additional packages.
- This same image could be used to run any number of Python 3.7 projects. You would simply run it from the top level of the source code directory for the project you’re working on.
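For instance, a second (hypothetical) project can reuse the image; only the mounted source directory changes:

```shell
cd ~/projects/another-python-app   # hypothetical project path
docker run -it --mount type=bind,source="$(pwd)",target=/usr/src/app python-project:0.1
```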
Setting up a Docker container as your development runtime environment is a fairly easy process. It gives you an isolated environment where you don’t have to worry about how one project’s dependencies might affect other projects or tools installed on your desktop.
Containers are less resource intensive and faster to run than virtual machines, but they may not be the best bet if your target production environment is not a container.
I hope you found this useful. I welcome your comments, feedback, and questions.