Here at Viget, Docker has become an indispensable tool for local development. We build and maintain a ton of apps across the team, running different stacks and versions, and being able to package up a working dev environment makes it much, much easier to switch between apps and ramp up new devs onto projects. That’s not to say that developing with Docker locally is without its drawbacks[1], but they’re massively outweighed by the ease and convenience it unlocks.
Over time, we’ve developed our own set of best practices for effectively setting Docker up for local development. Please note that last bit (“for local development”) – if you’re creating images for deployment purposes, most of these principles don’t apply. Our typical setup involves the following containers, orchestrated with Docker Compose:
- The application (e.g. Rails, Django, or Phoenix)
- A JavaScript watcher/compiler (e.g. webpack-dev-server)
- A database (typically PostgreSQL)
- Additional necessary infrastructure (e.g. Redis, ElasticSearch, Mailhog)
- Occasionally, additional instances of the app doing things other than running the development server (think background jobs)
So with that architecture in mind, here are the best practices we’ve tried to standardize on:
1. Don’t put code or app-level dependencies into the image
Your primary Dockerfile, the one the application runs in, should include all the necessary software to run the app, but shouldn’t include the actual application code itself – that’ll be mounted into the container when docker-compose run starts and synced between the container and the local machine.
Additionally, it’s important to distinguish between system-level dependencies (like ImageMagick) and application-level ones (like Rubygems and NPM packages) – the former should be included in the Dockerfile; the latter should not. Baking application-level dependencies into the image means that it’ll have to be rebuilt every time someone adds a new one, which is both time-consuming and error-prone. Instead, we install those dependencies as part of a startup script.
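To make that concrete, here’s a minimal sketch of what such a startup script might look like for a Rails app (the script path and the exact install commands are illustrative and will vary by project):

#!/bin/sh
# Illustrative entrypoint script, e.g. ./.docker-config/rails/entrypoint.sh
set -e

# Install app-level dependencies at container startup instead of baking them into the image.
bundle check || bundle install
yarn install --check-files

# Hand off to whatever command the service was given (e.g. ./bin/rails server).
exec "$@"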
2. Don’t use a Dockerfile if you don’t have to
With point #1 in mind, you might find you don’t need to write a Dockerfile at all. If your app doesn’t have any special dependencies, you might be able to point your docker-compose.yml entry straight at an official image (i.e. just reference ruby:2.7.6). This isn’t very common – most apps and frameworks require some amount of additional infrastructure (e.g. Rails needs a working version of Node) – but if you find yourself with a Dockerfile that contains just a single FROM line, you can just cut it.
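For instance, a service with no extra system dependencies can reference the image directly (the tag and mount path here are just a sketch):

services:
  rails:
    image: ruby:2.7.6
    working_dir: /app
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app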
3. Only reference a Dockerfile once in docker-compose.yml
If you’re using the same image for multiple services (which you should!), only provide the build instructions in the definition of a single service, assign a name to it, and then reference that name for the additional services. So as an example, imagine a Rails app that uses a shared image for running the development server and webpack-dev-server. An example configuration might look like this:
services:
  rails:
    image: appname_rails
    build:
      context: .
      dockerfile: ./.docker-config/rails/Dockerfile
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
  node:
    image: appname_rails
    command: ./bin/webpack-dev-server
This way, when we build the services (with docker-compose build), our image only gets built once. If instead we’d omitted the image: directives and duplicated the build: one, we’d be rebuilding the exact same image twice, wasting disk space and our limited time on this earth.
4. Cache dependencies in named volumes
As mentioned in point #1, we don’t bake code dependencies into the image and instead install them on startup. As you can imagine, this would be pretty slow if we installed every gem/pip/yarn library from scratch each time we restarted the services (hello NOKOGIRI), so we use Docker’s named volumes to keep a cache. The config above might become something like:
volumes:
  gems:
  yarn:
services:
  rails:
    image: appname_rails
    build:
      context: .
      dockerfile: ./.docker-config/rails/Dockerfile
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
      - gems:/usr/local/bundle
      - yarn:/app/node_modules
  node:
    image: appname_rails
    command: ./bin/webpack-dev-server
    volumes:
      - .:/app
      - yarn:/app/node_modules
Exactly where you should mount the volumes will vary by stack, but the same principle applies: keep the compiled dependencies in named volumes to massively decrease startup time.
5. Put ephemeral stuff in named volumes
While we’re on the subject of using named volumes to increase performance, here’s another hot tip: put directories that hold files you don’t need to edit into named volumes to stop them from being synced back to your local machine (which carries a big performance cost). I’m thinking specifically of log and tmp directories, in addition to wherever your app stores uploaded files. A good rule of thumb is, if it’s .gitignore’d, it’s a good candidate for a volume.
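Building on the config from point #4, that might look something like this (the log and tmp paths assume a typical Rails layout):

volumes:
  gems:
  yarn:
  log:
  tmp:
services:
  rails:
    image: appname_rails
    volumes:
      - .:/app
      - gems:/usr/local/bundle
      - yarn:/app/node_modules
      # Ephemeral, .gitignore'd directories stay in named volumes instead of syncing to the host.
      - log:/app/log
      - tmp:/app/tmp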
6. Clean up after apt-get update
If you use Debian-based images as the starting point for your Dockerfiles, you’ve probably noticed that you have to run apt-get update before you’re able to apt-get install your dependencies. If you don’t take precautions, this will bake a bunch of additional package-index data into your image, drastically increasing its size. Best practice is to do the update, install, and cleanup in a single RUN command:
RUN apt-get update && \
    apt-get install -y libgirepository1.0-dev libpoppler-glib-dev && \
    rm -rf /var/lib/apt/lists/*
7. Prefer exec to run
If you need to run a command inside a container, you have two options: run and exec. The former is going to spin up a new container to run the command, while the latter attaches to an existing running container.
In almost every instance, assuming you pretty much always have the services running while you’re working on the app, exec (and specifically docker-compose exec) is what you want. It’s faster, since there’s no new container to spin up, and it doesn’t carry any chance of leaving weird artifacts around (which will happen if you’re not careful about including the --rm flag with run).
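For example, with the rails service from the configs above (the commands themselves are just illustrative):

# Preferred: attach to the already-running rails container.
docker-compose exec rails ./bin/rails console

# One-off fallback when nothing is running; --rm cleans up the container afterwards.
docker-compose run --rm rails ./bin/rails db:migrate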
8. Coordinate services with wait-for-it
Given our dependence on shared images and volumes, you may encounter issues where one of your services starts before another service’s entrypoint script finishes executing, leading to errors. When this occurs, we’ll pull in the wait-for-it utility script, which takes a host and port to check and a command to run once that port is accepting connections. Then we update our docker-compose.yml to use it:
volumes:
  gems:
  yarn:
services:
  rails:
    image: appname_rails
    build:
      context: .
      dockerfile: ./.docker-config/rails/Dockerfile
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
      - gems:/usr/local/bundle
      - yarn:/app/node_modules
  node:
    image: appname_rails
    command: [
      "./.docker-config/wait-for-it.sh",
      "rails:3000",
      "--timeout=0",
      "--",
      "./bin/webpack-dev-server"
    ]
    volumes:
      - .:/app
      - yarn:/app/node_modules
This way, webpack-dev-server won’t start until the Rails development server is fully up and running.
So there you have it, a short list of the best practices we’ve developed over the last several years of working with Docker. We’ll try to keep this list updated as we get better at doing and documenting this stuff.
If you’re interested in reading more, here are a few good links:
- Ruby on Whales: Dockerizing Ruby and Rails development
- Docker: Right for Us. Right for You?
- Docker + Rails: Solutions to Common Hurdles
[1] Namely, there’s a significant performance hit when running Docker on Mac (as we do), in addition to the cognitive hurdle of all your stuff running inside containers. If I worked at a product shop, where I was focused on a single codebase for the bulk of my time, I’d think hard before going all in on local Docker.