Docker Compose

The configuration file for the current project is docker-compose.yml.


Docker Compose relies on Docker, so be sure to install Docker first.

If you want to run multiple containers to meet the requirements of a more complicated service, you can use Docker Compose to bring all of the containers up together. To install docker-compose:

sudo apt-get install docker-compose -y

After editing the docker-compose.yml file for the services, launch them with:

docker-compose up -d

To rebuild the images and recreate the listed services without starting any linked services, use:

docker-compose up -d --no-deps --build


Then, to bring everything down again (note that -v also removes named volumes declared in the volumes section and any anonymous volumes, so omit it if you need to keep that data):

docker-compose down -v

To rebuild, use:

docker-compose build

If a build has gotten into a bad state, you may see an error like:

ERROR: for seafile-mysql no such image

If a container is not running, it will not show up in plain docker ps. You can see the status of all containers, including stopped ones, with:

docker-compose ps

Remove the old, stopped containers (docker-compose rm removes containers, not images):

docker-compose rm

then rebuild again.


The docker-compose.yml file defines all of the containers used for the application. Ideally, there are existing images that meet the requirements.

One helpful parameter is:

container_name: mycontainername

By default, the parent directory name is used to generate container names. The container_name parameter keeps container names consistent regardless of where the project is deployed. This, in turn, makes it easier to write other configuration files that reference those names within the docker network.
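For example, a service definition using container_name (the service, image, and name here are illustrative):

```yaml
services:
  db:
    image: mysql:5.7
    # Without this, the container would be named something like
    # <project>_db_1, which varies with the parent directory name
    container_name: myproject-db
```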

In some cases it may help to run more than one command. You can separate these out into separate compose files (e.g. docker-compose-build.yml), or you could run multiple commands by chaining them together in a sh call:

command: bash -c "
    python migrate
    && python runserver
    "
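For example, a hypothetical service running both commands in a single sh call (the one-line form of the snippet above):

```yaml
services:
  web:
    image: python:3.9
    # Both commands run in one shell; the second command only runs
    # if the first succeeds
    command: bash -c "python migrate && python runserver"
```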

Beyond that, you may want to consider building a custom image with a dedicated Dockerfile.

The Dockerfile can be specified under the service's build key in the docker-compose.yml file:

      build:
        context: .
        dockerfile: <Below_Directory>/Dockerfile

Reminder: when an image for a service is built because it did not already exist, docker-compose prints a notice that to rebuild the image you must use docker-compose build or docker-compose up --build.


To restrict a published port to the local machine only, edit the docker-compose.yml file and prefix the host port with the loopback address in the ports section:

      ports:
        - "127.0.0.1:<host_port>:<container_port>"

To restrict a service so it's only available within the container network (not available on the host directly), use expose instead:

      expose:
        - 3306
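Putting the two together, a sketch with a web service published only on the loopback interface and a database reachable only from other containers (images and ports are illustrative):

```yaml
services:
  web:
    image: nginx:latest
    ports:
      # Published on the host, but only on 127.0.0.1
      - "127.0.0.1:8080:80"
  db:
    image: mysql:5.7
    expose:
      # Reachable from other services on the compose network only
      - 3306
```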

Custom Network

Usually it's sufficient to use the default network in a compose file so all containers specified will have access to each other, but not to anything else.

If you want to run a number of different applications, but proxy them all behind the same nginx host (running in a different container), it can help to attach the default network to a pre-existing external network in the compose file:

networks:
  default:
    name: my-pre-existing-network
    external: true

Custom images (Dockerfile)

It's a good idea to start with an existing docker image for the type of service you want to use as a foundation for your container. For example, if you're running a node application, in docker-compose.yml start with:

    image: node:14

Eventually you may want some other utilities to be available within the container context. (e.g. when you run docker-compose exec api bash to connect to the container). In that case, use a Dockerfile to make those adjustments so they persist across restarts.

      build:
        context: ./api
        dockerfile: Dockerfile

Note that the dockerfile path is relative to the value of context.

Then, in the Dockerfile, image: node:14 becomes FROM node:14 and you can add the rest of the configuration as needed.
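A minimal Dockerfile along these lines might look like the following (the extra utilities are just examples):

```dockerfile
# Replaces image: node:14 in docker-compose.yml
FROM node:14

# Utilities available when you docker-compose exec into the container;
# installing them here means they persist across restarts
RUN apt-get update && apt-get install -y \
    vim \
    less \
 && rm -rf /var/lib/apt/lists/*

WORKDIR /app
```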

Environment Variables

It's possible to put variables in a .env file and then reference those variables in the docker-compose.yml file.
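A minimal sketch, assuming a hypothetical DB_PASSWORD variable:

```yaml
# .env (next to docker-compose.yml):
#   DB_PASSWORD=supersecret

# docker-compose.yml:
services:
  db:
    image: mysql:5.7
    environment:
      # Substituted from .env when the file is parsed
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
```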


For troubleshooting, you can override the entrypoint in docker-compose.yml with a command that is sure to keep the container running, e.g.:

entrypoint: ["tail", "-f", "/dev/null"]

or:

entrypoint: ["sh", "-c", "sleep 2073600"]
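In context, a hypothetical service stanza using the first form:

```yaml
services:
  api:
    image: node:14
    # No real workload; just keep the container alive for debugging
    entrypoint: ["tail", "-f", "/dev/null"]
```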

then connect with:

docker-compose exec SERVICE_NAME bash


Logging is available via docker directly:

docker logs repo_nginx_1

or via compose, without needing the full container name:

docker-compose logs -f SERVICE_NAME

see also:

docker network ls


Straightforward guide for getting nginx running:

Great article for using Docker for a local development environment:

docker-compose -f docker-compose.builder.yml run --rm install



If no container_name parameter is set in docker-compose.yml, container names default to [project]_[service]_[index], and the project name, in turn, defaults to the parent directory name. This is usually fine for local development setups.

There are times, however, when it is useful to make directory names that are different than the project name. For example, working on a different branch, it may be easier to use the branch name instead of the project name for the parent directory.

If the parent directory name does not match the project name, pass the project name to all of the above docker-compose commands with -p, e.g.:

docker-compose -p boilerplate up -d

Using -p boilerplate allows the project name (and consequently the container names) to be consistent from one deployment to the next. That way containers are named boilerplate_[service]_1.

This allows configuration files that reference those container names to be written ahead of time.

If you've checked out the repository to a directory named boilerplate, docker will assume the project name boilerplate and the -p option may be omitted.

See Also