Remote docker deployment done securely

Secure Docker Deployment with TLS

Did you follow my post Running WordPress using Docker, or have you installed any Docker containers directly on a Docker host?
Do you find it painful to copy the setup and log in to the Docker host every time you change something? Of course you do.
Today I will tell you about secure Docker deployment so you can avoid this pain in the future.

Prerequisites you need

To follow this post you should have the following installed:

  • On your host:
    • Debian 9.3, Kernel Version 4.9.0-5-amd64
    • Docker Version 17.12.0-ce
  • On your client:

Other setups might work, but I have not tested them and hence will not be able to help.

Default Docker Configuration

By default, you can only access Docker locally on your host, as Docker runs via a non-networked Unix socket. Optionally, you can configure it to communicate over a network socket using HTTP.

But this is not secure, because by default Docker will use plain, unencrypted HTTP.

If you want Docker to be reachable via the network in a safe manner, you have to enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate.

In daemon mode, Docker then only allows connections from clients authenticated by a certificate signed by that CA. In client mode, it only connects to servers with a certificate signed by that CA.

Create a CA, server and client keys with OpenSSL

Note: replace $HOST in the following examples with the DNS name of your Docker host.

To make sure the Docker daemon can access the keys and certificates, we store them under /root/tls on the Docker daemon's host machine. This directory does not exist yet, so we need to create it.

Do this as root, because cd is a shell builtin and does not work with sudo:
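A minimal sketch of this step:

```shell
# become root first (e.g. with `sudo -i`); `sudo cd /root/tls` would fail
# because cd is a shell builtin, not an executable
mkdir -p /root/tls
cd /root/tls
```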

First, generate CA private and public keys:
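Run this in /root/tls. The flags follow the official Docker TLS guide, with two deviations so the sketch runs non-interactively: the guide additionally protects the CA key with an -aes256 passphrase, and the CA name "docker-ca" passed via -subj is just a placeholder:

```shell
# generate the CA private key (add -aes256 to protect it with a passphrase)
openssl genrsa -out ca-key.pem 4096

# generate the self-signed CA certificate (the public half), valid for one year
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 \
    -subj "/CN=docker-ca" -out ca.pem
```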

Now that you have a CA, create a server key and certificate signing request (CSR). Make sure that "Common Name" matches the hostname of the host the Docker daemon runs on:
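For example (docker.example.com is a placeholder for your real host name):

```shell
# example value; use the DNS name of your Docker host
HOST=docker.example.com

# generate the server private key
openssl genrsa -out server-key.pem 4096

# create a certificate signing request (CSR); the Common Name must match $HOST
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
```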

Next, you sign the public key with our CA:

Since TLS connections can be made via IP address as well as DNS name, the IP addresses need to be specified when creating the certificate. For example, to allow connections using your FQDN ($HOST) and one or more IP addresses, specify the following:
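A sketch of this step; the IP addresses below are placeholders, list those of your own host:

```shell
# allow connections via the DNS name as well as via explicit IP addresses
echo "subjectAltName = DNS:$HOST,IP:10.10.10.20,IP:127.0.0.1" > extfile.cnf
```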

Set the Docker daemon key’s extended usage attributes to be used only for server authentication:
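This appends a second line to the same extensions file:

```shell
# restrict the server certificate to server authentication only
echo "extendedKeyUsage = serverAuth" >> extfile.cnf
```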

Now, generate the signed server certificate:

For client authentication, create a client key and certificate signing request:
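A sketch; the Common Name "client" is an arbitrary choice, as it does not have to match a host name:

```shell
# generate the client private key
openssl genrsa -out key.pem 4096

# create a CSR for the client certificate
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
```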

Note: for simplicity of the next couple of steps, perform this step on the Docker daemon’s host machine as well and then copy the keys to your client.

To make the key suitable for client authentication, create an extensions config file:
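Analogous to the server side, but with clientAuth and a separate file:

```shell
# restrict the client certificate to client authentication only
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
```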

Now sign the client's certificate signing request:

After generating cert.pem and server-cert.pem you can remove the two certificate signing requests, as we don't need them anymore:
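The cleanup itself:

```shell
# the CSRs are only needed for signing and can go now
rm -v client.csr server.csr
```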

With the default umask of 022, your secret keys are world-readable and writable by you.

To protect your keys from accidental damage, remove their write permissions. To make them only readable by you, change file modes as follows:

Certificates can be world-readable, but you might want to remove write access to prevent accidental damage:

Configure the Docker daemon to use TLS and our generated keys and certs

Edit the file /etc/default/docker and add one line as follows to switch the daemon into TLS mode and tell it which certificates and keys to use:
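Assuming the keys and certificates live in /root/tls as above, the line could look like this; -H tcp://0.0.0.0:2376 exposes the daemon on all interfaces, and the Unix socket is kept so local docker commands on the host keep working:

```shell
# /etc/default/docker
DOCKER_OPTS="--tlsverify --tlscacert=/root/tls/ca.pem --tlscert=/root/tls/server-cert.pem --tlskey=/root/tls/server-key.pem -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock"
```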

Note: You need to use absolute paths for the certificates and keys, as the daemon runs with no working directory.

But wait, what does the top of the file say: # THIS FILE DOES NOT APPLY TO SYSTEMD

Wtf, you might think: isn't Debian 9 using systemd? It is, and there are tons of potential solutions for this floating around. Most of them don't work.

What we are going to do is this: you will extend the systemd configuration in a clean and supported way so that it uses the $DOCKER_OPTS environment variable set in /etc/default/docker. Thus we do not change the supplied docker.service configuration file; instead we add a configuration file to the systemd drop-in directory for Docker, which overrides the standard configuration options at runtime. (The drop-in directory /etc/systemd/system/docker.service.d might not exist yet, so you may have to create it.)

Create the file /etc/systemd/system/docker.service.d/docker.conf  with the following contents:
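A sketch of the drop-in. The empty ExecStart= clears the ExecStart of the stock unit before setting the new one; the /usr/bin/dockerd path is an assumption matching the stock Debian package, so verify yours with systemctl cat docker:

```ini
# /etc/systemd/system/docker.service.d/docker.conf
[Service]
EnvironmentFile=/etc/default/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
```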

(This approach DOES NOT WORK when your options need expansion, e.g. DOCKER_OPTS="-g $(readlink -f /var/lib/docker)"!)

Now tell systemd to reload the configuration files and restart the Docker service:
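The two commands for this:

```shell
# pick up the new drop-in, then restart the daemon with the TLS options
sudo systemctl daemon-reload
sudo systemctl restart docker
```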

Check that it is up and running:
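For example (ss is part of the iproute2 package that ships with Debian 9):

```shell
# the service should be active ...
sudo systemctl status docker --no-pager

# ... and dockerd should now listen on TCP port 2376
ss -tlnp | grep 2376
```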

Client Setup for secure Docker deployment

To connect to the Docker daemon and validate its certificate, provide your client keys, certificates and trusted CA:

The easiest way to do this is scp: run scp user@$HOST:/path/to/file . for each file. You need to copy the files ca.pem, cert.pem and key.pem.

Run it on the client machine

This step should be run on your Docker client machine. As such, you need to copy your CA certificate, your client certificate, and your client key to that machine.

Note: replace all instances of $HOST in the following example with the DNS name of your Docker daemon’s host.

Note: Docker over TLS should run on TCP port 2376.
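With ca.pem, cert.pem and key.pem in the current directory, a first remote call could look like this:

```shell
docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H=$HOST:2376 version
```

If the TLS setup is wrong on either side, this command fails with a certificate error instead of printing the server version.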

Warning: As shown in the example above, you don't need to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!

Secure Docker deployment by default

If you want secure Docker client connections by default, you can move the files to the .docker directory in your home directory – and set the DOCKER_HOST and DOCKER_TLS_VERIFY variables as well (instead of passing -H=tcp://$HOST:2376 and --tlsverify on every call). Docker will use the certificates and keys in the .docker directory automatically.

Integrated workflow for secure remote Docker deployment

If you have a development workflow like DEV-TEST-PROD, you cannot really use the above setup, because you will have different certificates and keys for TEST and PROD. You would have to use the long command-line version every time. Not cool.

I have a solution for you: set environment variables e.g. prod, test, … like this:

export prod="--tlsverify -H=$HOST:2376 --tlscacert=/full/path/.docker/ca.pem --tlscert=/full/path/.docker/cert.pem --tlskey=/full/path/.docker/key.pem"

You can now simply use:

  • $ docker ps  – for development
  • $ docker $prod ps  – for production

And, as you can imagine, you can create any other configuration e.g. test or stage.

The best part: $ docker-compose $prod ps also works.

You now have a setup for secure remote Docker deployment.


In my next post I will show you how to combine these different deployment options with different docker-compose files.

Stay tuned.
