Securely entering a Docker Swarm

One of the many nice things about Docker containers is their isolation and how easily you can secure them by not exposing ports, leaving only a single entry point open for public access. The way I typically do this is by not exposing any ports from any container except for a lets-nginx (GitHub - smashwilson/lets-nginx: Push button, get TLS) container that handles all incoming connections from the outside world.

services:
    ...
    proxy:
        image: lets-nginx
        networks:
            - app_internal
        environment:
            - EMAIL=me@mydomain.com
            - DOMAIN=mydomain.com
            - UPSTREAM=app:5000
        ports:
            - "80:80"
            - "443:443"
    ...
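
For context, the rest of the stack file might look roughly like this, with the app service name, its image and the overlay network app_internal as placeholders for whatever your stack actually runs. The important part is that app publishes no ports of its own:

services:
    ...
    app:
        image: my-app-image
        networks:
            - app_internal
    ...

networks:
    app_internal:
        driver: overlay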

While that’s great for security, it unfortunately also makes it really hard to inspect and troubleshoot your services. You can’t simply curl the service endpoint of app on port 5000 or check a web UI that a service might expose, because their ports are not reachable from the outside.

Now you might consider opening the ports temporarily while troubleshooting, but anything left to policy (“make sure you disable the ports after you’re done”) leaves the door open to being distracted one day and forgetting to do just that.

Adding an sshd service

The way I work around this is to run an sshd container that is attached to the internal network and lets me set up SSH tunnels to any internal service I want to inspect:

services:
    ...
    sshd:
        image: corbinu/ssh-server
        networks:
            - app_internal
        ports:
            - "2222:22"

See GitHub - corbinu/ssh-server: SSH server docker container for details on how to configure the SSH server with your public key. You can either build a custom image or copy the key into the running container by running

cat ~/.ssh/id_rsa.pub | docker exec -i ssh-server /bin/bash -c "cat >> /root/.ssh/authorized_keys"
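
If you prefer to bake the key into a custom image instead, a minimal Dockerfile sketch along these lines should do, assuming your public key is saved as authorized_keys next to the Dockerfile:

FROM corbinu/ssh-server
# copy the public key into root's authorized_keys at build time
COPY authorized_keys /root/.ssh/authorized_keys
# sshd is picky about permissions on these files
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys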

Tunneling into the container network

With this SSH server set up, I can connect from an external machine to the service app running on port 5000 on the app_internal network. All it takes is starting an SSH tunnel on my local machine as follows:

ssh -i my-private-key -p 2222 root@mydomain.com -L 5001:app:5000

and then simply connect to port 5001 on localhost, i.e. the local machine. Note that I’ve chosen to map to local port 5001 here so that it’s easier to see which is the local port and which is the port of the target service in case you need to adjust it (the syntax is -L local_port:target_host:target_port, and the target host is resolved from the SSH server’s side of the tunnel, which is why app works as a hostname). I can never remember which is which and need all the help I can get when setting up SSH tunnels; more on that below.

Also, I often run the same service locally while developing, and using a different local port avoids port clashes when the tunnel is up.
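
Once the tunnel is up, checking on the service from the local machine is as simple as hitting the local end of the tunnel with curl or a browser; the /health path is just a stand-in for whatever endpoint your service actually exposes:

curl http://localhost:5001/health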

The SSH Tunnel app

As I mentioned above, I struggle to remember the parameters for SSH tunnels. I also like to be able to see at a glance which tunnels are active, and I don’t know of a good way to do that short of keeping various terminal sessions open with the ssh commands running in the foreground.

This is where a nice macOS app called SSH Tunnel comes into play: it lets you set up various tunnels and activate them with the click of a button.

SSH Tunnel configuration for a service container

With this setup I get the best of both worlds: a secure Docker Swarm without open ports, plus easy access into the cluster and to all of its services.