Published: 28. 11. 2019   Category: GNU/Linux

Fix a broken Docker Swarm cluster

This issue was new to me, but it appears to affect other users as well, and it is effectively impossible to reproduce. In my case, it happened on a Docker Swarm cluster with about 1.5 years of uptime, running version 17.12.0-ce on Ubuntu 16.04.

What are the symptoms of the broken swarm?

Before the failure, we got quite a lot of "dispatcher is stopped" errors from all nodes. I had seen these errors before, but the cluster usually reinitializes communication automatically and no administrator intervention is necessary. Our three Docker Swarm managers periodically elect a leader; this time the election failed and manager-0 became unreachable from the other nodes. The remaining two managers could not establish a new quorum because of the stopped dispatcher, so the cluster never changed manager-0's status. Unfortunately, manager-0 was not able to renew the Docker network, and the other managers and worker nodes became unreachable.
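The quorum failure described above can be spotted from any manager that still responds. As a rough sketch (the helper name is mine, and it assumes the standard `docker node ls` table layout, where the MANAGER STATUS column is the last field), this counts how many managers the cluster still sees as Leader or Reachable; with three managers, anything below two means the quorum is lost:

```shell
# Sketch: count managers the swarm still considers healthy.
# Assumes the standard `docker node ls` table layout, where the
# MANAGER STATUS column ("Leader" / "Reachable" / "Unreachable")
# is the last field on each data line.
count_healthy_managers() {
    awk 'NR > 1 && ($NF == "Leader" || $NF == "Reachable") { n++ }
         END { print n + 0 }'
}

# On a manager that still responds:
#   docker node ls | count_healthy_managers
```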

This behavior has been reported by other Docker users, and they usually had to destroy and reinitialize the whole cluster to fix it. No other solution, nor a way to reproduce the problem, has been published.

Reinitialize docker swarm cluster

  1. Optional: if you use node labels, save the tags assigned to the nodes for later reassignment. On a manager node, run this script:
    for node in $(docker node ls --filter role=worker --format '{{ .Hostname }}'); do
        tags=$(docker node inspect "$node" -f '{{.Spec.Labels}}' |\
            sed -e 's/^map\[//' -e 's/\]$//')
        printf "%s: %s\n" "$node" "$tags"
    done | sort
    The tags can be reassigned later with: docker node update --label-add <TAG>=true <WORKER>
  2. On each node, force it to leave the swarm: docker swarm leave --force
  3. On each node, restart service: systemctl restart docker.service
  4. On manager, create a new cluster:
    docker swarm init --availability drain --advertise-addr <INTERNAL IP>
    The internal IP is this node's intranet address, which the other cluster nodes will use to communicate with the manager.
  5. Generate tokens for the manager/worker node invitation:
    $ docker swarm join-token manager
    To add a manager to this swarm, run the following command:
        docker swarm join --token SWMTKN-1-6bsyhhxe3txagx...
    $ docker swarm join-token worker
    To add a worker to this swarm, run the following command:
        docker swarm join --token SWMTKN-1-6bsyhhxe3txagy...
    Each command's output contains the join command for that role; copy it and run it on the respective manager and worker nodes.
  6. On the manager, confirm with docker node ls that the nodes have joined the cluster, and set the availability of the manager nodes to Drain: docker node update --availability drain <HOSTNAME>. Manager availability is Active by default; if you do not want the managers to run any containers, set it to Drain.
  7. Optional: if you saved tags in step 1, reassign them to the nodes now: docker node update --label-add <TAG>=true <WORKER>
  8. Deploy the stack again: docker stack deploy -c docker-compose.yml <STACK_NAME>
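Reassigning the labels saved in step 1 can be scripted as well. A minimal sketch (the helper name restore_label_cmds is mine, and it assumes the exact "node: key:value key:value" line format produced by the step-1 script):

```shell
# Sketch: turn lines saved by the step-1 script, e.g.
#   worker-1: region:eu ssd:true
# back into the matching `docker node update --label-add` commands.
# The function only prints the commands; review them, then pipe to
# `sh` on a manager node to actually apply the labels.
restore_label_cmds() {
    while IFS= read -r line; do
        node=${line%%:*}                      # part before the first colon
        labels=${line#*: }                    # everything after "node: "
        [ "$labels" = "$line" ] && continue   # line had no "key:value" part
        for kv in $labels; do                 # each saved "key:value" pair
            printf 'docker node update --label-add %s=%s %s\n' \
                "${kv%%:*}" "${kv#*:}" "$node"
        done
    done
}

# Usage: restore_label_cmds < saved-labels.txt | sh
```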