Docker Swarm for Container Orchestration

An overview of Docker Swarm and how it manages and orchestrates containers across a cluster.
What is Docker Swarm?
Docker Swarm is a mode for managing a cluster of Docker Engines, hence the name swarm. A cluster of Docker hosts runs in swarm mode and consists of managers and workers; the Docker Engine instances that participate in the swarm are called nodes.
A production-level swarm deployment consists of Docker nodes spread across multiple servers.
Why use it? – Container Orchestration
In a production environment, hundreds of Docker containers may be running multiple applications. Managing all these containers by hand is a big pain for DevOps engineers; this is where Docker Swarm helps you out. It manages and orchestrates a cluster running multiple Docker containers with ease.
Below are some of its features:
- High availability – aims to offer zero downtime and no outages.
- Load balancing – automatically allocates resources and reroutes requests to other nodes in the cluster if any node fails.
- Decentralized – multiple manager nodes run in a production environment, so the cluster never depends on a single manager node.
- Scalability – using a single docker command, you can easily scale the number of containers in the cluster up or down.
Orchestrate Docker Containers
Now that you know the basics of Docker Swarm, let us look at an example of its implementation.
In this example, I have three machines running in a cluster with the below details:
manager1: 192.168.56.104
worker1: 192.168.56.105
worker2: 192.168.56.102
To initialize swarm mode, run the command below on the manager node. The --advertise-addr flag sets the address this node advertises to other nodes that want to join the cluster.
geekflare@manager1:~$ docker swarm init --advertise-addr 192.168.56.104
Swarm initialized: current node (lssbyfzuiuh3sye1on63eyixf) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3h3d8qgvdlxi8tl1oqpfho9khx7i1t5nq7562s9gzojbcm9kr6-azy4rffrzou0nem9hxq4ro5am 192.168.56.104:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
The above command generates a token that other nodes use to join this cluster. Copy the join command, including the generated token, and run it on the worker nodes.
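As an aside, you do not need to save the join command anywhere: a manager can reprint it at any time with the join-token subcommand.

```shell
# On the manager: print the full join command (with token) for workers.
docker swarm join-token worker

# Or print the join command for adding another manager.
docker swarm join-token manager

# Print only the raw token, which is handy in scripts.
docker swarm join-token -q worker
```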
Run the join command on the worker1 node.
geekflare@worker1:~$ docker swarm join --token SWMTKN-1-3h3d8qgvdlxi8tl1oqpfho9khx7i1t5nq7562s9gzojbcm9kr6-azy4rffrzou0nem9hxq4ro5am 192.168.56.104:2377
This node joined a swarm as a worker.
Run the join command on the worker2 node.
geekflare@worker2:~$ docker swarm join --token SWMTKN-1-3h3d8qgvdlxi8tl1oqpfho9khx7i1t5nq7562s9gzojbcm9kr6-azy4rffrzou0nem9hxq4ro5am 192.168.56.104:2377
This node joined a swarm as a worker.
Now, on the manager node, you can check which nodes are running in the cluster.
geekflare@manager1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
lssbyfzuiuh3sye1on63eyixf * manager1 Ready Active Leader 18.09.6
utdr3dnngqf1oy1spupy1qlhu worker1 Ready Active 18.09.6
xs6jqp95lw4cml1i1npygt3cg worker2 Ready Active 18.09.6
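Note that node roles are not fixed; a manager can change them at any time. For example:

```shell
# Promote a worker to manager (adds fault tolerance to the control plane).
docker node promote worker1

# Demote it back to a plain worker.
docker node demote worker1

# Inspect a node's details: role, availability, resources, and so on.
docker node inspect --pretty worker1
```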
Let’s build the geekflare_mongodb Docker image that we used in the Dockerfile tutorial.
docker build -t geekflare_mongodb .
Run the MongoDB Docker image as a container by creating a swarm service. 27017 is the port on which MongoDB is exposed.
geekflare@manager1:~$ docker service create --name "Mongo-Container" -p 27017:27017 geekflare_mongodb
image geekflare_mongodb:latest could not be accessed on a registry to record its digest. Each node will access geekflare_mongodb:latest independently, possibly leading to different nodes running different versions of the image.
kok58xa4zi05psh3uy6s5x9e6
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
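The warning above appears because the image was built locally and does not exist in a registry, so the other nodes cannot pull it. For a real multi-node service you would typically push the image to a registry first; a sketch of that workflow follows (the registry address here is a placeholder, adjust it for your setup).

```shell
# Tag the local image for a registry (example address; adjust as needed).
docker tag geekflare_mongodb 192.168.56.104:5000/geekflare_mongodb

# Push it so every node in the swarm can pull the same image.
docker push 192.168.56.104:5000/geekflare_mongodb

# Create the service from the registry image; --with-registry-auth forwards
# your registry login credentials to the worker nodes if auth is required.
docker service create --name "Mongo-Container" -p 27017:27017 \
  --with-registry-auth 192.168.56.104:5000/geekflare_mongodb
```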
Check whether the docker swarm service has started. MODE replicated means the service runs a specified number of identical tasks distributed across the cluster (not necessarily on every node), and REPLICAS 1/1 means one of the one desired replicas is currently running.
geekflare@manager1:~$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
kok58xa4zi05 Mongo-Container replicated 1/1 geekflare_mongodb:latest *:27017->27017/tcp
Let us check which node in the cluster this single task is running on; here, it is the manager1 node.
geekflare@manager1:~$ docker service ps Mongo-Container
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
jgqjo92rbq23 Mongo-Container.1 geekflare_mongodb:latest manager1 Running Running about a minute ago
Run the docker ps command to get more details about the container that is running this swarm task.
geekflare@manager1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
05d77e7b4850 geekflare_mongodb:latest "/bin/sh -c usr/bin/…" 2 minutes ago Up 2 minutes 27017/tcp Mongo-Container.1.jgqjo92rbq23sv01hrufdigtx
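Two other commonly used commands for checking on a service are inspect and logs:

```shell
# Human-readable summary of the service definition (mode, ports, image).
docker service inspect --pretty Mongo-Container

# Logs aggregated from all tasks of the service, across all nodes.
docker service logs Mongo-Container
```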
You can also run the swarm service in “global” mode instead of the default “replicated” mode. Global mode runs one task of the service on every node in the cluster.
Before I run the service in global mode, let me remove the existing service.
geekflare@manager1:~$ docker service rm Mongo-Container
Mongo-Container
Start the swarm service in global mode using the --mode flag.
geekflare@manager1:~$ docker service create --name "Mongo-Container" -p 27017:27017 --mode global geekflare_mongodb
image geekflare_mongodb:latest could not be accessed on a registry to record its digest. Each node will access geekflare_mongodb:latest independently, possibly leading to different nodes running different versions of the image.
mfw8dp0zylffppkllkcjl8391
overall progress: 3 out of 3 tasks
utdr3dnngqf1: running
lssbyfzuiuh3: running
xs6jqp95lw4c: running
verify: Service converged
Check whether the swarm service started in global mode. Since three nodes (1 manager, 2 workers) are running in the cluster, the number of replicas is 3.
geekflare@manager1:~$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
mfw8dp0zylff Mongo-Container global 3/3 geekflare_mongodb:latest *:27017->27017/tcp
Three tasks of the service are now running, one on each of the 3 nodes; verify this by running the command below.
geekflare@manager1:~$ docker service ps Mongo-Container
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
zj2blvptkvj6 Mongo-Container.xs6jqp95lw4cml1i1npygt3cg geekflare_mongodb:latest worker2 Running Running about a minute ago
3eaweijbbutf Mongo-Container.utdr3dnngqf1oy1spupy1qlhu geekflare_mongodb:latest worker1 Running Running about a minute ago
yejg1o2oyab7 Mongo-Container.lssbyfzuiuh3sye1on63eyixf geekflare_mongodb:latest manager1 Running Running about a minute ago
Next, let me show how you can define the number of replicas. Before that, I will remove the currently running service.
geekflare@manager1:~$ docker service rm Mongo-Container
Mongo-Container
Use the --replicas flag in the command and specify the number of replicas you want for the swarm service. For example, to have two replicas of the swarm service:
geekflare@manager1:~$ docker service create --name "Mongo-Container" -p 27017:27017 --replicas=2 geekflare_mongodb
image geekflare_mongodb:latest could not be accessed on a registry to record its digest. Each node will access geekflare_mongodb:latest independently, possibly leading to different nodes running different versions of the image.
4yfl41n7sfak65p6zqwwjq82c
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged
Check the swarm tasks currently running. You can see one replica running on the manager1 node and the other on the worker1 node.
geekflare@manager1:~$ docker service ps Mongo-Container
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xukodj69h79q Mongo-Container.1 geekflare_mongodb:latest worker1 Running Running 9 seconds ago
e66zllm0foc8 Mongo-Container.2 geekflare_mongodb:latest manager1 Running Running 9 seconds ago
Go to the worker1 node and check that a Docker container is running the swarm task there.
geekflare@worker1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5042b7f161cb geekflare_mongodb:latest "/bin/sh -c usr/bin/…" About a minute ago Up About a minute 27017/tcp Mongo-Container.1.xukodj69h79q3xf0pouwm7bwv
To stop this container, run the command below.
geekflare@worker1:~$ docker stop 5042b7f161cb
5042b7f161cb
Now, from the manager1 node, if you check which nodes are running the service, you will see it is running on the manager1 and worker2 nodes. The worker1 task’s desired state is now Shutdown (since we stopped the container running it), and its current state shows Failed. But since two replicas of this service must be running, a new task was started on worker2.
This is how you achieve high availability using docker swarm.
geekflare@manager1:~$ docker service ps Mongo-Container
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
cd2rlv90umej Mongo-Container.1 geekflare_mongodb:latest worker2 Running Running 30 seconds ago
xukodj69h79q \_ Mongo-Container.1 geekflare_mongodb:latest worker1 Shutdown Failed 38 seconds ago "task: non-zero exit (137)"
e66zllm0foc8 Mongo-Container.2 geekflare_mongodb:latest manager1 Running Running 3 minutes ago
It’s very easy to scale Docker containers up or down. The command below scales the mongo service up to 5 replicas.
geekflare@manager1:~$ docker service scale Mongo-Container=5
Mongo-Container scaled to 5
overall progress: 5 out of 5 tasks
1/5: running
2/5: running
3/5: running
4/5: running
5/5: running
verify: Service converged
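docker service scale is shorthand for updating the replica count on the service; the same result can be achieved with docker service update:

```shell
# Equivalent to 'docker service scale Mongo-Container=5'.
docker service update --replicas 5 Mongo-Container

# Scaling down works the same way; swarm stops the surplus tasks, e.g.:
#   docker service scale Mongo-Container=2
```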
Check how many replicas of the mongo container are running now; it should be 5.
geekflare@manager1:~$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
4yfl41n7sfak Mongo-Container replicated 5/5 geekflare_mongodb:latest *:27017->27017/tcp
Check where these 5 replicas are running in the cluster: one replica is running on the manager1 node and two on each of the worker nodes.
geekflare@manager1:~$ docker service ps Mongo-Container
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
cd2rlv90umej Mongo-Container.1 geekflare_mongodb:latest worker2 Running Running 2 minutes ago
xukodj69h79q \_ Mongo-Container.1 geekflare_mongodb:latest worker1 Shutdown Failed 2 minutes ago "task: non-zero exit (137)"
e66zllm0foc8 Mongo-Container.2 geekflare_mongodb:latest manager1 Running Running 5 minutes ago
qmp0gqr6ilxi Mongo-Container.3 geekflare_mongodb:latest worker2 Running Running 47 seconds ago
9ddrf4tsvnu2 Mongo-Container.4 geekflare_mongodb:latest worker1 Running Running 46 seconds ago
e9dhoud30nlk Mongo-Container.5 geekflare_mongodb:latest worker1 Running Running 44 seconds ago
If you don’t want your services to run on the manager node(s) and want to keep them only for managing the cluster, you can drain the manager node.
geekflare@manager1:~$ docker node update --availability drain manager1
manager1
Check the availability of the manager node.
geekflare@manager1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
lssbyfzuiuh3sye1on63eyixf * manager1 Ready Drain Leader 18.09.6
utdr3dnngqf1oy1spupy1qlhu worker1 Ready Active 18.09.6
xs6jqp95lw4cml1i1npygt3cg worker2 Ready Active 18.09.6
You will see the services are not running on the manager node anymore; they are spread across the worker nodes in the cluster.
geekflare@manager1:~$ docker service ps Mongo-Container
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
cd2rlv90umej Mongo-Container.1 geekflare_mongodb:latest worker2 Running Running 5 minutes ago
xukodj69h79q \_ Mongo-Container.1 geekflare_mongodb:latest worker1 Shutdown Failed 5 minutes ago "task: non-zero exit (137)"
qo405dheuutj Mongo-Container.2 geekflare_mongodb:latest worker1 Running Running 41 seconds ago
e66zllm0foc8 \_ Mongo-Container.2 geekflare_mongodb:latest manager1 Shutdown Shutdown 44 seconds ago
qmp0gqr6ilxi Mongo-Container.3 geekflare_mongodb:latest worker2 Running Running 3 minutes ago
9ddrf4tsvnu2 Mongo-Container.4 geekflare_mongodb:latest worker1 Running Running 3 minutes ago
e9dhoud30nlk Mongo-Container.5 geekflare_mongodb:latest worker1 Running Running 3 minutes ago
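Draining is reversible. To let the manager node run tasks again, set its availability back to active (note that tasks already rescheduled onto the workers will not move back automatically).

```shell
# Allow the manager node to receive new tasks again.
docker node update --availability active manager1

# A node can also be paused: running tasks stay, but no new ones are scheduled.
docker node update --availability pause manager1
```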
That was all about Docker Swarm and how to orchestrate containers in swarm mode. Try these commands out in a non-production environment to get an idea of how everything works.
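When you are done experimenting, you can clean up the service and dissolve the swarm:

```shell
# Remove the service and all of its tasks, on every node.
docker service rm Mongo-Container

# On each worker node: leave the swarm.
docker swarm leave

# On the last manager: --force is required, since leaving dissolves the swarm.
docker swarm leave --force
```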