Docker Swarm Overlay Network Test

East-West Traffic Bandwidth Test & a Solution to the Bandwidth Limitation




SwiftWave is a self-hosted, lightweight PaaS solution for deploying and managing your applications on any VPS without any hassle 👀

For container management, SwiftWave uses Docker Swarm, and we are rearchitecting it so that SwiftWave can be set up across multiple nodes.

We are using HAProxy to handle ingress traffic, and HAProxy is available only on specific nodes. So internally, there will be a lot of east-west traffic.

That's why we are testing it, and we will try to fix any issues as well.

Lab Setup

We have chosen AWS to test this.

We created 2 EC2 instances (t2.micro: 1 GB memory, 1 vCPU), installed Docker, and set up Docker Swarm.
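For reference, the swarm setup itself is only two commands (a sketch; `<manager_private_ip>` is a placeholder for the first instance's private IP):

```shell
# On the first instance (this one becomes the swarm manager)
sudo docker swarm init --advertise-addr <manager_private_ip>

# The command above prints a ready-made `docker swarm join --token ...`
# command; run that on the second instance to add it as a worker
```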

For bandwidth testing, we chose to use iperf3.
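iperf3 is not preinstalled on most AMIs; assuming Ubuntu/Debian-based instances, it can be installed from the distro repositories:

```shell
# On both instances
sudo apt-get update && sudo apt-get install -y iperf3
```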

To test bandwidth, start the iperf3 server on one host with:

iperf3 -s

On the client side, we need to run

iperf3 -c <ip_of_iperf3_server>

That's all. This will show us the maximum network bandwidth available.
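If you want the number in a script rather than on screen, iperf3 can emit JSON with `-J`. A minimal sketch of pulling the received bitrate out of that output with POSIX tools (the JSON string below is a canned, trimmed sample, not real test output; on a real run you would pipe `iperf3 -c <server> -J` instead):

```shell
# Canned, trimmed sample of `iperf3 -c <server> -J` output (hypothetical values)
json='{"end":{"sum_received":{"bits_per_second":941000000.0}}}'

# Extract bits_per_second and convert to Mbit/s (no jq required)
bps=$(printf '%s' "$json" | sed -n 's/.*"bits_per_second":\([0-9.]*\).*/\1/p')
awk -v bps="$bps" 'BEGIN { printf "%.0f Mbit/s\n", bps / 1e6 }'   # prints: 941 Mbit/s
```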

Bandwidth Test between Hosts

Before proceeding with the bandwidth test between containers, we test the bandwidth between the hosts themselves.

As you can see, we are getting around ~1 Gbit/sec on upload and download.

As per the AWS documentation, t2.micro instances have a network bandwidth of around ~1 Gbps.

So, we are getting the full bandwidth.

Bandwidth Test between Containers

Now, we will test the bandwidth between two containers (each container must be on a separate host) on the overlay network in the Docker swarm.

  1. Let's create an attachable overlay network first.

    On the swarm manager node run,

     sudo docker network create --driver overlay --attachable swarm_test_network
  2. Now run the iperf3 server in a container on each host, connected to the overlay network. We are using the networkstatic/iperf3 Docker image for this.

     sudo docker service create --network swarm_test_network --mode global networkstatic/iperf3 -s
  3. Get the IP of the container running the iperf3 server on either host.
    - First, list the running containers with sudo docker ps

    - Then inspect the container for the IP address

    We got the IP address.

  4. Now, let's move to the second host and go inside a container to run the test.

  5. Here is the test result
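Step 3's inspection can be done in one line with a Go template (a sketch; the container ID comes from `sudo docker ps`):

```shell
# Print the container's IP address on each network it is attached to
# (here, that is the swarm_test_network overlay)
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container_id>
```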

We can observe that the maximum bandwidth is ~340 Mbit/sec, almost 1/3 of the actual bandwidth 🥲.

Troubleshoot Slow Network Bandwidth

At first, we thought it might be a limitation of the overlay network, since it's a Layer 2 network. But the overlay (virtual) network is built on top of tun/tap interfaces and iptables, so we should be able to achieve the full bandwidth on the overlay as well.

In the end, we found the issue 😅.

After inspecting the main eth0 interface (the network interface to the AWS internal network on the instance's subnet) and the Docker-created network interfaces, we found the issue.

eth0 interface details

Any docker interface details

The eth0 interface has an MTU of 9001, but the docker0 interface has an MTU of only 1500, which is too low; roughly 1/6 of eth0's.
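The gap matters because every full-size payload leaving the overlay has to be split to fit the smaller MTU. A quick back-of-the-envelope check (the VXLAN overhead figures here are standard, not from the original test):

```shell
# VXLAN encapsulation overhead per packet:
# outer Ethernet (14) + outer IP (20) + outer UDP (8) + VXLAN header (8) = 50 bytes
underlay_mtu=9001
vxlan_overhead=50
overlay_mtu=$((underlay_mtu - vxlan_overhead))
echo "largest overlay MTU that avoids fragmentation: $overlay_mtu"   # 8951

# With the default 1500-byte MTU, one max-size payload is split into several
# packets, each paying header and per-packet processing costs
packets=$(( (overlay_mtu + 1499) / 1500 ))
echo "packets per max-size payload at MTU 1500: $packets"            # 6
```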


Let's update the MTU of our overlay network.

We deleted the old swarm_test_network and created a new network:

docker network create --driver overlay \
  --attachable \
  --opt com.docker.network.driver.mtu=8951 \
  swarm_test_network

(com.docker.network.driver.mtu is the driver option that sets the interface MTU; 8951 leaves room for VXLAN's 50-byte encapsulation overhead within the 9001-byte underlay MTU.)
After this, we followed the same steps discussed above to start the containers and run the test.
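Before re-running the test, it's worth confirming that the container's interface actually picked up the new MTU (a sketch; the container ID comes from `sudo docker ps`):

```shell
# Should print the overlay network's MTU rather than the old 1500
sudo docker exec <container_id> cat /sys/class/net/eth0/mtu
```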

Result -

Now, we are getting the full bandwidth on the overlay network 🔥🔥.

Final Test

We have already found the solution (set the maximum MTU on the Docker interfaces). Now we want to test it with higher network bandwidth.

This time, we created two EC2 instances with 1 GB memory, 1 vCPU, and a maximum of 5 Gbps network bandwidth [t3.micro instances].

This is the test result.

In the host-to-host bandwidth test, we got 4.5 Gbit/sec. So, between the containers over the swarm network, a bandwidth of 4.35 Gbit/sec is a good number.
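Relative to the host-to-host number, that works out to (figures taken from the test above):

```shell
# 4.35 Gbit/s container-to-container vs 4.5 Gbit/s host-to-host
awk 'BEGIN { printf "%.1f%% of host-to-host bandwidth\n", 4.35 / 4.5 * 100 }'
# prints: 96.7% of host-to-host bandwidth
```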

🥳🥳 We are getting almost the full bandwidth.

If you have read the blog till the end, subscribe to the newsletter for more updates.

Make sure to star the SwiftWave project on GitHub to keep it growing.