In this post, we are going to delve into the theory and components of Docker networking. Generally, when we create a container, from a networking perspective it either needs to communicate with another container or it runs an application that needs to communicate with the internet. Docker networking essentially consists of the following three major components:
1 Container network model (CNM):
This is a design specification that outlines the fundamental building blocks of Docker networking.
2 libnetwork:
This is the real-world implementation of the CNM that Docker uses to connect containers. libnetwork is also responsible for service discovery, ingress-based container load balancing, and the network-management control-plane functionality.
3 Drivers:
libnetwork implements the CNM using drivers, which realize different network topologies. We'll now discuss the drivers used by libnetwork.
1 Bridge
This is the default driver used by libnetwork. A bridge is a link-layer device that forwards traffic between network segments. The bridge driver uses a software bridge that allows containers connected to the same bridge network to communicate with each other, while isolating them from containers not connected to that network. The bridge driver only works on Linux.
2 Host
If you use the host network mode for a container, that container's network stack is not isolated from the Docker host (the container shares the host's networking namespace), and the container does not get its own IP address allocated.
3 Overlay
Whenever we need a distributed network spanning multiple Docker hosts, we use the overlay network driver. A common example of this driver in use is Docker Swarm, where containers on different hosts must communicate.
4 Macvlan
The macvlan driver allows you to assign a MAC address to a container, making it appear as a physical device on the network. This driver is useful when running a legacy application, or an application that monitors network traffic, that is expected to be physically connected to the network.
5 None
This driver disables networking for a container and is typically used in conjunction with a custom network driver. Also, it's worth noting that this driver cannot be used with Docker Swarm services.
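You can verify the isolation directly: a container started with `--network none` gets only a loopback interface.

```shell
# With --network none the container has only a loopback interface:
# "ip -o link" lists lo but no eth0, and there is no external connectivity
docker run --rm --network none alpine:latest ip -o link
```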
6 Network plugins
libnetwork also allows us to use third-party network plugins. To see which plugins are available to you, visit Docker Hub.
We mentioned earlier that the container network model outlines the fundamental building blocks of Docker networking. We'll now look at each of those building blocks.
1 Sandbox
The sandbox isolates the container's network stack, including its network interfaces, routing tables, and DNS configuration.
2 Endpoints
Endpoints are virtual network interfaces. It is the endpoint's responsibility to connect the sandbox to a network, which is the third building block.
3 Networks
Networks are software implementations of an 802.1d bridge.
The diagram below illustrates how the different components we've discussed relate to containers.
If you look inside container A and container B, both have the sandbox component, which provides networking connectivity. Container A has a single endpoint (a virtual interface) connecting it to network A. Container B has two endpoints: one connected to network A and the other to network B. Containers A and B can communicate with each other through their endpoints on network A. The two endpoints on container B, however, are connected to separate networks and cannot communicate with each other unless a layer-3 router is involved. Since endpoints behave like real-world network adapters, each can be connected to only one network.
Although container A and container B are on the same Docker host, they have completely isolated network stacks.
This concludes our basic overview of Docker networking. In our next post, we’ll be exploring Docker network commands.