Analyzing Container Network Communication
Containers are widely used for their light weight and isolation. This post uses packet captures to analyze how a container's network communication actually works.
First, the host environment:
lsb_release -a
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 10 (buster)
Release: 10
Codename: buster
Yes, it's a Raspberry Pi.
Check the Docker version installed on the Pi:
docker --version
Docker version 19.03.3, build a872fc2
Let's start a container and enter its shell:
docker run -it busybox /bin/sh
Then list the network interfaces inside the container:
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
lo is the local loopback, which needs no explanation. eth0 is an Ethernet interface, and it is exactly what we want to discuss: eth0 is the interface the container uses to talk to the outside world. How does it actually work?
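A first clue is the container's routing table. On a default Docker bridge network it typically looks like this (run inside the container; the addresses are from this session):

ip route
# default via 172.17.0.1 dev eth0
# 172.17.0.0/16 dev eth0 scope link  src 172.17.0.2

Everything non-local is handed to 172.17.0.1, which, as we are about to see, is an address on the host.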
We know the container runs on a host, so all of its network behavior must involve that host. Let's look at the host's network interfaces first:
ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::ed99:9e76:f8a1:39f prefixlen 64 scopeid 0x20<link>
ether 02:42:82:79:80:c9 txqueuelen 0 (Ethernet)
RX packets 4 bytes 634 (634.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 48 bytes 9408 (9.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.43.116 netmask 255.255.255.0 broadcast 192.168.43.255
inet6 fe80::edab:19a:ebcc:c66f prefixlen 64 scopeid 0x20<link>
inet6 240e:470:320:9301:8722:7c63:7135:9414 prefixlen 64 scopeid 0x0<global>
ether dc:a6:32:1b:92:2a txqueuelen 1000 (Ethernet)
RX packets 5419 bytes 652998 (637.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3688 bytes 686046 (669.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 156338 bytes 42863938 (40.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 156338 bytes 42863938 (40.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
......
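Before going further, the host's routing table shows how these interfaces divide the work (the gateway 192.168.43.1 is an assumption about my hotspot's address; yours will differ):

ip route show
# default via 192.168.43.1 dev wlan0                                  <- uplink to the outside
# 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
# 192.168.43.0/24 dev wlan0 proto kernel scope link src 192.168.43.116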
Here we see a device named docker0. Docker has a client/server architecture, and by default the Docker daemon creates a device called docker0 on the host. It functions as a bridge, a virtual one of course, so all traffic entering and leaving containers passes through docker0.
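It's easy to confirm that docker0 really is a Linux bridge, and to see which ports are plugged into it (device names vary from session to session):

ip -d link show docker0        # the detailed output includes a bridge section
ip link show master docker0    # interfaces currently enslaved to the bridge
docker network inspect bridge  # the same picture from Docker's side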
Let's run a small experiment and send three pings from the container to the host:
ping -c3 192.168.43.116
What happens on docker0?
➜ ~ sudo tcpdump -nn -i docker0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:18:05.256980 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 2816, seq 0, length 64
10:18:05.257082 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 2816, seq 0, length 64
10:18:06.257362 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 2816, seq 1, length 64
10:18:06.257439 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 2816, seq 1, length 64
10:18:07.257661 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 2816, seq 2, length 64
10:18:07.257710 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 2816, seq 2, length 64
We can see the container at 172.17.0.2 sending three ICMP packets to 192.168.43.116. Just as the official documentation describes, the container's traffic really does pass through docker0. But docker0 is a bridge, and to plug a container into a bridge you need some kind of network cable. That cable is of course virtual, so where is it? This is where another concept comes in: veth (Virtual Ethernet Device).
The veth devices are virtual Ethernet devices. They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but can also be used as standalone network devices.
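To see that quote in action outside of Docker, we can build a veth pair by hand and stretch it into a fresh network namespace (a minimal sketch; the names ns-demo, veth-a, veth-b and the 10.0.0.0/24 addresses are invented for the demo; run as root):

ip netns add ns-demo                           # a new, empty network namespace
ip link add veth-a type veth peer name veth-b  # one cable, two ends
ip link set veth-b netns ns-demo               # push one end into the namespace
ip addr add 10.0.0.1/24 dev veth-a
ip link set veth-a up
ip netns exec ns-demo ip addr add 10.0.0.2/24 dev veth-b
ip netns exec ns-demo ip link set veth-b up
ping -c1 10.0.0.2                              # the reply crosses the namespace boundary
ip netns del ns-demo                           # clean up

Docker does essentially the same thing for every container it attaches to the bridge network.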
That, then, is the official explanation. We can understand it this way (purely as an aid to understanding): a veth pair is a virtual network cable with two ends. The container's eth0@if11 is one end; the other end sits on the host, and the 11 is the interface index of that peer device. Looking it up on the Raspberry Pi host:
11: vethc426d5b@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 82:6c:38:bf:17:cf brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet 169.254.8.145/16 brd 169.254.255.255 scope global noprefixroute vethc426d5b
valid_lft forever preferred_lft forever
inet6 fe80::e079:e372:5724:fe10/64 scope link
valid_lft forever preferred_lft forever
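The @if10 suffix is the other half of the pairing: this host device's peer is interface index 10, the container's eth0. If ethtool is installed, the pairing can be confirmed directly (a quick sanity check, not required for what follows):

sudo ethtool -S vethc426d5b
# NIC statistics:
#      peer_ifindex: 10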
In the same way, let's capture on the host's vethc426d5b device to verify:
➜ ~ sudo tcpdump -nn -i vethc426d5b icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vethc426d5b, link-type EN10MB (Ethernet), capture size 262144 bytes
10:37:24.832645 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 3328, seq 0, length 64
10:37:24.832906 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 3328, seq 0, length 64
10:37:25.833154 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 3328, seq 1, length 64
10:37:25.833256 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 3328, seq 1, length 64
10:37:26.833491 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 3328, seq 2, length 64
10:37:26.833615 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 3328, seq 2, length 64
This proves the traffic passes through vethc426d5b. Now let's verify both hops together. A single tcpdump invocation captures on only one interface, so I listen for ICMP on vethc426d5b and on docker0 at the same time, then run this in the container:
ping -Ieth0 -c1 192.168.43.116
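On the host side, one way to run and then merge the two captures (a sketch; stop the background jobs once the ping has finished):

sudo tcpdump -nn -l -i vethc426d5b icmp > veth.txt &
sudo tcpdump -nn -l -i docker0     icmp > br.txt &
# ... run the ping inside the container, then stop both captures ...
kill %1 %2
# tag each line with its interface, then sort on the timestamp field
{ sed 's/^/vethc426d5b /' veth.txt; sed 's/^/docker0     /' br.txt; } | sort -k2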
The merged, time-ordered result:
vethc426d5b 11:35:33.151414 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 8704, seq 0, length 64
docker0 11:35:33.151545 IP 172.17.0.2 > 192.168.43.116: ICMP echo request, id 8704, seq 0, length 64
docker0 11:35:33.151657 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 8704, seq 0, length 64
vethc426d5b 11:35:33.151684 IP 192.168.43.116 > 172.17.0.2: ICMP echo reply, id 8704, seq 0, length 64
This data nicely demonstrates the path an ICMP packet takes out of the container: the container's eth0 -> vethc426d5b -> docker0. Communication between the container and its host is now clear, but how does the container talk to the outside world?
Virtual devices always have to ride on a physical one. My Raspberry Pi is on Wi-Fi, so the physical interface is wlan0. Again, send a packet from inside the container:
ping -Ieth0 -c1 cn.bing.com
and watch the traffic on wlan0:
➜ ~ sudo tcpdump -nn -i wlan0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wlan0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:15:24.591014 IP 192.168.43.116 > 202.89.233.101: ICMP echo request, id 16384, seq 0, length 64
Meanwhile, the traffic on docker0:
➜ ~ sudo tcpdump -nn -i docker0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:15:24.590912 IP 172.17.0.2 > 202.89.233.101: ICMP echo request, id 16384, seq 0, length 64
This makes it easy to confirm that traffic from inside the container flows from docker0 out through wlan0. Notice one more detail: on docker0 the source address is still 172.17.0.2, but on wlan0 it has become the host's 192.168.43.116. The host rewrote the source address on the way out, via an iptables MASQUERADE (source NAT) rule that Docker installs, so that replies from the outside can find their way back.
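On a default installation the rule looks roughly like this (a read-only check; the output shown is typical):

sudo iptables -t nat -S POSTROUTING
# -P POSTROUTING ACCEPT
# -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE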
With that, we can draw a simple diagram summarizing all the captures above:
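A text sketch of the path (addresses are from this session):

container eth0 (172.17.0.2)
        |  veth pair
vethc426d5b
        |
docker0 (172.17.0.1, Linux bridge)
        |  iptables MASQUERADE (source NAT)
wlan0 (192.168.43.116)
        |
external network (e.g. 202.89.233.101)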
At this point we have a basic picture of a container's network behavior and where its traffic flows, one that cross-checks nicely with the description in Docker's official documentation.