Test network bandwidth using iperf and iftop


Sometimes we need to know exactly what upstream and downstream bandwidth a server can achieve in its current network environment. This matters when checking whether an upload or download job is fully utilizing the available bandwidth. Taking EC2 instances in an AWS VPC as an example, this article shows how to measure the maximum bandwidth an EC2 node can actually reach. We will use the m5.4xlarge instance type. According to the official AWS documentation (https://aws.amazon.com/ec2/instance-types/m5/), the maximum bandwidth for this instance type is 10 Gbps. We will launch two m5.4xlarge instances and use iperf to measure the maximum upload and download rates between them.

1. Two-machine test on the internal network

If you have two machines on the same internal network, the most accurate test method is to run one as a server and have the other act as a client sending packets to it. This isolates the measurement from the bandwidth of any external network link, and it is exactly how iperf works.

1.1. Install iperf

iperf can be installed through yum, provided that the EPEL repo has been enabled. Before installing, run:

yum repolist

to confirm whether the EPEL repo is present on the current OS. If not, install it first with the following commands:

# for centos 7
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# for centos 6
wget https://dl.fedoraproject.org/pub/archive/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo yum -y install ./epel-release-*.noarch.rpm

Then you can use yum to install iperf:

sudo yum -y install iperf

1.2. Start the iperf server

iperf uses a client/server architecture: the client sends a large volume of data to the server to measure network throughput. You therefore need to start the server process (listening on port 5001 by default) on one machine and send data from another machine acting as the client. Start the server with:

iperf -s
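The plain `iperf -s` is usually all you need, but a few server-side variations are common. The flags below are standard iperf 2 options; port 5002 is just an arbitrary example:

```shell
# Listen on a custom port instead of the default 5001
# (the client must then connect with the same -p value):
iperf -s -p 5002

# Run the server as a background daemon:
iperf -s -D

# Accept UDP traffic instead of TCP (pair with a client using -u):
iperf -s -u
```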

1.3. Test speed from the iperf client

After the server side is started, log in to the other server and run the following three commands in turn to send packets to the server and view the speed-test reports. The -P parameter specifies the number of parallel threads, which is essential for measuring the maximum network speed:

iperf -c <server-ip>
iperf -c <server-ip> -P 2
iperf -c <server-ip> -P 3
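If you want to grab the headline number from a report programmatically, the bandwidth figure sits in the last two fields of the summary line (the `[SUM]` line in multi-thread runs). A minimal sketch, assuming iperf 2's report format; the sample line below is hypothetical:

```shell
# Extract the aggregate bandwidth from an iperf client report line.
# In a multi-thread run, the line tagged [SUM] carries the combined rate.
report='[SUM]  0.0-10.0 sec  11.6 GBytes  9.92 Gbits/sec'
echo "$report" | awk '{print $(NF-1), $NF}'
```

In practice you would pipe the live iperf output through `grep SUM` before the awk step.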

The command execution results are as follows:

(screenshot: iperf client reports for the three test rounds)

From the above three rounds of speed test reports, we can draw the following conclusions:

  1. The first round measured 4.97 Gbps. Is this the maximum network speed? Clearly not!
  2. After increasing the thread count to 2 and retesting, we got 9.92 Gbps, double the previous round. Is this the maximum? Not necessarily; add one more thread to see the trend.
  3. After increasing the thread count to 3 and retesting, we got 9.93 Gbps, which shows that 9.9+ Gbps is the practical bandwidth limit of this server on this network.

This conclusion is consistent with the figure in the AWS documentation: the bandwidth limit of the m5.4xlarge instance type is 10 Gbps (about 1.25 GB of data per second).
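As a quick sanity check on that figure, converting a link rate in Gbit/s to bytes per second is just a division by 8:

```shell
# Convert a link rate in Gbit/s to MB/s (decimal megabytes):
# 10 Gbit/s = 10 * 1000 / 8 = 1250 MB/s, i.e. about 1.25 GB/s.
awk 'BEGIN { gbps = 10; printf "%.0f MB/s\n", gbps * 1000 / 8 }'
```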

The tests above measure one-way transmission: the client sends data to the server. In a real network environment, communication is bidirectional. To test two-way bandwidth, iperf lets the client simultaneously act as a receiver for data sent back by the server, simulating the download traffic that accompanies an upload. To do this, add the -d flag to enable bidirectional transmission and give the client a listening port on which to receive data (set with -L). Again run three sets of commands:

iperf -c <server-ip> -d -L 5002
iperf -c <server-ip> -d -L 5002 -P 2
iperf -c <server-ip> -d -L 5002 -P 3

The command execution results are as follows:

(screenshot: bidirectional iperf reports for the three test rounds)

From the above three rounds of speed test reports, we can draw the following conclusions:

  1. In duplex mode, the bandwidth is split roughly in half between the two directions.
  2. The upload and download rates together add up to the total bandwidth.
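The "split in half" observation is easy to sanity-check with the numbers from the one-way test: with a total of roughly 9.9 Gbit/s, each direction gets about half:

```shell
# If the full-duplex total equals the one-way limit (~9.9 Gbit/s),
# each direction is allocated roughly half of it.
awk 'BEGIN { total = 9.9; printf "%.2f Gbit/s per direction\n", total / 2 }'
```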

2. Self-test on a single machine

If you cannot run a two-machine server/client speed test, or you only care about the overall upload and download rates of the current machine, then iftop is a better fit. It accurately reflects the server's overall upload and download speeds.

2.1. Install iftop

Like iperf, iftop can be installed through yum, provided that the EPEL repo has been enabled. Before installing, run:

yum repolist

to confirm whether the EPEL repo is present on the current OS. If not, install it first with the following commands:

# for centos 7
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# for centos 6
wget https://dl.fedoraproject.org/pub/archive/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo yum -y install ./epel-release-*.noarch.rpm

Then you can use yum to install iftop:

sudo yum -y install iftop

2.2. Start iftop

Run:

sudo iftop
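By default iftop resolves host names and shows rates in bits. A few flags (all standard iftop options) make the display easier to read during a test like the one above; `eth0` below is a placeholder for your actual interface name:

```shell
# Monitor a specific interface, skip DNS resolution, show port
# numbers, and display rates in bytes instead of bits:
sudo iftop -i eth0 -n -P -B
```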

You will see the following interface:
(screenshot: the iftop interface)
This is the iftop output captured on the client machine while we were running the iperf stress test. Here is what the information at each position means:

  • The first line is the bandwidth scale; the marks beneath it indicate the bandwidth currently occupied by each connection's real-time traffic (in our test, the scale displayed at most 1.79 Gbps).
  • The middle section lists all connections. Host names are shown by default (IP addresses can be shown via a command-line flag), and the arrows indicate the direction of data flow.
  • The three columns on the right of the middle section are each connection's average traffic over the last 2 s, 10 s, and 40 s.
  • The three lines at the bottom show the transmitted, received, and total traffic respectively.
  • In those three bottom lines, the second column is the cumulative traffic since iftop started.
  • The third column is the peak rate.
  • The fourth column holds the 2 s, 10 s, and 40 s averages.

Appendix: iperf command parameters

  • Common parameters
-f, --format [bkmaBKMA]    # Format the bandwidth output (bits/bytes, with K/M prefixes).
-i, --interval #           # Interval in seconds between periodic reports. A non-zero value prints a report
                           # at each interval; the default is zero.
-l, --len #[KM]            # Length of the read/write buffer. Default is 8 KB for TCP and 1470 bytes for UDP.
-m, --print_mss            # Print the TCP MSS value (reported via TCP_MAXSEG). The MSS is usually 40 bytes
                           # smaller than the MTU.
-p, --port #               # Port to use; must match the server's listening port. Default is 5001 (same as ttcp).
-u, --udp                  # Use UDP instead of TCP. See the -b option.
-w, --window #[KM]         # Set the socket buffer size. For TCP this is the TCP window size.
                           # For UDP it is the receive buffer size, limiting the largest receivable datagram.
-B, --bind host            # Bind to one of the host's addresses. For the client this sets the outbound
                           # interface; for the server it sets the inbound interface. Only useful on
                           # multi-homed hosts. In iperf's UDP mode this option binds and joins a multicast
                           # group, using multicast addresses in the range 224.0.0.0 to 239.255.255.255.
                           # See the -T option.
-C, --compatibility        # Compatibility mode for interoperating with older iperf versions. Both ends do not
                           # have to use it, but doing so is strongly recommended. In some cases certain data
                           # streams can crash a 1.7 server or trigger unexpected connection attempts.
-M, --mss                  # Attempt to set the MSS (MTU minus 40 bytes of headers). On Ethernet the MSS is
                           # 1460 bytes (MTU 1500 bytes). Many operating systems do not support this option.
-N, --nodelay              # Set the TCP no-delay option, disabling Nagle's algorithm. Normally this is only
                           # done for interactive applications such as telnet.
-V (from v1.6 or higher)   # Bind to an IPv6 address. Server: $ iperf -s -V  Client: $ iperf -c -V
                           # Note: from 1.6.3 on, specifying an IPv6 address no longer requires binding with
                           # -B; earlier versions do. On most operating systems this will also respond to
                           # IPv4 clients using mapped IPv4 addresses.
  • Server-side specific parameters
-s, --server                       # Run iperf in server mode.
-D (v1.2 or higher)                # Run iperf as a background daemon on Unix. On Win32, run iperf as a service.
-R (v1.2 or higher, Windows only)  # Remove the iperf service (if it is running).
-o (v1.2 or higher, Windows only)  # Redirect output to the specified file.
-c, --client host                  # If iperf runs in server mode and -c specifies a host, iperf only accepts
                                   # connections from that host. This option does not work in UDP mode.
-P, --parallel #                   # Number of connections to handle before the server shuts down. Default is 0,
                                   # meaning accept connections forever.
  • Client-specific parameters
-b, --bandwidth #[KM]        # Bandwidth to use in UDP mode, in bits/sec. Related to the -u option.
                             # Default is 1 Mbit/sec.
-c, --client host            # Run iperf in client mode, connecting to the specified iperf server.
-d, --dualtest               # Run in dual-test mode. The server connects back to the client on the port given
                             # by -L (or, by default, the port the client used to connect to the server).
                             # Both directions run at the same time. For a sequential test, try -r.
-n, --num #[KM]              # Number of buffers to transmit. Normally iperf sends for 10 seconds; -n overrides
                             # this and sends the specified amount of data regardless of how long it takes.
                             # See the -l and -t options.
-r, --tradeoff               # Trade-off (round-trip) mode. When the client-to-server test ends, the server
                             # connects back to the client on the port given by -L (or, by default, the port
                             # the client used), starting as soon as the client connection terminates.
                             # For a simultaneous bidirectional test, try -d.
-t, --time #                 # Total transmission time. iperf repeatedly sends packets of the specified length
                             # for this many seconds. Default is 10 seconds. See the -l and -n options.
-L, --listenport #           # Port on which the server connects back to the client. Defaults to the port the
                             # client used to connect to the server.
-P, --parallel #             # Number of parallel threads between client and server. Default is 1 thread.
                             # Both client and server must use this option.
-S, --tos #                  # Type-of-service for outbound packets. Many routers ignore the TOS field. Specify
                             # the value as hex prefixed with "0x", octal prefixed with "0", or decimal.
                             # E.g. hex '0x10' = octal '020' = decimal '16'. The TOS values are:
                             # IPTOS_LOWDELAY (minimize delay) 0x10, IPTOS_THROUGHPUT (maximize throughput) 0x08,
                             # IPTOS_RELIABILITY (maximize reliability) 0x04, IPTOS_LOWCOST (minimize cost) 0x02.
-T, --ttl #                  # TTL of outbound multicast packets. Essentially the number of router hops the data
                             # may traverse. Default is 1, link-local.
-F (from v1.2 or higher)     # Measure bandwidth using a specific data stream, e.g. a given file. $ iperf -c -F
-I (from v1.2 or higher)     # Like -F, but reads the data from standard input.

Origin blog.csdn.net/bluishglc/article/details/134896926