How to improve Netty throughput

Recently I wrote a set of proxy services with Netty and found that throughput was terrible: downloads ran at only about 200 KB/s, even though the proxy server's actual bandwidth is about 100 Mbit/s. After a long search I finally found the cause: the Netty TCP options SO_SNDBUF and SO_RCVBUF were set too small (32 KB). After raising them to 2 MB, throughput returned to normal.

Source code (CDN supported, stars welcome): https://github.com/zhining-lu/netty-websocket-proxy
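For context, here is a minimal sketch of the change, assuming a standard Netty NIO server bootstrap (the port and handler wiring are placeholders, not the project's actual code):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class ProxyServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, worker)
             .channel(NioServerSocketChannel.class)
             // Raise the kernel socket buffers from the 32K that throttled
             // throughput to 2M on every accepted connection.
             .childOption(ChannelOption.SO_SNDBUF, 2 * 1024 * 1024)
             .childOption(ChannelOption.SO_RCVBUF, 2 * 1024 * 1024)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // add proxy handlers here
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}
```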

What SO_SNDBUF and SO_RCVBUF mean

  • SO_SNDBUF: the upper limit on the capacity of the TCP send buffer;
  • SO_RCVBUF: the upper limit on the capacity of the TCP receive buffer;

Note: the cap cannot be raised arbitrarily. If the value you set exceeds the limit configured in the kernel, the kernel's value prevails (check it with sysctl -a; in each tcp_rmem/tcp_wmem triple below the values are min, default and max, in bytes):

net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.tcp_mem = 8388608 12582912 16777216
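To see the clamping from Java directly, here is a minimal sketch using plain java.net.Socket (the same SO_SNDBUF/SO_RCVBUF options Netty passes down; the class name is illustrative):

```java
import java.net.Socket;
import java.net.SocketException;

public class BufferProbe {
    public static void main(String[] args) throws SocketException {
        Socket socket = new Socket();
        // Request 2 MB buffers; the kernel may adjust or clamp the values
        // to its own limits, so always read the effective size back.
        socket.setReceiveBufferSize(2 * 1024 * 1024);
        socket.setSendBufferSize(2 * 1024 * 1024);
        System.out.println("effective SO_RCVBUF = " + socket.getReceiveBufferSize());
        System.out.println("effective SO_SNDBUF = " + socket.getSendBufferSize());
    }
}
```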

What is the relationship with actual memory usage?

  • SO_SNDBUF and SO_RCVBUF only cap the read and write buffer sizes; until actual usage reaches the cap, they have no effect.
  • The memory occupied by a TCP connection is essentially the sum of the memory actually occupied by its read and write buffers.

What is the relationship with the sliding window?

Receive buffer and receive sliding window

The receive buffer contains the sliding window, that is, receive buffer size >= sliding window size. The data in the receive buffer falls into two parts:

  1. Out-of-order TCP segments that fall within the receive sliding window;
  2. In-order data that the application has not yet read (this part may occupy at most a fraction 1/(2^tcp_adv_win_scale) of the buffer; tcp_adv_win_scale defaults to 2);

Therefore, with the receive buffer cap fixed, if the application reads data too slowly, the receive sliding window shrinks, which tells the peer to reduce its send rate and avoids wasted network transmission.
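For instance, with the default tcp_adv_win_scale = 2 and a 2 MB receive buffer, up to 1/(2^2) = 1/4 of the buffer (512 KB) may be held by in-order data the application has not read yet, leaving at most 3/4 × 2 MB = 1.5 MB to advertise as the receive window.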


Send buffer and send sliding window

The send buffer contains the send sliding window, that is, send buffer size >= send sliding window size. The data in the send buffer falls into two parts:

  1. Data inside the send window: sent but not yet acknowledged (see the sketch after this list);
  2. Data written by the application but not yet sent;
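On the Netty side the same back-pressure shows up as channel writability: when the kernel send buffer and Netty's outbound queue fill up, the channel turns unwritable. A hedged sketch of reacting to that (the watermark values are illustrative, not from the original project):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Configure watermarks on the bootstrap (values are illustrative):
// bootstrap.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
//         new WriteBufferWaterMark(512 * 1024, 2 * 1024 * 1024));

public class BackPressureHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        if (ctx.channel().isWritable()) {
            // Outbound buffer drained below the low watermark: resume reading.
            ctx.channel().config().setAutoRead(true);
        } else {
            // Outbound buffer above the high watermark: pause reading on this
            // channel (in a proxy you would pause the peer channel) until the
            // send side drains.
            ctx.channel().config().setAutoRead(false);
        }
        ctx.fireChannelWritabilityChanged();
    }
}
```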


Buffer size estimation

#### Estimating the maximum receive window size
The maximum receive window is usually set from the BDP, the bandwidth-delay product: bandwidth multiplied by network delay. The BDP represents the network's carrying capacity, and the maximum receive window represents the amount of data that can be sent without acknowledgement within that capacity.


#### Calculating the receive buffer size
The receive window occupies a proportion 1 - 1/(2^tcp_adv_win_scale) of the buffer, so the buffer cap is the maximum receive window divided by that proportion.

Example: with 2 Gbit/s of bandwidth and 10 ms of delay, the bandwidth-delay product is BDP = 2G/8 × 0.01 = 2.5 MB, so on such a network the maximum receive window can be set to 2.5 MB. With tcp_adv_win_scale = 2, the maximum receive buffer then comes to 4/3 × 2.5 MB ≈ 3.3 MB.
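The same arithmetic as a small, self-contained Java sketch (the class name is mine; the constants are just the article's example values):

```java
public class BdpExample {
    public static void main(String[] args) {
        double bandwidthBps = 2e9;   // 2 Gbit/s link
        double delaySeconds = 0.01;  // 10 ms network delay
        int tcpAdvWinScale = 2;      // kernel default

        // BDP = bandwidth in bytes/s * delay: the data the path holds in flight.
        double bdpBytes = bandwidthBps / 8 * delaySeconds;   // 2.5 MB

        // The receive window occupies 1 - 1/2^tcp_adv_win_scale of the buffer,
        // so buffer = window / (3/4) = 4/3 * window when the scale is 2.
        double windowShare = 1.0 - 1.0 / (1 << tcpAdvWinScale);
        double bufferBytes = bdpBytes / windowShare;         // ~3.3 MB

        System.out.printf("max receive window = %.1f MB, receive buffer = %.1f MB%n",
                bdpBytes / 1e6, bufferBytes / 1e6);
    }
}
```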

 

Origin: blog.csdn.net/qq_32445015/article/details/105643404