TCP system parameter settings

The environment here is CentOS 5.3 with kernel 2.6.18-128.el5PAE #1 SMP. Below we adjust a number of TCP parameters: some improve performance and load capacity at the risk of reduced stability, while others are security settings that may sacrifice some performance.

1. TCP keepalive (connection keep-alive) settings

echo 1800 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 15 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes

keepalive is TCP's keep-alive timer. Once a TCP connection has been established between two ends and has stayed idle (no data flowing in either direction) for tcp_keepalive_time seconds, the server kernel sends a probe packet to the client to check the state of the connection (the client may have crashed, its application may have been killed, the host may be unreachable, and so on). If no answer (ACK) comes back, another probe is sent after tcp_keepalive_intvl seconds, and this repeats until an ACK is received or tcp_keepalive_probes probes have been sent; with the values above, the probes go out 15 s, 30 s, 45 s, 60 s and 75 s after the idle timeout. If all tcp_keepalive_probes probes go unanswered, the TCP connection is dropped.
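As a quick sanity check (a rough sketch; note these timers only apply to sockets that have enabled the SO_KEEPALIVE option), the values can be read back and the worst-case detection time estimated:

cat /proc/sys/net/ipv4/tcp_keepalive_time /proc/sys/net/ipv4/tcp_keepalive_intvl /proc/sys/net/ipv4/tcp_keepalive_probes
# approximate worst case before the connection is dropped: keepalive_time + intvl * probes
echo $((1800 + 15 * 5))    # 1875 seconds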

2. SYN cookies settings

echo 0 > /proc/sys/net/ipv4/tcp_syncookies

In CentOS 5.3 the default value of this option is 1, i.e. the SYN cookies feature is enabled. The suggestion here is to keep it off during normal operation and only turn SYN cookies on once a SYN flood attack has been confirmed, at which point it is an effective defence. SYN flood traffic can also be dropped with iptables rules, as sketched below.
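A rough illustration of the iptables approach (hypothetical chain name and rate limits; tune them to your own traffic before relying on this):

iptables -N SYN_LIMIT                                                      # hypothetical chain for new SYN packets
iptables -A INPUT -p tcp --syn -j SYN_LIMIT
iptables -A SYN_LIMIT -m limit --limit 50/s --limit-burst 100 -j RETURN    # pass SYNs within the rate limit
iptables -A SYN_LIMIT -j DROP                                              # drop the excess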

3. TCP connection establishment settings

echo 8192 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 2 > /proc/sys/net/ipv4/tcp_syn_retries
echo 2 > /proc/sys/net/ipv4/tcp_synack_retries

tcp_max_syn_backlog is the length of the SYN queue, often called the half-open connection queue. The kernel maintains this queue to hold TCP connections in the SYN_RECV state, i.e. connection requests for which the client's acknowledgement (ACK) has not yet arrived. Increasing this value allows more pending connection requests to be queued.
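A rough way to judge whether this queue is under pressure is to count how many sockets are currently half-open (a point-in-time snapshot only):

netstat -ant | grep -c SYN_RECV    # number of connections currently in SYN_RECV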

tcp_syn_retries: establishing a new TCP connection requires sending a SYN packet. This value determines how many times the kernel retransmits the SYN before it gives up on establishing the connection. The default is 5; on a responsive, well-connected physical network it can be lowered to 2.

tcp_synack_retries: for an incoming SYN connection request, the kernel sends a SYN+ACK packet to acknowledge the SYN and then waits for the remote end's ACK. This value specifies how many times the kernel retransmits the SYN+ACK before giving up. The default is 5 and can be lowered to 2.

4. TCP connection teardown settings

echo 30 >  /proc/sys/net/ipv4/tcp_fin_timeout
echo 15000 > /proc/sys/net/ipv4/tcp_max_tw_buckets
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 1 >  /proc/sys/net/ipv4/tcp_tw_recycle

tcp_fin_timeout: when the local end closes a TCP connection actively, it sends a FIN. After the remote ACK arrives but before the remote FIN does, the connection sits in the FIN_WAIT_2 state. If at this point the remote application has been closed, the network has become unreachable (cable unplugged), the process cannot be interrupted, and so on, the local end would otherwise keep the connection in FIN_WAIT_2 indefinitely. tcp_fin_timeout specifies how long a FIN_WAIT_2 connection is kept; each such connection occupies at most about 1.5 KB of memory. The system default is 60 seconds, and it can be lowered to 30 or even 10 seconds.

tcp_max_tw_buckets: the maximum number of TIME_WAIT sockets the system handles at the same time. If the number of TIME_WAIT TCP connections exceeds this value, the excess sockets are cleared immediately and a warning message is printed. The limit exists mainly to defend against simple DoS attacks; raising it consumes more memory, and too many TIME_WAIT sockets can exhaust memory. The default is 180,000; it can be set to somewhere between 5000 and 30000.

tcp_tw_reuse: whether a TIME_WAIT TCP connection may be reused for a new TCP connection.

tcp_tw_recycle: whether to enable fast recycling of TIME_WAIT TCP connections.
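To see what the teardown states actually look like on the machine before and after tuning, this one-liner groups current TCP connections by state (TIME_WAIT, FIN_WAIT2, and so on):

netstat -ant | awk '/^tcp/ {state[$6]++} END {for (s in state) print s, state[s]}'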

5. TCP memory usage settings

echo 16777216 > /proc/sys/net/core/rmem_max
echo 16777216 > /proc/sys/net/core/wmem_max
cat /proc/sys/net/ipv4/tcp_mem
echo "4096 65536 16777216" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem

rmem_max defines the maximum size a receive buffer (and hence the receive window) may grow to; it can be sized according to the link's BDP (bandwidth-delay product), as in the example below.

wmem_max defines the maximum size a send buffer may grow to; it can likewise be sized according to the BDP.
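A hedged worked example, with assumed figures of a 1 Gbit/s path and a 100 ms round-trip time: the BDP is the bandwidth in bytes per second multiplied by the RTT in seconds.

# 1 Gbit/s ≈ 125000000 bytes/s, RTT = 0.1 s  =>  BDP ≈ 12500000 bytes (about 12 MB)
echo $((125000000 / 10))    # prints 12500000
# the 16777216 (16 MB) used for rmem_max/wmem_max above leaves headroom for such a path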

tcp_mem [low, pressure, high]: TCP uses these three values (counted in memory pages) to track its overall memory usage and limit resource consumption. Normally the kernel computes them at boot from the total available memory. If "Out of socket memory" errors appear, this is a parameter worth adjusting; a quick way to check current usage is shown after the list.

1) low: while TCP is using fewer memory pages than this value, it makes no attempt to release memory.

2) pressure: when TCP uses more memory pages than this value, it enters memory-pressure mode and tries to stabilise its memory usage; it leaves this mode once consumption drops back below the low value.

3) high: the number of memory pages that all TCP sockets together are allowed to use for queuing buffered datagrams.
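Current usage can be compared against these limits directly from /proc; the mem figure on the TCP line of sockstat is a page count, and the page size is typically 4096 bytes:

cat /proc/net/sockstat                  # "mem" on the TCP line = pages currently in use by TCP
getconf PAGESIZE                        # page size in bytes, usually 4096
cat /proc/sys/net/ipv4/tcp_mem          # the low/pressure/high limits, also in pages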

tcp_rmem [min, default, max]

1) min: the amount of receive-buffer memory guaranteed to each TCP connection (TCP socket); even under memory pressure a TCP socket keeps at least this much receive buffer.

2) default: the default receive-buffer size for a TCP socket; for TCP it takes precedence over the rmem_default value used by other protocols.

3) max: the maximum receive-buffer size an individual TCP connection (TCP socket) can be automatically tuned to. It does not override rmem_max, and a socket that sets the SO_RCVBUF option explicitly is not governed by this automatic tuning.

tcp_wmem [min, default, max]: same as tcp_rmem above, but for the send buffers (with wmem_max and SO_SNDBUF playing the corresponding roles).

Note:

1) These settings can be applied at runtime with sysctl -w (or by echoing into /proc as above) and made permanent by writing them to /etc/sysctl.conf, as sketched below.

2) Tune only when there is a real need, and compare the data collected after tuning against the baseline data. Do not adjust these parameters blindly.
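For reference, a persistent version of the values used in this article could look like the following in /etc/sysctl.conf (the dotted names map directly to the /proc/sys paths above):

net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_tw_buckets = 15000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 65536 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216

Apply the file with:

sysctl -p    # reload /etc/sysctl.conf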

Reprinted from: https://www.cnblogs.com/zengkefu/p/5635088.html
