Linux tuning to meet the requirements of 1 million concurrent connections

First, you need to use tools to test whether the operating system supports 1 million connections.

Tool address: https://github.com/ideawu/c1000k

The following commands are for CentOS 7.4.

Installation

wget --no-check-certificate https://github.com/ideawu/c1000k/archive/master.zip
unzip master.zip
cd c1000k-master
make

First test result

After compilation completes, run the following commands to test:

# Start the server, listening on ports 7000-7099
./server 7000
# Run the client
./client 127.0.0.1 7000

The output is as follows:

.....
server listen on port: 7096
server listen on port: 7097
server listen on port: 7098
server listen on port: 7099
connections: 921
error: Too many open files

The error Too many open files shows that the server could only accept 921 connections before hitting the limit on open files.
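Every accepted connection holds one file descriptor, so the open-fd count tracks the connection count. A quick read-only way to see how many descriptors a process currently holds (using the current shell as an example; no root needed):

```shell
# Each entry in /proc/<pid>/fd is one open file descriptor of that process;
# /proc/self refers to the process doing the reading.
ls /proc/self/fd | wc -l
```

During the stress test, pointing this at the server's PID instead of `self` shows the fd count climbing with the connection count.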

Maximum number of open files

View the system-wide maximum number of open file descriptors:

[root@host176 ~]# cat /proc/sys/fs/file-max
6553600

The file descriptors opened by all processes combined cannot exceed /proc/sys/fs/file-max.

If the value on your machine is small, raise it in /etc/sysctl.conf:

fs.file-max = 1020000
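Before a load test it is worth checking how much headroom the system has against this ceiling. /proc/sys/fs/file-nr exposes that directly (read-only, safe anywhere):

```shell
# /proc/sys/fs/file-nr has three fields: allocated fds, allocated-but-unused
# fds, and the fs.file-max ceiling.
awk '{ printf "allocated=%s max=%s\n", $1, $3 }' /proc/sys/fs/file-nr
```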

Optimize the per-process limit

[root@host176 c1000k-master]# ulimit -n
1024

This is the per-process limit on the number of open files.

The default of 1024 is very low. Every client connection consumes a file handle, and the stress test generates a large number of connections, so this value must be raised to the million level.
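The soft and hard variants of this limit can be inspected and, within the hard ceiling, raised without root for the current shell. A small sketch:

```shell
# Soft limit: the ceiling actually enforced on the process
ulimit -Sn
# Hard limit: the maximum an unprivileged user may raise the soft limit to
ulimit -Hn
# Raise the soft limit up to the hard limit for this shell only; a subshell
# is used so the change does not leak into the parent shell
( ulimit -Sn "$(ulimit -Hn)" && echo "soft limit raised to $(ulimit -Sn)" )
```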

To make the change permanent, modify /etc/security/limits.conf:

* hard nofile 1024000
* soft nofile 1024000

The change takes effect after logging in again.

If it does not take effect, check /etc/ssh/sshd_config and make sure UsePAM is enabled:

UsePAM yes

If it still does not take effect:

  1. Check that /etc/pam.d/sshd exists and that its content loads pam_limits, the PAM module that applies limits.conf.
  2. Check whether a file such as XX-nproc.conf under /etc/security/limits.d/ (or another file under /etc) hard-codes a value. Files under limits.d/ are read after limits.conf, so their settings take priority and will override the per-process open-file limit configured there.
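Regardless of which config file wins, the limit the kernel actually enforces on a given process can be read from its /proc limits file (shown here for the current shell; substitute the server's PID):

```shell
# 'Max open files' shows the soft and hard nofile limits the kernel is
# enforcing on this specific process, whatever the config files say
grep 'Max open files' /proc/self/limits
```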

Continue to verify

Run the test again after raising the ulimit:

....
connections: 129999, fd: 130001
connections: 130573
error: Connection timed out

After 130573 connections are established, a Connection timed out error is reported.

Querying the kernel parameters with sysctl -a shows net.nf_conntrack_max = 131072, which is close to the number of connections reached, suggesting that this is the bottleneck.

About net.nf_conntrack_max:
When a firewall is enabled on the host, connection tracking (conntrack) records every TCP connection. Once the number of TCP connections exceeds the maximum number of conntrack records, subsequent connections are dropped.
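Whether conntrack is the bottleneck can be checked directly. A read-only sketch; the /proc paths are standard on modern kernels but only appear once the nf_conntrack module is loaded, hence the fallback message:

```shell
# Conntrack table ceiling and current usage
cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null \
  || echo "nf_conntrack module not loaded"
cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null \
  || echo "nf_conntrack module not loaded"
```

If the count sits at the max while new connections time out, the table is full.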

Final verification

Knowing the cause, raise the limit above 2 million to comfortably cover 1 million long connections on a single machine.
Modify /etc/sysctl.conf:

net.nf_conntrack_max=2048576

Load the new configuration:

sysctl -p /etc/sysctl.conf

Final Results:

connections: 1015999, fd: 1016001
connections: 1016999, fd: 1017001
connections: 1017999, fd: 1018001
connections: 1018999, fd: 1019001
connections: 1019999, fd: 1020001
connections: 1020999, fd: 1021001
connections: 1021999, fd: 1022001
connections: 1022999, fd: 1023001
connections: 1023997
error: Too many open files

The server now supports 1023997 connections, just short of the 1024000 open-file limit configured earlier, which is why Too many open files appears again.

Origin blog.csdn.net/qq_33873431/article/details/114091062