Optimization of the number of concurrent TCP connections on a single Linux server

Question: How many concurrent TCP connections can a server support?

1. File descriptor limit:
    For the server, each TCP connection occupies a file descriptor. Once the file descriptors are exhausted, new connections are rejected with the error "Too many open files" (often reported as "Socket/File: Can't open so many files").

    At this point, you need to understand the limit on the maximum number of files that the operating system can open.
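    Before raising any limits, it helps to check what limit a running server process actually has. Here is a minimal sketch; nginx is used purely as an example process name, so substitute your own service:

# Effective per-process open-file limit of a running server (nginx is only an example)
pid=$(pidof -s nginx)
grep "Max open files" /proc/${pid}/limits

# How many descriptors that process currently holds (needs root or the process owner)
ls /proc/${pid}/fd | wc -l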

        Process limit (user limit):

  • The first step is to view the current value
ulimit -n # maximum number of open files, usually 1024 by default
ulimit -u # maximum number of processes, usually 60000+ by default

            Executing ulimit -n outputs 1024, which means a single process can open at most 1024 files; with this default configuration you therefore get at most roughly a thousand concurrent TCP connections.

            Temporary modification: ulimit -n 1000000. This change only applies to the current shell session of the currently logged-in user and is lost after a system restart or after the user logs out.

            Permanent effect: modify the /etc/security/limits.conf file:

                # modify the maximum number of open files
                * soft nofile 1000000
                * hard nofile 1000000
                # modify the maximum number of processes (the keyword is nproc, not noproc)
                * soft nproc 60000
                * hard nproc 60000
  • The second step is to modify the /etc/pam.d/login file and add the following lines to the file:

    session required /lib/security/pam_limits.so

    If it is a 64bit system, it should be:
    session required /lib64/security/pam_limits.so

  • The third step is to edit the /etc/sysctl.conf file and add the following lines (they can replace the file's existing content); these relax the kernel's restrictions on TCP connections:
  • See also: Setting up sysctl.conf to improve Linux performance https://blog.csdn.net/21aspnet/article/details/6584792
net.ipv4.ip_local_port_range = 1024 65535
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_fin_timeout = 10
# tcp_tw_recycle was removed in Linux 4.12; keep the next line only on older kernels
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_no_metrics_save = 1
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
  • The fourth step, execute the following commands (to make the above settings take effect; a quick verification sketch follows this list):
/sbin/sysctl -p /etc/sysctl.conf
/sbin/sysctl -w net.ipv4.route.flush=1
  • The fifth step, persist the higher limits so they are applied at boot and at login (after the network tuning above, the number of files the system allows to be opened must also be raised to support high concurrency; the default of 1024 is nowhere near enough):
echo "ulimit -SHn 1000000" >> /etc/rc.local
echo "ulimit -SHn 1000000" >> /root/.bash_profile
echo "ulimit -SHu 60000" >> /etc/rc.local
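
The following is a minimal sketch of how to verify the result of the steps above. The account name appuser is only a placeholder; log in again (or use su -) so that pam_limits re-reads limits.conf:

# Per-user limits after re-login ("appuser" is a placeholder account name)
su - appuser -c 'ulimit -n; ulimit -u'

# Spot-check a few of the kernel parameters written to /etc/sysctl.conf
sysctl net.ipv4.ip_local_port_range
sysctl net.core.somaxconn
sysctl net.ipv4.tcp_fin_timeout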

 

Global limit:
            execute cat /proc/sys/fs/file-nr  
                1216 0 187612
                (1) 1216: the number of file descriptors that have been allocated
                (2) 0: the number of allocated but unused file descriptors; here the kernel has allocated 1216 handles and all 1216 are in use, so the count of "allocated but unused" handles is 0
                (3) 187612: the maximum number of file handles
                Note: in kernel 2.6 the value of the second item is always 0, which is not an error, it actually means that all the allocated file descriptors have
                been used.

                You can adjust the size of the last value by defining fs.file-max = 1000000 in /etc/sysctl.conf
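
                For example, a small sketch of raising the global limit (1000000 is only an illustrative value; sysctl -w applies it immediately, while the /etc/sysctl.conf entry makes it survive a reboot):

# Apply immediately (lost on reboot)
sysctl -w fs.file-max=1000000
# Make it persistent and reload
echo "fs.file-max = 1000000" >> /etc/sysctl.conf
sysctl -p
# The third field of file-nr should now show the new maximum
cat /proc/sys/fs/file-nr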

2. Port number range restrictions:
    On the operating system, port numbers below 1024 are reserved for the system, leaving roughly 1024-65535 for user programs. Since each outgoing TCP connection occupies a local port, it seems we could have at most
    a little more than 60,000 concurrent connections - but that reasoning only applies to the client side.
    Analyze it:
        (1) How is a TCP connection identified? The system identifies a TCP connection by a 4-tuple: (local ip, local port, remote ip, remote port). For accept, the socket returned by accept does not occupy a new local port:
        its local ip and local port are those of the listening socket, while the remote ip and remote port identify the client's address and port number.
        As a server, we only bind the listening port, which
        means the 65535-port range is not a limit on concurrency.
        (2) The maximum number of TCP connections on the server: the server usually listens on a fixed local port, waiting for clients' connection requests. Without considering address reuse, even on a host with multiple ips
        the local listening port is exclusive. Therefore only the remote ip and remote port vary in the server-side 4-tuple, so the maximum number of TCP connections is
        the number of client ips * the number of client ports. For ipv4, ignoring factors such as reserved ip addresses, that is about 2^32 (ip addresses) * 2^16 (ports),
        which means the theoretical maximum number of TCP connections for a single server is about 2^48.
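
        As an illustration, the 4-tuples of established connections can be inspected with ss; port 8080 below is only an example listening port:

# List established connections on the example listening port 8080;
# each line shows one (local ip:port, peer ip:port) pair
ss -tn state established '( sport = :8080 )'
# Count them (skip the header line)
ss -tn state established '( sport = :8080 )' | tail -n +2 | wc -l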


Question 1: The number of handles seen by using lsof to view the file descriptor is different from the value of /proc/sys/fs/file-nr, why?

[root@localhost ~]# lsof | wc -l
710
[root@localhost ~]# cat /proc/sys/fs/file-nr
416    0    1000000

    Answer: A file can be opened by multiple processes, and lsof lists one entry per process per open file (including things like current directories and memory-mapped libraries that do not consume file handles), so it is normal for lsof's count to be larger than the first field of file-nr.
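
    A rough sketch of the per-process view, for comparison with the system-wide count (reading every /proc/<pid>/fd generally requires root):

# Descriptors currently open per process, largest first
for pid in /proc/[0-9]*; do
    echo "$(ls "$pid/fd" 2>/dev/null | wc -l) ${pid##*/}"
done | sort -rn | head

# System-wide handle counts for comparison
cat /proc/sys/fs/file-nr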

Question 2: How big is the appropriate file handle setting?

    How to check the number of handles:

[root@node1 ~]# cat /proc/sys/fs/file-nr
        832    0    97321        
        [root@node1 ~]# cat /proc/sys/fs/file-max
        97321        
        The default maximum number of handles is 97321

        According to the kernel documentation, file-max is by default set to roughly 10% of physical memory measured in KB. In a shell you can compute it like this:
        grep MemTotal /proc/meminfo | awk '{printf("%d\n", $2/10)}'
        The calculated value is usually close to the default maximum number of handles.

echo "fs.file-max = 100133" >> /etc/sysctl.conf && sysctl -p
