How to correctly configure ulimit values on a Linux system

When deploying applications on Linux, you will sometimes run into the error "Socket/File: Can't open so many files"; the same limit also caps the maximum number of concurrent connections a server can handle. Linux restricts the number of file handles each process may open, and the default is not very high, generally 1024, a number a production server can reach very easily. The following describes the correct way to change this system default. I ran into the problem while configuring Nginx+php5, but the summary applies equally to nginx+apache.

Viewing method

We can use ulimit -a to view all the limit values:
[root@centos5 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
max nice                        (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 4096
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
max rt priority                 (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

where "open files (-n) 1024" is the limit on the number of file handles opened by a process (including the number of open SOCKETs, which can affect the concurrency of MySQL) number of connections). This value can be modified by the ulimit command, but the value modified by the ulimit command is only valid for the current usage environment of the currently logged-in user, and will become invalid after the system restarts or the user exits (I encountered this problem when deploying Nginx+FastCGI, set the ulimit -SHn 65535 in /etc/rc.d/rc.local doesn't work either)

The system-wide limit lives in /proc/sys/fs/file-max. You can view the current value with cat, and you can control it by modifying /etc/sysctl.conf.
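For example, roughly like this (the fs.file-max figure is purely illustrative, not a recommendation):

[root@centos5 ~]# cat /proc/sys/fs/file-max                          # current system-wide ceiling
[root@centos5 ~]# echo "fs.file-max = 6553560" >> /etc/sysctl.conf   # persist across reboots
[root@centos5 ~]# sysctl -p                                          # apply /etc/sysctl.conf immediately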

There is also /proc/sys/fs/file-nr, which shows the number of file handles currently in use across the whole system.
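Reading it is just a cat; as far as I know the three columns are allocated handles, allocated-but-unused handles, and the maximum (the same value as file-max):

[root@centos5 ~]# cat /proc/sys/fs/file-nr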

When tracking down file handle problems, lsof is also a very useful program. It makes it easy to see which handles a process has open, and also which process is holding a given file or directory open.
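A few typical lsof invocations (the PID, user name and path below are only placeholders):

[root@centos5 ~]# lsof -p 1234 | wc -l    # rough count of handles open by PID 1234
[root@centos5 ~]# lsof -u www             # everything opened by the www user
[root@centos5 ~]# lsof /var/log/messages  # which processes have this file open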

Modification method
If you want to change the ulimit values permanently, you have to modify a configuration file. You can put the ulimit command into /etc/profile, but this method is really inconvenient. Another approach is to modify /etc/sysctl.conf; I tried that, but the user's ulimit -a does not change, only the value of /proc/sys/fs/file-max changes.
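For reference, the /etc/profile variant is just one line appended to that file (65535 again only as an example); note that a non-root login may not be allowed to raise its hard limit this way:

# appended to /etc/profile, runs in every login shell
ulimit -SHn 65535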

I think the correct way is to modify /etc/security/limits.conf, which comes with very detailed comments. For example,
* soft nofile 32768
* hard nofile 65536
uniformly changes the file handle limit to a soft limit of 32768 and a hard limit of 65536 for everyone. The first column of the configuration file is the domain; an asterisk means the setting is global. You can also set different limits for different users.
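For instance, per-user entries might look like this (the www user and the numbers are only illustrative):

www soft nofile 8192
www hard nofile 16384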

Note: the hard limit is the actual limit, while the soft limit is a warning threshold that only produces a warning. The ulimit command itself distinguishes soft and hard settings: -H operates on the hard limit, -S on the soft limit, and the value shown by default is the soft limit. If neither flag is given when you change a value with ulimit, both limits are changed together.
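In a bash shell that looks like this (4096 is just an example value):

[root@centos5 ~]# ulimit -Sn       # show the soft open-files limit
[root@centos5 ~]# ulimit -Hn       # show the hard open-files limit
[root@centos5 ~]# ulimit -n 4096   # neither -S nor -H given: soft and hard are set together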

Taking effect
Since my most common task is deploying a web environment (Nginx+FastCGI, for both the external production environment and the internal development environment), I simply log in again (a reboot also works). I log in as root and as the www user respectively and run ulimit -a under each to confirm the values. It is best to restart the ssh service first with service sshd restart before doing this.
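A quick check after re-logging in might look like this (www being whatever user the web processes run as):

[root@centos5 ~]# service sshd restart
[root@centos5 ~]# su - www -c 'ulimit -n'   # should now report the value set in limits.conf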
