31. fopen error: Too many open files

One: The printed fopen error message:

open_file_and_get_length:175 fopen /var/1608536431170.jpg errno = 24, means: Too many open files
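For context, a minimal sketch of how such a message is typically printed when fopen fails (the helper name open_file_and_get_length is taken from the log above; the body below is an assumption, not the original code):

#include <stdio.h>
#include <string.h>
#include <errno.h>

/* Hypothetical helper: open a file and return its length, or -1 on error.
 * When the process has hit its descriptor limit, fopen returns NULL and
 * errno is 24 (EMFILE), which strerror() renders as "Too many open files". */
long open_file_and_get_length(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL) {
        printf("open_file_and_get_length:%d fopen %s errno = %d, means: %s\n",
               __LINE__, path, errno, strerror(errno));
        return -1;
    }

    fseek(fp, 0, SEEK_END);
    long len = ftell(fp);
    fclose(fp);    /* always close, otherwise descriptors leak */
    return len;
}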

Two: Reason:
1. "Too many open files" literally means the program has too many files open,
but "files" here include not only regular files, but also open communication connections (such as sockets), listening ports, and so on;
this error usually means the number of open files has exceeded the system limit,
because by default Linux allows each process at most 1024 file descriptors, as shown below:

root@user126:# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024			// maximum number of open files is 1024 (including socket file descriptors)
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
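Note that 1024 is the default per-process soft limit; a program can also query its own limit at runtime with the standard POSIX getrlimit call, as in this small sketch:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_NOFILE is the per-process open-file limit,
     * the same value reported by `ulimit -n`. */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("open files: soft = %lu, hard = %lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    return 0;
}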

Three: Solution:
1. If the system really does need more file descriptors, the limit can be raised with the command: ulimit -n 2048.
However, a change made this way only applies to the current session and reverts to the default after a reboot.

root@user126:# ulimit -n 2048
root@user126:# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 2048					// modified successfully
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2048
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
root@user126:/mnt/yanghang/testdir/SourceInsight7/MainCode_371/MainCode/build# 
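Besides the shell command, a process can also raise its own soft limit (up to the hard limit) with the standard setrlimit call; a minimal sketch, equivalent in effect to ulimit -n 2048 for that process only:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    /* Raise the soft limit to 2048, but never above the hard limit. */
    rl.rlim_cur = (rl.rlim_max >= 2048) ? 2048 : rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    printf("open files soft limit is now %lu\n", (unsigned long)rl.rlim_cur);
    return 0;
}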

2. To make the change persistent, write the allowed number of
open files into the configuration file: vim /etc/security/limits.conf

# Append the following at the end; * means all users:

* soft nofile 2048  
* hard nofile 2048  

3. View the number of file descriptors occupied by a process:

aston@ubuntu:~$ ps
  PID TTY          TIME CMD
 2580 pts/7    00:00:00 bash
 2652 pts/7    00:00:00 ps
aston@ubuntu:~$ lsof -p 2580 | wc -l
18
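Besides lsof, the same count can be obtained from /proc/<pid>/fd, where each entry is one open descriptor; a sketch that counts the calling process's own descriptors:

#include <stdio.h>
#include <dirent.h>

int main(void)
{
    /* Every entry under /proc/self/fd is one open file descriptor
     * (reading the directory itself temporarily adds one). */
    DIR *dir = opendir("/proc/self/fd");
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }

    int count = 0;
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] != '.')   /* skip "." and ".." */
            count++;
    }
    closedir(dir);

    printf("open file descriptors: %d\n", count);
    return 0;
}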

4. Redirect the lsof output to a log file and analyze the 18 open file descriptors:
aston@ubuntu:~$ lsof -p 2580> /mnt/hgfs/share/test/6/openfiles.log

COMMAND  PID  USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
bash    2580 aston  cwd    DIR    8,1     4096 675479 /home/aston
bash    2580 aston  rtd    DIR    8,1     4096      2 /
bash    2580 aston  txt    REG    8,1   986672 131079 /bin/bash
bash    2580 aston  mem    REG    8,1    46812 918593 /lib/i386-linux-gnu/libnss_files-2.19.so
bash    2580 aston  mem    REG    8,1    92036 918587 /lib/i386-linux-gnu/libnsl-2.19.so
bash    2580 aston  mem    REG    8,1 11688000 403327 /usr/lib/locale/locale-archive
bash    2580 aston  mem    REG    8,1  1758972 918518 /lib/i386-linux-gnu/libc-2.19.so
bash    2580 aston  mem    REG    8,1    13856 918535 /lib/i386-linux-gnu/libdl-2.19.so
bash    2580 aston  mem    REG    8,1   133164 918664 /lib/i386-linux-gnu/libtinfo.so.5.9
bash    2580 aston  mem    REG    8,1    26256 400379 /usr/lib/i386-linux-gnu/gconv/gconv-modules.cache
bash    2580 aston  mem    REG    8,1    42668 918603 /lib/i386-linux-gnu/libnss_nis-2.19.so
bash    2580 aston  mem    REG    8,1    30560 918589 /lib/i386-linux-gnu/libnss_compat-2.19.so
bash    2580 aston  mem    REG    8,1   134380 918494 /lib/i386-linux-gnu/ld-2.19.so
bash    2580 aston    0u   CHR  136,7      0t0     10 /dev/pts/7
bash    2580 aston    1u   CHR  136,7      0t0     10 /dev/pts/7
bash    2580 aston    2u   CHR  136,7      0t0     10 /dev/pts/7
bash    2580 aston  255u   CHR  136,7      0t0     10 /dev/pts/7

Four: Is it a problem in the program?
1. If you know the program well enough, you should have a rough estimate of the upper bound on the number of file descriptors it opens.
If the number looks abnormal, investigate further with:
lsof -p <process id> > openfiles.log
which captures the details of every file descriptor currently in use, so they can be analyzed:

*Are all of the opened files actually necessary?
*Locate the code that opens these files;
*Has the program written to files without closing them properly?
*Has the program performed socket communication without closing the connections properly (i.e., with no timeout or termination mechanism)?

If any of these problems exist, then no matter how large the system file-descriptor limit is set, it will eventually be exhausted.
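A typical example of the kind of leak described above (hypothetical code, not taken from the original program): a descriptor is lost on an early-return error path, and repeated calls eventually exhaust the limit:

#include <stdio.h>

/* Leaky version: if fread fails, the function returns without fclose(),
 * so the FILE (and its descriptor) is never released. Called in a loop,
 * this eventually makes fopen fail with errno 24 (EMFILE). */
int read_header_leaky(const char *path, unsigned char *buf, size_t n)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return -1;
    if (fread(buf, 1, n, fp) != n)
        return -1;               /* BUG: fp is leaked here */
    fclose(fp);
    return 0;
}

/* Fixed version: every exit path closes the stream. */
int read_header(const char *path, unsigned char *buf, size_t n)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return -1;
    int rc = (fread(buf, 1, n, fp) == n) ? 0 : -1;
    fclose(fp);
    return rc;
}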


Origin blog.csdn.net/yanghangwww/article/details/111772400