FD (File Descriptor)

One: The file descriptor concept
  A file descriptor (FD) is a non-negative integer that the kernel uses to identify a file opened by a process; the kernel maps each descriptor to the file and its metadata (recording the file's size, owner, group, modification date, etc.). When an application opens a file, the kernel returns a unique file descriptor to the application, and the application refers to the file through that descriptor.
   Linux limits the number of file descriptors that can be opened per user, per process, and system-wide; the default per-process limit is 1024. When "too many open files" appears in a system or application log, it does not mean the whole system has too many files open, but that a user, a process, or the system has reached its open file descriptor limit. Raising the file descriptor limit solves this problem.
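As an illustration of the concept, here is a minimal C sketch (the path /tmp/example.txt is only a placeholder): open() returns a small non-negative integer FD, and if the process has already reached its open-FD limit, open() instead fails with errno EMFILE, which is exactly the "too many open files" error mentioned above.

    /* Minimal sketch: open() returns a small integer FD; when the per-process
     * limit (for example 1024) is exhausted, open() fails with EMFILE
     * ("Too many open files"). The path below is only an illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/example.txt", O_RDONLY);
        if (fd == -1) {
            /* errno == EMFILE would mean this process hit its open-FD limit */
            fprintf(stderr, "open failed: %s\n", strerror(errno));
            return 1;
        }
        printf("the kernel returned file descriptor %d\n", fd);
        close(fd);    /* release the descriptor so the number can be reused */
        return 0;
    }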
Two: Common commands related to file descriptors
   2.1 Install lsof
CentOS 6 does not ship with the lsof command by default; you can install it with the yum install lsof command.
   2.2 Get the number of file descriptors opened system-wide: cat /proc/sys/fs/file-nr
   cat /proc/sys/fs/file-nr
    2080 0 387517
   // 1st column 2080: number of allocated FDs
   // 2nd column 0: number of allocated but unused FDs
   // 3rd column 387517: maximum number of FDs available to the system

Remarks: The /proc directory provides a runtime interface to kernel data structures; it can be used to view and change parameters built into the kernel. Its contents are stored in memory, not on disk.
    2.3 Get the number of file descriptors opened by a process, using the vi program as an example
     2.3.1 pidof vi
      12874
      // 12874 is the process ID of the vi program
     2.3.2 ls -l /proc/12874/fd
      total 0
    lrwx------ 1 root root 64 Jun 2 14:28 0 -> /dev/pts/1
    lrwx------ 1 root root 64 Jun 2 14:28 1 -> /dev/pts/1
    lrwx------ 1 root root 64 Jun 2 14:00 2 -> /dev/pts/1
    lrwx------ 1 root root 64 Jun 2 14:28 4 -> /tmp/.2.txt.swp
     // Shows that the vi program currently has 4 open file descriptors (FDs)
  Note: pidof looks up the process ID of a running program
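For comparison, here is a minimal C sketch that counts its own open descriptors by reading the same /proc/self/fd directory that ls -l /proc/<pid>/fd lists above:

    /* Minimal sketch: count this process's own open FDs by listing
     * /proc/self/fd (the per-process FD directory used in section 2.3). */
    #include <stdio.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *d = opendir("/proc/self/fd");
        if (d == NULL) { perror("opendir"); return 1; }

        int count = 0;
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')    /* skip "." and ".." */
                continue;
            count++;
        }
        closedir(d);
        /* one of the counted entries is the directory stream's own FD */
        printf("open file descriptors: %d\n", count - 1);
        return 0;
    }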
     2.4 Change the file descriptor limit
  When "too many open files" appears in a system or application log, you need to raise the file descriptor limit. In practice the system-wide default is already fairly large, so usually only the per-user and per-process FD limits need to be changed.
     2.4.1 View the current FD limit: ulimit -a / ulimit -n
     [root@bdi23 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 30516
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

     Note: ulimit -a displays the shell's resource limits; the "open files" line is the file descriptor limit

   [root@bdi23 ~]# ulimit -n
   1024
     Note: ulimit -n displays only the open files (FD) limit
    2.4.2 Temporarily change the FD limit of a user or process
     The change is valid only for the current session of the current user; logging out and back in restores the default limit of 1024 FDs.
     [root@bdi23 ~]# ulimit -n 2000
     Remarks: This command sets the FD limit for the current session to 2000.
      Then use ulimit -n or ulimit -a to check whether the setting took effect.
     [root@bdi23 ~]# ulimit -n
     2000
     [root@bdi23 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 2000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 30516
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
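For reference, ulimit -n is a shell builtin on top of the getrlimit()/setrlimit() system calls for RLIMIT_NOFILE. The following minimal C sketch performs the same kind of temporary change; like ulimit -n 2000 it only affects the calling process and its children, and an unprivileged process cannot raise the soft limit above the hard limit:

    /* Minimal sketch: read and raise the soft open-files limit (RLIMIT_NOFILE),
     * which is what "ulimit -n" manipulates. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == -1) { perror("getrlimit"); return 1; }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

        rl.rlim_cur = 2000;                /* like "ulimit -n 2000" */
        if (rl.rlim_cur > rl.rlim_max)
            rl.rlim_cur = rl.rlim_max;     /* cannot exceed the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1) { perror("setrlimit"); return 1; }

        getrlimit(RLIMIT_NOFILE, &rl);
        printf("new soft limit: %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }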

       2.4.3 Permanently change the FD limit for a user
    After logging out and logging back in, the user's FD limit is the modified value.
    [root@bdi23 ~]# vi /etc/security/limits.conf
    Add the following line at the end of the file:
    fly hard nofile 100
    // 1st column fly: the user name; after the change, each process of the fly user may open at most 100 FDs
    // 2nd column hard: the limit type. soft: when this limit is reached, a log entry is written to /var/log/messages but usage is not affected. hard: when this limit is reached, a log entry is also written, and usage is affected (the limit is enforced)
    // 3rd column nofile: the item being limited (number of open files)
    // 4th column 100: the limit value

    Log in to the system as the fly user and run ulimit -n or ulimit -a to view the result:
    [fly@bdi23 ~]$ ulimit -n
     100
    [fly@bdi23 ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30516
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 100
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Note: You can also achieve a similar effect by adding ulimit -n 200 to a user's .bash_profile file:
   [root@bdi23 ~]# echo "ulimit -n 200" >> /home/fly/.bash_profile

When I tried this on the machine, it did not work. The reason is that fly hard nofile 100 had been added to /etc/security/limits.conf, and an unprivileged user cannot raise the limit above that hard limit, so the ulimit -n 200 call in .bash_profile fails. In other words, the hard limit in /etc/security/limits.conf takes precedence over .bash_profile.
       2.4.4 Set the maximum number of FDs for the entire system
     echo "51200" >> /proc/sys/fs/file-max
     root@bdi23 ~]# cat /proc/sys/fs/file-nr
     2304 0 51200
  Remarks: a value set this way is lost on reboot; the system then reverts to its default maximum
       2.4.5 Permanently set the maximum for the entire system
      echo "fs.file-max = 51200" >> /etc/sysctl.conf
      Remarks: after adding the line, run sysctl -p (or reboot) for the new value to take effect
   Three: Common commands of lsof
       3.1 View the number of open files for the entire system
        [root@bdi23 ~]# lsof | wc -l
         3491
       3.2 View the number of files opened by a user
         [root@bdi23 ~]# lsof -uk | wc -l
         102
       3.3 View the number of files opened by a program
          [root@bdi23 ~]# pidof vim
          9777
          [root@bdi23 ~]# lsof -p 9777 | wc -l
           27
  


The most common way of interacting with files is through system calls.
The file table: a file must be opened before it can be read or written, and the kernel maintains an open-file table for each process. A child process gets a copy of its parent's file table, so when the child closes a file it affects only its own table, not the parent's. This file table is indexed by FDs (each process starts with at least three FDs).
The most common way to access a file is through the open(), read() and write() system calls: open() the file, read and write it, then close() it when finished.
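Below is a minimal C sketch of this lifecycle, assuming a writable path /tmp/demo.txt (the path is only an illustration). It also forks once to show that the child receives a copy of the parent's file table: the child's close() does not invalidate the parent's descriptor.

    /* Minimal sketch: open() a file, write() to it, fork() a child that closes
     * its copy of the FD, then read() the data back in the parent and close(). */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd = open("/tmp/demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) { perror("open"); return 1; }

        const char msg[] = "hello, fd\n";
        if (write(fd, msg, sizeof(msg) - 1) == -1) { perror("write"); return 1; }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); return 1; }
        if (pid == 0) {            /* child: closes its copy of the descriptor */
            close(fd);
            _exit(0);
        }
        waitpid(pid, NULL, 0);     /* parent: fd is still valid here */

        lseek(fd, 0, SEEK_SET);    /* rewind before reading the data back */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("read back via fd %d: %s", fd, buf);
        }
        close(fd);                 /* done: release the descriptor */
        return 0;
    }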