02: Linux disk is full, but du can't find the large files: handling deleted-but-open files with the lsof command

This morning I received a Zabbix alert.

Alert content: the disk on one of our servers is full.

Troubleshooting steps:

In the past, a full disk was usually caused by the logs of the Java services running on the server taking up too much space, and deleting the extra logs fixed it. But I remembered that I had already set up automatic log cleanup on this server.

So I logged into the server. df -h showed the disk at 91% used, and df -i showed that inode usage was normal.
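For reference, these are the two checks (the comments describe what I saw; they are not pasted output):

df -h   # per-filesystem disk usage in human-readable units; / was at 91%
df -i   # per-filesystem inode usage; IUse% was normal, so inodes were not exhausted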

 

My first thought was that the log cleanup had not worked, so I went into each service directory to check the logs, but they were normal, only a few megabytes in total.

So I used du to look for large files, checking the disk usage of each directory under the root:

du -sh /*

The result added up to only a little over 6 GB in use, and there were no large directories or files anywhere.
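As a small aside not in the original post, sorting that per-directory output makes the biggest candidates easier to spot; a minimal sketch:

du -sh /* 2>/dev/null | sort -rh | head   # largest top-level directories first; errors for /proc and the like are hidden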

This was very strange: the used space reported by du and the used space reported by df did not match, with a difference of more than 40 GB.

So I searched on Baidu and learned that some files may have been deleted, but their space has not yet been released, a bit like files sitting in the recycle bin. So I used the lsof command to look for deleted files whose handles were still open, sorted by size.

 

lsof | grep deleted | sort -nr -k 7
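To read that output: the seventh column (SIZE/OFF) is the file size in bytes, which is why the sort keys on it, and deleted files are marked with "(deleted)" after the path. An illustrative line (the PID, fd, size, and path below are made up for the example, not taken from the real server):

java  1843  root  4w  REG  253,0  43879528448  131075  /data/app/logs/catalina.out (deleted)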

 

It turned out there was a Java log file of more than 40 GB whose state was deleted but whose file handle was still open, and the process was still writing data to it.

 

So I found the developer in our group chat and asked them to restart the service. After the restart, df -h showed more than 40 GB of space instantly freed.
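As a hedged footnote that is not part of the original post: if a restart is not possible right away, the space held by a deleted-but-open file can usually be reclaimed by truncating it through the owning process's file descriptor under /proc. The PID and fd number here come from the illustrative lsof line above, so substitute the real ones from your own output:

: > /proc/1843/fd/4   # truncate the deleted log that PID 1843 still holds open on fd 4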

 

Origin: www.cnblogs.com/jim-xu/p/11441762.html