How to clean up disk space on a CentOS server: the easiest methods
Foreword
If an online server's disk is full but you don't dare clear it carelessly, you can refer to the following solutions.
Tip: you need to be able to connect to the server (e.g. over SSH) and operate it with shell commands.
Check whether the disk is full
# Run the following command to check whether the server's disk is full; on this server, disk usage is 27%
df -h
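If `df` shows a partition is nearly full, `du` can narrow down which directories are consuming the space. A minimal sketch (the starting directory is an example; begin at `/` or any suspect path on your server):

```shell
# Show the ten largest entries directly under a directory, largest first.
# /var/log is only an example starting point; adjust to your own server.
TARGET="${TARGET:-/var/log}"
du -xh --max-depth=1 "$TARGET" 2>/dev/null | sort -rh | head -n 10
```

Repeating this on the biggest subdirectory quickly leads you to the files that matter.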
Several ways to clean up
Find and delete large files
Search the whole system for files larger than 100 MB and list them:
find / -type f -size +100M
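To review the biggest offenders first, the same search can be combined with `du` so each match is shown with its size. A sketch, assuming an example search root (scanning all of `/` works too but can take a while):

```shell
# List files over 100 MB with their on-disk size, largest first.
# SEARCH_ROOT is an example; point it at / or a suspect directory.
SEARCH_ROOT="${SEARCH_ROOT:-/var}"
find "$SEARCH_ROOT" -type f -size +100M -exec du -h {} + 2>/dev/null | sort -rh | head -n 20
```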
After the search, delete only the files you actually need to remove; if you are unsure what a file is for, back up a snapshot before deleting anything. For example:
# Do not use this command carelessly; deleted files cannot be recovered
rm -rf /www/wwwroot/jar-prod/target/blade/log/info-2023-06-11.log
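If a service may still be writing to the log, truncating it in place is often safer than deleting it: the writing process keeps a valid file handle and the space is freed immediately. A sketch with a hypothetical log path:

```shell
# Truncate a log to zero bytes without removing the file itself.
LOG=/tmp/example-app.log      # hypothetical path; substitute your own log
echo "some old log content" >> "$LOG"
: > "$LOG"                    # empties the file, keeps it in place
ls -l "$LOG"
```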
Clear log files
Search the system for log files; some servers accumulate a great many of them, so have a look first before deleting anything (quote the pattern so the shell does not expand it before find runs):
find / -name "*.log"
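Before deleting anything, it helps to know how much space the logs actually occupy. A sketch that totals them up (the directory is an example; use the paths the search above returned):

```shell
# Sum the size of all *.log files under a directory (GNU du's -c
# prints a grand total as the last line). /var/log is an example path.
find /var/log -name "*.log" -type f -exec du -ch {} + 2>/dev/null | tail -n 1
```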
On a server like this one there may be too many logs to review one by one, and you may not know which are still needed. A quick heuristic is to look for log files with a timestamp in their name; such dated, rotated logs can usually be deleted safely. Copy the path of that log directory for the next step.
Execute the command:
# Explanation: find every file in the directory whose modification time is more than 3 days ago
# i.e. delete the files from before the day before yesterday
find /www/wwwroot/jar-dev/target/blade/log/ -mtime +2 -name "*" -exec rm -rf {} \;
Adjust the path here (/www/wwwroot/jar-dev/target/blade/log/) to the log paths you found earlier; you can collect several log paths first, turn the resulting commands into a script, and schedule it to run periodically.
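A minimal sketch of such a scheduled cleanup script, assuming a hypothetical list of log directories (saving it under /etc/cron.daily, for example, makes cron run it once a day):

```shell
#!/bin/sh
# Hypothetical daily cleanup: delete *.log files older than 3 days
# from each listed directory. Adjust LOG_DIRS to the paths you collected.
LOG_DIRS="/www/wwwroot/jar-dev/target/blade/log"
for dir in $LOG_DIRS; do
    [ -d "$dir" ] || continue
    find "$dir" -type f -name "*.log" -mtime +2 -exec rm -f {} \;
done
```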
Large files are deleted, but the disk space is not released
Linux keeps a deleted file's blocks allocated as long as some process still holds the file open, so after deleting large files the reported disk usage may stay the same. In that case you can try the following command; a single pass is enough and it is the fastest fix.
# Find the processes still holding deleted files and kill -9 them
lsof | grep deleted | awk '{print $2}' | sort -u | xargs kill -9
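Before killing anything with kill -9, it is worth checking exactly which processes hold the deleted files. A sketch based on /proc, which works even when lsof is not installed (reading other users' /proc entries generally requires root):

```shell
# List open file descriptors that still point at deleted files.
for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case "$target" in
        *"(deleted)"*) echo "$fd -> $target" ;;
    esac
done
```

If the process must keep running, you can also free the space without killing it by truncating the still-open file through its descriptor, e.g. `: > /proc/<PID>/fd/<FD>` (fill in the PID and FD printed above).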