logrotate
At work we frequently need to read logs, whether that is an application or system error log while tracking down a problem, or an nginx access log while computing a site's daily PV and UV statistics. This shows how important logs are, but on a busy project the volume of log data grows quickly, and over time a single log file becomes very large, which makes finding the entries we need very difficult. That is why logs should be rotated and backed up on a sensible schedule.
logrotate is the log rotation tool that ships with the system; combined with crond and a shell script, it can rotate logs on a schedule. The following describes how to use logrotate.
1. Locate logrotate on the machine
[root@iZ28ed866qmZ data]# which logrotate
/usr/sbin/logrotate
2. Create a logrotate configuration file in a directory of your choice
[root@iZ28ed866qmZ data]# mkdir logrotate
[root@iZ28ed866qmZ data]# cd logrotate
[root@iZ28ed866qmZ logrotate]# vim tomcat_log1.conf
# path of the log file to rotate
/workspace/service_platform/apache-tomcat-7.0.69-jd1/logs/catalina.out {
    copytruncate
    daily
    dateext
    missingok
}
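Before handing the configuration to cron, it is worth a dry run: logrotate's -d flag enables debug mode, which prints what would be done without touching any files, and -f forces a rotation even if the schedule says it is not due. A minimal self-contained check (it builds a throwaway log and config under a temp directory so it can run anywhere; on the real server you would point logrotate at /data/logrotate/tomcat_log1.conf instead):

```shell
#!/bin/sh
# Skip gracefully on machines where logrotate is not installed
command -v logrotate >/dev/null 2>&1 || { echo "logrotate not installed"; exit 0; }

# Build a throwaway log and config mirroring the one above
tmpd=$(mktemp -d)
echo "test line" > "$tmpd/catalina.out"
cat > "$tmpd/tomcat_log1.conf" <<EOF
$tmpd/catalina.out {
    copytruncate
    daily
    dateext
    missingok
}
EOF

# -d: debug mode, shows what would happen without rotating anything
logrotate -d -s "$tmpd/state" "$tmpd/tomcat_log1.conf"

# -f: force a real rotation regardless of the schedule
logrotate -f -s "$tmpd/state" "$tmpd/tomcat_log1.conf"

ls "$tmpd"
```

With dateext set, the forced run leaves a dated copy such as catalina.out-20240101 next to the (now truncated) original.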
3. logrotate directives explained
daily: rotate the log every day
weekly: rotate the log every week
monthly: rotate the log every month
rotate 5: keep five rotated archives; when the sixth is created, the oldest is deleted
compress: compress rotated archives with gzip once rotation completes
delaycompress: postpone compressing the just-rotated file until the next rotation
missingok: ignore errors if the log file is missing, such as "file not found"
notifempty: do not rotate if the log file is empty
create 644 root root: after logrotate renames the original, create a new log file with the given permissions and owner
dateext: append a dash and the date in YYYYMMDD format to rotated file names
copytruncate: copy the log file and then truncate it in place, so the writing process keeps its open file handle
sharedscripts: run the postrotate script only once for the whole log group, not once per log file
postrotate / endscript: commands placed between these two keywords run after rotation; each keyword must be on a line of its own
size: rotate when the log file reaches the given size, specified in bytes by default, or with a suffix for kilobytes (size 100k) or megabytes (size 100M)
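Putting several of these directives together, a fuller configuration might look like the sketch below. The path and the postrotate command are illustrative assumptions, not part of the original setup: rotating nginx access logs (mentioned in the introduction) and sending nginx's master process a USR1 signal so it reopens its log files, with the pid file path depending on how nginx is installed.

```
# Illustrative example: rotate nginx logs daily, keep 5 compressed archives
/var/log/nginx/*.log {
    daily
    rotate 5
    compress
    delaycompress
    notifempty
    missingok
    create 644 root root
    dateext
    sharedscripts
    postrotate
        # USR1 tells nginx to reopen its log files; pid path is an assumption
        kill -USR1 `cat /var/run/nginx.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
```

Because sharedscripts is set, the postrotate block runs once even though the wildcard matches several logs.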
4. Run the rotation on a schedule with crond
[root@iZ28ed866qmZ logrotate]# crontab -l
##tomcat log##
59 23 * * * /usr/sbin/logrotate -f /data/logrotate/tomcat_log1.conf
5. Backup and scheduled cleanup script
[root@iZ28ed866qmZ scripts]# cat log1_polling.sh
#!/bin/sh
logs_path="/workspace/service_platform/apache-tomcat-7.0.69-jd1/logs"
c_log=catalina.out
a_log=localhost_access_log

# Remove stray dated catalina logs left over from earlier rotations
find $logs_path -name "catalina.*-*-*.log" -exec rm -rf {} \;

for i in $(seq 1); do
    # Dates for the previous day, in the two formats the log names use
    dates=`date +"%Y%m%d" -d "-${i}day"`
    dates2=`date +"%Y-%m-%d" -d "-${i}day"`
    cd $logs_path
    # Archive the previous day's access log and catalina log
    tar zcf $a_log.$dates.tar.gz $a_log.$dates2.txt
    tar zcf $c_log.$dates.tar.gz $c_log-$dates
    sleep 30
    wait
    # Drop uncompressed logs older than 7 days
    find $logs_path -mtime +7 -name "localhost_access_log.*.txt" -exec rm -rf {} \;
    find $logs_path -mtime +7 -name "catalina.out-*.log" -exec rm -rf {} \;
done

# Keep access-log archives for 7 days and catalina archives for 30 days
find $logs_path -mtime +7 -name "localhost_access_log.*.tar.gz" -exec rm -rf {} \;
find $logs_path -mtime +30 -name "catalina.*.tar.gz" -exec rm -rf {} \;
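The archive names in the script hinge on GNU date's -d option for date arithmetic. A quick sketch of the two formats it computes (this assumes GNU coreutils date, as on the CentOS host shown above; BSD date uses different flags):

```shell
#!/bin/sh
# Yesterday's date in the two formats the log file names use (GNU date)
dates=$(date +"%Y%m%d" -d "-1 day")     # compact form, used in the .tar.gz archive names
dates2=$(date +"%Y-%m-%d" -d "-1 day")  # dashed form, used in localhost_access_log.<date>.txt
echo "$dates"
echo "$dates2"
```

Both values describe the same day; only the separators differ, matching how tomcat names its access log versus how logrotate's dateext names the rotated catalina.out.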
6. Add the cron job for the cleanup script
[root@iZ28ed866qmZ scripts]# crontab -l
00 00 * * * /bin/sh /data/scripts/log1_polling.sh