HDP cluster log configuration and log deletion script

1. Ambari cluster log4j configuration

The HDP cluster is installed with its default settings, and logs are written to the data disk; however, the data disks on the namenode and secondary namenode are only 500 GB each, so logs can fill the disk before anyone notices. The first step is to tighten the cluster's log configuration.

1.1.HDFS log4j
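(The original post showed Ambari screenshots here. As an illustration only, the snippet below shows the kind of settings involved: the property names match the stock Hadoop `log4j.properties` template shipped with HDFS, but the values 256 MB and 20 backups are assumptions you should tune for your own disk budget. In Ambari these live under HDFS → Configs → Advanced hdfs-log4j.)

```properties
# Cap the size and number of rolled NameNode/DataNode log files.
# Values are illustrative; size them to fit your 500 GB data disk.
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20

log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
```

The YARN, Hive, Solr, and Ranger log4j configs in the following sections are tuned the same way: cap the file size and the number of rolled backups.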

1.2.YARN log4j

1.3.Hive log4j

1.4.Solr log4j

1.5.Ranger log4j


2. Deleting expired and oversized logs with a script

As noted above, the data disks on the namenode and secondary namenode are only 500 GB, so the default log settings can fill them unnoticed. On my cluster the logs are stored under /hadoop on the data disk.

hdfs_log_path=/app/var/log/hadoop/hdfs/
log_tmp=/app/cluster_log/log_tmp/

find "$hdfs_log_path" -mtime +7 -size +500M -name 'hdfs-audit.log.*' | xargs -I {} mv {} "$log_tmp"

find "$hdfs_log_path" -mtime +7 -size +500M -name 'SecurityAuth.audit.*' | xargs -I {} mv {} "$log_tmp"

rm -rf "$log_tmp"/*

find parameters: -mtime +7 matches files last modified more than 7 days ago, -size +500M matches files larger than 500 MB, and -name does glob matching. Matching files are moved to the temporary directory and then deleted. Using this simple script as an example, you can add or remove log paths yourself.
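The same logic can be wrapped in a small function so the path and pattern are arguments instead of being hard-coded. This is a sketch, not the article's script: the function name and the GNU `mv -t` / `${var:?}` idioms are my additions, while the `find` predicates are the same as above.

```shell
#!/usr/bin/env bash
# Sketch of the cleanup as a reusable function.
#   clean_logs <log_dir> <staging_dir> <name_pattern>
clean_logs() {
  local log_dir=$1 staging=$2 pattern=$3
  mkdir -p "$staging"
  # -mtime +7  : last modified more than 7 days ago
  # -size +500M: larger than 500 MiB
  find "$log_dir" -mtime +7 -size +500M -name "$pattern" \
       -exec mv -t "$staging" {} +
  # Purge the staging area. ${staging:?} aborts if the variable is
  # empty, so an unset path can never expand to "rm -rf /*".
  rm -rf "${staging:?}"/*
}

# Example, using the paths from the article:
# clean_logs /app/var/log/hadoop/hdfs /app/cluster_log/log_tmp 'hdfs-audit.log.*'
```

Moving to a staging directory before deleting keeps a brief window in which a file removed by mistake can still be recovered.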

Finally, schedule the script to run once at the end of each month.
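cron has no direct "last day of the month" field. A common workaround, shown here as a sketch (the script path is hypothetical), is to schedule the job for days 28-31 and add a guard that only fires when tomorrow is the 1st:

```shell
# Hypothetical crontab entry: runs at 23:30 on days 28-31, but the
# guard only passes when tomorrow is the 1st, i.e. today is the
# month's last day. (% must be escaped as \% inside a crontab.)
#
#   30 23 28-31 * * [ "$(date -d tomorrow +\%d)" = "01" ] && /app/cluster_log/clean_logs.sh
#
# The guard itself, as a standalone function (GNU date):
is_last_day_of_month() {
  [ "$(date -d "$1 + 1 day" +%d)" = "01" ]
}
```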


Origin blog.csdn.net/qq_35995514/article/details/110423279