Cleaning up a Hadoop cluster installed by Ambari

This article applies to RedHat and CentOS.

On a test cluster, if you want to reinstall the Hadoop cluster after it was installed through Ambari, you need to clean up the old cluster first.

Because so many Hadoop components get installed, this work is very tedious. Below is the cleanup process I put together.

1. Stop all components in the cluster through Ambari. If a component cannot be stopped, kill its process directly with kill -9 <pid> (see the sketch below)
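If you are not sure which service processes are still running, something like the following can help track them down (a minimal sketch; the grep pattern is an assumption and may need extending for your set of components):

  # List candidate leftover service processes (adjust the pattern to your components)
  ps -ef | grep -E 'hadoop|hbase|zookeeper|ambari' | grep -v grep
  # Then kill each stubborn process by its PID, e.g.
  # kill -9 <pid>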

2. Stop ambari-server and ambari-agent

 

  ambari-server stop
  ambari-agent stop

 

3. Uninstall the installed software

 

  yum remove hadoop_2* hdp-select* ranger_2* zookeeper_* bigtop* atlas-metadata* ambari* postgresql spark* slider* storm* snappy*

 

 

The package list above may not be complete. After running it, check for anything left over:

 

  yum list | grep @HDP

 

Check whether any HDP packages remain; if so, continue uninstalling them with yum remove <package>.
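If many packages remain, a loop like the following can remove them in one pass (a sketch; it assumes the @HDP tag appears in the repo column of yum's output, which is worth verifying first):

  # Remove every remaining package that was installed from an HDP repo
  yum list installed | grep @HDP | awk '{print $1}' | xargs -r yum -y remove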

4. Delete PostgreSQL data

      After the PostgreSQL package is uninstalled, its data still remains on disk, and this data needs to be deleted as well. If it is left in place, a reinstalled ambari-server may pick up the data from the previous installation, and since that data is stale it must be removed.
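Before deleting the data directory, make sure the database service is no longer running (a precaution; the service name is an assumption and may differ by OS version):

  # Stop PostgreSQL if it is still running (service name may vary)
  service postgresql stop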

 

  rm -rf /var/lib/pgsql

 

5. Delete the users

     When Ambari installs a Hadoop cluster, it creates a number of users. When clearing the cluster, these users must be removed and their corresponding home directories deleted. Doing so avoids file-permission errors when the cluster runs again. (A combined alternative is sketched after the two lists below.)

 

  userdel oozie
  userdel hive
  userdel ambari-qa
  userdel flume
  userdel hdfs
  userdel knox
  userdel storm
  userdel mapred
  userdel hbase
  userdel tez
  userdel zookeeper
  userdel kafka
  userdel falcon
  userdel sqoop
  userdel yarn
  userdel hcat
  userdel atlas
  userdel spark
  userdel ams

 

 

  rm -rf /home/atlas
  rm -rf /home/accumulo
  rm -rf /home/hbase
  rm -rf /home/hive
  rm -rf /home/oozie
  rm -rf /home/storm
  rm -rf /home/yarn
  rm -rf /home/ambari-qa
  rm -rf /home/falcon
  rm -rf /home/hcat
  rm -rf /home/kafka
  rm -rf /home/mahout
  rm -rf /home/spark
  rm -rf /home/tez
  rm -rf /home/zookeeper
  rm -rf /home/flume
  rm -rf /home/hdfs
  rm -rf /home/knox
  rm -rf /home/mapred
  rm -rf /home/sqoop
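Alternatively, userdel -r deletes a user's home directory in the same step, so a single loop can do most of the work of the two lists above (a sketch; the user list comes from the commands above, errors for users absent on a given node are silenced, and leftover directories with no local user, such as /home/accumulo or /home/mahout, still need an explicit rm -rf):

  # Remove each service user together with its home directory
  for u in oozie hive ambari-qa flume hdfs knox storm mapred hbase tez \
           zookeeper kafka falcon sqoop yarn hcat atlas spark ams; do
    userdel -r "$u" 2>/dev/null
  done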

 

6. Delete leftover Ambari data

 

  rm -rf /var/lib/ambari*
  rm -rf /usr/lib/python2.6/site-packages/ambari_*
  rm -rf /usr/lib/python2.6/site-packages/resource_management
  rm -rf /usr/lib/ambari-*

 

7. Delete the leftover data of the other Hadoop components

 

  rm -rf /etc/falcon
  rm -rf /etc/knox
  rm -rf /etc/hive-webhcat
  rm -rf /etc/kafka
  rm -rf /etc/slider
  rm -rf /etc/storm-slider-client
  rm -rf /etc/spark
  rm -rf /var/run/spark
  rm -rf /var/run/hadoop
  rm -rf /var/run/hbase
  rm -rf /var/run/zookeeper
  rm -rf /var/run/flume
  rm -rf /var/run/storm
  rm -rf /var/run/webhcat
  rm -rf /var/run/hadoop-yarn
  rm -rf /var/run/hadoop-mapreduce
  rm -rf /var/run/kafka
  rm -rf /var/log/hadoop
  rm -rf /var/log/hbase
  rm -rf /var/log/flume
  rm -rf /var/log/storm
  rm -rf /var/log/hadoop-yarn
  rm -rf /var/log/hadoop-mapreduce
  rm -rf /var/log/knox
  rm -rf /usr/lib/flume
  rm -rf /usr/lib/storm
  rm -rf /var/lib/hive
  rm -rf /var/lib/oozie
  rm -rf /var/lib/flume
  rm -rf /var/lib/hadoop-hdfs
  rm -rf /var/lib/knox
  rm -rf /var/log/hive
  rm -rf /var/log/oozie
  rm -rf /var/log/zookeeper
  rm -rf /var/log/falcon
  rm -rf /var/log/webhcat
  rm -rf /var/log/spark
  rm -rf /var/tmp/oozie
  rm -rf /tmp/ambari-qa
  rm -rf /var/hadoop
  rm -rf /hadoop/falcon
  rm -rf /tmp/hadoop
  rm -rf /tmp/hadoop-hdfs
  rm -rf /usr/hdp
  rm -rf /usr/hadoop
  rm -rf /opt/hadoop
  rm -rf /opt/hadoop2
  rm -rf /hadoop
  rm -rf /etc/ambari-metrics-collector
  rm -rf /etc/ambari-metrics-monitor
  rm -rf /var/run/ambari-metrics-collector
  rm -rf /var/run/ambari-metrics-monitor
  rm -rf /var/log/ambari-metrics-collector
  rm -rf /var/log/ambari-metrics-monitor
  rm -rf /var/lib/hadoop-yarn
  rm -rf /var/lib/hadoop-mapreduce

 

8. Clean up the yum repositories
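This means removing the HDP and Ambari repo definitions and flushing yum's cached metadata, roughly as follows (a sketch; the repo file names are assumptions, so check /etc/yum.repos.d/ for the actual names on your nodes):

  # Remove the Ambari/HDP repo definitions (file names may differ per install)
  rm -f /etc/yum.repos.d/ambari.repo /etc/yum.repos.d/HDP*.repo
  # Clear cached metadata so stale repo data is not reused
  yum clean all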
