After running for a while, stop-dfs.sh can no longer shut down the Hadoop 3.1.3 cluster, and stop-hbase.sh can no longer shut down the HBase cluster

The Hadoop 3.1.3 cluster can no longer be shut down after running for a while

Problem

After running for a while, stop-dfs.sh and stop-yarn.sh can no longer shut down the Hadoop 3.1.3 cluster. The cause: Hadoop stores its process ID (PID) files in /tmp by default, and files in /tmp that have not been accessed recently are cleaned up periodically, so the stop scripts can no longer find the PIDs of the running daemons.
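The stop scripts work by reading the recorded PID back from that file and signalling the process; once the file is gone, they have nothing to act on. A minimal sketch of the mechanism (simplified and with a made-up file name; the real logic lives in Hadoop's hadoop-functions.sh):

```shell
# Stand-in daemon: a background sleep whose PID is recorded,
# just as a start script records the NameNode's PID.
sleep 60 &
echo $! > /tmp/demo-daemon.pid

# What stop-dfs.sh effectively does per daemon: read the PID file and signal it.
if [ -f /tmp/demo-daemon.pid ]; then
    kill "$(cat /tmp/demo-daemon.pid)" && echo "daemon stopped"
else
    echo "no daemon to stop"   # what happens once /tmp cleanup removed the file
fi
```

Once the tmp cleaner deletes the .pid file, the script falls into the second branch and the daemon keeps running.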
Solution

Change the directory where the PID files are stored.

1. Edit the configuration file

vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

2. Change the PID storage directory in the file, then save and exit

# export HADOOP_PID_DIR=/tmp
export HADOOP_PID_DIR=/opt/module/hadoop/pid/

3. Distribute the configuration file to the other nodes

rsync.py $HADOOP_HOME/etc/hadoop/hadoop-env.sh
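rsync.py is the author's own cluster-sync helper. A hypothetical shell equivalent of what it does (hostnames taken from the cluster shown in this post; the leading `echo` makes the loop a harmless dry run):

```shell
# Dry-run sketch of distributing hadoop-env.sh to the other nodes.
# Remove the leading `echo` to actually copy the file.
for host in hadoop106 hadoop107; do
    echo rsync -av "$HADOOP_HOME/etc/hadoop/hadoop-env.sh" "$host:$HADOOP_HOME/etc/hadoop/"
done
```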

4. Hadoop 3 cluster start/stop script

#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit 1
fi
case $1 in
"start")
        echo " =================== Starting the Hadoop cluster ==================="

        echo " --------------- starting hdfs ---------------"
        ssh hadoop105 "/opt/module/hadoop/sbin/start-dfs.sh"
        echo " --------------- starting yarn ---------------"
        ssh hadoop106 "/opt/module/hadoop/sbin/start-yarn.sh"
        echo " --------------- starting historyserver ---------------"
        ssh hadoop105 "/opt/module/hadoop/bin/mapred --daemon start historyserver"
;;
"stop")
        echo " =================== Stopping the Hadoop cluster ==================="

        echo " --------------- stopping historyserver ---------------"
        ssh hadoop105 "/opt/module/hadoop/bin/mapred --daemon stop historyserver"
        echo " --------------- stopping yarn ---------------"
        ssh hadoop106 "/opt/module/hadoop/sbin/stop-yarn.sh"
        echo " --------------- stopping hdfs ---------------"
        ssh hadoop105 "/opt/module/hadoop/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac

5. On each machine, use kill to terminate the remaining Hadoop processes (DataNode, NodeManager, etc.)

[root@hadoop107 ~]# jps | grep -v Jps
5041 NodeManager
4931 DataNode
[root@hadoop107 ~]# kill -9 5041 4931
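Step 5 can be scripted as a one-liner. The pipeline below filters jps output down to the PIDs; to keep the example verifiable without a running cluster, it is shown against sample output (the variable is a stand-in for running `jps` on hadoop107):

```shell
# Stand-in for `jps` output on hadoop107; in practice pipe jps itself.
jps_output='5041 NodeManager
4931 DataNode
6123 Jps'

# Drop the Jps line itself, then keep only the first column (the PIDs).
echo "$jps_output" | grep -v Jps | awk '{print $1}'
# prints 5041 and 4931

# On a real node:
#   jps | grep -v Jps | awk '{print $1}' | xargs -r kill -9
```

`xargs -r` skips the kill entirely when no PIDs remain, so the one-liner is safe to re-run.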

6. Start the cluster

[hjw@hadoop105 hadoop]$ hdp.sh start
 =================== Starting the Hadoop cluster ===================
 --------------- starting hdfs ---------------
Starting namenodes on [hadoop105]
hadoop105: WARNING: /opt/module/hadoop/pid/ does not exist. Creating.
Starting datanodes
hadoop107: WARNING: /opt/module/hadoop/pid/ does not exist. Creating.
hadoop106: WARNING: /opt/module/hadoop/pid/ does not exist. Creating.
Starting secondary namenodes [hadoop106]
 --------------- starting yarn ---------------
Starting resourcemanager
Starting nodemanagers
 --------------- starting historyserver ---------------

7. View the PID files

[hjw@hadoop105 hadoop]$ cluster.py ls $HADOOP_HOME/pid/
ssh hadoop105 'ls /opt/module/hadoop/pid/'
hadoop-hjw-datanode.pid
hadoop-hjw-historyserver.pid
hadoop-hjw-namenode.pid
hadoop-hjw-nodemanager.pid
ssh hadoop106 'ls /opt/module/hadoop/pid/'
hadoop-hjw-datanode.pid
hadoop-hjw-nodemanager.pid
hadoop-hjw-resourcemanager.pid
hadoop-hjw-secondarynamenode.pid
ssh hadoop107 'ls /opt/module/hadoop/pid/'
hadoop-hjw-datanode.pid
hadoop-hjw-nodemanager.pid

8. Check the processes

[hjw@hadoop105 hadoop]$ jps.py 
---------------hadoop105----------------
31651 DataNode
32138 JobHistoryServer
31530 NameNode
31967 NodeManager
---------------hadoop106----------------
14394 SecondaryNameNode
14285 DataNode
14541 ResourceManager
14655 NodeManager
---------------hadoop107----------------
6694 NodeManager
6583 DataNode

The HBase 2.0.5 cluster can no longer be shut down after running for a while

stop-hbase.sh cannot shut down the HBase cluster for the same reason: the PID files are stored in /tmp by default

1. Edit the configuration file

vim $HBASE_HOME/conf/hbase-env.sh

2. Modify the process ID storage path

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids
export HBASE_PID_DIR=/opt/module/hbase/pids

3. Distribute the configuration file to the other nodes

rsync.py $HBASE_HOME/conf/hbase-env.sh

4. Use kill to terminate the HBase processes on each machine

kill -9
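As in step 5 for Hadoop, the HBase PIDs can be pulled out of jps output. Sample output is used here so the filter is verifiable without a running cluster (the PIDs are made up; HMaster and HRegionServer are the two HBase daemon names):

```shell
# Stand-in for `jps` output on an HBase node.
jps_output='7001 HMaster
7105 HRegionServer
6123 Jps'

# Keep only lines for the HBase daemons, then print their PIDs.
echo "$jps_output" | awk '/HMaster|HRegionServer/ {print $1}'
# prints 7001 and 7105

# On a real node:
#   jps | awk '/HMaster|HRegionServer/ {print $1}' | xargs -r kill -9
```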

5. Start HBase

start-hbase.sh

6. View the process ID file

[hjw@hadoop105 phoenix]$ ll /opt/module/hbase/pids
total 16
-rw-rw-r-- 1 hjw hjw  5 Dec  1 18:53 hbase-hjw-master.pid
-rw-rw-r-- 1 hjw hjw 30 Dec  1 18:53 hbase-hjw-master.znode
-rw-rw-r-- 1 hjw hjw  5 Dec  1 18:53 hbase-hjw-regionserver.pid
-rw-rw-r-- 1 hjw hjw 40 Dec  1 18:53 hbase-hjw-regionserver.znode

Origin blog.csdn.net/Yellow_python/article/details/128126280