Table of contents
1. Solution for a missing SecondaryNameNode
2. Solution for a missing DataNode
3. Solution for a missing NameNode
Shut down the Hadoop cluster before performing any of the operations below.
1. Solution for a missing SecondaryNameNode
There is no SecondaryNameNode process after starting the Hadoop cluster.
This usually happens because, while configuring environment variables for the Hadoop cluster, /etc/profile was modified on node1, node2, and node3 but source /etc/profile was never run afterwards, so the changes never took effect. Run it on each node:
source /etc/profile
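To confirm the environment actually took effect after sourcing, a quick sanity check can help (this sketch assumes the profile exports HADOOP_HOME and adds its bin directory to PATH, as is conventional; adjust the names if your profile differs):

```shell
# Sanity check after `source /etc/profile` (assumes the profile exports
# HADOOP_HOME and puts the hadoop binary on PATH, as is conventional).
echo "HADOOP_HOME=$HADOOP_HOME"
if command -v hadoop >/dev/null 2>&1; then
  hadoop_status="found"
else
  hadoop_status="missing"
fi
echo "hadoop command on PATH: $hadoop_status"
```

If the status is "missing" on any node, re-check the /etc/profile edits on that node before restarting the cluster.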
Then start the Hadoop cluster and check the running processes with jps:
# Start HDFS cluster with one click
start-dfs.sh
# Shut down the HDFS cluster with one click
stop-dfs.sh
# Check the process
jps
The SecondaryNameNode process should now appear.
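To check all three HDFS daemons at once, a small sketch like the following can be run on the NameNode host. The jps output is simulated here for illustration; on a real node replace the variable assignment with jps_output=$(jps):

```shell
# Verify the expected HDFS daemons appear in jps output.
# Simulated here; on a real node use: jps_output=$(jps)
jps_output="2311 NameNode
2490 DataNode
2705 SecondaryNameNode
2903 Jps"

missing=0
for daemon in NameNode DataNode SecondaryNameNode; do
  # -w keeps "NameNode" from matching inside "SecondaryNameNode"
  if printf '%s\n' "$jps_output" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: MISSING"
    missing=1
  fi
done
```

Any daemon reported MISSING points you to the corresponding section of this article.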
2. Solution for a missing DataNode
There is no DataNode process after starting the Hadoop cluster.
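A common cause of this symptom is a clusterID mismatch: if the NameNode was re-formatted at some point, the new clusterID it generated no longer matches the one recorded in the DataNode's VERSION file, and the DataNode refuses to start. A sketch of the check (the /tmp paths simulate the real VERSION files, which typically live under the nn and dn data directories in current/VERSION):

```shell
# Simulated VERSION files; on a real node compare
#   /data/nn/current/VERSION  and  /data/dn/current/VERSION
mkdir -p /tmp/hdfs-demo/nn /tmp/hdfs-demo/dn
echo "clusterID=CID-11111111" > /tmp/hdfs-demo/nn/VERSION
echo "clusterID=CID-22222222" > /tmp/hdfs-demo/dn/VERSION

nn_id=$(grep '^clusterID=' /tmp/hdfs-demo/nn/VERSION)
dn_id=$(grep '^clusterID=' /tmp/hdfs-demo/dn/VERSION)
if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterIDs match"
else
  echo "clusterID mismatch - DataNode will refuse to start"
fi
```

Wiping the nn and dn directories and re-formatting, as described below, resolves the mismatch at the cost of all stored data.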
As the root user (or with root permissions), delete all files under logs in the Hadoop installation directory.
Then delete all files under the NameNode directory (data/nn) and the DataNode directory (data/dn). Note that this wipes all data stored in HDFS.
rm -rf /export/server/hadoop/logs/*
rm -rf /data/nn/* ; rm -rf /data/dn/*
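Since those deletions are irreversible, you may want to copy the data directories aside before wiping them. A sketch (the backup location under /tmp is an arbitrary choice):

```shell
# Back up the NameNode and DataNode directories before wiping them
# (paths as in the commands above; cp errors are ignored if a
# directory happens to be absent on this node).
backup_dir="/tmp/hdfs-backup-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$backup_dir"
cp -r /data/nn "$backup_dir/nn" 2>/dev/null || true
cp -r /data/dn "$backup_dir/dn" 2>/dev/null || true
echo "backup (if any) stored in: $backup_dir"
```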
After deleting, remember to switch back to the hadoop user (I created a dedicated hadoop user to avoid the problems that come with starting Hadoop as root). Format the NameNode and then start the Hadoop cluster.
# Format namenode (on Hadoop 2+ this form prints a deprecation warning;
# hdfs namenode -format is the preferred equivalent)
hadoop namenode -format
# Start HDFS cluster with one click
start-dfs.sh
# Shut down the HDFS cluster with one click
stop-dfs.sh
If the NameNode is not formatted, it will fail to start, and the NameNode process will be missing from the jps output.
3. Solution for a missing NameNode
There is no NameNode process after starting the Hadoop cluster.
Format the NameNode and then start the Hadoop cluster.
# Format namenode
hadoop namenode -format
# Start HDFS cluster with one click
start-dfs.sh
# Shut down the HDFS cluster with one click
stop-dfs.sh
At this point Hadoop has been successfully started.