Hadoop fully distributed deployment

Note: $HADOOP_HOME is set in /etc/profile, so cd $HADOOP_HOME takes you to the installation directory.

1. hadoop-env.sh

vim /usr/local/hadoop-2.8.4/etc/hadoop/hadoop-env.sh

Set the following (replacing the existing export lines):

export JAVA_HOME=/usr/local/jdk1.8.0_151
export HADOOP_CONF_DIR=/usr/local/hadoop-2.8.4/etc/hadoop/

source /usr/local/hadoop-2.8.4/etc/hadoop/hadoop-env.sh
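To confirm the environment is picked up (a quick check, assuming $HADOOP_HOME/bin is on the PATH via /etc/profile):

echo $JAVA_HOME    # should print /usr/local/jdk1.8.0_151
hadoop version     # should report Hadoop 2.8.4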

2. Core settings

vim /usr/local/hadoop-2.8.4/etc/hadoop/core-site.xml
<configuration>
    <!-- Default HDFS filesystem URI -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://6059master:9000</value>
    </property>
    <!-- I/O buffer size for HDFS operations -->
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/bigdata/tmp</value>
    </property>
</configuration>
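You can verify that these values are being read by querying the effective configuration:

hdfs getconf -confKey fs.defaultFS      # expect hdfs://6059master:9000
hdfs getconf -confKey hadoop.tmp.dir    # expect /home/bigdata/tmp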

3. HDFS settings: hdfs-site.xml

Note: replace the hostname 6059master with your own hostname.

vim /usr/local/hadoop-2.8.4/etc/hadoop/hdfs-site.xml
<configuration>
    <!-- Replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Block size (128 MB) -->
    <property>
        <name>dfs.block.size</name>
        <value>134217728</value>
    </property>
    <!-- Where the NameNode stores its metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoopdata/dfs/name</value>
    </property>
    <!-- Where the DataNodes store block data -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoopdata/dfs/data</value>
    </property>
    <!-- Checkpoint directory used by the SecondaryNameNode -->
    <property>
        <name>fs.checkpoint.dir</name>
        <value>/home/hadoopdata/checkpoint/dfs/lglname</value>
    </property>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.http.address</name>
        <value>6059master:50070</value>
    </property>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.secondary.http.address</name>
        <value>6059master:50090</value>
    </property>
    <!-- Enable WebHDFS (REST access to HDFS) -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <!-- Enable HDFS permission checking -->
    <property>
        <name>dfs.permissions</name>
        <value>true</value>
    </property>
</configuration>
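The metadata, data, and checkpoint paths above (plus hadoop.tmp.dir from core-site.xml) are local filesystem directories; a minimal sketch to pre-create them, run on every node, assuming the same layout everywhere:

mkdir -p /home/bigdata/tmp
mkdir -p /home/hadoopdata/dfs/name
mkdir -p /home/hadoopdata/dfs/data
mkdir -p /home/hadoopdata/checkpoint/dfs/lglname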

4. MapReduce settings: mapred-site.xml

cp /usr/local/hadoop-2.8.4/etc/hadoop/mapred-site.xml.template /usr/local/hadoop-2.8.4/etc/hadoop/mapred-site.xml
vim /usr/local/hadoop-2.8.4/etc/hadoop/mapred-site.xml
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <!-- JobHistory server RPC address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>6059master:10020</value>
    </property>
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>6059master:19888</value>
    </property>
</configuration>
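Note that start-all.sh does not launch the history server configured here; start it separately (see also step 8):

mr-jobhistory-daemon.sh start historyserver
# web UI is then available at http://6059master:19888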

5. YARN settings: yarn-site.xml

vim /usr/local/hadoop-2.8.4/etc/hadoop/yarn-site.xml
<configuration>
    <!-- Hostname of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>6059master</value>
    </property>
    <!-- Auxiliary service for the MapReduce shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- ResourceManager RPC address -->
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>6059master:8032</value>
    </property>
    <!-- Scheduler RPC address -->
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>6059master:8030</value>
    </property>
    <!-- Resource-tracker address (NodeManager heartbeats) -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>6059master:8031</value>
    </property>
    <!-- ResourceManager admin address -->
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>6059master:8033</value>
    </property>
    <!-- ResourceManager web UI address -->
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>6059master:8088</value>
    </property>
</configuration>
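Once the cluster is up (step 8), you can confirm that every NodeManager registered with this ResourceManager:

yarn node -list
# expect three RUNNING nodes: 6059master, 6059slave01, 6059slave02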

6. Configure the worker nodes (slaves)

vim /usr/local/hadoop-2.8.4/etc/hadoop/slaves
6059master
6059slave01
6059slave02

Because 6059master is listed here too, the master will also run a DataNode and a NodeManager.

7. Distribute Hadoop to the slaves

vim /etc/hosts
192.168.56.20 6059master
192.168.56.21 6059slave01
192.168.56.22 6059slave02

Delete any existing Hadoop directory on the two slaves:

6059slave01: rm -rf /usr/local/hadoop-2.8.4/
6059slave02: rm -rf /usr/local/hadoop-2.8.4/

Verify connectivity first (e.g., ping 6059slave01 and ping 6059slave02), then distribute from the master:

On the master:
scp -r /usr/local/hadoop-2.8.4/ root@6059slave01:/usr/local/
scp -r /usr/local/hadoop-2.8.4/ root@6059slave02:/usr/local/
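The slaves also need the same /etc/hosts mappings and the /etc/profile environment (where $HADOOP_HOME is set); a sketch assuming identical paths on all nodes:

scp /etc/hosts root@6059slave01:/etc/hosts
scp /etc/hosts root@6059slave02:/etc/hosts
scp /etc/profile root@6059slave01:/etc/profile
scp /etc/profile root@6059slave02:/etc/profile
# then run `source /etc/profile` on each slave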

8. Start

Before the first start, format the NameNode on the master server; this only needs to be done once. Look for "successfully formatted" in the output.

hdfs namenode -format

8.1 Three ways to start

Method 1: start everything at once:
start-all.sh
Method 2: start service by service:
start-dfs.sh
start-yarn.sh
Method 3: start individual daemons:
hadoop-daemon.sh start namenode
hadoop-daemons.sh start datanode
yarn-daemon.sh start resourcemanager
yarn-daemons.sh start nodemanager
mr-jobhistory-daemon.sh start historyserver

9. Testing

9.1 Check whether the process is started:

jps
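Illustrative output on the master, given the slaves file above (PIDs will differ; JobHistoryServer appears only if the history server was started):

2131 NameNode
2255 DataNode
2410 SecondaryNameNode
2563 ResourceManager
2679 NodeManager
2893 JobHistoryServer
3001 Jps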

9.2 View the web UI of each module

NameNode web UI: http://192.168.56.20:50070

ResourceManager web UI: http://192.168.56.20:8088

9.3 File operations

List files:

# hdfs dfs -ls /

Create a directory:

# hdfs dfs -mkdir /xx

Upload files:

# hdfs dfs -put ./***  /

Delete files and directories (-rm removes a file; -rm -R removes directories and files recursively):

# hadoop fs -rm /path/to/file
# hadoop fs -rm -R /path/to/dir

10. Run a program
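The example below expects /input/word.txt to exist in HDFS; a minimal sketch to create it, using a hypothetical sample file:

echo "hello hadoop hello world" > word.txt
hdfs dfs -mkdir -p /input
hdfs dfs -put word.txt /input/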

yarn jar /usr/local/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar wordcount  /input/word.txt /output/01
hdfs dfs -ls /output/01
hdfs dfs -cat /output/01/part-r-00000
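With the hypothetical word.txt above, the output would look like:

hadoop  1
hello   2
world   1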

11. Common Error Handling

WARN ipc.Client

Error:

WARN ipc.Client: Failed to connect to server: 6059master/192.168.56.20:9000: try once and fail.

This usually means the NameNode is not running: nothing is listening on port 9000 (RPC) or 50070 (web UI).

11.1 Check hdfs-site.xml
<!-- NameNode web UI address -->
<property>
	<name>dfs.http.address</name>
	<value>6059master:50070</value>
</property>
11.2 Turn off the firewall
# Check firewall status
systemctl status firewalld
# Stop the firewall
systemctl stop firewalld.service
# Disable the firewall at boot
systemctl disable firewalld.service
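After the firewall is down and the daemons are running, confirm the NameNode ports are actually listening (a quick check on the master, assuming iproute2's ss is available):

ss -tlnp | grep -E '9000|50070'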

SELinux can also block connections. Edit /etc/selinux/config and set:

SELINUX=disabled

This takes effect after a reboot; setenforce 0 disables it immediately for the current session.
11.3 The NameNode did not start
# cd /usr/local/hadoop-2.8.4/bin/
# hdfs namenode -format

Note: reformatting erases HDFS metadata. If you reformat an existing cluster, clear the old dfs/name and dfs/data directories on every node first, or the DataNodes will fail to register because of a cluster ID mismatch.

Origin: blog.csdn.net/zx77588023/article/details/109519836