Setting up a fully distributed, highly available cluster with Hadoop 3.1.2 + ZooKeeper 3.5.5 on Ubuntu 18.04.2
Cluster plan:
Hostname | NameNode | DataNode | JournalNode | ResourceManager | Zookeeper |
---|---|---|---|---|---|
node01 | √ | √ | | √ | |
node02 | √ | √ | | √ | |
node03 | | √ | √ | | √ |
node04 | | √ | √ | | √ |
node05 | | √ | √ | | √ |
Preparation:
First, clone five Ubuntu virtual machines.
Edit the network configuration with vim /etc/netplan/01-network-manager-all.yaml.
My five network configurations are as follows (note: this gateway is for my desktop at home, so it is not the same as the one I used earlier on my laptop):
# Let NetworkManager manage all devices on this system
# node01
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.180.130/24]
      gateway4: 192.168.180.2
      nameservers:
        addresses: [114.114.114.114, 8.8.8.8]
# Let NetworkManager manage all devices on this system
# node02
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.180.131/24]
      gateway4: 192.168.180.2
      nameservers:
        addresses: [114.114.114.114, 8.8.8.8]
# Let NetworkManager manage all devices on this system
# node03
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.180.132/24]
      gateway4: 192.168.180.2
      nameservers:
        addresses: [114.114.114.114, 8.8.8.8]
# Let NetworkManager manage all devices on this system
# node04
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.180.133/24]
      gateway4: 192.168.180.2
      nameservers:
        addresses: [114.114.114.114, 8.8.8.8]
# Let NetworkManager manage all devices on this system
# node05
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.180.134/24]
      gateway4: 192.168.180.2
      nameservers:
        addresses: [114.114.114.114, 8.8.8.8]
After the changes, run netplan apply to apply the network configuration, then ping Baidu to confirm the network is OK.
Change the hostnames:
vim /etc/hostname on each machine and set the hostname to node01, node02, node03, node04, node05 respectively.
Modify the hosts file:
vim /etc/hosts on each machine; the file should be:
127.0.0.1 localhost
192.168.180.130 node01
192.168.180.131 node02
192.168.180.132 node03
192.168.180.133 node04
192.168.180.134 node05
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
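To sanity-check the hostname-to-IP mapping, a quick grep over the hosts entries is enough. A minimal local sketch, using a throwaway copy of the block above in /tmp rather than the real /etc/hosts:

```shell
# Sketch: verify the hostname -> IP mapping from the hosts entries above.
# Uses a throwaway copy in /tmp; on a real node you would query /etc/hosts.
cat <<'EOF' > /tmp/hosts-demo
192.168.180.130 node01
192.168.180.131 node02
192.168.180.132 node03
192.168.180.133 node04
192.168.180.134 node05
EOF
awk '$2 == "node03" {print $1}' /tmp/hosts-demo   # prints 192.168.180.132
```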
Configure the JDK
If you have not configured a JDK yet, refer to https://www.cnblogs.com/ronnieyuan/p/11461377.html
If you previously configured another JDK version, extract the JDK tarball under /usr/lib/jvm; for example, my /usr/lib/jvm directory looks like this:
drwxr-xr-x 4 root root 4096 9月 13 08:57 ./
drwxr-xr-x 133 root root 4096 9月 13 08:57 ../
lrwxrwxrwx 1 root root 25 4月 8 2018 default-java -> java-1.11.0-openjdk-amd64/
lrwxrwxrwx 1 root root 21 3月 27 04:57 java-1.11.0-openjdk-amd64 -> java-11-openjdk-amd64/
-rw-r--r-- 1 root root 1994 3月 27 04:57 .java-1.11.0-openjdk-amd64.jinfo
drwxr-xr-x 9 root root 4096 4月 25 20:43 java-11-openjdk-amd64/
drwxr-xr-x 7 uucp 143 4096 12月 16 2018 jdk1.8/
Then modify the configuration files and select this JDK (the commands are written in the link above for reference).
JDK version information:
root@node01:~# java -version
java version "1.8.0_202"
Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)
Passwordless SSH login
Run ssh-keygen -t rsa -P "" on each machine (just press Enter at every prompt).
Example:
root@node01:~# ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:el2s+e9UXxWjfGY1LS6RYD1CcHLmlXY+zJCopqRnuf0 root@node01
The key's randomart image is:
+---[RSA 2048]----+
| ooB+.+ +o|
| Bo.@ + *|
| ..o % =.|
| . o .. X .|
| o +S o. .o|
| . =. . + .o|
| o.o. + . .|
| ... . . |
| .E .oo |
+----[SHA256]-----+
vim ~/.ssh/authorized_keys:
The public keys of all five virtual machines are stored identically in each machine's authorized_keys file:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBw4yPomSFt009LQ3gvxv9vnAF4tSXrJvVBMkpoi78mLMspgxYW6q3vLCWFEHT6HOLrLAQ/+UjclXjuVEEUGVOyn+dgvX7fK+XCOuTVdTyJZ3nIGbHUZ5zB+KHcJN3tiGjFQ3vGEuUeVkQ4jkN5RXI33nSx1eUM/sOuXtQ7DdhJjAuBko7RNw/jjTXW8znv8l8n5hb4fu4B+2CLkIkO+1+mTu8hljE2B+pu4o6cIiY/RTb0hNRLSs6w7K7BJFa+3ZkeMtiLtI8MUaIQzo4/nv4FKa8/GSvxLyyBZGoaunAYsUn7qmlNxNjEXY7wojAnVkRMiyCsEXQU6cEsR//Zocz root@node01
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDg2AsPvf9TjjIVUlZutDqxFH579THtl6e7/SxYxHJ/def/T4dY5glzwW3AJ30Gcsw+k9E8PKiZIAiaQ7kU4/EmFK9LFhAuQx+glZS5GS88lXv7qSYOLmZtJPp0l4tgrIgk9u+PtZToCdlWpGLO2Xi3Dfggt//Lsl4Dqhl3dtrpZSjMGY7zkAd4fu696ri4rjv3kDciUdFNlKBFBkGA4RNFKylkPTlxLZfpqNU2pkZtBySHsGbEHMvnMQ1KOXRoW7pVvZ4QveR/eiQVXqq+v53oZ5KUmC5jpp6Abe3PVa7tG6s2ZOSP9ikOuFKrwXWArjp5H4oaYZIF/UenhhIdjxh3 root@node02
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDwtqNeAwYLWqY5otArMcKd4iMBCpZ5cd+RyECunVnmeefuN1U53fR+h2UcS6/Jr84ZKlDmJ5+r9jgcBPIftbkGi9RE4aHEqo14sC3P4t6DODxNCI+enytx5/kw3gpKmxOdanrtojSWLdL+5v/h4qPt5e8AFfxqJ9HfZ5darXgRLWbkYcBADH51XvisY9Gf+DJKPjcD+3E8gMbHHdeYWt0crOkxbRVgnjmZVuWsYBRFH5x6ueR5SOHUC3WPzfeEdBvIeRddl4y1DvtvZZuVOxs1rQF59KdDSKSKt4s1lScZS1Kc57yXY2s+L6HrFqxfOO0u1pisfiDwDKvZDwKeMd3n root@node03
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+QfivNznStFt8xCZ1Qav6jKdErir0VbNRN0nqJaXUe+KL8YmYygofKEZRGQHCpYY2/rM7Cla6Pl9HLoatbvi89OYVy7V3hnu7SJwrqbkAGOqxCzW+OGdV9GRvhi3LTwJMAKxSrXB73tKK9ZqJd7WrP7o7ibyYMAbUiJTc0qa4gSXxXTunUuF2hOG7D88/93bxXXqSI9AydWrXBVxzmrP7CipXFOBqVC/mA/8SEdbVxSK0oGwa9KAAm690onoVevOVtTXWcvKSE/57WM94KJMbIKM/ypxKtUqKrgKuMfBsgs31Zu1j3SDkFC3Vm8uGj4yKnpxsaVJOwuMoRYiW90tT root@node04
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDF/mqbRAwPusxpz5FA9FtIa97QJSuXjaRP+/37S7JvtCAh2FvgPBLIQeAdp7hvc/RFJ8WqDlQWj2UVpBsu2sn3Kg2VZ30qEghMLkMcCTtKknNX+U7SvBWCRoGojxl9lmi/Y1kkVNQUTRPQ8QeNGN2SvUi5A4Q+X1H6MEy16sLuamMlXqiIeqttY33odXj6oXI6OFqoE98FrNbTBrPwJFCk4Uhgnplbb0YE+4dbs9mVdR/iHpGm84WfvITe6Rn9Ry4K+Wo4C+Bms4dGfcO8eh8lrwSCff2IUIc877Zzc6ImYrdvZu7rvrCPyfNdoCJzA5wtExPoAfUbuN5T77ieLgWH root@node05
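Instead of hand-pasting the five keys, ssh-copy-id can append a node's public key to every other node's authorized_keys. The loop below only prints the commands (a dry run), so it can be inspected before running it for real on each node:

```shell
# Dry run: print the ssh-copy-id command for each cluster node.
# Remove the echo to actually distribute this node's public key.
for host in node01 node02 node03 node04 node05; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$host"
done
```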
Then test that passwordless login succeeds.
Successful example:
root@node01:~# ssh [email protected]
The authenticity of host '192.168.180.131 (192.168.180.131)' can't be established.
ECDSA key fingerprint is SHA256:++PMZ5boD2CgToi43EdaCSLtNGdVFt0xxCBoAIkggqk.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.180.131' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.18.0-17-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
* Canonical Livepatch is available for installation.
- Reduce system reboots and improve kernel security. Activate at:
https://ubuntu.com/livepatch
254 packages can be updated.
253 updates are security updates.
Your Hardware Enablement Stack (HWE) is supported until April 2023.
Last login: Sat Sep 14 08:44:54 2019 from 192.168.180.1
root@node02:~#
Installing ZooKeeper 3.5.5
Note: ZooKeeper is installed only on node03, node04 and node05.
Upload zookeeper 3.5.5 to the /home/ronnie/soft directory:
root@node03:/home/ronnie/soft# ll
total 524532
drwxr-xr-x 2 root root 4096 9月 14 09:51 ./
drwxr-xr-x 32 ronnie ronnie 4096 9月 14 08:39 ../
-rw-r--r-- 1 root root 10622522 9月 13 11:35 apache-zookeeper-3.5.5-bin.tar.gz
-rw-r--r-- 1 root root 332433589 9月 13 09:18 hadoop-3.1.2.tar.gz
-rw-r--r-- 1 root root 194042837 1月 18 2019 jdk-8u202-linux-x64.tar.gz
Extract it to the /opt/ronnie directory: tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz -C /opt/ronnie/
Rename the ZooKeeper directory:
cd /opt/ronnie
mv apache-zookeeper-3.5.5-bin/ zookeeper
Create and modify the ZooKeeper configuration file.
First enter the configuration directory:
root@node03:/opt/ronnie# cd zookeeper/conf/
root@node03:/opt/ronnie/zookeeper/conf# ll
total 20
drwxr-xr-x 2 2002 2002 4096 4月 2 21:05 ./
drwxr-xr-x 6 root root 4096 9月 14 09:54 ../
-rw-r--r-- 1 2002 2002 535 2月 15 2019 configuration.xsl
-rw-r--r-- 1 2002 2002 2712 4月 2 21:05 log4j.properties
-rw-r--r-- 1 2002 2002 922 2月 15 2019 zoo_sample.cfg
Copy zoo_sample.cfg to zoo.cfg:
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/ronnie/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=node03:2888:3888
server.2=node04:2888:3888
server.3=node05:2888:3888
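Each server.N line names a quorum member together with its peer port (2888) and leader-election port (3888); N must match that node's myid. A small sketch that pulls the member hostnames out of a local copy of those lines:

```shell
# Sketch: list the quorum members from the server.N lines above
# (written to a throwaway copy in /tmp for illustration).
cat <<'EOF' > /tmp/zoo-demo.cfg
server.1=node03:2888:3888
server.2=node04:2888:3888
server.3=node05:2888:3888
EOF
grep '^server\.' /tmp/zoo-demo.cfg | cut -d= -f2 | cut -d: -f1
# prints node03, node04, node05 (one per line)
```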
Send the configured ZooKeeper directory to the other two machines:
scp -r /opt/ronnie/zookeeper/ [email protected]:/opt/ronnie/
scp -r /opt/ronnie/zookeeper/ [email protected]:/opt/ronnie/
Create a myid file in the dataDir on each node, containing 1, 2 and 3 respectively.
Operating on node03 (the other nodes are analogous); if the directory does not exist yet, create it: mkdir -p /var/ronnie/zookeeper/
cd /var/ronnie/zookeeper/
touch myid
echo 1 > myid
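The same three commands, sketched against a throwaway directory so they can be tried anywhere (on node04 and node05 the echoed id would be 2 and 3):

```shell
# Sketch of the myid setup, using /tmp instead of /var/ronnie/zookeeper.
DATADIR=/tmp/ronnie-zk-demo       # real nodes use the dataDir from zoo.cfg
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"          # node03 = 1, node04 = 2, node05 = 3
cat "$DATADIR/myid"               # prints 1
```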
Start ZooKeeper:
/opt/ronnie/zookeeper/bin/zkServer.sh start
# If the start succeeds:
ZooKeeper JMX enabled by default
Using config: /opt/ronnie/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Check the status
/opt/ronnie/zookeeper/bin/zkServer.sh status
# This is a follower node
ZooKeeper JMX enabled by default
Using config: /opt/ronnie/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
# This is the leader node
ZooKeeper JMX enabled by default
Using config: /opt/ronnie/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
Stop ZooKeeper:
/opt/ronnie/zookeeper/bin/zkServer.sh stop
The ZooKeeper installation ends here.
Hadoop configuration
vim ~/.bashrc and add Hadoop to the PATH:
#HADOOP VARIABLES
export HADOOP_HOME=/opt/ronnie/hadoop-3.1.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
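The two PATH lines simply append Hadoop's bin and sbin directories; how they compose can be checked even on a machine without Hadoop installed:

```shell
# Sketch: how the PATH exports above compose (no Hadoop needed to try it).
HADOOP_HOME=/opt/ronnie/hadoop-3.1.2
PATH=$PATH:$HADOOP_HOME/bin
PATH=$PATH:$HADOOP_HOME/sbin
echo "$PATH" | tr ':' '\n' | grep hadoop-3.1.2
# prints the .../hadoop-3.1.2/bin and .../hadoop-3.1.2/sbin entries
```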
Run source ~/.bashrc to make the new configuration take effect (remember to do this on each node).
hadoop version checks the version; the following output shows that the Hadoop PATH configuration succeeded:
Hadoop 3.1.2
Source code repository https://github.com/apache/hadoop.git -r 1019dde65bcf12e05ef48ac71e84550d589e5d9a
Compiled by sunilg on 2019-01-29T01:39Z
Compiled with protoc 2.5.0
From source with checksum 64b8bdd4ca6e77cce75a93eb09ab2a9
This command was run using /opt/ronnie/hadoop-3.1.2/share/hadoop/common/hadoop-common-3.1.2.jar
Change the JAVA_HOME path in hadoop-env.sh, mapred-env.sh and yarn-env.sh at the line numbers shown below, not just appended at the bottom:
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/hadoop-env.sh
53 # variable is REQUIRED on ALL platforms except OS X!
54 export JAVA_HOME=/usr/lib/jvm/jdk1.8
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/mapred-env.sh
47 # JDK
48 export JAVA_HOME=/usr/lib/jvm/jdk1.8
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/yarn-env.sh
171 # JDK
172 export JAVA_HOME=/usr/lib/jvm/jdk1.8
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/core-site.xml to modify the core-site.xml configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Name of the HDFS nameservice -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>
  <!-- Temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/ronnie/hadoop/tmp</value>
  </property>
  <!-- ZooKeeper quorum -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node03:2181,node04:2181,node05:2181</value>
  </property>
  <!-- Maximum number of retries for IPC connections from the NameNode to the JournalNodes -->
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>100</value>
    <description>Indicates the number of retries a client will make to establish a server connection.
    </description>
  </property>
  <!-- Retry interval for IPC connections from the NameNode to the JournalNodes -->
  <property>
    <name>ipc.client.connect.retry.interval</name>
    <value>10000</value>
    <description>Indicates the number of milliseconds a client will wait for before retrying to establish.
    </description>
  </property>
  <!-- Enable the trash feature and set the deletion interval (minutes) -->
  <property>
    <name>fs.trash.interval</name>
    <value>360</value>
    <description>
      Trash deletion interval in minutes. If zero, the trash feature is disabled.
    </description>
  </property>
  <!-- Trash checkpoint interval (minutes); defaults to fs.trash.interval if unset -->
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>60</value>
    <description>
      Trash checkpoint interval in minutes. If zero, the deletion interval is used.
    </description>
  </property>
  <!-- The following parameters are used when configuring Oozie -->
  <property>
    <name>hadoop.proxyuser.deplab.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.deplab.hosts</name>
    <value>*</value>
  </property>
</configuration>
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/hdfs-site.xml to modify hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- The HDFS nameservice is ns; it must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns</value>
  </property>
  <!-- ns has two NameNodes, nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns.nn1</name>
    <value>node01:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns.nn1</name>
    <value>node01:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns.nn2</name>
    <value>node02:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns.nn2</name>
    <value>node02:50070</value>
  </property>
  <!-- Where the NameNode edits metadata is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node03:8485;node04:8485;node05:8485/ns</value>
  </property>
  <!-- Local disk location where the JournalNodes store their data -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/var/ronnie/hadoop/jdata</value>
  </property>
  <!-- Enable automatic failover when a NameNode fails -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Implementation used for automatic failover -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods; multiple mechanisms are separated by newlines, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- sshfence requires passwordless SSH login -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- Timeout for the sshfence mechanism -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/mapred-site.xml to modify mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Use YARN as the MapReduce framework -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/yarn-site.xml to modify yarn-site.xml:
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Cluster id of the RM -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- Logical names of the RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Hostnames of rm1 and rm2 -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node02</value>
  </property>
  <!-- ZooKeeper quorum address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node03:2181,node04:2181,node05:2181</value>
  </property>
  <!-- Configure the shuffle service -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
vim /opt/ronnie/hadoop-3.1.2/etc/hadoop/workers to modify the workers file:
node01
node02
node03
node04
node05
vim /opt/ronnie/hadoop-3.1.2/sbin/start-dfs.sh
vim /opt/ronnie/hadoop-3.1.2/sbin/stop-dfs.sh
Add the following at the top of both files:
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root
vim /opt/ronnie/hadoop-3.1.2/sbin/start-yarn.sh
vim /opt/ronnie/hadoop-3.1.2/sbin/stop-yarn.sh
Add the following at the top of both files:
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Send the configured Hadoop directory to the other nodes:
scp -r /opt/ronnie/hadoop-3.1.2/ [email protected]:/opt/ronnie/
scp -r /opt/ronnie/hadoop-3.1.2/ [email protected]:/opt/ronnie/
scp -r /opt/ronnie/hadoop-3.1.2/ [email protected]:/opt/ronnie/
scp -r /opt/ronnie/hadoop-3.1.2/ [email protected]:/opt/ronnie/
Start the cluster
First start the ZooKeeper cluster:
root@node03:/var/ronnie/zookeeper# /opt/ronnie/zookeeper/bin/zkServer.sh start
root@node04:/var/ronnie/zookeeper# /opt/ronnie/zookeeper/bin/zkServer.sh start
root@node05:/var/ronnie/zookeeper# /opt/ronnie/zookeeper/bin/zkServer.sh start
Start a JournalNode on each of node03, node04 and node05:
root@node03:~# /opt/ronnie/hadoop-3.1.2/sbin/hadoop-daemon.sh start journalnode
root@node04:~# /opt/ronnie/hadoop-3.1.2/sbin/hadoop-daemon.sh start journalnode
root@node05:~# /opt/ronnie/hadoop-3.1.2/sbin/hadoop-daemon.sh start journalnode
View the processes with jps:
root@node03:~# jps
6770 Jps
6724 JournalNode
6616 QuorumPeerMain
Format one of the two NameNodes (node01 is chosen here):
root@node01:~# hdfs namenode -format
If an error is shown, the configuration files are wrong; go back, fix them and retry.
Start the NameNode:
hdfs --daemon start namenode
Check with jps whether it is up:
root@node01:~# jps
5622 Jps
5549 NameNode
Synchronize the formatting information to the other NameNode:
root@node02:~# hdfs namenode -bootstrapStandby
Format ZKFC on node01 (this needs to be run only once):
root@node01:~# hdfs zkfc -formatZK
Start the HDFS cluster:
root@node01:~# start-dfs.sh
Open node01's port 50070 (the NameNode web UI) in a browser.
Start the YARN cluster:
root@node01:~# start-yarn.sh
Open node01's port 8088 (the ResourceManager web UI) in a browser.
With that, the setup is complete and successful.
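Once everything is up, the HA state can also be confirmed from the command line; hdfs haadmin and yarn rmadmin are standard Hadoop 3.x tools. The loop below only prints the commands (a dry run), since they need the live cluster; run them directly on node01, where one of each pair should report "active" and the other "standby":

```shell
# Dry run: print the HA verification commands for the NameNodes (nn1/nn2)
# and ResourceManagers (rm1/rm2) configured above.
for cmd in \
  "hdfs haadmin -getServiceState nn1" \
  "hdfs haadmin -getServiceState nn2" \
  "yarn rmadmin -getServiceState rm1" \
  "yarn rmadmin -getServiceState rm2"
do
  echo "$cmd"
done
```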