Hadoop cluster installation, step by step

Prerequisites: install the JDK and disable the firewall.
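On CentOS 6 (the distribution used here), the firewall can be stopped and kept off like this; a minimal sketch, adjust for your distribution:

[hadoop@xuniji ~]$ sudo service iptables stop        // stop the firewall for the current session
[hadoop@xuniji ~]$ sudo chkconfig iptables off       // keep it disabled across reboots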
Copy the hadoop-2.6.4.tar.gz archive onto the Linux machine:

[hadoop@xuniji ~]$ rz
rz waiting to receive.
Starting zmodem transfer.  Press Ctrl+C to cancel.
  100%  176575 KB 11771 KB/s 00:00:15       0 Errors
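rz comes from the lrzsz package and needs a ZMODEM-capable terminal. If it is not installed, a plain scp from your workstation achieves the same thing (the IP here is a placeholder):

scp cenos-6.5-hadoop-2.6.4.tar.gz hadoop@192.168.100.xxx:~/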


[hadoop@xuniji ~]$ ll
total 176576
-rw-r--r--. 1 hadoop hadoop 180813065 Jul 14 15:22 cenos-6.5-hadoop-2.6.4.tar.gz
[hadoop@xuniji ~]$ mkdir apps                     // create a directory to hold Hadoop
[hadoop@xuniji ~]$ ll

total 176580
drwxrwxr-x. 2 hadoop hadoop      4096 Jul 15 20:22 apps
-rw-r--r--. 1 hadoop hadoop 180813065 Jul 14 15:22 cenos-6.5-hadoop-2.6.4.tar.gz

[hadoop@xuniji ~]$ tar -zxvf cenos-6.5-hadoop-2.6.4.tar.gz -C apps/              // extract Hadoop into the apps directory

[hadoop@xuniji apps]$ cd hadoop-2.6.4/
[hadoop@xuniji hadoop-2.6.4]$ ll

total 52
drwxrwxr-x. 2 hadoop hadoop  4096 Mar  8 2016 bin
drwxrwxr-x. 3 hadoop hadoop  4096 Mar  8 2016 etc
drwxrwxr-x. 2 hadoop hadoop  4096 Mar  8 2016 include
drwxrwxr-x. 3 hadoop hadoop  4096 Mar  8 2016 lib
drwxrwxr-x. 2 hadoop hadoop  4096 Mar  8 2016 libexec
-rw-r--r--. 1 hadoop hadoop 15429 Mar  8 2016 LICENSE.txt
-rw-r--r--. 1 hadoop hadoop   101 Mar  8 2016 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Mar  8 2016 README.txt
drwxrwxr-x. 2 hadoop hadoop  4096 Mar  8 2016 sbin
drwxrwxr-x. 4 hadoop hadoop  4096 Mar  8 2016 share
[hadoop@xuniji hadoop-2.6.4]$ cd etc/
[hadoop@xuniji etc]$ ll

total 4
drwxrwxr-x. 2 hadoop hadoop 4096 Mar  8 2016 hadoop
[hadoop@xuniji etc]$ cd hadoop
[hadoop@xuniji hadoop]$ ll

total 152
-rw-r--r--. 1 hadoop hadoop  4436 Mar  8 2016 capacity-scheduler.xml
-rw-r--r--. 1 hadoop hadoop  1335 Mar  8 2016 configuration.xsl
-rw-r--r--. 1 hadoop hadoop   318 Mar  8 2016 container-executor.cfg
-rw-r--r--. 1 hadoop hadoop   774 Mar  8 2016 core-site.xml
-rw-r--r--. 1 hadoop hadoop  3670 Mar  8 2016 hadoop-env.cmd
-rw-r--r--. 1 hadoop hadoop  4224 Mar  8 2016 hadoop-env.sh
-rw-r--r--. 1 hadoop hadoop  2598 Mar  8 2016 hadoop-metrics2.properties
-rw-r--r--. 1 hadoop hadoop  2490 Mar  8 2016 hadoop-metrics.properties
-rw-r--r--. 1 hadoop hadoop  9683 Mar  8 2016 hadoop-policy.xml
-rw-r--r--. 1 hadoop hadoop   775 Mar  8 2016 hdfs-site.xml
-rw-r--r--. 1 hadoop hadoop  1449 Mar  8 2016 httpfs-env.sh
-rw-r--r--. 1 hadoop hadoop  1657 Mar  8 2016 httpfs-log4j.properties
-rw-r--r--. 1 hadoop hadoop    21 Mar  8 2016 httpfs-signature.secret
-rw-r--r--. 1 hadoop hadoop   620 Mar  8 2016 httpfs-site.xml
-rw-r--r--. 1 hadoop hadoop  3523 Mar  8 2016 kms-acls.xml
-rw-r--r--. 1 hadoop hadoop  1325 Mar  8 2016 kms-env.sh
-rw-r--r--. 1 hadoop hadoop  1631 Mar  8 2016 kms-log4j.properties
-rw-r--r--. 1 hadoop hadoop  5511 Mar  8 2016 kms-site.xml
-rw-r--r--. 1 hadoop hadoop 11291 Mar  8 2016 log4j.properties
-rw-r--r--. 1 hadoop hadoop   938 Mar  8 2016 mapred-env.cmd
-rw-r--r--. 1 hadoop hadoop  1383 Mar  8 2016 mapred-env.sh
-rw-r--r--. 1 hadoop hadoop  4113 Mar  8 2016 mapred-queues.xml.template
-rw-r--r--. 1 hadoop hadoop   758 Mar  8 2016 mapred-site.xml.template
-rw-r--r--. 1 hadoop hadoop    10 Mar  8 2016 slaves
-rw-r--r--. 1 hadoop hadoop  2316 Mar  8 2016 ssl-client.xml.example
-rw-r--r--. 1 hadoop hadoop  2268 Mar  8 2016 ssl-server.xml.example
-rw-r--r--. 1 hadoop hadoop  2237 Mar  8 2016 yarn-env.cmd
-rw-r--r--. 1 hadoop hadoop  4567 Mar  8 2016 yarn-env.sh
-rw-r--r--. 1 hadoop hadoop   690 Mar  8 2016 yarn-site.xml
[hadoop@xuniji hadoop]$ echo $JAVA_HOME                      // check where your JDK is installed

/usr/local/jdk1.7.0_72

[hadoop@xuniji hadoop]$ vi hadoop-env.sh

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.

export JAVA_HOME=/usr/local/jdk1.7.0_72/             // change this to your own JDK install path, then save and quit
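If you are scripting the installation rather than editing by hand, the same change can be made non-interactively; a sketch, assuming the stock hadoop-env.sh where the line reads "export JAVA_HOME=${JAVA_HOME}":

[hadoop@xuniji hadoop]$ sed -i 's#^export JAVA_HOME=.*#export JAVA_HOME=/usr/local/jdk1.7.0_72#' hadoop-env.sh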

[hadoop@xuniji hadoop]$ vi core-site.xml                           // add the following configuration

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.100.×××:9000</value>   <!-- 192.168.100.××× is this machine's IP address -->
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hdpdata</value>
    </property>
</configuration>
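Once HADOOP_HOME is on the PATH (set in the /etc/profile step below), you can confirm that Hadoop actually reads this value; a quick sanity check:

[hadoop@xuniji hadoop]$ hdfs getconf -confKey fs.defaultFS
hdfs://192.168.100.×××:9000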

[hadoop@xuniji hadoop]$ vi hdfs-site.xml 

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

[hadoop@xuniji hadoop]$ mv mapred-site.xml.template mapred-site.xml      // Hadoop ignores the .template file, so rename it before editing
[hadoop@xuniji hadoop]$ vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

[hadoop@xuniji hadoop]$ vi yarn-site.xml 

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.100.130</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>


[hadoop@xuniji ~]$ scp -r apps 192.168.100.xxx:/home/hadoop/      // copy apps (the directory holding Hadoop) to the other cluster nodes

[hadoop@xuniji ~]$ cd apps/
[hadoop@xuniji apps]$ ll

total 4
drwxrwxr-x. 9 hadoop hadoop 4096 Mar  8 2016 hadoop-2.6.4
[hadoop@xuniji apps]$ cd hadoop-2.6.4/
[hadoop@xuniji hadoop-2.6.4]$ pwd

/home/hadoop/apps/hadoop-2.6.4
[hadoop@xuniji hadoop-2.6.4]$ sudo vi /etc/profile            // add the following environment variables

export JAVA_HOME=/usr/local/jdk1.7.0_72

export HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

[hadoop@xuniji hadoop-2.6.4]$ sudo scp /etc/profile 192.168.100.xxx:/etc/        // push profile to the other nodes
[sudo] password for hadoop: 

profile                                                        100% 2116     2.1KB/s   00:00  

[hadoop@xuniji hadoop-2.6.4]$ source /etc/profile          // run this on every node (or log in again) so the new variables take effect
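At this point the hadoop command should resolve from anywhere; a quick check (only the first line of output shown):

[hadoop@xuniji hadoop-2.6.4]$ hadoop version
Hadoop 2.6.4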

[hadoop@xuniji hadoop-2.6.4]$ cd

[hadoop@xuniji ~]$ hadoop namenode -format            // this form is deprecated in 2.x; hdfs namenode -format is the current equivalent, but both work

(I ran into a permission problem accessing the JDK here; the fix is written up in another post.)

Link: https://blog.csdn.net/qq_39212193/article/details/81052716

18/07/16 19:09:37 INFO namenode.FSImage: Allocated new BlockPoolId: BP-782112299-127.0.0.1-1531739377410
18/07/16 19:09:37 INFO common.Storage: Storage directory /home/hadoop/hdpdata/dfs/name has been successfully formatted.      // seeing this line near the end (the working directory we configured above) means the format succeeded
18/07/16 19:09:38 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/07/16 19:09:38 INFO util.ExitUtil: Exiting with status 0
18/07/16 19:09:38 INFO namenode.NameNode: SHUTDOWN_MSG:
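To double-check the format, the name directory should now hold an initial fsimage; the listing should look roughly like this:

[hadoop@xuniji ~]$ ls /home/hadoop/hdpdata/dfs/name/current
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION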

Start the cluster

[hadoop@linux2 ~]$ hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/apps/hadoop-2.6.4/logs/hadoop-hadoop-datanode-linux2.out
[hadoop@linux2 ~]$ jps
2744 Jps
2711 DataNode

[hadoop@linux3 ~]$ hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/apps/hadoop-2.6.4/logs/hadoop-hadoop-datanode-linux3.out
[hadoop@linux3 ~]$ jps

2769 Jps                // no DataNode here

Fix:

[hadoop@linux3 ~]$ cd  /home/hadoop/apps/hadoop-2.6.4/logs
[hadoop@linux3 logs]$ ll

total 28
-rw-rw-r--. 1 hadoop hadoop 16588 Jul 16 21:08 hadoop-hadoop-datanode-linux3.log
-rw-rw-r--. 1 hadoop hadoop   718 Jul 16 21:07 hadoop-hadoop-datanode-linux3.out
-rw-rw-r--. 1 hadoop hadoop   934 Jul 16 20:56 hadoop-hadoop-datanode-linux3.out.1
-rw-rw-r--. 1 hadoop hadoop     0 Jul 16 21:07 SecurityAuth-hadoop.audit

[hadoop@linux3 logs]$ less hadoop-hadoop-datanode-linux3.log

2018-07-16 21:07:38,044 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = java.net.UnknownHostException: linux3: linux3      // the hostname cannot be resolved

[hadoop@linux3 logs]$ sudo vi /etc/hosts      // add the host entry

192.168.100.xxx   linux3
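The same UnknownHostException can hit any machine, so it is safest to give every node a complete hosts table covering the master and all slaves; a sketch (substitute the real IPs):

192.168.100.xxx   xuniji
192.168.100.xxx   linux1
192.168.100.xxx   linux2
192.168.100.xxx   linux3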

Starting the DataNode again now works:

[hadoop@linux3 ~]$ hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/apps/hadoop-2.6.4/logs/hadoop-hadoop-datanode-linux3.out
[hadoop@linux3 ~]$ jps
2839 DataNode
2896 Jps

With several machines in the cluster, starting each daemon by hand is inconvenient. Hadoop ships a script that starts every node at once; first list the worker hosts in the slaves file so the script knows where to reach:

[hadoop@xuniji etc]$ cd hadoop/
[hadoop@xuniji hadoop]$ ll

[hadoop@xuniji hadoop]$ vi slaves 

linux1
linux2
linux3

[hadoop@xuniji hadoop]$ ssh-keygen                 // generate an SSH key pair; accept the defaults

[hadoop@xuniji hadoop]$ ssh-copy-id linux3         // copy the public key to linux3
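The start script logs in to every host listed in slaves over SSH, so the public key has to reach each of them, not just linux3; a short loop covers the whole list:

[hadoop@xuniji hadoop]$ for h in linux1 linux2 linux3; do ssh-copy-id "$h"; done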

[hadoop@xuniji hadoop]$ ssh linux1          // check that passwordless login to the other hosts works
Last login: Mon Jul 16 05:50:58 2018 from 192.168.100.1
[hadoop@linux1 ~]$ exit
logout

Connection to linux1 closed.

[hadoop@xuniji hadoop]$ cd ../../sbin
[hadoop@xuniji sbin]$ ll

[hadoop@xuniji sbin]$ start-all.sh            // start the cluster; output like the following means it came up

starting yarn daemons
starting resourcemanager, logging to /home/hadoop/apps/hadoop-2.6.4/logs/yarn-hadoop-resourcemanager-xuniji.out
linux3: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.6.4/logs/yarn-hadoop-nodemanager-linux3.out
linux1: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.6.4/logs/yarn-hadoop-nodemanager-linux1.out
linux2: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.6.4/logs/yarn-hadoop-nodemanager-linux2.out
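To verify, jps on the master should show the NameNode, SecondaryNameNode and ResourceManager, and jps on each slave a DataNode and a NodeManager; roughly:

[hadoop@xuniji sbin]$ jps                  // expect NameNode, SecondaryNameNode, ResourceManager, Jps
[hadoop@xuniji sbin]$ ssh linux2 jps       // expect DataNode, NodeManager, Jps

The web UIs are another quick check: in Hadoop 2.6 the NameNode listens on port 50070 and the ResourceManager on port 8088 by default.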


Reposted from blog.csdn.net/qq_39212193/article/details/81051846