Installing single-node HDFS

I. Prepare the machine

Machine #    Address          Ports
1            10.211.55.8      9000, 50070, 8088

II. Installation

Course link: http://www.roncoo.com/course/view/5a057438cc2a4231a8c245695faea238

1. Install the Java environment

Append the following to /etc/profile:

    export JAVA_HOME=/data/program/software/java8
    export JRE_HOME=/data/program/software/java8/jre
    export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
    export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Apply the changes: source /etc/profile
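A quick check that the JDK is now visible on the PATH:

    java -version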

2. Set the hostname

Map the IP to the hostname in /etc/hosts:

vi /etc/hosts

Add the line: 10.211.55.8 bigdata2
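The hosts entry only maps the name to the IP; to make the machine itself report bigdata2, a sketch assuming CentOS 6 (which matches the service/chkconfig commands used below):

    hostname bigdata2              # takes effect immediately
    vi /etc/sysconfig/network      # set HOSTNAME=bigdata2 to persist across reboots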

3. Disable the firewall

Stop the firewall: service iptables stop

Disable it permanently: chkconfig iptables off

Check the firewall status: service iptables status

4. Add the hadoop user and group

Create the group: groupadd hadoop

Create the hadoop user in the hadoop group: useradd -g hadoop hadoop

Set its password: passwd hadoop

5. Download and install Hadoop

    cd /data/program/software

    wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz

Extract it: tar -zxf hadoop-2.8.1.tar.gz

Give the hadoop user ownership of hadoop-2.8.1: chown -R hadoop:hadoop hadoop-2.8.1

6. Create the data directories

mkdir -p /data/dfs/name

mkdir -p /data/dfs/data

mkdir -p /data/tmp

Give the hadoop user ownership of /data: chown -R hadoop:hadoop /data

7. Configure etc/hadoop/core-site.xml

cd /data/program/software/hadoop-2.8.1, then edit etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata2:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/data/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>
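Once the daemons are up (step 14), you can confirm this setting was picked up with:

    bin/hdfs getconf -confKey fs.defaultFS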

8. Configure etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/dfs/name</value>
        <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/dfs/data</value>
        <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

9. Configure etc/hadoop/mapred-site.xml
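In a stock Hadoop 2.8.1 tarball this file ships only as a template, so copy it into place first (relative to the hadoop-2.8.1 directory from step 7):

    cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml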

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

10. Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

11. Configure etc/hadoop/slaves

bigdata2

12. Set the Hadoop environment variables

vi /etc/profile

HADOOP_HOME=/data/program/software/hadoop-2.8.1

PATH=$HADOOP_HOME/bin:$PATH

export HADOOP_HOME PATH

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export YARN_HOME=$HADOOP_HOME

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
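Reload the profile and confirm the hadoop command resolves:

    source /etc/profile
    hadoop version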

13. Passwordless SSH setup

Switch to the hadoop user: su hadoop

Typing cd alone jumps to the home directory /home/hadoop: cd

Create the .ssh directory: mkdir .ssh

Generate a key pair (press Enter at every prompt): ssh-keygen -t rsa

Enter the .ssh directory: cd .ssh

Copy the public key into the authorized list: cp id_rsa.pub authorized_keys

Go back to the home directory: cd ..

Set .ssh to mode 700: chmod 700 .ssh

Set the files inside .ssh to mode 600: chmod 600 .ssh/*

Test that login no longer asks for a password: ssh bigdata2

14. Run Hadoop

Format the NameNode first: bin/hadoop namenode -format

To see everything in action, start all services at once: sbin/start-all.sh

Check which daemons are running: jps
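On a healthy single-node start, jps should list roughly these processes (PIDs omitted):

    NameNode
    DataNode
    SecondaryNameNode
    ResourceManager
    NodeManager
    Jps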

HDFS web UI: http://10.211.55.8:50070

YARN nodes view: http://10.211.55.8:8088/cluster/nodes

15. Test

Create a directory: bin/hadoop fs -mkdir /test
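The next step assumes /home/hadoop/first.txt exists; any small text file works, for example:

    echo "hello hadoop" > /home/hadoop/first.txt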

Create a txt file and put it under /test: bin/hadoop fs -put /home/hadoop/first.txt /test

List the files in the directory: bin/hadoop fs -ls /test

If errors like the log below appear during startup, set JAVA_HOME to an absolute path in /data/program/software/hadoop-2.8.1/etc/hadoop/hadoop-env.sh (a one-line fix follows the log).

[hadoop@bigdata2 hadoop-2.8.1]$ sbin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

17/07/25 13:52:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>

Starting namenodes on [bigdata2]

bigdata2: Error: JAVA_HOME is not set and could not be found.

The authenticity of host 'localhost (::1)' can't be established.

RSA key fingerprint is 24:e2:40:a1:fd:ac:68:46:fb:6b:6b:ac:94:ac:05:e3.

Are you sure you want to continue connecting (yes/no)? bigdata2: Error: JAVA_HOME is not set and could not be found.

^Clocalhost: Host key verification failed.

Starting secondary namenodes [0.0.0.0]

0.0.0.0: Error: JAVA_HOME is not set and could not be found.
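The fix in hadoop-env.sh, reusing the JDK path from step 1:

    export JAVA_HOME=/data/program/software/java8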


Reposted from blog.csdn.net/wyqwilliam/article/details/85088899