Hadoop - cluster environment setup

1. Basic configuration

1. Server distribution and server names

192.168.1.1   primary NameNode   master
192.168.1.2   DataNode 1         slave1
192.168.1.3   DataNode 2         slave2

The command to temporarily change the hostname is (root privileges):
hostname <newname>
A permanent change requires modification of the configuration file /etc/sysconfig/network.

HOSTNAME=master
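
On systems managed by systemd (the firewall commands later in this guide use systemctl), the hostname can also be changed permanently with hostnamectl; this is an alternative, not a required extra step:

hostnamectl set-hostname master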

2. hosts file settings

In the "/etc/hosts" file of each server, add the following:

192.168.1.1   master
192.168.1.2   slave1
192.168.1.3   slave2

3. SSH password-free login

Two-way passwordless SSH access must be set up between the master and every slave. Strictly speaking it is not required between the slaves themselves, but to keep the steps simple, passwordless login between slaves is configured here as well.

3.1 Basic Services

Two packages are required: openssh and rsync. To check whether they are installed:

rpm -qa | grep openssh

rpm -qa | grep rsync
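
If either package is missing, it can usually be installed with yum (assuming a CentOS/RHEL-style system, which the use of /etc/sysconfig/network and the service command suggests):

yum install -y openssh-server openssh-clients rsync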

3.2 Generate public key and private key

Execute the command: ssh-keygen -t rsa -P ''
This generates a passwordless RSA key pair; when asked for the save path, press Enter to accept the default. The two files, id_rsa (the private key) and id_rsa.pub (the public key), are stored in the "~/.ssh" directory by default.

3.3 Write the trust file

Run the following command to write the public key to the trust file:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  

Then change the permissions of the authorized_keys file:

chmod 600 ~/.ssh/authorized_keys  
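
sshd is also strict about the permissions of the .ssh directory itself; if passwordless login still asks for a password later, make sure the directory is only writable by its owner:

chmod 700 ~/.ssh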

3.4 Configure the sshd service

Log in to the server as root and make sure the following settings are enabled in the SSH configuration file "/etc/ssh/sshd_config":

RSAAuthentication yes                      # enable RSA authentication

PubkeyAuthentication yes                   # enable public/private key authentication

AuthorizedKeysFile .ssh/authorized_keys    # path of the public key file (the same file written above)

After changing the settings, restart the SSH service so that they take effect:

service sshd restart  

3.5 Merge and distribute the trust file

Combine the contents of the authorized_keys files from all servers into a single authorized_keys file, then replace the original authorized_keys file on every server with it (see the sketch below).
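
A minimal sketch of one way to do this, assuming the same regular user (called hadoop here, purely for illustration) exists on all three machines and that each machine has already generated its own key pair:

# run on slave1 and on slave2: append the local public key to master's authorized_keys
ssh-copy-id hadoop@master

# run on master: push the merged file back out to both slaves
scp ~/.ssh/authorized_keys hadoop@slave1:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@slave2:~/.ssh/authorized_keys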

To verify that the configuration works, run the following command as a regular user:

ssh <hostname>

If you are logged in without being prompted for a password, the configuration succeeded.

2. Hadoop installation

1. JDK download and installation

tar -xvzf jdk-8u121-linux-x64.gz -C /usr/local
cd /usr/local
vi /etc/profile

Add:

export JAVA_HOME=/usr/local/jdk1.8.0_121

export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib

export PATH=$PATH:$JAVA_HOME/bin
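
After saving, reload the profile and check that the JDK is picked up (the exact version string depends on the build that was downloaded):

source /etc/profile
java -version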

2. Hadoop download and installation

tar -xvzf hadoop-2.7.3.tar.gz -C /usr/local

cd /usr/local

mv hadoop-2.7.3 hadoop

(rename the directory to hadoop)

cd hadoop

mkdir tmp

vi /etc/profile

Add:

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$HADOOP_HOME/bin
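
Reload the profile and confirm that the hadoop command is on the PATH:

source /etc/profile
hadoop version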

2.1 Configure hadoop-env.sh

The file is in the /usr/local/hadoop/etc/hadoop directory.

vi hadoop-env.sh

Add:

export JAVA_HOME=/usr/local/jdk1.8.0_121

export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib

2.2 Configure core-site.xml

The file is in the /usr/local/hadoop/etc/hadoop directory. Modify the Hadoop core configuration file core-site.xml; this is where the HDFS address and port are configured.

vi /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8082</value>
    </property>

</configuration>

2.3 Configure hdfs-site.xml

The file is in the /usr/local/hadoop/etc/hadoop directory. Modify the HDFS configuration; the replication factor defaults to 3 and is set to 2 here.

<configuration>

    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>

</configuration>

2.4 Configure mapred-env.sh

    export JAVA_HOME=/usr/local/jdk1.8.0_121

2.5 Configure mapred-site.xml

Modify Hadoop's MapReduce configuration file; this is where the JobTracker address and port are configured. The file is first created from the bundled template:

cp mapred-site.xml.template mapred-site.xml

 

<configuration>

    <property>
        <name>mapred.job.tracker</name>
        <value>http://master:9001</value>
    </property>

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

</configuration>

2.6 Configure yarn-env.sh

    export JAVA_HOME=/usr/local/jdk1.8.0_121

2.7 Configure yarn-site.xml

<configuration>

    <!-- Site specific YARN configuration properties -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>640800</value>
    </property>

</configuration>
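
Note: the file above does not say where the ResourceManager runs. On a multi-node cluster the NodeManagers on the slaves usually need this as well; the following property is not part of the original configuration, but adding it (assuming the ResourceManager is meant to run on master) is a common fix if the slaves fail to register with YARN:

    <property>
        <!-- hostname of the node running the ResourceManager (assumed to be master) -->
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>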

2.8 Configure the slaves file; add:

   master

   slave1

   slave2

 

2.9 Copy the configured hadoop directory to the corresponding location on each server (see the sketch below).
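
A minimal sketch of one way to do this from the master, assuming root can reach the slaves over SSH; the /etc/profile changes made earlier need to be replicated as well:

scp -r /usr/local/hadoop root@slave1:/usr/local/
scp -r /usr/local/hadoop root@slave2:/usr/local/
scp /etc/profile root@slave1:/etc/profile
scp /etc/profile root@slave2:/etc/profile
# if the JDK was only installed on the master, copy /usr/local/jdk1.8.0_121 the same way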

2.10 Run Hadoop

The firewall needs to be stopped on all nodes:

/bin/systemctl stop firewalld

(to keep firewalld from being started again at all, it can also be masked with systemctl mask firewalld; to merely stop it from starting at boot, use systemctl disable firewalld)

Start the cluster by running the following on the master node:

cd /usr/local/hadoop

(change to the hadoop directory)
hdfs namenode -format

(format HDFS; this only needs to be done once, before the first start)
sbin/start-dfs.sh

(start HDFS)

Check that the processes started correctly with jps: the master (which is also listed in slaves) should show NameNode, SecondaryNameNode, and DataNode, while slave1 and slave2 should show DataNode.

To view HDFS information:
hdfs dfsadmin -report
or: hdfs fsck / -files -blocks

For subsequent cluster maintenance (start/stop everything):
sbin/start-all.sh
sbin/stop-all.sh
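
In Hadoop 2.x, start-all.sh and stop-all.sh are deprecated wrappers around the HDFS and YARN scripts; the equivalent explicit form (which also starts YARN, needed for the yarn-site.xml settings above) is:

sbin/start-dfs.sh && sbin/start-yarn.sh
sbin/stop-yarn.sh && sbin/stop-dfs.sh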

Access the HDFS web UI at: http://192.168.1.1:50070/

(the firewall must be stopped for the page to be reachable)
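
Once YARN is running, the ResourceManager web UI configured above (yarn.resourcemanager.webapp.address) is reachable on port 8088:

http://192.168.1.1:8088/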
