Hadoop Cluster Setup with Docker

Use a Tencent Cloud host and Docker to build a test cluster environment.

Environment

1. Operating system: CentOS 7.2 64-bit

Network Setup

hostname         IP
cluster-master   172.18.0.2
cluster-slave1   172.18.0.3
cluster-slave2   172.18.0.4
cluster-slave3   172.18.0.5

Docker installation

curl -sSL https://get.daocloud.io/docker | sh

## switch the image source to a mirror
### see this article for reference: http://www.jianshu.com/p/34d3b4568059
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://67e93489.m.daocloud.io

## enable docker to start on boot
systemctl enable docker
systemctl start docker

Pull the CentOS Image

docker pull daocloud.io/library/centos:latest

Use docker images to view the downloaded image.

Create the containers

According to the cluster architecture, each container needs a fixed IP. Before creating the containers, first use the following command to create a Docker subnet that supports fixed IPs:

docker network create --subnet=172.18.0.0/16 netgroup
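
You can verify that the subnet was created and inspect its address range with:

docker network inspect netgroup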

Once the subnet is created, you can create containers with fixed IPs.

#cluster-master
# -p maps ports on the Docker host to the container; used later to reach the web management pages
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-master -h cluster-master -p 18088:18088 -p 9870:9870 --net netgroup --ip 172.18.0.2 daocloud.io/library/centos /usr/sbin/init

#cluster-slaves
docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave1 -h cluster-slave1 --net netgroup --ip 172.18.0.3 daocloud.io/library/centos /usr/sbin/init

docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave2 -h cluster-slave2 --net netgroup --ip 172.18.0.4 daocloud.io/library/centos /usr/sbin/init

docker run -d --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup --name cluster-slave3 -h cluster-slave3 --net netgroup --ip 172.18.0.5 daocloud.io/library/centos /usr/sbin/init

Open a console and enter the Docker container:

docker exec -it cluster-master /bin/bash
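
The slave containers can be entered the same way whenever a command needs to be run on them directly, for example:

docker exec -it cluster-slave1 /bin/bash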

Install OpenSSH and Set Up Passwordless Login

1. Install on cluster-master:

# cluster-master needs its configuration file modified (special case)
#cluster-master

# install openssh
[root@cluster-master /]# yum -y install openssh openssh-server openssh-clients

[root@cluster-master /]# systemctl start sshd
#### make ssh accept new public keys automatically
#### configure the master so ssh logins add hosts to known_hosts automatically
[root@cluster-master /]# vi /etc/ssh/ssh_config
# find the original line: StrictHostKeyChecking ask
# set StrictHostKeyChecking to no
# save the file
[root@cluster-master /]# systemctl restart sshd
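
If you prefer not to edit the file interactively, the same change can be made with a one-line sed command; a sketch that assumes the stock CentOS 7 ssh_config, where the option is present but commented out:

# replace the (possibly commented) StrictHostKeyChecking line with "StrictHostKeyChecking no"
[root@cluster-master /]# sed -i 's/^#\?[[:space:]]*StrictHostKeyChecking.*/StrictHostKeyChecking no/' /etc/ssh/ssh_config
[root@cluster-master /]# systemctl restart sshd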

2. Install OpenSSH on the slaves

# install openssh
[root@cluster-slave1 /]#yum -y install openssh openssh-server openssh-clients

[root@cluster-slave1 /]# systemctl start sshd

3. Distribute the cluster-master public key

On the master, run ssh-keygen -t rsa and press Enter through all the prompts. When it finishes, the ~/.ssh directory contains id_rsa (the private key) and id_rsa.pub (the public key). Then redirect id_rsa.pub into the authorized_keys file:

ssh-keygen -t rsa
# press Enter at every prompt

[root@cluster-master /]# cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys

After the file is generated, use scp to distribute the public key file from the master to the cluster slaves:

[root@cluster-master /]# ssh root@cluster-slave1 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave1:~/.ssh
[root@cluster-master /]# ssh root@cluster-slave2 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave2:~/.ssh
[root@cluster-master /]# ssh root@cluster-slave3 'mkdir ~/.ssh'
[root@cluster-master /]# scp ~/.ssh/authorized_keys root@cluster-slave3:~/.ssh
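
If ssh-copy-id is available (it ships with openssh-clients), the mkdir and key append can be done in one step instead; a sketch:

for h in cluster-slave1 cluster-slave2 cluster-slave3; do
    ssh-copy-id root@$h
done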

When distribution is complete, test that you can log in without entering a password (e.g. ssh root@cluster-slave1).

Ansible installation

[root@cluster-master /]# yum -y install epel-release
[root@cluster-master /]# yum -y install ansible
# this installs ansible's configuration under the /etc/ansible directory

Now edit ansible's hosts file:

vi /etc/ansible/hosts
[cluster]
cluster-master
cluster-slave1
cluster-slave2
cluster-slave3

[master]
cluster-master

[slaves]
cluster-slave1
cluster-slave2
cluster-slave3
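
A quick way to confirm that ansible can reach every node over ssh is the ping module:

ansible cluster -m ping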

Configuring the Docker container hosts file

Since /etc/hosts is rewritten every time the container starts, changes made directly inside the container are lost after a restart. To make the containers pick up the cluster hosts on every restart, rewrite the hosts file after the container starts by adding the following to ~/.bashrc:

:>/etc/hosts
cat >>/etc/hosts<<EOF
127.0.0.1   localhost
172.18.0.2  cluster-master
172.18.0.3  cluster-slave1
172.18.0.4  cluster-slave2
172.18.0.5  cluster-slave3
EOF
source ~/.bashrc

Once the configuration takes effect, you can see that /etc/hosts has been changed to the required content:

[root@cluster-master ansible]# cat /etc/hosts
127.0.0.1   localhost
172.18.0.2  cluster-master
172.18.0.3  cluster-slave1
172.18.0.4  cluster-slave2
172.18.0.5  cluster-slave3

Distribute .bashrc to the cluster slaves with ansible:

ansible cluster -m copy -a "src=~/.bashrc dest=~/"
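
The copy module only places the file; the hosts rewrite runs the next time a bash shell starts on each node. To apply it immediately, you can also run it once through the shell module; a sketch:

ansible cluster -m shell -a "bash -c 'source ~/.bashrc'"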

Configure the Software Environment

Download JDK 1.8 and unpack it into the /opt directory.
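
For example, assuming a hypothetical JDK archive name (substitute whatever file you actually downloaded); the directory is renamed to match the JAVA_HOME=/opt/jdk8 used below:

cd /opt
tar -xzvf jdk-8u201-linux-x64.tar.gz   # hypothetical archive name
mv jdk1.8.0_201 jdk8                   # so that JAVA_HOME=/opt/jdk8 resolves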

Download Hadoop 3 into the /opt directory, extract the installation package, and create a symlink:

tar -xzvf hadoop-3.2.0.tar.gz
ln -s hadoop-3.2.0 hadoop

Configure the Java and Hadoop environment variables

Edit the ~/.bashrc file:

# hadoop
export HADOOP_HOME=/opt/hadoop-3.2.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

#java
export JAVA_HOME=/opt/jdk8
export PATH=$JAVA_HOME/bin:$PATH

Make the file take effect:

source .bashrc
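
A quick sanity check that both tools are now on the PATH:

java -version
hadoop version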

Edit the configuration files required for Hadoop to run:

cd $HADOOP_HOME/etc/hadoop/

1. Modify core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <!-- file system properties -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://cluster-master:9000</value>
    </property>
    <property>
    <name>fs.trash.interval</name>
        <value>4320</value>
    </property>
</configuration>

2. Modify hdfs-site.xml

<configuration>
<property>
   <name>dfs.namenode.name.dir</name>
   <value>/home/hadoop/tmp/dfs/name</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>/home/hadoop/data</value>
 </property>
 <property>
   <name>dfs.replication</name>
   <value>3</value>
 </property>
 <property>
   <name>dfs.webhdfs.enabled</name>
   <value>true</value>
 </property>
 <property>
   <name>dfs.permissions.superusergroup</name>
   <value>staff</value>
 </property>
 <property>
   <name>dfs.permissions.enabled</name>
   <value>false</value>
 </property>
 </configuration>

3. Modify mapred-site.xml

<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
    <name>mapred.job.tracker</name>
    <value>cluster-master:9001</value>
</property>
<property>
  <name>mapreduce.jobtracker.http.address</name>
  <value>cluster-master:50030</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>cluster-master:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>cluster-master:19888</value>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/jobhistory/done</value>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/jobhistory/done_intermediate</value>
</property>
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>
</configuration>

4. Modify yarn-site.xml

<configuration>
    <property>
   <name>yarn.resourcemanager.hostname</name>
   <value>cluster-master</value>
 </property>
 <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
 </property>
 <property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>
 <property>
   <name>yarn.resourcemanager.address</name>
   <value>cluster-master:18040</value>
 </property>
<property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>cluster-master:18030</value>
 </property>
 <property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>cluster-master:18025</value>
 </property>
 <property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>cluster-master:18141</value>
 </property>
<property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>cluster-master:18088</value>
 </property>
<property>
   <name>yarn.log-aggregation-enable</name>
   <value>true</value>
 </property>
<property>
   <name>yarn.log-aggregation.retain-seconds</name>
   <value>86400</value>
 </property>
<property>
   <name>yarn.log-aggregation.retain-check-interval-seconds</name>
   <value>86400</value>
 </property>
<property>
   <name>yarn.nodemanager.remote-app-log-dir</name>
   <value>/tmp/logs</value>
 </property>
<property>
   <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
   <value>logs</value>
 </property>
</configuration>

Package Hadoop and distribute it to the slaves

tar -cvf hadoop-dis.tar hadoop hadoop-3.2.0
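# run the tar command above from /opt so the archive is created at /opt/hadoop-dis.tar, the src path used by the playbook below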

Use ansible-playbook to distribute .bashrc and hadoop-dis.tar to the slave hosts:

---
- hosts: cluster
  tasks:
    - name: copy .bashrc to slaves
      copy: src=~/.bashrc dest=~/
      notify:
        - exec source
    - name: copy hadoop-dis.tar to slaves
      unarchive: src=/opt/hadoop-dis.tar dest=/opt

  handlers:
    - name: exec source
      shell: source ~/.bashrc

Save the YAML above as hadoop-dis.yaml and run:

ansible-playbook hadoop-dis.yaml

hadoop-dis.tar is automatically extracted into the /opt directory on the slave hosts.

Hadoop startup

Formatting namenode

hadoop namenode -format
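
In Hadoop 3 the hadoop namenode form is deprecated in favor of the hdfs command; the equivalent invocation is:

hdfs namenode -format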

If the output contains a message saying the storage directory was successfully formatted, the format succeeded.

Start the cluster

cd $HADOOP_HOME/sbin
start-all.sh

After starting, use the jps command to check whether the daemons started successfully.
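
For example (a sketch; the exact process list depends on your configuration):

# run on each node; on the master you would typically see NameNode, SecondaryNameNode and ResourceManager,
# on the slaves DataNode and NodeManager
jps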

Note:
If the datanode service fails to start on the slave nodes, inspect the directory structure on the slaves. In practice the directories specified in the configuration files had not been created, for example hadoop.tmp.dir in core-site.xml:

<property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>

and these entries in hdfs-site.xml:

<property>
   <name>dfs.namenode.name.dir</name>
   <value>/home/hadoop/tmp/dfs/name</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>/home/hadoop/data</value>
 </property>

Create these directories manually on each node, then on the master delete the files in these directories and under the $HADOOP_HOME/logs directory, and then reformat the namenode.
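
The directory creation and cleanup can be driven from the master with the ansible inventory configured earlier; a rough sketch:

# create the configured directories on every node
ansible cluster -m file -a "path=/home/hadoop/tmp/dfs/name state=directory"
ansible cluster -m file -a "path=/home/hadoop/data state=directory"
# on the master, clear the stale data and logs before reformatting
rm -rf /home/hadoop/tmp/* /home/hadoop/data/* $HADOOP_HOME/logs/*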

hadoop namenode -format

Start the cluster services again:

start-all.sh

Then check the slave nodes; the datanode service should now be running on them.

Verify the Services

Access

http://host:18088
http://host:9870

in a browser to check whether the services are up.
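
From the Docker host you can also probe the mapped ports directly; 18088 is the YARN ResourceManager web UI configured in yarn-site.xml and 9870 is the Hadoop 3 NameNode web UI:

curl -I http://localhost:18088
curl -I http://localhost:9870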

Reprinted from: https://www.jianshu.com/p/d7fa21504784


Origin: www.cnblogs.com/coolwxb/p/10975352.html