Building a Hadoop Image and Running It in Docker

  Rather than reinvent the wheel, this article repackages an existing Docker Hadoop image.
  The software a Hadoop cluster depends on is the JDK, SSH, and so on; as long as these dependencies and the Hadoop distribution itself are packaged into the image, it can serve as the basis for a cluster.

Cluster architecture

Configuration file preparation

1. Hadoop configuration files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, slaves, hadoop-env.sh (a sketch of core-site.xml follows below)
2. SSH configuration file: ssh_config
3. Hadoop cluster startup script: start-hadoop.sh
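
As an illustration of the first item, a minimal core-site.xml for this setup might look like the following. The hostname matches the master container started later in this article; the NameNode port 9000 is an assumption, not taken from the original:

<?xml version="1.0"?>
<configuration>
  <!-- Point every node at the NameNode on the master container -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://solinx-hadoop-master:9000</value>
  </property>
</configuration>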

Building the image

1. Install dependencies

RUN apt-get update && \
  apt-get install -y openssh-server openjdk-8-jdk wget
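
The snippets in this section come from a single Dockerfile. Since apt-get is used, they assume a Debian/Ubuntu parent image; the opening line (not shown in the original) would be something like:

FROM ubuntu:16.04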

2. Download and unpack the Hadoop release

RUN wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.10.0/hadoop-2.10.0.tar.gz && \
  tar -xzvf hadoop-2.10.0.tar.gz && \
  mv hadoop-2.10.0 /usr/local/hadoop && \
  rm hadoop-2.10.0.tar.gz && \
  rm -rf /usr/local/hadoop/share/doc

3. Configure environment variables

ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 
ENV HADOOP_HOME=/usr/local/hadoop 
ENV PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin

4. Generate an SSH key so the nodes can log in to each other without a password

RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -P '' && \
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
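
The ssh_config file installed in the next step typically disables strict host key checking, so the nodes can connect to each other without interactive prompts. A minimal sketch (assumed, not reproduced in the original):

Host *
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no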

5. Create the Hadoop data directories, copy in the configuration files, and add execute permissions to the relevant scripts; finally, format the NameNode. The SSH service is started when each node starts.

RUN mkdir -p ~/hdfs/namenode && \
  mkdir -p ~/hdfs/datanode && \
  mkdir $HADOOP_HOME/logs
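
These are the directories hdfs-site.xml would point at. A sketch, assuming the container runs as root (so ~ is /root) and a replication factor of 2 to match the two slaves:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///root/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///root/hdfs/datanode</value>
  </property>
</configuration>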

COPY config/* /tmp/

# Copy the SSH and Hadoop configuration files into place
RUN mv /tmp/ssh_config ~/.ssh/config && \
  mv /tmp/hadoop-env.sh $HADOOP_HOME/etc/hadoop/hadoop-env.sh && \
  mv /tmp/hdfs-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml && \
  mv /tmp/core-site.xml $HADOOP_HOME/etc/hadoop/core-site.xml && \
  mv /tmp/mapred-site.xml $HADOOP_HOME/etc/hadoop/mapred-site.xml && \
  mv /tmp/yarn-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml && \
  mv /tmp/slaves $HADOOP_HOME/etc/hadoop/slaves && \
  mv /tmp/start-hadoop.sh ~/start-hadoop.sh && \
  mv /tmp/run-wordcount.sh ~/run-wordcount.sh
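
Of these files, slaves is simply a list of worker hostnames, which here would match the slave containers started later:

solinx-hadoop-slave1
solinx-hadoop-slave2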

# Add execute permissions
RUN chmod +x ~/start-hadoop.sh && \
chmod +x ~/run-wordcount.sh && \
chmod +x $HADOOP_HOME/sbin/start-dfs.sh && \
chmod +x $HADOOP_HOME/sbin/start-yarn.sh 

# format namenode
RUN /usr/local/hadoop/bin/hdfs namenode -format

Build and verify the image
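
A sketch of the equivalent commands, with the image tag taken from the run commands below:

docker build -t solinx/hadoop:0.1 .
docker images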

Run Hadoop cluster in Docker

  Once the image has been built from the Dockerfile above, it can be used to bring up a Hadoop cluster: one master node and two slave nodes.

Create a bridge network:

docker network create --driver=bridge solinx-hadoop

Start Master node:

docker run -itd --net=solinx-hadoop -p 10070:50070 -p 8088:8088 --name solinx-hadoop-master --hostname solinx-hadoop-master solinx/hadoop:0.1

Start Slave1 node:

docker run -itd --net=solinx-hadoop --name solinx-hadoop-slave1 --hostname solinx-hadoop-slave1 solinx/hadoop:0.1

Start Slave2 node:

docker run -itd --net=solinx-hadoop --name solinx-hadoop-slave2 --hostname solinx-hadoop-slave2 solinx/hadoop:0.1

Attach to the master node and run the Hadoop cluster startup script:
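
For example, using the container name chosen above:

# attach to the master container
docker exec -it solinx-hadoop-master /bin/bash
# then, inside the container:
./start-hadoop.sh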

The startup script brings up HDFS and YARN across the cluster.
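
The original does not reproduce start-hadoop.sh; given that start-dfs.sh and start-yarn.sh are made executable in the Dockerfile, a minimal version might be:

#!/bin/bash
# Assumed sketch: bring up HDFS, then YARN, across the cluster
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh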

View HDFS: with the port mappings used when starting the master, the NameNode web UI (port 50070 inside the container) is reachable on the host at port 10070, and the YARN ResourceManager UI at port 8088.

