Build your own Hadoop pseudo-distributed container with Docker

Docker is used more and more these days, so we have to keep up with it. Let's get started.

First, create a directory to hold the files we will need later, such as the Hadoop installation package and the JDK installation package. My directory structure is as follows:

zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ pwd
/soft/code/hadoopImages
zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ 

zhenghui@F117:/soft/code/hadoopImages$ ll
total 404136
drwxr-xr-x 2 zhenghui zhenghui      4096 Feb 12 09:26 ./
drwxr-xr-x 7 zhenghui zhenghui      4096 Feb 12 08:44 ../
-rw-r--r-- 1 zhenghui zhenghui      1083 Feb 12 09:08 Dockerfile
-rwxrwxrwx 1 zhenghui zhenghui 218720521 Oct  2 20:38 hadoop-2.7.7.tar.gz*
-rwxrwxrwx 1 zhenghui zhenghui 195094741 Oct  3 11:42 jdk-8u221-linux-x64.tar.gz*
zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ 
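If you don't have the two installation packages yet, Hadoop 2.7.7 can be pulled from the Apache archive; the JDK tarball has to be downloaded manually from Oracle's site, since it requires a login. A sketch:

mkdir -p /soft/code/hadoopImages
cd /soft/code/hadoopImages
# Hadoop 2.7.7 from the Apache archive
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz
# jdk-8u221-linux-x64.tar.gz: download manually from Oracle and place it here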

Edit the Dockerfile

zhenghui@F117:/soft/code/hadoopImages$ sudo vim Dockerfile 

It reads as follows:

FROM centos:7
MAINTAINER zhenghui<[email protected]>

ADD hadoop-2.7.7.tar.gz /usr/local/
ADD jdk-8u221-linux-x64.tar.gz /usr/local/

RUN yum -y install vim
RUN yum -y install net-tools

ENV MYLOGINPATH /usr/local
WORKDIR $MYLOGINPATH

ENV JAVA_HOME /usr/local/jdk1.8.0_221
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

ENV HADOOP_HOME /usr/local/hadoop-2.7.7

ENV PATH $PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin


EXPOSE 22
EXPOSE 50010
EXPOSE 50075
EXPOSE 50475
EXPOSE 50020
EXPOSE 50070
EXPOSE 50470
EXPOSE 8020
EXPOSE 8485
EXPOSE 8019
EXPOSE 8032
EXPOSE 8030
EXPOSE 8031
EXPOSE 8033
EXPOSE 8088
EXPOSE 8040
EXPOSE 8041
EXPOSE 10020
EXPOSE 19888
EXPOSE 60000
EXPOSE 60010
EXPOSE 60020
EXPOSE 60030
EXPOSE 2181
EXPOSE 2888
EXPOSE 3888
EXPOSE 9083
EXPOSE 10000
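As an aside, EXPOSE accepts several ports per instruction, so the long list above could equally be condensed to two lines:

EXPOSE 22 2181 2888 3888 8019 8020 8030 8031 8032 8033 8040 8041 8088 8485
EXPOSE 9083 10000 10020 19888 50010 50020 50070 50075 50470 50475 60000 60010 60020 60030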

Build the image

zhenghui@F117:/soft/code/hadoopImages$ sudo docker build -f Dockerfile -t myhadoop:0.1 .

Check that the image was built

zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
myhadoop            0.1                 21e9954de389        25 minutes ago      1.22GB
centos              7                   5e35e350aded        3 months ago        203MB
zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ 

Run a container from the image you just built

zhenghui@F117:/soft/code/hadoopImages$ sudo docker run -itd --name myhd -p 2222:22 myhadoop:0.1
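On Linux the container's IP (172.17.0.3, used later in this post) is reachable directly, but if you also want the NameNode web UI available through the host, you could additionally publish port 50070 (an optional variant of the command above):

sudo docker run -itd --name myhd -p 2222:22 -p 50070:50070 myhadoop:0.1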

Check whether it started successfully

zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                                                                                                                                                                                                                                                          NAMES
78d601d004df        myhadoop:0.1        "/bin/bash"         23 minutes ago      Up 23 minutes       2181/tcp, 2888/tcp, 3888/tcp, 8019-8020/tcp, 8030-8033/tcp, 8040-8041/tcp, 8088/tcp, 8485/tcp, 9083/tcp, 10000/tcp, 10020/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50470/tcp, 50475/tcp, 60000/tcp, 60010/tcp, 60020/tcp, 60030/tcp, 0.0.0.0:2222->22/tcp   myhd
zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ 

Enter the container

zhenghui@F117:/soft/code/hadoopImages$ 
zhenghui@F117:/soft/code/hadoopImages$ sudo docker exec -it myhd /bin/bash

Check that you are now inside the container

[root@78d601d004df local]# 
[root@78d601d004df local]# 
[root@78d601d004df local]# pwd
/usr/local
[root@78d601d004df local]# 

You can see the hadoop and jdk directories under /usr/local, which is now the current directory

[root@78d601d004df local]# 
[root@78d601d004df local]# ll
total 52
drwxr-xr-x 2 root root 4096 Apr 11  2018 bin
drwxr-xr-x 2 root root 4096 Apr 11  2018 etc
drwxr-xr-x 2 root root 4096 Apr 11  2018 games
drwxr-xr-x 1 1000 ftp  4096 Feb 12 01:21 hadoop-2.7.7
drwxr-xr-x 2 root root 4096 Apr 11  2018 include
drwxr-xr-x 7   10  143 4096 Jul  4  2019 jdk1.8.0_221
drwxr-xr-x 2 root root 4096 Apr 11  2018 lib
drwxr-xr-x 2 root root 4096 Apr 11  2018 lib64
drwxr-xr-x 2 root root 4096 Apr 11  2018 libexec
drwxr-xr-x 2 root root 4096 Apr 11  2018 sbin
drwxr-xr-x 5 root root 4096 Oct  1 01:15 share
drwxr-xr-x 2 root root 4096 Apr 11  2018 src
[root@78d601d004df local]# 

Check that the environment variables configured during the image build are in effect

[root@78d601d004df local]# java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
[root@78d601d004df local]# 
[root@78d601d004df local]# hadoop version
Hadoop 2.7.7
Subversion Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
Compiled by stevel on 2018-07-18T22:47Z
Compiled with protoc 2.5.0
From source with checksum 792e15d20b12c74bd6f19a1fb886490
This command was run using /usr/local/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.jar
[root@78d601d004df local]# 

Configure the hosts mapping file for Hadoop

[root@78d601d004df local]# vim /etc/hosts

Add a mapping for the container's local IP; otherwise Hadoop will fail to start later on

172.17.0.3 hadoop101
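Alternatively, you can append the entry without opening vim (note that edits to a container's /etc/hosts do not survive a container restart):

echo "172.17.0.3 hadoop101" >> /etc/hosts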

Check whether the mapping is successful

[root@78d601d004df local]# 
[root@78d601d004df local]# ping hadoop101
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.052 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2041ms
rtt min/avg/max/mdev = 0.033/0.046/0.055/0.012 ms
[root@78d601d004df local]# 

Configure the Hadoop configuration files

Enter this directory

[root@78d601d004df hadoop]# pwd
/usr/local/hadoop-2.7.7/etc/hadoop
[root@78d601d004df hadoop]# 

Edit the hdfs-site.xml file
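For example, assuming you are still in /usr/local/hadoop-2.7.7/etc/hadoop:

vim hdfs-site.xml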

It reads as follows:

<configuration>

    <!-- Specify the number of HDFS replicas; the default is 3, but there is only 1 node here -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Configure core-site.xml

vi core-site.xml

Add the following:

<configuration>
<!-- Specify the address of the NameNode in HDFS -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101:9000</value>
    </property>
<!-- Specify the storage directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.7.7/data/tmp</value>
    </property>
</configuration>
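The format step below should create the hadoop.tmp.dir path on its own, but you can pre-create it to be safe:

mkdir -p /usr/local/hadoop-2.7.7/data/tmp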

Start up

1. Format the NameNode

hdfs namenode -format

Pay attention before running the format command:
the hadoop101 in hdfs://hadoop101:9000 must be pingable, otherwise the command will hang here.

The format output should contain a message saying the storage directory has been successfully formatted; if yours shows the same, the format succeeded.

2. Start the NameNode and DataNode

hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
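If either daemon does not come up, the first place to look is its log under $HADOOP_HOME/logs; the files follow the hadoop-<user>-<daemon>-<hostname>.log naming pattern, for example:

tail -n 50 $HADOOP_HOME/logs/hadoop-root-namenode-*.log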

Run jps; if you see processes like the following, the daemons started successfully

[zhenghui@hadoop101 hadoop-2.7.7]$ jps
10582 NameNode
10726 Jps
10649 DataNode
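Before moving on, a quick smoke test of HDFS itself (the /user/root path here is just an illustrative choice):

hdfs dfs -mkdir -p /user/root
hdfs dfs -put /etc/hosts /user/root/
hdfs dfs -ls /user/root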

Test

Access http://ip:50070/ in a browser

For example, mine is: http://172.17.0.3:50070/
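If there is no browser handy on the host, a curl check from the host works too:

curl -s http://172.17.0.3:50070/ | head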

