Hadoop HA Mode: Step-by-Step Deployment

1. On the host (192.168.15.47), create the folder /data/zkdocker/bigdata/download/ha
2. Create the Hadoop HA configuration files in it: core-site.xml, hdfs-site.xml, hosts_ansible.ini, mapred-site.xml, yarn-site.xml. A sketch of the HA entries appears right after this step.
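    The post never shows these files, so here is a minimal sketch of the HA-related entries for core-site.xml and hdfs-site.xml, inferred from the later steps (nn1/nn2 on hadoop01/hadoop02, journalnode and zookeeper on slave1-slave3). The nameservice name "mycluster" is an assumption:

    cat > /data/zkdocker/bigdata/download/ha/core-site.xml <<'EOF'
    <configuration>
      <!-- default filesystem points at the HA nameservice, not a single host -->
      <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
      <!-- zookeeper quorum used by the failover controllers -->
      <property><name>ha.zookeeper.quorum</name><value>slave1:2181,slave2:2181,slave3:2181</value></property>
    </configuration>
    EOF

    cat > /data/zkdocker/bigdata/download/ha/hdfs-site.xml <<'EOF'
    <configuration>
      <property><name>dfs.nameservices</name><value>mycluster</value></property>
      <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
      <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>hadoop01:8020</value></property>
      <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>hadoop02:8020</value></property>
      <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>hadoop01:50070</value></property>
      <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>hadoop02:50070</value></property>
      <!-- shared edit log on the three journalnodes started in step 16 -->
      <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://slave1:8485;slave2:8485;slave3:8485/mycluster</value></property>
      <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
      <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
      <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hadoop/.ssh/id_rsa</value></property>
      <!-- enables the zkfc-driven failover tested in step 23 -->
      <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
    </configuration>
    EOF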
3. On the host (192.168.15.47), download the software packages into /data/zkdocker/bigdata/download:
   hadoop-2.7.7.tar.gz, jdk-8u201-linux-x64.tar.gz, zookeeper-3.4.14.tar.gz
4. Create 10 containers on the host from the centos7 image:
    docker run -it --name=hadoop01 --hostname=hadoop01 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=hadoop02 --hostname=hadoop02 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=hadoop03 --hostname=hadoop03 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=hadoop04 --hostname=hadoop04 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=hadoop05 --hostname=hadoop05 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=hadoop06 --hostname=hadoop06 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=hadoop07 --hostname=hadoop07 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=slave1 --hostname=slave1 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=slave2 --hostname=slave2 -v /data/zkdocker/bigdata:/tmp centos7
    docker run -it --name=slave3 --hostname=slave3 -v /data/zkdocker/bigdata:/tmp centos7
5. From the host, enter a container (repeat per container as needed): docker exec -it hadoop01 /bin/bash
6. Install ssh in each container (a quick check follows the commands):
    yum install which -y
    yum install openssl openssh-server openssh-clients
    mkdir  /var/run/sshd/
    sed -i "s/UsePAM.*/UsePAM no/g" /etc/ssh/sshd_config
    ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
    ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key
    ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key
    /usr/sbin/sshd -D &
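    A quick sanity check that sshd came up (a sketch):

    ps -ef | grep [s]shd    # the daemon should be listed
    ssh localhost           # a password prompt confirms sshd answers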
7. Create the hadoop user in each container:
    useradd hadoop
    passwd hadoop        # set the password to: hadoop
8. Configure /etc/hosts on all 10 containers; illustrative entries follow.
    vi /etc/hosts
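    A sketch of the entries, assuming the default docker bridge network; the addresses are illustrative, so check the real ones with: docker inspect -f '{{.NetworkSettings.IPAddress}}' hadoop01

    172.17.0.2   hadoop01
    172.17.0.3   hadoop02
    172.17.0.4   hadoop03
    172.17.0.5   hadoop04
    172.17.0.6   hadoop05
    172.17.0.7   hadoop06
    172.17.0.8   hadoop07
    172.17.0.9   slave1
    172.17.0.10  slave2
    172.17.0.11  slave3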
9. Switch to the hadoop account:
    su - hadoop
10. Set up passwordless ssh login:
    On each container run: ssh-keygen -t rsa
                           ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop01
    On hadoop01 and hadoop02 run: ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop01
                                  ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop02
    and so on for the remaining hosts (a loop that does this is sketched below)
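    To avoid running ssh-copy-id host by host, the following sketch (run as hadoop on each container; it still prompts once per password) distributes the key to every node:

    for h in hadoop01 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06 hadoop07 slave1 slave2 slave3; do
        ssh-copy-id -i ~/.ssh/id_rsa.pub "$h"
    done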
11. From the host, copy the XML configuration files out with ansible (an example inventory follows the commands):
    yum install -y ansible
    vi /etc/ansible/hosts
    ansible all -m ping
    su - hadoop
    ansible all -m copy -a "src=/data/zkdocker/bigdata/download/ha/core-site.xml dest=/home/hadoop/hadoop/etc/hadoop/ owner=hadoop mode=644"
    ansible all -m copy -a "src=/data/zkdocker/bigdata/download/ha/hdfs-site.xml dest=/home/hadoop/hadoop/etc/hadoop/ owner=hadoop mode=644"
    ansible all -m copy -a "src=/data/zkdocker/bigdata/download/ha/mapred-site.xml dest=/home/hadoop/hadoop/etc/hadoop/ owner=hadoop mode=644"
    ansible all -m copy -a "src=/data/zkdocker/bigdata/download/ha/yarn-site.xml dest=/home/hadoop/hadoop/etc/hadoop/ owner=hadoop mode=644"
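    A sketch of what /etc/ansible/hosts (or the hosts_ansible.ini from step 2) might contain, assuming the container hostnames resolve from the host; the group names are illustrative:

    [namenodes]
    hadoop01
    hadoop02

    [resourcemanagers]
    hadoop03
    hadoop04

    [datanodes]
    hadoop05
    hadoop06
    hadoop07

    [zookeeper]
    slave1
    slave2
    slave3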
12. In each container, under the hadoop account, install Java and Hadoop; on slave1-slave3 also install zookeeper (a zookeeper config sketch follows the commands):
    mkdir /home/hadoop/javajdk18
    cd /home/hadoop/javajdk18
    tar -zxf /tmp/download/jdk-8u201-linux-x64.tar.gz
    mv jdk1.8.0_201/ jdk
    cd ..
    tar -zxf /tmp/download/hadoop-2.7.7.tar.gz
    mv hadoop-2.7.7/ hadoop
    tar -zxf /tmp/download/zookeeper-3.4.14.tar.gz
    mv zookeeper-3.4.14/ zookeeper
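    The post does not show the zookeeper configuration, but zkServer.sh in step 15 needs conf/zoo.cfg and a per-node myid. A minimal sketch for slave1-slave3 (the dataDir path is an assumption):

    cat > /home/hadoop/zookeeper/conf/zoo.cfg <<'EOF'
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/home/hadoop/zookeeper/data
    clientPort=2181
    server.1=slave1:2888:3888
    server.2=slave2:2888:3888
    server.3=slave3:2888:3888
    EOF
    mkdir -p /home/hadoop/zookeeper/data
    echo 1 > /home/hadoop/zookeeper/data/myid    # write 2 on slave2, 3 on slave3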
13. Configure ~/.bashrc in each container:
    vi ~/.bashrc
    export JAVA_HOME=/home/hadoop/javajdk18/jdk
    export ZOOKEEPER_HOME=/home/hadoop/zookeeper
    export HADOOP_HOME=/home/hadoop/hadoop
    export PATH=$PATH:$ZOOKEEPER_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
14. Source the file so the variables take effect:
    source ~/.bashrc
15. Start zookeeper on slave1, slave2 and slave3:
    zkServer.sh start
    zkServer.sh status
    Verify success: two followers, one leader (sample status output below)
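    On each node the status output should resemble the following, with "Mode: leader" on exactly one of the three:

    ZooKeeper JMX enabled by default
    Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
    Mode: follower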
16. Start a journalnode on each of slave1, slave2 and slave3:
    hadoop-daemon.sh start journalnode
    ps ajx|grep java|awk '{print $11}'|cut -d _ -f 2
    Verify success: three journalnode processes
17. Format the namenode in the first container, hadoop01:
    hdfs namenode -format
    hadoop-daemon.sh start namenode
    Verify success: one namenode process
18. In the second container, hadoop02, synchronize the namenode data:
    hdfs namenode -bootstrapStandby
    hadoop-daemon.sh start namenode
    Verify success: two namenode processes
19. View the namenodes on the web:
    http://hadoop01:50070  --> standby state for now
    http://hadoop02:50070  --> standby state for now
20. In the hadoop01 container, manually switch nn1 to the active state:
    hdfs haadmin -transitionToActive nn1   --> refused while automatic failover is enabled; a forced switchover is needed
    hdfs haadmin -transitionToActive --forcemanual nn1
    Verify success: http://hadoop01:50070  --> active state
                    http://hadoop02:50070  --> still standby state

                    hdfs haadmin -getServiceState nn1  --> active
                    hdfs haadmin -getServiceState nn2  --> standby
21. Configure failover on the zookeeper nodes:
    hdfs zkfc -formatZK
    Run zkCli.sh on slave1 and check:
    ls /
    [zookeeper, hadoop-ha]
22. Start the cluster from the hadoop01 container:
    start-dfs.sh

    Processes now running in each container (a one-pass check from the host is sketched after the table):
    hadoop01    namenode zkfc
    hadoop02    namenode zkfc
    hadoop03
    hadoop04
    hadoop05    datanode
    hadoop06    datanode
    hadoop07    datanode
    slave1      journalnode zookeeper
    slave2      journalnode zookeeper
    slave3      journalnode zookeeper
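    One way to confirm this table from the host in a single pass (a sketch; it assumes jps lands on hadoop's PATH through the step-13 ~/.bashrc):

    for c in hadoop01 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06 hadoop07 slave1 slave2 slave3; do
        echo "== $c =="
        docker exec "$c" su - hadoop -c jps
    done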
     

23. Verify failover by killing nn1:
    In hadoop01, find the namenode PID with jps, then: kill -9 <namenode PID>
    hdfs haadmin -getServiceState nn1  --> fails, nn1 is down
    hdfs haadmin -getServiceState nn2  --> active
24. To bring nn1 back, synchronize from nn2 and start it again:
    hdfs namenode -bootstrapStandby
    hadoop-daemon.sh start namenode

25. Batch-build ssh mutual trust with ansible (not used here; effect not verified):
    Playbook: pushssh.yaml
---
  - hosts: all
    user: hadoop
    tasks:
     - name: ssh-copy
       authorized_key: user=hadoop key="{{ lookup('file', '/home/hadoop/.ssh/id_rsa.pub') }}"

    Run it with: ansible-playbook pushssh.yaml

26. Start YARN HA (a sketch of the assumed yarn-site.xml HA entries follows):
    In the hadoop03 container: start-yarn.sh
    In the hadoop04 container: start-yarn.sh
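    For reference, a minimal sketch of the HA entries the yarn-site.xml from step 2 presumably carries, matching rm1 on hadoop03 and rm2 on hadoop04 as checked in step 27; the cluster-id value is an assumption:

    cat > /data/zkdocker/bigdata/download/ha/yarn-site.xml <<'EOF'
    <configuration>
      <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
      <!-- the cluster-id value is assumed; any stable name works -->
      <property><name>yarn.resourcemanager.cluster-id</name><value>yarn-ha</value></property>
      <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
      <property><name>yarn.resourcemanager.hostname.rm1</name><value>hadoop03</value></property>
      <property><name>yarn.resourcemanager.hostname.rm2</name><value>hadoop04</value></property>
      <!-- same zookeeper quorum as HDFS HA -->
      <property><name>yarn.resourcemanager.zk-address</name><value>slave1:2181,slave2:2181,slave3:2181</value></property>
    </configuration>
    EOF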
27. Check the resourcemanager states:
    bin/yarn rmadmin -getServiceState rm1   --> active
    bin/yarn rmadmin -getServiceState rm2   --> standby

    Kill the rm1 process in the hadoop03 container:
    bin/yarn rmadmin -getServiceState rm1   --> fails, rm1 is down
    bin/yarn rmadmin -getServiceState rm2   --> active

    Restart rm1:
    In the hadoop03 container: sbin/yarn-daemon.sh start resourcemanager
    bin/yarn rmadmin -getServiceState rm1   --> standby
    bin/yarn rmadmin -getServiceState rm2   --> active

 

Source: www.cnblogs.com/zhangkaipc/p/11858012.html