High-Availability Hadoop — Installing Zookeeper

Configuration Changes

  1. Generate a public key on the standby NameNode and distribute it

    ssh-keygen -t dsa
    cd .ssh
    # append to this node's own authorized_keys
    cat id_dsa.pub >> authorized_keys
    # copy to the primary NameNode (node01)
    scp id_dsa.pub node01:`pwd`/node02.pub
    # then, on node01, append it: cat node02.pub >> authorized_keys
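The exchange above boils down to appending the standby node's public key to the primary node's authorized_keys. A minimal local simulation of that append pattern, using temp files in place of the real `~/.ssh` contents and a placeholder key line:

```shell
# Simulated key exchange: temp files stand in for ~/.ssh on the two nodes
tmp=$(mktemp -d)
echo "ssh-dss AAAAB3fake standby-key" > "$tmp/standby.pub"  # placeholder public key
touch "$tmp/authorized_keys"
# the step that must happen on the primary after the scp:
cat "$tmp/standby.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"
grep -c "standby-key" "$tmp/authorized_keys"   # → 1
```

The `chmod 600` matters on a real cluster: sshd refuses authorized_keys files that are group- or world-writable.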
    
  2. Edit the configuration files

    # hdfs-site.xml should contain the following
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>

        <property>
            <name>dfs.nameservices</name>
            <value>mycluster</value>
        </property>

        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>

        <property>
            <name>dfs.ha.namenodes.mycluster</name>
            <value>nn1,nn2</value>
        </property>

        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn1</name>
            <value>node01:9820</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn2</name>
            <value>node02:9820</value>
        </property>

        <property>
            <name>dfs.namenode.http-address.mycluster.nn1</name>
            <value>node01:9870</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.mycluster.nn2</name>
            <value>node02:9870</value>
        </property>

        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
        </property>

        <property>
            <name>dfs.client.failover.proxy.provider.mycluster</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>

        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/root/.ssh/id_dsa</value>
        </property>

        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/var/ihep/hadoop/ha/journalnode</value>
        </property>

        <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
        </property>
    </configuration>
    
    # core-site.xml should contain the following
    <configuration>
    	<property>
    		<name>fs.defaultFS</name>
    		<value>hdfs://mycluster</value>
    	</property>
    	<property>
    		<name>hadoop.tmp.dir</name>
    		<value>/var/ihep/hadoop/ha</value>
    	</property>
    	<property>
    		<name>ha.zookeeper.quorum</name>
    		<value>node02:2181,node03:2181,node04:2181</value>
    	</property>
    </configuration>
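A quick way to catch copy-paste damage in these files before distributing them is to check that the `<property>` open and close tags balance. A small sanity-check sketch (it builds a sample file here; point `f` at your real hdfs-site.xml or core-site.xml instead):

```shell
# Sanity check: <property> open/close tags should balance in a *-site.xml file
f=$(mktemp)
cat > "$f" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
</configuration>
EOF
opens=$(grep -c '<property>' "$f")
closes=$(grep -c '</property>' "$f")
[ "$opens" -eq "$closes" ] && echo balanced || echo mismatch   # prints "balanced"
```

This only catches tag-count mismatches, not misspelled property names; Hadoop silently ignores unknown keys, which is why the `dfs.namenode.*` names above are worth double-checking by eye.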
    
  3. Distribute the configuration files

    scp core-site.xml hdfs-site.xml node04:`pwd`
    scp core-site.xml hdfs-site.xml node03:`pwd`
    scp core-site.xml hdfs-site.xml node02:`pwd`
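The three scp lines can be collapsed into a loop. A dry-run sketch — `echo` prints the commands instead of executing them, since the node0x hostnames are specific to this cluster:

```shell
# Dry run: print the distribution command for each target node
hosts="node02 node03 node04"
for host in $hosts; do
  echo scp core-site.xml hdfs-site.xml "$host:$PWD"
done
```

Drop the `echo` to actually copy the files; this relies on the passwordless SSH set up in step 1.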
    

Automatic Failover — Installing Zookeeper

  1. Install Zookeeper on the second node (node02)

    # note: use the -bin (binary) release tarball
    tar xf apache-zookeeper-3.5.5-bin.tar.gz -C /opt/ihep
    # the -bin tarball extracts to apache-zookeeper-3.5.5-bin; rename it to match the commands below
    mv /opt/ihep/apache-zookeeper-3.5.5-bin /opt/ihep/apache-zookeeper-3.5.5
    # go to the conf directory under the Zookeeper install
    cp zoo_sample.cfg zoo.cfg
    vi zoo.cfg
    # set dataDir=/var/ihep/zookeeper, and create that directory
    mkdir /var/ihep/zookeeper
    # append the server list
    server.1=node02:2888:3888
    server.2=node03:2888:3888
    server.3=node04:2888:3888
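The zoo.cfg edits can also be scripted rather than done in vi. A sketch that appends the settings to a temp copy — the dataDir path and hostnames are the ones assumed throughout this guide; adjust for your cluster:

```shell
# Append the quorum settings to a (temp) copy of zoo.cfg
cfg=$(mktemp)
cat >> "$cfg" <<'EOF'
dataDir=/var/ihep/zookeeper
server.1=node02:2888:3888
server.2=node03:2888:3888
server.3=node04:2888:3888
EOF
grep -c '^server\.' "$cfg"   # → 3
```

The `server.N=host:peerPort:electionPort` lines must be identical on all three nodes; only the myid file (next steps) differs per node.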
    
  2. Distribute Zookeeper

    scp -r apache-zookeeper-3.5.5/ node03:`pwd`
    scp -r apache-zookeeper-3.5.5/ node04:`pwd`
    
  3. Set the id on nodes 2, 3, and 4 — run one command on each node, respectively

    echo 1 > /var/ihep/zookeeper/myid
    echo 2 > /var/ihep/zookeeper/myid
    echo 3 > /var/ihep/zookeeper/myid
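Since the three echo commands differ only in the number, the node-to-id mapping can be sketched as a loop. This version writes under a temp dir purely for illustration; on the real cluster each echo runs on its own node against /var/ihep/zookeeper/myid:

```shell
# Illustrative only: node -> myid mapping, written under a temp dir
base=$(mktemp -d)
i=1
for host in node02 node03 node04; do
  mkdir -p "$base/$host"
  echo "$i" > "$base/$host/myid"   # real cluster: run `echo N > /var/ihep/zookeeper/myid` on each node
  i=$((i + 1))
done
cat "$base/node03/myid"   # → 2
```

The number in myid must match the `server.N=` line for that host in zoo.cfg, or the node will fail to join the quorum.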
    
  4. Edit environment variables on the primary node

    vi /etc/profile
    # add: export ZOOKEEPER_HOME=/opt/ihep/apache-zookeeper-3.5.5
    # export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
    
  5. Distribute from the primary node

    scp /etc/profile node02:/etc/
    scp /etc/profile node03:/etc/
    scp /etc/profile node04:/etc/
    # run on each node
    . /etc/profile
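After sourcing /etc/profile on each node, you can confirm that the Zookeeper bin directory actually landed on PATH. A small check, using the install path assumed in step 4:

```shell
# Check that the Zookeeper bin dir ends up on PATH (install path assumed from step 4)
export ZOOKEEPER_HOME=/opt/ihep/apache-zookeeper-3.5.5
export PATH="$PATH:$ZOOKEEPER_HOME/bin"
case ":$PATH:" in
  *":$ZOOKEEPER_HOME/bin:"*) echo "zookeeper on PATH" ;;
  *) echo "zookeeper missing from PATH" ;;
esac
```

If this prints the missing branch on a node, re-check that /etc/profile was both copied and sourced there; `which zkServer.sh` is the equivalent one-liner once the install exists.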
    
  6. Start Zookeeper and the JournalNodes

    # on nodes 2, 3, 4: start Zookeeper and check that it comes up
    zkServer.sh start
    # start the JournalNodes on nodes 1, 2, 3
    hdfs --daemon start journalnode
    
  7. Format the NameNode

    # on the primary node
    hdfs namenode -format
    
  8. Sync node01's metadata to node02

    # on the primary node
    hdfs --daemon start namenode
    # on the standby node
    hdfs namenode -bootstrapStandby
    
  9. Register with Zookeeper

    hdfs zkfc -formatZK
    # start the cluster
    start-dfs.sh
    

Reprinted from blog.csdn.net/weixin_44048823/article/details/99707190