Big data cluster construction (zookeeper, high-availability hadoop, high-availability hbase)


1. Preparation

  1. Architecture
    (architecture diagram)
  2. Passwordless SSH login between the three nodes
# 1. Enter the ssh directory
cd ~/.ssh/
# If you get a "No such file or directory" error, run "ssh localhost" first, then enter it
# 2. Generate the key pair
ssh-keygen -t rsa
# 3. Copy the key to each IP: first to the node itself, then to the other two (9 runs in total across the three nodes)
ssh-copy-id 192.168.80.128
ssh-copy-id 192.168.80.129
ssh-copy-id 192.168.80.130
...
# 4. Test that passwordless login works
ssh 192.168.80.129
ssh 192.168.80.130
ssh 192.168.80.128

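The "9 runs" mentioned above come from every node trusting every node, including itself. A minimal sketch that enumerates the required ssh-copy-id runs (the IPs are the three cluster nodes from the architecture):

```shell
# Enumerate the ssh-copy-id runs needed for full mutual trust:
# each of the three nodes copies its key to all three IPs (9 runs total).
nodes="192.168.80.128 192.168.80.129 192.168.80.130"
for src in $nodes; do
  for dst in $nodes; do
    echo "on $src: run ssh-copy-id $dst"
  done
done
```

The actual ssh-copy-id commands must be run interactively on each node, since each prompts for the target's password once.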
  3. Turn off the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service
  4. Configure the domain names in the hosts file on the zhiyou001 host (everything is installed on zhiyou001 and finally copied to the other two nodes)
vi /etc/hosts
# Change the contents to the following
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# Comment out or delete the two lines above
192.168.80.128 zhiyou001
192.168.80.129 zhiyou002
192.168.80.130 zhiyou003
# Copy the file to the other two nodes
scp /etc/hosts [email protected]:/etc/hosts
scp /etc/hosts [email protected]:/etc/hosts
  5. Install the JDK
  • A tutorial for this already exists, so it is not repeated here
  • See: Server deployment - "jdk"
  • Copy the installation files and environment variables to zhiyou002 and zhiyou003
# Copy the JDK
scp -r /opt/java/ root@zhiyou002:/opt/java/
scp -r /opt/java/ root@zhiyou003:/opt/java/
# Copy the profile
scp /etc/profile root@zhiyou002:/etc/profile
scp /etc/profile root@zhiyou003:/etc/profile
# Reload the profile; you can use your terminal's "send input to all sessions" feature to reload on all nodes at once
source /etc/profile
# Check that the configuration works (on all three nodes)
java
javac
java -version

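The environment variables copied via /etc/profile above typically look like the following. The exact directory name under /opt/java/ is an assumption here (a placeholder for the unpacked JDK); adjust it to match your install:

```shell
# Assumed JDK environment variables in /etc/profile.
# jdk1.8.0_xx is a placeholder for the actual unpacked JDK directory.
export JAVA_HOME=/opt/java/jdk1.8.0_xx
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib
```

After editing, `source /etc/profile` on each node makes the variables take effect in the current shell.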

2. Installation guides for each component. They are too long for one post, so each is published separately.

1. Building a big data cluster - "zookeeper"

2. Building a big data cluster - "High-availability hadoop"

3. Building a big data cluster - "High-availability HBase"



Origin blog.csdn.net/qq_39231769/article/details/102750507