Dynamically Adding Nodes to a Hadoop Cluster

Contents

I. Change the hostname

II. Update the network configuration

III. Edit the hosts file

IV. Configure passwordless SSH login

V. Update the ZooKeeper configuration

VI. Update the Hadoop configuration

VII. Update the HBase configuration

VIII. Update the Spark configuration


I. Change the hostname

hostnamectl set-hostname hadoopxx
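
A quick check that the new name took effect (hostnamectl also writes it to /etc/hostname on systemd systems):

hostnamectl status
cat /etc/hostname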

II. Update the network configuration

1. Generate a UUID

The UUID uniquely identifies the network connection and must not duplicate the one on the original host (a cloned VM carries the old host's UUID):

uuidgen

2. Edit the /etc/sysconfig/network-scripts/ifcfg-ens33 file

ifconfig
cat /etc/sysconfig/network-scripts/ifcfg-ens33
cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens33.template
vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="f18dea2a-c2de-489d-b172-d52c385bbbf6"
DEVICE="ens33"
ONBOOT="yes"

IPADDR="192.168.x.xxx"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DNS="192.168.0.1"
NM_CONTROLLED="no"
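
If you would rather not paste the new UUID in by hand, a small sketch that injects the output of uuidgen directly into the file (same ifcfg path as above):

NEW_UUID=$(uuidgen)
sed -i "s/^UUID=.*/UUID=\"${NEW_UUID}\"/" /etc/sysconfig/network-scripts/ifcfg-ens33
grep ^UUID /etc/sysconfig/network-scripts/ifcfg-ens33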

3. Stop the NetworkManager service

systemctl stop NetworkManager.service
systemctl disable NetworkManager.service

4. Restart the network service

systemctl restart network
ifconfig
ping hao123.com

III. Edit the hosts file

cat /etc/hosts

The IPs and hostnames of the nodes to add:

192.168.0.133 hadoop4
192.168.0.134 hadoop5

echo "192.168.0.133 hadoop4
192.168.0.134 hadoop5" >> /etc/hosts
cat /etc/hosts
reboot
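
Every node in the cluster needs the same hosts entries, not just the new ones. One way to push the updated file out to the existing nodes (hypothetical host list; relies on the passwordless SSH configured in the next section):

for h in hadoop1 hadoop2 hadoop3; do
  scp /etc/hosts root@${h}:/etc/hosts
done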

IV. Configure passwordless SSH login

Big data basics: passwordless SSH login: https://blog.csdn.net/qq262593421/article/details/105325593

Notes:

1. Because this node was cloned from an existing one, the old SSH key pair is still in place; just overwrite it when regenerating the keys.

2. The old passwordless login no longer works, so empty the /root/.ssh/known_hosts and /root/.ssh/authorized_keys files and reconfigure from scratch:

cat /root/.ssh/known_hosts
> /root/.ssh/known_hosts
cat /root/.ssh/known_hosts

cat /root/.ssh/authorized_keys
> /root/.ssh/authorized_keys
cat /root/.ssh/authorized_keys
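
After clearing the two files, regenerate the key pair and redistribute it. A minimal sketch, assuming the root account and a five-node cluster (see the linked post for the full walkthrough):

# overwrite the stale key pair left over from the cloned image
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
# push the public key to every node, including this one
for h in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@${h}
done
# verify: this should log in without a password prompt
ssh hadoop4 hostname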

V. Update the ZooKeeper configuration

1. Edit the zoo.cfg file

cd $ZOO_HOME/conf
cat $ZOO_HOME/conf/zoo.cfg
echo "server.4=hadoop4:2888:3888
tail -n 10 $ZOO_HOME/conf/zoo.cfg

2. Set the ZooKeeper myid

cat $ZOO_HOME/data/myid
# n is the ZooKeeper myid; keep incrementing it for each new node (4 and 5 here)
echo "n" > $ZOO_HOME/data/myid
cat $ZOO_HOME/data/myid
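
The new server.n lines must also be added to zoo.cfg on the existing nodes; ZooKeeper releases before 3.5 have no dynamic reconfiguration, so those servers need a rolling restart. Then bring ZooKeeper up on each new node and check its role:

$ZOO_HOME/bin/zkServer.sh start
$ZOO_HOME/bin/zkServer.sh status
# status should report Mode: follower once the node has joined the ensemble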

VI. Update the Hadoop configuration

cd $HADOOP_HOME/etc/hadoop
cat $HADOOP_HOME/etc/hadoop/workers
# append a newline first if the file does not end with one
echo "" >>$HADOOP_HOME/etc/hadoop/workers
echo "hadoop4
hadoop5" >> $HADOOP_HOME/etc/hadoop/workers
cat $HADOOP_HOME/etc/hadoop/workers

Run on the NameNode:

# refresh the node list on the NameNode
hdfs dfsadmin -refreshNodes
# view the cluster node report
hdfs dfsadmin -report
vim $HADOOP_HOME/etc/hadoop/core-site.xml

Configure the following property in core-site.xml if you want to add ZKFC (ZKFailoverController):

	<property>
		<name>ha.zookeeper.quorum</name>
		<value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
		<description>DFSZKFailoverController</description>
	</property>
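
Note that refreshNodes only registers DataNodes that are actually running, so start the daemons on each new node first. A sketch assuming Hadoop 3.x, which the workers file implies:

# run on hadoop4 and hadoop5
hdfs --daemon start datanode
yarn --daemon start nodemanager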

VII. Update the HBase configuration

cd $HBASE_HOME/conf
cat $HBASE_HOME/conf/regionservers
echo "hadoop4
hadoop5" >> $HBASE_HOME/conf/regionservers
cat $HBASE_HOME/conf/regionservers
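
Then start a RegionServer on each new node so the HMaster can register it:

# run on hadoop4 and hadoop5
$HBASE_HOME/bin/hbase-daemon.sh start regionserver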

VIII. Update the Spark configuration

1. Configure the worker nodes

cd $SPARK_HOME/conf
cat $SPARK_HOME/conf/slaves
echo "hadoop4
hadoop5" >> $SPARK_HOME/conf/slaves
cat $SPARK_HOME/conf/slaves

2. Configure Spark high availability

vim $SPARK_HOME/conf/spark-env.sh
# comment out the fixed master address when enabling HA
# export SPARK_MASTER_IP=hadoop1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181 -Dspark.deploy.zookeeper.dir=/spark"
tail -n 20 $SPARK_HOME/conf/spark-env.sh
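
Finally, bring up a worker on each new node and point it at the master. start-slave.sh is the script name in Spark releases that still use a slaves file (newer versions renamed it start-worker.sh); the URL below assumes the default master port 7077:

# run on hadoop4 and hadoop5
$SPARK_HOME/sbin/start-slave.sh spark://hadoop1:7077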

Reposted from blog.csdn.net/qq262593421/article/details/106252066