Flink Installation and High Availability

Local Mode

In local mode, the JobManager and TaskManager share a single JVM to run the workload.

Local mode is the most convenient way to verify a simple application.

In practice, Standalone or YARN cluster mode is used in most cases; local mode only requires unpacking the distribution and running bin/start-local.sh.

Standalone Mode (three nodes)

Download the package from http://flink.apache.org/downloads.html, choosing the Flink build that matches your Hadoop version.

Unpack it: tar -zxvf flink-1.7.2-bin-hadoop27-scala_2.11.tgz

Rename the directory: mv flink-1.7.2 flink

Edit the configuration files under conf/:

1. conf/masters

2. conf/slaves

3. conf/flink-conf.yaml

jobmanager.rpc.address: node7-1
taskmanager.numberOfTaskSlots: 2
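As a sketch, the three config files above could look like the following for this three-node layout. The demo writes into a local conf-demo scratch directory rather than a real conf/; listing all three nodes in slaves is an assumption consistent with the HA section later in this guide.

```shell
# Hypothetical demo: write the three standalone config files into a
# scratch directory (a real setup edits flink/conf/ in place).
CONF=conf-demo
mkdir -p "$CONF"

# conf/masters: the host (and web UI port) that runs the JobManager
printf 'node7-1:8081\n' > "$CONF/masters"

# conf/slaves: one TaskManager is started per listed host
printf 'node7-1\nnode7-2\nnode7-3\n' > "$CONF/slaves"

# conf/flink-conf.yaml: point TaskManagers at the JobManager, 2 slots each
printf 'jobmanager.rpc.address: node7-1\ntaskmanager.numberOfTaskSlots: 2\n' \
  > "$CONF/flink-conf.yaml"

cat "$CONF/slaves"
```

With 3 TaskManagers at 2 slots each, the cluster exposes 6 task slots in total.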

Copy the Flink directory to the other nodes:

scp -r flink/ node7-2:`pwd`

scp -r flink/ node7-3:`pwd`

Start the cluster:

bin/start-cluster.sh

Check the web UI at:

http://node7-1:8081

HA Setup

Edit conf/flink-conf.yaml:

# In HA mode the active JobManager is chosen by leader election,
# so the fixed RPC address is commented out:
# jobmanager.rpc.address: node7-1

jobmanager.rpc.port: 6123

jobmanager.heap.size: 1024m

taskmanager.heap.size: 1024m

taskmanager.numberOfTaskSlots: 2

parallelism.default: 1

#================================================================
# High Availability
#================================================================
# Enable high availability through ZooKeeper (required)
high-availability: zookeeper

# JobManager metadata is persisted in the file system under storageDir;
# only a pointer to this state is stored in ZooKeeper (required)
high-availability.storageDir: hdfs://jh/flink/ha/

# The ZooKeeper quorum peers (required)
high-availability.zookeeper.quorum: node7-1:2181,node7-2:2181,node7-3:2181

# Root ZooKeeper node, under which all cluster nodes are placed (recommended)
high-availability.zookeeper.path.root: /flink

# ZooKeeper node under which this cluster's coordination data is placed (recommended)
high-availability.cluster-id: /flinkCluster

#================================================================
# Fault tolerance and checkpointing
#================================================================
state.backend: filesystem

state.checkpoints.dir: hdfs://jh/flink/checkpoints

state.savepoints.dir: hdfs://jh/flink/savepoints

Edit conf/slaves:

node7-1
node7-2
node7-3

Edit conf/masters:

node7-1:8081
node7-2:8081

Edit conf/zoo.cfg:

# ZooKeeper quorum peers
server.1=node7-1:2888:3888
server.2=node7-2:2888:3888
server.3=node7-3:2888:3888
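If you run your own ZooKeeper ensemble instead of the one Flink can launch with bin/start-zookeeper-quorum.sh, each server additionally needs a myid file in its dataDir whose content matches its server.N number in zoo.cfg. A minimal sketch, using local scratch directories in place of the real dataDirs:

```shell
# Hypothetical demo: create one myid file per ensemble member,
# as a standalone ZooKeeper quorum requires.
for id in 1 2 3; do
  dir="zk-demo/node7-$id"
  mkdir -p "$dir"
  echo "$id" > "$dir/myid"   # must match server.N in zoo.cfg
done
cat zk-demo/node7-2/myid
```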

Integrating Flink 1.7.2 with Hadoop

  • Edit bin/config.sh directly:
# YARN Configuration Directory, if necessary
DEFAULT_YARN_CONF_DIR="/data/hadoop/hadoop/share/hadoop/yarn"
# Hadoop Configuration Directory, if necessary
DEFAULT_HADOOP_CONF_DIR="/data/hadoop/hadoop/etc/hadoop"
  • Add the Hadoop dependency:

flink-shaded-hadoop-2-uber-2.8.3-10.0.jar

Download this jar and place it in the flink/lib directory.
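Flink puts every jar found in its lib/ directory on the classpath at startup, so the shaded Hadoop jar only has to be present there. A small sanity-check sketch, using a scratch lib-demo directory and an empty placeholder file instead of the real download:

```shell
# Hypothetical demo: simulate dropping the uber jar into flink/lib
# and verify it is visible there before starting the cluster.
LIB=lib-demo
mkdir -p "$LIB"
touch "$LIB/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar"   # stands in for the download
ls "$LIB" | grep 'flink-shaded-hadoop'
```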

Start the cluster:

bin/start-cluster.sh

Check the web UI; at this point one of the masters has automatically been elected leader:

http://node7-1:8081

Verification

Manually kill the master process on node7-1; the standby master on node7-2 then takes over as the active master.

Reposted from blog.csdn.net/Poolweet_/article/details/108671067