Flink installation: local, standalone, and high-availability (HA) modes

Local mode

In local mode, the JobManager and the TaskManager share a single JVM to execute the workload.

Local mode is the most convenient way to verify a simple application.

Most practical deployments use Standalone or YARN cluster mode; for local mode you only need to unzip the installation package and start it (bin/start-local.sh).

Standalone mode (three nodes)

Installation package download address: http://flink.apache.org/downloads.html. Select the Flink build that matches your Hadoop version.

Unpack the archive: tar -zxvf flink-1.7.2-bin-hadoop27-scala_2.11.tgz

Rename the directory: mv flink-1.7.2 flink

Modify the configuration file in the conf directory:

1.conf/masters

2.conf/slaves

3.conf/flink-conf.yaml

jobmanager.rpc.address: node7-1
taskmanager.numberOfTaskSlots: 2
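For this three-node example, the two host-list files might look like the following (an assumption based on the node names used later in this post: node7-1 runs the JobManager, and all three nodes run TaskManagers):

```
# conf/masters
node7-1:8081

# conf/slaves
node7-1
node7-2
node7-3
```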

Copy the installation directory to the other nodes:

 scp -r flink/ node7-2:`pwd`

 scp -r flink/ node7-3:`pwd`

Start Flink:

bin/start-cluster.sh

View the web UI:

http://node7-1:8081

High availability (HA)

Modify the configuration file conf/flink-conf.yaml

# jobmanager.rpc.address: node7-1

jobmanager.rpc.port: 6123

jobmanager.heap.size: 1024m

taskmanager.heap.size: 1024m

taskmanager.numberOfTaskSlots: 2

parallelism.default: 1

#================================================================
# High Availability
#================================================================
# Specify the high-availability mode (required)
high-availability: zookeeper

# JobManager metadata is persisted in the file system storageDir; only a pointer
# to this state is stored in ZooKeeper (required)
high-availability.storageDir: hdfs://jh/flink/ha/

# ZooKeeper quorum peer addresses (required)
high-availability.zookeeper.quorum: node7-1:2181,node7-2:2181,node7-3:2181

# Root ZooKeeper node, under which all cluster nodes are placed (recommended)
high-availability.zookeeper.path.root: /flink

# Unique id of this Flink cluster under the root node (recommended)
high-availability.cluster-id: /flinkCluster

#================================================================
# Fault tolerance and checkpointing
#================================================================
state.backend: filesystem

state.checkpoints.dir: hdfs://jh/flink/checkpoints

state.savepoints.dir: hdfs://jh/flink/savepoints

Modify the configuration file conf/slaves

node7-1
node7-2
node7-3

Modify the configuration file conf/masters

node7-1:8081
node7-2:8081

Modify the configuration file conf/zoo.cfg

# ZooKeeper quorum peers
server.1=node7-1:2888:3888
server.2=node7-2:2888:3888
server.3=node7-3:2888:3888
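If you use the ZooKeeper quorum shipped with Flink, each peer also needs a myid file whose value matches that node's server.N entry in zoo.cfg. A minimal sketch (dataDir=/tmp/zookeeper is an assumption matching the default in Flink's conf/zoo.cfg; verify against yours):

```shell
# Run on each node; the id must match that node's server.N line in zoo.cfg.
mkdir -p /tmp/zookeeper
echo 1 > /tmp/zookeeper/myid   # 1 on node7-1, 2 on node7-2, 3 on node7-3
```

The bundled quorum can then be started with bin/start-zookeeper-quorum.sh before bin/start-cluster.sh.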

Integrating flink-1.7.2 with Hadoop

  • Modify the configuration file bin/config.sh directly:
# YARN Configuration Directory, if necessary
DEFAULT_YARN_CONF_DIR="/data/hadoop/hadoop/share/hadoop/yarn"
# Hadoop Configuration Directory, if necessary
DEFAULT_HADOOP_CONF_DIR="/data/hadoop/hadoop/etc/hadoop"
  • Add the Hadoop dependency:

flink-shaded-hadoop-2-uber-2.8.3-10.0.jar

Download the jar package into the flink/lib directory.
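As an aside not covered by this post: on newer Flink releases (1.8+), an alternative to copying the shaded uber jar into flink/lib is to expose the Hadoop classpath in the environment before starting Flink (assumes the hadoop CLI is on the PATH):

```shell
# Alternative for newer Flink releases: let Flink pick up Hadoop from the
# environment instead of a shaded jar in flink/lib.
export HADOOP_CLASSPATH=$(hadoop classpath)
```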

Start the cluster

bin/start-cluster.sh

View the web UI; a leader (master) is elected automatically:

http://node7-1:8081

Verification

Manually kill the master (the active JobManager) on node7-1; the standby master on node7-2 then takes over as leader.
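One way to simulate the failure (a sketch; StandaloneSessionClusterEntrypoint is the class name the Flink 1.7 standalone JobManager runs under):

```shell
# On node7-1: find and kill the active JobManager process, if one is running.
pid=$(jps 2>/dev/null | grep StandaloneSessionClusterEntrypoint | awk '{print $1}')
if [ -n "$pid" ]; then
  kill -9 "$pid"
fi
# Then reload http://node7-2:8081 and confirm it now reports itself as leader.
```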

 


Origin blog.csdn.net/Poolweet_/article/details/108671067