Getting Started with Dinky

1. Deploy the Flink Cluster

1.1 flink-conf.yaml

cat > flink-conf.yaml << EOF 
jobmanager.rpc.address: boshi-146
jobmanager.rpc.port: 6123
jobmanager.bind-host: 0.0.0.0
jobmanager.memory.process.size: 1600m
taskmanager.bind-host: 0.0.0.0
# Change this to the local machine's IP/hostname on each TaskManager
taskmanager.host: 0.0.0.0
taskmanager.memory.process.size: 1728m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
jobmanager.execution.failover-strategy: region
rest.address: boshi-146
rest.bind-address: 0.0.0.0
classloader.check-leaked-classloader: false

EOF

1.2 Configure the workers File

cat > workers << EOF 
boshi-107
boshi-124
boshi-131
boshi-139
EOF

1.3 Configure the masters File

cat > masters << EOF 
boshi-146:8081
EOF

1.4 Distribute the Files

ansible cluster -m file -a "path=/data/app state=directory"
ansible cluster -m copy -a "src=/data/app/flink-1.17.1 dest=/data/app/ owner=root group=root mode=0755"

1.5 Set taskmanager.host on Each Worker

On every worker node, edit conf/flink-conf.yaml and point taskmanager.host at that node's own hostname:

taskmanager.host: <worker-hostname>
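The per-worker edit can be scripted with `sed`; the sketch below demonstrates it on a temporary local copy of the file (the hostname `boshi-107` stands in for whichever worker is being patched — in practice you would run the `sed` over ssh or an ansible shell task on each worker):

```shell
# Create a throwaway copy of the relevant config lines to demonstrate on.
conf=$(mktemp)
cat > "$conf" << 'EOF'
taskmanager.bind-host: 0.0.0.0
taskmanager.host: 0.0.0.0
EOF

# On each worker, substitute that worker's own hostname.
host=boshi-107
sed -i "s/^taskmanager.host:.*/taskmanager.host: ${host}/" "$conf"

# Show the patched line.
grep '^taskmanager.host' "$conf"
```

The same one-line `sed` works in place on /data/app/flink-1.17.1/conf/flink-conf.yaml on each worker.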

1.6 Start the Standalone Cluster

bin/start-cluster.sh

1.7 Access the Standalone Cluster

http://boshi-146:8081

1.8 Yarn Session Cluster (HDP 3.1)

# Relax HDFS permission checks in the HDFS configuration
dfs.permissions=false

# Start the Yarn session cluster
export HADOOP_CLASSPATH=`hadoop classpath`
/data/app/flink-1.17.1/bin/yarn-session.sh -d

2. Deploy Dinky

2.1 Create the MySQL Database and Import the Schema

CREATE DATABASE dlink;
CREATE USER 'dlink'@'%' IDENTIFIED WITH mysql_native_password BY 'Dlink*2023';
GRANT ALL PRIVILEGES ON dlink.* TO 'dlink'@'%';
FLUSH PRIVILEGES;

mysql -udlink -p'Dlink*2023'

use dlink;

source /data/app/dlink/sql/dinky.sql;

2.2 Load the Flink Dependencies

cp /data/app/flink-1.17.1/lib/* /data/app/dlink/plugins/flink1.17/

2.3 Load the Hadoop Dependency

cp flink-shaded-hadoop-3-uber-3.1.1.7.2.9.0-173-9.0.jar /data/app/dlink/plugins/

2.4 Upload the JARs

# Create the HDFS directory and upload the Dinky application jar
sudo -u hdfs hdfs dfs -mkdir -p /dlink/jar/
sudo -u hdfs hdfs dfs -put /data/app/dlink/jar/dlink-app-1.17-0.7.3-jar-with-dependencies.jar /dlink/jar

# Create the HDFS directory and upload the Flink jars
sudo -u hdfs hadoop fs -mkdir /dlink/flink-dist-17
sudo -u hdfs hadoop fs -put /data/app/flink-1.17.1/lib /dlink/flink-dist-17
sudo -u hdfs hadoop fs -put /data/app/flink-1.17.1/plugins /dlink/flink-dist-17

2.5 Edit the Configuration

vi ./config/application.yml

spring:
  datasource:
    url: jdbc:mysql://${MYSQL_ADDR:boshi-146:3306}/${MYSQL_DATABASE:dinky}?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&useSSL=false&zeroDateTimeBehavior=convertToNull&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
    username: ${MYSQL_USERNAME:dinky}
    password: ${MYSQL_PASSWORD:dinky}
    driver-class-name: com.mysql.cj.jdbc.Driver
  application:
    name: dinky
  mvc:
    pathmatch:
      matching-strategy: ant_path_matcher
    format:
      date: yyyy-MM-dd HH:mm:ss
  # global JSON formatting settings
  jackson:
    time-zone: GMT+8
    date-format: yyyy-MM-dd HH:mm:ss
  main:
    allow-circular-references: true
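The `${VAR:default}` tokens in the datasource settings are Spring property placeholders: the named environment variable wins if set, and the text after the first colon is the fallback. The shell's own `${VAR:-default}` expansion behaves the same way and makes a quick illustration (the `127.0.0.1:3306` override below is a hypothetical value, not part of the deployment):

```shell
# With MYSQL_ADDR unset, the fallback after the first colon is used.
unset MYSQL_ADDR
echo "addr=${MYSQL_ADDR:-boshi-146:3306}"

# With MYSQL_ADDR set, the environment variable takes precedence.
MYSQL_ADDR=127.0.0.1:3306   # hypothetical override
echo "addr=${MYSQL_ADDR:-boshi-146:3306}"
```

So the same application.yml can point at a different MySQL instance just by exporting MYSQL_ADDR (and MYSQL_USERNAME / MYSQL_PASSWORD) before starting Dinky.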

2.6 Start and Log In to Dinky

cd /data/app/dlink
sh auto.sh start 1.17

http://boshi-146:8888
Default login: admin / admin

2.7 Flink Settings

2.7.1 Configuration Center

2.7.2 Flink Instance Management

1. Standalone

2. Yarn Session

3. Registration Center

2.7.3 Cluster Configuration Management

3. Run Dinky Locally

3.1 Set Up the Environment

http://www.dlink.top/docs/next/developer_guide/local_debug
First, set up the environment by following the steps on the official site:

npm      7.19.0
node.js  14.17.0
jdk      1.8
maven    3.6.0+
lombok   installed as an IDEA plugin
mysql    5.7+

The versions must match exactly; otherwise you will run into a lot of pitfalls.
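Before building, a quick PATH check helps confirm the toolchain is actually installed (a minimal sketch: it only checks presence, the version numbers still need manual comparison against the table above):

```shell
# Report which of the required build tools are on PATH.
for tool in node npm java mvn mysql; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```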

3.2 Upgrade flink-connector-starrocks

        <dependency>
            <groupId>com.starrocks</groupId>
            <artifactId>flink-connector-starrocks</artifactId>
            <version>1.2.7_flink-1.13_${scala.binary.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>com.github.jsqlparser</groupId>
                    <artifactId>jsqlparser</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

3.3 Upgrade the MySQL Connector Version

<mysql-connector-java.version>8.0.33</mysql-connector-java.version>

3.4 Build and Test Locally

mvn clean install -P dev,scala-2.12,flink-1.14,web '-Dspotless.check.skip=true' -DskipTests

Reposted from blog.csdn.net/docsz/article/details/131982704