Practice data lake iceberg Lesson 28 Deploy packages that do not exist in the public repository to the local repository

Series Article Directory

Practice Data Lake iceberg Lesson 1 Getting Started
Practice Data Lake iceberg Lesson 2 Iceberg is based on hadoop’s underlying data format
Practice data lake iceberg: In sqlclient, use SQL to read data from Kafka to iceberg (upgrade the version to flink1.12.7)
practice data lake iceberg Lesson 5 hive catalog features
practice data lake iceberg Lesson 6 write from kafka to iceberg failure problem solving
practice data lake iceberg Lesson 7 Write to iceberg in real time
practice data lake iceberg Lesson 8 hive and iceberg integration
practice data lake iceberg Lesson 9 merge small files
practice data lake iceberg Lesson 10 snapshot delete
practice data lake iceberg Lesson 11 test the whole process of partition tables (generating data, building tables, merging, and deleting snapshots)
Practice data lake iceberg Lesson 12 What is a catalog
Practice data lake iceberg Lesson 13 Metadata is many times larger than data files
Practice data lake iceberg Lesson 14 Data merging (to solve the problem of metadata expansion over time)
practice data lake iceberg Lesson 15 spark installation and integration iceberg (jersey package conflict)
practice data lake iceberg Lesson 16 opening the door to iceberg through spark3
Practice data lake iceberg Lesson 17 Hadoop2.7, spark3 on yarn run iceberg configuration
Practice data lake iceberg Lesson 18 Start commands for multiple clients interacting with iceberg (commonly used commands)
Practice data lake iceberg Lesson 19 flink count iceberg, no result problem
practice data lake iceberg Lesson 20 flink + iceberg CDC scenario (version problem, test failed)
practice data lake iceberg Lesson 21 flink1.13.5 + iceberg0.131 CDC (test successful INSERT, change operation failed)
Practice data lake iceberg Lesson 22 flink1.13.5 + iceberg0.131 CDC (CRUD test successful)
practice data lake iceberg Lesson 23 flink-sql restart from checkpoint
practice data lake iceberg Lesson 24 iceberg metadata detailed analysis
Practice data lake iceberg Lesson 25 Running flink sql in the background, the effect of addition, deletion and modification
Practice data lake iceberg Lesson 26 checkpoint setting method
Practice data lake iceberg Lesson 27 Flink cdc test program failure restart: can restart from the last checkpoint and continue working
practice data lake iceberg Lesson 28 Deploy packages that do not exist in the public repository to the local repository
practice data lake iceberg Lesson 29 how to obtain flink jobId elegantly and efficiently
practice data lake iceberg lesson 30 mysql -> iceberg, different clients sometimes have time zone issues
Practice data lake iceberg Lesson 31 use github's flink-streaming-platform-web tool to manage flink task flow, test cdc restart scenario
practice data lake iceberg more content directory



foreword

Problem: iceberg depends on many packages that the public Maven repository does not carry, even though iceberg itself ships them. When the project is compiled and packaged through the pom, the build fails because the Maven repository cannot resolve these packages.
Solution: install these packages into the local repository with a Maven command.
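Both cases below follow the same `mvn install:install-file` pattern. As a sketch (the helper function and file paths here are illustrative, not part of the original post), the command can be assembled from a jar path plus its Maven coordinates:

```shell
# Illustrative helper (not from the post): assemble the
# install:install-file command from a jar path and its coordinates.
build_install_cmd() {
  local file="$1" group="$2" artifact="$3" version="$4"
  echo "mvn install:install-file -Dfile=${file}" \
       "-DgroupId=${group} -DartifactId=${artifact}" \
       "-Dversion=${version} -Dpackaging=jar"
}

# Example: the coordinates used for the iceberg runtime jar in Case 1
# (the path is an illustrative placeholder).
build_install_cmd /tmp/iceberg-flink-runtime-1.13-0.13.1.jar \
  org.apache.iceberg iceberg-flink-runtime 1.13-0.13.1
```

The four `-D` parameters are what matter: `file` points at the jar on disk, and `groupId`/`artifactId`/`version` decide the coordinates under which the local repository stores it.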

Case 1. Deploy iceberg-flink-runtime-1.13-0.13.1.jar to the local repository

pom.xml

<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-flink-runtime</artifactId>
    <version>${iceberg.flink.version}</version>
    <scope>${scope.type}</scope>
</dependency>

Install command:
mvn install:install-file -Dfile=E:\icebergLib\flink1.13-iceberg0131\iceberg-flink-runtime-1.13-0.13.1.jar -DgroupId=org.apache.iceberg -DartifactId=iceberg-flink-runtime -Dversion=1.13-0.13.1 -Dpackaging=jar
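After the command succeeds, the jar lands under the local repository root at a path derived from the coordinates (the default root is `~/.m2/repository`; the log in Case 2 shows a machine configured with `d:\repo\.m2`). A sketch of that mapping, with an illustrative repository root:

```shell
# Sketch (illustrative, not from the post): compute where
# install:install-file places a jar, given its coordinates.
# Dots in the groupId become directory separators.
coords_to_jar_path() {
  local repo="$1" group="$2" artifact="$3" version="$4"
  echo "${repo}/$(echo "${group}" | tr '.' '/')/${artifact}/${version}/${artifact}-${version}.jar"
}

coords_to_jar_path "$HOME/.m2/repository" \
  org.apache.iceberg iceberg-flink-runtime 1.13-0.13.1
```

Checking that this path exists is a quick way to confirm the install worked before rebuilding the project.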

Case 2. Deploy flink-sql-connector-mysql-cdc-2.2-SNAPSHOT.jar to the local repository

E:\icebergLib\flink-sql-connector-mysql-cdc-2.2-SNAPSHOT.jar

Install command:
mvn install:install-file -DgroupId=com.ververica -DartifactId=flink-cdc-connectors -Dversion=2.2-SNAPSHOT -Dpackaging=jar -Dfile=E:\icebergLib\flink-sql-connector-mysql-cdc-2.2-SNAPSHOT.jar

C:\Users\Administrator>  mvn install:install-file -DgroupId=com.ververica -DartifactId=flink-cdc-connectors -Dversion=2.2-SNAPSHOT -Dpackaging=jar -Dfile=E:\icebergLib\flink-sql-connector-mysql-cdc-2.2-SNAPSHOT.jar
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-install-plugin:2.4:install-file (default-cli) @ standalone-pom ---
[INFO] Installing E:\icebergLib\flink-sql-connector-mysql-cdc-2.2-SNAPSHOT.jar to d:\repo\.m2\com\ververica\flink-cdc-connectors\2.2-SNAPSHOT\flink-cdc-connectors-2.2-SNAPSHOT.jar
[INFO] Installing C:\Users\Administrator\AppData\Local\Temp\mvninstall8018599831875350589.pom to d:\repo\.m2\com\ververica\flink-cdc-connectors\2.2-SNAPSHOT\flink-cdc-connectors-2.2-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  0.506 s
[INFO] Finished at: 2022-04-14T16:36:50+08:00
[INFO] ------------------------------------------------------------------------

C:\Users\Administrator>
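Once installed, the connector can be referenced from a pom.xml using exactly the coordinates passed to install:install-file. A sketch of the matching dependency for Case 2 (the version is spelled out here rather than parameterized through a property):

```xml
<dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-cdc-connectors</artifactId>
    <version>2.2-SNAPSHOT</version>
</dependency>
```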

After installation, the jar and its generated pom appear under the corresponding path in the local repository (see the paths in the log above).



Origin blog.csdn.net/spark_dev/article/details/124017060