A demo of writing data from Flink to Hive

        First of all, writing from Flink into Hive has been supported since Flink 1.10. For most companies, adopting it this way is less a deliberate design choice than a gradual migration, and the support the Apache community now provides for distributed environments and stream computing makes that migration much easier. Thanks to the community contributors for their research and sharing. Below is a small demo for everyone to analyze and learn from together; if you run into problems, feel free to discuss them.

Note: if you can read data in the local environment, you can also write data. When writing, pay attention to the environment on the server, mainly permissions and jar dependencies.

1. Code implementation

1.1 Use TableEnvironment to read the catalog configuration, then operate Hive with SQL

1. Let's start with the most basic test demo. Once it passes, look at the code below: it reads data from a Hive table and then writes it back to Hive.

package flink.java.connector.hive.write;
import flink.java.utils.HiveResourceInfo;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.connector.jdbc.JdbcInputFormat;
import org.apache.flink.table.api.*;
import org.apache.flink.table.api.bridge.java.BatchTableEnvironment;
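
To make the overall flow concrete, here is a minimal, self-contained sketch of the same idea: register a HiveCatalog on a TableEnvironment, then drive Hive purely through SQL. It assumes Flink 1.11+ with the blink planner and flink-connector-hive on the classpath; the catalog name, hive-conf directory, Hive version and the source_table/target_table names are placeholders to adapt to your own environment.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveReadWriteSqlDemo {

    public static void main(String[] args) {
        // Batch TableEnvironment based on the blink planner
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tableEnv = TableEnvironment.create(settings);

        // Placeholder values: adapt the catalog name, default database,
        // hive-site.xml directory and Hive version to your cluster
        String catalogName = "myhive";
        String defaultDatabase = "default";
        String hiveConfDir = "/opt/hive/conf";
        String hiveVersion = "2.3.4";

        // Register the Hive catalog and make it the current catalog,
        // so plain SQL statements resolve against Hive tables
        HiveCatalog hiveCatalog = new HiveCatalog(catalogName, defaultDatabase, hiveConfDir, hiveVersion);
        tableEnv.registerCatalog(catalogName, hiveCatalog);
        tableEnv.useCatalog(catalogName);

        // Optional: switch to the Hive SQL dialect for Hive-style DDL/DML
        tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);

        // Read from one Hive table and write the result into another.
        // source_table and target_table are hypothetical table names.
        tableEnv.executeSql("INSERT INTO target_table SELECT * FROM source_table");
    }
}

On Flink 1.10 the same statement would go through sqlUpdate(...) followed by execute(...), since executeSql was only introduced in 1.11. On the server, make sure the Hive integration jars (typically flink-connector-hive and hive-exec) are on the classpath, as noted above.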

Origin: blog.csdn.net/Baron_ND/article/details/110376212