Notes on reading and writing Hive with Flink

Install Flink 1.11.1 in a Linux environment and start the SQL client to read Hive data

First, download the Flink 1.11.1 tgz package from the official website. Installation follows the same process as the first half of the previous article. Then configure FLINK_HOME/conf/sql-client-defaults.yaml:

catalogs:
  - name: myhive   # any name you like
    type: hive
    hive-conf-dir: /etc/hive/conf  # directory containing hive-site.xml
    hive-version: 1.2.1    # Hive version

execution:
  # select the implementation responsible for planning table programs
  # possible values are 'blink' (used by default) or 'old'
  planner: blink
  # 'batch' or 'streaming' execution
  type: batch  # either streaming or batch works here
  # allow 'event-time' or only 'processing-time' in sources
  time-characteristic: event-time
  # interval in ms for emitting periodic watermarks
  periodic-watermarks-interval: 200
  # 'changelog' or 'table' presentation of results
  result-mode: table
  # maximum number of maintained rows in 'table' presentation of results
  max-table-result-rows: 1000000
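With the catalog configured, starting the SQL client and querying Hive looks roughly like the following sketch. The catalog name `myhive` comes from the configuration above; the table name `test_table` is a placeholder for whatever table exists in your Hive metastore:

# start the SQL CLI from the Flink installation directory
./bin/sql-client.sh embedded

-- inside the Flink SQL CLI:
USE CATALOG myhive;          -- switch to the Hive catalog defined in the yaml
SHOW TABLES;                 -- lists tables from the Hive metastore
SELECT * FROM test_table;    -- read Hive data directly

If the catalog was picked up correctly, SHOW TABLES should list the tables registered in the Hive metastore pointed to by /etc/hive/conf.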


Origin blog.csdn.net/Baron_ND/article/details/109667896