Flink: solving a set of cluster-installation problems

(1) Starting the cluster in standalone mode after installation

Use the startup script bin/start-cluster.sh directly; do not run it as `sh start-cluster.sh` (the script relies on bash).

List running jobs and cancel one by its job ID:

flink-1.8.1/bin/flink list

flink-1.8.1/bin/flink cancel 9b99be4eed871c4e62562f9035ebef65

(2) A Flink job that cannot be stopped

After executing the cancel command, the job state showed as canceled, but the task did not actually stop.

The cause was that the machine had hung and was not responding.

When submitting a job, this error occurred:

flink scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutabl

The project uses import scala.collection.JavaConverters._, so the Scala library jar must be on the classpath.

I use Scala 2.11 locally and believed everything was on 2.11, so I searched for a long time. Eventually I saw that the jar in the Flink installation's opt/ directory was built for Scala 2.12, which caused the version mismatch.
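One way to avoid this mismatch is to pin the Scala binary version in the build so it matches the Flink distribution. A minimal build.sbt sketch, assuming Flink 1.8.1 built for Scala 2.11 (versions are illustrative):

```scala
// build.sbt -- pin Scala to the same binary version as the Flink distribution
scalaVersion := "2.11.12"

// The %% operator appends the Scala binary suffix (_2.11) to the artifact name,
// keeping the job's Scala version consistent with the cluster's.
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala"           % "1.8.1" % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % "1.8.1" % "provided"
)
```

Marking the Flink dependencies as "provided" keeps them out of the fat jar, since the cluster already ships them.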

(3) Monitoring on the web UI

Bytes received       Records received  Bytes sent      Records sent

If the source and sink are not chained into the same task, values are displayed for the edge between them.

Note additionally that Flink only counts input/output between its own nodes; it does not include interaction with the outside world. So in these statistics, a Flink source's read-bytes/read-records is 0 and a Flink sink's write-bytes/write-records is 0, and that is what the Flink UI displays. (Yuchuanchen's blog explains this: https://blog.csdn.net/yuchuanchen/article/details/88406438)

Adding a keyBy before the write solves it, since keyBy breaks the operator chain and the records between the nodes get counted:

import java.text.SimpleDateFormat
import java.util.Date

val source = env
  .addSource(kafkaConsumerA)
  .map(a => {
    // Tag each record with the current day before parsing it into a bean
    val simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd")
    val day = simpleDateFormat.format(new Date())
    parseToBean(a, day)
  })

// keyBy breaks the operator chain, so bytes/records between nodes show up in the UI
source.keyBy(a => a.carid)
  .map(a => toClickHouseInsertFormat(a))
  .addSink(new ClickhouseSink(props)).name("ck_sink")

You can also define your own metrics; after submitting the job, you can view the data at the location you specify.
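As a sketch of what a custom metric can look like (the counter name "my_records_in" and the class name are illustrative, not from the original post), a RichMapFunction can register a counter on its operator's metric group and increment it per record:

```scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.metrics.Counter

// A pass-through map function that counts how many records flow through it.
// The counter appears in the Flink web UI (and any configured metric reporters).
class CountingMap[T] extends RichMapFunction[T, T] {
  @transient private var counter: Counter = _

  override def open(parameters: Configuration): Unit = {
    // Register the counter on this operator's metric group (name is illustrative)
    counter = getRuntimeContext.getMetricGroup.counter("my_records_in")
  }

  override def map(value: T): T = {
    counter.inc()
    value
  }
}
```

Usage would be something like `source.map(new CountingMap[MyBean])` in the pipeline above; the metric then shows up under that operator in the web UI.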


Origin www.cnblogs.com/lichar/p/11687085.html