Some pitfalls encountered while working with Flink

1. Writing to HBase from a custom Sink

We use the native HBase client, so each sink instance can control for itself how many records it buffers before flushing. We hit a few pits that caused data not to make it into HBase:

  • The HBase cluster version and the client version are inconsistent (the 1.x and 2.x versions conflict with each other)
  • Jar package conflicts

For example, protobuf-java version conflicts commonly surface as two key errors: java.io.IOException: java.lang.reflect.InvocationTargetException, and Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.protobuf.ProtobufUtil.
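One common way to resolve such a protobuf-java conflict is to pin the dependency to the version the HBase client was built against, so that no transitive dependency can drag in an incompatible one. A minimal sketch, assuming a Maven build and an HBase 1.x client (which was built against protobuf-java 2.5.0); adjust the version to match your own cluster:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin protobuf-java so transitive dependencies cannot pull in a
         newer, incompatible version alongside the HBase 1.x client. -->
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>2.5.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Running `mvn dependency:tree -Dincludes=com.google.protobuf` is a quick way to confirm which versions are actually on the job's classpath before and after the pin.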

2. Flink reading from Kafka

When Flink reads from Kafka with FlinkKafkaConsumer08, offsets are committed to ZooKeeper. This configuration can be written in a conf file like the one below, where the ZooKeeper address the offsets are committed to can be specified directly. From Consumer09 onwards, offsets are no longer committed to ZooKeeper; instead Kafka keeps consumption state in a separate internal topic of its own.

xxxx08 {
    bootstrap.servers = "ip:9092"
    zookeeper.connect = "ip1:2181,ip2/vio"
    group.id = "group1"
    auto.commit.enable = true
    auto.commit.interval.ms = 30000
    zookeeper.session.timeout.ms = 60000
    zookeeper.connection.timeout.ms = 30000
}
final Properties consumerProps = ConfigUtil
        .getProperties(config, "xxxx08"); // util function that reads the conf section we wrote above
final FlinkKafkaConsumer08<String> source =
        new FlinkKafkaConsumer08<>(topic, new SimpleStringSchema(), consumerProps);
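The author's ConfigUtil helper is not shown, but its job is just to turn one section of the conf file into a java.util.Properties object for the consumer. A hypothetical minimal stand-in (the class name and parsing rules are illustrative; it handles only flat `key = value` lines, not the full conf syntax):

```java
import java.util.Properties;

// Hypothetical stand-in for the ConfigUtil helper used above: parses the
// flat `key = value` lines of one conf-file section into Properties.
public class ConfigUtilSketch {
    public static Properties getProperties(String confSection) {
        Properties props = new Properties();
        for (String line : confSection.split("\n")) {
            int eq = line.indexOf('=');
            if (eq < 0) continue; // skip braces, blanks, section headers
            String key = line.substring(0, eq).trim();
            // strip optional surrounding double quotes from the value
            String value = line.substring(eq + 1).trim().replaceAll("^\"|\"$", "");
            if (!key.isEmpty()) props.setProperty(key, value);
        }
        return props;
    }

    public static void main(String[] args) {
        String conf = "bootstrap.servers = \"ip:9092\"\n"
                + "group.id = \"group1\"\n"
                + "auto.commit.enable = true";
        Properties p = getProperties(conf);
        System.out.println(p.getProperty("bootstrap.servers")); // → ip:9092
        System.out.println(p.getProperty("group.id"));          // → group1
        System.out.println(p.getProperty("auto.commit.enable")); // → true
    }
}
```

In practice a library such as Typesafe Config does this parsing properly; the sketch only shows the shape of the helper the consumer code relies on.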

 


Origin www.cnblogs.com/lcmichelle/p/11204362.html