Receiving Canal messages and sending them to Kafka

In the binlog-server-new project, the message-subscription side subscribes to Canal, receives the complete Canal messages, and forwards them to Kafka.

We need to write a Canal client, CanalClient, to listen for and process the data. CanalClient implements the Runnable interface.

CanalBinaryListener is a listener interface for binary Canal data; BinlogKafkaProducer implements CanalBinaryListener's onBinlog method and forwards each received entry to Kafka.

The BinlogKafkaProducer is registered with the CanalClient:

  /**
    * Start the server, reading configuration from Java environment variables
    */
  def startServer(): Unit = {
    logger.info(s"Starting binlogServer...")

    val producerBrokerHost = SysEnvUtil.CANAL_KAFKA_HOST
    val topic = SysEnvUtil.CANAL_KAFKA_TOPIC

    val canalServerIp = SysEnvUtil.CANAL_SERVER_IP
    val canalServerPort = SysEnvUtil.CANAL_SERVER_PORT.toInt

    val destination = SysEnvUtil.CANAL_DESTINATION
    val username = SysEnvUtil.CANAL_USERNAME
    val password = SysEnvUtil.CANAL_PASSWORD

    val kafkaProducer = new BinlogKafkaProducer(producerBrokerHost, topic)
    kafkaProducer.init()

    val canalClient = new CanalClient(canalServerIp, canalServerPort, destination, username, password)
    canalClient.registerBinlogListener(kafkaProducer)

    val executorService = Executors.newFixedThreadPool(1)
    executorService.execute(canalClient)

    logger.info("binlogServer started successfully...")
  }

A single-thread pool runs CanalClient; its run() method calls work(), the main processing loop.
At initialization we obtain a SimpleCanalConnector; CanalConnector's getWithoutAck(batchSize) method then lets us fetch a Message.

The getWithoutAck(batchSize) method:

It fetches events without specifying a position, and returns under these conditions:

It tries to take up to batchSize records, takes however many are available, and does not block waiting.

The Canal client remembers the latest fetched position.

If this is the first fetch, output starts from the oldest data Canal has saved.
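To make that contract concrete, here is a toy, self-contained model of the fetch/ack cycle. This is not Canal's real connector API, just a sketch of the semantics described above: non-blocking batched fetches, a remembered position, and re-delivery of unacknowledged records after a rollback.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy model of the getWithoutAck/ack contract; not Canal's real connector. */
class ToyConnector {
    private final List<String> log;   // stands in for the binlog stream
    private int acked = 0;            // position confirmed by the client
    private int fetched = 0;          // latest position handed out

    ToyConnector(List<String> log) { this.log = log; }

    /** Take up to batchSize records without blocking; remember the position. */
    List<String> getWithoutAck(int batchSize) {
        int end = Math.min(fetched + batchSize, log.size());
        List<String> batch = new ArrayList<>(log.subList(fetched, end));
        fetched = end;
        return batch;
    }

    /** Confirm everything fetched so far. */
    void ack() { acked = fetched; }

    /** On rollback/reconnect, unacked records are re-delivered. */
    void rollback() { fetched = acked; }

    public static void main(String[] args) {
        ToyConnector c = new ToyConnector(List.of("e1", "e2", "e3", "e4", "e5"));
        System.out.println(c.getWithoutAck(3)); // [e1, e2, e3] - up to batchSize, no blocking
        c.ack();
        System.out.println(c.getWithoutAck(3)); // [e4, e5] - only what is available
        c.rollback();                           // e4, e5 were never acked
        System.out.println(c.getWithoutAck(3)); // [e4, e5] - re-delivered
    }
}
```

The real connector also exposes rollback(); pairing each getWithoutAck with an explicit ack is what gives the client at-least-once delivery.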

From the Message we can get a unique batch id and the concrete data entries. We process these entries, ignoring transaction begin/end entries and QUERY entries and keeping only the binlog row content. After processing, we acknowledge the batch.

    /**
     * Main processing loop
     */
    private void work() {

        try {
            while (running) {

                Message message = connector.getWithoutAck(BatchSize);

                long batchId = message.getId();
                int size = message.getEntries().size();

                if (batchId == -1 || size == 0) {
                    // No new data; sleep briefly before polling again
                    try {
                        Thread.sleep(Sleep);
                    } catch (InterruptedException e) {
                        logger.error(e.getMessage(), e);
                    }

                } else {
                    if (logger.isDebugEnabled()) {
                        logger.debug("Read binlog batchId: {}, size: {}, name: {}, offset: {}", batchId, size,
                                message.getEntries().get(0).getHeader().getLogfileName(),
                                message.getEntries().get(0).getHeader().getLogfileOffset());
                    }
                    // Process the entries
                    process(message.getEntries());
                }
                // Acknowledge the batch
                connector.ack(batchId);
            }

        } catch (Exception e) {
            connector.disconnect();
            logger.error("[CanalClient] [run] " + e.getMessage(), e);
        } finally {
            reconnect();
        }
    }
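The process(...) method called above is not shown in the post. Based on the filtering described earlier (skip transaction begin/end entries and QUERY entries, deliver row data to the listeners), a plausible self-contained sketch looks like this. The EntryType enum and Entry record are simplified stand-ins; the real classes are Canal's protobuf-generated CanalEntry types.

```java
import java.util.List;
import java.util.function.Consumer;

/** Stand-in for CanalEntry.EntryType; the real type is protobuf-generated. */
enum EntryType { TRANSACTIONBEGIN, TRANSACTIONEND, QUERY, ROWDATA }

/** Minimal stand-in for a binlog entry. */
record Entry(EntryType type, String payload) {}

class EntryProcessor {
    /** Skip transaction begin/end and QUERY entries; hand row data to the listener. */
    static int process(List<Entry> entries, Consumer<Entry> listener) {
        int delivered = 0;
        for (Entry e : entries) {
            if (e.type() == EntryType.TRANSACTIONBEGIN
                    || e.type() == EntryType.TRANSACTIONEND
                    || e.type() == EntryType.QUERY) {
                continue; // ignore non-row events, as described in the text
            }
            listener.accept(e); // e.g. BinlogKafkaProducer.onBinlog(entry)
            delivered++;
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<Entry> batch = List.of(
                new Entry(EntryType.TRANSACTIONBEGIN, ""),
                new Entry(EntryType.ROWDATA, "insert row"),
                new Entry(EntryType.QUERY, "select 1"),
                new Entry(EntryType.ROWDATA, "update row"),
                new Entry(EntryType.TRANSACTIONEND, ""));
        int n = EntryProcessor.process(batch, e -> System.out.println("deliver: " + e.payload()));
        System.out.println("delivered " + n + " row entries"); // delivered 2 row entries
    }
}
```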

When entries arrive, CanalClient invokes the registered listeners, which send the data objects to Kafka in asynchronous-callback mode.

    /**
     * Send a message in asynchronous-callback mode
     *
     * @param topic   target Kafka topic
     * @param message serialized binlog entry
     */
    public void send(String topic, byte[] message) {
        producer.send(new ProducerRecord<>(topic, message), (metadata, e) -> {
            if (e != null) {
                logger.error("[" + getClass().getSimpleName() + "]: message send failed, cause: " + e.getMessage(), e);
                return; // metadata may be null when the send failed
            }
            logger.info("[binlog]: message sent, topic: {}, offset: {}, partition: {}, time: {}",
                    metadata.topic(), metadata.offset(), metadata.partition(), metadata.timestamp());
        });
    }


    @Override
    public void onBinlog(CanalEntry.Entry entry) {
        send(topic, entry.toByteArray());
    }
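One piece the post never shows is the reconnect() called from work()'s finally block. A hedged, self-contained sketch of one plausible implementation follows: retry with exponential backoff until the connection succeeds or attempts run out. The connect Runnable is a hypothetical stand-in for re-initializing the Canal connector; the real method name and behavior are not given in the source.

```java
/** Hypothetical reconnect helper with exponential backoff; connect is a stand-in. */
class Reconnector {
    private final Runnable connect;   // stand-in for connector re-initialization
    private final long baseDelayMs;

    Reconnector(Runnable connect, long baseDelayMs) {
        this.connect = connect;
        this.baseDelayMs = baseDelayMs;
    }

    /** Retry connect up to maxAttempts times, doubling the delay after each failure. */
    int reconnect(int maxAttempts) {
        long delay = baseDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                connect.run();
                return attempt;            // connected on this attempt
            } catch (RuntimeException e) {
                try {
                    Thread.sleep(delay);   // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return -1;
                }
                delay *= 2;
            }
        }
        return -1;                         // gave up
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated connector that fails twice, then succeeds
        Reconnector r = new Reconnector(() -> {
            if (++calls[0] < 3) throw new RuntimeException("connect failed");
        }, 1);
        System.out.println("connected on attempt " + r.reconnect(5)); // connected on attempt 3
    }
}
```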


Origin blog.csdn.net/licheng989/article/details/90171032