With a single-server Kafka deployment, how does the consumer determine whether the server is up?

            With a single server in a Kafka cluster, how can the consumer side tell whether the Kafka server (or the ZooKeeper server) has started? Out of the box it cannot: the consumer simply keeps polling for data without reporting an error. If the server has not started, we would like the consumer side to react and remind its users to investigate. Here I offer a simple solution; it may not be very general, but it illustrates the idea.

         If the Kafka server or the ZooKeeper server is not started, the producer gets an error (org.apache.kafka.common.errors.TimeoutException) when it sends a message to the server. The Kafka version I use from Maven is 0.11.0.0. When this exception occurs, the producer can record the failure in a static variable and expose an interface that the consumer side can call. On the consumer side, a static variable records whether data has been received, and a scheduled task checks it: if the consumer has not obtained data from the Kafka server within the specified time, the task calls the producer's status interface, and if the returned status indicates the server is not started, the consumer takes the appropriate action. If there are multiple servers in a cluster, the client can determine which brokers are alive from the partition metadata of a subscribed topic. The main methods of the producer and consumer are given below; the status interface itself is not implemented here. If you need it, build it as described above.
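Before the flag-based scheme above, there is an even simpler sanity check the consumer side can run: probe the broker's listener port with a plain TCP connect. This is a hypothetical helper not from the original write-up; it only proves the port is accepting connections, not that the broker is fully healthy, but it catches the "server never started" case cheaply.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerProbe {

    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // A refused or timed-out connection suggests the broker is not up.
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The broker address matches the one used in the listings below
        System.out.println(BrokerProbe.isReachable("192.168.201.190", 9092, 2000));
    }
}
```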

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.11.0.0</version>
</dependency>
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.clients.producer.KafkaProducer;

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/**
 * @author liaoyubo
 * @version 1.0 2017/7/25
 * @description
 */
public class KafkaProducerTest {

    public static void main(String [] args){
        Properties properties = new Properties();
        //properties.put("zookeeper.connect","localhost:2181");
        properties.put("bootstrap.servers", "192.168.201.190:9092");
        properties.put("acks", "all");
        properties.put("retries", 0);
        properties.put("batch.size", 16384);
        properties.put("linger.ms", 1);
        properties.put("buffer.memory", 33554432);
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String,String> producer = new KafkaProducer<String, String>(properties);

        for(int i = 0;i < 10;i++){
            Future<RecordMetadata> futureRecordMetadata =  producer.send(new ProducerRecord<String, String>("myTopic",Integer.toString(i),Integer.toString(i)));
            try {
                futureRecordMetadata.get(3, TimeUnit.SECONDS);
                System.out.println("Sent message: " + i);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                // Check the cause directly rather than string-splitting the
                // message, which breaks if the exception text changes
                if (e.getCause() instanceof org.apache.kafka.common.errors.TimeoutException) {
                    System.out.println("Unable to connect to server");
                }
                e.printStackTrace();
            } catch (TimeoutException e) {
                // The future did not complete within 3 seconds
                System.out.println("Unable to connect to server");
                e.printStackTrace();
            }
        }

        producer.close();

    }

}
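The static variable the article describes for recording the producer-side state could look like the following. ProducerStatus is a hypothetical class name, and the interface exposing it to the consumer side (HTTP, RPC, etc.) is left out, as in the original.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ProducerStatus {

    // Atomic so the flag is safely visible across the producer's threads
    private static final AtomicBoolean serverUp = new AtomicBoolean(true);

    // Called from the producer's TimeoutException handler
    public static void markDown() { serverUp.set(false); }

    // Called after a successful send to reset the state
    public static void markUp() { serverUp.set(true); }

    // The status interface exposed to the consumer side would return this
    public static boolean isUp() { return serverUp.get(); }
}
```

In the producer's catch blocks above, one would call ProducerStatus.markDown() when the TimeoutException is detected, and ProducerStatus.markUp() after a successful send.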

 

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

/**
 * @author liaoyubo
 * @version 1.0 2017/7/26
 * @description
 */
public class KafkaConsumerClientTest {

    public static void main(String [] args){

        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"192.168.201.190:9092");
        properties.put(ConsumerConfig.GROUP_ID_CONFIG,"test");
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        Consumer<String,String> consumer = new KafkaConsumer<String, String>(properties);

        consumer.subscribe(Arrays.asList("myTopic","myTest"));
        while (true){
            ConsumerRecords<String,String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records){
                //int partition = record.partition();
                String topic = record.topic();
                List<PartitionInfo> partitionInfoList = consumer.partitionsFor(topic);
                for (PartitionInfo partitionInfo : partitionInfoList){
                    // The partition leader is a broker that is currently alive
                    Node node = partitionInfo.leader();
                    System.out.println(node.host());
                    // The replica list shows every broker hosting this partition
                    Node[] nodes = partitionInfo.replicas();
                }
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
        }
    }

}
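The consumer-side scheduled task described earlier could be sketched like this. ConsumerWatchdog and the silence threshold are assumptions, and the call to the producer's status interface is stubbed out, as in the original.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConsumerWatchdog {

    // Static state, as in the article: the poll loop updates this timestamp
    // every time it receives a record
    private static volatile long lastReceivedMs = System.currentTimeMillis();

    public static void recordReceived() {
        lastReceivedMs = System.currentTimeMillis();
    }

    // True if no data has arrived within the allowed window
    public static boolean isStale(long maxSilenceMs) {
        return System.currentTimeMillis() - lastReceivedMs > maxSilenceMs;
    }

    public static void start(long maxSilenceMs) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            if (isStale(maxSilenceMs)) {
                // Here one would call the producer's status interface
                // (e.g. ProducerStatus.isUp() over HTTP/RPC); if it reports
                // the server is down, alert the consumer-side users.
                System.out.println("No data within " + maxSilenceMs + " ms, checking producer status...");
            }
        }, maxSilenceMs, maxSilenceMs, TimeUnit.MILLISECONDS);
    }
}
```

In the consumer's poll loop above, one would call ConsumerWatchdog.recordReceived() inside the record loop and ConsumerWatchdog.start(...) once before entering it.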

 
