kafka configuration and basic commands

kafka:
    distributed messaging system
    its consumer groups unify the two JMS models: p2p + pub/sub (ps)
    


JMS:
    Java Message Service

    p2p:
        point to point (queue model): each message is delivered to exactly one consumer

    ps:
        publish & subscribe (topic model): each message is delivered to every subscriber


kafka: written in Scala + Java
 ===============
    for real-time stream processing
    Features: 1. Persistent data
              2. High throughput
              3. Distributed
              4. Multi-client support
              5. Real-time performance

    kafka roles: broker    // server node that stores and serves messages
                 producer  // writes records to topics
                 consumer  // reads records from topics

    kafka_2.11-1.1.0.tgz :  // 2.11  ==> Scala version
                            // 1.1.0 ==> Kafka version
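The naming convention of the archive can be split mechanically; a small sketch using plain bash parameter expansion:

```shell
# Split a Kafka archive name into its Scala and Kafka versions.
pkg=kafka_2.11-1.1.0
scala_ver=${pkg#kafka_}      # drop the "kafka_" prefix -> 2.11-1.1.0
scala_ver=${scala_ver%%-*}   # keep text before the first "-" -> 2.11
kafka_ver=${pkg#*-}          # keep text after the first "-"  -> 1.1.0
echo "scala=$scala_ver kafka=$kafka_ver"   # prints: scala=2.11 kafka=1.1.0
```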


kafka installation:
========================
    1. decompress the archive
    2. create a symbolic link
    3. set environment variables
        # kafka environment variables
        export KAFKA_HOME=/soft/kafka
        export PATH=$PATH:$KAFKA_HOME/bin
    4. make the environment variables take effect: source /etc/profile
    5.
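The steps above can be tried end to end; a runnable sketch using a throwaway directory and a dummy tarball under /tmp (the real /soft paths are substituted, since the actual archive is assumed absent):

```shell
# Build a dummy archive so the decompress/link/export mechanics can be tested.
DEMO=/tmp/kafka-demo
mkdir -p $DEMO/kafka_2.11-1.1.0/bin
tar -czf $DEMO/kafka_2.11-1.1.0.tgz -C $DEMO kafka_2.11-1.1.0
rm -rf $DEMO/kafka_2.11-1.1.0

tar -xzf $DEMO/kafka_2.11-1.1.0.tgz -C $DEMO    # 1. decompress
ln -sfn $DEMO/kafka_2.11-1.1.0 $DEMO/kafka      # 2. symbolic link
export KAFKA_HOME=$DEMO/kafka                   # 3. environment variables
export PATH=$PATH:$KAFKA_HOME/bin

readlink $DEMO/kafka                            # the link points at the versioned dir
```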


kafka local mode:
=======================
    1. Modify the configuration file /soft/kafka/config/server.properties:
        zookeeper.connect=s102:2181,s103:2181,s104:2181
        log.dirs=/home/centos/kafka/logs
        listeners=PLAINTEXT://s101:9092

    2. Start kafka
        kafka-server-start.sh [-daemon] /soft/kafka/config/server.properties

    3. Use jps to check that the Kafka process is running


    4. Close kafka
        kafka-server-stop.sh
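The -daemon flag backgrounds the broker JVM (the start script essentially wraps it with nohup and redirects output to a log). The mechanism can be sketched with a stand-in process instead of Kafka:

```shell
# 'sleep' stands in for the broker JVM; nohup + '&' is essentially what
# kafka-server-start.sh -daemon does, with console output sent to a log file.
nohup sleep 3 >/tmp/fake-kafka.log 2>&1 &
pid=$!
kill -0 $pid 2>/dev/null && echo "daemon running (pid $pid)"   # liveness check (what jps would show)
kill $pid                                                      # roughly what kafka-server-stop.sh does (SIGTERM)
```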
        
        


kafka fully distributed mode: s102 - s104
========================
    
    1. Synchronize the kafka installation directory and symbolic link
        xsync.sh /soft/kafka
        xsync.sh /soft/kafka_2.11-1.1.0

    2. Synchronize the environment variables as root
        su root
        xsync.sh /etc/profile
        exit
    
    3. Make the s102-s104 environment variables take effect respectively
        s102> source /etc/profile
        s103> source /etc/profile
        s104> source /etc/profile

    4. Modify /soft/kafka/config/server.properties on s102-s104 respectively
        s102:
            broker.id=102
            listeners=PLAINTEXT://s102:9092

        s103:
            broker.id=103
            listeners=PLAINTEXT://s103:9092

        s104:
            broker.id=104
            listeners=PLAINTEXT://s104:9092

    5. Start kafka on s102-s104 respectively
        
        s102> kafka-server-start.sh -daemon /soft/kafka/config/server.properties
        s103> kafka-server-start.sh -daemon /soft/kafka/config/server.properties
        s104> kafka-server-start.sh -daemon /soft/kafka/config/server.properties


    6. Write a script to batch-start kafka
        #!/bin/bash
        if [ $# -ge 1 ] ; then echo param must be 0 ; exit ; fi
        for (( i=102 ; i<=104 ; i++ )) ; do
            tput setaf 2
            echo ================ s$i starting kafka  ================
            tput setaf 9
            ssh s$i "source /etc/profile ; kafka-server-start.sh -daemon /soft/kafka/config/server.properties"
        done

    
    7. Write a script to batch-stop kafka
        #!/bin/bash
        if [ $# -ge 1 ] ; then echo param must be 0 ; exit ; fi
        for (( i=102 ; i<=104 ; i++ )) ; do
            tput setaf 2
            echo ================ s$i stopping kafka  ================
            tput setaf 9
            ssh s$i "source /etc/profile ; kafka-server-stop.sh"
        done

    8. Combine them into one script: xkafka.sh

        #!/bin/bash
        cmd=$1
        if [ $# -gt 1 ] ; then echo param must be 1 ; exit ; fi
        for (( i=102 ; i<=104 ; i++ )) ; do
            tput setaf 2
            echo ================ s$i $cmd kafka  ================
            tput setaf 9
            case $cmd in 
            start ) ssh s$i "source /etc/profile ; kafka-server-start.sh -daemon /soft/kafka/config/server.properties" ;; 
            stop ) ssh s$i "source /etc/profile ; kafka-server-stop.sh" ;; 
            * ) echo illegal argument ; exit ;;
            esac
        done
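The loop/case dispatch in xkafka.sh can be tested without a cluster by shadowing ssh with a shell function (bash resolves functions before PATH); a dry-run sketch using the hostnames from these notes:

```shell
#!/bin/bash
# Dry run of the xkafka.sh dispatch: 'ssh' is shadowed by a function that
# just prints the host and the command it would run remotely.
ssh() { echo "[$1] $2" ; }

cmd=start
for (( i=102 ; i<=104 ; i++ )) ; do
    case $cmd in
    start ) ssh s$i "kafka-server-start.sh -daemon /soft/kafka/config/server.properties" ;;
    stop  ) ssh s$i "kafka-server-stop.sh" ;;
    *     ) echo "illegal argument" ; exit 1 ;;
    esac
done
```

Running it prints one line per host showing exactly which command each broker would receive, which makes the case logic easy to verify before pointing the script at real machines.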


kafka basic commands: run on s102 - s104
 ===============================
    Start the kafka service: kafka-server-start.sh -daemon /soft/kafka/config/server.properties
    Shut down kafka: kafka-server-stop.sh

    topic: the basic unit of kafka production and consumption
    record: the basic data unit of kafka production and consumption
        exists in the form of K-V pairs

    Topic: producers write data to a specified topic; consumers read data from a specified topic
    
    Create a topic: kafka-topics.sh --create --topic t1 --zookeeper s102:2181 --partitions 2 --replication-factor 2

    List topics: kafka-topics.sh --list --zookeeper s102:2181

    Create a producer: kafka-console-producer.sh --broker-list s102:9092 --topic t1

    Create a consumer: kafka-console-consumer.sh --zookeeper s102:2181 --topic t1                    // resume from the last committed offset
                       kafka-console-consumer.sh --zookeeper s102:2181 --topic t1 --from-beginning   // consume from the beginning
                

kafka uses API producers and consumers:
==============================
    1. pom file
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>1.1.0</version>
        </dependency>

    2. Write the producer
        // Uses the old Scala client API (kafka.javaapi) that ships in kafka_2.11-1.1.0
        import java.util.Properties;
        import kafka.javaapi.producer.Producer;
        import kafka.producer.KeyedMessage;
        import kafka.producer.ProducerConfig;

        public class MyProducer {

            public static void main(String[] args) throws Exception {
                // Initialize the java properties
                Properties props = new Properties();
                props.put("metadata.broker.list", "s102:9092,s103:9092,s104:9092");
                props.put("serializer.class", "kafka.serializer.StringEncoder");
                props.put("request.required.acks", "1");

                // Wrap the java properties in a kafka producer configuration
                ProducerConfig config = new ProducerConfig(props);
                Producer<String, String> producer = new Producer<String, String>(config);

                for (int i = 0; i < 100; i++) {
                    String msg = "tom" + i;
                    System.out.println(msg);

                    // Create a kafka message for topic t1
                    KeyedMessage<String, String> data = new KeyedMessage<String, String>("t1", msg);
                    // Send the data
                    producer.send(data);
                    System.out.println("Message Count - " + i);
                    Thread.sleep(1000);
                }
                // Close the producer
                producer.close();
            }
        }
        
    3. Write consumer
        // Uses the old high-level consumer API (deprecated; removed in Kafka 2.0)
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Properties;
        import kafka.consumer.ConsumerConfig;
        import kafka.consumer.ConsumerIterator;
        import kafka.consumer.KafkaStream;
        import kafka.javaapi.consumer.ConsumerConnector;

        public class MyConsumer {

            public static void main(String[] args) {

                Properties props = new Properties();
                props.put("zookeeper.connect", "s102:2181,s103:2181,s104:2181");
                props.put("group.id", "g1");
                props.put("zookeeper.session.timeout.ms", "500");
                props.put("zookeeper.sync.time.ms", "250");
                props.put("auto.commit.interval.ms", "1000");

                // Initialize the consumer configuration
                ConsumerConfig config = new ConsumerConfig(props);

                // Initialize the consumer
                ConsumerConnector consumer = kafka.consumer.Consumer
                    .createJavaConsumerConnector(config);

                // Specify the number of consumer threads per topic
                Map<String, Integer> topicMap = new HashMap<String, Integer>();
                topicMap.put("t1", 1);

                // Create the message streams
                // Map<topic, List<KafkaStream<key, message>>>
                Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreamsMap = consumer.createMessageStreams(topicMap);

                // Get the streams (k-v) of topic t1
                List<KafkaStream<byte[], byte[]>> streamList = consumerStreamsMap.get("t1");

                for (final KafkaStream<byte[], byte[]> stream : streamList) {
                    ConsumerIterator<byte[], byte[]> consumerIte = stream.iterator();
                    while (consumerIte.hasNext()) {
                        // Iterate over the message data
                        System.out.println("value: " + new String(consumerIte.next().message()));
                    }
                }
                if (consumer != null) {
                    consumer.shutdown();
                }
            }
        }

 
