Message middleware - Kafka (II)


Environment setup

Step 1: Download the latest Kafka release and extract it:

 tar -xzf kafka_2.9.2-0.8.1.1.tgz
 cd kafka_2.9.2-0.8.1.1

Step 2: Start the services
Kafka depends on ZooKeeper, so start ZooKeeper first. The command below starts a single-instance ZooKeeper service; appending the & symbol runs it in the background and returns control of the console.

bin/zookeeper-server-start.sh config/zookeeper.properties &

Start Kafka:

bin/kafka-server-start.sh config/server.properties

Step 3: Create a topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Verify the topic was created with the list command:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Step 4: Send some messages
Kafka ships with a simple command-line producer that reads input from a file or from standard input and sends it to the server. By default, each line of input is sent as a separate message.
Run the producer and type messages into the console; they will be sent to the server:

 bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test 

Step 5: Start a consumer
The command-line consumer reads messages and writes them to standard output:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

Result:
Run the consumer in one terminal and the producer in another. Messages typed into the producer terminal appear in the consumer terminal.
Step 6: Build a multi-broker cluster
Now start a cluster of three brokers, all running as nodes on this same machine.
First, write a configuration file for each node:

cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties

Add the following parameters to each newly copied file:

config/server-1.properties:
    broker.id=1
    port=9093
    log.dir=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    port=9094
    log.dir=/tmp/kafka-logs-2

broker.id uniquely identifies a node in the cluster. Because all the nodes run on the same machine, each one must be given a different port and log directory to keep the brokers from overwriting each other's data.
ZooKeeper and the first node are already running, so now start the other two nodes:

 bin/kafka-server-start.sh config/server-1.properties &
 
 bin/kafka-server-start.sh config/server-2.properties &

Create a topic with three replicas:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

Now that the cluster is up, how do we learn about each node? Run the "describe topics" command:

 bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
 // Output
 Topic:my-replicated-topic       PartitionCount:1        ReplicationFactor:3     Configs:
        Topic: my-replicated-topic      Partition: 0    Leader: 1       Replicas: 1,2,0 Isr: 1,2,0

An explanation of the output: the first line describes all partitions, and each partition then gets one line of its own; since this topic has only one partition, just one more line follows.
leader: the node that handles all reads and writes for the partition; the leader is chosen from among all the nodes.
replicas: all replica nodes for the partition, whether or not they are currently in service.
isr: the replica nodes that are currently in sync and in service.

Testing Kafka's fault-tolerance mechanism (omitted)
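The original omits this test; the following is a sketch of the standard fault-tolerance check from the Kafka quickstart, assuming broker 1 is currently the leader of my-replicated-topic as in the describe output above:

```shell
# Kill the broker that is currently the leader (broker 1 here)
pkill -f 'kafka.Kafka config/server-1.properties'

# Describe the topic again: leadership should have failed over to another
# replica, and broker 1 should have dropped out of the Isr list
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic

# Messages already published to the topic remain readable despite the failure
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic my-replicated-topic --from-beginning
```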

Setting up a Kafka development environment

In a Maven project, add the following dependency to pom.xml:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.0</version>
</dependency>

Program configuration
First, an interface plays the role of a configuration file, holding the various Kafka connection parameters:

package com.sohu.kafkademon;
public interface KafkaProperties
{
    final static String zkConnect = "10.22.10.139:2181";
    final static String groupId = "group1";
    final static String topic = "topic1";
    final static String kafkaServerURL = "10.22.10.139";
    final static int kafkaServerPort = 9092;
    final static int kafkaProducerBufferSize = 64 * 1024;
    final static int connectionTimeOut = 20000;
    final static int reconnectInterval = 10000;
    final static String topic2 = "topic2";
    final static String topic3 = "topic3";
    final static String clientId = "SimpleConsumerDemoClient";
}

Producer

package com.sohu.kafkademon;

import java.util.Properties;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
/**
* @author leicui [email protected]
*/
public class KafkaProducer extends Thread
{
    private final kafka.javaapi.producer.Producer<Integer, String> producer;
    private final String topic;
    private final Properties props = new Properties();
    public KafkaProducer(String topic)
    {
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("metadata.broker.list", "10.22.10.139:9092");
        producer = new kafka.javaapi.producer.Producer<Integer, String>(new ProducerConfig(props));
        this.topic = topic;
    }
    @Override
    public void run() {
        int messageNo = 1;
        while (true)
        {
            String messageStr = new String("Message_" + messageNo);
            System.out.println("Send:" + messageStr);
            producer.send(new KeyedMessage<Integer, String>(topic, messageStr));
            messageNo++;
            try {
                sleep(3000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}

Consumer

package com.sohu.kafkademon;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
/**
* @author leicui [email protected]
*/
public class KafkaConsumer extends Thread
{
    private final ConsumerConnector consumer;
    private final String topic;
    public KafkaConsumer(String topic)
    {
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
                createConsumerConfig());
        this.topic = topic;
    }
    private static ConsumerConfig createConsumerConfig()
    {
        Properties props = new Properties();
        props.put("zookeeper.connect", KafkaProperties.zkConnect);
        props.put("group.id", KafkaProperties.groupId);
        props.put("zookeeper.session.timeout.ms", "40000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        return new ConsumerConfig(props);
    }
    @Override
    public void run() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println("receive:" + new String(it.next().message()));
            try {
                sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Simple send and receive
Running the following program sends and receives messages:

package com.sohu.kafkademon;

public class KafkaConsumerProducerDemo
{
    public static void main(String[] args)
    {
        KafkaProducer producerThread = new KafkaProducer(KafkaProperties.topic);
        producerThread.start();
        KafkaConsumer consumerThread = new KafkaConsumer(KafkaProperties.topic);
        consumerThread.start();
    }
}
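One way to run the demo from the project root, assuming the classes live in the com.sohu.kafkademon package and Maven can resolve the exec-maven-plugin; the broker and topic named in KafkaProperties must already exist:

```shell
# Compile the project and run the demo class
mvn compile exec:java -Dexec.mainClass="com.sohu.kafkademon.KafkaConsumerProducerDemo"
```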

Source: https://blog.csdn.net/wangzhanzheng/article/details/79720029



Origin blog.csdn.net/wg22222222/article/details/104829252