1. Install the stand-alone version of zookeeper
Note: you can use the stand-alone version of zookeeper or the zookeeper bundled with kafka
Download address: http://archive.apache.org/dist/zookeeper/
Download version: zookeeper-3.4.5.tar.gz
1. Unzip to /usr/local/zookeeper
mkdir -p /usr/local/zookeeper
tar -zxf zookeeper-3.4.5.tar.gz -C /usr/local/zookeeper
2. Create a new folder
Create two folders, data and logs, in the installation directory to store the data and the logs respectively
cd /usr/local/zookeeper/zookeeper-3.4.5
mkdir data
mkdir logs
3. Configuration information
Create a zoo.cfg file under the conf folder with the following content
tickTime=2000
dataDir=/usr/local/zookeeper/zookeeper-3.4.5/data
dataLogDir=/usr/local/zookeeper/zookeeper-3.4.5/logs
clientPort=2181
4. Start zookeeper
cd /usr/local/zookeeper/zookeeper-3.4.5/bin
./zkServer.sh start
2. Install kafka
Prerequisite: the JDK is already installed on the machine
Download kafka
Download address: http://kafka.apache.org/downloads
Download version: kafka_2.11-1.1.0.tgz
Unzip to /usr/local/kafka
mkdir -p /usr/local/kafka
tar -zxf kafka_2.11-1.1.0.tgz -C /usr/local/kafka
3. Use kafka
The config folder in the kafka extraction directory holds the configuration files. This guide uses the default configuration; to change any setting, open the corresponding file in that directory and edit it.
For example: modify the location where the zookeeper data is stored
Find the zookeeper.properties file in the config directory
vim zookeeper.properties
Modify dataDir to a path of your own choosing
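For example, after editing, the relevant lines of config/zookeeper.properties might look like this (the path shown is only an illustration; substitute your own):

```properties
# directory where zookeeper stores its snapshot data (example path, not a default)
dataDir=/usr/local/kafka/zookeeper-data
# the port clients connect on
clientPort=2181
```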
1. General settings:
(1)broker.id
The default value is 0. It can be set to any integer, but it must be unique within the kafka cluster.
(2)port
The port parameter can be set to any available port, but ports below 1024 require starting kafka with root privileges
(3)zookeeper.connect
Specifies the zookeeper address that brokers use to store metadata; localhost:2181 indicates that zookeeper is running locally on port 2181.
The configuration parameter form: hostname:port/path
hostname: The machine name or IP address of the zookeeper server
port: client connection port of zookeeper
/path is an optional zookeeper chroot path. If not specified, the root path is used by default. If the specified path does not exist, the broker creates it automatically at startup.
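Putting the three parts together, here is a sketch of a zookeeper.connect entry for a hypothetical three-node ensemble using the chroot path /kafka (the hostnames are placeholders, not values from this guide):

```properties
# comma-separated hostname:port pairs; the trailing /kafka is an optional chroot path
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka
```

Listing several ensemble members lets the broker fail over to another zookeeper node if one is unreachable.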
2. Start
Enter the kafka decompression directory
cd /usr/local/kafka/kafka_2.11-1.1.0
start zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
start kafka
bin/kafka-server-start.sh config/server.properties
Create topic(name:testaha)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testaha
View created topics
bin/kafka-topics.sh --list --zookeeper localhost:2181
Note: in the tool's usage text this option appears as --zookeeper <String: hosts>, i.e. it takes a host:port string, here localhost:2181
create a consumer
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testaha --from-beginning
create a producer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testaha
View topic related information
bin/kafka-topics.sh --zookeeper localhost:2181 --topic testaha --describe
Note: Leader: the node responsible for all reads and writes of the partition; it is elected from among the partition's replicas
Replicas: Lists all replica nodes, regardless of whether the node is in service
Isr: the set of in-sync replicas, i.e. the replica nodes that are alive and caught up with the leader
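For the single-broker setup above (broker.id 0, one partition, replication factor 1), the --describe output typically resembles the following sketch, with all three roles held by broker 0:

```
Topic:testaha   PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: testaha  Partition: 0    Leader: 0       Replicas: 0     Isr: 0
```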
4. Use kafka from Java programs
1. Add the required dependencies
Open the pom.xml file and add the following dependencies. (The client version used here, 0.10.2.1, is older than the 1.1.0 broker; kafka brokers remain compatible with older clients.)
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.10.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.1</version>
</dependency>
2. Simple version of producer and consumer implementation
Note: Make sure zookeeper and kafka are started before running Java code
producer
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class create_topic {
    public static void main(String[] args) {
        /* Create a new Properties object */
        Properties kafkaProps = new Properties();
        /* Three properties that must be set */
        kafkaProps.put("bootstrap.servers", "localhost:9092");
        kafkaProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        /* Pass the Properties object to the producer */
        Producer<String, String> producer = new KafkaProducer<String, String>(kafkaProps);
        /* Send messages to kafka */
        try {
            int i = 0;
            while (true) {
                producer.send(new ProducerRecord<String, String>("blue", "sky", "cloud" + i));
                i++;  // increment so each message value is distinct
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            producer.close();
        }
    }
}
consumer
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class create_consumer {
    public static void main(String[] args) {
        /* Create a Properties object */
        Properties props = new Properties();
        /* bootstrap.servers and the two deserializers must be set; group.id is optional */
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "black");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        /* Create the consumer */
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        /* Subscribe to the topic */
        consumer.subscribe(Collections.singletonList("blue"));
        while (true) {
            /* Poll the server for new records */
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.key() + "---" + record.topic() + "----" + record.value());
            }
        }
    }
}
Operation result: with both programs running, the consumer continuously prints the key, topic, and value of each message it receives.