Kafka Installation and Basic Operations

Installing Kafka

1 Glossary:

Topic: the category a message is published under; topics are used to classify and tag messages.
Producer: the client that produces and sends messages.
Consumer: the client that receives messages.
Consumer Group: every consumer belongs to a specific consumer group; a group name may be specified, otherwise the consumer joins the default group.
Broker: a Kafka instance; every node in the cluster is a broker.
Partition: a physical concept; every topic contains one or more partitions. Partitioning is how Kafka scales throughput.
Replica: a copy of a partition, used to guarantee the partition's high availability.
Leader: the replica role that producers and consumers interact with; they talk only to the leader.
Follower: the replica role that copies data from the leader.
Controller: one of the servers in the Kafka cluster, responsible for leader election and various kinds of failover.
2 Kafka messaging modes

(1) Point-to-point mode, i.e. pull mode (one-to-one: the consumer actively pulls data, and a message is removed from the queue once it has been received)

The point-to-point messaging model is typically based on polling (pulling): clients request messages from the queue rather than having messages pushed to them. Its defining characteristic is that each message sent to the queue is received and processed by exactly one receiver, even if multiple listeners are attached to the queue.

(2) Publish/subscribe mode, i.e. push mode (one-to-many: once data is produced, it is pushed to all subscribers)

The publish/subscribe messaging model is push-based. It supports several kinds of subscribers: non-durable (temporary) subscribers receive messages only while they are actively listening to the topic, whereas durable subscribers receive every message published to the topic, even if the subscriber is offline when the message is published.

3 Kafka installation
1. Download

Apache Kafka official downloads page: http://kafka.apache.org/downloads.html

Scala 2.11 - kafka_2.11-0.10.2.0.tgz (asc, md5)

Note: in the name kafka_2.11-0.10.2.0, 2.11 is the Scala version the package was built with and 0.10.2.0 is the Kafka version.

Kafka cluster installation:

  1. Install JDK & JAVA_HOME configuration

  2. Install Zookeeper

    Build a ZooKeeper (ZK) cluster by following the official ZooKeeper documentation, then start the cluster.

  3. Extract the Kafka installation package

   [ambow@hadoopNode1 ambow]$ tar  -zxvf  kafka_2.11-0.10.2.1.tgz   -C  ~/app/

4. Configure Environment Variables

export KAFKA_HOME=/home/ambow/app/kafka_2.11-0.10.2.1

export PATH=$PATH:$KAFKA_HOME/bin
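After adding the two lines to ~/.bash_profile, it is worth confirming they took effect. A quick check (the path is the install location used in this tutorial):

```shell
# Set the variables as ~/.bash_profile would, then verify them.
export KAFKA_HOME=/home/ambow/app/kafka_2.11-0.10.2.1
export PATH=$PATH:$KAFKA_HOME/bin
echo "$KAFKA_HOME"
# Confirm Kafka's bin directory is really on PATH
case ":$PATH:" in
  *":$KAFKA_HOME/bin:"*) echo "kafka bin is on PATH" ;;
esac
```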
  5. Modify the configuration file config/server.properties
    vi server.properties
  # Unique id of this node in the cluster; assign sequentially: 0, 1, 2, 3, 4
    broker.id=0    
     						
    # Whether deleting topics is allowed; the default is false (keep it false in production)
    delete.topic.enable=true
    
    # Host and port to listen on; on each node, change this to that node's own hostname
    listeners=PLAINTEXT://hadoopNode1:9092   
    
    
    # Path where Kafka stores its message data
    log.dirs=/home/ambow/kafkaData/logs   
    
    # Default number of partitions when a topic is created
    num.partitions=3  
    
    # ZooKeeper cluster list; nodes are separated by commas
    zookeeper.connect=hadoopNode1:2181,hadoopNode2:2181,hadoopNode3:2181,hadoopNode4:2181,hadoopNode5:2181
      


6. Distribute the installation to each node

[ambow@hadoopNode1 app]$ scp -r   kafka_2.11-0.10.2.1     ambow@hadoopNode5:~/app
[ambow@hadoopNode1 app]$ scp -r   kafka_2.11-0.10.2.1     ambow@hadoopNode4:~/app
[ambow@hadoopNode1 app]$ scp -r   kafka_2.11-0.10.2.1     ambow@hadoopNode3:~/app
[ambow@hadoopNode1 app]$ scp -r   kafka_2.11-0.10.2.1     ambow@hadoopNode2:~/app



[ambow@hadoopNode1 app]$ scp -r   ~/.bash_profile     ambow@hadoopNode5:~
[ambow@hadoopNode1 app]$ scp -r   ~/.bash_profile     ambow@hadoopNode4:~
[ambow@hadoopNode1 app]$ scp -r   ~/.bash_profile     ambow@hadoopNode3:~
[ambow@hadoopNode1 app]$ scp -r   ~/.bash_profile     ambow@hadoopNode2:~


source   ~/.bash_profile 

7. Modify the configuration file on each node:

# Unique id of this node in the cluster; assign sequentially: 0, 1, 2, 3, 4
broker.id=0  

# Host and port to listen on; each node uses its own hostname
listeners=PLAINTEXT://hadoopNode1:9092   
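The two per-node edits above can also be scripted instead of done by hand in vi. A minimal sketch: it operates on a trimmed sample file here so it is self-contained; in practice you would point CONF at $KAFKA_HOME/config/server.properties, and MY_ID / MY_HOST are hypothetical variable names standing in for each node's own values.

```shell
# Create a trimmed sample config (a real run would edit
# $KAFKA_HOME/config/server.properties instead).
CONF=server.properties.sample
cat > "$CONF" <<'EOF'
broker.id=0
listeners=PLAINTEXT://hadoopNode1:9092
EOF

MY_ID=2                 # this node's unique broker id
MY_HOST=hadoopNode3     # this node's hostname
sed -i "s/^broker\.id=.*/broker.id=${MY_ID}/" "$CONF"
sed -i "s|^listeners=.*|listeners=PLAINTEXT://${MY_HOST}:9092|" "$CONF"
cat "$CONF"
```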

8. Start the Kafka service on each node

[ambow@hadoopNode1 app]$ kafka-server-start.sh  $KAFKA_HOME/config/server.properties  &

Note: ZooKeeper must be started on each node first:

zkServer.sh start

[ambow@hadoopNode1 app]$ kafka-server-stop.sh
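Starting the service node by node gets tedious on five machines. A sketch of a loop that could do it from one node over ssh, assuming passwordless ssh between the nodes is already set up; the leading echo makes this a dry run that only prints the commands. Note that kafka-server-start.sh also accepts -daemon, which detaches the broker from the terminal and avoids the trailing &.

```shell
# Dry run: print the start command for every broker node.
# Remove the leading "echo" to actually execute over ssh.
NODES="hadoopNode1 hadoopNode2 hadoopNode3 hadoopNode4 hadoopNode5"
for host in $NODES; do
  echo ssh "$host" "kafka-server-start.sh -daemon \$KAFKA_HOME/config/server.properties"
done
```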

9. Test:

1) Create a topic named test

# Syntax:
kafka-topics.sh --create --zookeeper <ZooKeeper nodes, comma-separated> --replication-factor <number of replicas> --partitions <number of partitions> --topic <topic name>

[ambow@hadoopNode1 app]$ kafka-topics.sh --create --zookeeper hadoopNode1:2181,hadoopNode2:2181,hadoopNode3:2181,hadoopNode4:2181,hadoopNode5:2181 --replication-factor 3 --partitions 1 --topic test    




 kafka-topics.sh --create --zookeeper HadoopNode1:2181,HadoopNode2:2181,HadoopNode3:2181 --replication-factor 3 --partitions 1 --topic love    
 
 
  kafka-topics.sh --create --zookeeper hpNode1:2181 --replication-factor 2 --partitions 3 --topic it

Note: replication-factor (the number of replicas) must not exceed the number of broker nodes.

With a single replica, the partition's data lives on just one node.

With two replicas, the data lives on two nodes.

By default, Kafka retains messages for 7 days.
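That 7-day default comes from the broker's retention settings in server.properties. The relevant keys, shown here with Kafka's default values, are:

```properties
# How long message data is kept before it becomes eligible for deletion
# (168 hours = 7 days)
log.retention.hours=168
# How often the broker checks for log segments that have expired
log.retention.check.interval.ms=300000
```

Retention can also be capped by total size per partition with log.retention.bytes (unlimited by default).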

# Delete the test topic
[ambow@hadoopNode1 app]$ kafka-topics.sh --delete --zookeeper hadoopNode1:2181,hadoopNode2:2181,hadoopNode3:2181,hadoopNode4:2181,hadoopNode5:2181   --topic test  

2) List the topics that have been created

# List all topics:
[ambow@hadoopNode1 app]$ kafka-topics.sh --list --zookeeper localhost:2181


3) View the details of a topic

# View detailed information about the specified topic
[ambow@hadoopNode1 app]$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

4) Simulate a producer publishing messages to the specified topic

Open another terminal and run:

[ambow@hadoopNode1 app]$ kafka-console-producer.sh --broker-list hadoopNode1:9092,hadoopNode2:9092,hadoopNode3:9092,hadoopNode4:9092,hadoopNode5:9092 --topic test



 kafka-console-producer.sh --broker-list HadoopNode1:9092,HadoopNode2:9092,HadoopNode3:9092 --topic test

Note: --broker-list may name a single node, but usually lists several nodes so that the producer keeps working if one broker goes down.

5) Consumer:

Simulate a consumer client receiving messages from the specified topic.

Open another terminal and run:

[ambow@hadoopNode1 app]$ kafka-console-consumer.sh --bootstrap-server hadoopNode1:9092,hadoopNode2:9092,hadoopNode3:9092,hadoopNode4:9092,hadoopNode5:9092 --from-beginning --topic test

# Subscribe
kafka-console-consumer.sh --bootstrap-server HadoopNode1:9092,HadoopNode2:9092,HadoopNode3:9092 --from-beginning --topic test

Origin blog.csdn.net/weixin_43599377/article/details/104535222