Big Data Environment Installation and Configuration for a Large High-Availability Cluster (10): Setting Up a Kafka High-Availability Cluster

1. Obtain the installation package download link

Visit https://kafka.apache.org/downloads and find the Kafka version you need.

Choose a build whose Scala version matches the one installed on your servers (run spark-shell to see the currently installed Scala version). In the package name kafka_2.11-2.2.2, 2.11 is the Scala version and 2.2.2 is the Kafka version.

2. Download and install

cd /usr/local/src/
wget https://archive.apache.org/dist/kafka/2.2.2/kafka_2.11-2.2.2.tgz
tar -zxvf kafka_2.11-2.2.2.tgz
mv kafka_2.11-2.2.2 /usr/local/kafka
cd /usr/local/kafka/config
mkdir -p /data/logs/kafka

 

3. Modify the server.properties configuration

vi server.properties

Modify the following configuration

# broker.id must be unique for each broker: 1 on the first server, 2 on the second, and so on
broker.id=1
log.dirs=/data/logs/kafka
# default number of partitions per topic; adjust to your needs
num.partitions=2
zookeeper.connect=master:2181,master-backup:2181
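The edits above can also be scripted with sed instead of editing by hand. Below is a minimal sketch; the sample file and its initial contents are stand-ins for the real /usr/local/kafka/config/server.properties, and BROKER_ID should be set per server:

```shell
# Stand-in for /usr/local/kafka/config/server.properties
CONF=server.properties.sample
cat > "$CONF" <<'EOF'
broker.id=0
log.dirs=/tmp/kafka-logs
num.partitions=1
zookeeper.connect=localhost:2181
EOF

BROKER_ID=1   # set to 2 on the second server, and so on
sed -i "s|^broker.id=.*|broker.id=${BROKER_ID}|" "$CONF"
sed -i "s|^log.dirs=.*|log.dirs=/data/logs/kafka|" "$CONF"
sed -i "s|^num.partitions=.*|num.partitions=2|" "$CONF"
sed -i "s|^zookeeper.connect=.*|zookeeper.connect=master:2181,master-backup:2181|" "$CONF"

# Show the settings that were changed
grep -E '^(broker.id|log.dirs|num.partitions|zookeeper.connect)=' "$CONF"
```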

 

4. Modify the zookeeper.properties configuration

vi zookeeper.properties

Modify the following configuration

dataDir=/usr/local/zookeeper

 

5. Modify the server environment variables

This configuration needs to be modified on all servers.

vi /etc/profile

Append the following lines at the end of the file:

export KAFKA_HOME=/usr/local/kafka
export PATH=$KAFKA_HOME/bin:$PATH

Save and exit, then run the following command to make the configuration take effect immediately:

source /etc/profile
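The environment setup above can be verified in the same shell; here is a minimal check (paths as configured above):

```shell
# Same exports as added to /etc/profile
export KAFKA_HOME=/usr/local/kafka
export PATH=$KAFKA_HOME/bin:$PATH

# Confirm the variable is set and that $KAFKA_HOME/bin is on the PATH
echo "$KAFKA_HOME"
case ":$PATH:" in
  *":$KAFKA_HOME/bin:"*) echo "PATH ok" ;;
  *)                     echo "PATH missing" ;;
esac
```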

 

6. Sync Kafka to the master-backup server

rsync -avz /usr/local/kafka/ master-backup:/usr/local/kafka/
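Since rsync copies the master's configuration verbatim, each node must then get its own broker.id. A hypothetical helper that maps the hostnames used in this article (master, master-backup) to broker ids could look like this:

```shell
# Hypothetical helper: derive a unique broker.id from the hostname,
# so the value can be set right after rsync on each node.
broker_id_for() {
  case "$1" in
    master)        echo 1 ;;
    master-backup) echo 2 ;;
    *)             echo "unknown host: $1" >&2; return 1 ;;
  esac
}

broker_id_for master
broker_id_for master-backup
```

On each node one could then run `sed -i "s|^broker.id=.*|broker.id=$(broker_id_for $(hostname))|" server.properties`, assuming the hostnames match the names above.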

On master-backup, modify the server.properties configuration:

vi server.properties

Change broker.id to 2:

broker.id=2

 

7. Start the Kafka service

Run the following command on both the master and master-backup servers to start the Kafka service:

kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
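Because -daemon detaches the process, a quick way to confirm the broker actually came up is to check the server log for Kafka's "started" line. A minimal sketch, demonstrated on a stand-in file (the real log is typically under $KAFKA_HOME/logs/server.log):

```shell
# Stand-in for $KAFKA_HOME/logs/server.log; the sample line mirrors the
# message Kafka writes on successful startup.
LOG=kafka-server.log.sample
echo '[KafkaServer id=1] started (kafka.server.KafkaServer)' > "$LOG"

if grep -q 'started (kafka.server.KafkaServer)' "$LOG"; then
  echo "broker is up"
fi
```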

 

8. Common Kafka commands

# Create a topic
kafka-topics.sh --create --zookeeper master:2181,master-backup:2181 --topic sendTopic --partitions 2 --replication-factor 1

# List topics
kafka-topics.sh --list --zookeeper master:2181,master-backup:2181

# Delete a topic
kafka-topics.sh --delete --topic sendTopic --zookeeper master:2181,master-backup:2181

# Create a console producer
kafka-console-producer.sh --broker-list master:9092,master-backup:9092 --topic sendTopic

# Create a console consumer
kafka-console-consumer.sh --bootstrap-server master:9092,master-backup:9092 --topic sendTopic --from-beginning
# Characters typed on the producer side are received by every consumer

# Show topic details
kafka-topics.sh --describe --zookeeper master:2181,master-backup:2181 --topic sendTopic

 

Disclaimer: This article was originally published on the Cnblogs blog by AllEmpty. Reprinting is welcome, but without the author's consent this statement must be retained and a link to the original must be given in a prominent position on the page; otherwise it is considered infringement.

Blog: http://www.cnblogs.com/EmptyFS/

Origin www.cnblogs.com/EmptyFS/p/12113192.html