Installing a ZooKeeper + Kafka Cluster

For highly time-sensitive data such as inventory, we adopt a dual-write scheme (cache + database), and we also have to solve the consistency problems that dual writes introduce.
The cache-data production service listens on a message queue. When data changes in the data-source service (the product information management service), a data-change message is pushed onto the queue.
The cache-data production service consumes this data-change message, extracts some parameters according to the message's instructions, and then calls the corresponding interface of the data-source service to fetch the fresh data; at that point the data usually comes from the MySQL library.
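The flow above can be sketched as a small script. This is only an illustration of the message-driven cache refresh, with the message queue and the data-source service replaced by local stubs; `fetch_from_source` and the sample event names are hypothetical, not part of the real services.

```shell
#!/bin/sh
# Minimal sketch of the cache-refresh flow: consume data-change messages,
# extract the id, re-fetch fresh data from the source, refresh the cache.

# Stub for "call the data-source service interface" (in the real system,
# an HTTP call whose data ultimately comes from the MySQL library).
fetch_from_source() {
    echo "product:$1:fresh-data"
}

# Simulated data-change messages from the queue, one per line.
printf 'product_changed 42\nproduct_changed 43\n' |
while read -r event id; do
    # Extract parameters from the message, then re-fetch the fresh data.
    [ "$event" = "product_changed" ] || continue
    fetch_from_source "$id"    # in reality: write this into the cache
done
```

Running it prints one refreshed entry per change message (`product:42:fresh-data`, then `product:43:fresh-data`).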
1. ZooKeeper Cluster Setup
Copy zookeeper-3.4.5.tar.gz to the /usr/local directory using WinSCP.
Decompress zookeeper-3.4.5.tar.gz: tar -zxvf zookeeper-3.4.5.tar.gz.
Rename the ZooKeeper directory: mv zookeeper-3.4.5 zk
Configure the ZooKeeper-related environment variables:
vi ~/.bashrc
export ZOOKEEPER_HOME=/usr/local/zk
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source ~/.bashrc

cd zk/conf
cp zoo_sample.cfg zoo.cfg

vi zoo.cfg

  Modify:

dataDir=/usr/local/zk/data

  Add:

server.0=eshop-cache01:2888:3888	
server.1=eshop-cache02:2888:3888
server.2=eshop-cache03:2888:3888

  

cd zk
mkdir data
cd data

vi myid
0
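The data-directory and myid steps above can be wrapped in a small helper so the same script runs on each machine. `write_myid` is a hypothetical helper name; on the real machines you would call it with /usr/local/zk/data and ids 0, 1, 2 for eshop-cache01 through eshop-cache03. The demo below writes to a throwaway directory instead of the real path.

```shell
#!/bin/sh
# Create the ZooKeeper data directory and write the node's myid into it.
write_myid() {
    data_dir=$1
    node_id=$2
    mkdir -p "$data_dir"
    echo "$node_id" > "$data_dir/myid"
}

# Demonstrate against a temporary directory instead of /usr/local/zk/data.
demo=$(mktemp -d)
write_myid "$demo/data" 0
cat "$demo/data/myid"    # prints: 0
rm -rf "$demo"
```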

Configure ZooKeeper on the other two nodes following the same steps; using scp to copy the zk directory and .bashrc to eshop-cache02 and eshop-cache03 is enough. The only difference is the identification number in myid, which should be set to 1 and 2 respectively.

Execute on each of the three machines: zkServer.sh start.
Check ZooKeeper status with zkServer.sh status: there should be one leader and two followers.
Run jps on all three nodes to check that a QuorumPeerMain process is present.
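The one-leader/two-follower check can be scripted once the status output from the three machines has been collected (e.g. over ssh). The `statuses` variable below holds sample text in the shape that `zkServer.sh status` prints its Mode line; it is a stand-in, not real output from this cluster.

```shell
#!/bin/sh
# Sample Mode lines as zkServer.sh status would report them, one per node.
statuses="Mode: leader
Mode: follower
Mode: follower"

leaders=$(echo "$statuses" | grep -c 'Mode: leader')
followers=$(echo "$statuses" | grep -c 'Mode: follower')

echo "leaders=$leaders followers=$followers"   # expect leaders=1 followers=2
```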
 
2. Kafka Cluster Setup
Scala is a programming language that is quite popular at the moment; much big-data software, such as Spark (a compute engine), is written in Scala.

Copy scala-2.11.4.tgz to the /usr/local directory using WinSCP.
Decompress scala-2.11.4.tgz: tar -zxvf scala-2.11.4.tgz.
Rename the Scala directory: mv scala-2.11.4 scala

Configure the Scala-related environment variables:
vi ~/.bashrc
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
source ~/.bashrc

Check that Scala installed successfully: scala -version

Install Scala on the other machines following the same steps; using scp to copy the scala directory and .bashrc to the other two machines is enough.
Copy kafka_2.9.2-0.8.1.tgz to the /usr/local directory using WinSCP.
Decompress kafka_2.9.2-0.8.1.tgz: tar -zxvf kafka_2.9.2-0.8.1.tgz.
Rename the Kafka directory: mv kafka_2.9.2-0.8.1 kafka

Configure Kafka:
vi /usr/local/kafka/config/server.properties
broker.id: integers increasing from 0 (0, 1, 2), the unique ID of each broker in the cluster
zookeeper.connect=192.168.31.187:2181,192.168.31.19:2181,192.168.31.227:2181
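Since broker.id is the only per-machine difference, it can be set with sed instead of editing each file by hand. `set_broker_id` is a hypothetical helper; run it with id 0, 1, or 2 on the respective machines. The demo below edits a throwaway copy rather than the real /usr/local/kafka/config/server.properties.

```shell
#!/bin/sh
# Replace any existing broker.id line in a server.properties file.
set_broker_id() {
    file=$1
    id=$2
    sed -i "s/^broker\.id=.*/broker.id=$id/" "$file"
}

# Demonstrate on a temporary copy of the relevant properties.
demo=$(mktemp)
printf 'broker.id=0\nzookeeper.connect=192.168.31.187:2181\n' > "$demo"
set_broker_id "$demo" 2
grep '^broker\.id=' "$demo"    # prints: broker.id=2
rm -f "$demo"
```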

Install slf4j
Upload slf4j-1.7.6.zip to the /usr/local directory
unzip slf4j-1.7.6.zip
Copy slf4j-nop-1.7.6.jar from the slf4j directory into Kafka's libs directory

Solve Kafka's Unrecognized VM option 'UseCompressedOops' problem

vi /usr/local/kafka/bin/kafka-run-class.sh

if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
fi

Remove -XX:+UseCompressedOops from this line.
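Removing the flag can also be done with sed. The demo below strips it from a throwaway file containing a line of the same shape; on the real machine the target would be /usr/local/kafka/bin/kafka-run-class.sh, and `strip_compressed_oops` is an illustrative name.

```shell
#!/bin/sh
# Delete the -XX:+UseCompressedOops option (and its trailing space)
# from the JVM options line in the given file.
strip_compressed_oops() {
    sed -i 's/-XX:+UseCompressedOops //' "$1"
}

# Demonstrate on a temporary file with a line of the same shape.
demo=$(mktemp)
echo 'KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC"' > "$demo"
strip_compressed_oops "$demo"
cat "$demo"    # prints: KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseParNewGC"
rm -f "$demo"
```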

Install Kafka on the other two machines following the same steps, using scp to copy the kafka directory over. The only difference is broker.id in server.properties, which must be set to 1 and 2.

In the kafka directory on each of the three machines, execute the following command: nohup bin/kafka-server-start.sh config/server.properties &

Use jps to check that startup succeeded.

Use basic commands to check that the Kafka cluster was built successfully:

bin/kafka-topics.sh --zookeeper 192.168.31.187:2181,192.168.31.19:2181,192.168.31.227:2181 --topic test --replication-factor 1 --partitions 1 --create

bin/kafka-console-producer.sh --broker-list 192.168.31.181:9092,192.168.31.19:9092,192.168.31.227:9092 --topic test

bin/kafka-console-consumer.sh --zookeeper 192.168.31.187:2181,192.168.31.19:2181,192.168.31.227:2181 --topic test --from-beginning

Origin: www.cnblogs.com/sunliyuan/p/11366478.html