Kafka cluster setup (manual)


1 Kafka installation and configuration
1.1 Cluster planning


1.2 Related software
1. jdk-8u131-linux-x64.tar.gz
2. zookeeper-3.4.5.tar.gz
3. scala-2.10.4.tgz
4. kafka_2.10-0.8.2.1.tgz (built for Scala 2.10)

1.3 Modify the host name to configure the hosts file

Modify the host name
Modify the host name of the ha1 virtual machine.
Execute the command: vi /etc/sysconfig/network
Set HOSTNAME=ha1.ry600.com
In the same way, set ha2 to ha2.ry600.com and ha3 to ha3.ry600.com.

Configure the ha1 server and execute the command: vi /etc/hosts
127.0.0.1 localhost
192.168.137.171 ha1.ry600.com
192.168.137.172 ha2.ry600.com
192.168.137.173 ha3.ry600.com

Copy the file to the other servers with scp:
scp /etc/hosts 192.168.137.172:/etc/
scp /etc/hosts 192.168.137.173:/etc/
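The entries above can be staged in a temporary file for review before touching /etc/hosts; a minimal sketch using the IPs and names from this document (the scp copies are shown but commented out so the sketch runs anywhere):

```shell
# Stage the three cluster entries (IPs/names from this doc) in a temp file.
hosts_tmp=$(mktemp)
printf '%s\n' \
  "192.168.137.171 ha1.ry600.com" \
  "192.168.137.172 ha2.ry600.com" \
  "192.168.137.173 ha3.ry600.com" > "$hosts_tmp"
cat "$hosts_tmp"

# After appending the entries to /etc/hosts on ha1, distribute the file:
for ip in 192.168.137.172 192.168.137.173; do
  echo "would run: scp /etc/hosts $ip:/etc/"
  # scp /etc/hosts "$ip:/etc/"
done
```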
 
1.4 Configure ssh password-free login
Generate keys, execute the command: ssh-keygen -t rsa
Press Enter four times; the key files are created in ~/.ssh.
Generate the key pair on ha1, then copy the public key to every node (including ha1 itself) by executing:
   ssh-copy-id ha1.ry600.com
   ssh-copy-id ha2.ry600.com
   ssh-copy-id ha3.ry600.com
 (Optional: You can continue to generate keys on ha2 and ha3 and copy them to other nodes)
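The Enter presses can also be skipped by giving ssh-keygen an empty passphrase and an explicit output path; a small non-interactive sketch (written to a temporary directory rather than ~/.ssh so it is safe to try):

```shell
# Non-interactive key generation: -N "" sets an empty passphrase, -f sets the
# output path, -q suppresses prompts. Uses a temp dir instead of ~/.ssh.
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q
ls "$keydir"   # id_rsa and id_rsa.pub
```

For the real setup, drop the -f option (or point it at ~/.ssh/id_rsa) and then run the ssh-copy-id commands above.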


1.5 Turn off the firewall
systemctl stop firewalld.service // Stop the firewall
systemctl disable firewalld.service // The firewall will no longer start after the system restarts

1.6 Install JDK, Scala and ZooKeeper
Please refer to the JDK, Scala and ZooKeeper installation documentation.


1.7 Set environment variables
On ha1, modify the profile file.
Execute the command: vi /etc/profile
Add at the end of the file:
export JAVA_HOME=/hasoft/jdk1.8.0_131
export SCALA_HOME=/hasoft/scala-2.10.4
export PATH=${JAVA_HOME}/bin:${SCALA_HOME}/bin:$PATH
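To confirm the additions take effect, the same three lines can be applied to the current shell and checked (the /hasoft paths are the install locations used in this document; adjust them to your own):

```shell
# The three export lines from /etc/profile, applied to the current shell.
export JAVA_HOME=/hasoft/jdk1.8.0_131
export SCALA_HOME=/hasoft/scala-2.10.4
export PATH=${JAVA_HOME}/bin:${SCALA_HOME}/bin:$PATH

# Check that the JDK bin directory is now on PATH.
case ":$PATH:" in
  *":${JAVA_HOME}/bin:"*) echo "JAVA_HOME/bin is on PATH" ;;
  *) echo "PATH not updated" ;;
esac
```

After editing /etc/profile itself, run `source /etc/profile` so the changes apply without logging in again.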

1.8 Install Kafka
JDK, Scala and ZooKeeper must be installed in advance. Then download kafka_2.10-0.8.2.1.tgz (built for Scala 2.10).
Download address: http://kafka.apache.org/downloads.html
After the download is complete, unzip it.
Command: tar -zxvf kafka_2.10-0.8.2.1.tgz 
Enter the config directory,
command: vi server.properties
There are a lot of configuration contents, and the key modification is the following three items.
broker.id          The ID of this broker in the cluster; starts from 0 and must be different on each machine.
host.name          The host name of this machine.
zookeeper.connect  The ZooKeeper cluster address.

Machine 1 is configured as follows:
broker.id=0
host.name=ha1.ry600.com
zookeeper.connect=ha1.ry600.com:2181,ha2.ry600.com:2181,ha3.ry600.com:2181

Copy the entire installation directory to the other machines:
command: scp -r kafka_2.10-0.8.2.1/ ha2.ry600.com:/hasoft/
command: scp -r kafka_2.10-0.8.2.1/ ha3.ry600.com:/hasoft/

The configuration of machine 2 is modified as follows:
broker.id=1
host.name=ha2.ry600.com
zookeeper.connect=ha1.ry600.com:2181,ha2.ry600.com:2181,ha3.ry600.com:2181

The configuration of machine 3 is modified as follows:
broker.id=2
host.name=ha3.ry600.com
zookeeper.connect=ha1.ry600.com:2181,ha2.ry600.com:2181,ha3.ry600.com:2181
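Instead of hand-editing the copies on machines 2 and 3, the two keys that change can be derived from the ha1 settings with sed. A minimal sketch over a reduced template containing only the three keys this document changes; on a real cluster, run the same sed against the full config/server.properties:

```shell
# Template holding the machine-1 values from this doc.
template=/tmp/server.ha1.properties
cat > "$template" <<'EOF'
broker.id=0
host.name=ha1.ry600.com
zookeeper.connect=ha1.ry600.com:2181,ha2.ry600.com:2181,ha3.ry600.com:2181
EOF

# Derive the ha2 and ha3 configs: broker.id becomes 1 and 2, host.name follows
# the machine name; zookeeper.connect stays identical on every broker.
for i in 2 3; do
  sed -e "s/^broker\.id=.*/broker.id=$((i-1))/" \
      -e "s/^host\.name=.*/host.name=ha${i}.ry600.com/" \
      "$template" > "/tmp/server.ha${i}.properties"
done

grep -E -H '^(broker\.id|host\.name)' /tmp/server.ha*.properties
```

The generated files then replace config/server.properties on the corresponding machine.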
 
2 Start Kafka
On each of the three machines in the cluster, enter the Kafka home directory and execute the command: bin/kafka-server-start.sh config/server.properties
 

3 Create topic
Execute the following command on the ha1 machine to create a topic.
command: bin/kafka-topics.sh --create --topic kafkatopictest --replication-factor 3 --partitions 2 --zookeeper ha1.ry600.com:2181
Prompt after execution:
Created topic "kafkatopictest".
 
4 Send messages to Kafka
Execute the following command on the ha2 machine to send messages to Kafka.
command: bin/kafka-console-producer.sh --broker-list ha2.ry600.com:9092 --sync --topic kafkatopictest
After the prompt appears, type a message:
Hello Kafka, I will test SparkStreaming on you next lesson


 
5 Receive messages sent to Kafka
Execute the following command on the ha3 machine to receive the messages sent to Kafka.
command: bin/kafka-console-consumer.sh --zookeeper ha1.ry600.com:2181 --topic kafkatopictest --from-beginning
Output after execution:
Hello Kafka, I will test SparkStreaming on you next lesson
 
 
The Kafka cluster setup is now complete.
