Fabric Kafka cluster deployment: environment preparation

An aside: in this post we do some preparatory work first, because cluster deployment involves quite a lot of pieces. The groundwork is laid here, and then the cluster itself can be brought up.


Back when I did this there were no virtual machines, only real physical machines, and not much money to build a cluster. The analysis at the time was that at least 4 servers were needed as a foundation, with one acting as a peer node and another as a new node for testing dynamic addition. Note that this is the bare minimum. Generally speaking, a proper Kafka-based Fabric cluster requires 8 servers, consisting of:

three orderer nodes,
three ZooKeeper nodes,
four Kafka nodes,
and two organizations, each with two peer nodes.

(As the hosts file below shows, the orderer, ZooKeeper, and Kafka roles share the same four machines, so the total comes to eight servers: four for Kafka/ZooKeeper/orderer and four for the peers.)

The overall structure is as follows:

(Figure: overall cluster architecture diagram)

The first step is to modify the hosts file

Following the previous blog post, 8 virtual machines have been set up and their IP addresses assigned as described in that tutorial. Now modify the hosts file on each server, one by one.

vim   /etc/hosts

Then put the configuration in it:

192.168.137.128   kafka0
192.168.137.129   kafka1
192.168.137.130   kafka2
192.168.137.131   kafka3

192.168.137.132   peer0.org1.example.com
192.168.137.133   peer1.org1.example.com
192.168.137.134   peer0.org2.example.com
192.168.137.135   peer1.org2.example.com

192.168.137.128   orderer0.example.com
192.168.137.129   orderer1.example.com
192.168.137.130   orderer2.example.com

192.168.137.128   zookeeper0
192.168.137.129   zookeeper1
192.168.137.130   zookeeper2


Then repeat this step on every server that will join the cluster, and make sure the contents are identical everywhere. An optional check that the names resolve correctly is sketched below.
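This check is my own addition rather than part of the original steps: getent resolves each name through /etc/hosts, so any entry you mistyped or forgot shows up immediately.

# Run on any server after editing /etc/hosts; prints one line per resolvable name.
for h in kafka0 kafka1 kafka2 kafka3 \
         peer0.org1.example.com peer1.org1.example.com \
         peer0.org2.example.com peer1.org2.example.com \
         orderer0.example.com orderer1.example.com orderer2.example.com \
         zookeeper0 zookeeper1 zookeeper2; do
  getent hosts "$h" || echo "NOT RESOLVED: $h"
done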


The second step is to configure password-free login

Unless otherwise noted, this step is performed on the kafka0 server, i.e. the machine with IP address 192.168.137.128.

During environment deployment it is sometimes necessary to transfer files between servers. You could download them to your own computer first and then push them out to the other servers, but that is cumbersome. Make sure the previous step is complete before carrying out this one.

First, password-free login requires root privileges. In the root user's home directory there is a hidden .ssh directory; enter it and generate a public/private key pair:

cd  .ssh
ssh-keygen

Press Enter three times in a row to accept the defaults (no passphrase is set). Two files, id_rsa and id_rsa.pub, will be generated in this directory.
Then copy the public key to the other servers, for example to kafka1:

ssh-copy-id root@kafka1

or

ssh-copy-id root@192.168.137.129

There are 8 servers in total, so copy the public key to the other 7 in turn. Building the environment involves transferring files to other servers, and as the next step shows, the transfers here go mainly from the kafka0 machine outward, but you can configure the other machines the same way if you wish. A loop that copies the key to all of them is sketched below.
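If you would rather not run ssh-copy-id seven times by hand, a simple loop over the hostnames defined earlier also works. This is just a sketch: you will still be prompted for each server's root password once, and the second loop verifies that login is now password-free.

# Copy the public key from kafka0 to the other seven servers.
for h in kafka1 kafka2 kafka3 \
         peer0.org1.example.com peer1.org1.example.com \
         peer0.org2.example.com peer1.org2.example.com; do
  ssh-copy-id "root@$h"
done

# Verify: each line should print the remote hostname without asking for a password.
for h in kafka1 kafka2 kafka3 \
         peer0.org1.example.com peer1.org1.example.com \
         peer0.org2.example.com peer1.org2.example.com; do
  ssh "root@$h" hostname
done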

The third step is to prepare the files

First, download the configuration files I wrote, which are shared here:

Link: https://pan.baidu.com/s/1cHDNoiJmztUyZczSlvWH7g
Extraction code: 17to

The download contains the configuration files for all eight servers, from kafka0 through org2peer1.
Upload each server's files to the same directory on that server. If the kafkapeer directory does not exist, create it yourself.

cd /opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/

Download the files to your local computer first, then upload them to the servers. I will explain the cluster construction process in the next article. A sketch of pushing the files out from kafka0 over SSH follows.
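Since password-free login is already configured, one way to distribute the files is a small scp loop run on kafka0. Treat this as a sketch only: it assumes the download is organized into one folder per server, named after the hostnames, so adjust the source paths to however the files are actually laid out on your machine.

# Run on kafka0; creates the target directory on each server and copies that
# server's files into it. The per-host folder names are an assumption.
DEST=/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer
for h in kafka1 kafka2 kafka3 \
         peer0.org1.example.com peer1.org1.example.com \
         peer0.org2.example.com peer1.org2.example.com; do
  ssh "root@$h" "mkdir -p $DEST"
  scp -r "./$h/"* "root@$h:$DEST/"
done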


That is about it for the preparation. If you have not been following my tutorial, some of the IP addresses in the yaml files will need to be replaced with your own actual addresses, and you may need to set the GOPATH environment variables yourself. Also, although I am using a layout of 4 Kafka + 3 ZooKeeper + 3 orderer nodes plus two organizations with four peers, you could in fact use a single organization with only one peer. That requires further changes to configtx.yaml and crypto-config.yaml (both are written for two organizations, so if you only have one, delete the other), and the peer's yaml configuration also needs the now-redundant peer node addresses removed; otherwise the cluster may fail to come up (in short, it is troublesome).
For that reason, I recommend sticking to the standard 8 servers; virtual machines cost nothing anyway. If you are limited by real hardware you can cut a few peer nodes, but do not go below the four core servers that form the Kafka cluster. (Strictly speaking three would suffice, since a Kafka cluster can run on three brokers; the fourth is there for stability and to guard against a broker going down, so four is still the recommendation.) A final grep check over the yaml files is sketched below.
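This last check is my own suggestion, not part of the shared files: it lists every IP address and example.com hostname referenced in the yaml files under the kafkapeer directory, so you can compare them against your /etc/hosts entries and spot any address you forgot to change.

cd /opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/
# Print the unique IPs and *.example.com names mentioned anywhere in the yaml files.
grep -rEoh '([0-9]{1,3}\.){3}[0-9]{1,3}|[A-Za-z0-9.-]+\.example\.com' --include='*.yaml' . | sort -u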

Original article: blog.csdn.net/weixin_44573310/article/details/123504792