A truly distributed zookeeper environment (a real cluster, not a pseudo-cluster), plus a solr cluster environment and the addition of IK word segmentation
Environment and download package preparation
Download package
environment
zookeeper
An odd number of nodes, at least 3 servers, is best. I used two servers here, which is actually problematic (discussed below), plus 4 tomcat instances as the solr nodes. First build the zookeeper cluster, then deploy solr-cloud.
Zookeeper cluster construction
Features of Zookeeper:
1. Cluster management: fault tolerance, load balancing.
2. Centralized management of configuration files.
3. A single entry point for the cluster.
All files used in the build below are placed under the solr-cloud directory.
Deploy a zookeeper node on each of the two servers.
Unzip the whole Zookeeper 3.4.10 package, copy it into the solr-cloud directory on server 06 and name it zookeeper01, and into the solr-cloud directory on server 07 as zookeeper02.
Create a data/myid file storing the node id.
On server 06:
On server 07:
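The myid step can be sketched as follows. This demo uses a temporary directory; in the real deployment the files live under /opt/app/ctpsp/solr-cloud/zookeeper01/data on 06 and under zookeeper02/data on 07, and the ids 1 and 2 are assumptions that must match the server.N lines in zoo.cfg:

```shell
# Demo of the myid layout in a temp dir; real paths are
# /opt/app/ctpsp/solr-cloud/zookeeper01/data (06) and zookeeper02/data (07).
# The ids 1 and 2 are assumptions and must match server.1/server.2 in zoo.cfg.
base=$(mktemp -d)
mkdir -p "$base/zookeeper01/data" "$base/zookeeper02/data"
echo 1 > "$base/zookeeper01/data/myid"   # on server 06
echo 2 > "$base/zookeeper02/data/myid"   # on server 07
cat "$base/zookeeper01/data/myid" "$base/zookeeper02/data/myid"
```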
Modify the configuration file: rename solr-cloud/zookeeper01/conf/zoo_sample.cfg to solr-cloud/zookeeper01/conf/zoo.cfg and edit it as follows:
zk01 node:
zk02 node:
2181 is the port zk clients use to connect to the cluster. The custom ports such as 2881 and 3881 are used for direct communication within the zk cluster (quorum traffic and leader election); you can choose them yourself, and they may differ per zk node.
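Putting the pieces together, a zoo.cfg for this two-node setup would look roughly like this. This is a sketch: the tickTime/initLimit/syncLimit values are the zoo_sample.cfg defaults, the server ids must match each node's myid, and the IPs are the two servers used later in this article:

```ini
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/app/ctpsp/solr-cloud/zookeeper01/data
# port zk clients connect to
clientPort=2181
# server.<myid>=<host>:<quorum-port>:<election-port>
server.1=10.237.67.6:2881:3881
server.2=10.237.67.7:2881:3881
```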
Start the zookeeper nodes
Executing the cluster commands by hand every time is tedious, so configure aliases (see the earlier article on alias and the use of parameters). For example, starting zookeeper manually requires running /opt/app/ctpsp/solr-cloud/zookeeper01/bin/zkServer.sh start; the zookeeperStartAll command below, like the later tomcat restart and status-query commands, is such an alias.
startZookeeperExec() {
    # on server 07 the path is solr-cloud/zookeeper02/bin/zkServer.sh
    /opt/app/ctpsp/solr-cloud/zookeeper01/bin/zkServer.sh start
}
alias zookeeperStartAll=startZookeeperExec
statusZookeeperExec() {
    /opt/app/ctpsp/solr-cloud/zookeeper01/bin/zkServer.sh status
}
alias zookeeperStatusAll=statusZookeeperExec
Run zookeeperStartAll on the 06 and 07 zk nodes to start zk, then run zookeeperStatusAll to check the zk status; the output is as follows:
On 06:
On 07:
You can see that the 07 zk node is the leader and the 06 zk node is a follower.
Distributed Cluster Split Brain Problem
This is a short digression; you can skip straight to the solr-cloud deployment without affecting the configuration.
Here we stop the leader zk node on 07 and see whether 06 becomes the leader node. What happens to 06's status?
As you can see, 06 simply stops serving. This is zookeeper's majority (quorum) rule, which exists to prevent the distributed split-brain problem: with only two nodes, losing one leaves no majority, so no leader can remain. With 3 or more zk nodes, another leader would be elected. Two servers therefore cannot provide a working zookeeper cluster, but the configuration steps are identical, so this article continues with the two-server deployment.
Solr-cloud deployment
Create the tomcat instances on 06 and 07 respectively: tomcat01/tomcat02 on 06 and tomcat03/tomcat04 on 07.
Modify the tomcat running ports
Configuration file location:
/opt/app/ctpsp/solr-cloud/tomcat01/conf/server.xml
Port 1 changed to 8105, 8205, 8305, 8405
Port 2 changed to 8180, 8280, 8380, 8480
Port 3 changed to 8189, 8289, 8389, 8489
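A sketch of the edit for one instance, demonstrated on a minimal server.xml copy. The assumption here is that the three ports being changed are Tomcat's defaults 8005, 8080, and 8009; the real file is /opt/app/ctpsp/solr-cloud/tomcat01/conf/server.xml:

```shell
# Demo the tomcat01 port edits (8105/8180/8189) on a minimal server.xml copy.
# Assumption: the three original ports are Tomcat's defaults 8005/8080/8009.
work=$(mktemp -d)
cat > "$work/server.xml" <<'EOF'
<Server port="8005" shutdown="SHUTDOWN">
  <Connector port="8080" protocol="HTTP/1.1" redirectPort="8443"/>
  <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
</Server>
EOF
# tomcat02-04 follow the same pattern: 82xx, 83xx, 84xx
sed -i 's/port="8005"/port="8105"/; s/port="8080"/port="8180"/; s/port="8009"/port="8189"/' "$work/server.xml"
grep -o 'port="81[0-9]*"' "$work/server.xml"
```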
Add the solr project to each tomcat
Following the earlier article on deploying the solr project on Linux, copy the solr project into each tomcat under solr-cloud.
Add the solrhome
Copy the solrhome into the solr-cloud directory.
The current project structure is like this:
On 06:
On 07:
Take a look; the long march is nearing its end.
Modify the solrhome configuration
In each of the 4 solrhomes across the two servers, configure solr.xml with that server's IP and the corresponding instance's port.
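For reference, the relevant block in each solrhome's solr.xml looks roughly like this. This is a sketch based on the stock solr.xml; the values shown assume solrhome01 on server 06 paired with tomcat01's HTTP port 8180:

```xml
<solr>
  <solrcloud>
    <!-- this node's address, as registered in zookeeper -->
    <str name="host">10.237.67.6</str>
    <!-- the HTTP port of the tomcat instance hosting this solrhome -->
    <int name="hostPort">8180</int>
    <str name="hostContext">${hostContext:solr}</str>
  </solrcloud>
</solr>
```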
Modify the solr configuration to associate it with solrhome
Again, configure this in 4 places across the two servers; the picture makes it clear.
Have Zookeeper centrally manage the solrhome configuration files.
# enter the solr source package directory
cd /usr/local/src/solr-6.6.0/server/scripts/cloud-scripts
# upload the config to zookeeper
./zkcli.sh -zkhost 10.237.67.6:2181,10.237.67.7:2181 -cmd upconfig -confdir /opt/app/ctpsp/solr-cloud/solrhome01/configsets/sample_techproducts_configs/conf -confname myconf
zkhost is the list of cluster addresses; confdir is the conf directory under solrhome; confname is the name under which the conf is stored in zk. This command only needs to be run against any one zk node to centrally manage the configuration for all 4 solrhomes.
To check whether the conf upload to zk succeeded, enter the zookeeper bin directory, run the zkCli script to enter the zookeeper shell, and type quit to exit:
[prouser@b7515169318-1 conf]$ cd ../../zookeeper01/bin/
[prouser@b7515169318-1 bin]$ ls
README.txt zkCleanup.sh zkCli.cmd zkCli.sh zkEnv.cmd zkEnv.sh zkServer.cmd zkServer.sh zookeeper.out
[prouser@b7515169318-1 bin]$ ./zkCli.sh
After entering, check whether the myconf node exists. The myconfIK next to it is the conf I uploaded later for IK word segmentation. If you modify the conf, simply repeat the upconfig step with the same confname and it will be updated.
Associate solr with zookeeper
Modify the configuration file
The main change is the zookeeper cluster IP configuration, which can be located by searching for umask. The modification is the same for all 4 tomcats.
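The screenshots are not reproduced here, but for reference, a common alternative way to hand the zk ensemble to a Tomcat-hosted Solr is the -DzkHost JVM option. This is a sketch, not necessarily the file the screenshots show; the temp-dir default stands in for /opt/app/ctpsp/solr-cloud/tomcat01 in this demo:

```shell
# Alternative sketch: pass the zk ensemble via -DzkHost in Tomcat's setenv.sh.
# CATALINA_BASE defaults to a temp dir for this demo; in the deployment it
# would be /opt/app/ctpsp/solr-cloud/tomcat01 (likewise for tomcat02-04).
CATALINA_BASE=${CATALINA_BASE:-$(mktemp -d)}
mkdir -p "$CATALINA_BASE/bin"
cat > "$CATALINA_BASE/bin/setenv.sh" <<'EOF'
export JAVA_OPTS="$JAVA_OPTS -DzkHost=10.237.67.6:2181,10.237.67.7:2181"
EOF
cat "$CATALINA_BASE/bin/setenv.sh"
```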
Start tomcat
Start tomcat01-04 on the two servers respectively; as before, define aliases for convenience:
startAllTomcatexec() {
    # on server 07 the paths are tomcat03 and tomcat04
    bash /opt/app/ctpsp/solr-cloud/tomcat01/bin/startup.sh
    bash /opt/app/ctpsp/solr-cloud/tomcat02/bin/startup.sh
}
alias tomcatstartall=startAllTomcatexec
shutAllTomcatexec() {
    bash /opt/app/ctpsp/solr-cloud/tomcat01/bin/shutdown.sh
    bash /opt/app/ctpsp/solr-cloud/tomcat02/bin/shutdown.sh
}
alias tomcatshutall=shutAllTomcatexec
Access the cluster
As you can see, compared with the stand-alone version the cluster console shows the extra items marked with red boxes.
Create a collection
Try it out:
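Collection creation goes through Solr's Collections API. A sketch follows; the collection name mycollection and the shard/replica counts are assumptions, while myconf is the configset uploaded earlier and the host/port is tomcat01 on server 06:

```shell
# Build the Collections API CREATE request; run the curl line against the
# live cluster. Collection name and shard/replica counts are assumptions.
base="http://10.237.67.6:8180/solr/admin/collections"
url="$base?action=CREATE&name=mycollection&numShards=2&replicationFactor=2&collection.configName=myconf"
echo "$url"
# curl "$url"
```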
View the cluster cloud map
At this point the zookeeper + solrcloud deployment is complete. As this article is already long, deploying IK word segmentation on the cluster is covered in the next article.