MongoDB sharding environment construction and verification (Redis final project)

Overall experimental steps: Environment preparation -> Deploy MongoDB -> Deploy Config Server -> Deploy Shard -> Deploy mongos -> Start the sharding function -> Basic operations and functional verification of sharding

Table of contents

Experimental operation code:

Configuration file, installation package and installation process PDF: Resource download location

Environmental preparation

Deploy MongoDB

Deploy Config Server 

Deploy Shard

Deploy mongos

Start sharding function

Basic operations and functional verification of sharding


Experimental operation code:

#1 On bigdata111, bigdata112 and bigdata113: prepare the VM environment, add the mappings below to /etc/hosts, and set the hostname in /etc/hostname
192.168.1.111	bigdata111
192.168.1.112	bigdata112
192.168.1.113	bigdata113
You can either create three new virtual machines and enter this information during the initial setup,
or, if the three hosts already exist, modify them with the following commands:
su root
hostname                          # show the current hostname
vim /etc/hostname                 # change the hostname
ip addr   (or ifconfig)           # show the host IP
cd /etc/sysconfig/network-scripts
vi ifcfg-ens33                    # edit the network configuration file
{
Change BOOTPROTO=dhcp to BOOTPROTO=static
Change ONBOOT=no to ONBOOT=yes

First set IPADDR=192.168.x.x according to your own machine; the last octet can be any value from 1-255 except 2 (which is usually taken by the NAT gateway)
NETMASK=255.255.255.0 (must match the virtual machine's subnet mask)
GATEWAY=192.168.x.x (must match the virtual machine's gateway; in VMware, open Edit -> Virtual Network Editor, select the NAT mode network and click "NAT Settings" to see the gateway)
DNS1=8.8.8.8 (a public DNS server)
# The four lines below are what I added to the network configuration file on bigdata112 (everyone's gateway is different; the rest should match mine):
IPADDR=192.168.1.112
NETMASK=255.255.255.0
GATEWAY=192.168.80.2
DNS1=8.8.8.8
}
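After editing these files, the changes typically take effect after restarting the network service or rebooting (assuming CentOS 7 / RHEL 7, which matches the rhel70 MongoDB build used below); they can be verified with the same commands used above:
systemctl restart network
hostname     # confirm the new hostname (re-login or reboot if it has not changed yet)
ip addr      # confirm the new IP address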

#2 On bigdata111: place the MongoDB tarball under /home/bigdata and extract it
su root
mkdir -p /home/bigdata 
cd /home/bigdata/
tar -zxvf mongodb-linux-x86_64-rhel70-4.4.13.tgz
mv mongodb-linux-x86_64-rhel70-4.4.13 mongodb


#3 On bigdata111: prepare the directories and log files
cd /home/bigdata/mongodb
mkdir -p /home/bigdata/mongodb/shardcluster/configServer/configFile           
mkdir -p /home/bigdata/mongodb/shardcluster/configServer/data
mkdir -p /home/bigdata/mongodb/shardcluster/configServer/logs

mkdir -p /home/bigdata/mongodb/shardcluster/shard/configFile
mkdir -p /home/bigdata/mongodb/shardcluster/shard/shard1_data
mkdir -p /home/bigdata/mongodb/shardcluster/shard/shard2_data
mkdir -p /home/bigdata/mongodb/shardcluster/shard/shard3_data
mkdir -p /home/bigdata/mongodb/shardcluster/shard/logs

mkdir -p /home/bigdata/mongodb/shardcluster/mongos/configFile
mkdir -p /home/bigdata/mongodb/shardcluster/mongos/logs

touch /home/bigdata/mongodb/shardcluster/configServer/logs/config_server.log
touch /home/bigdata/mongodb/shardcluster/shard/logs/shard1.log
touch /home/bigdata/mongodb/shardcluster/shard/logs/shard2.log
touch /home/bigdata/mongodb/shardcluster/shard/logs/shard3.log
touch /home/bigdata/mongodb/shardcluster/mongos/logs/mongos.log

#4 On bigdata111: place the config server configuration file
put mongodb_config.conf under /home/bigdata/mongodb/shardcluster/configServer/configFile
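The conf files themselves come from the resource download; as a reference, here is a minimal sketch of what mongodb_config.conf could look like, assuming the directories, port 27022 and replica set name "configs" used elsewhere in this walkthrough (your downloaded file may differ):

systemLog:
  destination: file
  path: /home/bigdata/mongodb/shardcluster/configServer/logs/config_server.log
  logAppend: true
storage:
  dbPath: /home/bigdata/mongodb/shardcluster/configServer/data
net:
  bindIp: 0.0.0.0
  port: 27022
replication:
  replSetName: configs
sharding:
  clusterRole: configsvr          # marks this mongod as a config server
processManagement:
  fork: true                      # run in the background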


#5 Copy the mongodb directory and its contents from bigdata111 to bigdata112 and bigdata113
First, on bigdata112 and bigdata113: mkdir -p /home/bigdata
scp -r /home/bigdata/mongodb/  [email protected]:/home/bigdata/
scp -r /home/bigdata/mongodb/  [email protected]:/home/bigdata/

scp -r /home/bigdata/mongodb/shardcluster/configServer/configFile/mongodb_config.conf  [email protected]:/home/bigdata/mongodb/shardcluster/configServer/configFile/


#6 On bigdata111, bigdata112 and bigdata113: add the following line to /etc/profile
export PATH=/home/bigdata/mongodb/bin:$PATH
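To confirm the new PATH takes effect, you can reload the profile and check the MongoDB version (standard commands; the version should match the extracted 4.4.13 build):
source /etc/profile
mongod --version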


#7 Start the config server replica set
##bigdata111 bigdata112 bigdata113
systemctl stop firewalld.service
source /etc/profile
mongod -f /home/bigdata/mongodb/shardcluster/configServer/configFile/mongodb_config.conf

#8 Configure the config server replica set
##bigdata111
mongo --host bigdata111 --port 27022

config_conf={
  _id: "configs",
  members: [
    {_id: 0,host: "192.168.1.111:27022"},
    {_id: 1,host: "192.168.1.112:27022"},
    {_id: 2,host: "192.168.1.113:27022"}
  ]
}

rs.initiate(config_conf)
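Optionally, you can confirm the election result before moving on; this is a standard mongo shell call and the exact output will vary:

rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })   # expect one PRIMARY and two SECONDARY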

#9 Place the shard configuration files
##bigdata111 112 113
put mongodb_shard1.conf, mongodb_shard2.conf and mongodb_shard3.conf under /home/bigdata/mongodb/shardcluster/shard/configFile
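As with the config server, these files come from the resource download; here is a minimal sketch of mongodb_shard1.conf as it might look on bigdata111, assuming the directories, ports and replica set names used in this walkthrough. mongodb_shard2.conf and mongodb_shard3.conf follow the same pattern with shard2_data/shard3_data, shard2.log/shard3.log and their own replica set names, and the port in each file differs per host to match the replica set members configured below:

systemLog:
  destination: file
  path: /home/bigdata/mongodb/shardcluster/shard/logs/shard1.log
  logAppend: true
storage:
  dbPath: /home/bigdata/mongodb/shardcluster/shard/shard1_data
net:
  bindIp: 0.0.0.0
  port: 27018        # shard1 listens on 27018 on bigdata111, 27019 on bigdata112, 27020 on bigdata113
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr           # marks this mongod as a shard server
processManagement:
  fork: true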

#10 Start the three shard replica sets
##bigdata111 112 113
mongod -f /home/bigdata/mongodb/shardcluster/shard/configFile/mongodb_shard1.conf
mongod -f /home/bigdata/mongodb/shardcluster/shard/configFile/mongodb_shard2.conf
mongod -f /home/bigdata/mongodb/shardcluster/shard/configFile/mongodb_shard3.conf

#11 Configure the three shard replica sets
##bigdata111
mongo --host bigdata111 --port 27018


shard_conf={
  _id: "shard1",
  members: [
    {_id: 0,host: "192.168.1.111:27018"},
    {_id: 1,host: "192.168.1.112:27019"},
    {_id: 2,host: "192.168.1.113:27020",arbiterOnly: true}
  ]
}

rs.initiate(shard_conf)

##bigdata112
mongo --host bigdata112 --port 27018

shard_conf={
  _id: "shard2",
  members: [
    {_id: 1,host: "192.168.1.111:27020",arbiterOnly: true},
    {_id: 0,host: "192.168.1.112:27018"},
    {_id: 2,host: "192.168.1.113:27019"}
  ]
}

rs.initiate(shard_conf)

##bigdata113
mongo --host bigdata113 --port 27018

shard_conf={
  _id: "shard3",
  members: [
    {_id: 2,host: "192.168.1.111:27019"},
    {_id: 1,host: "192.168.1.112:27020",arbiterOnly: true},
    {_id: 0,host: "192.168.1.113:27018"}
  ]
}

rs.initiate(shard_conf)

#12 Place the mongos configuration file
put mongodb_mongos.conf under /home/bigdata/mongodb/shardcluster/mongos/configFile (bigdata111 and bigdata112 only)
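A minimal sketch of mongodb_mongos.conf, assuming port 27021 and the config server replica set defined above (the actual file from the download may differ); note that mongos has no storage section:

systemLog:
  destination: file
  path: /home/bigdata/mongodb/shardcluster/mongos/logs/mongos.log
  logAppend: true
net:
  bindIp: 0.0.0.0
  port: 27021
sharding:
  configDB: configs/192.168.1.111:27022,192.168.1.112:27022,192.168.1.113:27022
processManagement:
  fork: true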


#13 Start the mongos service
##bigdata111 112
mongos -f /home/bigdata/mongodb/shardcluster/mongos/configFile/mongodb_mongos.conf

#14 Enable the sharding function
##bigdata111
mongo --host bigdata111 --port 27021

use gateway

sh.addShard("shard1/192.168.1.111:27018,192.168.1.112:27019,192.168.1.113:27020")

sh.addShard("shard2/192.168.1.111:27020,192.168.1.112:27018,192.168.1.113:27019")

sh.addShard("shard3/192.168.1.111:27019,192.168.1.112:27020,192.168.1.113:27018")

#15 Verify the sharding function
use config
db.settings.save({"_id":"chunksize","value":1})  # set the chunk size to 1 MB
use school
for(i=1;i<=5;i++){db.user.insert({"id":i,"name":"jack"+i})}  # insert 5 documents
use gateway
sh.enableSharding("school")  # enable sharding for the school database
use school
db.user.createIndex({"id":1})  # create an index on the id field (the shard key must be indexed)
use gateway
sh.shardCollection("school.user",{"id":1})  # shard the user collection on id
sh.status()   # view the sharding status
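To see how the documents are actually spread across the shards, the standard getShardDistribution() helper can also be run from the same mongos session (the exact numbers depend on your data volume):

use school
db.user.getShardDistribution()   # per-shard document and data size breakdown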

Configuration file, installation package and installation process PDF:  Resource download location

Resource description: there are 14 conf files in total: each of the three hosts has three shard configuration files plus one config server configuration file, and only the first two hosts have a mongos routing configuration file. You can transfer them to the virtual machines with Xftp, or copy their contents into files with vim in a Linux terminal. To simplify the work, you can prepare the files on one machine and then transfer them to the others, but some small modifications are needed afterwards, since the configuration content differs from host to host.
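For reference, the port layout implied by the replica-set configurations in this walkthrough is:

27022  config server (replica set "configs") - all three hosts
27018  shard1 on bigdata111, shard2 on bigdata112, shard3 on bigdata113
27019  shard3 on bigdata111, shard1 on bigdata112, shard2 on bigdata113
27020  shard2 arbiter on bigdata111, shard3 arbiter on bigdata112, shard1 arbiter on bigdata113
27021  mongos - bigdata111 and bigdata112 only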

  • Environmental preparation

1. Use the command vim /etc/hostname to modify the hostname, and the command vi /etc/sysconfig/network-scripts/ifcfg-ens33 to modify the host's IP address. The IP addresses and hostnames are as follows:

bigdata111 192.168.1.111, bigdata112 192.168.1.112, bigdata113 192.168.1.113

 From the picture above, you can see that the host name and IP address are consistent with the design.

2. Next, create the data directories, configuration file directories and log files for the MongoDB shard cluster servers on bigdata111.

 The blue directories shown at the end indicate that the relevant directories and data files have been prepared successfully.

  • Deploy MongoDB

  1. The environment files required for MongoDB have just been prepared on bigdata111. Since MongoDB must be installed on all three hosts, use the scp command to transfer the files and directories from bigdata111 to bigdata112 and bigdata113. Because I had already configured two of the hosts, only the transfer from 111 to 113 is performed here.

  • Deploy Config Server 

1. Create and write the mongodb_config.conf file on bigdata111 and transfer it to bigdata112 and bigdata113.

2. Start mongod with this configuration file on bigdata111, bigdata112 and bigdata113.

After startup completes, "successfully" appears in the output on all three hosts, indicating that the config servers started successfully.

3. Finally, return to bigdata111, connect with mongo --host bigdata111 --port 27022, and then configure and initialize the config server replica set.

The ok field appears in the initialization result, and the prompt changes from configs:SECONDARY to configs:PRIMARY, indicating that the configuration succeeded.

  • Deploy Shard

1. Create and write the three shard configuration files on each host and start them with the mongod command; "successfully" appears three times on each host.

2. Perform the shard replica-set configuration on each host. Three initialization results containing the ok field can be seen: on bigdata111 the prompt changes from shard1:SECONDARY to shard1:PRIMARY, on bigdata112 shard2 changes from SECONDARY to PRIMARY, and on bigdata113 shard3 changes from OTHER to PRIMARY, indicating that the shard replica sets are configured successfully.

  • Deploy mongos

1. Create and write the mongodb_mongos.conf file on bigdata111 and bigdata112 and start it with the mongos command; "successfully" appears twice, indicating that both mongos instances started successfully.

  • Start sharding function

1. Connect to the cluster on bigdata111 with mongo --host bigdata111 --port 27021, switch to the gateway database, and add the three shards to the sharded cluster. Three ok fields appear, indicating that all three shards were added successfully.

  • Basic operations and functional verification of sharding

1. Still on bigdata111, first switch to the config database and set the chunk size to 1 MB, then switch to the school database and insert 5 documents into the user collection, and finally switch to the gateway database and call sh.enableSharding on the school database. The ok field appears, indicating that sharding was successfully enabled for the database.

2. Switch to the school database to create an index on "id", then switch to the gateway database and shard the user collection with id as the shard key.

3. From the gateway database, check the shard information for the user collection of the school database. From the returned results, the chunk distribution across the shards is "shard1 767, shard2 128, shard3 129", showing that each shard holds a corresponding number of chunks. In the school.user section, the shard key is shown as shard key: { "id" : 1 }, and the shards information matches the shards added earlier. In summary, the sharding verification succeeded.

Origin: blog.csdn.net/weixin_56115549/article/details/125460773