Editor's note
In a production environment, unless the business is truly unimportant and can tolerate some downtime, we generally do not run just a single node; and even a single node must have a complete backup strategy.
RS replica set
Here we build a one-primary, two-secondary MongoDB replica set. MongoDB uses a Raft-like heartbeat and voting mechanism: if the primary goes down, the remaining members of the replica set elect a new primary, and client applications are expected to reconnect to it. How exactly the application should connect is not something we cover in the code here; drivers provide a dedicated connection method for this topology.
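As an illustration of that driver-side connection method (the hosts and replica set name below match the cluster built later in this post), a typical replica-set connection string looks like:

```text
mongodb://192.168.200.101:27000,192.168.200.102:27000,192.168.200.103:27000/?replicaSet=my_mongo_repl
```

With replicaSet set, the driver discovers the topology from any reachable member and automatically routes writes to whichever member is currently the primary, so a failover does not require changing the application configuration.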
1. Prepare three virtual machines for the cluster; the initial configuration is identical on all of them:
# Disable transparent huge pages
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
# Verify the settings
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
# Initialize the directories
mkdir -p /data/{backup,data,logs,packages,services}
mkdir -p /data/packages/mongodb
# Upload the installation package to /data/packages/mongodb
# Unpack
cd /data/packages/mongodb
tar -zxf mongodb-linux-x86_64-rhel70-4.2.1.tgz
mv mongodb-linux-x86_64-rhel70-4.2.1 /data/services/mongodb
# Set up the base directory
cd /data/services/mongodb
rm -f LICENSE-Community.txt MPL-2 README THIRD-PARTY-NOTICES THIRD-PARTY-NOTICES.gotools
mkdir data logs conf
2. Add the configuration file on all nodes:
cat > /data/services/mongodb/conf/mongo.conf << EOF
# System log
systemLog:
  destination: file
  path: "/data/services/mongodb/logs/mongodb.log"
  logAppend: true
# Data storage
storage:
  journal:
    enabled: true
  dbPath: "/data/services/mongodb/data"
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
# Process management
processManagement:
  fork: true
  pidFilePath: "/data/services/mongodb/logs/mongodb.pid"
# Network
net:
  bindIp: 192.168.200.101,127.0.0.1
  port: 27000
# Replication
replication:
  oplogSizeMB: 2048
  replSetName: my_mongo_repl
EOF
Note: bindIp must be adjusted to each node's own address, and replSetName must be identical on all three nodes!
3. Configure systemd:
cat > /etc/systemd/system/mongod-27000.service <<'EOF'
[Unit]
Description=mongodb-27000
After=network.target remote-fs.target nss-lookup.target

[Service]
User=root
Type=forking
ExecStart=/data/services/mongodb/bin/mongod --config /data/services/mongodb/conf/mongo.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/data/services/mongodb/bin/mongod --config /data/services/mongodb/conf/mongo.conf --shutdown
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF
Start all nodes:
systemctl daemon-reload
systemctl start mongod-27000.service
4. Log in to any node and configure the replica set:
/data/services/mongodb/bin/mongo --port 27000 admin
Configuration:
# Define the replica set configuration
config = {_id: "my_mongo_repl", members: [
  {_id: 0, host: '192.168.200.101:27000'},
  {_id: 1, host: '192.168.200.102:27000'},
  {_id: 2, host: '192.168.200.103:27000'}
]}
# Initialize the cluster
rs.initiate(config)
After a successful initialization, the shell prompt changes (for example to my_mongo_repl:PRIMARY> once the election completes)!
5. View replica Status:
rs.status()
The main results are as follows:
The highlighted parts are what we need to understand. One is the node's identity, primary or secondary (stateStr); the other is the current replication position. In MySQL we use a binlog position or a GTID to represent how far replication has progressed; in MongoDB an optime, a timestamp plus an operation counter within that second, indicates where the current node has applied to.
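To make the identity-plus-optime idea concrete, here is a small sketch that pulls those two fields out of an rs.status()-style document. In the mongo shell you would call summarize(rs.status()); the sample document below is purely illustrative and only mimics the shape of the real output.

```javascript
// Summarize each member of an rs.status()-style document:
// its name, its role (stateStr), and its replication position (optime).
function summarize(status) {
  return status.members.map(function (m) {
    // optime.ts holds a timestamp in seconds (t) plus an operation
    // counter within that second (i) -- MongoDB's analogue of a
    // MySQL binlog position / GTID.
    return m.name + " " + m.stateStr +
      " optime=" + m.optime.ts.t + ":" + m.optime.ts.i;
  });
}

// Illustrative sample shaped like real rs.status() output
var sample = {
  members: [
    { name: "192.168.200.101:27000", stateStr: "PRIMARY",
      optime: { ts: { t: 1573000000, i: 5 } } },
    { name: "192.168.200.102:27000", stateStr: "SECONDARY",
      optime: { ts: { t: 1573000000, i: 5 } } }
  ]
};
```

When the secondary's optime equals the primary's, replication is fully caught up, just as matching GTID sets would indicate in MySQL.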
In addition you can also use:
# Check whether the current node is the primary
rs.isMaster()
# View the configuration
rs.conf()
This is the most basic RS replica set architecture; there are other designs as well!
Special nodes
In MongoDB there are three special types of secondary node:
arbiter: participates only in voting, which means we can run just one primary, one data-bearing secondary, and one arbiter. Since this node does not synchronize any data, it can run on a low-spec machine.
hidden: a hidden node; it cannot be elected primary and is invisible to client applications, so it serves no external traffic.
delayed: a delayed node; because its data intentionally lags behind, it cannot be elected primary and does not serve clients. It is generally combined with hidden.
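The reason an arbiter is useful despite holding no data is vote counting: a primary can only be elected, or stay primary, while a strict majority of voting members is reachable. A quick sketch of that arithmetic (plain JavaScript for illustration, not a MongoDB API):

```javascript
// Votes needed for a strict majority of n voting members
function majorityNeeded(n) {
  return Math.floor(n / 2) + 1;
}

// Can a primary be elected with `up` of `total` voting members reachable?
function canElect(up, total) {
  return up >= majorityNeeded(total);
}

// Two data nodes alone: if one dies, 1 of 2 votes is not a majority,
// so the survivor cannot become (or remain) primary.
// With an arbiter as a third voter, the surviving data node plus the
// arbiter still hold 2 of 3 votes, so a new primary can be elected.
```

This is why a one-primary, one-secondary, one-arbiter layout survives the loss of either data node, while a bare two-node pair does not.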
Configure an arbiter node:
There are two cases. The first is at cluster build time: define the node as an arbiter when initializing; the config only needs one extra field:
config = {_id: "my_mongo_repl", members: [
  {_id: 0, host: '192.168.200.101:27000'},
  {_id: 1, host: '192.168.200.102:27000'},
  {_id: 2, host: '192.168.200.103:27000', "arbiterOnly": true}
]}
Then initialize as before!
The second case is an existing cluster running as an ordinary primary-secondary setup, where one of the nodes needs to be converted into an arbiter:
1. First remove the node, then add it back as an arbiter:
# Remove the secondary node
rs.remove("192.168.200.103:27000")
# Add it back as an arbiter
rs.addArb("192.168.200.103:27000")
If you only want to add an ordinary secondary, rs.add() is enough!
2. Check the cluster status:
rs.status()
The results are shown:
Remember to check whether the node is actually started; if it is not, connecting to 192.168.200.103:27000 will fail with an error like: caused by :: Connection refused
Configure a hidden, delayed node:
From our one-primary, two-secondary cluster, choose one secondary to act as the hidden, delayed node:
# Copy the current configuration
cfg = rs.conf()
# Modify the member at array index 2 (note: the array index, not the member _id)
# A priority of 0 means the node can never be elected primary
cfg.members[2].priority = 0
# Hide the node
cfg.members[2].hidden = true
# Replication delay in seconds
cfg.members[2].slaveDelay = 120
# Reload the new configuration
rs.reconfig(cfg)
# View the new configuration
rs.conf()
The results are as follows:
Of course, to cancel this you only need to change the settings back to their defaults!
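For example, mirroring the settings changed above (mongo shell, run against the primary; the default values below are standard, but verify with rs.conf() on your own cluster):

```javascript
// Restore member 2 (array index) to an ordinary secondary
cfg = rs.conf()
cfg.members[2].priority = 1      // default priority, eligible for election again
cfg.members[2].hidden = false    // visible to clients again
cfg.members[2].slaveDelay = 0    // no replication delay
rs.reconfig(cfg)
```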