MongoDB cluster construction and use

Author: doremi

There are three main ways to build a MongoDB cluster: master-slave mode, Replica Set mode, and sharding mode. Each has its own advantages and disadvantages and suits different scenarios. Replica Set is the most widely used; master-slave mode is rarely used nowadays. Sharding is the most complete, but its configuration and maintenance are the most complicated. In this article, let's look at how to build a cluster in Replica Set mode.

Mongodb's Replica Set serves two main purposes. One is data redundancy for fault recovery: when a hardware failure or other problem brings a node down, a replica can be used for recovery. The other is read/write separation: read requests can be routed to the replicas to reduce the read pressure on the primary.
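As a sketch of this read/write split, a client can ask for secondary reads through the standard `readPreference` option in the connection string (the hosts and set name here match the cluster built later in this article; since no live cluster is assumed, the actual connect command is shown as a comment):

```shell
# mongodb:// URI listing both data-bearing members. readPreference=secondaryPreferred
# routes reads to a secondary when one is available; writes always go to the primary.
URI="mongodb://192.168.255.141:27017,192.168.255.142:27017/test?replicaSet=testdb&readPreference=secondaryPreferred"
# With the cluster running, you would connect with:  mongo "$URI"
echo "$URI"
```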

A Replica Set is a group of mongod instances that hold the same data. It contains three types of roles:
(1) The primary node (Primary)
receives all write requests and then synchronizes the modifications to all secondary nodes. A Replica Set can have only one Primary. When the Primary fails, the Secondary nodes hold an election (with any Arbiters voting) to choose a new Primary. By default, read requests are also handled by the Primary; to route them to Secondaries, the client must change its connection configuration.

(2) The replica node (Secondary)
maintains the same data set as the primary and stands for election when the primary goes down.

(3) The arbiter (Arbiter)
keeps no data and can never be elected primary; it only votes in elections. Using an Arbiter lowers the hardware requirements for the set: an Arbiter needs almost no resources to run, but in production it should not be deployed on the same machine as a data-bearing node.
Note that for automatic failover, the number of voting nodes in a Replica Set should be odd, so that an election always produces a clear majority.
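The majority rule above can be made concrete with a little arithmetic: a candidate needs a strict majority of the voting members, so an even-sized set tolerates no more failures than the odd-sized set one node smaller (a sketch; `majority` is just an illustrative helper):

```shell
# Minimum votes needed to elect a primary among $1 voting members.
majority() { echo $(( $1 / 2 + 1 )); }

majority 3   # prints 2: a 3-node set survives 1 node loss
majority 4   # prints 3: a 4-node set also survives only 1 node loss
majority 5   # prints 3: a 5-node set survives 2 node losses
```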

(4) Failover
If a Secondary goes down, the set is unaffected. If the Primary goes down, a new primary is elected.

#### Using an Arbiter to build a Replica Set
With an even number of data nodes, add an Arbiter to form the Replica Set.

#### Build the cluster

Hosts used:
192.168.255.141 master node
192.168.255.142 secondary node + arbiter (slave + arbiter)

##### 1. Unzip mongodb under /opt

tar -zxvf mongodb-linux-x86_64-ubuntu1404-3.2.4.tgz

Create the data directories (matching the dbpath values used below):

mkdir -p /opt/data/mongodb/{master,slave,arbiter}

##### 2. Create the configuration files
Master node: vi /etc/mongodb_master.conf

#master.conf
dbpath=/opt/data/mongodb/master
logpath=/opt/mongodb/master.log
pidfilepath=/opt/mongodb/master.pid
#keyFile=/opt/mongodb/mongodb.key
directoryperdb=true
logappend=true
replSet=testdb
bind_ip=192.168.255.141
port=27017
#auth=true
oplogSize=100
fork=true
noprealloc=true
#maxConns=4000

Secondary node: vi /etc/mongodb_slave.conf

#slave.conf
dbpath=/opt/data/mongodb/slave
logpath=/opt/mongodb/slave.log
pidfilepath=/opt/mongodb/slave.pid
#keyFile=/opt/mongodb/mongodb.key
directoryperdb=true
logappend=true
replSet=testdb
bind_ip=192.168.255.142
port=27017
#auth=true
oplogSize=100
fork=true
noprealloc=true
#maxConns=4000

Arbiter: vi /etc/mongodb_arbiter.conf

#arbiter.conf
dbpath=/opt/data/mongodb/arbiter
logpath=/opt/mongodb/arbiter.log
pidfilepath=/opt/mongodb/arbiter.pid
#keyFile=/opt/mongodb/mongodb.key
directoryperdb=true
logappend=true
replSet=testdb
bind_ip=192.168.255.142
port=27019
#auth=true
oplogSize=100
fork=true
noprealloc=true
#maxConns=4000

Remarks:
The keyFile and auth options should only be enabled after the cluster is configured and an authenticated user has been added.
Parameter description:
dbpath: data storage directory
logpath: log file path
pidfilepath: pid file path
keyFile: key file used for authentication between nodes; the contents must be identical on every node, permissions 600; only valid in Replica Set mode
directoryperdb: store each database in its own subdirectory
logappend: append to the log file instead of overwriting it
replSet: name of the Replica Set
bind_ip: IP address mongodb binds to
port: port
auth: whether to enable authentication
oplogSize: oplog size (MB)
fork: run as a daemon (fork a child process)
noprealloc: disable data-file preallocation (preallocation often affects performance)
maxConns: maximum number of connections, default 2000
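When you later enable the commented-out keyFile/auth options, the key file itself can be generated as below (a sketch using the standard `openssl rand` approach; `/tmp/mongodb.key` is a placeholder path, and the identical file must be copied to every node):

```shell
# Generate a shared key for internal authentication between the nodes.
# The file must be readable only by the mongod user (mode 600) or mongod
# will refuse to start with it.
openssl rand -base64 756 > /tmp/mongodb.key   # /tmp path is a placeholder
chmod 600 /tmp/mongodb.key
ls -l /tmp/mongodb.key
```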
##### 3. Start mongodb

/opt/mongodb/bin/mongod -f /etc/mongodb_master.conf
/opt/mongodb/bin/mongod -f /etc/mongodb_slave.conf
/opt/mongodb/bin/mongod -f /etc/mongodb_arbiter.conf

##### 4. Configure the Replica Set on the master node
Connect with the mongo shell and configure the cluster:

cfg = { _id: "testdb", members: [
    { _id: 0, host: "192.168.255.141:27017", priority: 2 },
    { _id: 1, host: "192.168.255.142:27017", priority: 1 },
    { _id: 2, host: "192.168.255.142:27019", arbiterOnly: true }
] };
rs.initiate(cfg)
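For scripted deployments, the same initialization can be run non-interactively with --eval (a sketch; it assumes the mongo shell is on the PATH and the instance on 192.168.255.141 is reachable, so it only works against the live cluster):

```shell
# Pass the same replica-set configuration to rs.initiate() from the command line.
mongo --host 192.168.255.141:27017 --eval '
rs.initiate({
  _id: "testdb",
  members: [
    { _id: 0, host: "192.168.255.141:27017", priority: 2 },
    { _id: 1, host: "192.168.255.142:27017", priority: 1 },
    { _id: 2, host: "192.168.255.142:27019", arbiterOnly: true }
  ]
})'
```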

Note:
The variable name cfg is arbitrary, as long as it does not clash with a mongodb keyword; _id is the name of the Replica Set.
priority sets the election priority: the node with the higher value becomes the primary.
arbiterOnly: true must be set on the arbiter, otherwise the primary/secondary roles will not take effect.
Apply the configuration with rs.initiate(cfg), and check that it took effect with rs.status().
"stateStr": "PRIMARY" marks the primary node, "stateStr": "SECONDARY" a secondary, and "stateStr": "ARBITER" the arbiter.
Node management commands:
Add a secondary: rs.add({host: "192.168.255.141:27019", priority: 1})
Add an arbiter: rs.addArb("192.168.255.142:27019")
Remove a node: rs.remove("192.168.255.141:27019")
#### Using the MongoDB cluster in iServer
After adding the MongoDB storage location in the distributed tiling options, start tiling; you can see that the tiles are stored on each node in mongoDB.
Publish the tile service and browse the map.

Origin: blog.csdn.net/supermapsupport/article/details/78953080