Adding a node to a MongoDB replica set from a consistent snapshot, step by step

Environment

Primary node: 192.168.0.1:27002


Two secondary nodes:

192.168.0.2:27002

192.168.0.3:27002


Goal: add 192.168.0.3 as the third node using a consistent snapshot


Steps:

1) Take a consistent snapshot backup on the primary node

2) Restore the consistent snapshot on the new node, data only; the oplog is not restored into oplog.rs at this point

3) Create the oplog.rs collection and restore the oplog records into it

4) Initialize the replset.election and system.replset collections of the local database using data from the existing nodes

5) Modify the configuration and restart the database (before this step the instance runs without authentication and without the replica-set configuration)

6) On the primary, add the new node to the cluster with rs.add("HOST_NAME:PORT")

7) Check data integrity/consistency and the synchronization status with rs.status()



1. Back up the data on the primary node or on one of the other secondary nodes:

mongodump -uroot -ptest --host 192.168.0.2 --authenticationDatabase=admin --port=27002 --oplog -o /data/mongo/backup
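Before copying the dump anywhere, it is worth confirming that it actually contains the oplog capture — mongodump writes oplog.bson only when --oplog is given. A minimal sketch, assuming the /data/mongo/backup output path used above (the helper name is mine, not from the original):

```shell
# check_backup: verify a mongodump output directory contains the
# oplog.bson file that the --oplog flag should have produced.
check_backup() {
    local dir="$1"
    if [ -f "$dir/oplog.bson" ]; then
        echo "ok: $dir/oplog.bson present"
    else
        echo "error: $dir/oplog.bson missing; rerun mongodump with --oplog" >&2
        return 1
    fi
}
```

For example, run `check_backup /data/mongo/backup` before the scp step below.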


2. Copy the backup files to 192.168.0.3 with scp, to be restored there:

scp -r /data/mongo/backup mongo@192.168.0.3:/data/mongo 
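scp does not verify the copy beyond its own transport; comparing checksum manifests of the source and destination directories catches truncated or missed files. A sketch (the helper name `manifest` is mine, not from the original):

```shell
# manifest: print an md5 checksum for every file under a directory,
# with paths relative to that directory, sorted so two runs can be diffed.
manifest() {
    (cd "$1" && find . -type f -exec md5sum {} + | sort -k 2)
}
```

Run it against /data/mongo/backup on both hosts (the remote side via ssh) and diff the two outputs; any difference means the copy is incomplete.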


3. Start the third node as a standalone (single) instance:

Note: the following replica-set parameters need to be commented out:

# auth = true

#replSet = test27002


#keyFile = /data/mongo/27002/replSet.key


# su - mongo


$ mongod -f /data/mongo/27002/conf/mongodb.conf 
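Commenting those parameters out by hand is easy to get wrong; the edit can be scripted. A sketch with GNU sed (the function name is mine; the parameter names match the config shown above):

```shell
# disable_replset_params: comment out the auth/replSet/keyFile lines in a
# mongod config file so the instance can be started standalone.
disable_replset_params() {
    local conf="$1"
    sed -i -E '/^(auth|replSet|keyFile)[[:space:]]*=/s/^/#/' "$conf"
}
```

For example, `disable_replset_params /data/mongo/27002/conf/mongodb.conf` before starting mongod.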



Restore the consistent snapshot on 192.168.0.3:

$ mongorestore --oplogReplay --port=27002 /data/mongo/backup


Create the oplog.rs collection and initialize its size:

>use local


>db.createCollection("oplog.rs",{"capped":true,"size":100000000})


Restore the oplog.rs data from the consistent backup to 192.168.0.3:

$ mongorestore -d local -c oplog.rs --port=27002 /data/mongo/backup/oplog.bson


Query the replset.election collection on the primary node and store that data on the 192.168.0.3 node.

On the primary DB:

$ mongo 192.168.0.1:27002/admin -uroot -ptest


test27002:PRIMARY> use local

switched to db local

test27002:PRIMARY>  db.replset.election.find()

{ "_id" : ObjectId("5d64912a1978c9b194cf7cc5"), "term" : NumberLong(2), "candidateIndex" : NumberLong(2) }


On the 192.168.0.3 node, save the replset.election data taken from the primary DB:

use local

db.replset.election.save({ "_id" : ObjectId("5d64912a1978c9b194cf7cc5"), "term" : NumberLong(2), "candidateIndex" : NumberLong(2) })

Shut down the third node, then start mongodb in replica-set mode:

> use admin

switched to db admin

> db.shutdownServer()

server should be down...

2019-09-01T18:10:57.337+0800 I NETWORK  [js] trying reconnect to 127.0.0.1:27002 failed

2019-09-01T18:10:57.337+0800 I NETWORK  [js] reconnect 127.0.0.1:27002 failed failed 


Modify the third node's configuration, removing the comments to re-enable these parameters:

auth = true

replSet = test27002

keyFile = /data/mongo/27002/replSet.key
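The reverse edit, re-enabling the same parameters, can also be scripted. A sketch with GNU sed (the function name is mine, not from the original):

```shell
# enable_replset_params: strip the leading '#' from commented
# auth/replSet/keyFile lines so mongod starts with authentication
# and replication enabled again.
enable_replset_params() {
    local conf="$1"
    sed -i -E 's/^#[[:space:]]*((auth|replSet|keyFile)[[:space:]]*=)/\1/' "$conf"
}
```

For example, `enable_replset_params /data/mongo/27002/conf/mongodb.conf` before restarting mongod.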


Start mongodb in replica-set mode:

$ mongod -f  /data/mongo/27002/conf/mongodb.conf 


On the primary node, add the new member:

mongo 192.168.0.1:27002/admin -uroot -ptest


>rs.add("192.168.0.3:27002");


Write data on the primary node:

use test

for (var i=0;i<=500;i++) { db.test.insert({id:i,name:"chenfeng"}) }


Log in to the third node and verify the data:

>use test

>db.test.count()



Origin: blog.51cto.com/14510269/2438330