Deep Dive into MongoDB Replication (Replica) Sets: Best Practices for Highly Available Architectures

1. What is a MongoDB replication (replica) set

  A MongoDB replica set is a cluster of multiple MongoDB instances consisting of one primary node (Primary) and several secondary nodes (Secondary). The primary handles all write requests and replicates the written data to the secondaries; the secondaries stay consistent by synchronizing data from the primary. If the primary fails, one of the secondaries is elected as the new primary so the system remains highly available. The official documentation provides the following schematic diagram of a replica set:
(Figure: MongoDB replica set architecture, from the official documentation)

1.1 Components

  A replica set is a group of mongod instances that maintain the same data set. It consists of several data-bearing nodes and, optionally, one arbiter node. Among the data-bearing nodes, exactly one member is the primary; the remaining members are secondaries.

  • The primary node receives all write operations. A replica set can have only one primary able to acknowledge writes with {w: "majority"} write concern, although in some situations another mongod instance may transiently believe that it is also primary. The primary records all changes to its data set in its operation log (the oplog).
  • Secondary nodes replicate the primary's oplog and apply the operations to their own data sets, so that each secondary's data set mirrors the primary's. If the primary becomes unavailable, an eligible secondary calls an election to elect itself as the new primary.
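
For example, a write that must be acknowledged by a majority of data-bearing members can be issued from mongosh like this (a minimal sketch; the collection and document are hypothetical):

// Acknowledged only after a majority of data-bearing members have applied the write
db.orders.insertOne(
  { sku: "abc-001", qty: 1 },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)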

1.2 Three-node replication mode

1.2.1 PSS

  • Composition : 1 primary node and 2 standby nodes, namely Primary+Secondary+Secondary.
  • Features : In this mode, the primary node receives all write operations, and the standby nodes replicate the operation log of the primary node and apply these operations to their own data sets. If the primary node becomes unavailable, the replica set will select one of the standby nodes as the new primary node and continue normal operations until the old primary node rejoins the replica set. The advantage of this mode is that it is simple and easy to understand, and it provides a certain degree of fault tolerance for small data sets. However, when the data set becomes very large, the synchronization of the standby nodes may become a performance bottleneck, because each standby node must replicate all the write operations of the primary node.
  • Structure diagram : (figure not included)

1.2.2 PSA

  • Composition : 1 primary node, 1 secondary node and 1 arbiter node, i.e. Primary+Secondary+Arbiter.
  • Features : The arbiter (Arbiter) is a special member of the replica set. It stores no copy of the data and serves no business reads or writes; it only takes part in replica set elections to help decide which node should become primary. Arbiters are usually deployed on machines separate from the data-bearing nodes, so that even if a data node's machine fails, the arbiter can still vote. Because the arbiter stores no data, its failure does not affect normal business operation, only the election vote. A PSA deployment therefore maintains only two complete copies of the data; if the primary becomes unavailable, the replica set elects the remaining secondary as the new primary.
  • Structure diagram : (figure not included)

1.3 Advantages of replica sets

MongoDB replica sets have several advantages:

  1. High availability
    A MongoDB replica set keeps redundant copies of the data on multiple nodes. When the primary node (Primary) goes down or the network fails, a secondary node (Secondary) automatically takes over the primary's work, keeping the system highly available. By adjusting the voting weights of replica set members, availability and fault tolerance can be improved further.

  2. Data reliability
    MongoDB replication set can ensure redundant backup of data on multiple nodes, which can effectively avoid data loss or damage. When the master node writes data, the backup node will automatically copy the data to ensure the consistency of data on multiple nodes. When a node goes down or the network is abnormal, the system will automatically perform failover to ensure data reliability.

  3. Separation of data reading and writing
    In the MongoDB replication set, the backup node can undertake the task of reading data, which reduces the pressure on the primary node and improves the concurrent reading capability of the system. The backup node can also be used to support query operations, so as to realize the separation of data reading and writing, and further improve the performance and availability of the system.

  4. Data disaster recovery
    MongoDB replica set can realize data disaster recovery across geographical locations, and back up data to different geographical locations to prevent data loss or unavailability caused by natural disasters and other factors. At the same time, when backing up data across geographical locations, issues such as network latency and bandwidth limitations also need to be considered.

  MongoDB replica sets can improve system availability, fault tolerance, performance, and reliability, and are especially suitable for application scenarios that require high availability, data reliability, and data read-write separation.

2. Precautions

When building a MongoDB replica set, you need to pay attention to the following:

2.1 Software

  1. Version : Select a stable version of MongoDB, and the version of each node needs to be consistent.

  2. Configuration file : When starting the MongoDB node, you need to specify the corresponding configuration file, including port number, data directory, log file and other information.

  3. Security authentication : It is recommended to enable MongoDB's security authentication function to protect data security.

  4. Monitoring : It is recommended to use monitoring tools to monitor and optimize the performance of MongoDB, as well as to discover and solve problems in time.
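
As a concrete example, a few built-in mongosh helpers already give a quick view of replication health (a sketch; run them against any member of the replica set):

// Size and time window of the local oplog
rs.printReplicationInfo()
// Replication lag of each secondary relative to the primary
rs.printSecondaryReplicationInfo()
// General server metrics such as operation counters
db.serverStatus().opcounters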

2.2 Hardware

  1. CPU : Choose a high-performance multi-core CPU to improve system throughput and responsiveness.

  2. Memory : Allocate enough memory to the database cache and operating system cache to improve the performance of data read and write.

  3. Storage : Use solid-state hard drives and use RAID technology for data redundancy backup to improve data reliability.

  4. Network : Choose high-speed, stable network equipment and service providers to ensure the efficiency and reliability of data synchronization.

  When building a MongoDB replica set, the nodes should be placed on different physical machines to improve the availability and reliability of the system. In addition, configure the network between nodes (performance is best when all nodes are on the same local area network) and test connectivity to ensure data synchronization is correct and efficient. Regular backup and recovery tests are also needed to ensure data reliability and integrity.

3. Preparations

3.1 Release notes

System version: CentOS 7.6
MongoDB version: 6.0.5
Client (mongosh) version: 1.8.2

Before building, it is best to be familiar with the installation of a single machine. You can refer to the Complete Guide to MongoDB Installation and Configuration

Note : The software version of each node in the replication set must be consistent to avoid unpredictable problems

3.2 System Requirements

  SELinux is a security-enhanced Linux kernel module used to protect system security and integrity. But in some cases, it may affect the normal operation of MongoDB, so it is sometimes necessary to turn off SELinux.

3.2.1 Check SELinux status

Run sestatus to check; if the output shows SELinux status: enabled, SELinux is currently turned on.

[root@localhost ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
[root@localhost  ~]# 

3.2.2 Temporarily disable SELinux

Running setenforce 0 switches SELinux into Permissive mode: policy violations are only logged as warnings and are no longer blocked. This change does not survive a reboot; to disable SELinux completely, follow 3.2.3.

[root@localhost  ~]# setenforce 0
[root@localhost  ~]# getenforce
Permissive

3.2.3 Permanently disable SELinux

Modify the configuration file /etc/selinux/config. Open the file with a text editor and set the value of SELINUX to disabled:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Save and exit, then restart the system
to check the status after restart

[root@localhost  ~]# sestatus
SELinux status:                 disabled

3.3 Creating Nodes

3.3.1 My Directory

client: the MongoDB client (mongosh)
server: the MongoDB server
standalone: configuration and data files for the single-node (standalone) deployment

[root@localhost   mongodb]# ll
total 0
drwxr-xr-x. 3 1000 1000 130 Apr 27 18:06 client
drwxr-xr-x. 3 root root 100 May 11 21:35 server
drwxr-xr-x  4 root root  48 May 16 00:10 standalone
[root@localhost   mongodb]# 

3.3.2 Configure environment variables

Configuring environment variables makes it convenient to run mongod directly; this step is optional, and without it you can always use the full path instead.
Edit /etc/profile with vi. My server path is /usr/local/mongodb/server, so add the following at the end of the file:

export MONGODB_HOME=/usr/local/mongodb/server
PATH=$PATH:$MONGODB_HOME/bin 

After adding these lines, run source /etc/profile so that the configuration takes effect immediately. You can then use mongod in place of /usr/local/mongodb/server/bin/mongod.
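
To confirm the PATH change took effect, a simple check is to print the server version from any directory:

mongod --version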

3.3.3 Create a node directory

Create the node 1 directory of the replica set under the mongodb directory

[root@localhost   mongodb]# mkdir -p rs/rs_1/{data,logs}
[root@localhost   mongodb]# ll
total 0
drwxr-xr-x. 3 1000 1000 130 Apr 27 18:06 client
drwxr-xr-x  3 root root  18 May 16 00:21 rs
drwxr-xr-x. 3 root root 100 May 11 21:35 server
drwxr-xr-x  4 root root  48 May 16 00:10 standalone

3.3.4 Create configuration file

Create the file mongo.conf in the rs/rs_1 directory

[root@localhost    rs_1]# vi mongo.conf

Enter the following content into the file:

systemLog:
  destination: file
  path: /usr/local/mongodb/rs/rs_1/logs/mongod.log # log file path; an absolute or relative path can be used
  logAppend: true
storage:
  dbPath: /usr/local/mongodb/rs/rs_1/data # data directory
  engine: wiredTiger  # storage engine
  journal:            # whether to enable the journal
    enabled: true
net:
  bindIp: 0.0.0.0
  port: 28001 # port
replication:
  replSetName: test # name of the replica set
processManagement:
  fork: true
security:
  authorization: disabled # whether to enable access control; default is disabled; valid values: disabled / enabled

Save and exit, now node 1 has been configured

3.3.5 Create other nodes

Run cp -r rs_1/ rs_2 to copy the entire rs/rs_1 directory to rs_2, then change the port in its mongo.conf to 28002 and update the data and log directories accordingly. The complete configuration file:

systemLog:
  destination: file
  path: /usr/local/mongodb/rs/rs_2/logs/mongod.log # log file path; an absolute or relative path can be used
  logAppend: true
storage:
  dbPath: /usr/local/mongodb/rs/rs_2/data # data directory
  engine: wiredTiger  # storage engine
  journal:            # whether to enable the journal
    enabled: true
net:
  bindIp: 0.0.0.0
  port: 28002 # port
replication:
  replSetName: test # name of the replica set
processManagement:
  fork: true
security:
  authorization: disabled # whether to enable access control; default is disabled; valid values: disabled / enabled

Create node rs_3 in the same way, changing the port to 28003 and the directories to rs_3.
To summarize, the three nodes are:
rs_1: port 28001, directory /usr/local/mongodb/rs/rs_1
rs_2: port 28002, directory /usr/local/mongodb/rs/rs_2
rs_3: port 28003, directory /usr/local/mongodb/rs/rs_3
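
If you prefer, the copy-and-edit steps above can be scripted; a minimal sketch, assuming the directory layout from 3.3.3 and that rs_1 has not yet been started (its data and logs directories are still empty):

cd /usr/local/mongodb/rs
for i in 2 3; do
  cp -r rs_1 rs_$i
  # rewrite the log path, data path and port for the new node
  sed -i "s/rs_1/rs_$i/g; s/28001/2800$i/g" rs_$i/mongo.conf
done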

4. Build a MongoDB replication cluster

With the directories and configuration files for the three nodes prepared above, start each of them just as you would an ordinary standalone node:

mongod -f /usr/local/mongodb/rs/rs_1/mongo.conf
mongod -f /usr/local/mongodb/rs/rs_2/mongo.conf
mongod -f /usr/local/mongodb/rs/rs_3/mongo.conf

A successful startup will display

[root@localhost ~]# mongod -f /usr/local/mongodb/rs/rs_1/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 7330      
child process started successfully, parent exiting
[root@localhost ~]# mongod -f /usr/local/mongodb/rs/rs_2/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 7412      
child process started successfully, parent exiting
[root@localhost ~]# mongod -f /usr/local/mongodb/rs/rs_3/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 7478      
child process started successfully, parent exiting

You can also check whether each node starts successfully by querying the process

[root@localhost ~]# ps -ef|grep mongod
root       6870      1  1 21:37 ?        00:00:01 /usr/local/mongodb/server/bin/mongod -f /usr/local/mongodb/standalone/mongo.conf
root       7330      1  2 21:37 ?        00:00:02 mongod -f /usr/local/mongodb/rs/rs_1/mongo.conf
root       7412      1  3 21:38 ?        00:00:03 mongod -f /usr/local/mongodb/rs/rs_2/mongo.conf
root       7478      1  4 21:38 ?        00:00:03 mongod -f /usr/local/mongodb/rs/rs_3/mongo.conf
root       7609   7307  0 21:39 pts/0    00:00:00 grep --color=auto mongod

4.1 Initialize the replica set

Use the client to log in to one of the nodes; here I log in to port 28001 first:

/usr/local/mongodb/client/bin/mongosh --port 28001

After logging in, initialize the replica set. The initialization syntax is:

rs.initiate({
    _id: "<replica set name>",
    members: [
        {_id: <node A id>, host: "<node A address>:<port>"},
        {_id: <node B id>, host: "<node B address>:<port>"},
        ...
    ]
})

Build our cluster node information according to the syntax:

rs.initiate({
    _id: "test",
    members: [
        {_id: 1, host: "localhost:28001"},
        {_id: 2, host: "localhost:28002"},
        {_id: 3, host: "localhost:28003"},
       ]
})

Execute in the client

test> rs.initiate({
...     _id: "test",
...     members: [
...         {_id: 1, host: "localhost:28001"},
...         {_id: 2, host: "localhost:28002"},
...         {_id: 3, host: "localhost:28003"},
...        ]
... })
{ ok: 1 }

4.2 View the status of the replica set

Run rs.status() to view the replica set information. After the initialization in 4.1, the members elect a primary underneath using a Raft-based protocol, which takes a moment. Output like the following means that a primary has been elected.
set : the replica set name we configured, i.e. test
members : the information of every node in the replica set
stateStr : the node's role; PRIMARY is the primary node and SECONDARY is a secondary. From the output, our primary is rs_1 and the secondaries are rs_2 and rs_3.

Note: for the meaning of the individual members attributes, see 4.4.

test [direct: primary] test> rs.status()
{
  set: 'test',
 ...
  members: [
    {
      _id: 1,
      name: 'localhost:28001',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 1019,
      optime: { ts: Timestamp({ t: 1684418091, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-05-18T13:54:51.000Z"),
      lastAppliedWallTime: ISODate("2023-05-18T13:54:51.637Z"),
      lastDurableWallTime: ISODate("2023-05-18T13:54:51.637Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1684417851, i: 1 }),
      electionDate: ISODate("2023-05-18T13:50:51.000Z"),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: 'localhost:28002',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 258,
      optime: { ts: Timestamp({ t: 1684418091, i: 1 }), t: Long("1") },
      optimeDurable: { ts: Timestamp({ t: 1684418091, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-05-18T13:54:51.000Z"),
      optimeDurableDate: ISODate("2023-05-18T13:54:51.000Z"),
      lastAppliedWallTime: ISODate("2023-05-18T13:54:51.637Z"),
      lastDurableWallTime: ISODate("2023-05-18T13:54:51.637Z"),
      lastHeartbeat: ISODate("2023-05-18T13:54:57.726Z"),
      lastHeartbeatRecv: ISODate("2023-05-18T13:54:57.221Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'localhost:28001',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 3,
      name: 'localhost:28003',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 258,
      optime: { ts: Timestamp({ t: 1684418091, i: 1 }), t: Long("1") },
      optimeDurable: { ts: Timestamp({ t: 1684418091, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-05-18T13:54:51.000Z"),
      optimeDurableDate: ISODate("2023-05-18T13:54:51.000Z"),
      lastAppliedWallTime: ISODate("2023-05-18T13:54:51.637Z"),
      lastDurableWallTime: ISODate("2023-05-18T13:54:51.637Z"),
      lastHeartbeat: ISODate("2023-05-18T13:54:57.726Z"),
      lastHeartbeatRecv: ISODate("2023-05-18T13:54:57.190Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'localhost:28001',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ...
}
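
A quicker way to see which member is currently the primary is the hello command (a small sketch):

// Topology as seen by the node this session is connected to
db.hello().primary            // e.g. 'localhost:28001'
db.hello().isWritablePrimary  // true only on the primary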

4.3 Master node adds data

Conveniently, the node we are currently logged in to (rs_1) is the primary. If it were not, we would need to connect to the primary instead. Add a record on the primary:

test [direct: primary] test> show dbs
admin    80.00 KiB
config  160.00 KiB
local   436.00 KiB
test [direct: primary] test> use rs
switched to db rs
test [direct: primary] rs> db.user.insertOne({name:"Tom"})
{
  acknowledged: true,
  insertedId: ObjectId("64662f80d455144ad8ec8620")
}

4.3 Initialize the backup node

The primary is rs_1 and the secondaries are rs_2 and rs_3. First log in to rs_2 and run rs.isMaster() to view the replica set information:

[root@localhost ~]# /usr/local/mongodb/client/bin/mongosh --port 28002
...
test [direct: secondary] test> rs.isMaster()
{
  topologyVersion: {
    processId: ObjectId("64662a4edbddbfe2e5df4f0c"),
    counter: Long("4")
  },
  hosts: [ 'localhost:28001', 'localhost:28002', 'localhost:28003' ],
  setName: 'test',
  setVersion: 1,
  ismaster: false,
  secondary: true,
  primary: 'localhost:28001',
  me: 'localhost:28002',
  ...
}

The returned data shows the host addresses of all nodes, the role of the current node, and which node is the primary. Now query the data just inserted on the primary:

test [direct: secondary] test> show dbs
admin    80.00 KiB
config  224.00 KiB
local   452.00 KiB
rs       40.00 KiB
test [direct: secondary] test> use rs
switched to db rs
test [direct: secondary] rs> show tables
user
test [direct: secondary] rs> db.user.find()
MongoServerError: not primary and secondaryOk=false - consider using db.getMongo().setReadPref() or readPreference in the connection string
test [direct: secondary] rs> 

You can see that the user collection exists, but the query is rejected, and the error message suggests setting the read preference. On older versions you could simply run rs.secondaryOk(); for newer server versions, db.getMongo().setReadPref("secondary") is recommended:

test [direct: secondary] rs> db.getMongo().setReadPref("secondary")

test [direct: secondary] rs> db.user.find()
[ { _id: ObjectId("64662f80d455144ad8ec8620"), name: 'Tom' } ]

Running db.getMongo().setReadPref("secondary") (or rs.secondaryOk() on older versions) allows this session to read from the node so that it can serve read requests. It does not trigger any special operation on the server; it simply tells the shell that reads in this session may be routed to a secondary member of the replica set.

Following the same steps, log in to rs_3 and apply the same configuration.
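
Alternatively, the read preference can be set once in the connection string instead of per session; a sketch assuming the three local nodes from this walkthrough:

/usr/local/mongodb/client/bin/mongosh "mongodb://localhost:28001,localhost:28002,localhost:28003/?replicaSet=test&readPreference=secondaryPreferred"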

4.4 Properties of nodes

  In the replica set configuration (returned by rs.conf()), members is an array that defines the nodes of the replica set. Each element of members contains a set of attributes configuring that node's role, host and port, priority, voting weight, and so on. The commonly used attributes and their meanings are:

  • _id : The unique identifier of each node in the replica set. This value is an integer and must be unique within the replica set.

  • host : The hostname (or IP address) and port of the node, in the form "host:port", used to locate and connect to the member.

  • priority : The node's priority in elections; members with higher values are more likely to be elected primary. Every member has a priority of 1 by default, and a member with priority 0 can never become primary.

  • votes : Whether the member votes in elections; the default is 1. A member with votes set to 0 does not take part in electing the primary.

  • arbiterOnly : Specifies whether the node is an arbiter (Arbiter). When set to true, the node stores no data and only participates in election voting.

  • buildIndexes : Specifies whether the node builds indexes when copying data. Set to true to indicate that the node will build indexes when replicating data to improve the performance of the replica set.

  • hidden : Specifies whether the node is a hidden member. When set to true, the node is invisible to client applications (it is not reported in the hello/isMaster response) and cannot become primary, but it continues to replicate data.

  • slaveDelay (renamed secondaryDelaySecs in MongoDB 5.0 and later) : For a delayed secondary (Delayed Secondary), the delay in seconds before the node applies operations replicated from the primary.

  • tags : tags, used to group nodes. Tags can be used to control the routing of read operations for data localization or partitioning.

4.4.1 Modifying an active configuration

If the replica set is already running and the existing configuration needs to be modified, you can follow this template:

// Fetch the current configuration
cfg = rs.conf()
// Set the priority of the first element of members to 0
cfg.members[0].priority = 0
// Set hidden of the second element of members to true
cfg.members[1].hidden = true
// Apply the new configuration with the reconfig command
rs.reconfig(cfg)

The client provided by MongoDB (mongosh) supports JavaScript syntax, so modifying the configuration is essentially just manipulating a JavaScript object. There are usage examples in the practice sections below.

4.5 The role of nodes

In a MongoDB replica set, nodes can have the following roles:

  • Primary (primary node) : The primary node is the main node of the replica set, which is responsible for processing all write operations and replicating the results of the write operations to other nodes in the replica set. The master node is also responsible for processing client read requests and providing real-time data.

  • Secondary (slave node) : The slave node is a standby node in the replica set, which keeps the data synchronized with the master node through replication. Slave nodes replicate data from the master node and apply write operations to maintain data consistency. Slave nodes can handle client read requests, providing data scalability and high availability.

  • Hidden (hidden node) : A hidden node is a special type of secondary that is invisible to client applications. Hidden nodes are used for specific tasks, such as backups or report generation, without interfering with normal replica set traffic. A hidden node keeps replicating data and can vote, but it cannot be elected as the new primary.

  • Delayed Secondary : A delayed secondary node is a special type of secondary node that has a certain delay in data replication. Delayed slave nodes will delay for a period of time before replicating the operations of the master node, which can be used to create data recovery points or perform delayed tasks.

  • Arbiter (arbitration node) : Arbitration node is a special type of node that does not store data and only participates in the election process. The arbitration node is used to help solve the election problem in the replica set and ensure the high availability and data consistency of the replica set.

  • Priority 0 (priority 0) : A member with priority 0 is a secondary that can never be elected as the new primary. Such a node still replicates data and can still vote in elections.

  • Priority 1+ : Members with a priority of 1 or higher are secondaries that are eligible to become the new primary; the higher the priority, the more likely the member is to win the election.

  Note that a node's role is determined by its configuration and runtime state. The primary and secondary roles are dynamic and can change with elections and node status, while arbiter, hidden, and delayed members are defined by configuration. There are usage examples in the practice sections below.
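
As an illustration of several of these roles, the sketch below reconfigures the third member (array index 2) as a hidden, delayed secondary that can never become primary; the index and the one-hour delay are only examples:

cfg = rs.conf()
cfg.members[2].priority = 0               // never eligible to be elected primary
cfg.members[2].hidden = true              // invisible to client applications
cfg.members[2].secondaryDelaySecs = 3600  // apply replicated operations one hour late
rs.reconfig(cfg)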

5. High availability practice of MongoDB replica set

5.1 Write data and read data

  In a MongoDB replica set, there is usually one Primary node (primary node) and multiple Secondary nodes (secondary nodes). The Primary node is responsible for processing the write operation and replicating the result of the write operation to the Secondary node. The Secondary node replicates the data of the Primary node and does not receive read requests from applications by default.
When we try to write data on rs_2 (a secondary), it throws an error telling us it is not the primary:

[ { _id: ObjectId("64662f80d455144ad8ec8620"), name: 'Tom' } ]
test [direct: secondary] rs> db.user.insertOne({name:"Jerry"})
MongoServerError: not primary
test [direct: secondary] rs> 

Go back to the primary rs_1 and write the data; the write succeeds:

test [direct: primary] rs> db.user.insertOne({name:"Jerry"})
{
  acknowledged: true,
  insertedId: ObjectId("646632d2d455144ad8ec8621")
}

Back on the secondary, the data written by the primary can now be read:

test [direct: secondary] rs> db.user.find()
[
  { _id: ObjectId("64662f80d455144ad8ec8620"), name: 'Tom' },
  { _id: ObjectId("646632d2d455144ad8ec8621"), name: 'Jerry' }
]
test [direct: secondary] rs> 

5.2 Security mechanism

5.2.1 Basic concepts

  Security mechanisms refer to a series of security measures implemented in the MongoDB replica set environment to ensure the confidentiality, integrity and availability of data. The following are the key takeaways from replica set security mechanisms:

  1. Authentication and Authorization: MongoDB supports an authentication mechanism of username and password. Access to databases in a replication set can be restricted by assigning roles and permissions to users. Only authorized users can connect to the replica set and perform operations.

  2. Data transmission encryption: By using the TLS/SSL protocol to encrypt the communication between replica set members, the security of data during transmission can be protected. This prevents sensitive data from being intercepted or tampered with on the network.

  3. Database access control: By setting roles and permissions, users can be restricted from operating on the database in the replication set. Administrators can grant users specific permissions to read, write, manage collections or indexes as needed, and prevent unauthorized users from performing sensitive operations.

  4. Security auditing: MongoDB provides an auditing function to record and track key events in the replication set, such as user login, command execution, data changes, etc. Audit logs can be used to monitor unusual activity, identify security threats, and troubleshoot.

  5. Network isolation: Deploying replica set members in a trusted network environment, such as a virtual private network (VPC) or restricted network, can reduce the risk of unauthorized access.

  By comprehensively applying these security mechanisms, MongoDB replica sets can provide a certain level of data security and protection, ensuring that data is properly protected during storage, transmission, and access in the replica set. To ensure optimal security, it is recommended to configure and tune security settings according to actual needs and best practices.
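
For item 2 above (transport encryption between members and clients), TLS is configured in the net section of each mongod's configuration file; a minimal sketch, assuming a hypothetical PEM certificate path:

net:
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem  # hypothetical path; certificate and private key in one PEM file
    CAFile: /etc/ssl/mongodb-ca.pem           # CA certificate used to validate the other members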

5.2.2 Common Security Configurations

  Based on 5.2.1, best practice generally comes down to three points:

  1. Configure an IP access whitelist for the service
  2. Secure the communication between nodes
  3. Client security authentication

  The first point is usually handled by the operations team, so here we practice points 2 and 3.

5.2.3 Client Security Authentication

Note : 5.2.3 must be completed before 5.2.4, because the key-file configuration in 5.2.4 also enables client authentication by default.

First log in to the primary node and create a user named rs for operating the cluster:

db.createUser( {
    user: "rs",
    pwd: "rs",
     roles: [ { role: "clusterAdmin", db: "admin" } ,
         { role: "userAdminAnyDatabase", db: "admin"},
         { role: "userAdminAnyDatabase", db: "admin"},
         { role: "readWriteAnyDatabase", db: "admin"}]
})

Execution result:

test [direct: primary] test> use admin
switched to db admin
test [direct: primary] admin> db.createUser( {
...     user: "rs",
...     pwd: "rs",
...      roles: [ { role: "clusterAdmin", db: "admin" } ,
...          { role: "userAdminAnyDatabase", db: "admin"},
...          { role: "userAdminAnyDatabase", db: "admin"},
...          { role: "readWriteAnyDatabase", db: "admin"}]
... })
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1684424590, i: 4 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1684424590, i: 4 })
}

Alternatively, you can create a user with the root role so that it has all root permissions:

db.createUser( {
    user: "rs",
    pwd: "rs",
    roles: [{ role: "root", db: "admin" }]
})

Query the users; you can see that rs has been created successfully:

test [direct: primary] rs> use admin
switched to db admin
test [direct: primary] admin> show users
[
  {
    _id: 'admin.rs',
    userId: new UUID("784042fa-f765-41bc-a91e-b5e081b15e04"),
    user: 'rs',
    db: 'admin',
    roles: [
      { role: 'clusterAdmin', db: 'admin' },
      { role: 'userAdminAnyDatabase', db: 'admin' },
      { role: 'readWriteAnyDatabase', db: 'admin' }
    ],
    mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]
  }
]

5.2.4 Secure communication between nodes

Note : 5.2.3 must be completed before 5.2.4, because the key-file configuration in 5.2.4 also enables client authentication by default.

Shut down the running replica set members, then run cd /usr/local/mongodb/rs/rs_1 to enter one of the node directories and create a key file:

[root@localhost ~]# cd /usr/local/mongodb/rs/rs_1
[root@localhost rs_1]# openssl rand -base64 756 > mongo.key
[root@localhost rs_1]# chmod 600 mongo.key
[root@localhost rs_1]# 

Copy the mongo.key file to the other node directories. With the old mongod processes shut down, start each node again with the --keyFile option:

mongod -f /usr/local/mongodb/rs/rs_1/mongo.conf --keyFile /usr/local/mongodb/rs/rs_1/mongo.key
mongod -f /usr/local/mongodb/rs/rs_2/mongo.conf --keyFile /usr/local/mongodb/rs/rs_2/mongo.key
mongod -f /usr/local/mongodb/rs/rs_3/mongo.conf --keyFile /usr/local/mongodb/rs/rs_3/mongo.key

If you prefer not to append the parameter to the startup command, you can instead edit each node's mongo.conf and add the corresponding key file under the security section:

security:
   keyFile: <path to the key file>

For example the configuration of rs_1 is

security:
  keyFile: /usr/local/mongodb/rs/rs_1/mongo.key

With the key file configured in mongo.conf, the startup command no longer needs the --keyFile parameter. If keyFile is configured but the .key file cannot be found, startup fails with an error:

[root@localhost rs_2]mongod -f /usr/local/mongodb/rs/rs_2/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 13268
ERROR: child process failed, exited with 1
To see additional information in this output, start without the "--fork" option.
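
Note that starting the members with a key file also enables client access control, so from now on sessions must authenticate with the user created in 5.2.3, for example:

/usr/local/mongodb/client/bin/mongosh --port 28001 -u rs -p rs --authenticationDatabase admin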

5.3 Adding new nodes

5.3.1 Create configuration directory and configuration file

First enter the rs directory and run mkdir -p rs_4/{data,logs}:

[root@localhost ~]# cd /usr/local/mongodb/rs
[root@localhost rs]# mkdir -p rs_4/{data,logs}
[root@localhost rs]# ll rs_4
total 0
drwxr-xr-x 2 root root 6 May 18 23:58 data
drwxr-xr-x 2 root root 6 May 18 23:58 logs
[root@localhost rs]# 

Copy mongo.conf and mongo.key from the rs_1 directory into rs_4, then edit rs_4's mongo.conf. The complete configuration:

systemLog:
  destination: file
  path: /usr/local/mongodb/rs/rs_4/logs/mongod.log # log file path; an absolute or relative path can be used
  logAppend: true
storage:
  dbPath: /usr/local/mongodb/rs/rs_4/data # data directory
  engine: wiredTiger  # storage engine
  journal:            # whether to enable the journal
    enabled: true
net:
  bindIp: 0.0.0.0
  port: 28004 # port
replication:
  replSetName: test # name of the replica set
processManagement:
  fork: true
security:
  keyFile: /usr/local/mongodb/rs/rs_4/mongo.key
  authorization: disabled # whether to enable access control; default is disabled; valid values: disabled / enabled

5.3.2 Start the node

Start the rs_4 node:

[root@localhost ~]# mongod -f /usr/local/mongodb/rs/rs_4/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 16075
child process started successfully, parent exiting
[root@localhost ~]# 

5.3.3 Adding Nodes

The syntax for adding a node is rs.add(hostport, arbiterOnly?). The first parameter is the address and port; the second is optional and, if set to true, marks the node as an arbiter that does not store or serve data and only votes in primary elections. Enter the client of the primary node and execute the add-node command:

test [direct: primary] admin> rs.add("127.0.0.1:28004")
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1684425967, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("667e90765de0730dad66e0a6dfea63885fb40590", "hex"), 0),
      keyId: Long("7234519582843600902")
    }
  },
  operationTime: Timestamp({ t: 1684425967, i: 1 })
}
test [direct: primary] admin> 

5.3.4 Initialize new nodes

Log in to the new node; since the key file is configured, the connection must authenticate:

[root@localhost ~]# /usr/local/mongodb/client/bin/mongosh --port 28004 -u rs -p rs --authenticationDatabase admin
Current Mongosh Log ID:	64664d4222b936e5a4226210
...
test [direct: secondary] test> db.getMongo().setReadPref("secondary")

After that, the new node has been added; view the replica set status:

test [direct: secondary] rs> rs.status()
{
  set: 'test',
  ...
  members: [
    {
      _id: 1,
      ...
    },
    {
      _id: 2,
     ...
    },
    {
      _id: 3,
      ...
    },
    {
      _id: 4,
      name: '127.0.0.1:28004',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 485,
      optime: { ts: Timestamp({ t: 1684426231, i: 1 }), t: Long("4") },
      optimeDate: ISODate("2023-05-18T16:10:31.000Z"),
      lastAppliedWallTime: ISODate("2023-05-18T16:10:31.782Z"),
      lastDurableWallTime: ISODate("2023-05-18T16:10:31.782Z"),
      syncSourceHost: 'localhost:28003',
      syncSourceId: 3,
      infoMessage: '',
      configVersion: 3,
      configTerm: 4,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
 ...
}

Query data using new nodes

test [direct: secondary] test> use rs
switched to db rs
test [direct: secondary] rs> db.user.find()
[
  { _id: ObjectId("64662f80d455144ad8ec8620"), name: 'Tom' },
  { _id: ObjectId("646632d2d455144ad8ec8621"), name: 'Jerry' }
]

So far, the new node has successfully joined the cluster

5.4 Remove replica nodes

There are two ways to remove a node

  1. Find the name (hostname plus port) of the node to be removed from the rs.status() output, then run rs.remove(hostname) on a writable node (the primary). For example, to remove 127.0.0.1:28004:
test [direct: primary] rs> rs.remove("127.0.0.1:28004")
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1684568847, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("088d4a90293b366b16d565ed09c14f7f59c3f7de", "hex"), 0),
      keyId: Long("7234519582843600902")
    }
  },
  operationTime: Timestamp({ t: 1684568847, i: 1 })
}

  2. Remove the node by modifying the replica set configuration,
    for example, removing the first element of members:
test [direct: primary] rs> cfg = rs.conf()
...
test [direct: primary] rs> cfg.members.splice(0,1)
[
  {
    _id: 1,
    host: 'localhost:28001',
    arbiterOnly: false,
    buildIndexes: true,
    hidden: false,
    priority: 1,
    tags: {},
    secondaryDelaySecs: Long("0"),
    votes: 1
  }
]

test [direct: primary] rs> rs.reconfig(cfg)
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1684569146, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("2bab40af256b287fba36eeb4f48ff14f4486f4bd", "hex"), 0),
      keyId: Long("7234519582843600902")
    }
  },
  operationTime: Timestamp({ t: 1684569146, i: 1 })
}

splice(0, 1) is a JavaScript array method; it removes one element starting at index 0, i.e. the first member.

5.5 Add arbitration node

First look at the current members of the replica set:

test [direct: primary] rs> rs.status()
{
  set: 'test',
  ...
  members: [
    {
      _id: 2,
      name: 'localhost:28002',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 14528,
      optime: { ts: Timestamp({ t: 1684569224, i: 1 }), t: Long("6") },
      optimeDate: ISODate("2023-05-20T07:53:44.000Z"),
      lastAppliedWallTime: ISODate("2023-05-20T07:53:44.749Z"),
      lastDurableWallTime: ISODate("2023-05-20T07:53:44.749Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1684554713, i: 1 }),
      electionDate: ISODate("2023-05-20T03:51:53.000Z"),
      configVersion: 5,
      configTerm: 6,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 3,
      name: 'localhost:28003',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 14520,
      optime: { ts: Timestamp({ t: 1684569224, i: 1 }), t: Long("6") },
      optimeDurable: { ts: Timestamp({ t: 1684569224, i: 1 }), t: Long("6") },
      optimeDate: ISODate("2023-05-20T07:53:44.000Z"),
      optimeDurableDate: ISODate("2023-05-20T07:53:44.000Z"),
      lastAppliedWallTime: ISODate("2023-05-20T07:53:44.749Z"),
      lastDurableWallTime: ISODate("2023-05-20T07:53:44.749Z"),
      lastHeartbeat: ISODate("2023-05-20T07:53:48.082Z"),
      lastHeartbeatRecv: ISODate("2023-05-20T07:53:48.082Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'localhost:28002',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 5,
      configTerm: 6
    }
  ]
  ...

Make a copy of rs_1 for the arbiter; after copying, remember to empty the data and logs directories inside the copy:

[root@localhost rs]# cp -r rs_1 arb
[root@localhost rs]# ll
total 0
drwxr-xr-x 4 root root 65 May 20 15:56 arb
drwxr-xr-x 4 root root 65 May 18 23:47 rs_1
drwxr-xr-x 4 root root 65 May 18 23:47 rs_2
drwxr-xr-x 4 root root 65 May 18 23:47 rs_3
drwxr-xr-x 4 root root 65 May 19 00:01 rs_4
[root@localhost rs]# 

Modify the configuration file

systemLog:
  destination: file
  path: /usr/local/mongodb/rs/arb/logs/mongod.log # log file path; an absolute or relative path can be used
  logAppend: true
storage:
  dbPath: /usr/local/mongodb/rs/arb/data # data directory
  engine: wiredTiger  # storage engine
  journal:            # whether to enable the journal
    enabled: true
net:
  bindIp: 0.0.0.0
  port: 28005 # port
replication:
  replSetName: test # name of the replica set
processManagement:
  fork: true
security:
  keyFile: /usr/local/mongodb/rs/arb/mongo.key
  authorization: disabled # whether to enable access control; default is disabled; valid values: disabled / enabled

Start the node:
mongod -f /usr/local/mongodb/rs/arb/mongo.conf

Log in to the primary node of the cluster and run the add-arbiter command rs.addArb(hostname):

test [direct: primary] rs> rs.addArb("127.0.0.1:28005")
MongoServerError: Reconfig attempted to install a config that would change the implicit default write concern. Use the setDefaultRWConcern command to set a cluster-wide write concern and try the reconfig again.

The command fails. Following the error message, we search the official documentation for setDefaultRWConcern and find the command to set a cluster-wide default write concern:
db.adminCommand( {"setDefaultRWConcern" : 1, "defaultWriteConcern" : { "w" : "majority" } } )
Run it, then add the arbiter again:

test [direct: primary] rs> db.adminCommand( {"setDefaultRWConcern" : 1, "defaultWriteConcern" : { "w" : "majority" } } )
{
  defaultReadConcern: { level: 'local' },
  defaultWriteConcern: { w: 'majority', wtimeout: 0 },
  updateOpTime: Timestamp({ t: 1684571614, i: 1 }),
  updateWallClockTime: ISODate("2023-05-20T08:33:35.147Z"),
  defaultWriteConcernSource: 'global',
  defaultReadConcernSource: 'implicit',
  localUpdateWallClockTime: ISODate("2023-05-20T08:33:35.152Z"),
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1684571615, i: 2 }),
    signature: {
      hash: Binary(Buffer.from("0bc25b8ee44b8b69a9f0f03a516080c8d9f699d6", "hex"), 0),
      keyId: Long("7234519582843600902")
    }
  },
  operationTime: Timestamp({ t: 1684571615, i: 2 })
}
test [direct: primary] rs> rs.addArb("127.0.0.1:28005")
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1684571660, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("e7188aea177a5918fab04f710bae23570cb0faf4", "hex"), 0),
      keyId: Long("7234519582843600902")
    }
  },
  operationTime: Timestamp({ t: 1684571660, i: 1 })
}
....

The error occurred because a cluster-wide default write concern had not been configured; if it had already been set, rs.addArb would have succeeded without the extra command. Now check the replica set status: the entry with stateStr: 'ARBITER' shows that the arbiter node has been added successfully.

test [direct: primary] rs> rs.status()
 ...
 {
      _id: 3,
      name: '127.0.0.1:28005',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 8,
      lastHeartbeat: ISODate("2023-05-20T09:04:20.368Z"),
      lastHeartbeatRecv: ISODate("2023-05-20T09:04:20.367Z"),
      pingMs: Long("3"),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 2,
      configTerm: 1
    }

5.6 Correct transfer of the master node

  In a MongoDB replica set, when the primary node is unavailable or needs maintenance, the primary node transfer can be performed manually or automatically. Master transfer is the process of promoting a slave (usually the best slave) in a replica set to become the new master.

Although killing the mongod process of the current primary will force the election of a new primary, this is not recommended. The correct steps for transferring the primary in a MongoDB replica set are as follows:

  1. Check replica set status : Use the rs.status() command to view the status of the replica set and determine the status and availability of the current primary node.

  2. Verify slave node status : Make sure that other slave nodes are healthy and have the conditions to become master nodes. The slave nodes should be in sync and have sufficient replication processes.

  3. Select a new master node : Select one of the healthy slave nodes as the new master node. Factors such as node replication delay, network delay, and hardware resources can be considered for selection.

  4. Execute the primary step-down : Run rs.stepDown() on the current primary; the node steps down and will not stand for election again for the specified period (60 seconds by default). This makes the current primary voluntarily relinquish its role so that a new primary can take over (see the sketch after this list).

  5. Monitor state changes : Monitor replica set state changes and ensure that a new primary node has been successfully elected to take over the role of primary node.
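
A minimal sketch of step 4, run on the current primary (the 120-second value is only an example):

// The primary steps down and will not stand for election again for 120 seconds
rs.stepDown(120)
// Afterwards, confirm from any member that a new primary has been elected
rs.status().members.filter(m => m.stateStr === 'PRIMARY')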

5.6.1 The role of master node transfer

  1. High availability : When the master node is unavailable, a slave node can be quickly and automatically or manually selected as the new master node to ensure continuous availability of the system.

  2. Fault tolerance : If the primary node fails or needs maintenance, the transfer of the primary node can ensure that other nodes in the replica set can take over the role of the primary node to avoid service interruption.

  3. Load balancing : By selecting the optimal slave node as the new master node, load balancing can be achieved according to the status and resource conditions of the nodes, improving system performance and throughput.

  4. Data consistency : During the master node transfer process, MongoDB will ensure data consistency and reliability. The slave nodes will synchronize data according to the replication set protocol to ensure data integrity.

  When transferring the master node, make sure that all nodes in the replica set can work normally and the network connection is stable. In addition, the master node transfer may affect the read and write performance of the system to a certain extent, so sufficient planning and testing should be carried out when performing the transfer.

6. Other features and functions of MongoDB replica sets

6.1 Election mechanism

  The election mechanism of the MongoDB replica set is used to select a new master node when the master node is unavailable to ensure the normal operation of the replica set and data consistency. The following is the detailed process of replica set election:

  1. Member status check : Each member (node) periodically sends a heartbeat signal to other members to check its status. If a member does not receive a heartbeat signal from the master node within a certain period of time, it will consider the master node unavailable and trigger the election process.

  2. Candidate status : The member that triggers the election will become a candidate, it will send an election request to other members, and wait for other members to vote. Candidates maintain a ballot box to record ballots received.

  3. Voting process : After receiving the election request, each member will verify the legitimacy of the candidate and vote for the candidate. Each member can only cast one vote, and only for the same candidate in one election.

  4. Ballot Counting : Candidates will count the number of ballots received. Votes are counted according to the following rules:

  • If a candidate receives the votes of more than half of the members, it will become the new master node.
  • If no candidate receives more than half of the votes, the election fails and the replica set remains in the current master state.
  • Primary election result: If a candidate becomes the new primary, it sends the election result to the other members, notifying them of the primary change. The other members then update their configuration and switch over to the new primary.

  During the election process, each member has a priority value. Members with higher priority carry more weight in elections and are more likely to become the primary. Priority also affects the data synchronization process in the replica set.

  The goal of replica set election is to select a stable primary node to ensure data consistency and reliability. The election mechanism ensures that the replica set can quickly select a new master node when the master node fails or changes, and the priority of members is considered during the election process to ensure the normal operation of the entire replica set.

6.1.1 Election Algorithm Optimization

  MongoDB's election algorithm does not directly adopt the Raft algorithm . The design of the MongoDB replica set election algorithm is different from the Raft algorithm, and some optimizations have been made in the implementation. Here are some optimized aspects of MongoDB's election algorithm:

  1. Fast elections : MongoDB's election algorithm is designed to quickly elect new master nodes to ensure high availability. It speeds up elections by setting timeouts and voting rounds during the election process. If the election process exceeds a certain amount of time or the number of rounds has not reached a result, the election is aborted and restarted.

  2. Priority voting : Each member has a priority value in the election. This value can be tuned via configuration, with a higher priority increasing the chances of a member becoming a master. This mechanism ensures that members with higher priority are more likely to become master nodes, resulting in better resource utilization.

  3. Majority vote mechanism : In order to ensure the consistency of election results, MongoDB uses a majority vote mechanism. In the replica set, more than half of the voting results of the members need to be elected successfully. This prevents splits where multiple members declare themselves masters at the same time.

  4. Data synchronization optimization : During the election process, the new primary node needs to ensure that its data copy is consistent with other members' data copies. In order to reduce data synchronization delay and network bandwidth consumption, MongoDB uses an incremental synchronization mechanism. Instead of replicating the entire dataset, the master node only needs to send the unsynced operation log (oplog) to other members.

  While MongoDB's election algorithm differs from Raft's algorithm, both are designed to provide consistency and high availability. MongoDB's election algorithm ensures the speed and correctness of the election through the above optimization, and can perform efficient data synchronization in the replica set.

6.2 How does a replica set handle failover

The underlying MongoDB uses the following mechanisms when handling failover:

  1. Heartbeat detection : Members in a MongoDB replica set periodically send heartbeat signals to detect each other's availability. Each member periodically sends heartbeat requests to other members and waits for heartbeat responses. If a member does not receive a heartbeat response within a certain period of time, it will consider the other party unavailable and mark it as faulty.

  2. Master node failure detection : When a member detects that the current master node is unavailable, it will attempt to initiate an election process to elect a new master node. The electoral process includes voting among members and campaigning for candidates. Members will express their support for the candidates by voting, and a new master node will be elected based on the principle of majority vote.

  3. Data synchronization and replication : Once the election of the new master node is completed, other members will synchronize data with the new master node. The master node will send the operation records in the operation log (oplog) to other members to ensure data consistency. Members apply these operations in the order of the master node to keep the data in sync.

  4. Failover complete : The failover process is complete when the election of a new master node is complete and data synchronization is complete. At this point, all members of the replica set are aware of the new primary, and data replicas are also kept in sync with the new primary. The replica set will return to a normal working state and can continue to process read and write requests from clients.

  MongoDB implements automatic failover through the mechanisms above. Once the primary node fails, the other members detect it quickly and automatically elect a new primary. The data synchronization mechanism ensures data consistency between the new primary and the other members. In this way, the replica set remains highly available and reliable even in the event of a failure.

  The failover process may cause a short period of service interruption or read and write delays, depending on factors such as data synchronization speed and network latency. But MongoDB's failover mechanism is designed to be as fast and reliable as possible to minimize impact on applications.

7. Summary

  Finally, let's make a summary. We have introduced the concept of MongoDB replica set, construction and management steps, and related precautions. A replica set consists of primary and backup nodes, providing high availability and data redundancy. By initializing the replica set and setting up the master node, we can build a replica cluster and add data. It also covers the attributes and roles of nodes, as well as high availability practices for replica sets, such as security mechanisms, adding/removing nodes, and master transfer. Additionally, election mechanisms and failover handling are covered. I hope this article can help you fully understand the configuration and management of MongoDB replica sets to achieve data reliability and continuous availability.


Origin blog.csdn.net/dougsu/article/details/130692610