M103: Basic Cluster Administration chapter 2 Replication study notes
Environment
OS: Windows 10 Home (Chinese edition)
MongoDB: 3.4
MongoDB install path: E:\MongoDB\Server\3.4\bin\
MongoDB data path: E:\MongoDB\data
Lab exercises
lab - Initiate a Replica Set Locally
In this lab you will launch a replica set with three members from within your Vagrant environment. To secure this replica set, you will create a keyfile for your nodes to use when communicating with each other.
For this lab, you must place this keyfile in the /var/mongodb/pki directory and change the permissions so only the owner of the file can read it or write to it:
sudo mkdir -p /var/mongodb/pki
sudo chown vagrant:vagrant -R /var/mongodb
openssl rand -base64 741 > /var/mongodb/pki/m103-keyfile
chmod 600 /var/mongodb/pki/m103-keyfile
Your three mongod processes will each have their own configuration file, and now those config files can reference the keyfile you just made. These config files will be similar to the config file from the previous lab, but with the following adjustments:
type | PRIMARY | SECONDARY | SECONDARY
---|---|---|---
config filename | mongod-repl-1.conf | mongod-repl-2.conf | mongod-repl-3.conf
port | 27001 | 27002 | 27003
dbPath | /var/mongodb/db/1 | /var/mongodb/db/2 | /var/mongodb/db/3
logPath | /var/mongodb/db/mongod1.log | /var/mongodb/db/mongod2.log | /var/mongodb/db/mongod3.log
replSet | m103-repl | m103-repl | m103-repl
keyFile | /var/mongodb/pki/m103-keyfile | /var/mongodb/pki/m103-keyfile | /var/mongodb/pki/m103-keyfile
bindIP | localhost,192.168.103.100 | localhost,192.168.103.100 | localhost,192.168.103.100
Note that the mongod does not automatically create the dbPath or logPath directories.
When your config files are complete, start a mongod process with the first config file (on port 27001). This mongod process will act as the primary node in your replica set (at least, until an election occurs).
Now use the mongo shell to connect to this node. On this node, and only this node, initiate your replica set with rs.initiate(). Remember that this will only work if you are connected from localhost.
Once you run rs.initiate(), the node applies a default replica set configuration and elects itself primary. Use rs.status() to check the status of the replica set. The shell prompt will read PRIMARY once the initiation process completes successfully.
Because the replica set uses a keyfile for internal authentication, clients must authenticate before performing any actions.
While still connected to the primary node, create an admin user for your cluster using the localhost exception. As a reminder, here are the requirements for this user:
- Role: root on admin database
- Username: m103-admin
- Password: m103-pass
Now exit the mongo shell and start the other two mongod processes with their respective configuration files.
Next, you will reconnect to your primary node as m103-admin, and add the other two nodes to your replica set. Remember to use the IP address of the Vagrant box 192.168.103.100 when adding these nodes.
Once your other two members have been successfully added, run rs.status() to check that the members array has three nodes - one labeled PRIMARY and two labeled SECONDARY.
Now run the validation script and enter the validation key you receive below. If you receive an error, it should give you some idea of what went wrong.
validate_lab_initialize_local_replica_set
Solution
Start the Vagrant box and SSH into it (see the M103: Basic Cluster Administration chapter 0 Introduction notes):
C:\Users\Shinelon>e:
E:\>cd e:\MongoDB\m103\chapter_0_introduction_setup\m103-vagrant-env
e:\MongoDB\m103\chapter_0_introduction_setup\m103-vagrant-env>vagrant up --provision
e:\MongoDB\m103\chapter_0_introduction_setup\m103-vagrant-env>vagrant ssh
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-144-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Thu Apr 19 12:33:13 UTC 2018
System load: 0.48 Processes: 107
Usage of /: 6.8% of 39.34GB Users logged in: 0
Memory usage: 7% IP address for eth0: 10.0.2.15
Swap usage: 0% IP address for eth1: 192.168.103.100
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
2 packages can be updated.
2 updates are security updates.
Last login: Tue Apr 17 07:26:40 2018 from 10.0.2.2
vagrant@m103:~$
As the lab requires, create the m103-keyfile and restrict its permissions:
vagrant@m103:~$ sudo mkdir -p /var/mongodb/pki
vagrant@m103:~$ sudo chown vagrant:vagrant -R /var/mongodb
vagrant@m103:~$ openssl rand -base64 741 > /var/mongodb/pki/m103-keyfile
vagrant@m103:~$ chmod 600 /var/mongodb/pki/m103-keyfile
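A quick sanity check can confirm the keyfile was created correctly. This is a sketch using a temporary directory as a stand-in for /var/mongodb/pki; mongod refuses to start if a keyfile is readable by group or others, so the permission bits should be exactly 600.

```shell
# Sketch: generate a keyfile into a temp directory and verify its
# permission bits; the path is a stand-in for /var/mongodb/pki.
tmpdir=$(mktemp -d)
openssl rand -base64 741 > "$tmpdir/m103-keyfile"
chmod 600 "$tmpdir/m103-keyfile"
perms=$(stat -c '%a' "$tmpdir/m103-keyfile")
echo "permissions: $perms"
```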
Create the dbPath directories (this also creates the log directory, since the logs live under /var/mongodb/db):
vagrant@m103:~$ mkdir -p /var/mongodb/db/1
vagrant@m103:~$ mkdir -p /var/mongodb/db/2
vagrant@m103:~$ mkdir -p /var/mongodb/db/3
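The same three directories can also be created in a single command with bash brace expansion. A sketch using a temporary root directory as a stand-in for /var/mongodb:

```shell
# Sketch: brace expansion creates all three dbPath directories at once;
# $root stands in for /var/mongodb.
root=$(mktemp -d)
mkdir -p "$root"/db/{1,2,3}
ls "$root"/db
```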
Configure mongod-repl-1.conf:
vagrant@m103:~$ cp /etc/mongod.conf ./mongod-repl-1.conf
vagrant@m103:~$ vim mongod-repl-1.conf
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/mongodb/db/1
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/mongodb/db/mongod1.log

# network interfaces
net:
  port: 27001
  bindIp: 192.168.103.100,localhost

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
  fork: true

#security:

#operationProfiling:

replication:
  replSetName: m103-repl

#sharding:

## Enterprise-Only Options:

#auditLog:

#snmp:

security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
Start a mongod process with this configuration file:
vagrant@m103:~$ mongod --config ./mongod-repl-1.conf
Connect to the mongo shell:
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27001/
MongoDB server version: 3.6.4-rc0
Initiate the replica set:
MongoDB Enterprise > rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.103.100:27001",
"ok" : 1,
"operationTime" : Timestamp(1524147561, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524147561, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
After a short wait, check the replica set status:
MongoDB Enterprise > rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2018-04-19T14:22:28.736Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524147742, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524147742, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524147742, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524147742, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4913,
"optime" : {
"ts" : Timestamp(1524147742, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-19T14:22:22Z"),
"electionTime" : Timestamp(1524147561, 2),
"electionDate" : ISODate("2018-04-19T14:19:21Z"),
"configVersion" : 1,
"self" : true
}
],
"ok" : 1,
"operationTime" : Timestamp(1524147742, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524147742, 1),
"signature" : {
"hash" : BinData(0,"YShxth9DtvPuS2QaFmdHs1A4QXY="),
"keyId" : NumberLong("6546163933068132353")
}
}
}
The state has changed to PRIMARY. Create the admin user as the lab specifies:
MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.createUser({
... user: "m103-admin",
... pwd: "m103-pass",
... roles: [
... {role: "root", db: "admin"}
... ]
... })
Successfully added user: {
"user" : "m103-admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
Exit the mongo shell:
MongoDB Enterprise > exit
bye
Copy mongod-repl-1.conf to create mongod-repl-2.conf and mongod-repl-3.conf, then adjust the port, dbPath, and log path in each:
vagrant@m103:~$ cp ./mongod-repl-1.conf mongod-repl-2.conf
vagrant@m103:~$ cp ./mongod-repl-1.conf mongod-repl-3.conf
vagrant@m103:~$ vim mongod-repl-2.conf
vagrant@m103:~$ vim mongod-repl-3.conf
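Instead of editing each copy by hand in vim, the three lines that differ can be rewritten with sed. A sketch against a trimmed stand-in config in /tmp (not the full config file):

```shell
# Sketch: derive the node-2 config from node 1 by rewriting the port,
# dbPath, and log path. The /tmp paths are stand-ins for the real files.
cat > /tmp/mongod-repl-1.conf <<'EOF'
storage:
  dbPath: /var/mongodb/db/1
systemLog:
  path: /var/mongodb/db/mongod1.log
net:
  port: 27001
EOF
sed -e 's|/db/1|/db/2|' \
    -e 's|mongod1\.log|mongod2.log|' \
    -e 's|27001|27002|' \
    /tmp/mongod-repl-1.conf > /tmp/mongod-repl-2.conf
cat /tmp/mongod-repl-2.conf
```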
Start the mongod processes for nodes 2 and 3:
vagrant@m103:~$ mongod --config ./mongod-repl-2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 6979
child process started successfully, parent exiting
vagrant@m103:~$ mongod --config ./mongod-repl-3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 7008
child process started successfully, parent exiting
Connect to the mongo shell on port 27001:
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27001/
MongoDB server version: 3.6.4-rc0
Confirm the replica set status again:
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2018-04-20T02:51:54.414Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524192708, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524192708, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524192708, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524192708, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 530,
"optime" : {
"ts" : Timestamp(1524192708, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T02:51:48Z"),
"electionTime" : Timestamp(1524192276, 2),
"electionDate" : ISODate("2018-04-20T02:44:36Z"),
"configVersion" : 1,
"self" : true
}
],
"ok" : 1,
"operationTime" : Timestamp(1524192708, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524192708, 1),
"signature" : {
"hash" : BinData(0,"3w15jmseCfffFYXGR5Do3wzYgOg="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Add nodes 2 and 3 to the replica set:
MongoDB Enterprise m103-repl:PRIMARY> rs.add("192.168.103.100:27002")
{
"ok" : 1,
"operationTime" : Timestamp(1524193032, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524193032, 1),
"signature" : {
"hash" : BinData(0,"zQy1x7sLBEiDiP4ZxJYMcmFX6IA="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
MongoDB Enterprise m103-repl:PRIMARY> rs.add("192.168.103.100:27003")
{
"ok" : 1,
"operationTime" : Timestamp(1524193037, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524193037, 1),
"signature" : {
"hash" : BinData(0,"tZOPE14j++zl4+9FvCfvC/OxNhc="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Check the replica set status again:
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2018-04-20T02:57:50.429Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 886,
"optime" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T02:57:48Z"),
"electionTime" : Timestamp(1524192276, 2),
"electionDate" : ISODate("2018-04-20T02:44:36Z"),
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.103.100:27002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 38,
"optime" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T02:57:48Z"),
"optimeDurableDate" : ISODate("2018-04-20T02:57:48Z"),
"lastHeartbeat" : ISODate("2018-04-20T02:57:49.692Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T02:57:50.207Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.103.100:27001",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "192.168.103.100:27003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 32,
"optime" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524193068, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T02:57:48Z"),
"optimeDurableDate" : ISODate("2018-04-20T02:57:48Z"),
"lastHeartbeat" : ISODate("2018-04-20T02:57:49.691Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T02:57:49.150Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.103.100:27002",
"configVersion" : 3
}
],
"ok" : 1,
"operationTime" : Timestamp(1524193068, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524193068, 1),
"signature" : {
"hash" : BinData(0,"bKC7oTmVF2q0EtCKk+gSrZJc7RU="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Nodes 2 and 3 have been added to the replica set as SECONDARY members. Exit the shell and run the validation script:
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
vagrant@m103:~$ validate_lab_initialize_local_replica_set
5a4d32f979235b109001c7bc
This is the validation key.
lab - Remove and Re-Add a Node
In this lab, you will make your replica set even more secure. In the previous lab, you used a keyfile so individual nodes authenticate to one another. However, they were still connecting to each other using the external IP address of the Vagrant box, even though all three nodes were in the same box.
You will modify the replica set so one of the nodes uses its local IP address instead of the external IP address of the Vagrant box. This way, all communication with that node will stay within the box.
m103 is an alias for localhost in this Vagrant box, but you must use this alias as the hostname because localhost is restricted.
To correctly reconfigure this node, you will have to remove the node from the replica set, and then add it back with the correct hostname. For this lab, you only need to do this for one of the nodes. You will not need to change the bindIP.
When you’re finished, run the validation script and enter the validation key you receive below. If you receive an error, it should give you some idea of what went wrong.
validate_lab_remove_readd_node
Solution
Confirm the hostname:
vagrant@m103:~$ hostname
m103
Edit the hosts file (without mapping the alias to the box's IP, the node could not be reached at all):
vagrant@m103:~$ sudo vim /etc/hosts
192.168.103.100 m103.mongodb.university m103
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
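Assuming getent is available, the mapping can be verified without connecting anything, since getent hosts resolves a name through /etc/hosts. The sketch below looks up localhost rather than m103, because the m103 alias only exists inside the Vagrant box:

```shell
# Sketch: getent resolves a hostname via /etc/hosts (and DNS), so it
# confirms the alias mapping the replica set will use; 'localhost'
# stands in for the box-only alias 'm103'.
getent hosts localhost
```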
Connect to the replica set through the mongo shell:
vagrant@m103:~$ mongo --port 27001 -u m103-admin -p m103-pass admin
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27001/admin
MongoDB server version: 3.6.4-rc0
Server has startup warnings:
2018-04-20T02:43:04.649+0000 I STORAGE [initandlisten]
2018-04-20T02:43:04.649+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-04-20T02:43:04.649+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
Confirm the replica set status:
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2018-04-20T03:25:50.499Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2566,
"optime" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T03:25:48Z"),
"electionTime" : Timestamp(1524192276, 2),
"electionDate" : ISODate("2018-04-20T02:44:36Z"),
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.103.100:27002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1718,
"optime" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T03:25:48Z"),
"optimeDurableDate" : ISODate("2018-04-20T03:25:48Z"),
"lastHeartbeat" : ISODate("2018-04-20T03:25:48.928Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T03:25:49.158Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.103.100:27001",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "192.168.103.100:27003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1712,
"optime" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524194748, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T03:25:48Z"),
"optimeDurableDate" : ISODate("2018-04-20T03:25:48Z"),
"lastHeartbeat" : ISODate("2018-04-20T03:25:48.899Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T03:25:50.270Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.103.100:27002",
"configVersion" : 3
}
],
"ok" : 1,
"operationTime" : Timestamp(1524194748, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524194748, 1),
"signature" : {
"hash" : BinData(0,"aV5AXhJ3cLOszc5/3+K/8sCpCUU="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Use node 3 for this exercise. First, remove it from the replica set:
MongoDB Enterprise m103-repl:PRIMARY> rs.remove("192.168.103.100:27003")
{
"ok" : 1,
"operationTime" : Timestamp(1524194849, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524194849, 1),
"signature" : {
"hash" : BinData(0,"UbMR0ykOtM9hsfOaz1uLMfwIH3Y="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Then add it back to the replica set by hostname:
MongoDB Enterprise m103-repl:PRIMARY> rs.add("m103:27003")
{
"ok" : 1,
"operationTime" : Timestamp(1524195318, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1524195318, 2),
"signature" : {
"hash" : BinData(0,"0/hxt+8Cc60hyJzBwWe+rJDOzdM="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
After allowing time for the node to sync, check the replica set status:
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2018-04-20T03:35:25.425Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3141,
"optime" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T03:35:18Z"),
"electionTime" : Timestamp(1524192276, 2),
"electionDate" : ISODate("2018-04-20T02:44:36Z"),
"configVersion" : 11,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.103.100:27002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2293,
"optime" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T03:35:18Z"),
"optimeDurableDate" : ISODate("2018-04-20T03:35:18Z"),
"lastHeartbeat" : ISODate("2018-04-20T03:35:24.732Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T03:35:25.264Z"),
"pingMs" : NumberLong(0),
"configVersion" : 11
},
{
"_id" : 2,
"name" : "m103:27003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4,
"optime" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1524195318, 2),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-04-20T03:35:18Z"),
"optimeDurableDate" : ISODate("2018-04-20T03:35:18Z"),
"lastHeartbeat" : ISODate("2018-04-20T03:35:24.737Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T03:35:25.094Z"),
"pingMs" : NumberLong(0),
"configVersion" : 11
}
],
"ok" : 1,
"operationTime" : Timestamp(1524195318, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1524195318, 2),
"signature" : {
"hash" : BinData(0,"0/hxt+8Cc60hyJzBwWe+rJDOzdM="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Node 3 has rejoined the replica set. Exit the shell and run the validation script:
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
vagrant@m103:~$ validate_lab_remove_readd_node
5a4fff19c0324e9feb9f60b9
This is the validation key.
lab - Writes with Failovers
In this lab, you will attempt to write data with a writeConcern to a replica set where one node has failed.
In order to simulate a node failure within your replica set, you will connect to the node individually and shut it down. Connecting back to the replica set and running rs.status() should show the failing node with a description like this:
{
"name" : "m103:27001",
"health" : 0,
"stateStr" : "(not reachable/healthy)",
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
}
With one of your nodes down, attempt to insert a document in your replica set by running the following commands:
use testDatabase
db.new_data.insert({"m103": "very fun"}, { writeConcern: { w: 3, wtimeout: 1000 }})
This will attempt to insert one record into a collection called testDatabase.new_data, while verifying that 3 nodes registered the write. It should return an error, because only 2 nodes are healthy.
Given the output of the insert command, and your knowledge of writeConcern, check all that apply:
Check all that apply:
- The write operation will always return with an error, even if wtimeout is not specified.
- When a writeConcernError occurs, the document is still written to the healthy nodes.
- The unhealthy node will have the inserted document when it is brought back online.
- w: "majority" would also cause this write operation to return with an error.
Solution
As the lab requires, force-kill the node 1 mongod:
vagrant@m103:~$ ps -ef|grep mongod
vagrant 6865 1 0 02:43 ? 00:00:20 mongod --config ./mongod-repl-1.conf
vagrant 6979 1 0 02:50 ? 00:00:17 mongod --config ./mongod-repl-2.conf
vagrant 7008 1 0 02:50 ? 00:00:17 mongod --config ./mongod-repl-3.conf
vagrant 7442 1 0 03:18 ? 00:00:05 mongod --dbpath allbymyselfdb --syslog --fork
vagrant 7905 6548 0 03:52 pts/0 00:00:00 grep --color=auto mongod
vagrant@m103:~$ kill -9 6865
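The ps | grep pattern above also matches the grep process itself (note the last line of the ps output); pgrep -f matches against the full command line and avoids this. A sketch against a throwaway sleep process standing in for a real mongod:

```shell
# Sketch: find a process by its full command line with pgrep -f, then
# signal it. 'sleep 300' stands in for a 'mongod --config ...' process.
sleep 300 &
target=$!
found=$(pgrep -f 'sleep 300')
kill "$target"
echo "found: $found"
```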
Check nodes 2 and 3 to find the new PRIMARY:
vagrant@m103:~$ mongo --port 27002 -u m103-admin -p m103-pass admin
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27002/admin
MongoDB server version: 3.6.4-rc0
Server has startup warnings:
2018-04-20T02:50:14.922+0000 I STORAGE [initandlisten]
2018-04-20T02:50:14.922+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-04-20T02:50:14.922+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
MongoDB Enterprise m103-repl:SECONDARY> exit
bye
vagrant@m103:~$ mongo --port 27003 -u m103-admin -p m103-pass admin
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27003/admin
MongoDB server version: 3.6.4-rc0
Server has startup warnings:
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten]
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
MongoDB Enterprise m103-repl:PRIMARY>
Confirm the replica set status:
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2018-04-20T03:56:56.933Z"),
"myState" : 1,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524196613, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524196613, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1524196613, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1524196613, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2018-04-20T03:56:55.998Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T03:53:11.418Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "192.168.103.100:27002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1298,
"optime" : {
"ts" : Timestamp(1524196613, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1524196613, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-04-20T03:56:53Z"),
"optimeDurableDate" : ISODate("2018-04-20T03:56:53Z"),
"lastHeartbeat" : ISODate("2018-04-20T03:56:55.973Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T03:56:56.875Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "m103:27003",
"configVersion" : 11
},
{
"_id" : 2,
"name" : "m103:27003",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3998,
"optime" : {
"ts" : Timestamp(1524196613, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-04-20T03:56:53Z"),
"electionTime" : Timestamp(1524196401, 1),
"electionDate" : ISODate("2018-04-20T03:53:21Z"),
"configVersion" : 11,
"self" : true
}
],
"ok" : 1,
"operationTime" : Timestamp(1524196613, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524196613, 1),
"signature" : {
"hash" : BinData(0,"BkHE37/7BKET5gczThnKeqFB80I="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Node 1 is now reported as not reachable/healthy. Run the insert on the primary:
MongoDB Enterprise m103-repl:PRIMARY> use testDatabase
switched to db testDatabase
MongoDB Enterprise m103-repl:PRIMARY> db.new_data.insert({"m103": "very fun"}, { writeConcern: { w: 3, wtimeout: 1000 }})
WriteResult({
"nInserted" : 1,
"writeConcernError" : {
"code" : 64,
"codeName" : "WriteConcernFailed",
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
}
})
An error is returned; now check whether the document was written anyway:
MongoDB Enterprise m103-repl:PRIMARY> db.new_data.find().count()
1
The document was written successfully, so this option is correct:
- When a writeConcernError occurs, the document is still written to the healthy nodes.
Now remove wtimeout and run the insert again:
MongoDB Enterprise m103-repl:PRIMARY> db.new_data.insert({"m103": "very fun"}, { writeConcern: { w: 3 }})
The statement hangs without returning an error, so this option is incorrect:
- The write operation will always return with an error, even if wtimeout is not specified.
Insert data with w: "majority":
MongoDB Enterprise m103-repl:PRIMARY> db.new_data.insert({"m103": "very fun"}, { writeConcern: { w:"majority" , wtimeout: 1000 }})
WriteResult({ "nInserted" : 1 })
No error is returned and the document is inserted successfully, so this option is incorrect:
- w: "majority" would also cause this write operation to return with an error.
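This behavior follows from how "majority" is computed: for an n-member replica set it is floor(n/2) + 1 voting members. A sketch of the arithmetic:

```shell
# Sketch: majority = floor(n/2) + 1. With 3 members and one down,
# the 2 healthy nodes still meet majority (2), so w: "majority"
# succeeds while w: 3 waits for the unreachable node and times out.
for n in 3 5 7; do
  echo "members=$n majority=$(( n / 2 + 1 ))"
done
```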
Restart node 1:
vagrant@m103:~$ mongod --config ./mongod-repl-1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 7961
child process started successfully, parent exiting
Connect to the replica set through node 3:
vagrant@m103:~$ mongo --port 27003 -u m103-admin -p m103-pass admin
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27003/admin
MongoDB server version: 3.6.4-rc0
Server has startup warnings:
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten]
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
MongoDB Enterprise m103-repl:PRIMARY>
Check the replica set status:
MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
"set" : "m103-repl",
"date" : ISODate("2018-04-20T04:06:54.853Z"),
"myState" : 1,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.103.100:27001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 48,
"optime" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-04-20T04:06:53Z"),
"optimeDurableDate" : ISODate("2018-04-20T04:06:53Z"),
"lastHeartbeat" : ISODate("2018-04-20T04:06:54.559Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T04:06:53.392Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "m103:27003",
"configVersion" : 11
},
{
"_id" : 1,
"name" : "192.168.103.100:27002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1896,
"optime" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-04-20T04:06:53Z"),
"optimeDurableDate" : ISODate("2018-04-20T04:06:53Z"),
"lastHeartbeat" : ISODate("2018-04-20T04:06:54.281Z"),
"lastHeartbeatRecv" : ISODate("2018-04-20T04:06:53.245Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "m103:27003",
"configVersion" : 11
},
{
"_id" : 2,
"name" : "m103:27003",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4596,
"optime" : {
"ts" : Timestamp(1524197213, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-04-20T04:06:53Z"),
"electionTime" : Timestamp(1524196401, 1),
"electionDate" : ISODate("2018-04-20T03:53:21Z"),
"configVersion" : 11,
"self" : true
}
],
"ok" : 1,
"operationTime" : Timestamp(1524197213, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1524197213, 1),
"signature" : {
"hash" : BinData(0,"O5tBbe39awz9HjPpztVMcsNDVoU="),
"keyId" : NumberLong("6546355982530772993")
}
}
}
Node 1 has automatically rejoined the replica set as a SECONDARY. Now connect to node 1:
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
vagrant@m103:~$ mongo --port 27001 -u m103-admin -p m103-pass admin
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27001/admin
MongoDB server version: 3.6.4-rc0
Server has startup warnings:
2018-04-20T04:06:05.331+0000 I STORAGE [initandlisten]
2018-04-20T04:06:05.331+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-04-20T04:06:05.331+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
MongoDB Enterprise m103-repl:SECONDARY>
Enable reads on this secondary:
MongoDB Enterprise m103-repl:SECONDARY> rs.slaveOk()
Check whether the data has been replicated:
MongoDB Enterprise m103-repl:SECONDARY> use testDatabase
switched to db testDatabase
MongoDB Enterprise m103-repl:SECONDARY> db.new_data.find().count()
3
The data has replicated, so this option is correct:
- The unhealthy node will have the inserted document when it is brought back online.
So the correct answers are:
- When a writeConcernError occurs, the document is still written to the healthy nodes.
- The unhealthy node will have the inserted document when it is brought back online.
lab - Read Concern and Read Preferences
In this lab, you will take advantage of different read preferences to increase the availability of your replica set.
To begin, load the dataset in your Vagrant box into your replica set:
mongoimport --drop \
--host m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003 \
-u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" \
--db applicationData --collection products /dataset/products.json
You can check that you’ve loaded the entire dataset by verifying that there are exactly 516784 documents in the applicationData.products collection:
use applicationData
db.products.count()
Once the dataset is fully imported into your replica set, you will simulate a node failure. This is similar to the previous lab, but this time you will shut down two nodes.
When two of your nodes are unresponsive, you will not be able to connect to the replica set. You will have to connect to the third node, which should be the only healthy node in the cluster.
Which of these readPreferences will allow you to read data from this node?
Check all that apply:
- secondaryPreferred
- primary
- primaryPreferred
- secondary
- nearest
Solution
First confirm the dataset file is present:
vagrant@m103:~$ ls /dataset/
products.json
Run the command given in the lab:
vagrant@m103:~$ mongoimport --drop \
> --host m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003 \
> -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" \
> --db applicationData --collection products /dataset/products.json
2018-04-20T06:39:02.031+0000 connected to: m103-repl/192.168.103.100:27002,192.168.103.100:27001,192.168.103.100:27003
2018-04-20T06:39:02.033+0000 dropping: applicationData.products
2018-04-20T06:39:05.018+0000 [#.......................] applicationData.products 6.43MB/87.9MB (7.3%)
...
2018-04-20T06:40:09.837+0000 [########################] applicationData.products 87.9MB/87.9MB (100.0%)
2018-04-20T06:40:09.837+0000 imported 516784 documents
Verify that the import succeeded:
vagrant@m103:~$ mongo --port 27003 -u m103-admin -p m103-pass admin
MongoDB shell version v3.6.4-rc0
connecting to: mongodb://127.0.0.1:27003/admin
MongoDB server version: 3.6.4-rc0
Server has startup warnings:
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten]
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-04-20T02:50:18.795+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
MongoDB Enterprise m103-repl:PRIMARY> use applicationData
switched to db applicationData
MongoDB Enterprise m103-repl:PRIMARY> db.products.count()
516784
As instructed, shut down nodes 1 and 2:
vagrant@m103:~$ ps -ef|grep mongod
vagrant 6979 1 0 02:50 ? 00:01:17 mongod --config ./mongod-repl-2.conf
vagrant 7008 1 0 02:50 ? 00:01:19 mongod --config ./mongod-repl-3.conf
vagrant 7442 1 0 03:18 ? 00:00:28 mongod --dbpath allbymyselfdb --syslog --fork
vagrant 7961 1 0 04:06 ? 00:00:57 mongod --config ./mongod-repl-1.conf
vagrant 9514 6548 0 06:43 pts/0 00:00:00 grep --color=auto mongod
vagrant@m103:~$ kill -9 7961
vagrant@m103:~$ kill -9 6979
Reconnect to node 3:
MongoDB Enterprise m103-repl:SECONDARY> show dbs
2018-04-20T06:44:44.177+0000 E QUERY [thread1] Error: listDatabases failed:{
"operationTime" : Timestamp(1524206633, 1),
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk",
"$clusterTime" : {
"clusterTime" : Timestamp(1524206633, 1),
"signature" : {
"hash" : BinData(0,"u55xpN/Bs4Xt2byaLJISt634uJ4="),
"keyId" : NumberLong("6546355982530772993")
}
}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:820:19
shellHelper@src/mongo/shell/utils.js:710:15
@(shellhelp2):1:1
With two nodes down, the set cannot keep a majority, so node 3 remains a SECONDARY and the read fails. This shows that the default primary read preference will not work here.
The table below describes each read preference mode:
Read Preference Mode | Description |
---|---|
primary | Default mode. All operations read from the current replica set primary. |
primaryPreferred | In most situations, operations read from the primary, but if it is unavailable, operations read from secondary members. |
secondary | All operations read from the secondary members of the replica set. |
secondaryPreferred | In most situations, operations read from secondary members, but if no secondary members are available, operations read from the primary. |
nearest | Operations read from the member of the replica set with the least network latency, irrespective of the member's type. |
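The lab scenario can be sketched as a small decision function (a toy model following the table above, not driver code): with only one reachable member, and that member a SECONDARY, every mode except `primary` can still serve the read.

```javascript
// Toy model of read preference routing: given the reachable members,
// decide whether a read with the given mode can be served at all.
function canRead(mode, members) {
  const primaryUp = members.some((m) => m.state === 'PRIMARY');
  const secondaryUp = members.some((m) => m.state === 'SECONDARY');
  switch (mode) {
    case 'primary':            return primaryUp;
    case 'primaryPreferred':   return primaryUp || secondaryUp;
    case 'secondary':          return secondaryUp;
    case 'secondaryPreferred': return secondaryUp || primaryUp;
    case 'nearest':            return primaryUp || secondaryUp; // any member will do
    default: throw new Error('unknown read preference: ' + mode);
  }
}

// Only node 3 survives; without a majority it steps down to SECONDARY.
const survivors = [{ host: '192.168.103.100:27003', state: 'SECONDARY' }];
```

Here `canRead('primary', survivors)` is false while the other four modes return true, matching the answers below.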
So per the lab, the answers are:
- secondaryPreferred
- primaryPreferred
- secondary
- nearest