MongoDB index and replica set, shard miscellaneous notes

1. Index

1. Index operation

1.1 Insert data

> use testdb
switched to db testdb
> for (i=1;i<=10000;i++) db.students.insert({name:"student"+i,age:(i%120),address:"#85 Wenhua Road,Zhengzhou,China"})

> db.students.find().count()
10000

1.2 Create an index

Build an ascending index on the name field:
> db.students.ensureIndex({name: 1})
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 1,
    "numIndexesAfter" : 2,
    "ok" : 1
}

View the indexes:
> db.students.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "testdb.students"
    },
    {
        "v" : 1,
        "key" : {
            "name" : 1
        },
        "name" : "name_1",
        "ns" : "testdb.students"
    }
]
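
ensureIndex() also accepts compound keys and an options document. As a sketch that is not part of the original transcript, here is a compound index on age and name built in the background, so index creation does not block other operations on the collection:

> db.students.ensureIndex({age: 1, name: 1}, {background: true})

The generated index is named after its key pattern ("age_1_name_1") and can be removed with dropIndex() in the same way as shown below.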

1.3 Delete an index

> db.students.dropIndex("name_1")
{ "nIndexesWas" : 2, "ok" : 1 }
> db.students.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "testdb.students"
    }
]
> 

1.4 Create a unique key index

> db.students.ensureIndex({name: 1},{unique: true})
> db.students.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "testdb.students"
    },
    {
        "v" : 1,
        "unique" : true,
        "key" : {
            "name" : 1
        },
        "name" : "name_1",
        "ns" : "testdb.students"
    }
]

Inserting a duplicate value now violates the unique constraint:
> db.students.insert({name: "student20",age: 20})
WriteResult({
    "nInserted" : 0,
    "writeError" : {
        "code" : 11000,
        "errmsg" : "E11000 duplicate key error index: testdb.students.$name_1 dup key: { : \"student20\" }"
    }
})

1.5 View the detailed execution plan of a find statement

> db.students.find({name: "student5000"}).explain("executionStats")
{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "testdb.students",
        "indexFilterSet" : false,
        "parsedQuery" : {
            "name" : {
                "$eq" : "student5000"
            }
        },
        "winningPlan" : {
            "stage" : "FETCH",
            "inputStage" : {
                "stage" : "IXSCAN",
                "keyPattern" : {
                    "name" : 1
                },
                "indexName" : "name_1",
                "isMultiKey" : false,
                "direction" : "forward",
                "indexBounds" : {
                    "name" : [
                        "[\"student5000\", \"student5000\"]"
                    ]
                }
            }
        },
        "rejectedPlans" : [ ]
    },
    "serverInfo" : {
        "host" : "master1.com",
        "port" : 27017,
        "version" : "3.0.0",
        "gitVersion" : "a841fd6394365954886924a35076691b4d149168"
    },
    "ok" : 1
}

View the execution plan for a query on records greater than "student5000":
db.students.find({name: {$gt: "student5000"}}).explain("executionStats")

Search the 150,000 records for names greater than "student80000", and compare the query with and without an index.
> for (i=1;i<=150000;i++) db.test.insert({name:"student"+i,age:(i%120),address:"#85 Wenhua Road,Zhengzhou,China"})
Query:
db.test.find({name: {$gt: "student80000"}}).explain("executionStats")

Comparison screenshots (not reproduced here): the left one is the full collection scan, the right one is the same query after the index is added.
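
Since the screenshots are not reproduced here, the fields worth comparing in the executionStats section are totalDocsExamined and executionTimeMillis. A rough sketch of the comparison (values will vary):

Without an index the winning plan is a COLLSCAN and every document in the collection is examined:
> db.test.find({name: {$gt: "student80000"}}).explain("executionStats").executionStats.totalDocsExamined

After building the index the winning plan becomes an IXSCAN and far fewer documents are examined:
> db.test.ensureIndex({name: 1})
> db.test.find({name: {$gt: "student80000"}}).explain("executionStats").executionStats.totalDocsExamined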

2. MongoDB replica set

1. mongod replica set configuration

1.1 Miscellaneous

The primary node writes data-modifying operations to the oplog; secondary nodes copy the oplog and apply it locally. The oplog is stored in the local database.

> show dbs
local   0.078GB
testdb  0.078GB
> use local
switched to db local
> show collections
startup_log
system.indexes

The oplog-related collections are only created once the replica set is started.
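
Once the replica set is running, the oplog lives in the capped collection local.oplog.rs. A small example (not from the original transcript) of looking at the most recent oplog entry:

testSet:PRIMARY> use local
switched to db local
testSet:PRIMARY> db.oplog.rs.find().sort({$natural: -1}).limit(1)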

1.2 Prepare three nodes

master1 (primary), master2, master3

1.3 Install MongoDB

Install MongoDB on master2 and master3:
[root@master2 mongodb-3.0.0]# ls
mongodb-org-server-3.0.0-1.el7.x86_64.rpm
mongodb-org-shell-3.0.0-1.el7.x86_64.rpm
mongodb-org-tools-3.0.0-1.el7.x86_64.rpm
[root@master2 mongodb-3.0.0]# yum install *.rpm

Configure master2:
[root@master2 ~]# mkdir -pv /mongodb/data
[root@master2 ~]# chown -R mongod.mongod /mongodb/

Copy the configuration from master1 to master2 and master3, then adjust it:
[root@master1 ~]# scp /etc/mongod.conf root@master2:/etc/
[root@master1 ~]# scp /etc/mongod.conf root@master3:/etc/

Start the service:
[root@master2 ~]# systemctl start mongod.service

Configure master3 the same way and start the mongod service.

1.4 Master node configuration

First stop the mongod service that was started on master1:
[root@master1 ~]# systemctl stop mongod.service

Enable the replica set feature on the primary node:
[root@master1 ~]# vim /etc/mongod.conf 

replSet=testSet     # replica set name
replIndexPrefetch=_id_only

Restart the service:
[root@master1 ~]# systemctl start mongod.service

Verify:
[root@master1 ~]# mongo

1.5 Replica set initialization on the primary node (master1)

Get help on the replication commands:
> rs.help()

Initialize the replica set on the primary node:
> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "master1.com:27017",
    "ok" : 1
}
testSet:OTHER> 

Primary node rs status:
testSet:PRIMARY> rs.status()
{
    "set" : "testSet",      #复制集名称
    "date" : ISODate("2017-01-16T14:36:29.948Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,      #节点标识
            "name" : "master1.com:27017",   #节点名称
            "health" : 1,   #节点健康状态
            "state" : 1,    #有没有状态信息
            "stateStr" : "PRIMARY", #节点角色
            "uptime" : 790, #运行时长
            "optime" : Timestamp(1484577363, 1),    #最后一次oplog时间戳
            "optimeDate" : ISODate("2017-01-16T14:36:03Z"), #最后一次oplog时间
            "electionTime" : Timestamp(1484577363, 2),  #选举时间戳
            "electionDate" : ISODate("2017-01-16T14:36:03Z"),   #选举时间
            "configVersion" : 1,
            "self" : true   #是不是当前节点
        }
    ],
    "ok" : 1
}

Primary node rs configuration:
testSet:PRIMARY> rs.conf()
{
    "_id" : "testSet",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "master1.com:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}

1.6 Add a slave node to the master node

testSet:PRIMARY> rs.add("10.201.106.132")
{ "ok" : 1 }

Check (output trimmed to the newly added member):
testSet:PRIMARY> rs.status()
        {
            "_id" : 1,
            "name" : "10.201.106.132:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 35,
            "optime" : Timestamp(1524666332, 1),
            "optimeDate" : ISODate("2018-04-25T14:25:32Z"),
            "lastHeartbeat" : ISODate("2018-04-25T14:26:07.009Z"),
            "lastHeartbeatRecv" : ISODate("2018-04-25T14:26:07.051Z"),
            "pingMs" : 0,
            "configVersion" : 2
        }
    ],

Check on master2 (the secondary):
[root@master2 ~]# mongo

You will hit this error:
Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }

Solution:
Run the rs.slaveOk() method.
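
Note that rs.slaveOk() only affects the current shell session. If you connect to secondaries frequently, one option is to put the call into the mongo shell startup file (~/.mongorc.js), for example:

[root@master2 ~]# echo "rs.slaveOk()" >> ~/.mongorc.js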

Check again:
testSet:SECONDARY> show dbs
local   2.077GB
testdb  0.078GB
testSet:SECONDARY> use testdb
switched to db testdb
testSet:SECONDARY> db.students.findOne()
{
    "_id" : ObjectId("587c9032fe3baa930c0f51d9"),
    "name" : "student1",
    "age" : 1,
    "address" : "#85 Wenhua Road,Zhengzhou,China"
}

Check which node is the primary:
testSet:SECONDARY> rs.isMaster()
{
    "setName" : "testSet",
    "setVersion" : 2,
    "ismaster" : false,
    "secondary" : true,
    "hosts" : [
        "master1.com:27017",
        "10.201.106.132:27017"
    ],
    "primary" : "master1.com:27017",    ###
    "me" : "10.201.106.132:27017",      ###
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2018-04-25T14:43:35.956Z"),
    "maxWireVersion" : 3,
    "minWireVersion" : 0,
    "ok" : 1
}

Add the third node (master3) on the primary:
[root@master1 ~]# mongo
testSet:PRIMARY> rs.add("10.201.106.133")
{ "ok" : 1 }

Enable reads on the secondary (master3):
[root@master3 ~]# mongo
testSet:SECONDARY> rs.slaveOk()

testSet:SECONDARY> use testdb
switched to db testdb
testSet:SECONDARY> db.students.findOne()
{
    "_id" : ObjectId("587c9032fe3baa930c0f51d9"),
    "name" : "student1",
    "age" : 1,
    "address" : "#85 Wenhua Road,Zhengzhou,China"
}

Once a secondary is added, it automatically clones all of the primary's databases, then starts replicating the primary's oplog, applying it locally and building indexes for the collections.

1.7 View rs configuration

testSet:SECONDARY> rs.conf()

1.8 Write data on the primary and test synchronization

testSet:PRIMARY> db.classes.insert({class: "One",nostu: 40})

Check on a secondary:
testSet:SECONDARY> db.classes.findOne()
{
    "_id" : ObjectId("5ae09653f7aa5c90df36dc59"),
    "class" : "One",
    "nostu" : 40
}

Secondaries refuse inserts:
testSet:SECONDARY> db.classes.insert({class: "Two",nostu: 50})
WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })

1.9 Take the primary node down and test failover

Manually step down the primary:
testSet:PRIMARY> rs.stepDown()

Check the status again; master3 has become the primary:
testSet:SECONDARY> rs.status()

On master3, the prompt shows the state change:
testSet:SECONDARY> 
testSet:PRIMARY> 

2. Other

2.1 View oplog size and synchronization time

testSet:PRIMARY> db.printReplicationInfo()
configured oplog size:   1165.03515625MB
log length start to end: 390secs (0.11hrs)
oplog first event time:  Wed Apr 25 2018 22:46:37 GMT+0800 (CST)
oplog last event time:   Wed Apr 25 2018 22:53:07 GMT+0800 (CST)
now:                     Wed Apr 25 2018 23:32:35 GMT+0800 (CST)

2.2 Raise master2's priority so it is preferred as primary

The collection behind rs.conf() is local.system.replset
local.system.replset.members[n].priority

*** This must be done on the primary node ***
First load the configuration into a cfg variable:
testSet:SECONDARY> cfg=rs.conf()
Then modify the value (member IDs start at 0):
testSet:SECONDARY> cfg.members[1].priority=2
2
Reload the configuration:
testSet:SECONDARY> rs.reconfig(cfg)
{ "ok" : 1 }
After the reconfiguration, master2 automatically becomes the primary and master3 goes back to being a secondary.
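
The same mechanism works in the opposite direction: setting a member's priority to 0 makes it ineligible to ever become primary. A sketch only, assuming the member to demote has index 2 and the commands are run against the current primary:

testSet:PRIMARY> cfg=rs.conf()
testSet:PRIMARY> cfg.members[2].priority=0
0
testSet:PRIMARY> rs.reconfig(cfg)
{ "ok" : 1 }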

2.3 Change master3 into a pure arbiter node

*** This must be configured on the primary node ***

First remove master3's secondary role:
testSet:PRIMARY> rs.remove("10.201.106.133:27017")
{ "ok" : 1 }

Re-add master3 as an arbiter:
testSet:PRIMARY> rs.addArb("10.201.106.133")
{ "ok" : 1 }

testSet:PRIMARY> rs.status()
        {
            "_id" : 2,
            "name" : "10.201.106.133:27017",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 21,
            "lastHeartbeat" : ISODate("2018-04-25T16:06:39.938Z"),
            "lastHeartbeatRecv" : ISODate("2018-04-25T16:06:39.930Z"),
            "pingMs" : 0,
            "syncingTo" : "master1.com:27017",
            "configVersion" : 6
        }

2.4 View slave information

testSet:PRIMARY> rs.printSlaveReplicationInfo()
source: master1.com:27017
    syncedTo: Thu Apr 26 2018 00:50:53 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary 
source: 10.201.106.133:27017
    syncedTo: Wed Apr 25 2018 23:53:51 GMT+0800 (CST)
    3422 secs (0.95 hrs) behind the primary

3. MongoDB sharding

In a production environment, it is recommended to run a pair of mongos servers made highly available with keepalived, at least three config servers to provide arbitration and redundancy, and multiple shard nodes.

1. Sharding (master1: mongos, master2: config server, master3/master4: shards), test environment

1.1 Environment Preparation

Stop the previously running services:
[root@master1 ~]# systemctl stop mongod
[root@master2 ~]# systemctl stop mongod
[root@master3 ~]# systemctl stop mongod

Delete the old data:
[root@master1 ~]# rm -rf /mongodb/data/*
[root@master2 ~]# rm -rf /mongodb/data/*
[root@master3 ~]# rm -rf /mongodb/data/*

Synchronize the clocks on all four nodes:
/usr/sbin/ntpdate ntp1.aliyun.com

Install MongoDB on master4:
[root@master4 mongodb-3.0.0]# ls
mongodb-org-server-3.0.0-1.el7.x86_64.rpm  mongodb-org-tools-3.0.0-1.el7.x86_64.rpm
mongodb-org-shell-3.0.0-1.el7.x86_64.rpm
[root@master4 mongodb-3.0.0]# yum install -y *.rpm

[root@master4 ~]# mkdir -pv /mongodb/data
[root@master4 ~]# chown -R mongod:mongod /mongodb/

1.2 First configure config-server (master2)

[root@master2 ~]# vim /etc/mongod.conf 

# comment out the replica set settings used earlier
#replSet=testSet
#replIndexPrefetch=_id_only

dbpath=/mongodb/data
# configure this node as a config server
configsvr=true

Start mongod:
[root@master2 ~]# systemctl start mongod

Check the listening ports (with configsvr=true, mongod listens on 27019 by default; 28019 is the HTTP status interface):
[root@master2 ~]# netstat -tanp  | grep mongod
tcp        0      0 0.0.0.0:27019           0.0.0.0:*               LISTEN      24036/mongod        
tcp        0      0 0.0.0.0:28019           0.0.0.0:*               LISTEN      24036/mongod  

1.3 mongos (master1) configuration

Install the mongos package:
[root@master1 mongodb-3.0.0]# yum install mongodb-org-mongos-3.0.0-1.el7.x86_64.rpm

Start it from the command line, pointing at the config server and running in the background:
[root@master1 ~]# mongos --configdb=10.201.106.132 --fork --logpath=/var/log/mongodb/mongos.log
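
Equivalently, the same options can be kept in a configuration file and loaded with mongos -f. A minimal sketch, assuming /etc/mongos.conf as the path (this file is not created by the packages):

[root@master1 ~]# cat /etc/mongos.conf
configdb=10.201.106.132
fork=true
logpath=/var/log/mongodb/mongos.log
[root@master1 ~]# mongos -f /etc/mongos.conf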

Check the listening ports:
[root@master1 ~]# netstat -tanp | grep mon
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      27801/mongos        
tcp        0      0 10.201.106.131:60956    10.201.106.132:27019    ESTABLISHED 27801/mongos        
tcp        0      0 10.201.106.131:60955    10.201.106.132:27019    ESTABLISHED 27801/mongos        
tcp        0      0 10.201.106.131:60958    10.201.106.132:27019    ESTABLISHED 27801/mongos        
tcp        0      0 10.201.106.131:60957    10.201.106.132:27019    ESTABLISHED 27801/mongos 

Connect:
[root@master1 ~]# mongo
View the current sharding status:
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5ae16bddf4bf9c27f1816692")
}
  shards:
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours: 
        No recent migrations
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }

1.4 Configuring shard nodes

Configure master3:
[root@master3 ~]# vim /etc/mongod.conf 

# remove the replica set settings used earlier
#replSet=testSet
#replIndexPrefetch=_id_only

# other settings stay the same
dbpath=/mongodb/data
#bind_ip=127.0.0.1

Start the service:
[root@master3 ~]# systemctl start mongod

master4:
[root@master4 ~]# vim /etc/mongod.conf 

dbpath=/mongodb/data
# comment out the 127.0.0.1 binding; the service will then listen on 0.0.0.0
#bind_ip=127.0.0.1

Start the service:
[root@master4 ~]# systemctl start mongod

1.5 Add a shard node on mongos (master1)

[root@master1 ~]# mongo
Add the first shard node:
mongos> sh.addShard("10.201.106.133")
{ "shardAdded" : "shard0000", "ok" : 1 }

Check the status:
mongos> sh.status()

Add the second shard node:
mongos> sh.addShard("10.201.106.134")
{ "shardAdded" : "shard0001", "ok" : 1 }

1.6 Enable sharding for a database

Sharding is applied at the collection level; collections that are not sharded stay on the database's primary shard.

Enable sharding for the testdb database:
mongos> sh.enableSharding("testdb")
{ "ok" : 1 }

Check the status; the testdb database now has sharding enabled:
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5ae16bddf4bf9c27f1816692")
}
  shards:
    {  "_id" : "shard0000",  "host" : "10.201.106.133:27017" }
    {  "_id" : "shard0001",  "host" : "10.201.106.134:27017" }
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours: 
        No recent migrations
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "testdb",  "partitioned" : true,  "primary" : "shard0000" }  #主shard

1.7 Enable sharding on a collection

Enable sharding on the students collection, distributing documents by the age field:
mongos> sh.shardCollection("testdb.students",{"age": 1})
{ "collectionsharded" : "testdb.students", "ok" : 1 }

Check:
mongos> sh.status()

Insert data (this takes a while; you can open another window and watch db.students.find().count()):
mongos> use testdb
switched to db testdb
mongos> for (i=1;i<=100000;i++) db.students.insert({name:"students"+i,age:(i%120),classes:"class"+(i%10),address:"www.magedu.com,MageEdu,#85 Wenhua Road,Zhenzhou,China"})

Check the status; there are now 5 chunks, split by age range:
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5ae16bddf4bf9c27f1816692")
}
  shards:
    {  "_id" : "shard0000",  "host" : "10.201.106.133:27017" }
    {  "_id" : "shard0001",  "host" : "10.201.106.134:27017" }
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours: 
        2 : Success
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "testdb",  "partitioned" : true,  "primary" : "shard0000" }
        testdb.students
            shard key: { "age" : 1 }
            chunks:
                shard0000   3         ###
                shard0001   2       ###
            { "age" : { "$minKey" : 1 } } -->> { "age" : 2 } on : shard0001 Timestamp(2, 0) 
            { "age" : 2 } -->> { "age" : 6 } on : shard0001 Timestamp(3, 0) 
            { "age" : 6 } -->> { "age" : 54 } on : shard0000 Timestamp(3, 1) 
            { "age" : 54 } -->> { "age" : 119 } on : shard0000 Timestamp(2, 3) 
            { "age" : 119 } -->> { "age" : { "$maxKey" : 1 } } on : shard0000 Timestamp(2, 4) 

1.8 View shard information

List the shards:
mongos> use admin
switched to db admin
mongos> db.runCommand("listShards")
{
    "shards" : [
        {
            "_id" : "shard0000",
            "host" : "10.201.106.133:27017"
        },
        {
            "_id" : "shard0001",
            "host" : "10.201.106.134:27017"
        }
    ],
    "ok" : 1
}

Show detailed cluster information:
mongos> db.printShardingStatus()

Sharding help:
mongos> sh.help()
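
The metadata behind these helpers lives in the config database, which can also be read directly through mongos (read-only; never modify these collections by hand). A small example of listing the chunks of the sharded collection:

mongos> use config
switched to db config
mongos> db.chunks.find({ns: "testdb.students"}, {min: 1, max: 1, shard: 1})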

1.9 View the balancer

Check whether the balancer is currently running (it starts automatically only when rebalancing is needed, so normally you can leave it alone):
mongos> sh.isBalancerRunning()
false

Check whether the balancer is enabled:
mongos> sh.getBalancerState()
true

Move a chunk (this causes the config server to update the metadata; manual chunk moves are not recommended unless absolutely necessary):
mongos> sh.moveChunk("testdb.students",{age: {$gt: 119}},"shard0000")
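
During maintenance windows the balancer can be paused and resumed by hand with the standard helpers; a short sketch:

mongos> sh.stopBalancer()
mongos> sh.getBalancerState()
false
mongos> sh.setBalancerState(true)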
