MongoDB sharding (replica set + sharding): adding a shard node

OS: CentOS 7.4
MongoDB: 3.6.6

The MongoDB replica set + sharding layout is as follows:

192.168.56.101 node1 configserver replset(27017、27018、27019)

192.168.56.102 node2 mongos(27017、27018、27019)

192.168.56.103 node3 shard1 replset(27017、27018、27019)
192.168.56.104 node4 shard2 replset(27017、27018、27019)
192.168.56.105 node5 shard3 replset(27017、27018、27019)

Now add a new shard4:
192.168.56.106 node6 shard4 replset(27017、27018、27019)

OS setup

Configure DNS

vi /etc/resolv.conf 
nameserver 8.8.8.8 
nameserver 8.8.4.4

Disable SELinux

# vi /etc/selinux/config
SELINUX=disabled
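
Editing /etc/selinux/config only takes effect after a reboot; to switch SELinux to permissive mode for the current session as well, you can also run:

# setenforce 0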

Create the required directories on the node6 data node

# mkdir -p /var/lib/{mongodb1,mongodb2,mongodb3}
# mkdir -p /var/log/mongodb

# chown -R mongodb:mongodb /var/lib/mongodb*
# chown -R mongodb:mongodb /var/log/mongodb
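
The chown commands assume a mongodb user and group already exist on node6 (a yum install creates a service account; a tarball install does not). If they are missing, a minimal sketch of creating them, with names matching the ownership used above:

# groupadd mongodb
# useradd -r -g mongodb -s /sbin/nologin mongodb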

Install MongoDB

Install MongoDB on node6 the same way it was installed on node1 through node5.
Download the tarball:
https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-3.6.6.tgz
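
If installing from the tarball, a minimal sketch of unpacking it and putting the binaries on the PATH (the target paths are illustrative):

# tar -zxvf mongodb-linux-x86_64-rhel70-3.6.6.tgz -C /usr/local/
# ln -s /usr/local/mongodb-linux-x86_64-rhel70-3.6.6 /usr/local/mongodb
# echo 'export PATH=/usr/local/mongodb/bin:$PATH' >> /etc/profile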

Or install it with yum by configuring the yum repository:
https://docs.mongodb.com/v3.6/tutorial/install-mongodb-on-red-hat/

# vi /etc/yum.repos.d/mongodb-org-3.6.repo
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
# yum install -y mongodb-org-3.6.6 mongodb-org-server-3.6.6 mongodb-org-shell-3.6.6 mongodb-org-mongos-3.6.6 mongodb-org-tools-3.6.6
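
The versioned package names above pin the install to 3.6.6. To keep a later yum update from upgrading MongoDB unintentionally, the MongoDB docs suggest adding an exclude line to /etc/yum.conf, for example:

# vi /etc/yum.conf
exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools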

MongoDB shard (data nodes)

The replSetName on node6 is shard4.

# vi /etc/mongod1.conf
systemLog:
    destination: file
    path: /var/log/mongodb/mongod1.log
    logAppend: true
    logRotate: reopen
storage:
    ## journal settings
    journal:
        enabled: true
    ## data file location
    dbPath: /var/lib/mongodb1
    ## one directory per database
    directoryPerDB: true
    ## storage engine
    engine: wiredTiger
    ## WiredTiger engine settings
    wiredTiger:
        engineConfig:
            ## maximum WiredTiger cache size (tune to the server; on a dedicated 32 GB server, set to 24)
            cacheSizeGB: 1
            ## store indexes in per-database directories as well
            directoryForIndexes: true
        ## collection compression (data volume is modest, so snappy keeps resource usage low)
        collectionConfig:
            blockCompressor: snappy
        ## index settings
        indexConfig:
            prefixCompression: true
## network settings
net:
    port: 27017
    bindIp: 0.0.0.0
## process management
processManagement:
    fork: true
## replication settings
replication:
    ## oplog size
    oplogSizeMB: 1024
    ## replica set name
    replSetName: shard4
## sharding
sharding:
    clusterRole: shardsvr
## security / authentication
security:
    authorization: enabled
    keyFile: /var/lib/mongodb1/mongoDB_keyfile
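
node6 runs three mongod instances, so /etc/mongod2.conf and /etc/mongod3.conf follow the same template; only the log path, dbPath, port, and keyFile path change. A sketch of the differing keys, assuming instance 2 listens on 27018 and instance 3 on 27019 as in the plan above:

# /etc/mongod2.conf (differences only)
systemLog:
    path: /var/log/mongodb/mongod2.log
storage:
    dbPath: /var/lib/mongodb2
net:
    port: 27018
security:
    keyFile: /var/lib/mongodb2/mongoDB_keyfile

# /etc/mongod3.conf (differences only)
systemLog:
    path: /var/log/mongodb/mongod3.log
storage:
    dbPath: /var/lib/mongodb3
net:
    port: 27019
security:
    keyFile: /var/lib/mongodb3/mongoDB_keyfile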

Start the shard mongods on node6

Create the keyfiles

$ vi /var/lib/mongodb1/mongoDB_keyfile
This is mongos mongodb key file DO NOT DELETE IT

$ vi /var/lib/mongodb2/mongoDB_keyfile
This is mongos mongodb key file DO NOT DELETE IT

$ vi /var/lib/mongodb3/mongoDB_keyfile
This is mongos mongodb key file DO NOT DELETE IT

$ chmod 600 /var/lib/mongodb1/mongoDB_keyfile
$ chmod 600 /var/lib/mongodb2/mongoDB_keyfile
$ chmod 600 /var/lib/mongodb3/mongoDB_keyfile
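
Every mongod and mongos in the sharded cluster authenticates with the same keyfile, so the contents on node6 must be identical to the keyfile already used by the config servers, mongos, and existing shards. Rather than retyping it, you can copy it from an existing member (path assumed to match the earlier setup), for example:

$ scp node3:/var/lib/mongodb1/mongoDB_keyfile /var/lib/mongodb1/mongoDB_keyfile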

As before, comment out the authorization and keyFile settings in the config files before the first start (authentication cannot succeed until the admin user has been created):

$ mongod --config  /etc/mongod1.conf
$ mongod --config  /etc/mongod2.conf
$ mongod --config  /etc/mongod3.conf
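
Before initializing the replica set, you can confirm that all three instances started and are listening:

$ ps -ef | grep [m]ongod
$ ss -lntp | grep -E '27017|27018|27019'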

Initialize the shard replica set (first connect to one of the instances with the mongo shell). When running this on a different node, adjust the _id values and IP addresses:

> config = {
    _id : "shard4",
    members : [
      {_id : 0, host : "192.168.56.106:27017"},
      {_id : 1, host : "192.168.56.106:27018"},
      {_id : 2, host : "192.168.56.106:27019"}
    ]
  }
> rs.initiate(config)
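
Once rs.initiate() returns { "ok" : 1 }, you can confirm that one member has become PRIMARY and the other two SECONDARY:

> rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr) })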

After the replica set is initialized, create the admin user:

> use admin
switched to db admin
> db.getUsers()
> db.createUser(
    {
      user: "root",
      pwd: "rootroot",
      roles: [
        { role: "readWriteAnyDatabase", db: "admin" },
        { role: "dbAdminAnyDatabase", db: "admin" },
        { role: "userAdminAnyDatabase", db: "admin" },
        { role: "clusterAdmin", db: "admin" }
      ]
    }
  )

Shut down the mongod instances

$ mongod --config  /etc/mongod1.conf --shutdown
$ mongod --config  /etc/mongod2.conf --shutdown
$ mongod --config  /etc/mongod3.conf --shutdown

Re-enable authorization and keyFile in the config files, then start the instances again:

$ mongod --config  /etc/mongod1.conf
$ mongod --config  /etc/mongod2.conf
$ mongod --config  /etc/mongod3.conf
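
From this point on, connections to these mongods must authenticate, for example:

$ mongo --port 27017 -u root -p rootroot --authenticationDatabase admin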

The shard4 replica set is now configured.

Add node6 to the MongoDB sharded cluster

Log in to mongos on node2:

$ mongo --port 27017

mongos> use admin
mongos> db.auth('root','rootroot')
mongos> show dbs
mongos> use config
mongos> sh.status()

mongos> sh.addShard("shard4/192.168.56.106:27017,192.168.56.106:27018,192.168.56.106:27019");
{
    "shardAdded" : "shard4",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1532420976, 8),
        "signature" : {
            "hash" : BinData(0,"DJmhE9tO67RqE01dJ9Afeavd8sc="),
            "keyId" : NumberLong("6581372726341009427")
        }
    },
    "operationTime" : Timestamp(1532420976, 8)
}

shard4 is now part of the sharded cluster.

Verification

Log in to mongos on node2:

$ mongo --port 27018

mongos> use admin
mongos> db.auth('root','rootroot')
mongos> show dbs
mongos> use config
mongos> db.getCollectionNames()
[
    "actionlog",
    "changelog",
    "chunks",
    "collections",
    "databases",
    "lockpings",
    "locks",
    "migrations",
    "mongos",
    "settings",
    "shards",
    "tags",
    "transactions",
    "version"
]

mongos> db.mongos.find()
{ "_id" : "node2:27017", "advisoryHostFQDNs" : [ ], "mongoVersion" : "3.6.6", "ping" : ISODate("2018-07-23T20:24:55.557Z"), "up" : NumberLong(9356), "waiting" : true }
{ "_id" : "node2:27018", "advisoryHostFQDNs" : [ ], "mongoVersion" : "3.6.6", "ping" : ISODate("2018-07-23T20:24:55.558Z"), "up" : NumberLong(9351), "waiting" : true }
{ "_id" : "node2:27019", "advisoryHostFQDNs" : [ ], "mongoVersion" : "3.6.6", "ping" : ISODate("2018-07-23T20:24:55.559Z"), "up" : NumberLong(9346), "waiting" : true }

mongos> db.shards.find()
{ "_id" : "shard1", "host" : "shard1/192.168.56.103:27017,192.168.56.103:27018,192.168.56.103:27019", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/192.168.56.104:27017,192.168.56.104:27018,192.168.56.104:27019", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/192.168.56.105:27017,192.168.56.105:27018,192.168.56.105:27019", "state" : 1 }
{ "_id" : "shard4", "host" : "shard4/192.168.56.106:27017,192.168.56.106:27018,192.168.56.106:27019", "state" : 1 }

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b55bba0b4856e5663e0a7ad")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.56.103:27017,192.168.56.103:27018,192.168.56.103:27019",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.56.104:27017,192.168.56.104:27018,192.168.56.104:27019",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.56.105:27017,192.168.56.105:27018,192.168.56.105:27019",  "state" : 1 }
        {  "_id" : "shard4",  "host" : "shard4/192.168.56.106:27017,192.168.56.106:27018,192.168.56.106:27019",  "state" : 1 }
  active mongoses:
        "3.6.6" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                10 : Success
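
Once shard4 has joined, the balancer will gradually migrate chunks of existing sharded collections onto it. A quick way to watch the per-shard chunk distribution through mongos:

mongos> use config
mongos> db.chunks.aggregate([ { $group: { _id: "$shard", nChunks: { $sum: 1 } } } ])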

References:
https://docs.mongodb.com/v3.6/sharding/
http://www.mongoing.com/


Reposted from blog.csdn.net/ctypyb2002/article/details/81187593