I. Replica Set and Shard Allocation
Environment:
1. Three physical machines, with IPs X.X.X.75, X.X.X.76, and X.X.X.77
2. OS: CentOS 7.2
Cluster composition:
Three shard replica sets + one config server replica set + three mongos entry points
Allocation plan:
1. Three data shards: shard01, shard02, and shard03. Each shard is a replica set of three nodes: one primary, one secondary, and one arbiter. The arbiter only votes in elections and holds no data, so each shard effectively keeps one primary copy plus one backup copy of its data.
The specific allocation is as follows:
shard01: 75 PRIMARY, 76 SECONDARY, 77 ARBITER
shard02: 75 ARBITER, 76 PRIMARY, 77 SECONDARY
shard03: 75 SECONDARY, 76 ARBITER, 77 PRIMARY
2. The config servers form a replica set with one primary and two secondaries.
Specifically:
config: 75 PRIMARY, 76 SECONDARY, 77 SECONDARY
3. One mongos runs on each of 75, 76, and 77 to spread the load. All five ports are summarized below.
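For reference, every machine runs the same five processes on the same ports (taken from the configuration files in section II), so one port map covers all three hosts:

Process   Role                 Port
mongos    query router         27017
shard01   shard replica set    27018
config    config server        27019
shard02   shard replica set    27118
shard03   shard replica set    27218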
II. Configuration Details
Configuration for shard01, shard01.yaml:
systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/data_shard_01/log/shard01.log"
processManagement:
  fork: true
net:
  port: 27018
storage:
  dbPath: "/usr/local/mongodb/instance/data_shard_01/data/"
replication:
  replSetName: "shard01"
sharding:
  clusterRole: shardsvr
Configuration for shard02, shard02.yaml (identical to shard01 apart from the port, paths, and replica set name):
systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/data_shard_02/log/shard02.log"
processManagement:
  fork: true
net:
  port: 27118
storage:
  dbPath: "/usr/local/mongodb/instance/data_shard_02/data/"
replication:
  replSetName: "shard02"
sharding:
  clusterRole: shardsvr
Configuration for shard03, shard03.yaml:
systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/data_shard_03/log/shard03.log"
processManagement:
  fork: true
net:
  port: 27218
storage:
  dbPath: "/usr/local/mongodb/instance/data_shard_03/data/"
replication:
  replSetName: "shard03"
sharding:
  clusterRole: shardsvr
Configuration for the config servers, config.yaml:
systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/config/log/config.log"
storage:
  dbPath: "/usr/local/mongodb/instance/config/data"
net:
  port: 27019
processManagement:
  fork: true
sharding:
  clusterRole: configsvr
replication:
  replSetName: cfgSet
Configuration for mongos, mongos.yaml. mongos stores no data of its own, so there is no storage section, and configDB names the config server replica set in the replSetName/host:port,... form required since config servers must run as a replica set in 3.4:
systemLog:
  destination: file
  path: "/usr/local/mongodb/instance/mongos/log/mongos.log"
net:
  port: 27017
processManagement:
  fork: true
sharding:
  configDB: cfgSet/X.X.X.75:27019,X.X.X.76:27019,X.X.X.77:27019
III. Setup Steps
1. Download the RHEL 7 Linux 64-bit x64 build from https://www.mongodb.com/download-center#community
2. Extract it to /usr/local/mongodb/ and rename the extracted directory to mongodb3.4.3
3. Add MongoDB to the environment PATH:
> vim /etc/profile
export PATH=$PATH:/usr/local/mongodb/mongodb3.4.3/bin
> source /etc/profile
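You can confirm the binaries are on the PATH before continuing:
> mongod --version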
4. Create the directory structure:
> cd /usr/local/mongodb/instance
Create the following directories:
.
├── config
│   ├── config.yaml
│   ├── data
│   └── log
├── data_shard_01
│   ├── data
│   ├── log
│   └── shard01.yaml
├── data_shard_02
│   ├── data
│   ├── log
│   └── shard02.yaml
├── data_shard_03
│   ├── data
│   ├── log
│   └── shard03.yaml
└── mongos
    ├── log
    └── mongos.yaml
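The whole tree can be created in one command from /usr/local/mongodb/instance (the YAML files themselves still need to be written as in section II):
> mkdir -p config/{data,log} data_shard_01/{data,log} data_shard_02/{data,log} data_shard_03/{data,log} mongos/log
Since the configuration files are identical on all three machines, one option is to build the tree once on 75 and copy it to the other hosts (a sketch, assuming root SSH access between them):
> scp -r /usr/local/mongodb/instance root@X.X.X.76:/usr/local/mongodb/
> scp -r /usr/local/mongodb/instance root@X.X.X.77:/usr/local/mongodb/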
5. Build the same directory structure on 75, 76, and 77, then start the shard server processes on each machine:
> mongod -f ./data_shard_01/shard01.yaml
> mongod -f ./data_shard_02/shard02.yaml
> mongod -f ./data_shard_03/shard03.yaml
Check with ps on each of the three machines; each should now show three mongod processes:
[root@test03 instance]# ps aux | grep mongod
root 2421 0.7 0.1 381084 39576 ? Sl 11:14 2:36 mongod -f ./data_shard_01/shard01.yaml
root 2468 1.1 0.1 858008 51076 ? Sl 11:58 3:31 mongod -f ./data_shard_02/shard02.yaml
root 2575 1.0 0.1 886996 46616 ? Sl 12:04 3:14 mongod -f ./data_shard_03/shard03.yaml
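You can also confirm that each mongod is listening on its expected port (a quick check; ss ships with CentOS 7):
> ss -lntp | grep mongod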
6. Configure the replica sets for shard01, shard02, and shard03. Members with higher priority are preferred in elections, so the priority values below pin each shard's primary to the machine planned in section I.
The configuration for shard01 is:
{
    _id: "shard01",
    members: [
        { _id: 0, host: "X.X.X.75:27018", priority: 3 },
        { _id: 1, host: "X.X.X.76:27018", priority: 2 },
        { _id: 2, host: "X.X.X.77:27018", arbiterOnly: true }
    ]
}
The configuration for shard02 is:
{
    _id: "shard02",
    members: [
        { _id: 0, host: "X.X.X.75:27118", arbiterOnly: true },
        { _id: 1, host: "X.X.X.76:27118", priority: 3 },
        { _id: 2, host: "X.X.X.77:27118", priority: 2 }
    ]
}
The configuration for shard03 is:
{
    _id: "shard03",
    members: [
        { _id: 0, host: "X.X.X.75:27218", priority: 2 },
        { _id: 1, host: "X.X.X.76:27218", arbiterOnly: true },
        { _id: 2, host: "X.X.X.77:27218", priority: 3 }
    ]
}
Log in to 75:
> mongo --port 27018
> cfg = {
    _id: "shard01",
    members: [
        { _id: 0, host: "X.X.X.75:27018", priority: 3 },
        { _id: 1, host: "X.X.X.76:27018", priority: 2 },
        { _id: 2, host: "X.X.X.77:27018", arbiterOnly: true }
    ]
}
> rs.initiate(cfg)
> rs.status()
If the output looks like the following, the configuration succeeded (state 1 is PRIMARY, 2 is SECONDARY, 7 is ARBITER):
{
    "set" : "shard01",
    "date" : ISODate("2017-05-02T09:42:22.484Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1493718135, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1493718135, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1493718135, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "X.X.X.75:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 23308,
            "optime" : {
                "ts" : Timestamp(1493718135, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-05-02T09:42:15Z"),
            "electionTime" : Timestamp(1493696804, 1),
            "electionDate" : ISODate("2017-05-02T03:46:44Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "X.X.X.76:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 21349,
            "optime" : {
                "ts" : Timestamp(1493718135, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1493718135, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-05-02T09:42:15Z"),
            "optimeDurableDate" : ISODate("2017-05-02T09:42:15Z"),
            "lastHeartbeat" : ISODate("2017-05-02T09:42:22.203Z"),
            "lastHeartbeatRecv" : ISODate("2017-05-02T09:42:22.255Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "X.X.X.75:27018",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "X.X.X.77:27018",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 21349,
            "lastHeartbeat" : ISODate("2017-05-02T09:42:21.882Z"),
            "lastHeartbeatRecv" : ISODate("2017-05-02T09:42:21.598Z"),
            "pingMs" : NumberLong(1),
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
Then log in to 76 and 77 and initiate the replica sets for shard02 and shard03 the same way:
> mongo --port 27118    # configure shard02
> mongo --port 27218    # configure shard03
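For example, on 76, shard02 can be initiated in a single call, reusing the shard02 document from above:
> mongo --port 27118
> rs.initiate({
    _id: "shard02",
    members: [
        { _id: 0, host: "X.X.X.75:27118", arbiterOnly: true },
        { _id: 1, host: "X.X.X.76:27118", priority: 3 },
        { _id: 2, host: "X.X.X.77:27118", priority: 2 }
    ]
})
> rs.status()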
7. Configure the config servers
Start the config server on each of the three machines:
> mongod -f ./config/config.yaml
At this point each machine should have four mongod processes:
[root@test03 instance]# ps aux | grep mongod
root 2421 0.7 0.1 381084 39584 ? Sl 11:14 2:54 mongod -f ./data_shard_01/shard01.yaml
root 2468 1.1 0.1 858008 51016 ? Sl 11:58 3:58 mongod -f ./data_shard_02/shard02.yaml
root 2575 1.0 0.1 886996 46968 ? Sl 12:04 3:41 mongod -f ./data_shard_03/shard03.yaml
root 2771 1.2 0.1 866216 48772 ? Sl 14:15 2:45 mongod -f ./config/config.yaml
root 3228 0.0 0.0 112664 984 pts/0 S+ 17:51 0:00 grep --color=auto mongod
8. Configure the config server replica set, which has one primary and two secondaries:
Log in to 75:
> mongo --port 27019
> cfg = {
    _id: "cfgSet",
    configsvr: true,
    members: [
        { _id: 0, host: "X.X.X.75:27019", priority: 3 },
        { _id: 1, host: "X.X.X.76:27019", priority: 2 },
        { _id: 2, host: "X.X.X.77:27019", priority: 1 }
    ]
}
> rs.initiate(cfg)
> rs.status()
If the detailed member information is shown, the config server replica set is complete.
9. Configure mongos
Start the mongos process on each of the three machines:
> mongos -f ./mongos/mongos.yaml
At this point each machine should have five mongo processes:
[root@test03 instance]# ps aux | grep mongo
root 2421 0.7 0.1 381084 39596 ? Sl 11:14 2:58 mongod -f ./data_shard_01/shard01.yaml
root 2468 1.1 0.1 858008 50848 ? Sl 11:58 4:03 mongod -f ./data_shard_02/shard02.yaml
root 2575 1.0 0.1 886996 46900 ? Sl 12:04 3:47 mongod -f ./data_shard_03/shard03.yaml
root 2771 1.2 0.1 866216 48696 ? Sl 14:15 2:52 mongod -f ./config/config.yaml
root 2927 0.3 0.0 217972 9900 ? Sl 14:45 0:39 mongos -f ./mongos/mongos.yaml
root 3232 0.0 0.0 112664 980 pts/0 S+ 18:00 0:00 grep --color=auto mongo
Add the shards to the cluster through any of the mongos instances:
> mongo --port 27017
> use admin
switched to db admin
>db.runCommand({addshard:"shard01/X.X.X.75:27018,X.X.X.76:27018,X.X.X.77:27018"})
>db.runCommand({addshard:"shard02/X.X.X.75:27118,X.X.X.76:27118,X.X.X.77:27118"})
>db.runCommand({addshard:"shard03/X.X.X.75:27218,X.X.X.76:27218,X.X.X.77:27218"})
> sh.status()
Output like the following indicates success (note that mongos lists each shard without its arbiter):
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5908240e38e25ae4cf8f20e8")
  }
  shards:
        { "_id" : "shard01", "host" : "shard01/X.X.X.75:27018,X.X.X.76:27018", "state" : 1 }
        { "_id" : "shard02", "host" : "shard02/X.X.X.76:27118,X.X.X.77:27118", "state" : 1 }
        { "_id" : "shard03", "host" : "shard03/X.X.X.75:27218,X.X.X.77:27218", "state" : 1 }
  active mongoses:
        "3.4.3" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Balancer lock taken at Tue May 02 2017 14:15:45 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                2 : Success
  databases:
The MongoDB 3.4.3 cluster setup is now complete; clients simply connect to one of the mongos instances on port 27017. Note that the databases list above is still empty: data only distributes across shards once sharding is enabled per database and collection, as sketched below.
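A minimal sketch, run against any mongos; testdb and testdb.users are hypothetical names, and a hashed _id is just one possible choice of shard key:
> mongo --port 27017
> sh.enableSharding("testdb")                            // hypothetical database name
> sh.shardCollection("testdb.users", { _id: "hashed" })  // hypothetical collection, hashed _id as shard key
After this, sh.status() lists testdb under databases and shows its chunks spread across shard01, shard02, and shard03.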