MongoDB Cluster Setup: Sharding + Replica Sets Architecture (2)

Reference: http://blog.51cto.com/kaliarch/2047358

1. Overview

1.1 Background

In a replica set, every secondary holds a full copy of the data, so under high concurrency and large data volumes the nodes come under heavy pressure. To address this, and to keep the cluster horizontally scalable as the data grows, MongoDB provides the sharding mechanism.

1.2 What sharding is

Sharding splits a database and spreads it across multiple machines, so that more data can be stored and heavier loads handled without one extremely powerful server. The collection is cut into small chunks, the chunks are distributed across the shards, and each shard carries only part of the total data. A routing process, mongos, knows which chunk lives on which shard and directs every operation accordingly.
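The routing idea can be sketched as a toy model in Python (an illustration only, not mongos's actual implementation): chunks are contiguous shard-key ranges, and the router looks up which shard owns the range containing a given key.

```python
import bisect

# Toy model of chunk routing: split points divide the shard-key space into
# contiguous ranges (chunks); each range is owned by one shard.
chunk_bounds = [100, 200, 300]                           # split points
chunk_owners = ["shard1", "shard2", "shard3", "shard1"]  # one owner per range

def route(key):
    """Return the shard owning the chunk that contains `key`."""
    return chunk_owners[bisect.bisect_right(chunk_bounds, key)]

print(route(42))   # key < 100        -> shard1
print(route(250))  # 200 <= key < 300 -> shard3
```

mongos performs essentially this lookup against the chunk metadata cached from the config servers, then forwards the operation to the owning shard.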

1.3 Core components

Four components are involved: mongos, config server, shard, and replica set.

mongos: the entry point for all cluster requests. Every request is coordinated through mongos, so the application does not need to implement routing itself. mongos is a request dispatcher that forwards external requests to the appropriate shard servers. Since it is the single entry point, mongos is usually deployed with HA to avoid a single point of failure.

config server: stores the configuration of all cluster metadata (sharding and routing). mongos does not persist the shard and routing information itself; it caches it in memory. On first start or on restart, mongos loads this configuration from the config servers, and when the configuration changes the config servers notify all mongos instances to refresh their state, keeping request routing accurate. Production deployments run multiple config servers so the metadata cannot be lost with a single node.

shard: with truly large data sets, storing, say, 1 TB on a single server puts enormous pressure on the machine, whether on disk, network I/O, CPU, or memory. Spreading that 1 TB over several servers leaves each with a manageable share. Once the sharding rules are set in a MongoDB cluster, operating on the database through mongos automatically forwards each request to the correct backend shard server.

replica set: in the overall cluster architecture, if a single shard machine goes offline, part of the cluster's data becomes unavailable, which is unacceptable. Each shard node is therefore itself a replica set to guarantee data reliability; in production this is typically 2 data-bearing replicas plus 1 arbiter.
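Why 2 replicas + 1 arbiter keeps the set available can be sketched quickly (a simplified model, not MongoDB's actual election protocol): elections require a strict majority of voting members, and the arbiter contributes a vote without storing data.

```python
# Simplified quorum check: a replica set can elect a primary only while a
# strict majority of its voting members is reachable. An arbiter counts as
# a voter but stores no data.
def has_quorum(voting_members, reachable):
    return reachable > voting_members // 2

# PSA layout (primary + secondary + arbiter) = 3 voters: losing any single
# node still leaves 2 reachable voters, so a primary can be elected.
print(has_quorum(3, 2))  # True
print(has_quorum(3, 1))  # False: the set degrades to read-only
```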

1.4 Architecture diagram

 

2. Installation and Deployment

2.1 Environment

To save servers, a multi-instance layout is used: three mongos, three config servers, and each server runs shard instances in different roles (so that data later distributes evenly, the three shards take different roles on each server). Within each shard, a replica set provides high availability. Hosts and ports are as follows:

Hostname    IP address    config server port    mongos port    shard roles
docker-1    172.17.0.2    20000                 30000          primary: 27017, secondary: 27018, arbiter: 27019
docker-2    172.17.0.3    20000                 30000          arbiter: 27017, primary: 27018, secondary: 27019
docker-3    172.17.0.4    20000                 30000          secondary: 27017, arbiter: 27018, primary: 27019

2.2 Installation

2.2.1 Download the software

wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.6.3.tgz

-c: resume an interrupted download

tar -zxvf mongodb-linux-x86_64-3.6.3.tgz

ln -sv mongodb-linux-x86_64-3.6.3 /usr/local/mongodb

-s: create a symbolic link

-v: verbose, show what is being done

 

echo 'export PATH=$PATH:/usr/local/mongodb/bin' > /etc/profile.d/mongodb.sh

source /etc/profile.d/mongodb.sh

 

2.2.2 Create directories

Create the directories and log files on docker-1, docker-2, and docker-3:

for s in server1 server2 server3; do
  mkdir -p /root/application/program/mongodb/data/$s/{logs,conf,data,socket}
  mkdir -p /root/application/program/mongodb/data/$s/data/{mongod-27017,mongod-27018,mongod-27019,mongosvr-20000}
done

2.2.3 Start the Docker containers

cd /root/application/program/mongodb

docker run -d -v `pwd`/data/server1:/mongodb -p 27017:27017 docker.io/mongodb:3.6.3 /usr/sbin/init

docker run -d -v `pwd`/data/server2:/mongodb -p 27018:27017 docker.io/mongodb:3.6.3 /usr/sbin/init

docker run -d -v `pwd`/data/server3:/mongodb -p 27019:27017 docker.io/mongodb:3.6.3 /usr/sbin/init

 

2.2.4 Configure the config server replica set

Since MongoDB 3.4, the config servers must themselves be deployed as a replica set; here the replica set is named configdb.

Create the config server configuration file on all three servers and start the service:

cat >> /mongodb/mongosvr-20000.conf <<ENDF
systemLog:
 destination: file
### log file location
 path: /mongodb/logs/mongosvr-20000.log  # config server log file
 logAppend: true
storage:
## journal settings
 journal:
  enabled: true
## data file location
 dbPath: /mongodb/data/mongosvr-20000
## one directory per database
 directoryPerDB: true
## storage engine
 engine: wiredTiger
## WiredTiger engine settings
 wiredTiger:
  engineConfig:
## maximum WiredTiger cache size (tune to the server's RAM)
   cacheSizeGB: 10
## store indexes in per-database directories as well
   directoryForIndexes: true
## collection compression
  collectionConfig:
   blockCompressor: zlib
## index settings
  indexConfig:
   prefixCompression: true
processManagement:
 fork: true  # fork and run in background
 pidFilePath: /mongodb/socket/mongodsvr-20000.pid
## network settings
net:
 port: 20000
 bindIp: 172.17.0.2    # change the bind IP on each server
sharding:
 clusterRole: configsvr
ENDF

 

mongod -f mongosvr-20000.conf -replSet configdb

[root@a35e154acb47 mongodb]# mongod -f mongosvr-20000.conf -replSet configdb

about to fork child process, waiting until server is ready for connections.

forked process: 204

child process started successfully, parent exiting


[root@a35e154acb47 mongodb]# ps -ef | grep mongo

root        204      0 22 06:57 ?        00:00:00 mongod -f mongosvr-20000.conf -replSet configdb

root        240    170  0 06:57 ?        00:00:00 grep --color=auto mongo

[root@a35e154acb47 mongodb]#

 

Log in to any one of the servers and initialize the config server replica set:

config = {_id:"configdb",members:[
{_id:0,host:"172.17.0.2:20000"},
{_id:1,host:"172.17.0.3:20000"},
{_id:2,host:"172.17.0.4:20000"}]
}
rs.initiate(config)

 

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:20000

> use admin

switched to db admin

> config = {_id:"configdb",members:[             

... {_id:0,host:"172.17.0.2:20000"},

... {_id:1,host:"172.17.0.3:20000"},

... {_id:2,host:"172.17.0.4:20000"},]

... }

{

"_id" : "configdb",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:20000"

},

{

"_id" : 1,

"host" : "172.17.0.3:20000"

},

{

"_id" : 2,

"host" : "172.17.0.4:20000"

}

]

}

> rs.initiate(config)

{

"ok" : 1,

"operationTime" : Timestamp(1534231136, 1),

"$gleStats" : {

"lastOpTime" : Timestamp(1534231136, 1),

"electionId" : ObjectId("000000000000000000000000")

},

"$clusterTime" : {

"clusterTime" : Timestamp(1534231136, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

 

Check the replica set status:

configdb:SECONDARY> rs.status()

{

"set" : "configdb",

"date" : ISODate("2018-08-14T07:19:04.515Z"),

"myState" : 2,

"term" : NumberLong(0),

"configsvr" : true,

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(0, 0),

"t" : NumberLong(-1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"durableOpTime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:20000",

"health" : 1,

"state" : 2,

"stateStr" : "PRIMARY",

"uptime" : 1303,

"optime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T07:18:56Z"),

"infoMessage" : "could not find member to sync from",

"configVersion" : 1,

"self" : true

},

{

"_id" : 1,

"name" : "172.17.0.3:20000",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 8,

"optime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T07:18:56Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:18:56Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:19:01.250Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:19:01.983Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 2,

"name" : "172.17.0.4:20000",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 8,

"optime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T07:18:56Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:18:56Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:19:01.251Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:19:02.006Z"),

"pingMs" : NumberLong(1),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534231136, 1),

"$gleStats" : {

"lastOpTime" : Timestamp(1534231136, 1),

"electionId" : ObjectId("000000000000000000000000")

},

"$clusterTime" : {

"clusterTime" : Timestamp(1534231136, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

The config server replica set is now configured: docker-1 is the primary, docker-2 and docker-3 are secondaries.

 

 

2.2.5 Configure the shard replica sets

Configure the shards on all three servers.

shard1 configuration:

cat >> /mongodb/mongod-27017.conf <<ENDF
systemLog:
 destination: file
### log file location
 path: /mongodb/logs/mongod-27017.log
 logAppend: true
storage:
## journal settings
 journal:
  enabled: true
## data file location
 dbPath: /mongodb/data/mongod-27017
## one directory per database
 directoryPerDB: true
## storage engine
 engine: wiredTiger
## WiredTiger engine settings
 wiredTiger:
  engineConfig:
## maximum WiredTiger cache size (tune to the server's RAM)
   cacheSizeGB: 10
## store indexes in per-database directories as well
   directoryForIndexes: true
## collection compression
  collectionConfig:
   blockCompressor: zlib
## index settings
  indexConfig:
   prefixCompression: true
processManagement:
 fork: true  # fork and run in background
 pidFilePath: /mongodb/socket/mongod-27017.pid
## network settings
net:
 port: 27017
 bindIp: 172.17.0.2    # change the bind IP on each server
ENDF

 

### -shardsvr starts mongod in sharding (shard server) mode

mongod -f mongod-27017.conf -replSet shard1 -shardsvr

 

[root@a35e154acb47 mongodb]# mongod -f mongod-27017.conf -replSet shard1 -shardsvr

about to fork child process, waiting until server is ready for connections.

forked process: 1020

child process started successfully, parent exiting

[root@a35e154acb47 mongodb]# ps -ef | grep mongo

root        204      0  1 06:57 ?        00:01:27 mongod -f mongosvr-20000.conf -replSet configdb

root       1020      0 15 08:47 ?        00:00:01 mongod -f mongod-27017.conf -replSet shard1 -shardsvr

root       1110    170  0 08:47 ?        00:00:00 grep --color=auto mongo

 

The service has started and shard1's port 27017 is listening. Next, log in on docker-1 and initialize the shard1 replica set.

 

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:27017

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.2:27017/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten]

>

>

> use admin                

switched to db admin

> config = {_id:"shard1",members:[                    

... {_id:0,host:"172.17.0.2:27017"},

... {_id:1,host:"172.17.0.3:27017",arbiterOnly:true},

... {_id:2,host:"172.17.0.4:27017"},]

... }

{

"_id" : "shard1",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:27017"

},

{

"_id" : 1,

"host" : "172.17.0.3:27017",

"arbiterOnly" : true

},

{

"_id" : 2,

"host" : "172.17.0.4:27017"

}

]

}

> rs.initiate(config);

{

"ok" : 1,

"operationTime" : Timestamp(1534232200, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232200, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

 

 

Check the replica set status (only part of the output is shown):

shard1:SECONDARY> rs.status()

{

"set" : "shard1",

"date" : ISODate("2018-08-14T07:36:53.095Z"),

"myState" : 1,

"term" : NumberLong(1),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"readConcernMajorityOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"durableOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27017",

"health" : 1,

"state" : 1,

"stateStr" : "PRIMARY",

"uptime" : 69,

"optime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:36:52Z"),

"infoMessage" : "could not find member to sync from",

"electionTime" : Timestamp(1534232211, 1),

"electionDate" : ISODate("2018-08-14T07:36:51Z"),

"configVersion" : 1,

"self" : true

},

{

"_id" : 1,

"name" : "172.17.0.3:27017",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 12,

"lastHeartbeat" : ISODate("2018-08-14T07:36:53.062Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:36:52.233Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 2,

"name" : "172.17.0.4:27017",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 12,

"optime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"optimeDurable" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:36:52Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:36:52Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:36:53.062Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:36:52.345Z"),

"pingMs" : NumberLong(0),

"syncingTo" : "172.17.0.2:27017",

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534232212, 5),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232212, 5),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

 

The shard1 replica set is now configured: docker-1 is the primary, docker-2 the arbiter, and docker-3 a secondary.

Repeat the same steps for shard2 and shard3.

Note: initialize the shard2 replica set on docker-2, and the shard3 replica set on docker-3.

shard2 configuration file:

cat >> /mongodb/mongod-27018.conf <<ENDF
systemLog:
 destination: file
### log file location
 path: /mongodb/logs/mongod-27018.log
 logAppend: true
storage:
## journal settings
 journal:
  enabled: true
## data file location
 dbPath: /mongodb/data/mongod-27018
## one directory per database
 directoryPerDB: true
## storage engine
 engine: wiredTiger
## WiredTiger engine settings
 wiredTiger:
  engineConfig:
## maximum WiredTiger cache size (tune to the server's RAM)
   cacheSizeGB: 10
## store indexes in per-database directories as well
   directoryForIndexes: true
## collection compression
  collectionConfig:
   blockCompressor: zlib
## index settings
  indexConfig:
   prefixCompression: true
processManagement:
 fork: true  # fork and run in background
 pidFilePath: /mongodb/socket/mongod-27018.pid
## network settings
net:
 port: 27018
 bindIp: 172.17.0.3    # change the bind IP on each server
ENDF

 

mongod -f mongod-27018.conf -replSet shard2 -shardsvr

 

shard3 configuration file:

cat >> /mongodb/mongod-27019.conf <<ENDF
systemLog:
 destination: file
### log file location
 path: /mongodb/logs/mongod-27019.log
 logAppend: true
storage:
## journal settings
 journal:
  enabled: true
## data file location
 dbPath: /mongodb/data/mongod-27019
## one directory per database
 directoryPerDB: true
## storage engine
 engine: wiredTiger
## WiredTiger engine settings
 wiredTiger:
  engineConfig:
## maximum WiredTiger cache size (tune to the server's RAM)
   cacheSizeGB: 10
## store indexes in per-database directories as well
   directoryForIndexes: true
## collection compression
  collectionConfig:
   blockCompressor: zlib
## index settings
  indexConfig:
   prefixCompression: true
processManagement:
 fork: true  # fork and run in background
 pidFilePath: /mongodb/socket/mongod-27019.pid
## network settings
net:
 port: 27019
 bindIp: 172.17.0.3    # change the bind IP on each server
ENDF

 

mongod -f mongod-27019.conf -replSet shard3 -shardsvr

 

Initialize the shard2 replica set on docker-2:

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:27018

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.2:27018/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

> use admin              

switched to db admin

> config = {_id:"shard2",members:[       

... {_id:0,host:"172.17.0.2:27018"},

... {_id:1,host:"172.17.0.3:27018"},

... {_id:2,host:"172.17.0.4:27018",arbiterOnly:true},]

... }

{

"_id" : "shard2",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:27018"

},

{

"_id" : 1,

"host" : "172.17.0.3:27018"

},

{

"_id" : 2,

"host" : "172.17.0.4:27018",

"arbiterOnly" : true

}

]

}

> rs.initiate(config);

{

"ok" : 1,

"operationTime" : Timestamp(1534232500, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232500, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

shard2:SECONDARY>

Check the shard2 replica set status:

shard2:PRIMARY> rs.status()

{

"set" : "shard2",

"date" : ISODate("2018-08-14T07:44:32.972Z"),

"myState" : 1,

"term" : NumberLong(1),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"readConcernMajorityOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"durableOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27018",

"health" : 1,

"state" : 1,

"stateStr" : "PRIMARY",

"uptime" : 269,

"optime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:44:23Z"),

"electionTime" : Timestamp(1534232512, 1),

"electionDate" : ISODate("2018-08-14T07:41:52Z"),

"configVersion" : 1,

"self" : true

},

{

"_id" : 1,

"name" : "172.17.0.3:27018",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 172,

"optime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"optimeDurable" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:44:23Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:44:23Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:44:32.246Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:44:30.986Z"),

"pingMs" : NumberLong(0),

"syncingTo" : "172.17.0.2:27018",

"configVersion" : 1

},

{

"_id" : 2,

"name" : "172.17.0.4:27018",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 172,

"lastHeartbeat" : ISODate("2018-08-14T07:44:32.244Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:44:32.794Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534232663, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232663, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

Log in to docker-3 and initialize the shard3 replica set:

 

[root@a35e154acb47 mongodb]# mongo 172.17.0.3:27019

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.3:27019/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

> use admin

switched to db admin

>

>

> config = {_id:"shard3",members:[            

... {_id:0,host:"172.17.0.2:27019",arbiterOnly:true},

... {_id:1,host:"172.17.0.3:27019"},

... {_id:2,host:"172.17.0.4:27019"},]

... }

{

"_id" : "shard3",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:27019",

"arbiterOnly" : true

},

{

"_id" : 1,

"host" : "172.17.0.3:27019"

},

{

"_id" : 2,

"host" : "172.17.0.4:27019"

}

]

}

> rs.initiate(config);

{

"ok" : 1,

"operationTime" : Timestamp(1534234156, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534234156, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

shard3:SECONDARY> rs.status()

{

"set" : "shard3",

"date" : ISODate("2018-08-14T08:09:23.568Z"),

"myState" : 2,

"term" : NumberLong(0),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(0, 0),

"t" : NumberLong(-1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"durableOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27019",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 6,

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:18.925Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 1,

"name" : "172.17.0.3:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 558,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"infoMessage" : "could not find member to sync from",

"configVersion" : 1,

"self" : true

},

{

"_id" : 2,

"name" : "172.17.0.4:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 6,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"optimeDurableDate" : ISODate("2018-08-14T08:09:16Z"),

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:19.060Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534234156, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534234156, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

shard3:SECONDARY>

Check the shard3 replica set status:

shard3:SECONDARY> rs.status()

{

"set" : "shard3",

"date" : ISODate("2018-08-14T08:09:25.488Z"),

"myState" : 2,

"term" : NumberLong(0),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(0, 0),

"t" : NumberLong(-1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"durableOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27019",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 8,

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:23.928Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 1,

"name" : "172.17.0.3:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 560,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"infoMessage" : "could not find member to sync from",

"configVersion" : 1,

"self" : true

},

{

"_id" : 2,

"name" : "172.17.0.4:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 8,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"optimeDurableDate" : ISODate("2018-08-14T08:09:16Z"),

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:24.061Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534234156, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534234156, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

 

All shard replica sets are now configured.

2.2.6 Configure the mongos routers

The config servers and shard servers on all three machines are running; now configure the three mongos instances.

mongos loads its configuration into memory and keeps no data directory of its own; the configDB setting points it at the config server replica set (configdb).

cat >> /mongodb/mongos-30000.conf <<ENDF
systemLog:
 destination: file
### log file location
 path: /mongodb/logs/mongos-30000.log
 logAppend: true
processManagement:
 fork: true  # fork and run in background
 pidFilePath: /mongodb/socket/mongos-30000.pid
## network settings
net:
 port: 30000
 bindIp: 172.17.0.2    # change the bind IP on each server

## point the router at the config server replica set
sharding:
 configDB: configdb/172.17.0.2:20000,172.17.0.3:20000,172.17.0.4:20000
ENDF

 

mongos -f mongos-30000.conf
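The configDB value above uses MongoDB's `<replSetName>/<host:port>,...` seed-list form. A small helper (hypothetical, purely for illustration) shows how that string decomposes:

```python
def parse_seed_list(value):
    """Split 'replSet/host1:port1,host2:port2,...' into name and members."""
    rs_name, _, hosts = value.partition("/")
    members = [tuple(h.split(":")) for h in hosts.split(",")]
    return rs_name, members

name, members = parse_seed_list(
    "configdb/172.17.0.2:20000,172.17.0.3:20000,172.17.0.4:20000")
print(name)          # configdb
print(len(members))  # 3
```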

 

 

The config server replica set, shard replica sets, and mongos are all running, but sharding has not yet been enabled, so the sharding features cannot be used. Log in to mongos to enable sharding.

Log in to any mongos:

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:30000

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.2:30000/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T07:21:29.694+0000 I CONTROL  [main]

2018-08-14T07:21:29.694+0000 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.

2018-08-14T07:21:29.694+0000 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.

2018-08-14T07:21:29.694+0000 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T07:21:29.694+0000 I CONTROL  [main]

mongos> use admin

switched to db admin

mongos> db.runCommand({addshard:"shard1/172.17.0.2:27017,172.17.0.3:27017,172.17.0.4:27017"})

{

"shardAdded" : "shard1",

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534234937, 3),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534234937, 3)

}

mongos> db.runCommand({addshard:"shard2/172.17.0.2:27018,172.17.0.3:27018,172.17.0.4:27018"})

{

"shardAdded" : "shard2",

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534234937, 5),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534234937, 5)

}

mongos> db.runCommand({addshard:"shard3/172.17.0.2:27019,172.17.0.3:27019,172.17.0.4:27019"})

{

"shardAdded" : "shard3",

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534234938, 2),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534234938, 2)

}

mongos>

 

 

Check the cluster:

mongos> sh.status()

--- Sharding Status ---

  sharding version: {

   "_id" : 1,

   "minCompatibleVersion" : 5,

   "currentVersion" : 6,

   "clusterId" : ObjectId("5b72826cf59ff6d759023045")

  }

  shards:

        {  "_id" : "shard1",  "host" : "shard1/172.17.0.2:27017,172.17.0.4:27017",  "state" : 1 }

        {  "_id" : "shard2",  "host" : "shard2/172.17.0.2:27018,172.17.0.3:27018",  "state" : 1 }

        {  "_id" : "shard3",  "host" : "shard3/172.17.0.3:27019,172.17.0.4:27019",  "state" : 1 }

  active mongoses:

        "3.6.3" : 3

  autosplit:

        Currently enabled: yes

  balancer:

        Currently enabled:  yes

        Currently running:  no

        Failed balancer rounds in last 5 attempts:  0

        Migration Results for the last 24 hours:

                2 : Success

                1 : Failed with error 'aborted', from shard3 to shard1

  databases:

        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

                config.system.sessions

                        shard key: { "_id" : 1 }

                        unique: false

                        balancing: true

                        chunks:

                                shard1 1

                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)

        {  "_id" : "school",  "primary" : "shard3",  "partitioned" : true }

                school.student

                        shard key: { "_id" : "hashed" }

                        unique: false

                        balancing: true

                        chunks:

                                shard1 1

                                shard2 1

                                shard3 1

                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(2, 0)

                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(3, 0)

                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(3, 1)

 

mongos>
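The chunk boundaries shown above cut the signed 64-bit hash space into three roughly equal ranges. A toy simulation (using MD5 as a stand-in hash; it is not byte-compatible with MongoDB's actual hashed index function) shows why even sequential _id values spread almost evenly across the shards:

```python
import hashlib
import struct

def toy_hash(key):
    # Stand-in 64-bit hash: low 8 bytes of MD5 as a signed little-endian int.
    # (Illustrative only; MongoDB's hashed index uses a different scheme.)
    digest = hashlib.md5(str(key).encode()).digest()
    return struct.unpack("<q", digest[:8])[0]

# Chunk boundary from the sh.status() output: the hash space splits at +/- B.
B = 3074457345618258602

def owner(h):
    if h < -B:
        return "shard1"
    if h < B:
        return "shard2"
    return "shard3"

counts = {"shard1": 0, "shard2": 0, "shard3": 0}
for i in range(1, 10001):
    counts[owner(toy_hash(i))] += 1
print(counts)  # roughly one third of the 10000 keys per shard
```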

3. Testing

The config servers, routers, shards, and replica sets are now all wired together, and inserted data will be sharded automatically. Connect to mongos and enable sharding for the target database and collection.

Note: sharding must be configured from the admin database.


use admin

db.runCommand( { enablesharding :"school"});    # enable sharding for the school database

db.runCommand( { shardcollection : "school.student",key : {_id:"hashed"} } )    # shard the school.student collection on the hashed _id key

The school.student collection is now sharded, with documents distributed automatically across shard1, shard2, and shard3 by hashed _id.

mongos> db.runCommand( { enablesharding :"school"});

{

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534235216, 7),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534235216, 7)

}

mongos> db.runCommand( { shardcollection : "school.student",key : {_id:"hashed"} } )

{

"collectionsharded" : "school.student",

"collectionUUID" : UUID("dbbcd092-a519-44be-8ebf-3cec16f866c5"),

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534235226, 22),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534235226, 22)

}

Check the shard list:

mongos> db.runCommand({listshards:1})

{

"shards" : [

{

"_id" : "shard1",

"host" : "shard1/172.17.0.2:27017,172.17.0.4:27017",

"state" : 1

},

{

"_id" : "shard2",

"host" : "shard2/172.17.0.2:27018,172.17.0.3:27018",

"state" : 1

},

{

"_id" : "shard3",

"host" : "shard3/172.17.0.3:27019,172.17.0.4:27019",

"state" : 1

}

],

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534235275, 2),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534235275, 2)

}

Test inserting data:

mongos> use school

switched to db school

mongos> for (var i = 1; i <= 1; i++) db.student.save({_id:i,"test1":"testval1"});

WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : 1 })

mongos> for (var i = 1; i <= 100000; i++) db.student.save({_id:i,"test1":"testval1"});

WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : 100000 })

mongos>

 

Check how the data is distributed (part of the output omitted):

db.student.stats()

{

    "sharded" : true,

    "capped" : false,

    "ns" : "school.student",

    "count" : 100000,        # total count

    "size" : 3800000,

    "storageSize" : 1335296,

    "totalIndexSize" : 4329472,

    "indexSizes" : {

        "_id_" : 1327104,

        "_id_hashed" : 3002368

    },

    "avgObjSize" : 38,

    "nindexes" : 2,

    "nchunks" : 6,

    "shards" : {

        "shard1" : {

            "ns" : "school.student",

            "size" : 1282690,

            "count" : 33755,        # count on shard1

            "avgObjSize" : 38,

            "storageSize" : 450560,

            "capped" : false,

            ......

 

    "shard2" : {

                "ns" : "school.student",

                "size" : 1259434,

                "count" : 33143,        # count on shard2

                "avgObjSize" : 38,

                "storageSize" : 442368,

                "capped" : false,

            .......

    "shard3" : {

            "ns" : "school.student",

            "size" : 1257876,    

            "count" : 33102,            # count on shard3

            "avgObjSize" : 38,

            "storageSize" : 442368,

            "capped" : false,

             .......

The mongos, config server, and shard clusters in this architecture are now fully deployed. In an actual production environment, the mongos front end should additionally be made highly available to improve overall availability.


Reposted from www.cnblogs.com/EikiXu/p/9476676.html