[MongoDB] Configuring a Shard Cluster: A Complete Tutorial

https://blog.csdn.net/u010900754/article/details/78157146

October 4, 2017, 05:45:19

Sharding is a horizontal scaling technique that handles the case where a single collection grows too large for any one machine to hold. It comes with its own technical challenges, such as consistent hashing.

This post records how I set up sharding with MongoDB. Environment: macOS, MongoDB 3.4.9.

A MongoDB shard cluster has three roles:

(1) config, which stores the cluster's metadata. As of MongoDB 3.4, the config servers must form a replica set of at least three members, which gives better fault tolerance;

(2) routing, which routes operations across the whole shard cluster and is the entry point through which applications talk to it;

(3) shard, the nodes that hold the partitioned data itself.

The topology used here:

A replica set of c0, c1, c2 as the config servers: c0 <127.0.0.1:40000>, c1 <127.0.0.1:40001>, c2 <127.0.0.1:40002>

Shards s0 and s1: s0 <127.0.0.1:30001>, s1 <127.0.0.1:30002>

Routing node m0: m0 <127.0.0.1:50000>

The root directory is /data/shard-project/. Inside it, create log, config, and data directories for logs, configuration files, and data respectively. Under data, create c0, c1, c2, s0, and s1 directories; m0 does not need a data directory.
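For reference, the whole tree can be created in one go from a terminal. This is a sketch of my own (not from the original post); it assumes the root path above and a shell with brace expansion, such as the default macOS shell:

sudo mkdir -p /data/shard-project/{log,config}
sudo mkdir -p /data/shard-project/data/{c0,c1,c2,s0,s1}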

Next, create the configuration file for each node:

c0.conf:

 
# data directory
dbpath=/data/shard-project/data/c0

# log file; logpath is required when running in the background
logpath=/data/shard-project/log/c0.log

# append to the log instead of overwriting it
logappend=true

# port
port=40000

# run in the background
fork=true

# config server role
configsvr=true

# replica set
replSet=cs
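The files in this post use the classic INI-style options, which 3.4 still accepts. The same c0.conf could also be written in MongoDB's YAML configuration format; the following is a sketch of that equivalent (my own translation of the options, not from the original post):

storage:
  dbPath: /data/shard-project/data/c0
systemLog:
  destination: file
  path: /data/shard-project/log/c0.log
  logAppend: true
net:
  port: 40000
processManagement:
  fork: true
sharding:
  clusterRole: configsvr
replication:
  replSetName: cs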

c1.conf:

 
# data directory
dbpath=/data/shard-project/data/c1

# log file; logpath is required when running in the background
logpath=/data/shard-project/log/c1.log

# append to the log instead of overwriting it
logappend=true

# port
port=40001

# run in the background
fork=true

# config server role
configsvr=true

# replica set
replSet=cs

c2.conf:

 
# data directory
dbpath=/data/shard-project/data/c2

# log file; logpath is required when running in the background
logpath=/data/shard-project/log/c2.log

# append to the log instead of overwriting it
logappend=true

# port
port=40002

# run in the background
fork=true

# config server role
configsvr=true

# replica set
replSet=cs

s0.conf:

 
# data directory
dbpath=/data/shard-project/data/s0

# log file; logpath is required when running in the background
logpath=/data/shard-project/log/s0.log

# append to the log instead of overwriting it
logappend=true

# port
port=30001

# run in the background
fork=true

# shard role
shardsvr=true

s1.conf:

 
# data directory
dbpath=/data/shard-project/data/s1

# log file; logpath is required when running in the background
logpath=/data/shard-project/log/s1.log

# append to the log instead of overwriting it
logappend=true

# port
port=30002

# run in the background
fork=true

# shard role
shardsvr=true


m0.conf:

 
# log file; logpath is required when running in the background
logpath=/data/shard-project/log/m0.log

# append to the log instead of overwriting it
logappend=true

# port
port=50000

# run in the background
fork=true

# config servers: replica set name followed by the member list
configdb=cs/localhost:40000,localhost:40001,localhost:40002

Then start everything.

s0,s1:

 
sudo mongod -f /data/shard-project/config/s0.conf
sudo mongod -f /data/shard-project/config/s1.conf

c0,c1,c2:

 
sudo mongod -f /data/shard-project/config/c0.conf
sudo mongod -f /data/shard-project/config/c1.conf
sudo mongod -f /data/shard-project/config/c2.conf
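Before going further, it is worth confirming that each mongod actually came up. A quick sanity check of my own (not in the original post) is to ping each port and make sure fork succeeded:

mongo --port 40000 --eval "db.adminCommand({ ping: 1 })"
mongo --port 30001 --eval "db.adminCommand({ ping: 1 })"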


Next, configure the config server replica set.

Connect to c0:

mongo --port 40000

Define the replica set configuration with c0 as the initial member (so it becomes the primary), then initiate it:

 
config={_id:"cs",members:[{_id:0,host:"localhost:40000",priority:1}]}
rs.initiate(config)

The prompt will change once initiation completes. Then add c1 and c2:

 
rs.add("localhost:40001")
rs.add("localhost:40002")
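As an aside, the whole replica set can also be initiated in a single call instead of adding members one by one. This is a sketch of my own, not the post's method; for a config server replica set the configuration document carries configsvr: true, matching the "configsvr" : true field visible in rs.status() below:

rs.initiate({
  _id: "cs",
  configsvr: true,
  members: [
    { _id: 0, host: "localhost:40000" },
    { _id: 1, host: "localhost:40001" },
    { _id: 2, host: "localhost:40002" }
  ]
})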

Finally, check the replica set:

 
cs:PRIMARY> rs.status()
{
    "set" : "cs",
    "date" : ISODate("2017-10-03T21:10:19.697Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1507065009, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1507065009, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1507065009, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1507065009, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "localhost:40000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 251,
            "optime" : {
                "ts" : Timestamp(1507065009, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-10-03T21:10:09Z"),
            "electionTime" : Timestamp(1507064883, 2),
            "electionDate" : ISODate("2017-10-03T21:08:03Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "localhost:40001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 29,
            "optime" : {
                "ts" : Timestamp(1507065009, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1507065009, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-10-03T21:10:09Z"),
            "optimeDurableDate" : ISODate("2017-10-03T21:10:09Z"),
            "lastHeartbeat" : ISODate("2017-10-03T21:10:19.261Z"),
            "lastHeartbeatRecv" : ISODate("2017-10-03T21:10:18.259Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "localhost:40000",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "localhost:40002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 26,
            "optime" : {
                "ts" : Timestamp(1507065009, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1507065009, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-10-03T21:10:09Z"),
            "optimeDurableDate" : ISODate("2017-10-03T21:10:09Z"),
            "lastHeartbeat" : ISODate("2017-10-03T21:10:19.261Z"),
            "lastHeartbeatRecv" : ISODate("2017-10-03T21:10:18.727Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "localhost:40001",
            "configVersion" : 3
        }
    ],
    "ok" : 1
}

The output shows the replica set configured with all three members: one primary and two secondaries.

Start m0:

sudo mongos -f /data/shard-project/config/m0.conf

Connect to m0:

mongo --port 50000

Configure the cluster from the admin database: add the two shards, enable sharding on the test database, and shard the persons collection in test on its _id key:

 
use admin
db.runCommand({addshard:"localhost:30001"})
db.runCommand({addshard:"localhost:30002"})
db.runCommand({enablesharding:"test"})
db.runCommand({shardcollection:"test.persons",key:{_id:1}})
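The mongos shell also exposes helper methods that wrap these commands. The following sketch is equivalent to the four runCommand calls above:

sh.addShard("localhost:30001")
sh.addShard("localhost:30002")
sh.enableSharding("test")
sh.shardCollection("test.persons", { _id: 1 })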

Now for a test.

Insert 500,000 documents into test.persons, then look at how they are spread across the shards:

 
use test
for(var i = 0; i < 500000; i++) db.persons.insert({age:i, name:"ly"})

Since this is a fairly large amount of data, it takes a while.

I actually inserted another 50,000 documents afterwards. That was unnecessary, but I mention it so that the per-shard counts below, which sum to 550,000, are not confusing.
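If the one-document-at-a-time loop feels too slow, batching the writes cuts down the round trips considerably. A sketch using insertMany, which is available in the 3.4 shell (the 1,000-document batch size is an arbitrary choice of mine):

use test
for (var i = 0; i < 500000; i += 1000) {
    var batch = [];
    for (var j = i; j < i + 1000; j++) {
        batch.push({ age: j, name: "ly" });
    }
    db.persons.insertMany(batch);
}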

Once the inserts finish, shrink the chunk size to 1 MB so the splitting is easy to observe:

 
use config
db.settings.save( { _id:"chunksize", value: 1 } )

MongoDB will then split the chunks in the background and spread them evenly across the shards. Without lowering chunksize, a dataset this small might well end up on a single shard.
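To watch the balancer make progress, you can count chunks per shard directly in the config database. A sketch of my own; it assumes the default config.chunks schema, where each chunk document carries an ns and a shard field:

use config
db.chunks.aggregate([
    { $match: { ns: "test.persons" } },
    { $group: { _id: "$shard", nChunks: { $sum: 1 } } }
])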

Check each shard's share of the collection:

 
use test
db.persons.stats()

The output:

 
mongos> db.persons.stats()
{
    "sharded" : true,
    "capped" : false,
    "ns" : "test.persons",
    "count" : 550000,
    "size" : 26400000,
    "storageSize" : 10850304,
    "totalIndexSize" : 7352320,
    "indexSizes" : {
        "_id_" : 7352320
    },
    "avgObjSize" : 48,
    "nindexes" : 1,
    "nchunks" : 50,
    "shards" : {
        "shard0000" : {
            "ns" : "test.persons",
            "size" : 17761440,
            "count" : 370030,
            "avgObjSize" : 48,
            "storageSize" : 9039872,
            "capped" : false,
            "wiredTiger" : {
                "metadata" : {
                    "formatVersion" : 1
                },
                "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
                "type" : "file",
                "uri" : "statistics:table:collection-5-1934451794160605558",
                "LSM" : {
                    "bloom filter false positives" : 0,
                    "bloom filter hits" : 0,
                    "bloom filter misses" : 0,
                    "bloom filter pages evicted from cache" : 0,
                    "bloom filter pages read into cache" : 0,
                    "bloom filters in the LSM tree" : 0,
                    "chunks in the LSM tree" : 0,
                    "highest merge generation in the LSM tree" : 0,
                    "queries that could have benefited from a Bloom filter that did not exist" : 0,
                    "sleep for LSM checkpoint throttle" : 0,
                    "sleep for LSM merge throttle" : 0,
                    "total size of bloom filters" : 0
                },
                "block-manager" : {
                    "allocations requiring file extension" : 1123,
                    "blocks allocated" : 1938,
                    "blocks freed" : 1151,
                    "checkpoint size" : 6922240,
                    "file allocation unit size" : 4096,
                    "file bytes available for reuse" : 2084864,
                    "file magic number" : 120897,
                    "file major version number" : 1,
                    "file size in bytes" : 9039872,
                    "minor version number" : 0
                },
                "btree" : {
                    "btree checkpoint generation" : 21,
                    "column-store fixed-size leaf pages" : 0,
                    "column-store internal pages" : 0,
                    "column-store variable-size RLE encoded values" : 0,
                    "column-store variable-size deleted values" : 0,
                    "column-store variable-size leaf pages" : 0,
                    "fixed-record size" : 0,
                    "maximum internal page key size" : 368,
                    "maximum internal page size" : 4096,
                    "maximum leaf page key size" : 2867,
                    "maximum leaf page size" : 32768,
                    "maximum leaf page value size" : 67108864,
                    "maximum tree depth" : 3,
                    "number of key/value pairs" : 0,
                    "overflow pages" : 0,
                    "pages rewritten by compaction" : 0,
                    "row-store internal pages" : 0,
                    "row-store leaf pages" : 0
                },
                "cache" : {
                    "bytes currently in the cache" : 54288061,
                    "bytes read into cache" : 0,
                    "bytes written from cache" : 53373061,
                    "checkpoint blocked page eviction" : 0,
                    "data source pages selected for eviction unable to be evicted" : 0,
                    "hazard pointer blocked page eviction" : 0,
                    "in-memory page passed criteria to be split" : 24,
                    "in-memory page splits" : 12,
                    "internal pages evicted" : 0,
                    "internal pages split during eviction" : 0,
                    "leaf pages split during eviction" : 6,
                    "modified pages evicted" : 16,
                    "overflow pages read into cache" : 0,
                    "overflow values cached in memory" : 0,
                    "page split during eviction deepened the tree" : 0,
                    "page written requiring lookaside records" : 0,
                    "pages read into cache" : 0,
                    "pages read into cache requiring lookaside entries" : 0,
                    "pages requested from the cache" : 2411481,
                    "pages written from cache" : 1909,
                    "pages written requiring in-memory restoration" : 0,
                    "tracked dirty bytes in the cache" : 9845189,
                    "unmodified pages evicted" : 0
                },
                "cache_walk" : {
                    "Average difference between current eviction generation when the page was last considered" : 0,
                    "Average on-disk page image size seen" : 0,
                    "Clean pages currently in cache" : 0,
                    "Current eviction generation" : 0,
                    "Dirty pages currently in cache" : 0,
                    "Entries in the root page" : 0,
                    "Internal pages currently in cache" : 0,
                    "Leaf pages currently in cache" : 0,
                    "Maximum difference between current eviction generation when the page was last considered" : 0,
                    "Maximum page size seen" : 0,
                    "Minimum on-disk page image size seen" : 0,
                    "On-disk page image sizes smaller than a single allocation unit" : 0,
                    "Pages created in memory and never written" : 0,
                    "Pages currently queued for eviction" : 0,
                    "Pages that could not be queued for eviction" : 0,
                    "Refs skipped during cache traversal" : 0,
                    "Size of the root page" : 0,
                    "Total number of pages currently in cache" : 0
                },
                "compression" : {
                    "compressed pages read" : 0,
                    "compressed pages written" : 1869,
                    "page written failed to compress" : 0,
                    "page written was too small to compress" : 40,
                    "raw compression call failed, additional data available" : 0,
                    "raw compression call failed, no additional data available" : 0,
                    "raw compression call succeeded" : 0
                },
                "cursor" : {
                    "bulk-loaded cursor-insert calls" : 0,
                    "create calls" : 19,
                    "cursor-insert key and value bytes inserted" : 43675843,
                    "cursor-remove key bytes removed" : 1803531,
                    "cursor-update value bytes updated" : 0,
                    "insert calls" : 841504,
                    "next calls" : 321138,
                    "prev calls" : 2,
                    "remove calls" : 471474,
                    "reset calls" : 2411447,
                    "restarted searches" : 0,
                    "search calls" : 1248799,
                    "search near calls" : 321145,
                    "truncate calls" : 0,
                    "update calls" : 0
                },
                "reconciliation" : {
                    "dictionary matches" : 0,
                    "fast-path pages deleted" : 0,
                    "internal page key bytes discarded using suffix compression" : 4958,
                    "internal page multi-block writes" : 11,
                    "internal-page overflow keys" : 0,
                    "leaf page key bytes discarded using prefix compression" : 0,
                    "leaf page multi-block writes" : 34,
                    "leaf-page overflow keys" : 0,
                    "maximum blocks required for a page" : 2,
                    "overflow values written" : 0,
                    "page checksum matches" : 630,
                    "page reconciliation calls" : 72,
                    "page reconciliation calls for eviction" : 16,
                    "pages deleted" : 12
                },
                "session" : {
                    "object compaction" : 0,
                    "open cursor count" : 7
                },
                "transaction" : {
                    "update conflicts" : 0
                }
            },
            "nindexes" : 1,
            "totalIndexSize" : 5877760,
            "indexSizes" : {
                "_id_" : 5877760
            },
            "ok" : 1
        },
        "shard0001" : {
            "ns" : "test.persons",
            "size" : 8638560,
            "count" : 179970,
            "avgObjSize" : 48,
            "storageSize" : 1810432,
            "capped" : false,
            "wiredTiger" : {
                "metadata" : {
                    "formatVersion" : 1
                },
                "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
                "type" : "file",
                "uri" : "statistics:table:collection-5--2521018454738123524",
                "LSM" : {
                    "bloom filter false positives" : 0,
                    "bloom filter hits" : 0,
                    "bloom filter misses" : 0,
                    "bloom filter pages evicted from cache" : 0,
                    "bloom filter pages read into cache" : 0,
                    "bloom filters in the LSM tree" : 0,
                    "chunks in the LSM tree" : 0,
                    "highest merge generation in the LSM tree" : 0,
                    "queries that could have benefited from a Bloom filter that did not exist" : 0,
                    "sleep for LSM checkpoint throttle" : 0,
                    "sleep for LSM merge throttle" : 0,
                    "total size of bloom filters" : 0
                },
                "block-manager" : {
                    "allocations requiring file extension" : 223,
                    "blocks allocated" : 223,
                    "blocks freed" : 4,
                    "checkpoint size" : 1753088,
                    "file allocation unit size" : 4096,
                    "file bytes available for reuse" : 40960,
                    "file magic number" : 120897,
                    "file major version number" : 1,
                    "file size in bytes" : 1810432,
                    "minor version number" : 0
                },
                "btree" : {
                    "btree checkpoint generation" : 7,
                    "column-store fixed-size leaf pages" : 0,
                    "column-store internal pages" : 0,
                    "column-store variable-size RLE encoded values" : 0,
                    "column-store variable-size deleted values" : 0,
                    "column-store variable-size leaf pages" : 0,
                    "fixed-record size" : 0,
                    "maximum internal page key size" : 368,
                    "maximum internal page size" : 4096,
                    "maximum leaf page key size" : 2867,
                    "maximum leaf page size" : 32768,
                    "maximum leaf page value size" : 67108864,
                    "maximum tree depth" : 3,
                    "number of key/value pairs" : 0,
                    "overflow pages" : 0,
                    "pages rewritten by compaction" : 0,
                    "row-store internal pages" : 0,
                    "row-store leaf pages" : 0
                },
                "cache" : {
                    "bytes currently in the cache" : 24532401,
                    "bytes read into cache" : 0,
                    "bytes written from cache" : 6199839,
                    "checkpoint blocked page eviction" : 0,
                    "data source pages selected for eviction unable to be evicted" : 0,
                    "hazard pointer blocked page eviction" : 0,
                    "in-memory page passed criteria to be split" : 4,
                    "in-memory page splits" : 2,
                    "internal pages evicted" : 0,
                    "internal pages split during eviction" : 0,
                    "leaf pages split during eviction" : 0,
                    "modified pages evicted" : 0,
                    "overflow pages read into cache" : 0,
                    "overflow values cached in memory" : 0,
                    "page split during eviction deepened the tree" : 0,
                    "page written requiring lookaside records" : 0,
                    "pages read into cache" : 0,
                    "pages read into cache requiring lookaside entries" : 0,
                    "pages requested from the cache" : 180043,
                    "pages written from cache" : 220,
                    "pages written requiring in-memory restoration" : 0,
                    "tracked dirty bytes in the cache" : 15472806,
                    "unmodified pages evicted" : 0
                },
                "cache_walk" : {
                    "Average difference between current eviction generation when the page was last considered" : 0,
                    "Average on-disk page image size seen" : 0,
                    "Clean pages currently in cache" : 0,
                    "Current eviction generation" : 0,
                    "Dirty pages currently in cache" : 0,
                    "Entries in the root page" : 0,
                    "Internal pages currently in cache" : 0,
                    "Leaf pages currently in cache" : 0,
                    "Maximum difference between current eviction generation when the page was last considered" : 0,
                    "Maximum page size seen" : 0,
                    "Minimum on-disk page image size seen" : 0,
                    "On-disk page image sizes smaller than a single allocation unit" : 0,
                    "Pages created in memory and never written" : 0,
                    "Pages currently queued for eviction" : 0,
                    "Pages that could not be queued for eviction" : 0,
                    "Refs skipped during cache traversal" : 0,
                    "Size of the root page" : 0,
                    "Total number of pages currently in cache" : 0
                },
                "compression" : {
                    "compressed pages read" : 0,
                    "compressed pages written" : 218,
                    "page written failed to compress" : 0,
                    "page written was too small to compress" : 2,
                    "raw compression call failed, additional data available" : 0,
                    "raw compression call failed, no additional data available" : 0,
                    "raw compression call succeeded" : 0
                },
                "cursor" : {
                    "bulk-loaded cursor-insert calls" : 0,
                    "create calls" : 3,
                    "cursor-insert key and value bytes inserted" : 9275921,
                    "cursor-remove key bytes removed" : 0,
                    "cursor-update value bytes updated" : 0,
                    "insert calls" : 179967,
                    "next calls" : 0,
                    "prev calls" : 1,
                    "remove calls" : 0,
                    "reset calls" : 179967,
                    "restarted searches" : 71,
                    "search calls" : 1,
                    "search near calls" : 0,
                    "truncate calls" : 0,
                    "update calls" : 0
                },
                "reconciliation" : {
                    "dictionary matches" : 0,
                    "fast-path pages deleted" : 0,
                    "internal page key bytes discarded using suffix compression" : 506,
                    "internal page multi-block writes" : 0,
                    "internal-page overflow keys" : 0,
                    "leaf page key bytes discarded using prefix compression" : 0,
                    "leaf page multi-block writes" : 3,
                    "leaf-page overflow keys" : 0,
                    "maximum blocks required for a page" : 124,
                    "overflow values written" : 0,
                    "page checksum matches" : 37,
                    "page reconciliation calls" : 5,
                    "page reconciliation calls for eviction" : 0,
                    "pages deleted" : 0
                },
                "session" : {
                    "object compaction" : 0,
                    "open cursor count" : 3
                },
                "transaction" : {
                    "update conflicts" : 0
                }
            },
            "nindexes" : 1,
            "totalIndexSize" : 1474560,
            "indexSizes" : {
                "_id_" : 1474560
            },
            "ok" : 1
        }
    },
    "ok" : 1
}

You can see the size of the data held on each shard.

You can also inspect the sharding state with printShardingStatus() (the sh.status() helper prints the same report):

 
mongos> printShardingStatus()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("59d3fc36c6a41efbca5eb228")
  }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30001", "state" : 1 }
      { "_id" : "shard0001", "host" : "localhost:30002", "state" : 1 }
  active mongoses:
      "3.4.9" : 1
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: no
      Balancer lock taken at Tue Oct 03 2017 17:08:06 GMT-0400 (EDT) by ConfigServer:Balancer
      Failed balancer rounds in last 5 attempts: 0
      Migration Results for the last 24 hours:
          23 : Success
  databases:
      { "_id" : "test", "primary" : "shard0000", "partitioned" : true }
          test.persons
              shard key: { "_id" : 1 }
              unique: false
              balancing: true
              chunks:
                  shard0000 25
                  shard0001 25
              too many chunks to print, use verbose if you want to force print
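For a per-shard summary more compact than the full db.persons.stats() dump, the shell also offers getShardDistribution(); a quick sketch:

use test
db.persons.getShardDistribution()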


With that, the shard cluster is configured and working.
