How to Set Up a MongoDB Sharding Cluster

Original article: http://www.tech126.com/mongodb-sharding-cluster/



Starting with version 1.6, MongoDB officially supports Sharding.

At the same time, MongoDB introduced Replica Sets to replace the Replica Pairs of earlier versions.

By combining Sharding with Replica Sets, we can build a distributed, highly available cluster that scales out horizontally and automatically.

A typical cluster consists of the following three services:

    Shard Server:  each shard consists of one or more mongod processes and stores the data
    Config Server: stores the cluster's metadata, including information about each shard and its chunks
    Route Server:  provides routing; clients connect to it, so the whole cluster looks like a single DB server

In addition, a chunk is a contiguous block of data in MongoDB; the default size is 200MB, and each chunk lives on one of the shard servers.
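For example, once the cluster described below is running, the chunk metadata can be read back from the config database through the mongos. A minimal pymongo sketch (assuming the route server from step 4 is reachable on 192.168.95.216:27017 and the test.test collection from step 5 has been sharded):

    import pymongo

    # Connect through the mongos router; the config database holds one
    # document per chunk in config.chunks
    con = pymongo.Connection("192.168.95.216", 27017)
    for chunk in con.config.chunks.find({'ns': 'test.test'}):
        # Each chunk document records its key range and the shard that owns it
        print chunk['shard'], chunk['min'], chunk['max']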

Below we build a cluster out of 4 servers, consisting of 2 shards, 3 config servers, and 1 route server.

Each shard is a Replica Set, and each Replica Set consists of 2 mongod data nodes plus 1 vote (arbiter) node.

The setup process is as follows:

1. Start the appropriate mongod processes on each of the four servers:

    192.168.95.216
    /usr/local/mongodb/bin/mongod --fork --shardsvr --port 10000 --replSet set1 --dbpath /pvdata/mongodb_data --logpath /pvdata/mongodb_log/mongod.log
    /usr/local/mongodb/bin/mongod --fork --shardsvr --port 10001 --replSet set2 --dbpath /pvdata/mongodb_data1 --logpath /pvdata/mongodb_log/mongod1.log

    192.168.95.217
    /usr/local/mongodb/bin/mongod --fork --shardsvr --port 10000 --replSet set1 --dbpath /pvdata/mongodb_data --logpath /pvdata/mongodb_log/mongod.log

    192.168.95.218
    /usr/local/mongodb/bin/mongod --fork --shardsvr --port 10000 --replSet set2 --dbpath /pvdata/mongodb_data --logpath /pvdata/mongodb_log/mongod.log
    /usr/local/mongodb/bin/mongod --fork --shardsvr --port 10001 --replSet set1 --dbpath /pvdata/mongodb_data1 --logpath /pvdata/mongodb_log/mongod1.log

    192.168.95.137
    /usr/local/mongodb/bin/mongod --fork --shardsvr --port 10000 --replSet set2 --dbpath /opt/mongodb_data --logpath /opt/mongodb_log/mongod.log

2. Configure the two Replica Sets:

    192.168.95.216
    mongo --port 10000
        config = {_id: 'set1', members: [
            {_id: 0, host: '192.168.95.216:10000'},
            {_id: 1, host: '192.168.95.217:10000'},
            {_id: 2, host: '192.168.95.218:10001', arbiterOnly: true}
        ]}
        rs.initiate(config)
        rs.status()

    192.168.95.218
    mongo --port 10000
        config = {_id: 'set2', members: [
            {_id: 0, host: '192.168.95.218:10000'},
            {_id: 1, host: '192.168.95.137:10000'},
            {_id: 2, host: '192.168.95.216:10001', arbiterOnly: true}
        ]}
        rs.initiate(config)
        rs.status()

Note: the two mongod instances listening on port 10001 only take part in voting to elect a new master when a node goes down; they do not store any copy of the data themselves.
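To double-check which member is acting as the arbiter, the replica set status can be queried from any data node; a small pymongo sketch (assuming set1's node on 192.168.95.216:10000 is up):

    import pymongo

    # Ask one data node of set1 for the replica set status; the vote-only
    # member is reported with state ARBITER and stores no data
    con = pymongo.Connection("192.168.95.216", 10000)
    status = con.admin.command('replSetGetStatus')
    for member in status['members']:
        print member['name'], member['stateStr']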

3. Configure the three Config Servers (run the same command on each of 192.168.95.216, 192.168.95.217, and 192.168.95.218):

mongod --configsvr --fork --logpath /pvdata/mongodb_log/config.log --dbpath /pvdata/mongodb_config_data --port 20000

4. Configure one Route Server:

    192.168.95.216
/usr/local/mongodb/bin/mongos --fork --chunkSize 1 --configdb "192.168.95.216:20000,192.168.95.217:20000,192.168.95.218:20000" --logpath /pvdata/mongodb_log/mongos.log

The chunkSize parameter sets the size of a chunk; for testing purposes it is set to 1MB here.
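If needed, the value actually in effect can be checked afterwards in the config database; a quick pymongo sketch (assuming the mongos above is running on 192.168.95.216:27017):

    import pymongo

    # The chunk size (in MB) is recorded in config.settings under _id 'chunksize'
    con = pymongo.Connection("192.168.95.216", 27017)
    print con.config.settings.find_one({'_id': 'chunksize'})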

5. Configure the two shards:

    192.168.95.216
    mongo
        use admin
        db.runCommand({addshard:'set1/192.168.95.216:10000,192.168.95.217:10000'})
        db.runCommand({addshard:'set2/192.168.95.218:10000,192.168.95.137:10000'})
        db.runCommand({enablesharding:'test'})
        db.runCommand({listshards:1})
        printShardingStatus()
        db.runCommand({shardcollection:'test.test', key:{_id:1}, unique : true})

That completes the whole configuration. We can now test it with pymongo:

    import random
    import string
    import pymongo

    # Connect through the mongos router (default port 27017)
    con = pymongo.Connection("192.168.95.216", 27017)
    db = con.test
    collection = db.test
    # Insert 10,000 documents keyed by random 10-character strings
    for i in xrange(10000):
        name = ''.join(random.choice(string.letters) for i in xrange(10))
        collection.save({'_id': name})

Then, from the mongo shell, check the count on each of the two shards; you will find that the collection's records have been distributed evenly across the two shard servers.
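The same check can be scripted by connecting to each shard's primary directly and counting; a small pymongo sketch (assuming 192.168.95.216 and 192.168.95.218 are the current primaries of set1 and set2):

    import pymongo

    # Count the documents stored on each shard by talking to the shard
    # primaries directly instead of going through the mongos
    for host in ('192.168.95.216', '192.168.95.218'):
        shard = pymongo.Connection(host, 10000)
        print host, shard.test.test.count()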

Next, let's test automated failover:

After killing the mongod process on 95.218 with kill -2, the log on 95.137 shows:

    Wed Sep 29 10:51:04 [ReplSetHealthPollTask] replSet info 192.168.95.218:10000 is now down (or slow to respond)
    Wed Sep 29 10:51:04 [rs Manager] replSet info electSelf 1
    Wed Sep 29 10:51:04 [rs Manager] replSet PRIMARY

This shows that a new election has completed and 95.137 has become the new primary master.
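This can also be confirmed from a client; a minimal pymongo sketch (assuming the node on 192.168.95.137:10000 is reachable):

    import pymongo

    # isMaster reports whether the node we ask is currently primary and,
    # if known, which member of the set is
    con = pymongo.Connection("192.168.95.137", 10000)
    print con.admin.command('ismaster')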

Now keep writing data into the db, then start 95.218 again; you will see:

    Wed Sep 29 10:52:56 [ReplSetHealthPollTask] replSet 192.168.95.218:10000 SECONDARY

This shows that 95.218 is now running as a secondary again.

Meanwhile, the data written to 137 while 218 was down is also synced over to 218.
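One way to spot-check the resync is to compare document counts between the new primary and the recovered secondary; a rough pymongo sketch (assuming this pymongo version supports the slave_okay option for reading from a secondary):

    import pymongo

    # Compare the count on the new primary (137) with the count on the
    # recovered secondary (218); they should converge once sync catches up
    primary = pymongo.Connection('192.168.95.137', 10000)
    secondary = pymongo.Connection('192.168.95.218', 10000, slave_okay=True)
    print primary.test.test.count(), secondary.test.test.count()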

The whole setup process is fairly straightforward, and the tests look normal.

But the stability of the whole cluster remains to be seen once the application goes live…


Reposted from ainn2006.iteye.com/blog/1583182