NoSQL (Part 3): MongoDB

Copyright notice: reposting is welcome; please credit the source: https://blog.csdn.net/miss1181248983/article/details/82120048

21.4 Introduction to MongoDB

What is MongoDB

MongoDB is another kind of NoSQL database: a document-oriented database built on distributed file storage, written in C++. Because it is document-oriented, it is the most feature-rich of the non-relational databases and the one that most closely resembles a relational database. Official website: https://www.mongodb.com/ ; the latest version at the time of writing is 4.0.1.

MongoDB stores data as documents whose structure is made up of key-value pairs. MongoDB documents are similar to JSON objects: field values can contain other documents, arrays, and arrays of documents. About JSON: http://www.w3school.com.cn/json/index.asp

Basic MongoDB concepts (a short shell example follows the list below):

A document is the basic unit of data in MongoDB, roughly comparable to a row in a relational database (but considerably more complex than a row)
A collection is a group of documents; if a MongoDB document is like a row in a relational database, then a collection is like a table
A single MongoDB server can host multiple independent databases, each with its own collections and permissions
MongoDB ships with a simple but powerful JavaScript shell, which is extremely useful for administering MongoDB instances and manipulating data
Every document has a special key, "_id", which is unique within the document's collection and serves as the equivalent of a relational table's primary key
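
As a quick illustration of these concepts, here is a minimal mongo shell sketch (the db1 and users names are made up for the example) showing a collection, a document with embedded values, and the automatically generated "_id":

use db1                                                              //switch to (and implicitly create) a database
db.users.insert({name:"alice",tags:["a","b"],profile:{age:20}})      //field values can be arrays or embedded documents
db.users.findOne()                                                   //returns the document together with its auto-generated "_id" (an ObjectId)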

MongoDB is high-performance, easy to deploy and easy to use, and storing data in it is very convenient. Its main features are:

Collection-oriented storage, well suited to storing object-type data
Schema-free (a short example follows this list)
Supports dynamic queries
Supports full indexing, including on fields of embedded objects
Supports replication and failover
Uses an efficient binary format for data storage, including large objects
Files are stored in BSON (an extension of JSON)
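
As a short sketch of what "schema-free" and "dynamic queries" mean in practice (the col collection name is made up), documents of different shapes can live in the same collection and queries can use operators on whichever fields exist:

use db1
db.col.insert({name:"alice",age:20})
db.col.insert({name:"bob",email:"bob@example.com"})      //different fields, same collection: there is no schema to alter
db.col.find({age:{$gt:18}})                              //dynamic query with an operator; only documents that have "age" can match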

Installing MongoDB

  • Create the MongoDB yum repository:
[root@lzx ~]# cd /etc/yum.repos.d/
[root@lzx yum.repos.d]# vim mongo.repo              //add the following content
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
[root@lzx yum.repos.d]# yum list |grep mongodb            
collectd-write_mongodb.x86_64            5.8.0-4.el7                   epel     
mongodb.x86_64                           2.6.12-6.el7                  epel     
mongodb-org.x86_64                       3.4.16-1.el7                  mongodb-org-3.4
mongodb-org-mongos.x86_64                3.4.16-1.el7                  mongodb-org-3.4
mongodb-org-server.x86_64                3.4.16-1.el7                  mongodb-org-3.4
mongodb-org-shell.x86_64                 3.4.16-1.el7                  mongodb-org-3.4
mongodb-org-tools.x86_64                 3.4.16-1.el7                  mongodb-org-3.4          //mongodb-related rpm packages are available
mongodb-server.x86_64                    2.6.12-6.el7                  epel     
mongodb-test.x86_64                      2.6.12-6.el7                  epel     
nodejs-mongodb.noarch                    1.4.7-1.el7                   epel     
php-mongodb.noarch                       1.0.4-1.el7                   epel     
php-pecl-mongodb.x86_64                  1.1.10-1.el7                  epel     
poco-mongodb.x86_64                      1.6.1-3.el7                   epel     
syslog-ng-mongodb.x86_64                 3.5.6-3.el7                   epel     
[root@lzx yum.repos.d]# yum install -y mongodb-org

Connecting to MongoDB

  • With mongodb installed, start the service:
[root@lzx yum.repos.d]# systemctl start mongod
[root@lzx yum.repos.d]# ps aux |grep mongod
mongod     1522  0.8  0.9 972380 37512 ?        Sl   10:30   0:00 /usr/bin/mongod -f /etc/mongod.conf
[root@lzx yum.repos.d]# netstat -lntp |grep mongod
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1522/mongod  
  • Enter the mongo shell:
[root@lzx yum.repos.d]# mongo
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27017              //shows the IP and the listening port
MongoDB server version: 3.4.16
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
    http://docs.mongodb.org/
Questions? Try the support group
    http://groups.google.com/group/mongodb-user
Server has startup warnings: 
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] 
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] 
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] 
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] 
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'          //kernel-related warnings; these can be ignored
2018-08-25T10:30:18.431+0800 I CONTROL  [initandlisten] 
> 

If the listening port is not the default 27017, add the --port option when connecting, e.g.:

mongo --port 27018

To connect to mongodb on a remote host, add --host, e.g.:

mongo --host 127.0.0.1

If authentication has been enabled, the username and password must be supplied when connecting, e.g.:

mongo -u username -p password --authenticationDatabase dbname             //quite similar to mysql
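
Recent versions of the mongo shell can also take a single connection string that bundles host, port, credentials and authentication database into one URI (treat this as an assumption and verify it on your shell version; the credentials below are placeholders):

mongo "mongodb://username:password@192.168.100.150:27018/admin"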

MongoDB User Management

We are already inside the mongo shell from the steps above.

  • Switch to the admin database:
> use admin              //use switches databases; administrative user management is done from the admin database
switched to db admin
> db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "admin123", roles: [ { role: "root", db:"admin" } ] } )            //创建用户并授权;customData表示对用户的描述,可省略;roles表示角色,指定库的权限
Successfully added user: {
    "user" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}
> db.system.users.find()          //list all users; run this from the admin database
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "J8r1+mWlE53xnEKSiQ98hA==", "storedKey" : "7dl9ZmLoj8AHTb7LguX/w4C9X9U=", "serverKey" : "ZMpgYfQMxGAULJh9a5rFijPny2E=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
//admin.admin means the admin user of the admin database
> show users              //list all users of the current database
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}
> db.createUser({user:"lzx",pwd:"123123",roles:[{role:"read",db:"testdb"}]})             //创建lzx用户,设为read角色,针对的库是testdb
Successfully added user: {
    "user" : "lzx",
    "roles" : [
        {
            "role" : "read",
            "db" : "testdb"
        }
    ]
}
> show users              //view the newly created user
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}
{
    "_id" : "admin.lzx",
    "user" : "lzx",
    "db" : "admin",
    "roles" : [
        {
            "role" : "read",
            "db" : "testdb"
        }
    ]
}
> db.dropUser('lzx')            //delete the lzx user we just created
true
> show users          
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}           //the newly created user is gone

After adding users, authentication has to be switched on for them to take effect, which means editing the service startup file:

[root@lzx yum.repos.d]# vim /usr/lib/systemd/system/mongod.service         //make the change below
Environment="OPTIONS=-f /etc/mongod.conf"    change to    Environment="OPTIONS=--auth -f /etc/mongod.conf"
[root@lzx yum.repos.d]# systemctl daemon-reload         //reload unit files after changing them
[root@lzx yum.repos.d]# systemctl restart mongod        //restart the mongod service
[root@lzx yum.repos.d]# ps aux |grep mongod
mongod     8049 17.8  1.0 972380 41420 ?        Sl   14:03   0:02 /usr/bin/mongod --auth -f /etc/mongod.conf      //--auth is now present; only with it does logging in with a username and password take effect
[root@lzx yum.repos.d]# mongo --host 127.0.0.1 --port 27017
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.4.16
> use admin
switched to db admin
> show users
2018-08-25T14:06:39.482+0800 E QUERY    [thread1] Error: not authorized on admin to execute command { usersInfo: 1.0 } :            //the user we just connected as is not authorized to run this command
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.getUsers@src/mongo/shell/db.js:1539:1
shellHelper.show@src/mongo/shell/utils.js:771:9
shellHelper@src/mongo/shell/utils.js:678:15
@(shellhelp2):1:1
[root@lzx yum.repos.d]# mongo --host 127.0.0.1 --port 27017 -u 'admin' -p 'admin123' --authenticationDatabase "admin"            //log in again, this time specifying the user, password and authentication database
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.4.16
Server has startup warnings: 
2018-08-25T14:03:27.518+0800 I CONTROL  [initandlisten] 
2018-08-25T14:03:27.518+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-25T14:03:27.518+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-25T14:03:27.518+0800 I CONTROL  [initandlisten] 
2018-08-25T14:03:27.518+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-25T14:03:27.518+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-25T14:03:27.518+0800 I CONTROL  [initandlisten] 
> use admin
switched to db admin
> show users             //logged in this way, the command now works
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}
> use db1
switched to db db1
> db.createUser({user:"test1",pwd:"123123",roles:[{role:"readWrite",db:"db1"},{role:"read",db:"db2"}]})         //创建用户test1,对db1库有读写权限,对db2库只读
Successfully added user: {
    "user" : "test1",
    "roles" : [
        {
            "role" : "readWrite",
            "db" : "db1"
        },
        {
            "role" : "read",
            "db" : "db2"
        }
    ]
}

We ran use db1 first so that the user is created in the db1 database; a user's information lives with the database it was created in.

> db.auth('test1','123123')        //authenticate as the user from inside the shell
1          //a return value of 1 means authentication succeeded
> use db2
switched to db db2
> db.auth('test1','123123')
Error: Authentication failed.
0            //authentication fails under db2
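
As an aside, instead of editing the systemd unit, the same effect can be achieved in /etc/mongod.conf itself (a sketch; security.authorization is a standard mongod option):

security:
  authorization: enabled

After adding this block, restart mongod and the server requires authentication exactly as if it had been started with --auth.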

MongoDB provides these roles (an example of granting and revoking them follows the list):

read: lets the user read the specified database
readWrite: lets the user read and write the specified database
dbAdmin: lets the user perform administrative tasks in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: lets the user write to the system.users collection, i.e. create, delete and manage users in the specified database
clusterAdmin: only available in the admin database; grants administrative rights over all sharding and replica set related functions
readAnyDatabase: only available in the admin database; grants read access to all databases
readWriteAnyDatabase: only available in the admin database; grants read and write access to all databases
userAdminAnyDatabase: only available in the admin database; grants userAdmin privileges on all databases
dbAdminAnyDatabase: only available in the admin database; grants dbAdmin privileges on all databases
root: only available in the admin database; the superuser with full privileges
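
For example, a minimal sketch of granting and revoking roles on an existing user (the opsuser account is made up; grantRolesToUser, revokeRolesFromUser and getUser are standard shell helpers):

use admin
db.createUser({user:"opsuser",pwd:"opspass",roles:[{role:"readAnyDatabase",db:"admin"}]})
db.grantRolesToUser("opsuser",[{role:"dbAdmin",db:"db1"}])              //add dbAdmin on db1
db.revokeRolesFromUser("opsuser",[{role:"readAnyDatabase",db:"admin"}]) //take a role away again
db.getUser("opsuser")                                                   //inspect the user's current roles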

Creating Collections and Managing Data in MongoDB

> use db1
switched to db db1
> db.createCollection("mycol",{capped:true,size:6142800,max:10000})           //创建集合;集合名为mycol;option可选,用来配置集合的参数
//capped true/false (可选),如果为true,则启用封顶集合。封顶集合是固定大小的集合,当它达到最大大小,会自动覆盖最早的条目。如果指定为true,也需要指定尺寸参数
//size(可选),指定最大大小字节封顶集合。如果capped是true,那么就必须指定这个字段,单位B
//max(可选),指定封顶集合允许在文件的最大数量
{ "ok" : 1 }           //返回1说明刚刚执行成功
> show tables         //list collections
mycol
> show collections          //this also lists collections
mycol
> db.Account.insert({AccountID:1,UserName:"123",password:"123456"})          //insert data into a collection; if the collection does not exist it is created automatically
WriteResult({ "nInserted" : 1 })
> show collections
Account
mycol
> db.Account.insert({AccountID:2,UserName:"aaa",password:"aaaaaa"})           //insert another document
WriteResult({ "nInserted" : 1 })
> show collections
Account         //still the Account collection
mycol
> db.Account.update({AccountID:1},{"$set":{"Age":20}})         //update the document with AccountID:1 in the collection
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.Account.find()         //list all documents
{ "_id" : ObjectId("5b80feb826e56a836ac4d168"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
{ "_id" : ObjectId("5b8100cc26e56a836ac4d169"), "AccountID" : 2, "UserName" : "aaa", "password" : "aaaaaa" }
> db.Account.find({AccountID:1})            //query with a condition
{ "_id" : ObjectId("5b80feb826e56a836ac4d168"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
> db.Account.remove({AccountID:1})
WriteResult({ "nRemoved" : 1 })          //delete documents matching a condition
> db.Account.find()
{ "_id" : ObjectId("5b8100cc26e56a836ac4d169"), "AccountID" : 2, "UserName" : "aaa", "password" : "aaaaaa" }
> db.Account.drop()       //drop all documents in the Account collection, i.e. drop the collection itself
true
> show collections
mycol
> db.mycol.drop()
true
> show collections
> db.col123.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
> db.printCollectionStats()           //show the status of every collection
col123
{
    "ns" : "db1.col123",
    "size" : 80,
    "count" : 1,
    "avgObjSize" : 80,
    "storageSize" : 16384,
    "capped" : false,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1
        },
        "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
        "type" : "file",
        "uri" : "statistics:table:collection-4-6088379814182883756",
        "LSM" : {
            "bloom filter false positives" : 0,
            "bloom filter hits" : 0,
            "bloom filter misses" : 0,
            "bloom filter pages evicted from cache" : 0,
            "bloom filter pages read into cache" : 0,
            "bloom filters in the LSM tree" : 0,
            "chunks in the LSM tree" : 0,
            "highest merge generation in the LSM tree" : 0,
            "queries that could have benefited from a Bloom filter that did not exist" : 0,
            "sleep for LSM checkpoint throttle" : 0,
            "sleep for LSM merge throttle" : 0,
            "total size of bloom filters" : 0
        },
        "block-manager" : {
            "allocations requiring file extension" : 3,
            "blocks allocated" : 3,
            "blocks freed" : 0,
            "checkpoint size" : 4096,
            "file allocation unit size" : 4096,
            "file bytes available for reuse" : 0,
            "file magic number" : 120897,
            "file major version number" : 1,
            "file size in bytes" : 16384,
            "minor version number" : 0
        },
        "btree" : {
            "btree checkpoint generation" : 22,
            "column-store fixed-size leaf pages" : 0,
            "column-store internal pages" : 0,
            "column-store variable-size RLE encoded values" : 0,
            "column-store variable-size deleted values" : 0,
            "column-store variable-size leaf pages" : 0,
            "fixed-record size" : 0,
            "maximum internal page key size" : 368,
            "maximum internal page size" : 4096,
            "maximum leaf page key size" : 2867,
            "maximum leaf page size" : 32768,
            "maximum leaf page value size" : 67108864,
            "maximum tree depth" : 3,
            "number of key/value pairs" : 0,
            "overflow pages" : 0,
            "pages rewritten by compaction" : 0,
            "row-store internal pages" : 0,
            "row-store leaf pages" : 0
        },
        "cache" : {
            "bytes currently in the cache" : 952,
            "bytes read into cache" : 0,
            "bytes written from cache" : 175,
            "checkpoint blocked page eviction" : 0,
            "data source pages selected for eviction unable to be evicted" : 0,
            "hazard pointer blocked page eviction" : 0,
            "in-memory page passed criteria to be split" : 0,
            "in-memory page splits" : 0,
            "internal pages evicted" : 0,
            "internal pages split during eviction" : 0,
            "leaf pages split during eviction" : 0,
            "modified pages evicted" : 0,
            "overflow pages read into cache" : 0,
            "overflow values cached in memory" : 0,
            "page split during eviction deepened the tree" : 0,
            "page written requiring lookaside records" : 0,
            "pages read into cache" : 0,
            "pages read into cache requiring lookaside entries" : 0,
            "pages requested from the cache" : 1,
            "pages written from cache" : 2,
            "pages written requiring in-memory restoration" : 0,
            "tracked dirty bytes in the cache" : 0,
            "unmodified pages evicted" : 0
        },
        "cache_walk" : {
            "Average difference between current eviction generation when the page was last considered" : 0,
            "Average on-disk page image size seen" : 0,
            "Clean pages currently in cache" : 0,
            "Current eviction generation" : 0,
            "Dirty pages currently in cache" : 0,
            "Entries in the root page" : 0,
            "Internal pages currently in cache" : 0,
            "Leaf pages currently in cache" : 0,
            "Maximum difference between current eviction generation when the page was last considered" : 0,
            "Maximum page size seen" : 0,
            "Minimum on-disk page image size seen" : 0,
            "On-disk page image sizes smaller than a single allocation unit" : 0,
            "Pages created in memory and never written" : 0,
            "Pages currently queued for eviction" : 0,
            "Pages that could not be queued for eviction" : 0,
            "Refs skipped during cache traversal" : 0,
            "Size of the root page" : 0,
            "Total number of pages currently in cache" : 0
        },
        "compression" : {
            "compressed pages read" : 0,
            "compressed pages written" : 0,
            "page written failed to compress" : 0,
            "page written was too small to compress" : 2,
            "raw compression call failed, additional data available" : 0,
            "raw compression call failed, no additional data available" : 0,
            "raw compression call succeeded" : 0
        },
        "cursor" : {
            "bulk-loaded cursor-insert calls" : 0,
            "create calls" : 1,
            "cursor-insert key and value bytes inserted" : 81,
            "cursor-remove key bytes removed" : 0,
            "cursor-update value bytes updated" : 0,
            "insert calls" : 1,
            "next calls" : 0,
            "prev calls" : 1,
            "remove calls" : 0,
            "reset calls" : 2,
            "restarted searches" : 0,
            "search calls" : 0,
            "search near calls" : 0,
            "truncate calls" : 0,
            "update calls" : 0
        },
        "reconciliation" : {
            "dictionary matches" : 0,
            "fast-path pages deleted" : 0,
            "internal page key bytes discarded using suffix compression" : 0,
            "internal page multi-block writes" : 0,
            "internal-page overflow keys" : 0,
            "leaf page key bytes discarded using prefix compression" : 0,
            "leaf page multi-block writes" : 0,
            "leaf-page overflow keys" : 0,
            "maximum blocks required for a page" : 0,
            "overflow values written" : 0,
            "page checksum matches" : 0,
            "page reconciliation calls" : 2,
            "page reconciliation calls for eviction" : 0,
            "pages deleted" : 0
        },
        "session" : {
            "object compaction" : 0,
            "open cursor count" : 1
        },
        "transaction" : {
            "update conflicts" : 0
        }
    },
    "nindexes" : 1,
    "totalIndexSize" : 16384,
    "indexSizes" : {
        "_id_" : 16384
    },
    "ok" : 1
}
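
Beyond the basic insert/update/find/remove shown above, here are a few commonly used query modifiers, sketched against the same example Account collection:

db.Account.find({AccountID:{$gte:1}})               //comparison operator: AccountID >= 1
db.Account.find({},{UserName:1,_id:0})              //projection: return only the UserName field
db.Account.find().sort({AccountID:-1}).limit(5)     //sort descending and limit the result
db.Account.count()                                  //count the documents in the collection
db.Account.createIndex({UserName:1})                //create an ascending index on UserName
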
---

The PHP MongoDB Extension

There are two MongoDB extensions for PHP: mongo.so and mongodb.so. mongo.so targets PHP 5.x and is quite old; the newer extension is mongodb.so.

Either one can be used; below we install the mongodb.so extension.

  • Download the extension source package:
[root@lzx yum.repos.d]# cd /usr/local/src/
[root@lzx src]# wget https://pecl.php.net/get/mongodb-1.3.0.tgz
[root@lzx src]# tar zxf mongodb-1.3.0.tgz 
[root@lzx src]# cd mongodb-1.3.0
[root@lzx mongodb-1.3.0]# /usr/local/php-fpm/bin/phpize 
Configuring for:
PHP Api Version:         20131106
Zend Module Api No:      20131226
Zend Extension Api No:   220131226
  • Compile and install:
[root@lzx mongodb-1.3.0]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
[root@lzx mongodb-1.3.0]# echo $?
0
[root@lzx mongodb-1.3.0]# make && make install
[root@lzx mongodb-1.3.0]# echo $?
0
[root@lzx mongodb-1.3.0]# vim /usr/local/php-fpm/etc/php.ini          //add the following line
extension=mongodb.so
[root@lzx mongodb-1.3.0]# /usr/local/php-fpm/bin/php -m
[PHP Modules]
Core
ctype
curl
date
dom
ereg
exif
fileinfo
filter
ftp
gd
hash
iconv
json
libxml
mbstring
mcrypt
mongodb             //mongodb being listed means the extension loaded correctly
mysql
openssl
pcre
PDO
pdo_sqlite
Phar
posix
redis
Reflection
session
SimpleXML
soap
SPL
sqlite3
standard
tokenizer
xml
xmlreader
xmlwriter
zlib

[Zend Modules]
[root@lzx mongodb-1.3.0]# /etc/init.d/php-fpm restart           //restart the php-fpm service
Gracefully shutting down php-fpm  done
Starting php-fpm  done

Installing mongo.so follows essentially the same steps, so it is not repeated here.

MongoDB Replica Sets

MongoDB can also be configured for replication. Early on this was master-slave replication: one master and one slave, much like MySQL, except that the slave was read-only and could not be promoted automatically when the master went down. That mode is now obsolete.

It has been replaced by replica sets. In this mode there is one primary and several secondaries, and the secondaries are still read-only. Members can be given weights (priorities); when the primary goes down, the secondary with the highest priority is automatically promoted to primary.
The architecture can also include an arbiter, a member that only votes on whether the primary is down and stores no data, which also helps prevent split-brain. Because reads and writes both go to the primary, load balancing requires explicitly directing reads at a secondary on the client side (see the sketch below).
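
For reference, the two points above map onto these shell helpers (the host and port in the first line are placeholders, not part of the setup that follows):

rs.addArb("192.168.100.180:27017")                //add an arbiter member: it votes in elections but stores no data
db.getMongo().setReadPref("secondaryPreferred")   //per connection, prefer reading from a secondary when one is available
rs.slaveOk()                                      //older shorthand that allows reads on the secondary you are connected to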

Scenario:

Three machines, with IPs 192.168.100.150 (primary), 192.168.100.160 (secondary 1) and 192.168.100.170 (secondary 2)
All three machines need the mongodb service installed, and all must have the firewall and SELinux disabled

On the primary

  • Edit the configuration file (a consolidated view of the result follows this step):
[root@lzx mongodb-1.3.0]# cd 
[root@lzx ~]# vim /etc/mongod.conf           //make the changes below
port: 27017     change to    port: 27018          //change the port
bindIp: 127.0.0.1    change to     bindIp: 127.0.0.1,192.168.100.150         //the added IP is this machine's internal IP
#replication:       change to     replication:         //remove the leading #
  oplogSizeMB: 20           //add this line right below, indented two spaces, to set the oplog size; remember the space after the colon, otherwise startup fails
  replSetName: lzx          //add this line as well, naming the replica set
[root@lzx ~]# systemctl restart mongod           //restart the mongodb service
[root@lzx ~]# ps aux |grep mongod
mongod     1510  8.0  1.4 1017096 46800 ?       Sl   21:21   0:00 /usr/bin/mongod --auth -f /etc/mongod.conf
[root@lzx ~]# vim /usr/lib/systemd/system/mongod.service           //make the change below
Environment="OPTIONS=--auth -f /etc/mongod.conf"    change to   Environment="OPTIONS=-f /etc/mongod.conf"       //drop --auth to keep the experiment simple
[root@lzx ~]# systemctl daemon-reload          //reload unit files
[root@lzx ~]# systemctl restart mongod
[root@lzx ~]# ps aux |grep mongod
mongod     1716 14.3  1.3 1017096 44488 ?       Sl   21:40   0:00 /usr/bin/mongod -f /etc/mongod.conf
[root@lzx ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.150:27018   0.0.0.0:*               LISTEN      1716/mongod          //this line is new
tcp        0      0 127.0.0.1:27018         0.0.0.0:*               LISTEN      1716/mongod

Also disable the firewall and SELinux on this machine
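
Putting the edits above together, the relevant parts of /etc/mongod.conf on the primary should end up looking roughly like this (YAML, so the two-space indentation under replication matters):

net:
  port: 27018
  bindIp: 127.0.0.1,192.168.100.150

replication:
  oplogSizeMB: 20
  replSetName: lzx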

On secondary 1

  • Edit the configuration file:
[root@lzx1 ~]# vim /etc/mongod.conf           //make the changes below
port: 27017     change to    port: 27019          //change the port
bindIp: 127.0.0.1    change to     bindIp: 127.0.0.1,192.168.100.160         //the added IP is this machine's internal IP
#replication:       change to     replication:         //remove the leading #
  oplogSizeMB: 20           //add this line right below, indented two spaces, to set the oplog size; remember the space after the colon, otherwise startup fails
  replSetName: lzx          //add this line as well, naming the replica set
[root@lzx1 ~]# vim /usr/lib/systemd/system/mongod.service           //make the change below
Environment="OPTIONS=--auth -f /etc/mongod.conf"    change to   Environment="OPTIONS=-f /etc/mongod.conf"       //drop --auth to keep the experiment simple
[root@lzx1 ~]# systemctl daemon-reload          //reload unit files
[root@lzx1 ~]# systemctl restart mongod
[root@lzx1 ~]# ps aux |grep mongod
mongod     2076 14.0  1.2 1017092 48488 ?       Sl   21:40   0:00 /usr/bin/mongod -f /etc/mongod.conf
[root@lzx1 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.160:27019   0.0.0.0:*               LISTEN      2076/mongod          //this line is new
tcp        0      0 127.0.0.1:27019         0.0.0.0:*               LISTEN      2076/mongod

Also disable the firewall and SELinux on this machine

On secondary 2

  • Edit the configuration file:
[root@lzx2 ~]# vim /etc/mongod.conf           //make the changes below
port: 27017     change to    port: 27020          //change the port
bindIp: 127.0.0.1    change to     bindIp: 127.0.0.1,192.168.100.170         //the added IP is this machine's internal IP
#replication:       change to     replication:         //remove the leading #
  oplogSizeMB: 20           //add this line right below, indented two spaces, to set the oplog size; remember the space after the colon, otherwise startup fails
  replSetName: lzx          //add this line as well, naming the replica set
[root@lzx2 ~]# vim /usr/lib/systemd/system/mongod.service           //make the change below
Environment="OPTIONS=--auth -f /etc/mongod.conf"    change to   Environment="OPTIONS=-f /etc/mongod.conf"       //drop --auth to keep the experiment simple
[root@lzx2 ~]# systemctl daemon-reload          //reload unit files
[root@lzx2 ~]# systemctl restart mongod
[root@lzx2 ~]# ps aux |grep mongod
mongod     2086 11.5  1.1 1017100 44588 ?       Sl   21:39   0:00 /usr/bin/mongod -f /etc/mongod.conf
[root@lzx2 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.170:27020   0.0.0.0:*               LISTEN      2086/mongod         //this line is new
tcp        0      0 127.0.0.1:27020         0.0.0.0:*               LISTEN      2086/mongod 

Also disable the firewall and SELinux on this machine

Back on the primary

  • Connect to the primary by running mongo on it:
[root@lzx ~]# mongo --port 27018
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27018
MongoDB server version: 3.4.16
Server has startup warnings: 
2018-08-25T21:40:19.064+0800 I CONTROL  [initandlisten] 
2018-08-25T21:40:19.064+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-25T21:40:19.064+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-08-25T21:40:19.064+0800 I CONTROL  [initandlisten] 
2018-08-25T21:40:19.065+0800 I CONTROL  [initandlisten] 
2018-08-25T21:40:19.065+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-25T21:40:19.065+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-25T21:40:19.065+0800 I CONTROL  [initandlisten] 
2018-08-25T21:40:19.065+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-25T21:40:19.065+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-25T21:40:19.065+0800 I CONTROL  [initandlisten] 
> use admin
switched to db admin
  • Create the replica set:
> config={_id:"lzx",members:[{_id:0,host:"192.168.100.150:27018"},{_id:1,host:"192.168.100.160:27019"},{_id:2,host:"192.168.100.170:27020"}]}
{
    "_id" : "lzx",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:27018"
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:27019"
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:27020"
        }
    ]
}
> rs.initiate(config)          //initialize the replica set; this step fails easily: the databases must contain no data beforehand, otherwise it errors out
{ "ok" : 1 }
lzx:OTHER> rs.status()            //check the replica set status
{
    "set" : "lzx",
    "date" : ISODate("2018-08-26T09:29:41.449Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1535275776, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535275776, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535275776, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.150:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",          //192.168.100.150是primary
            "uptime" : 268,
            "optime" : {
                "ts" : Timestamp(1535275776, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T09:29:36Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1535275644, 1),
            "electionDate" : ISODate("2018-08-26T09:27:24Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.100.160:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",        //192.168.100.160是secondary
            "uptime" : 148,
            "optime" : {
                "ts" : Timestamp(1535275776, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535275776, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T09:29:36Z"),
            "optimeDurableDate" : ISODate("2018-08-26T09:29:36Z"),
            "lastHeartbeat" : ISODate("2018-08-26T09:29:40.721Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T09:29:40.698Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.170:27020",
            "syncSourceHost" : "192.168.100.170:27020",
            "syncSourceId" : 2,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.100.170:27020",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",         //192.168.100.170是secondary
            "uptime" : 148,
            "optime" : {
                "ts" : Timestamp(1535275776, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535275776, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T09:29:36Z"),
            "optimeDurableDate" : ISODate("2018-08-26T09:29:36Z"),
            "lastHeartbeat" : ISODate("2018-08-26T09:29:40.721Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T09:29:39.563Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.150:27018",
            "syncSourceHost" : "192.168.100.150:27018",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

If the two secondaries show "stateStr": "STARTUP", run the following:

> var config={_id:"lzx",members:[{_id:0,host:"192.168.100.150:27018"},{_id:1,host:"192.168.100.160:27019"},{_id:2,host:"192.168.100.170:27020"}]}
> rs.reconfig(config) 

Check rs.status() again and the secondaries' state will have changed to SECONDARY

If there is no PRIMARY, raise a member's priority; the member with the highest priority becomes the PRIMARY

  • Test the replica set:
lzx:PRIMARY> use mydb
switched to db mydb
lzx:PRIMARY> db.acc.insert({AccountID:1,UserName:"123",password:"123456"})          //create a collection and insert data
WriteResult({ "nInserted" : 1 })
lzx:PRIMARY> show dbs
admin  0.000GB
local  0.000GB
mydb   0.000GB          //the mydb database exists
lzx:PRIMARY> use mydb
switched to db mydb
lzx:PRIMARY> show tables
acc            //the acc collection exists

On secondary 1

  • Check the data just created on the primary:
[root@lzx1 ~]# mongo --port 27019
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.4.16
Server has startup warnings: 
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] 
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] 
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] 
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] 
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T05:32:20.355-0400 I CONTROL  [initandlisten] 
lzx:SECONDARY> show dbs
2018-08-26T05:58:31.215-0400 E QUERY    [thread1] Error: listDatabases failed:{
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",         //有这样的报错信息
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1
shellHelper.show@src/mongo/shell/utils.js:788:19
shellHelper@src/mongo/shell/utils.js:678:15
@(shellhelp2):1:1       
lzx:SECONDARY> rs.slaveOk()            //set slaveOk=true
lzx:SECONDARY> show dbs
admin  0.000GB
local  0.000GB
mydb   0.000GB          //the mydb database is visible
lzx:SECONDARY> use mydb
switched to db mydb
lzx:SECONDARY> show tables
acc          //the acc collection is visible

On secondary 2

  • Check the data just created on the primary (same as above):
[root@lzx2 ~]# mongo --port 27020
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27020/
MongoDB server version: 3.4.16
Server has startup warnings: 
2018-08-26T05:25:13.079-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:13.079-0400 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-26T05:25:13.079-0400 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-08-26T05:25:13.079-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:13.079-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:13.079-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-26T05:25:13.079-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T05:25:13.080-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:13.080-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-26T05:25:13.080-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T05:25:13.080-0400 I CONTROL  [initandlisten]
lzx:SECONDARY> show dbs
2018-08-26T05:58:31.215-0400 E QUERY    [thread1] Error: listDatabases failed:{
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",         //有这样的报错信息
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1
shellHelper.show@src/mongo/shell/utils.js:788:19
shellHelper@src/mongo/shell/utils.js:678:15
@(shellhelp2):1:1       
lzx:SECONDARY> rs.slaveOk()            //set slaveOk=true
lzx:SECONDARY> show dbs
admin  0.000GB
local  0.000GB
mydb   0.000GB          //the mydb database is visible
lzx:SECONDARY> use mydb
switched to db mydb
lzx:SECONDARY> show tables
acc          //the acc collection is visible

Continuing on the primary:

  • Check the priorities:
lzx:PRIMARY> rs.config()
{
    "_id" : "lzx",
    "version" : 1,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,         //权重为1
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,         //权重为1
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:27020",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,        //权重为1
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : 60000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5b827271b551f8b2b89e2ff0")
    }
}
  • Add a firewall rule to simulate an outage:
[root@lzx ~]# iptables -I INPUT -p tcp --dport 27018 -j DROP         //block port 27018
  • Check the replica set status on secondary 1:
lzx:SECONDARY> rs.status()
{
    "set" : "lzx",
    "date" : ISODate("2018-08-26T10:13:08.119Z"),
    "myState" : 2,
    "term" : NumberLong(2),
    "syncingTo" : "192.168.100.170:27020",
    "syncSourceHost" : "192.168.100.170:27020",
    "syncSourceId" : 2,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1535277967, 1),
            "t" : NumberLong(2)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535277967, 1),
            "t" : NumberLong(2)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535277967, 1),
            "t" : NumberLong(2)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.150:27018",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",            //192.168.100.150变成了不可达
            "uptime" : 0,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2018-08-26T10:13:06.423Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T10:13:07.227Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Couldn't get a connection within the time limit",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "192.168.100.160:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2449,
            "optime" : {
                "ts" : Timestamp(1535277967, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2018-08-26T10:06:07Z"),
            "syncingTo" : "192.168.100.170:27020",
            "syncSourceHost" : "192.168.100.170:27020",
            "syncSourceId" : 2,
            "infoMessage" : "",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 2,
            "name" : "192.168.100.170:27020",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",            //192.168.100.170变成了primary
            "uptime" : 2344,
            "optime" : {
                "ts" : Timestamp(1535277967, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535277967, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2018-08-26T10:06:07Z"),
            "optimeDurableDate" : ISODate("2018-08-26T10:06:07Z"),
            "lastHeartbeat" : ISODate("2018-08-26T10:13:06.469Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T10:13:06.866Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1535277926, 2),
            "electionDate" : ISODate("2018-08-26T10:05:26Z"),
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
  • On the old primary: delete the rule and check whether it becomes primary again:
[root@lzx ~]# iptables -D INPUT -p tcp --dport 27018 -j DROP
[root@lzx ~]# mongo --port 27018
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.4.16
Server has startup warnings: 
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] 
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T05:25:16.157-0400 I CONTROL  [initandlisten] 
lzx:SECONDARY>               //192.168.100.150 is still a secondary; since all three priorities are equal, it would only become primary again if its priority were the highest

On secondary 2 (because it is now the primary)

  • Set the priorities:
lzx:PRIMARY> cfg=rs.conf()
{
    "_id" : "lzx",
    "version" : 1,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:27020",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : 60000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5b827271b551f8b2b89e2ff0")
    }
}
lzx:PRIMARY> cfg.members[0].priority = 3           //set member 0's priority to 3
3
lzx:PRIMARY> cfg.members[1].priority = 2           //set member 1's priority to 2
2
lzx:PRIMARY> cfg.members[2].priority = 1           //set member 2's priority to 1
1
lzx:PRIMARY> rs.reconfig(cfg)           //apply the priority changes
{ "ok" : 1 }
lzx:PRIMARY>            //press Enter
2018-08-26T06:21:40.200-0400 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27020 (127.0.0.1) failed
2018-08-26T06:21:40.200-0400 I NETWORK  [thread1] reconnect 127.0.0.1:27020 (127.0.0.1) ok
lzx:SECONDARY>          //secondary 2 has dropped from primary back to secondary
lzx:SECONDARY> rs.config()             //check the priorities
{
    "_id" : "lzx",
    "version" : 2,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 3,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 2,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:27020",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : 60000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5b827271b551f8b2b89e2ff0")
    }
}

MongoDB Sharding

Sharding means splitting the database up, distributing a large collection across multiple servers. For example, 100 GB of data can be split into 10 pieces stored on 10 servers, so each machine holds only 10 GB.

Storage of and access to sharded data is handled by a mongos process, which makes mongos the core of the whole sharding architecture. It is transparent to clients: a client only has to send its reads and writes to mongos.

Although sharding spreads data across many servers, each node still needs a standby role so the data stays highly available. When the system needs more space or resources, sharding makes it easy to scale out on demand: simply add more machines running the mongodb service to the sharded cluster.

Sharding-related concepts (the key mongos-side commands are sketched after these descriptions):

mongos: the entry point for requests to the cluster. All requests are coordinated through mongos; there is no need to add a routing layer to the application, because mongos itself is a request dispatch center that forwards every data request to the appropriate shard server.
In production there are usually multiple mongos processes acting as entry points, so that one of them failing does not make the whole mongodb deployment unreachable.

config server: the configuration servers store all of the database metadata (routing and shard configuration). mongos does not itself persist the shard and routing information; it only caches it in memory, while the config servers actually store it.
The first time mongos starts, or whenever it restarts, it loads its configuration from the config servers; afterwards, any configuration change is propagated to every mongos so that routing stays accurate. In production there are usually multiple config servers,
because they hold the sharding metadata, which must not be lost.

shard: a mongodb instance that stores part of a collection's data. Each shard is a standalone mongodb service or a replica set; in production, every shard should be a replica set.
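
Once all of the pieces below are up, the cluster is tied together from a mongos with commands along these lines (a preview sketch using this article's replica set names and ports; testdb and table1 are made-up examples):

sh.addShard("shard1/192.168.100.150:27001,192.168.100.160:27001,192.168.100.170:27001")    //register replica set shard1 as a shard
sh.enableSharding("testdb")                          //allow collections in testdb to be sharded
sh.shardCollection("testdb.table1",{id:1})           //shard testdb.table1 on the id field
sh.status()                                          //show shards, databases and chunk distribution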

Scenario:

Three machines: A, B, C
A: 192.168.100.150 B: 192.168.100.160 C: 192.168.100.170
A runs: mongos, config server, replica set 1 primary, replica set 2 secondary, replica set 3 arbiter
B runs: mongos, config server, replica set 1 arbiter, replica set 2 primary, replica set 3 secondary
C runs: mongos, config server, replica set 1 secondary, replica set 2 arbiter, replica set 3 primary
Port allocation: mongos 20000, config 21000, replica set 1 27001, replica set 2 27002, replica set 3 27003

Create the directories each role needs on all three machines:
mkdir -p /data/mongodb/mongos/log
mkdir -p /data/mongodb/config/{data,log}
mkdir -p /data/mongodb/shard1/{data,log}
mkdir -p /data/mongodb/shard2/{data,log}
mkdir -p /data/mongodb/shard3/{data,log}

Disable the firewall and SELinux on all three machines, or add firewall rules for the ports involved (how the arbiter layout above will eventually be initialized is sketched below)
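
For orientation, once the shard processes below are running, replica set shard1 will be initialized from one of its data-bearing members (A or C, not the arbiter) roughly like this, matching the layout above where B holds shard1's arbiter:

use admin
config={_id:"shard1",members:[{_id:0,host:"192.168.100.150:27001"},{_id:1,host:"192.168.100.160:27001",arbiterOnly:true},{_id:2,host:"192.168.100.170:27001"}]}
rs.initiate(config)        //the arbiterOnly member votes in elections but stores no data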

config server configuration

On machine A
  • Create the directories:
[root@lzx ~]# mkdir -p /data/mongodb/mongos/log
[root@lzx ~]# mkdir -p /data/mongodb/config/{data,log}
[root@lzx ~]# mkdir -p /data/mongodb/shard1/{data,log}
[root@lzx ~]# mkdir -p /data/mongodb/shard2/{data,log}
[root@lzx ~]# mkdir -p /data/mongodb/shard3/{data,log}
[root@lzx ~]# mkdir /etc/mongod/
  • Create the configuration file:
[root@lzx ~]# vim /etc/mongod/config.conf       //add the following content
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.100.150
port = 21000
fork = true
configsvr = true
replSet=configs         //replica set name
maxConns=20000          //maximum number of connections
  • Start the service:
[root@lzx ~]# mongod -f /etc/mongod/config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10212
child process started successfully, parent exiting
[root@lzx ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.150:21000    0.0.0.0:*               LISTEN      10212/mongod             //the service started successfully
tcp        0      0 192.168.100.150:27018    0.0.0.0:*               LISTEN      10185/mongod        
tcp        0      0 127.0.0.1:27018         0.0.0.0:*               LISTEN      10185/mongod       
On machine B
  • Create the directories:
[root@lzx1 ~]# mkdir -p /data/mongodb/mongos/log
[root@lzx1 ~]# mkdir -p /data/mongodb/config/{data,log}
[root@lzx1 ~]# mkdir -p /data/mongodb/shard1/{data,log}
[root@lzx1 ~]# mkdir -p /data/mongodb/shard2/{data,log}
[root@lzx1 ~]# mkdir -p /data/mongodb/shard3/{data,log}
[root@lzx1 ~]# mkdir /etc/mongod/
  • Create the configuration file:
[root@lzx1 ~]# vim /etc/mongod/config.conf       //add the following content
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.100.160
port = 21000
fork = true
configsvr = true
replSet=configs         //replica set name
maxConns=20000          //maximum number of connections
  • Start the service:
[root@lzx1 ~]# mongod -f /etc/mongod/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10472
child process started successfully, parent exiting
[root@lzx1 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.160:27019    0.0.0.0:*               LISTEN      10446/mongod           
tcp        0      0 127.0.0.1:27019         0.0.0.0:*               LISTEN      10446/mongod        
tcp        0      0 192.168.100.160:21000    0.0.0.0:*               LISTEN      10472/mongod          //the service started successfully
On machine C
  • Create the directories:
[root@lzx2 ~]# mkdir -p /data/mongodb/mongos/log
[root@lzx2 ~]# mkdir -p /data/mongodb/config/{data,log}
[root@lzx2 ~]# mkdir -p /data/mongodb/shard1/{data,log}
[root@lzx2 ~]# mkdir -p /data/mongodb/shard2/{data,log}
[root@lzx2 ~]# mkdir -p /data/mongodb/shard3/{data,log}
[root@lzx2 ~]# mkdir /etc/mongod/
  • Create the configuration file:
[root@lzx2 ~]# vim /etc/mongod/config.conf       //add the following content
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.100.170
port = 21000
fork = true
configsvr = true
replSet=configs         //replica set name
maxConns=20000          //maximum number of connections
  • Start the service:
[root@lzx2 ~]# mongod -f /etc/mongod/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10444
child process started successfully, parent exiting
[root@lzx2 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.170:27020    0.0.0.0:*               LISTEN      10418/mongod        
tcp        0      0 127.0.0.1:27020         0.0.0.0:*               LISTEN      10418/mongod        
tcp        0      0 192.168.100.170:21000    0.0.0.0:*               LISTEN      10444/mongod          //the service started successfully
Creating the replica set

All three machines now have the service running; since their weights are all the same, the replica set can be created from any of them. Below I do it on machine A.

  • Create and initialize the replica set:
[root@lzx ~]# mongo --host 192.168.100.150 --port 21000
> use admin
switched to db admin
> config={_id:"configs",members:[{_id:0,host:"192.168.100.150:21000"},{_id:1,host:"192.168.100.160:21000"},{_id:2,host:"192.168.100.170:21000"}]}
{
    "_id" : "configs",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:21000"
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:21000"
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:21000"
        }
    ]
}
> rs.initiate(config)        //initialize the replica set
{ "ok" : 1 }
configs:OTHER> rs.status()           //check the replica set status
{
    "set" : "configs",
    "date" : ISODate("2018-08-27T05:28:33.632Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1535347694, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1535347694, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535347694, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535347694, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.150:21000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",         //192.168.100.150是primary
            "uptime" : 1163,
            "optime" : {
                "ts" : Timestamp(1535347694, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-27T05:28:14Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1535347692, 1),
            "electionDate" : ISODate("2018-08-27T05:28:12Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.100.160:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 32,
            "optime" : {
                "ts" : Timestamp(1535347694, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535347694, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-27T05:28:14Z"),
            "optimeDurableDate" : ISODate("2018-08-27T05:28:14Z"),
            "lastHeartbeat" : ISODate("2018-08-27T05:28:32.620Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T05:28:31.678Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.170:21000",
            "syncSourceHost" : "192.168.100.170:21000",
            "syncSourceId" : 2,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.100.170:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 32,
            "optime" : {
                "ts" : Timestamp(1535347694, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535347694, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-27T05:28:14Z"),
            "optimeDurableDate" : ISODate("2018-08-27T05:28:14Z"),
            "lastHeartbeat" : ISODate("2018-08-27T05:28:32.620Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T05:28:32.763Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.100.150:21000",
            "syncSourceHost" : "192.168.100.150:21000",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

This completes the config server setup

shard configuration

On machine A
  • Create the configuration files:
[root@lzx ~]# vim /etc/mongod/shard1.conf         //add the following content
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.100.150
port = 27001
fork = true
httpinterface=true        //enable the web monitoring interface
rest=true
replSet=shard1         //replica set name
shardsvr = true        
maxConns=20000         //maximum number of connections
[root@lzx ~]# cd /etc/mongod/
[root@lzx mongod]# cp shard1.conf shard2.conf
[root@lzx mongod]# cp shard1.conf shard3.conf
[root@lzx mongod]# vim shard2.conf
[root@lzx mongod]# cat !$
cat shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.100.150
port = 27002          //note the changed port
fork = true
httpinterface=true
rest=true
replSet=shard2         
shardsvr = true
maxConns=20000
[root@lzx mongod]# vim shard3.conf
[root@lzx mongod]# cat !$
cat shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.100.150
port = 27003         //note the changed port
fork = true
httpinterface=true
rest=true
replSet=shard3         
shardsvr = true
maxConns=20000
  • Start the services:
[root@lzx mongod]# mongod -f /etc/mongod/shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10387
child process started successfully, parent exiting
[root@lzx mongod]# mongod -f /etc/mongod/shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10417
child process started successfully, parent exiting
[root@lzx mongod]# mongod -f /etc/mongod/shard3.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10446
child process started successfully, parent exiting
[root@lzx mongod]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.150:27001    0.0.0.0:*               LISTEN      10387/mongod        
tcp        0      0 192.168.100.150:27002    0.0.0.0:*               LISTEN      10417/mongod        
tcp        0      0 192.168.100.150:27003    0.0.0.0:*               LISTEN      10446/mongod        
tcp        0      0 192.168.100.150:28001    0.0.0.0:*               LISTEN      10387/mongod        
tcp        0      0 192.168.100.150:28002    0.0.0.0:*               LISTEN      10417/mongod        
tcp        0      0 192.168.100.150:28003    0.0.0.0:*               LISTEN      10446/mongod        
tcp        0      0 192.168.100.150:21000    0.0.0.0:*               LISTEN      10212/mongod        
tcp        0      0 192.168.100.150:27018    0.0.0.0:*               LISTEN      10185/mongod        
tcp        0      0 127.0.0.1:27018         0.0.0.0:*               LISTEN      10185/mongod    
Perform the following on machine B
  • Add the configuration file:
[root@lzx1 ~]# vim /etc/mongod/shard1.conf         //write the following content
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.100.160
port = 27001
fork = true
httpinterface=true        //enable the HTTP/web monitoring interface
rest=true
replSet=shard1         //replica set name
shardsvr = true        
maxConns=20000         //maximum number of connections
[root@lzx1 ~]# cd /etc/mongod/
[root@lzx1 mongod]# cp shard1.conf shard2.conf
[root@lzx1 mongod]# cp shard1.conf shard3.conf
[root@lzx1 mongod]# vim shard2.conf
[root@lzx1 mongod]# cat !$
cat shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.100.160
port = 27002         //note: change the port
fork = true
httpinterface=true
rest=true
replSet=shard2         
shardsvr = true
maxConns=20000
[root@lzx1 mongod]# vim shard3.conf
[root@lzx1 mongod]# cat !$
cat shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.100.160
port = 27003         //note: change the port
fork = true
httpinterface=true
rest=true
replSet=shard3         
shardsvr = true
maxConns=20000
  • Start the services:
[root@lzx1 mongod]# mongod -f /etc/mongod/shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10795
child process started successfully, parent exiting
[root@lzx1 mongod]# mongod -f /etc/mongod/shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10824
child process started successfully, parent exiting
[root@lzx1 mongod]# mongod -f /etc/mongod/shard3.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10853
child process started successfully, parent exiting
[root@lzx1 mongod]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.160:27019    0.0.0.0:*               LISTEN      10446/mongod        
tcp        0      0 127.0.0.1:27019         0.0.0.0:*               LISTEN      10446/mongod        
tcp        0      0 192.168.100.160:27001    0.0.0.0:*               LISTEN      10795/mongod        
tcp        0      0 192.168.100.160:27002    0.0.0.0:*               LISTEN      10824/mongod        
tcp        0      0 192.168.100.160:27003    0.0.0.0:*               LISTEN      10853/mongod        
tcp        0      0 192.168.100.160:28001    0.0.0.0:*               LISTEN      10795/mongod        
tcp        0      0 192.168.100.160:28002    0.0.0.0:*               LISTEN      10824/mongod        
tcp        0      0 192.168.100.160:28003    0.0.0.0:*               LISTEN      10853/mongod        
tcp        0      0 192.168.100.160:21000    0.0.0.0:*               LISTEN      10472/mongod        
Perform the following on machine C
  • Add the configuration file:
[root@lzx2 ~]# vim /etc/mongod/shard1.conf         //write the following content
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.100.170
port = 27001
fork = true
httpinterface=true        //enable the HTTP/web monitoring interface
rest=true
replSet=shard1         //replica set name
shardsvr = true        
maxConns=20000         //maximum number of connections
[root@lzx2 ~]# cd /etc/mongod/
[root@lzx2 mongod]# cp shard1.conf shard2.conf
[root@lzx2 mongod]# cp shard1.conf shard3.conf
[root@lzx2 mongod]# vim shard2.conf
[root@lzx2 mongod]# cat !$
cat shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.100.170
port = 27002          //note: change the port
fork = true
httpinterface=true
rest=true
replSet=shard2         
shardsvr = true
maxConns=20000
[root@lzx2 mongod]# vim shard3.conf
[root@lzx2 mongod]# cat !$
cat shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.100.170
port = 27003          //note: change the port
fork = true
httpinterface=true
rest=true
replSet=shard3         
shardsvr = true
maxConns=20000
  • Start the services:
[root@lzx2 mongod]# mongod -f /etc/mongod/shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10607
child process started successfully, parent exiting
[root@lzx2 mongod]# mongod -f /etc/mongod/shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10636
child process started successfully, parent exiting
[root@lzx2 mongod]# mongod -f /etc/mongod/shard3.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 10665
child process started successfully, parent exiting
[root@lzx2 mongod]# netstat -lntp |grep mongod
tcp        0      0 192.168.100.170:27020    0.0.0.0:*               LISTEN      10418/mongod        
tcp        0      0 127.0.0.1:27020         0.0.0.0:*               LISTEN      10418/mongod        
tcp        0      0 192.168.100.170:27001    0.0.0.0:*               LISTEN      10607/mongod        
tcp        0      0 192.168.100.170:27002    0.0.0.0:*               LISTEN      10636/mongod        
tcp        0      0 192.168.100.170:27003    0.0.0.0:*               LISTEN      10665/mongod        
tcp        0      0 192.168.100.170:28001    0.0.0.0:*               LISTEN      10607/mongod        
tcp        0      0 192.168.100.170:28002    0.0.0.0:*               LISTEN      10636/mongod        
tcp        0      0 192.168.100.170:28003    0.0.0.0:*               LISTEN      10665/mongod        
tcp        0      0 192.168.100.170:21000    0.0.0.0:*               LISTEN      10444/mongod 
Create the replica sets

Because every shard contains an arbiter node, and an arbiter cannot be used as the login entry point, these operations have to be run against a non-arbiter member.

For shard1, I log in from machine A.
- Create and initialize the shard1 replica set:

[root@lzx ~]# mongo --host 192.168.100.150 --port 27001
> use admin
switched to db admin
> config={_id:"shard1",members:[{_id:0,host:"192.168.100.150:27001"},{_id:1,host:"192.168.100.160:27001"},{_id:2,host:"192.168.100.170:27001",arbiterOnly:true}]}        //192.168.100.170 acts as the arbiter node
{
    "_id" : "shard1",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:27001"
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:27001"
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:27001",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config)      //initialize the replica set
{ "ok" : 1 }
shard1:OTHER> rs.status()
{
    "set" : "shard1",
    "date" : ISODate("2018-08-27T04:45:50.273Z"),
    "myState" : 2,
    "term" : NumberLong(0),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535345139, 1),
            "t" : NumberLong(-1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535345139, 1),
            "t" : NumberLong(-1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.150:27001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 828,
            "optime" : {
                "ts" : Timestamp(1535345139, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2018-08-27T04:45:39Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.100.160:27001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 10,
            "optime" : {
                "ts" : Timestamp(1535345139, 1),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535345139, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2018-08-27T04:45:39Z"),
            "optimeDurableDate" : ISODate("2018-08-27T04:45:39Z"),
            "lastHeartbeat" : ISODate("2018-08-27T04:45:49.984Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T04:45:47.086Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.100.170:27001",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",        //仲裁节点
            "uptime" : 10,
            "lastHeartbeat" : ISODate("2018-08-27T04:45:49.984Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T04:45:46.960Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
shard1:SECONDARY>          //press Enter a few more times
shard1:PRIMARY>          //the prompt automatically changes to PRIMARY; run rs.status() again to see it
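If you would rather not read through the whole rs.status() output, db.isMaster() gives a compact view of the election result; a quick check from the same shard1 shell (the host shown in the comment is just what I would expect in this setup):
shard1:PRIMARY> db.isMaster().ismaster          //true when the node you are connected to is the primary
shard1:PRIMARY> db.isMaster().primary           //host:port of the current primary, e.g. 192.168.100.150:27001
shard1:PRIMARY> rs.conf()          //review the replica set configuration that rs.initiate(config) applied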

For shard2, I log in from machine B.
- Create and initialize the shard2 replica set:

[root@lzx1 ~]# mongo --host 192.168.100.160 --port 27002
> use admin
switched to db admin
> config={_id:"shard2",members:[{_id:0,host:"192.168.100.150:27002",arbiterOnly:true},{_id:1,host:"192.168.100.160:27002"},{_id:2,host:"192.168.100.170:27002"}]}
{
    "_id" : "shard2",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:27002",
            "arbiterOnly" : true
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:27002"
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:27002"
        }
    ]
}
> rs.initiate(config)
{ "ok" : 1 }
shard2:OTHER> rs.status()
{
    "set" : "shard2",
    "date" : ISODate("2018-08-27T12:28:23.263Z"),
    "myState" : 2,
    "term" : NumberLong(0),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535372897, 1),
            "t" : NumberLong(-1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535372897, 1),
            "t" : NumberLong(-1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.150:27002",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 6,
            "lastHeartbeat" : ISODate("2018-08-27T12:28:22.126Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T12:28:19.097Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "192.168.100.160:27002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 987,
            "optime" : {
                "ts" : Timestamp(1535372897, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2018-08-27T12:28:17Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 2,
            "name" : "192.168.100.170:27002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 6,
            "optime" : {
                "ts" : Timestamp(1535372897, 1),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535372897, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2018-08-27T12:28:17Z"),
            "optimeDurableDate" : ISODate("2018-08-27T12:28:17Z"),
            "lastHeartbeat" : ISODate("2018-08-27T12:28:22.126Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T12:28:19.236Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
shard2:SECONDARY>          //press Enter a few more times
shard2:PRIMARY>            //the prompt automatically changes to PRIMARY; run rs.status() again to see it

For shard3, I log in from machine C.
- Create and initialize the shard3 replica set:

[root@lzx2 ~]# mongo --host 192.168.100.170 --port 27003
> use admin
switched to db admin
> config={_id:"shard3",members:[{_id:0,host:"192.168.100.150:27003"},{_id:1,host:"192.168.100.160:27003",arbiterOnly:true},{_id:2,host:"192.168.100.170:27003"}]}
{
    "_id" : "shard3",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.100.150:27003"
        },
        {
            "_id" : 1,
            "host" : "192.168.100.160:27003",
            "arbiterOnly" : true
        },
        {
            "_id" : 2,
            "host" : "192.168.100.170:27003"
        }
    ]
}
> rs.initiate(config)
{ "ok" : 1 }
shard3:OTHER> rs.status()
{
    "set" : "shard3",
    "date" : ISODate("2018-08-27T12:32:53.016Z"),
    "myState" : 2,
    "term" : NumberLong(0),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535373164, 1),
            "t" : NumberLong(-1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535373164, 1),
            "t" : NumberLong(-1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.100.150:27003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 8,
            "optime" : {
                "ts" : Timestamp(1535373164, 1),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535373164, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2018-08-27T12:32:44Z"),
            "optimeDurableDate" : ISODate("2018-08-27T12:32:44Z"),
            "lastHeartbeat" : ISODate("2018-08-27T12:32:49.921Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T12:32:48.571Z"),
            "pingMs" : NumberLong(1),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "192.168.100.160:27003",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 8,
            "lastHeartbeat" : ISODate("2018-08-27T12:32:49.921Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-27T12:32:52.646Z"),
            "pingMs" : NumberLong(1),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.100.170:27003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1067,
            "optime" : {
                "ts" : Timestamp(1535373164, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2018-08-27T12:32:44Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        }
    ],
    "ok" : 1
}
shard3:SECONDARY>         //press Enter a few more times
shard3:PRIMARY>           //the prompt automatically changes to PRIMARY; run rs.status() again to see it

At this point, the shard setup is complete.
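Before configuring mongos, you can quickly confirm the member states of all three shard replica sets without opening an interactive shell, using mongo --eval against one member of each set (a convenience check, not part of the original steps; adjust the hosts to whichever members you prefer):
[root@lzx ~]# mongo 192.168.100.150:27001 --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'          //shard1: expect one PRIMARY, one SECONDARY, one ARBITER
[root@lzx ~]# mongo 192.168.100.160:27002 --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'          //shard2
[root@lzx ~]# mongo 192.168.100.150:27003 --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'          //shard3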

mongos configuration

mongos is configured last because it cannot start until it knows where the config servers and the shards are.

Perform the following on machine A
  • Add the configuration file:
[root@lzx ~]# vim /etc/mongod/mongos.conf         //write the following content
pidfilepath = /var/run/mongodb/mongospid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 192.168.100.150
port = 20000
fork = true
configdb = configs/192.168.100.150:21000,192.168.100.160:21000,192.168.100.170:21000
maxConns=20000
  • Start the service:
[root@lzx ~]# mongos -f /etc/mongod/mongos.conf        //note: earlier we used mongod, here it is mongos
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
[root@lzx ~]# netstat -lntp |grep mongos
tcp        0      0 192.168.100.150:20000    0.0.0.0:*               LISTEN      1562/mongos
Perform the following on machine B
  • Add the configuration file:
[root@lzx1 ~]# vim /etc/mongod/mongos.conf         //write the following content
pidfilepath = /var/run/mongodb/mongospid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 192.168.100.160
port = 20000
fork = true
configdb = configs/192.168.100.150:21000,192.168.100.160:21000,192.168.100.170:21000
maxConns=20000
  • Start the service:
[root@lzx1 ~]# mongos -f /etc/mongod/mongos.conf        //note: earlier we used mongod, here it is mongos
about to fork child process, waiting until server is ready for connections.
forked process: 1550
child process started successfully, parent exiting
[root@lzx1 ~]# netstat -lntp |grep mongos
tcp        0      0 192.168.100.160:20000   0.0.0.0:*               LISTEN      1550/mongos
Perform the following on machine C
  • Add the configuration file:
[root@lzx2 ~]# vim /etc/mongod/mongos.conf        //write the following content
pidfilepath = /var/run/mongodb/mongospid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 192.168.100.170
port = 20000
fork = true
configdb = configs/192.168.100.150:21000,192.168.100.160:21000,192.168.100.170:21000
maxConns=20000
  • Start the service:
[root@lzx2 ~]# mongos -f /etc/mongod/mongos.conf        //note: earlier we used mongod, here it is mongos
about to fork child process, waiting until server is ready for connections.
forked process: 1559
child process started successfully, parent exiting
[root@lzx2 ~]# netstat -lntp |grep mongos
tcp        0      0 192.168.100.170:20000   0.0.0.0:*               LISTEN      1559/mongos 

At this point, the mongos setup is complete.
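Before enabling sharding, it can be worth checking that every mongos answers; a small sanity check (not in the original steps), assuming the three addresses configured above:
[root@lzx ~]# for ip in 192.168.100.150 192.168.100.160 192.168.100.170; do mongo --host $ip --port 20000 --quiet --eval "db.adminCommand('ping')"; done          //each mongos should reply with { "ok" : 1 }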

Enable sharding

Now enable sharding and hook all of the shards up to mongos (the router). Any machine will do; here I use machine A.

  • Log in to mongos and enable sharding:
[root@lzx ~]# mongo --host 192.168.100.150 --port 20000          //log in to mongos
MongoDB shell version v3.4.16
connecting to: mongodb://192.168.100.150:20000/
MongoDB server version: 3.4.16
Server has startup warnings: 
2018-08-27T08:55:48.279-0400 I CONTROL  [main] 
2018-08-27T08:55:48.279-0400 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2018-08-27T08:55:48.279-0400 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2018-08-27T08:55:48.279-0400 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2018-08-27T08:55:48.279-0400 I CONTROL  [main] 
mongos> sh.addShard("shard1/192.168.100.150:27001,192.168.100.160:27001,192.168.100.170:27001")          //把shard1分片和mongos串联起来
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/192.168.100.150:27002,192.168.100.160:27002,192.168.100.170:27002")          //把shard1分片和mongos串联起来
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/192.168.100.150:27003,192.168.100.160:27003,192.168.100.170:27003")          //把shard1分片和mongos串联起来
{ "shardAdded" : "shard3", "ok" : 1 }
mongos> sh.status()          //查看分片状态
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b83e28d28d74224a9920642")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003",  "state" : 1 }
  active mongoses:
        "3.4.16" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
NaN
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:

Sharding is now enabled.

Sharding test

Now run a sharding test. Any machine will do; here I choose machine A.

  • Log in to mongos and run the test:
[root@lzx ~]# mongo --host 192.168.100.150 --port 20000
mongos> use admin
switched to db admin
mongos> sh.enableSharding("testdb")         //specify the database to shard; db.runCommand({enablesharding:"testdb"}) achieves the same thing
{ "ok" : 1 }
mongos> sh.shardCollection("testdb.table1",{"id":1})          //specify the collection to shard and its shard key; db.runCommand({shardcollection:"testdb.table1",key:{id:1}}) achieves the same thing
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> for (var i = 1; i <= 10000; i++) db.table1.save({id:1,"test1":"testval1"})           //insert 10000 test documents (note that the shard key id stays constant here)
WriteResult({ "nInserted" : 1 })
mongos> show dbs
admin   0.000GB
config  0.001GB
testdb  0.000GB
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b83e28d28d74224a9920642")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003",  "state" : 1 }
  active mongoses:
        "3.4.16" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
NaN
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "testdb",  "primary" : "shard2",  "partitioned" : true }
                testdb.table1          //the collection we just created and populated
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1        //the single chunk for testdb.table1 lives on shard2
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 

Create another database

mongos> sh.enableSharding("db2")
{ "ok" : 1 }
mongos> sh.shardCollection("db2.col2",{"id":1})
{ "collectionsharded" : "db2.col2", "ok" : 1 }
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b83e28d28d74224a9920642")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003",  "state" : 1 }
  active mongoses:
        "3.4.16" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
NaN
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "testdb",  "primary" : "shard2",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 
        {  "_id" : "db2",  "primary" : "shard1",  "partitioned" : true }
                db2.col2          //the db2.col2 collection we just created
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1          //its chunk is on shard1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 

Create one more database

mongos> sh.enableSharding("db3")
{ "ok" : 1 }
mongos> sh.shardCollection("db3.col3",{"id":1})
{ "collectionsharded" : "db3.col3", "ok" : 1 }
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b83e28d28d74224a9920642")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003",  "state" : 1 }
  active mongoses:
        "3.4.16" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
NaN
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "testdb",  "primary" : "shard2",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 
        {  "_id" : "db2",  "primary" : "shard1",  "partitioned" : true }
                db2.col2
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "db3",  "primary" : "shard3",  "partitioned" : true }
                db3.col3            //the db3.col3 collection we just created
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard3  1         //its chunk is on shard3
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0)

As you can see, the three databases were assigned to different primary shards (testdb on shard2, db2 on shard1, db3 on shard3), so work is spread across the cluster. The effect is not very visible here because the data volume is tiny and each collection still fits in a single chunk; in a production environment, once collections grow large enough to be split into many chunks, the balancer migrates chunks between shards and the data becomes roughly evenly distributed across the three shards.
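Note that the test loop above inserted a constant shard key (id:1), so every document falls into the same chunk and nothing can be split or migrated. A small variation (a sketch run in the same mongos shell) that varies the shard key, together with db.collection.getShardDistribution(), makes the distribution easier to observe once enough data has been written:
mongos> use testdb
mongos> for (var i = 1; i <= 10000; i++) db.table1.insert({id: i, "test1": "testval" + i})          //vary the shard key so chunks can be split
mongos> db.table1.getShardDistribution()          //per-shard document counts, data size and chunk counts for this collection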

MongoDB backup and restore

Any machine will do for these operations; the steps below are run on machine A.

  • Back up a specific database:
[root@lzx ~]# mkdir /tmp/mongobak; mongodump --host 192.168.100.150 --port 20000 -d testdb -o /tmp/mongobak       //-d specifies the database to back up, -o specifies the output directory
2018-08-27T09:44:58.747-0400    writing testdb.table1 to 
2018-08-27T09:44:58.808-0400    done dumping testdb.table1 (10000 documents)
[root@lzx ~]# ls /tmp/mongobak/
testdb
[root@lzx ~]# ls /tmp/mongobak/testdb/
table1.bson  table1.metadata.json
[root@lzx ~]# cd !$
cd /tmp/mongobak/testdb/
[root@lzx testdb]# du -sh *
528K    table1.bson
4.0K    table1.metadata.json
[root@lzx testdb]# vim table1.bson            //a binary file holding the actual data
[root@lzx testdb]# cat table1.metadata.json         
{"options":{},"indexes":[{"v":2,"key":{"_id":1},"name":"_id_","ns":"testdb.table1"},{"v":2,"key":{"id":1.0},"name":"id_1","ns":"testdb.table1"}]}      //metadata for the testdb.table1 collection we created, including its indexes
  • Back up all databases:
[root@lzx testdb]# mongodump --host 192.168.100.150 --port 20000  -o /tmp/mongobak           //without -d all databases are backed up; the output directory is created automatically if it does not exist
2018-08-27T09:52:22.762-0400    writing admin.system.version to 
2018-08-27T09:52:22.764-0400    done dumping admin.system.version (1 document)
2018-08-27T09:52:22.764-0400    writing testdb.table1 to 
2018-08-27T09:52:22.764-0400    writing config.lockpings to 
2018-08-27T09:52:22.764-0400    writing config.changelog to 
2018-08-27T09:52:22.764-0400    writing config.locks to 
2018-08-27T09:52:22.771-0400    done dumping config.lockpings (11 documents)
2018-08-27T09:52:22.771-0400    writing config.chunks to 
2018-08-27T09:52:22.776-0400    done dumping config.changelog (9 documents)
2018-08-27T09:52:22.776-0400    writing config.collections to 
2018-08-27T09:52:22.778-0400    done dumping config.collections (3 documents)
2018-08-27T09:52:22.778-0400    writing config.databases to 
2018-08-27T09:52:22.781-0400    done dumping config.locks (7 documents)
2018-08-27T09:52:22.781-0400    writing config.shards to 
2018-08-27T09:52:22.782-0400    done dumping config.shards (3 documents)
2018-08-27T09:52:22.782-0400    writing config.mongos to 
2018-08-27T09:52:22.785-0400    done dumping config.mongos (2 documents)
2018-08-27T09:52:22.785-0400    writing config.version to 
2018-08-27T09:52:22.786-0400    done dumping config.version (1 document)
2018-08-27T09:52:22.786-0400    writing config.tags to 
2018-08-27T09:52:22.789-0400    done dumping config.tags (0 documents)
2018-08-27T09:52:22.789-0400    writing config.migrations to 
2018-08-27T09:52:22.790-0400    done dumping config.migrations (0 documents)
2018-08-27T09:52:22.790-0400    writing db2.col2 to 
2018-08-27T09:52:22.795-0400    done dumping config.databases (3 documents)
2018-08-27T09:52:22.795-0400    writing db3.col3 to 
2018-08-27T09:52:22.797-0400    done dumping config.chunks (3 documents)
2018-08-27T09:52:22.799-0400    done dumping db2.col2 (0 documents)
2018-08-27T09:52:22.800-0400    done dumping db3.col3 (0 documents)
2018-08-27T09:52:22.941-0400    done dumping testdb.table1 (10000 documents)
[root@lzx testdb]# cd ..
[root@lzx mongobak]# ls         
admin  config  db2  db3  testdb           //each directory corresponds to one database
  • Back up a specific collection:
[root@lzx mongobak]# mongodump --host 192.168.100.150 --port 20000 -d testdb -c table1 -o /tmp/mongobak1          //-c specifies the collection to back up
2018-08-27T09:57:12.795-0400    writing testdb.table1 to 
2018-08-27T09:57:12.840-0400    done dumping testdb.table1 (10000 documents)
[root@lzx mongobak]# ls !$
ls /tmp/mongobak1
testdb
[root@lzx mongobak]# ls /tmp/mongobak1/testdb/
table1.bson  table1.metadata.json
  • Export a collection as a JSON file:
[root@lzx mongobak]# mongoexport --host 192.168.100.150 --port 20000 -d testdb -c table1 -o /tmp/table1.json         //mongoexport exports the data; -o specifies the output file, created automatically if it does not exist
2018-08-27T10:01:34.462-0400    connected to: 192.168.100.150:20000
2018-08-27T10:01:34.610-0400    exported 10000 records
[root@lzx mongobak]# vim !$
vim /tmp/table1.json
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ab"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ac"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ad"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ae"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11af"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b0"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b1"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b2"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b3"},"id":1.0,"test1":"testval1"}           //这里就是我们之前插入的10000条数据,不是乱码
  • Restore a specific database:
[root@lzx mongobak]# mongorestore --host 192.168.100.150 --port 20000 -d mydb dir         //-d specifies the database to restore into; dir is the directory that database was dumped to
  • Restore all databases:
[root@lzx mongobak]# mongorestore --host 192.168.100.150 --port 20000  dir         //dir is the directory the full backup was dumped to
  • Restore a collection:
[root@lzx mongobak]# mongorestore --host 192.168.100.150 --port 20000 -d mydb  -c tab1 dir      //-c specifies the collection to restore; dir is the directory that collection was dumped to
  • Import a collection:
[root@lzx mongobak]# mongoimport --host 192.168.100.150 --port 20000 -d testdb -c table1 --file /tmp/table1.json         //mongoimport imports data; --file specifies the file to import
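Note that mongorestore and mongoimport do not overwrite documents that already exist; inserts with duplicate _id values simply fail. If you want the backup to replace the current contents, both tools accept --drop, which drops each target collection before loading it (use with care; a sketch reusing the paths above):
[root@lzx mongobak]# mongorestore --host 192.168.100.150 --port 20000 --drop dir          //drop each collection found in the dump directory before restoring it
[root@lzx mongobak]# mongoimport --host 192.168.100.150 --port 20000 -d testdb -c table1 --drop --file /tmp/table1.json          //drop testdb.table1, then re-import the exported JSON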

Further reading:
Best practices for deploying MongoDB securely
Running JavaScript scripts with the mongo shell
