Building a MongoDB Sharded Cluster with Replica Sets, Adding Security Authentication, and Integrating MongoDB with Spring Boot (long post)

Requirements

(1) Build a sharded cluster as shown in the diagram; the replica set behind each shard must contain one arbiter node.

(2) Enable access control: create an account named rwUser with password rwUser for the database mamba, with read/write privileges on that database.

(3) Use Spring Boot to access the sharded cluster and insert data into the nba_star collection in the mamba database.

Table of Contents

Requirements

I. MongoDB Base Environment Setup

1. Download the MongoDB package and upload it to CentOS

2. Extract the package and rename the directory (this assignment uses homework_mongodb_shard_auth)

3. Install/upgrade OpenSSL (required by the ./bin/mongod command)

4. Create separate directories for the config server cluster (3 nodes), the shard clusters (4×4 nodes), and the router (1 node) to hold their config files and log files

II. Config Server Replica Set Setup

1. Config server replica set overview

2. Create a config file in each node's directory (in vi, press i for insert mode, type the content, then press Esc, type :wq, and press Enter to save)

3. Copy the config1_17011.cfg just edited and adjust its settings (see the comments in the config file above)

4. Return to the MongoDB directory and start each node with its config file

5. Connect to any node

6. Add the configuration ("_id":"configsvr" must match replSet=configsvr in the config files), initialize with rs.initiate(cfg), then verify the node states with rs.status()

III. Shard Replica Set Setup

1. Shard replica set overview

2. Create a config file in each node's directory (shard replica set 1 as the example)

3. Copy the shard1_37011.cfg just edited and adjust its settings (see the comments in the config file above)

4. Return to the MongoDB directory and start each node with its config file

5. Connect to any node

6. Add the configuration ("_id":"shard1svr" must match replSet=shard1svr in the config files), initialize with rs.initiate(cfg), then verify the node states with rs.status(); one member is an arbiter

7. Configure shard replica sets 2, 3, and 4 the same way

IV. Router Node Setup

1. Router node overview

2. Create the config file in the router node's directory

3. Start the router node (note: the ./bin/mongos command), connect to it, and check its status

4. Add the shards by running the commands one by one

5. Enable sharding for the database and collection

V. Security Authentication Configuration

1. Connect to the router node and create the admin and regular users (switch to the target database first; users are created in that database)

2. Shut down all config nodes, shard nodes, and the router node

3. Generate the key file and change its permissions

4. In the config server (3 nodes) and shard (4×4 nodes) config files, append settings to enable authentication and point to the key file

5. In the router node's config file, append the key file setting

6. Start all config nodes, shard nodes, and the router node

7. Authentication test

VI. Shell Scripts for Batch Start/Stop, Plus a Handy Tool for Killing Processes in Bulk

1. Create and edit the startup script (order matters: config nodes, then shard nodes, then the router node)

2. Make startup.sh executable; ll will show the script in green, which means it worked

3. Run the script to start all config nodes, shard nodes, and the router node in sequence

4. Create and edit the shutdown script

5. Make shutdown.sh executable; ll will show the script in green, which means it worked

6. Run the shutdown script to stop all services

7. Download and install the package for killing processes in bulk

8. Run the commands to kill the processes in bulk

VII. Other Notes

1. If the Java program cannot connect to MongoDB on the VM, turn off the firewall

2. If, after authenticating, an operation reports "too many users are authenticated"

VIII. Integrating MongoDB with Spring Boot

1. Add the required dependencies to the pom file

2. Spring configuration file application.properties

3. Define the entity class

4. Define the DAO-layer interface

5. Write the Spring Boot application class and test class (CRUD operations can be defined as you like)


I. MongoDB Base Environment Setup

Special reminders:

1. There are a lot of nodes involved, so test each small step before moving on to the next. For example, when adding replica set configuration, configure one node first, make sure it starts correctly, and only then copy and modify for the remaining nodes.

2. Before enabling authentication, verify first that the Java program can connect, and configure authentication afterwards.

1. Download the MongoDB package and upload it to CentOS

Omitted.

2. Extract the package and rename the directory (this assignment uses homework_mongodb_shard_auth)

[root@localhost Downloads]# tar -xvf mongodb-linux-x86_64-amazon-4.2.8.tgz
​
[root@localhost Downloads]# ll
drwxr-xr-x.  3 root root         4096 Jun 21 16:47 mongodb-linux-x86_64-amazon-4.2.8
-rw-r--r--.  1 root root    132750301 Jun 20 18:52 mongodb-linux-x86_64-amazon-4.2.8.tgz
​
[root@localhost Downloads]# mv mongodb-linux-x86_64-amazon-4.2.8 homework_mongodb_shard_auth
[root@localhost Downloads]# ll
drwxr-xr-x.  3 root root         4096 Jun 21 16:47 homework_mongodb_shard_auth
-rw-r--r--.  1 root root    132750301 Jun 20 18:52 mongodb-linux-x86_64-amazon-4.2.8.tgz

3. Install/upgrade OpenSSL (required by the ./bin/mongod command)

[root@localhost Downloads]# yum -y install openssl

4. Create separate directories for the config server cluster (3 nodes), the shard clusters (4×4 nodes), and the router (1 node) to hold their config files and log files

[root@localhost Downloads]# cd homework_mongodb_shard_auth/
​
[root@localhost homework_mongodb_shard_auth]# mkdir config_cluster/config_dbpath1 config_cluster/config_dbpath2 config_cluster/config_dbpath3 config_cluster/logs -p
​
[root@localhost homework_mongodb_shard_auth]# mkdir shard_cluster/shard1/shard_dbpath1 shard_cluster/shard1/shard_dbpath2 shard_cluster/shard1/shard_dbpath3 shard_cluster/shard1/shard_dbpath4 shard_cluster/shard1/logs \
    shard_cluster/shard2/shard_dbpath1 shard_cluster/shard2/shard_dbpath2 shard_cluster/shard2/shard_dbpath3 shard_cluster/shard2/shard_dbpath4 shard_cluster/shard2/logs \
    shard_cluster/shard3/shard_dbpath1 shard_cluster/shard3/shard_dbpath2 shard_cluster/shard3/shard_dbpath3 shard_cluster/shard3/shard_dbpath4 shard_cluster/shard3/logs \
    shard_cluster/shard4/shard_dbpath1 shard_cluster/shard4/shard_dbpath2 shard_cluster/shard4/shard_dbpath3 shard_cluster/shard4/shard_dbpath4 shard_cluster/shard4/logs -p
​
[root@localhost homework_mongodb_shard_auth]# mkdir route route/logs -p
​
[root@localhost homework_mongodb_shard_auth]# ll
total 316
drwxr-xr-x. 2 root root   4096 Jun 21 16:47 bin
drwxr-xr-x. 6 root root     80 Jun 21 17:16 config_cluster
-rw-rw-r--. 1  500  500  30608 Jun 11 09:33 LICENSE-Community.txt
-rw-rw-r--. 1  500  500  16726 Jun 11 09:33 MPL-2
-rw-rw-r--. 1  500  500   2617 Jun 11 09:33 README
drwxr-xr-x. 2 root root      6 Jun 21 17:16 route
drwxr-xr-x. 7 root root     97 Jun 21 17:16 shard_cluster
-rw-rw-r--. 1  500  500  75405 Jun 11 09:33 THIRD-PARTY-NOTICES
-rw-rw-r--. 1  500  500 183512 Jun 11 09:35 THIRD-PARTY-NOTICES.gotools

II. Config Server Replica Set Setup

1. Config server replica set overview

None of the 3 nodes in the config server replica set may be an arbiter, otherwise MongoDB reports an error (Arbiters are not allowed in replica set configurations being used for config servers).

Config server replica set    Node role      Node directory
192.168.127.128:17011        config node    /homework_mongodb_shard_auth/config_cluster/config_dbpath1
192.168.127.128:17013        config node    /homework_mongodb_shard_auth/config_cluster/config_dbpath2
192.168.127.128:17015        config node    /homework_mongodb_shard_auth/config_cluster/config_dbpath3

2. Create a config file in each node's directory (in vi, press i for insert mode, type the content, then press Esc, type :wq, and press Enter to save)

[root@localhost config_cluster]# pwd
/home/fanxuebo/Downloads/homework_mongodb_shard_auth/config_cluster
[root@localhost config_cluster]# vi config1_17011.cfg
# for the three config nodes use config_cluster/config_dbpath1, config_cluster/config_dbpath2, config_cluster/config_dbpath3 respectively
dbpath=config_cluster/config_dbpath1
# for the three config nodes use config_cluster/logs/config1_17011.log, config_cluster/logs/config2_17013.log, config_cluster/logs/config3_17015.log respectively
logpath=config_cluster/logs/config1_17011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# for the three config nodes use ports 17011, 17013, 17015 respectively
port=17011
configsvr=true
replSet=configsvr

3. Copy the config1_17011.cfg just edited and adjust its settings (see the comments in the config file above)

[root@localhost config_cluster]# cp config1_17011.cfg config2_17013.cfg
[root@localhost config_cluster]# cp config1_17011.cfg config3_17015.cfg
​
[root@localhost config_cluster]# ll
total 12
-rw-r--r--. 1 root root 168 Jun 21 17:21 config1_17011.cfg
-rw-r--r--. 1 root root 168 Jun 21 17:30 config2_17013.cfg
-rw-r--r--. 1 root root 168 Jun 21 17:30 config3_17015.cfg
drwxr-xr-x. 2 root root   6 Jun 21 17:16 config_dbpath1
drwxr-xr-x. 2 root root   6 Jun 21 17:16 config_dbpath2
drwxr-xr-x. 2 root root   6 Jun 21 17:16 config_dbpath3
drwxr-xr-x. 2 root root   6 Jun 21 17:16 logs
​
[root@localhost config_cluster]# vim config2_17013.cfg
[root@localhost config_cluster]# vim config3_17015.cfg

4. Return to the MongoDB directory and start each node with its config file

[root@localhost config_cluster]# cd ..
​
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f config_cluster/config1_17011.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 5165
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f config_cluster/config2_17013.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 5364
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f config_cluster/config3_17015.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 5421
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]#

5. Connect to any node

After initialization the prompt first shows configsvr:SECONDARY>; wait a moment, press Enter, and the node is promoted to primary, configsvr:PRIMARY>.

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 17011
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:17011/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f8860698-2e46-45f1-8789-aa0cbfb98f76") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
​
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
​
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
​
>

6. Add the configuration ("_id":"configsvr" must match replSet=configsvr in the config files), initialize with rs.initiate(cfg), then verify the node states with rs.status()

> var cfg = {
...     "_id":"configsvr",
...     "protocolVersion":1,
...     "members":[
...         {
...             "_id":1,
...             "host":"192.168.127.128:17011",
...             "priority":10
...         },
...         {
...             "_id":2,
...             "host":"192.168.127.128:17013"
...         },
...         {
...             "_id":3,
...             "host":"192.168.127.128:17015"
...         }]
... }
> rs.initiate(cfg)
{
        "ok" : 1,
        "$gleStats" : {
                "lastOpTime" : Timestamp(1592787125, 1),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "lastCommittedOpTime" : Timestamp(0, 0),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592787125, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592787125, 1)
}
configsvr:SECONDARY>
configsvr:PRIMARY> rs.status()
{
        "set" : "configsvr",
        "date" : ISODate("2020-06-22T08:09:34.501Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "majorityVoteCount" : 2,
        "writeMajorityCount" : 2,
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "lastCommittedWallTime" : ISODate("2020-06-22T08:09:23.300Z"),
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityWallTime" : ISODate("2020-06-22T08:09:23.300Z"),
                "appliedOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "lastAppliedWallTime" : ISODate("2020-06-22T08:09:23.300Z"),
                "lastDurableWallTime" : ISODate("2020-06-22T08:09:23.300Z")
        },
        "lastStableRecoveryTimestamp" : Timestamp(1592813319, 1),
        "lastStableCheckpointTimestamp" : Timestamp(1592813319, 1),
        "electionCandidateMetrics" : {
                "lastElectionReason" : "electionTimeout",
                "lastElectionDate" : ISODate("2020-06-22T06:44:37.923Z"),
                "electionTerm" : NumberLong(1),
                "lastCommittedOpTimeAtElection" : {
                        "ts" : Timestamp(0, 0),
                        "t" : NumberLong(-1)
                },
                "lastSeenOpTimeAtElection" : {
                        "ts" : Timestamp(1592808266, 1),
                        "t" : NumberLong(-1)
                },
                "numVotesNeeded" : 2,
                "priorityAtElection" : 10,
                "electionTimeoutMillis" : NumberLong(10000),
                "numCatchUpOps" : NumberLong(0),
                "newTermStartDate" : ISODate("2020-06-22T06:44:37.936Z"),
                "wMajorityWriteAvailabilityDate" : ISODate("2020-06-22T06:44:39.310Z")
        },
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.127.128:17011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 5175,
                        "optime" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T08:09:23Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1592808277, 1),
                        "electionDate" : ISODate("2020-06-22T06:44:37Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 2,
                        "name" : "192.168.127.128:17013",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 5107,
                        "optime" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T08:09:23Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T08:09:23Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T08:09:33.271Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T08:09:32.844Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:17011",
                        "syncSourceHost" : "192.168.127.128:17011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "192.168.127.128:17015",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 5107,
                        "optime" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T08:09:23Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T08:09:23Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T08:09:33.271Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T08:09:33.251Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:17011",
                        "syncSourceHost" : "192.168.127.128:17011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "$gleStats" : {
                "lastOpTime" : Timestamp(1592808266, 1),
                "electionId" : ObjectId("7fffffff0000000000000001")
        },
        "lastCommittedOpTime" : Timestamp(1592813363, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592813363, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592813363, 1)
}
configsvr:PRIMARY>

III. Shard Replica Set Setup

1. Shard replica set overview

Shard replica set 1      Node role       Node directory
192.168.127.128:37011    shard node      /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath1
192.168.127.128:37013    shard node      /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath2
192.168.127.128:37015    shard node      /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath3
192.168.127.128:37017    arbiter node    /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath4
Shard replica set 2      Node role       Node directory
192.168.127.128:47011    shard node      /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath1
192.168.127.128:47013    shard node      /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath2
192.168.127.128:47015    shard node      /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath3
192.168.127.128:47017    arbiter node    /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath4
Shard replica set 3      Node role       Node directory
192.168.127.128:57011    shard node      /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath1
192.168.127.128:57013    shard node      /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath2
192.168.127.128:57015    shard node      /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath3
192.168.127.128:57017    arbiter node    /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath4
Shard replica set 4      Node role       Node directory
192.168.127.128:58011    shard node      /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath1
192.168.127.128:58013    shard node      /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath2
192.168.127.128:58015    shard node      /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath3
192.168.127.128:58017    arbiter node    /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath4

2. Create a config file in each node's directory (shard replica set 1 as the example)

[root@localhost shard1]# pwd
/home/fanxuebo/Downloads/homework_mongodb_shard_auth/shard_cluster/shard1
[root@localhost shard1]# vi shard1_37011.cfg
# for the four shard nodes use shard_cluster/shard1/shard_dbpath1, shard_cluster/shard1/shard_dbpath2, shard_cluster/shard1/shard_dbpath3, shard_cluster/shard1/shard_dbpath4 respectively
dbpath=shard_cluster/shard1/shard_dbpath1
# for the four shard nodes use shard_cluster/shard1/logs/shard1_37011.log, shard_cluster/shard1/logs/shard2_37013.log, shard_cluster/shard1/logs/shard3_37015.log, shard_cluster/shard1/logs/shard4_37017.log respectively
logpath=shard_cluster/shard1/logs/shard1_37011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# for the four shard nodes use ports 37011, 37013, 37015, 37017 respectively
port=37011
shardsvr=true
replSet=shard1svr

3. Copy the shard1_37011.cfg just edited and adjust its settings (see the comments in the config file above)

[root@localhost shard1]# cp shard1_37011.cfg shard2_37013.cfg
[root@localhost shard1]# cp shard1_37011.cfg shard3_37015.cfg
[root@localhost shard1]# cp shard1_37011.cfg shard4_37017.cfg
​
[root@localhost shard1]# ll
total 16
drwxr-xr-x. 2 root root   6 Jun 21 19:08 logs
-rw-r--r--. 1 root root 547 Jun 21 19:30 shard1_37011.cfg
-rw-r--r--. 1 root root 547 Jun 21 19:35 shard2_37013.cfg
-rw-r--r--. 1 root root 547 Jun 21 19:35 shard3_37015.cfg
-rw-r--r--. 1 root root 547 Jun 21 19:35 shard4_37017.cfg
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath1
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath2
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath3
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath4
​
[root@localhost shard1]# vim shard2_37013.cfg
[root@localhost shard1]# vim shard3_37015.cfg
[root@localhost shard1]# vim shard4_37017.cfg

4. Return to the MongoDB directory and start each node with its config file

[root@localhost shard1]# cd ../..
​
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard1_37011.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47212
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard2_37013.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47266
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard3_37015.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47333
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard4_37017.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47398
child process started successfully, parent exiting

5. Connect to any node

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 37011
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:37011/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("78606909-f485-4e45-ae95-1a80afa4cc72") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
​
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
​
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
​
>

6. Add the configuration ("_id":"shard1svr" must match replSet=shard1svr in the config files), initialize with rs.initiate(cfg), then verify the node states with rs.status(); one member is an arbiter

After initialization the prompt first shows shard1svr:SECONDARY>; wait a moment, press Enter, and the node is promoted to primary, shard1svr:PRIMARY>.

​
>var cfg = {
...     "_id":"shard1svr",
...     "protocolVersion":1,
...     "members":[
...         {
...             "_id":1,
...             "host":"192.168.127.128:37011",
...             "priority":10
...         },
...         {
...             "_id":2,
...             "host":"192.168.127.128:37013"
...         },
...         {
...             "_id":3,
...             "host":"192.168.127.128:37015"
...         },
... {
...             "_id":4,
... "arbiterOnly":true,
...             "host":"192.168.127.128:37017"
...         }]
... }
> rs.initiate(cfg)
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592810509, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592810509, 1)
}
shard1svr:SECONDARY>
shard1svr:PRIMARY> rs.status()
{
        "set" : "shard1svr",
        "date" : ISODate("2020-06-22T07:22:05.401Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "majorityVoteCount" : 3,
        "writeMajorityCount" : 3,
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "lastCommittedWallTime" : ISODate("2020-06-22T07:22:00.476Z"),
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityWallTime" : ISODate("2020-06-22T07:22:00.476Z"),
                "appliedOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "lastAppliedWallTime" : ISODate("2020-06-22T07:22:00.476Z"),
                "lastDurableWallTime" : ISODate("2020-06-22T07:22:00.476Z")
        },
        "lastStableRecoveryTimestamp" : Timestamp(1592810520, 3),
        "lastStableCheckpointTimestamp" : Timestamp(1592810520, 3),
        "electionCandidateMetrics" : {
                "lastElectionReason" : "electionTimeout",
                "lastElectionDate" : ISODate("2020-06-22T07:22:00.454Z"),
                "electionTerm" : NumberLong(1),
                "lastCommittedOpTimeAtElection" : {
                        "ts" : Timestamp(0, 0),
                        "t" : NumberLong(-1)
                },
                "lastSeenOpTimeAtElection" : {
                        "ts" : Timestamp(1592810509, 1),
                        "t" : NumberLong(-1)
                },
                "numVotesNeeded" : 3,
                "priorityAtElection" : 10,
                "electionTimeoutMillis" : NumberLong(10000),
                "numCatchUpOps" : NumberLong(0),
                "newTermStartDate" : ISODate("2020-06-22T07:22:00.476Z"),
                "wMajorityWriteAvailabilityDate" : ISODate("2020-06-22T07:22:01.495Z")
        },
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.127.128:37011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 326,
                        "optime" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T07:22:00Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1592810520, 1),
                        "electionDate" : ISODate("2020-06-22T07:22:00Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 2,
                        "name" : "192.168.127.128:37013",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 15,
                        "optime" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T07:22:00Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T07:22:00Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T07:22:04.464Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T07:22:03.562Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:37011",
                        "syncSourceHost" : "192.168.127.128:37011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "192.168.127.128:37015",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 15,
                        "optime" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T07:22:00Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T07:22:00Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T07:22:04.464Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T07:22:03.561Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:37011",
                        "syncSourceHost" : "192.168.127.128:37011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 4,
                        "name" : "192.168.127.128:37017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 15,
                        "lastHeartbeat" : ISODate("2020-06-22T07:22:04.464Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T07:22:04.297Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592810520, 3),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592810520, 3)
}
shard1svr:PRIMARY>

7. Configure shard replica sets 2, 3, and 4 the same way

# for the four shard nodes use shard_cluster/shard2/shard_dbpath1, shard_cluster/shard2/shard_dbpath2, shard_cluster/shard2/shard_dbpath3, shard_cluster/shard2/shard_dbpath4 respectively
dbpath=shard_cluster/shard2/shard_dbpath1
# for the four shard nodes use shard_cluster/shard2/logs/shard1_47011.log, shard_cluster/shard2/logs/shard2_47013.log, shard_cluster/shard2/logs/shard3_47015.log, shard_cluster/shard2/logs/shard4_47017.log respectively
logpath=shard_cluster/shard2/logs/shard1_47011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# for the four shard nodes use ports 47011, 47013, 47015, 47017 respectively
port=47011
shardsvr=true
replSet=shard2svr

# for the four shard nodes use shard_cluster/shard3/shard_dbpath1, shard_cluster/shard3/shard_dbpath2, shard_cluster/shard3/shard_dbpath3, shard_cluster/shard3/shard_dbpath4 respectively
dbpath=shard_cluster/shard3/shard_dbpath1
# for the four shard nodes use shard_cluster/shard3/logs/shard1_57011.log, shard_cluster/shard3/logs/shard2_57013.log, shard_cluster/shard3/logs/shard3_57015.log, shard_cluster/shard3/logs/shard4_57017.log respectively
logpath=shard_cluster/shard3/logs/shard1_57011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# for the four shard nodes use ports 57011, 57013, 57015, 57017 respectively
port=57011
shardsvr=true
replSet=shard3svr

# for the four shard nodes use shard_cluster/shard4/shard_dbpath1, shard_cluster/shard4/shard_dbpath2, shard_cluster/shard4/shard_dbpath3, shard_cluster/shard4/shard_dbpath4 respectively
dbpath=shard_cluster/shard4/shard_dbpath1
# for the four shard nodes use shard_cluster/shard4/logs/shard1_58011.log, shard_cluster/shard4/logs/shard2_58013.log, shard_cluster/shard4/logs/shard3_58015.log, shard_cluster/shard4/logs/shard4_58017.log respectively
logpath=shard_cluster/shard4/logs/shard1_58011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# for the four shard nodes use ports 58011, 58013, 58015, 58017 respectively
port=58011
shardsvr=true
replSet=shard4svr

IV. Router Node Setup

1. Router node overview

Router node              Node role      Node directory
192.168.127.128:27017    router node    /homework_mongodb_shard_auth/route

2. Create the config file in the router node's directory

[root@localhost route]# pwd
/home/fanxuebo/Downloads/homework_mongodb_shard_auth/route
[root@localhost route]# vi route_27017.cfg
port=27017
bind_ip=0.0.0.0
fork=true
logpath=route/logs/route.log
configdb=configsvr/192.168.127.128:17011,192.168.127.128:17013,192.168.127.128:17015

3. Start the router node (note: the ./bin/mongos command), connect to it, and check its status

[root@localhost route]# cd ..
[root@localhost homework_mongodb_shard_auth]# ./bin/mongos -f route/route_27017.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 55471
child process started successfully, parent exiting
​
[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 27017
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("741de845-787f-4b02-9b2d-403cae0a6f39") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-22T01:17:05.023-0700 I  CONTROL  [main]
2020-06-22T01:17:05.023-0700 I  CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2020-06-22T01:17:05.023-0700 I  CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2020-06-22T01:17:05.023-0700 I  CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-22T01:17:05.023-0700 I  CONTROL  [main]
mongos>
​
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5ef05356794ed4730f838170")
  }
  shards:
  active mongoses:
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
​
mongos>

4. Add the shards by running the commands one by one

mongos> use admin
switched to db admin
mongos> sh.addShard("shard1svr/192.168.127.128:37011,192.168.127.128:37013,192.168.127.128:37015,192.168.127.128:37017")
{
        "shardAdded" : "shard1svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817739, 6),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817739, 6),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.addShard("shard2svr/192.168.127.128:47011,192.168.127.128:47013,192.168.127.128:47015,192.168.127.128:47017")
{
        "shardAdded" : "shard2svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817772, 4),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817772, 4),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.addShard("shard3svr/192.168.127.128:57011,192.168.127.128:57013,192.168.127.128:57015,192.168.127.128:57017")
{
        "shardAdded" : "shard3svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817779, 6),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817779, 6),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.addShard("shard4svr/192.168.127.128:58011,192.168.127.128:58013,192.168.127.128:58015,192.168.127.128:58017")
{
        "shardAdded" : "shard4svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817786, 7),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817786, 7),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos>

5. Enable sharding for the database and collection

mongos> sh.enableSharding("mamba")
{
        "ok" : 1,
        "operationTime" : Timestamp(1592817890, 19),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817890, 19),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.shardCollection("mamba.nba_star",{"name":"hashed"})
{
        "collectionsharded" : "mamba.nba_star",
        "collectionUUID" : UUID("e751fa47-8a76-4b69-9dcb-7e938b90ce42"),
        "ok" : 1,
        "operationTime" : Timestamp(1592902476, 42),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592902476, 42),
                "signature" : {
                        "hash" : BinData(0,"+7+d0Ju/k8flMhGOmApOOkdhSo8="),
                        "keyId" : NumberLong("6841059462808076305")
                }
        }
}
mongos>
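
As a quick sanity check at this point (not part of the original write-up; the sample field values below are made up), a handful of documents can be inserted through mongos and the distribution inspected from the same shell:

use mamba
// throwaway documents so that chunks exist on the hashed "name" key
for (var i = 0; i < 1000; i++) { db.nba_star.insert({ "name": "player_" + i, "city": "city_" + (i % 5) }) }
// reports how mamba.nba_star is spread across shard1svr..shard4svr
db.nba_star.getShardDistribution()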

V. Security Authentication Configuration

1. Connect to the router node and create the admin and regular users (switch to the target database first; users are created in that database)

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 27017
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b0b0acb4-32fe-4ec5-a6c3-ee3fe5e082bb") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-22T02:51:30.598-0700 I  CONTROL  [main]
2020-06-22T02:51:30.598-0700 I  CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2020-06-22T02:51:30.598-0700 I  CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2020-06-22T02:51:30.598-0700 I  CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-22T02:51:30.598-0700 I  CONTROL  [main]
mongos>
mongos> use admin
mongos> db.createUser({"user":"root","pwd":"root",roles:[{"role":"root","db":"admin"}]})
Successfully added user: {
        "user" : "root",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
mongos> use mamba
mongos> db.createUser({"user":"rwUser","pwd":"rwUser",roles:[{"role":"readWrite","db":"mamba"}]})
Successfully added user: {
        "user" : "rwUser",
        "roles" : [
                {
                        "role" : "readWrite",
                        "db" : "mamba"
                }
        ]
}
mongos> db.createUser({"user":"rUser","pwd":"rUser",roles:[{"role":"read","db":"mamba"}]})
Successfully added user: {
        "user" : "rUser",
        "roles" : [
                {
                        "role" : "read",
                        "db" : "mamba"
                }
        ]
}
mongos> 

2. Shut down all config nodes, shard nodes, and the router node

See Part VI (the batch shutdown script).

3. Generate the key file and change its permissions

[root@localhost homework_mongodb_shard_auth]# mkdir data/mongodb/keyFile -p
[root@localhost homework_mongodb_shard_auth]# openssl rand -base64 756 > data/mongodb/keyFile/testKeyFile.file
[root@localhost homework_mongodb_shard_auth]# chmod 600 data/mongodb/keyFile/testKeyFile.file
[root@localhost homework_mongodb_shard_auth]#

4. In the config server (3 nodes) and shard (4×4 nodes) config files, append settings to enable authentication and point to the key file

auth=true
keyFile=data/mongodb/keyFile/testKeyFile.file

5. In the router node's config file, append the key file setting

keyFile=data/mongodb/keyFile/testKeyFile.file

6. Start all config nodes, shard nodes, and the router node

See Part VI (the batch startup script).

7. Authentication test

See the video demo of the assignment.
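
Since the demo itself is only in the video, the following is a minimal sketch of what the test can look like after reconnecting with ./bin/mongo --port 27017 (the sample document is made up):

use mamba
// before authenticating, reads and writes should be rejected as unauthorized
db.nba_star.findOne()
// authenticate as the readWrite user created above; returns 1 on success
db.auth("rwUser", "rwUser")
// now both writes and reads should succeed
db.nba_star.insert({ "name": "test_star", "city": "test_city" })
db.nba_star.findOne({ "name": "test_star" })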

VI. Shell Scripts for Batch Start/Stop, Plus a Handy Tool for Killing Processes in Bulk

1. Create and edit the startup script (order matters: config nodes, then shard nodes, then the router node)

[root@localhost homework_mongodb_shard_auth]# vi startup.sh
./bin/mongod -f config_cluster/config1_17011.cfg
./bin/mongod -f config_cluster/config2_17013.cfg
./bin/mongod -f config_cluster/config3_17015.cfg
​
./bin/mongod -f shard_cluster/shard1/shard1_37011.cfg
./bin/mongod -f shard_cluster/shard1/shard2_37013.cfg
./bin/mongod -f shard_cluster/shard1/shard3_37015.cfg
./bin/mongod -f shard_cluster/shard1/shard4_37017.cfg
​
./bin/mongod -f shard_cluster/shard2/shard1_47011.cfg
./bin/mongod -f shard_cluster/shard2/shard2_47013.cfg
./bin/mongod -f shard_cluster/shard2/shard3_47015.cfg
./bin/mongod -f shard_cluster/shard2/shard4_47017.cfg
​
./bin/mongod -f shard_cluster/shard3/shard1_57011.cfg
./bin/mongod -f shard_cluster/shard3/shard2_57013.cfg
./bin/mongod -f shard_cluster/shard3/shard3_57015.cfg
./bin/mongod -f shard_cluster/shard3/shard4_57017.cfg
​
./bin/mongod -f shard_cluster/shard4/shard1_58011.cfg
./bin/mongod -f shard_cluster/shard4/shard2_58013.cfg
./bin/mongod -f shard_cluster/shard4/shard3_58015.cfg
./bin/mongod -f shard_cluster/shard4/shard4_58017.cfg
​
./bin/mongos -f route/route_27017.cfg

2. Make startup.sh executable; ll will show the script in green, which means it worked

[root@localhost homework_mongodb_shard_auth]# chmod +x startup.sh

3. Run the script to start all config nodes, shard nodes, and the router node in sequence

./startup.sh

4. Create and edit the shutdown script

[root@localhost homework_mongodb_shard_auth]# vi shutdown.sh
./bin/mongod --shutdown --config config_cluster/config1_17011.cfg
./bin/mongod --shutdown --config config_cluster/config2_17013.cfg
./bin/mongod --shutdown --config config_cluster/config3_17015.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard1/shard1_37011.cfg
./bin/mongod --shutdown --config shard_cluster/shard1/shard2_37013.cfg
./bin/mongod --shutdown --config shard_cluster/shard1/shard3_37015.cfg
./bin/mongod --shutdown --config shard_cluster/shard1/shard4_37017.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard2/shard1_47011.cfg
./bin/mongod --shutdown --config shard_cluster/shard2/shard2_47013.cfg
./bin/mongod --shutdown --config shard_cluster/shard2/shard3_47015.cfg
./bin/mongod --shutdown --config shard_cluster/shard2/shard4_47017.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard3/shard1_57011.cfg
./bin/mongod --shutdown --config shard_cluster/shard3/shard2_57013.cfg
./bin/mongod --shutdown --config shard_cluster/shard3/shard3_57015.cfg
./bin/mongod --shutdown --config shard_cluster/shard3/shard4_57017.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard4/shard1_58011.cfg
./bin/mongod --shutdown --config shard_cluster/shard4/shard2_58013.cfg
./bin/mongod --shutdown --config shard_cluster/shard4/shard3_58015.cfg
./bin/mongod --shutdown --config shard_cluster/shard4/shard4_58017.cfg
​
./bin/mongod --shutdown --config route/route_27017.cfg

5. Make shutdown.sh executable; ll will show the script in green, which means it worked

[root@localhost homework_mongodb_shard_auth]# chmod +x shutdown.sh

6. Run the shutdown script to stop all services

./shutdown.sh

7. Download and install the package for killing processes in bulk

yum install psmisc

8. Run the commands to kill the processes in bulk

killall mongod
killall mongos

VII. Other Notes

1. If the Java program cannot connect to MongoDB on the VM, turn off the firewall

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld.service
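
If turning the firewall off completely is not desirable, an alternative (assuming firewalld on CentOS 7) is to open only the mongos port:

[root@localhost ~]# firewall-cmd --zone=public --permanent --add-port=27017/tcp
[root@localhost ~]# firewall-cmd --reload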

2. If, after authenticating, an operation reports "too many users are authenticated"

Too many users have been authenticated on the current connection (only one authenticated user is allowed per connection here). Type exit to quit the current shell, reconnect to MongoDB, and authenticate again; alternatively, run db.logout() before authenticating as a different user.

VIII. Integrating MongoDB with Spring Boot

1. Add the required dependencies to the pom file

<properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <spring-boot.version>2.3.1.RELEASE</spring-boot.version>
    </properties>
​
    <dependencies>
​
        <!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-web -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <version>${spring-boot.version}</version>
        </dependency>
​
        <!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-test -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <version>${spring-boot.version}</version>
            <scope>test</scope>
        </dependency>
​
        <!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-data-mongodb -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-mongodb</artifactId>
            <version>${spring-boot.version}</version>
        </dependency>
​
    </dependencies>
​
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <testSource>1.8</testSource>
                    <testTarget>1.8</testTarget>
                </configuration>
            </plugin>
        </plugins>
    </build>

2. Spring configuration file application.properties

spring.data.mongodb.host=192.168.127.128
spring.data.mongodb.port=27017
spring.data.mongodb.database=mamba
spring.data.mongodb.username=rwUser
spring.data.mongodb.password=rwUser
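
With these settings the driver authenticates against the database named in spring.data.mongodb.database (mamba), which is where rwUser was created, so nothing more is needed here. If the account had been created in a different database (for example admin), the authentication database would have to be named explicitly; a hypothetical extra line:

# only needed when the user was created in a database other than spring.data.mongodb.database
spring.data.mongodb.authentication-database=admin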

3. Define the entity class

@Document("nba_star") // maps this class to the nba_star collection (needed when using MongoRepository)
public class NbaStar {
​
    private String id;
    private String name;
    private String city;
    private Date birthday;
    private double expectSalary;
​
    // remaining members (getters, setters, toString, etc.) omitted
}
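
The omitted members are ordinary JavaBean boilerplate. A sketch of the assumed pattern (only the name property is shown; the other fields follow the same form):

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public String toString() {
        return "NbaStar{id='" + id + "', name='" + name + "', city='" + city +
                "', birthday=" + birthday + ", expectSalary=" + expectSalary + "}";
    }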

4. Define the DAO-layer interface

@Repository
public interface NbaStarRepository extends MongoRepository<NbaStar, String> {
}
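
MongoRepository already supplies the basic CRUD methods used in the test below (save, findAll, findById, deleteById, and so on). If extra queries are needed, derived query methods can be declared by name inside NbaStarRepository; a hypothetical example based on the entity's city field (requires import java.util.List):

    // Spring Data derives the MongoDB query { "city": ?0 } from the method name
    List<NbaStar> findByCity(String city);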

5. Write the Spring Boot application class and test class (CRUD operations can be defined as you like)

@SpringBootApplication
public class MongoApplication {
​
    public static void main(String[] args) {
        SpringApplication.run(MongoApplication.class, args);
    }
}
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = MongoApplication.class)
public class MongoAuthTest {
​
    @Autowired
    private NbaStarRepository nbaStarRepository;
​
    @Test
    public void testFind() {
        List<NbaStar> nbaStarList = nbaStarRepository.findAll();
        nbaStarList.forEach(nbaStar -> {
            System.out.println(nbaStar.toString());
        });
    }
}  
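
The assignment also calls for inserting data into nba_star. A minimal sketch of such a test, added inside the MongoAuthTest class above (the sample values are made up, and the setters are assumed to exist on the entity):

    @Test
    public void testInsert() {
        NbaStar star = new NbaStar();
        star.setName("Kobe Bryant");       // assumes the standard setters omitted from the entity listing
        star.setCity("Los Angeles");
        star.setBirthday(new Date());      // java.util.Date, as declared in the entity
        star.setExpectSalary(30000000d);
        nbaStarRepository.save(star);      // written through mongos into the sharded mamba.nba_star collection
    }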
