1. Install MongoDB
Three machines. Turn off the firewall on each:
systemctl stop firewalld.service
| 192.168.252.121 | 192.168.252.122 | 192.168.252.123 |
| --- | --- | --- |
| mongos | mongos | mongos |
| config server | config server | config server |
| shard server1 primary | shard server1 secondary | shard server1 arbiter |
| shard server2 arbiter | shard server2 primary | shard server2 secondary |
| shard server3 secondary | shard server3 arbiter | shard server3 primary |
Port assignment:
mongos:20000
config:21000
shard1:27001
shard2:27002
shard3:27003
Download and install
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-amazon-3.6.2.tgz
tar -xzvf mongodb-linux-x86_64-amazon-3.6.2.tgz -C /usr/local/
Binaries for every version are available for download:
https://www.mongodb.org/dl/win32/x86_64-2008plus-ssl
Rename the extracted directory
cd /usr/local/
mv mongodb-linux-x86_64-amazon-3.6.2 mongodb
On each machine, create six directories: conf, mongos, config, shard1, shard2, and shard3. Since mongos stores no data, it only needs a log directory.
mkdir -p /usr/local/mongodb/conf
mkdir -p /usr/local/mongodb/mongos/log
mkdir -p /usr/local/mongodb/config/data
mkdir -p /usr/local/mongodb/config/log
mkdir -p /usr/local/mongodb/shard1/data
mkdir -p /usr/local/mongodb/shard1/log
mkdir -p /usr/local/mongodb/shard2/data
mkdir -p /usr/local/mongodb/shard2/log
mkdir -p /usr/local/mongodb/shard3/data
mkdir -p /usr/local/mongodb/shard3/log
Configure environment variables
vi /etc/profile
# MongoDB environment variables
export MONGODB_HOME=/usr/local/mongodb
export PATH=$MONGODB_HOME/bin:$PATH
Apply the changes immediately
source /etc/profile
2. config server
Since MongoDB 3.4, the config servers must themselves form a replica set; otherwise building the cluster will fail.
(Three machines) Add configuration files
vi /usr/local/mongodb/conf/config.conf
## config file contents
pidfilepath = /usr/local/mongodb/config/log/configsrv.pid
dbpath = /usr/local/mongodb/config/data
logpath = /usr/local/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 0.0.0.0
port = 21000
fork = true
# declare this is a config db of a cluster
configsvr = true
# replica set name
replSet = configs
# maximum number of connections
maxConns = 20000
Start the config server on all three servers
mongod -f /usr/local/mongodb/conf/config.conf
Log in to any configuration server and initialize the configuration replica set
Connect to MongoDB
mongo --port 21000
Define the config variable
config = {
_id : "configs",
members : [
{_id : 0, host : "192.168.252.121:21000" },
{_id : 1, host : "192.168.252.122:21000" },
{_id : 2, host : "192.168.252.123:21000" }
]
}
Initialize the replica set
rs.initiate(config)
Here "_id" : "configs" must match the replica set name configured in the config file (replSet, or replication.replSetName in YAML format), and each "host" in "members" is the IP and port of one of the three nodes.
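As a quick sanity check, that relationship between the config document and the conf file can be expressed as a small function. This is a hypothetical helper in plain JavaScript for illustration, not a MongoDB API:

```javascript
// Hypothetical sanity check: the _id of the replica set config document
// must equal the replSet name from config.conf, and every member needs
// a "host:port" string.
function matchesReplSetName(config, replSetName) {
  return config._id === replSetName &&
         Array.isArray(config.members) &&
         config.members.every(m => typeof m.host === 'string' && m.host.includes(':'));
}

const config = {
  _id: "configs",
  members: [
    { _id: 0, host: "192.168.252.121:21000" },
    { _id: 1, host: "192.168.252.122:21000" },
    { _id: 2, host: "192.168.252.123:21000" }
  ]
};
console.log(matchesReplSetName(config, "configs")); // true
console.log(matchesReplSetName(config, "shard1"));  // false
```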
The response content is as follows
> config = {
... _id : "configs",
... members : [
... {_id : 0, host : "192.168.252.121:21000" },
... {_id : 1, host : "192.168.252.122:21000" },
... {_id : 2, host : "192.168.252.123:21000" }
... ]
... }
{
"_id" : "configs",
"members" : [
{
"_id" : 0,
"host" : "192.168.252.121:21000"
},
{
"_id" : 1,
"host" : "192.168.252.122:21000"
},
{
"_id" : 2,
"host" : "192.168.252.123:21000"
}
]
}
> rs.initiate(config);
{
"ok" : 1,
"operationTime" : Timestamp(1517369899, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(1517369899, 1),
"electionId" : ObjectId("000000000000000000000000")
},
"$clusterTime" : {
"clusterTime" : Timestamp(1517369899, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
configs:SECONDARY>
At this point the shell prompt changes:
// from a plain
>
// to
configs:SECONDARY>
Query the replica set status
configs:SECONDARY> rs.status()
3. Configure the shard replica set
3.1 Set up the first shard replica set
(Three machines) Set up the first shard replica set
configuration file
vi /usr/local/mongodb/conf/shard1.conf
# config file contents
pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid
dbpath = /usr/local/mongodb/shard1/data
logpath = /usr/local/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 0.0.0.0
port = 27001
fork = true
# replica set name
replSet = shard1
# declare this is a shard db of a cluster
shardsvr = true
# maximum number of connections
maxConns = 20000
Start the shard1 server on all three servers
mongod -f /usr/local/mongodb/conf/shard1.conf
Log in to either 192.168.252.121 or 192.168.252.122 (not 192.168.252.123, which will be the arbiter) and initialize the replica set
Connect to MongoDB
mongo --port 27001
Use the admin database
use admin
Defining the replica set configuration
config = {
_id : "shard1",
members : [
{_id : 0, host : "192.168.252.121:27001" },
{_id : 1, host : "192.168.252.122:27001" },
{_id : 2, host : "192.168.252.123:27001" , arbiterOnly: true }
]
}
Initialize replica set configuration
rs.initiate(config)
The response content is as follows
> use admin
switched to db admin
> config = {
... _id : "shard1",
... members : [
... {_id : 0, host : "192.168.252.121:27001" },
... {_id : 1, host : "192.168.252.122:27001" },
... {_id : 2, host : "192.168.252.123:27001" , arbiterOnly: true }
... ]
... }
{
"_id" : "shard1",
"members" : [
{
"_id" : 0,
"host" : "192.168.252.121:27001"
},
{
"_id" : 1,
"host" : "192.168.252.122:27001"
},
{
"_id" : 2,
"host" : "192.168.252.123:27001",
"arbiterOnly" : true
}
]
}
> rs.initiate(config)
{ "ok" : 1 }
At this point the shell prompt changes:
// from a plain
>
// to
shard1:SECONDARY>
Query the replica set status
shard1:SECONDARY> rs.status()
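In this layout each shard replica set has one data-bearing primary, one secondary, and one arbiter. The arbiter stores no data but votes in elections, so a primary can still be elected with one member down. The majority arithmetic, as a plain-JavaScript illustration (not a MongoDB API):

```javascript
// Votes needed to elect a primary: a strict majority of voting members.
function votesNeeded(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

console.log(votesNeeded(3)); // 2 -> a 3-member set tolerates one member down
console.log(votesNeeded(2)); // 2 -> without the arbiter, losing either member blocks elections
```

This is why the cheap arbiter is added instead of running only primary + secondary: two voting members cannot survive the loss of either one.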
3.2 Set up the second shard replica set
(Three machines) Set up the second shard replica set
configuration file
vi /usr/local/mongodb/conf/shard2.conf
# config file contents
pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid
dbpath = /usr/local/mongodb/shard2/data
logpath = /usr/local/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 0.0.0.0
port = 27002
fork = true
# replica set name
replSet = shard2
# declare this is a shard db of a cluster
shardsvr = true
# maximum number of connections
maxConns = 20000
Start the shard2 server on all three servers
mongod -f /usr/local/mongodb/conf/shard2.conf
Log in to either 192.168.252.122 or 192.168.252.123 (192.168.252.121 will be the arbiter) and connect to MongoDB
mongo --port 27002
Use the admin database
use admin
Defining the replica set configuration
config = {
_id : "shard2",
members : [
{_id : 0, host : "192.168.252.121:27002" , arbiterOnly: true },
{_id : 1, host : "192.168.252.122:27002" },
{_id : 2, host : "192.168.252.123:27002" }
]
}
Initialize replica set configuration
rs.initiate(config)
The response content is as follows
> use admin
switched to db admin
> config = {
... _id : "shard2",
... members : [
... {_id : 0, host : "192.168.252.121:27002" , arbiterOnly: true },
... {_id : 1, host : "192.168.252.122:27002" },
... {_id : 2, host : "192.168.252.123:27002" }
... ]
... }
{
"_id" : "shard2",
"members" : [
{
"_id" : 0,
"host" : "192.168.252.121:27002",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "192.168.252.122:27002"
},
{
"_id" : 2,
"host" : "192.168.252.123:27002"
}
]
}
> rs.initiate(config)
{ "ok" : 1 }
Query the replica set status
shard2:SECONDARY> rs.status()
3.3 Set up the third shard replica set
vi /usr/local/mongodb/conf/shard3.conf
# config file contents
pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid
dbpath = /usr/local/mongodb/shard3/data
logpath = /usr/local/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 0.0.0.0
port = 27003
fork = true
# replica set name
replSet = shard3
# declare this is a shard db of a cluster
shardsvr = true
# maximum number of connections
maxConns = 20000
Start the shard3 server on all three servers
mongod -f /usr/local/mongodb/conf/shard3.conf
Log in to either 192.168.252.121 or 192.168.252.123 (not 192.168.252.122, which will be the arbiter) and initialize the replica set
mongo --port 27003
Use the admin database
use admin
Defining the replica set configuration
config = {
_id : "shard3",
members : [
{_id : 0, host : "192.168.252.121:27003" },
{_id : 1, host : "192.168.252.122:27003" , arbiterOnly: true},
{_id : 2, host : "192.168.252.123:27003" }
]
}
Initialize replica set configuration
rs.initiate(config)
The response content is as follows
> use admin
switched to db admin
> config = {
... _id : "shard3",
... members : [
... {_id : 0, host : "192.168.252.121:27003" },
... {_id : 1, host : "192.168.252.122:27003" , arbiterOnly: true},
... {_id : 2, host : "192.168.252.123:27003" }
... ]
... }
{
"_id" : "shard3",
"members" : [
{
"_id" : 0,
"host" : "192.168.252.121:27003"
},
{
"_id" : 1,
"host" : "192.168.252.122:27003",
"arbiterOnly" : true
},
{
"_id" : 2,
"host" : "192.168.252.123:27003"
}
]
}
> rs.initiate(config)
{ "ok" : 1 }
shard3:SECONDARY> rs.status()
3.4 Configure the mongos routing server
(Three machines) Start the config servers and the shard servers first, then start the routing instance:
vi /usr/local/mongodb/conf/mongos.conf
# contents
pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid
logpath = /usr/local/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 0.0.0.0
port = 20000
fork = true
# config servers to connect to (there must be exactly 1 or 3);
# "configs" is the replica set name of the config servers
configdb = configs/192.168.252.121:21000,192.168.252.122:21000,192.168.252.123:21000
# maximum number of connections
maxConns = 20000
Start the mongos server on all three servers
mongos -f /usr/local/mongodb/conf/mongos.conf
4. Connect the shards to the routing server
At this point the config servers, routing servers, and shard servers are all running, but an application connecting to the mongos router cannot use the sharding mechanism yet. The shards still need to be registered with the cluster for sharding to take effect.
Log in to any mongos
mongo --port 20000
Use the admin database
use admin
Add the three shard replica sets to the cluster
sh.addShard("shard1/192.168.252.121:27001,192.168.252.122:27001,192.168.252.123:27001");
sh.addShard("shard2/192.168.252.121:27002,192.168.252.122:27002,192.168.252.123:27002");
sh.addShard("shard3/192.168.252.121:27003,192.168.252.122:27003,192.168.252.123:27003");
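The argument to sh.addShard() has the form replicaSetName/host:port,host:port,... A small parser sketch of that format (a hypothetical helper for illustration, not a MongoDB API):

```javascript
// Split an addShard seed string into the replica set name and its host list.
function parseShardSeed(seed) {
  const [replSet, hostList] = seed.split('/');
  return { replSet, hosts: hostList.split(',') };
}

const s = parseShardSeed("shard1/192.168.252.121:27001,192.168.252.122:27001,192.168.252.123:27001");
console.log(s.replSet);      // shard1
console.log(s.hosts.length); // 3
```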
View cluster status
sh.status()
The response content is as follows
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5a713a37d56e076f3eb47acf")
}
shards:
{ "_id" : "shard1", "host" : "shard1/192.168.252.121:27001,192.168.252.122:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/192.168.252.122:27002,192.168.252.123:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/192.168.252.121:27003,192.168.252.123:27003", "state" : 1 }
active mongoses:
"3.6.2" : 3
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
mongos>
5. Enable sharding for a collection
The config service, routing service, shard services, and replica sets are now connected, but our goal is for inserted data to be sharded automatically. Connect to mongos and enable sharding for the target database and collection.
Log in to any mongos
mongo --port 20000
Use the admin database
use admin
Enable sharding on the testdb database
db.runCommand( { enablesharding :"testdb"});
Specify the collection to shard and its shard key; here we shard on a hashed id
db.runCommand( { shardcollection : "testdb.table1",key : {"id": "hashed"} } );
This configures the table1 collection of testdb to be sharded, distributing documents automatically across shard1, shard2, and shard3 by hashed id. Sharding must be enabled explicitly like this because not every MongoDB database and collection needs it.
Test shard configuration results
Connect to MongoDB routing service
mongo 127.0.0.1:20000
Switch to the testdb database
use testdb;
Insert test data
for(i=1;i<=100000;i++){db.table1.insert({"id":i,"name":"penglei"})};
Count the total
db.table1.aggregate([{$group : {_id : "$name", totle : {$sum : 1}}}])
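The $group stage above adds 1 into totle for each document sharing the same name. The same computation in plain JavaScript, for comparison:

```javascript
// Equivalent of: db.table1.aggregate([{$group: {_id: "$name", totle: {$sum: 1}}}])
const docs = Array.from({ length: 100000 }, (_, i) => ({ id: i + 1, name: "penglei" }));

const totals = {};
for (const d of docs) totals[d.name] = (totals[d.name] || 0) + 1;
console.log(totals); // { penglei: 100000 }
```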
The per-shard document counts are as follows:
- shard1: "count": 33755
- shard2: "count": 33143
- shard3: "count": 33102
Conclusion: the data are distributed fairly evenly.
db.table1.stats();
mongos> db.table1.stats();
{
"sharded": true,
"capped": false,
"ns": "testdb.table1",
"count": 100000,
"size": 5200000,
"storageSize": 1519616,
"totalIndexSize": 3530752,
"indexSizes": {
"_id_": 892928,
"id_hashed": 2637824
},
"avgObjSize": 52,
"nindexes": 2,
"nchunks": 6,
"shards": {
"shard1": {
"ns": "testdb.table1",
"size": 1755260,
"count": 33755,
"avgObjSize": 52,
"storageSize": 532480,
"capped": false,
"wiredTiger": {
... (output omitted)
}
},
"shard2": {
"ns": "testdb.table1",
"size": 1723436,
"count": 33143,
"avgObjSize": 52,
"storageSize": 479232,
"capped": false,
"wiredTiger": {
... (output omitted)
}
},
"shard3": {
"ns": "testdb.table1",
"size": 1721304,
"count": 33102,
"avgObjSize": 52,
"storageSize": 507904,
"capped": false,
"wiredTiger": {
... (output omitted)
}
}
},
"ok": 1,
"$clusterTime": {
"clusterTime": Timestamp(1517488062, 350),
"signature": {
"hash": BinData(0, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId": NumberLong(0)
}
},
"operationTime": Timestamp(1517488062, 350)
}
mongos>
The grouped total is 100000:
mongos> db.table1.aggregate([{$group : {_id : "$name", totle : {$sum : 1}}}])
{ "_id" : "penglei", "totle" : 100000 }
mongos>
Ongoing operations and maintenance
See also:
Teach you the installation and detailed use of MongoDB (1)
http://www.ymq.io/2018/01/26/MongoDB-1/
Teach you the installation and detailed use of MongoDB (2)
http://www.ymq.io/2018/01/29/MongoDB-2/
Create an index
db.table1.createIndex({"name":1})
db.table1.getIndexes()
Startup
The MongoDB startup order: start the config servers first, then the shards, and finally mongos.
mongod -f /usr/local/mongodb/conf/config.conf
mongod -f /usr/local/mongodb/conf/shard1.conf
mongod -f /usr/local/mongodb/conf/shard2.conf
mongod -f /usr/local/mongodb/conf/shard3.conf
mongos -f /usr/local/mongodb/conf/mongos.conf
Startup errors
If mongod hangs while starting, with output like the following:
about to fork child process, waiting until server is ready for connections.
forked process: 1303
child process started successfully, parent exiting
[root@node1 ~]# mongod -f /usr/local/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1384
delete the stale mongod.lock file
cd /usr/local/mongodb/shard1/data
rm -rf mongod.lock
Turn off the firewall (if it is left running, you may hit this same problem)
systemctl stop firewalld.service
Shutdown
# on Debian/Ubuntu:
apt-get install psmisc
# on CentOS/RHEL:
yum install psmisc
To shut down, kill the processes directly
killall mongod
killall mongos
References:
mongodb 3.4 cluster construction: shard + replica set: http://www.ityouknow.com/mongodb/2017/08/05/mongodb-cluster-setup.html
Runoob tutorial: http://www.runoob.com/mongodb/mongodb-tutorial.html
TutorialsPoint tutorial: https://www.tutorialspoint.com/mongodb/mongodb_advantages.htm
MongoDB official website: https://www.mongodb.com
MongoDB official English documentation: https://docs.mongodb.com/manual
MongoDB downloads for each platform: https://www.mongodb.com/download-center#community
MongoDB installation: https://docs.mongodb.com/manual/tutorial/install-mongodb-enterprise-on-ubuntu
Contact
- Author: Peng Lei
- Source: http://www.ymq.io/2018/01/29/MongoDB-2
- Email:[email protected]
- Copyright belongs to the author; please credit the source when reprinting
- WeChat: follow the public account (search "cloud library"), which focuses on development technology research and knowledge sharing