Common replica set
The usual online architecture for a replica set can be understood as a single primary with multiple secondaries.
Standard layout: one primary and two secondaries.
Minimal layout: one primary, one secondary, one arbiter (the layout used in this walkthrough).
Server information:
Three machines, each configured with 2 CPU cores, 16 GB of RAM, and a 100 GB storage disk.
"host": "10.1.1.159:27020"
"host": "10.1.1.77:27020"
"host": "10.1.1.178:27020
1. Configure one of the machines:
[root@10-1-1-159 ~]# wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.2.1.tgz
[root@10-1-1-159 ~]# tar -zxvf mongodb-linux-x86_64-rhel70-4.2.1.tgz -C /data/
[root@10-1-1-159 ~]# mkdir /data/mongodb/{data,logs,pid,conf} -p
Configuration file:
[root@10-1-1-159 ~]# cat /data/mongodb/conf/mongodb.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/mongod.log
storage:
  dbPath: /data/mongodb/data
  journal:
    enabled: true
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8 # If this machine runs a single mongod instance, comment this out to use the default; if several instances share one machine, set the cache size explicitly so they do not compete for memory
      directoryForIndexes: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/pid/mongod.pid
net:
  port: 27020
  bindIp: 10.1.1.159,localhost # change to the local machine's IP address
  maxIncomingConnections: 5000
#security:
#  keyFile: /data/mongodb/conf/keyfile
#  authorization: enabled
replication:
  # oplogSizeMB: 1024
  replSetName: rs02
2. Copy the configuration to the other machines (and adjust bindIp on each one, as sketched after the directory listing below):
[root@10-1-1-159 ~]# scp -r /data/ root@10.1.1.77:/data/
[root@10-1-1-159 ~]# scp -r /data/ root@10.1.1.178:/data/
Directory structure:
[root@10-1-1-178 data]# tree mongodb
mongodb
├── conf
│ └── mongodb.conf
├── data
├── logs
└── pid
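Because the copied mongodb.conf still has bindIp set to 10.1.1.159, edit it on each destination host so it points at that machine's own address (see the comment in the configuration file). A minimal sketch using sed; any editor works just as well:
# On 10.1.1.77 and 10.1.1.178 respectively:
[root@10-1-1-77 ~]# sed -i 's/10.1.1.159/10.1.1.77/' /data/mongodb/conf/mongodb.conf
[root@10-1-1-178 ~]# sed -i 's/10.1.1.159/10.1.1.178/' /data/mongodb/conf/mongodb.conf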
3. Run the following on all three machines:
groupadd mongod
useradd -g mongod mongod
yum install -y libcurl openssl glibc
cd /data
ln -s mongodb-linux-x86_64-rhel70-4.2.1 mongodb-4.2
chown -R mongod.mongod /data
sudo -u mongod /data/mongodb-4.2/bin/mongod -f /data/mongodb/conf/mongodb.conf
Configure the replica set:
# The replica set name rs02 must match replSetName in the configuration file
config = {_id: "rs02", members: [
    {_id: 0, host: "10.1.1.159:27020", priority: 90},
    {_id: 1, host: "10.1.1.77:27020", priority: 90},
    {_id: 2, host: "10.1.1.178:27020", arbiterOnly: true}
    ]
}
# Initialize
rs.initiate(config);
4. On one of the machines, run:
[root@10-1-1-159 ~]# /data/mongodb3.6.9/bin/mongo 10.1.1.159:27020
> use admin
switched to db admin
> config = { _id:"rs02", members:[
... {_id:0,host:"10.1.1.159:27020",priority:90},
... {_id:1,host:"10.1.1.77:27020",priority:90},
... {_id:2,host:"10.1.1.178:27020",arbiterOnly:true}
... ]
... }
{
"_id" : "rs02",
"members" : [
{
"_id" : 0,
"host" : "10.1.1.159:27020",
"priority" : 90
},
{
"_id" : 1,
"host" : "10.1.1.77:27020",
"priority" : 90
},
{
"_id" : 2,
"host" : "10.1.1.178:27020",
"arbiterOnly" : true
}
]
}
>
> rs.initiate(config);  # initialize the replica set
{
"ok" : 1,
"operationTime" : Timestamp(1583907929, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1583907929, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
5. Check the node status:
rs02:PRIMARY> rs.status()
{
"set" : "rs02",
"date" : ISODate("2020-03-13T07:11:09.427Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1584083465, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1584083465, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1584083465, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1584083465, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "10.1.1.159:27020",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY", #主节点
"uptime" : 185477,
"optime" : {
"ts" : Timestamp(1584083465, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-03-13T07:11:05Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1583907939, 1),
"electionDate" : ISODate("2020-03-11T06:25:39Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "10.1.1.77:27020",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY", #从节点
"uptime" : 175540,
"optime" : {
"ts" : Timestamp(1584083465, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1584083465, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-03-13T07:11:05Z"),
"optimeDurableDate" : ISODate("2020-03-13T07:11:05Z"),
"lastHeartbeat" : ISODate("2020-03-13T07:11:08.712Z"),
"lastHeartbeatRecv" : ISODate("2020-03-13T07:11:08.711Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.1.1.159:27020",
"syncSourceHost" : "10.1.1.159:27020",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.1.1.178:27020",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER", #仲裁节点
"uptime" : 175540,
"lastHeartbeat" : ISODate("2020-03-13T07:11:08.712Z"),
"lastHeartbeatRecv" : ISODate("2020-03-13T07:11:08.711Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1584083465, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1584083465, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs02:PRIMARY>
7. The replica set is now in the following state:
10.1.1.178:27020   ARBITER     (arbiter node)
10.1.1.77:27020    SECONDARY   (secondary node)
10.1.1.159:27020   PRIMARY     (primary node)
We insert some test data to confirm writes, then shut down the primary node to test failover (a rough sketch of these commands is shown below).
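A minimal sketch of this check, assuming the mongodb-4.2 symlink from step 3; the database name testdb and collection t1 are placeholders, not part of the original setup:
# Connect to the current primary and write a few test documents:
[root@10-1-1-159 ~]# /data/mongodb-4.2/bin/mongo 10.1.1.159:27020
rs02:PRIMARY> use testdb
rs02:PRIMARY> for (var i = 0; i < 10; i++) { db.t1.insert({n: i, msg: "failover test"}) }
rs02:PRIMARY> db.t1.count()
# Then stop the primary's mongod (its pid file path comes from the configuration file above) to trigger a new election:
[root@10-1-1-159 ~]# kill $(cat /data/mongodb/pid/mongod.pid)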
Arbiter node log:
From the log we can see that after node 10.1.1.159 went down, a new election was held: member 10.1.1.77:27010 is now in state PRIMARY.
2020-03-18T14:34:53.636+0800 I NETWORK [conn9] end connection 10.1.1.159:49160 (1 connection now open)
2020-03-18T14:34:54.465+0800 I CONNPOOL [Replication] dropping unhealthy pooled connection to 10.1.1.159:27010
2020-03-18T14:34:54.465+0800 I CONNPOOL [Replication] after drop, pool was empty, going to spawn some connections
2020-03-18T14:34:54.465+0800 I ASIO [Replication] Connecting to 10.1.1.159:27010
......
2020-03-18T14:35:02.473+0800 I ASIO [Replication] Failed to connect to 10.1.1.159:27010 - HostUnreachable: Error connecting to 10.1.1.159:27010 :: caused by :: Connection refused
2020-03-18T14:35:02.473+0800 I CONNPOOL [Replication] Dropping all pooled connections to 10.1.1.159:27010 due to HostUnreachable: Error connecting to 10.1.1.159:27010 :: caused by :: Connection refused
2020-03-18T14:35:02.473+0800 I REPL_HB [replexec-8] Error in heartbeat (requestId: 662) to 10.1.1.159:27010, response status: HostUnreachable: Error connecting to 10.1.1.159:27010 :: caused by :: Connection refused
2020-03-18T14:35:04.463+0800 I REPL [replexec-5] Member 10.1.1.77:27010 is now in state PRIMARY
2020-03-18T14:35:04.473+0800 I ASIO [Replication] Connecting to 10.1.1.159:27010
2020-03-18T14:35:04.473+0800 I ASIO [Replication] Failed to connect to 10.1.1.159:27010 - HostUnreachable: Error connecting to 10.1.1.159:27010 :: caused by :: Connection refused
2020-03-18T14:35:04.473+0800 I CONNPOOL [Replication] Dropping all pooled connections to 10.1.1.159:27010 due to HostUnreachable: Error connecting to 10.1.1.159:27010 :: caused by :: Connection refused
The architecture then becomes the one shown in the figure below.
At this point a complete replica set has been built, and we have verified that, with at least three nodes, the failure of a single node does not interrupt normal read and write service (a client connection example follows).
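As a usage note, clients can connect with the full replica set URI instead of a single host, so that after a failover like the one above their reads and writes automatically follow the newly elected primary. A minimal sketch (the mongo client path again assumes the symlink from step 3):
/data/mongodb-4.2/bin/mongo "mongodb://10.1.1.159:27020,10.1.1.77:27020,10.1.1.178:27020/?replicaSet=rs02"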
The next chapter will cover adding users.