Setting up the JDK environment
-
Deploy manually: first upload the JDK tarball
[root@iz8vb6evwfagx3tyjx4fl8z soft]# ll
total 189496
-rw-r--r-- 1 root root 194042837 Apr  8 14:11 jdk-8u202-linux-x64.tar.gz
-
Extract it to a dedicated directory
mkdir -p /opt/test/java
tar -zxvf jdk-8u202-linux-x64.tar.gz -C /opt/test/java
-
Edit /etc/profile with vim and add the JDK environment variables
JAVA_HOME=/opt/test/java/jdk1.8.0_202
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
-
Run source /etc/profile to apply the configuration
-
Check the JDK version
[root@iz8vb6evwfagx3tyjx4fl8z soft]# java -version
java version "1.8.0_202"
Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)
Building the Zookeeper cluster
-
Create a directory for each node
cd /opt/test/
mkdir -p cluster/node01 && mkdir -p cluster/node02 && mkdir -p cluster/node03
-
Set the host machine's IP
machine_ip=121.89.209.190
-
Run node 1
docker run -d -p 2181:2181 -p 2887:2888 -p 3887:3888 --name zookeeper_node01 --restart always \
 -v $PWD/cluster/node01/volume/data:/data \
 -v $PWD/cluster/node01/volume/datalog:/datalog \
 -e "TZ=Asia/Shanghai" \
 -e "ZOO_MY_ID=1" \
 -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=$machine_ip:2888:3888 server.3=$machine_ip:2889:3889" \
 zookeeper:3.4.13
-
Run node 2
docker run -d -p 2182:2181 -p 2888:2888 -p 3888:3888 --name zookeeper_node02 --restart always \
 -v $PWD/cluster/node02/volume/data:/data \
 -v $PWD/cluster/node02/volume/datalog:/datalog \
 -e "TZ=Asia/Shanghai" \
 -e "ZOO_MY_ID=2" \
 -e "ZOO_SERVERS=server.1=$machine_ip:2887:3887 server.2=0.0.0.0:2888:3888 server.3=$machine_ip:2889:3889" \
 zookeeper:3.4.13
-
Run node 3
docker run -d -p 2183:2181 -p 2889:2888 -p 3889:3888 --name zookeeper_node03 --restart always \
-v $PWD/cluster/node03/volume/data:/data \
-v $PWD/cluster/node03/volume/datalog:/datalog \
-e "TZ=Asia/Shanghai" \
-e "ZOO_MY_ID=3" \
-e "ZOO_SERVERS=server.1=$machine_ip:2887:3887 server.2=$machine_ip:2888:3888 server.3=0.0.0.0:2888:3888" \
zookeeper:3.4.13
-
View the Docker container logs
docker logs -f <container ID>
-
At this point, connection errors are reported in the logs
java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:534)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:454)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:435)
        at java.lang.Thread.run(Thread.java:748)
2020-04-08 16:00:44,614 [myid:1] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@1025] - Connection broken for id 1, my id = 1, error = java.io.EOFException
-
The reason is Docker's default bridge network mode: each container is addressed as host IP + mapped port, and with this setup node01 cannot reach node02 and node03
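The failure is easier to see when the ZOO_SERVERS value is taken apart. Each entry has the form server.<id>=<host>:<quorumPort>:<electionPort>, and <host> is the address a node dials for quorum traffic and leader election; under the default bridge network those addresses point at the host machine's remapped ports rather than directly at the peer containers. A small illustrative parser (not part of the article's setup, just a sketch of the entry format):

```python
def parse_zoo_servers(value):
    """Split a ZOO_SERVERS string into (id, host, quorum_port, election_port) tuples."""
    entries = []
    for part in value.split():
        key, addr = part.split("=", 1)           # e.g. "server.1", "0.0.0.0:2888:3888"
        sid = int(key.split(".")[1])             # numeric server id
        host, quorum, election = addr.rsplit(":", 2)
        entries.append((sid, host, int(quorum), int(election)))
    return entries

# The value node 1 was started with: itself on 0.0.0.0, peers via the host IP.
servers = parse_zoo_servers(
    "server.1=0.0.0.0:2888:3888 "
    "server.2=121.89.209.190:2888:3888 "
    "server.3=121.89.209.190:2889:3889"
)
print(servers)
# Both peers resolve to the host machine, so every quorum connection must
# round-trip through the host's port mappings instead of container-to-container.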
-
-
Find each container's IP
docker inspect <container ID>
- node01: 172.17.0.2
- node02: 172.17.0.3
- node03: 172.17.0.4
-
So each container has its own IP, but that raises another problem: the IP is assigned dynamically, so we cannot know it before startup. One solution is to create our own bridge network and assign each container a fixed IP when it is created.
-
So, everything above gets torn down and we start over ...
-
Stop and remove all containers
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
[Starting over]
-
Create a custom bridge network
docker network create --driver bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 zoonet
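Before picking fixed addresses, it is worth confirming that they fall inside the new zoonet subnet and stay clear of Docker's default bridge range (172.17.0.0/16, where the dynamic 172.17.0.2-4 addresses came from). A quick stdlib check using the addresses assigned below; the ranges are assumptions based on the outputs shown in this article:

```python
import ipaddress

# Subnet passed to `docker network create`, and Docker's default bridge range.
zoonet = ipaddress.ip_network("172.18.0.0/16")
default_bridge = ipaddress.ip_network("172.17.0.0/16")

# The fixed --ip values used for the three nodes further down.
fixed_ips = [ipaddress.ip_address(f"172.18.0.{i}") for i in (2, 3, 4)]

for ip in fixed_ips:
    assert ip in zoonet               # usable as --ip on the zoonet network
    assert ip not in default_bridge   # no clash with dynamically assigned IPs

# The two bridge networks themselves must not overlap either.
assert not zoonet.overlaps(default_bridge)
print("fixed IPs fit the zoonet subnet")
```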
-
List the Docker networks
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
a121ed854d1c   bridge   bridge   local
ab9083cbac8a   host     host     local
4d3012b89f70   none     null     local
26b8cbf5b4c9   zoonet   bridge   local
-
Inspect the bridge network
docker network inspect 26b8cbf5b4c9
-
Query result
[
    {
        "Name": "zoonet",
        "Id": "26b8cbf5b4c9d086b81edc22f4627de5ef71a8745374554b440d394ad40858f4",
        "Created": "2020-04-08T16:25:00.982635799+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
-
-
Modify the Zookeeper container creation commands
-
Run node 1
docker run -d -p 2181:2181 --name zookeeper_node01 --privileged --restart always --network zoonet --ip 172.18.0.2 \
 -v /opt/test/cluster/node01/volume/data:/data \
 -v /opt/test/cluster/node01/volume/datalog:/datalog \
 -v /opt/test/cluster/node01/volume/logs:/logs \
 -e ZOO_MY_ID=1 \
 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" \
 4ebfb9474e72  # the Zookeeper image ID
-
Run node 2
docker run -d -p 2182:2181 --name zookeeper_node02 --privileged --restart always --network zoonet --ip 172.18.0.3 \
 -v /opt/test/cluster/node02/volume/data:/data \
 -v /opt/test/cluster/node02/volume/datalog:/datalog \
 -v /opt/test/cluster/node02/volume/logs:/logs \
 -e ZOO_MY_ID=2 \
 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" \
 4ebfb9474e72
-
Run node 3
docker run -d -p 2183:2181 --name zookeeper_node03 --privileged --restart always --network zoonet --ip 172.18.0.4 \
 -v /opt/test/cluster/node03/volume/data:/data \
 -v /opt/test/cluster/node03/volume/datalog:/datalog \
 -v /opt/test/cluster/node03/volume/logs:/logs \
 -e ZOO_MY_ID=3 \
 -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" \
 4ebfb9474e72
-
Check the containers
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS                                        NAMES
82753d13ac44   4ebfb9474e72   "/docker-entrypoint.…"   21 seconds ago       Up 21 seconds       2888/tcp, 3888/tcp, 0.0.0.0:2183->2181/tcp   zookeeper_node03
eee56297eb96   4ebfb9474e72   "/docker-entrypoint.…"   42 seconds ago       Up 41 seconds       2888/tcp, 3888/tcp, 0.0.0.0:2182->2181/tcp   zookeeper_node02
ee8a9710fa3e   4ebfb9474e72   "/docker-entrypoint.…"   About a minute ago   Up About a minute   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   zookeeper_node01
-
Check the container logs again
docker logs -f <container ID>
- No errors this time
-
Now check the status inside each container
# node01
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it ee8a9710fa3e bash
bash-4.4# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower
# node02
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it eee56297eb96 bash
bash-4.4# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader
# node03
[root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it 82753d13ac44 bash
bash-4.4# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower
- All nodes are healthy; the cluster installation is complete.
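As a sanity check on the zkServer.sh status output: a healthy ensemble reports exactly one leader, with the remaining nodes as followers. A small illustrative sketch over the captured output (the status strings are the ones shown above; the helper name is made up for this example):

```python
# Modes reported by `zkServer.sh status` on each node, as captured above.
status_output = {
    "node01": "ZooKeeper JMX enabled by default\nUsing config: /conf/zoo.cfg\nMode: follower",
    "node02": "ZooKeeper JMX enabled by default\nUsing config: /conf/zoo.cfg\nMode: leader",
    "node03": "ZooKeeper JMX enabled by default\nUsing config: /conf/zoo.cfg\nMode: follower",
}

def extract_mode(output):
    """Pull the value of the 'Mode:' line out of zkServer.sh status output."""
    for line in output.splitlines():
        if line.startswith("Mode:"):
            return line.split(":", 1)[1].strip()
    return None

modes = [extract_mode(out) for out in status_output.values()]
assert modes.count("leader") == 1                 # exactly one elected leader
assert modes.count("follower") == len(modes) - 1  # everyone else follows
print("cluster healthy:", modes)
```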
-