1. Introduction
The previous post showed how the kibana and elasticsearch containers communicate with each other through --link:
with --link, one container reaches another by its container name (--link container_name, or --link container_name:alias to give it an alias).
This post replaces --link with another mechanism for container-to-container communication: docker network.
2. docker network basics
List the local networks:
root@root-VirtualBox:/$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
398eede5bca4        bridge              bridge              local
135ea9fff8b4        host                host                local
4a447cb28159        none                null                local
Now create a network named somenetwork with the bridge driver (bridge is the default driver, so it need not be specified here):
root@root-VirtualBox:/$ docker network create somenetwork
f4766402b783fdd3070294a7d3b18db1f9548d03c11c16f61c955afa243eef33
root@root-VirtualBox:/$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
398eede5bca4        bridge              bridge              local
135ea9fff8b4        host                host                local
4a447cb28159        none                null                local
f4766402b783        somenetwork         bridge              local
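To see what the new network looks like, docker network inspect prints its driver, subnet, and the containers attached to it. A quick sketch (this needs a running Docker daemon, and the subnet shown is just an example Docker's IPAM might assign):

```shell
# Full JSON description of the user-defined network.
docker network inspect somenetwork

# Extract just the subnet using the docker CLI's Go-template format option.
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' somenetwork
# e.g. 172.18.0.0/16
```

After containers join the network, the same inspect output lists them under the "Containers" key together with their IPs.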
Start the containers with the --network option:
The elasticsearch startup script becomes:
#!/bin/bash
#Written by ***
#Description: use docker run to start the app server
#2019.9.16
set -e
#############################################################################################################
docker run -d --name elasticsearch --restart=always \
--network somenetwork --network-alias elasticsearch \
-v /work/elasticsearch/config:/usr/share/elasticsearch/config \
-v /work/elasticsearch/data:/usr/share/elasticsearch/data \
-v /work/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
elasticsearch:7.5.0
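Once the container is up, a quick check from the host confirms elasticsearch is answering on the published port (a sketch, assuming the port mapping above; a healthy node returns a JSON banner with its name and version):

```shell
# Query the published port from the host.
curl -s http://localhost:9200
```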
The kibana startup script becomes:
#!/bin/bash
#Written by ***
#Description: use docker run to start the app server
#2019.9.16
set -e
#############################################################################################################
docker run -d --name kibana --restart=always \
--network somenetwork --network-alias kibana \
-p 5601:5601 \
-v /work/kibana/config:/usr/share/kibana/config \
-v /work/kibana/data:/usr/share/kibana/data \
-e "elasticsearch.ssl.verify=false" \
kibana:7.5.0
The --network-alias option gives the container an additional DNS name on somenetwork; other containers on the same network can resolve it by this alias as well as by its container name. Now enter the kibana container and ping elasticsearch:
docker exec -it kibana /bin/bash
bash-4.2$ ping elasticsearch
PING elasticsearch (172.18.0.2) 56(84) bytes of data.
64 bytes from elasticsearch.somenetwork (172.18.0.2): icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from elasticsearch.somenetwork (172.18.0.2): icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from elasticsearch.somenetwork (172.18.0.2): icmp_seq=3 ttl=64 time=0.041 ms
64 bytes from elasticsearch.somenetwork (172.18.0.2): icmp_seq=4 ttl=64 time=0.064 ms
64 bytes from elasticsearch.somenetwork (172.18.0.2): icmp_seq=5 ttl=64 time=0.054 ms
64 bytes from elasticsearch.somenetwork (172.18.0.2): icmp_seq=6 ttl=64 time=0.048 ms
Why can kibana ping elasticsearch here? Both containers are attached to the same network somenetwork, and what kibana pings is the container name (membership is managed at the container level). On a user-defined bridge network, Docker's embedded DNS server resolves container names and --network-alias aliases to container IPs; note this automatic name resolution works only on user-defined networks, not on the default bridge.
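Because membership is per container, a running container can also be attached to or detached from a network after it has started, without restarting it. A sketch (some_other_container is a hypothetical placeholder):

```shell
# Attach a running container to somenetwork; it immediately gains a DNS
# entry there under its container name.
docker network connect somenetwork some_other_container

# An extra alias on this network can be added at connect time.
docker network connect --alias search somenetwork some_other_container

# Detach again; its DNS entries on that network are removed.
docker network disconnect somenetwork some_other_container
```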
Above, the --network option was passed to docker run; in a Compose configuration file the equivalent is the networks key. The following file illustrates how it is used:
[root@docker lnmp]# cat docker-compose.yml
version: '3'
services:
  kibana:
    image: kibana
    container_name: kibana
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
    networks:
      - "somenetwork"
    volumes:
      - /work/kibana/config:/usr/share/kibana/config
      - /work/kibana/data:/usr/share/kibana/data
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - "somenetwork"
    volumes:
      - /work/elasticsearch/config:/usr/share/elasticsearch/config
      - /work/elasticsearch/data:/usr/share/elasticsearch/data
      - /work/elasticsearch/plugins:/usr/share/elasticsearch/plugins
networks:
  somenetwork:
    driver: bridge
In this file, the top-level networks key defines a network named somenetwork (networks is a top-level element, so it must be declared at the top level of the file). The driver is set explicitly to bridge (use bridge on a single host, overlay in a swarm cluster); if driver is omitted, Compose defaults to bridge on a single host. The elasticsearch and kibana services both join this same network somenetwork, so they can reach each other by service name just as before.
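Bringing the stack up with Compose creates the network automatically. One detail worth knowing: Compose prefixes the network name with the project name (by default the directory name, lnmp here), so it appears as lnmp_somenetwork rather than plain somenetwork. A sketch (needs docker-compose and a Docker daemon):

```shell
# Start both services in the background; the network is created on first up.
docker-compose up -d

# The network is listed with the project-name prefix, e.g. lnmp_somenetwork.
docker network ls --filter name=somenetwork
```

To reuse the somenetwork created earlier with docker network create instead of letting Compose create a fresh one, declare it with external: true under the top-level networks key.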