Deploying Docker on CentOS with a flannel network

Flannel is essentially an "overlay network": container traffic is encapsulated inside another network packet for routing, forwarding, and communication. It currently supports forwarding backends such as UDP, VXLAN, AWS VPC, and GCE routing.

Install docker

yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl start docker
# docker --version
Docker version 20.10.1, build 831ebea
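
Optionally, enable Docker to start on boot and confirm the daemon is up (standard systemd and docker commands):

systemctl enable docker
docker info | grep "Server Version"    # should report the version installed above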

With the Docker environment installed, enable IP forwarding, disable SELinux, and turn off the firewall

[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@localhost ~]# setenforce 0
[root@localhost ~]# getenforce 
Permissive

Set the default iptables FORWARD policy to ACCEPT

[root@localhost ~]# iptables -P FORWARD ACCEPT
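
Note that iptables -P does not survive a reboot on its own, and kernel IP forwarding must also be enabled for cross-host container traffic. A minimal sketch (the file name under /etc/sysctl.d/ is just an example):

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/99-flannel.conf
sysctl -p /etc/sysctl.d/99-flannel.conf
sysctl net.ipv4.ip_forward    # should print: net.ipv4.ip_forward = 1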

Install Flannel and etcd

[root@localhost ~]# yum -y install flannel etcd

Configure etcd for the Docker cluster
A single Docker host is used as the example here

[root@localhost ~]# cp -p /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@localhost ~]# vim /etc/etcd/etcd.conf
# etcd data directory; renamed to match ETCD_NAME for convenience
ETCD_DATA_DIR="/var/lib/etcd/etcd1"
# Used for communication with the other nodes; use this host's IP
ETCD_LISTEN_PEER_URLS="http://192.168.1.11:2380"
# Clients connect here to talk to etcd; this host's IP plus the loopback address
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.11:2379,http://127.0.0.1:2379"
# Node name; different on every host
ETCD_NAME="etcd1"
# Peer listen address advertised to the other nodes in the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.11:2380"
# Client listen address advertised to the other nodes in the cluster
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.11:2379"
# Cluster membership; with multiple Docker hosts, list name=http://ip:2380 entries separated by commas
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.11:2380"
# Cluster token; change it to your own value and keep it identical on every node in the cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-test"
# "new" when creating a new cluster; "existing" when joining an existing one
ETCD_INITIAL_CLUSTER_STATE="new"
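
For reference, on a three-host cluster ETCD_INITIAL_CLUSTER would list every member; the extra names and IPs below are hypothetical:

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.11:2380,etcd2=http://192.168.1.12:2380,etcd3=http://192.168.1.13:2380"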

Make the etcd service pick up the values from the configuration file

[root@localhost ~]# vim /usr/lib/systemd/system/etcd.service 
# On line 13 (the line starting with ExecStart), add the following inside the closing quote at the end of the line
# Keep everything on one line, separated by spaces
--listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\"
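
After the edit, the ExecStart line should look roughly like the following; the leading part is the default from the CentOS etcd package and may differ slightly between versions:

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""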

Start the etcd service. When the whole cluster is being brought up, the first node started will block until another member comes online; once the others are started, the blocked node continues normally. If an error is reported or the service fails to start, carefully recheck the two files edited above.

systemctl daemon-reload
systemctl start etcd

Check the cluster health; if the cluster has multiple members, each one is listed

[root@localhost ~]# etcdctl cluster-health
member 12b11316a20f4e7 is healthy: got healthy result from http://192.168.1.11:2379
cluster is healthy
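
As a quick sanity check that etcd accepts writes, set and read back a throwaway key (the key name here is arbitrary):

[root@localhost ~]# etcdctl set /test "hello"
hello
[root@localhost ~]# etcdctl get /test
hello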

View the cluster leader

The blocking at startup described above happens because no leader has been elected yet; until a leader exists, the other etcd nodes in the cluster will not finish starting

[root@localhost ~]# etcdctl member list
12b11316a20f4e7: name=etcd1 peerURLs=http://192.168.1.11:2380 clientURLs=http://192.168.1.11:2379 isLeader=true

Create a JSON file that defines the network segment flannel will allocate from (NetWork is the overall range, SubnetLen is the per-host subnet size, and the backend is VXLAN)

[root@localhost ~]# vim /root/etcd.json
{
  "NetWork":"10.10.0.0/16",
  "SubnetLen":24,
  "Backend":{
    "Type":"vxlan"
  }
}

Import the file into etcd. Note that the key is an etcd key path, not a filesystem path; flannel later reads its configuration from the key ${FLANNEL_ETCD_PREFIX}/config

[root@localhost ~]# etcdctl --endpoint=http://192.168.1.11:2379 \
set /usr/local/bin/network/config < /root/etcd.json
{
  "NetWork":"10.10.0.0/16",
  "SubnetLen":24,
  "Backend":{
    "Type":"vxlan"
  }
}

View the imported value

[root@localhost ~]# etcdctl get /usr/local/bin/network/config
{
  "NetWork":"10.10.0.0/16",
  "SubnetLen":24,
  "Backend":{
    "Type":"vxlan"
  }
}

Edit the flannel configuration file

[root@localhost ~]# vim /etc/sysconfig/flanneld
# Each host in the cluster uses its own IP here
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.11:2379"
FLANNEL_ETCD_PREFIX="/usr/local/bin/network"

Start the flannel service

[root@localhost ~]# systemctl start flanneld
[root@localhost ~]# systemctl enable flanneld

View the ip allocated by flannel

[root@localhost ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.10.41.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f48f:51ff:fe8c:7500  prefixlen 64  scopeid 0x20<link>
        ether f6:8f:51:8c:75:00  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 33 overruns 0  carrier 0  collisions 0

Integrate Docker with the flannel network
View the MTU value and gateway assigned by flannel

[root@localhost ~]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.41.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
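
The --bip and --mtu values used in the next step come straight from this file; a small convenience sketch that prints them ready to paste (it only reads the variables shown above):

source /run/flannel/subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"    # prints: --bip=10.10.41.1/24 --mtu=1450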

Modify the docker startup item

[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
# Append the following to the end of line 14 (the ExecStart line)
--bip=10.10.41.1/24 --mtu=1450
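
After the edit, the ExecStart line should look roughly like this; the default flags come from the docker-ce unit file and may vary between releases:

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=10.10.41.1/24 --mtu=1450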

Restart docker

[root@localhost ~]# systemctl daemon-reload 
[root@localhost ~]# systemctl restart docker

Check the docker0 interface; it now exists and acts as the gateway of this host's flannel subnet

[root@localhost ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.41.1  netmask 255.255.255.0  broadcast 10.10.41.255
        ether 02:42:31:92:1c:1e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

From now on, containers started on this host will use the flannel network by default.
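
A quick way to verify this, assuming a second host has been set up the same way, is to start a container, check its address, and ping a container on the other host (the busybox image and the remote IP below are only examples):

[root@localhost ~]# docker run -it --rm busybox sh
/ # ip addr show eth0        # the address should come from this host's flannel subnet, e.g. 10.10.41.0/24
/ # ping -c 3 10.10.20.2     # hypothetical container IP on the other host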
