This article builds a flannel network so that Docker containers on different hosts can communicate with each other, providing the network foundation for the Kubernetes cluster, and then deploys keepalived+haproxy for high availability (HA).
The servers to be deployed:
master1 192.168.206.31
master2 192.168.206.32
master3 192.168.206.33
node1 192.168.206.41
node2 192.168.206.42
node3 192.168.206.43
VIP: 192.168.206.30
ha1 192.168.206.36
ha2 192.168.206.37
1. Generate Flannel network TLS certificate
Flannel is installed on all cluster nodes; the following operations are performed on k8s-master1.
1. Create a certificate signing request
cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
This certificate is only used as a client certificate (flanneld presents it when connecting to etcd), so the hosts field is left empty.
2. Generate certificate and private key:
cfssl gencert -ca=/data/ssl/ca.pem \
-ca-key=/data/ssl/ca-key.pem \
-config=/data/ssl/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
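cfssl should now have produced the certificate and key; a quick sanity check (the openssl inspection is optional and not part of the original procedure):
# The gencert step produces flanneld.pem, flanneld-key.pem and flanneld.csr
ls flanneld*
# Optional: view the certificate subject and validity dates
openssl x509 -in flanneld.pem -noout -subject -dates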
Create a directory to store the certificates:
mkdir /opt/kubernetes/ssl/flannel
Copy the certificates to this directory on all 3 masters and 3 nodes:
cp flanneld*.pem /opt/kubernetes/ssl/flannel
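The cp above only covers the local machine; to push the certificates to the remaining masters and nodes, a loop like the following can be used (a sketch that assumes passwordless root SSH to each node):
# Hypothetical distribution loop: push the flannel certs to every master and node
for ip in 192.168.206.31 192.168.206.32 192.168.206.33 \
          192.168.206.41 192.168.206.42 192.168.206.43; do
    ssh root@$ip "mkdir -p /opt/kubernetes/ssl/flannel"
    scp flanneld*.pem root@$ip:/opt/kubernetes/ssl/flannel/
done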
2. Deploy Flannel
1. Download and install Flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz
cp {flanneld,mk-docker-opts.sh} /opt/kubernetes/bin/
2. Write network segment information to etcd.
The following two commands only need to be executed once, on any one node of the etcd cluster; they create the network segment from which flannel allocates subnets to docker.
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem mkdir /opt/kubernetes/network
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem mk /opt/kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
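You can read the key back to confirm it was written correctly:
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem get /opt/kubernetes/network/config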
3. Create a system unit file
cat > /etc/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/opt/kubernetes/bin/flanneld \
-etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/flannel/flanneld.pem \
-etcd-keyfile=/opt/kubernetes/ssl/flannel/flanneld-key.pem \
-etcd-endpoints=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
-etcd-prefix=/opt/kubernetes/network
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
The mk-docker-opts.sh script writes the Pod subnet information allocated to flanneld into the /run/flannel/docker file; when docker starts later, it uses the parameter values in this file to configure the docker0 bridge.
flanneld communicates with other nodes over the interface that carries the system default route. For machines with multiple network interfaces (e.g., internal and public networks), use the -iface=enpxx option to specify the communication interface.
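For example, the hosts in this article use ens33 (see the ip add output below), so pinning flanneld to that interface would mean one extra flag on the ExecStart line shown above (illustration only; all other flags unchanged):
ExecStart=/opt/kubernetes/bin/flanneld \
-etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/flannel/flanneld.pem \
-etcd-keyfile=/opt/kubernetes/ssl/flannel/flanneld-key.pem \
-etcd-endpoints=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
-etcd-prefix=/opt/kubernetes/network \
-iface=ens33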
4. Start flannel and set it to start automatically
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
5. View the subnet information allocated by flannel
[root@k8s-master1 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.30.94.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.94.1/24 --ip-masq=true --mtu=1450"
[root@k8s-master1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.94.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
/run/flannel/docker holds the subnet information flannel allocates to docker, while /run/flannel/subnet.env contains both flannel's overall network segment and the subnet assigned to this node.
6. Check whether the flannel network is effective
[root@k8s-master1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:6e:7f:49 brd ff:ff:ff:ff:ff:ff
inet 192.168.206.31/24 brd 192.168.206.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::bbd4:6d75:22b1:e631/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::129b:129d:71ca:5d94/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::1b37:c32:6cc4:be75/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether de:b1:04:6f:d6:57 brd ff:ff:ff:ff:ff:ff
inet 172.30.65.0/32 brd 172.30.65.0 scope global flannel.1
valid_lft forever preferred_lft forever
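As a quick end-to-end check of the overlay, ping the flannel.1 address of another node from k8s-master1 (172.30.94.0 below is only a placeholder; substitute whatever ip add reports for flannel.1 on that node):
# Replace 172.30.94.0 with the flannel.1 address of another node
ping -c 3 172.30.94.0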
3. Install docker and configure it to support the flannel network
1. Install docker on all nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install a specific version; here we install 18.06
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker && systemctl enable docker
2. Configure docker to support the flannel network (perform on all docker nodes)
[root@k8s-master1 ~]# vi /etc/systemd/system/multi-user.target.wants/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
3. Restart docker to make the configuration take effect
systemctl daemon-reload
systemctl restart docker
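After the restart, docker0 should have picked up the --bip address from /run/flannel/docker (172.30.94.1/24 in the example above). A quick check:
# docker0 should now sit inside the flannel-allocated subnet
ip addr show docker0 | grep inet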
4. View the subnets allocated to all cluster hosts
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem ls /opt/kubernetes/network/subnets
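Beyond listing subnets, you can verify cross-host container connectivity directly. A minimal sketch, assuming the busybox image can be pulled on both nodes:
# On node1: start a test container and print the IP flannel assigned to it
docker run -d --name flannel-test busybox sleep 3600
docker inspect -f '{{.NetworkSettings.IPAddress}}' flannel-test
# On node2: ping that IP from a throwaway container (substitute the IP printed above)
docker run --rm busybox ping -c 3 <IP-from-node1>
# Back on node1: clean up
docker rm -f flannel-test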
4. keepalived+haproxy high-availability deployment
Deployment server
ha1 192.168.206.36
ha2 192.168.206.37
1. Install haproxy on both ha nodes
yum install -y haproxy
cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log     127.0.0.1 local2
    chroot  /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user    haproxy
    group   haproxy
    daemon
defaults
    mode            tcp
    log             global
    retries         3
    timeout connect 10s
    timeout client  1m
    timeout server  1m
frontend k8s-api
    bind *:6443
    bind *:443
    mode tcp
    option tcplog
    default_backend k8s-api
backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master1 192.168.206.31:6443 check
    server master2 192.168.206.32:6443 check
    server master3 192.168.206.33:6443 check
EOF
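Before starting the service, the configuration can be validated for syntax errors:
# Check the configuration file; prints "Configuration file is valid" on success
haproxy -c -f /etc/haproxy/haproxy.cfg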
2. Start haproxy on both nodes
systemctl start haproxy
systemctl status haproxy
systemctl enable haproxy
3. Install keepalived on both ha nodes
yum install -y keepalived
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_K8S
}
vrrp_instance VI_1 {
    state MASTER             # use BACKUP on the standby node
    interface ens33
    virtual_router_id 51
    priority 100             # use 50 on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.206.30/24
    }
}
EOF
4. Start keepalived on both nodes
systemctl restart keepalived
systemctl status keepalived
systemctl enable keepalived
After startup, verify that the VIP is present on the MASTER node, or stop keepalived on the primary ha node and check that the VIP fails over to the backup, as shown below.
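For example:
# On the current MASTER (ha1), the VIP should be visible on ens33
ip addr show ens33 | grep 192.168.206.30
# Stop keepalived on ha1 to simulate a failure ...
systemctl stop keepalived
# ... then on ha2 the VIP should now appear
ip addr show ens33 | grep 192.168.206.30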