Part Three (binary-deployment k8s cluster --- flannel network and keepalived + haproxy high availability)

This article builds a flannel network so that the Docker containers on the different hosts can reach one another, laying the network foundation for the Kubernetes cluster, and then sets up keepalived + haproxy for high availability.
The servers used in this deployment are:
master1 192.168.206.31
master2 192.168.206.32
master3 192.168.206.33
node1   192.168.206.41
node2   192.168.206.42
node3   192.168.206.43
VIP:    192.168.206.30
ha1     192.168.206.36
ha2     192.168.206.37

1. Generate the Flannel network TLS certificate

Flannel is installed on every node of the cluster; the following operations are carried out on k8s-master1.
1. Create the certificate signing request

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

This certificate is only used as a client certificate (flanneld presents it to etcd), so the hosts field is left empty.

2. Generate the certificate and private key:

cfssl gencert -ca=/data/ssl/ca.pem \
  -ca-key=/data/ssl/ca-key.pem \
  -config=/data/ssl/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

Create the directory where the certificates will be stored:
 mkdir /opt/kubernetes/ssl/flannel

Copy the certificates to all 3 masters and 3 nodes (the CA certificate is copied along, because the flanneld unit below expects it in the same directory):
cp /data/ssl/ca.pem flanneld*.pem /opt/kubernetes/ssl/flannel
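
A minimal sketch for distributing the certificates from k8s-master1 to the remaining masters and nodes; it assumes passwordless SSH as root to the hosts in the deployment table above:

for host in 192.168.206.32 192.168.206.33 192.168.206.41 192.168.206.42 192.168.206.43; do
  # create the target directory on the remote host, then copy the certificates
  ssh root@${host} "mkdir -p /opt/kubernetes/ssl/flannel"
  scp /opt/kubernetes/ssl/flannel/*.pem root@${host}:/opt/kubernetes/ssl/flannel/
done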

2. Deploy Flannel
1. Download and install Flannel

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz
cp {flanneld,mk-docker-opts.sh} /opt/kubernetes/bin/

2. Write the network segment information into etcd.
The following two commands only need to be run once, on any one node of the etcd cluster; they create the flannel network segment that will later be handed out to Docker.

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem mkdir /opt/kubernetes/network
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem mk /opt/kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
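
To confirm that the configuration landed in etcd, it can be read back with the same certificates:

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem get /opt/kubernetes/network/config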

3. Create the systemd unit file

cat > /etc/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/kubernetes/bin/flanneld \
  -etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
  -etcd-certfile=/opt/kubernetes/ssl/flannel/flanneld.pem \
  -etcd-keyfile=/opt/kubernetes/ssl/flannel/flanneld-key.pem \
  -etcd-endpoints=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
  -etcd-prefix=/opt/kubernetes/network
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into the /run/flannel/docker file; when Docker starts later, it uses the values in this file to configure the docker0 bridge.
flanneld talks to the other nodes over the interface that holds the system default route. On machines with several network interfaces (for example an internal and a public one), the interface to use can be specified with the -iface=enpxx option.
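
For example, to pin flanneld to a specific interface such as ens33 (the interface used on these hosts), the ExecStart line of the unit above gains one extra flag; the remaining arguments stay unchanged:

ExecStart=/opt/kubernetes/bin/flanneld \
  -iface=ens33 \
  -etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
  ...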

4. Start flanneld and enable it at boot

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld

5. View the subnet information assigned by flannel

[root@k8s-master1 ~]# cat /run/flannel/docker 
DOCKER_OPT_BIP="--bip=172.30.94.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.94.1/24 --ip-masq=true --mtu=1450"

[root@k8s-master1 ~]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.94.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false

/run/flannel/docker holds the subnet options that flannel assigned to Docker; /run/flannel/subnet.env contains flannel's overall network range as well as the subnet allocated to this node.

6. Check whether the flannel network is working

[root@k8s-master1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6e:7f:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.206.31/24 brd 192.168.206.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bbd4:6d75:22b1:e631/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::129b:129d:71ca:5d94/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::1b37:c32:6cc4:be75/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether de:b1:04:6f:d6:57 brd ff:ff:ff:ff:ff:ff
    inet 172.30.65.0/32 brd 172.30.65.0 scope global flannel.1
       valid_lft forever preferred_lft forever
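
Every node should now have a flannel.1 interface with its own /24 taken from 172.30.0.0/16. A quick connectivity check is to ping another node's flannel.1 address from here; the address below is only an example, use whatever ip add reports on that node:

# on k8s-master1, assuming flannel.1 on k8s-master2 shows inet 172.30.21.0/32
ping -c 3 172.30.21.0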

3. Install Docker and configure Docker to use the flannel network
1. Install Docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install a specific version; 18.06 is installed here
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7

systemctl start docker && systemctl enable docker

2. Configure Docker to use the flannel network; this is done on all Docker nodes

[root@k8s-master1 ~]# vi /etc/systemd/system/multi-user.target.wants/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

3. Restart Docker so the new configuration takes effect.

systemctl daemon-reload
systemctl restart docker
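
After the restart, docker0 should have picked up the flannel-assigned subnet. A quick sanity check on each node (the address will match FLANNEL_SUBNET for that node):

ip addr show docker0 | grep inet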

4. Check the network status of all hosts in the cluster.

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem ls /opt/kubernetes/network/subnets
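
Every node should appear under /opt/kubernetes/network/subnets with its own /24. Optionally, the local routing table can be inspected as well; with the vxlan backend, routes to the other nodes' subnets point at flannel.1 (output differs per node):

ip route | grep flannel.1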

4. Deploy keepalived + haproxy for high availability
Deployment servers:
ha1 192.168.206.36
ha2 192.168.206.37

1. Install haproxy on both haproxy hosts

yum install -y haproxy

cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m
frontend  k8s-api
    bind        *:6443
    bind        *:443
    mode        tcp
    option      tcplog
    default_backend k8s-api
backend k8s-api
    mode        tcp
    option      tcplog
    option      tcp-check
    balance     roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master1 192.168.206.31:6443 check
    server master2 192.168.206.32:6443 check
    server master3 192.168.206.33:6443 check
EOF
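
Before starting the service, the configuration file can be syntax-checked with haproxy's -c flag:

haproxy -c -f /etc/haproxy/haproxy.cfg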

2. Start haproxy on both hosts

systemctl start haproxy
systemctl status haproxy
systemctl enable haproxy

3. Install keepalived on both haproxy hosts

yum install -y keepalived

cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_K8S
}

vrrp_instance VI_1 {
    state MASTER    # MASTER on ha1, BACKUP on ha2
    interface ens33
    virtual_router_id 51
    priority 100    # 100 on ha1 (MASTER), 50 on ha2 (BACKUP)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.206.30/24    # the VIP
    }
}
EOF

4. Start keepalived on both haproxy hosts

systemctl restart keepalived
systemctl status keepalived
systemctl enable keepalived

After startup, check that the VIP is present, or shut down the primary ha node to verify that the VIP fails over to the backup.
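
A minimal failover check, assuming ha1 currently holds the MASTER role:

# on ha1: the VIP should be bound to ens33
ip addr show ens33 | grep 192.168.206.30

# simulate a failure of the primary
systemctl stop keepalived

# on ha2, a few seconds later: the VIP should have moved here
ip addr show ens33 | grep 192.168.206.30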

Origin blog.51cto.com/14033037/2552447