Kubernetes 1.18.x High-Availability Binary Deployment


Tags (space-separated): kubernetes series


  • Part 1: Overview of Kubernetes high availability
  • Part 2: Kubernetes high-availability deployment

Part 1: Overview of Kubernetes High Availability

1.1 Overview of Kubernetes high availability

High-availability architecture (scaling out to a multi-Master architecture)

As a container cluster system, Kubernetes provides application-level high availability out of the box: health checks plus restart policies give Pods self-healing, the scheduler spreads Pods across Nodes while maintaining the desired replica count, and when a Node fails its Pods are automatically brought up on other Nodes.

For the Kubernetes cluster itself, high availability involves two further layers: the Etcd database and the Kubernetes Master components. We have already built a three-node Etcd cluster for high availability, so this section explains and carries out HA for the Master nodes.

The Master node acts as the cluster's control center, continuously talking to the Kubelet on every worker node to keep the whole cluster in a healthy working state. If the Master node fails, no cluster management is possible via kubectl or the API.

The Master node runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. Of these, kube-controller-manager and kube-scheduler already achieve high availability through their built-in leader-election mechanism, so Master HA mainly concerns kube-apiserver. Since that component serves an HTTP API, making it highly available works much like for a web server: put a load balancer in front of it, and it can be scaled out horizontally.
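
As a quick way to observe this, you can inspect the leader-election record each of these components keeps in the kube-system namespace (a minimal check, assuming the default endpoints-based resource lock used by 1.18):

# Show which Master currently holds the kube-controller-manager lock;
# the holderIdentity in the control-plane.alpha.kubernetes.io/leader
# annotation names the elected leader.
kubectl -n kube-system get endpoints kube-controller-manager -o yaml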

1.2 Deployment architecture diagram

(figure: deployment architecture)

1.3 Multi-Master architecture diagram

(figure: multi-Master architecture)


Part 2: Kubernetes High-Availability Deployment

For the preceding steps, see:
https://blog.51cto.com/flyfish225/2504511

Deploy Docker on the node04.flyfish node.

2.1 Unpack the binary package

tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin


2.2 Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

2.3 Create the configuration file

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

registry-mirrors: the Alibaba Cloud registry mirror (image pull accelerator)


2.4 Start Docker and enable it at boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
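
To confirm Docker is up and picked up the registry mirror (a quick check, not part of the original steps):

systemctl status docker --no-pager            # should report active (running)
docker info | grep -A 1 'Registry Mirrors'    # should list the Aliyun mirror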



Deploy the Master2 Node (192.168.100.14)

Master2 is configured exactly the same way as the already-deployed Master1, so we only need to copy all of Master1's K8s files over, then change the server IP and hostname and start the services.

1. Create the etcd certificate directory

Create the etcd certificate directory on Master2:

mkdir -p /opt/etcd/ssl


2. Copy files (run on Master1)

Copy all the K8s files and etcd certificates from Master1 to Master2:

scp -r /opt/kubernetes [email protected]:/opt
scp -r /opt/cni/ [email protected]:/opt
scp -r /opt/etcd/ssl [email protected]:/opt/etcd
scp /usr/lib/systemd/system/kube* [email protected]:/usr/lib/systemd/system
scp /usr/bin/kubectl  [email protected]:/usr/bin


3. Delete certificate files

Delete the kubelet certificate and kubeconfig file (they were issued for Master1's identity; new ones are generated when the kubelet on this host registers):

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*



4. Change the IP and hostname in the config files

Change the apiserver, kubelet and kube-proxy config files to use the local IP and hostname:

vim /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.100.14 \
--advertise-address=192.168.100.14 \
...

vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node04.flyfish

vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node04.flyfish
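
If you prefer not to edit by hand, the same changes can be scripted (a sketch, assuming Master1's files still carry 192.168.100.11 and the node01.flyfish hostname; verify the results before starting the services):

# Run on Master2:
sed -i 's#192.168.100.11#192.168.100.14#g' /opt/kubernetes/cfg/kube-apiserver.conf
sed -i 's#node01.flyfish#node04.flyfish#g' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml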



5. Start the services and enable them at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy



6. Check the component status

kubectl get cs



7. Approve the kubelet certificate request

Approve the CSR on the node01.flyfish node:

kubectl get csr

kubectl certificate approve node-csr-fyeyjxpS4JMpC2QvfmLOyeBbYUiMoYTSTGQETWVlqD4
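
If more than one CSR is pending, they can be approved in a single pass (a convenience sketch rather than part of the original steps):

# Approve every outstanding kubelet CSR at once.
kubectl get csr -o name | xargs -r kubectl certificate approve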



kubectl get node



Part 3: Deploy the Nginx Load-Balancing Servers

The kube-apiserver HA architecture:



Deploy nginx and keepalived on node05.flyfish and node07.flyfish.

Note: VMware Harbor is already deployed on node06.flyfish.

 yum install epel-release -y
 yum install nginx keepalived -y

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two Masters' apiserver components
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.100.11:6443;   # Master1 APISERVER IP:PORT
       server 192.168.100.14:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
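
Before starting nginx, validate the configuration. The stream block depends on nginx's stream module, which EPEL ships as a separate package; if the check below rejects the "stream" directive, installing the module should fix it (package name assumed from EPEL):

nginx -t                          # should report syntax ok / test successful
yum install nginx-mod-stream -y   # only needed if the stream directive is unknown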

3. keepalived configuration file (Nginx master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
    priority 100    # priority; the backup server is set to 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP (VIP)
    virtual_ipaddress {
        192.168.100.100/24
    }
    track_script {
        check_nginx
    }
}
EOF

vrrp_script: specifies the script that checks nginx's health (keepalived decides whether to fail over based on nginx's state)

virtual_ipaddress: the virtual IP (VIP)

The nginx health-check script:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

4. keepalived configuration file (Nginx backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP router ID; must match the master
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100/24
    }
    track_script {
        check_nginx
    }
}
EOF

The nginx health-check script referenced above (identical to the one on the master):

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived decides whether to fail over based on the script's exit code (0 means healthy, non-zero means unhealthy).
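
The ps-based script only checks that an nginx process exists. A stricter variant (a sketch, assuming curl is installed; it relies on unauthenticated access to /version, which the test further below also depends on) probes the proxied apiserver port directly:

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
# Exit 0 only if nginx answers on the apiserver port it proxies (6443).
if curl -sk --max-time 3 https://127.0.0.1:6443/version >/dev/null; then
    exit 0
else
    exit 1
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh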

5. Start the services and enable them at boot

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived



6. Check keepalived's working state

ip addr

node05.flyfish now holds the virtual VIP.



7. Nginx+Keepalived high-availability test

Stop Nginx on the master node and check whether the VIP floats to the backup server.

Kill nginx on node05.flyfish:

pkill nginx 

Check whether the floating IP has moved to the node07.flyfish node:


As you can see, the floating VIP has moved to the node07.flyfish host.



From any k8s node, check that kube-apiserver is reachable through the VIP:

curl -k https://192.168.100.100:6443/version
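
Failover at the apiserver layer can be tested the same way (an optional check, not in the original post): stop kube-apiserver on one Master and the VIP should still answer, because nginx retries the surviving backend.

# On node01.flyfish (Master1):
systemctl stop kube-apiserver

# From any node; nginx now routes to Master2:
curl -k https://192.168.100.100:6443/version

# Restore Master1 afterwards:
systemctl start kube-apiserver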



Check the nginx access log:


8. Point all Worker Nodes at the LB VIP

Although we have added Master2 and a load balancer, we scaled out from a single-Master architecture, so all Node components still connect to Master1. Unless they are switched to the VIP behind the load balancer, the Master remains a single point of failure.

So the next step is to change all the Node components' config files, replacing the original 192.168.100.11 with 192.168.100.100 (the VIP):


Run the following on every node:

sed -i 's#192.168.100.11:6443#192.168.100.100:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
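
To confirm the substitution took effect (a quick sanity check):

# Every config under /opt/kubernetes/cfg should now reference the VIP.
grep -r '192.168.100.100:6443' /opt/kubernetes/cfg/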

kubectl get node 


This completes the configuration of the multi-Master k8s cluster.


Reprinted from: blog.51cto.com/flyfish225/2575683