[Kubernetes Cluster Series 05] Kubernetes Master High Availability

Using Alibaba Cloud SLB for master node high availability

Our servers run on Alibaba Cloud. Back in 2017, VPC networks supported HAVIP, so haproxy + keepalived could be used. This year Alibaba Cloud appears to have turned that feature off (it now has to be enabled on request through your account manager), so within a VPC the only option is SLB.

Listener configuration

Use layer-4 (TCP) load balancing rather than layer 7, so the certificate handling step is skipped on the load balancer. Note that health checks cannot be disabled for layer-4 TCP listeners.

Frontend port    Backend port    Health check port
80               8080            8080
443              6443            6443

Configure hosts

ip  fqdn
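
For example, a minimal /etc/hosts sketch that maps the apiserver FQDN to the SLB address (both 192.168.16.200 and k8s-api.example.com are hypothetical placeholders, not values from this environment):

echo "192.168.16.200  k8s-api.example.com" >> /etc/hosts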

Note: Alibaba Cloud SLB is a layer-4 TCP load balancer and does not support a backend ECS instance acting both as a Real Server and as a client sending requests to the load balancer it belongs to. The returned packets are forwarded only inside the cloud server and never pass through the load balancer, so accessing the load balancer's service address from a backend ECS instance does not work. In practice this means that if you use Alibaba Cloud SLB, you cannot use the SLB address from an apiserver node itself (for example, installing kubectl on an apiserver node and pointing it at the SLB address), because that would create a loop. The simple workaround is to use two additional nodes as HA instances and add those two instances to the SLB backend server group.

Implementation with haproxy + keepalived

Install and configure haproxy

Download

wget http://www.haproxy.org/download/1.8/src/haproxy-1.8.9.tar.gz
tar -xf haproxy-1.8.9.tar.gz -C /usr/local/src/

Compile and install haproxy

yum install gcc gcc-c++ autoconf automake -y
cd /usr/local/src/haproxy-1.8.9/
make TARGET=linux2628 ARCH=x86_64 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy

> /usr/local/haproxy/sbin/haproxy -v
HA-Proxy version 1.8.9-83616ec 2018/05/18
Copyright 2000-2018 Willy Tarreau <[email protected]>

Create the required directories and a default configuration file

mkdir /etc/haproxy

cat > /etc/sysconfig/haproxy <<EOF
# Add extra options to the haproxy daemon here. This can be useful for
# specifying multiple configuration files with multiple -f options.
# See haproxy(1) for a complete list of options.
OPTIONS=""
EOF

systemd unit file for the service

cat > /usr/lib/systemd/system/haproxy.service <<'EOF'
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStart=/usr/local/haproxy/sbin/haproxy  -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed

[Install]
WantedBy=multi-user.target
EOF

haproxy configuration

cat > /etc/haproxy/haproxy.cfg <<EOF
global
    maxconn   85535
    #log 127.0.0.1  local3 warning
    log 127.0.0.1  local0 info
    tune.comp.maxlevel 7   # compression level
    maxcompcpuusage 35     # limit compression CPU usage to 35%
defaults
    mode      http
    log       global
    option    dontlognull
    option    http-server-close
    option    forwardfor except 127.0.0.0/8
    option    redispatch
    retries   3
    timeout http-request    30s
    #timeout queue           1m
    timeout connect         60s
    timeout client          1m
    timeout server          3m
    maxconn                 85535
listen stats
  bind    *:9000
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   60s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin
  stats   admin     if TRUE
frontend k8s-api-https
    bind 0.0.0.0:443
    mode tcp
    option tcplog
    maxconn                 85535
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend k8s-api-https
backend k8s-api-https
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 500 maxqueue 256 weight 100
    server k8s-api-https-1 192.168.16.235:6443 check
    server k8s-api-https-2 192.168.16.236:6443 check
    server k8s-api-https-3 192.168.16.237:6443 check
frontend k8s-api-http
    bind 0.0.0.0:80
    mode tcp
    maxconn  85535
    option tcplog
    default_backend k8s-api-http
backend k8s-api-http
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 5000 maxqueue 256 weight 100
    server k8s-api-http-1 192.168.16.235:8080 check
    server k8s-api-http-2 192.168.16.236:8080 check
    server k8s-api-http-3 192.168.16.237:8080 check
EOF
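
Before starting the service, it may be worth validating the configuration with haproxy's check mode, which parses the file and reports errors without actually starting the proxy:

/usr/local/haproxy/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg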

haproxy logging
By default HAProxy does not write log files itself; it relies on the Linux rsyslog daemon to get its logs onto disk.

Add the following lines to the global and defaults sections of haproxy.cfg:

global
    ......
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning
    ......

defaults
    ......
    log global
    ......

The log lines in global send info-level and above messages to rsyslog's local0 facility, and warning-level and above messages to local1. At the info level every request is logged; in production it is recommended to raise the level to notice.

Add an rsyslog configuration for the haproxy logs in /etc/rsyslog.d/haproxy.conf:

mkdir /var/log/haproxy

cat > /etc/rsyslog.d/haproxy.conf << EOF
\$ModLoad imudp
\$UDPServerRun 514
local0.*     /var/log/haproxy/haproxy.log
local1.*     /var/log/haproxy/haproxy_warn.log
EOF

Modify the rsyslog startup options in /etc/sysconfig/rsyslog:

SYSLOGD_OPTIONS="-c 2 -r -m 0"

Restart rsyslog:

systemctl restart rsyslog
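
To confirm the logging chain works end to end, one quick check (not part of the original steps) is to send a test message to the local0 facility with logger and verify that it reaches the file:

logger -p local0.info "haproxy rsyslog test"
tail -n 5 /var/log/haproxy/haproxy.log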

Start haproxy

systemctl daemon-reload
systemctl start haproxy
systemctl enable haproxy
systemctl status haproxy

We can now monitor haproxy's runtime status through the stats page on port 9000 configured above (192.168.16.245:9000/stats).
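
The same page can also be fetched from the command line, using the admin:admin credentials configured above:

curl -u admin:admin http://192.168.16.245:9000/stats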

Install and configure the other node in the same way.

Install and configure keepalived

IP               Role
192.168.16.245   Master
192.168.16.246   Backup
192.168.16.247   VIP

Download

wget http://www.keepalived.org/software/keepalived-1.4.4.tar.gz
tar -xf keepalived-1.4.4.tar.gz -C /usr/local/src/

Compile and install keepalived

yum install libnl-devel libnfnetlink-devel openssl-devel -y

cd /usr/local/src/keepalived-1.4.4/
./configure --prefix=/usr/local/keepalived
make -j 4 && make install
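
As with haproxy above, a quick version check confirms the build:

/usr/local/keepalived/sbin/keepalived -v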

Register as a system service
After installation, /usr/lib/systemd/system/keepalived.service is created automatically (it is generated from the following file in the source tree):

/usr/local/src/keepalived-1.4.4/keepalived/keepalived.service

Configure keepalived

ln -sf /usr/local/keepalived/etc/keepalived/ /etc/keepalived
cd /etc/keepalived
mv keepalived.conf keepalived.conf.default

Master

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   notification_email {
   }
   router_id kube_api
}

vrrp_script check_haproxy {
    # health check for the local haproxy
    script "/etc/keepalived/chk_haproxy_master.sh"
    interval 3
    weight 5
}

vrrp_instance haproxy-vip {
    # use unicast (the default is multicast)
    unicast_src_ip 192.168.16.245
    unicast_peer {
        192.168.16.246
    }
    # initial state
    state MASTER
    # interface the virtual IP is bound to
    interface ens192
    # must match the virtual_router_id on the Backup
    virtual_router_id 51
    # base priority: higher than the Backup's, but not by so much that the health-check weight can no longer affect the election
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # virtual IP address
        192.168.16.247
    }
    track_script {
        check_haproxy
    }
}
EOF

Backup

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
   notification_email {
   }
   router_id kube_api
}

vrrp_script check_haproxy {
    # health check for the local haproxy
    script "/etc/keepalived/chk_haproxy_master.sh"
    interval 3
    weight 5
}

vrrp_instance haproxy-vip {
    # use unicast (the default is multicast)
    unicast_src_ip 192.168.16.246
    unicast_peer {
        192.168.16.245
    }
    # initial state
    state BACKUP
    # interface the virtual IP is bound to
    interface ens192
    # must match the virtual_router_id on the Master
    virtual_router_id 51
    # base priority: lower than the Master's, so the Master wins the initial election
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # virtual IP address
        192.168.16.247
    }
    track_script {
        check_haproxy
    }
}
EOF

haproxy check script

cat > /etc/keepalived/chk_haproxy_master.sh <<'EOF'
#!/bin/bash
# if haproxy is not running, try to restart it; if that fails, stop keepalived so the VIP can move away
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
   systemctl start haproxy
   sleep 3

   if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
        systemctl stop keepalived
   fi
fi
EOF

chmod +x /etc/keepalived/chk_haproxy_master.sh
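
A quick sanity check for the script (a sketch: with haproxy stopped, running the script should bring haproxy back up; if haproxy cannot be restarted, the script stops keepalived instead):

systemctl stop haproxy
/etc/keepalived/chk_haproxy_master.sh
systemctl status haproxy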

Start the keepalived service

systemctl enable keepalived
systemctl start keepalived

2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:98:2e:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.245/24 brd 192.168.16.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 192.168.16.247/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::822:4c72:993b:a056/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::6450:5694:d6c3:b30e/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

Install the other node in the same way.

Finally, verify that the VIP fails over correctly; a minimal test is sketched below.
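
A minimal failover test, using the addresses configured above (192.168.16.245 Master, 192.168.16.246 Backup, 192.168.16.247 VIP); run each command on the node noted in its comment:

# on the Master: stop keepalived to simulate a failure
systemctl stop keepalived

# on the Backup: the VIP should now be bound to ens192
ip addr show ens192 | grep 192.168.16.247

# on the Master: restore the service; with its higher priority it preempts and takes the VIP back
systemctl start keepalived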

High availability for kube-controller-manager and kube-scheduler

Kubernetes' control-plane services include kube-scheduler and kube-controller-manager. Both use an active/standby (one leader, multiple followers) high-availability scheme: at any given moment only one instance of each service is allowed to perform actual work. Kubernetes implements a simple leader-election mechanism for the scheduler and controller-manager on top of etcd. When kube-scheduler and kube-controller-manager are started with the leader-elect flag, they first try to acquire the leader role and only execute their business logic once they hold it. Each of them creates an endpoint named kube-scheduler or kube-controller-manager (persisted in etcd via the apiserver); the endpoint records the current leader and the time of the last renewal. The leader refreshes this record periodically to maintain its leadership, while every standby instance periodically checks it: if the record has not been renewed within the expected window, a standby tries to update it and promote itself to leader. The scheduler instances and controller-manager instances never communicate with each other directly; etcd's strong consistency guarantees that the leader is globally unique even under distributed, highly concurrent conditions.

When the leader fails, the other instances try to promote themselves; if several of them update the endpoint at the same time, etcd guarantees that only one update request succeeds. Through this mechanism the scheduler and controller-manager can re-elect a leader quickly after the current leader goes down, so the services recover from failures promptly. Network failures have little impact on the election: because the scheduler and controller-manager elect through etcd, any host that can still reach etcd keeps following the same logic, and even if the cluster is partitioned, etcd guarantees that only one instance holds the leader role at any time.
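
To see which instance currently holds leadership, the leader record can be inspected through the annotation on the corresponding Endpoints objects in kube-system. A quick check (this assumes the endpoint-based election used by this generation of Kubernetes; newer releases may use Lease objects instead):

# the holderIdentity field names the current leader
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity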

Reprinted from www.cnblogs.com/knmax/p/9213040.html