Kubernetes High Availability Cluster (multi-master, v1.15, the latest official version)

  1. Introduction

    Kubernetes has been running in our production environment for nearly a year and is now stable. We ran into many problems along the way, from building the system to migrating projects onto it. In our production setup, a multi-master Kubernetes control plane is made highly available by load balancing the masters with haproxy + keepalived. This article takes the time to summarize the build process, to help you stand up your own k8s cluster quickly.

    The following are screenshots of my production environment:

    image.png

    image.png

    Kubernetes iterates through versions very quickly. When I built our production environment the latest official release was v1.11; the latest official release is now v1.15, which is the version this article covers.


2. Kubernetes Overview

    Kubernetes is an open-source platform for automated deployment, scaling, and operation of containerized applications, based on Google's internal Borg scheduling and orchestration engine. It provides comprehensive cluster-management capabilities, including multi-level security and access control, multi-tenant application support, transparent service registration and service discovery, built-in load balancing, fault detection and self-healing, rolling service upgrades and online capacity expansion, a scalable automatic resource scheduler, and fine-grained resource quota management. Kubernetes also provides tooling that covers the whole lifecycle: development, deployment, testing, and operations monitoring. As one of the most important projects of the CNCF (Cloud Native Computing Foundation), its goal is not just to be a scheduling system: it provides a specification that lets you describe your cluster architecture and define the desired final state of your services, and Kubernetes automatically drives the cluster to that state and keeps it there.
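    As a tiny illustration of that declarative model (the names and image below are illustrative, not from this cluster), a Deployment manifest declares a desired state, here three replicas of an nginx Pod, and Kubernetes continuously reconciles the cluster toward it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo          # illustrative name
spec:
  replicas: 3               # desired state: keep three Pods running at all times
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
```

    If a node dies and takes a Pod with it, the controller manager notices the divergence from the declared state and schedules a replacement automatically.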


3. Kubernetes Architecture

image.png


    In this architecture diagram, the services can be divided into those running on the worker nodes and those forming the cluster-level control plane. Kubernetes nodes run the services required to host application containers, and all of them are controlled by the master. Every node runs Docker, which is responsible for downloading images and running containers.

    Kubernetes consists of the following core components:

  • etcd stores the state of the entire cluster;

  • apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration and discovery;

  • controller manager maintains the cluster state, handling fault detection, automatic scaling, rolling updates, and so on;

  • scheduler handles resource scheduling, placing Pods onto the appropriate machines according to the configured scheduling policies;

  • kubelet maintains container lifecycles and also manages volumes (CVI) and networking (CNI);

  • Container runtime manages images and actually runs Pods and containers (CRI);

  • kube-proxy provides in-cluster service discovery and load balancing for Services;

    Besides the core components, several add-ons are recommended:

  • kube-dns provides DNS for the entire cluster

  • Ingress Controller provides external access to services

  • Heapster provides resource monitoring

  • Dashboard provides a GUI

  • Federation provides clusters spanning availability zones

  • Fluentd-elasticsearch provides cluster log collection, storage, and querying


4. Build Process

    Now for today's hands-on part: the cluster build process.

4.1 Environment

Hostname   Specs   OS            IP address      Role
haproxy1   8C16G   ubuntu16.04   192.168.10.1    haproxy+keepalived  VIP: 192.168.10.10
haproxy2   8C16G   ubuntu16.04   192.168.10.2    haproxy+keepalived  VIP: 192.168.10.10
master1    8C16G   ubuntu16.04   192.168.10.3    control-plane node 1
master2    8C16G   ubuntu16.04   192.168.10.4    control-plane node 2
master3    8C16G   ubuntu16.04   192.168.10.5    control-plane node 3
node1      8C16G   ubuntu16.04   192.168.10.6    worker node 1
node2      8C16G   ubuntu16.04   192.168.10.7    worker node 2
node3      8C16G   ubuntu16.04   192.168.10.8    worker node 3


4.2 Environment Notes

    This article uses three masters and three worker nodes to build the Kubernetes cluster, plus two machines running haproxy + keepalived to load-balance the masters. This keeps the control plane, and therefore the whole Kubernetes cluster, highly available. The official requirements are at least 2 CPUs and 2 GB of RAM per machine, and Ubuntu 16.04 or later.


4.3 Build Steps

4.3.1 Basic setup

    Edit the hosts file; do this on all 8 machines

root@haproxy1:~# cat /etc/hosts
192.168.10.1     haproxy1
192.168.10.2     haproxy2
192.168.10.3     master1
192.168.10.4     master2
192.168.10.5     master3
192.168.10.6     node1
192.168.10.7     node2
192.168.10.8     node3
192.168.10.10    kubernetes.haproxy.com
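
    The same entries have to land on all eight machines. A small sketch that generates the snippet once and prints the per-host copy commands (it assumes passwordless root ssh to every host; drop the echo to actually execute):

```shell
# Generate the hosts entries once, then push them to every machine.
SNIPPET="${SNIPPET:-/tmp/hosts.snippet}"
cat > "$SNIPPET" <<'EOF'
192.168.10.1     haproxy1
192.168.10.2     haproxy2
192.168.10.3     master1
192.168.10.4     master2
192.168.10.5     master3
192.168.10.6     node1
192.168.10.7     node2
192.168.10.8     node3
192.168.10.10    kubernetes.haproxy.com
EOF
for h in haproxy1 haproxy2 master1 master2 master3 node1 node2 node3; do
  # Print the command for review; remove the echo to run it for real.
  echo "ssh root@$h 'cat >> /etc/hosts' < $SNIPPET"
done
```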

4.3.2 Building haproxy + keepalived

    Install haproxy

root@haproxy1:/data# wget https://github.com/haproxy/haproxy/archive/v2.0.0.tar.gz
root@haproxy1:/data# tar -xf v2.0.0.tar.gz
root@haproxy1:/data# cd haproxy-2.0.0/
root@haproxy1:/data/haproxy-2.0.0# make TARGET=linux-glibc
root@haproxy1:/data/haproxy-2.0.0# make install PREFIX=/data/haproxy
root@haproxy1:/data/haproxy# mkdir conf
root@haproxy1:/data/haproxy# vim  conf/haproxy.cfg
global
  log 127.0.0.1 local0 err
  maxconn 50000
  user haproxy
  group haproxy
  daemon
  nbproc 1
  pidfile haproxy.pid
defaults
  mode tcp
  log 127.0.0.1 local0 err
  maxconn 50000
  retries 3
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  timeout check 2s
listen admin_stats
  mode http
  bind 0.0.0.0:1080
  log 127.0.0.1 local0 err
  stats refresh 30s
  stats uri     /haproxy-status
  stats realm   Haproxy\ Statistics
  stats auth    will:will
  stats hide-version
  stats admin if TRUE
frontend k8s
  bind 0.0.0.0:8443
  mode tcp
  default_backend k8s
backend k8s
  mode tcp
  balance roundrobin
  server master1 192.168.10.3:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master2 192.168.10.4:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master3 192.168.10.5:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  
root@haproxy1:/data/haproxy# id -u haproxy &> /dev/null || useradd -s /usr/sbin/nologin -r haproxy
root@haproxy1:/data/haproxy# mkdir /usr/share/doc/haproxy
root@haproxy1:/data/haproxy# wget -qO - https://raw.githubusercontent.com/haproxy/haproxy/master/doc/configuration.txt | gzip -c > /usr/share/doc/haproxy/configuration.txt.gz
root@haproxy1:/data/haproxy# vim /etc/default/haproxy 
# Defaults file for HAProxy
#
# This is sourced by both, the initscript and the systemd unit file, so do not
# treat it as a shell script fragment.

# Change the config file location if needed
#CONFIG="/etc/haproxy/haproxy.cfg"

# Add extra flags here, see haproxy(1) for a few options
#EXTRAOPTS="-de -m 16"

root@haproxy1:/data# vim /lib/systemd/system/haproxy.service 
[Unit]
Description=HAProxy Load Balancer
Documentation=man:haproxy(1)
Documentation=file:/usr/share/doc/haproxy/configuration.txt.gz
After=network.target syslog.service
Wants=syslog.service

[Service]
Environment=CONFIG=/data/haproxy/conf/haproxy.cfg
EnvironmentFile=-/etc/default/haproxy
ExecStartPre=/data/haproxy/sbin/haproxy -f ${CONFIG} -c -q
ExecStart=/data/haproxy/sbin/haproxy -W  -f ${CONFIG} -p /data/haproxy/conf/haproxy.pid $EXTRAOPTS
ExecReload=/data/haproxy/sbin/haproxy -c -f ${CONFIG}
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
Type=forking

[Install]
WantedBy=multi-user.target

root@haproxy1:/data/haproxy# systemctl daemon-reload 
root@haproxy1:/data/haproxy# systemctl start haproxy
root@haproxy1:/data/haproxy# systemctl status haproxy

    Install keepalived

root@haproxy1:/data# wget https://www.keepalived.org/software/keepalived-2.0.16.tar.gz
root@haproxy1:/data# tar -xf keepalived-2.0.16.tar.gz
root@haproxy1:/data# cd keepalived-2.0.16/
root@haproxy1:/data/keepalived-2.0.16# ./configure --prefix=/data/keepalived
root@haproxy1:/data/keepalived-2.0.16# make && make install
root@haproxy1:/data/keepalived# mkdir conf
root@haproxy1:/data/keepalived# vim conf/keepalived.conf
! Configuration File for keepalived
global_defs {
  notification_email {
    root@localhost
  }
 
  notification_email_from keepalived@localhost
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id haproxy1
}
 
vrrp_script chk_haproxy {                                   # HAProxy health-check script
  script "/data/keepalived/check_haproxy.sh"
  interval 2
  weight 2
}
 
vrrp_instance VI_1 {
  state MASTER
  interface ens160
  virtual_router_id 1
  priority 100
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  track_script {
    chk_haproxy
  }
  virtual_ipaddress {
    192.168.10.10/24
  }
}

root@haproxy1:/data/keepalived# vim /etc/default/keepalived
# Options to pass to keepalived

# DAEMON_ARGS are appended to the keepalived command-line
DAEMON_ARGS=""

root@haproxy1:/data/keepalived# vim /lib/systemd/system/keepalived.service
[Unit]
Description=Keepalive Daemon (LVS and VRRP)
After=network-online.target
Wants=network-online.target
# Only start if there is a configuration file
ConditionFileNotEmpty=/data/keepalived/conf/keepalived.conf

[Service]
Type=forking
KillMode=process
Environment=CONFIG=/data/keepalived/conf/keepalived.conf
# Read configuration variable file if it is present
EnvironmentFile=-/etc/default/keepalived
ExecStart=/data/keepalived/sbin/keepalived -f ${CONFIG} -p /data/keepalived/conf/keepalived.pid $DAEMON_ARGS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

root@haproxy1:/data/keepalived# systemctl daemon-reload
root@haproxy1:/data/keepalived# systemctl start keepalived.service
root@haproxy1:/data/keepalived# vim /data/keepalived/check_haproxy.sh
#!/bin/bash
# Restart haproxy if it has died; if it still will not start,
# stop keepalived so the VIP fails over to the other node.
if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
    systemctl start haproxy.service
    sleep 3
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
        systemctl stop keepalived.service
    fi
fi

root@haproxy1:/data/keepalived# chmod +x /data/keepalived/check_haproxy.sh

    Install haproxy and keepalived on haproxy2 in the same way; in its keepalived.conf, set router_id to haproxy2, state to BACKUP, and a lower priority (e.g. 90).


4.3.3 Building the Kubernetes cluster

    Basic setup

    Disable swap; this must be done on all 6 machines of the Kubernetes cluster

root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15727           8         190       15638
Swap:           979           0         979
root@master1:~# swapoff -a
root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15726           8         191       15638
Swap:             0           0           0
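
    Note that `swapoff -a` only lasts until the next reboot; to make the change permanent, also comment out the swap entries in /etc/fstab. A sketch, shown here against a demo copy so it is safe to paste; on the real nodes set FSTAB=/etc/fstab (and skip the printf that seeds the demo file):

```shell
# Disable swap now (no-op if already off or unprivileged)...
swapoff -a 2>/dev/null || true
# ...and comment out swap lines so the setting survives reboots.
# FSTAB defaults to a demo file here; use FSTAB=/etc/fstab on the actual nodes.
FSTAB="${FSTAB:-/tmp/fstab.demo}"
printf 'UUID=abcd / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > "$FSTAB"
# Prefix any non-commented line containing a swap field with '#'.
sed -i.bak -E '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$FSTAB"
```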

    Install Docker

    Required on all 6 machines

# Allow apt to use a repository over HTTPS
root@master1:~# apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
root@master1:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@master1:~# apt-key fingerprint 0EBFCD88
pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <[email protected]>
sub   4096R/F273FCD8 2017-02-22

# Add the Docker apt repository
root@master1:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker
root@master1:~# apt-get update
root@master1:~# apt-get install -y docker-ce docker-ce-cli containerd.io
root@master1:~# docker --version 
Docker version 18.09.6, build 481bc77
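
    Optionally, the kubeadm documentation recommends running Docker with the systemd cgroup driver so that kubelet and Docker agree on cgroup management. A sketch of the daemon.json (written to a demo path here; on the real nodes write /etc/docker/daemon.json and then systemctl restart docker):

```shell
# Write Docker daemon options; DST defaults to a demo path,
# use DST=/etc/docker/daemon.json on the real nodes.
DST="${DST:-/tmp/daemon.json.demo}"
cat > "$DST" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
```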


    Install the Kubernetes components

# Install kubeadm, kubelet and kubectl on all 6 machines
root@master1:~# apt-get update
root@master1:~# apt-get install -y apt-transport-https curl
root@master1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
root@master1:~# apt-get update
root@master1:~# apt-get install -y kubelet kubeadm kubectl
root@master1:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

    Create the cluster

    Control-plane node 1

root@master1:~# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "kubernetes.haproxy.com:8443"
networking:
    podSubnet: "10.244.0.0/16"
    
root@master1:~# kubeadm init --config=kubeadm-config.yaml --upload-certs

    The output after completion looks like this

image.png

root@master1:~# mkdir -p $HOME/.kube
root@master1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master1:~# chown $(id -u):$(id -g) $HOME/.kube/config
# Install the network add-on; flannel is used here
root@master1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

    Check the result

root@master1:~# kubectl get pod -n kube-system -w

image.png


    Control-plane node 2

root@master2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5     --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master2:~# mkdir -p $HOME/.kube
root@master2:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master2:~# chown $(id -u):$(id -g) $HOME/.kube/config

    Check the result

root@master2:~# kubectl get nodes

image.png

    Control-plane node 3

root@master3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5     --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master3:~# mkdir -p $HOME/.kube
root@master3:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master3:~# chown $(id -u):$(id -g) $HOME/.kube/config

    Check the result

root@master3:~# kubectl get nodes

image.png

    Add the worker nodes

root@node1:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
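
    The three joins above are identical except for the host, so they can be scripted (a sketch assuming passwordless root ssh to the workers; the token and hash are the ones printed by kubeadm init earlier):

```shell
# Build the join command once, then print the per-node ssh invocations for review.
JOIN_CMD='kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1 --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5'
CMDS=()
for n in node1 node2 node3; do
  CMDS+=("ssh root@$n \"$JOIN_CMD\"")
done
printf '%s\n' "${CMDS[@]}"   # review the commands, then run each one
```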

    The whole cluster is now built; check the result

    Run on any of the masters

root@master1:~# kubectl get pods --all-namespaces

image.png

root@master1:~# kubectl get nodes

image.png

At this point, the whole highly available cluster is built


5. References

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin

https://www.kubernetes.org.cn/docs


Origin: blog.51cto.com/13053917/2418747