Installing a highly available Kubernetes 1.19 cluster with kubeadm (external etcd)


With the control plane and etcd decoupled, the cluster carries less risk: losing a single master or a single etcd member has little impact, and an external etcd is easier to maintain and restore.

Cluster planning

| Host     | IP              | Roles            |
| -------- | --------------- | ---------------- |
| vip      | 192.168.100.240 | virtual IP (VIP) |
| master01 | 192.168.100.241 | etcd1, master    |
| master02 | 192.168.100.242 | etcd2, master    |
| node01   | 192.168.100.243 | etcd3, node      |
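
Hostname resolution is assumed throughout (the kubeadm configuration later refers to the node hostnames). If these names are not resolvable via DNS, a minimal /etc/hosts sketch for all three machines, following the plan above (an addition, not in the original article):

cat <<EOF >> /etc/hosts
192.168.100.241 master01
192.168.100.242 master02
192.168.100.243 node01
EOF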

Initialization (run on master01, master02, and node01)

## OS version
Linux node02 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

## System configuration
###### Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

###### Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

###### Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

###### Configure the yum and EPEL repositories
yum install wget telnet -y
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

###### Kernel parameters (/etc/sysctl.d/k8s.conf)
modprobe br_netfilter
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
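
modprobe only loads br_netfilter for the current boot. To keep it loaded across reboots, one option (an addition, not in the original article) is a modules-load.d entry:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF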

###### Enable IPVS
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Set up the Docker yum repository
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start Docker
yum install -y docker-ce-18.09.8 docker-ce-cli-18.09.8 containerd.io
systemctl enable docker && systemctl start docker

# Install IPVS support packages (and NFS utilities)
yum install -y nfs-utils ipset ipvsadm
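
Optionally, the Kubernetes documentation recommends running Docker with the systemd cgroup driver so kubelet and Docker agree on one; a sketch of such a daemon.json (an assumption, not part of the original article):

mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
systemctl daemon-reload && systemctl restart docker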

Installation steps

1. Install keepalived and create the VIP

1. Install keepalived
Run on both masters (master01 and master02):

yum install -y keepalived

2. Configure keepalived
On master01:
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
 
vrrp_instance VI_1 {
    state MASTER             # change to BACKUP on the standby node
    interface eth0           # primary network interface
    virtual_router_id 44     # virtual router ID, must be identical on master and backup
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111       # authentication password, must be identical on master and backup
    }
    virtual_ipaddress {      # virtual IP (VIP)
        192.168.100.240
    }
}

virtual_server 192.168.100.240 6443 {    # external virtual IP
    delay_loop 6             # real-server check interval, in seconds
    lb_algo rr               # scheduling algorithm, rr = round robin
    lb_kind DR               # LVS forwarding mode (DR)
    protocol TCP             # check real-server state over TCP

    real_server 192.168.100.241 6443 {   # first node
        weight 3             # node weight
        TCP_CHECK {          # health-check method
            connect_timeout 3      # connection timeout
            nb_get_retry 3         # number of retries
            delay_before_retry 3   # delay between retries, in seconds
        }
    }

    real_server 192.168.100.242 6443 {   # second node
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

On master02:
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
 
vrrp_instance VI_1 {
    state BACKUP             # BACKUP on the standby node
    interface eth0
    virtual_router_id 44     # virtual router ID, must be identical on master and backup
    priority 80              # lower priority than the MASTER
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111       # authentication password, must be identical on master and backup
    }
    virtual_ipaddress {      # virtual IP (VIP)
        192.168.100.240
    }
}

virtual_server 192.168.100.240 6443 {    # external virtual IP
    delay_loop 6             # real-server check interval, in seconds
    lb_algo rr               # scheduling algorithm, rr = round robin
    lb_kind DR               # LVS forwarding mode (DR)
    protocol TCP             # check real-server state over TCP

    real_server 192.168.100.241 6443 {   # first node
        weight 3             # node weight
        TCP_CHECK {          # health-check method
            connect_timeout 3      # connection timeout
            nb_get_retry 3         # number of retries
            delay_before_retry 3   # delay between retries, in seconds
        }
    }

    real_server 192.168.100.242 6443 {   # second node
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

3. Start the service

systemctl start keepalived
systemctl enable keepalived
systemctl status keepalived

4. Verify

[root@master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.240:6443 rr
# Only the VIP entry is shown for now; the rr real-server lines appear once the cluster is up and port 6443 is listening
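
To confirm that keepalived actually brought up the VIP, a quick check (assuming eth0 as in the configuration above):

# On master01 the VIP should be present; after a failover it moves to master02
ip addr show eth0 | grep 192.168.100.240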

2. Build a highly available etcd cluster

1. Install cfssl on master01

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

2. Install the etcd binaries

# Create the directory
mkdir -p /data/etcd/bin
# Download and unpack
cd /tmp
wget https://storage.googleapis.com/etcd/v3.3.25/etcd-v3.3.25-linux-amd64.tar.gz
tar zxf etcd-v3.3.25-linux-amd64.tar.gz
cd etcd-v3.3.25-linux-amd64
mv etcd etcdctl /data/etcd/bin/
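
A quick sanity check that the binary is in place (optional):

/data/etcd/bin/etcd --version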

3. Create the CA and certificates
etcd acts as the server and clients such as etcdctl (or the kube-apiserver) act as clients; the two sides communicate over HTTPS. Four kinds of certificates are involved:

  • CA certificate: self-signed authority used to sign all the other certificates
  • Server certificate: the certificate etcd presents to its clients
  • Client certificate: used by clients such as etcdctl
  • Peer certificate: used for node-to-node communication inside the etcd cluster

1) Create a directory

mkdir -p /data/etcd/ssl
cd /data/etcd/ssl

2) Create the CA certificate
vim ca-config.json

{
    "signing": {
        "default": {
            "expiry": "438000h"
        },
        "profiles": {
            "server": {
                "expiry": "438000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "client": {
                "expiry": "438000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "438000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

server auth means the client can use the CA to verify the certificate presented by the server.
client auth means the server can use the CA to verify the certificate presented by the client.
Create a certificate signing request ca-csr.json
vim ca-csr.json

{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}

Generate CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# ls ca*
# ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem

3) Generate client certificate
vim client.json

{
    "CN": "client",
    "key": {
        "algo": "ecdsa",
        "size": 256
    }
}

Generate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json  | cfssljson -bare client -
# ls *.pem
# ca-key.pem  ca.pem  client-key.pem  client.pem

4) Generate the server and peer certificates
vim etcd.json

{
    "CN": "etcd",
    "hosts": [
        "192.168.100.241",
        "192.168.100.242",
        "192.168.100.243"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "CN",
            "L": "BJ",
            "ST": "BJ"
        }
    ]
}

Generate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server etcd.json | cfssljson -bare server

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd.json | cfssljson -bare peer
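
Optionally, the generated server certificate can be inspected to confirm that the three node IPs ended up in the SAN list:

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"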

5) Synchronize the /data/etcd directory from master01 to master02 and node01

scp -r /data/etcd 192.168.100.242:/data/etcd
scp -r /data/etcd 192.168.100.243:/data/etcd
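
scp needs the /data directory to exist on the target hosts; if it does not, create it first (hypothetical one-liners, adjust to your environment):

ssh 192.168.100.242 "mkdir -p /data"
ssh 192.168.100.243 "mkdir -p /data"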

4. Create the systemd unit file
vim /usr/lib/systemd/system/etcd.service
The configuration differs on each of the three hosts; remove the inline comments before using it.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/etcd/
ExecStart=/data/etcd/bin/etcd \
  --name=etcd1 \      # change per host: etcd1 / etcd2 / etcd3
  --cert-file=/data/etcd/ssl/server.pem \
  --key-file=/data/etcd/ssl/server-key.pem \
  --peer-cert-file=/data/etcd/ssl/peer.pem \
  --peer-key-file=/data/etcd/ssl/peer-key.pem \
  --trusted-ca-file=/data/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/data/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.100.241:2380 \  # change to this host's IP
  --listen-peer-urls=https://192.168.100.241:2380 \  # change to this host's IP
  --listen-client-urls=https://192.168.100.241:2379 \  # change to this host's IP
  --advertise-client-urls=https://192.168.100.241:2379 \  # change to this host's IP
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd1=https://192.168.100.241:2380,etcd2=https://192.168.100.242:2380,etcd3=https://192.168.100.243:2380 \
  --initial-cluster-state=new \
  --data-dir=/data/etcd \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
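
For reference, a sketch of the flags that change on the other two hosts, following the cluster plan (everything else in the unit file is identical):

# master02 (etcd2)
  --name=etcd2 \
  --initial-advertise-peer-urls=https://192.168.100.242:2380 \
  --listen-peer-urls=https://192.168.100.242:2380 \
  --listen-client-urls=https://192.168.100.242:2379 \
  --advertise-client-urls=https://192.168.100.242:2379 \

# node01 (etcd3)
  --name=etcd3 \
  --initial-advertise-peer-urls=https://192.168.100.243:2380 \
  --listen-peer-urls=https://192.168.100.243:2380 \
  --listen-client-urls=https://192.168.100.243:2379 \
  --advertise-client-urls=https://192.168.100.243:2379 \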

5. Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

6. Verify the cluster

cd /data/etcd/ssl
# Check cluster health
../bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.241:2379" cluster-health
# List cluster members
../bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.241:2379" member list
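
etcdctl in etcd 3.3 defaults to the v2 API, while Kubernetes 1.19 stores its data through the v3 API, so it is also worth checking health with the v3 flags (same certificates, a sketch):

ETCDCTL_API=3 ../bin/etcdctl \
  --cacert=ca.pem --cert=server.pem --key=server-key.pem \
  --endpoints="https://192.168.100.241:2379,https://192.168.100.242:2379,https://192.168.100.243:2379" \
  endpoint health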

3. Install kubeadm, kubelet and kubectl

Install kubeadm and kubelet on all nodes. kubectl is optional; you can install it on every machine or only on master01.

1. Add domestic yum source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install the specified version

yum install -y kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2

3. On every node where kubelet is installed, enable kubelet to start at boot

systemctl enable kubelet 
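
A quick check that all nodes ended up with the same versions (optional):

kubeadm version -o short
kubelet --version
kubectl version --client --short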

4. Initialize the masters

1) Copy the CA certificate and client certificate generated when building the etcd cluster on master01 to the expected locations and rename them:

[root@master01] ~$ mkdir -p /etc/kubernetes/pki/etcd/

# CA certificate of the etcd cluster
[root@master01] ~$ cp /data/etcd/ssl/ca.pem /etc/kubernetes/pki/etcd/

# Client certificate of the etcd cluster, used by the apiserver to access etcd
[root@master01] ~$ cp /data/etcd/ssl/client.pem /etc/kubernetes/pki/apiserver-etcd-client.pem

# Client private key of the etcd cluster
[root@master01] ~$ cp /data/etcd/ssl/client-key.pem /etc/kubernetes/pki/apiserver-etcd-client-key.pem

# Verify the layout
[root@master01] ~$ tree /etc/kubernetes/pki/
/etc/kubernetes/pki/
├── apiserver-etcd-client-key.pem
├── apiserver-etcd-client.pem
└── etcd
    └── ca.pem

1 directory, 3 files

2) Create the init configuration file
Generate the default configuration:

kubeadm config print init-defaults > kubeadm-init.yaml

Then modify it as follows:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.241  # this host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01  # this host's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
#  local:
#    dataDir: /var/lib/etcd  # replaced by the external etcd cluster below
  external:
    endpoints:
    - https://192.168.100.241:2379
    - https://192.168.100.242:2379
    - https://192.168.100.243:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem  # CA certificate generated when building the etcd cluster
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.pem   # client certificate generated when building the etcd cluster
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client-key.pem  # client key generated when building the etcd cluster
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
controlPlaneEndpoint: 192.168.100.240  # the VIP address
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

3) Perform initialization

kubeadm init --config=kubeadm-init.yaml

4) Configure kubectl
To manage the cluster with kubectl, run the following on master01:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test that kubectl works. Note that master01 showing NotReady is normal at this point, because the flannel network plugin has not been deployed yet.

[root@master01] # kubectl get node
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   66s   v1.19.2

Add master02
1) First, scp the shared cluster CA certificates generated on master01 to the other master:

scp -r  /etc/kubernetes/pki/* 192.168.100.242:/etc/kubernetes/pki/

2) Copy the init configuration file to master02

scp kubeadm-init.yaml 192.168.100.242:/root/

3) Initialize master02
Edit the copied kubeadm-init.yaml according to the inline comments above, then run the same init command on master02.
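
For reference, a sketch of the fields that must differ on master02, following the cluster plan (the rest of the file stays the same):

localAPIEndpoint:
  advertiseAddress: 192.168.100.242   # master02's own IP
nodeRegistration:
  name: master02                      # master02's hostname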

kubeadm init --config=kubeadm-init.yaml

5. Join the worker node to the cluster

Generate the join command on master01:

kubeadm token create --print-join-command

Run the printed command on the node host:

kubeadm join 192.168.100.240:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:fb4e252253b55974edff65cb4765e9979f8785cd67a6ed41f87c83c6bcc3ac4a

6. Install the flannel and metrics-server add-ons

Omitted here; see the previous article.
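
For reference, flannel is typically applied from its upstream manifest; the URL below is an assumption (it may have moved), and the podSubnet 10.244.0.0/16 configured above matches flannel's default:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml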

7. Test the cluster

Node status

kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
uat-master01   Ready    master   46h    v1.19.2
uat-master02   Ready    master   45h    v1.19.2
uat-node01     Ready    <none>   46h    v1.19.2

Component status

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

Service account

kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         46h

Cluster information; note that the API server address here is the VIP we configured.

kubectl cluster-info
Kubernetes master is running at https://192.168.100.240:6443
KubeDNS is running at https://192.168.100.240:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.100.240:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
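
As a final smoke test (not part of the original article), a throw-away nginx deployment exposed through a NodePort confirms scheduling, pod networking, and kube-proxy's IPVS mode end to end:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx
# curl any node IP on the reported NodePort, then clean up
kubectl delete svc,deployment nginx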

Origin blog.csdn.net/lswzw/article/details/109027255