K8S 1.21.3 High-Availability Deployment

k8s installation packages
https://github.com/gghuogg/k8s-Installation-package

https://gitee.com/gaohaixiang192/k8s


System version
[root@k8s-node2 ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 


1. Disable firewalld and SELinux (all hosts)
vi /etc/selinux/config          # set SELINUX=disabled
systemctl stop firewalld
systemctl disable firewalld
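
A minimal sketch of the SELinux change, assuming the stock config still has SELINUX=enforcing (not part of the original post):
setenforce 0                                                          # takes effect immediately, no reboot needed
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persists across reboots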

2. Configure name resolution in /etc/hosts (all hosts)
vim /etc/hosts
192.168.73.100 k8s-node1
192.168.73.101 k8s-node2
192.168.73.102 k8s-node3
192.168.73.103 node1
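
If the hostnames do not already match these entries, they can be set to match; a hedged sketch (run the corresponding line on each host):
hostnamectl set-hostname k8s-node1   # on 192.168.73.100
hostnamectl set-hostname k8s-node2   # on 192.168.73.101
hostnamectl set-hostname k8s-node3   # on 192.168.73.102
hostnamectl set-hostname node1       # on 192.168.73.103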


3. Add the kernel parameter file /etc/sysctl.d/k8s.conf (all hosts)
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

4. Apply the parameters (all hosts)
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
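
br_netfilter will not load by itself after a reboot; a small sketch to make it persistent (the file name k8s.conf is illustrative):
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF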


5. Disable swap (all hosts)
swapoff -a
sysctl -p /etc/sysctl.d/k8s.conf
Comment out the swap entry in /etc/fstab (a sed sketch follows below), then remount:
mount -a
echo "KUBELET_EXTRA_ARGS=--fail-swap-on=false" > /etc/sysconfig/kubelet

6. Install Docker and the k8s components (all hosts; the local .rpm files below come from the installation-package repositories linked at the top)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

yum -y remove libselinux-python libselinux-utils
yum -y install libsepol-2.5-10.el7.x86_64.rpm
yum -y install libselinux-2.5-15.el7.x86_64.rpm
yum -y install libselinux-python-2.5-15.el7.x86_64.rpm libselinux-utils-2.5-15.el7.x86_64.rpm
yum -y install libsemanage-2.5-14.el7.x86_64.rpm libsemanage-python-2.5-14.el7.x86_64.rpm
yum -y install setools-libs-3.3.8-4.el7.x86_64.rpm socat-1.7.3.2-2.el7.x86_64.rpm yum-utils-1.1.31-54.el7_8.noarch.rpm
yum -y install policycoreutils-2.5-34.el7.x86_64.rpm policycoreutils-python-2.5-34.el7.x86_64.rpm
yum -y install python-IPy-0.75-6.el7.noarch.rpm selinux-policy-3.13.1-268.el7_9.2.noarch.rpm selinux-policy-targeted-3.13.1-268.el7_9.2.noarch.rpm
yum -y install device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64.rpm
yum -y install checkpolicy-2.5-8.el7.x86_64.rpm conntrack-tools-1.4.4-7.el7.x86_64.rpm
yum -y install audit-2.8.5-4.el7.x86_64.rpm audit-libs-2.8.5-4.el7.x86_64.rpm audit-libs-python-2.8.5-4.el7.x86_64.rpm
yum -y install container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
yum -y install docker-ce-18.06.3.ce-3.el7.x86_64.rpm
docker -v
yum -y install 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm
yum -y install db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm 7e38e980f058e3e43f121c2ba73d60156083d09be0acc2e5581372136ce11a1c-kubelet-1.21.3-0.x86_64.rpm
yum -y install b04e5387f5522079ac30ee300657212246b14279e2ca4b58415c7bf1f8c8a8f5-kubectl-1.21.3-0.x86_64.rpm
yum -y install 23f7e018d7380fc0c11f0a12b7fda8ced07b1c04c4ba1c5f5cd24cd4bdfb304d-kubeadm-1.21.3-0.x86_64.rpm
 
kubelet --version
kubectl version
kubeadm version

systemctl enable docker kubelet
systemctl restart docker
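
Optional, and not part of the original procedure (the kubeadm output later shows the cgroupfs warning because this was skipped): Docker can be switched to the systemd cgroup driver that kubeadm recommends. A sketch; if you apply it, make sure the kubelet ends up using the same driver:
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup    # should now report: Cgroup Driver: systemd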

7. Download and retag the required images
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-apiserver:v1.21.3
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-scheduler:v1.21.3
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-proxy:v1.21.3
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-controller-manager:v1.21.3
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/coredns:v1.8.0
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/pause:3.4.1
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/flannel:v0.13.0-amd64
docker pull  registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/etcd:3.4.13-0

docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-apiserver:v1.21.3     k8s.gcr.io/kube-apiserver:v1.21.3 
docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-scheduler:v1.21.3     k8s.gcr.io/kube-scheduler:v1.21.3 
docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-proxy:v1.21.3     k8s.gcr.io/kube-proxy:v1.21.3 
docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-controller-manager:v1.21.3     k8s.gcr.io/kube-controller-manager:v1.21.3 
docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/coredns:v1.8.0     k8s.gcr.io/coredns/coredns:v1.8.0 
docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/pause:3.4.1     k8s.gcr.io/pause:3.4.1  
docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/flannel:v0.13.0-amd64     quay.io/coreos/flannel:v0.13.0-amd64 
docker tag      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/etcd:3.4.13-0     k8s.gcr.io/etcd:3.4.13-0  

docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-apiserver:v1.21.3
docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-scheduler:v1.21.3
docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-proxy:v1.21.3
docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/kube-controller-manager:v1.21.3
docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/coredns:v1.8.0
docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/pause:3.4.1
docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/flannel:v0.13.0-amd64
docker rmi      registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s/etcd:3.4.13-0
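
The pull/tag/rmi sequence above can also be written as a short loop; an equivalent sketch (coredns and flannel need different target repositories, so they are handled separately):
SRC=registry.cn-hangzhou.aliyuncs.com/gaohaixiangk8s
for img in kube-apiserver:v1.21.3 kube-scheduler:v1.21.3 kube-proxy:v1.21.3 \
           kube-controller-manager:v1.21.3 pause:3.4.1 etcd:3.4.13-0; do
    docker pull $SRC/$img
    docker tag  $SRC/$img k8s.gcr.io/$img
    docker rmi  $SRC/$img
done
# coredns and flannel are published under different target names
docker pull $SRC/coredns:v1.8.0        && docker tag $SRC/coredns:v1.8.0        k8s.gcr.io/coredns/coredns:v1.8.0    && docker rmi $SRC/coredns:v1.8.0
docker pull $SRC/flannel:v0.13.0-amd64 && docker tag $SRC/flannel:v0.13.0-amd64 quay.io/coreos/flannel:v0.13.0-amd64 && docker rmi $SRC/flannel:v0.13.0-amd64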

8. Install haproxy
Done on the k8s-node3 node.
yum install haproxy -y

[root@k8s-node3 yum.repos.d]# cat  /etc/haproxy/haproxy.cfg
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
#defaults
listen stats 0.0.0.0:12345
    mode                    http
    log                     global
    maxconn 10
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    #stats auth admin:p@sssw0rd
    stats uri /stats
    
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind 0.0.0.0:12567
    mode tcp
    option tcplog
    default_backend kube-api-server

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend kube-api-server
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
    server k8s-master1 192.168.73.100:6443 check
    server k8s-master2 192.168.73.101:6443 check
    server k8s-master3 192.168.73.102:6443 check

[root@k8s-node3 yum.repos.d]# systemctl enable haproxy --now
[root@k8s-node3 yum.repos.d]# systemctl restart haproxy

Open the stats page in a browser to check the state of each master node:
http://192.168.73.102:12345/stats
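
The same check from the command line; a sketch (the apiserver backends will stay DOWN until step 9 brings up the first master):
curl -sf http://192.168.73.102:12345/stats -o /dev/null && echo "haproxy stats page is up"
curl -k https://192.168.73.102:12567/healthz    # should print "ok" once at least one apiserver backend is up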


9. Initialize the k8s-node3 node as the first master
(The console output below was captured in the author's environment, hence the zyc2-* hostnames and 10.153.167.x addresses.)
[root@ZYC2-YWGLXT-DWJK-mysql01 xtjk06]#  kubeadm init --control-plane-endpoint "192.168.73.102:12567" --upload-certs --kubernetes-version=v1.21.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "zyc2-ywglxt-dwjk-mysql01" could not be reached
        [WARNING Hostname]: hostname "zyc2-ywglxt-dwjk-mysql01": lookup zyc2-ywglxt-dwjk-mysql01 on [::1]:53: read udp [::1]:45193->[::1]:53: read: connection refused
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local zyc2-ywglxt-dwjk-mysql01] and IPs [10.96.0.1 10.153.167.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost zyc2-ywglxt-dwjk-mysql01] and IPs [10.153.167.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost zyc2-ywglxt-dwjk-mysql01] and IPs [10.153.167.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.552316 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
1445a1305b932fd4aab6773ec687534470e0b6dbbea885255d758d1f5687024c
[mark-control-plane] Marking the node zyc2-ywglxt-dwjk-mysql01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node zyc2-ywglxt-dwjk-mysql01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5ulz6e.xs9iv2kb77e3ak6f
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.73.102:12567 --token 5ulz6e.xs9iv2kb77e3ak6f \
        --discovery-token-ca-cert-hash sha256:6ce49a6fa50ce60af62f01a504bca2eb4da3a595602b988cf5ab6f1d926eebbc \
        --control-plane --certificate-key 1445a1305b932fd4aab6773ec687534470e0b6dbbea885255d758d1f5687024c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.73.102:12567 --token 5ulz6e.xs9iv2kb77e3ak6f \
        --discovery-token-ca-cert-hash sha256:6ce49a6fa50ce60af62f01a504bca2eb4da3a595602b988cf5ab6f1d926eebbc


10. Deploy the pod network (flannel)
kube-flannel.yml file location:
https://gitee.com/gaohaixiang192/k8s/blob/main/kube-flannel.yml
kubectl create -f /data/bao/kube-flannel.yml
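
After the manifest is applied, the flannel and coredns pods should reach Running and the master should turn Ready; a quick check on the first master:
kubectl get pods -n kube-system -o wide    # flannel, coredns, kube-proxy, etcd and the control-plane pods
kubectl get nodes                          # the node becomes Ready once the CNI is up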

11. Join the other master nodes to the k8s-node3 control plane
[root@ZYC2-YWGLXT-DWJK-mysql02 xtjk06]#   kubeadm join 192.168.73.102:12567 --token 5ulz6e.xs9iv2kb77e3ak6f \
>         --discovery-token-ca-cert-hash sha256:6ce49a6fa50ce60af62f01a504bca2eb4da3a595602b988cf5ab6f1d926eebbc \
>         --control-plane --certificate-key 1445a1305b932fd4aab6773ec687534470e0b6dbbea885255d758d1f5687024c
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "zyc2-ywglxt-dwjk-mysql02" could not be reached
        [WARNING Hostname]: hostname "zyc2-ywglxt-dwjk-mysql02": lookup zyc2-ywglxt-dwjk-mysql02 on [::1]:53: read udp [::1]:51661->[::1]:53: read: connection refused
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local zyc2-ywglxt-dwjk-mysql02] and IPs [10.96.0.1 10.153.167.7 10.153.167.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost zyc2-ywglxt-dwjk-mysql02] and IPs [10.153.167.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost zyc2-ywglxt-dwjk-mysql02] and IPs [10.153.167.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node zyc2-ywglxt-dwjk-mysql02 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node zyc2-ywglxt-dwjk-mysql02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Set up kubectl on the newly joined master (run here as root, so no sudo):
        mkdir -p $HOME/.kube
        cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        chown $(id -u):$(id -g) $HOME/.kube/config


[root@ZYC2-YWGLXT-DWJK-agent01 bao]#   kubeadm join 192.168.73.102:12567 --token 5ulz6e.xs9iv2kb77e3ak6f \
>         --discovery-token-ca-cert-hash sha256:6ce49a6fa50ce60af62f01a504bca2eb4da3a595602b988cf5ab6f1d926eebbc \
>         --control-plane --certificate-key 1445a1305b932fd4aab6773ec687534470e0b6dbbea885255d758d1f5687024c
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "zyc2-ywglxt-dwjk-agent01" could not be reached
        [WARNING Hostname]: hostname "zyc2-ywglxt-dwjk-agent01": lookup zyc2-ywglxt-dwjk-agent01 on [::1]:53: read udp [::1]:43482->[::1]:53: read: connection refused
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost zyc2-ywglxt-dwjk-agent01] and IPs [10.153.167.8 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost zyc2-ywglxt-dwjk-agent01] and IPs [10.153.167.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local zyc2-ywglxt-dwjk-agent01] and IPs [10.96.0.1 10.153.167.8 10.153.167.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node zyc2-ywglxt-dwjk-agent01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node zyc2-ywglxt-dwjk-agent01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.


12. Join the worker node to the cluster
[root@node1 bao2]# kubeadm join 192.168.73.102:12567 --token igdypq.ze9ee5xry1o1mnan \
> --discovery-token-ca-cert-hash sha256:b54b6fc0123618bdcf3609f1f8a58f7283905bcf3fa7f4e5ab607fa8a32e7a97 
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
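
At this point all three masters and the worker should be registered; a quick check from any master:
kubectl get nodes -o wide    # expect the three control-plane nodes plus the worker, all Ready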


13. If too much time has passed since kubeadm init, joining a worker node fails (the bootstrap token has expired)
Join error:
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "5ulz6e"
To see the stack trace of this error execute with --v=5 or higher

Regenerate a token, then join again:
[root@ZYC2-YWGLXT-DWJK-mysql01 bao]# kubeadm token generate
9psojs.mqrgtud16qjymfok
[root@ZYC2-YWGLXT-DWJK-mysql01 bao]# kubeadm token create 9psojs.mqrgtud16qjymfok  --print-join-command --ttl=0  
kubeadm join 10.153.167.6:12567 --token 9psojs.mqrgtud16qjymfok --discovery-token-ca-cert-hash sha256:6ce49a6fa50ce60af62f01a504bca2eb4da3a595602b988cf5ab6f1d926eebbc

The join now succeeds:
[root@jk-host1 bao]# kubeadm join 192.168.73.102:12567 --token 9psojs.mqrgtud16qjymfok --discovery-token-ca-cert-hash sha256:6ce49a6fa50ce60af62f01a504bca2eb4da3a595602b988cf5ab6f1d926eebbc
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


14. Deploy a test workload to verify that the cluster works correctly, for example MySQL, then test against it.
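
A minimal smoke test using nginx instead of MySQL (the deployment name here is illustrative, not from the original post):
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -l app=nginx-test -o wide    # the pod should be Running on one of the nodes
kubectl get svc nginx-test                    # note the NodePort, then curl http://<any-node-ip>:<nodeport>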

Reposted from: blog.csdn.net/liao__ran/article/details/119461069