k8s Upgrade Notes: 1.18.5 to 1.18.6

I. Environment

The environment upgraded here is a test cluster deployed with kubeadm, consisting of one master and one node; it can serve as a reference for your own upgrade.

II. Detailed upgrade record and notes

Before running the commands below, download the updated Kubernetes images first. You can use the image-download script from my previous post, which works from within mainland China.
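
That script is not reproduced here; as a rough sketch of the idea (assuming Docker and the registry.aliyuncs.com/google_containers mirror, which are my assumptions and not necessarily what the original script uses), you can pull each required image from the mirror and retag it to the k8s.gcr.io name that kubeadm expects:

# Sketch: pre-pull the v1.18.6 images via a domestic mirror and retag them
MIRROR=registry.aliyuncs.com/google_containers
for image in $(kubeadm config images list --kubernetes-version v1.18.6); do
    name=${image##*/}                      # e.g. kube-apiserver:v1.18.6
    docker pull "$MIRROR/$name"            # pull from the mirror
    docker tag  "$MIRROR/$name" "$image"   # retag to the k8s.gcr.io name
    docker rmi  "$MIRROR/$name"            # drop the mirror tag
done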

1. Upgrade the kubeadm version

kubeadm has to be upgraded first, otherwise the cluster upgrade will report an error. Upgrading it on the Master alone is enough to complete the steps below, but I upgraded it on the Node as well.

yum update -y kubeadm
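
If your Kubernetes yum repo is configured with exclude=kube* (as in the official install docs), or if you want to pin the exact patch release rather than take whatever is newest, a pinned variant looks like this (assuming that repo layout):

yum install -y kubeadm-1.18.6-0 --disableexcludes=kubernetes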

Check the kubeadm version:

kubeadm version

Check the kubeadm upgrade plan:

[root@node-1 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.5
[upgrade/versions] kubeadm version: v1.18.5
W0721 11:02:38.021509    2388 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0721 11:02:38.021544    2388 version.go:103] falling back to the local client version: v1.18.5
[upgrade/versions] Latest stable version: v1.18.5
[upgrade/versions] Latest stable version: v1.18.5
[upgrade/versions] Latest version in the v1.18 series: v1.18.6
[upgrade/versions] Latest version in the v1.18 series: v1.18.6

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.18.5   v1.18.6

Upgrade to the latest version in the v1.18 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.18.5   v1.18.6
Controller Manager   v1.18.5   v1.18.6
Scheduler            v1.18.5   v1.18.6
Kube Proxy           v1.18.5   v1.18.6
CoreDNS              1.6.7     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.18.6

Note: Before you can perform this upgrade, you have to update kubeadm to v1.18.6.

_____________________________________________________________________

Apply the upgrade as instructed at the end of the plan:

[root@node-1 ~]# kubeadm upgrade apply v1.18.6
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.6"
[upgrade/versions] Cluster version: v1.18.5
[upgrade/versions] kubeadm version: v1.18.6
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.6"...
Static pod: kube-apiserver-node-1 hash: 885fee7a9fbfdeee96639f51278b09b1
Static pod: kube-controller-manager-node-1 hash: 84607590e836f839dcbf22068a9b216c
Static pod: kube-scheduler-node-1 hash: 3415bde3e2a04810cc416f7719a3f6aa
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.6" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests249686084"
W0721 11:10:32.301810   10692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-21-11-10-29/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-1 hash: 885fee7a9fbfdeee96639f51278b09b1
Static pod: kube-apiserver-node-1 hash: 885fee7a9fbfdeee96639f51278b09b1
Static pod: kube-apiserver-node-1 hash: ce5c99a18293147c422f8b70a4772a08
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-21-11-10-29/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-1 hash: 84607590e836f839dcbf22068a9b216c
Static pod: kube-controller-manager-node-1 hash: a498455ef3ba2eb7d5fd01ed8271dea6
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-21-11-10-29/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-1 hash: 3415bde3e2a04810cc416f7719a3f6aa
Static pod: kube-scheduler-node-1 hash: 3dd66788a2c7782d910d05ea37b91678
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.6". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
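
Before moving on, it does no harm to confirm that the control-plane static pods are now running the v1.18.6 images (node-1 is the master's node name from the output above):

kubectl -n kube-system get pod kube-apiserver-node-1 -o jsonpath='{.spec.containers[0].image}{"\n"}'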

Run the following command to upgrade the node configuration; it should be run on each node (the output below is from the Master, node-1):

[root@node-1 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.18.6"...
Static pod: kube-apiserver-node-1 hash: ce5c99a18293147c422f8b70a4772a08
Static pod: kube-controller-manager-node-1 hash: a498455ef3ba2eb7d5fd01ed8271dea6
Static pod: kube-scheduler-node-1 hash: 3dd66788a2c7782d910d05ea37b91678
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.6" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests472349924"
W0721 11:35:50.284639    8362 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Check the versions: the Server has already been upgraded, while the Client has not. The client version only changes once kubectl itself is upgraded.

[root@node-1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
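
For a quicker side-by-side comparison, kubectl also has a compact form that prints only the two version strings:

kubectl version --short   # prints Client Version and Server Version on two lines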

Run the following on each node:

yum update kubectl

At this point the nodes still show the old version. They only report the new version after kubelet has been upgraded and restarted.

[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
node-1   Ready    master   5d14h   v1.18.5
node-2   Ready    <none>   5d12h   v1.18.5

Upgrade kubelet on each node:

yum update -y kubelet
systemctl daemon-reload  # even if you skip this, the system will prompt you to run it
systemctl restart kubelet
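
On this two-node test cluster the kubelet was simply upgraded in place. On a production cluster you would normally cordon and drain a node before upgrading its kubelet, and uncordon it afterwards; a sketch for node-2 (run the kubectl commands from a machine with admin access to the cluster):

kubectl drain node-2 --ignore-daemonsets    # evict regular pods, keep DaemonSet pods
# then, on node-2:
yum update -y kubelet
systemctl daemon-reload && systemctl restart kubelet
# back on the master:
kubectl uncordon node-2                     # allow scheduling on the node again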

Check again: the upgrade has succeeded.

[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
node-1   Ready    master   5d14h   v1.18.6
node-2   Ready    <none>   5d12h   v1.18.6

Note: the upgrade also renews the cluster's certificates.

Before the upgrade:

[root@node-1 pki]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
            Not Before: Jul 15 13:35:12 2020 GMT
            Not After : Jul 15 13:35:13 2021 GMT

After the upgrade:

[root@node-1 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
            Not Before: Jul 15 13:35:12 2020 GMT
            Not After : Jul 21 03:10:32 2021 GMT
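
Rather than inspecting each certificate file with openssl, kubeadm can also report the expiry of all the certificates it manages at once (still under the alpha subcommand in 1.18):

kubeadm alpha certs check-expiration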

Reposted from blog.csdn.net/CHEndorid/article/details/107486094