Installing Kubernetes 1.13 with kubeadm (2)

2.3 Installing the Pod Network

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
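The weave-net DaemonSet pods should reach Running on every node within a minute or two. A quick way to watch them (a verification step, not part of the original write-up):
kubectl get pods -n kube-system -o wide | grep weave-net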
Remove Docker's proxy (GFW-bypass) configuration.
Configure the Docker private registry in /etc/docker/daemon.json:
{
  "insecure-registries": ["<registry-url>:<registry-port>"]
}
Restart Docker so the change takes effect.
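On a systemd-based host that typically means (a minimal sketch; adjust to your init system):
sudo systemctl restart docker
docker info | grep -A1 "Insecure Registries"   # verify the private registry is listed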
Push all of the externally pulled images to the private registry (an equivalent shell loop is sketched after this list):
docker tag weaveworks/weave-npc:2.6.0 <registry-url>:<registry-port>/weaveworks/weave-npc:2.6.0
docker push <registry-url>:<registry-port>/weaveworks/weave-npc:2.6.0
docker tag weaveworks/weave-kube:2.6.0 <registry-url>:<registry-port>/weaveworks/weave-kube:2.6.0
docker push <registry-url>:<registry-port>/weaveworks/weave-kube:2.6.0
docker tag k8s.gcr.io/coredns:1.6.2 <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.6.2
docker push <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.6.2
docker tag k8s.gcr.io/kube-proxy:v1.13.0 <registry-url>:<registry-port>/k8s.gcr.io/kube-proxy:v1.13.0
docker push <registry-url>:<registry-port>/k8s.gcr.io/kube-proxy:v1.13.0
docker tag k8s.gcr.io/kube-apiserver:v1.13.0 <registry-url>:<registry-port>/k8s.gcr.io/kube-apiserver:v1.13.0
docker push <registry-url>:<registry-port>/k8s.gcr.io/kube-apiserver:v1.13.0
docker tag k8s.gcr.io/kube-controller-manager:v1.13.0 <registry-url>:<registry-port>/k8s.gcr.io/kube-controller-manager:v1.13.0
docker push <registry-url>:<registry-port>/k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag k8s.gcr.io/kube-scheduler:v1.13.0 <registry-url>:<registry-port>/k8s.gcr.io/kube-scheduler:v1.13.0
docker push <registry-url>:<registry-port>/k8s.gcr.io/kube-scheduler:v1.13.0
docker tag k8s.gcr.io/coredns:1.2.6 <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.2.6
docker push <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.2.6
docker tag k8s.gcr.io/etcd:3.2.24 <registry-url>:<registry-port>/k8s.gcr.io/etcd:3.2.24
docker push <registry-url>:<registry-port>/k8s.gcr.io/etcd:3.2.24
docker tag k8s.gcr.io/pause:3.1 <registry-url>:<registry-port>/k8s.gcr.io/pause:3.1
docker push <registry-url>:<registry-port>/k8s.gcr.io/pause:3.1
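The per-image commands above can also be written as one loop. This is just a convenience sketch, assuming REGISTRY is set to your private registry address; it is not part of the original procedure:
REGISTRY="<registry-url>:<registry-port>"   # replace with your private registry address
for img in weaveworks/weave-npc:2.6.0 weaveworks/weave-kube:2.6.0 \
           k8s.gcr.io/kube-proxy:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0 \
           k8s.gcr.io/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0 \
           k8s.gcr.io/coredns:1.6.2 k8s.gcr.io/coredns:1.2.6 \
           k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/pause:3.1; do
  docker tag "$img" "$REGISTRY/$img"    # re-tag under the private registry prefix
  docker push "$REGISTRY/$img"          # push that tag to the private registry
done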

2.4 Letting the master node run workloads

In a cluster initialized with kubeadm, Pods are not scheduled onto the master node for security reasons; in other words, the master node does not take part in the workload. This is because the current master node, node1, carries the node-role.kubernetes.io/master:NoSchedule taint:
kubectl describe node node1 | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule
Since this is a test environment, remove the taint so that node1 can run workloads:
kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted
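To confirm that the taint is gone, you can describe the node again (a quick check, not part of the original write-up); the Taints field should now show <none>:
kubectl describe node node1 | grep Taint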

2.5 Adding worker nodes to the Kubernetes cluster

On each worker node, install Docker and kubeadm at the same versions as on the master node. There is no need to configure a proxy for Docker here, because all of the base images are pulled from the local private registry and re-tagged
(pull the images from the private registry and tag them back to their original names; a quick cross-check follows the list below).
Configure the private registry on the worker first (insecure-registries, as above), restart Docker, and log in to the registry:
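If the registry requires authentication, the login step looks like this (skip it for an anonymous registry; the credentials are whatever your registry uses):
docker login <registry-url>:<registry-port>   # prompts for username and password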
docker pull <registry-url>:<registry-port>/weaveworks/weave-npc:2.6.0
docker tag <registry-url>:<registry-port>/weaveworks/weave-npc:2.6.0 weaveworks/weave-npc:2.6.0
docker pull <registry-url>:<registry-port>/weaveworks/weave-kube:2.6.0
docker tag <registry-url>:<registry-port>/weaveworks/weave-kube:2.6.0 weaveworks/weave-kube:2.6.0
docker pull <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.6.2
docker tag <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
docker pull <registry-url>:<registry-port>/k8s.gcr.io/kube-proxy:v1.13.0
docker tag <registry-url>:<registry-port>/k8s.gcr.io/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker pull <registry-url>:<registry-port>/k8s.gcr.io/kube-apiserver:v1.13.0
docker tag <registry-url>:<registry-port>/k8s.gcr.io/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
docker pull <registry-url>:<registry-port>/k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag <registry-url>:<registry-port>/k8s.gcr.io/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker pull <registry-url>:<registry-port>/k8s.gcr.io/kube-scheduler:v1.13.0
docker tag <registry-url>:<registry-port>/k8s.gcr.io/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker pull <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.2.6
docker tag <registry-url>:<registry-port>/k8s.gcr.io/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker pull <registry-url>:<registry-port>/k8s.gcr.io/etcd:3.2.24
docker tag <registry-url>:<registry-port>/k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker pull <registry-url>:<registry-port>/k8s.gcr.io/pause:3.1
docker tag <registry-url>:<registry-port>/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
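To double-check that every control-plane image kubeadm expects is now present locally (the weave images are not covered by this list), you can compare against kubeadm's own image list; a quick check, not part of the original write-up:
kubeadm config images list --kubernetes-version v1.13.0
docker images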

Now let's add the host node2 to the Kubernetes cluster. Because we also removed the requirement to disable swap from the kubelet startup parameters on node2, the --ignore-preflight-errors=Swap flag is needed here as well. First collect the join parameters on the master.
Look up the token:
kubeadm token list | awk -F" " '{print $1}' | tail -n 1
Look up the CA certificate hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
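Alternatively, kubeadm can print a ready-made join command (a shortcut not used in the original text, but available in kubeadm 1.13):
kubeadm token create --print-join-command   # run on the master; emits a full 'kubeadm join ...' line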

Then run the join command on node2 (192.168.61.11 is the master node's address):
kubeadm join 192.168.61.11:6443 --token 702gz5.49zhotgsiyqimwqw --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f --ignore-preflight-errors=Swap

[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.61.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
[discovery] Successfully established connection with API Server "192.168.61.11:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
node2 joined the cluster without a hitch. Now run the following on the master node to list the nodes in the cluster:
kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    master   16m    v1.13.0
node2   Ready    <none>   4m5s   v1.13.0
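It is also worth confirming that the DaemonSet pods (weave-net and kube-proxy) started on node2. A quick check, not part of the original write-up:
kubectl get pods -n kube-system -o wide | grep node2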

How to remove a node from the cluster
To remove node2 from the cluster, run the following commands.
On the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets

On node2 (note: the cni0 and flannel.1 interfaces below only exist when flannel is the network plugin; with the Weave Net setup used here the bridge is simply named weave, so adapt the interface cleanup accordingly):
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
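kubeadm reset does not flush iptables or IPVS rules, so for a fully clean node you can additionally run something like the following (a sketch, not in the original text; ipvsadm must be installed and the second command only matters if IPVS mode was enabled):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # flush rules left behind by kube-proxy
ipvsadm --clear                                                             # clear IPVS virtual servers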

Finally, back on node1 (the master), delete the Node object:
kubectl delete node node2

2.6 Enabling IPVS mode in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
kubectl edit cm kube-proxy -n kube-system
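After the edit, the relevant part of config.conf inside the ConfigMap should look roughly like this (an illustrative excerpt; all other fields are left untouched, and an empty mode makes kube-proxy fall back to iptables):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"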
Then restart the kube-proxy pods on each node by deleting them (the DaemonSet recreates them):
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-pf55q 1/1 Running 0 9s
kube-proxy-qjnnc 1/1 Running 0 14s

kubectl logs kube-proxy-pf55q -n kube-system
I1208 06:12:23.516444 1 server_others.go:189] Using ipvs Proxier.
W1208 06:12:23.516738 1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1208 06:12:23.516840 1 server_others.go:216] Tearing down inactive rules.
I1208 06:12:23.575222 1 server.go:464] Version: v1.13.0
I1208 06:12:23.585142 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1208 06:12:23.586203 1 config.go:202] Starting service config controller
I1208 06:12:23.586243 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1208 06:12:23.586269 1 config.go:102] Starting endpoints config controller
I1208 06:12:23.586275 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1208 06:12:23.686959 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1208 06:12:23.687056 1 controller_utils.go:1034] Caches are synced for service config controller
The log line "Using ipvs Proxier" confirms that IPVS mode is now enabled.
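You can also inspect the IPVS virtual servers that kube-proxy programmed directly on a node (assuming the ipvsadm tool is installed; this check is not in the original write-up):
ipvsadm -Ln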

Note
• If kubectl stops working, run unset http_proxy https_proxy first; otherwise its commands will fail because they are sent through the proxy.

Source: blog.csdn.net/weixin_42305433/article/details/103931045