Production Kubernetes cluster installation and deployment (v1.15.3)

Version Introduction


NAME                VERSION   INTERNAL-IP      OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
cnvs-kubm-101-103   v1.15.3   172.20.101.103   CentOS Linux 7 (Core)   5.2.9-1.el7.elrepo.x86_64   docker://18.6.1
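The information above is a trimmed view of the output of the following command, run on any host with a configured kubeconfig:

kubectl get nodes -o wide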

Project address:

https://gitlab.com/PtmindDev/devops/kub-deploy/tree/cn-k8s-prod

Branch:
cn-k8s-prod

Cluster description

#master
[kub-m]
172.20.101.103 name=cnvskubm-101-103  
172.20.101.104 name=cnvskubm-101-104  
172.20.101.105 name=cnvskubm-101-105  

#node
[kub-n]
172.20.101.106 name=cnvs-kubnode-101-106 
172.20.101.107 name=cnvs-kubnode-101-107
172.20.101.108 name=cnvs-kubnode-101-108
172.20.101.118 name=cnvs-kubnode-101-118 
172.20.101.120 name=cnvs-kubnode-101-120
172.20.101.122 name=cnvs-kubnode-101-122
172.20.101.123 name=cnvs-kubnode-101-123 
172.20.101.124 name=cnvs-kubnode-101-124
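
The playbook commands below also target a kub-all group covering every host. Assuming a standard INI inventory, a parent group along these lines is expected in /etc/ansible/hosts (it is not shown in the inventory above, so treat it as an assumption):

# Assumed parent group combining masters and workers (referenced later as kub-all)
[kub-all:children]
kub-m
kub-n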

Ansible installation environment:

cd /workspace/kub-deploy/roles

1: Upgrade the kernel (as required)

ansible-playbook  1-kernelup.yaml  

Verify the result

ansible kub-all -a "uname -a"

Linux kubm-01 5.2.9-1.el7.elrepo.x86_64 #1 SMP Fri Aug 16 08:17:55 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux

System initialization

ansible-playbook 2-basic.yml

# Run against a single specified host only:
ansible-playbook -i /etc/ansible/hosts 2-basic.yml --limit 172.20.101.103

Install nginx

ansible-playbook 3-nginx.yaml

Verification

# Version
[root@kubm-01 roles]# ansible kub-m -a "nginx -v"     

172.20.101.103 | CHANGED | rc=0 >>
nginx version: nginx/1.16.1
....

# Port
ansible kub-m -m shell -a  "lsof -n -i:16443"

172.20.101.103 | CHANGED | rc=0 >>
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   21392  root    5u  IPv4 434526      0t0  TCP *:16443 (LISTEN)
....

Install keepalived

ansible-playbook 4-keepalived.yml 

Output:

********
ok: [172.20.101.103] => {
    "output.stdout_lines": [
        "    inet 172.20.101.253/32 scope global eth0"
    ]
.......
ok: [172.20.101.105] => {
    "output.stdout_lines": []
}
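
For reference, a minimal keepalived configuration sketch of what the role is assumed to produce on the masters to hold the VIP 172.20.101.253 (interface, router id, priority and password are illustrative; the actual template lives in the repo):

cat <<EOF > /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the other two masters
    interface eth0                # interface that carries 172.20.101.x
    virtual_router_id 51
    priority 100                  # use a lower priority on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        172.20.101.253            # the API/Nginx VIP used below
    }
}
EOF
systemctl enable --now keepalived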

Check the VIP

[root@kubm-01 roles]# ping 172.20.101.253
PING 172.20.101.253 (172.20.101.253) 56(84) bytes of data.
64 bytes from 172.20.101.253: icmp_seq=1 ttl=64 time=0.059 ms

Create the installation deployment directory

mkdir -p /etc/kubeinstall
cd /etc/kubeinstall

Create the kubeadm init configuration file (run on kubm-01)

I use the flannel network plugin, so the pod network is configured as --pod-network-cidr=10.244.0.0/16.

cat <<EOF > /etc/kubeinstall/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.101.103
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: cnvs-kubm-101-103
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: cn-k8s-prod
controlPlaneEndpoint: "172.20.101.253:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.245.0.0/16
  podSubnet: "10.244.0.0/16"
scheduler: {}
EOF

Note that I use an Nginx proxy:

Nginx runs on every master as a reverse proxy for the API Server;
172.20.101.253 is the VIP of the master nodes;
the Nginx proxy listens on port 16443;
the API Server itself uses port 6443.
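
A minimal sketch of such a TCP (stream) proxy, assuming the three masters as upstreams; the path is hypothetical and the file must be included at the stream level of nginx.conf (the real configuration is generated by the 3-nginx.yaml role):

cat <<EOF > /etc/nginx/apiserver-stream.conf
stream {
    upstream kube_apiserver {
        server 172.20.101.103:6443;
        server 172.20.101.104:6443;
        server 172.20.101.105:6443;
    }
    server {
        listen 16443;                 # port fronted by the keepalived VIP
        proxy_pass kube_apiserver;
        proxy_connect_timeout 2s;
    }
}
EOF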

Initialize the cluster using the config file.

kubeadm init \
--config=/etc/kubeinstall/kubeadm-config.yaml \
--upload-certs 

Master nodes:

[kub-m]
172.20.101.103 name=cnvs-kubm-101-103  
172.20.101.104 name=cnvs-kubm-101-104  
172.20.101.105 name=cnvs-kubm-101-105  

Output from initializing the first master node:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf \
    --control-plane --certificate-key 1c20a3656bbcc9be4b5a16bcb4c4bab5445d221d4721900bf31b5b196b733cec

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf 
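
If the bootstrap token or the uploaded certificates expire before all nodes have joined (as noted above, the certificate key is deleted after two hours), fresh values can be generated on an existing master with the standard kubeadm commands:

# Print a complete, fresh worker join command (creates a new token):
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new --certificate-key:
kubeadm init phase upload-certs --upload-certs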

Run the following on the master to initialize the kubectl environment.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Node verification:

# Node status
[root@cnvs-kubnode-101-103 kubeinstall]# kubectl get nodes
NAME                STATUS     ROLES    AGE     VERSION
cnvs-kubm-101-103   NotReady   master   3m35s   v1.15.3    <=== NotReady; becomes Ready after the network plugin is installed

# Component status
[root@cnvs-kubnode-101-103 kubeinstall]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

Deploy the flannel network

The podSubnet configured above must match the pod CIDR used by the CNI plugin; adjust it to your actual environment.

Kubernetes versions change quickly; read the documentation before deploying and use a network plugin version that matches your cluster:
https://github.com/coreos/flannel#flannel
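
A sketch of the usual deployment step, assuming the kube-flannel.yml manifest from the repository linked above (prefer checking out the manifest revision that matches v1.15 rather than master):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml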


Verify node status:

[root@cnvs-kubnode-101-103 kubeinstall]# kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103   Ready    master   4m51s   v1.15.3     <=== Ready

# All services are Running
[root@cnvs-kubm-101-103 kubeinstall]# kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-kl66m                    1/1     Running   0          83s
coredns-5c98db65d4-xjlkl                    0/1     Running   0          83s
etcd-cnvs-kubm-101-103                      1/1     Running   0          40s
kube-apiserver-cnvs-kubm-101-103            1/1     Running   0          25s
kube-controller-manager-cnvs-kubm-101-103   1/1     Running   0          27s
kube-flannel-ds-amd64-jln7d                 1/1     Running   0          17s
kube-proxy-g2b2p                            1/1     Running   0          83s
kube-scheduler-cnvs-kubm-101-103            1/1     Running   0          35s

Add the remaining master nodes (the 2nd and 3rd) by performing the following operation on each:

  kubeadm join 172.20.101.253:16443 --token m1n5s7.ktdbt3ce3yj4czm1 \
    --discovery-token-ca-cert-hash sha256:0eca032dcb2354f8c9e4f3ecfd2a19941b8a7b0c6cc4cc0764dc61a3a8e5ff68 \
    --control-plane --certificate-key e5b5fe5b9576a604b7107bbe12a8aa09d4ddc309c9d9447bc5552fdd481df627   

On each newly joined master, run the following to initialize the kubectl environment.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verification

All master nodes are Ready:

[root@cnvs-kubm-101-105 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103   Ready    master   4m35s   v1.15.3
cnvs-kubm-101-104   Ready    master   96s     v1.15.3
cnvs-kubm-101-105   Ready    master   22s     v1.15.3

On all worker nodes, perform the following operations.

[kub-n]
172.20.101.106
172.20.101.107
172.20.101.108
172.20.101.118
172.20.101.120
172.20.101.122
172.20.101.123
172.20.101.124

Single-node installation

kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf 

Ansible batch installation

ansible kub-n -m shell -a "kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf"

Output:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Verification:

[root@cnvs-kubm-101-104 ~]# kubectl get node
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103      Ready    master   8m32s   v1.15.3
cnvs-kubm-101-104      Ready    master   5m33s   v1.15.3
cnvs-kubm-101-105      Ready    master   4m19s   v1.15.3
cnvs-kubnode-101-106   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-107   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-108   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-118   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-120   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-122   Ready    <none>   13s     v1.15.3
cnvs-kubnode-101-123   Ready    <none>   13s     v1.15.3
cnvs-kubnode-101-124   Ready    <none>   2m31s   v1.15.3

Add labels

To prepare for the traefik deployment (a usage sketch follows the verification below):

kubectl label nodes {cnvs-kubnode-101-106,cnvs-kubnode-101-107} traefik=traefik-outer --overwrite

kubectl label nodes {cnvs-kubnode-101-123,cnvs-kubnode-101-124} traefik=traefik-inner --overwrite

Verification

[root@cnvs-kubm-101-103 kub-deploy]# kubectl get node  -l "traefik=traefik-outer"
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubnode-101-106   Ready    <none>   5m25s   v1.15.3
cnvs-kubnode-101-107   Ready    <none>   5m25s   v1.15.3

[root@cnvs-kubm-101-103 kub-deploy]# kubectl get node  -l "traefik=traefik-inner"
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubnode-101-123   Ready    <none>   5m18s   v1.15.3
cnvs-kubnode-101-124   Ready    <none>   7m36s   v1.15.3
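
These labels are meant to be consumed by the traefik workloads as a nodeSelector. A hypothetical example of pinning an already-deployed traefik Deployment to the outer nodes (the name traefik-outer and the kube-system namespace are illustrative, not taken from the repo):

# Hypothetical: add a nodeSelector so the pods only schedule on the labelled nodes
kubectl -n kube-system patch deployment traefik-outer \
  --patch '{"spec":{"template":{"spec":{"nodeSelector":{"traefik":"traefik-outer"}}}}}'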

Overall cluster verification

# All services are Running
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-kl66m                    1/1     Running   0          13m
coredns-5c98db65d4-xjlkl                    1/1     Running   0          13m
etcd-cnvs-kubm-101-103                      1/1     Running   0          13m
etcd-cnvs-kubm-101-104                      1/1     Running   0          7m57s
etcd-cnvs-kubm-101-105                      1/1     Running   0          5m26s
kube-apiserver-cnvs-kubm-101-103            1/1     Running   0          13m
kube-apiserver-cnvs-kubm-101-104            1/1     Running   1          7m47s
kube-apiserver-cnvs-kubm-101-105            1/1     Running   0          4m8s
kube-controller-manager-cnvs-kubm-101-103   1/1     Running   1          13m
kube-controller-manager-cnvs-kubm-101-104   1/1     Running   0          6m38s
kube-controller-manager-cnvs-kubm-101-105   1/1     Running   0          4m11s
kube-flannel-ds-amd64-2nfbb                 1/1     Running   2          88s
kube-flannel-ds-amd64-2pbqs                 1/1     Running   1          104s
kube-flannel-ds-amd64-4w7cb                 1/1     Running   2          92s
kube-flannel-ds-amd64-gxzhw                 1/1     Running   1          3m58s
kube-flannel-ds-amd64-jln7d                 1/1     Running   0          12m
kube-flannel-ds-amd64-lj9t4                 1/1     Running   2          92s
kube-flannel-ds-amd64-mbp8k                 1/1     Running   2          91s
kube-flannel-ds-amd64-r8t9c                 1/1     Running   1          7m57s
kube-flannel-ds-amd64-rdsfm                 1/1     Running   0          3m5s
kube-flannel-ds-amd64-w8gww                 1/1     Running   1          5m26s
kube-flannel-ds-amd64-x7rh7                 1/1     Running   2          92s
kube-proxy-4kxjv                            1/1     Running   0          5m26s
kube-proxy-4vqpf                            1/1     Running   0          92s
kube-proxy-677lf                            1/1     Running   0          92s
kube-proxy-b9kr2                            1/1     Running   0          104s
kube-proxy-dm9kd                            1/1     Running   0          3m5s
kube-proxy-g2b2p                            1/1     Running   0          13m
kube-proxy-m79jv                            1/1     Running   0          3m58s
kube-proxy-snqhr                            1/1     Running   0          92s
kube-proxy-t7mkx                            1/1     Running   0          91s
kube-proxy-z2f67                            1/1     Running   0          7m57s
kube-proxy-zjpwn                            1/1     Running   0          88s
kube-scheduler-cnvs-kubm-101-103            1/1     Running   1          13m
kube-scheduler-cnvs-kubm-101-104            1/1     Running   0          7m4s
kube-scheduler-cnvs-kubm-101-105            1/1     Running   0          4m32s

# All nodes are Ready
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get nodes
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103      Ready    master   15m     v1.15.3
cnvs-kubm-101-104      Ready    master   9m32s   v1.15.3
cnvs-kubm-101-105      Ready    master   7m1s    v1.15.3
cnvs-kubnode-101-106   Ready    <none>   3m6s    v1.15.3
cnvs-kubnode-101-107   Ready    <none>   3m19s   v1.15.3
cnvs-kubnode-101-108   Ready    <none>   3m7s    v1.15.3
cnvs-kubnode-101-118   Ready    <none>   3m7s    v1.15.3
cnvs-kubnode-101-120   Ready    <none>   3m7s    v1.15.3
cnvs-kubnode-101-122   Ready    <none>   3m3s    v1.15.3
cnvs-kubnode-101-123   Ready    <none>   4m40s   v1.15.3
cnvs-kubnode-101-124   Ready    <none>   5m33s   v1.15.3

Batch cluster cleanup

kubectl delete node --all
ansible kub-all -m shell -a "kubeadm reset -f"
ansible kub-all -m shell -a "rm -rf /etc/kubernetes && rm -rf /var/lib/etcd && rm -rf /var/lib/kubelet && rm -rf /var/lib/kubelet && rm -rf $HOME/.kube/config "
ansible kub-all -m shell -a "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
ansible kub-all -m shell -a "systemctl restart docker && systemctl enable kubelet"
ansible kub-all -m shell -a "ip link del flannel.1 && ip a|grep flannel "

Recommended environment cleanup

If Kubernetes was configured on these machines before, or the first installation attempt did not succeed, it is recommended to clean up the system environment on every node:

systemctl stop kubelet
docker rm -f -v $(docker ps -a -q)

rm -rf /etc/kubernetes
rm -rf /var/lib/etcd
rm -rf /var/lib/kubelet
rm -rf $HOME/.kube/config
ip link del flannel.1 
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

yum reinstall -y kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet

Reference Documents

https://www.cnblogs.com/net2817/p/10513369.html
https://k8smeetup.github.io/docs/reference/setup-tools/kubeadm/kubeadm-config/

Origin: blog.51cto.com/michaelkang/2431889