Cloud Native: Kubernetes Deployment with kubeadm

Table of contents

1. Experimental environment

2. Initialize the environment (run on all three nodes)

3. Configure the hostname mappings (run on all three nodes)

4. Pass bridged IPv4 traffic to the iptables chains (run on all three nodes)

5. Install Docker (run on all three nodes)

6. Configure the K8s repo; install the kubelet, kubeadm, and kubectl components (run on all three nodes)

7. Master node configuration

8. Worker node configuration (run on both worker nodes)

9. Master configuration: network plugin and component status

10. Deployment completed


1. Experimental environment

Host     IP
master   192.168.159.10
node01   192.168.159.13
node02   192.168.159.11

This walkthrough installs the cluster with kubeadm.

2. Initialize the environment (run on all three nodes)

[root@zwb_master ~]# systemctl stop firewalld       ## Stop the firewall
[root@zwb_master ~]# systemctl disable firewalld    ## Disable it at boot
[root@zwb_master ~]# setenforce 0                   ## Put SELinux into permissive mode
setenforce: SELinux is disabled
[root@zwb_master ~]# swapoff -a                     ## Turn off the swap partition
[root@zwb_master ~]# free -g                        ## Verify that swap is off
              total        used        free      shared  buff/cache   available
Mem:              3           0           2           0           0           3
Swap:             0           0           0
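Note that `swapoff -a` only lasts until the next reboot. To keep swap disabled permanently, also comment out the swap entry in /etc/fstab; a minimal sketch:

[root@zwb_master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab   ## comment out every line mentioning swap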

3. Configure the hostname mappings (run on all three nodes)

[root@zwb_node02 ~]# vim /etc/hosts

......................................

192.168.159.10 master
192.168.159.13 node01
192.168.159.11 node02
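A quick check that the names resolve as intended:

[root@zwb_node02 ~]# ping -c 1 master    ## should reach 192.168.159.10
[root@zwb_node02 ~]# ping -c 1 node01    ## should reach 192.168.159.13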

4. Pass bridged IPv4 traffic to the iptables chains (run on all three nodes)

[root@zwb_master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF

[root@zwb_master ~]# sysctl --system     ## Reload all sysctl configuration files
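If `sysctl --system` reports that the net.bridge.* keys are unknown, the br_netfilter kernel module is not loaded yet; loading it now and on every boot is a common companion step:

[root@zwb_master ~]# modprobe br_netfilter                              ## load the module now
[root@zwb_master ~]# echo br_netfilter > /etc/modules-load.d/k8s.conf   ## load it automatically at boot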

[root@master yum.repos.d]# ntpdate ntp1.aliyun.com #### Time synchronization 
 4 Nov 16:53:27 ntpdate[10411]: adjust time server 120.25.115.20 offset -0.002465 sec
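ntpdate is a one-shot adjustment; to keep the clocks aligned over time you could resync periodically, e.g. with a crontab entry like the sketch below (added via crontab -e; chronyd would be the more robust long-term option):

*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null    ## resync every 30 minutes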

5. Install Docker (run on all three nodes)

[root@zwb_master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
                                 ### Install the required dependencies

[root@zwb_master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo #### Add the Alibaba Cloud Docker CE repo

[root@zwb_master ~]# yum install -y docker-ce ## Install Docker CE (Community Edition)

[root@zwb_master ~]# systemctl enable docker.service --now ## Enable Docker at boot and start it now
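kubeadm expects the kubelet and the container runtime to share a cgroup driver, and on CentOS 7 Docker defaults to cgroupfs while the kubelet prefers systemd. A minimal sketch of switching Docker to the systemd driver:

[root@zwb_master ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@zwb_master ~]# systemctl restart docker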

6. Configure the K8s repo; install the kubelet, kubeadm, and kubectl components (run on all three nodes)

[root@zwb_master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

[root@zwb_master ~]# yum install -y kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3

[root@zwb_master ~]# systemctl enable kubelet.service --now ## Enable kubelet at boot and start it now
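It is worth confirming the installed versions before initializing (the kubelet will crash-loop until `kubeadm init` runs, which is normal at this stage):

[root@zwb_master ~]# kubeadm version -o short
v1.21.3
[root@zwb_master ~]# kubelet --version
Kubernetes v1.21.3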

7. Master node configuration

[root@zwb_master ~]# kubeadm init --apiserver-advertise-address=192.168.159.10 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.21.3 --service-cidr=10.125.0.0/16 --pod-network-cidr=10.150.0.0/16

## --apiserver-advertise-address must be the master's own IP (192.168.159.10 in this lab)
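If the initialization fails partway (image pulls, port conflicts), you can wipe the partial state and run init again:

[root@zwb_master ~]# kubeadm reset -f    ## tear down everything kubeadm init created, then retry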
 

[root@zwb_master ~]#   mkdir -p $HOME/.kube                              ## Configure kubectl access for the current user
[root@zwb_master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@zwb_master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
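kubeadm also prints a root-only alternative that skips the copy: point KUBECONFIG straight at admin.conf. Either way, the goal is giving kubectl the cluster credentials:

[root@zwb_master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf    ## root only; not persistent across logins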

8. Worker node configuration (run on both worker nodes)

## Copy the join command printed at the end of `kubeadm init` and run it on each worker node

kubeadm join 192.168.159.10:6443 --token z78qqi.94i3znnuu0sundzr \
    --discovery-token-ca-cert-hash sha256:4521a93aefff86da70790811841cb25885bb7ed4e0b338ed0cc194f5b6127129
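The bootstrap token in that command expires after 24 hours by default; if a node joins later, generate a fresh join command on the master:

[root@zwb_master ~]# kubeadm token create --print-join-command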

9. Master configuration: network plugin and component status

## Upload kube-flannel.yml 

[root@zwb_master ~]# cd /opt
[root@zwb_master opt]# ls
kube-flannel.yml

[root@zwb_master opt]# kubectl apply -f kube-flannel.yml
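For reference: the `Network` field in kube-flannel.yml's net-conf.json must match the `--pod-network-cidr` passed to kubeadm init (10.150.0.0/16 here); the stock flannel manifest ships with 10.244.0.0/16, so edit it before applying. The relevant excerpt should read:

  net-conf.json: |
    {
      "Network": "10.150.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }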

### View pods across all namespaces

[root@zwb_master opt]# kubectl get pods -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-7clld            1/1     Running   0          60s
kube-flannel   kube-flannel-ds-psgvb            1/1     Running   0          60s
kube-flannel   kube-flannel-ds-xxncr            1/1     Running   0          60s
kube-system    coredns-6f6b8cc4f6-lbvl5         1/1     Running   0          10m
kube-system    coredns-6f6b8cc4f6-m6brz         1/1     Running   0          10m
kube-system    etcd-master                      1/1     Running   0          10m
kube-system    kube-apiserver-master            1/1     Running   0          10m
kube-system    kube-controller-manager-master   1/1     Running   0          10m
kube-system    kube-proxy-jwpnz                 1/1     Running   0          6m2s
kube-system    kube-proxy-xqcqm                 1/1     Running   0          6m7s
kube-system    kube-proxy-z6rhl                 1/1     Running   0          10m
kube-system    kube-scheduler-master            1/1     Running   0          10m
 

### List pods in a specific namespace

[root@zwb_master opt]# kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-7clld   1/1     Running   0          6m6s
kube-flannel-ds-psgvb   1/1     Running   0          6m6s
kube-flannel-ds-xxncr   1/1     Running   0          6m6s
 

### Check component status: two components report Unhealthy

[root@zwb_master opt]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}  

## Fix the configuration

[root@zwb_master ~]# cd /etc/kubernetes/manifests/
[root@zwb_master manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
[root@zwb_master manifests]# vim kube-controller-manager.yaml

Comment out line 26, the `- --port=0` flag. With the insecure port set to 0, the controller-manager serves no health endpoint on 127.0.0.1:10252, so the check above is refused.

[root@zwb_master manifests]# vim kube-scheduler.yaml    ## Make the same change: comment out the `- --port=0` line
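After editing, the relevant excerpt in each manifest looks roughly like this (the kubelet re-creates the static pods automatically once the files change):

    - --leader-elect=true
    #- --port=0          # commented out so the component serves its insecure health port again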

## Check the status again: all components are now healthy

[root@zwb_master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

[root@zwb_master manifests]# kubectl get pods -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-7clld            1/1     Running   0          26m
kube-flannel   kube-flannel-ds-psgvb            1/1     Running   0          26m
kube-flannel   kube-flannel-ds-xxncr            1/1     Running   0          26m
kube-system    coredns-6f6b8cc4f6-lbvl5         1/1     Running   0          36m
kube-system    coredns-6f6b8cc4f6-m6brz         1/1     Running   0          36m
kube-system    etcd-master                      1/1     Running   0          36m
kube-system    kube-apiserver-master            1/1     Running   0          36m
kube-system    kube-controller-manager-master   1/1     Running   0          2m53s
kube-system    kube-proxy-jwpnz                 1/1     Running   0          31m
kube-system    kube-proxy-xqcqm                 1/1     Running   0          31m
kube-system    kube-proxy-z6rhl                 1/1     Running   0          36m
kube-system    kube-scheduler-master            1/1     Running   0          2m5s
 

10. Deployment completed

[root@zwb_master manifests]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   44m   v1.21.3
node01   Ready    <none>                 39m   v1.21.3
node02   Ready    <none>                 39m   v1.21.3
### Label the worker nodes

[root@zwb_master manifests]# kubectl label node node01 node-role.kubernetes.io/node=node
node/node01 labeled
[root@zwb_master manifests]# kubectl label node node02 node-role.kubernetes.io/node=node
node/node02 labeled
[root@zwb_master manifests]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   47m   v1.21.3
node01   Ready    node                   42m   v1.21.3
node02   Ready    node                   42m   v1.21.3
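The ROLES column is populated from the `node-role.kubernetes.io/<role>` label. If you ever need to undo it, kubectl's trailing-dash syntax removes a label:

[root@zwb_master manifests]# kubectl label node node01 node-role.kubernetes.io/node-    ## remove the role label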
 

Origin: blog.csdn.net/m0_62948770/article/details/127638675