Installation and deployment of k8s

Table of contents

What is k8s? What is it used for?

What is CNCF?

 What components are there in k8s?

k8s architecture diagram 

Installation and deployment of k8s

1. IP address planning:

2. Close firewalld and selinux (operate on both the k8s cluster master and node)

3. Install docker on all machines

Install yum related tools and download the docker-ce.repo file

 Install the docker-ce software

Start the docker service, set docker to start automatically

4. Close the swap partition

5. Rename the host name and modify the hosts file

6. Modify some kernel parameters

 7. Install the kubeadm, kubectl, and kubelet packages

8. Deploy kubernetes master

9. Add the node servers to the k8s cluster

10. Install the network plug-in flannel (executed on the master node) 

 deploy flannel

 View the cluster status (the status is Ready, which means k8s deployment is successful)

View the control plane component pods

Check the namespaces in k8s (created by k8s) 

Check which node each pod is running on


What is k8s? What is it used for?

        k8s is short for Kubernetes (the 8 stands for the eight letters between the k and the s)

        k8s is used to manage containers: it can deploy, scale, and manage containerized applications

        Where there is code, there can be k8s

Container runtime software:

        Software that handles creating, starting, and stopping containers and managing their images

        docker

        containerd

What is CNCF?

CNCF is the Cloud Native Computing Foundation.

 What components are there in k8s?

In terms of roles: master (management node) and node (worker node)

Control plane components on the master:

        1. kube-apiserver

                Each control plane component runs as a pod; a pod contains one or more containers running the relevant software, each built from a corresponding docker image.

                The API server is the component of the Kubernetes control plane that exposes the Kubernetes API and accepts requests; it is the front end of the control plane.

                It is the entry point to k8s: through this interface you can query and manage the information and resources of the entire cluster.

        2. etcd: a highly available key-value store that holds all Kubernetes cluster data

        3. kube-scheduler: watches for newly created pods that have no node assigned and selects a node for each pod to run on

                        Pod: the smallest deployable unit in k8s; it can contain multiple containers, and all containers in a pod share one IP address

        4. kube-controller-manager: runs and manages the k8s controllers

                        k8s has many controllers

                                Deployment: the controller that deploys pods

                                ReplicaSet: the replica controller

        5. cloud-controller-manager

                        A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you connect your cluster to a cloud provider's API, and it separates the components that interact with that cloud platform from the components that only interact with your cluster.

Node components

        1. kubelet: runs on every node in the cluster. It makes sure the containers described in pod specs are running, and it starts pods on the node servers.

        2. kube-proxy: maintains network rules on the nodes that allow network communication to pods from sessions inside or outside the cluster

                It handles network communication between pods across nodes and provides load balancing.

Commonly used k8s resources

        pod

        pv (PersistentVolume)

        pvc (PersistentVolumeClaim)

        controller

        hpa (HorizontalPodAutoscaler)
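
To make the resource list above concrete, here is a minimal sketch of a Pod manifest (the name demo-pod and the nginx image are arbitrary examples; it can only be applied once the cluster deployed below is running):

# Minimal Pod example: one pod, one container, applied from a heredoc
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx
EOF

# All containers in the pod share a single IP address
kubectl get pod demo-pod -o wide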

#############################################################################  

k8s architecture diagram 

#############################################################################  

Installation and deployment of k8s

        1. minikube

        2. kubeadm: the officially recommended way to install k8s

        3. Binary installation

        4. Third-party deployment tools: rancher, etc.

Architecture of k8s cluster

        1. Single master and multiple nodes

        2. Multi-master and multi-node -- high availability

                e.g. 3 masters, 3 nodes, and 1 load balancer (nginx)

lab environment

        1 master, 3 nodes

        Software: CentOS 7.9, docker

        Hardware: 2 GB RAM / 2 CPU cores per machine

1. IP address planning:

        k8s-master:192.168.44.210

        k8s-node1:192.168.44.211

        k8s-node2:192.168.44.212

        k8s-node3:192.168.44.213

############################################################################# 

2. Close firewalld and selinux (operate on both the k8s cluster master and node)

[root@k8s-master ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@k8s-master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# getenforce
Permissive
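
setenforce 0 only disables SELinux until the next reboot. A reasonable follow-up (not shown in the transcript above) is to make both changes permanent and repeat them on every node, roughly:

# Keep SELinux disabled after a reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Stop and disable firewalld in one step on the remaining machines
systemctl disable --now firewalld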

############################################################################# 

3. Install docker on all machines

Install yum related tools and download the docker-ce.repo file

yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

 Install the docker-ce software

yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y

Start the docker service, set docker to start automatically

[root@k8s-master yum.repos.d]# systemctl start docker
[root@k8s-master yum.repos.d]# ps aux|grep docker
root      11877  6.1  2.6 1027340 48624 ?       Ssl  17:35   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root      12001  0.0  0.0 112824   976 pts/0    S+   17:35   0:00 grep --color=auto docker
[root@k8s-master yum.repos.d]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master yum.repos.d]# 
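
Optionally (this is not part of the original transcript), kubeadm's documentation recommends that docker use the systemd cgroup driver so that it matches the kubelet; a sketch of the corresponding /etc/docker/daemon.json:

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i "cgroup driver"   # should now report: Cgroup Driver: systemd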

#############################################################################

4. Close the swap partition

[root@k8s-master yum.repos.d]# swapoff -a
[root@k8s-master yum.repos.d]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1819         298         779           9         741        1354
Swap:             0           0           0
[root@k8s-master yum.repos.d]# 
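
swapoff -a only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab would also be commented out, for example:

# Comment out every swap line in /etc/fstab so swap stays off after a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab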

#############################################################################

5. Rename the host name and modify the hosts file

[root@k8s-master yum.repos.d]# cat >> /etc/hosts << EOF
> 192.168.44.210 k8s-master
> 192.168.44.211 k8s-node1
> 192.168.44.212 k8s-node2
> 192.168.44.213 k8s-node3
> EOF
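
The transcript only shows the hosts file; the renaming itself would be done with hostnamectl, run once on each machine, along these lines:

hostnamectl set-hostname k8s-master    # on 192.168.44.210
hostnamectl set-hostname k8s-node1     # on 192.168.44.211
hostnamectl set-hostname k8s-node2     # on 192.168.44.212
hostnamectl set-hostname k8s-node3     # on 192.168.44.213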

############################################################################# 

6. Modify some kernel parameters

[root@k8s-master yum.repos.d]# cat <<EOF >>  /etc/sysctl.conf 
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_nonlocal_bind = 1
> net.ipv4.ip_forward = 1
> vm.swappiness=0
> EOF

Run sysctl -p to load the parameters into the kernel:

[root@k8s-master yum.repos.d]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
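
The two net.bridge.bridge-nf-call-* parameters depend on the br_netfilter kernel module; if sysctl -p reports them as unknown keys, load the module first (a sketch, not part of the original transcript):

modprobe br_netfilter
# Load the module automatically at boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF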

#############################################################################

 7. Install the kubeadm, kubectl, and kubelet packages

Add the Kubernetes yum repository

[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

Install kubeadm, kubelet, and kubectl, pinning the version, because starting with version 1.24 Kubernetes removed dockershim and docker is no longer the default container runtime

yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6

Enable kubelet to start at boot; kubelet is the k8s agent on every node and must be running

[root@k8s-master ~]# systemctl enable  kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
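
An optional sanity check that all three tools were installed at the pinned version:

kubeadm version -o short            # expected: v1.23.6
kubelet --version                   # expected: Kubernetes v1.23.6
kubectl version --client --short    # expected: Client Version: v1.23.6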

#############################################################################

8. Deploy kubernetes master

 Pull the coredns:1.8.4 image in advance; it is needed later and must be downloaded on every machine

[root@k8s-master ~]# docker pull  coredns/coredns:1.8.4
1.8.4: Pulling from coredns/coredns
c6568d217a00: Pull complete 
bc38a22c706b: Pull complete 
Digest: sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890
Status: Downloaded newer image for coredns/coredns:1.8.4
docker.io/coredns/coredns:1.8.4
[root@k8s-master ~]# docker images
REPOSITORY        TAG       IMAGE ID       CREATED         SIZE
coredns/coredns   1.8.4     8d147537fb7d   16 months ago   47.6MB
[root@k8s-master ~]# docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4
[root@k8s-master ~]# docker images
REPOSITORY                                        TAG       IMAGE ID       CREATED         SIZE
coredns/coredns                                   1.8.4     8d147537fb7d   16 months ago   47.6MB
registry.aliyuncs.com/google_containers/coredns   v1.8.4    8d147537fb7d   16 months ago   47.6MB

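All of the other control plane images can also be pre-pulled with kubeadm itself (an optional step, not shown in the original transcript):

kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.6
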
Perform initialization operations on the master server

[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.44.210 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
I0924 22:36:57.310381   20845 version.go:255] remote version is much newer: v1.25.2; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.12
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.44.210]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.44.210 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.44.210 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.503688 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0xp3gm.wzbsahhxwa1dtaeh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.44.210:6443 --token 0xp3gm.wzbsahhxwa1dtaeh \
	--discovery-token-ca-cert-hash sha256:bc28a61b1de3bfa7cb95c619ef050fe67238471347b16d9e34e400e405efe0bb 
[root@k8s-master ~]# 

Create the kubeconfig file and directory as instructed by the init output; do this on the master

[root@k8s-master ~]#   mkdir -p $HOME/.kube
[root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
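
At this point kubectl on the master can reach the API server; a quick check:

kubectl cluster-info
kubectl get nodes    # the master will show NotReady until a network plugin is installed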

############################################################################# 

9. Add the node servers to the k8s cluster

Execute on all three node servers

[root@k8s-node1 ~]# kubeadm join 192.168.44.210:6443 --token 0xp3gm.wzbsahhxwa1dtaeh --discovery-token-ca-cert-hash sha256:bc28a61b1de3bfa7cb95c619ef050fe67238471347b16d9e34e400e405efe0bb 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node2 ~]# kubeadm join 192.168.44.210:6443 --token 0xp3gm.wzbsahhxwa1dtaeh --discovery-token-ca-cert-hash sha256:bc28a61b1de3bfa7cb95c619ef050fe67238471347b16d9e34e400e405efe0bb 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node3 ~]# kubeadm join 192.168.44.210:6443 --token 0xp3gm.wzbsahhxwa1dtaeh --discovery-token-ca-cert-hash sha256:bc28a61b1de3bfa7cb95c619ef050fe67238471347b16d9e34e400e405efe0bb 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
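
The bootstrap token printed by kubeadm init is only valid for 24 hours by default; if it expires before a node joins, a fresh join command can be generated on the master:

kubeadm token create --print-join-command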

View node information on the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   11m     v1.23.6
k8s-node1    NotReady   <none>                 3m49s   v1.23.6
k8s-node2    NotReady   <none>                 100s    v1.23.6
k8s-node3    NotReady   <none>                 96s     v1.23.6

NotReady indicates that the pod network is not set up yet, so communication between the master and the nodes (and between containers) is not ready

############################################################################# 

10. Install the network plug-in flannel (executed on the master node) 

 k8s network plugin: its function is to enable communication between pods running on different hosts

        1. flannel --> overlay network --> vxlan

        2. calico --> BGP protocol

Create the kube-flannel.yml file yourself; its content is as follows:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

 deploy flannel

[root@k8s-master ~]#  kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

 View the cluster status (the status is Ready, which means k8s deployment is successful)

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   21m   v1.23.6
k8s-node1    Ready    <none>                 20m   v1.23.6
k8s-node2    Ready    <none>                 20m   v1.23.6
k8s-node3    Ready    <none>                 20m   v1.23.6

View the control plane component pods in the kube-system namespace

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-92g7b              1/1     Running   0          20m
coredns-6d8c4cb4d-kl4q5              1/1     Running   0          20m
etcd-k8s-master                      1/1     Running   0          20m
kube-apiserver-k8s-master            1/1     Running   0          20m
kube-controller-manager-k8s-master   1/1     Running   0          20m
kube-proxy-422b5                     1/1     Running   0          19m
kube-proxy-6qpcz                     1/1     Running   0          19m
kube-proxy-ggnnt                     1/1     Running   0          20m
kube-proxy-vjcnc                     1/1     Running   0          19m
kube-scheduler-k8s-master            1/1     Running   0          20m

Check the namespaces in k8s (created by k8s) 

[root@k8s-master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   22m
kube-flannel      Active   10m
kube-node-lease   Active   22m
kube-public       Active   22m
kube-system       Active   22m

Check which node each pod is running on

[root@k8s-master ~]# kubectl get pod -n kube-flannel 
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-c7crw   1/1     Running   0          29m
kube-flannel-ds-pr5pr   1/1     Running   0          29m
kube-flannel-ds-rphnc   1/1     Running   0          29m
kube-flannel-ds-v8rxz   1/1     Running   0          29m
[root@k8s-master ~]# kubectl get pod -n kube-flannel -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
kube-flannel-ds-c7crw   1/1     Running   0          29m   192.168.44.211   k8s-node1    <none>           <none>
kube-flannel-ds-pr5pr   1/1     Running   0          29m   192.168.44.212   k8s-node2    <none>           <none>
kube-flannel-ds-rphnc   1/1     Running   0          29m   192.168.44.210   k8s-master   <none>           <none>
kube-flannel-ds-v8rxz   1/1     Running   0          29m   192.168.44.213   k8s-node3    <none>           <none>
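
As a final optional check (not part of the original transcript), a small test deployment confirms that pods can be scheduled onto the worker nodes; the name nginx-test is arbitrary:

kubectl create deployment nginx-test --image=nginx --replicas=3
kubectl get pod -o wide                  # the pods should land on the node servers and reach Running
kubectl delete deployment nginx-test     # clean up afterwards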

#############################################################################

Origin blog.csdn.net/qq_48391148/article/details/127017827