Kunpeng 920 (ARM) builds a k8s cluster and installs the dashboard

1. Environment configuration:

Note: run on the master and on all worker nodes

# disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# temporarily disable swap
swapoff -a
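Note that swapoff -a only lasts until the next reboot; to make the change permanent you can also comment out the swap entry in /etc/fstab (the same command used in the detailed command section at the end of this post):

sed -ri 's/.*swap.*/#&/' /etc/fstab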

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
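An optional quick check that the settings took effect; the two keys only exist once the br_netfilter module is loaded, so modprobe it first if they are missing:

# load the bridge netfilter module if it is not loaded yet
modprobe br_netfilter
# both values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables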

2. Install the time synchronization service (chrony)

Note: run on the master and on all worker nodes

yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service

# check chrony status
systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-03-22 14:32:33 CST; 11min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 611 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─611 /usr/sbin/chronyd

Mar 22 14:32:33 localhost systemd[1]: Starting NTP client/server...
Mar 22 14:32:33 localhost chronyd[611]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +... +DEBUG)
Mar 22 14:32:33 localhost chronyd[611]: Frequency -3.712 +/- 0.085 ppm read from /var/lib/chrony/drift
Mar 22 14:32:33 localhost systemd[1]: Started NTP client/server.
Mar 22 14:32:38 k8s-master chronyd[611]: Selected source 100.125.0.251
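To confirm synchronization is actually working, chronyc can list the time sources and show the current offset (the exact output depends on your environment):

# the source marked ^* is the one currently selected
chronyc sources
chronyc tracking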

3. Install the required dependencies:

Note: run on the master and on all worker nodes

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

4. Set hostnames

# set each machine's own hostname
hostnamectl set-hostname xxx

e.g.:
On the server acting as the master node:  hostnamectl set-hostname k8s-master
On the other two worker servers:
First worker:  hostnamectl set-hostname node1
Second worker: hostnamectl set-hostname node2

Then let every server resolve the master by adding a hosts entry (run on every server):

echo "10.13.166.115  k8s-master" >> /etc/hosts  # 将IP 改为你master节点的IP

5. Install docker

Note: run on the master and on all worker nodes

sudo yum remove docker*
# configure the repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
export releasever=7
export basearch=aarch64

# install Docker CE
sudo yum makecache
sudo yum -y install docker-ce-3:24.0.2-1.el8.aarch64 --allowerasing    # --allowerasing lets new packages replace old ones

# start Docker CE and enable it on boot
systemctl start docker
systemctl enable docker.service

If the above reports an error that the requested Docker version cannot be found, search for what is available:

yum list docker-ce
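To list every version available in the repo rather than just the latest, the usual variant is:

yum list docker-ce --showduplicates | sort -r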

6. Install kubeadm and other components

Note: run on the master and on all worker nodes

# configure the package mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# install
yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0 --disableexcludes=kubernetes

# start kubelet now and enable it on boot
systemctl enable --now kubelet
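At this point you can optionally verify that all three components are installed at the expected version:

kubeadm version
kubectl version --client
kubelet --version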

7. Initialize the master:

Solve this problem in advance, before initializing:

Problem encountered (other warnings can be ignored, but anything involving the network plugin must be resolved):

Note: this also applies on the other nodes (the same check runs when a worker joins).

Problem:
[WARNING FileExisting-tc]: tc not found in system path

Solution:
dnf install -y iproute-tc

Initialize:

Run this only on the master, and replace the IP in --apiserver-advertise-address=192.168.1.205 with your master's IP address.

Important: choose --pod-network-cidr=172.168.0.0/16 according to your cluster's addressing, to avoid IP conflicts.

kubeadm init \
--apiserver-advertise-address=192.168.1.205 \
--control-plane-endpoint=k8s-master \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=172.168.0.0/16

**Cancel initialization** (note: do not run this now; only run the following commands if initialization errors out or you want to retry). For example, if you hit the warning above and fixed it, reset before re-initializing the master node:

sudo kubeadm reset

rm -rf $HOME/.kube

rm -rf /var/lib/cni/
rm -rf /etc/cni/
ifconfig cni0 down
ip link delete cni0
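kubeadm reset does not clean up iptables rules; if you want a completely clean slate before retrying, you can also flush them manually (optional, and note this wipes all iptables rules, not just the Kubernetes ones):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X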


After running the commands above, all Docker containers and pods created for the cluster are deleted.

Details:

After initialization finishes, output like the following is printed. (Record it: the join commands are needed later when adding worker nodes.)

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token 02f7y1.k9dgww0wcobl0s9c \
    --discovery-token-ca-cert-hash sha256:07b802f242cfb73780d757e056978731a822fcc339ba4c9ee95afa5a64819090 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 02f7y1.k9dgww0wcobl0s9c \
    --discovery-token-ca-cert-hash sha256:07b802f242cfb73780d757e056978731a822fcc339ba4c9ee95afa5a64819090 

Then run the following on the master node (from the output above):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Wait a moment, then check the result:
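The node and pod status can be checked from the command line:

kubectl get nodes
kubectl get pods -A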

You can see that the two coredns pods have not started yet; they cannot run until the network plugin is installed, so don't worry.

8. Deploy the container network plugin:

Note: Execute only on the master node

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If you cannot download the file above, copy the version in the appendix and apply that instead.

Problems encountered:

Problem one:

The coredns pods stay in ContainerCreating.
Cause: describing the pod shows that the /run/flannel/subnet.env file is missing.

kubectl describe pod coredns-6d8c4cb4d-drcgw -n kube-system
# event output:
kubernetes installation and kube-dns: open /run/flannel/subnet.env: no such file or directory

Solution: write the /run/flannel/subnet.env file manually; afterwards the pods show Running again.

cat > /run/flannel/subnet.env << EOF
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
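Afterwards the coredns pods should move to Running within a short while; one way to watch them:

kubectl get pods -n kube-system -w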

Problem two:

Problem description: the flannel network plugin fails to deploy:

NAMESPACE      NAME                             READY   STATUS              RESTARTS   AGE
kube-flannel   kube-flannel-ds-55nbz            0/1     CrashLoopBackOff    3          2m48s

Cause analysis:
View the pod logs: kubectl logs -n kube-flannel kube-flannel-ds-55nbz

E0825 09:00:25.015344       1 main.go:330] Error registering network: failed to acquire lease: subnet "10.244.0.0/16" specified in the flannel net config doesn't contain "172.16.0.0/24" PodCIDR of the "master" node.
W0825 09:00:25.018642       1 reflector.go:436] github.com/flannel-io/flannel/subnet/kube/kube.go:403: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
I0825 09:00:25.018698       1 main.go:447] Stopping shutdownHandler...

Cause: the network segment given as --pod-network-cidr when initializing the master node must also be reflected in kube-flannel.yml; the default there is "10.244.0.0/16".

Solution:

vi kube-flannel.yml

 # section to modify: change "Network" to the pod CIDR used when the master was initialized
 net-conf.json: |
    {
      "Network": "172.168.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Re-apply after making the change above:

kubectl delete -f kube-flannel.yml

kubectl apply -f kube-flannel.yml
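Then confirm the flannel DaemonSet pods come up cleanly on every node:

kubectl get pods -n kube-flannel -o wide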

9. Join the working node:

Solve in advance a problem you will otherwise hit during the join.

Problem encountered:

[WARNING IsDockerSystemdCheck]: Detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

So let's switch the driver.

Solution: modify docker

Create daemon.json under /etc/docker and edit it:

vi /etc/docker/daemon.json

Add the following:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}


Restart Docker:

systemctl restart docker
systemctl status docker
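You can verify the change by asking Docker directly; the output should now report systemd:

docker info | grep -i 'cgroup driver'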

Note: run this only on the worker nodes

The token comes from the output generated by kubeadm init (see the details above):

kubeadm join k8s-master:6443 --token 02f7y1.k9dgww0wcobl0s9c \
    --discovery-token-ca-cert-hash sha256:07b802f242cfb73780d757e056978731a822fcc339ba4c9ee95afa5a64819090
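Back on the master, the new worker should then appear in the node list (it may report NotReady until flannel starts on it):

kubectl get nodes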

10. Install k8s dashboard:

Note: Execute only on the master node

1. Deployment

The official Kubernetes web UI:

https://github.com/kubernetes/dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

2. Set the access port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

The command above opens the service definition in an editor; change type: ClusterIP to type: NodePort.
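If you prefer a non-interactive change, a kubectl patch one-liner achieves the same result (a sketch; the service name and namespace are the ones created by recommended.yaml above):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'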

Check:

kubectl get svc -A |grep kubernetes-dashboard


3. Create an access account

  1. vi dash.yaml and paste the following content to create the user admin-user:

    # create the access account; prepare a yaml file: vi dash.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    
  2. Apply it:

    kubectl apply -f dash.yaml
    

    User created successfully.

4. Token access

# get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

Token:

eyJhbGciOiJSUzI1NiIsImtpZCI6ImlDd2FhalJURElqcFlhaG81Q2ViZVNJejJmekJVdVR3eWszbEl1WkNjOWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTRndDJ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjYjUxZjBiOC02MDYzLTRlNDktODY0Mi0yNWI0YWNhNmUxYTEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.roM2nTjHy5zeMgOdHFvS5pEJe2gLvYBs3Fax15BQgOCqKrUK8DpE7LYRQLoANFfXPMqJEjz-Q5QObwZKq7HoVAJNqyKp4m78SVRvwmBTO_PG6uCgUKFQ44vxFDdMarKs5Mn2Pzcl7pe-Cu8vUBo1XhUnDroJMFHMhXcSzfxcmPkNSrWfTcj8s48nDp5bIKFweuRQ0B6Ash4jUTvMIyr02GdzNAWhU9QOjXg8HYQGceLyrAvIgO2Li4D9yrlSJZaJeKPhQumvt0-I2kvI_fmW6rJcMNxBLwNMX0ickuw68km_Mbtyx14fcTwOpZ8YI0YjsByUeQ_87qtB82kc3lZsyQ

5. Interface

Note: use the Firefox browser to access the dashboard, because no trusted certificate is configured here (the bundled certificate is old and would need to be replaced), and stricter browsers may refuse the connection.
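The port to use is the NodePort assigned to the service; one way to look it up (the URL is then https://<any-node-IP>:<NodePort>):

kubectl get svc -n kubernetes-dashboard kubernetes-dashboard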


Afterword:

**I also wanted to deploy KubeSphere on this cluster, but KubeSphere does not support ARM CPUs yet, so that will have to wait.**


Appendix:

kube-flannel.yml:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "192.168.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.0
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.0
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Detailed explanation of the commands:

Environment configuration:

# set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# allow iptables to inspect bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Explanation:

  1. sudo setenforce 0
    • Purpose: temporarily puts SELinux (Security-Enhanced Linux) into permissive mode.
    • Explanation: SELinux is a security module that enforces mandatory access control on Linux. The setenforce command changes SELinux's enforcement state at runtime. Setting it to 0, i.e. "Permissive", means SELinux logs violations but does not block them.
  2. sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    • Purpose: modifies the SELinux configuration in /etc/selinux/config.
    • Explanation: sed is a text-processing tool, and this command uses it to edit /etc/selinux/config. The -i option makes sed edit the file in place rather than write to standard output. The expression s/^SELINUX=enforcing$/SELINUX=permissive/ finds the line SELINUX=enforcing and replaces it with SELINUX=permissive, changing SELinux's enforcement mode from "Enforcing" to "Permissive". The modified configuration takes effect after the system restarts.
  3. Disable swap:
    • Command: swapoff -a
    • Purpose: turns off the system's swap partitions.
    • Explanation: swap is a mechanism for paging data out to disk when physical memory runs low. In Kubernetes clusters it is recommended to disable swap, because swapping can severely degrade the performance of containerized applications. Running swapoff -a disables every active swap area.
  4. Comment out the swap partition:
    • Command: sed -ri 's/.*swap.*/#&/' /etc/fstab
    • Purpose: comments out the swap-related lines in /etc/fstab.
    • Explanation: /etc/fstab stores filesystem mount information. The command uses a regular expression to comment out every line containing "swap" in the file, so swap stays disabled across reboots.
  5. Allow iptables to inspect bridged traffic:
    • Command: cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf br_netfilter EOF
    • Purpose: creates /etc/modules-load.d/k8s.conf so that the br_netfilter module is loaded.
    • Explanation: Kubernetes clusters use bridged networking, and the br_netfilter module is required for iptables to see bridged traffic. Writing br_netfilter into /etc/modules-load.d/k8s.conf ensures the module is loaded at system startup.
  6. Set network parameters:
    • Command: cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF sudo sysctl --system
    • Purpose: creates /etc/sysctl.d/k8s.conf with the required kernel network parameters and applies them via sysctl --system.
    • Explanation: Kubernetes requires net.bridge.bridge-nf-call-ip6tables and net.bridge.bridge-nf-call-iptables set to 1 so that Linux bridged networking works correctly. Running sudo sysctl --system applies the settings immediately, and the file keeps them in effect after reboots.

Install the time synchronizer

yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service

  1. Install chrony:
    • Command: yum install chrony -y
    • Purpose: installs the chrony time synchronization service via the yum package manager.
    • Explanation: chrony is a Network Time Protocol (NTP) client and server used to synchronize time between computer systems. The command installs the chrony package with yum and resolves its dependencies automatically.
  2. Enable the chrony service:
    • Command: systemctl enable chronyd.service
    • Purpose: enables the chrony service at system startup.
    • Explanation: systemctl enable marks a service to start automatically at boot, so chronyd will be started on every system boot.
  3. Start the chrony service:
    • Command: systemctl start chronyd.service
    • Purpose: starts the chrony service.
    • Explanation: systemctl start launches the specified service immediately; chronyd starts and begins synchronizing the system time right away.

Installing and starting chrony ensures time is synchronized across the system, which is especially important in distributed systems and clusters.

Why time synchronization matters:

  1. Consistency and coordination: in a distributed system, different compute nodes or servers need to coordinate and communicate with each other. If their clocks are out of sync, data inconsistencies, conflicting operations, and other coordination problems can result. Time synchronization keeps the clocks on all nodes consistent, providing a consistent view and behavior.
  2. Computation and log consistency: many distributed systems and applications rely on timestamps for computing, ordering, and logging events. If nodes' clocks differ, this can lead to miscalculations, out-of-order events, or inaccurate logs. Time synchronization keeps these operations consistent and accurate.
  3. Security and authentication: many security protocols and mechanisms rely on time to verify and authenticate the ordering and timestamps of events, for example certificate validity periods and authentication token expiry. If clocks are out of sync, these mechanisms can be weakened or fail. Time synchronization keeps security protocols and authentication mechanisms working correctly.
  4. Troubleshooting and log analysis: when a system misbehaves, timestamps play a key role in troubleshooting and log analysis. Time synchronization ensures that event timestamps are accurate, which helps locate and solve problems.

Set the hostname

hostnamectl set-hostname k8s-master

  • hostnamectl is a command-line tool for managing the hostname and related system settings.
  • set-hostname is the hostnamectl subcommand that sets the hostname.
  • k8s-master is the new hostname you wish to set.

Running this command changes the system's hostname to "k8s-master". Note that the change may require a reboot to take effect everywhere. The hostname matters for network communication and for identifying each host's role in the cluster, so choosing a meaningful, easily recognizable hostname makes system administration and maintenance easier.

Configure the master domain name

# configure the master domain name on all machines
echo "10.13.166.115  k8s-master" >> /etc/hosts   # note: change the IP to the master's IP, i.e. that server's address

  • echo is a command-line tool that prints text.
  • "10.13.166.115  k8s-master" is the IP-address-to-hostname mapping to add.
  • >> /etc/hosts appends the output to the /etc/hosts file.

Running this command appends the IP address "10.13.166.115" and the hostname "k8s-master" to the end of the /etc/hosts file. This establishes a mapping between the IP address and the hostname on the local host, so the hostname can be used for access during network communication.

Note that mapping the correct IP address to the hostname in /etc/hosts is important for inter-host communication and name resolution. Make sure to replace "10.13.166.115" with the actual IP address for your environment.

Install network tools (conntrack and socat)

# the following commands may be needed
yum install -y conntrack
yum install -y socat

  1. Install conntrack:
    • Command: yum install -y conntrack
    • Purpose: installs conntrack via the yum package manager.
    • Explanation: conntrack is a connection-tracking tool for inspecting network connections and their state. In a Kubernetes cluster, conntrack is used to manage and track connections so that network traffic is routed and delivered correctly. The command installs the conntrack package with yum and resolves its dependencies automatically.
  2. Install socat:
    • Command: yum install -y socat
    • Purpose: installs socat via the yum package manager.
    • Explanation: socat is a versatile networking tool for establishing connections and relaying data between endpoints. In a Kubernetes cluster, socat may be used for proxying, forwarding, and debugging network traffic. The command installs the socat package with yum and resolves its dependencies automatically.

Both tools play a role in installing and running the Kubernetes cluster, keeping network connections correct and stable. Installing them satisfies the cluster's dependencies and provides the required networking capabilities.

Taint the master:

kubectl taint nodes k8s-master node-role.kubernetes.io=master:NoSchedule

#  k8s-master is the master node's name; check it with kubectl get nodes

  • kubectl is the Kubernetes command-line tool for interacting with the cluster.

  • taint is the kubectl subcommand that adds or removes taints on nodes.

  • nodes k8s-master identifies the target node named "k8s-master".

  • node-role.kubernetes.io=master:NoSchedule is the key-value form of the taint to add:
    • node-role.kubernetes.io=master means the taint's key is "node-role.kubernetes.io" and its value is "master".
    • NoSchedule is the taint's effect: the scheduler will not place pods on the node unless they tolerate the taint.

Running this command adds a taint to the node named "k8s-master", marking it as a master node that the scheduler will not use for pods lacking the matching toleration. This keeps the master from being overloaded and preserves its capacity for running the cluster's critical components.

Note that before running the command, make sure you have sufficient privileges and administrative access to the target node.
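For reference, the inverse operation removes the taint again; a trailing dash after the effect deletes it:

kubectl taint nodes k8s-master node-role.kubernetes.io=master:NoSchedule-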

Fixing the tc problem:

dnf install -y iproute-tc

  • dnf is the package manager used on systems such as Fedora, CentOS, and RHEL (Red Hat Enterprise Linux).
  • install -y is the dnf subcommand plus the option that installs packages and automatically confirms all prompts.
  • iproute-tc is the name of the package to install.

Running this command installs the iproute-tc package with DNF. iproute-tc provides tools for configuring and managing network routing and traffic control, including setting routing rules, configuring network interfaces, and shaping traffic.

Note that this command may need to be run with appropriate user or administrator privileges.

Source: blog.csdn.net/qq_63946922/article/details/131221363