Guide to installing a Kubernetes cluster on Ubuntu

Basic version and environment information:

MacBook Pro Apple M2 Max

VMware Fusion Player version 13.0.2 (21581413)

ubuntu-22.04.2-live-server-arm64

k8s-v1.27.3

docker 24.0.2

Install VMware Fusion on the MacBook, then virtualize six Ubuntu nodes. Use kubeadm to install Kubernetes with containerd as the runtime, forming a non-highly-available cluster; the network uses the flannel plugin. The installation of VMware and Ubuntu is not covered here, as there are many articles online for reference. This experimental cluster has six Ubuntu nodes in total: one master and five workers. All subsequent operations are performed as a non-root user; the Linux user name used by the author is zhangzk.

hostname   IP address       k8s role   Configuration
zzk-1      192.168.19.128   worker     2 cores & 4G
zzk-2      192.168.19.130   worker     2 cores & 4G
zzk-3      192.168.19.131   worker     2 cores & 4G
zzk-4      192.168.19.132   worker     2 cores & 4G
zzk-5      192.168.19.133   worker     2 cores & 4G
test       192.168.19.134   master     2 cores & 4G

1. Update environment (all nodes)

sudo apt update

2. Close swap permanently (all nodes)

sudo swapon --show

If Swap partition is enabled, you will see the path and size of the Swap partition file.

You can also check it with the free command:

free -h

If Swap partition is enabled, the total size and usage of Swap partition will be displayed.

Run the following command to disable Swap partition

sudo swapoff -a

Then delete the Swap partition file:

sudo rm /swap.img

Next, modify /etc/fstab so that swap is not re-enabled after the system restarts. Comment out or delete this line in /etc/fstab:

/swap.img none swap sw 0 0

Run sudo swapon --show again to confirm swap is disabled; if it is, the command produces no output.
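The fstab edit can also be scripted with sed. A minimal sketch, demonstrated on a sample copy (the sample contents and /tmp path are illustrative; on a real node you would run the same sed line against /etc/fstab, ideally after taking a backup):

```shell
# Build a sample fstab (contents are illustrative)
cat > /tmp/fstab.sample <<'EOF'
/dev/sda2 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Comment out every line whose filesystem type field is "swap"
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample

cat /tmp/fstab.sample
```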

3. Turn off the firewall (all nodes)

View the current firewall status: sudo ufw status

inactive means the firewall is off; active means it is on.

Turn off the firewall: sudo ufw disable

4. Allow iptables to inspect bridge traffic (all nodes)

1. Load the two kernel modules overlay and br_netfilter

sudo modprobe overlay && sudo modprobe br_netfilter

Configure the two modules to load persistently, so they are still present after a reboot:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Verify that the br_netfilter module is loaded by running lsmod | grep br_netfilter

Verify that the overlay module is loaded by running lsmod | grep overlay

2. Modify the kernel parameters so that packets forwarded by the layer-2 bridge also pass through the FORWARD rules of iptables.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

Apply sysctl parameters without rebooting

 sudo sysctl --system
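As a sanity check, the three required values can be verified in one loop. A sketch that reads the conf file written above (pointed at a /tmp copy here so the snippet runs anywhere; on a real node, set conf=/etc/sysctl.d/k8s.conf or query sysctl -n directly):

```shell
# Scratch copy of the same three parameters, for demonstration
conf=/tmp/k8s.conf
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Report OK or MISSING for each required key
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
  val=$(awk -F' = ' -v k="$key" '$1==k {print $2}' "$conf")
  if [ "$val" = "1" ]; then echo "$key OK"; else echo "$key MISSING"; fi
done
```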


5. Install docker (all nodes)

Use the latest version of docker directly. When installing docker, containerd will be installed automatically.

For the specific installation process, please refer to the official documentation:

Install Docker Engine on Ubuntu | Docker Documentation

Uninstall old versions

Before you can install Docker Engine, you must first make sure that any conflicting packages are uninstalled.

for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done

Install using the apt repository

Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

Set up the repository

  1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:

    sudo apt-get update
    sudo apt-get install ca-certificates curl gnupg
    
  2. Add Docker’s official GPG key:

    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
  3. Use the following command to set up the repository:

    echo \
      "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    

    Note

    If you use an Ubuntu derivative distro, such as Linux Mint, you may need to use UBUNTU_CODENAME instead of VERSION_CODENAME.
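To see exactly what ends up in docker.list, the substitutions can be echoed first. A sketch (the arm64/jammy fallbacks are only so the snippet runs on any machine; a real Ubuntu host resolves both values itself):

```shell
# Resolve architecture and release codename, with illustrative fallbacks
arch=$(dpkg --print-architecture 2>/dev/null || echo arm64)
codename=$( (. /etc/os-release && echo "$VERSION_CODENAME") 2>/dev/null || echo jammy)
codename=${codename:-jammy}

# Print the repository line that would be written to docker.list
echo "deb [arch=$arch signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $codename stable"
```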

   Install Docker Engine

  1. Update the apt package index:

    sudo apt-get update
    
  2. Install Docker Engine, containerd, and Docker Compose.


    To install the latest version, run:

     sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    

  3. Verify that the Docker Engine installation is successful by running the hello-world image.

    sudo docker run hello-world
    

    This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.

6. Install k8s (all nodes)

Refer to the official documentation: Installing kubeadm | Kubernetes

1. Update the apt package index and install packages needed to use the Kubernetes apt repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

2. Download the public signing key:

#Use the Google source (requires network access to Google, e.g. via a VPN)

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

#Use the aliyun source (recommended)

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

3. Add the Kubernetes apt repository:

#Use the Google source (requires network access to Google, e.g. via a VPN)

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

#Use the aliyun source (recommended)

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

4. Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Note: To install a specific version of kubelet/kubeadm/kubectl, take 1.23.6 as an example:

#Install version 1.23.6
sudo apt-get install -y kubelet=1.23.6-00
sudo apt-get install -y kubeadm=1.23.6-00
sudo apt-get install -y kubectl=1.23.6-00

#View versions
kubectl version --client && kubeadm version && kubelet --version

#Enable kubelet to start at boot
sudo systemctl enable kubelet

7. Modify the containerd runtime configuration (all nodes)

    First generate the default configuration file of containerd:

containerd config default | sudo tee /etc/containerd/config.toml

    1. The first modification enables the systemd cgroup driver:

    Modify the file: /etc/containerd/config.toml

    Find containerd.runtimes.runc.options and set SystemdCgroup = true:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]

Root = ""

ShimCgroup = ""

SystemdCgroup = true

     2. The second modification changes the sandbox image source from Google's registry to Alibaba Cloud's:

    Modify the file: /etc/containerd/config.toml

    Find this line: sandbox_image = "registry.k8s.io/pause:3.6"

    Change it to: sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

[plugins."io.containerd.grpc.v1.cri"]

restrict_oom_score_adj = false

#sandbox_image = "registry.k8s.io/pause:3.6"

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

selinux_category_range = 1024

     Note that if you do not change it to "pause:3.9", kubeadm init on the master will emit the following warning:

W0628 15:02:49.331359 21652 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
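Both edits can be scripted with sed; a sketch, demonstrated on a sample fragment so it is self-contained (on a real node you would run the same two sed lines against /etc/containerd/config.toml, ideally after a backup):

```shell
# Sample fragment with the two default values (illustrative)
cfg=/tmp/config.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF

# Edit 1: switch runc to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
# Edit 2: swap the sandbox image for the Alibaba Cloud mirror
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.9#' "$cfg"

grep -E 'SystemdCgroup|sandbox_image' "$cfg"
```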

    Restart and set to auto-start at boot

sudo systemctl restart containerd
sudo systemctl enable containerd

    Check the image version number

kubeadm config images list

8. Master node initialization (master node only)

Generate initialization configuration information:

kubeadm config print init-defaults > kubeadm.conf

Modify kubeadm.conf as follows (the comments mark the lines to change):

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.19.134 #Modify to the address of the master machine
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: test #Change to the host name of the master node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers #Modify to Alibaba Cloud source
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 #Add this line
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Complete the initialization of the master node master

sudo kubeadm init --config=kubeadm.conf

The output log is as follows:

[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local test] and IPs [10.96.0.1 192.168.19.134]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost test] and IPs [192.168.19.134 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost test] and IPs [192.168.19.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 3.501515 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node test as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node test as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.19.134:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:f8139e292a34f8371b4b541a86d8f360f166363886348a596e31f2ebd5c1cdbf

Since a non-root user is used, you need to execute the following command on the master to configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Configure the network (master node only)

On the master node, add the network plugin flannel:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

If the above times out, retry a few times; if it keeps failing, you need network access to GitHub (e.g. via a VPN).

You can also download kube-flannel.yml locally and then apply it:

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

kubectl apply -f kube-flannel.yml

10. Worker nodes join the cluster (worker nodes only)

Execute the following command on the master node to print the join command

kubeadm token create --print-join-command

Execute the above join command on the worker node that needs to join the cluster.

sudo kubeadm join 192.168.19.134:6443 --token lloamy.3g9y3tx0bjnsdhqk --discovery-token-ca-cert-hash sha256:f8139e292a34f8371b4b541a86d8f360f166363886348a596e31f2ebd5c1cdbf
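With five workers, the join step can be scripted from the master. A dry-run sketch using the hostnames from the table above (JOIN_CMD is hard-coded here so the snippet is self-contained; on the master you would set it with JOIN_CMD=$(kubeadm token create --print-join-command) and replace echo with a real ssh invocation, assuming passwordless sudo on the workers):

```shell
# Dry run: print the command each worker would execute
JOIN_CMD='kubeadm join 192.168.19.134:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:f8139e292a34f8371b4b541a86d8f360f166363886348a596e31f2ebd5c1cdbf'
for host in zzk-1 zzk-2 zzk-3 zzk-4 zzk-5; do
  echo "ssh $host sudo $JOIN_CMD"
done
```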

Configure kubectl of worker node

Copy the /etc/kubernetes/admin.conf file from the master node to the same path on the worker node, then execute the following commands on the worker node (as a non-root user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

View cluster nodes

zhangzk@test:~$ kubectl get nodes
NAME    STATUS   ROLES           AGE     VERSION
test    Ready    control-plane   13h     v1.27.3
zzk-1   Ready    <none>          2m21s   v1.27.3
zzk-2   Ready    <none>          89s     v1.27.3
zzk-3   Ready    <none>          75s     v1.27.3
zzk-4   Ready    <none>          68s     v1.27.3
zzk-5   Ready    <none>          42s     v1.27.3

11. Common commands

Get all pods running in all namespaces: kubectl get po --all-namespaces

Get the labels of all pods in all namespaces: kubectl get po --all-namespaces --show-labels

List all namespaces: kubectl get namespace

View nodes: kubectl get nodes

At this point, you can enter the world of cloud native and start flying.


Origin blog.csdn.net/zhangzhaokun/article/details/131452979