Creating an NFS-backed StorageClass on Kubernetes (Ubuntu 22.04 LTS)

1. Select a worker node to install the NFS service on

On the node that will serve as the NFS server, install the required components:

sudo apt install nfs-kernel-server nfs-common 

On every node where Pods can be scheduled, install:

sudo apt install nfs-common 

Create shared directory:

sudo mkdir -p /nfs/k8s
sudo chmod 777 /nfs/k8s

Modify configuration file:

sudo vim /etc/exports

Add the following line and save (the * allows access from any host):

/nfs/k8s *(rw,sync,no_root_squash)
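
To restrict access to a specific subnet instead of all hosts, the export can name a CIDR range in place of * (a sketch; the 192.168.1.0/24 subnet is illustrative):

/nfs/k8s 192.168.1.0/24(rw,sync,no_root_squash)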

2. Restart the NFS service and check

sudo systemctl restart nfs-kernel-server
sudo systemctl enable nfs-kernel-server
sudo showmount -e localhost
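
Before wiring this into Kubernetes, you can optionally verify the export by mounting it manually from another node (a quick sanity check; replace x.x.x.x with the NFS server's IP):

sudo mount -t nfs x.x.x.x:/nfs/k8s /mnt
touch /mnt/test-file && ls /mnt
sudo umount /mnt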

3. Install Helm on the Master node

Refer to Helm's official installation instructions to install Helm:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
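
Verify the installation:

helm version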

4. Install NFS Subdir External Provisioner with Helm on the Master node

Refer to the NFS Subdir External Provisioner installation instructions:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner

In the command below, replace nfs.server=x.x.x.x with the IP address of the machine running the NFS service, and nfs.path=/exported/path with the shared directory path configured earlier:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path
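
For the setup in this article, the invocation would look like the following (a sketch; 192.168.1.100 is a hypothetical NFS server IP, and storageClass.name is shown explicitly even though nfs-client is the chart's default):

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.100 \
    --set nfs.path=/nfs/k8s \
    --set storageClass.name=nfs-client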

5. Problems and troubleshooting

Check the Pod status; because of network problems, the Pod may be stuck in ImagePullBackOff:

kubectl get pods
--------------------------------------------------------------------
NAME                                              READY   STATUS             RESTARTS   AGE
nfs-subdir-external-provisioner-94dbb6bcf-78hwd   0/1     ImagePullBackOff   0          14m

Inspect the Pod for details:

kubectl describe pod nfs-subdir-external-provisioner-94dbb6bcf-78hwd
----------------------------------------------------------------------
···
Node:             xa-lenovo-right/192.168.1.103
···
  Warning  Failed       114s                  kubelet  Error: ImagePullBackOff
  Normal   Pulling      100s (x2 over 2m29s)  kubelet  Pulling image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
  Warning  Failed       68s (x2 over 115s)    kubelet  Failed to pull image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2": rpc error: code = Unknown desc = Error response from daemon: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/sig-storage/nfs-subdir-external-provisioner/manifests/v4.0.2": dial tcp 64.233.189.82:443: i/o timeout
  Warning  Failed       68s (x2 over 115s)    kubelet  Error: ErrImagePull
  Normal   BackOff      57s (x2 over 114s)    kubelet  Back-off pulling image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"

From the output you can see the node the Pod was scheduled to (xa-lenovo-right/192.168.1.103) and the image that could not be pulled (registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2).
You just need to make that image available on that node.

Solution (for reference only):

On a machine with working network access to the registry, pull the image:

docker pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

Log in to the private Harbor registry:

docker login xxxxxxxx.com:442

Tag the image for the private registry:

docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2

Push it to the private registry:

docker push xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2

On the node that needs the image, manually pull it and re-tag it with the original name:

docker pull xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2
docker tag xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2 registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
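
If no private registry is available, an alternative is to transfer the image as a tar archive (a sketch; the file name is illustrative):

docker save registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 -o nfs-provisioner.tar
# copy nfs-provisioner.tar to the target node (e.g. with scp), then on that node:
docker load -i nfs-provisioner.tar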

Finally, check the Pod and StorageClass status from the Master node:

kubectl get pods
--------------------------------------------------------------------
NAME                                              READY   STATUS    RESTARTS   AGE
nfs-subdir-external-provisioner-94dbb6bcf-78hwd   1/1     Running   0          18h

kubectl get sc
--------------------------------------------------------------------
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   18h
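
Optionally, you can mark nfs-client as the default StorageClass so that PVCs without an explicit storageClassName use it:

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'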

6. Test whether the StorageClass works correctly

Create PVC file pvc-test.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  storageClassName: nfs-client # the storageClassName must be specified
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce

Create this resource:

kubectl create -f pvc-test.yaml

View the PVC and the automatically created PV:

kubectl get pvc
--------------------------------------------------------------------
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-6221c79e-eda3-4c7e-b614-5e420a7e44a8   1Gi        RWO            nfs-client     6m15s

kubectl get pv
--------------------------------------------------------------------
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-6221c79e-eda3-4c7e-b614-5e420a7e44a8   1Gi        RWO            Delete           Bound    default/pvc-test   nfs-client              6m14s

You can see that the PV was created automatically and bound successfully.
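
To confirm the volume is actually usable, you can mount the PVC in a throwaway Pod and write a file through it (a minimal sketch; the busybox image and Pod name are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-test

Once the Pod is Running, a subdirectory for the claim should appear under /nfs/k8s on the NFS server containing hello.txt; delete the Pod and PVC afterwards to clean up.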
