Ubuntu 22.04 LTS: Creating an NFS-Backed StorageClass for Kubernetes

1. Pick a worker node to host the NFS service

On the node that will act as the NFS server, install the required packages:

sudo apt install nfs-kernel-server nfs-common 

Every node that may be scheduled to run Pods needs the NFS client installed:

sudo apt install nfs-common 

Create the shared directory:

sudo mkdir -p /nfs/k8s
sudo chmod 777 /nfs/k8s

Edit the exports configuration:

sudo vim /etc/exports

Add the following line and save (the * allows access from any network):

/nfs/k8s *(rw,sync,no_root_squash)
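
If you would rather not expose the share to every network, the export can be limited to the cluster's subnet instead. A sketch, assuming the nodes live in 192.168.1.0/24 (adjust to your environment):

/nfs/k8s 192.168.1.0/24(rw,sync,no_root_squash)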

2. Restart the NFS service and verify it

sudo systemctl restart nfs-kernel-server
sudo systemctl enable nfs-kernel-server
sudo showmount -e localhost
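
As an optional sanity check, you can confirm from any worker node that the export is visible and mountable (x.x.x.x stands for the NFS server's IP; /mnt is just a temporary mount point):

showmount -e x.x.x.x
sudo mount -t nfs x.x.x.x:/nfs/k8s /mnt
sudo umount /mnt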

3. Install Helm on the master node

Install Helm following the official Helm installation guide:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
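
Verify the installation:

helm version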

4. Install the NFS Subdir External Provisioner with Helm on the master node

Following the NFS Subdir External Provisioner installation guide, add the Helm repository:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner

When running the command below, replace nfs.server=x.x.x.x with the IP address of the machine hosting the NFS service, and replace nfs.path=/exported/path with the shared directory configured earlier:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path
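
For the setup in this article, with the share created at /nfs/k8s, the command would look like the following (192.168.1.100 is a placeholder for the NFS server's address):

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.100 \
    --set nfs.path=/nfs/k8s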

5. Problems and troubleshooting

Checking the Pod status shows an ImagePullBackOff caused by network problems:

kubectl get pods
--------------------------------------------------------------------
NAME                                              READY   STATUS             RESTARTS   AGE
nfs-subdir-external-provisioner-94dbb6bcf-78hwd   0/1     ImagePullBackOff   0          14m

Inspect the details:

kubectl describe pod nfs-subdir-external-provisioner-94dbb6bcf-78hwd
----------------------------------------------------------------------
···
Node:             xa-lenovo-right/192.168.1.103
···
  Warning  Failed       114s                  kubelet  Error: ImagePullBackOff
  Normal   Pulling      100s (x2 over 2m29s)  kubelet  Pulling image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
  Warning  Failed       68s (x2 over 115s)    kubelet  Failed to pull image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2": rpc error: code = Unknown desc = Error response from daemon: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/sig-storage/nfs-subdir-external-provisioner/manifests/v4.0.2": dial tcp 64.233.189.82:443: i/o timeout
  Warning  Failed       68s (x2 over 115s)    kubelet  Error: ErrImagePull
  Normal   BackOff      57s (x2 over 114s)    kubelet  Back-off pulling image "registry.k8s.io/sig-storage/nfs-

The output shows the node the Pod is running on, xa-lenovo-right/192.168.1.103, and the image that cannot be pulled, registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2.
All that remains is to get the missing image onto that node.

Workaround (for reference only):

On a machine with working network access, pull the image:

docker pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

Log in to the private Harbor registry:

docker login xxxxxxxx.com:442

Tag the image for the private registry:

docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2

Push it to the private registry:

docker push xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2

On the node that needs the image, pull it manually and retag it back to the original name:

docker pull xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2
docker tag xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner:v4.0.2 registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
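
Alternatively, instead of retagging on every node, the deployment can be pointed directly at the private registry image. This is only a sketch, assuming the chart exposes image.repository and image.tag values (confirm with helm show values nfs-subdir-external-provisioner/nfs-subdir-external-provisioner for your chart version):

helm upgrade --install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set image.repository=xxxxxxxx.com:442/docker_images/nfs-subdir-external-provisioner \
    --set image.tag=v4.0.2 \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path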

Finally, check the Pod and StorageClass status from the master node:

kubectl get pods
--------------------------------------------------------------------
NAME                                              READY   STATUS    RESTARTS   AGE
nfs-subdir-external-provisioner-94dbb6bcf-78hwd   1/1     Running   0          18h

kubectl get sc
--------------------------------------------------------------------
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   18h

6. Test that the StorageClass works

Create a PVC manifest, pvc-test.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  storageClassName: nfs-client # the storageClassName must be specified
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce

Create the resource:

kubectl create -f pvc-test.yaml

Check the PVC and the automatically provisioned PV:

kubectl get pvc
--------------------------------------------------------------------
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-6221c79e-eda3-4c7e-b614-5e420a7e44a8   1Gi        RWO            nfs-client     6m15s

kubectl get pv
--------------------------------------------------------------------
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-6221c79e-eda3-4c7e-b614-5e420a7e44a8   1Gi        RWO            Delete           Bound    default/pvc-test   nfs-client              6m14s

The PV has been created successfully.
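
To confirm provisioning end to end, a minimal test Pod can mount the PVC and write a file, which should then show up in a subdirectory under /nfs/k8s on the NFS server. A sketch (the Pod name, image, and file path are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod
spec:
  containers:
    - name: writer
      image: busybox
      # write a marker file into the mounted volume, then stay alive
      command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-test   # the PVC created above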

Reposted from blog.csdn.net/weixin_43461724/article/details/132119988