[NFS setup, K8s, StorageClass dynamic PV creation] Dynamic PV creation in K8s via StorageClass

Setting a default storage backend with StorageClass and provisioning storage dynamically

1. Check the operating system environment

[linguang@backup ~]$ cat /etc/redhat-release

CentOS release 6.10 (Final)

[linguang@backup ~]$ uname -r

2.6.32-754.el6.x86_64      # shows the OS release (kernel) version

[linguang@backup ~]$ uname -m

x86_64                 # shows the machine (hardware) type

2. Packages required on the NFS server

nfs-utils: the main NFS service package, containing the two daemons rpc.nfsd and rpc.mountd plus related documentation and executables

rpcbind: the RPC master program on CentOS 6 (on CentOS 5 it was called portmap)

Install the packages and verify them.

Install:

[root@nfs01 ~]# yum install nfs-utils rpcbind -y

Check:

[root@nfs01 ~]# rpm -qa nfs-utils rpcbind

rpcbind-0.2.0-13.el6_9.1.x86_64

nfs-utils-1.2.3-75.el6_9.x86_64

Note: installing these packages automatically creates the nfsnobody user.

[root@nfs01 ~]# id nfsnobody

uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)

3. Start the NFS-related services

[Pay special attention to the startup order] The rpcbind service must be started first; only then can the nfs service be started.

Start the rpcbind service and check it.

Start rpcbind:

[root@nfs01 ~]# /etc/init.d/rpcbind start

Check:

###############################################

[root@nfs01 ~]# /etc/init.d/rpcbind status

rpcbind (pid 2309) is running...

Or run the command below (an equivalent check; this error message simply means an rpcbind instance is already running):

[root@master-01 ~]# /usr/sbin/rpcbind status

rpcbind: another rpcbind is already running. Aborting

Note: some references may tell you to run service portmap start instead; portmap was renamed to rpcbind as of Fedora 8, so it is the same service under a new name.

##############################################

If the commands between the ##### markers above fail to run, it is a version difference (the host uses systemd rather than SysV init scripts); do the following instead:

[root@master-01 ~]# service rpcbind restart
Redirecting to /bin/systemctl restart rpcbind.service

The output redirects us to systemctl; the command below is what actually starts the service:

/bin/systemctl restart rpcbind.service


Start the NFS service the same way, through systemctl:

[root@master-01 ~]# /bin/systemctl start nfs.service

Check the status; the NFS service is now running:

[root@master-01 ~]# /bin/systemctl status nfs.service


Check the RPC ports:

[root@nfs01 ~]# netstat -lntup |grep rpc


If the netstat command is missing, install the net-tools package first:

yum install net-tools -y

Check the port mappings:

[root@nfs01 ~]# rpcinfo -p localhost


Note: before the nfs service has been started, you will not see the nfs port mappings.

Reason: nfs is essentially an RPC program, and before any RPC program can run, its ports and functions must be registered. That registration is the rpcbind service's job, so rpcbind must be running before NFS can be served.
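For reference, once both services are running, the rpcinfo -p output looks roughly like the sketch below (program numbers and ports 111/2049 are standard; the mountd port varies by system, so treat 20048 as illustrative):

   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs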

Check the nfs and rpc processes:

[root@nfs01 ~]# ps -ef |egrep "rpc|nfs"


4. Add the services to system startup

Recommended approach: append the start commands to /etc/rc.local.

Add them:

[root@nfs01 ~]# echo "/etc/init.d/rpcbind start" >>/etc/rc.local

[root@nfs01 ~]# echo "/etc/init.d/nfs start" >>/etc/rc.local

Verify:

[root@nfs01 ~]# tail -2 /etc/rc.local

/etc/init.d/rpcbind start

/etc/init.d/nfs start


[Note]

In real production environments the rc.local approach above is preferred, because ops staff can see and manage every startup command in one place.
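As an aside, on systemd-based hosts (like master-01 above) the same goal can be reached by enabling the units directly, and on SysV hosts with chkconfig. A sketch, assuming the stock CentOS unit and service names:

[root@master-01 ~]# systemctl enable rpcbind.service
[root@master-01 ~]# systemctl enable nfs-server.service      # nfs.service is an alias of this unit

[root@nfs01 ~]# chkconfig rpcbind on
[root@nfs01 ~]# chkconfig nfs on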

5. Configure the NFS configuration file /etc/exports

By default the file is empty:

[root@nfs01 ~]# ll -h /etc/exports

-rw-r--r--. 1 root root 0 Jan 12 2010 /etc/exports

Edit the file:

[root@nfs01 ~]# vim /etc/exports

View it:

[root@nfs01 ~]# cat /etc/exports

#share /data by oldboy for bingbing at 2018-3-12

/data 172.16.1.0/24(rw,sync)

The export line above is the single most important part of this setup.
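For reference, the general format of each line is <shared-directory> <allowed-clients>(<options>). A short annotated sketch; the variants beyond rw,sync are illustrative and not part of this article's setup:

# <shared-directory> <allowed-clients>(<options>)
/data 172.16.1.0/24(rw,sync)                  # rw: read-write; sync: commit writes to disk before replying
#/data 172.16.1.0/24(ro)                      # read-only variant
#/data 172.16.1.0/24(rw,sync,no_root_squash)  # no_root_squash: do not map root to nfsnobody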

6. Create the shared directory (the path is up to you; it just has to match the directory used in the configuration above and in later steps. My later steps may use /nfsdata instead; it plays the same role. Pick whatever name you like.)

[root@nfs01 ~]# mkdir /data -p

[root@nfs01 ~]# ll -d /data/

drwxr-xr-x. 2 root root 4096 Nov 19 10:45 /data/

7. Change the ownership of the shared directory

[root@nfs01 ~]# chown -R nfsnobody.nfsnobody /data

[root@nfs01 ~]# ll -d /data

drwxr-xr-x 2 nfsnobody nfsnobody 4096 Mar 12 19:27 /data

8. Reload the NFS service

[root@master-01 nfsdata]# /bin/systemctl reload nfs.service
[root@master-01 nfsdata]# 

After modifying /etc/exports, the NFS service must be reloaded for the change to take effect.
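Alternatively, the standard exportfs command from nfs-utils re-reads /etc/exports without restarting anything; on this setup it should be equivalent:

[root@master-01 ~]# exportfs -rv      # -r: re-export everything in /etc/exports; -v: verbose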

9. Check that hosts with permission can actually mount the share

[Method 1] Check with showmount

[root@nfs01 ~]# showmount -e 172.16.1.31

Export list for 172.16.1.31:

/data 172.16.1.0/24

Or:

[root@nfs01 ~]# showmount -e localhost

Export list for localhost:

/data 172.16.1.0/24

[Note]

Output like the above means the share can be mounted.

The IP address queried here is the NFS server's own address.

[Method 2] Treat the NFS server itself as a client and do a test mount

[root@master ~]# mount -t nfs 10.10.36.112:/nfsdata /mnt
mount.nfs: access denied by server while mounting 10.10.36.112:/nfsdata

This produces the error above.

Analyzing the mount failure:

Solution:

Step 1: look at the recent mount-related log entries; analysis shows:

cat /var/log/messages | grep mount

Apr 8 09:51:54 master-01 rpc.mountd[26157]: Version 1.3.0 starting
Apr 8 09:53:45 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 09:55:47 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 09:57:49 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 09:59:51 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:01:53 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:03:56 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:05:58 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:08:00 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:10:02 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:12:04 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:14:06 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:16:08 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:18:11 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:20:13 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:22:15 master-01 rpc.mountd[26157]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:24:12 master-01 rpc.mountd[26157]: Caught signal 15, un-registering and exiting.
Apr 8 10:24:12 master-01 rpc.mountd[29833]: Version 1.3.0 starting
Apr 8 10:24:17 master-01 rpc.mountd[29833]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:26:19 master-01 rpc.mountd[29833]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:28:21 master-01 rpc.mountd[29833]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:28:51 master-01 rpc.mountd[29833]: refused mount request from 10.10.36.112 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:29:28 master-01 rpc.mountd[29833]: refused mount request from 10.10.27.87 for /nfsdata (/nfsdata): unmatched host
Apr 8 10:30:23 master-01 rpc.mountd[29833]: refused mount request from 10.10.27.90 for /nfsdata (/nfsdata): unmatched host
 

Most of the log entries above say "unmatched host":

Cause: the client host does not match the allowed-clients specification, so the problem has to be in the /etc/exports configuration. Change the client spec to * (any host) or to the actual client IP. A sketch of the corrected line follows.
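Mirroring the /nfsdata path used here (the no_root_squash variant is optional, but often needed when containers write as root):

/nfsdata *(rw,sync)
#/nfsdata *(rw,sync,no_root_squash)   # optional variant: do not map root to nfsnobody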


After the change, be sure to reload the nfs service again. [Opening the export to * is a dangerous operation; in production it is still better to specify explicit IPs.]

Now run the mount command on the client machine again:

mount -t nfs 10.10.36.112:/nfsdata /mnt


[root@nfs01 ~]# umount /mnt      # unmount once the test is done

[root@nfs01 ~]# df -h

10. NFS client configuration

Check the operating system environment

[root@client1 ~]# cat /etc/redhat-release

CentOS release 6.9 (Final)

[root@client1 ~]# uname -r

2.6.32-696.el6.x86_64

[root@client1 ~]# uname -m

x86_64

11. Install the client packages rpcbind and nfs-utils

Install:

[root@client1 ~]# yum install nfs-utils rpcbind -y

Check:

[root@client1 ~]# rpm -qa nfs-utils rpcbind

rpcbind-0.2.0-13.el6_9.1.x86_64

nfs-utils-1.2.3-75.el6_9.x86_64

[Note]

nfs-utils is installed on the client so that tools such as showmount are available; the client should have it too, but the NFS service is not started there.

12. Start the RPC service and check it

Start:

[root@client1 ~]# /etc/init.d/rpcbind start

Starting rpcbind: [ OK ]

Check:

[root@client1 ~]# /etc/init.d/rpcbind status

rpcbind (pid 2370) is running...

13. Check that the client can reach the server

[Method 1]

[root@client1 ~]# showmount -e 172.16.1.31      # this IP address is the NFS server's address

Export list for 172.16.1.31:

/data 172.16.1.0/24

Output like the above means the client can reach the server.

[Method 2]

[root@client1 ~]# telnet 172.16.1.31 111      # 111 is the RPC service port

Trying 172.16.1.31...

Connected to 172.16.1.31.

Escape character is '^]'.

Output like the above means the client can reach the server.

14. Mount the NFS share

Mount:

[root@client1 ~]# mount -t nfs 172.16.1.31:/data /mnt

Check:

[root@client1 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda3 8.8G 1.5G 6.9G 18% /

tmpfs 491M 0 491M 0% /dev/shm

/dev/sda1 190M 35M 146M 19% /boot

172.16.1.31:/data 8.8G 1.5G 6.9G 18% /mnt

Check:

[root@client1 ~]# mount

/dev/sda3 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

172.16.1.31:/data on /mnt type nfs (rw,vers=4,addr=172.16.1.31,clientaddr=172.16.1.8)

15. Test reading and writing data

Create a test file under /mnt:

[root@client1 ~]# cd /mnt/

[root@client1 mnt]# touch test.txt

[root@client1 mnt]# ls

test.txt

16. Check the /data directory on the NFS server

[root@nfs01 ~]# cd /data

[root@nfs01 data]# ls

test.txt

The NFS client mount has succeeded.

17. Add the mount command to system startup

[root@client1 ~]# echo "mount -t nfs 172.16.1.31:/data /mnt" >>/etc/rc.local

[root@client1 ~]# tail -1 /etc/rc.local

mount -t nfs 172.16.1.31:/data /mnt
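An alternative is an /etc/fstab entry instead of rc.local (a sketch, not part of this article's setup; _netdev defers the mount until the network is up):

172.16.1.31:/data  /mnt  nfs  defaults,_netdev  0 0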

---------------------------------------------------------------------------------------------------------------------------------------

Verification scenario for the NFS shared mount:

Mounting a shared storage volume for containers:

Even when a container drifts between different hosts, the storage volume it mounts always points at the NFS storage service, and on the node where it starts up the data is likewise restored from NFS:

Several concepts must be clear first:

    PVC, PV, NFS, nfs-client, StorageClass

    A simple way to think about them (this passage is meant to make the concepts above concrete):

    A PVC is a developer describing the storage they need to request; a PV is what an ops person creates to fulfill the developer's request.

    In a simple business scenario this works, with the ops person creating PVs by hand.

    But as more and more developers come on board, PVC requests multiply and binding PVCs to PVs becomes more and more complex.

    StorageClass solves exactly this problem:

    it automatically creates the matching PV and binds the PVC to it.

    NFS rounds out the volume-mounting story:

    through nfs-client, the data is mounted into a directory on the NFS server, which solves the problem of keeping a container's mounted data with it as it drifts between different hosts.

    

     1. First, PVC and PV: if I need storage, making the request is the PVC's job, and it claims disk space from a PV. [This static claiming cannot scale.]

     2. Then the StorageClass concept is introduced: it claims PVs dynamically, so I no longer have to create a PV by hand whenever one is needed; the StorageClass requests it automatically.

     3. NFS is the backend storage service; nothing here needs explaining.

     4. nfs-client is a deployment whose configuration holds the NFS server's details; it mounts the client's volumes onto the NFS server. A minimal PVC sketch follows below.
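To make the PVC idea concrete, here is a minimal sketch of a claim; the name and size are made up for illustration, and storageClassName points at the class created in Step 3 below:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim                         # hypothetical name
spec:
  accessModes:
    - ReadWriteMany                        # NFS allows many writers
  resources:
    requests:
      storage: 1Mi                         # arbitrary illustrative size
  storageClassName: course-nfs-storage     # the StorageClass from Step 3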

-----------------------------------------------------------------------------------------------------------------------------------------------

Step 1:

Configure the Deployment and set the NFS server address and export path. [One of its parameters is serviceAccountName, so we also need to create a ServiceAccount, which is the second step.]

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner1
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner1
    spec:
      serviceAccountName: nfs-client-provisioner1
      containers:
        - name: nfs-client-provisioner1
          image: 10.10.27.86/appdeploy/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.36.112
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.36.112
            path: /nfsdata
 

Step 2:

Create the ServiceAccount (sa) and bind its permissions: declare a ClusterRole that defines the rights to create, read, update, and delete PVs, so that PVs can be created dynamically through this ServiceAccount.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner1

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner1
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner1
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner1
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner1
  apiGroup: rbac.authorization.k8s.io

Step 3:

Create the StorageClass object

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs  # or choose another name; it must match the deployment's env PROVISIONER_NAME
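If you also want this class to be the cluster's default storage backend, as the opening of this article mentions, the standard default-class annotation can be patched on (optional; a sketch):

$ kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'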
 

Create the three objects above:

$ kubectl create -f nfs-client.yaml

$ kubectl create -f nfs-client-sa.yaml

$ kubectl create -f nfs-client-class.yaml

Test: to verify that pods on different nodes can all mount the NFS service,

label the nodes so the test pods can be pinned where we want:

kubectl label nodes node-01  type=node-01

kubectl label nodes node-02  type=node-02

Check that the labels are in place:

kubectl get nodes --show-labels

Create statfulSet.yaml:

vi statfulSet.yaml

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nfs-web-1
spec:
  serviceName: "nginx"  # declares which Headless Service this StatefulSet belongs to
  replicas: 3
  template:
    metadata:
      labels:
        app: nfs-web-1
    spec:
      terminationGracePeriodSeconds: 10
      nodeSelector:
        type: node-02
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: course-nfs-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
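Create it the same way as the other objects (assuming the filename above):

$ kubectl create -f statfulSet.yaml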
 

Then check the pods: all three replicas have started and all landed on node2, matching our expected result.
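A quick way to confirm the placement (the NODE column shows where each replica landed):

$ kubectl get pods -o wide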


Check the PVCs and PVs: they have all been created automatically, and all of this was done by the StorageClass.
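The corresponding checks, roughly:

$ kubectl get pvc
$ kubectl get pv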


Finally, look at the NFS server itself: everything is mounted under the /nfsdata directory.


To make the demonstration clear, we kill the pod instance on the current node and test on node1 and node2 in turn.


[slave1 node] Pin the pod to run on node-01.


Then look at the pod's events to see which volume it mounted.
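A sketch of that check; StatefulSet pods are named <name>-<ordinal>, so the first replica here would be nfs-web-1-0:

$ kubectl describe pod nfs-web-1-0      # the Events section shows which volume was mounted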


Check the exported directory on the NFS server [each subdirectory is unique, named after the generated PVC].


Same as above on the [node2] side: delete the pod on node1 and pin the pod to run on node-02.


Then look at the pod's events to see which volume it mounted.


Check the exported directory on the NFS server again [still unique per generated PVC name].

Even after we move the pod to a different node, the NFS mount path is the same one as before.


That is the whole process of verifying the NFS mounts.

------------------------------------------------------------------------------------------------------------------------------------------------------

Summary: when a pod starts, the PVC it needs is detected; the StorageClass then dynamically creates the corresponding PV, and nfs-client handles the communication with nfs-server, sending the mount request to the NFS server to complete the PV's backing storage.


Reposted from blog.csdn.net/qq_16481385/article/details/105404984