CKA Exam Preparation Lab | NFS Storage

Book source: "CKA/CKAD Exam Guide: From Docker to Kubernetes, A Complete Guide"

These notes organize the instructor's course content and lab notes compiled while studying, shared here for everyone. Any infringing content will be removed. Thank you for your support!

See also the summary post: CKA Exam Preparation Labs.


With either emptyDir or hostPath storage, data lives only on the node where the pod runs. For example, in Figure 6-2, pod1 writes some data (aaabbb) to its volume, but that data is stored only on the worker1 node and is not synchronized to worker2.

If pod1 then fails, a Deployment (described later) automatically creates a replacement pod. If the new pod also runs on worker1, there is no problem; but if it runs on worker2, it cannot read the data, because the data sits on worker1, as shown in Figure 6-3.
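To make the problem concrete, here is a minimal hostPath pod sketch (not from the book; the image and paths are illustrative). Anything written to /xx lands in /data on whichever node runs the pod, so a replacement pod on a different node starts with an empty directory:

apiVersion: v1
kind: Pod
metadata:
  name: demo-hostpath
spec:
  volumes:
  - name: volume1
    hostPath:
      path: /data            # a local directory on the node's own disk
  containers:
  - image: busybox
    name: demo1
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'sleep 5000']
    volumeMounts:
    - name: volume1
      mountPath: /xx         # data written here stays on this node only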

Using network storage avoids this problem, as shown in Figure 6-4.

Here pod1 mounts a shared directory on the storage server: when data is written in pod1, it is actually written to the storage server. Suppose pod1 dies one day and the replacement pod runs on worker2, as shown in Figure 6-5.

The new pod also mounts the shared directory on the storage server and can still see the data that was already written. The following demonstrates using NFS as shared storage for pod data. NFS (Network File System) is used for file sharing between UNIX-like systems and is relatively simple to configure.

Step 1: Build an NFS server.

Build an NFS server yourself. In this environment, the NFS server is vms30 with IP 192.168.1.130, and the shared directory is /123. Pay attention to the share permissions (refer to the relevant documentation on setting up an NFS server).

########## Hands-on verification ##########
[root@vms30 ~]# yum install -y nfs-utils rpcbind
[root@vms30 ~]# mkdir /123
[root@vms30 ~]# chmod 666 /123
[root@vms30 ~]# chown nfsnobody /123
[root@vms30 ~]# cat /etc/exports
/123        *(rw,sync,no_root_squash)
[root@vms30 ~]# systemctl start rpcbind
[root@vms30 ~]# systemctl enable rpcbind
Created symlink from /etc/systemd/system/multi-user.target.wants/rpcbind.service to /usr/lib/systemd/system/rpcbind.service.
[root@vms30 ~]# systemctl start nfs
[root@vms30 ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@vms30 ~]#
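If you later change /etc/exports, the share list can be re-applied and checked without restarting the service (standard nfs-utils commands; the exact output depends on your environment):

exportfs -rav                 # re-read /etc/exports and apply the changes
showmount -e localhost        # confirm that /123 is exported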

Step 2: Install the NFS client packages on all nodes with yum install nfs-u* -y, and test that the shared directory can be mounted normally on every node.

########## Hands-on verification ##########
[root@vms10 volume]# yum install nfs-u* -y
[root@vms10 volume]# showmount -e 192.168.1.130
Export list for 192.168.1.130:
/123 *
[root@vms10 volume]# mount 192.168.1.130:/123 /mnt
[root@vms10 volume]# umount /mnt
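Optionally, before unmounting, verify write access from the node as well (a quick sanity check, not part of the original transcript; the file name is arbitrary):

mount 192.168.1.130:/123 /mnt
touch /mnt/testfile           # succeeds only if the export permissions allow writing
rm -f /mnt/testfile
umount /mnt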

Step 3: Create a pod yaml file nfs.yaml with the following content.

########## Hands-on verification ##########
[root@vms10 volume]# cat nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx 
  name: demo
spec:
  volumes:
  - name: volume1
    nfs:
      server: 192.168.1.130
      path: "/123"
  containers:
  - image: busybox 
    name: demo1
    imagePullPolicy: IfNotPresent 
    command: ['sh', '-c', 'sleep 5000']
    volumeMounts:
    - name: volume1
      mountPath: /xx 
[root@vms10 volume]#

A volume named volume1 of type NFS is defined here. The address of the NFS server is 192.168.1.130 (specified by server), and the shared directory on the NFS server is /123.

In the container demo1, the volume volume1 is mounted at the container directory /xx. In effect, /xx inside the container mounts 192.168.1.130:/123. If /xx does not exist, it is created automatically.
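If a pod should only read the shared data, the NFS volume also supports a readOnly field (a standard Kubernetes option, sketched here against the same server and path):

  volumes:
  - name: volume1
    nfs:
      server: 192.168.1.130
      path: "/123"
      readOnly: true          # the mount inside the pod is read-only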

Step 4: Create and view pods.

########## Hands-on verification ##########
[root@vms10 volume]# kubectl apply -f nfs.yaml 
pod/demo created
[root@vms10 volume]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          5s
[root@vms10 volume]#
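You can also confirm that the pod picked up the NFS volume via kubectl describe (the Volumes section lists the NFS server and path; the exact layout varies by Kubernetes version):

kubectl describe pod demo | grep -A 4 Volumes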

Step 5: Copy a file to this pod.

########## Hands-on verification ##########
[root@vms10 volume]# kubectl cp /etc/hosts demo:/xx
[root@vms10 volume]#
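Before switching to the NFS server, the file can also be checked from inside the pod (not part of the original transcript):

kubectl exec demo -- ls /xx    # should list: hosts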

Step 6: Switch to vms30 and check the data in /123.

########## Hands-on verification ##########
[root@vms30 ~]#  ls /123
hosts
[root@vms30 ~]#

As you can see, data written to /xx in the pod ultimately lands in the NFS shared directory.

Step 7: Delete this pod.
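For example, either of the following works for the pod created from nfs.yaml (note that the hosts file remains in /123 on the NFS server, since the volume's lifecycle is independent of the pod):

kubectl delete -f nfs.yaml
# or: kubectl delete pod demo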

Origin blog.csdn.net/guolianggsta/article/details/131473661