K8s Practical Notes (18): k8s Storage - NFS
1 Introduction to NFS
NFS is short for Network File System. NFS is one of the file systems supported by FreeBSD and is implemented on top of RPC (Remote Procedure Call), which allows a system to share directories and files with other hosts on the network. With NFS, users and programs can access files on remote systems as if they were local. NFS is a stable and portable network file system; with features such as scalability and high performance, it meets enterprise-grade quality standards. As network speeds have increased and latency has decreased, NFS has remained a competitive choice for providing file system services over the network.
1.1 NFS principle
NFS is implemented using the RPC (Remote Procedure Call) mechanism, which allows the client to invoke functions on the server. Thanks to the VFS layer, the client can use an NFS file system just like any other ordinary file system. The operating system kernel forwards NFS file system calls to the server's NFS service over TCP/IP; the NFS server performs the requested operations and returns the results to the client.
The main processes of the NFS service include:
- rpc.nfsd: the core NFS process; manages whether clients can log in
- rpc.mountd: mounts and unmounts NFS file systems, including permission management
- rpc.lockd: optional; manages file locks to avoid concurrent-write errors
- rpc.statd: optional; checks file consistency and repairs files
The key files and tools of NFS include:
- Main configuration file: /etc/exports
- NFS file system maintenance command: /usr/sbin/exportfs
- Log files of shared resources: /var/lib/nfs/*tab
- Client command for querying shared resources: /usr/sbin/showmount
- Port configuration: /etc/sysconfig/nfs
1.2 Shared configuration
The main configuration file of the NFS server is /etc/exports; shared directories are set through this file. Each configuration record consists of three parts: the NFS shared directory, the NFS client address(es), and the parameters. The format is as follows:
[NFS shared directory] [NFS client address 1(parameter 1,parameter 2,parameter 3...)] [client address 2(parameter 1,parameter 2,parameter 3...)]
- NFS shared directory: the directory shared on the server;
- NFS client address: the client addresses allowed to access the NFS server; this can be a single client IP address or a network segment (e.g. 192.168.64.0/24);
- Access parameters: comma-separated items inside the parentheses, mainly permission options.
1) Access permission parameters
2) User mapping parameters
3) Other configuration parameters
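As a minimal sketch of a complete record (the shared path and client addresses below are illustrative), a typical /etc/exports entry combines a directory, one or more client addresses, and permission parameters:

```text
# Share /k8s-nfs read-write with the 192.168.64.0/24 segment, read-only with one host.
# rw: read-write; ro: read-only; sync: commit writes to disk before replying;
# no_root_squash: do not map the client's root user to an anonymous user.
/k8s-nfs 192.168.64.0/24(rw,sync,no_root_squash) 192.168.64.200(ro,sync)
```

The user-mapping options (root_squash, no_root_squash, all_squash) control how client user identities are translated on the server; no_root_squash is convenient for testing but weakens security.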
2 NFS server configuration
Before NFS can be used as a network file storage system: first, install the nfs and rpcbind services; next, create the user that will own the shared directory; then, configure the shared directory, which is the most important and involved step; finally, start the rpcbind and nfs services so applications can use them.
2.1 Install nfs service
1) Install the nfs and rpcbind services via yum:
yum -y install nfs-utils rpcbind
2) Check whether the nfs service was installed correctly:
rpm -qa nfs-utils rpcbind
2.2 Create user
Add a user for the NFS service, create the shared directory, and grant the user access permissions on the shared directory (the user name and path below are illustrative):
useradd nfsuser
mkdir -p /k8s-nfs
chown -R nfsuser:nfsuser /k8s-nfs
2.3 Configure the shared directory
Configure the shared directory for clients on the nfs server by adding a record to /etc/exports (the path and the allowed clients are illustrative):
echo "/k8s-nfs *(rw,sync,no_root_squash)" >> /etc/exports
Make the configuration take effect by executing the following command:
exportfs -r
2.4 Start service
1) The rpcbind service must be started before the nfs service, so that the nfs service can register itself with rpcbind:
systemctl start rpcbind
2) Start the nfs service:
systemctl start nfs-server
3) Set rpcbind and nfs-server to start on boot:
systemctl enable rpcbind
systemctl enable nfs-server
2.5 Check whether the nfs service started normally
rpcinfo -p localhost
showmount -e localhost
3 NFS as Volume
nfs can be used directly as a storage volume. Below is a YAML configuration file for a redis Deployment. In this example, the persistent data of redis in the container is stored under the /data directory; the storage volume uses nfs, the nfs service address is 192.168.8.150, and the storage path is /k8s-nfs/redis/data. The container selects the storage volume through the value of volumeMounts.name.
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: redis
spec:
selector:
matchLabels:
app: redis
revisionHistoryLimit: 2
template:
metadata:
labels:
app: redis
spec:
containers:
# image for the application
- image: redis
name: redis
imagePullPolicy: IfNotPresent
# internal port of the application
ports:
- containerPort: 6379
name: redis6379
env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
- name: REDIS_PASSWORD
value: "redis"
# mount point for persistent data inside the container
volumeMounts:
- name: redis-persistent-storage
mountPath: /data
volumes:
# NFS share backing the volume
- name: redis-persistent-storage
nfs:
path: /k8s-nfs/redis/data
server: 192.168.8.150
4 NFS as PersistentVolume
In the current version of Kubernetes, a persistent storage volume of type nfs can be created to provide storage for a PersistentVolumeClaim. The PersistentVolume YAML configuration file below defines a persistent storage volume named nfs-pv. This storage volume provides 5Gi of storage space and can be mounted read-write by a single node (ReadWriteOnce). The NFS server address used by this persistent storage volume is 192.168.5.150, and the storage path is /tmp.
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
# this persistent volume uses the nfs plugin
nfs:
# the nfs shared directory is /tmp
path: /tmp
# the address of the nfs server
server: 192.168.5.150
The above persistent storage volume can be created by executing the following command:
kubectl create -f {path}/nfs-pv.yaml
After the storage volume is created successfully, it enters the Available state, waiting for a PersistentVolumeClaim to use it. A PersistentVolumeClaim automatically selects an appropriate storage volume based on access mode and storage space, and binds to it.
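For the nfs-pv volume above to be selected, a claim needs compatible accessModes, a storage request no larger than the volume's capacity, and the same storageClassName. A minimal sketch of such a claim (the name nfs-static-pvc is illustrative, not from the original text):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-static-pvc
spec:
  # must match the storageClassName of nfs-pv
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Once bound, the claim appears in the CLAIM column of kubectl get pv.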
5 NFS as a dynamic storage provider
5.1 Deploy nfs-provisioner
Select a storage volume for the nfs-provisioner instance to store its state and data, and mount that volume at the container's /export directory.
...
volumeMounts:
- name: export-volume
mountPath: /export
volumes:
- name: export-volume
hostPath:
path: /tmp/nfs-provisioner
...
Choose a provisioner name for the StorageClass and set it in deploy/kubernetes/deployment.yaml.
args:
- "-provisioner=example.com/nfs"
...
The content of the complete deployment.yaml file is as follows:
kind: Service
apiVersion: v1
metadata:
name: nfs-provisioner
labels:
app: nfs-provisioner
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
- name: rpcbind-udp
port: 111
protocol: UDP
selector:
app: nfs-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner
  strategy:
    type: Recreate
template:
metadata:
labels:
app: nfs-provisioner
spec:
containers:
- name: nfs-provisioner
image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.8
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
- name: rpcbind-udp
containerPort: 111
protocol: UDP
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
- SYS_RESOURCE
args:
# define the provisioner name; the StorageClass refers to the provisioner by this name
- "-provisioner=nfs-provisioner"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_NAME
value: nfs-provisioner
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: export-volume
mountPath: /export
volumes:
- name: export-volume
hostPath:
path: /srv
After setting up the deploy/kubernetes/deployment.yaml file, deploy the nfs-provisioner in the Kubernetes cluster through the kubectl create command.
kubectl create -f {path}/deployment.yaml
5.2 Create StorageClass
The following is a StorageClass configuration file. It defines a storage class named nfs-storageclass, whose provisioner is nfs-provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-storageclass
provisioner: nfs-provisioner
Create the storage class from the above configuration file with the kubectl create -f command:
# kubectl create -f deploy/kubernetes/class.yaml
storageclass "nfs-storageclass" created
After the storage class is created, you can create a PersistentVolumeClaim that requests this StorageClass, and the StorageClass will automatically create a matching PersistentVolume for the PersistentVolumeClaim.
5.3 Create PersistentVolumeClaim
A PersistentVolumeClaim is a claim on a PersistentVolume: the PersistentVolume is the storage provider, and the PersistentVolumeClaim is the storage consumer. The following is the YAML configuration file of a PersistentVolumeClaim. This configuration file specifies the storage class through the volume.beta.kubernetes.io/storage-class field under metadata.annotations.
In this configuration file, the nfs-storageclass storage class is used to create a PersistentVolume for the PersistentVolumeClaim. The requested storage space is 1Mi, and the volume can be mounted read-write by multiple nodes (ReadWriteMany).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
annotations:
volume.beta.kubernetes.io/storage-class: "nfs-storageclass"
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
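In newer Kubernetes versions, the volume.beta.kubernetes.io/storage-class annotation has been superseded by the spec.storageClassName field. The same claim can be written equivalently as:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  # storageClassName replaces the deprecated storage-class annotation
  storageClassName: nfs-storageclass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```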
Create the aforementioned persistent storage volume claim through the kubectl create command:
kubectl create -f {path}/nfs-pvc.yaml
5.4 Create a deployment using PersistentVolumeClaim
Here we define a Deployment YAML configuration file named busybox-deployment; the image used is busybox. Containers based on the busybox image need to persist data under the /mnt directory. The YAML file references the PersistentVolumeClaim named nfs-pvc to persist the container's data.
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      name: busybox-deployment
template:
metadata:
labels:
name: busybox-deployment
spec:
containers:
- image: busybox
command:
- sh
- -c
- 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
imagePullPolicy: IfNotPresent
name: busybox
volumeMounts:
# name must match the volume name below
- name: nfs
mountPath: "/mnt"
volumes:
- name: nfs
persistentVolumeClaim:
claimName: nfs-pvc
Create the busybox-deployment deployment via kubectl create:
kubectl create -f {path}/busybox-deployment.yaml