K8S persistent storage - NFS

The Kubernetes PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. K8S introduces two new API resources for this: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator, or dynamically provisioned using a storage class. It is a cluster resource, just as a node is a cluster resource. A PV is a volume plugin like regular volumes, but it has a lifecycle independent of any individual Pod that uses it. This API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. Pods consume node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); PVCs can request a specific size and access modes (for example, a volume can be mounted read/write once or read-only many times).

What is NFS?

NFS is the abbreviation for Network File System. Its biggest feature is that, over the network, different machines running different operating systems can share each other's files.

The NFS server allows client machines on the network to mount its shared directories into their local file systems. From the local system's point of view, the remote host's directory looks just like one of its own disk partitions.

NFS workflow

  1. First, the server starts the RPC service (rpcbind) and opens port 111;
  2. The server starts the NFS service and registers its ports with the RPC service;
  3. The client contacts the server's RPC (portmap) service to ask which port the NFS service is using;
  4. The server's RPC (portmap) service returns the NFS port information to the client;
  5. The client establishes a connection with the NFS server on the acquired port and transfers data over it.
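The two well-known ports in these steps can be checked on almost any Linux host without installing NFS; a minimal sketch, assuming /etc/services is present (as it is on mainstream distributions):

```shell
# rpcbind (sunrpc) is fixed at port 111 (step 1); the NFS service itself
# conventionally registers port 2049 (step 2). Both appear in /etc/services.
grep -wE 'sunrpc|nfs' /etc/services | head
```

On a live server, `rpcinfo -p <server>` performs the real query from step 3, asking the portmapper which ports each registered RPC service is using.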

Deploying an NFS server under Linux: required software and main configuration files

To run the NFS service, two packages need to be installed:

RPC main program: rpcbind

NFS can actually be regarded as an RPC service. Before any RPC service starts, the mapping between services and their working ports must be set up, and that job is handled by the rpcbind service. In other words, rpcbind must be started before any RPC service. (Up to CentOS 5.x this software was called portmap; from CentOS 6.x onward it is called rpcbind.)

NFS main program: nfs-utils

This package provides the NFS daemons rpc.nfsd and rpc.mountd, along with related executables and documentation. It is the main software the NFS service needs.

NFS-related files:

Main configuration file: /etc/exports
This is the main NFS configuration file. It may be empty, and on some systems it may not exist at all and has to be created manually. NFS is generally configured in this file.
NFS share maintenance command: /usr/sbin/exportfs
This command maintains the NFS shared resources. After changing /etc/exports you can use it to re-export the modified directories, or to unexport and re-export the NFS server's shares without restarting the service.
Share log files: /var/lib/nfs/*tab
The NFS server's log files are kept under /var/lib/nfs/. Two files in that directory are especially important: etab records the complete permission settings of each NFS shared directory, while xtab records data about the clients that have connected to this NFS server.
Client command to query a server's shares: /usr/sbin/showmount
This is another important NFS command. exportfs is used on the NFS server, while showmount is mainly used on the client to list the directory resources an NFS server is sharing.

Linux Server Configuration

Deploy a separate CentOS 7 machine, IP: 172.16.7.100

Step 1: Install NFS and RPC

[root@nfs-server ~]# yum install -y nfs-utils   # install the NFS service
[root@nfs-server ~]# yum install -y rpcbind     # install the RPC service

Step 2: Start the services and enable them at boot

[root@nfs-server ~]# systemctl start rpcbind   
[root@nfs-server ~]# systemctl enable rpcbind.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/rpcbind.service to /usr/lib/systemd/system/rpcbind.service.
[root@nfs-server ~]# systemctl start nfs-server
[root@nfs-server ~]# systemctl start nfs-secure-server
[root@nfs-server ~]# systemctl enable nfs-secure
[root@nfs-server ~]# firewall-cmd --permanent --add-service=nfs
success
[root@nfs-server ~]#  firewall-cmd  --reload 
success

Step 3: Create the shared directory and edit the configuration file

[root@nfs-server ~]#  mkdir /public
[root@nfs-server ~]# vim /etc/exports
/public    172.16.7.0/24(rw,sync,no_root_squash,no_all_squash)

Configuration file notes:

Format: path-of-shared-directory  NFS-clients-allowed-to-access(share permission parameters)
As above, the shared directory is /public, the allowed clients are the hosts on the 172.16.7.0/24 network, and the permission is read/write.
Note that there is no space between the NFS client address and the permission parameters.
NFS export protection can use Kerberos security flavors (none, sys, krb5, krb5i, krb5p), in the format sec=XXX:
none: anonymous access; to allow writes, requests are mapped to the nfsnobody user, and the SELinux boolean must also be switched on: setsebool nfsd_anon_write 1
sys: standard file access based on UID/GID; this is the default when nothing is specified, and it trusts whatever user identity the client sends
krb5: the client must provide a Kerberos (krb5) identity; authentication is based on a domain environment
krb5i: builds on krb5 by adding integrity protection, so user credentials are protected and tampering is detectable, but the transmitted data itself is not encrypted
krb5p: all data is encrypted as well
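To make the `shared-directory client(options)` format concrete, the following self-contained sketch writes two hypothetical export entries to a temporary file and splits each into its parts (the paths, clients, and options are made up for illustration):

```shell
# Two hypothetical /etc/exports-style entries, written to a temp file.
cat > /tmp/exports.sample <<'EOF'
/public 172.16.7.0/24(rw,sync,no_root_squash)
/secure *.example.com(rw,sync,sec=krb5p)
EOF

# Each line is: <shared directory> <client>(<comma-separated options>)
awk '{ split($2, a, "("); sub(/\)$/, "", a[2]);
       printf "path=%s client=%s options=%s\n", $1, a[1], a[2] }' /tmp/exports.sample
# → path=/public client=172.16.7.0/24 options=rw,sync,no_root_squash
# → path=/secure client=*.example.com options=rw,sync,sec=krb5p
```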

Parameters available in the NFS service configuration file:


parameter        effect
ro               read-only access
rw               read-write access
root_squash      when a client accesses as the root user, map it to the anonymous user on the NFS server
no_root_squash   when a client accesses as the root user, map it to the root user on the NFS server
all_squash       map every client account, whatever it is, to the anonymous user on the NFS server
sync             write data to memory and to disk at the same time, so that no data is lost
async            save data to memory first and write it to disk later; more efficient, but data may be lost
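As a sketch of how these parameters combine (the second network here is hypothetical), a single exports line may carry several client(options) pairs, for example exporting the same directory read-write with root preserved to one network, and read-only with every account squashed to another:

```
/public  172.16.7.0/24(rw,sync,no_root_squash)  192.168.1.0/24(ro,sync,all_squash)
```

After editing /etc/exports, reload the service or run `exportfs -arv` to apply the change.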
[root@nfs-server ~]# systemctl reload nfs 
[root@nfs-server ~]# showmount -e localhost
Export list for localhost:
/public 172.16.7.0/24

Step 4: Verification on the client

Use the showmount command to check the NFS server's share information. The output format is "shared directory name  allowed client addresses".

[root@k8s-node3 ~]# showmount -e 172.16.7.100
Export list for 172.16.7.100:
/public 172.16.7.0/24
[root@k8s-node3 ~]# mkdir /nfs
[root@k8s-node3 ~]# mount -t nfs 172.16.7.100:/public /nfs
[root@k8s-node3 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 979M     0  979M   0% /dev
······
172.16.7.100:/public      18G  3.7G   15G  21% /nfs
[root@k8s-node3 ~]# umount /nfs

showmount command parameters

parameter  effect
-e         display the NFS server's export list
-a         display the NFS resources currently mounted on the local host
-v         display the version number

You can also make the client mount the shared directory permanently by adding it to /etc/fstab:

[root@k8s-node3 ~]# vim /etc/fstab 
# Mount it in this file so the system mounts it automatically at every boot
172.16.7.100:/public  /nfs       nfs    defaults 0 0
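An fstab entry has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass order. A minimal shell sketch splits out the entry used above:

```shell
# Word-split a sample fstab line into its six standard fields.
line='172.16.7.100:/public  /nfs  nfs  defaults  0  0'
set -- $line   # unquoted expansion splits on whitespace
echo "device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 pass=$6"
# → device=172.16.7.100:/public mountpoint=/nfs fstype=nfs options=defaults dump=0 pass=0
```

After adding the entry, `mount -a` mounts everything listed in /etc/fstab that is not already mounted, which is a quick way to test the line without rebooting.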

Creating NFS persistent storage in K8S

1. Create PV and PVC

Create and apply pv-test.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.16.7.100
    path: /public
[root@k8s-master ~]# kubectl apply -f pv-test.yaml 
persistentvolume/nfs created
[root@k8s-master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs    1Gi        RWX            Retain           Available           manual                  6s
[root@k8s-master ~]# kubectl describe pv nfs       
Name:            nfs
Labels:          <none>
Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                  {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"nfs"},"spec":{"accessModes":["ReadWriteMany"],"capacity"...
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
   Type:      NFS (an NFS mount that lasts the lifetime of a pod)
   Server:    172.16.7.100
   Path:      /public
   ReadOnly:  false
Events:        <none>

Create and apply pvc-test.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi                     
[root@k8s-master ~]# kubectl apply -f pvc-test.yaml 
persistentvolumeclaim/nfs created
[root@k8s-master ~]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs    Bound    nfs      1Gi        RWX            manual         7s
[root@k8s-master ~]# kubectl describe pvc nfs 
Name:          nfs
Namespace:     default
StorageClass:  manual
Status:        Bound
Volume:        nfs
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"nfs","namespace":"default"},"spec":{"accessModes":[...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    <none>
Events:        <none>

2. Create the Pod yaml file and use the nfs PVC

apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: default
  labels:
    role: web
spec:
  containers:
  - name: web
    image: nginx:latest
    ports:
      - name: web
        containerPort: 80
    volumeMounts:
      - name: nfs
        mountPath: "/usr/share/nginx/html"
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs
[root@k8s-master ~]# vim pod-nfs.yaml              
[root@k8s-master ~]# kubectl apply -f pod-nfs.yaml 
pod/web created
[root@k8s-master ~]# kubectl get pod web 
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          8m51s
[root@k8s-master ~]# kubectl describe pod web             
Name:         web
Namespace:    default
Priority:     0
Node:         k8s-node2/172.16.7.12
Start Time:   Fri, 14 Feb 2020 15:10:54 +0800
Labels:       role=web
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"role":"web"},"name":"web","namespace":"default"},"spec":{"containe...
              podpreset.admission.kubernetes.io/podpreset-cache-pod-perset: 773827
Status:       Running
IP:           10.244.2.42
IPs:
  IP:  10.244.2.42
Containers:
  web:
    Container ID:   docker://efe845b2c5ef89642f991a576afd2aa8021f294a913b55db0b20f8091497c8f0
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 14 Feb 2020 15:11:01 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /cache from cache-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6kq6 (ro)
      /usr/share/nginx/html from nfs (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs
    ReadOnly:   false
  cache-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-r6kq6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r6kq6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  9m13s  default-scheduler   Successfully assigned default/web to k8s-node2
  Normal  Pulling    9m9s   kubelet, k8s-node2  Pulling image "nginx:latest"
  Normal  Pulled     9m6s   kubelet, k8s-node2  Successfully pulled image "nginx:latest"
  Normal  Created    9m6s   kubelet, k8s-node2  Created container web
  Normal  Started    9m6s   kubelet, k8s-node2  Started container web

Verification

[root@k8s-master ~]# kubectl get pod web -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
web    1/1     Running   0          25m   10.244.2.42   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl exec -it web  bash
root@web:/# ls /usr/share/nginx/html/
root@web:/# touch /usr/share/nginx/html/test
root@web:/# ls /usr/share/nginx/html/       
test

Check on the NFS server:

[root@nfs-server ~]# ls /public/
test



Reference:
"NFS服务器搭建与配置" (Building and Configuring an NFS Server), by CSDN blogger 曹世宏的博客
Original link: https://blog.csdn.net/qq_38265137/article/details/83146421