Kubernetes PV/PVC with NFS and Nginx in practice: persistent volumes and persistent volume claims

Why we need PV and PVC

Stateful applications need to persist their data. We have already persisted data with hostPath or emptyDir volumes, but application data clearly needs more reliable storage, so that after a container is rebuilt it can still use its previous data. Storage is also quite different from CPU and memory resources. To hide the underlying implementation details and make storage easier for users to consume, Kubernetes introduces two important resource objects to manage storage: PV and PVC. These are the core of this lesson.

Concept
PV stands for PersistentVolume (persistent volume). It is an abstraction of the underlying shared storage and is created and configured by an administrator. A PV is tied to the concrete technology of the underlying shared storage, such as Ceph, GlusterFS, or NFS, and integrates with that shared storage through a plug-in mechanism.

PVC stands for PersistentVolumeClaim (persistent volume claim). A PVC is a user's request for storage. The analogy with Pods is helpful: a Pod consumes node resources, while a PVC consumes PV resources; a Pod can request CPU and memory, while a PVC can request a specific storage size and access mode. Users do not need to care about the details of the underlying storage implementation; they simply use the PVC directly.

However, a PVC requesting a certain amount of storage space may still not capture everything an application needs from storage, and different applications may have different storage performance requirements, such as access speed or concurrency. To solve this problem, Kubernetes introduces another resource object: StorageClass. By defining StorageClasses, administrators can define classes of storage resources, such as fast storage and slow storage. From the StorageClass description, users can intuitively understand the characteristics of each class of storage and then request the right storage resources according to their application's needs.
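Just to make the idea concrete, here is a minimal sketch of a StorageClass definition. It is not used in the rest of this tutorial, and the name slow-nfs and the no-provisioner setting (static provisioning only, no dynamic provisioner) are assumptions for illustration:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow-nfs
provisioner: kubernetes.io/no-provisioner   # PVs of this class are created manually (statically)
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer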

Installing NFS

For convenience of demonstration, we use the relatively simple NFS as shared storage. We install the NFS service on node 10.129.247.241, with /data/k8s/ as the data directory.

Turn off the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

Install and configure NFS

yum -y install nfs-utils rpcbind

Create the shared directory and set its permissions:

mkdir -p /data/k8s
chmod 755 /data/k8s/

Configure NFS. The default NFS configuration file is /etc/exports; add the following entry to it:

vi /etc/exports
/data/k8s  *(rw,sync,no_root_squash)

Explanation of the configuration:

/data/k8s: the shared data directory
*: any client may connect; this can also be restricted to a network segment, an IP address, or a domain name
rw: read-write permission
sync: data is written to disk and to memory synchronously
no_root_squash: when the user accessing the shared directory from an NFS client is root, keep root's privileges instead of squashing them. (Without this option, root would be mapped to an anonymous user, typically with the UID and GID of nobody.)
There are of course many more NFS options; interested readers can look them up online.
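As a side note (not strictly needed here, since we restart the NFS service below): if /etc/exports is edited again while NFS is already running, the standard nfs-utils exportfs command can reload the exports without a restart:

$ exportfs -r      # re-export everything listed in /etc/exports
$ exportfs -v      # show the current exports and their options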

The NFS service needs to register itself with rpcbind. If rpcbind is restarted, its registration table is lost and the services registered with it must be restarted as well, so pay attention to the startup order: start rpcbind first.

$ systemctl start rpcbind.service
$ systemctl enable rpcbind
$ systemctl status rpcbind

Then start the NFS service:

$ systemctl start nfs.service
$ systemctl enable nfs
$ systemctl status nfs
[root@k8smaster k8s]# systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Fri 2020-01-10 09:53:13 CST; 1 weeks 2 days ago
 Main PID: 4090 (code=exited, status=0/SUCCESS)
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/nfs-server.service

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

This shows that the NFS server has started successfully.

We can also confirm it with the following command:

[root@k8smaster k8s]# rpcinfo -p|grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl

Check the export options and permissions of the shared directory:

[root@k8smaster k8s]# cat /var/lib/nfs/etab
/data/k8s       *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)

With that, the NFS server is installed successfully. Next, we install the NFS client on node 10.129.247.242 to verify NFS.

The client also needs the firewall turned off:

$ systemctl stop firewalld.service
$ systemctl disable firewalld.service

Then install NFS:

$ yum -y install nfs-utils rpcbind

After installation, start rpcbind first and then NFS, just as above:

$ systemctl start rpcbind.service 
$ systemctl enable rpcbind.service 
$ systemctl start nfs.service    
$ systemctl enable nfs.service

With the client services started, let's test mounting the NFS share on the client.
First check which directories the NFS server exports:

[root@k8smaster k8s]# showmount -e 10.129.247.241
Export list for 10.129.247.241:
/data/k8s *

Then we create a new directory on the client:

$ mkdir -p /root/course/kubeadm/data

Mount the NFS shared directory onto the directory we just created:

$ mount -t nfs 10.129.247.241:/data/k8s /root/course/kubeadm/data

After a successful mount, create a new file in that directory on the client, and then check whether the file appears in the shared directory on the NFS server:

$ touch /root/course/kubeadm/data/test.txt

Then check on the NFS server:

$ ls -ls /data/k8s/
total 4

If test.txt shows up there, our NFS mount works.
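To double-check from the client side that the mount is really backed by NFS (using only the paths and IP from above), the mount table can be inspected:

$ mount | grep /root/course/kubeadm/data
$ df -h /root/course/kubeadm/data

Both should report 10.129.247.241:/data/k8s as the source of the mount.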

# The commands used above, collected for reference
systemctl disable firewalld.service
systemctl stop firewalld.service
yum -y install nfs-utils rpcbind
mkdir -p /data/k8s
chmod 755 /data/k8s/
vi /etc/exports
systemctl start rpcbind.service
systemctl enable rpcbind
systemctl status rpcbind
systemctl start nfs.service
systemctl enable nfs
systemctl status nfs
systemctl disable firewalld.service
systemctl stop firewalld.service
yum -y install nfs-utils rpcbind
systemctl start rpcbind.service 
systemctl enable rpcbind.service 
systemctl start nfs.service    
systemctl enable nfs.service
showmount -e 10.129.247.241
mkdir -p /root/course/kubeadm/data
mount -t nfs 10.129.247.241:/data/k8s /root/course/kubeadm/data
touch /root/course/kubeadm/data/test.txt

PV

With the NFS shared storage above in place, we can now use PV and PVC. A PV, as a storage resource, mainly carries key information such as storage capacity, access modes, storage type, and reclaim policy. Let's create a PV object that uses NFS as the backend storage, with 1Gi of storage, the ReadWriteOnce access mode, and the Recycle reclaim policy. The corresponding YAML file is as follows (pv1-demo.yaml):


apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv1
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s
    server: 10.129.247.241

PVC

Next we declare a PVC that requests 1Gi of storage with the ReadWriteOnce access mode; Kubernetes will bind it to a matching PV (our pv1):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
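To create the claim and check that it binds, the usual kubectl workflow applies. The file name pvc-nfs-demo.yaml below is an assumption (the original does not name the file):

$ kubectl create -f pvc-nfs-demo.yaml
$ kubectl get pvc
$ kubectl get pv

The pvc-nfs claim should show STATUS Bound, and pv1 should change from Available to Bound with CLAIM default/pvc-nfs.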

Using the PVC

The Deployment below mounts our PVC into nginx at /usr/share/nginx/html, and a NodePort Service exposes it. (Note: apiVersion extensions/v1beta1 still works on this v1.15 cluster; on v1.16 and later the Deployment would need apps/v1 and an explicit spec.selector.)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-pvc
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nfs-pvc
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs

---

apiVersion: v1
kind: Service
metadata:
  name: nfs-pvc
  labels:
    app: nfs-pvc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: web
  selector:
    app: nfs-pvc
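
Save the Deployment and Service above to a file and create them; the file name nfs-pvc-deploy.yaml here is an assumption. Then take a look at the Pods, nodes, and the Service:

$ kubectl create -f nfs-pvc-deploy.yaml
$ kubectl get pods -l app=nfs-pvc
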
[root@k8smaster k8s]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8smaster   Ready    master   46h   v1.15.4   10.129.247.241   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
k8snode1    Ready    <none>   46h   v1.15.4   10.129.247.242   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
k8snode2    Ready    <none>   46h   v1.15.4   10.129.247.243   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
[root@k8smaster k8s]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        46h
nfs-pvc      NodePort    10.108.174.179   <none>        80:13507/TCP   4m26s

Create an index.html file in the /data/k8s/ directory on the NFS server. Because /usr/share/nginx/html in every Pod is mounted from the PVC backed by our NFS share, nginx will serve this file.
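For example (the content simply matches the curl result shown below):

$ echo "hello pvc" > /data/k8s/index.html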

Access via the NodePort (port 13507), from outside or inside the cluster:

http://10.129.247.241:13507/
http://10.129.247.242:13507/
http://10.129.247.243:13507/

Access via the ClusterIP:

[root@k8smaster k8s]# curl http://10.108.174.179/
hello pvc

Kubernetes supports many PV types, such as the common Ceph, GlusterFS, and NFS; even HostPath can be used, although as mentioned before HostPath is only suitable for single-node testing. More supported types are listed in the official Kubernetes PV documentation. Each storage type has its own characteristics, so consult the documentation for the corresponding parameters when using one.

Back to our pv1: as with other resources, it can be created directly with kubectl:

$ kubectl create -f pv1-demo.yaml
persistentvolume "pv1" created
$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON    AGE
pv1       1Gi        RWO            Recycle          Available                                                12s

We can see that pv1 has been created successfully and its status is Available, which means it is ready to be claimed by a PVC. Let's interpret each of the attributes above.
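For a more verbose view of the same fields, kubectl describe can also be used:

$ kubectl describe pv pv1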

Capacity (storage capacity)
Generally a PV specifies its storage capacity through the capacity attribute. Currently only the storage size can be set (storage=1Gi here), but settings such as IOPS and throughput may be added in the future.

AccessModes (access modes)
AccessModes sets the PV's access modes and describes how applications can access the storage resource. The following modes are supported:

ReadWriteOnce (RWO): read-write, but the volume can be mounted by only a single node
ReadOnlyMany (ROX): read-only, the volume can be mounted by multiple nodes
ReadWriteMany (RWX): read-write, the volume can be mounted by multiple nodes
Note: some PVs support multiple access modes, but only one access mode can be used when the volume is mounted; multiple modes do not take effect at the same time.

The access modes supported by common volume plugins are summarized in the official documentation. (The original article showed a table here: volume-accessmodes.)

persistentVolumeReclaimPolicy (reclaim policy)
The reclaim policy specified for this PV is Recycle. PVs currently support three policies:

Retain - keep the data; it has to be cleaned up manually
Recycle - scrub the data in the PV, equivalent to running rm -rf /thevolume/*
Delete - delete the volume together with the connected backend storage; this is common with cloud storage providers, such as AWS EBS
Note that currently only NFS and HostPath support the Recycle policy. In general, Retain is the safer choice.
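If you later decide that Retain is safer for a PV that already exists, the reclaim policy can be changed in place with the standard kubectl patch syntax (shown here for our pv1 as a sketch):

$ kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
$ kubectl get pv pv1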
