K8s storage volumes:
Kubernetes supports several categories of storage volumes:
1. emptyDir: an empty directory whose contents are deleted together with the Pod. It is generally used as a cache or temporary
directory. When used as a cache, a region of memory is usually mapped to the directory so the Pod can use it as a fast cache (a small sketch follows after this list).
2. hostPath: a directory (or file) on the host node used as the volume.
3. Network and shared storage:
   SAN (storage area network): iSCSI, FC
   NAS (network attached storage): NFS, CIFS
   Distributed storage: GlusterFS, Ceph (RBD), CephFS
   Cloud storage: EBS (Elastic Block Store), Azure Disk
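As mentioned in item 1, here is a minimal sketch of a memory-backed emptyDir cache; it is a volumes stanza to slot into a Pod spec like the example below, and the volume name "cache" is purely illustrative:

  volumes:
  - name: cache
    emptyDir:
      medium: Memory       # back the emptyDir with tmpfs so it behaves like a RAM cache
      sizeLimit: 128Mi     # optional cap on how much memory the cache may consume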
# emptyDir storage volume example:
vim pod-volume.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    magedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent        # image pull policy: pull the image only if it is not already present locally
    command: ["/bin/httpd", "-f", "-h", "/data/web/html"]
    ports:
    - name: http
      containerPort: 80
    volumeMounts:                        # a volume can be mounted by several containers in the Pod; whichever container needs it defines the mount
    - name: html
      mountPath: /data/web/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do echo $(date) >> /data/index.html; sleep 2; done"
  volumes:
  - name: html
    emptyDir: {}
# Create the Pod
kubectl apply -f pod-volume.yaml
kubectl get pods
kubectl exec -it pod-demo -c busybox -- /bin/sh    # attach to the container named busybox inside the pod-demo Pod
# Note:
When a Pod has multiple containers and an error occurs at startup, you can run
kubectl describe pods pod-demo    # to see the status of every container and confirm each one's running state.
gitRepo type storage volume:
gitRepo: when the container starts, this type of volume clones the data in a remote git repository (e.g. site code) into a local copy, and the container then serves that cloned data. While the container runs, the clone is not synchronized back to the git repository, and updates to the git repository are not synchronized into the container. To keep them in sync, i.e. to push git repository changes into the container's volume and push local changes back to the repository, you can run an auxiliary (sidecar) container that periodically pulls from and pushes to the git repository.
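A minimal sketch of a gitRepo volume (the repository URL is hypothetical; note that newer Kubernetes releases deprecate gitRepo volumes in favor of cloning with an init container into an emptyDir):

apiVersion: v1
kind: Pod
metadata:
  name: pod-gitrepo-vol
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: site-code
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: site-code
    gitRepo:
      repository: "https://github.com/example/site.git"   # hypothetical repository URL
      revision: "master"                                   # branch, tag, or commit to check out
      directory: "."                                       # clone into the volume root rather than a subdirectory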
hostPath type storage volume:
It supports roughly the following types:
1. DirectoryOrCreate: a directory on the host; it may already exist or not, and is created automatically if it does not.
2. Directory: must be a directory that already exists on the host.
3. FileOrCreate: a file on the host; if it does not exist, an empty file is created before mounting.
4. File: a file that already exists on the host; it is an error if it does not exist.
5. Socket: an existing Unix socket file on the host.
6. CharDevice: an existing character device file on the host.
7. BlockDevice: an existing block device file on the host.
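For instance, the Socket type is commonly used to expose the host's Docker socket to a container; a sketch of the volumes stanza (the volume name is illustrative, not from the example below):

  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
      type: Socket        # the Unix socket must already exist on the host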
# hostPath type storage volume example:
vim pod-hostpath-vol.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-hostpath-vol
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:                    # the directory on the host that backs the myapp container's storage volume
      path: /data/pod/volume1
      type: DirectoryOrCreate
Preparation before creating the Pod:
1. Because it is uncertain which node the Pod will be scheduled onto, create /data/pod/volume1 on both nodes,
then create a page file in that directory, deliberately giving the file different contents on the two nodes
so the difference can be observed later.
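For example (node names are hypothetical; run the equivalent on each node that may host the Pod):

# on node01:
mkdir -p /data/pod/volume1
echo "<h1>hostPath page from node01</h1>" > /data/pod/volume1/index.html
# on node02, same directory but different content:
mkdir -p /data/pod/volume1
echo "<h1>hostPath page from node02</h1>" > /data/pod/volume1/index.html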
# Create the Pod
kubectl apply -f pod-hostpath-vol.yaml
kubectl get pods -o wide
curl http://Pod_IP
# Delete the Pod, check whether it gets scheduled onto the second node, and keep requesting Pod_IP; you will find it is still accessible.
# However, this only achieves node-level data persistence; if the node itself goes down, the data is still not guaranteed!
kubectl delete -f pod-hostpath-vol.yaml
NFS type network shared storage volume:
1. Preliminary preparation
node10:
Start the NFS service on a host outside the cluster so that it provides the shared storage.
yum install -y nfs-utils
vim /etc/exports
/data/volumes 192.168.111.0/24(rw,no_root_squash)
mkdir -pv /data/volumes
systemctl start nfs
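To apply and verify the export after editing /etc/exports, an optional quick check on node10:

exportfs -arv            # re-export everything declared in /etc/exports
showmount -e localhost   # confirm that /data/volumes is being exported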
2. On both hosts in the cluster, test mounting the NFS share; if the hosts can mount it, Pods running on them will be able to use the NFS shared storage.
yum install -y nfs-utils
mount -t nfs node10:/data/volumes /mnt    # if the mount succeeds, it can be unmounted again afterwards.
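Once the test mount has been verified, it can be removed again, for example:

umount /mnt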
3. Create the configuration manifest for the NFS shared storage volume:
vim pod-nfs-vol.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-nfs-vol
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: node10.test.com    # have the Pod mount the NFS share directly, to test it
# Apply the manifest above to create the test Pod
kubectl apply -f pod-nfs-vol.yaml
kubectl get pods
node10:    # create index.html in the NFS shared directory
echo "<h1>NFS Share Storage Server</h1>" >> /data/volumes/index.html
Back on the node where the Pod runs:
curl http://Pod_IP    # you should see the "NFS Share ..." page returned normally.
Example of using a PVC:
Access modes of a PV:
accessModes:
ReadWriteOnce [abbreviated RWO]: read-write by a single node, i.e. only one node can mount the volume read-write
ReadOnlyMany [ROX]: read-only by many nodes
ReadWriteMany [RWX]: read-write by many nodes
1. Prepare the backend storage environment:
node10:
mkdir /data/volumes/{v1,v2,v3,v4,v5}
vim /etc/exports
/data/volumes/v1 192.168.111.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.111.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.111.0/24(rw,no_root_squash)
exportfs -arv
showmount -e
2. Define these shared storage volumes as PVs on K8s
vim pv-demo.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01          # a PV is a cluster-level resource, so it cannot be defined inside a namespace and can be used from any namespace.
                      # (Namespace objects themselves cannot be nested either, because they are also cluster-level resources.)
  labels:
    name: pv01
    rate: high        # label the PV to indicate faster storage, so a label selector can pick it later
spec:
  nfs:
    path: /data/volumes/v1
    server: node10.test.com
  # The access modes must be a subset of what the underlying shared storage supports; they may not be a superset.
  # That is: NFS supports RWO, ROX and RWX, but the PV may offer only one or some of them.
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:           # storage capacity uses the units T, P, G, M, K or Ti, Pi, Gi, Mi, Ki; the "i" variants are powers of 1024.
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    name: pv02
    rate: high
spec:
  nfs:
    path: /data/volumes/v2
    server: node10.test.com
  accessModes: ["ReadWriteMany", "ReadOnlyMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    name: pv03
    rate: high
spec:
  nfs:
    path: /data/volumes/v3
    server: node10.test.com
  accessModes: ["ReadWriteMany", "ReadOnlyMany"]
  capacity:
    storage: 10Gi
# Create the PVs
kubectl apply -f pv-demo.yaml
kubectl get pv
RECLAIM POLICY: the reclaim policy
Retain: retain. If a Pod binds a PVC and that PVC is bound to a PV, what should happen to the data in the PV once the Pod is deleted?
Retain is the default reclaim policy: the data is kept so that a Pod can be created again later and reuse it.
Recycle: this policy does not keep the data; after the claim is released the PV is automatically recycled, its data is scrubbed, and it becomes available for other claims to bind and use (the PV passes through the Released status while this happens).
Delete: delete; after the PVC and PV are unbound, the PV is deleted automatically and its data is removed.
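The reclaim policy of an existing PV can be changed with a patch; a short sketch using the pv01 defined above:

kubectl patch pv pv01 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv pv01      # the RECLAIM POLICY column should now show Retain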
Create a PVC:
vim pod-pvc-vol.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  # The claim must be satisfiable by some PV, i.e. its requirements must be a subset of what a PV offers.
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
# kubectl apply -f pod-pvc-vol.yaml

# Verify:
# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv01   1Gi        RWO,RWX        Retain           Available                                           7m3s
pv02   2Gi        ROX,RWX        Retain           Available                                           7m3s
pv03   5Gi        ROX,RWX        Retain           Available                                           6m3s
pv04   10Gi       ROX,RWX        Retain           Bound       default/mypvc                           6m3s
pv05   20Gi       ROX,RWX        Retain           Available                                           6m3s

# kubectl describe pod pod-vol-pvc
Name:         pod-vol-pvc
Namespace:    default
.....
    Mounts:
      /var/lib/nginx/html from html (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6xlcj (ro)
.......
Volumes:
  html:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mypvc
    ReadOnly:   false

# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv04     10Gi       ROX,RWX                       100s

# kubectl describe pvc mypvc
Name:          mypvc
Namespace:     default
StorageClass:
Status:        Bound
Volume:        pv04
.......
Capacity:      10Gi
Access Modes:  ROX,RWX
VolumeMode:    Filesystem
Events:        <none>
Mounted By:    pod-vol-pvc    # shows which Pod currently has this PVC mounted; in this example it is the pod-vol-pvc Pod.

Note: PVC is a standard resource in K8s; it is stored in etcd, the database in which the API Server keeps the cluster state. Even if the Pod is removed because of a failure, the PVC still exists, and the next time the Pod is started it can still use the PVC.
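To see that the PVC survives the Pod, a short check re-using the names from the example above:

kubectl delete pod pod-vol-pvc
kubectl get pvc mypvc              # still shows Bound to pv04: the claim and its data remain
kubectl apply -f pod-pvc-vol.yaml  # recreating the Pod attaches the same PVC again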