8. Kubernetes Cluster Storage Volume Basics

Storage Volumes

Pods fall into four combinations of state and storage needs:

  1. Stateful, needs storage
  2. Stateful, no storage needed
  3. Stateless, needs storage
  4. Stateless, no storage needed

Types of storage

emptyDir: a volume backed by the Pod's node, on local disk or in memory. It starts out as an empty temporary directory and is deleted together with the Pod.

gitRepo: essentially an emptyDir that is populated by cloning a git repository the moment the volume is created. It is never updated afterward, so a sidecar container is typically used to pull or push changes to the files in the directory.
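A hedged sketch of a gitRepo volume declaration (the repository URL is a placeholder; note that gitRepo volumes are deprecated in newer Kubernetes releases in favor of an initContainer that clones into an emptyDir):

```yaml
  volumes:
  - name: code
    gitRepo:
      repository: "https://example.com/repo.git"   # placeholder URL, not a real repo
      revision: "master"                           # cloned once, when the volume is created
```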

hostPath: mounts a file or directory from the host node's filesystem into the Pod.

  • Application scenarios

1. Some applications need access to Docker's internal files; in that case, mount the host's /var/lib/docker as a hostPath.
2. When running cAdvisor in a container, mount /dev/cgroups.

Traditional SAN or NAS devices: SAN (iSCSI, ...), NAS (NFS, CIFS)

Distributed storage: glusterfs, ceph-rbd, cephfs.

Cloud storage: EBS, Azure Disk

Testing and using emptyDir

emptyDir uses the node's local disk or memory (when backed by memory, it is effectively used as a cache).
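When used as a memory-backed cache, the volume is declared with medium: Memory; a minimal sketch (the sizeLimit value is illustrative):

```yaml
  volumes:
  - name: cache
    emptyDir:
      medium: Memory      # back the volume with tmpfs (RAM) instead of node disk
      sizeLimit: 128Mi    # illustrative cap; memory used counts against the container's memory limit
```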

Create the manifest file as follows:

[root@master volume]# cat pod-vol-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    jubaozhu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/         # in the myapp container, mount the volume named html at /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/                         # in the busybox container, mount the volume named html at /data/
    command: ["/bin/sh", "-c", "while true; do echo $$(date) >> /data/index.html; sleep 2; done"]       # appends the current time to /data/index.html, which the myapp container serves over the web
  volumes:
  - name: html          # define a volume named html
    emptyDir: {}        # empty map: medium uses the default (node disk) and sizeLimit is unbounded
    

Because the myapp and busybox containers in this example share the same volume (html), the myapp container can serve the new index.html that busybox generates. Each mountPath only controls where the volume appears inside that container.

Create it

[root@master volume]# kubectl apply -f pod-vol-demo.yaml 
pod/pod-demo created
[root@master volume]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
pod-demo                         2/2     Running   0          29s     10.244.2.25   node02.kubernetes   <none>           <none>

Test access to the Pod's IP

[root@master volume]# curl 10.244.2.25
Thu Aug 1 08:41:18 UTC 2019
Thu Aug 1 08:41:20 UTC 2019
Thu Aug 1 08:41:22 UTC 2019
Thu Aug 1 08:41:24 UTC 2019
Thu Aug 1 08:41:26 UTC 2019
Thu Aug 1 08:41:28 UTC 2019
Thu Aug 1 08:41:30 UTC 2019
Thu Aug 1 08:41:32 UTC 2019
Thu Aug 1 08:41:34 UTC 2019
Thu Aug 1 08:41:36 UTC 2019
Thu Aug 1 08:41:38 UTC 2019
Thu Aug 1 08:41:40 UTC 2019
Thu Aug 1 08:41:42 UTC 2019
Thu Aug 1 08:41:44 UTC 2019
Thu Aug 1 08:41:46 UTC 2019
Thu Aug 1 08:41:48 UTC 2019
Thu Aug 1 08:41:50 UTC 2019
Thu Aug 1 08:41:52 UTC 2019

Writes and reads both work, which is the expected behavior.

hostPath example

hostPath persists data on a single node; if that node goes down, the data is lost (or at least unavailable).

apiVersion: v1
kind: Pod
metadata:
  name: pod-volume-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate
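DirectoryOrCreate creates the host directory if it does not exist; other type values make the mount stricter instead. A hedged sketch (the path is illustrative):

```yaml
  volumes:
  - name: timezone
    hostPath:
      path: /etc/localtime   # must already exist on the node
      type: File             # Pod fails to start if the path is not an existing regular file
```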

Testing a Pod mounting a shared NFS volume

For this test, NFS was installed on the master node with the following export configuration:

[root@master volume]# cat /etc/exports
/data/volumes   *(rw,no_root_squash)

Install, start, and verify

[root@master data]# yum install -y nfs-utils rpcbind
[root@master data]# systemctl start nfs
[root@master data]# systemctl start rpcbind
[root@master data]# showmount -e localhost
Export list for localhost:
/data/volumes 0.0.0.0/0

Note

The `nfs-utils` package must be installed on every node; otherwise, a Pod scheduled onto a node without it will fail to start because `mount.nfs` is missing.
[root@node2 ~]# yum install -y nfs-utils 
[root@node2 ~]# mount -t nfs node1:/data/volumes /mnt # verify on each node that the NFS export can be mounted

Write a test page

[root@master volume]# echo '<h1>NFS stor01</h1>' > /data/volumes/index.html

Write the test manifest

[root@master volume]# cat pod-vol-nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: 172.27.1.241

Create and inspect

[root@master volume]# kubectl apply -f pod-vol-nfs.yaml 
pod/pod-vol-nfs created
[root@master volume]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
pod-hostpath-vol   1/1     Running   0          88m   10.244.3.32   node01.kubernetes   <none>           <none>
pod-vol-nfs        1/1     Running   0          5s    10.244.1.29   node03.kubernetes   <none>           <none>

The Pod was scheduled on node03.

Test access

[root@master volume]# curl 10.244.1.29
<h1>NFS stor01</h1>     # access works

Delete the Pod and recreate it to test persistence

[root@master volume]# kubectl delete -f pod-vol-nfs.yaml 
pod "pod-vol-nfs" deleted
[root@master volume]# kubectl apply -f pod-vol-nfs.yaml 
pod/pod-vol-nfs created
[root@master volume]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
pod-hostpath-vol   1/1     Running   0          90m   10.244.3.32   node01.kubernetes   <none>           <none>
# the line below shows the Pod is now scheduled on node02
pod-vol-nfs        1/1     Running   0          2s    10.244.2.27   node02.kubernetes   <none>           <none>
[root@master volume]# curl 10.244.2.27
<h1>NFS stor01</h1>     # access still works

PV and PVC

A PV (PersistentVolume) is a cluster-scoped resource, available to all namespaces in the cluster.

A PVC (PersistentVolumeClaim) is a namespaced standard resource.

A Pod references a PVC; based on the requested capacity, the PVC is automatically bound to a PV whose size is greater than or equal to the request (and whose access modes match).

Create several PVs

[root@master volumes]# mkdir /data/volumes/v{1,2,3,4,5} -p
[root@master volume]# cat pv-demo.yaml 
apiVersion: v1
kind: PersistentVolume              # resource kind
metadata:   
  name: pv001                       # name of this PV
  labels:
    name: pv001                     # label identifying the PV
spec:
  nfs:
    path: /data/volumes/v1          # NFS directory backing this PV
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]   # access modes
  capacity:
    storage: 2Gi                    # PV capacity
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 172.27.1.241
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 172.27.1.241
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 10Gi
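Because each PV above carries a name label, a PVC can also pin a specific PV with a label selector rather than relying on capacity matching alone. A sketch (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-pinned          # hypothetical name, for illustration
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  selector:
    matchLabels:
      name: pv003             # bind only to the PV labeled name=pv003
  resources:
    requests:
      storage: 6Gi
```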

Create and inspect

[root@master volume]# kubectl apply -f pv-demo.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volume]# kubectl get pv -o wide
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
pv001   2Gi        RWO,RWX        Retain           Available                                   51s   Filesystem
pv002   5Gi        RWX            Retain           Available                                   29s   Filesystem
pv003   20Gi       RWO,RWX        Retain           Available                                   29s   Filesystem
pv004   10Gi       RWO,RWX        Retain           Available                                   29s   Filesystem
pv005   10Gi       RWO,RWX        Retain           Available                                   51s   Filesystem

Create a test Pod and PVC

[root@master volume]# cat pod-vol-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim     # PVC resource
metadata:
  name: mypvc
  namespace: default            # namespace
spec:
  accessModes: ["ReadWriteMany"]    # access modes
  resources:
    requests:
      storage: 6Gi              # requested storage size
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc

Create, then check the PVC and PV status

[root@master volume]# kubectl apply -f pod-vol-pvc.yaml 
persistentvolumeclaim/mypvc created
pod/pod-vol-pvc created
[root@master volume]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
pod-vol-pvc   1/1     Running   0          3s
[root@master volume]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv005    10Gi       RWO,RWX                       36s          # the PVC is bound to a PV named pv005 with 10Gi of capacity
[root@master volume]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                           9m37s
pv002   5Gi        RWX            Retain           Available                                           9m15s
pv003   20Gi       RWO,RWX        Retain           Available                                           9m15s
pv004   10Gi       RWO,RWX        Retain           Available                                           9m15s
pv005   10Gi       RWO,RWX        Retain           Bound       default/mypvc                           9m37s        # status is Bound; the reclaim policy is Retain
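Retain is the default reclaim policy for manually created PVs: when the PVC is deleted, the PV keeps its data and stays in the Released state until an administrator cleans it up. The policy can be set explicitly in the PV spec — a sketch:

```yaml
spec:
  persistentVolumeReclaimPolicy: Retain   # keep data after the claim is deleted;
                                          # Delete removes the backing volume, but only
                                          # some volume plugins support it
```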

Reposted from www.cnblogs.com/peng-zone/p/11651067.html