Kubernetes: using GlusterFS as PV storage

For a better reading experience, see the original post.
Original address: http://maoqide.live/post/cloud/glusterfs-kubernetes/

Use GlusterFS in Kubernetes as a PV (PersistentVolume) backend.

Environment

  • CentOS 7
  • Kernel 3.10.0-957.27.2.el7.x86_64

Machines (VirtualBox virtual machines)

  • centos10 - 172.27.32.165 - Kubernetes master node / GlusterFS node
  • centos12l - 172.27.32.182 - GlusterFS node / Kubernetes worker node
  • centos11 - 172.27.32.164 - GlusterFS node / Kubernetes worker node

Adding a hard disk in VirtualBox

  • Shut down the virtual machine
  • Settings - Storage - SATA controller - Add new disk - Fixed size
  • Start the virtual machine

After booting, run fdisk -l; the newly added disk shows up as /dev/sdb.

[root@centos12l ~]$ fdisk -l
Disk /dev/sda: 54.5 GB, 54495248384 bytes, 106436032 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0001552e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   106434559    52167680   8e  Linux LVM

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-root: 51.3 GB, 51266977792 bytes, 100130816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

If you use the gk-deploy script provided by gluster-kubernetes to set up GlusterFS, none of the disk mount operations below are needed.

Disk Mount

The following steps are for deploying GlusterFS manually. If you use gk-deploy to run GlusterFS on Kubernetes, skip this step; otherwise gk-deploy will fail when it tries to initialize the devices.

Run fdisk /dev/sdb and follow the prompts to create a partition and write the partition table.

[root@centos12l ~]$ fdisk /dev/sdb 
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xf6e6b69c.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
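If you prefer a non-interactive approach, the same partition can be created with parted (a sketch; double-check that the device really is the freshly added /dev/sdb before running it):

# script-mode equivalent of the interactive fdisk session above
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 2048s 100%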

The ext4 module is required. Run lsmod | grep ext4 to check whether it is loaded; if it is not, run modprobe ext4 to load it.

[root@centos12l ~]$ lsmod | grep ext4  
ext4                  579979  0 
mbcache                14958  1 ext4
jbd2                  107478  1 ext4

Run mkfs.ext4 /dev/sdb1 to format the partition.

[root@centos12l ~]$ mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621184 blocks
131059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

Mount the disk to the /data directory:
mkdir /data
mount -t ext4 /dev/sdb1 /data

[root@centos12l ~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   48G  2.7G   46G   6% /
devtmpfs                 908M     0  908M   0% /dev
tmpfs                    920M     0  920M   0% /dev/shm
tmpfs                    920M  9.2M  910M   1% /run
tmpfs                    920M     0  920M   0% /sys/fs/cgroup
/dev/sda1               1014M  189M  826M  19% /boot
tmpfs                    184M     0  184M   0% /run/user/0
/dev/sdb1                9.8G   37M  9.2G   1% /data

Add the following entry to /etc/fstab so that the disk is mounted automatically at boot:
vim /etc/fstab

/dev/sdb1                   /data                ext4    defaults        0 0
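Before rebooting, you can optionally confirm that the new fstab entry is valid by remounting through it (a quick sanity check; only do this while nothing is using /data):

# unmount, then let mount -a pick the entry up from /etc/fstab
umount /data
mount -a
df -h /data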

GlusterFS server installation

yum install -y centos-release-gluster
yum install -y glusterfs-server
systemctl start glusterd
systemctl enable glusterd
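An optional sanity check after installing and enabling glusterd on each GlusterFS node:

# confirm the daemon is running and check the installed version
systemctl status glusterd --no-pager
gluster --version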

The following commands can be run on any one of the GlusterFS nodes.
Run gluster peer probe 172.27.32.182 to add a remote node.

[root@centos11 ~]$ gluster peer probe 172.27.32.182
peer probe: success. 

Run gluster peer status to view the status of the remote nodes.

[root@centos11 ~]$ gluster peer status
Number of Peers: 1

Hostname: 172.27.32.182
Uuid: 3ad2f5fc-2cd6-4d0a-a42d-d3325eb0c687
State: Peer in Cluster (Connected)

Run gluster pool list to view the list of nodes in the pool.

[root@centos11 ~]$ gluster pool list
UUID                    Hostname        State
3ad2f5fc-2cd6-4d0a-a42d-d3325eb0c687    172.27.32.182   Connected 
1717b41d-c7cd-457e-bfe3-1c825d837488    localhost       Connected 

GlusterFS client installation

GlusterFS requires the following kernel modules:

  • dm_snapshot
  • dm_mirror
  • dm_thin_pool

Run lsmod | grep <name> to check whether a module is loaded; if it is not, run modprobe <name> to load it. A scripted version of this check is shown below.
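A minimal sketch that checks and loads all three modules at once (assuming modprobe is available; persist them via /etc/modules-load.d/ if they should be loaded on every boot):

# check and load the kernel modules GlusterFS needs
for mod in dm_snapshot dm_mirror dm_thin_pool; do
    lsmod | grep -q "^${mod}" || modprobe "${mod}"
done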

Install the GlusterFS client:

# install
yum install glusterfs-fuse -y
# version
glusterfs --version

Creating a GlusterFS volume

The following steps are for deploying GlusterFS manually. If you use gk-deploy to run GlusterFS on Kubernetes, skip this step; otherwise gk-deploy will fail when it tries to initialize the devices.

Run mkdir /data/gvol on each node that will host a brick to create the volume directory.

The following command can be run on any one of the GlusterFS nodes.
Run gluster volume create gvol1 replica 2 172.27.32.182:/data/gvol 172.27.32.164:/data/gvol to create the volume.

# A replica-2 volume is prone to split-brain; for testing you can choose to continue, but 3 nodes are recommended in production.
[root@centos11 ~]$ gluster volume create gvol1 replica 2 172.27.32.182:/data/gvol 172.27.32.164:/data/gvol
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: gvol1: success: please start the volume to access data
This volume is of the Replicate type; other volume types are described in the official documentation (volume-types).
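For reference, a replica-3 layout avoids the split-brain warning shown above (a sketch; it assumes a /data/gvol brick directory has also been prepared on 172.27.32.165):

# three-way replica across all three GlusterFS nodes
gluster volume create gvol1 replica 3 \
    172.27.32.182:/data/gvol 172.27.32.164:/data/gvol 172.27.32.165:/data/gvol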

Run gluster volume start gvol1 to start the volume.

[root@centos11 ~]$ gluster volume start gvol1
volume start: gvol1: success

Run gluster volume info gvol1 to view the volume information.

[root@centos11 ~]$ gluster volume info gvol1
Volume Name: gvol1
Type: Replicate
Volume ID: ed8662a9-a698-4730-8ac7-de579890b720
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.27.32.182:/data/vol1
Brick2: 172.27.32.164:/data/vol1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Mount the GlusterFS volume: mount the volume gvol1 to the /data/gfs directory.
mkdir -p /data/gfs
mount -t glusterfs 172.27.32.164:/gvol1 /data/gfs

# df -h shows that the volume is now mounted
[root@centos11 gfs]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G  3.3G   42G   8% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  9.5M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  189M  826M  19% /boot
tmpfs                    379M     0  379M   0% /run/user/0
/dev/sdb1                9.8G   37M  9.2G   1% /data
172.27.32.182:/gvol1     9.8G  136M  9.2G   2% /data/gfs

Files written or changed under /data/gfs are automatically synchronized to the gvol1 brick directories on all GlusterFS nodes.
Add the following line to /etc/fstab so that the volume is mounted automatically when the system restarts.

172.27.32.182:/gvol1 /data/gfs glusterfs  defaults,_netdev 0 0
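Since the server in the mount entry is only used to fetch the volume file, you can optionally list a fallback server as well (a hedged example; the exact option name can vary between GlusterFS versions, so check the mount.glusterfs documentation for your release):

172.27.32.182:/gvol1 /data/gfs glusterfs defaults,_netdev,backup-volfile-servers=172.27.32.164 0 0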

gluster-kubernetes

The gluster-kubernetes project provides official scripts that integrate GlusterFS with Kubernetes.
Unless otherwise noted, the following commands are executed on the Kubernetes master node.

# Clone the project into the /root directory
git clone https://github.com/gluster/gluster-kubernetes.git
# Enter the deploy directory, which is the working directory of the script.
# The deploy/kube-templates/ folder contains the YAML files for the resources created on Kubernetes.
cd /root/gluster-kubernetes/deploy

# Edit topology.json to describe your GlusterFS cluster
mv topology.json.sample topology.json
vim topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.27.32.164"
              ],
              "storage": [
                "172.27.32.164"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.27.32.182"
              ],
              "storage": [
                "172.27.32.182"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        }
      ]
    }
  ]
}

gk-deploy requires uninitialized (raw) disks, so a new disk device /dev/sdc is attached to each GlusterFS node, and the following command is used to wipe it.

# Run the following command on every GlusterFS node
# /dev/sdc must be a new, uninitialized device
dd if=/dev/urandom of=/dev/sdc bs=512 count=64
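An alternative way to clear any existing signatures from the device is wipefs from util-linux (a sketch; like the dd above, it is destructive):

# remove filesystem/LVM/partition-table signatures so heketi sees a raw device
wipefs -a /dev/sdc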

Use base64 to generate the keys heketi needs, and specify a private key that allows this node to SSH into the GlusterFS nodes.
If --ssh-keyfile is not specified, gk-deploy will by default create new GlusterFS pods in Kubernetes instead of using the existing local GlusterFS installation.

# generate key
echo -n hello | base64
# gk-deploy
./gk-deploy -h
./gk-deploy --admin-key aGVsbG8= --user-key aGVsbG8=  --ssh-keyfile /root/.ssh/id_rsa
[root@centos10 deploy]$ ./gk-deploy --admin-key aGVsbG8= --user-key aGVsbG8=  --ssh-keyfile /root/.ssh/id_rsa
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
 * 2222  - sshd (if running GlusterFS in a pod)
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own
   port. For every new brick, one new port will be used starting at 49152. We
   recommend a default range of 49152-49251 on each host, though you can adjust
   this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
   remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services
   running in the cluster

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: y
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount "heketi-service-account" created
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" created
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" labeled
OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
deployment.extensions "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: e5558e7dacc4f24c75f62a68168105fc
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node 172.27.32.164 ... ID: 23abb66f328935c437b6d0274388027f
Adding device /dev/sdc ... OK
Creating node 172.27.32.182 ... ID: 7f99fb669bad6434cdf16258e507dbb7
Adding device /dev/sdc ... OK
heketi topology loaded.
Error: Failed to allocate new volume: No space
command terminated with exit code 255
Failed on setup openshift heketi storage
This may indicate that the storage must be wiped and the GlusterFS nodes must be reset.

Running it directly fails with a 'No space' error. This is because our GlusterFS cluster has only two nodes, while heketi by default requires at least three; you can pass the --single-node parameter to gk-deploy to skip this error (see the sketch below).
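For reference, this is roughly what a retry with that flag would look like (a sketch; this walkthrough adds a third node instead):

./gk-deploy --admin-key aGVsbG8= --user-key aGVsbG8= --ssh-keyfile /root/.ssh/id_rsa --single-node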

Before running it again, we need to clean up the environment; run the following commands on the GlusterFS nodes. (The PV here is an LVM physical volume and has nothing to do with the Kubernetes PV concept.)

# List the PVs; the /dev/sdc entry (second row) is the PV created by heketi and needs to be removed
[root@centos11 ~]$ pvs
  PV         VG                                  Fmt  Attr PSize   PFree
  /dev/sda2  centos                              lvm2 a--  <49.00g 4.00m
  /dev/sdc   vg_bf7e75e181a24a59edc0d38e33d5ee9c lvm2 a--    7.87g 7.87g
## Remove the PV
[root@centos11 ~]$ pvremove /dev/sdc  -ff
  WARNING: PV /dev/sdc is used by VG vg_bf7e75e181a24a59edc0d38e33d5ee9c.
Really WIPE LABELS from physical volume "/dev/sdc" of volume group "vg_bf7e75e181a24a59edc0d38e33d5ee9c" [y/n]? y
  WARNING: Wiping physical volume label from /dev/sdc of volume group "vg_bf7e75e181a24a59edc0d38e33d5ee9c".
  Labels on physical volume "/dev/sdc" successfully wiped.

Clean up the Kubernetes resources created by the gk-deploy script:

kubectl delete sa heketi-service-account
kubectl delete clusterrolebinding heketi-sa-view
kubectl delete secret heketi-config-secret
kubectl delete svc deploy-heketi
kubectl delete deploy deploy-heketi
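gk-deploy also has an abort mode that tears down the resources it created, which can be used instead of the manual deletes above (check ./gk-deploy -h for the exact behavior in your version):

./gk-deploy --abort --admin-key aGVsbG8= --user-key aGVsbG8= --ssh-keyfile /root/.ssh/id_rsa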

Modify topology.json to add a third node (172.27.32.165):

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.27.32.164"
              ],
              "storage": [
                "172.27.32.164"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.27.32.182"
              ],
              "storage": [
                "172.27.32.182"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "172.27.32.165"
              ],
              "storage": [
                "172.27.32.165"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdc"
          ]
        }
      ]
    }
  ]
}

Run the deployment again:

# heketi requires at least 3 GlusterFS nodes by default, otherwise it reports 'no space'; node 172.27.32.165 has been added
[root@centos10 deploy]$ ./gk-deploy --admin-key aGVsbG8= --user-key aGVsbG8=  --ssh-keyfile /root/.ssh/id_rsa
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
 * 2222  - sshd (if running GlusterFS in a pod)
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own
   port. For every new brick, one new port will be used starting at 49152. We
   recommend a default range of 49152-49251 on each host, though you can adjust
   this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
   remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services
   running in the cluster

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: y
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount "heketi-service-account" created
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" created
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" labeled
OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
deployment.extensions "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: d46fce0516378e5aa913bd1baf97d08b
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node 172.27.32.164 ... ID: 8058c666087f1b411738b802b6cf1d5d
Adding device /dev/sdc ... OK
Creating node 172.27.32.182 ... ID: b08d71449838ed66e2d5aa10f2b8771b
Adding device /dev/sdc ... OK
Creating node 172.27.32.165 ... ID: 9b173570da2fee54f25ed03e74f11c72
Adding device /dev/sdc ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job.batch "heketi-storage-copy-job" created
service "heketi-storage-endpoints" labeled
pod "deploy-heketi-bf46f97fb-k42wr" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
service "heketi" created
deployment.extensions "heketi" created
Waiting for heketi pod to start ... 
OK
Flag --show-all has been deprecated, will be removed in an upcoming release

heketi is now running and accessible via http://10.244.106.5:8080 . To run
administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://10.244.106.5:8080 --user admin --secret '<ADMIN_KEY>' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:

  # /usr/bin/kubectl -n default exec -i heketi-77f4797494-8sqng -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.244.106.5:8080"
  restuser: "user"
  restuserkey: "aGVsbG8="


Deployment complete!

Deployment succeeded! If you ever need to redeploy from scratch, the resources created by this run can be removed with the following commands:

kubectl delete secret heketi-storage-secret
kubectl delete endpoint heketi-storage-endpoints
kubectl delete svc heketi-storage-endpoints
kubectl delete job heketi-storage-copy-job
kubectl delete svc heketi
kubectl delete deploy heketi

Creating a StorageClass and testing

kubectl create -f glusterfs-storage.yaml

---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.254.130.133:8080" # heketi service ip
  restuser: "admin"
  restuserkey: "aGVsbG8="
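Instead of embedding restuserkey in the StorageClass, the kubernetes.io/glusterfs provisioner can also read the key from a Secret via its secretName/secretNamespace parameters. A sketch of creating such a secret (the name heketi-admin-secret is arbitrary):

# the provisioner expects type kubernetes.io/glusterfs and the data key 'key'
kubectl create secret generic heketi-admin-secret --namespace=default \
    --type="kubernetes.io/glusterfs" --from-literal=key=aGVsbG8=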

kubectl create -f pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs-storage
  resources:
    requests:
      storage: 1Gi

Create the PVC; after waiting a short while, its status changes from Pending to Bound.

[root@centos10 gfs]$ kubectl get pvc -w
NAME               STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS        AGE
gluster-pvc-test   Pending                                       glusterfs-storage   13s
gluster-pvc-test   Pending   pvc-cacc8019-d9e4-11e9-b223-0800272600e0   0                   glusterfs-storage   25s
gluster-pvc-test   Bound     pvc-cacc8019-d9e4-11e9-b223-0800272600e0   1Gi       RWO       glusterfs-storage   25s

A corresponding PV is created automatically.

[root@centos10 gfs]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                      STORAGECLASS        REASON    AGE
pvc-cacc8019-d9e4-11e9-b223-0800272600e0   1Gi        RWO            Delete           Bound     default/gluster-pvc-test   glusterfs-storage             52s
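The dynamically provisioned volume is also visible on the heketi side. A sketch using the in-pod heketi-cli invocation printed by the deploy output (the pod name is specific to this run):

kubectl -n default exec -i heketi-77f4797494-8sqng -- heketi-cli -s http://localhost:8080 --user admin --secret 'aGVsbG8=' volume list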

kubectl create -f nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.guahao-inc.com/test4engine/nginx:1.15-alpine
        volumeMounts:
        - mountPath: "/root/"
          name: root
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "1"
            memory: 5Mi
          requests:
            cpu: 500m
            memory: 5Mi
      volumes:
        - name: root
          persistentVolumeClaim:
            claimName: gluster-pvc-test

Create a Pod bound to the PVC; the pod runs successfully.

[root@centos10 gfs]# kubectl get pod  -owide
NAME                      READY     STATUS    RESTARTS   AGE       IP              NODE
...............
nginx-6dc67b9dc5-84sfg    1/1       Running   0          2m        10.244.145.35   172.27.32.164
# Write data into the pod
kubectl exec -it nginx-6dc67b9dc5-84sfg sh
echo hello > /root/hello.txt
# Every GlusterFS node has a directory like the following (the vg_xxx and brick_xxx names differ per node), and it contains the file just written inside the pod.
ls /var/lib/heketi/mounts/vg_2c32b0932a02c5b2098de24592b9a2f1/brick_1bf403d9677e9ae11e370f5fcaf8b9bb/brick/
# After deleting the deployment, the data on the PV still exists; re-creating the deployment binds to the original PV again.
kubectl delete -f nginx.yaml
# After deleting the PVC, the corresponding PV becomes Released and is deleted shortly afterwards.
kubectl delete -f pvc.yaml
[root@centos10 gfs]$ kubectl get pv -w
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                      STORAGECLASS        REASON    AGE
pvc-cacc8019-d9e4-11e9-b223-0800272600e0   1Gi        RWO            Delete           Released     default/gluster-pvc-test   glusterfs-storage             9h
pvc-cacc8019-d9e4-11e9-b223-0800272600e0   1Gi       RWO       Delete    Failed    default/gluster-pvc-test   glusterfs-storage             9h

Creating a PV and PVC test

GlusterFS cluster as pods in Kubernetes
