Kubernetes Storage: ceph-csi

scofield (菜鸟运维杂谈)

0. Background


The built-in Kubernetes provisioner `kubernetes.io/rbd` did not work properly in my cluster, so I switched to the provisioner `rbd.csi.ceph.com` provided by the Ceph project. This requires deploying the CSI plugins and configuring Ceph access credentials; this post documents the process.

1. Download the deployment manifests



git clone https://github.com/ceph/ceph-csi.git
cd ceph-csi/deploy/rbd/kubernetes
[root@qd01-stop-k8s-master001 kubernetes]# ls -l
total 36
-rw-r--r-- 1 root root  304 Feb 23 16:24 csi-config-map.yaml
-rw-r--r-- 1 root root 1674 Feb 23 16:20 csi-nodeplugin-psp.yaml
-rw-r--r-- 1 root root  747 Feb 23 16:20 csi-nodeplugin-rbac.yaml
-rw-r--r-- 1 root root 1300 Feb 23 16:20 csi-provisioner-psp.yaml
-rw-r--r-- 1 root root 2915 Feb 23 16:20 csi-provisioner-rbac.yaml
-rw-r--r-- 1 root root 7123 Feb 23 16:34 csi-rbdplugin-provisioner.yaml
-rw-r--r-- 1 root root 5841 Feb 23 16:34 csi-rbdplugin.yaml

The following images are required:


k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2
k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
k8s.gcr.io/sig-storage/csi-resizer:v1.0.1
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
quay.io/cephcsi/cephcsi:canary

If your network cannot pull images from k8s.gcr.io, you can substitute the following mirrors:
scofield/csi-provisioner:v2.0.4
scofield/csi-snapshotter:v3.0.2
scofield/csi-attacher:v3.0.2
scofield/csi-resizer:v1.0.1
scofield/csi-node-driver-registrar:v2.0.1
scofield/cephcsi:canary
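To keep the manifests unchanged, you can pull the mirror images and retag them back to the original names. A minimal sketch that prints the docker commands (pipe the output to `sh` to actually run them; assumes Docker is installed on the node):

```shell
# Print pull/retag commands mapping the mirror images back to the
# original k8s.gcr.io / quay.io names, so the manifests need no edits.
for img in csi-provisioner:v2.0.4 csi-snapshotter:v3.0.2 csi-attacher:v3.0.2 \
           csi-resizer:v1.0.1 csi-node-driver-registrar:v2.0.1; do
  echo "docker pull scofield/${img}"
  echo "docker tag scofield/${img} k8s.gcr.io/sig-storage/${img}"
done
echo "docker pull scofield/cephcsi:canary"
echo "docker tag scofield/cephcsi:canary quay.io/cephcsi/cephcsi:canary"
```

Run the script once per node (or push the retagged images to a private registry your nodes can reach).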

2. Modify the configuration


Comment out the kms-related lines in both manifests; that ConfigMap does not exist here, and leaving the references in place will make the deployment fail:


[root@qd01-stop-k8s-master001 kubernetes]# vim csi-rbdplugin-provisioner.yaml
[root@qd01-stop-k8s-master001 kubernetes]# vim csi-rbdplugin.yaml
        #- name: ceph-csi-encryption-kms-config
        #  mountPath: /etc/ceph-csi-encryption-kms-config/
        #- name: ceph-csi-encryption-kms-config
        #  configMap:
        #    name: ceph-csi-encryption-kms-config
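Besides the kms lines, `csi-config-map.yaml` must be filled in with your Ceph cluster's fsid and monitor addresses (obtain them with `ceph fsid` and `ceph mon dump`). A sketch; the monitor IPs below are placeholders, and the `clusterID` must match the one used later in the StorageClass:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "ec7ee19a-f7c6-4ed0-93a7-f48af473352c",
        "monitors": ["192.168.1.1:6789", "192.168.1.2:6789", "192.168.1.3:6789"]
      }
    ]
```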

3. Deploy


The manifests are applied into a dedicated csi namespace; create it first with `kubectl create ns csi` if it does not exist.


[root@qd01-stop-k8s-master001 kubernetes]# kubectl apply -f . -n csi
configmap/ceph-csi-config created
podsecuritypolicy.policy/rbd-csi-nodeplugin-psp created
role.rbac.authorization.k8s.io/rbd-csi-nodeplugin-psp created
rolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin-psp created
serviceaccount/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
podsecuritypolicy.policy/rbd-csi-provisioner-psp created
role.rbac.authorization.k8s.io/rbd-csi-provisioner-psp created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-psp created
serviceaccount/rbd-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
service/csi-rbdplugin-provisioner created
deployment.apps/csi-rbdplugin-provisioner created
daemonset.apps/csi-rbdplugin created
service/csi-metrics-rbdplugin created

Wait until all pods are Running:
[root@qd01-stop-k8s-master001 UseRBD]# kubectl get po -n csi
NAME                                        READY   STATUS    RESTARTS   AGE
csi-rbdplugin-5xtbz                         3/3     Running   0          29m
csi-rbdplugin-hwrsr                         3/3     Running   0          29m
csi-rbdplugin-mtscj                         3/3     Running   0          29m
csi-rbdplugin-pmqjv                         3/3     Running   0          29m
csi-rbdplugin-provisioner-b96dc4989-fd7kt   7/7     Running   0          29m
csi-rbdplugin-provisioner-b96dc4989-tk9bv   7/7     Running   0          29m
csi-rbdplugin-provisioner-b96dc4989-xrxgz   7/7     Running   0          29m
csi-rbdplugin-qzsjr                         3/3     Running   0          29m
csi-rbdplugin-tt4b9                         3/3     Running   0          29m
csi-rbdplugin-w429q                         3/3     Running   0          29m
csi-rbdplugin-w6xp7                         3/3     Running   0          29m
csi-rbdplugin-wxc94                         3/3     Running   0          29m

4. Using Ceph RBD


1. Create the required Secret
Create csi-rbd-secret.yaml:


---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: csi
stringData:
  userID: admin
  userKey: AQALpatf81ZmNhAAz6xt03v4boTYj7o5MOa0iQ==
[root@qd01-stop-k8s-master001 UseRBD]# kubectl apply -f csi-rbd-secret.yaml
secret/csi-rbd-secret created
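Rather than pasting the key by hand, the Secret manifest can be generated from the live cluster. A sketch; in practice you would set `USER_KEY` with `ceph auth get-key client.admin` (the key hard-coded below is just the example value from above):

```shell
# Generate csi-rbd-secret.yaml from the Ceph admin key.
# In practice: USER_KEY=$(ceph auth get-key client.admin)
USER_KEY="AQALpatf81ZmNhAAz6xt03v4boTYj7o5MOa0iQ=="
cat > /tmp/csi-rbd-secret.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: csi
stringData:
  userID: admin
  userKey: ${USER_KEY}
EOF
```

Then apply it as before with `kubectl apply -f /tmp/csi-rbd-secret.yaml`.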

2. Create the StorageClass
Create storageclass.yaml (the `clusterID` is this cluster's fsid, and `pool` is an existing RBD pool):


---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rbd
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: ec7ee19a-f7c6-4ed0-93a7-f48af473352c
   pool: k8s
   imageFeatures: layering
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: csi
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: csi
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: csi
   csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
   - discard
[root@qd01-stop-k8s-master001 UseRBD]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/rbd created

[root@qd01-stop-k8s-master001 UseRBD]# kubectl get sc
NAME   PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rbd    rbd.csi.ceph.com   Delete          Immediate           true                   20m

3. Create a PVC to verify the StorageClass works
Create raw-block-pvc.yaml:


---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: rbd
[root@qd01-stop-k8s-master001 UseRBD]# kubectl apply -f raw-block-pvc.yaml
persistentvolumeclaim/raw-block-pvc created

[root@qd01-stop-k8s-master001 UseRBD]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
raw-block-pvc   Bound    pvc-84bf2ffb-7aee-41bd-9e6d-614c9f29eab4   1Gi        RWO            rbd            39s
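Note that a `volumeMode: Block` claim is consumed through `volumeDevices` rather than a filesystem mount. A minimal sketch of a pod using the claim above (pod name and device path are illustrative):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda   # exposed as a raw block device, not a mount
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
```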

4. Test dynamic PVC provisioning
Create demo-statefulset-csi.yaml:


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-nginx
  namespace: default
  labels:
    app: demo-nginx
spec:
  serviceName: demo-nginx
  replicas: 2
  selector:
    matchLabels:
      app: demo-nginx
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      terminationGracePeriodSeconds: 180
      initContainers:
        - name: init
          image: busybox
          command: ["chmod","-R","777","/data"]
          imagePullPolicy: Always
          volumeMounts:
          - name: volume
            mountPath: /data
      containers:
      - name: demo-nginx
        image: nginx
        ports:
        - containerPort: 80
          name: port
        volumeMounts:
        - name: volume
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: volume
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: rbd
      resources:
        requests:
          storage: 5Gi
[root@qd01-stop-k8s-master001 UseRBD]# kubectl apply -f demo-statefulset-csi.yaml
statefulset.apps/demo-nginx created

5. Verification
The PVCs were created automatically and are mounted into the pods:


[root@qd01-stop-k8s-master001 UseRBD]# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
volume-demo-nginx-0   Bound    pvc-b0e3c919-10ad-49f7-a225-4337c07133ea   5Gi        RWO            rbd            6m37s
volume-demo-nginx-1   Bound    pvc-cb526baa-62ae-43ee-a544-eb1655c9c8c6   5Gi        RWO            rbd            2m24s
[root@qd01-stop-k8s-master001 UseRBD]# kubectl get po 
NAME           READY   STATUS    RESTARTS   AGE
demo-nginx-0   1/1     Running   0          5m5s
demo-nginx-1   1/1     Running   0          2m31s

Exec into one of the pods and you can see a /dev/rbd2 block device mounted, sized at the requested 5Gi:
[root@qd01-stop-k8s-master001 UseRBD]# kubectl exec -ti demo-nginx-0 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd2       5.0G   38M  5.0G   1% /data

PS: This article is also published at dev.kubeops.net


Reposted from blog.51cto.com/15060545/2656508