Mounting Ceph RBD and CephFS in Kubernetes

Table of Contents


  • k8s mount Ceph RBD

    • PV & PVC method

      • Create secret

      • Create PV

      • Create PVC

      • Create deployment and mount PVC

    • StorageClass method

      • Create secret

      • Create StorageClass

      • Create PVC

  • k8s mount Cephfs

k8s mount Ceph RBD

There are two ways for k8s to mount Ceph RBD. The first is the traditional PV & PVC approach: an administrator creates the PV and PVC in advance, and a deployment or replication controller then mounts the PVC. Since Kubernetes 1.4 there is a more convenient way to create PVs dynamically, namely StorageClass. With a StorageClass there is no need to pre-create fixed-size PVs and wait for users to claim them; users simply create a PVC and the backing PV is provisioned on demand.

Note that for the k8s worker nodes to execute the commands that mount a ceph rbd device, the ceph-common package must be installed on all nodes. It can be installed directly via yum.
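A minimal install sketch, assuming each node already has a yum repository that provides ceph-common (for example the upstream Ceph repo):

```shell
# Run on every k8s node that may mount an RBD volume
yum install -y ceph-common

# The rbd client binary should now be available
rbd --version
```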

PV & PVC method

Create secret

# Get the admin key and base64-encode it
ceph auth get-key client.admin | base64

Create a ceph-secret.yml file with the following content:

apiVersion: v1
kind: Secret
metadata:
 name: ceph-secret
data:
#Please note this value is base64 encoded.
# echo "keystring"|base64
 key: QVFDaWtERlpzODcwQWhBQTdxMWRGODBWOFZxMWNGNnZtNmJHVGc9PQo=
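The key value must be the base64 encoding of the raw Ceph key. A quick sanity check of the encoding, using the example key string from this article (note that piping through plain echo appends a newline, which is why the value above ends in Qo=):

```shell
# Encode without a trailing newline; `echo` instead of `printf '%s'`
# would append "\n" and change the suffix to ...PQo=
printf '%s' 'AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg==' | base64
# → QVFDaWtERlpzODcwQWhBQTdxMWRGODBWOFZxMWNGNnZtNmJHVGc9PQ==
```

The secret itself is applied with kubectl create -f ceph-secret.yml.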

Create PV

Create a test.pv.yml file with the following content:

apiVersion: v1
kind: PersistentVolume
metadata:
 name: test-pv
spec:
 capacity:
   storage: 2Gi
 accessModes:
   - ReadWriteOnce
 rbd:
   # Ceph monitor nodes
   monitors:      
     - 10.5.10.117:6789
     - 10.5.10.236:6789
     - 10.5.10.227:6789
   # name of the Ceph storage pool
   pool: data
   # name of the RBD image created in the pool
   image: data
   user: admin
   secretRef:
     name: ceph-secret
   fsType: xfs
   readOnly: false
 # Recycle is only supported for NFS and HostPath volumes; use Retain for RBD
 persistentVolumeReclaimPolicy: Retain
kubectl create -f test.pv.yml
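The PV above references an RBD image named data in pool data; that image must exist before a pod can mount the volume. A sketch of creating it, assuming the pool already exists and using a size matching the PV's 2Gi capacity:

```shell
# Create a 2 GiB image; restricting image features to "layering" keeps
# the image compatible with older kernel RBD clients
rbd create data --pool data --size 2048 --image-feature layering
```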

Create PVC

Create a test.pvc.yml file with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: test-pvc
spec:
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 2Gi
kubectl create -f test.pvc.yml
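After creation, the claim should bind to the pre-created PV, which can be verified with:

```shell
# Both objects should report STATUS "Bound"
kubectl get pv test-pv
kubectl get pvc test-pvc
```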

Create deployment and mount PVC

Create a test.dm.yml file with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: test
spec:
 replicas: 1
 template:
   metadata:
     labels:
       app: test
   spec:
     containers:
     - name: test
       image: dk-reg.op.douyuyuba.com/op-base/openresty:1.9.15
       ports:
       - containerPort: 80
       volumeMounts:
         - mountPath: "/data"
           name: data
     volumes:
       - name: data
         persistentVolumeClaim:
           claimName: test-pvc
kubectl create -f test.dm.yml
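To confirm that the RBD volume was actually mounted into the container, one way is:

```shell
# Look up the pod created by the deployment, then inspect /data;
# it should be backed by a /dev/rbd* device of roughly 2 GiB
pod=$(kubectl get pods -l app=test -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$pod" -- df -h /data
```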

StorageClass method

Create secret

Since StorageClass requires the ceph secret type to be kubernetes.io/rbd, the secret created in the PV & PVC method above cannot be reused and must be recreated, as follows:

# the key here is the raw Ceph key, not re-encoded with base64
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg==' --namespace=kube-system
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg==' --namespace=default

Create StorageClass

Create test.sc.yml file with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: test-storageclass
provisioner: kubernetes.io/rbd
parameters:
 monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789
 # Ceph client user ID (not a k8s user)
 adminId: admin
 adminSecretName: ceph-secret
 adminSecretNamespace: kube-system
 pool: data
 userId: admin
 userSecretName: ceph-secret
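The StorageClass is applied the same way as the other manifests:

```shell
kubectl create -f test.sc.yml
# The provisioner should be listed as kubernetes.io/rbd
kubectl get storageclass test-storageclass
```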

Create PVC

Create a test.pvc.yml file with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: test-sc-pvc
 annotations:
   volume.beta.kubernetes.io/storage-class: test-storageclass
spec:
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 2Gi
kubectl create -f test.pvc.yml
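Unlike the PV & PVC method, no PV was created by hand here; once the claim binds, a dynamically provisioned PV (named pvc-<uid>) and a matching image in the Ceph pool should appear:

```shell
# The claim should go to Bound, with an auto-generated pvc-... volume
kubectl get pvc test-sc-pvc
kubectl get pv

# On a Ceph node: the provisioner creates an image in the pool per PV
rbd ls data
```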

Mounting the PVC works the same as in the PV & PVC method above, so it is not repeated here.

k8s mount Cephfs

The sections above explain how k8s mounts ceph rbd block devices. This section briefly describes how k8s mounts the ceph file system (CephFS).

First, the secret created earlier can be reused directly; there is no need to create a new one, and no PV or PVC is required either. The CephFS volume is mounted directly in the deployment, as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: test
spec:
 replicas: 1
 template:
   metadata:
     labels:
       app: test
   spec:
     containers:
     - name: test
       image: dk-reg.op.douyuyuba.com/op-base/openresty:1.9.15
       ports:
       - containerPort: 80
       volumeMounts:
         - mountPath: "/data"
           name: data
     volumes:
       - name: data
         cephfs:
           monitors:
             - 10.5.10.117:6789
             - 10.5.10.236:6789
             - 10.5.10.227:6789
           path: /data
           user: admin
           secretRef:
             name: ceph-secret
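Applying this deployment follows the same pattern as before; the manifest file name below is illustrative. Note that the path field (/data) refers to a directory inside the CephFS root, which must already exist, and that all pod replicas share that same directory:

```shell
# File name is illustrative
kubectl create -f test-cephfs.dm.yml
kubectl get pods -l app=test
```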


Origin blog.51cto.com/15127502/2655045