k8s cluster deployment and use of rook-ceph storage system

1. Introduction to Rook and Ceph


Ceph distributed storage system

Ceph is a highly scalable distributed storage solution that provides object, block, and file storage. Each storage node runs a file system where Ceph stores its objects, along with a Ceph OSD (Object Storage Daemon) process. The cluster also runs Ceph MON (monitor) daemons, which keep the Ceph cluster highly available.

Rook
Rook is an open source cloud-native storage orchestrator: it provides the platform, framework, and support for a variety of storage solutions to integrate natively with cloud-native environments.
Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It achieves this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrades, migration, disaster recovery, monitoring, and resource management.
Rook uses the facilities provided by the underlying cloud-native container management, scheduling, and orchestration platform to implement its own functions.
Rook currently supports Ceph, NFS, Minio Object Store, and CockroachDB.


2. Prerequisites

1. A k8s cluster that is already running applications normally.

2. At least three available nodes in the cluster, to meet Ceph's high-availability requirements; each server has an unformatted, unpartitioned hard disk.

3. Rook-Ceph project address: https://github.com/rook/rook

Deployment document: https://github.com/rook/rook/blob/master/Documentation/ceph-quickstart.md

4. How Rook uses storage

By default Rook uses all resources on all nodes: the Rook operator automatically starts OSDs on every node. Rook discovers usable devices by the following criteria:

  • the device has no partitions
  • the device has no formatted file system

Rook will not use devices that fail these criteria. You can also modify the configuration file to specify which nodes or devices should be used.
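Which nodes and devices Rook may consume is controlled by the `storage` section of the CephCluster spec in `cluster.yaml`. A minimal sketch of that section, assuming hypothetical node and device names (they are not from this article):

```yaml
# Fragment of the CephCluster spec in cluster.yaml
# (node and device names below are placeholders)
storage:
  useAllNodes: false      # do not consume every node in the cluster
  useAllDevices: false    # do not consume every raw device found
  nodes:
    - name: "k8s-node1"   # hypothetical node name
      devices:
        - name: "sdb"     # hypothetical raw, unpartitioned disk
    - name: "k8s-node2"
      devices:
        - name: "sdb"
```

With `useAllNodes`/`useAllDevices` left at their defaults (true), Rook behaves as described above and claims every eligible raw device on every node.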

3. Deploy the Rook Operator

# Images required; import them before deploying the services
rook/ceph:v1.4.1

ceph/ceph:v15.2.4

quay.io/cephcsi/cephcsi:v3.1.0

# Clone the project's deployment files from GitHub. A specific version can be chosen; if none is given, the master branch is cloned by default. Version 1.4.1 was tested here and supports dynamic PV creation.
git clone --single-branch --branch v1.4.1 https://github.com/rook/rook.git
# Move into the project directory
cd rook/cluster/examples/kubernetes/ceph

# All pods will be created in the rook-ceph namespace
kubectl create -f common.yaml

# Deploy the Rook operator
kubectl create -f operator.yaml

# Create the Rook Ceph cluster
kubectl create -f cluster.yaml

# Deploy the Ceph toolbox command-line tool
# A Ceph cluster started with the defaults has Ceph authentication enabled, so from inside the pods of the Ceph components you cannot query cluster status or run CLI commands. For that, deploy the Ceph toolbox as follows:
kubectl create -f toolbox.yaml

# Enter the ceph tools container (the pod name suffix will differ in your cluster)
kubectl exec -it pod/rook-ceph-tools-545f46bbc4-qtpfl -n rook-ceph bash

# Check ceph status
ceph status

# Deployment is now complete. Check the pods in the rook-ceph namespace: there should be operator, mgr, agent, discover, mon, osd, and tools pods, with the osd-prepare pods in Completed status and the rest Running.


# There are several ways to expose the dashboard; pick whichever suits you:
https://github.com/rook/rook/blob/master/Documentation/ceph-dashboard.md

# After cluster.yaml has been applied, Rook automatically creates the Ceph dashboard pod and service. The dashboard service defaults to ClusterIP, so change it to NodePort to expose the service externally:
kubectl edit svc rook-ceph-mgr-dashboard -n rook-ceph
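In the editor, change the service's `spec.type` from ClusterIP to NodePort. The relevant fragment of the service spec looks roughly like this (a sketch: the `nodePort` value is illustrative and simply matches the port used in the access URL later in this article; the port name may differ in your version):

```yaml
# Fragment of the rook-ceph-mgr-dashboard Service after editing
spec:
  type: NodePort            # changed from ClusterIP
  ports:
    - name: https-dashboard
      port: 8443            # default dashboard port with SSL enabled
      targetPort: 8443
      nodePort: 32111       # illustrative external port
```

Saving and exiting the editor applies the change; the dashboard then becomes reachable on that port of any node.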



4. Access the Ceph web dashboard


Access URL; note that it must be https, plain http will not work:
https://192.168.10.215:32111/#/dashboard

The default username is
admin

To retrieve the password, run the following command:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
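The secret stores the password base64-encoded, which is why the command above pipes the jsonpath output through `base64 --decode`. A minimal local illustration of that decoding step, using a made-up value rather than a real dashboard password:

```shell
# Illustrative only: encode and then decode a made-up password,
# the same transformation the dashboard-password command performs.
encoded=$(printf 'p@ssw0rd' | base64)
printf '%s' "$encoded" | base64 --decode && echo
# prints: p@ssw0rd
```

The trailing `&& echo` only adds a newline, since the decoded secret does not end with one.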

On the Ceph Dashboard home page, click the gear icon to change the admin password.


5. Using the Ceph distributed storage

  • RBD
1. Install the RBD StorageClass plugin:
kubectl apply -f /opt/k8s-install-tool/rook-ceph/rook/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml
2. Check the result of creating the RBD StorageClass:
kubectl get storageclasses.storage.k8s.io


3. Create a PVC and specify storageClassName as rook-ceph-block.
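A minimal PVC sketch for the RBD case (the claim name and size are arbitrary examples; note the ReadWriteOnce access mode, since RBD volumes can only be mounted read-write by a single node):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-demo-pvc          # hypothetical name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce           # RBD supports single-node read-write only
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
```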


  • CephFS installation and use
1. Install the CephFS metadata/data pools and the StorageClass plugin:

kubectl apply -f /opt/k8s-install-tool/rook-ceph/rook/cluster/examples/kubernetes/ceph/filesystem.yaml
kubectl apply -f /opt/k8s-install-tool/rook-ceph/rook/cluster/examples/kubernetes/ceph/csi/cephfs/storageclass.yaml

2. The MDS is deployed as pods in the rook-ceph namespace; there will be two of them:
kubectl -n rook-ceph get pod -l app=rook-ceph-mds

NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-6b9cc74d4d-tgvv6   1/1     Running   0          14m
rook-ceph-mds-myfs-b-6b885f5884-qw8tk   1/1     Running   0          14m

3. Check the result of creating the StorageClass:
kubectl get storageclasses.storage.k8s.io

NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   18h
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   13m

4. CephFS is used just like RBD: specify the appropriate storageClassName value. Note that RBD only supports ReadWriteOnce, while CephFS supports ReadWriteMany.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: "airflow-service-log-pvc"
spec:
  accessModes:
    #- ReadWriteOnce
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-cephfs
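A pod can then mount the claim as a volume. A hedged sketch, assuming the PVC above; pod and image names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-demo-pod       # hypothetical name
  namespace: default
spec:
  containers:
    - name: app
      image: busybox:1.32     # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: log-volume
          mountPath: /data/logs
  volumes:
    - name: log-volume
      persistentVolumeClaim:
        claimName: airflow-service-log-pvc   # the PVC created above
```

Because the claim is ReadWriteMany, several pods on different nodes can mount it simultaneously, which is what makes CephFS suitable for shared logs like this.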

Key points:

The three PV access modes:
ReadWriteOnce (RWO): can be mounted read-write by a single node only
ReadOnlyMany (ROX): can be mounted read-only by multiple nodes simultaneously
ReadWriteMany (RWX): can be mounted read-write by multiple nodes simultaneously

PV reclaim policies:
Retain: leave the volume as-is; an administrator reclaims it manually
Recycle: reclaim the space by deleting all files; supported only by NFS and hostPath
Delete: delete the storage volume; supported only by some cloud storage backends
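The StorageClasses created earlier both report RECLAIMPOLICY Delete, meaning the backing Ceph volume is removed when its PVC is deleted. If data must survive PVC deletion, the StorageClass can instead be created with a Retain policy. A fragment only, for illustration; the CSI `parameters` and secret references from the shipped storageclass.yaml are omitted here and would still be required:

```yaml
# StorageClass fragment; only the reclaimPolicy line differs from the default
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
reclaimPolicy: Retain       # keep the backing volume when the PVC is deleted
```

Note that `reclaimPolicy` is immutable on an existing StorageClass, so it must be set before PVCs are created against it.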

Origin blog.51cto.com/14034751/2542998