Ceph is a very convenient choice for shared storage in Kubernetes. It is a long-established distributed storage system, mature and powerful, and it supports three storage interfaces: block storage, file system storage, and object storage. So how do you use Ceph in Kubernetes?
Rook is an open source cloud-native storage orchestrator that provides a platform, framework, and support for integrating various storage solutions natively into cloud-native environments. By automating deployment, bootstrapping, configuration, provisioning, scaling, upgrades, migration, disaster recovery, monitoring, and resource management, Rook turns storage software into self-managing, self-scaling, and self-healing storage services. Under the hood, Rook relies on the container management, scheduling, and orchestration capabilities of the cloud-native platform itself to deliver these features.
Rook integrates deeply into the cloud-native environment through extension mechanisms, providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and more. Rook currently supports Ceph, NFS, Minio Object Store, and CockroachDB.
① Prepare the images
Since the images are hosted abroad, they cannot be pulled directly from within China. You need to pull mirrored copies and re-tag them in advance, as follows:
cd rook/deploy/examples/
# (registry.aliyuncs.com/google_containers/<image>:<tag>); I pulled the last four images through a proxy
docker pull registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.5.1
docker tag registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.5.1 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1
docker pull registry.aliyuncs.com/google_containers/csi-snapshotter:v6.1.0
docker tag registry.aliyuncs.com/google_containers/csi-snapshotter:v6.1.0 registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
docker pull registry.aliyuncs.com/google_containers/csi-attacher:v4.0.0
docker tag registry.aliyuncs.com/google_containers/csi-attacher:v4.0.0 registry.k8s.io/sig-storage/csi-attacher:v4.0.0
docker pull registry.aliyuncs.com/google_containers/csi-resizer:v1.6.0
docker tag registry.aliyuncs.com/google_containers/csi-resizer:v1.6.0 registry.k8s.io/sig-storage/csi-resizer:v1.6.0
docker pull registry.aliyuncs.com/google_containers/csi-provisioner:v3.3.0
docker tag registry.aliyuncs.com/google_containers/csi-provisioner:v3.3.0 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
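The five pull/tag pairs above can be generated with one loop instead of being typed out one by one. This sketch only prints the commands so you can review them first; pipe the output to `sh` to actually run them (image versions match the ones used in this guide):

```shell
# Generate the docker pull/tag command pairs for all five CSI images.
MIRROR=registry.aliyuncs.com/google_containers
UPSTREAM=registry.k8s.io/sig-storage
CMDS=""
for img in csi-node-driver-registrar:v2.5.1 csi-snapshotter:v6.1.0 \
           csi-attacher:v4.0.0 csi-resizer:v1.6.0 csi-provisioner:v3.3.0; do
  CMDS="$CMDS
docker pull $MIRROR/$img
docker tag  $MIRROR/$img $UPSTREAM/$img"
done
# Review the generated commands; pipe to `sh` to execute them.
printf '%s\n' "$CMDS"
```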
② Deploy Rook Operator
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# Check
kubectl -n rook-ceph get pod
③ Deploy the Ceph cluster
Once the Rook Operator pod is in the Running state, you can create the Ceph cluster. To ensure the cluster survives node restarts, make sure the dataDirHostPath attribute is set to a valid host path:
cd rook/deploy/examples
kubectl apply -f cluster.yaml
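Creating the cluster can take several minutes, since the operator brings up mon, mgr, and osd pods in stages. A quick way to watch progress (the CephCluster resource name `rook-ceph` below assumes the stock cluster.yaml; adjust if you renamed it):

```shell
# Watch the pods come up; mon, mgr, and osd pods are created in stages.
kubectl -n rook-ceph get pod -w

# Check overall cluster state; PHASE should eventually be Ready.
kubectl -n rook-ceph get cephcluster rook-ceph
```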
④ Deploy the Rook Ceph tool
cd rook/deploy/examples
kubectl create -f toolbox.yaml
⑤ Deploy Ceph Dashboard
cd rook/deploy/examples
kubectl apply -f dashboard-external-https.yaml
# Get the dashboard admin password
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}"| base64 -d
View Ceph cluster status through Ceph Dashboard:
# Check the externally exposed port
kubectl get svc -n rook-ceph
⑥ Check the deployment
kubectl get pods,svc -n rook-ceph
⑦ View the Ceph cluster status through the toolbox pod
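A minimal sketch of checking the cluster from inside the toolbox (the deployment name `rook-ceph-tools` comes from the stock toolbox.yaml; adjust if yours differs):

```shell
# Open a shell inside the toolbox pod
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# Inside the toolbox, the usual Ceph commands are available:
ceph status        # overall health, mon quorum, mgr, osd summary
ceph osd status    # per-OSD state and usage
ceph df            # pool and raw capacity usage
```

If everything is healthy, `ceph status` should report HEALTH_OK.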