Using Ceph-backed dynamic volumes in k8s

Dynamic volume provisioning in Kubernetes with Ceph RBD

1. Environment overview:

  This walkthrough demonstrates how to use an existing Ceph cluster as the backend for dynamically provisioned persistent volumes (PVs) in k8s. It assumes your environment already has a working Ceph cluster.

2. Configuration steps:

1. Install the ceph-common package on all k8s nodes

yum install -y ceph-common
# Install ceph-common on every k8s node, both master and worker nodes
# If there are many k8s nodes, Ansible can be used to install it on all of them:
ansible kube-master -m copy -a "src=ceph.repo backup=yes dest=/etc/yum.repos.d"
ansible kube-master -m yum -a "name=ceph-common state=present"
ansible kube-node -m copy -a "src=ceph.repo backup=yes dest=/etc/yum.repos.d"
ansible kube-node -m yum -a "name=ceph-common state=present"
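
To confirm the package actually landed everywhere, a quick ad-hoc check can be run afterwards. This is only a sketch, assuming the same kube-master and kube-node inventory groups used above:

# Each node should report its installed Ceph client version
ansible kube-master,kube-node -m command -a "ceph --version"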

2. Create a pool for dynamic volumes
  On the Ceph admin node, create a pool named kube:

ceph osd pool create kube 1024
[root@k8sdemo-ceph1 cluster]# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    3809G     3793G       15899M          0.41
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0           0         0         1196G           0
    k8sdemo     1           0         0         1196G           0
    kube        2      72016k         0         1196G          30
[root@k8sdemo-ceph1 cluster]# cd /cluster
# Create a keyring for the client.kube user that k8s will use to authenticate
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
[root@k8sdemo-ceph1 cluster]# ls ceph.client.kube.keyring
ceph.client.kube.keyring
[root@k8sdemo-ceph1 cluster]#
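
On Ceph Luminous (12.x) and newer, a pool should also be tagged with the application that will use it, otherwise ceph health reports a warning. This step is not shown in the original session and is only needed if your cluster runs one of those releases:

# Tag the kube pool for use by RBD (Ceph Luminous and later only)
ceph osd pool application enable kube rbd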

3. Create a secret for the Ceph admin key in the k8s cluster

[root@k8sdemo-ceph1 cluster]# ceph auth get-key client.admin | base64
QVFEdkJhZGN6ZW41SFJBQUQ5RzNJSVU0djlJVXRRQzZRZjBnNXc9PQ==
[root@k8sdemo-ceph1 cluster]#
# Generate the base64-encoded key on one of the Ceph MON nodes with: ceph auth get-key client.admin | base64, then copy the output and paste it as the value of the key field below

[root@master-01 ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFEdkJhZGN6ZW41SFJBQUQ5RzNJSVU0djlJVXRRQzZRZjBnNXc9PQ==
type: kubernetes.io/rbd
[root@master-01 ceph]#

kubectl apply -f ceph-secret.yaml

[root@master-01 ceph]# kubectl describe secrets -n kube-system ceph-secret
Name:         ceph-secret
Namespace:    kube-system
Labels:       <none>
Annotations:
Type:         kubernetes.io/rbd

Data
====
key:  40 bytes
[root@master-01 ceph]# 
# Dynamic RBD volume provisioning in k8s requires this Ceph admin secret
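
Instead of pasting the base64 string into a YAML file by hand, the same secret can also be created directly with kubectl. A minimal sketch, assuming it is run somewhere that can reach both the k8s API and the Ceph admin keyring:

# kubectl base64-encodes --from-literal values itself, so pass the raw key, not the base64 output
kubectl create secret generic ceph-secret -n kube-system \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)"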

4. Create a secret for the Ceph user key in the k8s cluster

[root@k8sdemo-ceph1 cluster]# ceph auth get-key client.kube | base64
QVFDTks2ZGNjcEZoQmhBQWs4anVvbmVXZnZUeitvMytPbGZ6OFE9PQ==
[root@k8sdemo-ceph1 cluster]#   
# Generate the base64-encoded key on one of the Ceph MON nodes with: ceph auth get-key client.kube | base64, then copy the output and paste it as the value of the key field below
[root@master-01 ceph]# cat ceph-user-secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: kube-system
data:
  key: QVFDTks2ZGNjcEZoQmhBQWs4anVvbmVXZnZUeitvMytPbGZ6OFE9PQ==
type: kubernetes.io/rbd
[root@master-01 ceph]#

kubectl apply -f ceph-user-secret.yaml

[root@master-01 ceph]# kubectl get secrets -n kube-system ceph-user-secret 
NAME               TYPE                DATA   AGE
ceph-user-secret   kubernetes.io/rbd   1      3h45m
[root@master-01 ceph]#

# Dynamic RBD volume provisioning in k8s also requires this Ceph user secret
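
Before moving on, it is worth checking that the restricted client.kube user can really reach the pool. A minimal sketch, assuming the keyring created in step 2 has been copied to /etc/ceph/ on a k8s node and /etc/ceph/ceph.conf there points at the cluster's monitors:

# List images in the kube pool as the kube user (an empty list is fine at this point)
rbd ls kube --id kube --keyring /etc/ceph/ceph.client.kube.keyring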

5. Create a StorageClass for dynamic volumes in the k8s cluster

[root@master-01 ceph]# cat ceph-storageclass.yaml 
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.83.32.224:6789,10.83.32.225:6789,10.83.32.234:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system 
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
[root@master-01 ceph]#
kubectl apply -f ceph-storageclass.yaml
# monitors: addresses and ports of the Ceph MON nodes, comma-separated
# adminId: Ceph client ID that is able to create images in the pool
# adminSecretName: Secret name for adminId; it is required, and the secret must have type kubernetes.io/rbd
# adminSecretNamespace: the namespace for adminSecretName; default is default
# pool: Ceph RBD pool; default is rbd, but that value is not recommended
# userId: Ceph client ID used to map the Ceph RBD image; default is the same as adminId
# userSecretName: the name of the Ceph secret for userId to map the Ceph RBD image; it must exist in the same namespace as the PVCs, and it is required
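
A quick way to confirm that the class exists and was picked up as the cluster default (kubectl usually marks the default class with "(default)" next to its name; output omitted here):

kubectl get storageclass dynamic
kubectl describe storageclass dynamic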

6. Create a persistent volume claim (PVC) in the k8s cluster

  A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, a PVC is bound to a single PV based on these two attributes alone. Once a PV is bound to a PVC, that PV is essentially tied to the PVC's project and cannot be bound to another PVC; PVs and PVCs map one to one. However, multiple pods in the same project can use the same PVC.
  For a PV, accessModes do not enforce access rights; they act as labels used to match a PV with a PVC.

[root@master-01 ceph]# cat ceph-class.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
[root@master-01 ceph]# 
kubectl apply -f ceph-class.yaml 
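
If dynamic provisioning works, the claim should reach the Bound state within a few seconds, backed by an automatically created PV. A quick check (output omitted here):

# ceph-claim should show STATUS Bound, and a dynamically created PV should appear
kubectl get pvc -n kube-system ceph-claim
kubectl get pv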

7. Create a Pod in the k8s cluster that uses the dynamically provisioned Ceph RBD PVC

  The volume name must be identical in the containers and volumes sections of the Pod spec.

[root@master-01 ceph]# cat ceph-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
  namespace: kube-system
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep","60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
[root@master-01 ceph]#
kubectl apply -f  ceph-pod.yaml
[root@master-01 ceph]# kubectl get pods -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE
ceph-pod1                               1/1     Running   0          3h21m

# Exec into the container and check the mount
[root@master-01 ceph]# kubectl exec -it -n kube-system ceph-pod1 -- /bin/sh
/ # df -h|grep busybox
/dev/rbd0                 1.9G      6.0M      1.9G   0% /usr/share/busybox
/ # 
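
To verify that the RBD volume is actually writable and not just mounted, a simple follow-up from the same shell is to write a file under the mount path and read it back (the file name here is arbitrary):

# Run these inside the ceph-pod1 shell opened above
echo "hello rbd" > /usr/share/busybox/test.txt
cat /usr/share/busybox/test.txt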

You are welcome to follow my personal WeChat official account "云时代IT运维", which is updated regularly with the latest application operations articles, covering virtualization and container technology, CI/CD, automated operations, and other current operations trends.

Reposted from blog.51cto.com/zgui2000/2374614