k8s: Using GlusterFS for Persistent Storage

1. Creating the GlusterFS Cluster

First, pick a few hosts for GlusterFS storage. Three hosts are used here:
10.244.0.10
10.244.0.11
10.244.0.12


Install GlusterFS

The installation steps are as follows:

  1. Install the Gluster repo
yum install centos-release-gluster -y
  2. Install the GlusterFS components
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
  3. Create the glusterfs directory
mkdir /opt/glusterd
  4. Change the glusterd working directory
sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
  5. Start glusterfs
systemctl start glusterd.service
  6. Enable it at boot
systemctl enable glusterd.service
  7. Check the status
systemctl status glusterd.service
  8. Configure hosts
 > vi /etc/hosts

10.244.0.10 glusterfs1
10.244.0.11 glusterfs2
10.244.0.12 glusterfs3
  9. Create the storage directory
mkdir /opt/gfs_data
  10. Add the nodes to the cluster; on any one host, probe the other two
gluster peer probe glusterfs2
gluster peer probe glusterfs3
  11. Check the cluster status
> gluster peer status
Number of Peers: 2

Hostname: glusterfs2
Uuid: f255xxxx
State: Peer in Cluster (Connected)

Hostname: glusterfs3
Uuid: 428xxxx
State: Peer in Cluster (Connected)
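As a side note, the sed one-liner in step 4 rewrites every `var/lib` occurrence in glusterd.vol to `opt`, which is what moves the working directory to /opt/glusterd. The substitution can be sanity-checked on a scratch copy first (the sample volfile line below is illustrative, not the full file):

```shell
# Write a scratch file containing a representative glusterd.vol line (illustrative content)
printf 'volume management\n    option working-directory /var/lib/glusterd\nend-volume\n' > /tmp/glusterd.vol.sample

# Same substitution as in step 4, applied to the scratch copy instead of the real file
sed -i 's/var\/lib/opt/g' /tmp/glusterd.vol.sample

grep working-directory /tmp/glusterd.vol.sample
# → "    option working-directory /opt/glusterd"
```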
Configure the GlusterFS volume

  1. Create a distributed volume
gluster volume create k8s-volume transport tcp glusterfs1:/opt/gfs_data glusterfs2:/opt/gfs_data glusterfs3:/opt/gfs_data force

  2. Start the distributed volume
gluster volume start k8s-volume
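Note that the command above creates a plain distributed volume: each file lives on exactly one brick, so losing a node loses that node's data. If redundancy is wanted, GlusterFS can create a replicated volume over the same bricks instead; a command sketch (not from the original article, same bricks as above):

```shell
gluster volume create k8s-volume replica 3 transport tcp \
  glusterfs1:/opt/gfs_data glusterfs2:/opt/gfs_data glusterfs3:/opt/gfs_data force
```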

2. Integrating GlusterFS with Kubernetes

  1. Install the glusterfs client on every k8s node, and configure hosts
> yum install -y glusterfs glusterfs-fuse

> vi /etc/hosts
10.244.0.10 glusterfs1
10.244.0.11 glusterfs2
10.244.0.12 glusterfs3
  2. Configure the endpoints
    An official example endpoints file is available at: [ https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/glusterfs-endpoints.json ]
    It contains the section below; set each address to a GlusterFS node IP, and set port to any available port between 1 and 65535
 "subsets": [
    {
      "addresses": [{ "ip": "10.240.106.152" }],
      "ports": [{ "port": 1 }]
    },
    {
      "addresses": [{ "ip": "10.240.79.157" }],
      "ports": [{ "port": 1 }]
    }
  ]
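Adapted to the three nodes used in this article, a complete endpoints manifest could look like the following (a YAML sketch, not the official file; the IPs are this article's nodes, and port 1 is an arbitrary legal port as noted above):

```yaml
# Hypothetical glusterfs-endpoints.yaml adapted to this article's node IPs
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.244.0.10
    ports:
      - port: 1
  - addresses:
      - ip: 10.244.0.11
    ports:
      - port: 1
  - addresses:
      - ip: 10.244.0.12
    ports:
      - port: 1
```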
  3. Create the endpoints
 > kubectl create -f glusterfs-endpoints.json
 > kubectl get endpoints
 NAME                     ENDPOINTS                           AGE
glusterfs-cluster        10.244.0.10:1,10.244.0.11:1990   7h35m
  4. Create a service for the endpoints.
    The template is available at:
    [ https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/glusterfs-service.json ]
> kubectl create -f glusterfs-service.json
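For reference, the official service template boils down to a Service with the same name as the endpoints object and a matching port; a YAML sketch (the shared name glusterfs-cluster is what links the Service to the manually created Endpoints):

```yaml
# Hypothetical glusterfs-service.yaml; the name must match the Endpoints object
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1
```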
  5. Create a pod for testing
    An example is available at: [ https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/glusterfs-pod.json ]

The relevant section of the pod spec:

"volumes": [
  {
    "name": "glusterfsvol",
    "glusterfs": {
      "endpoints": "glusterfs-cluster",
      "path": " k8s-volume",
      "readOnly": true
    }
  }
]

endpoints: the name of the endpoints object created earlier.
path: the GlusterFS volume name.
readOnly: set to true/false to make the mount read-only or read-write.
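Putting the pieces together, a complete test pod along the lines of the official example could look like this (a YAML sketch; the image and mount path are assumptions, with /mnt/glusterfs chosen to match the mount output shown in the verification step):

```yaml
# Hypothetical glusterfs-pod.yaml mounting the k8s-volume GlusterFS volume
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs
spec:
  containers:
    - name: glusterfs
      image: nginx              # any long-running image works; nginx is an assumption
      volumeMounts:
        - name: glusterfsvol
          mountPath: /mnt/glusterfs
  volumes:
    - name: glusterfsvol
      glusterfs:
        endpoints: glusterfs-cluster
        path: k8s-volume
        readOnly: true
```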
  6. Verify

> kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
glusterfs        1/1       Running   0          3m
> kubectl exec glusterfs -- mount | grep gluster
10.244.0.10:k8s-volume on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

You may see an error like the following:


 the following error information was pulled from the glusterfs log to help diagnose this issue: 
[2019-07-04 04:12:08.856792] E [MSGID: 100026] [glusterfsd.c:2351:glusterfs_process_volfp] 0-: failed to construct the graph
[2019-07-04 04:12:08.856967] E [graph.c:1142:glusterfs_graph_destroy] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x532) [0x55fc018ea332] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x150) [0x55fc018e3e60] -->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) [0x7f506a7ef1e4] ) 0-graph: invalid argument: graph [Invalid argument]
  Warning  FailedMount  4s  kubelet, worker2  Unable to mount volumes for pod "glusterfs_jx(bd61f071-9e11-11e9-a889-00163e0cdcc7)": timeout expired waiting for volumes to attach or mount for pod "jx"/"glusterfs". list of unmounted volumes=[glusterfsvol]. list of unattached volumes=[glusterfsvol default-token-bt9rk]

The cause is a version mismatch between the glusterfs client on the k8s nodes and the GlusterFS server cluster; aligning the versions resolves it.

3. Creating the PV and PVC

  1. Create the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: glusterfs
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false

capacity: sets the PV's capacity to 1Gi.
accessModes: sets the access mode to ReadWriteOnce. Three access modes are supported: ReadWriteOnce means the PV can be mounted read-write by a single node; ReadOnlyMany means the PV can be mounted read-only by many nodes; ReadWriteMany means the PV can be mounted read-write by many nodes.
persistentVolumeReclaimPolicy: sets the PV's reclaim policy to Recycle. Three policies are supported: Retain means the PV must be reclaimed manually; Recycle scrubs the data on the PV, roughly equivalent to running rm -rf /volumename/*; Delete removes the corresponding storage asset on the storage provider.
storageClassName: sets the PV's class to glusterfs. This effectively labels the PV with a category, and a PVC can request a PV of a specific class.

> kubectl apply -f pv.yml 
persistentvolume/gluster-volume created
> kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
gluster-volume   1Gi        RWO            Recycle          Available           glusterfs               9s

A STATUS of Available means the PV is ready and can be claimed by a PVC.

  2. Create the PVC
    The PVC only needs to specify the desired PV's capacity, access mode, and class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs

Create the PVC and check the status of the PV and PVC:

> kubectl apply -f pvc.yml 
persistentvolumeclaim/glusterfs-nginx created
> kubectl get pvc
NAME                     STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-nginx          Bound     gluster-volume   1Gi        RWO            glusterfs      9s
tufted-sasquatch-mysql   Pending                                                             8d
> kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
gluster-volume   1Gi        RWO            Recycle          Bound    jx/glusterfs-nginx   glusterfs               37m

As shown above, the PVC is now bound to the PV.
A pod can now use the PVC.
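For example, a hypothetical nginx pod consuming the glusterfs-nginx PVC might be declared as follows (pod name, image, and mount path are illustrative assumptions, not from the original article):

```yaml
# Hypothetical pod that mounts the PVC created above
apiVersion: v1
kind: Pod
metadata:
  name: nginx-glusterfs
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: web-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: web-data
      persistentVolumeClaim:
        claimName: glusterfs-nginx
```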


Reposted from blog.csdn.net/weixin_44723434/article/details/94619424