KubeSphere uses GlusterFS as persistent storage


Tip: In Kubernetes, GlusterFS storage can be provisioned either statically or dynamically, and the two setups are quite different. For static provisioning you simply attach a disk, create an LVM volume and a file system, mount it, install a GlusterFS cluster, create a volume, and consume it from Kubernetes. For dynamic provisioning you need the Heketi service in front of GlusterFS: Heketi manages and creates the GlusterFS volumes, and the Kubernetes storage class simply points at Heketi. Note that Heketi requires the disks on the GlusterFS servers to be raw block devices; they must not already contain a file system or LVM, which is a bit of a trap, so keep this in mind.
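
For reference, a minimal sketch of what the static path looks like on the Kubernetes side, assuming a gluster volume named myvolume has already been created by hand and using the three server IPs from this article (the names glusterfs-cluster, gluster-static-pv, myvolume and the 2Gi size are illustrative assumptions, not part of the original setup):

cat <<EOF | kubectl apply -f -
#Endpoints listing the glusterfs servers; the name must match the PV's "endpoints" field
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.54.52
  - ip: 192.168.54.53
  - ip: 192.168.54.54
  ports:
  - port: 24007
---
#Static PV that points at the existing, manually created gluster volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-static-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: myvolume        #name of the manually created gluster volume (assumed)
    readOnly: false
EOF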

1. Introduction

Environment: CentOS 7.9, Kubernetes v1.22.12
GlusterFS documentation: https://docs.gluster.org/en/latest
Note that the in-tree glusterfs plugin was deprecated in Kubernetes v1.25; for details, see the Kubernetes storage support matrix.

1. Host preparation

  Prepare three virtual or physical machines to act as GlusterFS storage servers, and attach a separate disk on each for data, because GlusterFS does not recommend sharing the root file system.
Here I use three hosts:

192.168.54.52
192.168.54.53
192.168.54.54

Configure the host names, turn off the firewall, and disable SELinux on all three nodes. These steps are basic and are not covered in detail here.
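
For reference, a typical sequence looks roughly like this (the host names gluster2 and gluster3 for the other two nodes are assumptions; run the equivalent on every node):

#set the host name (use gluster2 / gluster3 on the other nodes)
hostnamectl set-hostname gluster1
#stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
#disable selinux immediately and after reboot
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config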

2. Prepare disk

Tip: each of the three storage nodes must have an unformatted disk attached.
  For a virtual machine, simply attach an additional virtual disk; the cloud server I use lets me attach one directly. Be careful not to perform any operation on the disk first. After attaching it, run the following command to check:

sudo lsblk

  You can see the new device vdb in the output.

2. Install glusterfs server

1. Configure glusterfs yum source

#Configure the glusterfs yum repository
cat > /etc/yum.repos.d/glusterfs.repo <<EOF
[glusterfs]
name=glusterfs
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-9/
enabled=1
gpgcheck=0
EOF

2. Install gluster service

#Install glusterfs-server
yum install glusterfs-server

3. Start the service and enable it at boot

systemctl start glusterd.service
systemctl enable  glusterd.service
systemctl status  glusterd.service

4. GlusterFS ports

#The gluster daemon listens on port 24007. Every logical volume that is created starts its own glusterfsd process and port;
#the first volume uses port 49152 by default, the second volume 49153, and so on.
[root@gluster1 /]# netstat  -lntup | grep gluster
tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      12946/glusterfsd    
tcp        0      0 0.0.0.0:49153           0.0.0.0:*               LISTEN      15586/glusterfsd    
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      3503/glusterd  

3. Install the Heketi service (Heketi is required for k8s to dynamically provision glusterfs storage)

  Heketi provides a RESTful API for the glusterfs cluster and manages and controls the cluster through that API.

  The official Kubernetes documentation states that when glusterfs is used as the backend for dynamic provisioning, the storage class must point at the Heketi endpoint rather than at glusterfs directly, so we need to install the Heketi service.

  Heketi can be installed in three ways: inside an OpenShift cluster, standalone, or inside Kubernetes. We use the standalone installation.

1. Heketi installation

  1. Online installation: install the heketi server and client tools with yum. Heketi lives in the same glusterfs repository; if that repository is not configured yet, refer to the glusterfs yum source above.
yum install heketi heketi-client
  2. Offline installation:
    Refer to the official website, download the heketi binary package, and configure the service so that it is managed and started by systemd (a sketch follows below).
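
A minimal sketch of such a unit file, assuming the heketi binary was unpacked to /usr/local/bin/heketi and that it reads the same /etc/heketi/heketi.json configured below (paths may differ for your package):

cat > /usr/lib/systemd/system/heketi.service <<EOF
[Unit]
Description=Heketi Server
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload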

2. Create a heketi user and configure passwordless SSH login

  Create a user for passwordless login, because the official documentation requires the Heketi host to be able to SSH into the glusterfs hosts without a password. Here I do not create a dedicated heketi user but use the root user directly. A non-root user would need sudo permissions.

#Using the root user for passwordless login is recommended
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.54.52
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.54.53
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.54.54

3. Modify heketi configuration file

vim /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "7080",      #Heketi service port; the default is 8080 and it can be customized

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,    #whether to use JWT authorization; set it to true

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "abc.123456"     #password of the admin user
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "abc.123456"     #password of the ordinary user
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",    #set the command executor to ssh

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {                        #the executor is ssh, so adjust this block
      "keyfile": "/root/.ssh/id_rsa",      #ssh private key
      "user": "root",           #ssh user; I use the root user
      "port": "22",             #ssh port
      "fstab": "/etc/fstab"     #fstab file that stores the mount points; keep the default
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {                       #the kubernetes executor is not used, so no changes needed
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",  #heketi database file; keep the default

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "warning"             #log level
  }
}

4. Start the heketi service

  Change the user that runs the heketi service. The default is the heketi user, which is created automatically by the yum installation, but since we SSH as root, the service also needs to run as root.

vim /usr/lib/systemd/system/heketi.service
User=root	#change this to root

Start it up:

systemctl daemon-reload 
systemctl start  heketi
systemctl enable heketi
systemctl status heketi

Test it; if you see the following output, the Heketi service has started normally.

curl http://192.168.54.52:7080/hello
Hello from Heketi

5. Configure the environment variables of the heketi-cli client tool

echo 'export HEKETI_CLI_SERVER=http://192.168.54.52:7080' >> ~/.bash_profile	#set the environment variable permanently
echo 'export HEKETI_CLI_USER=admin'  >> ~/.bash_profile							#set the environment variable permanently
echo 'export HEKETI_CLI_KEY=abc.123456'  >> ~/.bash_profile						#set the environment variable permanently
source  ~/.bash_profile															#take effect immediately

6. Set up the Heketi topology file

  The official website says that Heketi must be provided with information about the system topology. This allows Heketi to decide which nodes, disks and clusters to use. https://github.com/heketi/heketi/blob/master/docs/admin/topology.md

  You can use the command line client to create a cluster, then add nodes to the cluster, and then add disks to each node. If using the command line, this process can be quite tedious. Therefore, the command line client supports loading this information into Heketi using a topology file that describes the cluster, cluster nodes, and disk information on each node.

  Write a topology file: a JSON file that describes the clusters, nodes and disks to be added to Heketi, and save it as /etc/heketi/topology.json (the path used in the load command below). It is based on the example from the official website:

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.54.52"
                            ],
                            "storage": [
                                "192.168.54.52"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": true
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.54.53"
                            ],
                            "storage": [
                                "192.168.54.53"
                            ]
                        },
                        "zone": 2
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": true
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.54.54"
                            ],
                            "storage": [
                                "192.168.54.54"
                            ]
                        },
                        "zone": 2
                    },
                    "devices": [
                        {
                            "name": "/dev/vdb",
                            "destroydata": true
                        }
                    ]
                }
            ]
        }
    ]
}

Load the glusterfs nodes into Heketi through the topology file:

[root@gluster1]# heketi-cli topology load --json=/etc/heketi/topology.json --server=http://192.168.54.52:7080 --user=admin --secret=abc.123456
	Found node 192.168.54.52 on cluster f5dca07fd4e2edbe2e0b0ce5161a1cf7
		Adding device /dev/vdb ... OK
	Found node 192.168.54.53 on cluster f5dca07fd4e2edbe2e0b0ce5161a1cf7
		Adding device /dev/vdb ... OK
	Found node 192.168.54.54 on cluster f5dca07fd4e2edbe2e0b0ce5161a1cf7
		Found device /dev/vdb

  Check the information. Because the heketi-cli environment variables were added to /root/.bash_profile earlier, there is no need to pass parameters such as --server here.

heketi-cli topology info

7. Test creating a storage volume

heketi-cli volume create --size=2 --replica=3

output

Name: vol_aa8a1280b5133a36b32cf552ec9dd3f3
Size: 2
Volume Id: aa8a1280b5133a36b32cf552ec9dd3f3
Cluster Id: b89a07c2c6e8c533322591bf2a4aa613
Mount: 192.168.54.52:vol_aa8a1280b5133a36b32cf552ec9dd3f3
Mount Options: backup-volfile-servers=192.168.54.52,192.168.54.53
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
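
Since this volume was only created as a test, it can be deleted again using the Volume Id from the output above, for example:

heketi-cli volume delete aa8a1280b5133a36b32cf552ec9dd3f3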

4. KubeSphere uses glusterfs as back-end storage (dynamic provisioning of glusterfs storage)

To install KubeSphere, follow the tutorial on the KubeSphere official website.
Tip: after repeated testing, when gluster storage was configured entirely through the KubeSphere UI, the persistent volume claim (PVC) stayed in the Pending state and could never be mounted. Here the storage class is therefore created directly with kubectl commands, and only the PVC is declared through the KubeSphere UI; with this combination the volume mounts successfully and data is persisted into the gluster cluster. This may be a problem with my method; I will update the article once further testing succeeds. For now this article records the working steps so that fewer coders fall into the same trap.

Proceed as follows

  • 1. Create a Secret that stores only the password of the Heketi admin user (via the KubeSphere UI or a YAML file);
  • 2. Create a storage class (using a YAML file);
  • 3. A provisioner: Kubernetes has a built-in glusterfs Provisioner driver, so there is no need to create one separately;
  • 4. Create a PVC (using KubeSphere);
  • 5. Create a Deployment whose pods use the PVC for data persistence (using KubeSphere).

1. Create a secret and store only the password

The password must be base64-encoded; any online Base64 tool works, or simply run echo -n "abc.123456" | base64.

vim heketi-secret.yaml			#create a secret that stores the admin password of the heketi service
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
data:
  # base64 encoded password. E.g.: echo -n "abc.123456" | base64
  key: YWJjLjEyMzQ1Ng==		#this is the heketi admin password abc.123456, the password only
type: kubernetes.io/glusterfs
#create the secret
kubectl apply -f heketi-secret.yaml	

2. Create storage class

Refer to the official Kubernetes documentation for the glusterfs storage class parameters.
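
The clusterid required in the parameters below can be read from Heketi first, for example:

#list the id(s) of the cluster(s) managed by heketi
heketi-cli cluster list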

vim glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storageclass                      #name of the storage class
provisioner: kubernetes.io/glusterfs                #provisioner of the storage class; this driver is built into k8s
reclaimPolicy: Retain                               #how the PV is handled when the PVC is deleted; here it is retained
volumeBindingMode: Immediate                        #volume binding mode; the PVC is bound to a PV as soon as it is created
allowVolumeExpansion: true                          #whether volume expansion is allowed; must be true to expand a PVC
parameters:                                         #glusterfs parameters
  resturl: "http://192.168.54.52:7080"              #address and port of the heketi service
  clusterid: "0cb591249e2542d0a73c6a9c8351baa2"     #cluster id, shown by running heketi-cli cluster list on the heketi server
  restauthenabled: "true"                           #Gluster REST service authentication boolean, enables authentication to the REST server
  restuser: "admin"                                 #heketi admin user
  secretNamespace: "kube-system"                    #namespace of the secret
  secretName: "heketi-secret"                       #secret that stores the login password of the heketi admin user
  volumetype: "replicate:3"                         #volume type, three replicas, commonly used in production
#create the storage class
kubectl apply -f glusterfs-storageclass.yaml

To view the storage classes, use the following command. If creation succeeded, the corresponding storage class is also visible in the KubeSphere interface.

[root@master /]# kubectl  get sc
NAME                          PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
glusterfs (default)           kubernetes.io/glusterfs   Delete          Immediate              true                   5h27m
glusterfs-storageclass        kubernetes.io/glusterfs   Retain          Immediate              true                   4h21m
glusterfs-test-storageclass   kubernetes.io/glusterfs   Retain          Immediate              true                   3h44m
local (default)               openebs.io/local          Delete          WaitForFirstConsumer   false                  20d

Next, you can define the persistent volume claim (PVC) through the KubeSphere interface.
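
If you prefer the command line instead of the UI, a minimal sketch of such a claim (the PVC name glusterfs-pvc, the namespace and the requested size are examples) is:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-storageclass    #the storage class created above
  resources:
    requests:
      storage: 2Gi
EOF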

Create a workload, selecting the storage claim you created earlier.
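
A minimal sketch of such a workload, assuming the example PVC glusterfs-pvc from above and an nginx image (the names and image are placeholders, not the original deployment):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gluster-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gluster-demo
  template:
    metadata:
      labels:
        app: gluster-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html    #data written here is persisted to glusterfs
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: glusterfs-pvc            #the PVC created in the previous step
EOF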


5. Explore where the log data is stored

Checking the mounts on any storage node, we find the following:

[root@gluster1 /]# cat /etc/fstab

The brick mounts have already been added automatically.

[root@gluster1 /]# lsblk

There are many more partitions than before.
/dev/vdb was originally an unformatted bare disk, but mount points now appear on it. After entering a mount point, I found that the files have been persisted into the glusterfs file system, and the other nodes hold the same data as well.
Redeploying the workload with the same storage claim (PVC), the data still exists. This solution has basically been verified. Perfect!
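
To see which gluster volume and bricks back a given persistent volume, the gluster CLI can be used on any storage node, for example:

#list the volumes that heketi has created
gluster volume list
#show the brick paths and replica layout of one volume (replace vol_<id> with a real volume name)
gluster volume info vol_<id>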

6. Summary

  1. Kubernetes deprecated glusterfs in v1.25. Is glusterfs still popular?
  2. When Kubernetes dynamically provisions glusterfs storage, Heketi requires the disk to be a bare disk, so we never create the file system ourselves. Where do the files end up, and what is the glusterfs logic behind data placement?
  3. Heketi requires the disk to be a bare disk. Is an already formatted disk really unusable? Isn't that a waste of hardware resources?

Reference articles, with special thanks to:
https://zhuanlan.zhihu.com/p/495108133
https://www.likecs.com/show-306086916.html
https://blog.csdn.net/MssGuo/article/details/128409865
