Kubernetes and GlusterFS in detail: building heketi-glusterfs

This article contains:

  • GlusterFS storage volume types explained: creation and use
  • Building GlusterFS storage with gluster-kubernetes

    Foreword

    In traditional operations, an administrator often has to allocate storage in the cluster by hand and then mount it into the application. In recent versions of Kubernetes, dynamic provisioning has been promoted to beta and supports dynamic provisioning for a range of storage services, which makes more efficient use of the storage environment's capacity and provides storage space on demand. This article introduces the dynamic provisioning feature and uses GlusterFS to illustrate how a storage service is connected to Kubernetes.

    Brief introduction

                 ⚠️ Friends already familiar with these concepts can skip this part

    dynamic provisioning:
     Storage is a very important part of container orchestration. Starting with Kubernetes v1.2, the powerful dynamic provisioning feature provides storage to the cluster on demand, and supports many storage backends such as AWS EBS, GCE PD, OpenStack Cinder, Ceph, and GlusterFS. Storage that is not officially supported can be added by writing a plugin.
      Without dynamic provisioning, for a container to use a volume the storage has to be allocated in advance, a process usually done manually by an administrator. With dynamic provisioning, Kubernetes takes the volume size the container asks for and calls the storage service's API to create storage that satisfies the request on the fly.

Storageclass:
 An administrator can configure StorageClasses to describe the types of storage on offer. Taking AWS EBS as an example, the administrator can define two StorageClasses, slow and fast: slow backed by sc1 (magnetic disks) and fast backed by gp2 (SSDs). Applications can then choose between the two according to the performance needs of the business, as in the sketch below.
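
As a minimal sketch (the names and parameters are illustrative, using the in-tree kubernetes.io/aws-ebs provisioner), the two classes could be declared like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: sc1   # throughput-optimized HDD
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2   # general-purpose SSD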

Glusterfs:
 An open-source distributed file system with strong scale-out ability: by adding nodes it can support petabytes of storage capacity and serve thousands of clients. GlusterFS aggregates physically distributed storage resources over a TCP/IP or InfiniBand RDMA network and manages the data under a single global namespace.
⚠️ The biggest design feature of the GlusterFS architecture is that it has no metadata server component; there is no master/slave split among servers, and every node can act as a master.

Heketi:
 Heketi (https://github.com/heketi/heketi) is a RESTful-API-based volume management framework for GlusterFS.
 Heketi integrates easily with cloud platforms: the RESTful API it exposes can be called by Kubernetes, and it can manage volumes across multiple GlusterFS clusters. In addition, heketi ensures that bricks and their replicas are spread across the cluster's different failure zones.

Building GlusterFS storage with gluster-kubernetes

The heketi project recommends building with gluster-kubernetes. In a production environment you can use the scripts that gluster-kubernetes provides directly, which reduces complexity. That is a personal view; opinions will differ.

Environment

  • k8s 1.14.1
  • 4 nodes, each with a spare disk: /dev/vdb
  • 1 master

    Note ⚠️

    1. Deploying a GlusterFS cluster requires at least three Kubernetes worker nodes, and each of those nodes needs at least one spare disk.
    2. Check the loaded kernel modules with lsmod | grep thin; on each Kubernetes node run modprobe dm_thin_pool to load the kernel module, as in the snippet below.
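
A minimal sketch of that check and load, to run on every node (the modules-load.d line is an optional extra, for systemd-based distros, to persist the module across reboots):

# Run on every node that will host GlusterFS pods
lsmod | grep thin || modprobe dm_thin_pool
# Optional: load the module automatically on boot
echo dm_thin_pool > /etc/modules-load.d/dm_thin_pool.conf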

Download the script

git clone https://github.com/gluster/gluster-kubernetes.git
cd xxx/gluster-kubernetes/deploy

Modify topology.json

cp topology.json.sample topology.json
Then edit the nodes' host names (manage), IPs (storage), and data devices to match your environment:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.92"
              ],
              "storage": [
                "10.8.4.92"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.93"
              ],
              "storage": [
                "10.8.4.93"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.131"
              ],
              "storage": [
                "10.8.4.131"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.132"
              ],
              "storage": [
                "10.8.4.132"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        }
      ]
    }
  ]
}
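
Before deploying, it is worth checking that the edited file is still valid JSON; for example (assuming python is available on the master):

python -m json.tool topology.json > /dev/null && echo "topology.json: valid JSON"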

Modify heketi.json.template

{
        "_port_comment": "Heketi Server Port Number",
        "port" : "8080",

        "_use_auth": "Enable JWT authorization. Please enable for deployment",
        "use_auth" : true, #开启用户认证

        "_jwt" : "Private keys for access",
        "jwt" : {
                "_admin" : "Admin has access to all APIs",
                "admin" : {
                        "key" : "adminkey" #管理员密码
                },
                "_user" : "User only has access to /volumes endpoint",
                "user" : {
                        "key" : "userkey" #用户密码
                }
        },

        "_glusterfs_comment": "GlusterFS Configuration",
        "glusterfs" : {

                "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
                "executor" : "${HEKETI_EXECUTOR}",#本文搭建为kubernete方式

                "_db_comment": "Database file name",
                "db" : "/var/lib/heketi/heketi.db", #heketi数据存储

                "kubeexec" : {
                        "rebalance_on_expansion": true
                },

                "sshexec" : {
                        "rebalance_on_expansion": true,
                        "keyfile" : "/etc/heketi/private_key",
                        "port" : "${SSH_PORT}",
                        "user" : "${SSH_USER}",
                        "sudo" : ${SSH_SUDO}
                }
        },

        "backup_db_to_kube_secret": false
}

gk-deploy script overview

./gk-deploy -h    # outline of the options

-g, --deploy-gluster  # deploy GlusterFS itself as pods
-s, --ssh-keyfile     # manage gluster over SSH, e.g. /root/.ssh/id_rsa.pub
--admin-key ADMIN_KEY # set the admin secret
--user-key USER_KEY   # set the user secret
--abort               # tear down the heketi resources
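
For example, a full teardown with the script rather than deleting resources by hand might look like this (same -y/-n flags as the deploy invocation further down; whether -g is needed on abort may vary by script version):

./gk-deploy --abort -y -g -n glusterfs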

vi gk-deploy shows that the script mainly does the following:

  • Create the Kubernetes resources
  • Load the GlusterFS topology (add nodes and devices)
  • Move heketi's database onto GlusterFS storage

⚠️ If you want to dig into what the script does, see https://www.kubernetes.org.cn/3893.html

# add the glusterfs nodes and devices
heketi_cli="${CLI} exec -i ${heketi_pod} -- heketi-cli -s http://localhost:8080 --user admin --secret '${ADMIN_KEY}'"

  load_temp=$(mktemp)
  eval_output "${heketi_cli} topology load --json=/etc/heketi/topology.json 2>&1" | tee "${load_temp}"
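
Expanded, that wrapper is roughly equivalent to running the following inside the bootstrap heketi pod (the pod name here is illustrative):

kubectl -n glusterfs exec -i deploy-heketi-xxxxxxxxxx-xxxxx -- \
  heketi-cli -s http://localhost:8080 --user admin --secret 'adminkey' \
  topology load --json=/etc/heketi/topology.json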

Execute scripts

⚠️ Adding the devices is relatively slow; be patient.

kubectl create ns glusterfs
./gk-deploy -y -n glusterfs -g --user-key=userkey --admin-key=adminkey

Using namespace "glusterfs".
Checking that heketi pod is not running ... OK
serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
node "10.8.4.92" labeled
node "10.8.4.93" labeled
node "10.8.4.131" labeled
node "10.8.4.132" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 4cfe35ce3cdc64b8afb8dbc46cad0e09
Creating node 10.8.4.92 ... ID: 1d323ddf243fd4d8c7f0ed58eb0e2c0ab
Adding device /dev/vdb ... OK
Creating node 10.8.4.93 ... ID: 12df23f339dj4jf8jdk3oodd31ba9e12c52
Adding device /dev/vdb ... OK
Creating node 10.8.4.131 ... ID: 1c529sd3ewewed1286e29e260668a1
Adding device /dev/vdb ... OK
Creating node 10.8.4.132 ... ID: 12ff323cd1121232323fddf9e260668a1
Adding device /dev/vdb ... OK
heketi topology loaded.
Saving heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
deployment "deploy-heketi" deleted
secret "heketi-storage-secret" deleted
service "heketi" created
deployment "heketi" created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://10.10.23.148:8080/
Ready to create and provide GlusterFS volumes.

kubectl get po -o wide -n glusterfs

[root@k8s1-master1 deploy]# export HEKETI_CLI_SERVER=$(kubectl get svc/heketi -n glusterfs --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')
[root@k8s1-master1 deploy]# echo $HEKETI_CLI_SERVER
http://10.0.0.131:8080
[root@k8s1-master1 deploy]# curl $HEKETI_CLI_SERVER/hello
Hello from Heketi
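
Since authentication is enabled, you can also verify the API end-to-end with heketi-cli, run here from inside the heketi pod (the pod name is illustrative; the keys are the ones passed to gk-deploy):

kubectl -n glusterfs exec -ti heketi-549c999b6f-5l8sp -- \
  heketi-cli --user admin --secret adminkey topology info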

Retrying after a failure

kubectl delete -f kube-templates/deploy-heketi-deployment.yaml
kubectl delete -f kube-templates/heketi-deployment.yaml
kubectl delete -f kube-templates/heketi-service-account.yaml
kubectl delete -f kube-templates/glusterfs-daemonset.yaml
# run on every node
rm -rf /var/lib/heketi
rm -rf /var/lib/glusterd

Problem: unable to add the device; try wiping /dev/vdb

# run on every node
dd if=/dev/zero of=/dev/vdb bs=1k count=1
blockdev --rereadpt /dev/vdb
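
Afterwards you can confirm the device looks empty before retrying (standard tools, not from the original write-up):

lsblk -f /dev/vdb                          # should show no filesystem signature
pvs 2>/dev/null | grep vdb || echo "no LVM PV on /dev/vdb"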

Other troubleshooting

Peer connection state

[root@k8s1-master2 ~]# kubectl exec -ti glusterfs-sb7l9 -n glusterfs bash
[root@k8s1-master2 /]# gluster peer status

Number of Peers: 3

Hostname: 10.8.4.93
Uuid: 52824c41-2fce-468a-b9c9-7c3827ed7a34
State: Peer in Cluster (Connected)

Hostname: 10.8.4.131
Uuid: 6a27b31f-dbd9-4de5-aefd-73c1ac9b81c5
State: Peer in Cluster (Connected)

Hostname: 10.8.4.132
Uuid: 7b7b53ff-af7f-49aa-b371-29dd1e784ad1
State: Peer in Cluster (Connected)

Heketi's database volume has been created

[root@k8s1-master2 ~]# kubectl exec -ti glusterfs-sb7l9 -n glusterfs bash
[root@k8s1-master2 /]# gluster volume info

Volume Name: heketidbstorage
Type: Replicate
Volume ID: 02fd891f-dd43-4c1b-a2ba-87e1be7c706f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.8.4.132:/var/lib/heketi/mounts/vg_5634269dc08edd964032871801920f1e/brick_b980d3f5ce7b1b4314c4b57c8aaf35fa/brick
Brick2: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_b375443687051038234e50fe3cd5fe12/brick
Brick3: 10.8.4.92:/var/lib/heketi/mounts/vg_a5d145795d59c51d2335153880049760/brick_e8f9ec722a235448fbf6730c25d7441a/brick
Options Reconfigured:
user.heketi.id: dfed68e6dca82c7cd5911c8ddda7746b
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Creating the StorageClass

vi storageclass-dev-glusterfs.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: glusterfs
data:
  # base64 encoded password. E.g.: echo -n "adminkey" | base64
  key: YWRtaW5rZXk=
type: kubernetes.io/glusterfs
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.8.4.91:42951"
  clusterid: "364a0a72b3343c537c20db5576ffd46c"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "glusterfs"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "none"

Parameter overview

  • resturl: the heketi service address
  • clusterid: obtained by entering the heketi pod (heketi-549c999b6f-5l8sp in this deployment) and running heketi-cli --user admin --secret adminkey cluster list, as in the snippet after this list
  • restauthenabled: whether REST authentication is enabled
  • restuser: the heketi user to authenticate as
  • secretName: the Secret that holds that user's password
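
A sketch of fetching the clusterid (substitute your own pod name):

kubectl -n glusterfs exec -ti heketi-549c999b6f-5l8sp -- \
  heketi-cli --user admin --secret adminkey cluster list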

The parameter that deserves the most attention is volumetype.

  • volumetype

     volumetype : The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it’s up to the provisioner to decide the volume type.
     For example:
    Replica volume: volumetype: replicate:3 where ‘3’ is replica count.
    Disperse/EC volume: volumetype: disperse:4:2 where ‘4’ is data and ‘2’ is the redundancy count.
    Distribute volume: volumetype: none

  • volumetype: disperse:4:2

    A dispersed (erasure-coded) volume with disperse:4:2 really needs six servers, and the author only has four. In an experiment with volumetype: disperse:4:1, the PV was not created automatically, but creating the volume manually succeeded; you can enter a glusterfs pod (glusterfs-5jzdh here) and run the commands below. Pay attention to the Type in the output.

gluster volume create gv1 disperse 4 redundancy 1 10.8.4.92:/var/lib/heketi/mounts/gv1 10.8.4.93:/var/lib/heketi/mounts/gv1 10.8.4.131:/var/lib/heketi/mounts/gv1 10.8.4.132:/var/lib/heketi/mounts/gv1

gluster volume start gv1

gluster volume info

Output follows

Volume Name: gv2
Type: Disperse
Volume ID: e072f9fa-6139-4471-a163-0e0dde0265ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: 10.8.4.92:/var/lib/heketi/mounts/gv2
Brick2: 10.8.4.93:/var/lib/heketi/mounts/gv2
Brick3: 10.8.4.131:/var/lib/heketi/mounts/gv2
Brick4: 10.8.4.132:/var/lib/heketi/mounts/gv2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
  • volumetype: replicate:3

    Creates a replicated volume with three copies. It consumes more resources, but the volume keeps working normally when a disk fails or a node goes down. See the gluster volume info output below, and note the Type.

Volume Name: vol_d78f449dbeab2286267c7e1842086a8f
Type: Replicate
Volume ID: 02fd891f-dd43-4c1b-a2ba-87e1be7c706f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.8.4.132:/var/lib/heketi/mounts/vg_5634269dc08edd964032871801920f1e/brick_b980d3f5ce7b1b4314c4b57c8aaf35fa/brick
Brick2: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_b375443687051038234e50fe3cd5fe12/brick
Brick3: 10.8.4.92:/var/lib/heketi/mounts/vg_a5d145795d59c51d2335153880049760/brick_e8f9ec722a235448fbf6730c25d7441a/brick
Options Reconfigured:
user.heketi.id: dfed68e6dca82c7cd5911c8ddda7746b
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
  • volumetype: none

    A distributed volume: files are placed on bricks by a hash algorithm. If a disk fails or a node goes down, the data on it becomes unavailable. See the gluster volume info output below, and note the Type.

Volume Name: vol_e1b27d580cbe18a96b0fdf7cbfe69cc2
Type: Distribute
Volume ID: cb4a7e4f-3850-4809-b159-fc8000527d71
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_8f62218753db589204b753295a318795/brick
Options Reconfigured:
user.heketi.id: e1b27d580cbe18a96b0fdf7cbfe69cc2
transport.address-family: inet
nfs.disable: on

Creating the PVC

vi glusterfs-pv.yaml

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
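
A minimal sketch of a pod that consumes the claim (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/gluster     # the GlusterFS volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: glusterfs          # the PVC defined above

Once the pod is scheduled, kubectl get pvc glusterfs should show the claim Bound, with a PV created dynamically by the glusterfs provisioner.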

Dear friends, make your choice according to your own circumstances. If you want to keep going and understand the storage volume types and how to use them, see "GlusterFS volume types explained: creation and use" (to be uploaded once typeset).
All of this was typed and tested by hand, with no hidden pits. If you run into problems you are welcome to get in touch, and please give it a like! It costs nothing!


Origin www.cnblogs.com/keep-live/p/11425844.html