"Rook-Ceph"-Cluster Environment Construction@20210224

Brief introduction

This section introduces how to deploy Rook on a Kubernetes cluster and use Ceph as the underlying storage.

Precautions

Like the official documentation, this article covers only a basic setup and a quick start. For more technical details and deployment options, please refer to the official documentation.

Note that this article is for reference only; it provides an overview of the work involved in deploying a cluster.

System environment

The environment I used when deploying Rook:

System environment: CentOS Linux release 7.4.1708 (Core)
Kubernetes version: v1.12.1
Rook version: release-1.0

(Sometimes a bug fix or a feature lands in the release just after the one you are using...)

Environmental requirements

Before installing Rook, the cluster needs to meet a few conditions. For details, please refer to the official "Prerequisites" document. Here is a brief summary:

# Minimum version: Kubernetes v1.10 or above is required.

# Permissions and RBAC: Rook must be granted permission to manage storage in the cluster. See "Using Rook with Pod Security Policies" for details on running Rook in a cluster where Pod Security Policies are enabled.

# Configure FlexVolume: This is one of the key steps. The Rook Agent needs to be set up as a Flex Volume plugin to manage attached storage in the cluster. Refer to "release-1.0/Flex Volume Configuration" to configure the Kubernetes deployment to load the Rook volume plugin. (This is critical for getting Rook to work here: because my cluster is v1.12, volumes can only be mounted via FlexVolume; from v1.13 onward the CSI driver should be used instead.)

# Kernel RBD module: Rook Ceph uses the RBD kernel module, so it must be loaded on the storage nodes (see the sketch after this list; the detailed configuration is otherwise skipped here).

# Kernel module directory: Kernel modules live in /lib/modules by default. If a distribution uses a different directory, it must be set through the LIB_MODULES_DIR_PATH environment variable or the agent.libModulesDirPath value of the Helm chart. (This is usually not an issue, since we normally use the CentOS distribution.)

# Extra Agent directory: On some specific distributions, additional host directories must be mounted into the Agent. This can be done with the AGENT_MOUNTS environment variable or the agent.mounts value of the Helm chart. (I have not needed this so far.)

# Install the LVM package: The LVM package needs to be installed on all storage nodes (see the sketch after this list).

# Authenticated image registry: If the image registry requires authentication, the corresponding ServiceAccount objects need to be modified. (There is no such requirement here at the moment, so this is skipped.)

# Data persistence: Because the Rook cluster needs to persist some data, if you use dataDirHostPath to store data on the Kubernetes hosts, you need to make sure each host has at least 5 GB of free space under the specified path.
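
For the RBD module and LVM items above, here is a minimal sketch of how they might be prepared, assuming a CentOS 7 storage node (adjust the package manager and module persistence mechanism for other distributions):

#!/bin/sh
# Load the RBD kernel module now, and have it loaded again on boot
modprobe rbd
echo "rbd" > /etc/modules-load.d/rbd.conf

# Verify the module is present
lsmod | grep rbd

# Install the LVM package on every storage node (CentOS/RHEL)
yum install -y lvm2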

Test environment deployment

Execute the following commands and apply the YAML files (note that these commands are only suitable for a test environment):

Step 1: Prepare the working directory

#!/bin/sh

git clone https://github.com/rook/rook.git
cd rook
git checkout -b release-1.0 tags/release-1.0
cd cluster/examples/kubernetes/ceph

Step 2: Deploy Agent and Operator components

########################################################################################################################
# Deploy the Rook Operator and Agent
########################################################################################################################
# For the deployment of Operator, please refer to the document: https://github.com/rook/rook/blob/release-1.0/Documentation/ceph-examples.md
kubectl create -f common.yaml
kubectl create -f operator.yaml

# Verify that the rook-ceph-operator, rook-ceph-agent and rook-discover pods are running.
kubectl -n rook-ceph get pod
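
If you want to block until the Operator is actually ready before moving on, something like the sketch below can be used; the app=rook-ceph-operator label is assumed to match the stock operator.yaml, so verify it against your own manifests:

# Wait until the Operator pod reports Ready (adjust the timeout as needed)
kubectl -n rook-ceph wait --for=condition=Ready pod \
  -l app=rook-ceph-operator --timeout=300s

# The agent and discover pods should appear shortly after the Operator is up
kubectl -n rook-ceph get pod -o wide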

Step 3: Create the Ceph cluster

########################################################################################################################
# Create a cluster
########################################################################################################################
# Make sure that all Pods in the previous step are running
# The cluster configuration can refer to the document: https://github.com/rook/rook/blob/release-1.0/Documentation/ceph-cluster-crd.md
kubectl create -f cluster-test.yaml

# Check if the cluster is created successfully
# Check the number of OSD Pods, which depends on the number of nodes, devices, and configured directories.
kubectl -n rook-ceph get pod
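
To narrow the check down to the OSDs and the cluster object itself, a sketch like the following can help; the app=rook-ceph-osd label is assumed to be the default from the example manifests:

# List only the OSD pods; their count depends on nodes, devices and configured directories
kubectl -n rook-ceph get pod -l app=rook-ceph-osd

# The CephCluster custom resource also reflects the cluster's state
kubectl -n rook-ceph get cephcluster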

Step 4: Check the cluster status

########################################################################################################################
# Check the health of the cluster
########################################################################################################################
# Use Rook toolbox to view the health information of the cluster
# https://github.com/rook/rook/blob/release-1.0/Documentation/ceph-toolbox.md
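
The toolbox document above comes down to deploying the tools pod from the same examples directory and running the usual Ceph commands inside it. A minimal sketch, assuming the example toolbox.yaml and its app=rook-ceph-tools label:

# Deploy the Rook toolbox (toolbox.yaml ships in the same examples directory)
kubectl create -f toolbox.yaml

# Exec into the toolbox pod and check the cluster health
TOOLS_POD=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph status
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph osd status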

Production environment deployment

For a production environment, additional storage devices need to be attached to the nodes.

The test-environment deployment relaxes the requirements on local storage devices so that a cluster can be brought up quickly as a "test" environment for trying out Rook. It creates file-based Ceph OSDs in a directory, without requiring dedicated devices.

For a production environment, follow the example in cluster.yaml instead of cluster-test.yaml, so that devices are configured rather than a test directory; a rough sketch follows below. For more details, please refer to the "Ceph Examples" document.
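
As a rough illustration of the difference, the storage section of cluster.yaml selects real devices instead of a host directory. The fragment in the comments below is only a sketch of that shape (node1 and sdb are placeholders); the exact fields should be taken from the ceph-cluster-crd.md document referenced above.

# Sketch only: in cluster.yaml the storage section points at real devices instead of
# a host directory, roughly along these lines (node1/sdb are placeholders):
#
#   storage:
#     useAllNodes: false
#     useAllDevices: false
#     nodes:
#     - name: "node1"
#       devices:
#       - name: "sdb"
#
# After editing cluster.yaml for your own nodes and devices, apply it instead of cluster-test.yaml:
kubectl create -f cluster.yaml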

References

GitHub: rook/rook/Documentation/k8s-pre-reqs.md
Ceph Storage Quickstart
FlexVolume Configuration (master)

Source: blog.csdn.net/u013670453/article/details/114043677