6. Ceph iSCSI Gateway Deployment

The following conditions must be met to deploy the iSCSI gateway:

  1. A running Ceph Luminous (12.2.x) or later cluster
  2. CentOS 7.5 or later; Linux kernel v4.16 or later
  3. The ceph-iscsi package installed on all iSCSI gateway nodes
  4. If an iSCSI gateway node is not also an OSD node, copy the Ceph configuration files from /etc/ceph/ on a running node of the storage cluster to the gateway node. The Ceph configuration must exist in /etc/ceph/ on every iSCSI gateway node.
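A quick way to confirm the kernel prerequisite is to compare the running kernel against the v4.16 minimum. The snippet below is just an illustrative sketch (the `version_ge` helper and the version-sort approach are not part of any Ceph tooling):

```shell
#!/bin/sh
# Illustrative check: does the running kernel meet the v4.16 minimum?
min_kernel="4.16"

# version_ge A B -> succeeds when A >= B in version order (GNU sort -V)
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

current="$(uname -r | cut -d- -f1)"
if version_ge "$current" "$min_kernel"; then
    echo "kernel $current OK (>= $min_kernel)"
else
    echo "kernel $current is too old; need >= $min_kernel"
fi
```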

[Schematic diagram of the iSCSI gateway architecture]

1. Installation environment

A Ceph cluster has already been deployed:

Host     IP                        OS
ceph01   10.0.21.213 (internal)    CentOS 7.6
         10.0.4.213  (external)
ceph02   10.0.21.214 (internal)    CentOS 7.6
         10.0.4.214  (external)
ceph03   10.0.21.215 (internal)    CentOS 7.6
         10.0.4.215  (external)

2. Configure ceph-iscsi YUM source

Configure the ceph-iscsi yum repository on all iSCSI gateway nodes:

echo '[ceph-iscsi]
name=ceph-iscsi noarch packages
baseurl=http://download.ceph.com/ceph-iscsi/3/rpm/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
type=rpm-md
[ceph-iscsi-source]
name=ceph-iscsi source packages
baseurl=http://download.ceph.com/ceph-iscsi/3/rpm/el7/SRPMS
enabled=0
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
type=rpm-md
[tcmu-runner]
name=tcmu-runner
baseurl=https://3.chacra.ceph.com/r/tcmu-runner/master/eef511565078fb4e2ed52caaff16e6c7e75ed6c3/centos/7/flavors/default/x86_64/
priority=1
gpgcheck=0
[ceph-iscsi-conf]
name=ceph-iscsi-config
baseurl=https://3.chacra.ceph.com/r/ceph-iscsi-config/master/7496f1bc418137230d8d45b19c47eab3165c756a/centos/7/flavors/default/noarch/
priority=1
gpgcheck=0        
' > /etc/yum.repos.d/ceph-iscsi.repo

Reminder

The tcmu-runner package is not included in the common third-party yum repositories; it ships only in Red Hat's official repositories, which cannot be used without a subscription. For this reason some individuals have built their own tcmu-runner repositories, but the availability of these personal repositories cannot be guaranteed.

3. Install ceph-iscsi

Install the ceph-iscsi package on each iSCSI gateway node. Installing ceph-iscsi automatically pulls in the tcmu-runner package as a dependency:

yum -y install ceph-iscsi

Start tcmu-runner and enable it at boot:

systemctl start tcmu-runner.service
systemctl status tcmu-runner.service
systemctl enable tcmu-runner.service 

Create the RBD image pool:

ceph osd pool create iscsi-images 128 128 replicated
ceph osd pool application enable iscsi-images rbd
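The 128 placement groups used above follow the usual rule of thumb of roughly 100 PGs per OSD, divided by the replica count and rounded up to a power of two; with 3 OSD hosts and 3 replicas that lands on 128. A minimal sketch of that arithmetic (the `pg_count` helper is hypothetical, not a Ceph tool):

```shell
#!/bin/sh
# Rule-of-thumb PG sizing sketch: ~100 PGs per OSD, divided by the
# replica count, rounded up to the next power of two. The OSD and
# replica numbers are assumptions for this example cluster.
pg_count() {
    osds=$1; replicas=$2
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"
}

pg_count 3 3    # 3 OSDs, size-3 pool -> 128
```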

Edit the iSCSI gateway configuration file

Create the iSCSI gateway configuration file on each gateway node. cluster_client_name is the client.admin user, trusted_ip_list contains the IP addresses of all iSCSI gateway nodes, the API listens on port 5000, and the API user is admin.

echo '[config]
cluster_client_name = client.admin
pool = iscsi-images
minimum_gateways = 1
fqdn_enabled = true
api_port = 5000
api_user = admin
api_password = admin
api_secure = false
# log level
logger_level = WARNING
trusted_ip_list = 10.0.4.213,10.0.4.214,10.0.4.215' > /etc/ceph/iscsi-gateway.cfg

Start the rbd-target-api service and enable it at boot:

systemctl start rbd-target-api.service
systemctl status rbd-target-api.service
systemctl enable rbd-target-api.service

4. Configure ceph-iscsi

Run the gwcli command:
gwcli

Enter iscsi-targets and create a target:

> cd iscsi-targets
/iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-images

Create the iSCSI gateways. The IPs used below are the IPs for iSCSI data transport; they can be the same as, or different from, the management IPs listed in trusted_ip_list, depending on whether traffic is split across separate NICs.

/iscsi-targets> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-images/gateways/
/iscsi-target...ages/gateways> create ceph01 10.0.4.213 skipchecks=true
/iscsi-target...ages/gateways> create ceph02 10.0.4.214 skipchecks=true
/iscsi-target...ages/gateways> create ceph03 10.0.4.215 skipchecks=true

Add the client. The IQN used here must match the initiator IQN (WWN) reported by the client:

/iscsi-target...:iscsi-images> cd hosts/
/iscsi-target...-images/hosts> create client_iqn=iqn.1998-01.com.vmware:59a6f0cd-ca36-101c-a551-d485644bc7d8-03fccdb9
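The IQN must follow the standard iSCSI naming format (`iqn.YYYY-MM.reverse-domain[:identifier]`, per RFC 3720), or gwcli will reject it. A small sanity-check sketch (the `valid_iqn` helper is illustrative, not part of gwcli):

```shell
#!/bin/sh
# Illustrative IQN format check: iqn.YYYY-MM.reverse-domain[:identifier]
valid_iqn() {
    echo "$1" | grep -Eq '^iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+(:.+)?$'
}

valid_iqn "iqn.1998-01.com.vmware:59a6f0cd-ca36-101c-a551-d485644bc7d8-03fccdb9" \
    && echo "IQN format OK"
```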

Create a CHAP username and password. The username and password must meet specific format requirements; if you are unsure, use values like those shown here. CHAP must be configured, otherwise the gateway will refuse the initiator's connection.

/iscsi-target...-images/hosts> cd iqn.1998-01.com.vmware:59a6f0cd-ca36-101c-a551-d485644bc7d8-03fccdb9/
/iscsi-target...c7d8-03fccdb9> auth username=ceph-iscsi password=p@ssw0rdp@ssw0rd
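The "special requirements" mentioned above are length and character-set constraints that gwcli enforces on CHAP credentials; for ceph-iscsi 3.x these are commonly cited as a username of 8-64 characters and a password of 12-16 characters, drawn from letters, digits and a few punctuation characters. A pre-check sketch, where the exact limits and character set are an assumption rather than an authoritative rule:

```shell
#!/bin/sh
# Sketch of a CHAP credential pre-check. The 8-64 / 12-16 length limits
# and the allowed character set (letters, digits, @ _ / . -) are an
# assumption based on gwcli's commonly documented constraints.
valid_chap() {
    user=$1; pass=$2
    echo "$user" | grep -Eq '^[a-zA-Z0-9@_/.-]{8,64}$' &&
    echo "$pass" | grep -Eq '^[a-zA-Z0-9@_/.-]{12,16}$'
}

valid_chap "ceph-iscsi" "p@ssw0rdp@ssw0rd" && echo "credentials OK"
```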

Create an RBD image disk_hdd_1 and attach it to the client:

/iscsi-target...c7d8-03fccdb9> cd /disks/
/disks> create pool=iscsi-images image=disk_hdd_1 size=1700G
/disks> ls
  o- disks ................................................ [1796G, Disks: 1]
  o- iscsi-images .................................. [iscsi-images (1796G)]
    o- disk_hdd_1 ....................... [iscsi-images/disk_hdd_1 (1796G)]
/disks> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-images/hosts/iqn.1998-01.com.vmware:59a6f0cd-ca36-101c-a551-d485644bc7d8-03fccdb9/
/iscsi-target...c7d8-03fccdb9> disk disk=iscsi-images/disk_hdd_1 


5. Verify ceph-iscsi as back-end shared storage for VMware ESXi

Add the iSCSI target on the ESXi host.

Note: after a successful connection, three additional paths will appear.

Add storage

Now the Ceph-backed datastore can be selected for the virtual machine's disk when creating a VM.

Enable node failover

In the ESXi storage view, select the Ceph datastore, right-click and open Properties -> Manage Paths, then change the path selection policy to Round Robin. This enables failover across the gateway nodes.


Origin blog.csdn.net/weixin_43357497/article/details/113531406