Following on from the previous post, this one records how to build the RAC shared storage with iSCSI in a virtual machine environment.
1、Shared storage configuration
Add one more server to act as the storage server. Give it one LAN address and two private addresses; the private addresses carry the multipath connections to the RAC clients. Then carve up and configure the disks.
Goal: present shared LUNs from the storage that both hosts can see, six in total: three 1G disks for OCR and the Voting Disk, one 50G disk for the GIMR, and the rest planned as data disks and the FRA (Fast Recovery Area).
Add a 93G disk to the storage server.
LV layout: asmdisk1 1G, asmdisk2 1G, asmdisk3 1G, asmdisk4 50G, asmdisk5 20G, asmdisk6 20G
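As a quick sanity check, the planned LV sizes can be summed to confirm they fit on the 93G disk. A minimal sketch; the names and GiB figures are the ones planned above:

```shell
# Sum the planned LV sizes and compare against the 93G physical disk.
total=0
for spec in asmdisk1:1 asmdisk2:1 asmdisk3:1 asmdisk4:50 asmdisk5:20 asmdisk6:20; do
    total=$(( total + ${spec#*:} ))    # take the GiB figure after the colon
done
echo "planned total: ${total}G of 93G"
```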
1.1 Check the storage network
The RAC nodes are the storage clients. In VMware, create vlan10 and vlan20, and assign one NIC on each RAC node and on the storage server to vlan10 and the other to vlan20, so the clients can reach the storage over two paths (multipath).
Storage (server side): 10.0.0.111, 10.0.0.222
rac-jydb1 (client): 10.0.0.5, 10.0.0.11
rac-jydb2 (client): 10.0.0.6, 10.0.0.22
Finally, verify that all of these addresses can reach each other before moving on.
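The reachability test can be scripted. A small sketch using the addresses above; run it from either node (`-W 1` gives each probe a one-second timeout):

```shell
# Ping every storage/client address once and report OK or FAIL.
check_ip() {
    if ping -c 1 -W 1 "$1" >/dev/null 2>&1; then
        echo "$1 OK"
    else
        echo "$1 FAIL"
    fi
}
for ip in 10.0.0.111 10.0.0.222 10.0.0.5 10.0.0.11 10.0.0.6 10.0.0.22; do
    check_ip "$ip"
done
```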
1.2 Install the iSCSI packages
```shell
# server side
yum install scsi-target-utils
# client side (both RAC nodes)
yum install iscsi-initiator-utils
```
1.3 Simulate adding a disk to the storage (server-side operation)
Add a 93G disk; this simulates the storage array getting a real new disk.
In my case the new disk shows up as /dev/sdb, and I put it under LVM:
```shell
# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
# vgcreate vg_storage /dev/sdb
  Volume group "vg_storage" successfully created
# lvcreate -L 10g -n lv_lun1 vg_storage    # size in GiB per the LV layout above
  Logical volume "lv_lun1" created
```
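The remaining LVs follow the same pattern, so a loop can cover all six. A sketch that echoes the commands as a dry run (drop the `echo` on the storage server to actually create them); sizes follow the layout at the top:

```shell
# Emit one lvcreate per planned LV in vg_storage.
n=1
for size in 1G 1G 1G 50G 20G 20G; do
    echo lvcreate -L "$size" -n "lv_lun${n}" vg_storage
    n=$((n + 1))
done
```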
1.4 Configure the iSCSI server
The main configuration file for the iSCSI target is /etc/tgt/targets.conf.
Using a standards-compliant IQN, I added the following configuration:
```
<target iqn.2018-03.com.cnblogs.test:alfreddisk>
    backing-store /dev/vg_storage/lv_lun1    # Becomes LUN 1
    backing-store /dev/vg_storage/lv_lun2    # Becomes LUN 2
    backing-store /dev/vg_storage/lv_lun3    # Becomes LUN 3
    backing-store /dev/vg_storage/lv_lun4    # Becomes LUN 4
    backing-store /dev/vg_storage/lv_lun5    # Becomes LUN 5
    backing-store /dev/vg_storage/lv_lun6    # Becomes LUN 6
</target>
```
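Since the six backing-store lines differ only by number, the stanza can also be generated. A sketch; the IQN and LV paths are the ones used above:

```shell
# Print the targets.conf stanza for all six LUNs.
gen_targets_conf() {
    echo "<target iqn.2018-03.com.cnblogs.test:alfreddisk>"
    for n in 1 2 3 4 5 6; do
        echo "    backing-store /dev/vg_storage/lv_lun${n}    # Becomes LUN ${n}"
    done
    echo "</target>"
}
gen_targets_conf    # append the output to /etc/tgt/targets.conf
```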
Once that is in place, start the service and enable it at boot:
```shell
[root@Storage ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@Storage ~]# chkconfig tgtd on
[root@Storage ~]# chkconfig --list|grep tgtd
tgtd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Storage ~]# service tgtd status
tgtd (pid 1763 1760) is running...
```
Then check the relevant details, such as the listening port and the LUN information (Type: disk):
```shell
[root@Storage ~]# netstat -tlunp |grep tgt
tcp        0      0 0.0.0.0:3260       0.0.0.0:*       LISTEN      1760/tgtd
tcp        0      0 :::3260            :::*            LISTEN      1760/tgtd
[root@Storage ~]# tgt-admin --show
Target 1: iqn.2018-03.com.cnblogs.test:alfreddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10737 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vg_storage/lv_lun1
            Backing store flags:
    Account information:
    ACL information:
        ALL
```
1.5 Configure the iSCSI client
Confirm that the services are enabled at boot:
```shell
# chkconfig --list|grep scsi
iscsi           0:off   1:off   2:off   3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:off   3:on    4:on    5:on    6:off
```
Use iscsiadm to scan for the server's LUNs (probe the iSCSI target):
```shell
[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.111
10.0.0.111:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.222
10.0.0.222:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
```
Check the recorded nodes with iscsiadm -m node:
```shell
[root@jydb1 ~]# iscsiadm -m node
10.0.0.111:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
10.0.0.222:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
```
Look at the files under /var/lib/iscsi/nodes/:
```shell
[root@jydb1 ~]# ll -R /var/lib/iscsi/nodes/
/var/lib/iscsi/nodes/:
total 4
drw------- 4 root root 4096 Mar 29 00:59 iqn.2018-03.com.cnblogs.test:alfreddisk

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk:
total 8
drw------- 2 root root 4096 Mar 29 00:59 10.0.0.111,3260,1
drw------- 2 root root 4096 Mar 29 00:59 10.0.0.222,3260,1

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.0.111,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.0.222,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default
```
Log in to the iSCSI disks
Based on the discovery results, run the following to attach the shared disks:
```shell
iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
```
```shell
[root@jydb1 ~]# iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.111,3260] (multiple)
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.222,3260] (multiple)
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.111,3260] successful.
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.222,3260] successful.
```

The logins succeeded.
Check the attached iSCSI disks with fdisk -l or lsblk:
```shell
[root@jydb1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   35G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0  7.8G  0 part [SWAP]
└─sda3   8:3    0   27G  0 part /
sr0     11:0    1  3.5G  0 rom  /mnt
sdb      8:16   0    1G  0 disk
sdc      8:32   0    1G  0 disk
sdd      8:48   0    1G  0 disk
sde      8:64   0    1G  0 disk
sdf      8:80   0    1G  0 disk
sdg      8:96   0    1G  0 disk
sdi      8:128  0   40G  0 disk
sdk      8:160  0   10G  0 disk
sdm      8:192  0   10G  0 disk
sdj      8:144  0   10G  0 disk
sdh      8:112  0   40G  0 disk
sdl      8:176  0   10G  0 disk
```
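After the logins, twelve new block devices appear (6 LUNs × 2 paths). A small awk filter can pull just the whole disks other than the local system disk out of lsblk's list view. A sketch; it assumes the OS disk is sda, as on this host:

```shell
# Filter `lsblk -d -n -o NAME,SIZE,TYPE` output down to iSCSI candidates.
new_disks() {
    awk '$3 == "disk" && $1 != "sda" { print $1, $2 }'
}
# Demo with two lines from the listing above:
printf 'sda 35G disk\nsdb 1G disk\n' | new_disks
# On the RAC node, run:
#   lsblk -d -n -o NAME,SIZE,TYPE | new_disks
```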
1.6 Configure multipath (client-side operation)
Install the multipath packages:
```shell
rpm -qa |grep device-mapper-multipath
# if not installed, install via yum:
yum install -y device-mapper-multipath
# or download and install these two rpms:
#   device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
#   device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
```
Enable it at boot:
```shell
chkconfig multipathd on
```
Generate the multipath configuration file:
```shell
# generate the multipath configuration file
/sbin/mpathconf --enable
# show the multipath topology
multipath -ll
# rescan
multipath -v2    # or -v3
# flush all multipath maps (do this before regenerating)
multipath -F
```
Example session:
```shell
[root@jydb1 ~]# multipath -v2
[root@jydb1 ~]# multipath -ll
asmdisk6 (1IET 00010006) dm-5 IET,VIRTUAL-DISK    # the value in parentheses is the wwid
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:6 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:6 sdm 8:192 active ready running
asmdisk5 (1IET 00010005) dm-2 IET,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:5 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:5 sdl 8:176 active ready running
asmdisk4 (1IET 00010004) dm-4 IET,VIRTUAL-DISK
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:4 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:4 sdk 8:160 active ready running
asmdisk3 (1IET 00010003) dm-3 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:3 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:3 sdi 8:128 active ready running
asmdisk2 (1IET 00010002) dm-1 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:2 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:2 sdg 8:96 active ready running
asmdisk1 (1IET 00010001) dm-0 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:1 sde 8:64 active ready running
```
Start the multipathd service:
```shell
service multipathd start
```
Configure multipath
First change: it is recommended to set user_friendly_names to no, which makes the system identify each multipath device by its WWID. If it is set to yes, the system uses the names recorded in /etc/multipath/mpathn instead. With user_friendly_names set to yes, a multipath device's name is unique on a given node but is not guaranteed to be consistent across all nodes that use the device; that is, mpath1 on node 1 and mpath1 on node 2 may be different LUNs, whereas the WWID of a given LUN is the same on every server. So set it to no and bind aliases to the WWIDs.

```
defaults {
    user_friendly_names no
    path_grouping_policy failover    # active/passive; use "multibus" for active/active
}
```

Second change: bind the WWIDs to aliases. The WWIDs here are the ones shown by multipath -ll:

```
multipaths {
    multipath {
        wwid "1IET 00010001"
        alias asmdisk1
    }
    multipath {
        wwid "1IET 00010002"
        alias asmdisk2
    }
    multipath {
        wwid "1IET 00010003"
        alias asmdisk3
    }
    multipath {
        wwid "1IET 00010004"
        alias asmdisk4
    }
    multipath {
        wwid "1IET 00010005"
        alias asmdisk5
    }
    multipath {
        wwid "1IET 00010006"
        alias asmdisk6
    }
}
```
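The six multipath{} entries differ only in the trailing digit, so the multipaths section can be generated rather than typed. A sketch; the WWIDs are the ones reported by multipath -ll on this host:

```shell
# Emit one multipaths{} section wrapping six multipath{} entries.
gen_multipaths() {
    echo "multipaths {"
    for n in 1 2 3 4 5 6; do
        printf '    multipath {\n        wwid "1IET 0001000%s"\n        alias asmdisk%s\n    }\n' "$n" "$n"
    done
    echo "}"
}
gen_multipaths    # merge the output into /etc/multipath.conf
```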
The configuration only takes effect after restarting multipathd:
```shell
service multipathd restart
```
After binding, check the multipath aliases:
```shell
[root@jydb1 ~]# cd /dev/mapper/
[root@jydb1 mapper]# ls
asmdisk1  asmdisk2  asmdisk3  asmdisk4  asmdisk5  asmdisk6  control
```
Bind raw devices with udev
First set up the udev permission bindings; with the wrong permissions, the installer will not be able to see the shared disks.
Before the change:
```shell
[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 root disk  253, 0 Apr  2 16:18 /dev/dm-0
brw-rw---- 1 root disk  253, 1 Apr  2 16:18 /dev/dm-1
brw-rw---- 1 root disk  253, 2 Apr  2 16:18 /dev/dm-2
brw-rw---- 1 root disk  253, 3 Apr  2 16:18 /dev/dm-3
brw-rw---- 1 root disk  253, 4 Apr  2 16:18 /dev/dm-4
brw-rw---- 1 root disk  253, 5 Apr  2 16:18 /dev/dm-5
crw-rw---- 1 root audio  14, 9 Apr  2 16:18 /dev/dmmidi
```
My system is RHEL 6.6; if you change the permissions on the multipath devices by hand, they revert to root within a few seconds, so udev has to pin them.
Find the template for the rules file:
```shell
[root@jyrac1 ~]# find / -name 12-*
/usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules
```
Based on the template, create 12-dm-permissions.rules under /etc/udev/rules.d/ and edit it:
```shell
vi /etc/udev/rules.d/12-dm-permissions.rules

# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"    # change this line

# Set permissions for first two partitions created on a multipath device (and detected by kpartx)
# ENV{DM_UUID}=="part[1-2]-mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"
```
When done, run start_udev; if the permissions come out right, you are set:
```shell
[root@jydb1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 grid asmadmin 253, 0 Apr  2 16:25 /dev/dm-0
brw-rw---- 1 grid asmadmin 253, 1 Apr  2 16:25 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 2 Apr  2 16:25 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 3 Apr  2 16:25 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 4 Apr  2 16:25 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 5 Apr  2 16:25 /dev/dm-5
crw-rw---- 1 root audio     14, 9 Apr  2 16:24 /dev/dmmidi
```
Bind the disk devices
Look up the major and minor device numbers:
```shell
[root@jydb1 ~]# ls -lt /dev/dm-*
brw-rw---- 1 grid asmadmin 253, 5 Mar 29 04:00 /dev/dm-5
brw-rw---- 1 grid asmadmin 253, 3 Mar 29 04:00 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 2 Mar 29 04:00 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 4 Mar 29 04:00 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 1 Mar 29 04:00 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 0 Mar 29 04:00 /dev/dm-0
[root@jydb1 ~]# dmsetup ls|sort
asmdisk1        (253:0)
asmdisk2        (253:1)
asmdisk3        (253:3)
asmdisk4        (253:4)
asmdisk5        (253:2)
asmdisk6        (253:5)
```

Note: if the system partitions use LVM, those volumes also occupy dm numbers, so double-check the mapping before binding the raw devices below.
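Rather than copying the minor numbers by hand, the `dmsetup ls` output can be parsed. A sketch; the alias names and the "(major:minor)" format match the listing above:

```shell
# Print the dm minor number for a given multipath alias, reading
# `dmsetup ls` lines such as: asmdisk1 (253:0)
minor_of() {
    awk -v a="$1" '$1 == a {
        gsub(/[()]/, "", $2)    # "(253:0)" -> "253:0"
        split($2, p, ":")
        print p[2]
    }'
}
# On the RAC node:
#   dmsetup ls | minor_of asmdisk5
```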
Bind the raw devices according to the mapping:

```shell
vi /etc/udev/rules.d/60-raw.rules

# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="0", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="3", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="4", RUN+="/bin/raw /dev/raw/raw5 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="5", RUN+="/bin/raw /dev/raw/raw6 %M %m"
ACTION=="add", KERNEL=="raw1", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw2", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="asmadmin", MODE="660"
```
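The twelve rules are mechanical, so they too can be generated. A sketch; it assumes the straight minor-0→raw1 … minor-5→raw6 mapping used here (adjust the minors if LVM volumes occupy some of them):

```shell
# Print the binding and permission rules for raw1..raw6.
gen_raw_rules() {
    for m in 0 1 2 3 4 5; do
        printf 'ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="%s", RUN+="/bin/raw /dev/raw/raw%s %%M %%m"\n' "$m" "$((m + 1))"
    done
    for r in 1 2 3 4 5 6; do
        printf 'ACTION=="add", KERNEL=="raw%s", OWNER="grid", GROUP="asmadmin", MODE="660"\n' "$r"
    done
}
gen_raw_rules    # write the output to /etc/udev/rules.d/60-raw.rules
```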
When finished, run start_udev again and check:
```shell
[root@jydb1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@jydb1 ~]# ll /dev/raw/raw*
crw-rw---- 1 grid asmadmin 162, 1 May 25 05:03 /dev/raw/raw1
crw-rw---- 1 grid asmadmin 162, 2 May 25 05:03 /dev/raw/raw2
crw-rw---- 1 grid asmadmin 162, 3 May 25 05:03 /dev/raw/raw3
crw-rw---- 1 grid asmadmin 162, 4 May 25 05:03 /dev/raw/raw4
crw-rw---- 1 grid asmadmin 162, 5 May 25 05:03 /dev/raw/raw5
crw-rw---- 1 grid asmadmin 162, 6 May 25 05:03 /dev/raw/raw6
crw-rw---- 1 root disk     162, 0 May 25 05:03 /dev/raw/rawctl
```
That wraps up this post; the next one will record how to install Grid Infrastructure.
https://blog.csdn.net/weixin_40283570/article/details/80927901