Linux Multipath + iSCSI: Configuring Multipath Storage


I. Introduction to Multipath


1. What Multipath Is

On an ordinary host, a disk hangs off a single bus: a one-to-one relationship between host and storage. In a fibre-channel SAN, however, hosts and storage arrays are connected through FC switches, which creates a many-to-many topology: a host can reach the same storage device over several different paths, and I/O between host and storage can travel any of them.

2. What It Provides

  • Path failover and failback

  • I/O load balancing across paths

  • Disk virtualization (multiple paths presented as a single device)

II. Lab Environment


SELinux is disabled and iptables is off on both machines.

Hostname           IP                                       OS        Existing disks   Installed services
server1 (client)   10.10.10.1 (eth0), 172.25.254.1 (eth1)   RHEL 7.3  sda, sdb         iscsi-initiator-utils (iSCSI discovery), device-mapper-multipath
server2 (server)   10.10.10.2 (eth0), 172.25.254.2 (eth1)   RHEL 7.3  sda, sdb         targetcli.noarch (iSCSI target)

III. iSCSI Installation (Server Side)


1. Install the iSCSI target

[root@server2 ~]# yum install -y targetcli.noarch
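
So that the configuration saved below is restored after a reboot, it also makes sense to enable the target service, an extra step not shown in the original:

[root@server2 ~]# systemctl enable target     ### restore /etc/target/saveconfig.json at boot
[root@server2 ~]# systemctl start target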

2. Configure the iSCSI target

[root@server2 ~]# targetcli 
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> /backstores/block create dream:storage1 /dev/sdb
Created block storage object dream:storage1 using /dev/sdb.

/> /iscsi create iqn.2019-03.com.example:server2 
Created target iqn.2019-03.com.example:server2.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

/> /iscsi/iqn.2019-03.com.example:server2/tpg1/acls create iqn.2019-03.com.example:server2 
Created Node ACL for iqn.2019-03.com.example:server2

/> /iscsi/iqn.2019-03.com.example:server2/tpg1/luns create /backstores/block/dream:storage1 
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2019-03.com.example:server2

/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
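
Before moving to the client, you can confirm the target is up: `targetcli ls` prints the configured tree, and the portal should be listening on TCP 3260, for example:

[root@server2 ~]# ss -tln | grep 3260        ### the default portal listens on 0.0.0.0:3260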

IV. iSCSI Configuration (Client Side)


1. Install the initiator and set its name

[root@server1 ~]# yum install -y iscsi-initiator-utils
[root@server1 ~]# vim /etc/iscsi/initiatorname.iscsi         
InitiatorName=iqn.2019-03.com.example:server2
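
The InitiatorName must match the ACL created on the server in step III. If iscsid was already running, restart it so the new name takes effect (an extra step not shown in the original):

[root@server1 ~]# systemctl restart iscsid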

2. Discover the server's iSCSI target

[root@server1 ~]# iscsiadm -m discovery -t st -p 10.10.10.2
10.10.10.2:3260,1 iqn.2019-03.com.example:server2

[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.254.2
172.25.254.2:3260,1 iqn.2019-03.com.example:server2

3. Log in

[root@server1 ~]# iscsiadm -m node -T iqn.2019-03.com.example:server2 -p 10.10.10.2 -l        
Logging in to [iface: default, target: iqn.2019-03.com.example:server2, portal: 10.10.10.2,3260] (multiple)
Login to [iface: default, target: iqn.2019-03.com.example:server2, portal: 10.10.10.2,3260] successful.

[root@server1 ~]# iscsiadm -m node -T iqn.2019-03.com.example:server2 -p 172.25.254.2 -l
Logging in to [iface: default, target: iqn.2019-03.com.example:server2, portal: 172.25.254.2,3260] (multiple)
Login to [iface: default, target: iqn.2019-03.com.example:server2, portal: 172.25.254.2,3260] successful.
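
There should now be two active sessions to the same target, one per portal. `iscsiadm -m session` lists them; expect output along these lines (session numbers will vary):

[root@server1 ~]# iscsiadm -m session
tcp: [1] 10.10.10.2:3260,1 iqn.2019-03.com.example:server2 (non-flash)
tcp: [2] 172.25.254.2:3260,1 iqn.2019-03.com.example:server2 (non-flash)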

4. Check the disks

Two new disks, sdc and sdd, have appeared. They are in fact the same LUN reached over two different network paths; without multipath the system treats them as two separate disks. Adding them to multipath merges them into one device, so I/O survives the failure of either NIC.

[root@server1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00007256

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      616447      307200   83  Linux
/dev/sda2          616448     2715647     1049600   82  Linux swap / Solaris
/dev/sda3         2715648    41943039    19613696   83  Linux

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
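
A quicker way to inspect the new disks (and, once multipath is configured, the dm device stacked on top of them) is lsblk, for instance:

[root@server1 ~]# lsblk /dev/sdc /dev/sdd    ### both should report the same 20G size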

V. Multipath Installation and Configuration (Client Side)


1. Install multipath

[root@server1 ~]# yum -y install device-mapper-multipath
[root@server1 ~]# modprobe dm-multipath
[root@server1 ~]# modprobe dm-round-robin
[root@server1 ~]# lsmod | grep dm_multipath
dm_multipath           23065  1 dm_round_robin
dm_mod                114430  3 dm_multipath,dm_log,dm_mirror

[root@server1 ~]# rpm -ql device-mapper-multipath     ### list the files the package installed
[root@server1 ~]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/

### Alternatively, generate the configuration file with:
[root@server1 ~]# mpathconf --enable                 ### creates /etc/multipath.conf
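
On RHEL 7, mpathconf can also start the daemon in the same step (an optional shortcut; see man mpathconf):

[root@server1 ~]# mpathconf --enable --with_multipathd y    ### writes the config and starts multipathd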

2. Look up the WWID

[root@server1 ~]# /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdc
360014056393309b846f47bcae82517a0
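
Querying /dev/sdd should return exactly the same WWID; that shared identifier is what lets multipath recognize the two disks as paths to a single LUN:

[root@server1 ~]# /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdd
360014056393309b846f47bcae82517a0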

3. Configure multipath

[root@server1 ~]# vim /etc/multipath.conf 
defaults {
        user_friendly_names yes
        find_multipaths yes
}
multipaths {
    multipath {
        wwid    360014056393309b846f47bcae82517a0
        alias   mpathao
    }
}

[root@server1 ~]# systemctl start multipathd
[root@server1 ~]# ls /dev/mapper/
control  mpathao
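
Enable the daemon at boot as well, so the mpathao map is assembled automatically:

[root@server1 ~]# systemctl enable multipathd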

4. Format and mount

[root@server1 ~]# mkfs.xfs /dev/mapper/mpathao
meta-data=/dev/mapper/mpathao    isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@server1 ~]# mkdir /data               
[root@server1 ~]# mount /dev/mapper/mpathao /data/
[root@server1 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda3             19G  1.4G   18G   8% /
devtmpfs             227M     0  227M   0% /dev
tmpfs                237M     0  237M   0% /dev/shm
tmpfs                237M  4.7M  232M   2% /run
tmpfs                237M     0  237M   0% /sys/fs/cgroup
/dev/sda1            297M  119M  178M  41% /boot
tmpfs                 48M     0   48M   0% /run/user/0
/dev/mapper/mpathao   20G   33M   20G   1% /data
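
To make the mount persistent, an /etc/fstab entry needs the _netdev option, because the device sits on iSCSI and must wait for the network at boot (a sketch using the mount point above):

/dev/mapper/mpathao  /data  xfs  defaults,_netdev  0 0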

5. Check the status

[root@server1 ~]# multipath -r           ### force a reload of the multipath maps
[root@server1 ~]# multipath -ll          ### list the topology; mpathao should show sdc and sdd as its two paths
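
A quick way to prove the setup survives a path failure is to take one interface down and watch the map (interface names from the lab table above; ifdown/ifup assume the RHEL 7 network scripts):

[root@server1 ~]# ifdown eth0            ### drop the 10.10.10.x path
[root@server1 ~]# multipath -ll          ### the sdc path should now show as faulty; I/O keeps flowing via sdd
[root@server1 ~]# ifup eth0              ### restore the path; multipathd fails back per the failback policy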

VI. Overview of the multipath Configuration File


The available section keywords are:
    - defaults: default values for global attributes.
    - blacklist: devices listed here are ignored by multipath.
    - blacklist_exceptions: exceptions to the blacklist; devices listed here are not ignored even if they match the blacklist.
    - multipaths: settings for individual multipath devices.
    - devices: settings for individual storage controller types.
    
Common configuration keywords:
uid_attribute: the udev attribute that uniquely identifies a device; defaults to ID_SERIAL
path_grouping_policy       ### path grouping policy
    - failover             ### one path per group (default)
    - multibus             ### all paths in one group
    - group_by_serial      ### group by serial number
    - group_by_prio        ### group by priority
    - group_by_node_name   ### group by target node name

path_selector              ### I/O path selection algorithm
    - service-time 0       ### pick the path with the shortest estimated service time (default)
    - round-robin 0        ### loop through the paths in turn
    - queue-length 0       ### pick the path with the fewest outstanding I/Os

failback                   ### failback management
    - immediate            ### fail back immediately to the highest-priority group with active paths
    - manual               ### no automatic failback; the operator intervenes
    - followover           ### fail back automatically only when the first path of a group becomes active again; prevents a node from failing back while another node has requested the failover
    
no_path_retry              ### how many retries before queueing is disabled; or one of:
    - fail                 ### fail I/O immediately, with no queueing
    - queue                ### queue I/O indefinitely, until a path is restored
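
As a worked example, a defaults block combining these keywords might look like the following; the values are illustrative for this lab, not settings taken from the sections above:

defaults {
        user_friendly_names yes
        find_multipaths yes
        path_grouping_policy multibus      ### put both iSCSI paths in one group
        path_selector "round-robin 0"      ### spread I/O across the paths
        failback immediate                 ### return to the preferred group as soon as it recovers
        no_path_retry fail                 ### fail I/O rather than queue when all paths are down
}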
