Distributed block device DRBD9 Basic usage (Quick Start Tutorial)

1 Overview

1.1 Introduction

DRBD (Distributed Replicated Block Device) is a distributed storage system for the Linux platform. It consists of a kernel module, several user-space management programs, and some shell scripts, and it is commonly used in high-availability (HA) clusters. DRBD is similar to RAID 1 (mirroring) in a disk array, except that RAID mirrors only within a single computer, while DRBD mirrors data over the network.

1.2 Basic principle

DRBD sits in the storage layer of the Linux kernel. Its architecture has two parts: a kernel module that implements a virtual block device, and user-space management programs that communicate with the kernel module to manage DRBD resources. In DRBD, a resource describes all aspects of a particular replicated storage device. It includes the resource name, the DRBD device (/dev/drbdm, where m is the device minor number, up to a maximum of 147), the disk configuration (the local data that DRBD will use), and the network configuration (how the peers communicate with each other).
A DRBD system consists of two or more nodes. Like an HA cluster, it has a primary node and one or more standby nodes (DRBD allows read/write access on only one node at a time: the DRBD device on the primary is mounted to a directory and used, while the DRBD device on the standby is not mounted, because it only receives data from the primary via DRBD). On the node holding the primary role, applications and the operating system can run and access the DRBD device.
DRBD is layered on top of a normal block device and below the file system, forming an intermediate layer between the file system and the disk. When user data is written to the file system on the primary node, the write to disk is intercepted by DRBD: the data is written to the local disk, and at the same time the user-space management program is notified to copy the data over the network to the remote DRBD mirror, where it is saved to the disk mapped by the DRBD device on the standby node. As shown in the figure:
(Figure: schematic of DRBD operation)

DRBD writes data to the virtual mirrored device in blocks and supports three replication protocols:
  A: the write is considered complete once the data has been written to the local disk and sent out on the network
  B: the write is considered complete once the peer acknowledges that it has received the data
  C: the write is considered complete only after the peer confirms that the data has been written to its disk
For safety reasons, protocol C is generally chosen; a configuration sketch follows.
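
The protocol is chosen in the net section of the DRBD configuration. In this tutorial it is set once in the common section (see 3.4.1) so that it applies to all resources; it can also be overridden per resource. A minimal sketch (the resource name r0 is the one used later in this tutorial):

resource r0 {
  net {
    protocol C;   # sketch: synchronous replication; writes complete only after the peer has written to disk
  }
  # ... device, disk, address and meta-disk options as shown in section 3.4.1
}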

2. Prepare the experimental environment

2.1 System Environment

This tutorial is based on the latest version of DRBD and the latest version of CentOS. Updated: 2019-07-04

Host name   System version       IP address       DRBD disk
node1       centos7.6_minimal    192.168.10.30    /dev/mapper/drbd-data
node2       centos7.6_minimal    192.168.10.40    /dev/mapper/drbd-data

You can also use a local hard disk directly (for example sdb, sdc, etc.) as the DRBD disk; a logical volume is used here to prepare for a later DRBD disk-expansion experiment, as sketched below.
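
For reference, the /dev/mapper/drbd-data volume used in the table above can be created from a spare disk with LVM. A minimal sketch, assuming a hypothetical spare disk /dev/sdb, run on both nodes:

# Create a volume group "drbd" and a logical volume "data";
# device-mapper then exposes it as /dev/mapper/drbd-data
pvcreate /dev/sdb
vgcreate drbd /dev/sdb
lvcreate -n data -l 100%FREE drbd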

2.2 DRBD Software Download

DRBD official website address: https://www.linbit.com

Software name                Remark                       Version
DRBD 9 Linux Kernel Driver   DRBD9 kernel component       drbd-9.0.18-1.tar.gz
DRBD Utilities               DRBD9 management component   drbd90-utils-9.6.0-1.el7.elrepo.x86_64.rpm
DRBD Sysvinit                DRBD9 management component   drbd90-utils-sysvinit-9.3.1-1.el7.elrepo.x86_64..>

3. Start deployment (deploy on both the primary and standby nodes)

Some basic initialization (selinux, ntpdate, firewalld, hosts, etc.) is not described in detail here; a sketch of typical commands follows.
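
For completeness, one possible set of initialization commands, assuming the host names and IP addresses from section 2.1 (run on both nodes; the NTP server is an example and should be adjusted to your environment):

# Disable SELinux and the firewall (alternatively, keep firewalld and open the DRBD ports, see 3.4)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld
# Synchronize the system clock
yum install ntpdate -y
ntpdate pool.ntp.org
# Host name resolution for both nodes
cat >> /etc/hosts <<EOF
192.168.10.30 node1
192.168.10.40 node2
EOF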

3.1 Kernel update

# yum install kernel kernel-devel gcc glibc -y
# Reboot the server for the new kernel to take effect
# reboot
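
After the reboot it is worth confirming that the running kernel matches the installed kernel-devel package, since the DRBD module is built against it in the next step:

# Both should report the same version, e.g. 3.10.0-957.21.3.el7.x86_64
uname -r
rpm -q kernel-devel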

3.2 Deploy the DRBD9 kernel component

[root@node1 drbd]# tar zxf drbd-9.0.18-1.tar.gz 
[root@node1 drbd]# cd drbd-9.0.18-1
# This file shows that DRBD9 requires management tools (drbd-utils) version >= 9.3
[root@node1 drbd-9.0.18-1]# cat README.drbd-utils 
=======================================================================
  With DRBD module version 8.4.5, we split out the management tools
  into their own repository at https://github.com/LINBIT/drbd-utils
  (tarball at http://links.linbit.com/drbd-download)

  That started out as "drbd-utils version 8.9.0",
  has a different release cycle,
  and provides compatible drbdadm, drbdsetup and drbdmeta tools
  for DRBD module versions 8.3, 8.4 and 9.

  Again: to manage DRBD 9 kernel modules and above,
  you want drbd-utils >= 9.3 from above url.
=======================================================================
# Specify the KDIR parameter so that drbd is built against the system kernel
[root@node1 drbd-9.0.18-1]# make KDIR=/usr/src/kernels/3.10.0-957.21.3.el7.x86_64/
[root@node1 drbd-9.0.18-1]# make install
# Load the DRBD module into the kernel
[root@node1 drbd-9.0.18-1]# modprobe drbd
# Confirm that the module loaded successfully
[root@node1 drbd-9.0.18-1]# lsmod | grep drbd
drbd                  558570  0 
libcrc32c              12644  2 xfs,drbd
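
Note that modprobe only loads the module for the current boot. To have it loaded automatically after a reboot on CentOS 7, it can be registered with systemd-modules-load, for example:

# Load the drbd module automatically at boot
echo drbd > /etc/modules-load.d/drbd.conf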

3.3 Deploy the DRBD9 management components

[root@node1 drbd]# yum install drbd90-utils-9.6.0-1.el7.elrepo.x86_64.rpm -y
[root@node1 drbd]# yum install drbd90-utils-sysvinit-9.6.0-1.el7.elrepo.x86_64.rpm -y
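
A quick sanity check that the packages and management tools are in place:

# List the installed drbd90 packages and show the tool version
rpm -qa | grep drbd90
drbdadm --version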

3.4 Configure DRBD9

After installing the DRBD software, roughly the same amount of storage space must be configured on both servers. Any of the following storage devices can be used:

  • A physical disk device
  • A software RAID device
  • A LVM logical volume
  • Any block device

DRBD network requirements: a direct connection between the servers or a connection through a switch is recommended; going through intermediate routers is not recommended, although this is not mandatory. DRBD needs TCP ports 7788 to 7799.
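
If firewalld was left enabled during initialization, the DRBD ports can be opened on both nodes, for example:

# Open the TCP port range used by DRBD and reload the firewall rules
firewall-cmd --permanent --add-port=7788-7799/tcp
firewall-cmd --reload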

3.4.1 Configure your resources (perform on both the primary and standby machines)

/etc/drbd.conf is the DRBD configuration file; it currently contains only the following two lines:

include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";

Where:
/etc/drbd.d/global_common.conf: contains the DRBD global and common configuration
/etc/drbd.d/*.res: user resource configuration files

The following is a simple configuration example:

vim /etc/drbd.d/global_common.conf

global {
  usage-count yes;
}
common {
  net {
    protocol C;
  }
}
vim /etc/drbd.d/r0.res

resource r0 {
  on node1 {  # on <hostname>
    device    /dev/drbd1;  # the mapped DRBD device; may be left at the default
    disk      /dev/mapper/drbd-data;   # the backing disk that will hold the data
    address   192.168.10.30:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd1;
    disk      /dev/mapper/drbd-data;
    address   192.168.10.40:7789;
    meta-disk internal;
  }
}
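
Both nodes need identical configuration files. One way is to edit them on node1 and copy them to node2 (assuming root ssh access between the nodes):

# Copy the common and resource configuration to the peer
scp /etc/drbd.d/global_common.conf /etc/drbd.d/r0.res node2:/etc/drbd.d/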

3.4.5 Start the drbd service

After the initial configuration is complete, start the service. Remember your resource name (r0).

# Create the device metadata
[root@node1 ~]# drbdadm create-md r0

  --==  Thank you for participating in the global usage survey  ==--
The server's response is:

you are the 859th user to install this version
initializing activity log
initializing bitmap (192 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success
# Bring up the resource and check its status
[root@node1 ~]# drbdadm up r0
[root@node1 ~]# drbdadm status r0
r0 role:Secondary
  disk:Inconsistent
  node2 role:Secondary
    peer-disk:Inconsistent

# Both disks are currently in the Inconsistent state (data not yet synchronized)
# Run the next command on one node only, since it promotes that node to primary
[root@node1 ~]# drbdadm primary --force r0
[root@node1 ~]# drbdadm status r0
r0 role:Primary
  disk:UpToDate
  node2 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:49.11

# The local disk state is now UpToDate, but synchronization is still in progress: 49.11% done

# The following output means synchronization is fully complete
[root@node1 ~]# drbdadm status r0
r0 role:Primary
  disk:UpToDate
  node2 role:Secondary
    peer-disk:UpToDate
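
During a long initial synchronization, progress can be followed continuously, for example:

# Refresh the resource status every second
watch -n1 drbdadm status r0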

3.4.6 Detailed service status
[root@node1 ~]# drbdsetup status r0 --verbose --statistics
r0 node-id:0 role:Primary suspended:no
    write-ordering:flush
  volume:0 minor:1 disk:UpToDate quorum:yes
      size:6291228 read:6292272 written:0 al-writes:0 bm-writes:0 upper-pending:0 lower-pending:0 al-suspended:no blocked:no
  node2 node-id:1 connection:Connected role:Secondary congested:no ap-in-flight:0 rs-in-flight:0
    volume:0 replication:Established peer-disk:UpToDate resync-suspended:no
        received:0 sent:6291228 out-of-sync:0 pending:0 unacked:0

(Figure: DRBD run state diagram)

3.5 Format the file system and mount it

This operation needs to be performed only on the primary node.

[root@node1 ~]# mkfs.xfs /dev/drbd1
meta-data=/dev/drbd1             isize=512    agcount=4, agsize=393202 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1572807, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount /dev/drbd1 /mydata/
[root@node1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.5G  6.6G  19% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.6M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               1014M  156M  859M  16% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/drbd1               6.0G   33M  6.0G   1% /mydata

3.6 Write files and test primary/standby switchover

# Enter the mount directory /mydata and create a few test files
[root@node1 ~]# cd /mydata/
[root@node1 mydata]# touch {a,b,c,d,e,f}.txtx
[root@node1 mydata]# ll
total 0
-rw-r--r-- 1 root root 0 Jul  5 12:18 a.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 b.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 c.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 d.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 e.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 f.txtx

# Try switching the original primary node to secondary
[root@node1 ~]# umount /mydata/
[root@node1 ~]# drbdadm secondary r0
[root@node1 ~]# drbdadm status r0
r0 role:Secondary
  disk:UpToDate
  node2 role:Secondary
    peer-disk:UpToDate

# Promote node2 from secondary to primary and mount the device
[root@node2 ~]# mkdir /mydata
[root@node2 ~]# drbdadm primary r0
[root@node2 ~]# mount /dev/drbd1 /mydata/
[root@node2 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.5G  6.6G  19% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.6M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               1014M  156M  859M  16% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/drbd1               6.0G   33M  6.0G   1% /mydata
[root@node2 ~]# cd /mydata/
[root@node2 mydata]# ll
total 0
-rw-r--r-- 1 root root 0 Jul  5 12:18 a.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 b.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 c.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 d.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 e.txtx
-rw-r--r-- 1 root root 0 Jul  5 12:18 f.txtx
[root@node2 mydata]# drbdadm status r0
r0 role:Primary
  disk:UpToDate
  node1 role:Secondary
    peer-disk:UpToDate

From the above operations you can see that node2 has successfully switched to the primary role, and the data previously created on node1 has been synchronized to node2. Likewise, new files created on node2 will be synchronized to node1, so the two nodes effectively form a RAID 1 over the network.
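
For repeated testing, the manual switchover steps above can be wrapped in a small script. A minimal sketch, run on the current primary (node1), assuming password-less root ssh to the peer and the resource name r0 and mount point /mydata used in this tutorial:

#!/bin/bash
# failover.sh - hand resource r0 over from this node to the peer
set -e
RES=r0
MNT=/mydata
PEER=node2

umount $MNT                  # release the file system on the old primary
drbdadm secondary $RES       # demote the local node
ssh $PEER "drbdadm primary $RES && mount /dev/drbd1 $MNT"   # promote and mount on the peer
drbdadm status $RES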

Origin blog.51cto.com/11267188/2417532