DRBD distributed block device replication

Author: independence pen widow @TaoCloud

DRBD (Distributed Replicated Block Device) is a software-based, shared-nothing storage replication solution that mirrors the contents of block devices between servers. It can be thought of as RAID-1 over the network.

DRBD's core functionality is implemented in the Linux kernel, close to the system's I/O stack. DRBD sits below the file system, i.e. closer to the operating system kernel and the I/O stack than the file system itself.
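
For reference, this kind of mirroring is normally described in a DRBD resource configuration file. The tutorial below uses drbdmanage, which generates such files automatically, so the following is only an illustrative sketch; the resource name r0, the backing disk /dev/sdb1 and the port 7789 are assumptions, while the hostnames and IP addresses are taken from the environment table below.

# Illustrative /etc/drbd.d/r0.res (not needed when using drbdmanage)
resource r0 {
  device    /dev/drbd0;        # replicated block device exposed to upper layers
  disk      /dev/sdb1;         # local backing disk on each node
  meta-disk internal;          # keep DRBD metadata on the backing disk
  on node1 {
    address 172.16.201.53:7789;
  }
  on node2 {
    address 172.16.201.54:7789;
  }
}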


1. Prepare the environment

Node     Hostname   IP address       Disks      Operating system
Node 1   node1      172.16.201.53    sda, sdb   CentOS 7.6
Node 2   node2      172.16.201.54    sda, sdb   CentOS 7.6

Turn off the firewall and SELinux

# Configure on both nodes
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Configure the EPEL repository

# Configure on both nodes
yum install epel-release

2. Install DRBD

If your yum repositories provide the complete set of DRBD packages, you can install DRBD directly with yum. If yum cannot find some of the packages, build them from source instead. Choose one of the two methods below.

1. Install DRBD with yum

yum install drbd drbd-bash-completion drbd-udev drbd-utils kmod-drbd

The kmod-drbd package is often not available from the yum repositories; if it is missing, DRBD has to be compiled and installed from source as described below.
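
Before deciding between the two methods, you can check which DRBD packages your enabled repositories actually provide, for example:

# List the DRBD-related packages available from the enabled repositories
yum list available 'drbd*' 'kmod-drbd*'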

2. Compile and install DRBD

2.1 Prepare the compilation environment

yum update
yum -y install gcc gcc-c++ make automake autoconf help2man libxslt libxslt-devel flex rpm-build kernel-devel pygobject2 pygobject2-devel
reboot
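
Building the kernel module requires kernel-devel headers that match the running kernel, which is why the reboot after yum update is useful. A quick sanity check, for example:

# The two reported versions should match before building the module
uname -r
rpm -q kernel-devel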

2.2 Download the source packages from the official website

Obtain the download links for the source packages from the official website, https://www.linbit.com/en/drbd-community/drbd-download/, and download them.

wget https://www.linbit.com/downloads/drbd/9.0/drbd-9.0.21-1.tar.gz
wget https://www.linbit.com/downloads/drbd/utils/drbd-utils-9.13.0.tar.gz
wget https://www.linbit.com/downloads/drbdmanage/drbdmanage-0.99.18.tar.gz
mkdir -p rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS} 
mkdir DRBD9

2.3 Compile and generate the rpm packages

tar xvf drbd-9.0.21-1.tar.gz
cd drbd-9.0.21-1
make kmp-rpm
cp /root/rpmbuild/RPMS/x86_64/*.rpm /root/DRBD9/
cd ..    # return to the directory containing the downloaded tarballs
tar xvf drbdmanage-0.99.18.tar.gz
cd drbdmanage-0.99.18
make rpm
cp dist/drbdmanage-0.99.18*.rpm /root/DRBD9/

2.4 Install the generated rpm packages

# Install on both nodes
cd /root/DRBD9
yum install drbd-kernel-debuginfo-9.0.21-1.x86_64.rpm drbdmanage-0.99.18-1.noarch.rpm drbdmanage-0.99.18-1.src.rpm kmod-drbd-9.0.21_3.10.0_1160.6.1-1.x86_64.rpm
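
After installation you can verify, for example, that the DRBD 9 kernel module is present and loads on the running kernel:

# Run on both nodes
modinfo drbd | grep -E '^(filename|version)'
modprobe drbd
cat /proc/drbd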

3. Configure DRBD

1. Create the volume group on the primary node

# Run on node 1
pvcreate /dev/sdb1 
vgcreate drbdpool /dev/sdb1 
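
By default, drbdmanage's LVM storage plugin expects a volume group named drbdpool, so it is worth confirming the volume group exists before initializing the cluster, for example:

# Run on node 1
vgs drbdpool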

2. Initialize the DRBD cluster and add nodes

# Run on node 1
[root@node1 ~]# drbdmanage init 172.16.201.53

You are going to initialize a new drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
Empty drbdmanage control volume initialized on '/dev/drbd0'.
Empty drbdmanage control volume initialized on '/dev/drbd1'.
Waiting for server: .
Operation completed successfully

# Add node 2
[root@node1 ~]# drbdmanage add-node node2 172.16.201.54
Operation completed successfully
Operation completed successfully
Host key verification failed.
Give leader time to contact the new node
Operation completed successfully
Operation completed successfully

Join command for node node2:
drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE

Note the join command on the last line of the output, "drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE"; it will be executed on node 2 to join the cluster.

3. Create the volume group on the secondary node

# Run on node 2
pvcreate /dev/sdb 
vgcreate drbdpool /dev/sdb 

4. Join the secondary node to the cluster

# Run on node 2
[root@node2 ~]# drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:

  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
Waiting for server to start up (can take up to 1 min)
Operation completed successfully
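
Before checking the DRBD status, you can also list the cluster members as drbdmanage sees them, for example:

# Run on either node; both node1 and node2 should be listed
drbdmanage list-nodes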

5. Check the cluster status

# Run on node 1; the output below shows a healthy state
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

6. Create a resource

# Run on node 1
# Create the resource test01
[root@node1 ~]# drbdmanage add-resource test01
Operation completed successfully
[root@node1 ~]# drbdmanage list-resources
+----------------+
| Name   | State |
|----------------|
| test01 |    ok |
+----------------+

7. Create a volume

# Run on node 1
# Create a 5 GB volume in resource test01
[root@node1 ~]# drbdmanage add-volume test01 5GB
Operation completed successfully
[root@node1 ~]# drbdmanage list-volumes
+-----------------------------------------------------------------------------+
| Name   | Vol ID |     Size | Minor |                                | State |
|-----------------------------------------------------------------------------|
| test01 |      0 | 4.66 GiB |   100 |                                |    ok |
+-----------------------------------------------------------------------------+
[root@node1 ~]#

8. Deploy the resource

The number "2" at the end indicates the number of nodes

# Run on node 1
[root@node1 ~]# drbdmanage deploy-resource test01 2
Operation completed successfully

# Right after deployment, the peer disk state is Inconsistent while the initial synchronization is in progress
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

test01 role:Secondary
  disk:UpToDate
  node2 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:5.70

# After synchronization completes, the status looks like this
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

test01 role:Secondary
  disk:UpToDate
  node2 role:Secondary
    peer-disk:UpToDate
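
To follow the initial synchronization as it progresses, you can watch the status of just this resource, for example:

# Run on node 1; refreshes every second until the peer disk reports UpToDate
watch -n1 drbdadm status test01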

9. After configuring the DRBD device, create a file system and mount it

# Run on node 1
# The number in /dev/drbd*** is the [Minor] value obtained with the command [drbdmanage list-volumes]
[root@node1 ~]# mkfs.xfs /dev/drbd100 
meta-data=/dev/drbd100           isize=512    agcount=4, agsize=305176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1220703, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mount /dev/drbd100 /mnt/
[root@node1 ~]# echo "Hello World" > /mnt/test.txt
[root@node1 ~]# ll /mnt/
total 4
-rw-r--r-- 1 root root 12 Nov 26 15:43 test.txt
[root@node1 ~]# cat /mnt/test.txt 
Hello World

10. Mount the DRBD device on node 2, and perform the following operations:

# Run on node 1
# Unmount /mnt and demote node 1 to secondary
[root@node1 ~]# umount  /mnt/
[root@node1 ~]# drbdadm secondary test01

# Run on node 2
# Promote node 2 to primary
[root@node2 ~]# drbdadm primary test01
[root@node2 ~]# mount /dev/drbd100 /mnt/
[root@node2 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                   tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs                   tmpfs     3.9G  8.9M  3.9G   1% /run
tmpfs                   tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        35G  1.5G   34G   5% /
/dev/sda1               xfs      1014M  190M  825M  19% /boot
tmpfs                   tmpfs     783M     0  783M   0% /run/user/0
/dev/drbd100            xfs       4.7G   33M  4.7G   1% /mnt
[root@node2 ~]# ls -l /mnt/
total 4
-rw-r--r-- 1 root root 12 Nov 26 15:43 test.txt
[root@node2 ~]# cat /mnt/test.txt 
Hello World
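
Switching back to node 1 follows the same pattern in reverse; a short sketch, assuming the resource is still mounted on node 2:

# Run on node 2: unmount and demote
umount /mnt/
drbdadm secondary test01

# Run on node 1: promote and mount again
drbdadm primary test01
mount /dev/drbd100 /mnt/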
