Configure and mount an OCFS2 file system on DRBD

This time I'm writing up a multi-node shared file system setup.

Background

I have installed Oracle RAC before: I requested a redundant shared disk, configured OCFS2 on it, and mounted it on all nodes to store database backups, so the backup content was visible to every node. Using ACFS on Oracle ASM is also possible (I have done it before, but there is no record; I will write it up when I have time).

This time, the same version of RAC was deployed at a new site and I wanted to use the same method, but the o2cb.init configure step of the OCFS2 setup reported an error and got stuck. Since time was tight, the backup disk ended up mounted on only one of the nodes.

Afterwards I reviewed the cause: the previous successful OCFS2 configuration was on Oracle Linux 7, which boots the UEK kernel by default and supports the OCFS2 file system out of the box. This time we were on Red Hat 7, which does not support OCFS2 by default.

Today I mainly want to share how to use DRBD to synchronize local disks and build an OCFS2 shared file system on top of them.

DRBD is a software-based, shared-nothing storage replication solution that mirrors block device content between servers. Not much needs to be said about it; it has been around for many years. When you cannot afford high-end hardware storage, it is a cheap, highly available disk synchronization solution, and it genuinely works well.

Date: 2023-05-23

1 Test environment

Oracle Linux 7.9 (UEK); two test nodes; local disks only, no shared disks.

2 Preparations

Create logical volumes /dev/vg1/lv1 and /dev/vg1/lv2

# Both nodes use local disks; create LVM logical volumes
pvcreate /dev/sdb
vgcreate vg1 /dev/sdb
lvcreate -n lv1 -l 40%VG vg1
lvcreate -n lv2 -l 40%VG vg1
ll /dev/vg1
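
As an optional sanity check before moving on, the standard LVM reporting commands will confirm the layout:

# confirm the physical volume, volume group, and the two logical volumes
pvs /dev/sdb
vgs vg1
lvs vg1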

3 Host configuration (unless otherwise noted, run on all nodes)

hostnamectl set-hostname db01 # run on host 1: set the hostname
hostnamectl set-hostname db02 # run on host 2: set the hostname
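
DRBD's on stanzas and the OCFS2 cluster configuration both refer to the nodes by these names, so if name resolution is not already handled elsewhere, it may help to add both nodes to /etc/hosts on each machine. A sketch using the addresses that appear later in this article:

# map both node names to their IPs (assumes the addresses used below)
cat >> /etc/hosts << EOF
192.168.55.144 db01
192.168.55.185 db02
EOF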

# Disable SELinux; open the DRBD and OCFS2 ports in the firewall
firewall-cmd --permanent --add-port=7788/tcp --add-port=7789/tcp --add-port=7777/tcp && firewall-cmd --reload && firewall-cmd --permanent --list-all
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config ; sed -i "s/SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
setenforce 0

# Install packages
yum -y install http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum -y install drbd90-utils kmod-drbd90
systemctl enable drbd --now
lsmod |grep drbd

# Configure DRBD
cat > /etc/drbd.d/global_common.conf << EOF
global {
    usage-count no;
}
common {
    net {
        protocol C;
    }
}
EOF

# drbd1: dual-primary not allowed
cat > /etc/drbd.d/drbd1.res << EOF
resource drbd1 {
    disk /dev/vg1/lv1;
    device /dev/drbd1;
    meta-disk internal;
    on db01 {
        address 192.168.55.144:7788;
    }
    on db02 {
        address 192.168.55.185:7788;
    }
}
EOF

# drbd2: dual-primary allowed
cat > /etc/drbd.d/drbd2.res << EOF
resource drbd2 {
    net {
        allow-two-primaries;
    }
    disk /dev/vg1/lv2;
    device /dev/drbd2;
    meta-disk internal;
    on db01 {
        address 192.168.55.144:7789;
    }
    on db02 {
        address 192.168.55.185:7789;
    }
}
EOF
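
Before initializing the resources, it is worth confirming that the configuration files parse cleanly; drbdadm can print back the configuration it actually sees:

# sanity-check the resource files on both nodes
drbdadm dump all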

drbdadm create-md drbd1 # initialize resource metadata (all nodes)
drbdadm up drbd1 # bring the resource up (all nodes)
drbdadm primary drbd1 --force # run on node 1: promote to primary
drbdadm role drbd1 # check resource roles; primary/secondary at this point
drbdadm status drbd1 # check status
drbdsetup status drbd1 --verbose --statistics # check sync progress
drbdadm create-md drbd2 # initialize resource metadata (all nodes)
drbdadm up drbd2 # bring the resource up (all nodes)
drbdadm primary drbd2 --force # run on node 1: promote to primary
drbdadm primary drbd2 # run on node 2: promote to primary
drbdadm role drbd2 # check resource roles; now dual-primary
drbdadm status drbd2 # check status
drbdsetup status drbd2 --verbose --statistics # check sync progress
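
If this setup is scripted, it may be worth waiting for the initial sync to finish before formatting anything. A minimal sketch, assuming only that dstate reports UpToDate once synchronization completes:

# poll until the local disk state reports UpToDate (output format differs slightly between DRBD 8 and 9)
until drbdadm dstate drbd1 | grep -q '^UpToDate'; do sleep 10; done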

# The following is a quick test of drbd1 in primary/secondary mode; an incidental test, unrelated to OCFS2
# run on node 1
parted /dev/drbd1 mklabel gpt
parted /dev/drbd1 mkpart p1 ext4 1 100%
mkfs.xfs /dev/drbd1 -f # note: this formats the whole device, overwriting the partition table created above
mkdir /u01 && mount /dev/drbd1 /u01
date > /u01/test ; cat /u01/test
umount /u01
drbdadm secondary drbd1
echo '[ "$(drbdadm role drbd1)" = "Secondary/Secondary" ] && drbdadm primary drbd1 ; sleep 10 ; [ "$(drbdadm role drbd1)" = "Primary/Secondary" ] && mount /dev/drbd1 /u01' >> /etc/rc.local #持久化磁盘挂载
# run on node 2
drbdadm primary drbd1
mkdir /u01 && mount /dev/drbd1 /u01
cat /u01/test
echo '[ "$(drbdadm role drbd1)" = "Primary/Secondary" ] && mount /dev/drbd1 /u01' >> /etc/rc.local #持久化磁盘挂载

# Test result: primary/secondary sync works: the primary can mount, any change syncs to the secondary, and roles can be switched to promote the secondary and mount it there
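
One caveat about persisting commands in rc.local: on EL7 systems /etc/rc.d/rc.local is not executable by default, so the appended lines are silently skipped at boot until the script is made executable on both nodes:

# rc.local only runs at boot once it is executable (EL7 ships it non-executable)
chmod +x /etc/rc.d/rc.local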

4 OCFS2 configuration

Note that the node names in cluster.conf must match the machines' hostnames.

yum -y install ocfs2-tools
yum -y install ocfs2-tools-devel #linux 7 only
[ -d /etc/ocfs2 ] || mkdir -p /etc/ocfs2
cat >/etc/ocfs2/cluster.conf <<EOF
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.55.144
	number = 0
	name = db01
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.55.185
	number = 1
	name = db02
	cluster = ocfs2
EOF
# Initialize the OCFS2 configuration: answer yes to the first prompt; for the third prompt enter the cluster name from the config file above (default ocfs2); accept the defaults for the rest. The result is stored in /etc/sysconfig/o2cb
o2cb.init configure
# Make sure the o2cb and ocfs2 services are started and enabled at boot
systemctl enable o2cb --now
systemctl enable ocfs2 --now
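# optional sanity check: confirm the cluster stack is loaded and the ocfs2 cluster is online
o2cb.init status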
# On either host: partition and format
parted /dev/drbd2 mklabel gpt
parted /dev/drbd2 mkpart p1 ext4 1 100%
mkfs.ocfs2 /dev/drbd2 # note: again formats the whole device rather than the partition
# On the other host: re-read the partition table
partprobe /dev/drbd2
# On all hosts: create the mount point, mount, and persist across reboots
[ -d /u02 ] || mkdir /u02 ; mount /dev/drbd2 /u02
echo 'sleep 10 ; drbdadm primary drbd2 ; sleep 10 ; mount /dev/drbd2 /u02' >> /etc/rc.local

Test

# Add the following job to the crontab on both nodes
* * * * * echo ${HOSTNAME} $(date) >> /u02/test.log
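
After a couple of minutes, entries from both hostnames should be interleaved in the log file, confirming that both nodes really are writing to the same file system:

# entries from both db01 and db02 should appear
tail /u02/test.log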

Conclusion

The file system can be mounted on both nodes at the same time; both nodes can write data and see each other's writes.

Shutdown and restart were tested following the steps above with no problems.

5 Other DRBD commands

# check sync status
drbdadm status drbd1
drbdsetup status drbd1 --verbose --statistics
# view and switch resource roles
drbdadm role drbd1
drbdadm primary drbd1
drbdadm secondary drbd1
# view disk and connection states
drbdadm dstate drbd1
drbdadm cstate drbd1
# split-brain recovery
# 1. on the node whose data will be discarded (make it secondary):
drbdadm secondary drbd1
drbdadm -- --discard-my-data connect drbd1
# 2. on the surviving primary node:
drbdadm connect drbd1

Origin: blog.csdn.net/weixin_44496870/article/details/131656701