1. Concept:
- CNFS (Clustered NFS) is a GPFS feature used to configure and manage NFS file sharing and data access across multiple server nodes.
- It lets multiple nodes serve the same file system data simultaneously, providing a high-performance, highly available NFS storage solution.
2. Create a CNFS file system:
- Change the existing GPFS file system to a CNFS-capable one by setting the syncnfs mount option:
[root@node1 ~]# mmchfs gpfs1 -o syncnfs
mmchfs: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
- Edit /etc/exports (the NFS server's export configuration file):
# | shared directory | host (options) |
# /usr/qv123/nfsdata 192.168.73.0/24(rw)
man exports # lists all parameters
Export options:
  rw/ro                        rw: read-write; ro: read-only. Access is still subject to the file system's own rwx permissions.
  sync/async                   sync: data is written to memory and disk at the same time; async: data is buffered in memory first.
  no_root_squash/root_squash   no_root_squash: keep the root user and group as root; root_squash: map root to the anonymous user and group (default).
  all_squash/no_all_squash     all_squash: files created by any client user are mapped to the anonymous user and group; no_all_squash: files created by ordinary client users keep their UID and GID on the server (default).
  anonuid=/anongid=            Map files to the specified UID and GID; if unset, the default is 65534 (nfsnobody).
[root@node1 ~]# cat << eof > /etc/exports
/gpfs1/nfs 192.168.10.0/24(rw,fsid=11)
eof
[root@node2 ~]# cat << eof > /etc/exports
/gpfs1/nfs 192.168.10.0/24(rw,fsid=11)
eof
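The export entry above follows one fixed pattern, so it can be generated rather than typed on each node. A minimal sketch using the values from this walkthrough; the entry is written to a temporary file so the script is safe to run anywhere, whereas a real CNFS node would target /etc/exports:

```shell
#!/bin/sh
# Sketch: generate the /etc/exports entry used above.
# Written to a temp file here; on a real node the target is /etc/exports.
EXPORT_DIR=/gpfs1/nfs
CLIENT_NET=192.168.10.0/24
OPTS="rw,fsid=11"    # fsid must be identical on every CNFS node
EXPORTS_FILE=$(mktemp)

printf '%s %s(%s)\n' "$EXPORT_DIR" "$CLIENT_NET" "$OPTS" > "$EXPORTS_FILE"
cat "$EXPORTS_FILE"
```

Keeping the fsid in one variable makes it harder for the two nodes' exports to drift apart, which matters because CNFS failover relies on both nodes exporting the share with the same fsid.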
- Apply the configuration on each node:
[root@node1 ~]# exportfs -r
[root@node2 ~]# exportfs -r
- Enable nfsd to start automatically on each server:
[root@node1 ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@node2 ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
- Specify the shared root directory for the CNFS servers, preferably a separate small file system that is not itself exported over NFS:
[root@node1 ~]# mmchconfig cnfsSharedRoot=/gpfs1/cnfs
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
- Specify the network interface used for CNFS service on each CNFS server node. The IP address is the one configured above specifically for NFS, and -N takes the node's GPFS host name. (CNFS also allows multiple Spectrum Scale nodes to access shared data over RDMA, Remote Direct Memory Access.)
[root@node1 ~]# mmchnode --cnfs-interface=192.168.10.151 -N node1
Wed Sep 27 05:24:58 EDT 2023: mmchnode: Processing node node1
mmnfsinstall: CNFS has modified configuration file /etc/sysconfig/network-scripts/ifcfg-ens38. Restarting monitor
mmchnode: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
[root@node2 ~]# mmchnode --cnfs-interface=192.168.10.152 -N node2
Wed Sep 27 05:33:53 EDT 2023: mmchnode: Processing node node2
mmnfsinstall: CNFS has modified configuration file /etc/sysconfig/network-scripts/ifcfg-ens38. Restarting monitor
mmchnode: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
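The per-node interface assignments above follow a fixed pattern, so they can be generated. A dry-run sketch: the node-to-IP mapping is the one from this example, and the mmchnode commands are only printed, never executed, so the output can be reviewed before applying it:

```shell
#!/bin/sh
# Dry run: print one mmchnode call per node:CNFS-IP pair.
# Pipe the output to sh on the cluster to actually apply it.
gen_cnfs_cmds() {
    for pair in node1:192.168.10.151 node2:192.168.10.152; do
        printf 'mmchnode --cnfs-interface=%s -N %s\n' \
            "${pair##*:}" "${pair%%:*}"
    done
}
gen_cnfs_cmds
```

Printing instead of executing keeps the script harmless on a workstation; on the cluster, `gen_cnfs_cmds | sh` would run both assignments.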
- Specify the mountd service listening port on each CNFS server:
[root@node1 ~]# mmchconfig cnfsMountdPort=3000 -N node1
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
[root@node2 ~]# mmchconfig cnfsMountdPort=3000 -N node2
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
- View the CNFS nodes in the cluster:
[root@node1 ~]# mmlscluster --cnfs

GPFS cluster information
========================
  GPFS cluster name:         gpfs.node1
  GPFS cluster id:           1484988891362745278

Cluster NFS global parameters
-----------------------------
  Shared root directory:     /gpfs1/cnfs
  rpc.mountd port number:    3000
  nfsd threads:              32
  Reboot on failure enabled: yes
  CNFS monitor enabled:      yes

 Node  Daemon node name  IP address      CNFS state  group  CNFS IP address list
-------------------------------------------------------------------------------------------
   1   node1             192.168.10.101  enabled     0      192.168.10.151
   2   node2             192.168.10.102  enabled     0      192.168.10.152
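The per-node lines of that listing are easy to post-process with awk. A sketch that pulls out the CNFS IP address list; the sample node lines are hard-coded so the script runs without a cluster, and on a real node you would pipe `mmlscluster --cnfs` into the same filter:

```shell
#!/bin/sh
# Extract the CNFS IP (last column) of every enabled node from
# mmlscluster --cnfs style node lines. Sample lines are embedded here.
sample='   1   node1   192.168.10.101   enabled   0   192.168.10.151
   2   node2   192.168.10.102   enabled   0   192.168.10.152'

cnfs_ips=$(printf '%s\n' "$sample" | awk '$4 == "enabled" {print $NF}')
printf '%s\n' "$cnfs_ips"
```

A list like this is handy for scripting client-side checks, e.g. looping `showmount -e` over every CNFS IP.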
- Test whether the shared directory exists:
[root@node1 ~]# showmount -e 192.168.10.151
Export list for 192.168.10.151:
/gpfs1/nfs 192.168.10.0/24
3. Client mounting:
- Method 1: Temporary mount
- Install nfs-utils:
[root@gpfs-client ~]# yum install -y nfs-utils
- Check that the shared directory is reachable:
[root@gpfs-client ~]# showmount -e 192.168.10.151
Export list for 192.168.10.151:
/gpfs1/nfs 192.168.10.0/24
[root@gpfs-client ~]# showmount -e 192.168.10.152
Export list for 192.168.10.152:
/gpfs1/nfs 192.168.10.0/24
- Create the mount point, then mount the share (unmount later with umount):
[root@gpfs-client ~]# mkdir -p /mnt/nfs
[root@gpfs-client ~]# mount -o sync,hard,intr 192.168.10.151:/gpfs1/nfs /mnt/nfs
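Keeping the mount invocation in a small helper keeps the options consistent across clients. A sketch in which the command is only composed and printed (dry run); the share, mount point, and options are the ones from the example above:

```shell
#!/bin/sh
# Compose (but do not run) the NFS mount command used above.
SHARE=/gpfs1/nfs
MNT=/mnt/nfs
OPTS=sync,hard,intr

nfs_mount_cmd() {    # $1 = CNFS server IP
    printf 'mount -o %s %s:%s %s\n' "$OPTS" "$1" "$SHARE" "$MNT"
}
# Print the command for the first CNFS IP; apply it with: nfs_mount_cmd ... | sh
nfs_mount_cmd 192.168.10.151
```

Parameterizing the server IP makes it trivial to fall back to the second CNFS address (192.168.10.152) if the first is unreachable.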
- View the mount:
[root@gpfs-client ~]# df -h | grep nfs
192.168.10.151:/gpfs1/nfs   20G  3.7G   17G  19% /mnt/nfs
- Method 2: Automatic mount (autofs)
- Install autofs:
yum install -y autofs
- Configuration files:
autofs.conf: configuration of the autofs service itself.
    timeout = 300          # dismount_interval = 300; idle time before auto-unmount
auto.master: maps directories to their mount map files; e.g. the automount information for /misc is kept by autofs in /etc/auto.misc.
    Syntax: directory  path-to-map-file
auto.xxx: the actual mount entries (mount key, mount options, device to mount):
    cd    -fstype=iso9660,ro,nosuid,nodev  :/dev/cdrom
    boot  -fstype=ext2                     :/dev/hda1
- Modify the configuration files:
cat << eof > /etc/auto.master
/mnt /etc/auto.nfs
eof
cat << eof > /etc/auto.nfs
# local subdirectory  -mount options  directory provided by the server
nfs 192.168.10.151:/gpfs1/nfs
eof
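The two map edits above can also be generated from variables. A sketch that writes both files into a temporary directory so it is safe to run anywhere; a real client would write /etc/auto.master and /etc/auto.nfs and then restart autofs:

```shell
#!/bin/sh
# Sketch: generate the auto.master and auto.nfs entries shown above,
# writing into a temp dir so nothing under /etc is touched.
set -e
tmpdir=$(mktemp -d)

# auto.master: the /mnt directory is managed by the map in /etc/auto.nfs
printf '/mnt /etc/auto.nfs\n' > "$tmpdir/auto.master"

# auto.nfs: key (subdirectory under /mnt) and the server:path to mount there
printf 'nfs 192.168.10.151:/gpfs1/nfs\n' > "$tmpdir/auto.nfs"

cat "$tmpdir/auto.master" "$tmpdir/auto.nfs"
```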
Mount parameters:
  fg/bg          Whether mount runs in the foreground (fg) or background (bg). In the foreground, mount keeps retrying until it succeeds or times out; in the background, mount retries repeatedly without blocking the foreground program.
  soft/hard      hard: if either host goes offline, RPC keeps retrying until the other side comes back; soft: RPC gives up after a timeout instead of retrying indefinitely.
  intr           With a hard mount, adding intr allows the ongoing RPC calls to be interrupted.
  rsize/wsize    Block sizes for reads (rsize) and writes (wsize); they affect the buffer size used for data transfer between client and server.
- Restart the autofs service:
systemctl restart autofs
- View mount information:
mount | grep /nfs
- Mounting is triggered automatically on entering the directory, and the share is unmounted automatically after it has been idle for the timeout period:
cd /mnt/nfs
4. Delete CNFS nodes:
mmchnode --cnfs-interface=DELETE -N "node1,node2"
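The node list for the DELETE call can be assembled programmatically. A dry-run sketch: the node names are the ones from this example and the command is only printed for review, not executed:

```shell
#!/bin/sh
# Dry run: join a node list into the single mmchnode DELETE call.
set -- node1 node2                  # nodes to remove from CNFS
nodes=$(IFS=,; printf '%s' "$*")    # join with commas -> node1,node2
cmd="mmchnode --cnfs-interface=DELETE -N $nodes"
echo "$cmd"
```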