[Hands-on] Shared storage cluster real-time backup (solving the NFS shared storage single point of failure)

1. The single point problem of NFS storage

If the NFS server goes down, every NFS client is affected, and users lose whatever data has not been replicated elsewhere. To remove this single point of failure, the shared storage needs a real-time backup: data in the NFS server's shared directory is replicated in real time to a backup server (or other storage device) so the data stays intact.

 

2. Real-time push-synchronization backup of NFS shared data

The company has two web servers that have been providing service, but as the business grows and the site gains features, pictures, videos, and other content take up more and more disk space.

Management therefore wants the web servers to store their data directly on an NFS server used as shared storage, and, to prevent the NFS server itself from becoming a single point of failure, wants the content stored there synchronized in real time with Rsync to a backup server. Your task is to implement this plan.

Specific requirements are as follows:

  • Requirements for the NFS server:
    • The shared directory is named /data;
    • Only the internal network segment 192.168.0.0/24 may access it, read-write, with synchronous writes;
    • For ease of administration, an NFS virtual account named zuma must be used, with uid = 888 and gid = 888;
    • All visitors are squashed to this lowest-privilege identity (see the sketch after this list);
    • The contents of /data are pushed to the backup server's /data directory in real time (inotify + rsync).
  • The web servers mount the NFS share on the common directory /var/html/www.
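
A minimal sketch of how the zuma account and squash requirements could be configured. This is an assumption derived from the requirements above, not the configuration actually deployed below, which uses the simpler rw,sync export with the default nfsnobody account:

# Hypothetical setup for the zuma virtual account (not used in the walkthrough below)
groupadd -g 888 zuma
useradd -u 888 -g 888 -s /sbin/nologin -M zuma   # no login shell, no home directory
chown -R zuma.zuma /data

# Hypothetical /etc/exports entry: read-write for the internal segment,
# synchronous writes, every client squashed to uid/gid 888:
# /data 192.168.0.0/24(rw,sync,all_squash,anonuid=888,anongid=888)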

Ideas:

1. Deploy the Rsync service on the backup server (rsync --daemon) so it acts as the storage backend. The NFS server then acts as an Rsync client and can back up the /data directory to the backup server with a command like rsync -avz /data/ rsync_backup@192.168.0.41::nfsbackup --password-file=/etc/rsync.password.

2. Deploy the NFS storage server and confirm it operates normally.

3. With the Rsync daemon running between the NFS storage server and the backup server, a crond + rsync scheduled task can push periodic backups of the data to the backup server (see the sample crontab entry after this list).

4. To achieve real-time backup, run inotify (or a similar tool such as sersync) on the NFS storage server to watch the shared directory for changes and trigger a push synchronization on every event.
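
As an illustration of step 3, the scheduled push could look roughly like the crontab entry below. The hourly schedule is an assumption; the module name, user, and password file match the rsync deployment in section 2.3.

# Hypothetical crontab entry on the NFS server (crontab -e):
# push /data to the nfsbackup module every hour, on the hour
00 * * * * /usr/bin/rsync -az /data/ rsync_backup@192.168.0.41::nfsbackup --password-file=/etc/rsync.password >/dev/null 2>&1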

 

 

2.1 Environment Preparation

Operating system and kernel version

[root@web01-8 ~]# cat /etc/redhat-release 
CentOS release 6.7 (Final)
[root@web01-8 ~]# uname -r
2.6.32-573.el6.x86_64

 

Roles and IPs

Role                      Hostname   eth0 (public)   eth1 (internal)
C1 - NFS server           nfs01      10.0.0.31       192.168.0.31
C2 - Rsync backup server  backup     10.0.0.41       192.168.0.41

 

2.2 NFS service deployment

For details, see the deployment process: https://www.cnblogs.com/zoe233/p/11973710.html

[NFS server]

# System environment
[root@nfs-31 mnt]# cat /etc/redhat-release
CentOS release 6.10 (Final)
[root@nfs-31 mnt]# uname -r
2.6.32-573.el6.x86_64
[root@nfs-31 mnt]# uname -m
x86_64

# Check the rpcbind and nfs packages and services, and set them to start on boot
[root@nfs-31 mnt]# rpm -qa nfs-utils rpcbind
rpcbind-0.2.0-16.el6.x86_64
nfs-utils-1.2.3-78.el6_10.1.x86_64

[root@nfs-31 mnt]# /etc/init.d/rpcbind status
rpcbind is stopped
[root@nfs-31 mnt]# /etc/init.d/rpcbind start
Starting rpcbind: [OK]

[root@nfs-31 mnt]# /etc/init.d/nfs status
rpc.svcgssd is stopped
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
[root@nfs-31 mnt]# /etc/init.d/nfs start
Starting NFS services: [OK]
Starting NFS quotas: [OK]
Starting NFS mountd: [OK]
Starting NFS daemon: [OK]
Starting RPC idmapd: [OK]

[root@nfs-31 mnt]# chkconfig --list rpcbind
rpcbind 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@nfs-31 mnt]# chkconfig --list nfs
nfs 0:off 1:off 2:on 3:on 4:on 5:on 6:off

[root@nfs-31 mnt]# tail -3 /etc/rc.local  # use either chkconfig or /etc/rc.local, not both
# Start nfs services on boot
/etc/init.d/rpcbind start
/etc/init.d/nfs start

# Create the shared directory and set its owner
mkdir /data -p
grep nfsnobody /etc/passwd
chown -R nfsnobody.nfsnobody /data
ls -ld /data

# Configure the NFS exports file, then check the mount information locally
[root@nfs-31 mnt]# cat /etc/exports
# shared /data for test by zoe at 20191205
/data 192.168.0.0/24(rw,sync)

[root@nfs-31 mnt]# exportfs -rv  # reload the configuration; this also validates the config file
exporting 192.168.0.0/24:/data

# Check the NFS server's export list locally
showmount -e 192.168.0.31
showmount -e localhost

# View the effective nfs server parameters (including the defaults that were loaded)
[root@nfs-31 mnt]# cat /var/lib/nfs/etab
/data 192.168.0.0/24(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534,sec=sys,rw,root_squash,no_all_squash)

  

On the NFS server itself, mount the share as a client to test it:

[root@nfs-31 mnt]# mount -t nfs 192.168.0.31:/data /mnt
[root@nfs-31 mnt]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/sda3           6.9G  1.9G  4.7G  28% /
tmpfs               499M     0  499M   0% /dev/shm
/dev/sda1           190M   67M  114M  37% /boot
192.168.0.31:/data  6.9G  1.9G  4.7G  28% /mnt

The NFS shared directory is successfully mounted on /mnt.
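
As a quick sanity check (not part of the original walkthrough; the test filename is made up), confirm the mount is actually writable:

# Hypothetical write test through the locally mounted share
[root@nfs-31 mnt]# touch /mnt/mount_test
[root@nfs-31 mnt]# ls -l /data/mount_test   # the file should appear under /data
[root@nfs-31 mnt]# rm -f /mnt/mount_test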

 

[NFS client]

Operations performed on all NFS clients are the same.

# System environment
[root@backup-41 ~]# cat /etc/redhat-release
CentOS release 6.10 (Final)
[root@backup-41 ~]# uname -r
2.6.32-573.el6.x86_64
[root@backup-41 ~]# uname -m
x86_64

# Check the installed packages
[root@backup-41 ~]# rpm -qa rpcbind
rpcbind-0.2.0-16.el6.x86_64
# For showmount and other client utilities, it is best to install nfs-utils
# on all NFS clients too, without enabling the nfs service
rpm -qa nfs-utils

# Start the rpc service (the nfs service does not need to be started on clients)
[root@backup-41 ~]# /etc/init.d/rpcbind status
rpcbind is stopped
[root@backup-41 ~]# /etc/init.d/rpcbind start
Starting rpcbind: [OK]

[root@backup-41 ~]# showmount -e 192.168.0.31
Export list for 192.168.0.31:
/data 192.168.0.0/24

# Mount the NFS share /data
[root@backup-41 ~]# mount -t nfs 192.168.0.31:/data /mnt
[root@backup-41 ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/sda3           6.9G  1.9G  4.7G  28% /
tmpfs               499M     0  499M   0% /dev/shm
/dev/sda1           190M   67M  114M  37% /boot
192.168.0.31:/data  6.9G  1.9G  4.7G  28% /mnt

# Test
[root@backup-41 mnt]# cd /mnt
[root@backup-41 mnt]# ls
[root@backup-41 mnt]# mkdir /mnt/backup/rpcbind/test -p
[root@backup-41 mnt]# ls
backup  file

# View the shared directory /data on the nfs server
[root@nfs-31 mnt]# ls /data
backup  file

 

Add the rpcbind startup and the NFS mount to /etc/rc.local so they run at boot:

[root@backup-41 mnt]# tail -3 /etc/rc.local
# rpcbind start and mount shared directory ip:/data
/etc/init.d/rpcbind start
/bin/mount -t nfs 192.168.0.31:/data /mnt
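
An alternative to mounting from rc.local is an /etc/fstab entry with the _netdev option (a sketch; on CentOS 6 this approach also relies on the netfs service being enabled so the mount is attempted after the network is up):

# Hypothetical /etc/fstab entry instead of the rc.local mount above
192.168.0.31:/data  /mnt  nfs  defaults,_netdev  0 0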

 

2.3 Rsync service deployment

For details, see the deployment process: https://www.cnblogs.com/zoe233/p/11962110.html

Here the NFS backup data gets its own module; the module is added by editing the /etc/rsyncd.conf configuration file.

[root@backup-41 192.168.0.8]# tail -20 /etc/rsyncd.conf

ignore errors
read only = false
list = false
hosts allow = 192.168.0.31/24
hosts deny = 0.0.0.0/32
auth users = rsync_backup
secrets file = /etc/rsync.password

######
[backup]
path = /backup/

[multi_module_1]
path = /multi_module_1/

[nfsbackup]
path = /nfsbackup/

## rsync_config____end ## 

 

Add the [nfsbackup] module to the configuration file.

Note: settings shared by several modules, such as ignore errors, can be written once above the module definitions.
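
For context, the global section that tail -20 cuts off typically looks something like this (a sketch based on a common rsyncd.conf layout, not the author's exact file):

# Hypothetical global section of /etc/rsyncd.conf (not shown by tail -20 above)
uid = rsync
gid = rsync
use chroot = no
max connections = 200
timeout = 300
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log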

 

Restart the rsync --daemon service:

[root@backup-41 192.168.0.8]# pkill rsync
[root@backup-41 192.168.0.8]# lsof -i tcp:873
[root@backup-41 192.168.0.8]# rsync --daemon
[root@backup-41 192.168.0.8]# lsof -i tcp:873
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
rsync   16317 root    3u  IPv4 211440      0t0  TCP *:rsync (LISTEN)
rsync   16317 root    5u  IPv6 211441      0t0  TCP *:rsync (LISTEN)

 

 

The new nfsbackup module requires creating the /nfsbackup directory and setting the rsync user as its owner and group:

[root@backup-41 192.168.0.8]# mkdir /nfsbackup -p
[root@backup-41 192.168.0.8]# chown -R rsync.rsync /nfsbackup
[root@backup-41 192.168.0.8]# ll -d /nfsbackup/
drwxr-xr-x 2 rsync rsync 4096 Dec 12 13:00 /nfsbackup/
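
Before wiring up inotify, it is worth verifying the module manually from the NFS server. This test push is implied by the command in the Ideas section; the sample filename is made up:

# On nfs-31: create a test file and push /data to the new module by hand
touch /data/rsync_module_test
rsync -avz /data/ rsync_backup@192.168.0.41::nfsbackup --password-file=/etc/rsync.password
# On backup-41: ls /nfsbackup should now show rsync_module_test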

 

 

2.4 inotify

inotify specific content view: https://www.cnblogs.com/zoe233/p/12035383.html

2.4.1 inotify installation

Check whether the current system supports inotify:

[root@nfs-31 data]# uname -r
2.6.32-573.el6.x86_64
[root@nfs-31 data]# ls -l /proc/sys/fs/inotify/
total 0
-rw-r--r-- 1 root root 0 Dec 12 18:59 max_queued_events
-rw-r--r-- 1 root root 0 Dec 12 18:59 max_user_instances
-rw-r--r-- 1 root root 0 Dec 12 18:59 max_user_watches
# the presence of these three files proves the kernel supports inotify
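These three files are kernel tunables. For very large directory trees the defaults can be raised; the values below are illustrative only, not from the original setup:

# Optional tuning (illustrative values): raise the inotify limits
echo 50000000 > /proc/sys/fs/inotify/max_user_watches    # max watched files per user
echo 326790   > /proc/sys/fs/inotify/max_queued_events   # max queued events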

 

Install the inotify software:

[root@nfs-31 inotify]# rpm -qa inotify-tools
[root@nfs-31 inotify]# yum install inotify-tools -y  # Error: Nothing to do

 

The installation failed, so fetch the epel repo file with wget and then install with yum:

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
yum -y install inotify-tools

[root@nfs-31 inotify]# rpm -qa inotify-tools
inotify-tools-3.14-2.el6.x86_64


Two tools are installed in total: inotifywait and inotifywatch.

    • inotifywait: waits for specific file system events (open, close, delete, etc.) to occur on the monitored files or directories; the call blocks until an event arrives, which makes it suitable for shell scripts.
    • inotifywatch: collects statistics about the monitored file system, counting how many times each file system event occurs.
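
inotifywatch is not used again below, but as a quick illustration of the statistics it gathers (the 60-second window and event selection are arbitrary):

# Hypothetical inotifywatch run: count events under /data for 60 seconds
inotifywatch -r -t 60 -e create,delete,modify /data
# prints a table with total / create / delete / modify counts per watched path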

 

2.4.2 The inotifywait command

The meanings of the more important parameters:

inotifywait parameter    Meaning
-r, --recursive          Watch the given directories recursively
-q, --quiet              Print less information; only the event output itself
-m, --monitor            Keep listening for events instead of exiting after the first one
--exclude                Exclude files or directories matching a pattern (case sensitive)
--timefmt                Specify the time format used in the output
--format                 Specify the output format string, printf-style
-e, --event              Specify the events to watch for, as listed in the table below

The event types for -e / --event:

Event      Meaning
access     A file or directory was read
modify     The contents of a file or directory were modified
attrib     The attributes of a file or directory were changed
close      A file or directory was closed, regardless of read/write mode
open       A file or directory was opened
moved_to   A file or directory was moved into another directory
move       A file or directory was moved to another directory, or moved from another directory into the watched one
create     A file or directory was created in the watched directory
delete     A file or directory was deleted
umount     The file system was unmounted

The meanings of the --format placeholders:

  • %w — Replaced with the name of the watched file on which the event occurred.
  • %f — When an event occurs within a directory, replaced with the name of the file which caused the event; otherwise replaced with an empty string.
  • %e — Replaced with the event(s) which occurred, comma-separated.
  • %Xe — Replaced with the event(s) which occurred, separated by whichever character is in the place of 'X'.
  • %T — Replaced with the current time in the format specified by the --timefmt option, which should be a format string suitable for passing to strftime(3).
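
Putting the placeholders together, a command like the one below prints the time, the full path, and the event name for every notification (a sketch; /backup is just an example directory):

# Hypothetical example: log time, full path, and event name for each event
inotifywait -mrq --timefmt '%y/%m/%d %H:%M' --format '%T %w%f %e' -e create,delete /backup
# sample output line: 19/12/12 19:26 /backup/somefile CREATE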

 

2.4.3 Manually testing synchronization

Open two terminal windows.

Testing the create event

In the first window, start inotifywait and watch the /backup directory:

[root@nfs-31 inotify]# inotifywait -mrq --timefmt '%y/%m/%d %H:%M' --format '%T %w%f' -e create /backup  
# Command notes:
# -mrq: -m monitor continuously, -r recurse into the whole directory tree including subdirectories, -q print only brief information
# --timefmt: specify the time format used in the output
# --format: specify the output format
# -e create: specify the event type to monitor; here, the create event

 

In the second window, enter the /backup directory and create two files to trigger create events:

[root@nfs-31 backup]# cd /backup
[root@nfs-31 backup]# touch inotifywait_create_event_1
[root@nfs-31 backup]# touch inotifywait_create_event_2

 

After the events fire, the first window prints the details of each create event (the time plus the path and name of the created file):

[root@nfs-31 inotify]# inotifywait -mrq --timefmt '%y/%m/%d %H:%M' --format '%T %w%f' -e create /backup   
19/12/12 19:26 /backup/inotifywait_create_event_1
19/12/12 19:41 /backup/inotifywait_create_event_2

 

2.4.4 Writing the inotify real-time monitoring script

[root@nfs-31 /]# cd /server/scripts
[root@nfs-31 scripts]# ls
backup.sh
[root@nfs-31 scripts]# vi inotifywait_nfs_to_backup.sh
[root@nfs-31 scripts]# cat inotifywait_nfs_to_backup.sh 
#!/bin/bash
# Watch /data and push changes to the nfsbackup module on the backup server in real time

Path=/data
backup_Server=192.168.0.41

/usr/bin/inotifywait -mrq --format '%w%f' -e close_write,delete $Path | while read line
do
    if [ -f $line ]; then
        # a file was written: push just that file
        rsync -az $line --delete rsync_backup@$backup_Server::nfsbackup --password-file=/etc/rsync.password
    else
        # the path no longer exists (e.g. deleted): resync the whole directory
        cd $Path &&\
        rsync -az ./ --delete rsync_backup@$backup_Server::nfsbackup --password-file=/etc/rsync.password
    fi
done

 

The script can be set to start at boot:

echo "/bin/sh /server/scripts/inotifywait_nfs_to_backup.sh &" >> /etc/rc.local

Note:

  • The trailing & makes the command run in the background.
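
To confirm the pipeline is actually running after a reboot, a check along these lines can be used (hypothetical, not part of the original):

# Hypothetical check: is the monitoring script running?
ps -ef | grep [i]notifywait
# End-to-end test: touch a file under /data on nfs-31,
# then ls /nfsbackup on backup-41 and watch it appear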
