Table of contents
Linux autofs automatic mounting service
/etc/auto.master file content format
Case 1 --- The server creates a shared directory and the client implements automatic mounting
Case 2 --- Automatically mount the CD
Linux autofs automatic mounting service
Background
When using an NFS file system, a client that wants to use a file system exported by the server can either configure it in /etc/fstab
to mount automatically at boot, or mount it manually with mount after logging in.
Because of the network, the connection between the NFS server and the client is not guaranteed to stay up. Once an NFS share is mounted, either side going offline can leave the other side waiting until it times out. A share that stays mounted but goes unused for a long time also wastes server hardware resources.
To solve these problems, the following ideas emerged:
Mount the NFS file system automatically only when the client actually needs to use it
When the client is done with the NFS file system, unmount it automatically (by default autofs unmounts after 300 seconds, i.e. 5 minutes, of inactivity)
The autofs automatic mounting service solves exactly this problem. It is a daemon that runs on the client and mounts a file system dynamically only when a user needs it, saving both network resources and server hardware resources.
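The 300-second idle timeout mentioned above is only the default. On autofs 5.x it can be changed globally in /etc/autofs.conf, or per map via the --timeout option in /etc/auto.master; a sketch, where the 60-second value is purely illustrative:

```
# /etc/autofs.conf --- global default idle timeout, in seconds
[ autofs ]
timeout = 300

# /etc/auto.master --- per-map override (60 is an illustrative value)
/nfs /etc/auto.nfs --timeout=60
```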
Installation
[root@localhost ~]# yum install autofs -y
Configuration file analysis
File path
/etc/auto.master
Purpose
A Linux server in a production environment usually manages mounts for many devices at once. Writing all of that mount information into the autofs main configuration file would bloat it, which hurts both service efficiency and future maintainability. Instead, each mounted device is configured independently in a sub-configuration file, and the main configuration file only records the name of that sub-configuration file.
/etc/auto.master file content format
Mount directory    sub-configuration file
Mount directory --- does not need to exist in advance, because autofs creates it automatically
Sub-configuration file --- the file name can be chosen freely
Example --- /nfs /etc/auto.nfs
You create the sub-configuration file yourself; its content format is:
Local mount directory [-mount options] server-address:directory
Example --- testmnt 192.168.48.130:/data
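The two-file layout above can be sketched with a short script. This is a minimal illustration, not autofs itself: the temporary directory stands in for /etc so it can run unprivileged, and the awk/grep checks are only rough format checks, not autofs's actual parser.

```shell
#!/bin/sh
# Sketch: build an auto.master entry and its matching map file, then
# sanity-check their formats. Paths and values come from the examples
# in this document; the temp dir is purely illustrative.
set -e
tmp=$(mktemp -d)

# Master file: first-level mount directory, then the map (sub-config) file.
printf '/nfs /etc/auto.nfs\n' > "$tmp/auto.master"

# Map file: relative mount directory, optional options, then server:/path.
printf 'testmnt 192.168.48.130:/data\n' > "$tmp/auto.nfs"

# A basic master entry has two fields: directory and map file.
awk 'NF != 2 { exit 1 }' "$tmp/auto.master" && echo "master entry OK"

# A map entry ends in a server:directory pair, with an optional options field.
grep -Eq '^[^/ ]+ +([^ ]+ +)?[^ ]+:/[^ ]*$' "$tmp/auto.nfs" && echo "map entry OK"

rm -rf "$tmp"
```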
Mount options
Parameter | Function
---|---
fg/bg | Whether mount runs in the foreground (fg) or the background (bg). In the foreground, mount keeps retrying until it succeeds or times out; in the background, the retries continue in the background without blocking foreground programs.
soft/hard | With hard, if either host goes offline, RPC keeps retrying until the other side comes back online. With soft, RPC retries only until a timeout is reached and then gives up.
intr | When mounting with the hard option above, adding intr allows the ongoing RPC calls to be interrupted.
rsize/wsize | Block size for reads (rsize) and writes (wsize); these values affect the buffer sizes used by both client and server.
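These options go in the optional middle field of a map entry, comma-separated after a leading dash. A hedged example based on the document's earlier entry; the specific option values here are illustrative, not prescribed:

```
# /etc/auto.nfs --- map entry with explicit mount options
testmnt -fstype=nfs,soft,intr,rsize=8192,wsize=8192 192.168.48.130:/data
```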
Case
Case 1 --- The server creates a shared directory and the client implements automatic mounting
Step 1: Create a new shared directory on the server host and edit the nfs configuration file
[root@localhost ~]# mkdir /data
[root@localhost ~]# chmod -Rf 777 /data/
[root@localhost ~]# ls /data/
[root@localhost ~]# echo "this is test " > /data/file.txt
[root@localhost ~]# ls /data/
file.txt
[root@localhost ~]# vim /etc/exports
# Add the following content
/data *(rw,sync,all_squash)
Step 2: Start the services on the server host. Note: start the rpcbind service first.
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# systemctl start nfs-server
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# systemctl enable nfs-server
Step 3: On client node1, edit the autofs main configuration file
Planned local mount directory: /nfs/testmnt
[root@localhost ~]# vim /etc/auto.master
# Edit line 7 of the file and change it as follows
/nfs /etc/auto.nfs
# /nfs is the first level of the final mount path
# /etc/auto.nfs is the autofs sub-configuration file; the file name is arbitrary
[root@localhost ~]# showmount -e 192.168.149.128   # check the shares exported by the server
[root@localhost nfs1]# vim /etc/auto.nfs   # edit the autofs sub-configuration file
# Add the following content
testmnt 192.168.149.128:/data
Step 4: Start the autofs service on the client node1 host
[root@localhost ~]# systemctl start autofs
[root@localhost ~]# systemctl enable autofs
Step 5: Test on the client node1 host
[root@localhost nfs]# df -h   # check the system's mount information
[root@localhost nfs]# cd /nfs   # autofs creates this directory automatically
[root@localhost nfs]# ls
[root@localhost nfs]# cd testmnt
[root@localhost testmnt]# ls
file.txt
Case 2 --- Automatically mount the CD
Step 1: Modify the main configuration file and sub-configuration files of autofs
# Planned local CD mount directory: /media/cdrom
[root@localhost ~]# vim /etc/auto.master
/media /etc/iso.aa
# /media is the first level of the planned mount path
# /etc/iso.aa is the sub-configuration file
Step 2: Edit sub-configuration files
[root@localhost ~]# vim /etc/iso.aa
cdrom -fstype=iso9660,ro,nosuid,nodev :/dev/sr0   # note the space before the colon
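The bare colon with no host name in front of it tells autofs this is a local device rather than a remote NFS server. As a hedged illustration, a local USB stick could be mapped the same way; /dev/sdb1 and the vfat type below are hypothetical values for illustration only:

```
# /etc/iso.aa --- local devices use a bare colon before the device path
cdrom -fstype=iso9660,ro,nosuid,nodev :/dev/sr0
# hypothetical USB stick; /dev/sdb1 is an illustrative device name
usb -fstype=vfat :/dev/sdb1
```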
Step 3: Start the service
[root@localhost ~]# systemctl restart autofs
[root@localhost ~]# systemctl enable autofs
Step 4: Test. Note --- if the CD is already mounted, it must be unmounted first
[root@localhost ~]# df -h   # check whether the CD is already mounted
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 968M 0 968M 0% /dev/shm
tmpfs 388M 9.5M 378M 3% /run
/dev/mapper/rhel-root 16G 4.2G 12G 27% /
/dev/nvme0n1p1 395M 235M 160M 60% /boot
tmpfs 194M 104K 194M 1% /run/user/0
/dev/sr0 8.5G 8.5G 0 100% /run/media/root/RHEL-9-1-0-BaseOS-x86_64   # shown as already mounted
[root@localhost ~]# umount /dev/sr0   # unmount the CD device first
[root@localhost ~]# df -h   # check again
[root@localhost ~]# cd /media/
[root@localhost media]# cd cdrom   # trigger the automatic mount
[root@localhost cdrom]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 968M 0 968M 0% /dev/shm
tmpfs 388M 9.5M 378M 3% /run
/dev/mapper/rhel-root 16G 4.2G 12G 27% /
/dev/nvme0n1p1 395M 235M 160M 60% /boot
tmpfs 194M 104K 194M 1% /run/user/0
/dev/sr0 8.5G 8.5G 0 100% /media/cdrom   # mounted automatically