Environment: CentOS 7
Brief introduction
NFS (Network File System) is a venerable protocol developed by Sun Microsystems in 1984 for sharing directories between UNIX-like systems. It uses a client/server (C/S) architecture: the NFS server, typically a large file server, sets the directories it wants to share as output (export) directories; the client then mounts such a directory onto a directory in its own file system, and every operation the client performs inside that folder (create, delete, copy, and so on) actually takes effect in the corresponding real directory on the remote server.
RPC protocol
RPC (Remote Procedure Call) is a protocol that, simply put, lets a programmer call a method (or function) located on a remote computer as if it were a local method (or function); the remote nature of the call is completely transparent to the programmer. NFS needs several communication ports, and these ports are negotiated and selected via RPC.
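This port negotiation can be observed with the rpcinfo tool, which ships with the rpcbind package. On a host where NFS is running, a query like the one below lists the RPC programs and the ports registered with the port mapper (a sketch only; the output varies by system and is not taken from this article's setup):

```shell
# list RPC programs, versions, protocols and ports registered with rpcbind
rpcinfo -p localhost
```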
Permissions issue
The user identity under which a client creates files in a mounted shared directory is determined by the main configuration file. If all users are squashed (all_squash), or only root is squashed (root_squash), then the squashed users are mapped to the nobody user, and files created in the shared directory at that point belong to nobody.
Furthermore, the UID and GID that files in the server's shared directory belong to are mapped to the corresponding users on the client. For example, suppose the server has the following files:
readme.txt  UID=1000(Tom)   GID=1000(Tom)
install.sh  UID=1001(Bob)   GID=1001(Bob)
startgui.py UID=1009(Jerry) GID=1009(Jerry)
The relevant part of the client's passwd file is as follows:
Jack:x:1000:1000
Natasha:x:1001:1001
Bob:x:1002:1002
Simoth:x:1003:1003
Then the ownership of the files in the shared directory, as seen on the client after mounting, is as follows:
readme.txt UID=1000(Jack) GID=1000(Jack)
install.sh UID=1001(Natasha) GID=1001(Natasha)
startgui.py UID=1009 GID=1009
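The mapping above is purely numeric: NFS transmits raw UIDs and GIDs, and each side resolves them against its own passwd file. This can be illustrated locally with `ls -ln` (the file name below is a hypothetical example, not from the article):

```shell
# create a file and compare the name-based and numeric listings;
# an NFS client only ever receives the numbers shown by -n
touch /tmp/nfs_id_demo.txt
ls -l  /tmp/nfs_id_demo.txt   # owner shown as a name (resolved locally)
ls -ln /tmp/nfs_id_demo.txt   # owner shown as raw UID/GID, as NFS transmits it
```

If the numeric owner has no entry in the local passwd file, `ls -l` simply prints the number, exactly as in the startgui.py case above.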
UID 1009 has no matching entry in the client's passwd file, so if you want to access the startgui.py file, your access is governed by the permission bits for other users!
Configuration
1. Server installation
A locally mounted ISO is used as the YUM source here:
[root@localhost ~]# yum -y install nfs-utils
The rpcbind package is installed automatically as a dependency; it is a concrete implementation of the RPC protocol and is used to help NFS select its ports.
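To confirm that both packages ended up installed, you can query the RPM database (a quick optional check, not part of the original steps):

```shell
# query the RPM database for the two packages
rpm -q nfs-utils rpcbind
```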
2. Configure the primary configuration file
The main NFS configuration file is /etc/exports. As its name suggests, this file sets the NFS output (export) directories. When the NFS service is started or restarted, it automatically reads this file and outputs the corresponding directories according to the parameters set in it. The configuration rules for this file are fairly simple; the syntax is as follows:
directory-to-output host1(option1,option2[,...]) host2(option1,option2[,...]) [...]
The common options are as follows (the output directory, i.e. the directory NFS outputs, is also called the shared directory):
ro: share the directory read-only; the default option
rw: share the directory read-write
root_squash: if a client accesses the shared directory as root, root is squashed to the nobody user, while other users are not squashed; the default option
all_squash: every user accessing the shared directory is squashed to nobody
no_root_squash: no user accessing the shared directory is squashed, not even root, which makes this option rather dangerous
anonuid: specify the UID that squashed users are mapped to; defaults to nobody's UID
anongid: specify the GID that squashed users are mapped to; defaults to nobody's GID
sync: synchronous updates; data a client writes to the shared directory is written to the server's disk immediately; the default option
async: asynchronous updates; data a client writes to the shared directory is first kept in memory and written to the server's disk when the system is idle
insecure: allow clients to connect to the server from non-reserved ports (greater than 1024)
Here is a simple example configuration (create /home/nfsshare yourself and change its permissions to 777):
/home/nfsshare 192.168.88.0/24(rw,async)
Explanation: 192.168.88.0/24 restricts access to that subnet. 192.168.88.* could be used instead, but this is not recommended, because the wildcard * is intended for matching host names.
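For a fuller picture, here is a sketch of an /etc/exports file that combines the squash options described above (the second path, the host address, and the UID/GID values are illustrative assumptions, not taken from this article):

```
# read-write for the LAN, every client user squashed to UID/GID 1000
/home/nfsshare  192.168.88.0/24(rw,async,all_squash,anonuid=1000,anongid=1000)
# read-only for a single trusted host, root not squashed (use with caution)
/home/public    192.168.88.10(ro,sync,no_root_squash)
```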
3. Verify the configuration.
Before restarting the nfs service, make sure the rpcbind service is running. The two services must be started in order: rpcbind first, then nfs:
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# systemctl start nfs
The exportfs command can update the output directories without restarting the nfs service:
exportfs: with no arguments, prints the directories currently being output
exportfs -a: output all directories listed in /etc/exports
exportfs -r: re-export all directories listed in /etc/exports
exportfs -v: print detailed information about the directories being output
4. Client Configuration
If the client wants to use the showmount command, it also needs the nfs-utils package installed. showmount lists the output directories provided by a remote server; enter the following command on the client:
[root@localhost ~]# showmount -e 192.168.88.128
Export list for 192.168.88.128:
/home/nfsshare 192.168.88.0/24
If the command displays the following error,
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
This is because the firewall has not opened the corresponding ports; you can turn off the firewall on the server:
[root@localhost ~]# systemctl stop firewalld
If the command displays the following error,
clnt_create: RPC:Program not registered
This is because rpcbind is not running. Restart the nfs service (when nfs is restarted, it automatically starts the rpcbind service). If the restart does not help, restart the services manually, in order:
[root@localhost ~]# systemctl stop nfs rpcbind
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# systemctl start nfs
Now mount the shared directory on the client:
[root@localhost ~]# mount 192.168.88.128:/home/nfsshare /media
The shared directory is mounted onto the /media directory. View the current file systems:
[root@localhost ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 2.4G 1.3G 1.2G 54% /
devtmpfs devtmpfs 223M 0 223M 0% /dev
tmpfs tmpfs 235M 0 235M 0% /dev/shm
tmpfs tmpfs 235M 5.6M 229M 3% /run
tmpfs tmpfs 235M 0 235M 0% /sys/fs/cgroup
/dev/sr0 iso9660 4.3G 4.3G 0 100% /mnt/localyumrepo
/dev/mapper/centos-home xfs 505M 26M 479M 6% /home
/dev/sda1 xfs 125M 107M 19M 86% /boot
/dev/mapper/centos-var xfs 509M 207M 303M 41% /var
tmpfs tmpfs 47M 0 47M 0% /run/user/0
192.168.88.128:/home/nfsshare nfs4 505M 26M 480M 6% /media
We can see that the remote shared directory has been mounted and that its file system type is nfs4. The mount command above did without the -t parameter, which specifies the type of the mounted file system, because mount can automatically detect the most common file system types. The -t parameter is used as follows:
[root@localhost ~]# mount -t nfs 192.168.88.128:/home/nfsshare /media
5. Permanent mounting on the client
The mounting method above is temporary: after the next system restart, the nfs shared directory is gone. To mount it permanently, the mount information must be written into the /etc/fstab file. As the name literally suggests, fstab is short for file system table; at boot, the operating system reads this file and automatically mounts the file systems listed in it. Entering the following command makes the mount of the nfsshare shared directory permanent:
[root@localhost ~]# echo "192.168.88.128:/home/nfsshare /media nfs defaults 0 0" >> /etc/fstab
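Before rebooting, it is worth checking the new fstab line: `mount -a` mounts everything in /etc/fstab that is not already mounted, so a typo surfaces now instead of at boot time (this sketch requires the NFS server to be reachable):

```shell
umount /media   # detach the temporary mount first, if it is present
mount -a        # mount everything listed in /etc/fstab
df -hT /media   # verify the nfs4 entry is back
```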
Note that with a permanent mount you must ensure that the server is already running when the client starts up and that the two machines can communicate (for example, the server's firewall is turned off or has the corresponding ports open); otherwise the system cannot mount the shared directory at boot, which leads to a failed startup and drops into emergency mode.
With that, the basic NFS configuration is complete!