Linux NFS server setup

What is NFS?
NFS stands for Network File System. It is a protocol for distributed file systems developed by Sun Microsystems and made public in 1984. It lets different machines, running different operating systems, share data with one another over the network, so that a client application can access files on a server's disk across the network. It is a common way of sharing disk files between Unix-like systems.

Its main function is to let different machines on a network share files and directories with each other. An NFS client can mount a directory shared by a remote NFS server locally; to the client machine, the server's shared directory then looks just like one of its own disk partitions or directories. In general, the local mount point on the client can be named arbitrarily, but for ease of management it is better to keep it consistent with the server side.
NFS is typically used to store shared static data such as videos and pictures.

NFS mount principle
NFS transfers data between server and client over the network, so the two sides need network ports to talk through. But which ports does the NFS server use to transmit data? In fact, the NFS server picks ports for data transmission more or less at random. How, then, does the NFS client find out which ports the server chose? This is handled by the Remote Procedure Call (RPC) protocol/service. RPC manages the NFS ports centrally: client and server first agree, via RPC, on which ports NFS will use, and then use those ports for the actual data transfer.
In other words, RPC on the server keeps track of the ports NFS has been assigned. When a client wants to transfer data, the client's RPC asks the server-side RPC for the NFS ports, establishes a connection to those ports, and then transfers the data.

How do RPC and NFS communicate with each other?
When NFS starts, it takes some random ports and then registers them with RPC, which records them. RPC itself listens on port 111 and waits for client requests; when a client makes a request, the server-side RPC reports the recorded NFS port information back to the client.

What is the startup order of RPC and NFS?
Before starting the NFS server you must start the RPC service (i.e. the portmap/rpcbind service, the same below), otherwise the NFS server cannot register with RPC. In addition, if the RPC service is restarted, the NFS port registrations it held are lost, so the NFS service it manages must then be restarted too, in order to re-register with RPC. Special note: after modifying the NFS configuration file there is generally no need to restart NFS; just run /etc/init.d/nfs reload directly.

Summary: the communication process between NFS client and server
1) The server starts the RPC service and opens port 111.
2) The server starts the NFS service and registers its port information with RPC.
3) The client starts its RPC (portmap) service and asks the server-side RPC (portmap) for the NFS server's ports.
4) The server-side RPC (portmap) returns the NFS port information to the client.
5) The client establishes a connection to the NFS port it obtained and transfers data.
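The registrations described above can be inspected with rpcinfo -p, which asks rpcbind (port 111) for the table of registered programs and their ports. The sketch below reads such a table; the sample output is typical of a CentOS 7 host and is an assumption, not taken from this article's machines:

```shell
# Sample `rpcinfo -p` output (illustrative; ports other than 111 and 2049
# vary from host to host):
sample='   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs'

# Pick out the port each service registered with rpcbind.
echo "$sample" | awk '$5 == "nfs"    { print "nfs port: "    $4 }'
echo "$sample" | awk '$5 == "mountd" { print "mountd port: " $4 }'
```

On a live server you would simply run `rpcinfo -p localhost` and read the port column directly.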


NFS protocol and software installation and management

Protocol:
RPC (Remote Procedure Call Protocol): the remote procedure call protocol
Software:
nfs-utils-*: includes the NFS commands and monitoring programs
rpcbind-*: the RPC port-mapping service that secure NFS RPC connections rely on
Note: under normal circumstances both packages are installed by the system by default.
Before CentOS 6.*, rpcbind was called portmap.

 NFS system daemons

nfs: the basic NFS daemon; its main job is to manage which clients can log in to the server.
rpcbind: its main job is port mapping. When a client tries to connect to an RPC service provided by the server (such as the NFS service), rpcbind hands the client the port used by the corresponding service, so that the client can reach the service through that port.

 Configuring NFS server
Configuring an NFS server is relatively simple: make the settings in the appropriate configuration file and then start the NFS service.
The NFS configuration file is /etc/exports. It is NFS's main configuration file, but the system does not ship it with default values, so the file will not necessarily exist; you may need to create it manually with vim and write the configuration into it.
/etc/exports file format:
shared_directory client1(access_rights,user_mapping,other_options) client2(...)

root_squash: map accesses by the root user to the anonymous (nfsnobody) user's uid and gid (in force by default);
no_root_squash: preserve root privileges, i.e. the client's root keeps administrator privileges on the server;
all_squash: map all remote users and groups to the anonymous user with the specified uid and gid;

anonuid=xxx: map all remote users to the anonymous user with the specified uid;
anongid=xxx: map all remote user groups to the anonymous group with the specified gid;
Other options:
sync: write data to the memory buffer and to disk synchronously; less efficient, but guarantees data consistency;
async: keep data in the memory buffer first and write it to disk when necessary (asynchronous);
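As an illustration of the format and options above, a hypothetical /etc/exports might look like this (the paths and network addresses are made up for the example):

```
# read-only export, every remote user squashed to the anonymous account
/data/public  192.168.1.0/24(ro,sync,all_squash,anonuid=65534,anongid=65534)
# read-write export, root squashed to nfsnobody (the default behaviour)
/data/upload  192.168.1.10(rw,sync,root_squash)
# read-write export that keeps root's privileges on the server
/data/admin   192.168.1.11(rw,async,no_root_squash)
```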

NFS server start and stop
1. Start the NFS server
For the NFS server to work properly, you need to start both the rpcbind and nfs services, and rpcbind must be started before nfs.

# service rpcbind start
# service nfs start

2. Query the NFS server status

# service rpcbind status
# service nfs status

3. Stop the NFS server
To stop NFS, stop the nfs service first and then the rpcbind service. If rpcbind is needed by other services on the system (such as NIS), you do not need to stop it.

# service nfs stop
# service rpcbind stop

4. Set the NFS server to start automatically
Set the rpcbind and nfs services to start automatically at system run levels 2, 3, 4 and 5.

# chkconfig --level 2345 rpcbind on
# chkconfig --level 2345 nfs on

5. See which ports the RPC server has opened

# rpcinfo -p localhost

Steps to install the NFS server:

Step 1: Install NFS and RPC.

[root@k8s-master ~]# yum install nfs-utils-*
BDB2053 Freeing read locks for locker 0x208: 44894/140076180760384
BDB2053 Freeing read locks for locker 0x20a: 44894/140076180760384
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink                                                                                                                             | 8.3 kB  00:00:00
 * base: mirrors.nju.edu.cn
 * elrepo: mirrors.tuna.tsinghua.edu.cn

 

rpcbind ships with the system and is already started:

[root@k8s-master ~]# netstat -antp|grep rpc*
tcp6       0      0 :::111                  :::*                    LISTEN      907/rpcbind
[root@k8s-master ~]# service rpcbind status
Redirecting to /bin/systemctl status rpcbind.service
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-02-24 14:45:59 CST; 1 weeks 6 days ago
 Main PID: 907 (rpcbind)
    Tasks: 1
   Memory: 540.0K
   CGroup: /system.slice/rpcbind.service
           └─907 /sbin/rpcbind -w

Feb 24 14:45:53 k8s-master systemd[1]: Starting RPC bind service...
Feb 24 14:45:59 k8s-master systemd[1]: Started RPC bind service.

Step 2: Start the services and enable them at boot:

[root@k8s-master ~]# systemctl start rpcbind     # start the rpc service first
[root@k8s-master ~]# systemctl enable rpcbind    # enable start at boot
[root@k8s-master ~]# systemctl start nfs         # then start the nfs service
[root@k8s-master ~]# systemctl enable nfs        # enable start at boot

Step 3: Configure the shared directory and edit the configuration file:

First create the shared directory, then edit the configuration in the /etc/exports configuration file.
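As a sketch of these two steps (the directory name, client address, and options are taken from the session below; the export line is written to a scratch file here so the sketch can be run anywhere, whereas on a real server the target is /etc/exports):

```shell
exports_file=$(mktemp)            # stand-in for /etc/exports on a real server
mkdir -p /tmp/wgr                 # stand-in for the shared directory /wgr
echo '/wgr 192.168.180.140(rw,sync,no_root_squash)' >> "$exports_file"
cat "$exports_file"
# on the real server, follow with:  exportfs -r   (apply without restarting nfs)
```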

[root@k8s-master wgr]# exportfs
/wgr            192.168.180.140
[root@k8s-master wgr]# cat /etc/exports
/wgr 192.168.180.140(rw,sync,no_root_squash)
[root@k8s-master wgr]#

Client Configuration

[wgr@k8s-node01 wgr]$ showmount -e 192.168.180.139
Export list for 192.168.180.139:
/wgr 192.168.180.140
[wgr@k8s-node01 wgr]$
[root@k8s-node01 ~]# mount 192.168.180.139:/wgr /wgr/
[root@k8s-node01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 976M     0  976M   0% /dev
tmpfs                    992M     0  992M   0% /dev/shm
tmpfs                    992M  1.7M  990M   1% /run
tmpfs                    992M     0  992M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  8.3G  8.7G  49% /
/dev/sda1               1014M  259M  756M  26% /boot
tmpfs                    199M   36K  199M   1% /run/user/1000
/dev/sr0                 4.3G  4.3G     0 100% /run/media/wgr/CentOS 7 x86_64
tmpfs                    199M     0  199M   0% /run/user/0
192.168.180.139:/wgr      17G   12G  5.5G  69% /wgr
[root@k8s-node01 wgr]#
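To make the client mount survive a reboot, an /etc/fstab entry can be used instead of a manual mount command. This is a hypothetical entry reusing the server path and mount point from the session above; the _netdev option (wait for the network before mounting) is a common choice, not something the original session shows:

```
# /etc/fstab
192.168.180.139:/wgr  /wgr  nfs  defaults,_netdev  0  0
```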

 

[root@k8s-node01 wgr]# vim 1.txt
[root@k8s-node01 wgr]# touch 2.txt
[root@k8s-node01 wgr]# ll
total 4
-rw-r--r--. 1 root root 6 Mar  9 15:11 1.txt
-rw-r--r--. 1 root root 0 Mar  9 15:15 2.txt
[root@k8s-node01 wgr]# su - wgr
Last login: Mon Feb 24 14:49:32 CST 2020 on :0
[wgr@k8s-node01 ~]$ cd /wgr
[wgr@k8s-node01 wgr]$ ll
total 4
-rw-r--r--. 1 root root 6 Mar  9 15:11 1.txt
-rw-r--r--. 1 root root 0 Mar  9 15:15 2.txt
[wgr@k8s-node01 wgr]$ touch 111.txt
touch: cannot create '111.txt': Permission denied

Because the export uses no_root_squash, root on the client can create files in /wgr, but the ordinary user wgr cannot: the directory on the server is owned by root and grants no write permission to other users, and the squash options only control how uids are mapped, not the underlying file permissions.

 

Unmounting:

[root@k8s-node01 ~]# umount /wgr/
[root@k8s-node01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 976M     0  976M   0% /dev
tmpfs                    992M     0  992M   0% /dev/shm
tmpfs                    992M  1.7M  990M   1% /run
tmpfs                    992M     0  992M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  8.5G  8.6G  50% /
/dev/sda1               1014M  259M  756M  26% /boot
tmpfs                    199M   36K  199M   1% /run/user/1000
/dev/sr0                 4.3G  4.3G     0 100% /run/media/wgr/CentOS 7 x86_64
tmpfs                    199M     0  199M   0% /run/user/0

The exportfs command
If we modify /etc/exports after NFS has started, do we have to restart nfs? No: we can use the exportfs command to make the changes take effect immediately. The command format is as follows:
Format: exportfs [-aruv]
-a: export or unexport all directories listed in /etc/exports
-r: re-read /etc/exports and synchronize the updates to /var/lib/nfs/xtab
-u: unexport a single directory (used together with -a, unexport all directories in /etc/exports)
-v: show verbose output when exporting.

Specific examples:
# exportfs -au    unexport all shared directories
# exportfs -rv    re-export all directories and show details

[root@k8s-master wgr]# exportfs -au
[root@k8s-master wgr]# exportfs -ar
[root@k8s-master wgr]# exportfs -v
/wgr            192.168.180.140(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
On the client, access fails while the directory is unexported and works again once it has been re-exported:

[root@k8s-node01 wgr]# ls
ls: cannot open directory .: Permission denied
[root@k8s-node01 wgr]# ls
1.txt  2.txt
[root@k8s-node01 wgr]#


Origin blog.csdn.net/qq_29860591/article/details/104766318