NFS network file sharing (Note 1)

NFS (Network File System)

I. How NFS works

  1. What is an NFS server?

        NFS is short for Network File System. Its biggest feature is that it lets different machines, running different operating systems, share each other's files over the network.

  An NFS server exports a shared directory that NFS clients mount into their local file systems. From the local system's point of view, the remote host's directory looks just like one of its own disk partitions, which makes it very convenient to use.

  2. How NFS mounting works

    (Diagram: an NFS server's mount configuration — the server shares /home/public, and clients A and B mount it at mount points of their own choosing.)

As the diagram illustrates:

  When we set up a shared directory such as /home/public on the NFS server, any NFS client with access to the server can mount this directory onto a mount point in its own file system. The mount point is the client's own choice; as in the diagram, client A and client B mount the directory at different locations. Once mounted, all the data under the server's /home/public is visible locally. If the server exports the directory read-only, the client can only read it; if it is exported read-write, the client can read and write. After mounting, the NFS client can check the mount with the usual disk-usage command: # df -h.
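As a small sketch of the client side (the server address 192.168.1.100 and the paths below are hypothetical examples, not values from a real deployment):

```shell
# Hypothetical values: adjust to your own NFS server and export.
nfs_server=192.168.1.100
export_dir=/home/public
mount_point=/mnt/public

# On a real client you would run, as root:
#   mkdir -p /mnt/public
#   mount -t nfs 192.168.1.100:/home/public /mnt/public
#   df -h        # the mount then shows up like a local partition
# Here we only assemble and print the mount command:
printf 'mount -t nfs %s:%s %s\n' "$nfs_server" "$export_dir" "$mount_point"
```

To undo the mount, the client runs umount /mnt/public.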

Since NFS transfers data between server and client over the network, there must be corresponding network ports for the two to talk through. So which port does the NFS server use for data transfer? The basic NFS server port is 2049, but because the file system is quite complex, NFS has other programs that open extra ports, and the extra ports used to transfer data are chosen at random from ports below 1024. Since they are random, how does the client know which ports the NFS server is actually using? That is where the Remote Procedure Call (RPC) protocol comes in!

 

3. How RPC and NFS communicate

  NFS provides quite a few functions, and different functions are started by different programs, each of which may open a port to transmit data. These NFS ports are not fixed, yet the client needs to know the server's ports to establish a connection for data transfer. RPC is used to manage the NFS service ports in a unified way, and its well-known external port is 111. RPC records the NFS port information, so the server and client can exchange port information through RPC. RPC's main job is to register the port number corresponding to each NFS function and tell the client, so the client can connect to the correct port.

  So how does RPC learn the port of each NFS function?

  When NFS first starts, it picks some random ports and registers them with RPC, which records them. RPC listens on port 111 waiting for client requests; when a client request arrives, the server's RPC replies with the NFS port information it recorded earlier. The client thus obtains the NFS server's ports and transmits data to the actual ports.

Tip: before starting the NFS server, you must first start the RPC service (i.e. the portmap service, the same below), otherwise the NFS server cannot register with RPC. In addition, if the RPC service is restarted, all the NFS port data that was registered is lost, so the NFS service programs managed by RPC must then be restarted to re-register with RPC.

Special note: in general, after modifying the NFS configuration file there is no need to restart NFS; just run /etc/init.d/nfs reload or exportfs -rv directly for the changes to /etc/exports to take effect.

4. NFS client and NFS server communication process

 

1) First, the RPC service starts on the server side and opens port 111.

2) The NFS server service starts and registers its port information with RPC.

3) The client starts an RPC request (via its portmap service), asking the server's RPC (portmap) service for the NFS server's ports.

4) The server's RPC (portmap) service replies with the NFS port information.

5) The client establishes a connection to the NFS server on the obtained NFS port and transfers data.
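The five steps above can be sketched from the client's point of view. On a live system the client queries the server with rpcinfo -p <server>; here we parse sample output in that format (the non-111 port numbers are illustrative) to pick out the NFS data port:

```shell
# Sample output in the format rpcinfo -p prints; ports are illustrative.
rpcinfo_output='   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  20048  mountd
    100003    4   tcp   2049  nfs'

# Steps 3-4: the client asks rpcbind (port 111) which port the "nfs"
# service registered, then connects there for the actual data transfer.
nfs_port=$(echo "$rpcinfo_output" | awk '$5 == "nfs" && $3 == "tcp" {print $4; exit}')
echo "NFS TCP port: $nfs_port"
```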

II. NFS deployment

 

1. Check system information

[root@server7 ~]# cat /etc/redhat-release
CentOS release 7.3.1611 (AltArch)
[root@server7 ~]# uname -a
Linux server7.ctos.zu 3.10.0-514.el7.centos.plus.i686 #1 SMP Wed Jan 25 12:55:04 UTC 2017 i686 i686 i386 GNU/Linux

Get into the habit of checking the system version and kernel parameters first. The same software can differ between OS versions and kernels, so the deployment method differs too; checking first avoids unnecessary errors.

 

2. Install the NFS software

To deploy NFS, you must install the following two packages: nfs-utils (the main NFS program) and rpcbind (the main RPC program).

Both the client and the NFS server need these two packages installed.

Note: the RPC service used by NFS is named portmap on CentOS 5 and earlier, and rpcbind on CentOS 6 and CentOS 7.

The NFS packages:

       nfs-utils: the main NFS program, including the rpc.nfsd and rpc.mountd daemons

       rpcbind: the main RPC program

2.1. Check for the NFS packages

       [root@server7 ~]# rpm -qa | egrep "nfs|rpcbind"

      [root@server7 ~]#

My CentOS release 7.3.1611 is a minimal install, so nfs-utils and rpcbind are not installed by default.

Search yum to confirm the packages exist:

[root@server7 ~]# yum search nfs-utils  rpcbind

2.2. Install the NFS and RPC packages

       [root@server7 ~]# yum install nfs-utils  rpcbind

       [root@server7 ~]# rpm -qa  | egrep "nfs|rpcbind"

  rpcbind-0.2.0-38.el7_3.1.i686

  nfs-utils-1.3.0-0.33.el7_3.i686

  libnfsidmap-0.25-15.el7.i686

Check which files these two packages installed on the system:

[root@server7 ~]# rpm -ql nfs-utils

3. Start the NFS service

3.1. Start the rpcbind service before starting NFS

Check rpcbind's status:

[root@server7 ~]# systemctl status rpcbind

● rpcbind.service - RPC bind service

   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)

   Active: active (running) since 一 2017-09-04 10:03:20 CST; 1s ago

  Process: 3583 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)

 Main PID: 3584 (rpcbind)

   CGroup: /system.slice/rpcbind.service

           └─3584 /sbin/rpcbind -w

9月 04 10:03:19 server7.ctos.zu systemd[1]: Starting RPC bind service...

9月 04 10:03:20 server7.ctos.zu systemd[1]: Started RPC bind service.

Note: after a successful installation, rpcbind is already running by default and starts automatically at boot. If it is not running, restart the rpcbind service:

[root@server7 ~]# systemctl restart  rpcbind

Check the RPC port:

[root@server7 ~]# yum install net-tools lsof

[root@server7 ~]# lsof  -i:111

COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

systemd    1 root   56u  IPv6  43164      0t0  TCP *:sunrpc (LISTEN)

systemd    1 root   57u  IPv4  43165      0t0  TCP *:sunrpc (LISTEN)

rpcbind 3584  rpc    4u  IPv6  43164      0t0  TCP *:sunrpc (LISTEN)

rpcbind 3584  rpc    5u  IPv4  43165      0t0  TCP *:sunrpc (LISTEN)

rpcbind 3584  rpc    8u  IPv4  44975      0t0  UDP *:sunrpc

rpcbind 3584  rpc   10u  IPv6  44977      0t0  UDP *:sunrpc

[root@server7 ~]# netstat -tlunp |grep rpcbind

udp        0      0 0.0.0.0:111             0.0.0.0:*                         3584/rpcbind       

udp        0      0 0.0.0.0:791             0.0.0.0:*                           3584/rpcbind       

udp6       0      0 :::111                  :::*                                3584/rpcbind       

udp6       0      0 :::791                  :::*                                3584/rpcbind 

Before the NFS service is started, check which port information is registered with RPC:

[root@server7 ~]# rpcinfo -p localhost

   program vers proto   port  service

    100000    4   tcp    111  portmapper

    100000    3   tcp    111  portmapper

    100000    2   tcp    111  portmapper

    100000    4   udp    111  portmapper

    100000    3   udp    111  portmapper

    100000    2   udp    111  portmapper

3.2. Start the NFS service once RPC is running

Check the status:

[root@server7 ~]# systemctl status  nfs

● nfs-server.service - NFS server and services

   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)

   Active: inactive (dead)

It is not started by default, nor enabled at boot. Start the nfs service and set it to start at boot:

[root@server7 ~]# systemctl start nfs

[root@server7 ~]# systemctl enable nfs

Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

[root@server7 ~]# systemctl status  nfs

● nfs-server.service - NFS server and services

   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)

   Active: active (exited) since 一 2017-09-04 10:15:21 CST; 19s ago

 Main PID: 3654 (code=exited, status=0/SUCCESS)

   CGroup: /system.slice/nfs-server.service

 

9月 04 10:15:21 server7.ctos.zu systemd[1]: Starting NFS server and services...

9月 04 10:15:21 server7.ctos.zu systemd[1]: Started NFS server and services.

After NFS has started, look again at the port registration information in RPC:

[root@server7 ~]# rpcinfo -p localhost

   program vers proto   port  service

    100000    4   tcp    111  portmapper

    100000    3   tcp    111  portmapper

    100000    2   tcp    111  portmapper

    100000    4   udp    111  portmapper

    100000    3   udp    111  portmapper

    100000    2   udp    111  portmapper

    100024    1   udp  56626  status

    100024    1   tcp  42691  status

    100005    1   udp  20048  mountd

    100005    1   tcp  20048  mountd

    100005    2   udp  20048  mountd

    100005    2   tcp  20048  mountd

    100005    3   udp  20048  mountd

    100005    3   tcp  20048  mountd

    100003    3   tcp   2049  nfs

    100003    4   tcp   2049  nfs

    100227    3   tcp   2049  nfs_acl

    100003    3   udp   2049  nfs

    100003    4   udp   2049  nfs

    100227    3   udp   2049  nfs_acl

    100021    1   udp  57225  nlockmgr

    100021    3   udp  57225  nlockmgr

    100021    4   udp  57225  nlockmgr

    100021    1   tcp  35665  nlockmgr

    100021    3   tcp  35665  nlockmgr

    100021    4   tcp  35665  nlockmgr

Having confirmed that everything started without problems, let's see which ports NFS has actually opened:

[root@server7 ~]# netstat -tulnp |grep -E '(rpc|nfs)'

tcp        0      0 0.0.0.0:42691           0.0.0.0:*               LISTEN      3634/rpc.statd     

tcp        0      0 0.0.0.0:20048           0.0.0.0:*               LISTEN      3642/rpc.mountd    

tcp6       0      0 :::39614                :::*                    LISTEN      3634/rpc.statd     

tcp6       0      0 :::20048                :::*                    LISTEN      3642/rpc.mountd    

udp        0      0 127.0.0.1:842           0.0.0.0:*                           3634/rpc.statd     

udp        0      0 0.0.0.0:20048           0.0.0.0:*                           3642/rpc.mountd    

udp        0      0 0.0.0.0:111             0.0.0.0:*                           3584/rpcbind       

udp        0      0 0.0.0.0:791             0.0.0.0:*                           3584/rpcbind       

udp        0      0 0.0.0.0:56626           0.0.0.0:*                           3634/rpc.statd     

udp6       0      0 :::56122                :::*                                3634/rpc.statd     

udp6       0      0 :::20048                :::*                                3642/rpc.mountd    

udp6       0      0 :::111                  :::*                                3584/rpcbind       

udp6       0      0 :::791                  :::*                                3584/rpcbind       

4. Common NFS daemons explained

[root@server7 ~]# ps -ef | egrep "rpc|nfs"

rpc       3584     1  0 10:03 ?        00:00:00 /sbin/rpcbind -w

rpcuser   3634     1  0 10:15 ?        00:00:00 /usr/sbin/rpc.statd --no-notify

root      3637     2  0 10:15 ?        00:00:00 [rpciod]

root      3642     1  0 10:15 ?        00:00:00 /usr/sbin/rpc.mountd

root      3652     1  0 10:15 ?        00:00:00 /usr/sbin/rpc.idmapd

root      3657     2  0 10:15 ?        00:00:00 [nfsd4_callbacks]

root      3663     2  0 10:15 ?        00:00:00 [nfsd]

root      3664     2  0 10:15 ?        00:00:00 [nfsd]

root      3665     2  0 10:15 ?        00:00:00 [nfsd]

root      3666     2  0 10:15 ?        00:00:00 [nfsd]

root      3667     2  0 10:15 ?        00:00:00 [nfsd]

root      3668     2  0 10:15 ?        00:00:00 [nfsd]

root      3669     2  0 10:15 ?        00:00:00 [nfsd]

root      3670     2  0 10:15 ?        00:00:00 [nfsd]

root      3705  3267  0 10:23 pts/0    00:00:00 grep -E --color=auto rpc|nfs

  • nfsd

  The main provider of the NFS service. This daemon's main job is to manage whether clients may mount the server's file system, which includes checking the connecting user's login ID.

  •   rpc.mountd

  This daemon manages the NFS file system. After a client has successfully contacted the host through rpc.nfsd, before it can use the files the NFS server provides it must pass this daemon's permission check. rpc.mountd reads the NFS configuration file /etc/exports and compares it against the client's permissions; only once that hurdle is cleared does the client get permission to use the NFS files.

  •   rpc.lockd (optional)

  This daemon manages file locking. When multiple clients try to write to the same file simultaneously, the file can end up corrupted; rpc.lockd is used to prevent this problem. It must be enabled on both the client and the server.

  •  rpc.statd (optional)

  This daemon checks file consistency. If a file is damaged because several clients used it at the same time, rpc.statd can detect the damage and attempt to recover the file.
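A quick sketch for checking whether these daemons are present on a server (pgrep matches the process names listed above; which ones report running depends on the machine, so no particular output is assumed):

```shell
# Report the status of each NFS-related daemon. pgrep -x succeeds
# when a process with exactly that name exists.
for daemon in rpcbind rpc.statd rpc.mountd nfsd; do
  if pgrep -x "$daemon" > /dev/null 2>&1; then
    echo "$daemon: running"
  else
    echo "$daemon: not running"
  fi
done
```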

5. Configure the NFS Service

  Configuring NFS is very simple. The main configuration file is /etc/exports, which is empty by default; if it does not exist, you can create it yourself with vim. Setting up an NFS server is therefore just: edit the main configuration file /etc/exports, start rpcbind first (if it is already running, don't restart it), then start nfs, and NFS is up.

     So how should /etc/exports be written?

[root@server7 etc]# vi /etc/exports

/tmp/data      192.168.1.0/24(ro)          client-A.ctos.zu(rw,sync)

# [shared directory] [client address 1(options)] [client address 2(options)]

The above is a simple configuration example. The first field of each line is the directory to be shared; note that the unit of sharing is the directory.

Shared directory: a directory on our machine that we want to share with other hosts on the network. If I want to share /tmp/data, I write /tmp/data directly in this field; the directory can be shared to different hosts with different permissions.

Client address(parameter 1, parameter 2): the client address can be a network segment or a single host. Parameters include: read-write permission rw, synchronous writes sync, squashing all visiting accounts all_squash, the anonymous account mapping anonuid=uid and anongid=gid, and so on.

The client address can be set in the following ways:

1) A full IP address or a network segment, e.g. 192.168.100.100 or 192.168.8.0/24.

2) A host name, but it must exist in /etc/hosts or be resolvable through DNS; the point is simply that the name must resolve to an IP. Host names may also use wildcards such as '*' or '?', for example: host[1-8].ctos.zu, server?.test.com.
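To make the field layout concrete, here is a small sketch that splits an exports-style line (reusing the illustrative directory and clients from the example above) into its parts:

```shell
# An /etc/exports-style line: a shared directory followed by
# client(options) pairs.
line='/tmp/data 192.168.1.0/24(ro) client-A.ctos.zu(rw,sync)'

set -- $line            # split the line on whitespace
dir=$1; shift
echo "shared directory: $dir"
for spec in "$@"; do
  client=$(echo "$spec" | cut -d'(' -f1)              # before the '('
  opts=$(echo "$spec" | cut -d'(' -f2 | tr -d ')')    # inside the '()'
  echo "client: $client  options: $opts"
done
```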

NFS permission options

These are the parameters set inside the parentheses () in the /etc/exports configuration file:

The parameters and their purposes:

rw: read-write access.

ro: read-only access.

sync: data is written synchronously to the NFS server's disk before the write request returns.

no_root_squash: if the user accessing the shared directory is root, they keep root privileges over it. This option was originally meant for diskless clients; avoid using it!

root_squash: if the user accessing the shared directory is root, their identity is squashed to the nobody user.

all_squash: whatever identity a user accesses the NFS server with, including root, it is squashed to the anonymous user, and the uid and gid become those of the nobody or nfsnobody account. When multiple NFS clients read and write on the same NFS server at once, this parameter helps ensure all written data has the same ownership.

Note, however, that the anonymous user's uid and gid may differ between systems, and here they must match between server and client. For example, if the server specifies the anonymous user's UID as 2000, then an account with UID 2000 must also exist on the client.

anonuid: the uid of the anonymous account, i.e. the identity the client uses when accessing the server; by default it is nfsnobody, UID 65534.

anongid: the same as anonuid, but for the gid instead of the uid.

 

Example:

/home/test  192.168.1.0/24(rw,sync,all_squash,anonuid=2000,anongid=2000)

### Note that there must be no space between the client address and the options in parentheses!! This is a common production configuration for an NFS directory shared by many clients: all_squash means that no matter what identity a client visits with, it is squashed to the anonymous user and group that follow, identified here by the anonuid and anongid numbers.

Summary:

Server-side share configuration format:

1) Basic format: shared-directory ip/24(share options) -> note: no space before the parentheses.

2) Share options:

rw: read-write access

sync: the write returns only after the file is actually written to disk

all_squash: every accessing user is squashed to the user that follows

anonuid: the uid users are squashed to

anongid: the gid users are squashed to

So what identity does the client access with?

By default the client accesses the server as the user nfsnobody, with uid and gid 65534. The default server share also adds the all_squash parameter and sets anonuid to 65534 (that is, the nfsnobody user). Of course, if nfsnobody has a different uid on some system, that may cause access problems, so it is best to set up a user with a unified UID and GID for access.
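For example, suppose we standardize on UID/GID 2000 with a user named nfsanon (the number and the name are illustrative choices here, not defaults). As root you would run groupadd -g 2000 nfsanon and useradd -u 2000 -g 2000 nfsanon on both the server and every client. A sketch of verifying that the IDs agree, by parsing a passwd-style entry:

```shell
# A passwd-style entry as it would appear after creating the user;
# the name "nfsanon" and UID/GID 2000 are illustrative.
entry='nfsanon:x:2000:2000::/home/nfsanon:/sbin/nologin'

uid=$(echo "$entry" | cut -d: -f3)
gid=$(echo "$entry" | cut -d: -f4)
echo "uid=$uid gid=$gid"    # must print the same values on every host
```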

How do I check the mount situation?

Two important files answer this question: /var/lib/nfs/etab and /var/lib/nfs/rmtab. On the server, these two files show which directories are shared, how many clients have mounted each share, and each client's specific mount information.

1. etab shows which shared directories exist on the server, who may use them, and with what parameters.

2. rmtab shows how the shared directories are currently mounted.
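On a real server you simply cat /var/lib/nfs/etab and /var/lib/nfs/rmtab. As a sketch, etab lines have the form <directory> <client>(<effective options>); the sample line and its options below are illustrative:

```shell
# Extract the effective export options from an etab-style line.
etab_line='/tmp/data 192.168.1.0/24(ro,sync,wdelay,root_squash)'

opts=$(echo "$etab_line" | cut -d'(' -f2 | tr -d ')')
echo "effective options: $opts"
```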



Origin blog.csdn.net/weixin_40482816/article/details/100128792