I recently needed to set up KubeSphere on Kubernetes with NFS as the back-end storage. The k8s cluster is built on UCloud machines, and I also have a low-spec Alibaba Cloud machine, so I built the NFS server on the Alibaba Cloud machine to avoid eating too much disk space on the UCloud machines.
Environment
This example uses the following two machines:
Name | IP Addr | Description |
---|---|---|
nfs server | 47.1.1.100 | Public IP of the server, an Alibaba Cloud machine |
nfs client | 36.1.1.100 | Public IP of the client, a UCloud machine |
Server installation
Install the NFS package with yum.
$ sudo yum install nfs-utils
Note: installing nfs-utils alone is enough; rpcbind is one of its dependencies and will be installed along with it.
Server configuration
Enable the NFS services at boot:
[root@47 nfs]# systemctl enable rpcbind
[root@47 nfs]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
Configure shared directory
With the services enabled, configure a shared directory on the server:
mkdir /data
chmod 755 /data
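To confirm the directory was created with the intended mode, `stat` can print the octal permission bits. A small sketch, demonstrated on a temporary directory so it is safe to run on any Linux machine:

```shell
# Create a directory and verify its mode, mirroring the /data setup above.
dir=$(mktemp -d)
chmod 755 "$dir"
mode=$(stat -c '%a' "$dir")   # GNU stat: prints the octal permission bits
echo "$mode"
rmdir "$dir"
```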
Then configure the corresponding export for that directory:
vim /etc/exports
Add the following configuration
/data/ 192.168.0.0/24(rw,sync,no_root_squash,no_all_squash)
/data/: the shared directory.
192.168.0.0/24: the client IP range; * means no restriction at all (I actually used * in this experiment).
rw: read and write permission.
sync: write changes to the shared directory synchronously.
no_root_squash: the client's root user keeps root privileges on the share.
no_all_squash: ordinary users likewise keep their own identities.
Save with :wq.
Start the NFS Services
[root@47 nfs]# systemctl start rpcbind
[root@47 nfs]# systemctl start nfs
The firewall needs to allow the rpc-bind, mountd, and nfs services:
$ sudo firewall-cmd --zone=public --permanent --add-service={rpc-bind,mountd,nfs}
success
$ sudo firewall-cmd --reload
success
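To verify the rules took effect, compare the zone's service list against what NFS needs. The sketch below checks a captured `firewall-cmd --zone=public --list-services` output, embedded here as a sample string since the check may not run on the server itself:

```shell
# Sample output of `firewall-cmd --zone=public --list-services`;
# on the real server, substitute the live command.
services='ssh dhcpv6-client rpc-bind mountd nfs'
missing=0
for s in rpc-bind mountd nfs; do
  case " $services " in
    *" $s "*) echo "$s: open" ;;
    *)        echo "$s: MISSING"; missing=1 ;;
  esac
done
```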
Check
You can now inspect the shared directories on the server itself:
$ showmount -e localhost
Export list for localhost:
/data *
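The export paths can also be pulled out of `showmount -e` output programmatically, e.g. for a monitoring script. Sample output is embedded so the sketch is self-contained:

```shell
# Extract just the exported paths from `showmount -e` output.
# The sample mirrors the listing above; on a live host, pipe the
# command itself: showmount -e localhost | awk 'NR > 1 { print $1 }'
sample='Export list for localhost:
/data *'
paths=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }')
echo "$paths"
```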
With that, the server side is configured.
Check the ports
[root@47 ~]# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 40535 status
100024 1 tcp 48227 status
100005 1 udp 20048 mountd
100005 1 tcp 20048 mountd
100005 2 udp 20048 mountd
100005 2 tcp 20048 mountd
100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049 nfs_acl
100021 1 udp 53325 nlockmgr
100021 3 udp 53325 nlockmgr
100021 4 udp 53325 nlockmgr
100021 1 tcp 37953 nlockmgr
100021 3 tcp 37953 nlockmgr
100021 4 tcp 37953 nlockmgr
As the listing shows, NFS starts several helper services on random ports and registers them with the portmapper, so restricting NFS with iptables is a bit troublesome. You can pin the NFS-related services to fixed ports in the configuration file.
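Which ports actually need opening can be extracted from `rpcinfo -p` itself. This sketch parses a captured listing (a trimmed sample is embedded; on the server, pipe `rpcinfo -p localhost` in directly) and prints each unique TCP port:

```shell
# List the unique TCP ports registered with the portmapper.
# Sample rows trimmed from the rpcinfo output above.
rpcinfo_output='100000 4 tcp 111 portmapper
100005 3 tcp 20048 mountd
100003 4 tcp 2049 nfs
100021 4 tcp 37953 nlockmgr
100024 1 tcp 48227 status'
tcp_ports=$(printf '%s\n' "$rpcinfo_output" | awk '$3 == "tcp" { print $4 }' | sort -n | uniq)
printf '%s\n' "$tcp_ports"
```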
To assign fixed ports, edit the configuration file:
vi /etc/sysconfig/nfs
Add (pick unused ports; the values here match the rpcinfo output and iptables rules below):
RQUOTAD_PORT=9904
LOCKD_TCPPORT=9901
LOCKD_UDPPORT=9902
MOUNTD_PORT=9903
STATD_PORT=9905
Restart nfs:
systemctl restart nfs
Now check the ports again:
[root@47 data]# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 40535 status
100024 1 tcp 48227 status
100005 1 udp 9903 mountd
100005 1 tcp 9903 mountd
100005 2 udp 9903 mountd
100005 2 tcp 9903 mountd
100005 3 udp 9903 mountd
100005 3 tcp 9903 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049 nfs_acl
100021 1 udp 9902 nlockmgr
100021 3 udp 9902 nlockmgr
100021 4 udp 9902 nlockmgr
100021 1 tcp 9901 nlockmgr
100021 3 tcp 9901 nlockmgr
100021 4 tcp 9901 nlockmgr
Set up iptables accordingly, or allow the inbound and outbound ports in the Alibaba Cloud console.
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -p tcp --dport 9902 -j ACCEPT
iptables -A INPUT -p tcp --dport 9901 -j ACCEPT
iptables -A INPUT -p tcp --dport 9900 -j ACCEPT
iptables -A INPUT -p tcp --dport 9903 -j ACCEPT
iptables -A INPUT -p tcp --dport 9904 -j ACCEPT
iptables -A INPUT -p tcp --dport 9905 -j ACCEPT
iptables -A INPUT -p tcp --dport 40535 -j ACCEPT
iptables -A INPUT -p tcp --dport 48227 -j ACCEPT
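The repetitive rules above can be generated from a port list. This sketch prints the commands instead of applying them, so the output can be reviewed first and then piped to `sh` on the server:

```shell
# Generate (print, do not apply) one ACCEPT rule per NFS-related port.
rules=$(for port in 111 2049 9901 9902 9903 40535 48227; do
  echo "iptables -A INPUT -p tcp --dport ${port} -j ACCEPT"
done)
printf '%s\n' "$rules"
```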
NFS client connections
First check the server's shared directory from the client. Because the two servers sit at different cloud vendors, we need to use the public IP:
$ showmount -e 47.1.1.100
This may fail with: clnt_create: RPC: Port mapper failure - Timed out. I searched a lot online without finding a fix, but mounting directly works regardless, so read on.
Create a directory on the client
$ mkdir -p /mnt/nfs-data
Mounting
$ mount -t nfs 47.1.1.100:/data /mnt/nfs-data
After mounting, you can verify with the mount command:
$ mount
47.1.1.100:/data on /mnt/nfs-data type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.175,local_lock=none,addr=47.1.1.100)
This shows the mount succeeded.
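A script can confirm the mount by scanning the mount table for an nfs/nfs4 entry at the expected path. The sketch matches against a captured mount line (embedded sample; on the client, use `mount | grep /mnt/nfs-data` or `findmnt /mnt/nfs-data` instead):

```shell
# Succeeds if the path shows up as an NFS mount in the mount table.
# Sample line copied from the mount output above.
mount_line='47.1.1.100:/data on /mnt/nfs-data type nfs4 (rw,relatime)'
if printf '%s\n' "$mount_line" | grep -Eq ' on /mnt/nfs-data type nfs4? '; then
  state=mounted
else
  state='not mounted'
fi
echo "$state"
```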
Test NFS
To test, create a directory inside the mounted share on the client:
$ cd /mnt/nfs-data
$ mkdir test-nfs
Then take a look on the NFS server:
$ cd /data
$ ll
drwxr-xr-x 2 root root 4096 Dec 15 15:51 test-nfs
You can see the directory created on the client has landed in the shared directory.
Automatic mounting on the client
Mounting at boot is very common; a quick setting on the client does it:
$ vim /etc/fstab
Append a line like the following at the end:
#
# /etc/fstab
# Created by anaconda on Sun Oct 15 15:19:00 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
#....
47.1.1.100:/data /mnt/nfs-data nfs defaults 0 0
Since /etc/fstab was modified, systemd needs to reload it:
$ systemctl daemon-reload
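A well-formed fstab entry has exactly six fields (device, mount point, type, options, dump, pass); a quick awk check catches typos before the next boot. The sample line is the one added above:

```shell
# Validate the fstab entry: six fields, filesystem type nfs.
line='47.1.1.100:/data /mnt/nfs-data nfs defaults 0 0'
result=$(echo "$line" | awk '{ print (NF == 6 && $3 == "nfs") ? "ok" : "bad" }')
echo "$result"
```

Running `sudo mount -a` afterwards mounts everything listed in fstab immediately, which is a safer check than waiting for a reboot.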
reference
How to solve "RPC: Port mapper failure - Timed out"
Fixing NFS ports to make iptables configuration easier
Installing and configuring NFS on CentOS 7 with yum