HDFS file system mount

A brief description:

With FUSE, an HDFS file system can be mounted on a remote server, much like shared storage can be mounted with NFS or GlusterFS.

FUSE installation

FUSE can be compiled from source or installed with yum from the CDH or Ambari repositories.
Ambari is used here.
Configure the official Ambari repositories, and then the packages can be installed with yum:
sudo wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.1.5/ambari.repo -O /etc/yum.repos.d/ambari.repo

sudo wget -nv http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0/hdp.repo -O /etc/yum.repos.d/hdp.repo

sudo wget -nv http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0/hdp.gpl.repo -O /etc/yum.repos.d/hdp.gpl.repo
yum install hadoop-hdfs-fuse -y
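After installation, a quick check that the FUSE wrapper is in place can be useful; the path below assumes the HDP 2.6.4.0-91 layout used later in this article:

# confirm the package and locate the hadoop-fuse-dfs wrapper script
rpm -qa | grep -i hdfs-fuse
ls /usr/hdp/2.6.4.0-91/hadoop/bin/hadoop-fuse-dfs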

 

Note: hadoop-hdfs-fuse only needs to be installed on the client (that is, the server on which HDFS will be mounted); it does not need to be installed on the Hadoop cluster itself.

Hadoop download address
https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/
Deploy a Hadoop cluster

Omitted here; choose whichever installation method you prefer, such as CDH, HDP, or Apache Hadoop (a minimal tarball sketch for the Apache option follows).
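A sketch of fetching and unpacking the Apache Hadoop 2.7.3 tarball from the archive above; the /opt install path is an assumption:

# download and unpack Apache Hadoop 2.7.3 (install path is an example)
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -xzf hadoop-2.7.3.tar.gz -C /opt/
export HADOOP_HOME=/opt/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin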
For the purposes of this article, the Hadoop cluster address is:
hdfs://192.168.103.220:9000
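This address comes from the fs.defaultFS property in the cluster's core-site.xml; on a cluster node it can be confirmed with something like the following (the path assumes a tarball install under $HADOOP_HOME):

# show the configured NameNode address (fs.defaultFS) on a cluster node
grep -A 1 "fs.defaultFS" $HADOOP_HOME/etc/hadoop/core-site.xml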

 

Configure environment variables:

export LD_LIBRARY_PATH=/usr/hdp/2.6.4.0-91/usr/lib/:/usr/local/lib:/usr/lib:$LD_LIBRARY_PATH:$HADOOP_HOME/build/c++/Linux-amd64-64/lib:${JAVA_HOME}/jre/lib/amd64/server
echo "user_allow_other" >> /etc/fuse.conf
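These exports only affect the current shell; to keep them across logins they can be written to a profile script, for example (the file name is an assumption, and HADOOP_HOME/JAVA_HOME are expected to be set already):

# persist the library path for future sessions
cat >> /etc/profile.d/hdfs-fuse.sh <<'EOF'
export LD_LIBRARY_PATH=/usr/hdp/2.6.4.0-91/usr/lib/:/usr/local/lib:/usr/lib:$LD_LIBRARY_PATH:$HADOOP_HOME/build/c++/Linux-amd64-64/lib:${JAVA_HOME}/jre/lib/amd64/server
EOF
source /etc/profile.d/hdfs-fuse.sh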

Mount the HDFS file system

Switch to the hdfs user, then run:
cd /usr/hdp/2.6.4.0-91/hadoop
./bin/hadoop-fuse-dfs hdfs://192.168.103.220:9000 /mnt

Description: hdfs://192.168.103.220:9000 is the HDFS address to mount; here the root of HDFS is mounted onto /mnt. Pay attention to whether the NameNode uses the default or a custom port, and modify the address accordingly.
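To make the mount survive a reboot, hadoop-fuse-dfs packages also document an /etc/fstab entry; a sketch using this article's NameNode address (the exact option syntax is an assumption, verify against your distribution's documentation):

# /etc/fstab entry for mounting HDFS at boot
hadoop-fuse-dfs#dfs://192.168.103.220:9000 /mnt fuse allow_other,usetrash,rw 2 0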

[root@node1 hadoop]# df -hT
Filesystem     Type           Size  Used Avail Use% Mounted on
/dev/sda3      xfs             18G   15G  3.2G   83% /
devtmpfs       devtmpfs       1.4G     0  1.4G    0% /dev
tmpfs          tmpfs          1.4G     0  1.4G    0% /dev/shm
tmpfs          tmpfs          1.4G  9.7M  1.4G    1% /run
tmpfs          tmpfs          1.4G     0  1.4G    0% /sys/fs/cgroup
/dev/sda1      xfs             97M   97M   96K  100% /boot
tmpfs          tmpfs          283M     0  283M    0% /run/user/0
fuse_dfs       fuse.fuse_dfs   36G     0   36G    0% /mnt
[root@node1 hadoop]# 
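To detach the mount when it is no longer needed, the standard FUSE unmount commands apply:

# unmount the HDFS FUSE mount point (as root)
umount /mnt
# or, if it was mounted by an unprivileged user:
fusermount -u /mnt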


 

Note the permissions: the local user must match the corresponding user on the HDFS cluster (a sketch for fixing mismatched ownership follows the cluster example below). Once that is in place, data read and written locally shows up on the HDFS cluster. Locally:
[hadoop@node1 ~]$ cd /mnt/
[hadoop@node1 mnt]$ ls
test
[hadoop@node1 mnt]$ echo "aaa" >> test/a.txt 
[hadoop@node1 mnt]$ 

 

On the cluster:
[hadoop@k8s-node2 hadoop-2.7.3]$ ./bin/hadoop fs -cat /test/a.txt
11
111
111
111
222
111
aaa
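If writes are rejected because the local user does not match the owner of the HDFS directory, one option is to change ownership on the HDFS side. A sketch, run on the cluster as the HDFS superuser; the hadoop:hadoop owner and the /test path simply mirror the example above:

# give the hadoop user ownership of the directory exposed through the FUSE mount
./bin/hadoop fs -chown -R hadoop:hadoop /test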
