HDFS command operation
First, start Hadoop (for example in an Xshell session):
start-all.sh or
start-dfs.sh
-
hadoop fs -ls / (list all files in the root directory)
-
hadoop fs -du /sevenclass (display the size of each file in the directory)
-
hadoop fs -cat /a.txt (display the contents of the file)
-
hadoop fs -text /a.txt (output the file contents in text format; unlike -cat, it can also decode compressed and sequence files)
-
hadoop fs -count /test
For the specified file or directory, displays DIR_COUNT (number of directories), FILE_COUNT (number of files), CONTENT_SIZE (total bytes), and the path name (FILE_NAME)
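The four columns can be pictured with ordinary local tools. This is only a local sketch of what -count computes; /tmp/count_demo is a hypothetical demo path, not from the notes.

```shell
# Local sketch of the four columns hadoop fs -count reports:
rm -rf /tmp/count_demo
mkdir -p /tmp/count_demo/sub           # one subdirectory
printf 'abc' > /tmp/count_demo/f1.txt  # one 3-byte file
dirs=$(find /tmp/count_demo -type d | wc -l)   # DIR_COUNT (includes the queried dir itself)
files=$(find /tmp/count_demo -type f | wc -l)  # FILE_COUNT
bytes=$(wc -c < /tmp/count_demo/f1.txt)        # CONTENT_SIZE
echo "$dirs $files $bytes /tmp/count_demo"     # prints: 2 1 3 /tmp/count_demo
```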
mkdir (create a directory)
hadoop fs -mkdir /test1 /test2
(create two directories at once)
hadoop fs -mkdir /a/b
(creates directory b inside directory a; throws an error if a does not exist)
hadoop fs -mkdir -p /a/b
(if a does not exist, creates a first and then creates b inside it)
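The difference between the two forms can be shown with the local mkdir, whose -p flag behaves the same way; /tmp/mkdir_demo is a hypothetical demo path.

```shell
# Without -p: fails when the parent is missing. With -p: creates the chain.
rm -rf /tmp/mkdir_demo
mkdir /tmp/mkdir_demo
mkdir /tmp/mkdir_demo/a/b 2>/dev/null \
  || echo "without -p: fails, parent a is missing"
mkdir -p /tmp/mkdir_demo/a/b \
  && echo "with -p: parent a created automatically"
```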
cp (copy)
hadoop fs -cp /a/b /c/
(copies /a/b into /c)
copyFromLocal (upload files from the local file system to HDFS)
hadoop fs -copyFromLocal /usr/a.txt /test
(arguments: local file first, HDFS path second)
copyToLocal (download files from HDFS to the local file system)
hadoop fs -copyToLocal /test/a.txt /usr
(arguments: HDFS file first, local path second)
moveToLocal (this command has not been implemented yet)
put (upload files from the local file system to HDFS)
Note the difference from copyFromLocal: put can copy multiple source paths to the destination file system, and it can also read from standard input and write to the destination file system
hadoop fs -put /usr/a.txt /test
(arguments: local file first, HDFS path second)
hadoop fs -put /usr/a.txt /usr/b.txt /test
(arguments: multiple local files first, HDFS path last)
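The standard-input form of put uses "-" as the source, the usual Unix convention. The cluster commands are shown as comments only; the runnable line below illustrates the same "-" convention with a local command, and /tmp/put_stdin_demo.txt is a hypothetical demo path.

```shell
# Cluster-only examples (not run here):
#   echo "hello" | hadoop fs -put - /test/hello.txt
# The same "-"-means-stdin convention, locally:
echo "hello" | cat - > /tmp/put_stdin_demo.txt
cat /tmp/put_stdin_demo.txt   # prints: hello
```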
get (copy a file to the local file system)
hadoop fs -get /test/a.txt /usr
mv (moves files from the source path to the destination path; multiple source paths are allowed, in which case the destination must be a directory; moving files between different file systems is not allowed)
hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
(move and rename: the source file file1 is moved and renamed to file2)
hadoop fs -mv /test /test1
(moves the directory test under test1)
touchz (create a 0-byte empty file)
hadoop fs -touchz pathname
(creates an empty file named pathname)
appendToFile (append content to an existing file)
hadoop fs -appendToFile /home/test.txt /1.txt
/1.txt is an existing HDFS file; the contents of the local file /home/test.txt are appended to it
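What appendToFile does to its target can be mirrored with the local >> operator; /tmp/append_demo.txt is a hypothetical demo file.

```shell
# Existing content stays; new content is appended, nothing is overwritten.
printf 'original\n' >  /tmp/append_demo.txt   # the pre-existing file
printf 'appended\n' >> /tmp/append_demo.txt   # the appended content
cat /tmp/append_demo.txt
```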
getmerge (takes a source directory and a destination file as input, and concatenates all files in the source directory into the local destination file)
hadoop fs -getmerge /test1/test test.txt
Concatenates everything under /test1/test into the local file test.txt
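The result of getmerge can be sketched locally with cat over a directory glob; the demo paths under /tmp/getmerge_demo are hypothetical.

```shell
# Every file under the source directory, concatenated into one destination file.
rm -rf /tmp/getmerge_demo && mkdir -p /tmp/getmerge_demo/src
printf 'part one\n' > /tmp/getmerge_demo/src/a.txt
printf 'part two\n' > /tmp/getmerge_demo/src/b.txt
cat /tmp/getmerge_demo/src/* > /tmp/getmerge_demo/merged.txt
cat /tmp/getmerge_demo/merged.txt
```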
rm (delete the specified file)
hadoop fs -rm /user/a.txt
rmr (recursively delete files and directories; deprecated in newer releases in favor of hadoop fs -rm -r)
hadoop fs -rmr /user/hadoop/dir
Common Linux operations
-
passwd (change password)
Usage: type passwd and press Enter, then follow the prompts to enter the new password
-
clear (clear screen)
-
su (switch to another user)
Type su root and press Enter, then enter the root password; use exit to return to the previous user
-
pwd (display the current location in the Linux file system)
-
chown (change the owner of a file)
chown hadoop:hadoop a.txt
(changes the owner of a.txt to user hadoop and its group to hadoop)
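chown itself needs root privileges, but the current owner and group of a file can always be read back, for example with stat; /tmp/chown_demo.txt is a hypothetical demo file.

```shell
# Read back a file's owner and group.
touch /tmp/chown_demo.txt
stat -c '%U:%G' /tmp/chown_demo.txt   # prints owner:group of the file
# As root, ownership would then be changed with:
#   chown hadoop:hadoop /tmp/chown_demo.txt
```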