"File / directory operation command."

# cd: change to a specified directory
cd /home/hadoop   # make /home/hadoop the current directory
cd ../            # go up one level
cd ../..          # go up two levels
cd ~              # go to the home directory of the user currently logged in to the Linux system. In Linux, "~" stands for the user's home directory, i.e. the "/home/username" directory; if the current user is hadoop, then "~" stands for "/home/hadoop/".
# ls: view file information
ls        # list the files in the current directory
ls -l     # show permission details for files and directories
ls -a     # show all files, including hidden files and the "." and ".." entries; hidden files are those whose names begin with "."
ls -lh    # like ls -l, but file sizes are shown in human-readable units (K, M, G); the rest of the output is unchanged
ls --help # view the other options
# touch: create empty files
touch filename          # create an empty file named filename
touch file1 file2 ...   # create empty files file1, file2, ...
# mkdir / rmdir: create directories, remove empty directories
mkdir input              # create the subdirectory input in the current directory
mkdir -p src/main/scala  # create the multi-level subdirectory src/main/scala in the current directory
rmdir emptydir           # remove the empty directory emptydir
# cat: view file contents
cat /proc/version          # view Linux kernel version information
cat /home/hadoop/word.txt  # print the entire contents of /home/hadoop/word.txt to the screen
cat file1 file2 > file3    # merge file1 and file2 in the current directory into a new file file3
cat file4 >> file3         # append the contents of file4 to file3
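A minimal sketch (the file names are just placeholders) showing the difference between ">" (create/overwrite) and ">>" (append):
echo "hello" > file1       # create file1 containing "hello"
echo "world" > file2       # create file2 containing "world"
cat file1 file2 > file3    # file3 now contains "hello" then "world"
cat file1 >> file3         # file3 now contains "hello", "world", "hello"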
# head: output the first n lines
head -5 word.txt  # display the first five lines of word.txt in the current directory
# cp: copy files or directories
cp file1 folder2/      # copy a file; an existing file with the same name may be overwritten
cp -i file1 folder2/   # copy a file; if an existing file would be overwritten, you are prompted with "yes" or "no", avoiding accidental overwrites
cp /home/hadoop/word.txt /usr/local/   # copy the file /home/hadoop/word.txt to the /usr/local directory
# Copying a multi-level directory: suppose the source directory is dir1 and the target directory is dir2, and we want to copy everything under dir1 into dir2 (a worked example follows this cp section). The following options are available:
# if directory dir2 does not exist, use:
cp -r dir1 dir2
# if directory dir2 already exists, use:
cp -r dir1/. dir2
# copying multiple files:
cp file* folder2/        # copy all files whose names start with "file" to folder2/
cp file1 file2 folder1/  # copy several files into the same folder folder1/
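A minimal sketch (directory and file names are placeholders) illustrating why the two cp -r forms above behave differently:
mkdir -p dir1/sub && touch dir1/a.txt   # build a small source tree
cp -r dir1 dir2        # dir2 did not exist: dir2 now mirrors dir1 (dir2/a.txt, dir2/sub)
mkdir dir3
cp -r dir1 dir3        # dir3 already existed: dir1 is copied into it (dir3/dir1/a.txt)
cp -r dir1/. dir3      # copies only the contents of dir1 into dir3 (dir3/a.txt, dir3/sub)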
# rm: delete files or directories. Note: an rm operation cannot be undone.
rm ./word.txt        # delete the file word.txt in the current directory
rm -I f1 f2 f3 f4    # delete multiple files; -I prompts once when more than three files are deleted (use -i to be prompted for every file)
rm -r ./test         # delete the test directory in the current directory and everything under it; -r can also be written as -R
rm -r test*          # delete all directories and files in the current directory whose names start with "test"
rm -rf dirname       # delete a non-empty directory without prompting
# mv: rename a file or directory, or move (cut) files to a new path
mv spark-2.1.0 spark                # rename the spark-2.1.0 directory to spark
mv original_path destination_path   # move the original file to the destination path
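A small illustrative example (reusing word.txt and the input directory from earlier; the names are placeholders):
mv word.txt input/word.txt          # move word.txt into the input subdirectory
mv input/word.txt input/words.txt   # rename it in place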
# chown: change the owner of files and directories
chown -R hadoop:hadoop ./spark   # hadoop is the name of the user logged in to the Linux system; this gives the user hadoop ownership of the spark subdirectory in the current directory and everything under it
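To verify the change (a check not in the original listing), ls shows the owner and group of each entry:
ls -ld ./spark   # the owner and group columns should now both read "hadoop"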
# sudo: run a command with system administrator (root) privileges
sudo tar -zxf ~/download/hadoop-2.7.1.tar.gz -C /usr/local   # decompress the archive with the .tar.gz (or .tgz) suffix into /usr/local
cd /usr/local/
sudo mv ./hadoop-2.7.1/ ./hadoop   # rename the folder to hadoop
sudo chown -R hadoop ./hadoop      # change the owner of the files to the hadoop user
"" " 
above tar command indicates the "/ home / hadoop / download /hadoop-2.7.1.tar.gz" to save the file after the extract to "/ usr / local" directory wherein the meaning of the various parameters is as follows:. 
* X: from the tar package the extracted document; 
* Z: denotes tar gzip compressed package is off, it is necessary to use gunzip it decompression; 
* F: indicates the file followed afterwards; 
* C: a rear unzip the file to the specified directory. 
after the files are extracted, hadoop user no permissions to the directory after the extract obtained after, so here give permission for the / usr / local / hadoop this directory by chown command hadoop user "" " 
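Optionally, the t option lists an archive's contents without extracting, which is a quick way to confirm the top-level directory name before running the commands above:
tar -tzf ~/download/hadoop-2.7.1.tar.gz | head -5   # list the first few entries in the archive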
# ln: create soft (symbolic) links
ln -s source_file target_file   # create a soft link: target_file -> source_file
# Using a soft link saves space, since there is no need to store a second copy of the source file.
rm -rf target_file              # delete the soft link (the source file is not affected)
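A minimal sketch (the link name word_link is a placeholder) showing how a soft link behaves:
ln -s /home/hadoop/word.txt ./word_link   # word_link now points to /home/hadoop/word.txt
ls -l word_link                           # shows "word_link -> /home/hadoop/word.txt"
cat word_link                             # reads the source file's contents through the link
rm word_link                              # removes only the link; the source file is untouched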
# awk: count how many times each value in the first column of a file occurs
awk '{cnt[$1]++} END {for (key in cnt) print key ":" cnt[key]}' file   # file: path to a local file (relative or absolute)
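A minimal sketch with made-up sample data (fruits.txt is a placeholder) showing what the one-liner prints:
printf "apple 1\nbanana 2\napple 3\n" > fruits.txt   # hypothetical sample file
awk '{cnt[$1]++} END {for (key in cnt) print key ":" cnt[key]}' fruits.txt
# prints (order is not guaranteed):
# apple:2
# banana:1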

 

Source: www.cnblogs.com/luckylele/p/11938978.html