[Articles] Hadoop Common Command Summary

First, foreword

This post is a summary of common Hadoop commands; the most frequently used ones are collected below.

Second, details

1. Start all Hadoop processes
start-all.sh is equivalent to start-dfs.sh + start-yarn.sh

However, start-all.sh is generally not recommended (starting everything through this single open-source wrapper command tends to cause problems).
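After starting, a quick sanity check is to list the running Java daemons with jps (a standard JDK tool). On a small cluster where all roles run on one machine, the output would look roughly like this (process IDs omitted; the exact set depends on your layout):

    $ jps
    NameNode
    DataNode
    SecondaryNameNode
    ResourceManager
    NodeManager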


2. Starting a single process

sbin/start-dfs.sh

---------------

    sbin/hadoop-daemons.sh --config .. --hostname .. start namenode ...
    sbin/hadoop-daemons.sh --config .. --hostname .. start datanode ...
    sbin/hadoop-daemons.sh --config .. --hostname .. start secondarynamenode ...
    sbin/hadoop-daemons.sh --config .. --hostname .. start zkfc ...

 

sbin/start-yarn.sh
--------------  
    libexec/yarn-config.sh
    sbin/yarn-daemon.sh --config $YARN_CONF_DIR  start resourcemanager
    sbin/yarn-daemons.sh  --config $YARN_CONF_DIR  start nodemanager
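When you only need to bring one daemon up or down, the per-daemon scripts that these wrappers call can also be invoked directly. A minimal sketch, assuming the Hadoop 2.x sbin layout:

    sbin/hadoop-daemon.sh start namenode        # start only the NameNode on this host
    sbin/hadoop-daemon.sh stop datanode         # stop only the local DataNode
    sbin/yarn-daemon.sh start resourcemanager   # start only the ResourceManager

Note that the singular daemon.sh scripts act on the local host, while the plural daemons.sh variants act on every host listed in the slaves file.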

3. Commonly used commands

    1. List the contents of a specified directory

    hdfs dfs -ls [directory]

    hdfs dfs -ls -R /    // list the directory tree recursively

    E.g.: hdfs dfs -ls /user/wangkai.pt

    2. View an existing file

    hdfs dfs -cat [file path]

    E.g.: hdfs dfs -cat /user/wangkai.pt/data.txt

    3. Upload a local file to Hadoop

    hdfs dfs -put [local path] [HDFS directory]

    E.g.: hdfs dfs -put /home/t/file.txt /user/t

    4. Upload a local folder to Hadoop

    hdfs dfs -put [local directory] [HDFS directory]
    E.g.: hdfs dfs -put /home/t/dir_name /user/t

    (dir_name is a folder name)

    5. Download a file from Hadoop to a local directory

    hdfs dfs -get [HDFS file path] [local directory]

    E.g.: hdfs dfs -get /user/t/ok.txt /home/t

    6. Delete a specified file on Hadoop

    hdfs dfs -rm [file path]

    E.g.: hdfs dfs -rm /user/t/ok.txt

    7. Delete a specified folder on Hadoop (including its subdirectories)

    hdfs dfs -rmr [directory path]

    E.g.: hdfs dfs -rmr /user/t

    8. Create a new directory under a specified Hadoop directory

    hdfs dfs -mkdir /user/t

    hdfs dfs -mkdir -p /user/centos/hadoop    // -p creates parent directories as needed

    9. Create an empty file under a specified Hadoop directory

    using the touchz command:

    hdfs dfs -touchz /user/new.txt

    10. Rename a file on Hadoop

    using the mv command:

    hdfs dfs -mv /user/test.txt /user/ok.txt    (renames test.txt to ok.txt)

    11. Merge all file contents under a specified Hadoop directory into one file and download it to local

    hdfs dfs -getmerge /user /home/t

    12. Kill a running Hadoop job

    hadoop job -kill [job-id]

  13. Help

  hdfs dfs -help        
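As a quick end-to-end walk-through of the commands above (the paths are illustrative, not from any real cluster):

    hdfs dfs -mkdir -p /user/t                         # create a working directory
    hdfs dfs -put /home/t/file.txt /user/t             # upload a local file
    hdfs dfs -ls /user/t                               # confirm it arrived
    hdfs dfs -cat /user/t/file.txt                     # print its contents
    hdfs dfs -get /user/t/file.txt /home/t/copy.txt    # download it back
    hdfs dfs -rm /user/t/file.txt                      # clean up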

4. Safe mode

  (1) Exit safe mode

      The NameNode enters safe mode automatically at startup. Safe mode is a NameNode state in which the file system does not allow any modifications.

      If the system reports "Name node is in safe mode", the system is in safe mode; you can simply wait a short while, or exit safe mode with the following command: /usr/local/hadoop$ bin/hadoop dfsadmin -safemode leave

  (2) Enter safe mode
    If necessary, you can put HDFS into safe mode with the command: /usr/local/hadoop$ bin/hadoop dfsadmin -safemode enter
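For reference, the same dfsadmin tool can also query safe mode or block until it ends, which is useful in scripts (hdfs dfsadmin is the newer spelling of the command; both forms behave the same here):

    hdfs dfsadmin -safemode get     # report whether safe mode is ON or OFF
    hdfs dfsadmin -safemode wait    # block until the NameNode leaves safe mode
    hdfs dfsadmin -safemode enter   # enter safe mode
    hdfs dfsadmin -safemode leave   # leave safe mode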

 

5. Adding a node

To add a new DataNode, first install Hadoop on the new node with the same configuration as the NameNode (it can be copied directly from the NameNode). Modify the HADOOP_HOME/conf/master file on the new node, adding the NameNode's host name. Then, on the NameNode, modify the HADOOP_HOME/conf/slaves file, adding the new node's host name, and set up passwordless SSH to the new node. Finally, run the start command: /usr/local/hadoop$ bin/start-all.sh
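A minimal sketch of the file edit and of bringing the new node online without restarting the whole cluster (the host name newnode01 is purely illustrative; the conf/slaves path follows the Hadoop 1.x layout used above, while in 2.x it is etc/hadoop/slaves):

    # on the NameNode: register the new host
    echo "newnode01" >> $HADOOP_HOME/conf/slaves

    # on the new node: start just its own daemons
    $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
    $HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager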

 

6. Load balancing

HDFS data may be distributed very unevenly across DataNodes, especially when DataNodes fail or new DataNode nodes are added. The block placement strategy the NameNode uses when choosing DataNodes for new blocks can also lead to an uneven distribution. Users can rebalance the distribution of data blocks across DataNodes with the command: /usr/local/hadoop$ bin/start-balancer.sh
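The balancer accepts a threshold in percentage points: it moves blocks until each DataNode's disk utilization is within that many points of the cluster-wide average. For example:

    bin/start-balancer.sh -threshold 5    # stop once every node is within 5% of average usage

A lower threshold gives a more even cluster but makes the balancer run longer and move more data.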

7. Supplement

1. The basic format of an HDFS operation is: hdfs dfs [option]
1.1 -ls lists an HDFS directory
1.2 -lsr lists an HDFS directory recursively
1.3 -mkdir creates a directory
1.4 -put uploads a file from Linux to HDFS
1.5 -get downloads a file from HDFS to Linux
1.6 -text views the contents of a file
1.7 -rm deletes a file
1.8 -rmr deletes files recursively
2. HDFS stores data split into blocks: if a file exceeds the block size, it is divided according to the block size; a portion smaller than the block size still forms one block, whose size is that of the actual data.
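A worked example, assuming the Hadoop 2.x default block size of 128 MB: a 300 MB file is split into three blocks of 128 MB, 128 MB, and 44 MB, and the last block occupies only 44 MB of storage, not a full 128 MB. You can inspect the blocks actually backing a file with fsck (path illustrative):

    hdfs fsck /user/t/file.txt -files -blocks    # list the file's blocks and their sizes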
Note: PermissionDenyException means insufficient permissions.
Commonly used Hadoop commands:
hdfs dfs  lists all the commands supported by Hadoop HDFS
hdfs dfs -ls  lists directory and file information
hdfs dfs -lsr  recursively lists directory, subdirectory, and file information
hdfs dfs -put test.txt /user/sunlightcs  copies test.txt from the local file system to the HDFS directory /user/sunlightcs
hdfs dfs -get /user/sunlightcs/test.txt .  copies test.txt from HDFS to the local file system; the opposite of -put
hdfs dfs -cat /user/sunlightcs/test.txt  views the contents of test.txt in the HDFS file system
hdfs dfs -tail /user/sunlightcs/test.txt  views the last 1KB of the file
hdfs dfs -rm /user/sunlightcs/test.txt  deletes test.txt from the HDFS file system; rm can also delete empty directories
hdfs dfs -rmr /user/sunlightcs  deletes the /user/sunlightcs directory and all its subdirectories
hdfs dfs -copyFromLocal test.txt /user/sunlightcs/test.txt  copies a file from the local file system to HDFS; equivalent to the put command
hdfs dfs -copyToLocal /user/sunlightcs/test.txt test.txt  copies a file from HDFS to the local file system; equivalent to the get command
hdfs dfs -chgrp [-R] /user/sunlightcs  changes the group of the /user/sunlightcs directory; the -R option applies recursively, same as the Linux command
hdfs dfs -chown [-R] /user/sunlightcs  changes the owner of the /user/sunlightcs directory; the -R option applies recursively
hdfs dfs -chmod [-R] MODE /user/sunlightcs  changes the permissions of the /user/sunlightcs directory; MODE can be 3 octal digits or +/-{rwx}; the -R option applies recursively
hdfs dfs -count [-q] PATH  shows the number of directories and files under PATH, plus file sizes and file/directory names
hdfs dfs -cp SRC [SRC ...] DST  copies files from SRC to DST; if multiple SRC are specified, DST must be a directory
hdfs dfs -du PATH  shows the size of each file in the directory, or the size of the file itself
hdfs dfs -dus PATH  like du, but when PATH is a directory, shows the total size of the directory
hdfs dfs -expunge  empties the recycle bin; a deleted file first goes to the temporary .Trash/ directory, and once the delay period has passed it is permanently deleted
hdfs dfs -getmerge SRC [SRC ...] LOCALDST [addnl]  fetches all files specified by SRC, merges them into a single file, and writes it to LOCALDST on the local file system; the addnl option appends a newline at the end of each file
hdfs dfs -touchz PATH  creates an empty file of length 0
hdfs dfs -test -[ezd] PATH  checks PATH: -e returns 0 if PATH exists, 1 otherwise; -z returns 0 if the file has length 0, 1 otherwise; -d returns 0 if PATH is a directory, 1 otherwise
hdfs dfs -text PATH  shows the contents of a file; for a text file this is the same as cat, and compressed files (gzip and Hadoop binary sequence file formats) are decompressed first
hdfs dfs -help ls  views the help documentation for the ls command
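Since -test communicates only through its exit status, it is most useful inside shell scripts. A minimal sketch (path illustrative):

    if hdfs dfs -test -e /user/sunlightcs/test.txt; then
        echo "file exists"
    else
        echo "file missing"
    fi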

Forwarded from: https://www.cnblogs.com/LHWorldBlog/p/8514994.html

Reproduced from: https://www.cnblogs.com/yifeiyu/p/11044290.html

Origin: blog.csdn.net/weixin_33774615/article/details/93311138