Accomplishing the same task with the shell commands provided by Hadoop

  1. Create a file test.txt in the ~/hadoop/ directory of the local Linux file system and type a few words into it.
    cd ~
    mkdir hadoop
    cd hadoop
    touch test.txt
    gedit test.txt
  2. View file location locally (ls)
    ls -al
  3. Display file contents locally
    cat test.txt
  4. Upload test.txt from the local file system to the input directory under the current user's home directory in HDFS.
    cd /usr/local/hadoop
    ./sbin/start-dfs.sh
    ./bin/hdfs dfs -mkdir -p input
    ./bin/hdfs dfs -put ~/hadoop/test.txt input
  5. View files in hdfs (-ls)
    ./bin/hdfs dfs -ls
  6. Display the contents of the file in HDFS
    ./bin/hdfs dfs -cat input/test.txt
  7. Delete the local test.txt file and view the directory.
    rm ~/hadoop/test.txt
    ls -al ~/hadoop
  8. Download test.txt from HDFS back to its original local location (HDFS is already running from step 4).
    cd /usr/local/hadoop
    ./bin/hdfs dfs -get input/test.txt ~/hadoop
  9. Delete test.txt from HDFS and view the directory.
    ./bin/hdfs dfs -rm input/test.txt
    ./bin/hdfs dfs -ls -R
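The nine steps above can be collected into a single shell sketch. This is a minimal illustration, not the tutorial's exact session: it uses a temporary directory in place of ~/hadoop, assumes the hdfs client is on PATH (e.g. after adding /usr/local/hadoop/bin to PATH), and skips the HDFS round trip entirely when hdfs is not available, so only the local-filesystem half runs unconditionally.

```shell
#!/bin/sh
# Steps 1-3: create and inspect a local file (a temp dir stands in for ~/hadoop).
workdir=$(mktemp -d)
printf 'hello hadoop\n' > "$workdir/test.txt"
ls -al "$workdir"
contents=$(cat "$workdir/test.txt")

# Steps 4-9: HDFS round trip; run only if the hdfs client is available.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -mkdir -p input                  # step 4: create the input directory
  hdfs dfs -put "$workdir/test.txt" input   # step 4: upload the file
  hdfs dfs -ls                              # step 5: list the user directory
  hdfs dfs -cat input/test.txt              # step 6: show the file's contents
  rm "$workdir/test.txt"                    # step 7: delete the local copy
  hdfs dfs -get input/test.txt "$workdir"   # step 8: download it again
  hdfs dfs -rm input/test.txt               # step 9: delete it from HDFS
  hdfs dfs -ls -R
fi

rm -rf "$workdir"                           # clean up the temp directory
```

The `command -v hdfs` guard lets the same script serve both as a dry run on a machine without Hadoop and as a full round trip on a configured node.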
