Big Data Learning (1): Building and Using a Basic Hadoop HDFS Distributed Cluster

Environment: CentOS 6.5 (3 nodes)

  • Disable the firewall, including its start on boot
  • Set SELinux to disabled
  • Set the hostname
  • Set up host mapping
  • Install JDK 1.7
  • Passwordless SSH login
  • hadoop-2.5.2
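
A minimal shell sketch of these preparation steps on CentOS 6 (run as root); the node names follow the hadoop1/2/3.msk.com convention used below, and the IP addresses are placeholders for your own network:

    # Disable the firewall now and on boot
    service iptables stop
    chkconfig iptables off

    # Set SELinux to disabled (fully effective after a reboot)
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

    # Set the hostname (repeat on each node with its own name)
    hostname hadoop1.msk.com

    # Host mapping: the IPs below are example placeholders
    cat >> /etc/hosts <<EOF
    192.168.1.101 hadoop1.msk.com
    192.168.1.102 hadoop2.msk.com
    192.168.1.103 hadoop3.msk.com
    EOF

    # Passwordless SSH from node 1 to every node (including itself)
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    ssh-copy-id hadoop1.msk.com
    ssh-copy-id hadoop2.msk.com
    ssh-copy-id hadoop3.msk.com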

Preparation

Extract Hadoop and create a data/tmp directory under the Hadoop root directory, as shown below.
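
For example, assuming the tarball is named hadoop-2.5.2.tar.gz and /opt/install is the install root (this matches the paths used in the configuration below):

    tar -zxvf hadoop-2.5.2.tar.gz -C /opt/install
    mkdir -p /opt/install/hadoop-2.5.2/data/tmp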

Modify the configuration files

* Configuration file location: the etc/hadoop directory under the Hadoop installation directory
hadoop-env.sh

    export JAVA_HOME=/usr/java/jdk1.7.0_71    # JDK path

core-site.xml

 <!--  Sets the NameNode address; serves as the access entry point for Java programs  -->
 <!--  hadoop1.msk.com is the hostname  -->
        <property>
             <name>fs.defaultFS</name>
             <value>hdfs://hadoop1.msk.com:8020</value>
        </property>
        <!--  Stores the NameNode's persisted data and the DataNode block data  -->
        <!--  Create $HADOOP_HOME/data/tmp manually  -->
        <property>
             <name>hadoop.tmp.dir</name>
             <value>/opt/install/hadoop-2.5.2/data/tmp</value>
        </property>

hdfs-site.xml

 <!--  Sets the replication factor; the default is 3 and can be changed as needed  -->
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
        <!--  Permission checking; optional  -->
        <property>
            <name>dfs.permissions.enabled</name>
            <value>false</value>
        </property>

mapred-site.xml
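
Note: Hadoop 2.x ships only mapred-site.xml.template in etc/hadoop, so this file is typically created from the template first:

    cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml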

<!--  Related to YARN and MapReduce: run MapReduce on YARN  -->
       <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
       </property>

yarn-site.xml

 <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
 </property>

slaves
The hostnames of the machines running DataNodes are configured here; note that node 1 is both the NameNode and a DataNode.

hadoop1.msk.com
hadoop2.msk.com
hadoop3.msk.com
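
The same installation and configuration must be present on all three nodes; one way to sync it from node 1, assuming the /opt/install path above and working passwordless SSH:

    scp -r /opt/install/hadoop-2.5.2 hadoop2.msk.com:/opt/install/
    scp -r /opt/install/hadoop-2.5.2 hadoop3.msk.com:/opt/install/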

Format the NameNode

Purpose: formats the HDFS filesystem and generates the directory that stores data blocks.

   bin/hdfs namenode -format 

Start | stop Hadoop [run on the NameNode]

   sbin/start-dfs.sh
   sbin/stop-dfs.sh
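
These scripts start and stop only HDFS. Since mapred-site.xml and yarn-site.xml are configured above and the YARN web UI is referenced below, YARN is presumably started separately as well:

    sbin/start-yarn.sh
    sbin/stop-yarn.sh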

To verify that the start succeeded, use the jps command to view the running processes.
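
A sketch of what jps might report on node 1 after start-dfs.sh in this layout (the process IDs are placeholders):

    $ jps
    2913 NameNode
    3021 DataNode
    3200 SecondaryNameNode
    3356 Jps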

HDFS shell access

  1. List a directory
    bin/hdfs dfs -ls <path>
  2. Create a folder
    bin/hdfs dfs -mkdir /a
    bin/hdfs dfs -mkdir -p /a/b
  3. Upload a local file to HDFS
    bin/hdfs dfs -put local_path hdfs_path
  4. View file contents
    bin/hdfs dfs -text /a/c
    bin/hdfs dfs -cat /a/c
  5. Delete a file
    bin/hdfs dfs -rm /a/c
  6. Delete a non-empty folder
    bin/hdfs dfs -rm -r /a
  7. Download a file from HDFS to local
    bin/hdfs dfs -get hdfs_path local_path
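
A sample round trip through these commands, assuming a local file named notes.txt:

    bin/hdfs dfs -mkdir -p /a/b
    bin/hdfs dfs -put notes.txt /a/b/notes.txt
    bin/hdfs dfs -cat /a/b/notes.txt
    bin/hdfs dfs -get /a/b/notes.txt ./notes_copy.txt
    bin/hdfs dfs -rm -r /a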

HDFS browser access

http://hadoop1.msk.com:50070 to access the HDFS web UI
http://hadoop1.msk.com:8088 to access the YARN web UI
