Hadoop 2.6.5 pseudo-distributed big data cluster setup

1. Install the JDK

rpm -i jdk-8u231-linux-x64.rpm

2. Configure the Java environment variables

vi /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_231-amd64
PATH=$PATH:$JAVA_HOME/bin

source /etc/profile

3. Configure passwordless SSH login

ssh localhost
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
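If `ssh localhost` still prompts for a password after the steps above, the usual cause is overly loose permissions on `~/.ssh`, which sshd rejects silently. The sketch below tightens them to the values sshd expects (note also that newer OpenSSH releases disable DSA keys by default; on such systems substitute `-t rsa` in the `ssh-keygen` command):

```shell
# sshd refuses key-based login when these are group/world accessible
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After this, `ssh localhost` should log in without prompting for a password.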

 

4. Extract the Hadoop installation package

mkdir -p /opt/ycyz
tar xf hadoop-2.6.5.tar.gz -C /opt/ycyz/

 

5. Configure the Hadoop environment variables

vi + /etc/profile    (the + opens the file at its last line)
export HADOOP_HOME=/opt/ycyz/hadoop-2.6.5
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /etc/profile
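After re-sourcing the profile, the `hadoop` command should resolve from the PATH. A quick sanity check (only the first line of output is shown; the remaining lines are build metadata):

```
$ hadoop version
Hadoop 2.6.5
...
```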

 

6. Configure JAVA_HOME in Hadoop's environment scripts

cd $HADOOP_HOME/etc/hadoop/
vi hadoop-env.sh
vi mapred-env.sh
vi yarn-env.sh

In each of the three files, set:

export JAVA_HOME=/usr/java/jdk1.8.0_231-amd64
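The same edit can be made non-interactively with sed. The sketch below demonstrates the substitution on a scratch copy so it can be tried anywhere; on a real install, point CONF_DIR at $HADOOP_HOME/etc/hadoop instead (and note that in some of these files the JAVA_HOME line ships commented out, in which case simply appending the export line also works):

```shell
# Demonstrated on a scratch directory; for a real install use
# CONF_DIR=$HADOOP_HOME/etc/hadoop instead of mktemp.
CONF_DIR=$(mktemp -d)
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
    # Recreate the stock placeholder line for the demonstration
    printf 'export JAVA_HOME=${JAVA_HOME}\n' > "$CONF_DIR/$f"
done

# Replace the placeholder with the concrete JDK path in all three files
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
    sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_231-amd64|' "$CONF_DIR/$f"
done

grep JAVA_HOME "$CONF_DIR/hadoop-env.sh"
</imports> is not needed: the loop and sed are self-contained.
```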

 

7. Configure core-site.xml

vi core-site.xml

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/ycyz/hadoop/local</value>
    </property>
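The fs.defaultFS value refers to this machine by the hostname hadoop-1, so that name must resolve to the machine's own address or HDFS clients will fail to connect. A typical /etc/hosts entry looks like the following (the IP address below is a placeholder; use your machine's actual address):

```
192.168.1.10   hadoop-1
```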

 

8. Configure hdfs-site.xml

vi hdfs-site.xml
              
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-1:50090</value>
    </property>

 

9. Configure the slaves file

vi slaves
    hadoop-1

10. Format HDFS

hdfs namenode -format

(Format only once. Do not run this command again after the cluster has been started: reformatting generates a new clusterID on the NameNode that no longer matches the existing DataNode data directory, and the DataNode will then fail to start.)

 

11. Start the cluster

start-dfs.sh
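Once start-dfs.sh finishes, jps should show the three HDFS daemons of a pseudo-distributed setup (the process IDs will differ on every run):

```
$ jps
1234 NameNode
1345 DataNode
1456 SecondaryNameNode
1567 Jps
```

If any of the three daemons is missing, check its log under $HADOOP_HOME/logs.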

 

Note:

Check the daemon processes: jps
Built-in help: hdfs
               hdfs dfs

View the web UI: IP:50070

    Create a directory: hdfs dfs -mkdir -p /user/root

    List a directory:   hdfs dfs -ls /

    Upload a file:      hdfs dfs -put hadoop-2.6.5.tar.gz /user/root

    Stop the cluster:   stop-dfs.sh

 


Origin: www.cnblogs.com/mstoms/p/11741278.html