12. Hadoop pseudo-distributed setup on Linux

Foreword:

My directory layout: under /opt there are two folders, one for the installed software (/opt/bigdata) and one for the original archive files (/opt/install). Inside each I created elk (for Elasticsearch-related files), hadoop (for Hadoop-related files), and java (for the JDK and Tomcat).
1. First transfer the hadoop-native-64-2.6.0.tar and hadoop-2.6.0-cdh5.14.2.tar.gz files via Xftp to /opt/install/hadoop
2. Extract hadoop-2.6.0-cdh5.14.2.tar.gz to /opt/bigdata/hadoop

tar -zxvf hadoop-2.6.0-cdh5.14.2.tar.gz  -C /opt/bigdata/hadoop

3. Go to /opt/bigdata/hadoop and rename the extracted directory to hadoop260

mv  (name of the extracted directory)  hadoop260
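Steps 2 and 3 together are just an extract-and-rename pattern. The sketch below demonstrates it in a throwaway sandbox directory (a fake tarball stands in for the real CDH archive, and $work stands in for /opt/install and /opt/bigdata):

```shell
# Demo of the extract-and-rename pattern in a throwaway sandbox
work=$(mktemp -d)
mkdir -p "$work/src/hadoop-2.6.0-cdh5.14.2" "$work/bigdata/hadoop"
# Build a stand-in tarball with the same top-level directory name
tar -czf "$work/hadoop-2.6.0-cdh5.14.2.tar.gz" -C "$work/src" hadoop-2.6.0-cdh5.14.2
# Extract into the destination, then rename, exactly as in steps 2 and 3
tar -zxf "$work/hadoop-2.6.0-cdh5.14.2.tar.gz" -C "$work/bigdata/hadoop"
mv "$work/bigdata/hadoop/hadoop-2.6.0-cdh5.14.2" "$work/bigdata/hadoop/hadoop260"
ls "$work/bigdata/hadoop"   # prints: hadoop260
```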

4. Modify directory permissions

[root@vbserver hadoop]# chown -R root:root hadoop260/

5. Associate jdk

[root@vbserver hadoop260]# vi etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/bigdata/java/jdk180

6. Hadoop fs file system (core-site.xml)

[root@vbserver hadoop260]# vi etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://(your VM's IP):9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/bigdata/hadoop/hadoop260</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>

7. Hadoop replication and HDFS storage directories

[root@vbserver hadoop]# pwd
/opt/bigdata/hadoop

[root@vbserver hadoop]# mkdir hdfs
[root@vbserver hadoop]# ls
hadoop260  hdfs

[root@vbserver hadoop]# cd hdfs/
[root@vbserver hdfs]# mkdir namenode datanode
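The mkdir/cd/mkdir sequence above can be collapsed into a single command with `mkdir -p`, which creates the whole path in one go. A sandbox sketch ($base stands in for /opt/bigdata/hadoop):

```shell
# One command creates hdfs/ and both subdirectories at once
base=$(mktemp -d)   # sandbox stand-in for /opt/bigdata/hadoop
mkdir -p "$base/hdfs/namenode" "$base/hdfs/datanode"
ls "$base/hdfs"   # shows: datanode  namenode
```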
[root@vbserver hdfs]# ls
datanode  namenode

[root@vbserver hadoop260]# vi etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/bigdata/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/bigdata/hadoop/hdfs/datanode</value>
  </property>
</configuration>

8. Hadoop MapReduce computing framework

[root@vbserver hadoop260]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[root@vbserver hadoop260]# vi etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

9. Hadoop YARN management and scheduling

[root@vbserver hadoop260]# vi etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
</configuration>

10. Hadoop slaves hostname

[root@vbserver hadoop260]# vi etc/hadoop/slaves
vbserver   (the hostname of the current VM; mine is vbserver)
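Instead of typing the hostname by hand, you can fill the slaves file from the machine's own `hostname` command. A sandbox sketch ($confdir stands in for $HADOOP_HOME/etc/hadoop):

```shell
# Write the current machine's hostname into the slaves file
confdir=$(mktemp -d)   # sandbox stand-in for $HADOOP_HOME/etc/hadoop
hostname > "$confdir/slaves"
cat "$confdir/slaves"   # shows this machine's hostname
```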

11. Hadoop environment variables

[root@vbserver hadoop260]# vi /etc/profile
export JAVA_HOME=/opt/bigdata/java/jdk180
export TOMCAT_HOME=/opt/bigdata/java/tomcat85
export NODE_HOME=/opt/bigdata/elk/node891

export HADOOP_HOME=/opt/bigdata/hadoop/hadoop260
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$NODE_HOME/bin:$JAVA_HOME/bin:$TOMCAT_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

[root@vbserver hadoop260]# source /etc/profile
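After sourcing the profile, a quick sanity check confirms that both the bin and sbin entries made it onto the PATH (this sketch re-exports the same values as the profile above, so it can run standalone):

```shell
# Same values as in /etc/profile above
export HADOOP_HOME=/opt/bigdata/hadoop/hadoop260
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
# Count PATH entries containing hadoop260; expect 2 (bin and sbin)
echo "$PATH" | tr ':' '\n' | grep -c hadoop260
```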

12. Format HDFS

[root@vbserver hadoop260]# cd bin
[root@vbserver bin]# hdfs namenode -format

(If the output contains "successfully formatted", the format succeeded.)
13. Hadoop native library

[root@vbserver bin]# cd /opt/install/hadoop
[root@vbserver hadoop]# tar -xf hadoop-native-64-2.6.0.tar -C /opt/bigdata/hadoop/hadoop260/lib/native/

14. Start Hadoop

[root@vbserver hadoop]# cd /opt/bigdata/hadoop/hadoop260/bin
[root@vbserver bin]# start-all.sh

(start-all.sh and its counterpart stop-all.sh actually live in $HADOOP_HOME/sbin; they are found here because sbin was added to the PATH in step 11. Run stop-all.sh whenever you want to shut all the daemons down.)

15. Eliminating the password prompts when starting or stopping Hadoop (passwordless SSH)

[root@vbserver bin]# cd ~
[root@vbserver ~]# cd .ssh/
[root@vbserver .ssh]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@vbserver .ssh]# cat id_rsa.pub >> authorized_keys 
[root@vbserver .ssh]# ssh localhost
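The steps above assume a key pair (id_rsa / id_rsa.pub) already exists in ~/.ssh. If the directory is empty, generate one first. The sketch below runs in a sandbox directory so it is safe to test; on the real machine, use ~/.ssh instead of $d:

```shell
# Generate a key pair with an empty passphrase, then authorize it
d=$(mktemp -d)   # sandbox stand-in for ~/.ssh
ssh-keygen -q -t rsa -N "" -f "$d/id_rsa"
cat "$d/id_rsa.pub" >> "$d/authorized_keys"
chmod 600 "$d/authorized_keys"   # sshd rejects group/world-writable authorized_keys
ls "$d"
```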

16. Start the JobHistory server

[root@vbserver bin]# cd ../sbin/
[root@vbserver sbin]# jps
[root@vbserver sbin]# ./mr-jobhistory-daemon.sh start historyserver

17. View the status of the Hadoop services

[root@vbserver sbin]# jps
6800 NodeManager
7329 Jps
6387 DataNode
6548 SecondaryNameNode
6264 NameNode
6697 ResourceManager
7259 JobHistoryServer

18. YARN ResourceManager web UI: http://192.168.6.200:8088/
19. NameNode (HDFS) web UI: http://192.168.6.200:50070/
20. JobHistory web UI: http://192.168.6.200:19888/

Origin blog.csdn.net/weixin_44695793/article/details/108010521