Hadoop pseudo-distributed setup (applies to Hadoop 2.x in general)

First, a note on my environment: CentOS 7.6 (64-bit)

Packages to prepare:

jdk-8u231-linux-x64.tar.gz

hadoop-2.6.5.tar.gz

If you are using Ubuntu or another Linux distribution, the approach in this article is the same; only some of the commands differ slightly.

1. Turn off the firewall first (recommended)

Run systemctl stop firewalld.service

# Check that the firewall is stopped
[root@lft soft]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
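Stopping the service only lasts until the next reboot. To keep the firewall off across reboots (common for throwaway test machines; re-enable it if the machine is exposed), also run:

systemctl disable firewalld.service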

2. Configure passwordless SSH

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
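To confirm that passwordless login works, the following should run without prompting for a password (lft is the hostname used throughout this article; substitute your own):

ssh lft date

Note that OpenSSH 7.0 and later disables DSA (ssh-dss) keys by default, so on newer systems you may need to repeat the two commands above with -t rsa and ~/.ssh/id_rsa instead.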


3. Unpack and install the JDK and Hadoop, and configure environment variables

(1) Unpack the JDK and configure environment variables
tar -xf jdk-8u231-linux-x64.tar.gz

[root@lft jdk1.8.0_231]# pwd
/root/soft/jdk1.8.0_231
[root@lft jdk1.8.0_231]# vim /etc/profile
# Append at the end of the file
export JAVA_HOME=/root/soft/jdk1.8.0_231
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:.
# Put the JDK's bin directory first so it takes precedence over any preinstalled OpenJDK
export PATH=$JAVA_HOME/bin:$PATH
# Make the configuration take effect
[root@lft jdk1.8.0_231]# source /etc/profile
# Verify the installation
[root@lft jdk1.8.0_231]# java -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)

(2) Unpack Hadoop and configure environment variables
tar -xf hadoop-2.6.5.tar.gz

# Also appended to the end of /etc/profile, followed by another source /etc/profile
export HADOOP_HOME=/root/soft/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
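A quick check that the Hadoop variables took effect:

[root@lft soft]# hadoop version
# the first line of output should read: Hadoop 2.6.5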

(3) Also set the JAVA_HOME environment variable in Hadoop's own env scripts

In the following three files under ../xx/hadoop-2.6.5/etc/hadoop/, change the value of JAVA_HOME to the path just configured: export JAVA_HOME=/root/soft/jdk1.8.0_231.

vi hadoop-env.sh
vi mapred-env.sh
vi yarn-env.sh
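Editing the three files by hand works fine; alternatively, here is a sed one-liner sketch (assumes GNU sed, as shipped with CentOS; the pattern matches the JAVA_HOME line whether or not it is commented out):

cd /root/soft/hadoop-2.6.5/etc/hadoop
sed -i 's|^#\? *export JAVA_HOME=.*|export JAVA_HOME=/root/soft/jdk1.8.0_231|' hadoop-env.sh mapred-env.sh yarn-env.sh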

4. Configure core-site.xml

Edit the core-site.xml file under ../xx/hadoop-2.6.5/etc/hadoop/:
vi core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://lft:9000</value>
<!-- change the IP address or hostname above to match your machine -->
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/lft/hadoop/local</value>
    </property>
</configuration>
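The hostname lft in fs.defaultFS must resolve to this machine. If it does not yet, add a mapping to /etc/hosts (the IP below is a placeholder; use your machine's actual address):

echo "192.168.1.100 lft" >> /etc/hosts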

5. Configure hdfs-site.xml

Edit the hdfs-site.xml file under ../xx/hadoop-2.6.5/etc/hadoop/ (dfs.replication is set to 1 because a pseudo-distributed cluster has only a single DataNode):
vi hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>lft:50090</value>
<!-- change the IP address or hostname above to match your machine -->
    </property>
</configuration>

6. Configure mapred-site.xml

Under ../xx/hadoop-2.6.5/etc/hadoop/ there is only a mapred-site.xml.template file;
copy it to mapred-site.xml first, then edit:
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

7. Configure yarn-site.xml

Edit the yarn-site.xml file under ../xx/hadoop-2.6.5/etc/hadoop/:
vi yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>lft</value>
        <!-- change the hostname or IP address above to match your machine -->
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

8. Configure the slaves file

Edit the slaves file under ../xx/hadoop-2.6.5/etc/hadoop/:

vi slaves

The file needs only one line: lft (localhost also works; I changed it to my own hostname)

9. Format HDFS

hdfs namenode -format (format only once; do not run this again when restarting the cluster)
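If the first attempt fails and you genuinely need to re-format, stop HDFS and clear the hadoop.tmp.dir from step 4 first; otherwise the DataNode keeps the old clusterID and will refuse to start:

stop-dfs.sh
rm -rf /var/lft/hadoop/local
hdfs namenode -format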

10. Start the cluster and verify that the environment is successfully set up

start-dfs.sh
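Since mapred-site.xml and yarn-site.xml were configured above, YARN can optionally be started as well:

start-yarn.sh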

(1) Check the daemon processes: jps

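With start-dfs.sh alone, jps should list the three HDFS daemons (the PIDs below are illustrative; ResourceManager and NodeManager appear in addition if you also ran start-yarn.sh):

[root@lft ~]# jps
2577 NameNode
2708 DataNode
2896 SecondaryNameNode
3015 Jps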

For help, run hdfs or hdfs dfs with no arguments to see the available commands.
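A quick smoke test against the new filesystem (the paths are just examples):

hdfs dfs -mkdir -p /user/root
hdfs dfs -put /etc/profile /user/root
hdfs dfs -ls /user/root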

(2) View the web UI in a browser: http://<your-IP>:50070

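If the server has no browser, a shell-side sanity check works too (hostname as configured earlier):

curl -s http://lft:50070 | head -n 5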
