Setting Up a Pseudo-Distributed Hadoop 3.1 Cluster on Linux

JDK 1.8 is used here. First, add the JDK and the Hadoop installation to the environment variables:

vim /etc/profile    # set the Hadoop and Java environment variables
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME=/opt/java1.8
export PATH=$PATH:$JAVA_HOME/bin
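
After editing /etc/profile, reload it so the variables take effect in the current shell. A quick verification, assuming the paths above:

source /etc/profile
java -version          # should report the 1.8 JDK
hadoop version         # should report Hadoop 3.1.x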

Next, modify the configuration files; they live under /usr/hadoop/etc/hadoop/.

Modify etc/hadoop/core-site.xml

vim /usr/hadoop/etc/hadoop/core-site.xml

Set the configuration as follows:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
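
To confirm Hadoop picks the value up, hdfs getconf prints the resolved setting; a quick sanity check:

hdfs getconf -confKey fs.defaultFS    # should print hdfs://localhost:9000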

Modify etc/hadoop/hdfs-site.xml

vim /usr/hadoop/etc/hadoop/hdfs-site.xml

Set the configuration as follows:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop/data/dfs/datanode</value>
    </property>
</configuration>
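
The NameNode directory is created by the format step and the DataNode directory on first startup, but creating them up front avoids permission surprises; a sketch using the paths configured above:

mkdir -p /opt/hadoop/data/dfs/namenode /opt/hadoop/data/dfs/datanode
# the user that runs Hadoop must be able to write to them,
# e.g. chown -R <hadoop-user> /opt/hadoop/data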

Alternatively, core-site.xml can set hadoop.tmp.dir instead; the dfs name/data directories then default to subdirectories of it:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>
<!-- Directory where Hadoop stores the files it generates at runtime -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/CI/hadoop-3.1.0/HadoTmp</value>
</property>
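
Likewise, hadoop.tmp.dir should exist and be writable by the user running Hadoop; using the path above:

mkdir -p /opt/CI/hadoop-3.1.0/HadoTmp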

Modify etc/hadoop/mapred-site.xml


<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Modify etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
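
Once YARN is running (see the startup section below), a quick way to confirm the NodeManager registered with the ResourceManager:

yarn node -list    # should list one node in RUNNING state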

Then configure the following in hadoop-env.sh:

export JAVA_HOME=/opt/java1.8    # use the actual JDK path on your machine

Then run:

hadoop namenode -format    # initialize HDFS (Hadoop 3 prefers: hdfs namenode -format)

Then running start-all.sh reports an error: when starting as root, Hadoop 3.1 aborts with messages along the lines of "Attempting to operate on hdfs namenode as root ... but there is no HDFS_NAMENODE_USER defined".

Hadoop 3.1 needs the user names from that error configured in hadoop-env.sh:

# JAVA_HOME
export JAVA_HOME=/home/root/jdk/jdk1.8.0_171
 
# USERS
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
 
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
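
With the users configured, start-all.sh should come up cleanly; jps then confirms all five daemons are running (process IDs will differ):

start-all.sh
jps    # expect: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager

As a further smoke test, run one of the bundled examples, e.g. hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar pi 2 5 (jar version assumed to match the 3.1.0 install mentioned above).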

---------------------------------------------------------------------- Startup ----------------------------------------------------------------------

First run: bin/hdfs namenode -format (if HDFS was already formatted above, do not format again: a second format gives the NameNode a new clusterID that no longer matches existing DataNode data)

Then start the NameNode and DataNode daemons:

sbin/start-dfs.sh
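
If the daemons start, the NameNode web UI is the easiest health check; note that Hadoop 3.x moved it from port 50070 to 9870:

curl -s http://localhost:9870/ | head    # or open http://localhost:9870 in a browser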

This reports an error:

Passwordless SSH login needs to be set up; the error is the classic "SSH permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)".
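
A minimal passwordless-SSH setup, assuming sshd allows key-based login to localhost:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa      # generate a key with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
ssh localhost                                 # should now log in without a password prompt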

Reprinted from blog.csdn.net/lxlmycsdnfree/article/details/81606356