Hadoop pseudo-distributed single-node installation (based on Hadoop v2.7.2)


Tip: install all the components under one fixed directory; here I use:

/opt

Upload the Hadoop archive to the server and extract it into /opt:

tar -zxvf /software/hadoop-2.7.2.tar.gz -C /opt/

Change into the configuration directory ./hadoop-2.7.2/etc/hadoop/ and make the following changes.
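For example (assuming the archive was extracted under /opt as above):

cd /opt/hadoop-2.7.2/etc/hadoop/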

Edit hadoop-env.sh and set JAVA_HOME to your own JDK path.

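The relevant line in hadoop-env.sh looks roughly like this (the JDK path below is only an example; point it at wherever your JDK is actually installed):

export JAVA_HOME=/opt/jdk1.8.0_144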

Edit core-site.xml to configure the HDFS namespace address (hostname/IP and port) and the file read buffer size.

<!-- Namespace (default filesystem) of HDFS; replace houda with your own hostname or IP -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://houda:9000</value>
</property>
<!-- Buffer size used when reading HDFS files -->
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>

Edit hdfs-site.xml to configure the replication factor, data directories, and web addresses. Change the hostnames inside (houda, houda02) to your own.

<!-- Number of replicas for each HDFS block -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<!-- Directory where the NameNode stores HDFS metadata -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoopdata/dfs/name</value>
</property>
<!-- Directory where the DataNode stores HDFS data blocks -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/hadoopdata/dfs/data</value>
</property>
<!-- HDFS (NameNode) web UI address -->
<property>
<name>dfs.http.address</name>
<value>houda:50070</value>
</property>
<!-- SecondaryNameNode web UI address -->
<property>
<name>dfs.secondary.http.address</name>
<value>houda02:50090</value>
</property>
<!-- Whether WebHDFS (web access to HDFS) is enabled -->
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<!-- Whether HDFS permission checking is enabled -->
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>

Copy the MapReduce configuration template and rename it:

cp mapred-site.xml.template mapred-site.xml

Edit mapred-site.xml to configure the framework name, the job history service, and the uber task settings.

<!-- Framework that MapReduce jobs run on -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
</property>
<!-- Internal RPC address of the MapReduce job history server -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>houda:10020</value>
</property>
<!-- Web UI address of the MapReduce job history server -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>houda:19888</value>
</property>
<!-- Whether small jobs may run as a single "uber" task in one JVM -->
<property>
<name>mapreduce.job.ubertask.enable</name>
<value>true</value>
</property>
<!-- Maximum number of map tasks a job may have and still run as an uber task -->
<property>
<name>mapreduce.job.ubertask.maxmaps</name>
<value>9</value>
</property>
<!-- Maximum number of reduce tasks a job may have and still run as an uber task -->
<property>
<name>mapreduce.job.ubertask.maxreduces</name>
<value>1</value>
</property>

Edit yarn-site.xml to configure the ResourceManager-related settings and log aggregation.

<!-- Site specific YARN configuration properties -->
<!-- Hostname/IP of the node that runs the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>houda</value>
</property>
<!-- Auxiliary service the NodeManager uses to handle MapReduce shuffle data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Internal RPC address of the ResourceManager -->
<property>
<name>yarn.resourcemanager.address</name>
<value>houda:8032</value>
</property>
<!-- Internal address of the ResourceManager scheduler -->
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>houda:8030</value>
</property>
<!-- Internal address of the resource-tracker component -->
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>houda:8031</value>
</property>
<!-- Internal address of the ResourceManager admin interface -->
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>houda:8033</value>
</property>
<!-- YARN web UI address -->
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>houda:8088</value>
</property>

<!-- Whether YARN log aggregation is enabled -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- How long aggregated logs are retained on HDFS (seconds) -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>86400</value>
</property>
<!-- How often to check for aggregated logs to delete (seconds) -->
<property>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>3600</value>
</property>
<!-- How long the NodeManager keeps container logs locally (seconds; used when log aggregation is disabled) -->
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
</property>
<!-- HDFS directory that application logs are moved to after an application finishes (effective when log aggregation is enabled) -->
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/opt/hadoopdata/logs</value>
</property>

Edit slaves

In the slaves file, write the hostname of each virtual machine, one per line; if you have several virtual machines, list all of them. Here I only have one machine for testing, as shown below.
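For a single test machine the slaves file contains just that machine's hostname, e.g. (using the hostname houda from the configuration above; replace it with your own):

houda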

Configure the hostname-to-IP mapping:

vim /etc/hosts

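A sketch of the entries to add to /etc/hosts (the IP addresses below are only examples; use your machines' real addresses and hostnames):

192.168.8.120 houda
192.168.8.121 houda02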

If you have more than one virtual machine, copy the configured hadoop directory to the other machines:

scp -r /opt/hadoop-2.7.2/ root@192.168.8.121:/opt/

Configure the environment variables on each machine:

vim /etc/profile

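A minimal sketch of the lines to append to /etc/profile (assuming Hadoop lives in /opt/hadoop-2.7.2 and the JDK path used earlier; adjust both to your own paths):

export JAVA_HOME=/opt/jdk1.8.0_144
export HADOOP_HOME=/opt/hadoop-2.7.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

After saving, run source /etc/profile so the changes take effect in the current shell.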

Configure passwordless SSH login. First generate an SSH key pair:

ssh-keygen -t rsa (just press Enter four times to accept the defaults)

// After this command finishes, two files are generated: id_rsa (private key) and id_rsa.pub (public key)
// Copy the public key to every machine you want to log in to without a password
Note: with multiple virtual machines, you need to run ssh-copy-id once for each of them.

ssh-copy-id houda
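If the key was copied successfully, you should now be able to ssh to that host without being asked for a password (again, houda is the example hostname used in this post):

ssh houda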

Format the HDFS NameNode:

hadoop namenode -format

Start all the services and check that they are up:

start-all.sh
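You can check that the daemons came up with jps. On a single-node setup you would expect to see roughly the following processes (the PIDs will differ):

jps
# NameNode
# DataNode
# SecondaryNameNode
# ResourceManager
# NodeManager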

Verify that HDFS and YARN are working by opening their web UIs (the HDFS NameNode at port 50070 and the YARN ResourceManager at port 8088, as configured above).



Origin blog.csdn.net/weixin_38620636/article/details/104968071