"Hadoop" Big Data technologies to develop practical study notes (b)

Building a Hadoop 2.x Distributed Cluster


1. Hadoop cluster role assignment
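One concrete piece of the role assignment lives in the etc/hadoop/slaves file, which lists the DataNode/NodeManager hosts, one hostname per line. A sketch for a small three-node cluster (the hostnames other than centos01 are assumptions, not taken from the post):

```
# etc/hadoop/slaves -- one DataNode/NodeManager host per line
# (centos02 and centos03 are assumed names for the worker nodes)
centos01
centos02
centos03
```

In small clusters the master node (centos01) often doubles as a worker, which is why it appears in the list as well.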

2. Upload and extract Hadoop
On centos01, upload the installation file to the /opt/softwares/ directory, then extract it into /opt/modules/:

cd /opt/softwares/
tar -zxf hadoop-2.9.2.tar.gz -C /opt/modules/

3. Configure environment variables
Only the centos01 node needs to be configured here; the configuration will be distributed to the other nodes later by remote copy.
a. Edit the file /etc/profile:

sudo nano /etc/profile

Add the following lines at the end of the file:

export HADOOP_HOME=/opt/modules/hadoop-2.9.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Reload the profile file so the changes take effect:

source /etc/profile

Run the hadoop version command; if it prints the version information, the configuration has taken effect.

4. Configure the Hadoop environment scripts
In the etc/hadoop directory under the Hadoop installation directory, modify the following configuration files:

hadoop-env.sh
mapred-env.sh
yarn-env.sh

Add the JAVA_HOME environment variable to each of the three files above:

export JAVA_HOME=/opt/modules/jdk1.8.0_144
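Rather than editing each file by hand, the same line can be appended to all three scripts in one pass; a minimal sketch, assuming the current directory is $HADOOP_HOME/etc/hadoop:

```shell
# Append the JAVA_HOME export to all three env scripts at once.
# (Appending a second export simply overrides the earlier placeholder value;
# you can also edit the existing "export JAVA_HOME=" line in place instead.)
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
    echo 'export JAVA_HOME=/opt/modules/jdk1.8.0_144' >> "$f"
done
```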

5. Configure HDFS (omitted)
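For reference, a minimal core-site.xml and hdfs-site.xml for a small cluster might look like the following; the NameNode hostname (centos01), port, temp directory, and replication factor are assumed values, not taken from the post:

```xml
<!-- core-site.xml: default filesystem and temp/data directory
     (hdfs://centos01:9000 and the tmp path are assumptions) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://centos01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/modules/hadoop-2.9.2/tmp</value>
  </property>
</configuration>

<!-- hdfs-site.xml: block replication factor (assumed: 2) -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```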

6. Configure YARN (omitted)
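Similarly, a minimal mapred-site.xml and yarn-site.xml for this kind of setup might look like the following; the ResourceManager hostname is an assumption:

```xml
<!-- mapred-site.xml: run MapReduce jobs on YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- yarn-site.xml: ResourceManager host (assumed: centos01)
     and the shuffle auxiliary service needed by MapReduce -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>centos01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```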

7. Copy the Hadoop files to the other hosts
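A sketch of the copy step, assuming two worker nodes named centos02 and centos03 (the post does not name them) and passwordless SSH already set up:

```shell
# Copy the configured Hadoop directory from centos01 to the other nodes
scp -r /opt/modules/hadoop-2.9.2 centos02:/opt/modules/
scp -r /opt/modules/hadoop-2.9.2 centos03:/opt/modules/
```

Remember that the /etc/profile changes from step 3 also need to be applied on each node (and reloaded with source /etc/profile).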

8. Format the NameNode
Run the following once on the NameNode host before the first start (in Hadoop 2.x the older hadoop namenode -format form still works but is deprecated):

hdfs namenode -format

9. Start Hadoop

start-all.sh

After startup, run jps on each node to confirm that the expected daemons (NameNode, DataNode, ResourceManager, NodeManager, etc.) are running.

Origin www.cnblogs.com/zonkidd/p/11922345.html