hadoop3.3.0 download, installation and configuration

Hadoop download address (all versions): http://archive.apache.org/dist/hadoop/common/

Before installing Hadoop you need to install the JDK and configure the Java environment.
JDK 1.8 download link: https://pan.baidu.com/s/1pj4yAiA3tmWe-nO9780ITg?pwd=hp9d (extraction code: hp9d)

1. Create tools and training folders

Directly upload jdk and hadoop installation packages to tools
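A minimal sketch of this step, assuming both directories are created at the filesystem root (which matches the /tools and /training paths used later):

 mkdir /tools /training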

2. Unzip, install and configure jdk

2.1 Unzip

Enter the tools folder and extract the jdk to the training folder.

[root@localhost /]# cd tools
[root@localhost tools]# ls
hadoop-3.3.0.tar.gz  jdk-8u144-linux-x64.tar.gz
[root@localhost tools]# tar -zxvf jdk-8u144-linux-x64.tar.gz -C /training/

After decompressing, go to the training folder to confirm that the extraction completed.
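One way to confirm the JDK was extracted, for example:

 ls /training/

The listing should include a jdk1.8.0_144 directory.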

2.2 Configuration

Enter this command to configure environment variables

vi ~/.bash_profile

After the file opens, press i to enter insert mode, then add the following content to the file:

#java
export JAVA_HOME=/training/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Note: the JDK path and version number must match your actual installation path and version.
After you finish typing, press the ESC key to exit insert mode.

In insert mode, INSERT is displayed at the bottom of the screen.

After pressing ESC, the INSERT indicator disappears.
Then type :wq and press Enter to save and exit.

  • :wq saves and exits
  • :q! forces an exit without saving

Enter the following command to make the environment variables take effect:

 source ~/.bash_profile
2.3 Check whether the configuration is successful

Enter the following command to check whether the configuration is successful

 java -version

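If the configuration is correct, the output should look similar to the following (the exact build numbers may differ):

 java version "1.8.0_144"
 Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
 Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)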

3. Install hadoop

3.1 Configure host name

Here niit is the host name to set; you can choose a different name if you like.

hostnamectl --static set-hostname niit
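You can verify the change, for example:

 hostnamectl status

The Static hostname field should now show niit.
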
3.2 Configure IP host name mapping relationship

Modify the hosts file and configure the mapping relationship.
Enter the following command to modify it.

vi /etc/hosts

Type the following line into the file (press i to enter edit mode, :wq to save and exit).
The first field is your machine's IP address and the second is the host name you just configured. Be sure to use your own actual IP address and host name!

192.168.149.128 niit

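To confirm the mapping works, you can ping the host name, for example:

 ping -c 3 niit
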
Configure another mapping file

3.3 Turn off the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service

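You can confirm the firewall is stopped and disabled, for example:

 systemctl status firewalld.service

It should be reported as inactive (dead).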

3.4 Unzip hadoop

Enter the tools folder and extract hadoop to the training folder

cd tools
tar -zxvf hadoop-3.3.0.tar.gz -C /training/


3.5 Configure hadoop environment variables
vi ~/.bash_profile

After opening the file, type the following (press i to enter edit mode, :wq to save and exit):

#hadoop
export HADOOP_HOME=/training/hadoop-3.3.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Make the environment variables take effect:

 source ~/.bash_profile
3.6 Enter hdfs to check whether hadoop is installed successfully

Type the hdfs command; if it prints its usage information, Hadoop is installed successfully.
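Alternatively, a quick version check, for example:

 hdfs version

should report Hadoop 3.3.0.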

3.7 Configure hadoop password-free login

Create a tmp folder under the Hadoop installation path; it will be used later as Hadoop's working directory (hadoop.tmp.dir).

 mkdir /training/hadoop-3.3.0/tmp 

To configure password-free SSH login, first generate a key pair, pressing Enter four times to accept the defaults and entering nothing at the prompts.
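A minimal sketch of the key-generation step, assuming the default RSA key type (the next step copies id_rsa.pub):

 ssh-keygen -t rsa

Then copy the public key to the local host with the following commands: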

cd ~/.ssh/
ssh-copy-id -i id_rsa.pub root@niit

niit is the host name of your own machine

3.8 Configure the Hadoop configuration files
3.8.1 Go to the Hadoop configuration directory
cd /training/hadoop-3.3.0/etc/hadoop/


  1. Configure hadoop-env.sh file
vi hadoop-env.sh

After opening the file, find the JAVA_HOME line and add the following below it (press i to enter edit mode, :wq to save and exit):

export JAVA_HOME=/training/jdk1.8.0_144
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

2. Configure hdfs-site.xml file

vi hdfs-site.xml

After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter edit mode, :wq to save and exit):

<property>
	<name>dfs.replication</name>
	<value>1</value>
</property>
<property>
	<name>dfs.permissions</name>
	<value>false</value>
</property>

3. Configure the core-site.xml file

vi core-site.xml

After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter edit mode, :wq to save and exit).

Here niit is the host name; it must match your actual host name.

<property>
	<name>fs.defaultFS</name>
	<value>hdfs://niit:8020</value>
</property>			
<property>
	<name>hadoop.tmp.dir</name>
	<value>/training/hadoop-3.3.0/tmp</value>
</property>

4. Configure the mapred-site.xml file

vi mapred-site.xml

After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter edit mode, :wq to save and exit):
<property>	
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>


5. Configure the yarn-site.xml file
 vi yarn-site.xml

After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter edit mode, :wq to save and exit):

<property>
	<name>yarn.resourcemanager.hostname</name>
	<value>niit</value>
</property>
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>mapreduce_shuffle</value>
</property>


3.9 Format the Hadoop NameNode (master node)
hdfs namenode -format
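If the format succeeds, the tmp directory configured as hadoop.tmp.dir in core-site.xml should now contain NameNode metadata; for example:

 ls /training/hadoop-3.3.0/tmp

should show a dfs subdirectory.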
4.0 hadoop startup and shutdown

Start Hadoop

start-all.sh

View the running Java processes

jps

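For this single-node setup, jps should list the following processes, each preceded by its process ID:

 NameNode
 DataNode
 SecondaryNameNode
 ResourceManager
 NodeManager
 Jps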

Stop Hadoop
stop-all.sh

View the processes again

jps

If everything above behaves as expected, the installation succeeded. If a process is missing after startup, there is a problem with one of the configuration files; check and correct the relevant file.


Origin blog.csdn.net/weixin_41907283/article/details/132868313