Hadoop 3.3.0 download, installation and configuration
Hadoop download address:
http://archive.apache.org/dist/hadoop/common/
To install Hadoop, you must first install the JDK and configure the Java environment.
JDK 1.8 download link:
Link: https://pan.baidu.com/s/1pj4yAiA3tmWe-nO9780ITg?pwd=hp9d    Extraction code: hp9d
1. Create the tools and training folders under the root directory (/)
Upload the JDK and Hadoop installation packages into the tools folder.
2. Unzip, install and configure jdk
2.1 Unzip
Enter the tools folder and extract the jdk to the training folder.
[root@localhost /]# cd tools
[root@localhost tools]# ls
hadoop-3.3.0.tar.gz jdk-8u144-linux-x64.tar.gz
[root@localhost tools]# tar -zxvf jdk-8u144-linux-x64.tar.gz -C /training/
After extraction, check the training folder to confirm the jdk1.8.0_144 directory is there.
2.2 Configuration
Enter this command to edit the environment variable file:
vi ~/.bash_profile
The file opens in vi's normal mode, where typing does not insert text; press i to enter insert mode,
then add the following content to the file:
#java
export JAVA_HOME=/training/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Notice: the JDK path and version number must match your actual install path and version.
While you are in insert mode, -- INSERT -- is shown at the bottom of the screen; after typing the content, press the ESC key to leave insert mode and the indicator disappears.
Then type :wq and press Enter to save and exit. (:q! instead force-quits without saving.)
Finally, run the following command to make the environment variables take effect:
source ~/.bash_profile
2.3 Check whether the configuration is successful
Enter the following command to check whether the configuration is successful
java -version
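One gotcha worth knowing: `java -version` writes its output to stderr, not stdout, so a script that wants to capture the version string must redirect stderr. A minimal sketch:

```shell
# java -version prints to stderr; 2>&1 captures it for scripting.
# If java is not on PATH yet, ver will hold the shell's error message instead.
ver=$(java -version 2>&1 | head -n 1)
echo "$ver"
```

On a correctly configured machine this prints the 1.8.0_144 build string installed above.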
3. Install Hadoop
3.1 Configure the host name
Run the command below; the last argument, niit, is the host name to set, and you can choose any name you like.
hostnamectl --static set-hostname niit
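hostnamectl must be run as root, and you can confirm the result with the plain hostname command. A quick sketch (niit is the example name from this guide):

```shell
# After `hostnamectl --static set-hostname niit` (run as root), check the name.
# The static name is written to /etc/hostname; if the running (transient) name
# lags behind, log out and back in, or reboot.
current=$(hostname)
echo "current host name: $current"
```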
3.2 Configure IP host name mapping relationship
Modify the hosts file to configure the mapping. Open it with:
vi /etc/hosts
Type the following line into the file (press i to enter insert mode, :wq to save and exit).
The first field is your machine's IP address and the second is the host name you just configured. Be sure to use your actual IP address and host name!
192.168.149.128 niit
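A quick way to confirm the mapping took effect is to grep /etc/hosts for the line. A sketch, assuming the example host name above (adjust the name to your machine; HOSTS can be overridden to test against another file):

```shell
# Look for an "<IP> niit" line in the hosts file.
HOSTS=${HOSTS:-/etc/hosts}
if grep -Eq '^[0-9]+(\.[0-9]+){3}[[:space:]]+niit([[:space:]]|$)' "$HOSTS"; then
  mapping="present"
else
  mapping="missing"
fi
echo "niit mapping: $mapping"
```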
If you will access this machine from another computer, add the same mapping line to that computer's hosts file as well.
3.3 Turn off the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
The first command stops the firewall immediately; the second keeps it from starting again at boot.
3.4 Unzip hadoop
Enter the tools folder and extract hadoop to the training folder
cd tools
tar -zxvf hadoop-3.3.0.tar.gz -C /training/
3.5 Configure hadoop environment variables
vi ~/.bash_profile
After opening the file, add the following (press i to enter insert mode, :wq to save and exit):
#hadoop
export HADOOP_HOME=/training/hadoop-3.3.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Make environment variables effective
source ~/.bash_profile
3.6 Run hdfs to check whether Hadoop is installed successfully
Type hdfs at the prompt; if the command prints its usage information, the PATH is configured and the installation succeeded.
3.7 Configure hadoop password-free login
Create a tmp folder in the Hadoop installation path to store Hadoop's runtime data (it will be referenced by hadoop.tmp.dir in core-site.xml below).
mkdir /training/hadoop-3.3.0/tmp
To configure password-free login, enter the following command and press Enter at each of the four prompts without typing anything:
ssh-keygen -t rsa
Then copy the public key to the machine:
cd ~/.ssh/
ssh-copy-id -i id_rsa.pub root@niit
Here niit is your own machine's host name.
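The key-generation step can also be scripted: passing `-N ""` to ssh-keygen is equivalent to pressing Enter at the passphrase prompts. A sketch using a scratch path for illustration (on the real machine, omit `-f` so the default ~/.ssh/id_rsa is created, which is what ssh-copy-id expects):

```shell
# Generate an RSA key pair non-interactively into a scratch location
# (equivalent to running ssh-keygen -t rsa and pressing Enter at each prompt).
keyfile=$(mktemp -u)
ssh-keygen -t rsa -N "" -f "$keyfile" -q
generated=no
[ -f "$keyfile" ] && [ -f "$keyfile.pub" ] && generated=yes
echo "key pair generated: $generated"
# On the real machine, then push the public key across (niit = your host name):
#   ssh-copy-id -i ~/.ssh/id_rsa.pub root@niit
rm -f "$keyfile" "$keyfile.pub"
```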
3.8 Edit the Hadoop configuration files
3.8.1 Go to the Hadoop configuration directory
cd /training/hadoop-3.3.0/etc/hadoop/
1. Configure the hadoop-env.sh file
vi hadoop-env.sh
After opening the file, find the JAVA_HOME line and add the following below it
(press i to enter insert mode, :wq to save and exit):
export JAVA_HOME=/training/jdk1.8.0_144
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
2. Configure hdfs-site.xml file
vi hdfs-site.xml
After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter insert mode, :wq to save and exit):
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
3. Configure the core-site.xml file
vi core-site.xml
After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter insert mode, :wq to save and exit).
Here niit is the host name, which must match your actual host name.
<property>
<name>fs.defaultFS</name>
<value>hdfs://niit:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/training/hadoop-3.3.0/tmp</value>
</property>
4. Configure the mapred-site.xml file
vi mapred-site.xml
After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter insert mode, :wq to save and exit):
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
5. Configure the yarn-site.xml file
vi yarn-site.xml
After opening the file, add the following between the <configuration> and </configuration> tags (press i to enter insert mode, :wq to save and exit):
<property>
<name>yarn.resourcemanager.hostname</name>
<value>niit</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
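A stray character in any of these XML files will keep the daemons from starting, so it can be worth validating them before the first start. A sketch using xmllint from libxml2 (the directory assumes the install path used above; a file that is absent or unparseable, or a missing xmllint, is simply reported rather than treated as fatal):

```shell
# Check each edited config file for well-formed XML.
conf_dir=${conf_dir:-/training/hadoop-3.3.0/etc/hadoop}
checked=0
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  if xmllint --noout "$conf_dir/$f" 2>/dev/null; then
    echo "$f: well-formed"
  else
    echo "$f: not checked (file missing, xmllint missing, or malformed XML)"
  fi
  checked=$((checked + 1))
done
echo "files examined: $checked"
```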
3.9 Format the NameNode (master node)
Formatting is only needed once, before the first start; reformatting later wipes the HDFS metadata.
hdfs namenode -format
4. Start and stop Hadoop
Start Hadoop:
start-all.sh
View the running Java processes:
jps
Stop Hadoop:
stop-all.sh
View the processes again:
jps
If the five Hadoop processes (NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager) all appear in the jps output, the installation succeeded. If a process is missing, the corresponding configuration file has a problem; check and correct it, then restart Hadoop.
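The jps check can also be scripted; in this pseudo-distributed setup, the five daemons below are the ones start-all.sh should launch. A sketch (run it on the Hadoop machine; jps ships with the JDK):

```shell
# Report which of the expected Hadoop daemons are missing from jps output.
expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"
running=$(jps 2>/dev/null || true)
missing=""
for p in $expected; do
  echo "$running" | grep -qw "$p" || missing="$missing $p"
done
if [ -z "$missing" ]; then
  result="all daemons running"
else
  result="missing:$missing"
fi
echo "$result"
```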