Detailed steps of the Hadoop 2 installation process

1. Change the virtual machine's network type in VMware to NAT mode. (The IP of the virtual switch can be seen under Edit --> Virtual Network Editor in VMware.)
2. Based on the switch (gateway) address, set the IP address of the Windows 7 client (the VMnet8 network adapter).
3. Start the Linux host and change the Linux system's IP address (this can be done through the graphical interface). After the change, switch to the root user in a terminal and restart the network service so the new IP takes effect (see the commands below).
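On a CentOS 6-style system (assumed here, since later steps use service and chkconfig), the restart would be:

su - root
service network restart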
4. Modify the host name: as root, vi /etc/sysconfig/network and change the hostname to yun-10-1.
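On CentOS 6 (assumed, as above), the file would end up looking like this:

NETWORKING=yes
HOSTNAME=yun-10-1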
5. As root, add the mapping between the host name and the IP: vi /etc/hosts and add the line 192.168.2.100 yun-10-1
6. Add the hadoop user to sudoers: as root, vi /etc/sudoers, find the line root ALL=(ALL) ALL in the file, and add a matching line for hadoop below it (see the snippet after step 7).
7. Stop the firewall service: as root, service iptables stop.
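For step 6, the resulting pair of lines in /etc/sudoers would look like this (exact spacing varies):

root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL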
8. Turn off the firewall's automatic startup at boot: as root, chkconfig iptables off.
9. Reboot
10. Use the ping command to check network connectivity between the Windows host and the Linux server.
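Using the addresses from step 5, a quick two-way check might look like this (192.168.2.1 for the Windows VMnet8 adapter is an assumption based on step 2):

ping 192.168.2.100      (from the Windows command prompt, targets the Linux server)
ping 192.168.2.1        (from the Linux terminal, targets the Windows host)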
11. Change the Linux boot configuration so the graphical interface no longer starts at boot: as root, vi /etc/inittab and change the default runlevel line to id:3:initdefault:
12. Reboot again and the system will no longer boot into the graphical interface.
(When you want to start the graphical interface later, type startx (or init 5) on the command line; to leave the graphical interface again, type init 3.)

===========Gorgeous dividing line==============================

1/Use a terminal to connect to the Linux server to install the software (for example, connect with SecureCRT)
2/Install the JDK
    - Use the FileZilla tool to upload the JDK tarball
    - Extract the JDK tarball to a dedicated installation directory, /home/hadoop/app
    -- run the following command from the hadoop user's home directory (if the app directory does not exist yet, create it first with mkdir app):

tar -zxvf jdk-7u65-linux-i586.tar.gz -C ./app

    --Configure the Java environment variables: sudo vi /etc/profile
              and add the following at the end of the file:

            export JAVA_HOME=/home/hadoop/app/jdk1.7.0_65
            export PATH=$PATH:$JAVA_HOME/bin

3/Make the configuration take effect: source /etc/profile
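To confirm the JDK is picked up, a quick check is:

java -version

For this JDK the first line of output should report version "1.7.0_65".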

===========Cool dividing line==============================

1/Use the FileZilla tool to upload the Hadoop installation package
2/Extract Hadoop to the app directory:

tar -zxvf hadoop-2.4.1.tar.gz -C ./app/


3/Modify the five main Hadoop configuration files, located in the /home/hadoop/app/hadoop-2.4.1/etc/hadoop directory
-- vi hadoop-env.sh and change JAVA_HOME to the path where we installed the JDK:

export JAVA_HOME=/home/hadoop/app/jdk1.7.0_65

-- vi core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://yun-10-1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/hadoop-2.4.1/tmp</value>
    </property>
</configuration>

-- vi hdfs-site.xml (dfs.replication is set to 1 because this is a single-node setup)

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>


-- First rename the template file: mv mapred-site.xml.template mapred-site.xml
   then edit it: vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>


-- vi yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>yun-10-1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>


4/Configure Hadoop's environment variables:
sudo vi /etc/profile
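The original note stops at opening the file; a typical addition, mirroring the JDK step above (paths assume the app directory used earlier), would be to append the following at the end of /etc/profile, then run source /etc/profile and verify with hadoop version:

export HADOOP_HOME=/home/hadoop/app/hadoop-2.4.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin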

=============Beautiful dividing line====================

1/Configure passwordless SSH login.
First generate a key pair on the client: ssh-keygen -t rsa, then press Enter through the prompts.
Then copy the client's public key to the remote host: ssh-copy-id desthost.
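Here desthost stands for the target host name; on this single-node setup the machine needs passwordless SSH to itself, so (assuming the hostname yun-10-1 configured earlier) the full sequence run as the hadoop user would be:

ssh-keygen -t rsa          (accept the defaults at each prompt)
ssh-copy-id yun-10-1       (appends the public key to ~/.ssh/authorized_keys on the target)
ssh yun-10-1               (should now log in without prompting for a password)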

=========Perfect dividing line========Hadoop installation completed========
