Hadoop Overview and Installation

Basic pseudo-distributed setup

Abstract

What is Hadoop? (Hadoop Overview)

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.

Why use Hadoop? (Hadoop Capabilities)

  • Store Massive Data Sets

  • Mix Disparate Data Sources

  • Ingest Bulk Data

  • Ingest High Velocity Data

  • Apply Structure to Unstructured/Semi-Structured Data

  • Make Data Available for Fast Processing with SQL on Hadoop

  • Achieve Data Integration

  • Improve Machine Learning & Predictive Analytics

  • Deploy Real-Time Automation at Scale

  • Achieve Continuous Innovation at Scale


Configure JDK

# download the JDK archive
wget https://download.java.net/openjdk/jdk8u41/ri/openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
# extract the JDK archive
tar -zxvf openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
# move and rename
mv java-se-8u41-ri/ /usr/java8
# configure the Java environment variables
echo 'export JAVA_HOME=/usr/java8' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile
# verify the installation (JDK 8 takes a single dash)
java -version
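If `java -version` is not found, the PATH wiring is the usual culprit. A quick sanity check, assuming the `/usr/java8` layout chosen above:

```shell
# Re-export the variables written to /etc/profile and confirm they resolve
# (paths assume the /usr/java8 install location used above)
export JAVA_HOME=/usr/java8
export PATH="$PATH:$JAVA_HOME/bin"
echo "$JAVA_HOME"   # /usr/java8
```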

Download Hadoop

# download the Hadoop archive
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.9.2/hadoop-2.9.2.tar.gz
# extract the Hadoop archive
tar -zxvf hadoop-2.9.2.tar.gz -C /opt/
# move and rename
mv /opt/hadoop-2.9.2 /opt/hadoop
# configure the Hadoop environment variables
echo 'export HADOOP_HOME=/opt/hadoop' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile
# point Hadoop's scripts at the JDK installed above
echo "export JAVA_HOME=/usr/java8" >> /opt/hadoop/etc/hadoop/yarn-env.sh
echo "export JAVA_HOME=/usr/java8" >> /opt/hadoop/etc/hadoop/hadoop-env.sh
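One caveat with the `echo ... >> /etc/profile` lines: re-running the setup appends duplicate entries every time. A guarded append avoids that; this is a sketch (`add_line` and the demo file name are hypothetical) that you can point at `/etc/profile` on the server:

```shell
# Append a line only if it is not already present
# (grep -qxF: quiet, exact whole-line, fixed-string match)
PROFILE=./profile.demo            # demo file; use /etc/profile on the server
touch "$PROFILE"
add_line() { grep -qxF "$1" "$PROFILE" || echo "$1" >> "$PROFILE"; }
add_line 'export HADOOP_HOME=/opt/hadoop'
add_line 'export HADOOP_HOME=/opt/hadoop'   # second call is a no-op
wc -l < "$PROFILE"                          # 1
```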

Configure Hadoop

# edit the Hadoop configuration file core-site.xml
vim /opt/hadoop/etc/hadoop/core-site.xml   # press i to enter insert mode
# add these properties inside the <configuration></configuration> element
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/hadoop/tmp</value>
        <description>location to store temporary files</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
# save and exit (press ESC, then type :wq)
# edit the Hadoop configuration file hdfs-site.xml
vim /opt/hadoop/etc/hadoop/hdfs-site.xml   # press i to enter insert mode
# add these properties inside the <configuration></configuration> element
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop/tmp/dfs/data</value>
    </property>
# save and exit (press ESC, then type :wq)
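If you prefer not to edit interactively, the same properties can be written with a heredoc. This is a sketch: `CONF_DIR` points at a scratch directory so it can be dry-run; on the server it would be `/opt/hadoop/etc/hadoop` (and note this writes a complete file, so it would replace the stock one):

```shell
# Non-interactive alternative to vim: write hdfs-site.xml with a heredoc
CONF_DIR=./hadoop-conf-demo        # demo path; use /opt/hadoop/etc/hadoop on the server
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
EOF
grep -c '<property>' "$CONF_DIR/hdfs-site.xml"   # 3
```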

Configure passwordless SSH login

# create an RSA key pair (public and private keys)
ssh-keygen -t rsa
# append the public key to authorized_keys
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
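sshd silently ignores key files with loose permissions, which leaves `ssh localhost` prompting for a password anyway. Tightening the permissions (standard `~/.ssh` paths assumed; the mkdir/touch are no-ops if the files already exist):

```shell
# ~/.ssh must be 700 and authorized_keys 600 for sshd to trust them
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# passwordless login should now work without prompting:
#   ssh localhost
```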

Start Hadoop

# format the NameNode (the `hadoop namenode -format` form is deprecated in 2.x)
hdfs namenode -format
# start HDFS and YARN
start-dfs.sh
start-yarn.sh
# check that the daemon processes started successfully
jps

Verify Hadoop

	Visit http://<your server ip>:8088 (YARN ResourceManager web UI)
	Visit http://<your server ip>:50070 (HDFS NameNode web UI)

If both pages load, the setup succeeded.


Original post: blog.csdn.net/wzp7081/article/details/108819606