Install and start stand-alone HDFS

As a prerequisite, you need a Java 8 environment: install the JDK and configure the environment variables.

vi /etc/profile
# adjust JAVA_HOME to wherever your JDK is unpacked
export JAVA_HOME=/root/jdk1.8.0_171
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin

Then execute source /etc/profile to reload the configuration file.
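
To confirm the Java environment is active in the current shell, check the version (the exact build string depends on your JDK):

java -version     # should report version "1.8.0_171" or similar
echo $JAVA_HOME   # should print /root/jdk1.8.0_171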

Download Hadoop

curl -O https://archive.apache.org/dist/hadoop/common/hadoop-2.10.0/hadoop-2.10.0.tar.gz

Unpack the archive

tar -zxvf hadoop-2.10.0.tar.gz
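
All of the commands below are run from inside the unpacked Hadoop directory, so change into it first:

cd hadoop-2.10.0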

Verify the installation

./bin/hadoop version   # print Hadoop version info; if it displays correctly, the installation succeeded

Modify the configuration file to set Hadoop's Java environment

vim etc/hadoop/hadoop-env.sh

Note that this is the etc directory under the Hadoop installation path, not the system /etc directory.
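
The screenshot originally shown here pointed at the JAVA_HOME line in hadoop-env.sh. Based on the JDK path used above, it should be set explicitly, since the Hadoop scripts do not always pick up the value inherited from /etc/profile:

export JAVA_HOME=/root/jdk1.8.0_171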

Edit the configuration

Modify the core-site.xml and hdfs-site.xml files.

Replace the IP address to match your own environment.

vim etc/hadoop/core-site.xml
<configuration>

<property>
    <!-- URI of the default filesystem; clients use this to reach the NameNode -->
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.159.100:8020</value>
</property>

<property>
    <!-- base directory for Hadoop's temporary and data files
         (corrected to match the hadoop-2.10.0 install used in this guide) -->
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop-2.10.0/tmp</value>
</property>

<property>
    <!-- user shown as owner when browsing files through the web UI -->
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
</property>

<property>
    <!-- allow the root user to proxy requests from any host -->
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>

<property>
    <!-- allow the root user to proxy requests from any group -->
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>

<property>
    <!-- keep deleted files in the trash for 1440 minutes (24 hours) -->
    <name>fs.trash.interval</name>
    <value>1440</value>
</property>

</configuration>
vim etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <!-- single-node setup, so keep one replica per block -->
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <!-- where the NameNode stores its metadata -->
        <name>dfs.namenode.name.dir</name>
        <value>file:/root/hadoop-2.10.0/tmp/dfs/name</value>
    </property>
    <property>
        <!-- where the DataNode stores block data -->
        <name>dfs.datanode.data.dir</name>
        <value>file:/root/hadoop-2.10.0/tmp/dfs/data</value>
    </property>
    <property>
        <!-- address of the NameNode web UI -->
        <name>dfs.namenode.http-address</name>
        <value>192.168.159.100:9870</value>
    </property>
</configuration>

Set up passwordless SSH login

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # generate an RSA key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # add the public key to the authorized keys
chmod 0600 ~/.ssh/authorized_keys               # restrict permissions; sshd ignores the file if it is group- or world-writable
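
You can check that passwordless login works before starting HDFS; the command below should open a session without prompting for a password (type exit to return):

ssh localhost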

Start HDFS
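
Before the first start, format the NameNode. This initializes the metadata directory configured in dfs.namenode.name.dir; run it only once, since reformatting wipes existing HDFS metadata:

./bin/hdfs namenode -format

Then start the daemons: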

./sbin/start-dfs.sh

After starting, run jps to confirm that the HDFS daemons are running.
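
For a single-node HDFS the list should include the NameNode, DataNode, and SecondaryNameNode daemons plus jps itself (the PIDs below are illustrative):

jps
2130 NameNode
2325 DataNode
2562 SecondaryNameNode
2890 Jps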

You can now operate on HDFS files from the command line, or browse the HDFS directory tree through the NameNode web UI (http://192.168.159.100:9870 with the configuration above).
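
As a quick smoke test, here are a few common HDFS file operations (the /test path is an arbitrary example):

./bin/hdfs dfs -mkdir /test                           # create a directory
./bin/hdfs dfs -put etc/hadoop/core-site.xml /test    # upload a local file
./bin/hdfs dfs -ls /test                              # list the directory
./bin/hdfs dfs -cat /test/core-site.xml               # print the file contents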

Origin: blog.csdn.net/qq_37436172/article/details/130511865