If you install Hadoop in standalone mode, you can skip step 2 (changing the configuration files), since Hadoop runs in standalone mode by default; to run Hadoop in distributed mode on a single machine, use step 2 to configure pseudo-distributed mode.
1. Download hadoop-3.1.0.tar from the official website and extract it to the /usr/hadoop directory;
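Step 1 is a fetch-and-extract. Since extracting into /usr/hadoop requires sudo, the runnable sketch below builds a stand-in tarball and extracts it into the current directory instead; the demo file names are assumptions, and for the real archive the command is `sudo tar -xzf hadoop-3.1.0.tar.gz -C /usr/hadoop`.

```shell
# Demo of the extract step with a stand-in tarball (real paths need sudo).
mkdir -p demo-src/hadoop-3.1.0
echo "stand-in" > demo-src/hadoop-3.1.0/README.txt
tar -czf hadoop-demo.tar.gz -C demo-src hadoop-3.1.0
mkdir -p usr-hadoop
tar -xzf hadoop-demo.tar.gz -C usr-hadoop   # -C: extract into this directory
ls usr-hadoop
```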
2. Change the Hadoop configuration files:
core-site.xml is configured as follows:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/leesf/program/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
The path for hadoop.tmp.dir can be set according to your own preference.
mapred-site.xml.template is configured as follows:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
hdfs-site.xml is configured as follows:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/leesf/program/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/leesf/program/hadoop/tmp/dfs/data</value>
</property>
</configuration>
The paths for dfs.namenode.name.dir and dfs.datanode.data.dir can be set freely, but are best placed under the hadoop.tmp.dir directory.
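The directory layout these two settings expect can be created up front; a minimal sketch, using a relative base path for illustration (substitute your own hadoop.tmp.dir value):

```shell
# Pre-create the storage directories referenced in hdfs-site.xml.
# The base path here is illustrative; replace it with your hadoop.tmp.dir.
HADOOP_TMP=./hadoop/tmp
mkdir -p "$HADOOP_TMP/dfs/name" "$HADOOP_TMP/dfs/data"
ls "$HADOOP_TMP/dfs"
```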
Note: if Hadoop cannot find the JDK at runtime, you can set the JDK path directly in hadoop-env.sh, as follows:
export JAVA_HOME=/home/lib/jvm/program/java/jdk1.8.0_60
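Before editing hadoop-env.sh it is worth confirming that a JDK actually lives at that path; a small sketch (check_jdk is a hypothetical helper, and the path checked is the one from the line above):

```shell
# check_jdk: report whether a java binary exists under the given JDK path.
# (Hypothetical helper for illustration only.)
check_jdk() {
  if [ -x "$1/bin/java" ]; then echo yes; else echo no; fi
}
check_jdk /home/lib/jvm/program/java/jdk1.8.0_60
```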
3. Install the SSH service and set up passwordless login:
Hadoop's start/stop scripts control the associated daemons by sending commands over SSH; to avoid entering a password for authentication every time Hadoop is started or stopped, set up passwordless login. Proceed as follows:
3.1 Open a shell and enter:
sudo apt-get install ssh openssh-server
3.2 Set up passwordless login:
1) Create an SSH key using RSA:
ssh-keygen -t rsa -P ""
After executing this command, the public key file id_rsa.pub and the private key file id_rsa are generated in the ~/.ssh directory.
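The same command can be tried safely in a scratch directory first; a sketch (the extra -f and -q flags redirect the key file and quiet the output; in the real setup, omit -f so the key lands in ~/.ssh/id_rsa):

```shell
# Generate an RSA key pair non-interactively into a scratch directory.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -P "" -f "$KEYDIR/id_rsa" -q
ls "$KEYDIR"   # the private key id_rsa and public key id_rsa.pub
```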
2) Append the contents of the public key file id_rsa.pub to the authorized_keys file in the same directory:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
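One common pitfall not mentioned in the steps above: sshd silently ignores authorized_keys when its permissions are too loose, so passwordless login keeps prompting for a password. A sketch of the usual tightening step (a standard OpenSSH requirement, added here as supplementary advice):

```shell
# sshd requires strict permissions on ~/.ssh and authorized_keys,
# otherwise key-based login silently falls back to password prompts.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```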
3) Verify that passwordless login was set up successfully:
ssh localhost
If you are logged in without being prompted for a password, the setup succeeded.
4. Format HDFS:
bin/hadoop namenode -format
5. Start Hadoop:
Run the following command in the Hadoop directory:
sbin/start-all.sh
You can use the jps command to list all running Java daemons and verify that the installation succeeded:
jps
If the Hadoop daemons appear in the output, the launch was successful.
After a successful start, you can access the ResourceManager web UI in a browser at localhost:8088.