Hadoop3 cluster building - install hadoop, configure the environment

  Continued from the previous article: Hadoop3 cluster building - virtual machine installation

In the previous article the virtual machines were installed; now we configure the environment and install hadoop.

Note: a hadoop cluster needs at least three machines, because the default HDFS replication factor is 3, and a single machine cannot hold three independent replicas.

 

1. Create hadoop user and hadoopgroup group  

groupadd -g 102 hadoopgroup    # create the user group
useradd -d /opt/hadoop -u 10201 -g 102 hadoop    # create the user
passwd hadoop    # set a password for the user
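
With the account created, a quick check confirms that the uid and gid match the flags passed to useradd and groupadd above (a minimal sketch; the exact output format can vary slightly by distribution):

```shell
# verify the account created above; uid/gid should match
# the -u 10201 and -g 102 flags used earlier
id hadoop
# expect roughly: uid=10201(hadoop) gid=102(hadoopgroup) groups=102(hadoopgroup)
```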

2. Install the ftp tool  

yum -y install vsftpd 
Start ftp: systemctl start vsftpd.service
Stop ftp: systemctl stop vsftpd.service
Restart ftp: systemctl restart vsftpd.service
[root@venn08 ~]# systemctl start vsftpd.service  # start; no output on success
[root@venn08 ~]# ps -ef | grep vsft  # check that the process exists, then connect with an ftp client
root       1257      1  0 09:41 ?        00:00:00 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
root       1266   1125  0 09:42 pts/0    00:00:00 grep --color=auto vsft
[root@venn08 ~]# systemctl restart vsftpd.service

Note: after installing vsftpd, system users can log in over ftp directly; their ftp permissions match their system permissions, so no additional configuration is required.

    

3. Install jdk and hadoop

  Copy the downloaded jdk and hadoop to the server, unzip, and modify the directory name  

[hadoop@venn05 ~]$ pwd
/opt/hadoop
[hadoop@venn05 ~]$ ll
drwxr-xr-x. 11 hadoop hadoopgroup       172 Apr  3 20:49 hadoop3
-rw-r--r--.  1 hadoop hadoopgroup 307606299 Apr  2 22:30 hadoop-3.0.1.tar.gz
drwxr-xr-x.  8 hadoop hadoopgroup       255 Apr  1  2016 jdk1.8
-rw-r--r--.  1 hadoop hadoopgroup 181367942 May 26  2016 jdk-8u91-linux-x64.tar.gz

Rename the extracted directories to shorter names for convenience.
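
The unpack-and-rename step can be sketched as follows. The archive names come from the listing above; the extracted directory name jdk1.8.0_91 is an assumption based on the jdk-8u91 archive, so verify the actual name with ls after extraction:

```shell
cd /opt/hadoop
# unpack both archives (names taken from the listing above)
tar -xzf jdk-8u91-linux-x64.tar.gz
tar -xzf hadoop-3.0.1.tar.gz
# rename to the short directory names used throughout this article
mv jdk1.8.0_91 jdk1.8    # extracted name assumed; check with ls first
mv hadoop-3.0.1 hadoop3
```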

 

4. Configure Java and hadoop environment variables

  Append the Java and hadoop environment variables to the end of .bashrc; be careful not to mistype the paths.

[hadoop@venn05 ~]$ vim .bashrc 
[hadoop@venn05 ~]$ more .bashrc 
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
#jdk
export JAVA_HOME=/opt/hadoop/jdk1.8
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH


#hadoop
export HADOOP_HOME=/opt/hadoop/hadoop3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
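
After saving .bashrc, reload it and check that both tools resolve from the new PATH (a minimal verification, run as the hadoop user):

```shell
source ~/.bashrc
java -version      # should report the 1.8.0_91 JDK unpacked above
hadoop version     # should report Hadoop 3.0.1
```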

 

  

  

 
