CentOS 6.7 64-bit: pseudo-distributed installation of CDH 5.4.8 + JDK 8

1. Install Java
# Create a directory for Java
mkdir -p /usr/java
cd /usr/java
# Move the downloaded rpm package into this directory, then run the installation
rpm -ivh jdk-8u65-linux-x64.rpm    # substitute the name of your rpm
# Add the environment variables
vim /etc/profile
Append the following to the profile:
export JAVA_HOME=/usr/java/jdk1.8.0_65
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
# Save and exit, then reload the profile
source /etc/profile
# After that, register the JDK with alternatives
update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_65/bin/java 60
update-alternatives --config java
# Test the Java installation
java -version
If the following output appears, the installation is complete:
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
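As an optional sanity check (this assumes only the profile exports above), confirm that the shell actually picks up the new JDK:
echo $JAVA_HOME    # should print /usr/java/jdk1.8.0_65
which java         # should resolve through the alternatives link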

2. CentOS pseudo-distributed installation of CDH 5
Download the CDH file
# Create a directory and switch into it
mkdir /opt/soft
cd /opt/soft
# Download the one-click-install package
wget http://archive.cloudera.com/cdh5/one-click-install/redhat/5/x86_64/cloudera-cdh-5-0.x86_64.rpm
# Then install it locally with yum
sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
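The one-click package does little more than register a Cloudera yum repository; a quick way to confirm it landed (the exact repo file name is an assumption and may differ by release):
ls /etc/yum.repos.d/             # expect something like cloudera-cdh5.repo
yum repolist | grep -i cloudera  # the Cloudera repo should be listed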
Optionally, add the repository key:
$ sudo rpm --import http://archive.cloudera.com/cdh5/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
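Imported GPG keys show up as pseudo-packages in rpm, so you can verify the import afterwards with:
rpm -qa gpg-pubkey*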
3. Install Hadoop in pseudo-distributed mode
$ sudo yum install hadoop-conf-pseudo
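To see exactly which configuration files the package laid down, you can list its contents (rpm -ql is standard rpm; that the files live under /etc/hadoop/conf.pseudo is how CDH packages it, but treat the path as an assumption):
rpm -ql hadoop-conf-pseudo    # config files should appear under /etc/hadoop/conf.pseudo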
Start Hadoop and verify the environment
# At this point the pseudo-distributed Hadoop packages are installed; let's do some configuration and start Hadoop
#1. Format NameNode
sudo -u hdfs hdfs namenode -format
#2. Start HDFS
for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
This works because the Hadoop service scripts live in /etc/init.d/.
To verify that startup succeeded, open http://localhost:50070 in a browser (localhost can be replaced with the machine's IP address).
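If no browser is available, a command-line check works too, reusing the loop idiom above plus a plain curl against the NameNode web UI:
for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x status ; done
curl -s http://localhost:50070 >/dev/null && echo "NameNode web UI is up"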
#3. Create /tmp, Staging and Log directories
$ sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop-yarn/staging/history/done_intermediate
$ sudo -u hdfs hadoop fs -chown -R mapred:mapred /tmp/hadoop-yarn/staging
$ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
$ sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
$ sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
#Run the following command to see if the file is created:
$ sudo -u hdfs hadoop fs -ls -R /
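For reference, mode 1777 above is the same sticky-bit pattern as a local /tmp: anyone may write, but only owners may delete their own files. A more targeted check than listing the whole tree (plain hadoop fs options, nothing CDH-specific):
$ sudo -u hdfs hadoop fs -ls -d /tmp /var/log/hadoop-yarn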
#4. Start YARN (YARN replaces the JobTracker/TaskTracker of MapReduce v1 as the resource-management layer)
sudo service hadoop-yarn-resourcemanager start
sudo service hadoop-yarn-nodemanager start
sudo service hadoop-mapreduce-historyserver start
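As with HDFS, the YARN daemons expose web UIs you can probe; 8088 and 19888 are the stock ResourceManager and JobHistory ports:
curl -s http://localhost:8088 >/dev/null && echo "ResourceManager UI is up"
curl -s http://localhost:19888 >/dev/null && echo "JobHistory UI is up"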
#5. Create a user directory: create a home directory for each MapReduce user, replacing <user> with your username
Note: if you want to use a different username such as myuser, first perform these additional steps:
----------------------------------------------------------------------
useradd myuser    # create the user
visudo            # grant myuser sudo rights (see the example entry below); visudo checks the syntax on save
su - myuser       # switch to the new user
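A typical sudoers entry for this (granting full sudo rights; myuser is the example name from above, adjust to taste):
myuser ALL=(ALL) ALL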
Next, perform the following operations (if you did not add a new user, replace <user> with root; otherwise replace it with the new username):
----------------------------------------------------------------------
$ sudo -u hdfs hadoop fs -mkdir -p /user/<user>
$ sudo -u hdfs hadoop fs -chown <user> /user/<user>
#6. Test HDFS
$ hadoop fs -mkdir input
$ hadoop fs -put /etc/hadoop/conf/*.xml input
$ hadoop fs -ls input
If the output looks like the following, congratulations, it worked:
-rw-r--r-- 1 myuser supergroup 2133 2015-11-08 08:28 input/core-site.xml
-rw-r--r-- 1 myuser supergroup 2324 2015-11-08 08:28 input/hdfs-site.xml
-rw-r--r-- 1 myuser supergroup 1549 2015-11-08 08:28 input/mapred-site.xml
-rw-r--r-- 1 myuser supergroup 2375 2015-11-08 08:28 input/yarn-site.xml
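To exercise YARN end to end, you can run one of the example jobs that ship with the hadoop-mapreduce package (the jar path below assumes the standard CDH 5 package layout, and the output directory must not already exist):
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output 'dfs[a-z.]+'
$ hadoop fs -cat output/part-r-00000    # view the result of the grep job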
