Personal homepage: http://www.tongtongxue.com/archives/4578.html
Software preparation
(1) hadoop-2.6.0.tar.gz
(2) jdk-7u67-linux-x64.tar.gz
(3) VMware Workstation Pro
(4) CentOS-6.4-x86_64-minimal.iso
Building Hadoop in pseudo-distributed mode on a VM
Create a new CentOS virtual machine
(1) Create a new virtual machine
(2) Select "Custom"
(3) Select the ISO image file
(4) Naming
(5) Specify the virtual machine installation location
(6) Proceed through the wizard; on the final screen, before clicking "Finish", uncheck the "start this virtual machine after creation" option
(7) Edit virtual machine settings
(8) Remove "autoinst.iso"
(9) Start the virtual machine
(10) Select "Skip" in "Disc Found"
(11) Language selection "English"
(12) Select "US English" on the keyboard
(13) Edit HostName
(14) Select "Shanghai" for the time zone
(15) Set password
Create a new yun user
After logging in as the root user, run:
useradd yun
Press Enter, then run
passwd yun
After pressing Enter, the system will prompt you to set a password.
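The two interactive steps above can also be done non-interactively, which is handy for scripting. A minimal sketch, run as root; "yun123" is a placeholder password (not from the tutorial), and chpasswd reads user:password pairs from stdin:

```shell
# Create the yun user and set its password in one shot (run as root).
# "yun123" is a placeholder password; replace it with a real one.
useradd yun
echo 'yun:yun123' | chpasswd
```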
Install JDK
(1) Upload jdk-7u67-linux-x64.tar.gz to the server with the Xshell tool
(2) Extract the archive into /opt (the location the environment variables below assume)
tar -zxvf jdk-7u67-linux-x64.tar.gz -C /opt
(3) Set environment variables
export JAVA_HOME=/opt/jdk1.7.0_67
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
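The exports above last only for the current shell session. To make them persistent, you can append them to the user's ~/.bash_profile (or /etc/profile for all users); a sketch, assuming the JDK was extracted to /opt/jdk1.7.0_67 as above:

```shell
# Persist the JDK variables across logins; the path assumes the JDK
# was extracted to /opt/jdk1.7.0_67 as in the step above.
cat >> ~/.bash_profile <<'EOF'
export JAVA_HOME=/opt/jdk1.7.0_67
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
source ~/.bash_profile
# Afterwards, "java -version" should report 1.7.0_67.
```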
Install Hadoop
(1) Configure passwordless SSH login between nodes; first install openssh-server
yum install openssh-server
(2) Run
ssh-keygen -t rsa
(3) Rename the public key to authorized_keys
cd ~/.ssh
mv id_rsa.pub authorized_keys
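sshd is picky about permissions: if ~/.ssh or authorized_keys is group- or world-readable, it silently falls back to password login. A quick hardening step (the mkdir and touch are no-ops if the key steps above already ran):

```shell
# Tighten permissions so sshd accepts the key; sshd ignores an
# authorized_keys file that is too permissive.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# Then "ssh localhost" should log in without a password prompt.
```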
(4) Configure core-site.xml (under /opt/hadoop/etc/hadoop; this assumes hadoop-2.6.0.tar.gz has been extracted to /opt/hadoop, the HADOOP_HOME used in step 8)
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://yunpan:9000</value>
  </property>
</configuration>
(5) Configure hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
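It is worth creating the three local paths named in core-site.xml and hdfs-site.xml ahead of time, so you can also fix their ownership for the user that will run Hadoop; a sketch (run as root):

```shell
# Directories referenced by hadoop.tmp.dir, dfs.namenode.name.dir and
# dfs.datanode.data.dir in the configs above. If Hadoop will run as the
# yun user, follow up with: chown -R yun /opt/hadoop
mkdir -p /opt/hadoop/tmp /opt/hadoop/dfs/name /opt/hadoop/dfs/data
```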
(6) Configure yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
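One caveat: MapReduce jobs read mapreduce.framework.name from mapred-site.xml, not yarn-site.xml, so to make jobs actually run on YARN it is safer to also set it there (in Hadoop 2.6.0 you create the file by copying mapred-site.xml.template in the same directory):

```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```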
(7) Configure slaves
In this file, enter the IPs of the DataNode nodes. Since this is a pseudo-distributed setup, just enter the IP or hostname of the current CentOS machine.
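For example, using "yunpan", the hostname from core-site.xml above (substitute your own; the mkdir -p is a no-op when Hadoop is already extracted at /opt/hadoop):

```shell
# Point the slaves file at this single host for pseudo-distributed mode.
mkdir -p /opt/hadoop/etc/hadoop
echo 'yunpan' > /opt/hadoop/etc/hadoop/slaves
```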
(8) Configure environment variables
export HADOOP_HOME=/opt/hadoop
Then add it to PATH:
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
(9) Format the NameNode
hdfs namenode -format
(10) Start
Since we only need the HDFS distributed filesystem, starting just requires
sbin/start-dfs.sh
Afterwards, jps should list the NameNode, DataNode, and SecondaryNameNode processes.