Hadoop 2.7.3 cluster setup

Setting up a Hadoop cluster in a lab environment (3 nodes as an example)

1. Set the hostname and edit hosts

Edit /etc/sysconfig/network and change HOSTNAME to hadoop-node1 (or a name of your choice).

Edit /etc/hosts and add each node's hostname with its corresponding IP.
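For example, with hadoop-node1 at 172.16.1.216 (the address that shows up in the logs later in this post) and hypothetical addresses for the other two nodes, every node's /etc/hosts would contain entries like:

```
172.16.1.216   hadoop-node1
172.16.1.217   hadoop-node2
172.16.1.218   hadoop-node3
```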


2. Install the JDK and configure environment variables

2.1 Downloading and installing the JDK is not covered here.

2.2 Environment variables:

Add the following to /etc/profile:

export HADOOP_HOME=/home/scada/hadoop        # Hadoop install path
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_HOME=/home/scada/jdk1.7.0_80     # JDK install path
export PATH=$PATH:$JAVA_HOME/bin

Run source /etc/profile afterwards so the new variables take effect in the current shell.

3. Enable passwordless SSH from the master node to the other nodes

3.1 Install SSH (skip this if `ps -e | grep ssh` already shows an ssh process)

sudo apt-get install openssh-server

3.2 Generate a key pair on the master node

ssh-keygen -t rsa (just press Enter at each prompt)

3.3 Append the public key to authorized_keys

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

3.4 Copy authorized_keys from the master node to the other hosts

3.5 On every node, fix the permissions on authorized_keys:

chmod 600 .ssh/authorized_keys
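
Steps 3.4 and 3.5 can be sketched as the following commands, run on the master node. The scada user and node names come from this walkthrough, so adjust them to your environment; each scp will still prompt for a password, since passwordless login is only being set up now:

```shell
for node in hadoop-node2 hadoop-node3; do
    # this assumes ~/.ssh already exists on the target node
    scp ~/.ssh/authorized_keys scada@$node:~/.ssh/authorized_keys
    ssh scada@$node 'chmod 600 ~/.ssh/authorized_keys'
done

# verify: this should now log in without asking for a password
ssh scada@hadoop-node2 hostname
```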

4. Unpack the Hadoop tarball and create the filesystem directories under the Hadoop directory

tar -zxvf hadoop...

mv hadoop... hadoop

cd hadoop

mkdir -p dfs/name

mkdir -p dfs/data

mkdir tmp

(These relative paths sit under /home/scada/hadoop and match the dfs.namenode.name.dir, dfs.datanode.data.dir and hadoop.tmp.dir values configured in step 5.)

5. Edit the relevant Hadoop configuration files (in $HADOOP_HOME/etc/hadoop)

5.1 hadoop-env.sh, yarn-env.sh

In both files, set the JAVA_HOME value: export JAVA_HOME=/home/scada/jdk1.7.0_80 (the JDK install path)

5.2 slaves (hostnames of the cluster's slave nodes)

hadoop-node2
hadoop-node3
5.3 core-site.xml

<configuration>  
	<property>  
		<name>fs.defaultFS</name>  
		<value>hdfs://hadoop-node1:9000</value>  
	</property>    
	<property>  
		<name>hadoop.tmp.dir</name>  
		<value>file:/home/scada/hadoop/tmp</value>  
	</property>  
</configuration>  
5.4 hdfs-site.xml

<configuration>  
	<property>  
		<name>dfs.namenode.secondary.http-address</name>  
		<value>hadoop-node1:9001</value>  
	</property>  
	<property>  
		<name>dfs.namenode.name.dir</name>  
		<value>file:/home/scada/hadoop/dfs/name</value>  
	</property>  
	<property>  
		<name>dfs.datanode.data.dir</name>  
		<value>file:/home/scada/hadoop/dfs/data</value>  
	</property>  
	<property>  
		<name>dfs.replication</name>  
		<value>2</value>  
	</property>  
</configuration> 
5.5 mapred-site.xml

<configuration>  
	<property>                                                                    
		<name>mapreduce.framework.name</name>  
		<value>yarn</value>  
	</property>  
</configuration> 
5.6 yarn-site.xml

<configuration>  
	<property>  
		<name>yarn.resourcemanager.hostname</name>  
		<value>hadoop-node1</value>  
	</property>  
	<property>  
		<name>yarn.nodemanager.aux-services</name>  
		<value>mapreduce_shuffle</value>  
	</property>  
</configuration>

6. Copy the Hadoop directory to the other nodes
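
The post gives no command for this step; a minimal sketch, assuming the same /home/scada layout and scada user on every node:

```shell
for node in hadoop-node2 hadoop-node3; do
    scp -r /home/scada/hadoop scada@$node:/home/scada/
done
```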

7. Start Hadoop

hdfs namenode -format

start-all.sh

(start-all.sh is deprecated in Hadoop 2.x; start-dfs.sh followed by start-yarn.sh is the preferred equivalent.)

Check the daemons with jps on each node:

// hadoop-node1:/home/scada % jps
30099 NameNode
30270 SecondaryNameNode
22019 Jps
27124 ResourceManager
// hadoop-node2:/home/scada/hadoop/etc/hadoop % jps
17009 DataNode
30414 Jps
30277 NodeManager


Testing:

1. Upload a file:

/home/scada % hadoop fs -put jdk-7u80-linux-x64.tar.gz /
put: Call From hadoop-node1/172.16.1.216 to hadoop-node1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefuse
The upload failed, so check the logs on the slave node:
tail yarn-scada-nodemanager-hadoop-node2.log 
2017-04-01 06:48:00,878 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop-node1/172.16.1.216:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-04-01 06:48:01,879 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop-node1/172.16.1.216:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
How the problem was solved: http://www.cnblogs.com/dyllove98/archive/2013/06/20/3147024.html
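
For reference, the generic checks below (standard Linux commands, not specific to that link) help diagnose connection-refused errors like the two above; one frequent culprit is an /etc/hosts entry mapping the master hostname to 127.0.0.1, which makes the daemons bind to loopback so other nodes cannot reach them:

```shell
# On hadoop-node1: check that the NameNode (9000) and the
# ResourceManager resource tracker (8031) are actually listening
netstat -tlnp | grep -E ':(9000|8031)'

# If they are bound to 127.0.0.1, inspect the hosts file:
grep hadoop-node1 /etc/hosts

# After correcting /etc/hosts, restart the cluster:
stop-all.sh
start-all.sh
```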

Upload the file again:

// hadoop-node1:/home/scada % hadoop fs -put jdk-7u80-linux-x64.tar.gz /

Check that the upload succeeded:

// hadoop-node1:/home/scada % hadoop fs -ls /
Found 4 items
drwxr-xr-x   - scada supergroup          0 2017-03-27 08:35 /data
-rw-r--r--   2 scada supergroup  153530841 2017-04-01 06:57 /jdk-7u80-linux-x64.tar.gz
drwxr-xr-x   - scada supergroup          0 2017-03-28 00:59 /test
drwx------   - scada supergroup          0 2017-03-27 08:35 /tmp

Delete the JDK tarball from the local directory, then download the file we just uploaded back from the Hadoop distributed filesystem:

// hadoop-node1:/home/scada % ls
 jdk-7u80-linux-x64.tar.gz  
// hadoop-node1:/home/scada % rm jdk-7u80-linux-x64.tar.gz 
// hadoop-node1:/home/scada % ls
// hadoop-node1:/home/scada % hadoop fs -get /jdk-7u80-linux-x64.tar.gz
// hadoop-node1:/home/scada % ls
jdk-7u80-linux-x64.tar.gz 
OK, that's all for now~
Reposted from blog.csdn.net/tustzhoujian/article/details/68942491