Hadoop environment setup (JDK + Hadoop + SSH)
-------------------------
1. Install Hadoop
    a) Download hadoop-2.7.3.tar.gz
    b) Extract the tarball
        $>su centos ; cd ~
        $>cp /mnt/hdfs/downloads/bigdata/hadoop-2.7.3.tar.gz ~/downloads
        $>tar -xzvf hadoop-2.7.3.tar.gz
    c) (none)
    d) Move the extracted directory to /soft
        $>mv ~/downloads/hadoop-2.7.3 /soft/
    e) Create a symbolic link
        $>ln -s /soft/hadoop-2.7.3 /soft/hadoop
    f) Verify the Hadoop installation
        $>cd /soft/hadoop/bin
        $>./hadoop version
2. Configure the Hadoop environment variables (both JDK and Hadoop)
    $>sudo nano /etc/profile
    ...
    export JAVA_HOME=/soft/jdk
    export PATH=$PATH:$JAVA_HOME/bin
    export HADOOP_HOME=/soft/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
3. Apply the profile changes
    $>source /etc/profile

Configure Hadoop
--------------------
1. Standalone (local) mode
    Nothing to do! No separate Hadoop processes need to be started.
2. Pseudo-distributed mode
    a) Go to the ${HADOOP_HOME}/etc/hadoop directory
    b) Edit core-site.xml
        <?xml version="1.0"?>
        <configuration>
            <property>
                <name>fs.defaultFS</name>
                <value>hdfs://localhost/</value>
            </property>
        </configuration>
    c) Edit hdfs-site.xml
        <?xml version="1.0"?>
        <configuration>
            <property>
                <name>dfs.replication</name>
                <value>1</value>
            </property>
        </configuration>
    d) Edit mapred-site.xml
        Note: cp mapred-site.xml.template mapred-site.xml
        <?xml version="1.0"?>
        <configuration>
            <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
            </property>
        </configuration>
    e) Edit yarn-site.xml
        <?xml version="1.0"?>
        <configuration>
            <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>localhost</value>
            </property>
            <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
            </property>
        </configuration>
    f) Configure SSH
        1) Check that the SSH packages are installed (openssh-server + openssh-clients + openssh)
            $>yum list installed | grep ssh
        2) Check that the sshd daemon is running
            $>ps -Af | grep sshd
        3) Generate a public/private key pair on the client side
            $>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
        4) This creates the ~/.ssh directory containing id_rsa (private key) + id_rsa.pub (public key)
        5) Append the public key to the ~/.ssh/authorized_keys file (the file name and location are fixed)
            $>cd ~/.ssh
            $>cat id_rsa.pub >> authorized_keys
        6) Change the permissions of authorized_keys to 644
            $>chmod 644 authorized_keys
        7) Test
            $>ssh localhost
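The four XML edits in step 2 all follow one pattern: a <configuration> root holding <property> elements, each with a <name> and a <value>. As a sketch, a hypothetical helper (make_site_xml is not part of Hadoop, just an illustration) that emits such a file from alternating name/value arguments:

```shell
#!/bin/sh
# Hypothetical helper: print a minimal Hadoop *-site.xml built from
# alternating name/value argument pairs.
make_site_xml() {
    printf '<?xml version="1.0"?>\n<configuration>\n'
    while [ "$#" -ge 2 ]; do
        # Each pair becomes one <property> block.
        printf '    <property>\n        <name>%s</name>\n        <value>%s</value>\n    </property>\n' "$1" "$2"
        shift 2
    done
    printf '</configuration>\n'
}

# Example: regenerate the core-site.xml from step b).
make_site_xml fs.defaultFS hdfs://localhost/
```

Redirecting the output (e.g. `make_site_xml dfs.replication 1 > hdfs-site.xml`) would reproduce the other files the same way; the property names and values themselves must still come from the steps above.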
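The SSH steps in f) can be sketched as one idempotent script. This is an assumption-laden illustration, not part of the original procedure: the directory argument exists only so the function can be exercised somewhere other than ~/.ssh, whose name and location sshd actually requires.

```shell
#!/bin/sh
# Sketch of passwordless-SSH setup (steps 3-6 above), safe to re-run.
# The optional directory argument is for illustration/testing only;
# real use must target the default ~/.ssh.
setup_ssh_keys() {
    dir="${1:-$HOME/.ssh}"
    mkdir -p "$dir"
    chmod 700 "$dir"
    # Generate the key pair only if it does not exist yet (steps 3-4).
    [ -f "$dir/id_rsa" ] || ssh-keygen -q -t rsa -P '' -f "$dir/id_rsa"
    # Append the public key unless it is already authorized (step 5).
    pub="$(cat "$dir/id_rsa.pub")"
    grep -qF "$pub" "$dir/authorized_keys" 2>/dev/null \
        || printf '%s\n' "$pub" >> "$dir/authorized_keys"
    # authorized_keys must not be group/world writable (step 6).
    chmod 644 "$dir/authorized_keys"
}
```

The `grep -qF` guard is what makes re-running harmless: without it, each run would append a duplicate key line to authorized_keys.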
Reprinted from blog.csdn.net/lp284558195/article/details/79414353