Based on this blog post: http://www.cnblogs.com/tlm1992/p/tlm_hadoop_auto.html, with some modifications of my own.
Prerequisite: ssh is installed.
Workflow:
1. Edit the configuration files
(1) Hadoop: edit hdfs-site.xml, mapred-site.xml, and core-site.xml under hadoop/hadoop-1.2.1/conf/. Do not edit the slaves and masters files in that directory; the script regenerates them from the hosts file.
(2) HBase: edit hbase-site.xml under hadoop/hbase-0.94.16/conf. Do not edit hadoop/hbase-0.94.16/conf/regionservers; the script regenerates it from the hosts file.
2. Edit the hosts file in the hadoop/setup folder to list every node's IP address and name.
3. In setHadoopOnce.sh, set the loginName variable (the login account) and the pw variable (the node password); the account and password must be identical on every node. Also set the slaveNum variable.
4. In a terminal, cd into the setup folder and run the setHadoopOnce.sh script.
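For reference, a hosts file the scripts can consume might look like the sketch below (the addresses and names are illustrative, not from the original post). Each line is an IP and a node name separated by a single space, which the scripts split with cut:

```shell
# Hypothetical hosts file for a 1-master / 2-slave cluster.
# Format per line: <ip> <name>, single-space separated.
cat > hosts <<'EOF'
192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2
EOF

# The scripts below extract the two fields like this:
while read line; do
    ip=$(echo "$line" | cut -d" " -f1)
    name=$(echo "$line" | cut -d" " -f2)
    echo "ip=$ip name=$name"
done < hosts
```

Names must start with "master" or "slave", since the scripts branch on those prefixes.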
Full code:
1. setHadoopOnce.sh, the entry point of the whole process:
#!/bin/bash
# Entry point: regenerates the node lists from hosts, then pushes the files
# to every node and runs the per-node setup scripts via expect.
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:.
export PATH

pw=123456          # login password, identical on every node
loginName=hadoop   # login account, identical on every node
master=master
slave=slave
slaveNum=1

# truncate the node lists before regenerating them
> ../hadoop-1.2.1/conf/masters
> ../hadoop-1.2.1/conf/slaves
> ../hbase-0.94.16/conf/regionservers

# update local files
while read line
do
    echo $line
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        echo $name
        if [[ $name == master* ]]; then
            echo "$name" >> ../hadoop-1.2.1/conf/masters
        elif [[ $name == slave* ]]; then
            echo "$name" >> ../hadoop-1.2.1/conf/slaves
            echo "$name" >> ../hbase-0.94.16/conf/regionservers
        fi
    fi
done < hosts

# upload the files to all nodes and run the common setup
while read line
do
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        expect copyDataToAll.exp $ip $loginName $pw
        expect setForAll.exp $ip $loginName $pw
    fi
done < hosts

# master-only setup
while read line
do
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        if [[ $name == master* ]]; then
            expect setForMaster.exp $ip $loginName $pw
        fi
    fi
done < hosts

# fix ssh permissions on every node
while read line
do
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        expect setForSSH.exp $ip $loginName $pw
    fi
done < hosts
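The cut-based field extraction is repeated in every loop of the script above. As an aside, read can split the two fields directly, which is shorter and also tolerates runs of whitespace; a minimal sketch (the sample line is illustrative):

```shell
# Let read do the word splitting instead of piping through cut twice.
# "rest" swallows any trailing fields so name stays clean.
line="10.0.0.5   slave1"   # hypothetical hosts entry, note multiple spaces
read -r ip name rest <<< "$line"
echo "ip=$ip name=$name"
```

With cut -d" ", a doubled space would make the second field empty; read avoids that failure mode.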
2. copyDataToAll.exp, invoked from the second loop of setHadoopOnce.sh to copy the files to every node:
#!/usr/bin/expect
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
if {$argc != 3} { usage }
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000
spawn scp -r ../../hadoop ${username}@${hostip}:~
expect {
    "*assword:" {
        send "$password\n"
        expect eof
    }
    eof {}
}
3. setForAll.exp performs the per-node configuration; it is invoked right after copyDataToAll.exp in the same loop of setHadoopOnce.sh:
#!/usr/bin/expect
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
proc connect {pwd} {
    expect {
        "*(yes/no)?" {
            send "yes\n"
            expect "*assword:" {
                send "$pwd\n"
                expect {
                    "*Last login:*" { return 0 }
                }
            }
        }
        "*assword:" {
            send "$pwd\n"
            expect {
                "*Last login:*" { return 0 }
            }
        }
        "*Last login:*" { return 0 }
    }
    return 1
}
if {$argc != 3} { usage }
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000
spawn ssh ${username}@${hostip}
if {[connect $password]} { exit 1 }
# set hosts (needs sudo)
send "sudo bash ~/hadoop/setup/addHosts.sh\r"
expect "*assword*"
send "$password\r"
expect "*ddhostsucces*"
sleep 1
# generate and register the ssh key
send "ssh-agent bash ~/hadoop/setup/sshGen.sh\n"
expect {
    "*(yes/no)?" { send "yes\n"; exp_continue }
    "*verwrite (y/n)?" { send "n\n"; exp_continue }
    "*nter file in which to save the key*" { send "\n"; exp_continue }
    "*nter passphrase*" { send "\n"; exp_continue }
    "*nter same passphrase again*" { send "\n"; exp_continue }
    "*our public key has been saved*" { exp_continue }
    "*etsshGenSucces*" { sleep 1 }
}
# set environment variables
send "bash ~/hadoop/setup/setEnvironment.sh\n"
expect "*etEnvironmentSucces*"
sleep 1
send "exit\n"
expect eof
3.1 addHosts.sh is invoked from setForAll.exp to set up each node's hosts file:
#!/bin/bash
# Run with sudo: appends the cluster node list to /etc/hosts.
hadoopRoot=~/hadoop
cat $hadoopRoot/setup/hosts >> /etc/hosts
echo "addhostsuccess"
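Note that addHosts.sh appends the whole hosts file on every run, so re-running the setup duplicates entries in /etc/hosts. A hedged sketch of an idempotent variant, demonstrated on temporary files rather than the real /etc/hosts, guards each line with grep:

```shell
# Idempotent append: only add lines that are not already present verbatim.
# Demonstrated on temp files; the real script would target /etc/hosts.
src=$(mktemp); dst=$(mktemp)
printf '192.168.1.100 master\n192.168.1.101 slave1\n' > "$src"
printf '192.168.1.100 master\n' > "$dst"   # one entry already present

while read -r entry; do
    # -x: whole-line match, -F: literal string, -q: quiet
    grep -qxF "$entry" "$dst" || echo "$entry" >> "$dst"
done < "$src"

cat "$dst"   # the master line appears once; slave1 is appended
```

This keeps repeated runs of the whole pipeline from growing /etc/hosts without bound.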
3.2 sshGen.sh is invoked from setForAll.exp to generate the ssh key:
#!/bin/bash
sshPath=~/.ssh
setupPath=~/hadoop/setup
rm "$sshPath"/authorized_keys
sleep 1
ssh-keygen -t rsa   # its interactive prompts are answered by setForAll.exp
cat "$sshPath"/id_rsa.pub >> "$sshPath"/authorized_keys
ssh-add
echo "setsshGenSuccess"
3.3 setEnvironment.sh is invoked from setForAll.exp to set the environment variables:
#!/bin/bash
hadoopRoot=~/hadoop
hadoopPath=$hadoopRoot/hadoop-1.2.1
hbasePath=$hadoopRoot/hbase-0.94.16
setupPath=$hadoopRoot/setup

# append the JDK env vars only if java is missing or older than 1.6
JAVA_VERSION=`java -version 2>&1 | awk '/java version/ {print $3}' | sed 's/"//g' | awk '{if ($1>=1.6) print "ok"}'`
if [ "$JAVA_VERSION"x != "okx" ]; then
    cat "$setupPath"/jdkenv >> ~/.bashrc
    sleep 1
    source ~/.bashrc
    sleep 1
fi

echo "export JAVA_HOME=~/hadoop/jdk1.7.0" >> "$hadoopPath"/conf/hadoop-env.sh
echo "export JAVA_HOME=~/hadoop/jdk1.7.0" >> "$hbasePath"/conf/hbase-env.sh

# append the hadoop env vars only if the hadoop command is missing or older than 1.0
Hadoop_Version=`hadoop version | awk '/Hadoop/ {print $2}' | awk '{if ($1>=1.0) print "ok"}'`
if [ "$Hadoop_Version"x != "okx" ]; then
    cat "$setupPath"/hadoopenv >> ~/.bashrc
    sleep 1
    source ~/.bashrc
    sleep 1
fi
echo "setEnvironmentSuccess"
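The awk pipeline in setEnvironment.sh reduces the `java -version` banner to the string "ok" when the reported version is at least 1.6. The same logic can be checked in isolation; the banner text below is a typical example, not captured from a real JVM:

```shell
# Simulated `java -version` banner (real java prints it to stderr,
# which is why the script uses 2>&1).
banner='java version "1.7.0_45"'

JAVA_VERSION=$(echo "$banner" \
    | awk '/java version/ {print $3}' \
    | sed 's/"//g' \
    | awk '{if ($1>=1.6) print "ok"}')

echo "JAVA_VERSION=$JAVA_VERSION"
```

When the result is "ok", the if-branch is skipped and the jdkenv block is not appended to ~/.bashrc; any older or missing java leaves JAVA_VERSION empty and triggers the append.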
4. setForMaster.exp runs setForMaster.sh on the master over ssh to configure passwordless login:
#!/usr/bin/expect
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
proc connect {pwd} {
    expect {
        "*(yes/no)?" {
            send "yes\n"
            expect "*assword:" {
                send "$pwd\n"
                expect {
                    "*Last login:*" { return 0 }
                }
            }
        }
        "*assword:" {
            send "$pwd\n"
            expect {
                "*Last login:*" { return 0 }
            }
        }
        "*Last login:*" { return 0 }
    }
    return 1
}
if {$argc != 3} { usage }
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000
spawn ssh ${username}@${hostip}
if {[connect $password]} { exit 1 }
send "bash ~/hadoop/setup/setForMaster.sh\n"
expect {
    "*etForMasterSucces*" { sleep 1 }
    "*assword*" { send "$password\n"; exp_continue }
    "*(yes/no)?" { send "yes\n"; exp_continue }
}
4.1 setForMaster.sh collects every slave's public key into the master's authorized_keys, then pushes the merged file back to every slave:
#!/bin/bash
# collect every slave's authorized_keys into the master's
while read line
do
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        if [[ $name == slave* ]]; then
            scp $ip:~/.ssh/authorized_keys ~/tmpkey
            cat ~/tmpkey >> ~/.ssh/authorized_keys
        fi
    fi
done < ~/hadoop/setup/hosts
sleep 1
rm -f ~/tmpkey
# push the merged file back to every slave
while read line
do
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        if [[ $name == slave* ]]; then
            scp ~/.ssh/authorized_keys $ip:~/.ssh/authorized_keys
        fi
    fi
done < ~/hadoop/setup/hosts
echo "setForMasterSuccess"
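Because the master appends each slave's authorized_keys and then pushes the merged file back out, keys can accumulate duplicates across re-runs. A hedged sketch of a dedup step, shown on a temp file rather than the real ~/.ssh/authorized_keys, uses sort -u:

```shell
# Merge key files and drop duplicate lines.
# The key strings below are placeholders, not real keys.
merged=$(mktemp)
printf 'ssh-rsa AAAAkey1 master\n'  >> "$merged"
printf 'ssh-rsa AAAAkey2 slave1\n'  >> "$merged"
printf 'ssh-rsa AAAAkey1 master\n'  >> "$merged"   # duplicate from a re-run

sort -u "$merged" -o "$merged"   # sort in place, keeping unique lines
wc -l < "$merged"                # 2 unique keys remain
```

sshd ignores duplicate lines anyway, but deduplicating keeps the file from growing on every run.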
5. setForSSH.exp: even after SSH is configured, password prompts can persist because the permissions on ~/.ssh and ~/.ssh/authorized_keys are wrong, so this step fixes the permissions:
#!/usr/bin/expect
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
proc connect {pwd} {
    expect {
        "*(yes/no)?" {
            send "yes\n"
            expect "*assword:" {
                send "$pwd\n"
                expect {
                    "*Last login:*" { return 0 }
                }
            }
        }
        "*assword:" {
            send "$pwd\n"
            expect {
                "*Last login:*" { return 0 }
            }
        }
        "*Last login:*" { return 0 }
    }
    return 1
}
if {$argc != 3} { usage }
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000
spawn ssh ${username}@${hostip}
if {[connect $password]} { exit 1 }
sleep 1
send "bash ~/hadoop/setup/setForSSH.sh\n"
expect {
    "*ForSSHSuccess*" { sleep 1 }
}
send "exit\n"
expect eof
5.1 setForSSH.sh:
#!/bin/bash
chmod 700 ~/.ssh
chmod 644 ~/.ssh/authorized_keys
sleep 1
echo "setForSSHSuccess"
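To confirm the permissions actually took effect, stat can report the octal mode (GNU stat's -c shown here; BSD stat uses -f instead). A sketch against a throwaway directory mimicking ~/.ssh:

```shell
# Create a throwaway dir and verify the modes the script sets.
d=$(mktemp -d)
touch "$d/authorized_keys"
chmod 700 "$d"
chmod 644 "$d/authorized_keys"

stat -c '%a' "$d"                    # 700
stat -c '%a' "$d/authorized_keys"    # 644
```

sshd's StrictModes check refuses keys in a group- or world-writable ~/.ssh, which is why a wrong mode silently falls back to password prompts.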
Directory layout:
I wanted to upload the full source, but files over 10 MB cannot be attached. The attachment contains only the directory tree; the Linux scripts above live in the setup directory. Please download the hbase, hadoop, and jdk packages yourself.