Back up the data before replacing the old version, and confirm no earlier upgrade is still pending:
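A minimal backup sketch. The paths below are assumptions; use the actual dfs.name.dir configured in your hdfs-site.xml:

```shell
# Hypothetical locations -- substitute your real dfs.name.dir and backup target.
NAME_DIR=/home/hadoop/dfs/name
BACKUP_DIR=/home/hadoop/backup

mkdir -p "$BACKUP_DIR"
# Archive the NameNode metadata (fsimage + edits); losing this loses the filesystem.
tar czf "$BACKUP_DIR/namenode-meta-$(date +%Y%m%d).tar.gz" -C "$NAME_DIR" .
```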
[hadoop@Hadoop-10-4 hadoop]$ bin/hadoop dfsadmin -finalizeUpgrade
Warning: $HADOOP_HOME is deprecated.
[hadoop@Hadoop-10-4 hadoop]$ bin/hadoop dfsadmin -upgradeProgress status
Warning: $HADOOP_HOME is deprecated.
There are no upgrades in progress.
Upgrading Hadoop
1. Unpack the new Hadoop release
2. Copy core-site.xml, hdfs-site.xml, mapred-site.xml, masters, and slaves from the old hadoop/conf into the new hadoop/conf directory
3. Update the JAVA_HOME path in the new hadoop/conf/hadoop-env.sh
4. Copy guava-11.0.2.jar, protobuf-java-2.4.0a.jar, and zookeeper-3.4.5.jar from hbase/lib into hadoop/lib; without them, running the following command under Hadoop fails with class-not-found errors:
[hadoop@Hadoop-10-4 hadoop-1.0.4]$ bin/hadoop jar ../hbase-0.94.5/hbase-0.94.5.jar rowcounter games2
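Step 4 above amounts to something like the following (the install paths are assumptions; the jar versions match those named above):

```shell
# Assumed install locations -- adjust to your layout.
HBASE_HOME=/home/hadoop/soft/hbase-0.94.5
HADOOP_HOME=/home/hadoop/soft/hadoop-1.2.1

# Copy the HBase client dependencies that Hadoop needs to run HBase MapReduce jobs.
cp "$HBASE_HOME/lib/guava-11.0.2.jar" \
   "$HBASE_HOME/lib/protobuf-java-2.4.0a.jar" \
   "$HBASE_HOME/lib/zookeeper-3.4.5.jar" \
   "$HADOOP_HOME/lib/"
```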
Upgrading HBase
1. Copy hbase-site.xml and regionservers from the old hbase/conf into the new hbase/conf directory
2. Edit the new hbase/conf/hbase-env.sh as follows:
export JAVA_HOME=/home/hadoop/soft/jdk1.6.0_41
export HBASE_CLASSPATH=/home/hadoop/soft/hadoop/conf
export HBASE_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
3. Link Hadoop's native libraries into hbase/lib and replace the bundled hadoop-core jar:
mv hbase/lib/native native.bak
ln -s /home/hadoop/soft/hadoop/lib/native native
mv hadoop-core-1.0.4.jar hadoop-core-1.0.4.jar.bak
cp /home/hadoop/soft/hadoop/hadoop-core-1.2.1.jar ./
4. After swapping the versions, do not start Hadoop right away, or the NameNode will fail with the following error:
2013-09-04 15:07:11,813 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
File system image contains an old layout version -32.
An upgrade to version -41 is required.
Please restart NameNode with -upgrade option.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:338)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
The correct way is to start HDFS with the -upgrade flag:
[hadoop@Hadoop-10-4 hadoop]$ bin/start-dfs.sh -upgrade
Check that the data blocks are intact (fsck requires a path argument):
bin/hadoop fsck / -blocks
Once the report is healthy, close out the upgrade with finalizeUpgrade (see below).
If you need to roll back:
bin/start-dfs.sh -rollback
HBase needs no special handling; just start it normally.
Once the cluster has been running normally for a while (or immediately, if you are certain no data was lost), run hadoop dfsadmin -finalizeUpgrade to make the new version permanent. Until you finalize, as long as the old Hadoop installation has not been deleted, you can still return to it with start-dfs.sh -rollback.
After finalizeUpgrade, the disk space held by the old version is only released after the cluster is restarted (or after running hadoop namenode -finalize).
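Putting the finalize step together (a sketch; run from the Hadoop directory on the NameNode host):

```shell
# Make the upgrade permanent -- rollback is impossible after this.
bin/hadoop dfsadmin -finalizeUpgrade
# Space held by the retained pre-upgrade block copies is only freed after a restart:
bin/stop-dfs.sh
bin/start-dfs.sh
```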
Use
bin/hadoop dfsadmin -upgradeProgress status
to check how the upgrade is going, and
bin/hadoop dfsadmin -upgradeProgress details
to see more detailed information.
If the upgrade gets stuck, you can use
bin/hadoop dfsadmin -upgradeProgress force
to force it to continue (this command is dangerous; use it with caution).
After the HDFS upgrade finishes, Hadoop still keeps the old version's metadata so that you can conveniently downgrade HDFS.
Run bin/start-dfs.sh -rollback to perform the downgrade.