A CDH Hive Error and the Chain of Mishaps It Triggered

Today we happened to notice that the Hive MetaStore Server in one of our CDH clusters had gone into an abnormal state, so we checked the relevant logs. The log in question is /var/log/hive/hadoop-cmf-hive-HIVEMETASTORE-sbh01.esgyn.cn.log.out on the node hosting the Hive MetaStore Server, and it reported the following error:

2019-10-31 06:22:51,467 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-5-thread-199]: 199: source:172.31.232.22 get_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_89c6545c32a1b6da390011bba5c4799a tbl=CM_TEST_TABLE
2019-10-31 06:22:51,467 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-199]: ugi=hue        ip=172.31.232.22        cmd=source:172.31.232.22 get_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_89c6545c32a1b6da390011bba5c4799a tbl=CM_TEST_TABLE
2019-10-31 06:22:51,470 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-199]: Retrying HMSHandler after 2000 ms (attempt 4 of 10) with error: javax.jdo.JDOException: Exception thrown when executing query
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
        at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
        at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1217)
        at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1024)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
        at com.sun.proxy.$Proxy6.getTable(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1950)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1905)
        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
        at com.sun.proxy.$Proxy8.get_table(Unknown Source)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:10128)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:10112)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'A0.OWNER_TYPE' in 'field list'
        at sun.reflect.GeneratedConstructorAccessor20.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
        at com.mysql.jdbc.Util.getInstance(Util.java:387)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:939)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3878)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814)

Based on the error message, we suspected a problem with the tables in Hive's metastore database: roughly, an OWNER_TYPE column could not be found when querying the table CM_TEST_TABLE.
So we went into Hive's metastore database in MySQL to look for this table and check the column, but we could not find it (the table may well have existed and we simply failed to locate it). Since this was only a test environment, we decided to work around the problem by removing Hive and reinstalling it. We deleted the whole Hive service from the CM console and added it back, but a short while after starting, Hive reported the same error again. The reason is that the hive database in the MySQL metastore was still broken; deleting the Hive service does not delete the hive database in MySQL.
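Before resorting to a reinstall, a more direct way to confirm the diagnosis would have been to ask information_schema whether the column exists anywhere in the metastore schema; its absence usually points to a metastore schema version mismatch. A minimal sketch, assuming the metastore database is named hive:

-- List every table in the hive metastore schema that has an OWNER_TYPE column
-- (assumes the metastore database is named hive).
SELECT TABLE_NAME, COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'hive'
  AND COLUMN_NAME = 'OWNER_TYPE';
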
In the end, we decided to first drop the hive database in MySQL and then re-initialize it. And this is where the trouble started: while dropping the Hive database we dropped the wrong one and deleted the mysql database instead... The commands that were executed:

/usr/share/mysql-5.6.31/bin/mysql -u root --password='xxx'
show databases;
drop database mysql;

After the mysql database was dropped, we found that re-creating the hive user failed. This is because the mysql database contains the user table, and all account information lives in mysql.user.
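
For context, creating the metastore account would normally look something like the sketch below (the password and host are hypothetical placeholders); with the mysql database gone there is no user table to write to, so statements like these fail:

-- Hypothetical example of how the hive account would normally be created;
-- it cannot succeed here because mysql.user no longer exists.
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive_password';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;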

Now that the entire mysql database was gone, we needed a way to restore it. We logged in to the MySQL metadata database on another test cluster and saw that its mysql database contains 28 tables, some of which hold data. The table list is as follows:

mysql> use mysql;
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_mysql           |
+---------------------------+
| columns_priv              |
| db                        |
| event                     |
| func                      |
| general_log               |
| help_category             |
| help_keyword              |
| help_relation             |
| help_topic                |
| innodb_index_stats        |
| innodb_table_stats        |
| ndb_binlog_index          |
| plugin                    |
| proc                      |
| procs_priv                |
| proxies_priv              |
| servers                   |
| slave_master_info         |
| slave_relay_log_info      |
| slave_worker_info         |
| slow_log                  |
| tables_priv               |
| time_zone                 |
| time_zone_leap_second     |
| time_zone_name            |
| time_zone_transition      |
| time_zone_transition_type |
| user                      |
+---------------------------+
28 rows in set (0.00 sec)

So we used mysqldump to export the structure and data of these tables into a single file with the following command:

/usr/local/mysql/bin/mysqldump -uroot --password='xxx' --databases mysql >mysql.sql

We copied this file to the problem cluster and ran it in MySQL to create the tables and import the data. After the import we ran a few UPDATE statements, because several rows still held the IP address and hostnames of the source environment:

/usr/share/mysql-5.6.31/bin/mysql -u root --password='xxx' < mysql.sql
-- The following records need to be updated
use mysql;
update user set host='172.31.232.22' where host='172.31.234.1';
update user set host='sbh02.esgyn.cn' where host='uathx02.esgyn.cn';
update user set host='sbh01.esgyn.cn' where host='uathx01.esgyn.cn';
update proxies_priv set host='sbh01.esgyn.cn' where host='uathx01.esgyn.cn';
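
A step worth adding here: MySQL caches the grant tables in memory, so accounts imported or modified with direct INSERT/UPDATE statements only take effect after the privilege cache is reloaded (or mysqld is restarted). A minimal addition:

-- Reload the grant tables so the restored and updated accounts take effect
-- (restarting mysqld has the same effect).
FLUSH PRIVILEGES;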

At this point the mysql database was essentially restored. Because the user accounts and passwords in the exported environment were basically identical to those in the target environment, we did not need to modify anything in mysql.user, nor did we need to re-create the hive user, since mysql.user already contained the corresponding records. All that remained was to drop the hive database in the MySQL metastore and create an empty one in its place:

/usr/share/mysql-5.6.31/bin/mysql -uroot --password='xxx'
drop database hive;
create database hive;
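
Dropping and re-creating a database in MySQL does not remove the privileges that were granted on it (those live in mysql.db), so the restored hive account keeps its access to the new, empty hive database. A quick sanity check, assuming the account is 'hive'@'%':

-- Confirm the metastore account still has its grants on the hive database.
SHOW GRANTS FOR 'hive'@'%';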

Now we added the Hive service back once more from the CM console and started it, and this time Hive finally returned to normal!
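
As an optional check after the service comes up (assuming the metastore database is still named hive), you can confirm that re-adding the Hive service initialized the metastore schema in the new, empty database:

-- The freshly created hive database should now be populated with metastore
-- tables such as DBS, TBLS and VERSION.
SHOW TABLES FROM hive;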

Reposted from blog.csdn.net/Post_Yuan/article/details/102872004