Hive UPDATE and DELETE fail with "Attempt to do update or delete using transaction manager"

With the default configuration, running UPDATE or DELETE produces the following:

hive> select * from userdb.student;
OK
1009	99
1001	zhangsan
1002	lisi
1003	wangwu
1004	liliu
1005	mengmeng
1008	chengcheng
Time taken: 0.522 seconds, Fetched: 7 row(s)

hive> delete from userdb.student where id = 1002;
FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

hive> update userdb.student set name=wanggang where id = 1008;
FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

As the errors show, the transaction manager in use does not support UPDATE and DELETE operations.
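Before changing anything, the active transaction manager can be inspected from the Hive CLI (a quick diagnostic; in Hive 1.x the default is the non-ACID DummyTxnManager):

```sql
-- Print the current values; with the defaults, UPDATE and DELETE
-- fail with Error 10294 as shown above.
SET hive.txn.manager;
-- hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
SET hive.support.concurrency;
-- hive.support.concurrency=false
```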

It turns out that supporting UPDATE and DELETE requires some additional configuration, documented here:
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-NewConfigurationParametersforTransactions
The relevant parameters are as follows:

<property>
    <name>hive.support.concurrency</name>
    <value>true</value>
</property>
<property>
    <name>hive.enforce.bucketing</name>
    <value>true</value>
</property>
<property>
    <name>hive.exec.dynamic.partition.mode</name>
    <value>nonstrict</value>
</property>
<property>
    <name>hive.txn.manager</name>
    <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
    <name>hive.compactor.initiator.on</name>
    <value>true</value>
</property>
<property>
    <name>hive.compactor.worker.threads</name>
    <value>1</value>
</property>

With the configuration in place I expected everything to work, but instead hit this error:

[admin@admin02 apache-hive-1.2.1-bin]$ bin/hive

Logging initialized using configuration in jar:file:/home/admin/modules/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> select * from userdb.student;
FAILED: LockException [Error 10280]: Error communicating with the metastore

This pointed to a problem with the metastore database, so I raised the log level to DEBUG to find the underlying error.
By default Hive writes its log to hive.log under /tmp/<user.name>, i.e. the full path is /tmp/<current user>/hive.log.
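To capture the DEBUG output shown below, the root log level can be raised in conf/hive-log4j.properties (a sketch for Hive 1.2, which uses log4j 1.x; newer versions use hive-log4j2.properties instead):

```properties
# conf/hive-log4j.properties — raise the root logger from INFO to DEBUG.
# DRFA is the daily rolling file appender writing to /tmp/<user>/hive.log.
hive.root.logger=DEBUG,DRFA
```

Alternatively, `bin/hive --hiveconf hive.root.logger=DEBUG,console` enables DEBUG output for a single session without editing the file.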

2020-07-29 16:14:15,319 INFO  [main]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: select * from userdb.student
2020-07-29 16:14:15,816 INFO  [main]: parse.ParseDriver (ParseDriver.java:parse(209)) - Parse Completed
2020-07-29 16:14:15,817 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=parse start=1596053655316 end=1596053655817 duration=501 from=org.apache.hadoop.hive.ql.Driver>
2020-07-29 16:14:15,861 ERROR [main]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(159)) - MetaException(message:Unable to update transaction database com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'metastore.TXNS' doesn't exist
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:411

So the transaction tables were missing from the metastore database (the log above complains about metastore.TXNS; COMPACTION_QUEUE was missing as well). I checked MySQL and, sure enough, the tables were not there. Why not? After searching for a long time without finding a cause, I turned to the source code.
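This can be confirmed directly in the MySQL database that backs the metastore (assuming it is named `metastore`, as the error message indicates):

```sql
-- Run in the MySQL client backing the Hive metastore.
-- On a broken setup, both queries return empty result sets.
USE metastore;
SHOW TABLES LIKE 'TXNS';
SHOW TABLES LIKE 'COMPACTION_QUEUE';
```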

I found the table-creation DDL in the TxnDbUtil class under org.apache.hadoop.hive.metastore.txn, and traced it to the following method, which is what invokes those statements:

private void checkQFileTestHack() {
    boolean hackOn = HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_IN_TEST) ||
        HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_IN_TEZ_TEST);
    if (hackOn) {
      LOG.info("Hacking in canned values for transaction manager");
      // Set up the transaction/locking db in the derby metastore
      TxnDbUtil.setConfValues(conf);
      try {
        TxnDbUtil.prepDb();
      } catch (Exception e) {
        // We may have already created the tables and thus don't need to redo it.
        if (!e.getMessage().contains("already exists")) {
          throw new RuntimeException("Unable to set up transaction database for" +
              " testing: " + e.getMessage());
        }
      }
    }
  }

In other words, the table-creation code only runs under one extra condition: HIVE_IN_TEST or HIVE_IN_TEZ_TEST must be set. DELETE and UPDATE are effectively only enabled in a test environment, which is understandable given that the feature was not yet fully developed.
Having finally found the cause, the fix is simple: add the following to hive-site.xml:

<property>
    <name>hive.in.test</name>
    <value>true</value>
</property>

After restarting the service, queries worked again, but DELETE still failed:

hive> select * from userdb.student;
OK
1009	99
1001	zhangsan
1002	lisi
1003	wangwu
1004	liliu
1005	mengmeng
1008	chengcheng
Time taken: 0.104 seconds, Fetched: 7 row(s)
hive> delete from userdb.student where id = 1009 ;
FAILED: SemanticException [Error 10297]: Attempt to do update or delete on table userdb.student that does not use an AcidOutputFormat or is not bucketed

The error says that the table being deleted from, userdb.student, does not use an AcidOutputFormat or is not bucketed; evidently the output format must be ACID-capable and the table must be bucketed.
Searching online confirmed this: currently only the ORC file format supports AcidOutputFormat, and on top of that the table must be created with the property ('transactional' = 'true').

The official documentation gives the corresponding CREATE TABLE statement:
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-ConfigurationValuestoSetforINSERT,UPDATE,DELETE


CREATE TABLE table_name (
  id                int,
  name              string
)
CLUSTERED BY (id) INTO 2 BUCKETS STORED AS ORC
TBLPROPERTIES ("transactional"="true",
  "compactor.mapreduce.map.memory.mb"="2048",     -- specify compaction map job properties
  "compactorthreshold.hive.compactor.delta.num.threshold"="4",  -- trigger minor compaction if there are more than 4 delta directories
  "compactorthreshold.hive.compactor.delta.pct.threshold"="0.5" -- trigger major compaction if the ratio of size of delta files to
                                                                   -- size of base files is greater than 50%
);

I recreated the table locally with 'transactional' = 'true' and the related properties:

hive> CREATE TABLE userdb.student (
    >   id                int,
    >   name              string
    > )
    > CLUSTERED BY (id) INTO 2 BUCKETS STORED AS ORC
    > TBLPROPERTIES ("transactional"="true",
    >   "compactor.mapreduce.map.memory.mb"="2048",   
    >   "compactorthreshold.hive.compactor.delta.num.threshold"="4", 
    >   "compactorthreshold.hive.compactor.delta.pct.threshold"="0.5"                                                                  
    > );
OK
Time taken: 0.17 seconds

Insert a new row into userdb.student:

hive> insert into userdb.student values(1001,"rocky");

Query ID = admin_20200729162914_8cbd1c8c-06ca-4ede-a88c-f74ce0aa268e
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1596046527562_0004, Tracking URL = http://admin02:8088/proxy/application_1596046527562_0004/
Kill Command = /home/admin/modules/hadoop-2.7.2/bin/hadoop job  -kill job_1596046527562_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2
2020-07-29 16:29:23,089 Stage-1 map = 0%,  reduce = 0%
2020-07-29 16:29:29,481 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.02 sec
2020-07-29 16:29:36,153 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 4.8 sec
2020-07-29 16:29:37,203 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 7.87 sec
MapReduce Total cumulative CPU time: 7 seconds 870 msec
Ended Job = job_1596046527562_0004
Loading data to table userdb.student
Table userdb.student stats: [numFiles=2, numRows=1, totalSize=890, rawDataSize=0]
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 2   Cumulative CPU: 7.87 sec   HDFS Read: 10162 HDFS Write: 1008 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 870 msec
OK
Time taken: 24.276 seconds

Query the data in userdb.student:

hive> select * from userdb.student;
OK
1001	rocky
Time taken: 0.117 seconds, Fetched: 1 row(s)

Run the DELETE statement and check for errors:

hive> delete from userdb.student where id = 1001;
Query ID = admin_20200729163018_236b77d1-97f0-42ee-bc2d-ab45d719b49c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1596046527562_0005, Tracking URL = http://admin02:8088/proxy/application_1596046527562_0005/
Kill Command = /home/admin/modules/hadoop-2.7.2/bin/hadoop job  -kill job_1596046527562_0005
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 2
2020-07-29 16:30:25,042 Stage-1 map = 0%,  reduce = 0%
2020-07-29 16:30:30,473 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 2.05 sec
2020-07-29 16:30:31,501 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.62 sec
2020-07-29 16:30:43,961 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 10.92 sec
MapReduce Total cumulative CPU time: 10 seconds 920 msec
Ended Job = job_1596046527562_0005
Loading data to table userdb.student
Table userdb.student stats: [numFiles=3, numRows=0, totalSize=1416, rawDataSize=0]
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 2   Cumulative CPU: 10.92 sec   HDFS Read: 18847 HDFS Write: 640 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 920 msec
OK
Time taken: 26.904 seconds

The DELETE statement completed without any errors; now check whether the table still contains the row:

hive> select * from userdb.student;
OK
Time taken: 0.91 seconds

The table is now empty. After inserting the row again, run an UPDATE to check whether the current configuration supports it:

hive> update userdb.student set name="bob" where id = 1001;
Query ID = admin_20200729163405_df8a3f9e-49cb-446f-9972-370abb655c95
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1596046527562_0007, Tracking URL = http://admin02:8088/proxy/application_1596046527562_0007/
Kill Command = /home/admin/modules/hadoop-2.7.2/bin/hadoop job  -kill job_1596046527562_0007
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 2
2020-07-29 16:34:11,236 Stage-1 map = 0%,  reduce = 0%
2020-07-29 16:34:21,483 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 2.91 sec
2020-07-29 16:34:23,556 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 13.15 sec
2020-07-29 16:34:28,729 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 15.33 sec
2020-07-29 16:34:29,753 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 17.45 sec
MapReduce Total cumulative CPU time: 17 seconds 450 msec
Ended Job = job_1596046527562_0007
Loading data to table userdb.student
Table userdb.student stats: [numFiles=6, numRows=1, totalSize=3001, rawDataSize=0]
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 2   Cumulative CPU: 17.45 sec   HDFS Read: 21536 HDFS Write: 802 SUCCESS
Total MapReduce CPU Time Spent: 17 seconds 450 msec
OK
Time taken: 25.32 seconds

Verify that the UPDATE took effect:

hive> select * from userdb.student;
OK
1001	bob
Time taken: 0.172 seconds, Fetched: 1 row(s)

To sum up: for HiveQL DELETE and UPDATE statements to work, the table must be created with
"transactional"="true"
along with ORC storage and bucketing, on top of the transaction settings configured earlier.
Also worth noting: even the simplest statements now take quite a while to run; that is something to optimize later.
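To check whether an existing table is eligible for UPDATE/DELETE, its properties and storage layout can be inspected from the Hive CLI (diagnostic statements; table name taken from this article):

```sql
-- Lists TBLPROPERTIES; a transactional table shows transactional=true.
SHOW TBLPROPERTIES userdb.student;
-- Full description, including storage format (ORC) and bucket count.
DESCRIBE FORMATTED userdb.student;
```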

Reposted from blog.csdn.net/Victory_Lei/article/details/107669460