Copyright notice: please credit the source when reposting: https://blog.csdn.net/qq_31807385/article/details/84677262
Creating a database:
The operations are as follows:
0: jdbc:hive2://hadoop108:10000> show databases;
OK
+----------------+--+
| database_name |
+----------------+--+
| default |
+----------------+--+
1 row selected (2.047 seconds)
0: jdbc:hive2://hadoop108:10000> show tables;
OK
+-----------+--+
| tab_name |
+-----------+--+
+-----------+--+
No rows selected (0.101 seconds)
0: jdbc:hive2://hadoop108:10000> create database db_hive;
OK
No rows affected (0.417 seconds)
0: jdbc:hive2://hadoop108:10000> show databases;
OK
+----------------+--+
| database_name |
+----------------+--+
| db_hive |
| default |
+----------------+--+
2 rows selected (0.045 seconds)
After the operations above, i.e., once the database has been created, its default storage location is:
/user/hive/warehouse/db_hive.db
At the same time, when the user directory is created, Hive also creates
/tmp/
under the root directory.
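You can verify this without leaving beeline: Hive's `dfs` command runs HDFS shell commands from inside the session. A quick check, assuming the default `hive.metastore.warehouse.dir` setting:

```sql
-- List the warehouse directory; the newly created database
-- should appear as a <db_name>.db subdirectory.
dfs -ls /user/hive/warehouse;
```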
When creating a database, you can also specify its location on HDFS. The syntax is as follows:
0: jdbc:hive2://hadoop108:10000> create database db_hive2 location '/db_hive2.db';
OK
A new directory then also appears on HDFS:
/db_hive2.db
The metadata for the databases we create is stored in the metastore, as shown in the figure below:
As you can see, every database is given its own location when it is created. If no location is specified at creation time, the database's location defaults to a directory under /user/hive/warehouse.
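Putting the options together, the general CREATE DATABASE form in Hive combines an optional comment, location, and properties in one statement. A sketch (db_hive3 and the property values are hypothetical names used only for illustration):

```sql
-- IF NOT EXISTS suppresses the error when the database already exists.
CREATE DATABASE IF NOT EXISTS db_hive3
COMMENT 'demo database'               -- free-text description shown by DESC DATABASE
LOCATION '/db_hive3.db'               -- HDFS path; omit to use the warehouse default
WITH DBPROPERTIES ('creator' = 'isea');
```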
Querying, switching, and modifying databases:
0: jdbc:hive2://hadoop108:10000> desc database db_hive;
OK
+----------+----------+-------------------------------------------------------+-------------+-------------+-------------+--+
| db_name | comment | location | owner_name | owner_type | parameters |
+----------+----------+-------------------------------------------------------+-------------+-------------+-------------+--+
| db_hive | | hdfs://hadoop108:9000/user/hive/warehouse/db_hive.db | isea | USER | |
+----------+----------+-------------------------------------------------------+-------------+-------------+-------------+--+
0: jdbc:hive2://hadoop108:10000> desc database extended db_hive;
OK
+----------+----------+-------------------------------------------------------+-------------+-------------+-------------+--+
| db_name | comment | location | owner_name | owner_type | parameters |
+----------+----------+-------------------------------------------------------+-------------+-------------+-------------+--+
| db_hive | | hdfs://hadoop108:9000/user/hive/warehouse/db_hive.db | isea | USER | |
+----------+----------+-------------------------------------------------------+-------------+-------------+-------------+--+
0: jdbc:hive2://hadoop108:10000> use db_hive;
Users can run the ALTER DATABASE command to set key-value pairs in a database's DBPROPERTIES, describing the database's properties. No other database metadata can be changed, including the database name and the directory where the database resides.
0: jdbc:hive2://hadoop108:10000> alter database db_hive set dbproperties('createtime'='12');
OK
No rows affected (0.14 seconds)
0: jdbc:hive2://hadoop108:10000> desc database extended db_hive;
OK
+----------+----------+-------------------------------------------------------+-------------+-------------+------------------+--+
| db_name | comment | location | owner_name | owner_type | parameters |
+----------+----------+-------------------------------------------------------+-------------+-------------+------------------+--+
| db_hive | | hdfs://hadoop108:9000/user/hive/warehouse/db_hive.db | isea | USER | {createtime=12} |
+----------+----------+-------------------------------------------------------+-------------+-------------+------------------+--+
Dropping a database:
0: jdbc:hive2://hadoop108:10000> drop database db_hive2;
OK
No rows affected (0.532 seconds)
0: jdbc:hive2://hadoop108:10000> show databases;
OK
+----------------+--+
| database_name |
+----------------+--+
| db_hive |
| default |
+----------------+--+
2 rows selected (0.038 seconds)
At this point the HDFS directory corresponding to db_hive2 is deleted as well:
/db_hive2.db is also removed
If the database is not empty, i.e., it still contains tables, the statement above cannot drop it; in that case, use the following statement instead:
0: jdbc:hive2://hadoop108:10000> drop database db_hive2 cascade;
OK
No rows affected (0.635 seconds)
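For scripts that may run repeatedly, the two options can be combined; a sketch of the fully guarded form (db_hive2 reused here purely as an example):

```sql
-- IF EXISTS avoids an error when the database is already gone;
-- CASCADE drops the database together with any tables it contains.
DROP DATABASE IF EXISTS db_hive2 CASCADE;
```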