Commonly used Hive commands and functions

  • 1 - Create a table
-- Managed (internal) table:
create table aa(col1 string, col2 int) partitioned by (statdate int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
-- External table:
create external table bb(col1 string, col2 int) partitioned by (statdate int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' location '/user/gaofei.lu/';
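The practical difference between the two (a common stumbling point): dropping a managed table deletes its data from HDFS, while dropping an external table removes only the metadata and leaves the files in place. A quick way to check which kind a given table is:

```sql
-- The output includes "Table Type: MANAGED_TABLE" or "Table Type: EXTERNAL_TABLE"
describe formatted aa;
```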
  • 2 - Show the CREATE TABLE statement
show create table tablename;
  • 3 - Load data into a table
-- From the local filesystem (the file is copied):
load data local inpath '/home/gaofei.lu/aa.txt' into table aa partition(statdate=20170403);
-- From HDFS (the file is moved into the table's directory):
load data inpath '/user/gaofei.lu/aa.txt' into table bb partition(statdate=20170403);
  • 4 - Modify table properties
-- Convert a managed table to external:
alter table aa set tblproperties ('EXTERNAL'='TRUE');
-- Convert an external table back to managed:
alter table bb set tblproperties ('EXTERNAL'='FALSE');
  • 5 - Modify columns
-- Rename a column and change its data type:
alter table aa change col2 name string;
-- Move a column to the first position:
alter table aa change col2 name string first;
-- Move a column after a specific column:
alter table aa change col1 dept string after name;
  • 6 - Add columns (use with caution)
alter table aa add columns(col3 string); 
  • 7 - Rename a table
alter table aa rename to aa_test; 
  • 8 - Add a partition
alter table aa add partition(statdate=20170404);
alter table bb add partition(statdate=20170404) location '/user/gaofei.lu/20170404.txt';
  • 9 - Show table partitions
show partitions aa;
  • 10 - Modify a partition
alter table aa partition(statdate=20170404) rename to partition(statdate=20170405);
-- Changes only the metadata; the data files are not moved:
alter table bb partition(statdate=20170404) set location '/user/gaofei.lu/aa.txt';
  • 11 - Drop a partition
alter table aa drop if exists partition(statdate=20170404); 
  • 12 - Connect with beeline
beeline
!connect jdbc:hive2://192.168.1.17:10000
(or in one step: beeline -u jdbc:hive2://192.168.1.17:10000)
  • 13 - Set Hive to run on Spark
set hive.execution.engine=spark;
  • 14 - Kill a running job
yarn application -kill <application_id>
  • 15 - Export query results with a specified delimiter
insert overwrite local directory '/home/hadoop/gaofeilu/test_delimited.txt'
row format delimited
fields terminated by '\t'
select * from test;
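Note that the path above names a directory despite the `.txt` suffix: Hive writes one or more delimited files (typically named `000000_0`, `000001_0`, ...) inside it. A minimal Python sketch of reading such a tab-delimited export back in (the sample rows here are invented for illustration):

```python
import csv
import io

# Sample content mimicking a Hive export whose fields are terminated by '\t'.
sample = "1\talice\t20170403\n2\tbob\t20170404\n"

# csv.reader with delimiter='\t' splits each line on tabs.
rows = list(csv.reader(io.StringIO(sample), delimiter="\t"))
print(rows)  # [['1', 'alice', '20170403'], ['2', 'bob', '20170404']]
```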

Origin www.cnblogs.com/sx66/p/12039542.html