Reposted study notes, taken from a Shiyanlou (实验楼) lab course
1. Experiment Introduction
⭐ Experiment contents
- Importing and exporting Hive data
- The EXPORT command exports the data of a table or partition, together with its metadata, to a specified output location. That output can then be moved to a different Hadoop or Hive instance and loaded there with the IMPORT command (see the sketch after this list).
- When exporting a partitioned table, the original data may live in different HDFS locations; exporting/importing a subset of its partitions is also supported.
- The exported metadata is stored in the target directory, and the data files are stored in subdirectories.
- EXPORT and IMPORT work independently of the metastore DBMS used by the source and target; for example, they can be used between Derby and MySQL databases.
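As a minimal sketch of that cross-instance flow, with a hypothetical table name (sales) and hypothetical NameNode addresses:
-- on the source Hive instance
export table sales to '/tmp/sales_export';
# copy the export directory to the target cluster (shell command, uses DistCp)
hadoop distcp hdfs://source-nn:8020/tmp/sales_export hdfs://target-nn:8020/tmp/sales_export
-- on the target Hive instance
import table sales from '/tmp/sales_export';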
⭐ Knowledge points
- SQL
- HDFS basics
- The syntax for moving data between Hive and HDFS in both directions
2. Experiment Preparation
This experiment builds on the previous ones; the table operated on here is the one used in the second experiment.
⭐ Switch to the hadoop user
su -l hadoop # the password is hadoop
⭐ Start HDFS
cd /opt/hadoop-2.7.3/sbin
hdfs namenode -format # if you have already formatted once and are using a saved environment, skip this step; otherwise format once before starting
./start-all.sh
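To confirm the daemons actually came up, one common check (assuming the JDK's jps tool is on the PATH) is:
jps # should list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager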
⭐ Start MySQL
sudo service mysql start
⭐ To make it easy to inspect imported and exported data, create a tmp directory under /user/hive/ on HDFS as the destination for exported data.
During the experiment you can view the import/export results with the following command.
cd /opt/hadoop-2.7.3/sbin
hdfs dfs -ls /user/hive/warehouse # this is the default path where Hive creates tables
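The tmp directory mentioned above is not created by the listing command itself; assuming the hadoop user has write access under /user/hive/, it can be created with:
hdfs dfs -mkdir -p /user/hive/tmp # create the export destination on HDFS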
3. Import/Export
⭐ Data export (EXPORT)
EXPORT TABLE tablename [PARTITION (part_column="value"[, ...])]
TO 'export_target_path' [ FOR replication('eventid') ]
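As a sketch of the PARTITION clause, assuming a hypothetical table logs partitioned by a ds column:
export table logs partition (ds="2017-01-01") to '/user/hive/tmp/logs_export'; -- exports only that partition, plus its metadata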
⭐ Data import (IMPORT)
IMPORT [[EXTERNAL] TABLE new_or_original_tablename [PARTITION (part_column="value"[, ...])]]
FROM 'source_path' [LOCATION 'import_target_path']
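The optional parts of this syntax allow renaming the table on import and importing it as an external table; a sketch with hypothetical table names, reusing the export directory from the example below:
import table shiyanlou1_copy from '/user/hive/tmp'; -- import under a new table name
import external table shiyanlou1_ext from '/user/hive/tmp' location '/user/hive/ext_data'; -- import as an external table whose data stays at the given location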
⭐ Simple import/export
export table shiyanlou1 to '/user/hive/tmp/';
import from '/user/hive/tmp';
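After running the export, the target directory should contain a _metadata file and a data subdirectory, which you can confirm with:
hdfs dfs -ls /user/hive/tmp # expect _metadata plus a data/ subdirectory
Note that importing back into the same Hive instance under the original name fails while shiyanlou1 still exists, so drop the table first (or import under a new name, as sketched above) when testing the round trip.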