4. HBase data migration schemes (snapshot):
  4.1 Import/Export
  4.2 distcp
  4.3 CopyTable
  4.4 Snapshot

Snapshot migration (taking USER_INFO:user_log_info as an example)
1. First create a snapshot of the table in the source cluster
hbase(main):003:0> snapshot "USER_INFO:user_log_info", "user_log_info_snapshot"
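Before exporting, the snapshot can be verified from the shell. A minimal sketch, assuming the non-interactive `-n` mode of `hbase shell` and the names above; the command is only echoed here rather than run against a cluster:

```shell
# Build the hbase shell command that lists the snapshot just taken.
# list_snapshots accepts an optional regex; names follow the example above.
CHECK="list_snapshots 'user_log_info_snapshot'"
echo "$CHECK"
# On the source cluster, pipe it into a non-interactive shell session:
# echo "$CHECK" | hbase shell -n
```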

2. Export the snapshot to the target cluster. Execute on the source cluster:
sudo -u hdfs hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot user_log_info_snapshot -copy-to hdfs://slave.01.bigdata.prod.wgq:8020/hbase -overwrite
The -overwrite flag overwrites the snapshot if it already exists on the target cluster.
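In practice the export is often throttled so it does not saturate the network. A sketch that assembles the same command with ExportSnapshot's -mappers and -bandwidth (MB/s) options; the values 16 and 100 are assumptions to tune for your clusters, and the command is echoed rather than executed:

```shell
# Assemble the ExportSnapshot invocation with throttling options.
# -mappers caps the number of copy tasks; -bandwidth caps MB/s per mapper.
CMD="sudo -u hdfs hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
-snapshot user_log_info_snapshot \
-copy-to hdfs://slave.01.bigdata.prod.wgq:8020/hbase \
-mappers 16 -bandwidth 100 -overwrite"
echo "$CMD"
# Run it on the source cluster with:
# eval "$CMD"
```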

3. Modify the file permissions (on the target cluster, so the hbase user can read the exported files):
sudo -u hdfs hdfs dfs -chown -R hbase:hbase /hbase/.hbase-snapshot
sudo -u hdfs hdfs dfs -chown -R hbase:hdfs /hbase/archive
sudo -u hdfs hdfs dfs -chmod -R 777 /hbase/archive

4. In the target cluster, create the corresponding table. The table schema (namespace, column families, and attributes) must be consistent with the source table:
create 'USER_INFO:user_log_info', {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}

First disable the table:
disable "USER_INFO:user_log_info"

Then restore the snapshot:
restore_snapshot 'user_log_info_snapshot'

If the restore completes without errors, enable the table:
enable "USER_INFO:user_log_info"
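The disable/restore/enable sequence can also be fed to the shell non-interactively. A minimal sketch, assuming `hbase shell -n` is available on the target cluster; the commands are only echoed here:

```shell
# The three shell commands from the steps above, in order.
HBASE_CMDS="disable 'USER_INFO:user_log_info'
restore_snapshot 'user_log_info_snapshot'
enable 'USER_INFO:user_log_info'"

echo "$HBASE_CMDS"
# On the target cluster:
# echo "$HBASE_CMDS" | hbase shell -n
```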

Finally, verify that the table is available and that the row counts of the table in the two clusters are consistent:
count "USER_INFO:user_log_info"
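The consistency check can be scripted with a small comparison helper. A sketch under stated assumptions: the helper name `compare_counts` and the sample values 100/100 are illustrative; in practice each argument would be the row count obtained by running `count` against one cluster:

```shell
# compare_counts SRC DST: succeed (and say so) only when the two row counts match.
compare_counts() {
  if [ "$1" = "$2" ]; then
    echo "row counts match: $1"
  else
    echo "row count mismatch: source=$1 target=$2" >&2
    return 1
  fi
}

# In practice each argument would be parsed from the output of, e.g.:
#   echo "count 'USER_INFO:user_log_info'" | hbase shell -n
# run against the source and target clusters respectively.
compare_counts 100 100
# prints: row counts match: 100
```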
