1. Table Compression
When a table's data grows large, compression can make it substantially smaller on disk. To compress a table used by Phoenix, proceed as follows in the HBase shell:
// before altering the table, disable it first; it is unavailable while disabled
disable 'table name'
// check that the environment supports the compression format (Snappy is recommended)
hadoop checknative
// modify the table properties to set the compression format
alter 'sogou', NAME => 'f', COMPRESSION => 'snappy'
// re-enable the table
enable 'table name'
// check whether the compression setting was applied
desc 'table name'
// run a major compaction; existing data is only rewritten in compressed form during a major compaction, so compression is only truly in effect afterwards
major_compact 'table name'
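Putting the steps above together, a full session might look like the following sketch. It assumes the example table 'sogou' with column family 'f' used elsewhere in these notes; substitute your own table and family names.

```
$ hadoop checknative          # confirm the line "snappy: true" appears
$ hbase shell
> disable 'sogou'
> alter 'sogou', NAME => 'f', COMPRESSION => 'snappy'
> enable 'sogou'
> desc 'sogou'                # COMPRESSION => 'SNAPPY' should now show for family 'f'
> major_compact 'sogou'       # rewrites existing store files in compressed form
```

Note that `major_compact` returns immediately; the compaction itself runs in the background on the region servers, so the on-disk size shrinks only once it finishes.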
2. Table Mapping
To operate on an HBase table through Phoenix, you must first create a mapping for it. The syntax is very strict: Phoenix is case-sensitive and folds unquoted identifiers to upper case, so pay close attention to double quotes and capitalization, and always wrap lower-case table and column names in double quotes.
// CREATE VIEW "table name" (the double quotes are required) creates a mapping view; once mapped, the HBase table can be queried through Phoenix
// template: CREATE VIEW "table" (pk VARCHAR PRIMARY KEY, "column family"."column" VARCHAR, ...) -- pk is the primary key name
// create a mapping view for the HBase table
CREATE VIEW "sogou01" (pk VARCHAR PRIMARY KEY, "f"."click" VARCHAR, "f"."url" VARCHAR, "f"."serch" VARCHAR, "f"."rank" VARCHAR)
// select 100 rows to check that the mapping succeeded
select * from "sogou01" limit 100;
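The quoting rule above is the most common stumbling block, so it is worth a small illustration. The sketch below uses the "sogou01" view from this section; the behavior described follows from Phoenix's rule that unquoted identifiers are folded to upper case.

```
-- WRONG: unquoted, Phoenix looks for a table named SOGOU01 and fails
SELECT * FROM sogou01 LIMIT 100;

-- RIGHT: quoted, matches the lower-case view name exactly
SELECT * FROM "sogou01" LIMIT 100;

-- a mapping view can be dropped without deleting the underlying HBase table
DROP VIEW "sogou01";
```

Because the mapping is a view, dropping it only removes the Phoenix metadata; the data in the HBase table itself is untouched.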