Installing Snappy compression for HBase and the related configuration

Foreword

When using HBase as data storage, redundancy-related issues come up: multiple replicas take up a lot of disk space. Compressing the data during storage reduces the space it occupies and, at the same time, improves throughput.

HBase-2.1.5

Hadoop-2.7.7

1. Install the Snappy compression software for HBase

snappy-1.1.3 download address:

$ wget https://github.com/google/snappy/releases/download/1.1.3/snappy-1.1.3.tar.gz
# Install the toolchain needed to compile snappy
$ sudo yum -y install gcc-c++ libstdc++-devel
# Alternatively, snappy can be installed directly from the package repository:
# sudo yum -y install snappy snappy-devel
# Build and install from source:
$ tar -zxvf /home/zfll/soft/snappy-1.1.3.tar.gz
$ cd snappy-1.1.3
$ ./configure
$ make
$ sudo make install
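After `make install`, it is worth confirming that the shared library was actually installed. A quick check, assuming the default install prefix of /usr/local (exact file names may vary with the snappy version):

```shell
# The snappy shared libraries should appear under /usr/local/lib
ls -l /usr/local/lib | grep snappy
# Refresh the dynamic linker cache so the new library can be found
sudo ldconfig
```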

HBase relies on the snappy native library to compress data, so snappy must be installed on the Linux system, and the configuration files must be modified after installation. A snappy installation normally places its shared-library dependencies under /usr/local/lib.

hadoop-2.7.7: the version used here already ships with the snappy native bindings, so there is no need to recompile Hadoop against snappy.

$ $HADOOP_HOME/bin/hadoop checknative -a

This checks whether the currently installed Hadoop version has snappy support.

The output maps each compression codec to the native library it depends on.

The hadoop/lib/native folder in the Hadoop installation directory holds the native libraries that snappy compression depends on. In the current version they are already compiled, so there is no need to build them yourself.

After the installation is complete, HBase still needs to be configured before it can use snappy.

Copy the native dependencies into the HBase directory

Copy all files from the $HADOOP_HOME/lib/native directory to the $HBASE_HOME/lib/native/linux-amd64-64 directory, creating the target directory first since it does not exist:

$ mkdir -p $HBASE_HOME/lib/native/linux-amd64-64
$ cp -r $HADOOP_HOME/lib/native/* $HBASE_HOME/lib/native/linux-amd64-64/

Note: the steps above must be performed on every node in the cluster, so that the snappy native libraries can be found on each node at decompression time.
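For a cluster with more than a few nodes, the copy can be scripted. A minimal sketch, assuming passwordless SSH, identical paths on all nodes, and hypothetical hostnames node1 to node3 (replace with your own node list):

```shell
# Hypothetical node list -- substitute the hostnames of your cluster
for node in node1 node2 node3; do
  # Create the target directory on the remote node, then copy the libraries
  ssh "$node" "mkdir -p $HBASE_HOME/lib/native/linux-amd64-64"
  scp -r "$HADOOP_HOME/lib/native/"* "$node:$HBASE_HOME/lib/native/linux-amd64-64/"
done
```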

hbase/conf/hbase-site.xml

<property>
    <name>hbase.regionserver.codecs</name>
    <value>snappy</value>
</property>

Add the property above to the file. With hbase.regionserver.codecs set, each RegionServer checks at startup that the snappy codec is usable and refuses to start if it is not.

hbase/conf/hbase-env.sh

export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/linux-amd64-64/:/usr/local/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/:/usr/local/lib

After completing the configuration above, reload the new hbase-env.sh environment variables on every node to avoid problems, then stop and restart HBase:

$ source $HBASE_HOME/conf/hbase-env.sh
$ $HBASE_HOME/bin/stop-hbase.sh
$ $HBASE_HOME/bin/start-hbase.sh

Verify that snappy can be used

After the installation and configuration are complete, verify that they work:

$ hbase shell
hbase(main)> create 'snappyTest', {NAME => 'info', COMPRESSION => 'snappy'}

The command above creates a table whose 'info' column family uses the snappy compression algorithm. Check whether the creation succeeds, then verify further with some data read and write operations.
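One way to confirm that the table really carries the intended codec is to describe it from the hbase shell; HBase also ships a standalone compression test utility. A sketch (the exact output format varies slightly across HBase versions, and /tmp/snappy-test is just an arbitrary scratch path):

```shell
# The 'info' column family should report COMPRESSION => 'SNAPPY'
echo "describe 'snappyTest'" | hbase shell

# CompressionTest writes and re-reads a test file with the given codec
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy
```

If CompressionTest prints SUCCESS, the snappy native libraries are correctly wired into HBase.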

Note that data already stored is only rewritten with the new compression after a major compaction, which can be triggered from the hbase shell with `major_compact 'snappyTest'`.

Reference:

<https://segmentfault.com/a/1190000013211406>

Origin www.cnblogs.com/mojita/p/11899486.html