HBase configuration and API operations

Over the past two days I have mainly been studying HBase, and I am recording my notes here.

When I first started working with HBase, I noticed that it ships with its own ZooKeeper. I originally set up a fully distributed Hadoop cluster on three machines, but only two of them were running at the time, so the examples below use just those two machines.

It stands to reason that it is better to install ZooKeeper separately rather than use the ZooKeeper instance bundled with HBase. The reasoning goes roughly as follows (excerpted from a Baidu Zhidao answer that I found convincing):

         Because we shared one ZooKeeper ensemble across many HBase clusters, one of the clusters needed to be upgraded from HBase 0.90.2 to HBase 0.92, which naturally meant updating the HBase package as well. But one of the region servers was also running ZooKeeper, and that ZooKeeper was the one bundled with HBase 0.90.2. So upgrading that region server dragged ZooKeeper along with it: it seemed ZooKeeper would have to be restarted too, since swapping out the jar files might affect the running zk process. But restarting zk would briefly disrupt every client connected to it. Painful: the goal was only to upgrade HBase, yet zk was tightly coupled to it. Although it was later shown that once ZooKeeper has started, deleting the jar files does not affect the running zk process, the risk introduced by such irregular practice is simply unnecessary. Therefore, as an operations engineer, I strongly recommend deploying zk and HBase separately, using the official ZooKeeper distribution directly, because zk is an independent service in its own right and there is no need to couple it with HBase. In a distributed deployment, give each role its own dedicated directory; do not share the same directory, as that is a real source of problems.


For now, though, I am being lazy and simply using the ZooKeeper that comes with HBase.

My environment is Hadoop 2.8.0 and HBase 1.2.6. (The Hadoop configuration is not covered here; you can set it up yourself following the official documentation. My three machines are named bigdata1, bigdata2, and bigdata3; only the last two are running here.)

  • HBase configuration

Take the configuration on bigdata3 as an example

hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bigdata3:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>bigdata1,bigdata2,bigdata3</value>
  </property>
  <property>
    <!-- note: the property name is "session", not "sission" -->
    <name>hbase.zookeeper.session.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.regionserver.lease.period</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
  <property>
    <!-- hbase.master takes host:port, not an hdfs:// URL -->
    <name>hbase.master</name>
    <value>bigdata3:60000</value>
  </property>
</configuration>

hbase-env.sh

export JAVA_HOME=/home/hadoop/jdk1.8.0_121/
export HBASE_HOME=/home/hadoop/hbase-1.2.6
export HADOOP_HOME=/home/hadoop/hadoop2.8.0
export PATH=$PATH:$HBASE_HOME/bin
export HBASE_CLASSPATH=/home/hadoop/hbase-1.2.6/conf
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_LOG_DIR=${HBASE_HOME}/logs
# The directory where pid files are stored. /tmp by default.
export HBASE_PID_DIR=/home/hadoop/hbase-1.2.6/pids
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true




regionservers

bigdata1
bigdata2
bigdata3

After starting Hadoop, start HBase with the start-hbase.sh command.
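The full start-up sequence on the master node looks roughly like this (a sketch; it assumes the Hadoop and HBase bin directories are on the PATH as in the hbase-env.sh above):

```shell
# Start HDFS first, since HBase stores its data in HDFS (hbase.rootdir)
start-dfs.sh

# Then start HBase; with HBASE_MANAGES_ZK=true this also starts the bundled ZooKeeper
start-hbase.sh

# Verify the daemons are up: HMaster, HRegionServer, and HQuorumPeer
# (the bundled ZooKeeper) should appear in the jps output
jps
```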



Then you can run hbase shell to perform database operations; I won't list the specific shell commands here. At the same time, you can access the HBase web UI from a browser.


Note that in newer versions of HBase the master web UI port has changed to 16010, so keep that in mind.


Everything was still working yesterday morning, but when I came back after dinner I found that HBase had failed with an error.

Looking at the logs, it seemed that ZooKeeper had failed to start.


But I don't know where the ZooKeeper bundled with HBase lives or how to restart it manually. I searched Baidu for a long time without finding a solution, and none of the methods I found online has worked so far.

I will record the fix here once I solve it. If you have run into the same problem and solved it, please let me (Weixia) know! Thanks in advance~

Because the bundled ZooKeeper saves its files under /tmp by default, for the time being I simply delete the contents of /tmp and then restart HBase.
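A cleaner fix than repeatedly wiping /tmp is to move HBase's local data (and the bundled ZooKeeper's data directory) somewhere the OS will not clean up. A sketch of the relevant hbase-site.xml properties; the paths here are assumptions, adjust them to your own layout:

```xml
<!-- Move HBase's local temporary data out of /tmp -->
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/hbase-1.2.6/tmp</value>
</property>
<!-- Directory where the bundled ZooKeeper keeps its snapshot data -->
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hadoop/hbase-1.2.6/zookeeper</value>
</property>
```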

Below I use Java in Eclipse to create a new table; the code is as follows:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;


public class CreateTable {
	public static void main(String[] args) throws IOException {
		Configuration con = HBaseConfiguration.create();
		// This must be set when running from Eclipse, otherwise the cluster cannot
		// be located; if the hostname "master" does not work, use the IP address instead.
		con.set("hbase.zookeeper.quorum", "master");
		HBaseAdmin admin = new HBaseAdmin(con);
		HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("emp"));
		tableDescriptor.addFamily(new HColumnDescriptor("personal"));
		tableDescriptor.addFamily(new HColumnDescriptor("professional"));
		admin.createTable(tableDescriptor);
		admin.close();
		System.out.println("Table created");
	}
}

Remember to add external JARs in Eclipse: add all the jar files under HBase's lib directory to the build path!
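Once the table exists, rows can be written and read through the same client library. The sketch below uses the Connection/Table API available in HBase 1.x and assumes the emp table created above; like the creation example, it needs a reachable cluster, and the quorum hostname is an assumption you should replace with your own:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutAndGet {
	public static void main(String[] args) throws IOException {
		Configuration con = HBaseConfiguration.create();
		con.set("hbase.zookeeper.quorum", "master"); // same caveat as above: hostname or IP

		try (Connection connection = ConnectionFactory.createConnection(con);
		     Table table = connection.getTable(TableName.valueOf("emp"))) {
			// Write one row: row key "1001", column personal:name = "zhangsan"
			Put put = new Put(Bytes.toBytes("1001"));
			put.addColumn(Bytes.toBytes("personal"), Bytes.toBytes("name"), Bytes.toBytes("zhangsan"));
			table.put(put);

			// Read the same cell back
			Result result = table.get(new Get(Bytes.toBytes("1001")));
			byte[] name = result.getValue(Bytes.toBytes("personal"), Bytes.toBytes("name"));
			System.out.println("personal:name = " + Bytes.toString(name));
		}
	}
}
```

The try-with-resources block ensures the connection and table handle are closed even if a call fails, which matters because Connection objects hold ZooKeeper sessions.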

