Configure Kerberos for Apache HBase on HDP (Kerberos series, part 5)

Securing Apache HBase in a production environment

HBase is a popular distributed key-value store modeled on Google Bigtable. HBase supports extremely fast lookups, high write throughput, and strong consistency, making it suitable for workloads ranging from messaging (such as Facebook's messaging platform) to analytics (such as Yahoo's Flurry). HBase stores its data on HDFS, providing linear scaling and fault tolerance.

Similar to HDFS, Kerberos integration works by adding a SASL-based authentication layer to the HBase protocol that requires a valid Kerberos service principal name (SPN) for authentication. In addition, HBase itself uses Kerberos to authenticate to HDFS when storing data. HBase supports cell-level access control, providing a very fine-grained authorization layer.

Install Apache HBase using Kerberos on an existing HDP cluster

You can install Kerberos-enabled Apache HBase on the HDP cluster.

Before you start
• You must have a working HDP cluster with Kerberos enabled.
• You must have Kerberos administrator access rights.
Please follow the steps below to install HBase with Kerberos enabled:

Procedure

1. Log in to Ambari.
2. From the "Actions" menu, click "Add Service".
3. From the list of services, select HBase, and then click "Next".
4. Choose where to install HBase Master.
5. Select the nodes where you want to install the HBase RegionServers.
6. Review the configuration details, modify them to suit your performance-tuning needs, and then click "Next".
You can also customize the service later.
7. Check the Kerberos service principal name (SPN) that will be created for the HBase deployment, and click "Next".
8. View the configuration details, and then click "Deploy".
If the Kerberos administrator credentials were not stored when Kerberos was enabled on the cluster, Ambari prompts you to enter the credentials again.
9. After entering the credentials, click "Save".
10. Wait for the installation to complete.
11. Review any errors encountered during the installation and click "Next".

Verify that Kerberos is enabled for HBase

Please follow the steps below to verify that Kerberos is enabled for Apache HBase:

Procedure
1. Log in to the Ambari node where the HBase client is installed.
2. Start HBase Shell.
3. On the HBase Master host, execute the status command:

In this example, the status command fails with a stack trace because there is no TGT, so the client cannot use Kerberos to authenticate to HBase.
4. Obtain a ticket-granting ticket (TGT).
The following examples show two ways to authenticate. In the first, you run "kinit" on the command line to create a local TGT before running the application. In the second, the application obtains the TGT automatically from a keytab.
Secure client, example 1:

package com.hortonworks.hbase.examples;

import java.io.IOException;
import java.util.Objects;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Write and read data from HBase. Expects HBASE_CONF_DIR and HADOOP_CONF_DIR
 * on the classpath and a valid Kerberos ticket in a ticket cache (e.g. kinit).
 */
public class ExampleSecureClient implements Runnable {
  private static final Logger LOG = LoggerFactory.getLogger(ExampleSecureClient.class);
  private static final TableName TABLE_NAME = TableName.valueOf("example_secure_client");
  private static final byte[] CF = Bytes.toBytes("f1");

  private final Configuration conf;

  public ExampleSecureClient(Configuration conf) {
    this.conf = Objects.requireNonNull(conf);
  }

  @Override
  public void run() {
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      writeAndRead(conn, TABLE_NAME, CF);
      LOG.info("Success!");
    } catch (Exception e) {
      LOG.error("Uncaught exception running example", e);
      throw new RuntimeException(e);
    }
  }

  void writeAndRead(Connection conn, TableName tn, byte[] family) throws IOException {
    final Admin admin = conn.getAdmin();
    // Delete the table if it already exists
    if (admin.tableExists(tn)) {
      admin.disableTable(tn);
      admin.deleteTable(tn);
    }
    // Create our table
    admin.createTable(TableDescriptorBuilder.newBuilder(tn)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of(family)).build());

    final Table table = conn.getTable(tn);
    Put p = new Put(Bytes.toBytes("row1"));
    p.addColumn(family, Bytes.toBytes("q1"), Bytes.toBytes("value"));
    LOG.info("Writing update: row1 -> value");
    table.put(p);

    Result r = table.get(new Get(Bytes.toBytes("row1")));
    assert r.size() == 1;
    LOG.info("Read row1: {}", r);
  }

  public static void main(String[] args) {
    final Configuration conf = HBaseConfiguration.create();
    new ExampleSecureClient(conf).run();
  }
}

Ticket cache output example 1:

2018-06-12 13:44:40,144 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-06-12 13:44:40,975 INFO [main] zookeeper.ReadOnlyZKClient: Connect 0x62e136d3 to my.fqdn:2181 with session timeout=90000ms, retries 6, retry interval 1000ms, keepAlive=60000ms
2018-06-12 13:44:42,806 INFO [main] client.HBaseAdmin: Started disable of example_secure_client
2018-06-12 13:44:44,159 INFO [main] client.HBaseAdmin: Operation: DISABLE, Table Name: default:example_secure_client completed
2018-06-12 13:44:44,590 INFO [main] client.HBaseAdmin: Operation: DELETE, Table Name: default:example_secure_client completed
2018-06-12 13:44:46,040 INFO [main] client.HBaseAdmin: Operation: CREATE, Table Name: default:example_secure_client completed
2018-06-12 13:44:46,041 INFO [main] examples.ExampleSecureClient: Writing update: row1 -> value
2018-06-12 13:44:46,183 INFO [main] examples.ExampleSecureClient: Read row1: keyvalues={row1/f1:q1/1528825486175/Put/vlen=5/seqid=0}
2018-06-12 13:44:46,183 INFO [main] examples.ExampleSecureClient: Success!
2018-06-12 13:44:46,183 INFO [main] client.ConnectionImplementation: Closing master protocol: MasterService
2018-06-12 13:44:46,183 INFO [main] zookeeper.ReadOnlyZKClient: Close zookeeper connection 0x62e136d3 to my.fqdn:2181
5. Log in using the principal and keytab, and execute the status command again.

Secure client, example 2, logging in from a keytab:

package com.hortonworks.hbase.examples;

import java.io.File;
import java.security.PrivilegedAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class ExampleSecureClientWithKeytabLogin {
  public static void main(String[] args) throws Exception {
    final Configuration conf = HBaseConfiguration.create();

    final String principal = "[email protected]";
    final File keytab = new File("/etc/security/keytabs/myself.keytab");
    assert keytab.isFile() : "Provided keytab '" + keytab + "' is not a regular file.";

    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        principal, keytab.getAbsolutePath());
    ugi.doAs(new PrivilegedAction<Void>() {
      @Override
      public Void run() {
        new ExampleSecureClient(conf).run();
        return null;
      }
    });
  }
}

The following example shows the output generated by logging in via keytab:

2018-06-12 13:29:23,057 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-06-12 13:29:23,574 INFO [main] zookeeper.ReadOnlyZKClient: Connect 0x192d43ce to my.fqdn:2181 with session timeout=90000ms, retries 6, retry interval 1000ms, keepAlive=60000ms
2018-06-12 13:29:29,172 INFO [main] client.HBaseAdmin: Started disable of example_secure_client
2018-06-12 13:29:30,456 INFO [main] client.HBaseAdmin: Operation: DISABLE, Table Name: default:example_secure_client completed
2018-06-12 13:29:30,702 INFO [main] client.HBaseAdmin: Operation: DELETE, Table Name: default:example_secure_client completed
2018-06-12 13:29:33,005 INFO [main] client.HBaseAdmin: Operation: CREATE, Table Name: default:example_secure_client completed
2018-06-12 13:29:33,006 INFO [main] examples.ExampleSecureClient: Writing update: row1 -> value
2018-06-12 13:29:33,071 INFO [main] examples.ExampleSecureClient: Read row1: keyvalues={row1/f1:q1/1528824573066/Put/vlen=5/seqid=0}
2018-06-12 13:29:33,071 INFO [main] examples.ExampleSecureClient: Success!
2018-06-12 13:29:33,071 INFO [main] client.ConnectionImplementation: Closing master protocol: MasterService
2018-06-12 13:29:33,071 INFO [main] zookeeper.ReadOnlyZKClient: Close zookeeper connection 0x192d43ce to my.fqdn:2181

Use a Java client to access a Kerberos-enabled HBase cluster

You can use a Java client to access a Kerberos-enabled HBase cluster.

Before you start

• HDP cluster with Kerberos enabled.
• You are working in Java 8, Maven 3 and Eclipse development environments.
• You have administrator access to the Kerberos KDC.
Perform the following tasks to connect to HBase using a Java client and perform simple Put operations on the table.

Procedure

1. "Download configuration"
2. "Set up client accounts"
3. "Create a Java client"

Download configuration

Please follow the steps below to download the required configuration:

Procedure
1. From Ambari, download the HBase and HDFS client configuration files and extract them into a conf directory that will hold all the configuration details.
These files must be extracted under the $HBASE_CONF_DIR directory, where $HBASE_CONF_DIR is the directory used to store the HBase configuration files, for example /etc/hbase/conf.
2. From the KDC, download the krb5.conf file from /etc/krb5.conf. You can also place configuration snippets in the include directory:

includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = HWFIELD.COM
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 HWFIELD.COM = {
  kdc = ambud-hdp-3.field.hortonworks.com
  admin_server = ambud-hdp-3.field.hortonworks.com
 }

[domain_realm]
 .hwfield.com = HWFIELD.COM
 hwfield.com = HWFIELD.COM
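If the Java client runs on a machine where this file is not in the default location, the JVM can be pointed at your copy explicitly before any Kerberos login takes place. A minimal sketch (the path below is an assumption; use wherever you placed the file):

```java
public class Krb5ConfLocation {
    public static void main(String[] args) {
        // Must be set before the first Kerberos operation; the JVM reads
        // java.security.krb5.conf to locate realm and KDC settings.
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        System.out.println(System.getProperty("java.security.krb5.conf"));
    }
}
```

The same can be done on the command line with -Djava.security.krb5.conf=/etc/krb5.conf.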

Set up client accounts

Please follow the steps below to set up a Kerberos account for the client and grant it permissions in HBase, so that you can create, read, and write tables.

Procedure
1. Log in to KDC.
2. Switch to the root user.
3. Run kadmin.local:

$ sudo kadmin.local
kadmin.local: addprinc myself
WARNING: no policy specified for [email protected]; defaulting to no policy
Enter password for principal "[email protected]":
Re-enter password for principal "[email protected]":

Principal "[email protected]" created.
kadmin.local: xst -k /etc/security/keytabs/myself.keytab -norandkey myself
Entry for principal myself with kvno 1, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/myself.keytab.
Entry for principal myself with kvno 1, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/myself.keytab.
4. Copy the keytab file to the conf directory.
5. Grant permissions in HBase. For more information, see Configuring HBase for Access Control Lists (ACLs).
klist -k /etc/security/keytabs/hbase.headless.keytab

Optional step: you should secure the keytab file so that only the HBase process can access it. You can do this by running the following commands:

$ sudo chmod 700 /etc/security/keytabs/hbase.headless.keytab
$ kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase
$ hbase shell
hbase(main):001:0> status
1 active master, 0 backup masters, 4 servers, 1 dead, 1.2500 average load
6. Grant the user administrator rights. You can also customize this to restrict the account's access. For more information, see https://hbase.apache.org/0.94/book/hbase.accesscontrol.configuration.html#d1984e4744

Example:
hbase(main):001:0> grant 'myself', 'C'

Create a Java client
Please follow the steps below to create a Java client:

Procedure

1. Start Eclipse.
2. Create a simple Maven project.
3. Add hbase-client and hadoop-auth dependencies.
The client uses the Hadoop UserGroupInformation (UGI) utility class to perform Kerberos authentication from the keytab file. It sets the context so that all operations are performed in the security context of the hbase-user2 principal. It then performs the required HBase operations: checking for/creating the table and performing put and get operations.

<dependencies>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
    <exclusions>
      <exclusion>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-aws</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-auth</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
</dependencies>

4. Run the HBase Java client code from a node that can perform reverse DNS resolution.
Reverse DNS lookup is part of Kerberos authentication, so running the client on a machine that does not share the same DNS infrastructure as the HDP cluster will cause authentication to fail.
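A quick way to check forward and reverse resolution from Java is the standard java.net.InetAddress API. This sketch is only a connectivity aid, not part of the HBase client; the default address is a placeholder, so pass a cluster node's hostname or IP as the first argument:

```java
import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        InetAddress addr = InetAddress.getByName(host);      // forward lookup
        String canonical = addr.getCanonicalHostName();      // reverse lookup
        System.out.println(host + " -> " + addr.getHostAddress() + " -> " + canonical);
    }
}
```

If the reverse lookup does not return the fully qualified name that the cluster's SPNs were created with, Kerberos authentication is likely to fail.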
5. To verify that the Kerberos authentication, keytab, and principal actually work, you can also perform a simple Kerberos login from Java. This gives you some insight into how Java JAAS and Kerberos work.
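Such a standalone login test is driven by a JAAS configuration file. A minimal sketch for a keytab-based login might look like the following (the entry name, keytab path, and principal are placeholders; Krb5LoginModule and its options are part of the standard JDK):

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/myself.keytab"
  principal="myself"
  storeKey=true
  doNotPrompt=true;
};
```

Point the JVM at it with -Djava.security.auth.login.config=/path/to/jaas.conf.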
It is strongly recommended that you use the Maven Shade plugin or the Maven JAR plugin to package the dependencies into a fat client JAR automatically. You can also use the Eclipse export function, but it is not recommended for a production code base.
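The Shade plugin setup recommended above might be sketched as follows in the pom's <build> section (the version shown is an assumption; use a current release):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.4</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Running mvn package then produces a single JAR containing the client classes and their dependencies.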

Origin blog.csdn.net/m0_48187193/article/details/114893865