Installing Kerberos and Integrating It with HDFS and Hive

Environment:

1 CentOS 6.5 host running the master KDC
2 CentOS 6.5 hosts running the Kerberos client

1. Install Kerberos on the master host

yum install krb5-server krb5-libs krb5-workstation -y

1.1 Configure kdc.conf

vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  max_renewable_life = 7d
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Notes:

HADOOP.COM: the realm being defined. The name is arbitrary; Kerberos supports multiple realms, and realm names are conventionally written in upper case.
master_key_type, supported_enctypes: these default to aes256-cts. Because Java needs additional jars (the JCE unlimited-strength policy files) to use aes256-cts, it is not used here for now.
acl_file: defines the admin users' permissions. The file format is Kerberos_principal permissions [target_principal] [restrictions]; wildcards are supported.
admin_keytab: the keytab the KDC uses for admin authentication.
supported_enctypes: the supported encryption types. Note that aes256-cts has been removed.

1.2 Configure krb5.conf

vim /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 clockskew = 120
 udp_preference_limit = 1
[realms]
 HADOOP.COM = {
  kdc = node01
  admin_server = node01
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM

Notes:

[logging]: where the server-side logs are written.
udp_preference_limit = 1: disables UDP, which avoids a known problem in Hadoop.
ticket_lifetime: how long a ticket is valid, typically 24 hours.
renew_lifetime: the maximum period over which a ticket can be renewed, typically one week. Once a ticket expires, subsequent access to Kerberos-secured services fails.
clockskew: the maximum tolerated difference, in seconds, between a ticket's timestamp and the host's system clock; tickets outside this tolerance are rejected.

1.3 Initialize the Kerberos database

kdb5_util create -s -r HADOOP.COM

Here, -s generates a stash file in which the master key of the KDC (krb5kdc) is stored, and -r specifies the realm name, which is only necessary when krb5.conf defines more than one realm.
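
A quick sanity check (a minimal sketch, assuming the default CentOS 6 paths and the HADOOP.COM realm configured above) is to list the KDC data directory and confirm that the database and stash files were created:

ls /var/kerberos/krb5kdc/
# Expect the database files (principal, principal.kadm5, ...) plus the stash file .k5.HADOOP.COM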

1.4 Set the database administrator's ACL permissions

vim /var/kerberos/krb5kdc/kadm5.acl
#Change it as follows
*/admin@HADOOP.COM  *

For more details about the kadm5.acl file, see the kadm5.acl documentation.
There are two ways to administer the KDC database: one is run directly on the KDC host and needs no password; the other requires an administrator account and password and can be used remotely. The two commands are:

kadmin.local: must be run on the KDC server itself; it manages the database without a password
kadmin: can be run from any host in the KDC realm, but requires the administrator password

1.5 Start the Kerberos daemons and enable them at boot

service krb5kdc start
service kadmin start
chkconfig krb5kdc on
chkconfig kadmin on
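
To confirm that both daemons actually came up, a minimal check (port 88 is the KDC port from kdc.conf; 749 is kadmind's default admin port) looks like this:

service krb5kdc status
service kadmin status
netstat -anp | grep -E ':(88|749)\b'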

2. Deploy the Kerberos client on the other two hosts

yum install krb5-workstation krb5-libs -y
#Copy krb5.conf from the master host to these two hosts
scp /etc/krb5.conf node02:/etc/krb5.conf
scp /etc/krb5.conf node03:/etc/krb5.conf

3. Day-to-day Kerberos operations

3.1 First set a password for root/admin

[root@node-1 ~]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local:  addprinc root/admin
WARNING: no policy specified for root/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "root/admin@HADOOP.COM": 
Re-enter password for principal "root/admin@HADOOP.COM": 
Principal "root/admin@HADOOP.COM" created.
kadmin.local:  listprincs
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node01@HADOOP.COM
kiprop/node01@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
kadmin.local:  exit

3.2 Add a new user hd1:

[root@node-3 ~]# kadmin
Authenticating as principal root/admin@HADOOP.COM with password.
Password for root/admin@HADOOP.COM: 
kadmin:  addprinc hd1
WARNING: no policy specified for hd1@HADOOP.COM; defaulting to no policy
Enter password for principal "hd1@HADOOP.COM": 
Re-enter password for principal "hd1@HADOOP.COM": 
Principal "hd1@HADOOP.COM" created.
kadmin:  exit
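
From either client host, the new principal offers a quick end-to-end check of the KDC and the copied krb5.conf; this sketch assumes the hd1 password chosen above:

kinit hd1
klist
# klist should show a ticket for krbtgt/HADOOP.COM@HADOOP.COM; discard it afterwards with:
kdestroy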

=================================================================================

4. Integrating HDFS with Kerberos

4.1 Create the Kerberos principals on the KDC

4.1.1 As root, run kadmin.local to enter the Kerberos command line and create the principals in the Kerberos database
addprinc -randkey hadoop/node01@HADOOP.COM
addprinc -randkey hadoop/node02@HADOOP.COM
addprinc -randkey hadoop/node03@HADOOP.COM
addprinc -randkey HTTP/node01@HADOOP.COM
addprinc -randkey HTTP/node02@HADOOP.COM
addprinc -randkey HTTP/node03@HADOOP.COM
4.1.2 Exit the Kerberos command line and, as root, export the keys for each principal
kadmin.local -q "xst -k hadoop.keytab hadoop/node01@HADOOP.COM"
kadmin.local -q "xst -k hadoop.keytab hadoop/node02@HADOOP.COM"
kadmin.local -q "xst -k hadoop.keytab hadoop/node03@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/node01@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/node02@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/node03@HADOOP.COM"

At this point the generated keytabs are in root's home directory.
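
Optionally, inspect the exported keytabs before merging them; klist -kt lists the principals and key version numbers stored in a keytab:

klist -kt /root/hadoop.keytab
klist -kt /root/HTTP.keytab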

4.1.3 From the root shell, merge hadoop.keytab and HTTP.keytab into hdfs.keytab.
 ktutil
 rkt hadoop.keytab
 rkt HTTP.keytab
 wkt hdfs.keytab
 q

Copy the hdfs.keytab file to the /home/hadoop/ directory and distribute it to every Hadoop node (see the sketch below).
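
A sketch of the verification and distribution step follows. Note that the directory you distribute to must match the dfs.*.keytab.file paths configured in hdfs-site.xml below (/root/hadoop/ in this article), so adjust the target accordingly:

klist -kt hdfs.keytab      # should list both the hadoop/* and HTTP/* entries
cp hdfs.keytab /home/hadoop/
scp /home/hadoop/hdfs.keytab node02:/home/hadoop/
scp /home/hadoop/hdfs.keytab node03:/home/hadoop/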

4.1.4 Notes

a. Hostnames must be lower case. When a TGT is requested with kinit, Kerberos treats the hostname as lower case regardless of how it was typed when issuing the ticket, so if a principal was created with an upper-case hostname, the corresponding entry in the Kerberos database can never have a ticket issued for it.

b. In hadoop/namenode, "namenode" is only a label indicating that this hadoop user belongs to a different host than another hadoop user. So even if the actual hostname is upper case, it must still be entered here in lower case.

4.2 Integrate HDFS with Kerberos (stop the cluster first)

4.2.1 Modify core-site.xml
 <property> 
    <name>hadoop.security.authentication</name>  
     <value>kerberos</value> 
 </property> 

 <property> 
     <name>hadoop.security.authorization</name> 
     <value>true</value> 
 </property>

The configuration above enables Hadoop's security authorization and switches authentication to Kerberos.
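
After the file is distributed to all nodes, the effective values can be read back with hdfs getconf, for example:

hdfs getconf -confKey hadoop.security.authentication   # should print: kerberos
hdfs getconf -confKey hadoop.security.authorization    # should print: true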

4.2.2 Modify hdfs-site.xml
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>


<!--  
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
 -->
 
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:61004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:61006</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
4.2.3 After applying the configuration above, startup reports the following error
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP.  Using privileged resources in combination with SASL RPC data transfer protection is not supported.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1201)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1101)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2406)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2293)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2340)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2522)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2546)
2018-03-13 14:01:27,317 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-03-13 14:01:27,318 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

"Using privileged resources in combination with SASL RPC data transfer protection is not supported." means that privileged resources (i.e. low port numbers) and SASL RPC data transfer protection cannot be used at the same time.
There are two ways forward:

1. Keep the low port numbers. This requires starting the DataNode as root via jsvc, plus the tedious work of downloading dependencies and compiling to obtain jsvc; this path was abandoned (it was not tested successfully here).
2. Use high port numbers. Reference: Using privileged resources in combination with SASL RPC data transfer protection is not supported.

At that point another error appears:

2018-03-09 20:44:10,993 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/hadoop/conf/hdfs-service.keytab, for principal HTTP/[email protected]
2018-03-09 20:44:11,000 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/hadoop/conf/hdfs-service.keytab, for principal HTTP/[email protected]
2018-03-09 20:44:11,003 WARN org.mortbay.log: failed [email protected]:50470: java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
2018-03-09 20:44:11,003 WARN org.mortbay.log: failed Server@10ded6a9: java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
2018-03-09 20:44:11,003 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at org.mortbay.resource.FileResource.getInputStream(FileResource.java:275)
        at org.mortbay.jetty.security.SslSelectChannelConnector.createSSLContext(SslSelectChannelConnector.java:624)
        at org.mortbay.jetty.security.SslSelectChannelConnector.doStart(SslSelectChannelConnector.java:598)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.Server.doStart(Server.java:235)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:877)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:760)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:819)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:803)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1566)
2018-03-09 20:44:11,006 INFO org.mortbay.log: Stopped [email protected]:50470
2018-03-09 20:44:11,107 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-03-09 20:44:11,108 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-03-09 20:44:11,108 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-03-09 20:44:11,108 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at org.mortbay.resource.FileResource.getInputStream(FileResource.java:275)
        at org.mortbay.jetty.security.SslSelectChannelConnector.createSSLContext(SslSelectChannelConnector.java:624)
        at org.mortbay.jetty.security.SslSelectChannelConnector.doStart(SslSelectChannelConnector.java:598)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.Server.doStart(Server.java:235)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:877)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:760)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:819)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:803)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1566)
2018-03-09 20:44:11,110 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-03-09 20:44:11,111 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at v-hadoop-kbds.sz.kingdee.net/172.20.178.28
************************************************************/

This means HTTPS needs to be configured.

4.2.4 Configure HTTPS

Generate a CA on node01 and copy it to node02 and node03. (The password can be anything longer than 6 characters, e.g. 123456.)

cd /etc/https

openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 9999 -subj '/C=CN/ST=beijing/L=chaoyang/O=lecloud/OU=dt/CN=jenkin.com'

scp hdfs_ca_key  hdfs_ca_cert node02:/etc/https/
scp hdfs_ca_key  hdfs_ca_cert node03:/etc/https/

Generate a keystore and a truststore on every machine (you will be prompted for passwords along the way; here they are all set to 123456).

# Generate the keystore
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=${fqdn}, OU=DT, O=DT, L=CY, ST=BJ, C=CN"

# Add the CA to the truststore
keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert

# Export the cert from the keystore
keytool -certreq -alias localhost -keystore keystore -file cert

# Sign the cert with the CA
openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial

# Import the CA cert and the CA-signed cert into the keystore
keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed

Move the final keystore and truststore into a suitable directory and add the .jks suffix.

cp keystore /etc/https/keystore.jks
cp truststore /etc/https/truststore.jks
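
Before wiring the stores into Hadoop, it is worth confirming that each one contains the expected entries (keytool prompts for the 123456 password chosen above):

keytool -list -keystore /etc/https/keystore.jks     # should show the localhost and CARoot entries
keytool -list -keystore /etc/https/truststore.jks   # should show the CARoot entry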

Modify hdfs-site.xml (when the DataNode and NameNode are deployed on the same hosts, HTTPS_ONLY is required; it was already set above, so it does not need to be set again here).

<property>
      <name>dfs.http.policy</name>
      <value>HTTP_AND_HTTPS</value>
      <!-- <value>HTTPS_ONLY</value> -->
</property>

Configure ssl-client.xml

<configuration>

<property>
  <name>ssl.client.truststore.location</name>
  <value>/etc/https/truststore.jks</value>
  <description>Truststore to be used by clients like distcp. Must be specified.</description>
</property>

<property>
  <name>ssl.client.truststore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>

<property>
  <name>ssl.client.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>

<property>
  <name>ssl.client.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds.Default value is 10000 (10 seconds).</description>
</property>

<property>
  <name>ssl.client.keystore.location</name>
  <value>/etc/https/keystore.jks</value>
  <description>Keystore to be used by clients like distcp. Must be specified.</description>
</property>

<property>
  <name>ssl.client.keystore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>

<property>
  <name>ssl.client.keystore.keypassword</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>

<property>
  <name>ssl.client.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>

</configuration> 

Configure ssl-server.xml

<configuration>

<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/https/truststore.jks</value>
  <description>Truststore to be used by NN and DN. Must be specified.</description>
</property>

<property>
  <name>ssl.server.truststore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>

<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>

<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds.Default value is 10000 (10 seconds).</description>
</property>

<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/https/keystore.jks</value>
  <description>Keystore to be used by NN and DN. Must be specified.</description>
</property>

<property>
  <name>ssl.server.keystore.password</name>
  <value>123456</value>
  <description>Must be specified.</description>
</property>

<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>123456</value>
  <description>Must be specified.</description>
</property>

<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>

</configuration> 

4.3 If nothing unexpected happened, HDFS can now be started successfully (although the 50070 web page may not be accessible)
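
A simple command-line smoke test from any node; this sketch assumes hdfs.keytab was distributed as described above and contains the hadoop/node01 principal:

kinit -kt /root/hadoop/hdfs.keytab hadoop/node01@HADOOP.COM
hdfs dfs -ls /
kdestroy
hdfs dfs -ls /    # with no ticket this should now fail with a GSS/Kerberos authentication error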

4.4 Test HDFS access from Java

public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.addResource(new Path("D:/.../hdfs-site.xml"));
        conf.addResource(new Path("D:/.../core-site.xml"));
        System.setProperty("java.security.krb5.conf", "D:/.../krb5.conf");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("hadoop/node01@HADOOP.COM", "D:/.../hdfs.keytab");

        // Get a FileSystem instance that carries the Kerberos credentials
        FileSystem fileSystem1 = FileSystem.get(conf);
        // Test access
        Path path = new Path("hdfs://node01:8020/user");
        if (fileSystem1.exists(path)) {
            System.out.println("===contains===");
        }
        RemoteIterator<LocatedFileStatus> list = fileSystem1.listFiles(path, true);
        while (list.hasNext()) {
            LocatedFileStatus fileStatus = list.next();
            System.out.println(fileStatus.getPath());
        }
}

=================================================================================

5. Integrating Hive with Kerberos

5.1 Create the principals and generate the keytab

kadmin.local -q "addprinc -randkey hive/node01@HADOOP.COM"
kadmin.local -q "addprinc -randkey hive/node02@HADOOP.COM"
kadmin.local -q "addprinc -randkey hive/node03@HADOOP.COM"
 
kadmin.local -q "xst  -k hive.keytab  hive/node01@HADOOP.COM"
kadmin.local -q "xst  -k hive.keytab  hive/node02@HADOOP.COM"
kadmin.local -q "xst  -k hive.keytab  hive/node03@HADOOP.COM"
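
As with hdfs.keytab, it helps to verify the new keytab and copy it to the path referenced by hive-site.xml below (/root/hadoop/hive.keytab in this article):

klist -kt hive.keytab
cp hive.keytab /root/hadoop/
scp /root/hadoop/hive.keytab node02:/root/hadoop/
scp /root/hadoop/hive.keytab node03:/root/hadoop/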

5.2 Modify hive-site.xml and add the following configuration

<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/root/hadoop/hive.keytab</value>
</property>
 
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/root/hadoop/hive.keytab</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@HADOOP.COM</value>
</property>

5.3 Modify Hadoop's core-site.xml

<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>

5.4 Sync the files modified above to the other nodes (node02, node03) and check one by one that the permissions are correct

5.5 Start the Hive services

./hive --service metastore
./hive --service hiveserver2
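
Before moving on to the Java client, a beeline connection makes a quick sanity check; this sketch assumes a ticket obtained from hive.keytab and that HiveServer2 runs on node01 with the hive/node01 principal created above:

kinit -kt /root/hadoop/hive.keytab hive/node01@HADOOP.COM
beeline -u "jdbc:hive2://node01:10000/default;principal=hive/node01@HADOOP.COM"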

5.6 Test Hive access from Java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class KBSimple {
    private static String JDBC_DRIVER = "org.apache.hive.jdbc.HiveDriver";
    private static String CONNECTION_URL ="jdbc:hive2://node01:10000/;principal=hive/node01@HADOOP.COM";

    static {
        try {
            Class.forName(JDBC_DRIVER);

        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception  {
        Class.forName(JDBC_DRIVER);

        // Log in with the Kerberos principal
        System.setProperty("java.security.krb5.conf", "D:\\...\\krb5.conf");

        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "Kerberos");
        conf.addResource(new Path("D:/.../hive-site.xml"));
        conf.addResource(new Path("D:/.../core-site.xml"));

        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("hive/node01@HADOOP.COM",
                "D:\\...\\hive.keytab");

        Connection connection = null;
        ResultSet rs = null;
        PreparedStatement ps = null;
        try {
            connection = DriverManager.getConnection(CONNECTION_URL);
            ps = connection.prepareStatement("select * from table1");
            rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

References:

(1) Kerberos basic installation and configuration
https://blog.csdn.net/dyq51/article/details/81363905

(2) Installing Kerberos
https://www.jianshu.com/p/f84c3668272b

(3) Notes on configuring Kerberos for Hadoop
https://blog.csdn.net/Regan_Hoo/article/details/78582812

(4) Steps to integrate an HDFS cluster with Kerberos
https://blog.csdn.net/gangchengzhong/article/details/80452903

(5) Configuring Kerberos authentication for Hive
https://blog.csdn.net/a118170653/article/details/43448133

(6) Integrating HDFS with Kerberos
https://www.jianshu.com/p/ef6f16546b98

(7) Configuring HTTPS for Hadoop
https://www.cnblogs.com/kisf/p/7573561.html
