Configuring mutual trust between the Kerberos authentication centers of two different clusters

After Kerberos authentication is enabled on two Hadoop clusters, the clusters cannot access each other by default. To let a client of Hadoop cluster A access services of Hadoop cluster B, mutual trust must be established between the two Kerberos realms (essentially, a ticket from Realm A is used to access services in Realm B).
Prerequisites:
1) Kerberos authentication is enabled on both clusters (XDF.COM and HADOOP.COM)
2) The Kerberos REALMs are set to XDF.COM and HADOOP.COM respectively
The steps are as follows:

1 Configure trust tickets between the KDCs

To establish cross-realm trust between XDF.COM and HADOOP.COM, for example so that an XDF.COM client can access a service in HADOOP.COM, both realms must contain a principal named krbtgt/[email protected], and the two entries must have the same password, key version number (kvno), and encryption types. By default the trust is unidirectional; for a HADOOP.COM client to access an XDF.COM service, both realms also need the principal krbtgt/[email protected].
Add the krbtgt principals in both clusters:

  #XDF CLUSTER
  kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/[email protected]
  kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/[email protected]

  #HADOOP CLUSTER
  kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/[email protected]
  kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/[email protected]

To verify that the two entries have matching kvno and encryption types, use the getprinc <principal_name> command:

kadmin.local:  getprinc  krbtgt/[email protected]
Principal: krbtgt/[email protected]
Expiration date: [never]
Last password change: Wed Jul 05 14:18:11 CST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 30 days 00:00:00
Last modified: Wed Jul 05 14:18:11 CST 2017 (admin/[email protected])
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 7
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]
kadmin.local:  getprinc  krbtgt/[email protected]
Principal: krbtgt/[email protected]
Expiration date: [never]
Last password change: Wed Jul 05 14:17:47 CST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 30 days 00:00:00
Last modified: Wed Jul 05 14:17:47 CST 2017 (admin/[email protected])
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 7
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]

2 Configure principal-to-user mapping RULES in core-site.xml

 

 
Set the hadoop.security.auth_to_local parameter, which maps a Kerberos principal to a local user name. One point to note: the SASL RPC client requires that the remote server's Kerberos principal match the principal in the client's own configuration, so the same principal name must be assigned to the corresponding services in the source and destination clusters. For example, if the NameNode in the source cluster uses the Kerberos principal nn/h***@XDF.COM, the NameNode principal in the destination cluster must be set to nn/h***@HADOOP.COM (it cannot be set to nn2/h***@HADOOP.COM).
Add the following to core-site.xml in both the XDF cluster and the HADOOP cluster:

 

<property>
<name>hadoop.security.auth_to_local</name>
<value>
RULE:[1:$1@$0](^.*@HADOOP\.COM$)s/^(.*)@HADOOP\.COM$/$1/g
RULE:[2:$1@$0](^.*@HADOOP\.COM$)s/^(.*)@HADOOP\.COM$/$1/g
RULE:[1:$1@$0](^.*@XDF\.COM$)s/^(.*)@XDF\.COM$/$1/g
RULE:[2:$1@$0](^.*@XDF\.COM$)s/^(.*)@XDF\.COM$/$1/g 
DEFAULT             
</value>
</property>

Use hadoop org.apache.hadoop.security.HadoopKerberosName <principal-name> to verify the mapping, for example:

[root@node1a141 ~]#  hadoop org.apache.hadoop.security.HadoopKerberosName hdfs/[email protected]
Name: hdfs/[email protected] to hdfs 

3 Configure a trust relationship in krb5.conf

3.1 Configure capaths

There are two ways to configure cross-realm authentication paths. The first is to use a shared hierarchy of names, which is the default and relatively simple; the second is to configure capaths in the krb5.conf file, which is more complex but more flexible. The second approach is used here.
Add the capaths section to the /etc/krb5.conf file on the nodes of both clusters. For example, the XDF cluster configuration is:

[capaths]
       XDF.COM = {
              HADOOP.COM = .
       }

The HADOOP cluster configuration is:

 [capaths]
       HADOOP.COM = {
              XDF.COM = .
       }

A value of '.' means there are no intermediate realms.
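
A quick way to verify that the cross-realm path works is to request the cross-realm TGT explicitly with kvno (a minimal sketch, assuming it is run on an XDF.COM client using the admin principal that appears later in this article):

[root@node1a141 ~]# kinit admin
[root@node1a141 ~]# kvno krbtgt/[email protected]

If the capaths entries and the krbtgt principals are correct, kvno reports the key version number of the cross-realm ticket; otherwise it fails with a KDC error.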

3.2 Configure realms

So that XDF can access the KDC of HADOOP, the HADOOP KDC server must be configured in the XDF cluster, as shown below; the reverse direction is configured the same way:

 [realms]
  XDF.COM = {
    kdc = {host}.XDF.COM:88
    admin_server = {host}.XDF.COM:749
    default_domain = XDF.COM
  }
  HADOOP.COM = {
    kdc = {host}.HADOOP.COM:88
    admin_server = {host}.HADOOP.COM:749
    default_domain = HADOOP.COM
  }

3.3 Configure domain_realm

In domain_realm, entries are generally configured in the '.XDF.COM' and 'XDF.COM' format; the '.' prefix ensures that all Kerberos hosts with the XDF.COM suffix are mapped to the XDF.COM realm. However, if a cluster host name does not carry the XDF.COM suffix, you need to add an explicit host-to-realm mapping. For example, to map XDF.nn.local to XDF.COM, add the entry XDF.nn.local = XDF.COM.

[domain_realm]
.hadoop.com=HADOOP.COM
 hadoop.com=HADOOP.COM
 .xdf.com=XDF.COM
 xdf.com=XDF.COM
 node1a141 = XDF.COM
 node1a143 = XDF.COM
 node1a210 = HADOOP.COM
 node1a202 = HADOOP.COM
 node1a203 = HADOOP.COM 

Restart the Kerberos services after changing the configuration.
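
For example, on a RHEL/CentOS style KDC host running the stock MIT Kerberos daemons (service names assumed), this would be:

 service krb5kdc restart
 service kadmin restart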

3.4 Configure hdfs-site.xml

In hdfs-site.xml, allow the trusted realms by setting dfs.namenode.kerberos.principal.pattern to "*".
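
A minimal snippet for this property (only the setting itself is shown; the rest of hdfs-site.xml is unchanged):

<property>
  <name>dfs.namenode.kerberos.principal.pattern</name>
  <value>*</value>
</property>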

 

This parameter is the pattern that controls which realms the client is allowed to authenticate against. If it is not configured, the following exception occurs:

java.io.IOException: Failed on local exception: java.io.IOException:
java.lang.IllegalArgumentException:
       Server has invalid Kerberos principal: nn/HADOOP.COM@XDF.COM;
       Host Details : local host is: "host1.XDF.COM/10.181.22.130";
                        destination host is: "host2.HADOOP.COM":8020;

4 Test

1) Use hdfs commands to test data access between the XDF and HADOOP clusters. For example, kinit as [email protected] in the XDF cluster, then run hdfs commands:

[root@node1a141 ~]# kdestroy
[root@node1a141 ~]# kinit admin
Password for [email protected]: 
[root@node1a141 ~]# hdfs dfs -ls /
Found 3 items
drwxrwxrwx+  - hdfs supergroup          0 2017-06-13 15:13 /tmp
drwxrwxr-x+  - hdfs supergroup          0 2017-06-22 15:55 /user
drwxrwxr-x+  - hdfs supergroup          0 2017-06-14 14:11 /wa
[root@node1a141 ~]# hdfs dfs -ls hdfs://node1a202:8020/
Found 9 items
drwxr-xr-x   - root  supergroup          0 2017-05-27 18:55 hdfs://node1a202:8020/cdtest
drwx------   - hbase hbase               0 2017-05-22 18:51 hdfs://node1a202:8020/hbase
drwx------   - hbase hbase               0 2017-07-05 19:16 hdfs://node1a202:8020/hbase1
drwxr-xr-x   - hbase hbase               0 2017-05-11 10:46 hdfs://node1a202:8020/hbase2
drwxr-xr-x   - root  supergroup          0 2016-12-01 17:30 hdfs://node1a202:8020/home
drwxr-xr-x   - mdss  supergroup          0 2016-12-13 18:30 hdfs://node1a202:8020/idfs
drwxr-xr-x   - hdfs  supergroup          0 2017-05-22 18:51 hdfs://node1a202:8020/system
drwxrwxrwt   - hdfs  supergroup          0 2017-05-31 17:37 hdfs://node1a202:8020/tmp
drwxrwxr-x+  - hdfs  supergroup          0 2017-05-04 15:48 hdfs://node1a202:8020/user

Perform the same operations in the HADOOP.COM cluster.
2) Run distcp to copy data from the XDF cluster to the HADOOP cluster; the command is as follows:

[root@node1a141 ~]# hadoop distcp hdfs://node1a141:8020/tmp/test.sh  hdfs://node1a202:8020/tmp/

5 Appendix

The complete /etc/krb5.conf contents for the two clusters are as follows:

[root@node1a141 xdf]# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = XDF.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 7d
 renew_lifetime = 30
 forwardable = true
 renewable=true
 #default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 HADOOP.COM = {
   kdc = node1a198
   admin_server = node1a198
   default_domain = HADOOP.COM
   supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
 XDF.COM = {
   kdc = node1a141
   admin_server = node1a141
   default_domain = XDF.COM
   supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

[domain_realm]
 .hadoop.com=HADOOP.COM
 hadoop.com=HADOOP.COM
 .xdf.com=XDF.COM
 xdf.com=XDF.COM
 node1a141 = XDF.COM
 node1a143 = XDF.COM
 node1a210 = HADOOP.COM
 node1a202 = HADOOP.COM
 node1a203 = HADOOP.COM

[capaths]
XDF.COM = {
 HADOOP.COM = .
}
