HBase machine configuration

The Kerberos client configuration for the HBase machines, /etc/krb5.conf:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HBASE.YOUKU
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }
 HBASE.YOUKU = {
  kdc = a01.regionserver.master.hbase.bigdata.vm.m6.youku
  kdc = a01.regionserver.hbase.bigdata.m6.youku
  admin_server = a01.regionserver.master.hbase.bigdata.vm.m6.youku
  default_domain = hbase.youku
 }


[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
 .hbase.youku = HBASE.YOUKU
 hbase.youku = HBASE.YOUKU
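
With this file in place, hosts under .hbase.youku map to the HBASE.YOUKU realm and the two KDCs listed above are used. A quick smoke test (a sketch; the principal name is hypothetical):

 kinit hbase-test@HBASE.YOUKU   # should reach one of the a01.regionserver... KDCs above
 klist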



A second, annotated krb5.conf example (the comments are explanatory; krb5.conf only treats lines starting with # as comments):

[libdefaults]
# the default realm; lets the realm be omitted in kinit
default_realm = MAOYAN.COM
# do not allow weak encryption types such as single DES
allow_weak_crypto = false
# no DNS lookups for realm or KDC; these machines resolve one another through /etc/hosts entries
dns_lookup_realm = false
dns_lookup_kdc = false
# validity period of a ticket
ticket_lifetime = 24h
# maximum period a ticket can keep being renewed; the default is 0
renew_lifetime = 7d
# whether tickets are forwardable; note that the server-side setting is the most restrictive one
forwardable = true
# messages larger than this many bytes are tried over TCP first; setting it to 1 effectively forces TCP
udp_preference_limit = 1

[realms]
MAOYAN.COM = {
    # the KDC; in a master/slave setup, add one kdc line per server
    kdc = dx-movie-data-hadoop05:88
    # the kadmin (master) server; add more lines if there is more than one
    admin_server = dx-movie-data-hadoop05:749
    # kept for compatibility between Kerberos 4 and 5
    default_domain = MAOYAN.COM
}

[appdefaults]

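With default_realm set as above, the realm can be omitted on the command line; for example (a sketch, using a hypothetical principal named yule):

 kinit yule   # expanded to yule@MAOYAN.COM via default_realm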


Kerberos credentials (tickets) have two lifetime attributes, ticket_lifetime and renew_lifetime. ticket_lifetime is the validity period of the ticket, typically 24 hours. Before a ticket expires, a renewable ticket can have its expiry extended; renew_lifetime is the maximum total period over which it can keep being renewed, usually one week. Once the credential finally expires, subsequent access to any security-authenticated service fails. So the first problem is how to handle credential expiration.
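
Both attributes are visible with the standard MIT tools; for example (a sketch, reusing the MAOYAN.COM realm above):

 kinit -r 7d yule@MAOYAN.COM   # request a ticket that can keep being renewed for up to 7 days
 klist                         # "Expires" reflects ticket_lifetime; "renew until" reflects renew_lifetime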

Credential Expiration Handling Policy
The earliest security design for Hadoop made the assumption that:

A Hadoop job will run no longer than 7 days (configurable) on a MapReduce cluster or accessing HDFS from the job will fail.

For ordinary jobs, a credential limit of 24 hours, or even a week with renewal, is plenty. So most of the time we only need to authenticate once with kinit before starting work, and then run a background task that renews the credential periodically:

while true ; do kinit -R; sleep $((3600 * 6)) ; done &
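(kinit -R renews the existing ticket, so it only works while the ticket is still valid and within its renew_lifetime; once either window has passed, a full re-authentication is required.)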
However, for services that need resident, long-term access to the Hadoop cluster, the above assumption does not hold. In that case we can either:

Extend ticket_lifetime and renew_lifetime.
This would solve the problem, but because Kerberos here is tied to our online user login authentication, relaxing the limits brings security risks, so it is not convenient to change them.
Re-run kinit periodically to obtain fresh credentials.
Rather than merely renewing the existing ticket, we can simply re-authenticate on a schedule, which restarts the lifetime clock entirely. This generally requires exporting a keytab so the authentication can run unattended, as sketched below.
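
A minimal sketch of the keytab approach (the principal yule/dx-pipe-sata20.dx is taken from the error log below; the keytab path and schedule are assumptions; note that ktadd/xst re-randomizes the principal's keys by default, which kadmin.local's -norandkey avoids):

 # export the principal's keys to a keytab (run by a Kerberos admin)
 kadmin -p admin/admin -q "xst -k /etc/security/keytabs/yule.keytab yule/dx-pipe-sata20.dx"

 # then re-authenticate from the keytab unattended, e.g. every 12 hours via cron:
 0 */12 * * * kinit -kt /etc/security/keytabs/yule.keytab yule/dx-pipe-sata20.dx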
Hadoop wraps the Kerberos authentication logic to some extent, so in practice it does not need to be this complicated. The key class to look at is UserGroupInformation; a sketch follows.
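
A minimal Java sketch using that class (UserGroupInformation and the methods shown are standard Hadoop APIs; the principal and keytab path carry over the assumptions above):

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.UserGroupInformation;

 public class KerberosLogin {
     public static void main(String[] args) throws Exception {
         Configuration conf = new Configuration();
         conf.set("hadoop.security.authentication", "kerberos");
         UserGroupInformation.setConfiguration(conf);

         // log in once from the keytab; no ticket cache or external kinit loop is needed
         UserGroupInformation.loginUserFromKeytab(
                 "yule/[email protected]",
                 "/etc/security/keytabs/yule.keytab");

         // call this before batches of requests: it re-logs in from the keytab
         // when the TGT is close to expiring, so a resident service stays authenticated
         UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();

         // ... access HDFS/HBase as this user here ...
     }
 }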

http://tech.meituan.com/hadoop-security-practice.html

Error message (the failure below is what a missing or expired TGT looks like in the logs):
17:24:56 ERROR [http-8280-exec-1] org.apache.hadoop.security.UserGroupInformation(UserGroupInformation.java:1494) - PriviledgedActionException as:yule/[email protected] (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
17:24:57 ERROR [http-8280-exec-1] org.apache.hadoop.security.UserGroupInformation(UserGroupInformation.java:1494) - PriviledgedActionException as:yule/[email protected] (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
17:24:59 ERROR [http-8280-exec-1] org.apache.hadoop.security.UserGroupInformation(UserGroupInformation.java:1494) - PriviledgedActionException as:yule/[email protected] (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
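
When this appears, the process usually has no valid TGT in its credential cache; a quick check and fix from the shell (same assumed keytab path as above):

 klist   # is there a non-expired ticket for yule/dx-pipe-sata20.dx?
 kinit -kt /etc/security/keytabs/yule.keytab yule/dx-pipe-sata20.dx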




http://blog.csdn.net/lalaguozhe/article/details/11570009
