Huawei Authentication -- sun.security.krb5.KrbException: Server not found in Kerberos database (7)

Copyright notice: This is an original article by the blogger; reproduction without permission is prohibited. https://blog.csdn.net/li1987by/article/details/82856873

I. Problem Description
When connecting to Kafka in Huawei's security mode, the client kept failing with the following error:

javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and `socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm` Kafka Client will go to AUTH_FAILED state.
Caused by: sun.security.krb5.KrbException: Server not found in Kerberos database (7) - UNKNOWN_SERVER
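The message itself points at the usual cause: the hostname the client resolves for the broker must match the hostname part of the broker's Kerberos principal (kafka/&lt;fqdn&gt;@&lt;realm&gt;). A quick way to see which name Java actually resolves is the sketch below; the broker address is a placeholder, not a value from this setup:

```java
import java.net.InetAddress;

public class FqdnCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder broker host; pass your real broker address as the first argument.
        String broker = args.length > 0 ? args[0] : "localhost";
        InetAddress addr = InetAddress.getByName(broker);
        // This is the name the client side will see for the broker; it must
        // match the hostname in the broker principal kafka/<hostname>@<REALM>,
        // otherwise the KDC reports UNKNOWN_SERVER (error 7).
        System.out.println("resolved hostname: " + addr.getCanonicalHostName());
    }
}
```

If the printed name is a short name or an IP instead of the FQDN registered in the KDC, fix DNS/hosts resolution (or try the `-Dsun.net.spi.nameservice.provider.1=dns,sun` flag mentioned in the error).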

II. Solution
Check the Kafka configuration files and the jar versions first.
In my case, replacing the open-source Kafka jars with the jars bundled with the Huawei platform fixed the problem.
III. Resolution Process
1. Built a new Kafka project; status messages pushed normally.
2. Used Huawei's bundled Kafka sample project; status messages also pushed normally.
3. Embedded the Huawei Kafka sample code, unchanged, into my own project; surprisingly, it failed with the error above.
4. Suspected the Huawei jars were the culprit, so I installed them into the local Maven repository and referenced them from the project's pom.

1) Install the Huawei jars into the local repository:

mvn install:install-file -Dfile=E:\bigdata\huaClient\FusionInsight_Services_Client\FusionInsight_Services_ClientConfig\Kafka\kafka-examples\lib\kafka_2.10-0.10.0.0.jar -DgroupId=cn.mycompany -DartifactId=kafka_2.10 -Dversion=0.10.0.0  -Dpackaging=jar
mvn install:install-file -Dfile=E:\bigdata\huaClient\FusionInsight_Services_Client\FusionInsight_Services_ClientConfig\Kafka\kafka-examples\lib\kafka-clients-0.10.0.0.jar -DgroupId=cn.mycompany -DartifactId=kafka-clients -Dversion=0.10.0.0  -Dpackaging=jar

2) Reference the jars in the project's pom file:

    <dependency>
        <groupId>cn.mycompany</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.10.0.0</version>
    </dependency>
    <dependency>
        <groupId>cn.mycompany</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.10.0.0</version>
    </dependency>

3) JVM startup parameters:

-Djava.security.auth.login.config=D:\workspace-bigdata\config-springcloud\jaas\104006123.jaas.conf 
-Djava.security.krb5.conf=D:\workspace-bigdata\config-springcloud\baseConf\krb5.conf 
-Dusername.client.keytab.file=D:\workspace-bigdata\config-springcloud\baseConf\user.keytab 
-Dzookeeper.server.principal=zookeeper/hadoop.hadoop.com -DbaseConfPath=D:\workspace-bigdata\config-springcloud
The jaas file can be generated once with Huawei's sample program, then placed under a fixed configuration directory and reused from there.
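For orientation, a generated jaas file typically contains a KafkaClient login section like the sketch below; the principal, realm, and keytab path here are placeholders, not values from this setup:

```
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="D:\\workspace-bigdata\\config-springcloud\\baseConf\\user.keytab"
    principal="developuser@HADOOP.COM"
    useTicketCache=false
    storeKey=true
    debug=false;
};
```

The `principal` here is the client user's principal; the broker-side principal (kafka/&lt;fqdn&gt;@&lt;realm&gt;) is what must match the hostname the client resolves.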

4) Unified authentication code
Perform the authentication uniformly at every point where the service is called:

LoginUtil.shouldAuthenticateOverKrb(conf, userName, userKeytabFile, krb5File, ZOOKEEPER_SERVER_PRINCIPAL_KEY, ZOOKEEPER_DEFAULT_SERVER_PRINCIPAL);
The parameters are, in order: the HBase configuration, the authentication user name, the path to user.keytab, the path to krb5.conf, the key zookeeper.server.principal, and its value zookeeper/hadoop.hadoop.com.
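A minimal sketch of how a call site can pick those values up from the JVM flags set in step 3), failing fast if any is missing before handing them to LoginUtil; the class and helper names here are hypothetical:

```java
import java.io.File;

public class AuthParams {
    // Read a required -D flag, failing fast with a clear message if absent.
    public static String require(String key) {
        String value = System.getProperty(key);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("missing JVM flag -D" + key);
        }
        return value;
    }

    public static void main(String[] args) {
        // Flag names follow the startup parameters above.
        String keytab = require("username.client.keytab.file");
        String krb5 = require("java.security.krb5.conf");
        String zkPrincipal = require("zookeeper.server.principal");
        System.out.println("keytab exists: " + new File(keytab).exists());
        System.out.println("krb5.conf: " + krb5);
        System.out.println("zookeeper principal: " + zkPrincipal);
        // These values would then be passed to
        // LoginUtil.shouldAuthenticateOverKrb(conf, userName, keytab, krb5, ...).
    }
}
```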

Here conf is initialized at startup:

public static void init() {
        try {
            if (conf == null) {
                conf = HBaseConfiguration.create();
                // Base path passed in via -DbaseConfPath at startup
                String baseConfPath = System.getProperty(StringUtils.BASE_CONF_PATH);
                logger.info("Security-authentication config path: {}", baseConfPath);
                String userdir = baseConfPath + File.separator + "hbaseConf" + File.separator;
                conf.addResource(new Path(userdir + "core-site.xml"));
                conf.addResource(new Path(userdir + "hdfs-site.xml"));
                conf.addResource(new Path(userdir + "hbase-site.xml"));
            }
        } catch (Exception e) {
            logger.error("HBase Configuration initialization failure!");
            throw new RuntimeException(e);
        }
    }

The main authentication handling can live in the LoginUtil class (Huawei's bundled authentication utility):

  public synchronized static void shouldAuthenticateOverKrb(Configuration conf, String userPrincipal,
            String userKeytabPath, String krb5ConfFile, String zkServerPrincipalKey,
            String zkServerPrincipal) throws IOException {
        // 1. validate inputs (check conf before using it)
        if (conf == null) {
            LOG.error("input conf is invalid.");
            throw new IOException("input conf is invalid.");
        }
        if (User.isHBaseSecurityEnabled(conf)) {
            if ((userPrincipal == null) || (userPrincipal.length() <= 0)) {
                LOG.error("input userPrincipal is invalid.");
                throw new IOException("input userPrincipal is invalid.");
            }
            if ((userKeytabPath == null) || (userKeytabPath.length() <= 0)) {
                LOG.error("input userKeytabPath is invalid.");
                throw new IOException("input userKeytabPath is invalid.");
            }
            // 2. check the keytab file exists
            File userKeytabFile = new File(userKeytabPath);
            if (!userKeytabFile.exists()) {
                LOG.error("userKeytabFile(" + userKeytabFile.getAbsolutePath() + ") does not exist.");
                throw new IOException(
                        "userKeytabFile(" + userKeytabFile.getAbsolutePath() + ") does not exist.");
            }
            if (!userKeytabFile.isFile()) {
                LOG.error("userKeytabFile(" + userKeytabFile.getAbsolutePath() + ") is not a file.");
                throw new IOException(
                        "userKeytabFile(" + userKeytabFile.getAbsolutePath() + ") is not a file.");
            }
            setConfiguration(conf);
            // 3. login to hadoop with the keytab
            loginHadoop(userPrincipal, userKeytabFile.getAbsolutePath());
            LOG.info("Login success.");
        }
    }
