Installing and configuring Kerberos on a native Apache Hadoop 3.3.1 cluster

For installing Kerberos itself, see my other post (linked below). Here we go straight into configuring Hadoop.

CDH配置Kerberos和Sentry详解 — Mumunu-'s blog on CSDN

I. Add users and generate keytabs

After Kerberos is deployed, first add the required principals in the KDC and generate their authentication (keytab) files.

1. Add the principals that need authentication in the KDC. Which principals to create depends on your deployment (the Hadoop cluster is mainly operated by the hdfs user, so besides the hdfs principal you also need an HTTP principal for SPNEGO; in addition, hive, hbase and dwetl also access the Hadoop cluster). Any other users can be added the same way, for example:

The principal format is: username/[email protected]

kadmin.local -q "addprinc -randkey HTTP/[email protected]"

Every node that joins the cluster needs its own principal and keytab, so repeat this for each host (a bulk-creation sketch follows the list below):

kadmin.local -q "addprinc -randkey HTTP/[email protected]"
kadmin.local -q "addprinc -randkey HTTP/[email protected]"

and so on; the remaining hosts are omitted below.

kadmin.local -q "addprinc -randkey hive/[email protected]"

kadmin.local -q "addprinc -randkey hbase/[email protected]"

kadmin.local -q "addprinc -randkey hdfs/[email protected]"

kadmin.local -q "addprinc -randkey presto/[email protected]"

kadmin.local -q "addprinc -randkey dwetl/[email protected]"
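With more than a few nodes, the per-host principals can also be created in bulk. A minimal sketch, assuming a hypothetical hosts.txt containing one node FQDN per line (add further services and adjust the realm to your environment):

for host in $(cat hosts.txt); do
  # hdfs and HTTP principals are needed on every node; hosts.txt is a hypothetical host list
  kadmin.local -q "addprinc -randkey hdfs/${host}@HADOOP.COM"
  kadmin.local -q "addprinc -randkey HTTP/${host}@HADOOP.COM"
done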

2. Generate keytabs for each user in bulk

kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab HTTP/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab HTTP/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab HTTP/[email protected]"

kadmin.local -q "xst -k /export/common/kerberos5/hive.keytab hive/[email protected]"

kadmin.local -q "xst -k /export/common/kerberos5/hbase.keytab hbase/[email protected]"

kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab hdfs/[email protected]"

kadmin.local -q "xst -k /export/common/kerberos5/presto-server.keytab presto/[email protected]"

kadmin.local -q "xst -k /export/common/kerberos5/dwetl.keytab dwetl/[email protected]"

This generates the corresponding users' keytab files in /export/common/kerberos5. Make sure the hdfs and HTTP principals of every node have been exported (repeat the xst commands per host), then distribute the keytab files to every machine, including the slave KDCs and the client nodes. (Note: because different users read these keytabs after distribution, set appropriate ownership and permissions on the files.) Also note that the configuration below references /export/common/kerberos5/HTTP.keytab for the SPNEGO/web principals; either export the HTTP principals into that file as well (xst -k /export/common/kerberos5/HTTP.keytab HTTP/...) or point those properties at hdfs.keytab, which already contains them.
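A minimal distribution-and-permission sketch (the target hostnames and the hadoop group are assumptions; adapt them to your nodes and users):

# push the keytabs to the other nodes (hostnames are assumptions)
for host in hadoop103 hadoop104; do
  scp /export/common/kerberos5/*.keytab ${host}:/export/common/kerberos5/
done
# each keytab should be readable only by its service user (and, if needed, the hadoop group)
chown hdfs:hadoop /export/common/kerberos5/hdfs.keytab
chmod 640 /export/common/kerberos5/hdfs.keytab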

Configure HDFS to use HTTPS for secure transport

A. Generate a key pair

keytool is the Java key and certificate management tool; it lets you create and manage public/private key pairs and their associated certificates.

  • -keystore: name and location of the keystore file in which the generated material is stored
  • -genkey (or -genkeypair): generate a key pair
  • -alias: alias for the generated key pair; defaults to mykey if omitted
  • -keyalg: key algorithm, RSA or DSA; the default is DSA

Generate the keystore, entering its password and the certificate details when prompted:

[root@hadoop102 ~]# keytool -keystore /etc/security/keytab/keystore -alias jetty -genkey -keyalg RSA
Enter keystore password:
Re-enter new password:
What is your first and last name?  [Unknown]:
What is the name of your organizational unit?  [Unknown]:
What is the name of your organization?  [Unknown]:
What is the name of your City or Locality?  [Unknown]:
What is the name of your State or Province?  [Unknown]:
What is the two-letter country code for this unit?  [Unknown]:
Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?  [no]:  y
Enter key password for <jetty>
        (RETURN if same as keystore password):
Re-enter new password:
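If you prefer to script this step, the same keystore can be created non-interactively. A sketch; the distinguished-name fields and the password 123456 are placeholders:

# non-interactive equivalent of the dialog above
keytool -keystore /etc/security/keytab/keystore -alias jetty -genkeypair -keyalg RSA \
  -keysize 2048 -validity 3650 -storepass 123456 -keypass 123456 \
  -dname "CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown"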

B. Change the owner and permissions of the keystore file

[root@hadoop102 ~]# chown -R root:hadoop /etc/security/keytab/keystore
[root@hadoop102 ~]# chmod 660 /etc/security/keytab/keystore

Notes:

  • The keystore password must be at least 6 characters; it can be all digits, all letters, or a mix of the two.
  • Make sure the hdfs user (the user that starts HDFS) has read access to the generated keystore file.
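A quick sanity check for the second point (assuming sudo is available and hdfs is the start-up user):

# confirm the HDFS start-up user can actually read the keystore
sudo -u hdfs test -r /etc/security/keytab/keystore && echo "keystore readable" || echo "keystore NOT readable"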

C. Distribute the keystore to the same path on every node in the cluster (xsync here is a cluster file-distribution script; copying with scp to each node works as well)

[root@hadoop102 ~]# xsync /etc/security/keytab/keystore
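After distribution, you can confirm the keystore arrived intact on each node by listing it (123456 is the example password used throughout this post; substitute your own):

# list the keystore entries; a single PrivateKeyEntry named jetty should be shown
keytool -list -keystore /etc/security/keytab/keystore -storepass 123456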

II. Modify the cluster configuration files
1. HDFS: add the following configuration
core-site.xml

<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
 
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>


hdfs-site.xml

<!-- kerberos start -->
<!-- namenode -->
 
<property>
<name>dfs.namenode.keytab.file</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>
 
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/[email protected]</value>
</property>
 
<property>
<name>dfs.namenode.kerberos.internal.spnego.principal</name>
<value>HTTP/[email protected]</value>
</property>
 
<property>
<name>dfs.namenode.kerberos.internal.spnego.keytab</name>
<value>/export/common/kerberos5/HTTP.keytab</value>
</property>

<property>
<name>dfs.web.authentication.kerberos.principal</name>
<value>HTTP/[email protected]</value>
</property>
 
<property>
<name>dfs.web.authentication.kerberos.keytab</name>
<value>/export/common/kerberos5/HTTP.keytab</value>
</property>
 
<!-- datanode -->
<property>
<name>dfs.datanode.keytab.file</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>
 
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>hdfs/[email protected]</value>
</property>
 
<property>
<name>dfs.http.policy</name>
<value>HTTPS_ONLY</value>
</property>
 
<!-- <property>
<name>dfs.https.port</name>
<value>50470</value>
</property> -->
 
<property>
<name>dfs.data.transfer.protection</name>
<value>integrity</value>
</property>
 
<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>
 
<property> 
<name>dfs.datanode.data.dir.perm</name>
<value>700</value>
</property>
 
<!--
<property>
<name>dfs.datanode.https.address</name>
<value>0.0.0.0:50475</value>
</property> -->
 
 
 
<!-- journalnode -->
<property>
<name>dfs.journalnode.keytab.file</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>
 
<property>
<name>dfs.journalnode.kerberos.principal</name>
<value>hdfs/[email protected]</value>
</property>
 
<property>
<name>dfs.journalnode.kerberos.internal.spnego.principal</name>
<value>HTTP/[email protected]</value>
</property>

<property>
<name>dfs.journalnode.kerberos.internal.spnego.keytab</name>
<value>/export/common/kerberos5/HTTP.keytab</value>
</property>
 
<!-- kerberos end-->


hadoop-env.sh

export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=${JAVA_HOME}/lib -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88"


ssl-server.xml (place it in the Hadoop configuration directory /export/common/hadoop/conf and chown it to hdfs:hadoop)


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
 
<configuration>
 
 
 
<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Truststore to be used by NN and DN. Must be specified.
  </description>
</property>
 
<property>
  <name>ssl.server.truststore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".
  </description>
</property>
 
<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>
 
<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds.
  Default value is 10000 (10 seconds).
  </description>
</property>
 
<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Keystore to be used by NN and DN. Must be specified.
  </description>
</property>
 
<property>
  <name>ssl.server.keystore.password</name>
  <value>123456</value>
  <description>Must be specified.
  </description>
</property>
 
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>123456</value>
  <description>Must be specified.
  </description>
</property>
 
<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>
 
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded
  from SSL communication.</description>
</property>
 
</configuration>


 
ssl-client.xml (place it in the Hadoop configuration directory /export/common/hadoop/conf and chown it to hdfs:hadoop)

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
 
<configuration>
 
<property>
  <name>ssl.client.truststore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Truststore to be used by clients like distcp. Must be
  specified.
  </description>
</property>
 
<property>
  <name>ssl.client.truststore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".
  </description>
</property>
 
 
 
<property>
  <name>ssl.client.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>
 
 
 
<property>
  <name>ssl.client.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds.
  Default value is 10000 (10 seconds).
  </description>
</property>
 
<property>
  <name>ssl.client.keystore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Keystore to be used by clients like distcp. Must be
  specified.
  </description>
</property>
 
<property>
  <name>ssl.client.keystore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".
  </description>
</property>
 
<property>
  <name>ssl.client.keystore.keypassword</name>
  <value>123456</value>
  <description>Optional. Default value is "".
  </description>
</property>
 
<property>
  <name>ssl.client.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".
  </description>
</property>


</configuration>
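Once these SSL files and the hdfs-site.xml changes above have been distributed and HDFS restarted, a quick way to check that the NameNode web UI now answers over HTTPS (9871 is the Hadoop 3.x default for dfs.namenode.https-address; the hostname is an assumption):

# -k skips certificate verification because the certificate generated above is self-signed
curl -k -o /dev/null -w '%{http_code}\n' https://hadoop102:9871/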


2. YARN: add the following configuration
yarn-site.xml

<!-- resourcemanager -->
<property>
<name>yarn.web-proxy.principal</name>
<value>HTTP/[email protected]</value>
</property>
 
<property>
<name>yarn.web-proxy.keytab</name>
<value>/export/common/kerberos5/HTTP.keytab</value>
</property>
 
<property>
<name>yarn.resourcemanager.principal</name>
<value>hdfs/[email protected]</value>
</property>
 
<property>
<name>yarn.resourcemanager.keytab</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>
 
<!-- nodemanager -->
<property>
<name>yarn.nodemanager.principal</name>
<value>hdfs/[email protected]</value>
</property>
 
<property>
<name>yarn.nodemanager.keytab</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>
 
<property>
<name>yarn.nodemanager.container-executor.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
 
<property>
<name>yarn.nodemanager.linux-container-executor.group</name>
<value>hdfs</value>
</property>
 
<!-- timeline kerberos -->
<property>
<name>yarn.timeline-service.http-authentication.type</name>
<value>kerberos</value>
<description>Defines authentication used for the timeline server HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#</description>
 
</property>
<property>
<name>yarn.timeline-service.principal</name>
<value>hdfs/[email protected]</value>
</property>
 
<property>
<name>yarn.timeline-service.keytab</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>
 
<property>
<name>yarn.timeline-service.http-authentication.kerberos.principal</name>
<value>HTTP/[email protected]</value>
</property>
 
<property> 
  <name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
  <value>/export/common/kerberos5/HTTP.keytab</value>
</property>
 
<property>
<name>yarn.nodemanager.container-localizer.java.opts</name>
<value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
 
<property>
<name>yarn.nodemanager.health-checker.script.opts</name>
<value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
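Switching to LinuxContainerExecutor also requires a container-executor.cfg and the proper ownership/permission bits on the container-executor binary. A minimal sketch, assuming $HADOOP_HOME points at the installation; the group and user values are assumptions matching the properties above:

# container-executor reads its config from a path compiled into the binary,
# by default $HADOOP_HOME/etc/hadoop/container-executor.cfg
cat > $HADOOP_HOME/etc/hadoop/container-executor.cfg <<'EOF'
yarn.nodemanager.linux-container-executor.group=hdfs
banned.users=root
min.user.id=500
allowed.system.users=hdfs
EOF
# the binary must be owned by root, group-owned by the NodeManager group, with setuid/setgid bits
chown root:hdfs $HADOOP_HOME/bin/container-executor
chmod 6050 $HADOOP_HOME/bin/container-executor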


mapred-site.xml

<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx1638M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
 
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx3276M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
 
<property>
<name>mapreduce.jobhistory.keytab</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>
 
<property>
<name>mapreduce.jobhistory.principal</name>
<value>hdfs/[email protected]</value>
</property>
 
<property>
<name>mapreduce.jobhistory.webapp.spnego-keytab-file</name>
<value>/export/common/kerberos5/HTTP.keytab</value>
</property>
 
<property>
<name>mapreduce.jobhistory.webapp.spnego-principal</name>
<value>HTTP/[email protected]</value>
</property>
 
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx1024m -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
 
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx3276m -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>


3. Hive: add the following configuration
hive-site.xml

<!--hiveserver2-->
<property>
<name>hive.server2.authentication</name>
<value>KERBEROS</value>
</property>

<property>
<name>hive.server2.authentication.kerberos.principal</name>
<value>hdfs/[email protected]</value>
</property>

<property>
<name>hive.server2.authentication.kerberos.keytab</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>

<!-- metastore -->

<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
</property>

<property>
<name>hive.metastore.kerberos.keytab.file</name>
<value>/export/common/kerberos5/hdfs.keytab</value>
</property>

<property>
<name>hive.metastore.kerberos.principal</name>
<value>hdfs/[email protected]</value>
</property>
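Once HiveServer2 is restarted with these settings, a quick connectivity check is to obtain a ticket and connect with beeline, appending HiveServer2's own principal to the JDBC URL (the hostname, port, keytab and principal below are assumptions matching the config above):

# authenticate first, then connect through the Kerberized HiveServer2
kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'`
beeline -u "jdbc:hive2://hadoop102:10000/default;principal=hdfs/[email protected]"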


4. HBase: add the following configuration
hbase-site.xml

<!-- HBase Kerberos authentication: start -->
 
    <property>
        <name>hbase.security.authentication</name>
        <value>kerberos</value>
    </property>
    <!-- secure HBase RPC communication -->
 
    <property>
        <name>hbase.rpc.engine</name>
        <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
    </property>
    <!-- HMaster Kerberos principal -->
 
    <property>
        <name>hbase.master.kerberos.principal</name>
        <value>hdfs/[email protected]</value>
    </property>
    <!-- HMaster keytab file location -->
 
    <property>
        <name>hbase.master.keytab.file</name>
        <value>/export/common/kerberos5/hdfs.keytab</value>
    </property>
    <!-- RegionServer Kerberos principal -->
 
    <property>
        <name>hbase.regionserver.kerberos.principal</name>
        <value>hdfs/[email protected]</value>
    </property>
    <!-- RegionServer keytab file location -->
 
    <property>
        <name>hbase.regionserver.keytab.file</name>
        <value>/export/common/kerberos5/hdfs.keytab</value>
    </property>
 
<!--
    <property>
        <name>hbase.thrift.keytab.file</name>
        <value>/soft/conf/hadoop/hdfs.keytab</value>
    </property>

     <property>
         <name>hbase.thrift.kerberos.principal</name>
         <value>hdfs/[email protected]</value>
     </property>

     <property>
         <name>hbase.rest.keytab.file</name>
         <value>/soft/conf/hadoop/hdfs.keytab</value>
     </property>

     <property>
         <name>hbase.rest.kerberos.principal</name>
         <value>hdfs/[email protected]</value>
     </property>

     <property>
         <name>hbase.rest.authentication.type</name>
         <value>kerberos</value>
     </property>

     <property>
         <name>hbase.rest.authentication.kerberos.principal</name>
         <value>HTTP/[email protected]</value>
     </property>

     <property>
         <name>hbase.rest.authentication.kerberos.keytab</name>
         <value>/soft/conf/hadoop/HTTP.keytab</value>
     </property>
-->
<!-- HBase Kerberos authentication: end -->
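After HMaster and the RegionServers are restarted, a quick check from a client that shares this hbase-site.xml (the keytab and principal are assumptions; depending on your ZooKeeper security setup, additional client-side SASL configuration may be needed):

# without a valid ticket the shell fails with a SASL/GSSException; with one, list should succeed
kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'`
echo "list" | hbase shell -n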


 

III. Useful Kerberos commands
Destroy the current credential cache: kdestroy

Open the kadmin console on the master KDC: kadmin.local

Show the Kerberos tickets currently held: klist

Obtain a user's credentials from a keytab:

kinit -kt /export/common/kerberos5/kadm5.keytab admin/[email protected]

List the contents of a keytab:

klist -k -e /export/common/kerberos5/hdfs.keytab

Generate a keytab file:

kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab admin/[email protected]"

Renew (extend) the current ticket: kinit -R

Delete the KDC database: rm -rf /export/common/kerberos5/principal (this is the database path chosen when the database was created)

IV. Quick test
To test HDFS, switch to the hdfs user and run hdfs dfs -ls /; it should now fail, asking for Kerberos authentication. Then run

kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'`

and if the listing now returns results, the HDFS integration is working.
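A similar smoke test for YARN/MapReduce is to submit the bundled example job after obtaining a ticket (the jar path and version are assumptions for Hadoop 3.3.1):

# the pi example submits a small job through the now-Kerberized ResourceManager
kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'`
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 2 10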

Also note some significant behavioral changes: tasks can now be launched and run under the operating-system account of the user who submitted the job, rather than the user running the NodeManager. This means the operating system can be used to isolate running tasks so that they cannot send signals to one another, and local data such as task intermediate files is protected by ordinary local file-system permissions.

(This requires setting yarn.nodemanager.container-executor.class to org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.)
 


Reposted from blog.csdn.net/h952520296/article/details/130869070