Permission management component of big data platform - Apache Ranger

Introduction to Apache Ranger

Apache Ranger provides a centralized security management framework that handles authorization and auditing. It can perform fine-grained data access control on components of the Hadoop ecosystem such as HDFS, YARN, Hive, and HBase. Through the Ranger console, administrators can easily configure policies to control users' access rights. Ranger's advantages:

  • Rich component support (HDFS, HBase, Hive, YARN, Kafka, Storm)
  • Fine-grained access control (down to the Hive column level)
  • Plug-in based access control, with unified and convenient policy management
  • Audit logging that records all kinds of operations, with a unified query API and UI
  • Kerberos integration, plus a REST interface for secondary development

Why choose Ranger:

  • Multi-component support, basically covering the components of the current technology stack
  • Audit logs make it possible to trace the details of user operations, which is convenient for troubleshooting and feedback
  • Its own user system, which makes it easy to integrate with other systems and expose interface calls

Ranger's architecture diagram:
[Image: Ranger architecture diagram]

RangerAdmin:

  • Defines the policies of each service and allocates the corresponding resources to the corresponding users or groups
  • Provides a RESTful interface for adding, deleting, modifying, and querying policies
  • Provides a unified query and management page
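That RESTful policy interface can be exercised directly with curl. The sketch below only builds and prints the endpoint URL; the Ranger Admin address, the admin/admin credentials, and the service name dev_hdfs are illustrative values borrowed from this article's setup (the endpoint path follows Ranger's public v2 API):

```shell
# Build the URL for listing all policies of one service via Ranger's
# public v2 REST API. Address, credentials and service name are
# illustrative -- substitute your own deployment's values.
RANGER_URL="http://192.168.1.11:6080"
SERVICE="dev_hdfs"
LIST_API="${RANGER_URL}/service/public/v2/api/service/${SERVICE}/policy"
echo "$LIST_API"
# Against a live Ranger Admin (default login admin/admin), uncomment:
# curl -s -u admin:admin "$LIST_API"
```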

Service Plugin:

  • Embedded in each system's execution flow, periodically pulling policies from RangerAdmin
  • Makes access decisions based on the pulled policies
  • Records access audit logs

Ranger permission model

  • User: expressed as a User or a Group
  • Resource: different components have different resources, such as HDFS paths or Hive databases/tables
  • Policy: a Service can have multiple policies, and different components have different policy authorization models
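These three elements come together in a policy document. The JSON below is a hand-written sketch of roughly what an HDFS policy looks like; the field names follow Ranger's policy model, but the concrete values (service name, path, user) are illustrative only:

```shell
# Write a sketch of an HDFS policy: which users may perform which
# accesses on which resource path. Values are illustrative only.
cat > /tmp/sample-policy.json <<'EOF'
{
  "service": "dev_hdfs",
  "name": "rangertest1-rw",
  "resources": {
    "path": { "values": ["/rangertest1"], "isRecursive": true }
  },
  "policyItems": [
    {
      "users": ["hive"],
      "accesses": [
        { "type": "read",    "isAllowed": true },
        { "type": "write",   "isAllowed": true },
        { "type": "execute", "isAllowed": true }
      ]
    }
  ]
}
EOF
grep -c '"isAllowed"' /tmp/sample-policy.json
```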

Take HDFS as an example, the access process after integration with Ranger:
[Image: HDFS access flow after integration with Ranger]

  • Load the Ranger plugin when HDFS starts, and pull the permission policy from Admin
  • The user access request arrives at the NameNode for authorization verification
  • Process the access request after verification and record the audit log

Take Hive as an example, the access process after integration with Ranger:
[Image: Hive access flow after integration with Ranger]

  • Load the Ranger plugin when HiveServer2 starts, and pull the permission policy from Admin
  • The user's SQL query request arrives at HiveServer2, and the permissions are verified in the Compile phase
  • Process the access request after verification and record the audit log

Take YARN as an example, the access process after integration with Ranger:
[Image: YARN access flow after integration with Ranger]

  • Load the Ranger plugin when ResourceManger starts, and pull the permission policy from Admin
  • The user submits the task to the ResourceManager, and performs authorization verification in the parsing task stage
  • Submit the task after verification and record the audit log

Apache Ranger installation

Official documents:

Preparation

First prepare the Java and Maven environment:

[root@hadoop ~]# java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
[root@hadoop ~]# mvn -v
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /usr/local/maven
Java version: 1.8.0_261, vendor: Oracle Corporation, runtime: /usr/local/jdk/1.8/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1062.el7.x86_64", arch: "amd64", family: "unix"
[root@hadoop ~]# 
  • Tips: Maven should be configured with a nearby mirror source, otherwise downloading the dependencies will be extremely slow or fail
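A mirror can be configured in ~/.m2/settings.xml. The snippet below uses the Aliyun public repository, a commonly used mirror in China; any reachable Maven mirror works the same way. Note that this overwrites an existing settings.xml, so back that up first:

```shell
# Point Maven at a faster mirror of Maven Central by writing
# ~/.m2/settings.xml. Back up any existing settings.xml first --
# this overwrites it.
mkdir -p ~/.m2
cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>aliyun</id>
      <mirrorOf>central</mirrorOf>
      <name>Aliyun public mirror</name>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
  </mirrors>
</settings>
EOF
```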

Install a MySQL database; I am using my local database here:

C:\Users\Administrator>mysql --version
mysql  Ver 8.0.21 for Win64 on x86_64 (MySQL Community Server - GPL)

Set up a Hadoop environment. Note that the Hadoop version must be >= 2.7.1; I previously tried Hadoop 2.6.0 and it could not be successfully integrated with Ranger. This article uses version 2.8.5:

[root@hadoop ~]# echo $HADOOP_HOME
/usr/local/hadoop-2.8.5
[root@hadoop ~]# 

Ranger relies on MySQL for state storage, so you also need to prepare the MySQL driver jar:

[root@hadoop ~]# ls /usr/local/src |grep mysql
mysql-connector-java-8.0.21.jar
[root@hadoop ~]# 

Compile Ranger source code

Go to the official website to download the source code package:

Pay attention to matching the Ranger and Hadoop versions: if you installed Hadoop 2.x, use a Ranger release below 2.0; if you installed Hadoop 3.x, use Ranger 2.0 or above. Since the Hadoop version installed here is 2.8.5, I chose Ranger 1.2.0:

[root@hadoop ~]# cd /usr/local/src
[root@hadoop /usr/local/src]# wget https://mirror-hk.koddos.net/apache/ranger/1.2.0/apache-ranger-1.2.0.tar.gz

Unzip the source package:

[root@hadoop /usr/local/src]# tar -zxvf apache-ranger-1.2.0.tar.gz

Enter the decompressed directory with cd apache-ranger-1.2.0, then edit the pom.xml in that directory and comment out all the repository-related configuration:

<!--
    <repositories>
        <repository>
            <id>apache.snapshots.https</id>
            <name>Apache Development Snapshot Repository</name>
            <url>https://repository.apache.org/content/repositories/snapshots</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>apache.public.https</id>
            <name>Apache Development Snapshot Repository</name>
            <url>https://repository.apache.org/content/repositories/public</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    <repository>
      <id>repo</id>
      <url>file://${basedir}/local-repo</url>
      <snapshots>
         <enabled>true</enabled>
      </snapshots>
  </repository>
    </repositories>
-->

After completing the above modifications, use the maven command to compile and package:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# mvn -DskipTests=true clean package assembly:assembly

After a long wait, the following information is output when compilation and packaging complete:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for ranger 1.2.0:
[INFO] 
[INFO] ranger ............................................. SUCCESS [  0.838 s]
[INFO] Jdbc SQL Connector ................................. SUCCESS [  0.861 s]
[INFO] Credential Support ................................. SUCCESS [ 26.341 s]
[INFO] Audit Component .................................... SUCCESS [  1.475 s]
[INFO] Common library for Plugins ......................... SUCCESS [  3.154 s]
[INFO] Installer Support Component ........................ SUCCESS [  0.471 s]
[INFO] Credential Builder ................................. SUCCESS [  1.074 s]
[INFO] Embedded Web Server Invoker ........................ SUCCESS [  0.807 s]
[INFO] Key Management Service ............................. SUCCESS [  3.335 s]
[INFO] ranger-plugin-classloader .......................... SUCCESS [  0.797 s]
[INFO] HBase Security Plugin Shim ......................... SUCCESS [ 17.365 s]
[INFO] HBase Security Plugin .............................. SUCCESS [  6.050 s]
[INFO] Hdfs Security Plugin ............................... SUCCESS [  5.831 s]
[INFO] Hive Security Plugin ............................... SUCCESS [02:01 min]
[INFO] Knox Security Plugin Shim .......................... SUCCESS [03:47 min]
[INFO] Knox Security Plugin ............................... SUCCESS [07:05 min]
[INFO] Storm Security Plugin .............................. SUCCESS [  1.757 s]
[INFO] YARN Security Plugin ............................... SUCCESS [  0.820 s]
[INFO] Ranger Util ........................................ SUCCESS [  0.869 s]
[INFO] Unix Authentication Client ......................... SUCCESS [ 17.494 s]
[INFO] Security Admin Web Application ..................... SUCCESS [03:01 min]
[INFO] KAFKA Security Plugin .............................. SUCCESS [  6.686 s]
[INFO] SOLR Security Plugin ............................... SUCCESS [03:07 min]
[INFO] NiFi Security Plugin ............................... SUCCESS [  1.210 s]
[INFO] NiFi Registry Security Plugin ...................... SUCCESS [  1.205 s]
[INFO] Unix User Group Synchronizer ....................... SUCCESS [  2.062 s]
[INFO] Ldap Config Check Tool ............................. SUCCESS [  3.478 s]
[INFO] Unix Authentication Service ........................ SUCCESS [  0.638 s]
[INFO] KMS Security Plugin ................................ SUCCESS [  1.430 s]
[INFO] Tag Synchronizer ................................... SUCCESS [01:58 min]
[INFO] Hdfs Security Plugin Shim .......................... SUCCESS [  0.584 s]
[INFO] Hive Security Plugin Shim .......................... SUCCESS [ 24.249 s]
[INFO] YARN Security Plugin Shim .......................... SUCCESS [  0.612 s]
[INFO] Storm Security Plugin shim ......................... SUCCESS [  0.709 s]
[INFO] KAFKA Security Plugin Shim ......................... SUCCESS [  0.617 s]
[INFO] SOLR Security Plugin Shim .......................... SUCCESS [  0.716 s]
[INFO] Atlas Security Plugin Shim ......................... SUCCESS [ 31.534 s]
[INFO] KMS Security Plugin Shim ........................... SUCCESS [  0.648 s]
[INFO] ranger-examples .................................... SUCCESS [  0.015 s]
[INFO] Ranger Examples - Conditions and ContextEnrichers .. SUCCESS [  1.108 s]
[INFO] Ranger Examples - SampleApp ........................ SUCCESS [  0.386 s]
[INFO] Ranger Examples - Ranger Plugin for SampleApp ...... SUCCESS [  0.519 s]
[INFO] Ranger Tools ....................................... SUCCESS [  1.411 s]
[INFO] Atlas Security Plugin .............................. SUCCESS [  3.977 s]
[INFO] Sqoop Security Plugin .............................. SUCCESS [  3.637 s]
[INFO] Sqoop Security Plugin Shim ......................... SUCCESS [  0.558 s]
[INFO] Kylin Security Plugin .............................. SUCCESS [01:04 min]
[INFO] Kylin Security Plugin Shim ......................... SUCCESS [  0.883 s]
[INFO] Unix Native Authenticator .......................... SUCCESS [  0.452 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

At this point you can see the packaged plugin installation packages in the target directory:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# ls target/
antrun                            ranger-1.2.0-hbase-plugin.zip     ranger-1.2.0-kms.zip                ranger-1.2.0-ranger-tools.zip     ranger-1.2.0-storm-plugin.zip
archive-tmp                       ranger-1.2.0-hdfs-plugin.tar.gz   ranger-1.2.0-knox-plugin.tar.gz     ranger-1.2.0-solr-plugin.tar.gz   ranger-1.2.0-tagsync.tar.gz
maven-shared-archive-resources    ranger-1.2.0-hdfs-plugin.zip      ranger-1.2.0-knox-plugin.zip        ranger-1.2.0-solr-plugin.zip      ranger-1.2.0-tagsync.zip
ranger-1.2.0-admin.tar.gz         ranger-1.2.0-hive-plugin.tar.gz   ranger-1.2.0-kylin-plugin.tar.gz    ranger-1.2.0-sqoop-plugin.tar.gz  ranger-1.2.0-usersync.tar.gz
ranger-1.2.0-admin.zip            ranger-1.2.0-hive-plugin.zip      ranger-1.2.0-kylin-plugin.zip       ranger-1.2.0-sqoop-plugin.zip     ranger-1.2.0-usersync.zip
ranger-1.2.0-atlas-plugin.tar.gz  ranger-1.2.0-kafka-plugin.tar.gz  ranger-1.2.0-migration-util.tar.gz  ranger-1.2.0-src.tar.gz           ranger-1.2.0-yarn-plugin.tar.gz
ranger-1.2.0-atlas-plugin.zip     ranger-1.2.0-kafka-plugin.zip     ranger-1.2.0-migration-util.zip     ranger-1.2.0-src.zip              ranger-1.2.0-yarn-plugin.zip
ranger-1.2.0-hbase-plugin.tar.gz  ranger-1.2.0-kms.tar.gz           ranger-1.2.0-ranger-tools.tar.gz    ranger-1.2.0-storm-plugin.tar.gz  version
[root@hadoop /usr/local/src/apache-ranger-1.2.0]# 

Install Ranger Admin

Unzip the ranger-admin installation package to a suitable directory; I am used to putting it under /usr/local:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# tar -zxvf target/ranger-1.2.0-admin.tar.gz -C /usr/local/

Enter the decompressed directory, the directory structure is as follows:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# cd /usr/local/ranger-1.2.0-admin/
[root@hadoop /usr/local/ranger-1.2.0-admin]# ls
bin                    contrib  dba_script.py           ews                 ranger_credential_helper.py  set_globals.sh           templates-upgrade   upgrade.sh
changepasswordutil.py  cred     db_setup.py             install.properties  restrict_permissions.py      setup_authentication.sh  update_property.py  version
changeusernameutil.py  db       deleteUserGroupUtil.py  jisql               rolebasedusersearchutil.py   setup.sh                 upgrade_admin.py
[root@hadoop /usr/local/ranger-1.2.0-admin]# 

Configure installation options:

[root@hadoop /usr/local/ranger-1.2.0-admin]# vim install.properties
# Path to the MySQL driver jar
SQL_CONNECTOR_JAR=/usr/local/src/mysql-connector-java-8.0.21.jar

# Root username/password and the address of the MySQL instance
db_root_user=root
db_root_password=123456a.
db_host=192.168.1.11

# Username/password for accessing the ranger database
db_name=ranger_test
db_user=root
db_password=123456a.

# Storage backend for the audit logs
audit_store=db
audit_db_user=root
audit_db_name=ranger_test
audit_db_password=123456a.

Create a ranger database in MySQL:

create database ranger_test;

Since I am using MySQL 8.x here, the database-related scripts need to be modified; if you are not on MySQL 8, you can skip this step. Open the dba_script.py and db_setup.py files and search for the following:

-cstring jdbc:mysql://%s/%s%s

Change every occurrence as follows, i.e. add the serverTimezone JDBC connection parameter:

-cstring jdbc:mysql://%s/%s%s?serverTimezone=Asia/Shanghai
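The same substitution can be applied with sed instead of hand-editing. The command below is demonstrated on a scratch copy; in practice you would point it at dba_script.py and db_setup.py after backing them up, and run it only once (running it twice would append the parameter again):

```shell
# Append the serverTimezone parameter to the JDBC connection string.
# Demonstrated on a scratch file; the real targets are dba_script.py
# and db_setup.py (back them up first, and run the sed only once).
printf '%s\n' '-cstring jdbc:mysql://%s/%s%s' > /tmp/cstring_demo.py
sed -i 's|jdbc:mysql://%s/%s%s|jdbc:mysql://%s/%s%s?serverTimezone=Asia/Shanghai|g' /tmp/cstring_demo.py
cat /tmp/cstring_demo.py
```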

Then execute the following command to start installing ranger admin:

[root@hadoop /usr/local/ranger-1.2.0-admin]# ./setup.sh

Error resolution

If the following error is reported during installation:

SQLException : SQL state: HY000 java.sql.SQLException: Operation CREATE USER failed for 'root'@'localhost' ErrorCode: 1396

SQLException : SQL state: 42000 java.sql.SQLSyntaxErrorException: Access denied for user 'root'@'192.168.1.11' to database 'mysql' ErrorCode: 1044

The solution is to execute the following statement in MySQL:

use mysql;
flush privileges;
grant system_user on *.* to 'root';
drop user 'root'@'localhost';
create user 'root'@'localhost' identified by '123456a.';
grant all privileges on *.* to 'root'@'localhost' with grant option;

drop user 'root'@'192.168.1.11';
create user 'root'@'192.168.1.11' identified by '123456a.';
grant all privileges on *.* to 'root'@'192.168.1.11' with grant option;
flush privileges;

If the following error is reported:

SQLException : SQL state: HY000 java.sql.SQLException: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable) ErrorCode: 1418

Solution:

set global log_bin_trust_function_creators=TRUE;
flush privileges;

If the following error is reported:

SQLException : SQL state: HY000 java.sql.SQLException: Cannot drop table 'x_policy' referenced by a foreign key constraint 'x_policy_ref_role_FK_policy_id' on table 'x_policy_ref_role'. ErrorCode: 3730

Solution: drop all the tables in the ranger database and execute ./setup.sh again.
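A quick way to do that is to generate the DROP statements from the table list. The sketch below works on a hard-coded two-table list for illustration; against a live server you would feed it the output of a query on information_schema.tables and pipe the generated SQL back into mysql. Foreign key checks are disabled so tables such as x_policy can be dropped despite the constraint mentioned in the error:

```shell
# Generate DROP TABLE statements for the tables of the ranger database.
# The list here is hard-coded for illustration; on a live server,
# obtain it with something like:
#   mysql -N -e "SELECT table_name FROM information_schema.tables
#                WHERE table_schema='ranger_test'"
tables="x_policy
x_policy_ref_role"
{
  echo 'SET FOREIGN_KEY_CHECKS = 0;'
  echo "$tables" | awk '{printf "DROP TABLE IF EXISTS `%s`;\n", $1}'
  echo 'SET FOREIGN_KEY_CHECKS = 1;'
} > /tmp/drop_ranger_tables.sql
cat /tmp/drop_ranger_tables.sql
```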

After the installation is complete, it will finally output:

Installation of Ranger PolicyManager Web Application is completed.

Start Ranger Admin

Modify the configuration file to set the database connection password and the serverTimezone parameter on the JDBC URL:

[root@hadoop /usr/local/ranger-1.2.0-admin]# vim conf/ranger-admin-site.xml
...

<property>
        <name>ranger.jpa.jdbc.url</name>
        <value>jdbc:log4jdbc:mysql://192.168.1.11/ranger_test?serverTimezone=Asia/Shanghai</value>
        <description />
</property>
<property>
        <name>ranger.jpa.jdbc.user</name>
        <value>root</value>
        <description />
</property>
<property>
        <name>ranger.jpa.jdbc.password</name>
        <value>123456a.</value>
        <description />
</property>

...

Modify the audit storage related configuration:

[root@hadoop /usr/local/ranger-1.2.0-admin]# vim conf/ranger-admin-default-site.xml
...

<property>
        <name>ranger.jpa.audit.jdbc.url</name>
        <value>jdbc:log4jdbc:mysql://192.168.1.11:3306/ranger_test?serverTimezone=Asia/Shanghai</value>
        <description />
</property>
<property>
        <name>ranger.jpa.audit.jdbc.user</name>
        <value>root</value>
        <description />
</property>
<property>
        <name>ranger.jpa.audit.jdbc.password</name>
        <value>123456a.</value>
        <description />
</property>

...

The startup command is as follows:

[root@hadoop /usr/local/ranger-1.2.0-admin]# ranger-admin start 
Starting Apache Ranger Admin Service
Apache Ranger Admin Service with pid 21102 has started.
[root@hadoop /usr/local/ranger-1.2.0-admin]# 

Check whether the port and process are normal:

[root@hadoop /usr/local/ranger-1.2.0-admin]# jps
21194 Jps
21102 EmbeddedServer
[root@hadoop /usr/local/ranger-1.2.0-admin]# netstat -lntp |grep 21102
tcp6       0      0 :::6080                 :::*           LISTEN      21102/java          
tcp6       0      0 127.0.0.1:6085          :::*           LISTEN      21102/java          
[root@hadoop /usr/local/ranger-1.2.0-admin]# 
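Besides jps and netstat, you can also probe the web UI itself over HTTP. The sketch below only assembles and prints the address to check; the host is this article's deployment, so substitute your own, and /login.jsp as the landing page is an assumption about Ranger Admin's UI:

```shell
# Probe Ranger Admin's web UI. The host is this article's deployment;
# substitute your own. A healthy instance should answer on port 6080.
RANGER_URL="http://192.168.1.11:6080"
echo "probing ${RANGER_URL}"
# On a machine that can reach Ranger Admin, uncomment (expect 200):
# curl -s -o /dev/null -w '%{http_code}\n' "${RANGER_URL}/login.jsp"
```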

Use a browser to access port 6080 to reach the login page. The default username and password are both admin:
[Image: Ranger Admin login page]

After successfully logging in, enter the home page as follows:
[Image: Ranger Admin home page]


Ranger HDFS Plugin installation

Unzip the hdfs plugin installation package to a suitable directory:

[root@hadoop ~]# mkdir /usr/local/ranger-plugin
[root@hadoop ~]# tar -zxvf /usr/local/src/apache-ranger-1.2.0/target/ranger-1.2.0-hdfs-plugin.tar.gz -C /usr/local/ranger-plugin
[root@hadoop ~]# cd /usr/local/ranger-plugin/
[root@hadoop /usr/local/ranger-plugin]# mv ranger-1.2.0-hdfs-plugin/ hdfs-plugin

Enter the decompressed directory, the directory structure is as follows:

[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]# ls
disable-hdfs-plugin.sh  enable-hdfs-plugin.sh  install  install.properties  lib  ranger_credential_helper.py  upgrade-hdfs-plugin.sh  upgrade-plugin.py
[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]# 

Configure installation options:

[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]# vim install.properties
# Access address of the ranger admin service
POLICY_MGR_URL=http://192.168.243.161:6080
# Repository name, can be customized
REPOSITORY_NAME=dev_hdfs
# Hadoop installation directory
COMPONENT_INSTALL_DIR_NAME=/usr/local/hadoop-2.8.5

# User and group
CUSTOM_USER=root
CUSTOM_GROUP=root

Execute the following script to enable the hdfs-plugin:

[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]# ./enable-hdfs-plugin.sh 

After the script is successfully executed, the following content will be output:

Ranger Plugin for hadoop has been enabled. Please restart hadoop to ensure that changes are effective.

Restart Hadoop:

[root@hadoop ~]# stop-all.sh 
[root@hadoop ~]# start-all.sh
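To confirm the plugin was actually wired in before restarting, you can look for the jars and ranger-hdfs-*.xml config files that enable-hdfs-plugin.sh drops into the Hadoop installation. The exact locations below are my assumption of a typical layout; adjust them to yours:

```shell
# Sanity-check that enable-hdfs-plugin.sh wired the plugin into Hadoop:
# ranger jars and ranger-hdfs-*.xml config files should now exist under
# the Hadoop installation. The locations are assumptions based on a
# typical layout; adjust to your environment.
HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop-2.8.5}"
ls "$HADOOP_HOME"/share/hadoop/hdfs/lib/ 2>/dev/null | grep -i ranger \
  || echo "no ranger jars found under $HADOOP_HOME"
ls "$HADOOP_HOME"/etc/hadoop/ 2>/dev/null | grep -i ranger \
  || echo "no ranger config files found under $HADOOP_HOME"
```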

Verify permission control

Add an hdfs service on Ranger Admin. The Service Name here needs to match the REPOSITORY_NAME configured in install.properties:
[Image: adding an hdfs service on Ranger Admin]

Fill in the corresponding information:
[Image: hdfs service configuration form]

After filling in, go to the bottom of the page and click "Test Connection" to test whether the connection is normal, and then click "Add" to complete the addition:
[Image: "Test Connection" and "Add" buttons at the bottom of the form]

After waiting for a while, go to the "Audit" -> "Plugins" page to check if the hdfs plug-in is found. If not, it means the plug-in has not been activated successfully. The normal situation is as follows:
[Image: hdfs plugin shown on the "Plugins" page]

After confirming the successful integration of the hdfs plugin, create some test directories and files in hdfs:

[root@hadoop ~]# hdfs dfs -mkdir /rangertest1
[root@hadoop ~]# hdfs dfs -mkdir /rangertest2
[root@hadoop ~]# echo "ranger test" > testfile
[root@hadoop ~]# hdfs dfs -put testfile /rangertest1
[root@hadoop ~]# hdfs dfs -put testfile /rangertest2

Then add a Ranger internal user on Ranger Admin via "Settings" -> "Add New User", filling in the user information:
[Image: "Add New User" form]

Then add a permission policy via "Access Manager" -> "dev_hdfs" -> "Add New Policy", configuring the users, directories, and other information the policy applies to:
[Image: "Add New Policy" form]

Scroll to the bottom and click "Add" to finish; you can see that a new policy configuration has been added:
[Image: policy list showing the newly added policy]

Back in the operating system, create and switch to the hive user, then test whether the directories and files can be read normally:

[root@hadoop ~]# sudo su - hive
[hive@hadoop ~]$ hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2020-11-12 13:48 /rangertest1
drwxr-xr-x   - root supergroup          0 2020-11-12 13:48 /rangertest2
[hive@hadoop ~]$ hdfs dfs -ls /rangertest1
Found 1 items
-rw-r--r--   1 root supergroup         12 2020-11-12 13:48 /rangertest1/testfile
[hive@hadoop ~]$ hdfs dfs -cat /rangertest1/testfile
ranger test
[hive@hadoop ~]$ hdfs dfs -ls /rangertest2
Found 1 items
-rw-r--r--   1 root supergroup         12 2020-11-12 13:48 /rangertest2/testfile
[hive@hadoop ~]$ 

Looking at the directory listing, the permission bits of both rangertest1 and rangertest2 are drwxr-xr-x, which means that no user other than root has write access to these two directories.

But if you test write operations at this point, you will find that the hive user can add files to the rangertest1 directory normally, while adding files to the rangertest2 directory reports an error. That is because in Ranger we only gave the hive user read and write permissions on the rangertest1 directory:

[hive@hadoop ~]$ echo "this is test file 2" > testfile2
[hive@hadoop ~]$ hdfs dfs -put testfile2 /rangertest1
[hive@hadoop ~]$ hdfs dfs -put testfile2 /rangertest2
put: Permission denied: user=hive, access=WRITE, inode="/rangertest2":root:supergroup:drwxr-xr-x
[hive@hadoop ~]$ 

If we want to prohibit the hive user from performing any operation on the rangertest2 directory, we can add a deny policy: select the rangertest2 directory in "Resource Path" and check the permissions to deny in the "Deny Conditions" column:
[Image: deny policy configuration with "Deny Conditions"]

After the policy takes effect, the hive user will get permission-denied errors when accessing the rangertest2 directory:

[hive@hadoop ~]$ hdfs dfs -ls /rangertest2
ls: Permission denied: user=hive, access=EXECUTE, inode="/rangertest2"
[hive@hadoop ~]$ hdfs dfs -cat /rangertest2/testfile
cat: Permission denied: user=hive, access=EXECUTE, inode="/rangertest2/testfile"
[hive@hadoop ~]$ 

At this point, Ranger's permission control over HDFS has been verified. You can also carry out other tests on your own.


Ranger Hive Plugin installation

First, you need a working Hive environment; you can refer to the following:

To keep the Hadoop and Ranger versions compatible, the Hive version used in this article is 2.3.6:

[root@hadoop ~]# echo $HIVE_HOME
/usr/local/apache-hive-2.3.6-bin
[root@hadoop ~]# 

Unzip the hive plugin installation package to a suitable directory:

[root@hadoop ~]# tar -zxvf /usr/local/src/apache-ranger-1.2.0/target/ranger-1.2.0-hive-plugin.tar.gz -C /usr/local/ranger-plugin/
[root@hadoop /usr/local/ranger-plugin]# mv ranger-1.2.0-hive-plugin/ hive-plugin

Enter the decompressed directory, the directory structure is as follows:

[root@hadoop /usr/local/ranger-plugin]# cd hive-plugin/
[root@hadoop /usr/local/ranger-plugin/hive-plugin]# ls
disable-hive-plugin.sh  enable-hive-plugin.sh  install  install.properties  lib  ranger_credential_helper.py  upgrade-hive-plugin.sh  upgrade-plugin.py
[root@hadoop /usr/local/ranger-plugin/hive-plugin]# 

Configure installation options:

[root@hadoop /usr/local/ranger-plugin/hive-plugin]# vim install.properties
# Access address of the ranger admin service
POLICY_MGR_URL=http://192.168.243.161:6080
# Repository name, can be customized
REPOSITORY_NAME=dev_hive
# Hive installation directory
COMPONENT_INSTALL_DIR_NAME=/usr/local/apache-hive-2.3.6-bin

# User and group
CUSTOM_USER=root
CUSTOM_GROUP=root

Execute the following script to enable the hive-plugin:

[root@hadoop /usr/local/ranger-plugin/hive-plugin]# ./enable-hive-plugin.sh 

After the script is successfully executed, the following content will be output:

Ranger Plugin for hive has been enabled. Please restart hive to ensure that changes are effective.

Restart Hive:

[root@hadoop ~]# jps
8258 SecondaryNameNode
9554 EmbeddedServer
8531 NodeManager
13764 Jps
7942 NameNode
11591 RunJar
8040 DataNode
8428 ResourceManager
[root@hadoop ~]# kill -15 11591
[root@hadoop ~]# nohup hiveserver2 -hiveconf hive.execution.engine=mr &

Verify permission control

Add a hive service on Ranger Admin. The Service Name here needs to match the REPOSITORY_NAME configured in install.properties:
[Image: adding a hive service on Ranger Admin]

Fill in the corresponding information and click "Add" to complete the addition:
[Image: hive service configuration form]

  • Tips: When adding a hive service for the first time, clicking "Test Connection" may report a connection failure. You can ignore it for now, as long as the plugin can be detected on the "Plugins" page

After waiting for a while, go to the "Audit" -> "Plugins" page to check if the hive plugin has been detected. If not, it means the plugin has not been successfully activated. The normal situation is as follows:
[Image: hive plugin shown on the "Plugins" page]

After confirming the successful integration of the hive plugin, add a permission policy via "Access Manager" -> "dev_hive" -> "Add New Policy", configuring the users, databases, tables, columns, and other information the policy applies to:
[Image: hive policy configuration form]

Back in the operating system, switch to the hive user and use beeline to enter Hive's interactive terminal:

[root@hadoop ~]# sudo su - hive
Last login: Thu Nov 12 13:53:53 CST 2020 on pts/1
[hive@hadoop ~]$ beeline -u jdbc:hive2://localhost:10000 -n hive

Test the permissions; you can see that every operation except show tables is rejected:

0: jdbc:hive2://localhost:10000> show tables;
+-----------------+
|    tab_name     |
+-----------------+
| hive_wordcount  |
+-----------------+
1 row selected (0.126 seconds)
0: jdbc:hive2://localhost:10000> show databases;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hive] does not have [USE] privilege on [*] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000> select * from hive_wordcount;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hive] does not have [SELECT] privilege on [default/hive_wordcount/*] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000> 

Because we only gave the hive user the drop permission on the hive_wordcount table, dropping it succeeds:

0: jdbc:hive2://localhost:10000> drop table hive_wordcount;
No rows affected (0.222 seconds)
0: jdbc:hive2://localhost:10000> 


Origin blog.51cto.com/zero01/2550035