Integrating HDFS and YARN with Ranger

First, install the HDFS plugin

Copy the HDFS plugin from the source-built Ranger server to the machine where it needs to be installed.

1. Extract the installation package

# tar zxvf ranger-2.1.0-hdfs-plugin.tar.gz -C /data1/hadoop

2. Modify the plugin configuration file as follows

# cd /data1/hadoop/ranger-2.1.0-SNAPSHOT-hdfs-plugin/

Edit the install.properties file. The main parameters to change are:

POLICY_MGR_URL=http://192.168.4.50:6080    # policy manager URL, i.e. the ranger-admin address

REPOSITORY_NAME=hadoopdev    # service name; must match the service created later in the ranger-admin web UI

XAAUDIT.SOLR.ENABLE=true    # enable audit logging

XAAUDIT.SOLR.URL=http://192.168.4.50:6083/solr/ranger_audits    # Solr address

CUSTOM_USER=hduser    # custom plugin user; presumably the user the cluster runs as

CUSTOM_GROUP=hduser
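The edits above can also be scripted. A minimal sketch with sed, where the sample file below merely stands in for the real install.properties (the path in the comment and the pre-edit default values are assumptions for illustration):

```shell
#!/bin/sh
set -e
# Sketch: apply the install.properties edits from above with sed.
# FILE is a throwaway sample standing in for the real
# /data1/hadoop/ranger-2.1.0-SNAPSHOT-hdfs-plugin/install.properties.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
POLICY_MGR_URL=
REPOSITORY_NAME=
XAAUDIT.SOLR.ENABLE=false
XAAUDIT.SOLR.URL=NONE
CUSTOM_USER=ranger
CUSTOM_GROUP=ranger
EOF

# Rewrite each key in place (GNU sed; BSD sed needs `sed -i ''`).
sed -i \
  -e 's|^POLICY_MGR_URL=.*|POLICY_MGR_URL=http://192.168.4.50:6080|' \
  -e 's|^REPOSITORY_NAME=.*|REPOSITORY_NAME=hadoopdev|' \
  -e 's|^XAAUDIT\.SOLR\.ENABLE=.*|XAAUDIT.SOLR.ENABLE=true|' \
  -e 's|^XAAUDIT\.SOLR\.URL=.*|XAAUDIT.SOLR.URL=http://192.168.4.50:6083/solr/ranger_audits|' \
  -e 's|^CUSTOM_USER=.*|CUSTOM_USER=hduser|' \
  -e 's|^CUSTOM_GROUP=.*|CUSTOM_GROUP=hduser|' \
  "$FILE"

# Show the result
cat "$FILE"
```

The same script can be reused for the YARN plugin later by changing REPOSITORY_NAME to yarndev.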

3. Modify the HDFS configuration file

# vim hdfs-site.xml

Add the following configuration:

<property>
    <name>dfs.namenode.inode.attributes.provider.class</name>
    <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions.ContentSummary.subAccess</name>
    <value>true</value>
</property>
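Before restarting HDFS it is worth a quick sanity check that all three properties actually made it into hdfs-site.xml. A minimal sketch; the heredoc creates a sample file, and on a real node you would point HDFS_SITE at your actual hdfs-site.xml instead:

```shell
#!/bin/sh
set -e
# Sketch: confirm the Ranger-related properties are present in hdfs-site.xml.
# HDFS_SITE is a throwaway sample; on a real node use e.g.
# "$HADOOP_CONF_DIR/hdfs-site.xml" instead.
HDFS_SITE=$(mktemp)
cat > "$HDFS_SITE" <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.inode.attributes.provider.class</name>
    <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions.ContentSummary.subAccess</name>
    <value>true</value>
  </property>
</configuration>
EOF

# Fail loudly if any required property name is missing.
for key in dfs.namenode.inode.attributes.provider.class \
           dfs.permissions \
           dfs.permissions.ContentSummary.subAccess; do
  if grep -q "<name>$key</name>" "$HDFS_SITE"; then
    echo "ok: $key"
  else
    echo "MISSING: $key"
    exit 1
  fi
done
```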

4. Enable the plugin

# sudo ./enable-hdfs-plugin.sh    # requires root privileges

Second, install the YARN plugin

1. Extract the installation package

# tar zxvf ranger-2.0.0-yarn-plugin.tar.gz -C /data1/hadoop

2. Modify the install.properties configuration file

Modify the following properties:

POLICY_MGR_URL=http://192.168.4.50:6080

REPOSITORY_NAME=yarndev

XAAUDIT.SOLR.ENABLE=true

XAAUDIT.SOLR.URL=http://192.168.4.50:6083/solr/ranger_audits

CUSTOM_USER=hduser   

CUSTOM_GROUP=hduser

3. Modify the yarn-site.xml configuration file

Add the following properties:

<property>
    <name>yarn.acl.enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.authorization-provider</name>
    <value>org.apache.ranger.authorization.yarn.authorizer.RangerYarnAuthorizer</value>
</property>
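As with HDFS, the YARN settings can be sanity-checked before the restart. A sketch that extracts each property's value from yarn-site.xml and verifies it; the sample file stands in for the real one (on a cluster node, use your actual yarn-site.xml path):

```shell
#!/bin/sh
set -e
# Sketch: pull the value of each YARN ACL property out of yarn-site.xml.
# YARN_SITE is a throwaway sample; on a real node use e.g.
# "$HADOOP_CONF_DIR/yarn-site.xml" instead.
YARN_SITE=$(mktemp)
cat > "$YARN_SITE" <<'EOF'
<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.authorization-provider</name>
    <value>org.apache.ranger.authorization.yarn.authorizer.RangerYarnAuthorizer</value>
  </property>
</configuration>
EOF

# Grab the <value> line following each <name> match and strip the tags.
ACL=$(grep -A1 '<name>yarn.acl.enable</name>' "$YARN_SITE" \
      | sed -n 's|.*<value>\(.*\)</value>.*|\1|p')
PROVIDER=$(grep -A1 '<name>yarn.authorization-provider</name>' "$YARN_SITE" \
      | sed -n 's|.*<value>\(.*\)</value>.*|\1|p')

echo "yarn.acl.enable = $ACL"
echo "yarn.authorization-provider = $PROVIDER"
[ "$ACL" = "true" ]
```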

4. Enable the YARN plugin

# ./enable-yarn-plugin.sh

 

Then restart the cluster for the plugins to take effect.

Third, configure the Ranger web UI

1. HDFS configuration

(1) Log in to http://192.168.4.50:6080

(2) Add a service

Click the plus sign to add a service.

Click Test to verify the connection.

When the configuration is finished, do not forget to click Save.

The configured service appears in the web UI as follows:

(3) Configure a policy

Click hadoopdev to configure its policies.

There are two default policies. Click Add New Policy at the top right to add a new one.

Save.

(4) Test whether user yjt has access to the /out1 directory.

Analysis:

As the test above shows, the directory would normally be readable unless the user or group were blocked by an ACL. Since we configured a deny policy for user yjt on the /out1 directory, the user no longer has permission to read it, so the configuration works as intended.

2. YARN configuration

(1) Add a service

After configuring the service, test the connection to check that the settings are OK.

(2) Add a policy

YARN policies mainly restrict which users can submit jobs to a queue and manage tasks in it.

Add the access control settings.

(3) Test whether user yjt can submit jobs.

As the test shows, user yjt cannot submit jobs to the hadoop queue.


Origin www.cnblogs.com/yjt1993/p/11837551.html