Apache Sentry Deployment

The Hadoop cluster consists of three machines: master, slave1, and slave2. The software layout of the three machines is listed below, followed by a sample hosts mapping:

  • master: NameNode, ZK, Hive Metastore, HiveServer2, Sentry Server
  • slave1: DataNode, ZK
  • slave2: DataNode, ZK
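
If hostname resolution is not already in place, each node needs to map the three hostnames to its IP address. Below is a minimal /etc/hosts sketch; the master IP 192.168.70.110 is the one used in the Sentry configuration later in this guide, while the two slave addresses are placeholders to replace with your own:

192.168.70.110  master
192.168.70.111  slave1    # placeholder IP, use your own
192.168.70.112  slave2    # placeholder IP, use your own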

2 Software Requirements

  1. MySQL
  2. mysql-jdbc.jar: mysql-connector-java-5.1.38-bin.jar
  3. Hadoop
  4. Hive 1.1.0: apache-hive-1.1.0-bin.tar.gz
  5. Sentry 1.6.0: apache-sentry-1.6.0-incubating-bin.tar.gz
    Note: this Sentry 1.6.0 release pairs with Hive 1.1.0; it does not yet support Hive 1.2 or later.

3 Software Installation

3.1 MySQL Installation

  • Install online: sudo apt-get install mysql-server.
  • Edit the configuration file vim /etc/mysql/my.cnf so that MySQL listens on all interfaces:

#bind-address = 127.0.0.1

bind-address = 0.0.0.0

  • Restart MySQL: service mysql restart (a quick connectivity check follows below).
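
To confirm MySQL now accepts remote connections, check the listening address on master and try a login from one of the slaves. This is only an optional sanity check; it assumes the master's IP is 192.168.70.110 (as used in the configuration later on) and that the mysql client is installed on the slave.

# on master: port 3306 should be bound to 0.0.0.0, not 127.0.0.1
sudo netstat -tlnp | grep 3306

# on slave1/slave2: a remote login prompt means the bind-address change took effect
mysql -h 192.168.70.110 -u root -p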

3.2 Hadoop Installation

Omitted here.

3.3 Hive Installation

Omitted here.

Create the MySQL database for the Hive metastore:

create database hive;

CREATE USER hive IDENTIFIED BY 'hive';

GRANT all ON hive.* TO hive@'%' IDENTIFIED BY 'hive';

flush privileges;
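
Optionally, verify that the hive account can log in from another node (this assumes the mysql client is available there and that master is 192.168.70.110):

mysql -h 192.168.70.110 -u hive -phive -e "SHOW DATABASES;"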

3.4 Sentry Installation

1. Unpack the archive

tar -zxvf apache-sentry-1.6.0-incubating-bin.tar.gz -C /opt

2. Rename the directory, copy in the MySQL JDBC driver, and open the configuration file (sentry-site.xml)

mv apache-sentry-1.6.0-incubating-bin apache-sentry-1.6.0

cp mysql-connector-java-5.1.38-bin.jar /opt/apache-sentry-1.6.0/lib

cd /opt/apache-sentry-1.6.0/conf

vim sentry-site.xml

3. Edit sentry-site.xml with the following content

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>sentry.service.admin.group</name>
    <value>admin</value>
  </property>
  <property>
    <name>sentry.service.allow.connect</name>
    <value>hive,admin</value>
  </property>
  <property>
    <name>sentry.service.reporting</name>
    <value>JMX</value>
  </property>
  <property>
    <name>sentry.service.server.rpc-address</name>
    <value>192.168.70.110</value>
  </property>
  <property>
    <name>sentry.service.server.rpc-port</name>
    <value>8038</value>
  </property>
  <property>
    <name>sentry.store.group.mapping</name>
    <value>org.apache.sentry.provider.common.HadoopGroupMappingService</value>
  </property>
  <property>
    <name>sentry.hive.server</name>
    <value>server1</value>
  </property>
  <!-- Web server settings -->
  <property>
    <name>sentry.service.web.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>sentry.service.web.port</name>
    <value>51000</value>
  </property>
  <property>
    <name>sentry.service.web.authentication.type</name>
    <value>NONE</value>
  </property>
  <property>
    <name>sentry.service.web.authentication.kerberos.principal</name>
    <value> </value>
  </property>
  <property>
    <name>sentry.service.web.authentication.kerberos.keytab</name>
    <value> </value>
  </property>
  <!-- Authentication settings -->
  <property>
    <name>sentry.service.security.mode</name>
    <value>none</value>
  </property>
  <property>
    <name>sentry.service.server.principal</name>
    <value> </value>
  </property>
  <property>
    <name>sentry.service.server.keytab</name>
    <value> </value>
  </property>
  <!-- JDBC settings -->
  <property>
    <name>sentry.store.jdbc.url</name>
    <value>jdbc:mysql://192.168.70.110:3306/sentry</value>
  </property>
  <property>
    <name>sentry.store.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>sentry.store.jdbc.user</name>
    <value>sentry</value>
  </property>
  <property>
    <name>sentry.store.jdbc.password</name>
    <value>sentry</value>
  </property>
</configuration>

4. Configure environment variables
Edit the profile: vim /etc/profile

export SENTRY_HOME=/opt/apache-sentry-1.6.0

export PATH=${SENTRY_HOME}/bin:$PATH

Make it take effect: source /etc/profile
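
A quick way to confirm the variables took effect (the sentry launcher script ships in ${SENTRY_HOME}/bin, so it should now be on the PATH):

echo ${SENTRY_HOME}

which sentry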

5. Create the sentry database in MySQL

create database sentry;

CREATE USER sentry IDENTIFIED BY 'sentry';

GRANT all ON sentry.* TO sentry@'%' IDENTIFIED BY 'sentry';

flush privileges;

6. Copy the MySQL JDBC jar into sentry/lib (already done in step 2 above; repeating it is harmless)

cp mysql-connector-java-5.1.38-bin.jar ${SENTRY_HOME}/lib

7. Replace the jline jar shipped with Hadoop

rm ${HADOOP_HOME}/share/hadoop/yarn/lib/jline-0.9.94.jar

cp ${SENTRY_HOME}/lib/jline-2.12.jar ${HADOOP_HOME}/share/hadoop/yarn/lib
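
After the swap, only jline-2.12.jar should remain under the YARN lib directory; a quick check:

ls ${HADOOP_HOME}/share/hadoop/yarn/lib | grep jline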

8. Initialize the Sentry database schema

sentry --command schema-tool --conffile ${SENTRY_HOME}/conf/sentry-site.xml --dbType mysql --initSchema
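
If the initialization succeeded, the sentry database should now contain Sentry's metadata tables. An optional check, using the JDBC credentials configured above:

mysql -h 192.168.70.110 -u sentry -psentry -e "USE sentry; SHOW TABLES;"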

9. Start the Sentry service

sentry --command service --conffile ${SENTRY_HOME}/conf/sentry-site.xml
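
The command above runs in the foreground. Once it starts cleanly, it can be kept running in the background, and the ports from sentry-site.xml (8038 for RPC, 51000 for the web UI) can be checked; the log path below is just an example:

nohup sentry --command service --conffile ${SENTRY_HOME}/conf/sentry-site.xml > /tmp/sentry.log 2>&1 &

netstat -tlnp | grep -E '8038|51000'

curl http://192.168.70.110:51000/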

4 Integrating Hive with Sentry

1. Edit the hive-site.xml configuration file

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.sentry.conf.url</name>
    <value>file:///opt/apache-hive-1.1.0/conf/sentry-site.xml</value>
  </property>
  <!-- Enable column-level access control -->
  <property>
    <name>hive.stats.collect.scancols</name>
    <value>true</value>
  </property>
  <!-- Integrate the Hive Metastore with Sentry -->
  <property>
    <name>hive.metastore.pre.event.listeners</name>
    <value>org.apache.sentry.binding.metastore.MetastoreAuthzBinding</value>
  </property>
  <property>
    <name>hive.metastore.event.listeners</name>
    <value>org.apache.sentry.binding.metastore.SentryMetastorePostEventListener</value>
  </property>
  <!-- Integrate HiveServer2 with Sentry -->
  <property>
    <name>hive.server2.session.hook</name>
    <value>org.apache.sentry.binding.hive.HiveAuthzBindingSessionHook</value>
  </property>
  <property>
    <name>hive.security.authorization.task.factory</name>
    <value>org.apache.sentry.binding.hive.SentryHiveAuthorizationTaskFactoryImpl</value>
  </property>
  <!-- MySQL JDBC settings for the metastore -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.70.110:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>

2. Edit the sentry-site.xml configuration file
Note: this file lives in the hive/conf directory; do not confuse it with the sentry-site.xml in sentry/conf!

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>sentry.service.client.server.rpc-address</name>
    <value>192.168.70.110</value>
  </property>
  <property>
    <name>sentry.service.client.server.rpc-port</name>
    <value>8038</value>
  </property>
  <property>
    <name>sentry.service.client.server.rpc-connection-timeout</name>
    <value>200000</value>
  </property>
  <!-- Authentication settings -->
  <property>
    <name>sentry.service.security.mode</name>
    <value>none</value>
  </property>
  <property>
    <name>sentry.service.server.principal</name>
    <value> </value>
  </property>
  <property>
    <name>sentry.service.server.keytab</name>
    <value> </value>
  </property>
  <property>
    <name>sentry.provider</name>
    <value>org.apache.sentry.provider.file.HadoopGroupResourceAuthorizationProvider</value>
  </property>
  <property>
    <name>sentry.hive.provider.backend</name>
    <value>org.apache.sentry.provider.db.SimpleDBProviderBackend</value>
  </property>
  <property>
    <name>sentry.metastore.service.users</name>
    <value>hive</value>
    <!-- queries made by the hive user (beeline) skip the metastore check -->
  </property>
  <property>
    <name>sentry.hive.server</name>
    <value>server1</value>
  </property>
  <property>
    <name>sentry.hive.testing.mode</name>
    <value>true</value>
  </property>
</configuration>

3. Copy the Sentry jars from sentry/lib into hive/lib

cp ${SENTRY_HOME}/lib/sentry*.jar ${HIVE_HOME}/lib

cp ${SENTRY_HOME}/lib/shiro-*.jar ${HIVE_HOME}/lib

Note: the shiro-*.jar files must be copied as well, otherwise errors will occur.
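
An optional check that the jars landed in Hive's lib directory:

ls ${HIVE_HOME}/lib | grep -E '^(sentry|shiro)'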

4. Adjust the permissions of Hive's warehouse directory on HDFS

hadoop fs -chown -R hive:hive /user/hive/warehouse

hadoop fs -chmod -R 770 /user/hive/warehouse

Because the warehouse files on HDFS are now owned by hive:hive with mode 770, ordinary Linux users cannot read them directly; all access goes through HiveServer2, where Sentry enforces the authorization rules.
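
An optional listing to confirm the new owner and mode:

hadoop fs -ls /user/hive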

5. Start the Hive Metastore service

hive --service metastore -hiveconf hive.root.logger=DEBUG,console

6. Start the HiveServer2 service

hive --service hiveserver2 -hiveconf hive.root.logger=DEBUG,console
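
Both commands run in the foreground with DEBUG logging, which is convenient for the first start. Once everything works, they can be run in the background instead; a sketch (the log paths are arbitrary):

nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &

nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &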

5 Verifying Hive Permissions

5.1 Preparing Test Data

  • Create the test data file vim /tmp/events.csv with the following content:

10.1.2.3,US,android,createNote

10.200.88.99,FR,windows,updateNote

10.1.2.3,US,android,updateNote

10.200.88.77,FR,ios,createNote

10.1.4.5,US,windows,updateTag

  • Then run the following SQL statements in Hive (note: start the hive CLI as the hive user):
  1. create database sensitive;
  2. create table sensitive.events (ip STRING, country STRING, client STRING, action STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
  3. load data local inpath '/tmp/events.csv' overwrite into table sensitive.events;
  4. create database filtered;
  5. create view filtered.events as select country, client, action from sensitive.events;
  6. create view filtered.events_usonly as select * from filtered.events where country = 'US';
  • On master, connect to HiveServer2 with beeline and run the commands below to create roles and groups:

beeline -u "jdbc:hive2://master:10000/" -n admin -p admin -d org.apache.hive.jdbc.HiveDriver

  • In beeline, execute the following SQL statements to create the roles and grant them to groups:
  1. create role admin_role;
  2. GRANT ALL ON SERVER server1 TO ROLE admin_role;
  3. GRANT ROLE admin_role TO GROUP admin;
  4. create role test_role;
  5. GRANT ALL ON DATABASE filtered TO ROLE test_role;
  6. use sensitive;
  7. GRANT SELECT(ip) on TABLE sensitive.events TO ROLE test_role;
  8. GRANT ROLE test_role TO GROUP test;

The statements above create two roles:

  • admin_role has administrator privileges and can read and write every database; it is granted to the admin and hive groups (which correspond to groups on the operating system).
  • test_role can read and write only the filtered database and can read only the ip column of the events table in the sensitive database; it is granted to the test group.

Since the system does not yet have a test user or group, create them manually: useradd test
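
Group membership is resolved through HadoopGroupMappingService, i.e. from the operating-system groups on the node where the Sentry and HiveServer2 services run (master in this setup), so the accounts must exist there. A minimal sketch; on most distributions useradd creates a matching primary group automatically:

sudo useradd test

id test   # should report a group named "test"

id admin  # the admin account and group must exist as well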

5.2 Testing

5.2.1 Testing the admin_role role

Connect with beeline as the admin user:

beeline -u "jdbc:hive2://master:10000/" -n admin -p admin -d org.apache.hive.jdbc.HiveDriver

The admin user belongs to the admin group, which holds admin_role, so it has administrator privileges and can list all roles:

0: jdbc:hive2://master:10000/> show roles;
+-------------+--+
|    role     |
+-------------+--+
| admin_role  |
| test_role   |
+-------------+--+
2 rows selected (0.251 seconds)

View all of its grants:

0: jdbc:hive2://master:10000/> show grant role admin_role;
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
| database  | table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  |    grant_time     | grantor  |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
| *         |        |            |         | admin_role      | ROLE            | *          | false         | 1461507543582000  | --       |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
1 row selected (0.111 seconds)

0: jdbc:hive2://master:10000/> show grant role test_role;
+------------+---------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
| database   | table   | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  |    grant_time     | grantor  |
+------------+---------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
| sensitive  | events  |            | ip      | test_role       | ROLE            | select     | false         | 1461558337008000  | --       |
| filtered   |         |            |         | test_role       | ROLE            | *          | false         | 1461557354579000  | --       |
+------------+---------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
2 rows selected (0.111 seconds)

A user holding admin_role (here, admin) can see every database and query every table:

0: jdbc:hive2://master:10000/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| filtered       |
| sensitive      |
+----------------+--+
3 rows selected (0.142 seconds)

0: jdbc:hive2://master:10000/> use filtered;
No rows affected (0.144 seconds)

0: jdbc:hive2://master:10000/> select * from filtered.events;
+-----------------+----------------+----------------+--+
| events.country  | events.client  | events.action  |
+-----------------+----------------+----------------+--+
| US              | android        | createNote     |
| FR              | windows        | updateNote     |
| US              | android        | updateNote     |
| FR              | ios            | createNote     |
| US              | windows        | updateTag      |
+-----------------+----------------+----------------+--+
5 rows selected (0.458 seconds)

0: jdbc:hive2://master:10000/> select * from sensitive.events;
+---------------+-----------------+----------------+----------------+--+
| events.ip     | events.country  | events.client  | events.action  |
+---------------+-----------------+----------------+----------------+--+
| 10.1.2.3      | US              | android        | createNote     |
| 10.200.88.99  | FR              | windows        | updateNote     |
| 10.1.2.3      | US              | android        | updateNote     |
| 10.200.88.77  | FR              | ios            | createNote     |
| 10.1.4.5      | US              | windows        | updateTag      |
+---------------+-----------------+----------------+----------------+--+
5 rows selected (0.731 seconds)

5.2.2 Testing the test_role role

Connect with beeline as the test user:

beeline -u "jdbc:hive2://master:10000/" -n test -p test -d org.apache.hive.jdbc.HiveDriver

The test user is not an administrator, so it cannot list all roles:

0: jdbc:hive2://master:10000/> show roles;
ERROR : Error processing Sentry command: Access denied to test.Please grant admin privilege to test.
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryAccessDeniedException: Access denied to test (state=08S01,code=1)

The test user can still list the databases:

0: jdbc:hive2://master:10000/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| filtered       |
| sensitive      |
+----------------+--+

The test user can access the filtered database:

0: jdbc:hive2://master:10000/> use filtered;
No rows affected (0.256 seconds)

0: jdbc:hive2://master:10000/> select * from events;
+-----------------+----------------+----------------+--+
| events.country  | events.client  | events.action  |
+-----------------+----------------+----------------+--+
| US              | android        | createNote     |
| FR              | windows        | updateNote     |
| US              | android        | updateNote     |
| FR              | ios            | createNote     |
| US              | windows        | updateTag      |
+-----------------+----------------+----------------+--+
5 rows selected (0.199 seconds)

The test user can only read the ip column of the events table in the sensitive database; everything else is denied:

0: jdbc:hive2://master:10000/> use sensitive;
No rows affected (0.112 seconds)

0: jdbc:hive2://master:10000/> select * from events;
Error: Error while compiling statement: FAILED: SemanticException No valid privileges
User test does not have privileges for QUERY (state=42000,code=40000)

0: jdbc:hive2://master:10000/> select ip from events;
+---------------+--+
|      ip       |
+---------------+--+
| 10.1.2.3      |
| 10.200.88.99  |
| 10.1.2.3      |
| 10.200.88.77  |
| 10.1.4.5      |
+---------------+--+
5 rows selected (0.176 seconds)

Original source: http://www.shsxt.com/it/Big-data/825.html

Written by the Shanghai Shangxuetang (shsxt.com) Big Data teaching and research team
