Integrating Zeppelin with Spark

1. Download. Zeppelin ships in two flavors: a binary that bundles all interpreters, and one where you install interpreters yourself (it still bundles the Spark and Python interpreters). I downloaded the second.
2. Edit zeppelin-env.sh. My setup is Spark on YARN in client mode, and I also need PySpark:
export JAVA_HOME=/home/java
export MASTER=yarn-client
export SPARK_HOME=/home/spark
export SPARK_SUBMIT_OPTIONS="--deploy-mode client --driver-memory 512M --executor-memory 1G --executor-cores 1"
export HADOOP_CONF_DIR=/home/hadoop/etc/hadoop
export PYTHONPATH=/app/anaconda3/bin
export PYSPARK_PYTHON=/app/anaconda3/bin/python3.7
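Changes to zeppelin-env.sh only take effect after the daemon restarts. A quick check, assuming Zeppelin is installed under /app/zeppelin (the path is an assumption; adjust to your layout):

```shell
# Restart the Zeppelin daemon so the new environment variables are picked up
/app/zeppelin/bin/zeppelin-daemon.sh restart

# Sanity-check that the binary PYSPARK_PYTHON points at actually exists
ls -l /app/anaconda3/bin/python3.7
```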
3. Edit zeppelin-site.xml:

<property>
  <name>zeppelin.server.addr</name>
  <value>192.168.188.18</value>
  <description>Server address</description>
</property>

<property>
  <name>zeppelin.server.port</name>
  <value>9090</value>
  <description>Server port.</description>
</property>

4. In the Interpreter page of the Zeppelin web UI, add spark.home and change zeppelin.pyspark.python:
zeppelin.pyspark.python: /app/anaconda3/bin/python3.7
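Once spark.home and zeppelin.pyspark.python are set, a short paragraph in a new note confirms the interpreter really picks up the Anaconda binary (this is a sketch of a Zeppelin notebook paragraph; the exact paths printed depend on your install):

```python
%pyspark
import sys
# Should print the interpreter path configured in zeppelin.pyspark.python
print(sys.executable)
print(sys.version)

# A trivial job to confirm the YARN-backed SparkContext works end to end
print(sc.parallelize(range(100)).sum())
```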
5. An error I ran into:
java.lang.NoSuchMethodError: io.netty.buffer.PooledByteBufAllocator.metric()Lio/netty/buffer/PooledByteBufAllocatorMetric;
at org.apache.spark.network.util.NettyMemoryMetrics.registerMetrics(NettyMemoryMetrics.java:80)
at org.apache.spark.network.util.NettyMemoryMetrics.&lt;init&gt;(NettyMemoryMetrics.java:76)
at org.apache.spark.network.client.TransportClientFactory.&lt;init&gt;(TransportClientFactory.java:109)
at org.apache.spark.network.TransportContext.createClientFactory(TransportContext.java:99)
at org.apache.spark.rpc.netty.NettyRpcEnv.&lt;init&gt;(NettyRpcEnv.scala:71)
at org.apache.spark.rpc.netty.NettyRpcEnvFactory.create(NettyRpcEnv.scala:461)

Cause: the netty jar versions bundled by Spark and Zeppelin differ.
Fix:
Copy Zeppelin's netty-all-4.0.23.Final.jar into Spark's jars directory, and delete Spark's original netty jar at the same time.
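The jar swap can be done roughly like this (the install paths /app/zeppelin and /home/spark are assumptions, and the exact netty version in your Spark jars directory may differ; back up rather than delete outright):

```shell
# Move Spark's own netty jar(s) out of the way instead of deleting them
mkdir -p /home/spark/jars.bak
mv /home/spark/jars/netty-all-*.jar /home/spark/jars.bak/

# Copy Zeppelin's netty into Spark's classpath so both sides use one version
cp /app/zeppelin/lib/netty-all-4.0.23.Final.jar /home/spark/jars/
```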

5. Another error you may run into:
JsonMappingException: Incompatible Jackson version: 2.11.8
Cause: Zeppelin's jackson jars and Spark's jackson jars are different versions.
Fix: copy Spark's jackson jars into Zeppelin and delete Zeppelin's original jackson jars.
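The same swap pattern as the netty fix, in the opposite direction (paths are assumptions as before; jackson is split across core, annotations and databind jars, and all three should come from the same side):

```shell
# Move Zeppelin's bundled jackson jars out of the way
mkdir -p /app/zeppelin/lib.bak
mv /app/zeppelin/lib/jackson-*.jar /app/zeppelin/lib.bak/

# Copy Spark's jackson jars into Zeppelin so both agree on one version
cp /home/spark/jars/jackson-core-*.jar \
   /home/spark/jars/jackson-annotations-*.jar \
   /home/spark/jars/jackson-databind-*.jar \
   /app/zeppelin/lib/
```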

6. To add a Hive interpreter, copy hive-exec-2.1.1.jar, hive-service-2.1.1.jar and hive-jdbc-2.1.1.jar from the lib directory of the Hive installation into ${ZEPPELIN_HOME}/interpreter/jdbc/, and also add the hadoop-common-2.6.0.jar dependency.
Note: if the versions of hive-exec-2.1.1.jar, hive-service-2.1.1.jar and hive-jdbc-2.1.1.jar don't match each other, you will get java.lang.NoSuchFieldError: HIVE_CLI_SERVICE_PROTOCOL_V8.
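The copies above, sketched as shell commands (the Hive and Hadoop install locations /home/hive and /home/hadoop are assumptions; the hadoop-common jar location varies by distribution):

```shell
# Copy the three Hive client jars -- all from the SAME Hive release -- into
# Zeppelin's JDBC interpreter directory
cp /home/hive/lib/hive-exec-2.1.1.jar \
   /home/hive/lib/hive-service-2.1.1.jar \
   /home/hive/lib/hive-jdbc-2.1.1.jar \
   /app/zeppelin/interpreter/jdbc/

# The JDBC interpreter also needs hadoop-common on its classpath
cp /home/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar /app/zeppelin/interpreter/jdbc/
```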

7. To add a JDBC interpreter, copy the mysql-connector-java-5.1.35.jar dependency into ${ZEPPELIN_HOME}/interpreter/jdbc/.
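Besides copying the driver jar, the JDBC interpreter needs its connection properties set in the Interpreter page. A typical MySQL configuration looks like this (host, database name and credentials below are placeholders, not values from this setup):

```
default.driver   = com.mysql.jdbc.Driver
default.url      = jdbc:mysql://192.168.188.18:3306/test
default.user     = dbuser
default.password = ******
```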

8. If Spark is built with Hive support, you may also hit this error:
java.lang.NoSuchMethodError: com.facebook.fb303.FacebookService$Client.sendBaseOneway(Ljava/lang/String;Lorg/apache/thrift/TBas...
at com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:436)
at com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:430)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:178)
at com.sun.proxy.$Proxy22.close(Unknown Source)
This happens when the libthrift jar Zeppelin uses is older than 0.9.3; upgrading it fixes the problem.
You can also simply copy Spark's libthrift jar into Zeppelin: cp /app/spark/jars/libthrift-0.9.3.jar /app/zeppelin/lib/interpreter/
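Before copying, it is worth confirming which libthrift each side currently bundles (paths assumed as above); the two versions should end up identical:

```shell
# List any libthrift jars on both sides and compare the versions
find /app/zeppelin -name 'libthrift-*.jar'
find /app/spark/jars -name 'libthrift-*.jar'
```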

9. Setting up multiple accounts in Zeppelin:
a. Configure the accounts in shiro.ini:
[users]
admin = password1, admin
username = password,role1
b. In zeppelin-site.xml, set zeppelin.anonymous.allowed to false.
c. In zeppelin-env.sh, set export ZEPPELIN_NOTEBOOK_PUBLIC="false" (this controls whether notebooks each user creates are publicly visible).
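The setting from step b goes into zeppelin-site.xml in the same property format as step 3:

```xml
<property>
  <name>zeppelin.anonymous.allowed</name>
  <value>false</value>
  <description>Anonymous user allowed by default</description>
</property>
```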


Reposted from blog.csdn.net/a376554764/article/details/84672444