Installing Hive on Spark and common issues

Configure Hive

hive-site.xml

<property>
   <name>hive.metastore.uris</name>
   <value>thrift://database:9083</value>
</property>

<property>
   <name>hive.metastore.client.socket.timeout</name>
   <!-- Older Hive releases expect a plain number of seconds; newer ones also accept a unit suffix such as 600s -->
   <value>600</value>
</property>

Copy hive-site.xml into the spark/conf directory.

Put the MySQL JDBC driver JAR into the spark/lib directory.

Start the metastore service: hive --service metastore
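To verify that the metastore actually came up, you can probe the Thrift port from hive.metastore.uris above with a plain TCP connect. A minimal sketch in Python; the host name "database" and port 9083 are simply taken from the configuration above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# After `hive --service metastore` has started, this should report True
# (host/port taken from hive.metastore.uris).
print(port_open("database", 9083))
```

This only checks that something is listening on the port, not that the Thrift service is healthy, but it is usually enough to catch a metastore that failed to start.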

Configure Spark

slaves

spark04
spark02

spark-env.sh

SPARK_MASTER_IP=spark02
JAVA_HOME=/usr/local/jdk1.7.0_75
SPARK_HIVE=true
HADOOP_CONF_DIR=/usr/local/hadoop-2.6.0/etc/hadoop

spark-defaults.conf

# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

# Example:
# spark.master                     spark://master:7077
spark.eventLog.enabled           true
#spark.eventLog.dir               hdfs://mycluster:8021/spark/logs/events
# spark.eventLog.dir               hdfs://namenode:8021/directory
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
# spark.driver.memory              5g
# spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"

scp these configuration files to the other nodes.

Test the Spark-Hive integration

spark-shell --master spark://spark02:7077

val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

sqlContext.sql("select count(*) from ods_app.dev_location").collect().foreach(println)

Common problems

1. Hive metastore problem

java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

Solution:

Configure hive.metastore.uris in hive-site.xml and start the Hive metastore service:

<property>
   <name>hive.metastore.uris</name>
   <value>thrift://database:9083</value>
</property>



2. HDFS HA (mycluster) problem

java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster

Solution:

Configure HADOOP_CONF_DIR in spark-env.sh so Spark can resolve the HA nameservice:

HADOOP_CONF_DIR=/usr/local/hadoop-2.6.0/etc/hadoop
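This works because the Hadoop conf directory contains the definition of the mycluster nameservice, which the HDFS client needs in order to resolve the logical name to real NameNode addresses. For reference, a minimal hdfs-site.xml sketch of such a definition; the NameNode IDs and hosts (nn1/nn2, namenode1/namenode2) are hypothetical placeholders, not values from this cluster:

```xml
<!-- HA nameservice definition; nn1/nn2 and the hosts are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2:8020</value>
</property>
<!-- Lets HDFS clients fail over between the two NameNodes -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```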





Reposted from smarthhl.iteye.com/blog/2268653