Spark Learning Notes (11): Spark on Hive Configuration

Add the dependency

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.12</artifactId>
            <version>2.4.0</version>
            <scope>provided</scope>
        </dependency>
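
The spark-hive module transitively pulls in spark-sql and spark-core, so the Java code in step 1.6 compiles with this single dependency. If you prefer to declare spark-sql explicitly, a sketch (assuming the same 2.4.0 / Scala 2.12 build as the cluster) would be:

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>2.4.0</version>
            <scope>provided</scope>
        </dependency>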

1.1 Start the Hive metastore service on hadoop1

In the hive/bin directory:

hive --service metastore &
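
To confirm the metastore came up, you can check that it is listening on its default Thrift port 9083 (assuming netstat is available on hadoop1):

netstat -tlnp | grep 9083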

1.2 On the Hive client host, edit hive-site.xml under hive/conf and add:

    <configuration>
        <property>
            <name>hive.metastore.uris</name>
            <value>thrift://hadoop1:9083</value>
        </property>
    </configuration>

1.3 Copy hive-site.xml from the hive/conf directory on the Hive client host to the spark/conf directory on the Spark client
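
For example, with scp (the installation paths and the spark-client hostname below are assumptions; substitute your own):

scp /usr/local/hive/conf/hive-site.xml root@spark-client:/usr/local/spark/conf/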

1.4 Edit the hive-site.xml on the Spark client so that it keeps only the metastore address; Spark reads table data directly from HDFS and needs Hive only for its metadata, so nothing else from the Hive configuration is required:

    <configuration>
        <property>
            <name>hive.metastore.uris</name>
            <value>thrift://hadoop1:9083</value>
        </property>
    </configuration>
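
As an alternative to keeping a hive-site.xml on the Spark side, the same setting can be passed at launch time through Spark's Hadoop-configuration passthrough (a sketch, not from the original post):

./spark-shell --master spark://hadoop1:7077,hadoop2:7077 --conf spark.hadoop.hive.metastore.uris=thrift://hadoop1:9083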

1.5 Start the Spark shell

In the spark/bin directory on the Spark client:

./spark-shell --master spark://hadoop1:7077,hadoop2:7077
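
Once the shell is up, a quick way to verify the metastore connection is to list the Hive databases; the spark-sql CLI in the same bin directory can do this in one shot (a sketch, assuming the same cluster setup):

./spark-sql --master spark://hadoop1:7077,hadoop2:7077 -e "show databases"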

1.6 Write the Java code

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.hive.HiveContext;

import java.util.List;

public class JavaExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        conf.setAppName("hive");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // HiveContext is a subclass of SQLContext (deprecated since Spark 2.0 in
        // favor of SparkSession.builder().enableHiveSupport(), but still functional)
        HiveContext hiveContext = new HiveContext(sc);
        // Switch to the pre-existing 'spark' database in Hive
        hiveContext.sql("USE spark");
        hiveContext.sql("DROP TABLE IF EXISTS student_infos");
        // Create the student_infos table in Hive
        hiveContext.sql("CREATE TABLE IF NOT EXISTS student_infos (name STRING, age INT) ROW FORMAT DELIMITED " +
                "FIELDS TERMINATED BY '\t'");
        hiveContext.sql("LOAD DATA LOCAL INPATH '/usr/local/student_infos' INTO TABLE student_infos");

        // Query the table into a DataFrame
        Dataset<Row> siDf = hiveContext.sql("SELECT * FROM student_infos");

        // registerTempTable was deprecated in Spark 2.x; createOrReplaceTempView
        // registers the DataFrame for ad-hoc SQL under the name 'students'
        siDf.createOrReplaceTempView("students");
        siDf.show();

        // Save the result to the Hive table good_student_infos
        hiveContext.sql("DROP TABLE IF EXISTS good_student_infos");
        siDf.write().mode(SaveMode.Overwrite).saveAsTable("good_student_infos");

        // In the Java API, collect() returns Object; collectAsList() is the typed variant
        Dataset<Row> table = hiveContext.table("good_student_infos");
        List<Row> rows = table.collectAsList();
        for (Row row : rows) {
            System.out.println(row);
        }
        sc.stop();
    }
}
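
The LOAD DATA statement expects a tab-separated text file at /usr/local/student_infos on the local filesystem of the host the job runs on. A minimal sample (names and ages made up for illustration) could look like:

    zhangsan	18
    lisi	19
    wangwu	20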

1.7 Submit the job from the bin directory

./spark-submit --master spark://hadoop1:7077,hadoop2:7077 --class JavaExample /usr/local/JavaExample.jar

(--class names the main class from step 1.6; without it, spark-submit would need a Main-Class entry in the jar's manifest.)
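
If the project follows the Maven setup from the top of this post, the jar can be built and staged with something like the following (the jar name depends on your artifactId and version, so this is an assumption):

mvn clean package
cp target/JavaExample.jar /usr/local/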


Reposted from blog.csdn.net/qq_33283652/article/details/86023753