3. Distributed query engine

1. Start the Spark Thrift server

start-thriftserver.sh --master spark://s101:7077
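By default the Thrift server listens on port 10000. If that port is already in use, the port can be overridden at startup via the standard `--hiveconf` mechanism (a sketch; the port 10001 here is an arbitrary example, adjust for your environment):

```
start-thriftserver.sh \
  --master spark://s101:7077 \
  --hiveconf hive.server2.thrift.port=10001
```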

2.1. Verification: check that port 10000 is listening

netstat -anop |grep 10000

2.2. Verification: check for the SparkSubmit process

[centos@s101 ~]$ jps
2946 DFSZKFailoverController
7011 SparkSubmit
3092 Master
2568 NameNode
7148 Jps

2.3. Verification: check the web UI on port 8080

3.1. Connecting via the shell (beeline)

[centos@s102 /soft/spark/bin]$ ./beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline>
beeline> !connect jdbc:hive2://s101:10000/lx;auth=noSasl;
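Once connected, beeline behaves like an ordinary SQL shell. For example (a sketch, assuming a table such as the `www` table used in the Java example exists in the `lx` database):

```
0: jdbc:hive2://s101:10000/lx> show tables;
0: jdbc:hive2://s101:10000/lx> select * from www limit 10;
```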

3.2. Connecting from IntelliJ IDEA

Maven dependencies:

<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>2.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>2.1.0</version>
</dependency>

Java client:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Register the HiveServer2-compatible JDBC driver
Class.forName("org.apache.hive.jdbc.HiveDriver");

String url = "jdbc:hive2://s101:10000/default;auth=noSasl";
Connection conn = DriverManager.getConnection(url);
String sql = "select * from www";
PreparedStatement ppst = conn.prepareStatement(sql);
ResultSet rs = ppst.executeQuery();
while (rs.next()) {
    System.out.println(rs.getString(1) + "  " + rs.getString(2));
}

rs.close();
ppst.close();
conn.close();
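The connection string above hard-codes the host, port, and database. A small helper (hypothetical, not part of hive-jdbc) makes the pieces explicit and avoids typos when switching databases:

```java
public class HiveJdbcUrl {

    // Builds a HiveServer2-compatible JDBC URL for the Spark Thrift server.
    // "auth=noSasl" matches the no-SASL setup used in the examples above.
    static String buildUrl(String host, int port, String database) {
        return "jdbc:hive2://" + host + ":" + port + "/" + database + ";auth=noSasl";
    }

    public static void main(String[] args) {
        // Produces the same URL as the hand-written one in the example above
        System.out.println(buildUrl("s101", 10000, "default"));
        // → jdbc:hive2://s101:10000/default;auth=noSasl
    }
}
```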

Reposted from www.cnblogs.com/lybpy/p/9832437.html