Reading and Writing Hive

Create a database and a table in Hive:

Start the MySQL service first (it hosts the Hive metastore):

service mysql start  # this command can be run from any directory on Linux

Start Hadoop, and then start Hive:

cd /usr/local/hadoop
./sbin/start-all.sh

cd /usr/local/hive
./bin/hive
hive> create database if not exists sparktest;  -- create the database sparktest
hive> show databases;  -- check that the sparktest database was created
-- next, create a table named student in the sparktest database
hive> create table if not exists sparktest.student(
> id int,
> name string,
> gender string,
> age int);
hive> use sparktest;  -- switch to the sparktest database
hive> show tables;  -- list the tables in the sparktest database
hive> insert into student values(1,'Xueqian','F',23);  -- insert one record
hive> insert into student values(2,'Weiliang','M',24);  -- insert another record
hive> select * from student;  -- display the records in the student table

Connect to Hive to read and write data:

  • Edit the configuration file "/usr/local/spark/conf/spark-env.sh":
cd /usr/local/spark/conf/
gedit spark-env.sh
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export CLASSPATH=$CLASSPATH:/usr/local/hive/lib
export SCALA_HOME=/usr/local/scala
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export HIVE_CONF_DIR=/usr/local/hive/conf
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/hive/lib/mysql-connector-java-5.1.40-bin.jar

Restart spark-shell (built with Hive support) so the new settings take effect, then run the following commands to read data from Hive:

scala> import org.apache.spark.sql.Row
scala> import org.apache.spark.sql.SparkSession
 
scala> case class Record(key: Int, value: String)  // carried over from the upstream Spark example; not used below
 
// warehouseLocation points to the default location for managed databases and tables
scala> val warehouseLocation = "spark-warehouse"
 
scala> val spark = SparkSession.builder().appName("Spark Hive Example").config("spark.sql.warehouse.dir", warehouseLocation).enableHiveSupport().getOrCreate()
 
scala> import spark.implicits._
scala> import spark.sql
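Before querying, you can optionally confirm that Hive support took effect. This is a quick sanity check, assuming the session above was created successfully; spark.sql.catalogImplementation is an internal Spark SQL setting that reads "hive" when enableHiveSupport() is in force:

scala> spark.conf.get("spark.sql.catalogImplementation")  // expect "hive"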
// the query result is shown below
scala> sql("SELECT * FROM sparktest.student").show()
+---+--------+------+---+
| id|    name|gender|age|
+---+--------+------+---+
|  1| Xueqian|     F| 23|
|  2|Weiliang|     M| 24|
+---+--------+------+---+
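The same table can also be read through the DataFrame API instead of raw SQL. A minimal sketch (studentDF0 is an illustrative name, not part of the original code):

scala> val studentDF0 = spark.table("sparktest.student")
scala> studentDF0.filter($"age" > 23).show()  // returns only Weiliang's record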
  • In another terminal window, start the Hive CLI and check the data in the sparktest.student table:
cd /usr/local/hive
./bin/hive
hive> use sparktest;
OK
Time taken: 0.016 seconds
 
hive> select * from student;
OK
1   Xueqian F   23
2   Weiliang    M   24
Time taken: 0.05 seconds, Fetched: 2 row(s)
  • Switch back to the spark-shell (with Hive support) terminal window and enter the following commands:
scala> import java.util.Properties
scala> import org.apache.spark.sql.types._
scala> import org.apache.spark.sql.Row
 
// create two records representing two new students
scala> val studentRDD = spark.sparkContext.parallelize(Array("3 Rongcheng M 26","4 Guanhua M 27")).map(_.split(" "))
 
// define the schema
scala> val schema = StructType(List(StructField("id", IntegerType, true),StructField("name", StringType, true),StructField("gender", StringType, true),StructField("age", IntegerType, true)))
 
// create Row objects; each Row becomes one row of rowRDD
scala> val rowRDD = studentRDD.map(p => Row(p(0).toInt, p(1).trim, p(2).trim, p(3).toInt))
 
// pair the Row objects with the schema, i.e., attach the schema to the data
scala> val studentDF = spark.createDataFrame(rowRDD, schema)
 
// inspect studentDF
scala> studentDF.show()
+---+---------+------+---+
| id|     name|gender|age|
+---+---------+------+---+
|  3|Rongcheng|     M| 26|
|  4|  Guanhua|     M| 27|
+---+---------+------+---+
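Because spark.implicits._ has been imported, the same DataFrame could also be built without an explicit StructType, letting Spark infer the schema from a case class. An alternative sketch (Student is a hypothetical case class introduced here only for illustration):

scala> case class Student(id: Int, name: String, gender: String, age: Int)
scala> val studentDF2 = Seq(Student(3, "Rongcheng", "M", 26), Student(4, "Guanhua", "M", 27)).toDF()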
// register the DataFrame as a temporary view (registerTempTable is deprecated since Spark 2.0)
scala> studentDF.createOrReplaceTempView("tempTable")
 
scala> sql("insert into sparktest.student select * from tempTable")
  • Switch back to the earlier Hive terminal window and run the following query to see how the table contents changed:
hive> select * from student;
OK
1   Xueqian F   23
2   Weiliang    M   24
3   Rongcheng   M   26
4   Guanhua M   27
Time taken: 0.049 seconds, Fetched: 4 row(s)
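Back in the spark-shell window, re-running the earlier query should likewise return all four rows (a quick verification):

scala> sql("SELECT * FROM sparktest.student").show()  // now shows ids 1 through 4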

Reposted from blog.csdn.net/weixin_51309151/article/details/122186775