Common Spark DataFrame Operations


Spark SQL and Spark DataFrames come up all the time at work, but the official DataFrame API documentation only lists the interface functions without examples, which makes it awkward for newcomers. The blog post below is a very good summary: it covers most of the commonly used APIs, each with examples, and I refer to it often.

Spark-SQL之DataFrame操作大全 (a comprehensive guide to Spark SQL DataFrame operations)

Below are a few things it does not cover that I use frequently at work, collected here:

1. Regular expression matching
val app_device_info = app_device_info_df.where("m=7").select(
	app_device_info_df("app_did"),
	regexp_extract(app_device_info_df("app_emulator_mac"),
		"([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}", 0).alias("app_emulator_mac"),
	app_device_info_df("app_emulator_imei"),
	app_device_info_df("app_emulator_addr"))
2. Pivot tables:
val ack_freq_daily_DF = ack_freq_week_DF.groupBy("app_id", "ad_id", "parameter_account_mobile", "app_did")
		.pivot("ack_date").sum("ack_freq_daily").na.fill(0)
3. Writing a DataFrame to a table:
app_device_info.write.format("parquet").option("path", "/user/hadoop/warehouse/app/app_device_info")
	.partitionBy("y", "m").mode("overwrite").saveAsTable("my_database.app_device_info")

hivecontext.sql(s"""insert into table my_database.${tableName} partition(y='${year}', m='${month}', d='${day}')
	select t.* from tmpTable t""")
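As a quick sanity check (not part of the original post), the partitioned table can be read back and pruned by its partition columns; the year and month values below are placeholders:

// Sketch: read the saved table back and filter on the partition columns y and m.
val readBack = hivecontext.table("my_database.app_device_info").where("y = '2018' and m = '7'")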
4. agg aggregation

(1) agg(exprs: Column*) returns a DataFrame; it evaluates the given aggregate expressions, either over the whole DataFrame or per group.

df.agg(max("age"), avg("salary"))
df.groupBy().agg(max("age"), avg("salary"))

(2) agg(exprs: Map[String, String]) returns a DataFrame; the map pairs each column name with the name of the aggregate function to apply to it.

df.agg(Map("age" -> "max", "salary" -> "avg"))
df.groupBy().agg(Map("age" -> "max", "salary" -> "avg"))

Any Spark SQL aggregate function can be used here.
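A slightly larger sketch mixing several aggregate functions in one agg call; df and the columns age/salary follow the snippets above, while the grouping column department and the alias names are assumed for illustration:

// Sketch: several aggregates computed in a single pass over each group.
import org.apache.spark.sql.functions.{avg, count, max, min}

df.groupBy("department")
	.agg(min("age").alias("min_age"),
		max("age").alias("max_age"),
		avg("salary").alias("avg_salary"),
		count("salary").alias("n_rows"))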

5. UDFs:

Registering UDFs on sqlContext and calling them from SQL:

// Compute the length of the string "test"
sqlContext.udf.register("str",(_:String).length)
sqlContext.sql("select str('test')")

// Build a DataFrame and keep only the rows whose key is greater than 98
sqlContext.udf.register("rd", (n: Int) => n > 98)
case class TestData(key: Int, value: String)
import sqlContext.implicits._
val df4 = sqlContext.sparkContext.parallelize(1 to 100).map(i => TestData(i, i.toString)).toDF()
df4.registerTempTable("integerData")

// Call the UDF in the WHERE clause of a SQL query
val result = sqlContext.sql("select * from integerData where rd(key)")

Using a UDF in a GROUP BY, summing v with rows grouped by whether the value is greater than 10:

sqlContext.udf.register("groupFunction", (n: Int) => { n > 10 })
 
val df = Seq(("red", 1), ("red", 2), ("blue", 10),
    ("green", 100), ("green", 200)).toDF("g", "v")
df.registerTempTable("groupData")
 
val result = sqlContext.sql(
	      """
	        | SELECT SUM(v)
	        | FROM groupData
	        | GROUP BY groupFunction(v)
	      """.stripMargin)
6. na

Option 1:

val ack_freq_daily_DF = ack_freq_week_DF.groupBy("app_id", "ad_id", "parameter_account_mobile", "app_did")
		.pivot("ack_date").sum("ack_freq_daily").na.fill(0)

Option 2 (recommended):
Build a Map like this:
val map = Map("column1" -> value1, "column2" -> value2, ...)
Then call dataframe.na.fill(map) to fill the NULL values, with a separate default per column.
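A concrete sketch of option 2 on toy data; the column names clicks/score and the fill values are chosen only for illustration:

// Sketch: per-column defaults for null filling via a Map.
// Columns not listed in the Map keep their nulls.
import sqlContext.implicits._

val raw = Seq((Some(1), None: Option[Double]), (None: Option[Int], Some(2.5)))
	.toDF("clicks", "score")
val filled = raw.na.fill(Map("clicks" -> 0, "score" -> 0.0))
filled.show()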

7. randomSplit
def randomSplit(weights: Array[Double], seed: Long = Utils.random.nextLong): Array[RDD[T]]

This function splits an RDD into several RDDs according to the given weights. The weights parameter is an array of Doubles, and the second parameter is the random seed.

scala> var rdd = sc.makeRDD(1 to 10,10)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[16] at makeRDD at <console>:21

scala> rdd.collect
res6: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)  

scala> var splitRDD = rdd.randomSplit(Array(0.1,0.2,0.3,0.4))
splitRDD: Array[org.apache.spark.rdd.RDD[Int]] = Array(MapPartitionsRDD[17] at randomSplit at <console>:23, 
MapPartitionsRDD[18] at randomSplit at <console>:23, 
MapPartitionsRDD[19] at randomSplit at <console>:23, 
MapPartitionsRDD[20] at randomSplit at <console>:23)

// Note that randomSplit returns an array of RDDs
scala> splitRDD.size
res8: Int = 4

// Because weights contains four values, the RDD is split into four RDDs:
// the elements of the original rdd are randomly assigned to them according to the weights 0.1, 0.2, 0.3, 0.4,
// so an RDD with a higher weight tends to receive more elements.
// (If the weights do not sum to 1, Spark normalizes them.)

scala> splitRDD(0).collect
res10: Array[Int] = Array(1, 4)

scala> splitRDD(1).collect
res11: Array[Int] = Array(3)                                                    

scala> splitRDD(2).collect
res12: Array[Int] = Array(5, 9)

scala> splitRDD(3).collect
res13: Array[Int] = Array(2, 6, 7, 8, 10)
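randomSplit is also available on DataFrame with the same weights/seed signature; a common use is a train/test split. A sketch (df and the 0.8/0.2 weights are placeholders):

// Sketch: split a DataFrame into training and test sets, fixing the seed for reproducibility.
val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42L)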
8. alias

Aliasing a column:

app_device_info_df.where("m=7").select(app_device_info_df("app_did").alias("app_emulator_mac")) 
9. as
df.groupBy("column1").agg(min("timestamp") as "min",max("timestamp") as "max" ,
count("timestamp") as "count",max("timestamp")- min("timestamp") as "interval")
10. repartition
11. coalesce
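The original post lists these last two without examples. As a brief sketch of how they are usually contrasted (df and the partition counts are placeholders): repartition(n) performs a full shuffle and can either increase or decrease the number of partitions, while coalesce(n) avoids a full shuffle and is typically used to reduce the number of partitions, for example before writing output files.

// Sketch: repartition shuffles to exactly 200 partitions; coalesce merges down to 10 without a full shuffle.
val widened = df.repartition(200)
val narrowed = df.coalesce(10)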


Reposted from blog.csdn.net/olizxq/article/details/82807583