Two ways to convert a Spark RDD into a DataFrame

Spark SQL supports two ways to convert an existing RDD to a DataFrame.
The first uses reflection to infer the schema: the RDD is mapped to a case class, producing a Dataset that can then be converted to a DataFrame. This reflection-based approach is very concise, but it only works if you already know the schema of the RDD when you write the Spark application.
The second goes through the programmatic interface: you build a StructType yourself and apply it to an existing RDD of Rows. Although this approach is more verbose, it lets you construct a DataFrame when the columns and their types are not known until runtime.
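
As a minimal sketch of the reflection-based approach, assuming a small in-memory dataset instead of the order.data file used in the full program below (the `Person` case class and the sample values here are purely illustrative):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Hypothetical case class for illustration; reflection on its fields
// supplies the schema (column names and types).
case class Person(name: String, age: String)

object ReflectionSketch {
  // Build a DataFrame from an in-memory RDD via reflection on Person.
  def build(spark: SparkSession): DataFrame = {
    import spark.implicits._
    spark.sparkContext
      .parallelize(Seq("Alice\t30", "Bob\t25"))
      .map(_.split("\t"))
      .map(a => Person(a(0), a(1)))
      .toDF()
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ReflectionSketch")
      .master("local[2]")
      .getOrCreate()
    build(spark).show()
    spark.stop()
  }
}
```

The `toDF()` call works because `import spark.implicits._` brings an encoder for the case class into scope.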

    The steps are as follows:
         1. Convert the RDD into an RDD of Rows
         2. Define a StructType that matches the structure of the Rows from step 1
         3. Call createDataFrame with the rows and the StructType to create the DataFrame
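
The three steps above can be sketched with a small in-memory dataset (the column names and sample values here are illustrative; the full program below applies the same steps to order.data):

```scala
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object ProgrammaticSchemaSketch {
  def build(spark: SparkSession): DataFrame = {
    // 1. Convert the RDD into an RDD of Rows.
    val rowsRDD = spark.sparkContext
      .parallelize(Seq("1\tTV", "2\tFridge"))
      .map(_.split("\t"))
      .map(a => Row(a(0), a(1)))

    // 2. Define a StructType that matches the Row structure.
    val schema = StructType(Seq(
      StructField("id", StringType, nullable = true),
      StructField("commodity", StringType, nullable = true)
    ))

    // 3. Create the DataFrame from the rows and the schema.
    spark.createDataFrame(rowsRDD, schema)
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ProgrammaticSchemaSketch")
      .master("local[2]")
      .getOrCreate()
    build(spark).show()
    spark.stop()
  }
}
```

Because the schema is an ordinary runtime value here, it could just as well be computed from configuration or from the data itself.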

Test data order.data

1   小王  电视  12  2015-08-01 09:08:31
1   小王  冰箱  24  2015-08-01 09:08:14
2   小李  空调  12  2015-09-02 09:01:31

The code is as follows:

import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object RDD2DF {

  /**
    * There are two main approaches:
    *   1. When the schema is already known, use reflection to convert the RDD to a Dataset and then to a DataFrame.
    *   2. When a case class cannot be defined in advance (for example, the data exists only as Strings), build a schema programmatically through the StructType interface.
    * @param args
    */
  def main(args: Array[String]): Unit = {

    val spark=SparkSession.builder()
      .appName("DFDemo")
      .master("local[2]")
      .getOrCreate()

//    rdd2DFFunc1(spark)

    rdd2DFFunc2(spark)
    spark.stop()
  }

  /**
    * Method 1: reflection, with the case class defined in advance
    * @param spark
    */
  def rdd2DFFunc1(spark:SparkSession): Unit ={
    import spark.implicits._
    val orderRDD=spark.sparkContext.textFile("F:\\JAVA\\WorkSpace\\spark\\src\\main\\resources\\order.data")
    val orderDF=orderRDD.map(_.split("\t"))
      .map(attributes=>Order(attributes(0),attributes(1),attributes(2),attributes(3),attributes(4)))
      .toDF()
    orderDF.show()
    Thread.sleep(1000000) // keep the application alive so the Spark Web UI can be inspected
  }

  /**
    * Method 2 (summary): use the most basic DataFrame interface to turn
    * an RDD of Rows plus a programmatically built StructType into a DataFrame.
    * @param spark
    */
  def rdd2DFFunc2(spark:SparkSession): Unit ={
    //TODO:   1. Convert the RDD into Rows   2. Define a StructType matching the Row structure from step 1   3. Use createDataFrame with the rows and StructType to create the DataFrame
    val orderRDD=spark.sparkContext.textFile("F:\\JAVA\\WorkSpace\\spark\\src\\main\\resources\\order.data")

    //TODO:   1. Convert the RDD into an RDD of Rows
    val rowsRDD=orderRDD
//      .filter((str:String)=>{val arr=str.split("\t");val res=arr(1)!="小李";res})
      .map(_.split("\t"))
      .map(attributes=>Row(attributes(0).trim,attributes(1),attributes(2),attributes(3).trim,attributes(4)))

    //TODO:   2. Define a StructType that matches the Row structure from step 1
    val schemaString="id|name|commodity|age|date"
    val fields=schemaString.split("\\|")
      .map(fieldName=>StructField(fieldName,StringType,nullable = true))
    val schema=StructType(fields)

    //TODO:   3. Use createDataFrame with the rows and the StructType to create the DataFrame
    val orderDF = spark.createDataFrame(rowsRDD, schema)
    orderDF.show()
    orderDF.groupBy("name").count().show()
    orderDF.select("name","commodity").show()
    Thread.sleep(10000000) // keep the application alive so the Spark Web UI can be inspected
  }
}
case class Order(id:String,name:String,commodity:String,age:String,date:String)


Origin blog.51cto.com/14309075/2402582