Chapter 4: Spark SQL Multi-Data Source Interaction

Spark SQL can read from and write to many data sources, such as plain text, JSON, Parquet, CSV, and MySQL (via JDBC).
1. Write to different data sources
2. Read from different data sources
Write data:

package cn.itcast.sql
import java.util.Properties
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
object WriterDataSourceDemo {
case class Person(id:Int,name:String,age:Int)
def main(args: Array[String]): Unit = {
//1. Create the SparkSession
val spark: SparkSession = SparkSession.builder().master("local[*]").appName("SparkSQL")
.getOrCreate()
val sc: SparkContext = spark.sparkContext
sc.setLogLevel("WARN")
//2. Read the file
val fileRDD: RDD[String] = sc.textFile("D:\\data\\person.txt")
val linesRDD: RDD[Array[String]] = fileRDD.map(_.split(" "))
val rowRDD: RDD[Person] = linesRDD.map(line => Person(line(0).toInt, line(1), line(2).toInt))
// 3. Convert RDD to DF
// Note: RDD has no toDF method; in newer versions it is added through an implicit conversion,
// which is brought into scope with the import below
import spark.implicits._

// Note: The generic type of rowRDD above is Person, which carries the schema information,
// so Spark SQL can obtain it automatically by reflection and attach it to the DataFrame
val personDF: DataFrame = rowRDD.toDF
//Write the DF to different data sources
//The text data source supports only a single column, while personDF has 3 columns; see the sketch after this code
//personDF.write.text("D:\\data\\output\\text")
personDF.write.json("D:\\data\\output\\json")
personDF.write.csv("D:\\data\\output\\csv")
personDF.write.parquet("D:\\data\\output\\parquet")
val prop = new Properties()
prop.setProperty("user","root")
prop.setProperty("password","root")
personDF.write.mode(SaveMode.Overwrite).jdbc(
"jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8","person",prop)
println("Write succeeded")
sc.stop()
spark.stop()
}
}
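As noted in the commented-out write above, the text data source accepts only a single string column. A minimal sketch of one workaround, assuming the same personDF from this example and an illustrative output path, is to concatenate the columns into one column before writing:

import org.apache.spark.sql.functions.concat_ws

// Combine id, name and age into a single space-separated string column,
// which the single-column text data source can then write (the path is illustrative)
personDF
  .select(concat_ws(" ", personDF("id"), personDF("name"), personDF("age")).as("value"))
  .write.text("D:\\data\\output\\text")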

Reading data:

package cn.itcast.sql
import java.util.Properties
import org.apache.spark.SparkContext
import org.apache.spark.sql.SparkSession
object ReadDataSourceDemo {
def main(args: Array[String]): Unit = {
//1. Create the SparkSession
val spark: SparkSession = SparkSession.builder().master("local[*]").appName("SparkSQL")
.getOrCreate()
val sc: SparkContext = spark.sparkContext
sc.setLogLevel("WARN")
//2. Read the files written above
spark.read.json("D:\\data\\output\\json").show()
spark.read.csv("D:\\data\\output\\csv").toDF("id","name","age").show()
spark.read.parquet("D:\\data\\output\\parquet").show()
val prop = new Properties()
prop.setProperty("user","root")
prop.setProperty("password","root")
spark.read.jdbc(
"jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8","person",prop).show()
sc.stop()
spark.stop()
}
}
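The same JDBC read can also be expressed with the generic DataFrameReader builder API, which the summary below refers to as format. A minimal sketch, assuming the same illustrative local MySQL connection details used above:

// Equivalent JDBC read via format/option/load
spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8")
  .option("dbtable", "person")
  .option("user", "root")
  .option("password", "root")
  .load()
  .show()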

3. Summary
1. Spark SQL write data:
DataFrame / DataSet.write.json / csv / parquet / jdbc
2. Spark SQL read data:
SparkSession.read.json / csv / text / parquet / jdbc / format
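For completeness, the generic writer and reader forms mentioned in the summary look like the following. This is a minimal sketch that assumes the personDF and spark values from the examples above and an illustrative output path:

// Generic writer: choose the format by name and overwrite any existing output
personDF.write.format("parquet").mode(SaveMode.Overwrite).save("D:\\data\\output\\parquet")
// Generic reader: load the data back with the matching format
spark.read.format("parquet").load("D:\\data\\output\\parquet").show()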

Origin blog.csdn.net/qq_45765882/article/details/105561475