Saving data from Spark SQL and Spark Core to a relational database

After analysis, results need to be persisted somewhere, usually HDFS or a common relational database. This post walks through the relational case using MySQL.

With Spark SQL this is straightforward: just call the `write.jdbc` method on the DataFrame.

    val prop = new java.util.Properties()
    prop.put("driver", "com.mysql.jdbc.Driver")
    prop.put("user", "root")
    prop.put("password", "root")

    resultDF.write.jdbc("jdbc:mysql://localhost:3306/test?createDatabaseIfNotExist=true&characterEncoding=UTF-8", "student", prop)
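Note that by default `write.jdbc` fails if the target table already exists. A `mode` call selects the save behavior; a minimal sketch continuing the snippet above (same `resultDF` and `prop`):

```scala
import org.apache.spark.sql.SaveMode

// Append rows to an existing table instead of erroring out;
// SaveMode.Overwrite would drop and recreate the table instead
resultDF.write
  .mode(SaveMode.Append)
  .jdbc("jdbc:mysql://localhost:3306/test?characterEncoding=UTF-8", "student", prop)
```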

If you are working at the RDD level (here through Spark Streaming's `foreachRDD`), write to JDBC inside the action yourself. Below is an example that saves a word count:

    import java.sql.{DriverManager, ResultSet}

    tuple.foreachRDD(rdd => {
      // Open one connection per partition rather than one per record
      rdd.foreachPartition(partition => {
        Class.forName("com.mysql.jdbc.Driver")
        // Get a MySQL connection
        val conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "")
        // Write the data into MySQL
        try {
          partition.foreach(word => {
            // Parameterized statements avoid SQL injection from the word text
            val query = conn.prepareStatement("select count from wordcount where titleName = ?")
            query.setString(1, word._1)
            val queryresult: ResultSet = query.executeQuery()
            // Update if the row exists, insert otherwise
            val stmt = if (queryresult.next()) {
              val totalcount = queryresult.getString("count").toInt + word._2
              val update = conn.prepareStatement("update wordcount set count = ? where titleName = ?")
              update.setString(1, totalcount.toString)
              update.setString(2, word._1)
              update
            } else {
              val insert = conn.prepareStatement("insert into wordcount (titleName, count) values (?, ?)")
              insert.setString(1, word._1)
              insert.setString(2, word._2.toString)
              insert
            }
            stmt.executeUpdate()
            stmt.close()
            queryresult.close()
            query.close()
          })
          println("Save finished")
        } finally {
          conn.close()
        }
      })
    })
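If the `wordcount` table has a unique key on `titleName` (an assumption; the post does not show the DDL), MySQL can collapse the select-then-update-or-insert round trip into a single upsert statement. A sketch of what the per-word write would become:

```scala
// Assumes: ALTER TABLE wordcount ADD UNIQUE KEY uk_title (titleName)
// One statement replaces the query + branch above
val upsert = conn.prepareStatement(
  "insert into wordcount (titleName, count) values (?, ?) " +
  "on duplicate key update count = count + values(count)")
upsert.setString(1, word._1)
upsert.setString(2, word._2.toString)
upsert.executeUpdate()
upsert.close()
```

Besides being shorter, this avoids the race between the select and the insert when several partitions write the same word concurrently.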


Reposted from blog.csdn.net/dudadudadd/article/details/114374087