Spark RDD Operators: Action Save Operations

1. saveAsTextFile

Saves the RDD to a file system as plain text files.

Scala version

    val conf = new SparkConf().setMaster("local[*]").setAppName("SaveAsTextFileScala")
    val sc = new SparkContext(conf)
    val rdd = sc.makeRDD(1 to 10, 2)
    rdd.saveAsTextFile("file:///E:/save/test_folder")        // save to the local file system
    rdd.saveAsTextFile("hdfs://192.168.8.99:9000/save")      // save to HDFS

2. saveAsSequenceFile

Saves the RDD to HDFS as a SequenceFile; usage is the same as saveAsTextFile.
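
A minimal sketch (the HDFS path below is only a placeholder reusing the address from section 1): saveAsSequenceFile is only available on RDDs of key-value pairs whose element types can be converted to Hadoop Writables.

    val pairRdd = sc.makeRDD(Array(("A",2),("B",6),("C",7)), 2)
    pairRdd.saveAsSequenceFile("hdfs://192.168.8.99:9000/saveSeq")
    // read it back for a quick check
    sc.sequenceFile[String,Int]("hdfs://192.168.8.99:9000/saveSeq").collect()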

3. saveAsObjectFile

  • Serializes the elements of the RDD and stores them in a file
  • On HDFS the data is saved in SequenceFile format by default
var rdd1 = sc.makeRDD(1 to 10,2)
rdd1.saveAsObjectFile("hdfs://cdh5/tmp/lxw1234.com/")

hadoop fs -cat /tmp/lxw1234.com/part-00000
SEQ !org.apache.hadoop.io.NullWritable"org.apache.hadoop.io.BytesWritableT
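
The saved objects can be read back with sc.objectFile for a quick check (same path as above):

    val restored = sc.objectFile[Int]("hdfs://cdh5/tmp/lxw1234.com/")
    restored.collect()   // Array(1, 2, ..., 10); element order follows the saved partitions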

4. saveAsHadoopFile

  • Saves the RDD to files on HDFS, using the old-style (mapred) Hadoop API
  • Lets you specify outputKeyClass, outputValueClass, and a compression codec
  • Writes one output file per partition
var rdd1 = sc.makeRDD(Array(("A",2),("A",1),("B",6),("B",3),("B",7)))

import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable

// plain text output, one part file per partition
rdd1.saveAsHadoopFile("/tmp/lxw1234.com/",classOf[Text],classOf[IntWritable],classOf[TextOutputFormat[Text,IntWritable]])

// same output, compressed with LZO (requires the hadoop-lzo jar on the classpath)
rdd1.saveAsHadoopFile("/tmp/lxw1234.com/",classOf[Text],classOf[IntWritable],classOf[TextOutputFormat[Text,IntWritable]],
                      classOf[com.hadoop.compression.lzo.LzopCodec])
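
TextOutputFormat writes one tab-separated "key<TAB>value" line per record, so the uncompressed output can be spot-checked by reading the directory back as plain text:

    // each line looks like "A\t2"
    sc.textFile("/tmp/lxw1234.com/").collect()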

5. saveAsHadoopDataset

saveAsHadoopDataset saves an RDD to HDFS, or more generally to any storage system with an old-API (mapred) Hadoop OutputFormat, using a JobConf to describe the output.

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable
import org.apache.hadoop.mapred.JobConf

var rdd1 = sc.makeRDD(Array(("A",2),("A",1),("B",6),("B",3),("B",7)))
var jobConf = new JobConf()
jobConf.setOutputFormat(classOf[TextOutputFormat[Text,IntWritable]])
jobConf.setOutputKeyClass(classOf[Text])
jobConf.setOutputValueClass(classOf[IntWritable])
jobConf.set("mapred.output.dir","/tmp/lxw1234/")   // output directory, read by the old mapred API
rdd1.saveAsHadoopDataset(jobConf)

Result:
hadoop fs -cat /tmp/lxw1234/part-00000
A       2
A       1
hadoop fs -cat /tmp/lxw1234/part-00001
B       6
B       3
B       7

Saving data to HBase
Create the HBase table:

create 'lxw1234',{NAME => 'f1',VERSIONS => 1},{NAME => 'f2',VERSIONS => 1},{NAME => 'f3',VERSIONS => 1}

Note: when saving to HBase, the HBase-related jars need to be added to SPARK_CLASSPATH at runtime.

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.io.ImmutableBytesWritable

var conf = HBaseConfiguration.create()
var jobConf = new JobConf(conf)
jobConf.set("hbase.zookeeper.quorum","zkNode1,zkNode2,zkNode3")
jobConf.set("zookeeper.znode.parent","/hbase")
jobConf.set(TableOutputFormat.OUTPUT_TABLE,"lxw1234")
jobConf.setOutputFormat(classOf[TableOutputFormat])

var rdd1 = sc.makeRDD(Array(("A",2),("B",6),("C",7)))
rdd1.map(x => {
    // one Put per record: row key = x._1, column f1:c1 = the Int value x._2
    var put = new Put(Bytes.toBytes(x._1))
    put.add(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
    (new ImmutableBytesWritable, put)
}).saveAsHadoopDataset(jobConf)

Result:
hbase(main):005:0> scan 'lxw1234'
ROW     COLUMN+CELL                                                                                                
 A       column=f1:c1, timestamp=1436504941187, value=\x00\x00\x00\x02                                              
 B       column=f1:c1, timestamp=1436504941187, value=\x00\x00\x00\x06                                              
 C       column=f1:c1, timestamp=1436504941187, value=\x00\x00\x00\x07                                              
3 row(s) in 0.0550 seconds
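
The cell values above are the 4-byte big-endian encoding that Bytes.toBytes produces for an Int; Bytes.toInt turns them back:

    Bytes.toInt(Array[Byte](0, 0, 0, 2))   // res: Int = 2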

6. saveAsNewAPIHadoopFile

Saves the RDD to HDFS using the new-style (mapreduce) Hadoop API; usage is largely the same as saveAsHadoopFile.

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable

var rdd1 = sc.makeRDD(Array(("A",2),("A",1),("B",6),("B",3),("B",7)))
rdd1.saveAsNewAPIHadoopFile("/tmp/lxw1234/",classOf[Text],classOf[IntWritable],classOf[TextOutputFormat[Text,IntWritable]])
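
Unlike the old API, this method has no codec parameter; compression can instead be requested through the Configuration overload. A sketch under that assumption, reusing rdd1 from above; the output path /tmp/lxw1234_gz/ is only an example:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.io.compress.{CompressionCodec, GzipCodec}

    val outConf = new Configuration(sc.hadoopConfiguration)
    outConf.setBoolean("mapreduce.output.fileoutputformat.compress", true)
    outConf.setClass("mapreduce.output.fileoutputformat.compress.codec",
                     classOf[GzipCodec], classOf[CompressionCodec])
    rdd1.saveAsNewAPIHadoopFile("/tmp/lxw1234_gz/", classOf[Text], classOf[IntWritable],
                                classOf[TextOutputFormat[Text,IntWritable]], outConf)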

7. saveAsNewAPIHadoopDataset

Same purpose as saveAsHadoopDataset, except that the new-style Hadoop API is used.

Create the HBase table:

create 'lxw1234',{NAME => 'f1',VERSIONS => 1},{NAME => 'f2',VERSIONS => 1},{NAME => 'f3',VERSIONS => 1}

Note: when saving to HBase, the HBase-related jars need to be added to SPARK_CLASSPATH at runtime.

package com.lxw1234.test

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.client.Put

object Test {

  def main(args : Array[String]) {

    val sparkConf = new SparkConf().setMaster("spark://lxw1234.com:7077").setAppName("lxw1234.com")
    val sc = new SparkContext(sparkConf)
    var rdd1 = sc.makeRDD(Array(("A",2),("B",6),("C",7)))

    sc.hadoopConfiguration.set("hbase.zookeeper.quorum","zkNode1,zkNode2,zkNode3")
    sc.hadoopConfiguration.set("zookeeper.znode.parent","/hbase")
    sc.hadoopConfiguration.set(TableOutputFormat.OUTPUT_TABLE,"lxw1234")
    var job = new Job(sc.hadoopConfiguration)
    job.setOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setOutputValueClass(classOf[Put])
    job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

    rdd1.map(x => {
        // one Put per record: row key = x._1, column f1:c1 = the Int value x._2
        var put = new Put(Bytes.toBytes(x._1))
        put.add(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
        (new ImmutableBytesWritable, put)
      }
    ).saveAsNewAPIHadoopDataset(job.getConfiguration)

    sc.stop()   
  }
}

Reposted from blog.csdn.net/qq_42578036/article/details/109619403