Converting local data into distributed data for HDFS with PySpark

Using the MNIST dataset as an example.

Reading the MNIST data

import numpy
from tensorflow.contrib.learn.python.learn.datasets import mnist

# input_images / input_labels point at the raw gzipped MNIST archives
with open(input_images, 'rb') as f:
    images = numpy.array(mnist.extract_images(f))
with open(input_labels, 'rb') as f:
    labels = numpy.array(mnist.extract_labels(f, one_hot=True))
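
The paths input_images and input_labels are assumed here to point at the standard MNIST archives, downloaded beforehand, for example:

input_images = "mnist/train-images-idx3-ubyte.gz"  # assumed local path
input_labels = "mnist/train-labels-idx1-ubyte.gz"  # assumed local path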

Creating the RDDs

shape = images.shape  # (num_images, 28, 28)
# flatten each image into one row so each RDD record is a single vector
imageRDD = sc.parallelize(images.reshape(shape[0], shape[1] * shape[2]), num_partitions)
labelRDD = sc.parallelize(labels, num_partitions)
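
These snippets assume an existing SparkContext sc and a partition count num_partitions. In a standalone script (rather than the pyspark shell), a minimal setup might look like this; the app name and partition count are placeholders:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("mnist_data_setup")  # hypothetical app name
sc = SparkContext(conf=conf)
num_partitions = 10  # tune to cluster size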

Output paths

output_images = output + "/images"
output_labels = output + "/labels"

Saving as CSV

def toCSV(vec):
  """Join the elements of a vector into a comma-separated string."""
  return ','.join([str(i) for i in vec])

imageRDD.map(toCSV).saveAsTextFile(output_images)
labelRDD.map(toCSV).saveAsTextFile(output_labels)
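
Since the output is plain text, a quick spot check from the driver confirms the format (a sketch; take(1) just pulls one record back):

print(sc.textFile(output_images).take(1))  # one comma-separated row of pixels
print(sc.textFile(output_labels).take(1))  # the matching one-hot label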

Saving as pickle

Note that the same output paths are reused below; Spark refuses to write into an existing directory, so run only one conversion per output location.

imageRDD.saveAsPickleFile(output_images)
labelRDD.saveAsPickleFile(output_labels)
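
Pickle output can be loaded back into an RDD with sc.pickleFile, which is handy as a sanity check (a sketch):

imagesBack = sc.pickleFile(output_images)
print(imagesBack.take(1))  # first flattened image array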

Saving as TFRecord
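
The toTFExample helper used below is not defined in the original post. A minimal sketch based on tf.train.Example follows; the feature names 'image' and 'label' are assumptions:

import tensorflow as tf

def toTFExample(image, label):
  """Serialize an image/label pair as a tf.train.Example byte string."""
  example = tf.train.Example(
    features=tf.train.Features(
      feature={
        # assumed feature names; int64 lists hold the pixel and label values
        'image': tf.train.Feature(int64_list=tf.train.Int64List(value=image.astype("int64"))),
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=label.astype("int64")))
      }
    )
  )
  return example.SerializeToString()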

# requires: --jars tensorflow-hadoop-1.0-SNAPSHOT.jar on the spark-submit command line
tfRDD = imageRDD.zip(labelRDD).map(lambda x: (bytearray(toTFExample(x[0], x[1])), None))
tfRDD.saveAsNewAPIHadoopFile(output, "org.tensorflow.hadoop.io.TFRecordFileOutputFormat",
                             keyClass="org.apache.hadoop.io.BytesWritable",
                             valueClass="org.apache.hadoop.io.NullWritable")
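
To verify the written records, the same connector jar can read them back with the matching input format (a sketch):

readRDD = sc.newAPIHadoopFile(output, "org.tensorflow.hadoop.io.TFRecordFileInputFormat",
                              keyClass="org.apache.hadoop.io.BytesWritable",
                              valueClass="org.apache.hadoop.io.NullWritable")
print(readRDD.take(1))  # one serialized tf.train.Example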

Reading the data back from HDFS is covered in a separate post: Data reading.

Reposted from blog.csdn.net/u011740601/article/details/103893037