Convert a Parquet file to text

Method 1: Spark (Python) implementation

import sys
from pyspark import SparkContext
from pyspark.sql import SQLContext

inputpath = sys.argv[1]
outputpath = sys.argv[2]

sc = SparkContext(appName="Transform Pq to Csv")
sqlContext = SQLContext(sc)

# Read the Parquet data, then write it back out as \001-delimited
# text through the spark-csv package.
df = sqlContext.read.parquet(inputpath)
df.write.format("com.databricks.spark.csv") \
    .option("delimiter", "\001") \
    .save(outputpath)

Run it with spark-submit, shipping the spark-csv package:

spark-submit --packages com.databricks:spark-csv_2.10:1.2.0 --master yarn-client read_pq.py /tmp/xing/20161115/1049 /tmp/xing/20161115/text/1049

Method 2: If the data already lives in a Parquet-backed Hive table, you can export it with an HQL query

insert overwrite [local] directory 'outputpath'
select * from table1_parquet;
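The '\001' delimiter used in Method 1 is the ASCII Ctrl-A character, which is also Hive's default field separator, so text produced either way splits back into columns the same way. A small illustration (the sample line is made up):

```python
# '\001' is Ctrl-A (\x01), Hive's default field delimiter;
# splitting on it recovers the original columns of a row.
line = "1\x01x\x01foo"          # one exported row with three columns
fields = line.split("\x01")
# fields == ["1", "x", "foo"]
```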
