Common PySpark DataFrame processing methods

The official documentation covers these APIs in full. In Spark 2.0, HiveContext, SQLContext, StreamingContext, and SparkContext were all consolidated under the unified spark entry point (SparkSession). Another thing to note: only one active process can read a file at a time, otherwise an error is raised.
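
As a quick sketch (the app name is arbitrary, not from the original post), the unified entry point and its underlying SparkContext used in the examples below can be obtained like this:

from pyspark.sql import SparkSession

# Build (or reuse) the unified entry point, referred to as `spark` below
spark = SparkSession.builder \
    .appName("dataframe-examples") \
    .getOrCreate()
sc = spark.sparkContext  # underlying SparkContext, used as `sc` in the textFile examples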

1. Creating and saving DataFrames

spark.createDataFrame(data, schema=None, samplingRatio=None), create a DataFrame directly.
Here data can be an RDD of Rows, tuples, lists, or dicts, a Python list, or a pandas.DataFrame.

df = spark.createDataFrame([
        (1, 144.5, 5.9, 33, 'M'),
        (2, 167.2, 5.4, 45, 'M'),
        (3, 124.1, 5.2, 23, 'F'),
        (4, 144.5, 5.9, 33, 'M'),
        (5, 133.2, 5.7, 54, 'F'),
        (3, 124.1, 5.2, 23, 'F'),
        (5, 129.2, 5.3, 42, 'M'),
    ], ['id', 'weight', 'height', 'age', 'gender']) # create a DataFrame directly from a list of tuples

df = spark.createDataFrame([{'name':'Alice','age':1},
    {'name':'Polo','age':1}]) # create from a list of dicts

from pyspark.sql.types import StructType, StructField, LongType, StringType
schema = StructType([
    StructField("id", LongType(), True),    
    StructField("name", StringType(), True),
    StructField("age", LongType(), True),
    StructField("eyeColor", StringType(), True)
])
df = spark.createDataFrame(csvRDD, schema) # create with an explicitly specified schema (csvRDD is an RDD prepared elsewhere)

spark.read, read a DataFrame from files

>>> airports = spark.read.csv(airportsFilePath, header='true', inferSchema='true', sep='\t')
>>> rdd = sc.textFile('python/test_support/sql/ages.csv') # an RDD of comma-separated lines can be turned into a DataFrame this way
>>> df2 = spark.read.csv(rdd)
>>> df = spark.read.format('json').load('python/test_support/sql/people.json') 
>>> df1 = spark.read.json('python/test_support/sql/people.json')
>>> df1.dtypes
[('age', 'bigint'), ('name', 'string')]
>>> rdd = sc.textFile('python/test_support/sql/people.json')
>>> df2 = spark.read.json(rdd) 
>>> df = spark.read.text('python/test_support/sql/text-test.txt')
>>> df.collect()
[Row(value='hello'), Row(value='this')]
>>> df = spark.read.text('python/test_support/sql/text-test.txt', wholetext=True)
>>> df.collect()
[Row(value='hello\nthis')]

DataFrameWriter.save, DataFrameWriter.parquet, DataFrameWriter.saveAsTable: write a DataFrame out (accessed through df.write)
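
A minimal sketch of writing a DataFrame out (the output paths and table name are placeholders, not from the original post):

df.write.parquet('output/people.parquet', mode='overwrite')          # write Parquet files
df.write.format('json').save('output/people_json', mode='overwrite') # generic save with an explicit format
df.write.saveAsTable('people_table', mode='overwrite')               # save as a table in the catalog/metastore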

2. DataFrame operations

collect, show, first, head(n): display data
describe: compute summary statistics (count, mean, stddev, min, max); if no columns are given, all numeric columns are summarized
count, cov, crosstab, freqItems(cols, support=None), groupby: count rows, compute the sample covariance of two columns, build a pairwise frequency (contingency) table, find frequent items, group rows
alias: return the same DataFrame under a new alias (useful for self-joins)
select, filter, where, limit: select columns and filter rows
sort, orderBy: sort
replace: replace values (a short sketch, together with limit, follows the example block below)
join(other, on=None, how=None): join two DataFrames
agg: shorthand for groupBy().agg(); aggregate functions include avg, max, min, sum, count. The argument can be a {column: function} dict or a function applied to a column
union, unionAll, unionByName: concatenate rows
contains, getItem, isin, like, asDict(recursive=False), na.drop (equivalent to dropna)

>>> df = spark.createDataFrame([([1, 2], {"key": "value"})], ["l", "d"])
>>> df.select(df.l.getItem(0), df.d.getItem("key")).show()
+----+------+
|l[0]|d[key]|
+----+------+
|   1| value|
+----+------+
>>> Person = Row("name", "age")
>>> Person
<Row(name, age)>
>>> 'name' in Person
True
>>> 'wrong_key' in Person
False
>>> Person("Alice", 11)
Row(name='Alice', age=11)
df.select("id", "age").filter("age = 22") # 两者是等价的,注意等号
df.select(df.id, df.age).filter(df.age == 22)
from pyspark.sql.functions import col
df1 = df.alias("df1")
df2 = df.alias("df2")
joined_df = df1.join(df2, col("df1.id") == col("df2.id")) # self-join via aliases
df.crosstab("id","age").show()
df.groupby(df.id).avg().collect()
df.groupby('id').agg({'age':'mean'}).collect()
df.agg({"age":"max"}).show()
from pyspark.sql import functions as F # alternative: use built-in Spark SQL functions
df.agg(F.min(df.age)).show()
df.sort(df.age.desc()).collect() # the following three sort calls are equivalent
df.sort("age", ascending=False).collect()
df.orderBy(df.age.desc()).collect()
df.join(df2, ['id','age'], 'outer').select('name', 'height').collect()
cond = [df.name == df3.name, df.age == df3.age]
df.join(df3, cond).select(df.name, df3.age).collect()
df.summary().show()
+-------+------------------+-----+
|summary|               age| name|
+-------+------------------+-----+
|  count|                 2|    2|
|   mean|               3.5| null|
| stddev|2.1213203435596424| null|
|    min|                 2|Alice|
|    25%|                 2| null|
|    50%|                 2| null|
|    75%|                 5| null|
|    max|                 5|  Bob|
+-------+------------------+-----+
>>> df1 = spark.createDataFrame([[1, 2, 3]], ["col0", "col1", "col2"])
>>> df2 = spark.createDataFrame([[4, 5, 6]], ["col1", "col2", "col0"])
>>> df1.unionByName(df2).show()
+----+----+----+
|col0|col1|col2|
+----+----+----+
|   1|   2|   3|
|   6|   4|   5|
+----+----+----+
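
The replace and limit operations listed above have no example; a short sketch using the df with id/weight/height/age/gender created in section 1 (the replacement values are made up for illustration):

df.replace('M', 'Male', subset=['gender']).show()  # replace values in the gender column only
df.replace([23, 33], [24, 34], 'age').show()       # replace using parallel lists of old/new values
df.limit(3).show()                                 # keep only the first 3 rows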

columns, drop, withColumn(colName, col), withColumnRenamed(existing, new): get the column names, drop a column, add a column, rename a column (a short sketch follows the code block below)
Row, StructType, StructField
distinct: drop duplicate rows
dropDuplicates(subset=None): drop duplicates, optionally considering only the columns listed in subset
dropna(how='any', thresh=None, subset=None): drop rows containing nulls. how: 'any' or 'all'; with 'any' a row is dropped if it contains any null, with 'all' only if every value is null. thresh: if set, drop rows with fewer than thresh non-null values (this overrides how). subset: optional list of column names to consider
toJSON (returns an RDD of JSON strings), toPandas (the resulting pandas DataFrame is held entirely in the driver's memory), toDF, toLocalIterator
fillna: fill null values

df = df.dropDuplicates(subset=[c for c in df.columns if c != 'id']) 
df_miss_no_income.dropna(thresh=3).show() # drop rows that have fewer than 3 non-null values
df_miss_no_income.fillna(means).show() # nulls can be filled with a single constant or a {column: value} dict (means is such a dict)
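
A minimal sketch of the column operations listed above, again using the df from section 1 (the derived column name is made up for illustration):

df_new = df.withColumn('double_age', df.age * 2)   # add a column derived from an existing one
df_new = df_new.withColumnRenamed('gender', 'sex') # rename a column
df_new = df_new.drop('weight')                     # remove a column
print(df_new.columns)                              # ['id', 'height', 'age', 'sex', 'double_age']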

cache, persist / spark.catalog.clearCache, unpersist: cache data and clear the cache. cache always uses the default storage level (MEMORY_ONLY for RDDs, MEMORY_AND_DISK for DataFrames), while persist can be set to any other storage level as needed.
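
A short sketch of caching with an explicit storage level:

from pyspark import StorageLevel

df.cache()                                # cache with the default storage level
df.unpersist()                            # release the cached data
df.persist(StorageLevel.MEMORY_AND_DISK)  # persist with a chosen storage level
spark.catalog.clearCache()                # drop everything from the in-memory cache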

3. Spark functions

foreach(f): apply the function f, passing each Row of the DataFrame as its input
apply(udf): apply a grouped-map pandas UDF to each group (used after groupby, as in the example below)
map(f): apply f to every row; it is defined on the underlying RDD, so call it through df.rdd (a short sketch follows the apply example below)

def f(person):
    print(person.name)
df.foreach(f)
>>> from pyspark.sql.functions import pandas_udf, PandasUDFType
>>> df = spark.createDataFrame(
...     [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
...     ("id", "v"))
>>> @pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)  # doctest: +SKIP
... def normalize(pdf):
...     v = pdf.v
...     return pdf.assign(v=(v - v.mean()) / v.std())
>>> df.groupby("id").apply(normalize).show()  # doctest: +SKIP
+---+-------------------+
| id|                  v|
+---+-------------------+
|  1|-0.7071067811865475|
|  1| 0.7071067811865475|
|  2|-0.8320502943378437|
|  2|-0.2773500981126146|
|  2| 1.1094003924504583|
+---+-------------------+
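
map has no DataFrame method of its own; a minimal sketch of applying a function per row through the underlying RDD (assumes df has a name column, as in the foreach example):

names = df.rdd.map(lambda person: person.name).collect()  # apply a function to every row via the RDD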

pyspark.sql.functions, built-in functions:
abs, add_months, approx_count_distinct(col, rsd=None), array(*cols), array_contains(col, value), avg
ceil, coalesce (returns the first non-null value), col/column, collect_list (aggregates a column's values into a list), collect_set, concat, concat_ws(sep, *cols), cos, count, countDistinct, current_date, current_timestamp
date_add(start, days), date_format, date_sub (subtracts days), date_trunc (format can be 'year', 'yyyy', 'yy', 'month', 'mon', 'mm', 'day', 'dd', 'hour', 'minute', 'second', 'week', 'quarter'), datediff, dayofmonth, dayofyear, decode, degrees (converts an angle from radians to degrees), dense_rank (note the difference from rank: dense_rank ranks 1,2,2,3 while rank ranks 1,2,2,4; see the window example after the code below), desc
explode (expands an array or map column into one row per element), explode_outer, expr (parses a string as a SQL expression)
floor, format_number (sets the number of decimal places), format_string, from_json, from_unixtime(timestamp, format='yyyy-MM-dd HH:mm:ss')

>>> from pyspark.sql import Row
>>> from pyspark.sql.functions import explode, expr, format_string
>>> eDF = spark.createDataFrame([Row(a=1, intlist=[1,2,3], mapfield={"a": "b"})])
>>> eDF.select(explode(eDF.intlist).alias("anInt")).collect()
[Row(anInt=1), Row(anInt=2), Row(anInt=3)]
>>> eDF.select(explode(eDF.mapfield).alias("key", "value")).show()
+---+-----+
|key|value|
+---+-----+
|  a|    b|
+---+-----+
>>> df.select(expr("length(name)")).collect()
[Row(length(name)=5), Row(length(name)=3)]
>>> df = spark.createDataFrame([(5, "hello")], ['a', 'b'])
>>> df.select(format_string('%d %s', df.a, df.b).alias('v')).collect()
[Row(v='5 hello')]
>>> from pyspark.sql.functions import *
>>> from pyspark.sql.types import *
>>> data = [(1, '''{"a": 1,"b":4}'''),(2, '''{"a": 3,"b":2}''')]
>>> df = spark.createDataFrame(data, ("key", "value"))
>>> df.show()
+---+--------------+
|key|         value|
+---+--------------+
|  1|{"a": 1,"b":4}|
|  2|{"a": 3,"b":2}|
+---+--------------+
>>> schema = StructType([StructField("a", IntegerType()),StructField("b", IntegerType())])
>>> df.select(from_json(df.value, schema).alias("json")).show()
+------+
|  json|
+------+
|[1, 4]|
|[3, 2]|
+------+
>>> data = [(1, '''{"a": {"c":"chen"},"b":4}'''),(2, '''{"a": {"c":"min"},"b":2}''')]
>>> df = spark.createDataFrame(data, ("key", "value"))
>>> df3 = df.select('key',get_json_object('value','$.a.c').alias('c'),get_json_object('value','$.b').cast('integer').alias('b'))
>>> df3.show()
+---+----+---+
|key|   c|  b|
+---+----+---+
|  1|chen|  4|
|  2| min|  2|
+---+----+---+
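
The rank/dense_rank difference mentioned above is easiest to see with a window; a small sketch on made-up data:

>>> from pyspark.sql import Window
>>> from pyspark.sql.functions import rank, dense_rank
>>> scores = spark.createDataFrame([(1,), (2,), (2,), (3,)], ["score"])
>>> w = Window.orderBy("score")
>>> scores.select("score", rank().over(w).alias("rank"), dense_rank().over(w).alias("dense_rank")).show()
+-----+----+----------+
|score|rank|dense_rank|
+-----+----+----------+
|    1|   1|         1|
|    2|   2|         2|
|    2|   2|         2|
|    3|   4|         3|
+-----+----+----------+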

4. Spark SQL syntax

Register the DataFrame as a temporary view before using Spark SQL:

df.createOrReplaceTempView("df") # register as a temporary view before running Spark SQL against it

spark.sql, run SQL statements against the registered views; the result comes back as a DataFrame

spark.sql("select id, age from swimmers where age = 22")

Reposted from blog.csdn.net/kittyzc/article/details/82862089