Session segmentation with Spark window functions

Problem 1:

The data has three columns, car_id, city, and up_time, roughly 100 GB in total. The goal is to compute the time span of each pass a car makes through a city,
similar to a session on a website, not the total time the car spends in each city. Each span is based on the local up_time.
For example:

    N597, 杭州, 03-15 11:49:16
    N597, 杭州, 03-15 12:50:38
    N597, 绍兴, 03-15 14:10:35
    N597, 绍兴, 03-15 19:20:47
    N597, 杭州, 03-16 13:20:28
    N597, 杭州, 03-16 15:20:27

Expected result:
N597, 杭州, 1.02 hours
N597, 绍兴, 5.17 hours
N597, 杭州, 2.0 hours

    # Demonstrated with PySpark; df_all is the input DataFrame with columns car_id, city, up_time
    from pyspark.sql.window import Window
    from pyspark.sql import functions as F
    from pyspark.sql.types import DoubleType

    # Sort by [car_id, up_time], and drop rows where city or up_time is null
    select_df = df_all.sort('car_id', 'up_time')\
                      .dropna(subset=['city', 'up_time'])
    # Define the per-car window, ordered by up_time
    carWindow = Window.partitionBy("car_id").orderBy("up_time")
    # Get the previous city; the window function lag() takes the immediately preceding row by default.
    # Mark switch=1 when the current city differs from the previous one (or pre_city is null), otherwise switch=0.
    split_df = select_df.withColumn("pre_city", F.lag("city").over(carWindow))\
                        .withColumn("switch", F.when(F.col("city")!=F.col("pre_city"), 1)\
                                            .when(F.isnull(F.col("pre_city")), 1).otherwise(0))
    
    # Take a running (cumulative, not total) sum of switch over carWindow, giving idx: how many times this car has switched city so far.
    # Then group by [idx, car_id, city] and take the time difference, which is the duration of each session.
    sess_df = split_df.withColumn("idx", F.sum("switch").over(carWindow))\
                      .groupBy('idx', 'car_id', 'city')\
                      .agg(F.max("up_time").alias("max_time"), F.min("up_time").alias("min_time"))
    # Convert the time difference to hours, rounded to 3 decimal places
    timeDiff = (F.unix_timestamp('max_time') - F.unix_timestamp('min_time'))/3600
    stay_df = sess_df.withColumn('stay_hour', F.round(timeDiff.cast(DoubleType()), 3))\
                        .drop('max_time', 'min_time')

Because the time difference is computed on Unix timestamps, spans that cross midnight are handled correctly.
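As a sanity check, here is a minimal, self-contained sketch that runs the same pipeline on the sample rows above. The year 2020 is prepended to the timestamps (an assumption, so that unix_timestamp can parse them with its default yyyy-MM-dd HH:mm:ss format), the app name is arbitrary, and the sort/dropna step is skipped because the sample has no nulls:

    from pyspark.sql import SparkSession
    from pyspark.sql.window import Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("car_session_demo").master("local[1]").getOrCreate()

    # Sample rows from the problem statement, with an assumed year 2020 added
    rows = [("N597", "杭州", "2020-03-15 11:49:16"),
            ("N597", "杭州", "2020-03-15 12:50:38"),
            ("N597", "绍兴", "2020-03-15 14:10:35"),
            ("N597", "绍兴", "2020-03-15 19:20:47"),
            ("N597", "杭州", "2020-03-16 13:20:28"),
            ("N597", "杭州", "2020-03-16 15:20:27")]
    df_all = spark.createDataFrame(rows, ["car_id", "city", "up_time"])

    carWindow = Window.partitionBy("car_id").orderBy("up_time")
    sess_df = df_all.withColumn("pre_city", F.lag("city").over(carWindow))\
                    .withColumn("switch", F.when(F.col("city") != F.col("pre_city"), 1)
                                           .when(F.isnull(F.col("pre_city")), 1).otherwise(0))\
                    .withColumn("idx", F.sum("switch").over(carWindow))\
                    .groupBy("idx", "car_id", "city")\
                    .agg(F.max("up_time").alias("max_time"), F.min("up_time").alias("min_time"))
    stay_df = sess_df.withColumn(
        "stay_hour",
        F.round((F.unix_timestamp("max_time") - F.unix_timestamp("min_time")) / 3600, 3))
    stay_df.orderBy("idx").select("car_id", "city", "stay_hour").show()
    # The three sessions come out to roughly 1.02, 5.17, and 2.0 hours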

Problem 2

Records of users watching videos and browsing web pages, with three columns: user_id, start_time, end_time.
The rule is that if the gap between two visits exceeds one day, the two records are treated as two separate sessions; otherwise they belong to the same session. (From https://blog.csdn.net/shinever1/article/details/99863804)

# pyspark
from pyspark.sql import SparkSession
from pyspark.sql.window import Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("temp1526").master("local[1]").getOrCreate()

data=[("A","2019-01-01","2019-01-02"),("A","2019-01-02","2019-01-03"),("A","2019-01-04","2019-01-05"), \
      ("A","2019-01-08","2019-01-09"),("A","2019-01-09","2019-01-10"),("B","2019-01-01","2019-01-02"), \
      ("B","2019-01-02","2019-01-03"),("B","2019-01-04","2019-01-05"),("B","2019-01-08","2019-01-09"), \
    ("B","2019-01-09","2019-01-10")]
df = spark.createDataFrame(data,["user_id","start_time","end_time"])

userWindow = Window.partitionBy("user_id").orderBy("start_time")
# Number of days since the previous record's end_time
df_interval = df.withColumn("pre_end_time", F.lag("end_time",1).over(userWindow))\
                .withColumn("days", F.datediff("start_time", "pre_end_time"))
# Flag rows that start a new session (gap of more than one day, or no previous record)
df_flag = df_interval.withColumn("flag", F.when(F.col("days") <= 1, 0).otherwise(1))
# Cumulative sum of flag over the window gives the session number
dfRet = df_flag.withColumn("session", F.sum("flag").over(userWindow))
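# Display the result; Spark does not guarantee the row order across users
dfRet.show()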

The result:

+-------+----------+----------+------------+----+----+-------+
|user_id|start_time|  end_time|pre_end_time|days|flag|session|
+-------+----------+----------+------------+----+----+-------+
|      B|2019-01-01|2019-01-02|        null|null|   1|      1|
|      B|2019-01-02|2019-01-03|  2019-01-02|   0|   0|      1|
|      B|2019-01-04|2019-01-05|  2019-01-03|   1|   0|      1|
|      B|2019-01-08|2019-01-09|  2019-01-05|   3|   1|      2|
|      B|2019-01-09|2019-01-10|  2019-01-09|   0|   0|      2|
|      A|2019-01-01|2019-01-02|        null|null|   1|      1|
|      A|2019-01-02|2019-01-03|  2019-01-02|   0|   0|      1|
|      A|2019-01-04|2019-01-05|  2019-01-03|   1|   0|      1|
|      A|2019-01-08|2019-01-09|  2019-01-05|   3|   1|      2|
|      A|2019-01-09|2019-01-10|  2019-01-09|   0|   0|      2|
+-------+----------+----------+------------+----+----+-------+

Thanks to window functions, this kind of sessionization problem can be solved in just a few lines. MySQL has supported window functions since version 8.0.
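For reference, the Problem 2 split can also be expressed directly in SQL with LAG and a windowed SUM. Below is a sketch that can be run through spark.sql against the Problem 2 DataFrame, assuming it is registered under the made-up view name visits; essentially the same query runs on MySQL 8.0+, which also provides DATEDIFF, LAG, and windowed SUM.

    # Register the Problem 2 DataFrame as a temp view (the name 'visits' is an assumption)
    df.createOrReplaceTempView("visits")
    spark.sql("""
        SELECT user_id, start_time, end_time,
               SUM(flag) OVER (PARTITION BY user_id ORDER BY start_time) AS session
        FROM (
            SELECT *,
                   CASE WHEN DATEDIFF(start_time,
                                      LAG(end_time) OVER (PARTITION BY user_id
                                                          ORDER BY start_time)) <= 1
                        THEN 0 ELSE 1 END AS flag
            FROM visits
        ) t
    """).show()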

Reposted from blog.csdn.net/rover2002/article/details/106254597