Spark: How the distinct Operator Is Implemented

Under the hood, distinct cleverly reuses reduceByKey to implement deduplication: map each element to an (element, null) pair, let the shuffle merge identical keys, then extract the keys back out.

// Dedup via the shuffle semantics of reduceByKey (groupByKey would work too,
// but reduceByKey combines map-side and shuffles less data).
rdd.map(x => (x, null))        // key = the element itself, value = a placeholder
   .reduceByKey((x, _) => x)   // both arguments are the (null) values; keep either one
   .map(_._1)                  // drop the placeholder, keep the deduplicated keys
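
A minimal runnable sketch of the same idea, assuming a local SparkContext (the object name DistinctDemo and the sample data are illustrative, not from the original post):

import org.apache.spark.{SparkConf, SparkContext}

object DistinctDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DistinctDemo").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val rdd = sc.parallelize(Seq(1, 2, 2, 3, 3, 3))

    // Hand-rolled dedup, mirroring the reduceByKey-based implementation:
    val deduped = rdd.map(x => (x, null))      // pair each element with a placeholder
                     .reduceByKey((x, _) => x) // merge duplicate keys in the shuffle
                     .map(_._1)                // recover the distinct elements

    println(deduped.collect().sorted.mkString(",")) // prints 1,2,3 (same as rdd.distinct())

    sc.stop()
  }
}

Because reduceByKey performs a map-side combine, duplicates are collapsed within each partition before any data crosses the network, which is why this approach is preferred over a groupByKey-based dedup.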

Reposted from www.cnblogs.com/hejunhong/p/12906280.html