Computing the correlation coefficient between two DataFrame columns in Spark

Background: suppose we have a table with columns imei, height, and weight, and we want to compute the correlation between weight and height. (The snippet below runs the same computation against an analogous table, ad_tmp.xxx, whose two numeric columns are new_rank_level and old_rank_level.)

// Compute the correlation coefficient with the Pearson and Spearman methods
import org.apache.spark.mllib.stat.Statistics
val df1 = sql("""
select new_rank_level,old_rank_level
from ad_tmp.xxx
""")
val df_real = df1.select("old_rank_level", "new_rank_level")
// Convert each Row into a pair of Doubles
val rdd_real = df_real.rdd.map(x => (x(0).toString.toDouble, x(1).toString.toDouble))
// Split into two RDD[Double] columns, which is what Statistics.corr expects
val label = rdd_real.map(_._1)
val feature = rdd_real.map(_._2)

val cor_pearson: Double = Statistics.corr(label, feature, "pearson")
println(cor_pearson)
cor_pearson: Double = 0.23997483383749665

val cor_spearman: Double = Statistics.corr(label, feature, "spearman")
println(cor_spearman)
cor_spearman: Double = 0.23997567905723607
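To make explicit what `Statistics.corr` is computing, here is a plain-Scala sketch (no Spark required) of both coefficients: Pearson is the covariance normalized by the two standard deviations, and Spearman is simply Pearson applied to the ranks of the values (ties get their average rank). The object and method names (`CorrSketch`, `pearson`, `spearman`, `ranks`) are illustrative helpers, not part of any Spark API.

```scala
object CorrSketch {
  // Pearson correlation: cov(x, y) / (stddev(x) * stddev(y))
  def pearson(x: Seq[Double], y: Seq[Double]): Double = {
    val n = x.length
    val mx = x.sum / n
    val my = y.sum / n
    val cov = x.zip(y).map { case (a, b) => (a - mx) * (b - my) }.sum
    val sx = math.sqrt(x.map(a => (a - mx) * (a - mx)).sum)
    val sy = math.sqrt(y.map(b => (b - my) * (b - my)).sum)
    cov / (sx * sy)
  }

  // Assign 1-based ranks; tied values share the average of their ranks
  def ranks(x: Seq[Double]): Seq[Double] = {
    val sorted = x.zipWithIndex.sortBy(_._1)
    val r = new Array[Double](x.length)
    var i = 0
    while (i < sorted.length) {
      var j = i
      while (j + 1 < sorted.length && sorted(j + 1)._1 == sorted(i)._1) j += 1
      val avgRank = (i + j).toDouble / 2 + 1
      for (k <- i to j) r(sorted(k)._2) = avgRank
      i = j + 1
    }
    r.toSeq
  }

  // Spearman correlation: Pearson applied to the ranks of the values
  def spearman(x: Seq[Double], y: Seq[Double]): Double =
    pearson(ranks(x), ranks(y))
}

// Toy height/weight data in the spirit of the background example
val h = Seq(1.60, 1.70, 1.75, 1.80)
val w = Seq(55.0, 65.0, 70.0, 80.0)
println(CorrSketch.pearson(h, w))   // close to 1 for nearly linear data
println(CorrSketch.spearman(h, w))  // ≈ 1.0: the ranks are perfectly monotone
```

For the DataFrame case, note that Spark also exposes Pearson directly, without the RDD round-trip, via `df1.stat.corr("old_rank_level", "new_rank_level")`; the RDD-based `Statistics.corr` is still needed when you want Spearman.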

Reposted from blog.csdn.net/abc50319/article/details/97683819