R k-means, hierarchical clustering, and EM clustering implementation

1 Preparation and preview

The previous chapter introduced three clustering methods and their principles. This chapter focuses on the R implementation: the code is primary and the explanations are supplementary. For details, please refer to the previous chapter.

############### R BASICS #############################

rm(list=ls())  # clear everything from the R workspace
#set the work directory
#setwd("C:/Temp/Cluster") 
getwd()  

#install.packages("ggplot2") # install the package
library(ggplot2)  # load the package

##############Segmentation code ##########################

#import raw data
dat<-read.csv('practice_sample.csv',stringsAsFactors = F)
View(dat)
str(dat)  # data overview
summary(dat)

2 Data exploration

Perform basic data exploration to preview the data and check for missing values:

######Part1: data exploration
sum(is.na(dat[,1:23])) # total number of missing values
colSums(is.na(dat[,1:23]))  # number of missing values per column
mean(!complete.cases(dat$AU002))  # proportion of missing values

Group the data and preview the overall picture. If the proportion of missing values is below 20%, the affected records can be considered for deletion. Outliers can be handled by capping (winsorizing), as sketched below.
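
A minimal sketch of the capping idea, assuming we cap dat$AU002 at its 1st and 99th percentiles (the column choice and the thresholds are illustrative, not from the original):

# cap a numeric column at its 1st and 99th percentiles (winsorizing sketch)
caps <- quantile(dat$AU002, c(0.01, 0.99), na.rm = TRUE)
dat$AU002_capped <- pmin(pmax(dat$AU002, caps[1]), caps[2])
summary(dat$AU002_capped)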

library(plyr)

# grouped counts
prof<-ddply(dat,.(XB851),summarise,cnt=length(consumer_profile_key))  # count records per XB851 group
View(prof)

You can also write a small function to summarize a variable directly:

explore <- function(x){
  c(
    mean = mean(x, na.rm = TRUE),   # na.rm = TRUE removes the effect of NAs
    median = median(x, na.rm = TRUE),
    quantile(x, c(0, 0.01, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99, 1), na.rm = TRUE),
    max = max(x, na.rm = TRUE),
    missing = sum(is.na(x))   # number of missing values
  )
}
mean_AU002 <- explore(dat$AU002)
View(t(mean_AU002))


3 Missing value handling

Method 1: remove missing values

######Part2: missing value handling
#dat1<-na.omit(dat)   # drop records with missing values
#dat2<-dat[complete.cases(dat),]   # drop records with missing values (equivalent)

Method 2: impute

# impute: replace missing XB851 with 0
dat$R2_XB851 <- ifelse(is.na(dat$XB851), 0, dat$XB851)
summary(dat$XB851)
summary(dat$R2_XB851)

Since the proportion of missing values in this dataset is small, the affected records can simply be deleted.
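
If you go the deletion route, a minimal sketch applying the complete.cases approach shown above and verifying that no missing values remain:

dat <- dat[complete.cases(dat), ]   # keep only complete records
sum(is.na(dat))                     # should now be 0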

4 Standardization

K-means clustering is very sensitive to variable scales: variables measured in different units contribute unequally to the distance, so we first eliminate the scale effect by standardizing the data.

#subset 
dat.sub<-dat[,substr(names(dat),1,3)=="R1_"]
str(dat.sub)

#Scaling
dat.scale<-as.data.frame(scale(dat.sub))  # standardize the data
colSums(dat.scale)  # column sums (approximately zero after centering)
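
As a quick check that scale() behaved as expected, each column should now have mean approximately 0 and standard deviation 1:

# verify standardization: column means ~ 0, standard deviations = 1
round(colMeans(dat.scale), 10)
apply(dat.scale, 2, sd)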

5 k-means cluster analysis

######Part3: cluster analysis
######Part3.1: K-Means
set.seed(12345) 
cluster.km<-kmeans(dat.scale, centers=4,iter.max = 20)

cluster.km$iter   # number of iterations
cluster.km$size   # size of each cluster
cluster.km$centers   # cluster centers
table(cluster.km$cluster)  # cluster sizes again, from the assignment vector

cluster.km$totss    # total sum of squares (total SS = within-cluster SS + between-cluster SS)
cluster.km$tot.withinss   # total within-cluster sum of squares (the smaller the better: clusters are internally homogeneous)
cluster.km$betweenss   # between-cluster sum of squares (the larger the better: clusters are well separated)
cluster.km$betweenss/cluster.km$totss  # share of total SS explained between clusters (also called R-squared, or clustering goodness); useful for comparing clusterings: larger values mean tighter, better-separated clusters
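
A quick sanity check of the identity noted in the comments (total SS = within-cluster SS + between-cluster SS):

# verify: totss = tot.withinss + betweenss (up to floating-point rounding)
all.equal(cluster.km$totss, cluster.km$tot.withinss + cluster.km$betweenss)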

Merge the clustering results back into the original (unscaled) data and profile each cluster by its variable means:

# merge the cluster assignments with the original data
clus<-data.frame(cluster=cluster.km$cluster,dat.sub)
#Cluster profiling 
clus_profile<-aggregate(.~cluster,data=clus,FUN=mean)  # variable means by cluster
clus_profile
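
One way to eyeball the profiles is to plot each cluster's mean across the variables; a minimal base-graphics sketch (the layout choices are illustrative):

# plot each cluster's mean profile across the R1_ variables
profile.mat <- as.matrix(clus_profile[, -1])   # drop the cluster id column
matplot(t(profile.mat), type = "b", pch = 16, lty = 1,
        xlab = "variable index", ylab = "cluster mean", main = "Cluster profiles")
legend("topright", legend = paste("cluster", clus_profile$cluster),
       col = 1:nrow(profile.mat), pch = 16, lty = 1)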


Choosing k by clustering goodness

K-means requires the number of clusters to be fixed in advance. To choose k, we can loop over candidate values, working on a sample of the data and judging each value by betweenss/totss.

# choose the best number of clusters
#work on sampling data
#by betweenss/totss
set.seed(26987)
dat.temp<-dat.scale[sample(nrow(dat.scale),2000,replace=F),]   # sample 2000 rows of the scaled data

results<-numeric(20)  # clustering goodness for each candidate number of clusters
for (k in 1:20){
  fit.km<-kmeans(dat.temp, centers=k,iter.max = 20)
  results[k]<-fit.km$betweenss/fit.km$totss
}
round(results,2)

The goodness values for each k, computed on the 2,000 sampled records, can then be plotted:

plot(1:20, results, type="b", xlab="Number of Clusters",ylab="results")


Choosing k by the silhouette coefficient

The average silhouette width can also be used to judge clustering quality: the closer it is to 1, the better.

#by Silhouette Coefficient
# the closer the silhouette coefficient is to 1, the better
#install.packages("fpc")
library(fpc)
K <- 2:8
rounds <- 5   # repeat each k several times and average, since k-means depends on random starts
rst <- sapply(K, function(i){
  print(paste("K=",i))
  mean(sapply(1:rounds,function(r){
    print(paste("Round",r))
    result <- kmeans(dat.temp, i)
    stats <- cluster.stats(dist(dat.temp), result$cluster)
    stats$avg.silwidth   # average silhouette width
  }))
})
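
As an alternative to fpc::cluster.stats, the average silhouette width can be computed directly with cluster::silhouette; a minimal sketch for a single candidate k (k = 4 here is illustrative):

# average silhouette width via cluster::silhouette, for one candidate k
library(cluster)
d <- dist(dat.temp)
sil <- silhouette(kmeans(dat.temp, 4)$cluster, d)
summary(sil)$avg.width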


Joint judgment of clustering goodness and silhouette coefficient

# plot clustering goodness and silhouette coefficient together
cluster<-c(2:8)
r<-results[2:8]
sil<-rst
plot(cluster,r,type='b',pch=16,lty=1,ylim=c(0.5,1),xlab='number of clusters',ylab='parameters')
lines(cluster,sil,type='b',lty=2,pch=17)
legend('topleft',inset=0.05,c('results','silhouette'),lty=c(1,2),pch=c(16,17))

Multiple indices can also vote on k; the value with the most votes is taken as the best.

#by NbClust: uses many indices to determine the optimal number of clusters, but runs slowly
#install.packages("NbClust")
library(NbClust)
#provides up to 30 indices to determine the optimal number of clusters
set.seed(26987)
nc.index<-NbClust(dat.temp, min.nc=3, max.nc=6, method="kmeans")  # try every cluster count from 3 to 6; each index reports its optimal count, and the count supported by the most indices is chosen
nc.index$Best.nc # the optimal count voted for by each index
barplot(table(nc.index$Best.nc[1,]), 
        xlab="Number of Clusters", ylab="Number of Criteria",
        main="Number of Clusters Chosen by 26 Criteria")


6 Hierarchical Clustering

Because hierarchical clustering is computationally expensive and slow on large data, we first compress the data into 50 groups with k-means and then apply hierarchical clustering to the group means.

#####Part3.2:Hierarchical clustering
set.seed(123)

# hierarchical clustering is too slow on the raw data, so first split into 50 clusters with k-means, then cluster hierarchically
clus.x<-kmeans(dat.scale, centers=50,iter.max = 200,algorithm ="Lloyd") # algorithm selects the k-means variant; the default is "Hartigan-Wong", with "Lloyd", "Forgy" and "MacQueen" also available
clus.x$size
hist(clus.x$size)

The hierarchical step then works on the means of the 50 k-means clusters:

clus.sub<-data.frame(cluster=clus.x$cluster,dat.sub)
# get cluster means 
clus.n50<-aggregate(.~cluster,data=clus.sub,FUN=mean) # variable means by cluster
head(clus.n50)

clus.n50.scale<-scale(clus.n50[,-1])

Perform the hierarchical clustering, then prune the resulting tree:

clus.dist<-dist(clus.n50.scale, method = "euclidean") # distance matrix
clus.dist
cluster.hc<-hclust(clus.dist) # default method="complete" (complete linkage); others include single linkage ("single"), Ward's minimum variance ("ward.D"), centroid ("centroid") and average linkage ("average")

plot(cluster.hc) # dendrogram of the clustering result
groups_k4<-cutree(cluster.hc, k=4) # cut the tree into k=4 clusters
table(groups_k4)
groups_h10<-cutree(cluster.hc, h=10) # cut the tree at height h=10
table(groups_h10)
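
To see how the two cuts relate to each other, a quick cross-tabulation:

# cross-tabulate the k-based and height-based cuts
table(groups_k4, groups_h10)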

plot(cluster.hc)
rect.hclust(cluster.hc, k=4, border="red") # draw red rectangles around the 4-cluster solution on the dendrogram
rect.hclust(cluster.hc, k=3, border="green") # draw green rectangles around the 3-cluster solution

To assess the hierarchical clustering, compute the cophenetic correlation and display the clusters compressed to two dimensions:

# hierarchical clustering quality can be measured with the cophenetic distance; a correlation closer to 1 is better
cop<-cophenetic(cluster.hc)  # the cophenetic correlation measures how well the tree's implied distances match the original distances
cor(cop, clus.dist)

# bivariate cluster plot
library(cluster) 

par(mfrow = c(1,2))
clusplot(clus.n50.scale, groups_k4, color=TRUE, shade=TRUE,labels=2, lines=0)
clusplot(clus.n50.scale, groups_h10, color=TRUE, shade=TRUE,labels=2, lines=0)


7 EM Clustering

EM clustering is more abstract than the other two methods; see the previous chapter for an introduction. It fits a Gaussian mixture model, and BIC serves as the evaluation criterion.

######Part3.3:Model Based Clustering
#soft clustering
library(mclust)
cluster.mb<-Mclust(clus.n50.scale,G=4)  # EM clustering with 4 components
summary(cluster.mb) # summary of the clustering result
cluster.mb.bic<-mclustBIC(clus.n50.scale) # BIC for each number of clusters (AIC: Akaike information criterion; BIC: Bayesian information criterion)
summary(cluster.mb.bic) # best Gaussian mixture models by BIC
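
To compare candidate cluster counts visually, the BIC object can be plotted with mclust's plot method:

plot(cluster.mb.bic)  # BIC traces across model types and numbers of components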

8 Simple comparison of the three clustering methods

Use R's built-in iris data and compare each method's clustering against the true species labels.

data<-iris
str(data)
summary(data)
table(data$Species)
dat.test<-scale(iris[,1:4])
head(dat.test)
#Kmeans
set.seed(12345)
clus.kmeans<-kmeans(dat.test,3)
clus.kmeans$centers
# Hierarchical
test.dist<-dist(dat.test, method = "euclidean") 
clus.hc<-hclust(test.dist)
plot(clus.hc)
#EM clustering
clus.em<-Mclust(dat.test,G=3)

# show the results graphically
# multidimensional scaling: project the high-dimensional observations into a low-dimensional space
mds = cmdscale(dist(dat.test,method="euclidean"))
head(mds)
# plot
old.par <- par(mfrow = c(1,4))
plot(mds, col=iris$Species, main='true class', pch = 19)
plot(mds, col=clus.kmeans$cluster, main='Kmeans k=3', pch = 19)
plot(mds, col=cutree(clus.hc, k=3), main='Hierarchical', pch = 19)
plot(mds, col=clus.em$classification, main='EM clustering', pch = 19)
par(old.par)  # restore the plotting parameters
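
Beyond eyeballing the plots, agreement with the true labels can be quantified; one option (an addition here, not part of the original post) is the adjusted Rand index from mclust, which equals 1 for a perfect match:

# confusion table and adjusted Rand index against the true species
table(iris$Species, clus.kmeans$cluster)
adjustedRandIndex(iris$Species, clus.kmeans$cluster)       # k-means
adjustedRandIndex(iris$Species, cutree(clus.hc, k=3))      # hierarchical
adjustedRandIndex(iris$Species, clus.em$classification)    # EM clustering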



Source: blog.csdn.net/weixin_44498127/article/details/124329670