K-means clustering algorithm analysis

Brief description of the algorithm

K-means algorithm principle

Assume a given data sample X contains n objects, X = \left\{ X_{1}, X_{2}, X_{3}, \ldots, X_{n} \right\}, each of which has m-dimensional attributes. The goal of the K-means algorithm is to group the n objects into a designated number k of clusters according to the similarity between objects, such that each object belongs to exactly one cluster, namely the one whose center is nearest to it. The algorithm first initializes k cluster centers \left\{ C_{1}, C_{2}, C_{3}, \ldots, C_{k} \right\}, 1 < k \leq n, and then calculates the Euclidean distance from each object to each cluster center, as given by the following formula:

dis(X_{i}, C_{j}) = \sqrt{\sum_{t=1}^{m} (X_{it} - C_{jt})^{2}}

Here X_{i} represents the i-th object, 1 \leq i \leq n; C_{j} represents the j-th cluster center, 1 \leq j \leq k; X_{it} represents the t-th attribute of the i-th object, 1 \leq t \leq m; and C_{jt} represents the t-th attribute of the j-th cluster center.
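
For instance, a minimal NumPy sketch of this distance computation, using the first row of classes.txt as X_i and two hypothetical cluster centers:

import numpy as np

# One object with m = 2 attributes (the first row of classes.txt)
X_i = np.array([201.0, 158.0])
# Two hypothetical cluster centers (k = 2), one row per center
C = np.array([[100.0, 100.0],
              [300.0, 300.0]])

# dis(X_i, C_j) = sqrt(sum_t (X_it - C_jt)^2): the row-wise Euclidean norm
distances = np.linalg.norm(X_i - C, axis=1)
print(distances)             # distance from X_i to each center
print(np.argmin(distances))  # index of the nearest center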

Compare the distances from each object to every cluster center in turn and assign each object to the cluster with the nearest center, obtaining k clusters \left\{ S_{1}, S_{2}, S_{3}, \ldots, S_{k} \right\}. The K-means algorithm takes the cluster center as the prototype of a cluster: each center is the per-dimension mean of all objects in its cluster, calculated as follows:

C_{l} = \frac{\sum_{X_{i} \in S_{l}} X_{i}}{\left| S_{l} \right|}

In the formula, C_{l} represents the l-th cluster center, 1 \leq l \leq k; \left| S_{l} \right| represents the number of objects in the l-th cluster; and X_{i} represents the i-th object in the l-th cluster, 1 \leq i \leq \left| S_{l} \right|.
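
Likewise, the centroid update is just a per-dimension mean; a small sketch with a hypothetical cluster S_l made of the first three rows of classes.txt:

import numpy as np

# A hypothetical cluster S_l of three 2-dimensional objects
S_l = np.array([[201.0, 158.0],
                [171.0, 330.0],
                [ 94.0, 137.0]])

# C_l = (sum of the objects in S_l) / |S_l|, computed per dimension
C_l = S_l.sum(axis=0) / len(S_l)
# equivalently: C_l = np.mean(S_l, axis=0)
print(C_l)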

Algorithm implementation process

  1. Randomly choose K points in the feature space as the initial cluster centers.
  2. For every other point, calculate its distance to each of the K centers and assign the point to the cluster of the nearest center.
  3. Once all points are assigned, recalculate the new center (the mean) of each cluster.
  4. If the new centers coincide with the previous ones (the centroids no longer move), stop; otherwise repeat from step 2.

Core code

A handwritten implementation of the K-means algorithm:

import numpy as np
import random
import matplotlib.pyplot as plt
"""
Handwritten K-means implementation
"""
data = np.genfromtxt("classes.txt", delimiter='\t')
X = data
K = 5
colors = ['r', 'g', 'b', 'c', 'm', 'y', 'k']
max_iterations = 10000
random.seed(100)
def kmeans(data, K, max_iterations):
    # Randomly pick K data points as the initial cluster centers
    centers = random.sample(list(data), K)

    for iteration in range(max_iterations):
        # Assign every point to the cluster of its nearest center
        clusters = {i: [] for i in range(K)}
        for point in data:
            distances = [np.linalg.norm(point - center) for center in centers]
            cluster_index = np.argmin(distances)
            clusters[cluster_index].append(point)

        # Recompute each center as the mean of the points assigned to it
        new_centers = [np.mean(clusters[i], axis=0) for i in range(K)]
        # Stop once no center moves; the built-in all() is required here,
        # since np.all() applied to a generator is always truthy
        if all(np.array_equal(centers[i], new_centers[i]) for i in range(K)):
            break
        centers = new_centers

    return centers, clusters

final_centers, final_clusters = kmeans(X, K, max_iterations)
# Plot each cluster in its own color, then mark the centers with crosses
for i in range(K):
    cluster = np.array(final_clusters[i])
    plt.scatter(cluster[:, 0], cluster[:, 1], c=colors[i], label=f'Cluster {i + 1}')

centers = np.array(final_centers)
plt.scatter(centers[:, 0], centers[:, 1], c='k', marker='x', s=100, label='Cluster centers')

plt.xlabel('Height')
plt.ylabel('Width')
plt.legend()
plt.show()

Calling the K-means algorithm from the sklearn package:

import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
"""
Calling the K-means algorithm from the sklearn library
"""
data = np.genfromtxt("classes.txt", delimiter='\t')
X = data
K = 3
num_experiments = 5
colors = ['r', 'g', 'b', 'c', 'm', 'y', 'k']
# Repeat the fit with different seeds to see how stable the centers are
for i in range(num_experiments):
    kmeans = KMeans(n_clusters=K, init='k-means++', random_state=i)
    kmeans.fit(X)
    print(f"Experiment {i + 1} - fitted centers: {kmeans.cluster_centers_}")

# Fit one final model and keep its labels for plotting
kmeans = KMeans(n_clusters=K, init='k-means++', random_state=0)
kmeans.fit(X)

labels = kmeans.labels_

# Group the data points by their assigned cluster label
clustered_data = {i: [] for i in range(K)}
for i, label in enumerate(labels):
    clustered_data[label].append(X[i])

for i in range(K):
    cluster = np.array(clustered_data[i])
    plt.scatter(cluster[:, 0], cluster[:, 1], c=colors[i], label=f'Cluster {i + 1}')

centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='k', marker='x', s=100, label='Cluster centers')

plt.xlabel('Height')
plt.ylabel('Width')
plt.legend()
plt.show()
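
If the runs are to be compared quantitatively rather than by eyeballing the printed centers, the fitted estimator also exposes inertia_, the within-cluster sum of squared distances (lower means tighter clusters). A brief sketch under the same setup as above; the explicit n_init=10 simply pins the number of random restarts:

import numpy as np
from sklearn.cluster import KMeans

data = np.genfromtxt("classes.txt", delimiter='\t')
X, K = data, 3

for i in range(5):
    km = KMeans(n_clusters=K, init='k-means++', n_init=10, random_state=i)
    km.fit(X)
    # inertia_: sum of squared distances of points to their nearest center
    print(f"Experiment {i + 1} - inertia: {km.inertia_:.1f}")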

The algorithm process of the handwritten K-means:

  • Randomly select initial centers: randomly select K data points from the dataset as the initial centers.
  • Calculate distances: for each data point, calculate its distance to each of the current K centers.
  • Assign data points: assign each data point to the set of its nearest center.
  • Update centers: update each center to the mean of the data points in its set.
  • Repeat steps 2-4: until the centers no longer change or the maximum number of iterations is reached.
  • Return centers and clusters: return the final centers and the resulting clusters.

Experimental results and analysis

[Figure: clustering result of the handwritten Python K-means implementation (K=3)]

[Figure: clustering result of sklearn's K-means algorithm (K=3)]

[Figure: clustering result of the handwritten Python K-means implementation (K=5)]

Here, a handwritten Python implementation of the K-means algorithm is compared with the K-means implementation in the scikit-learn library. The two produce very similar results: both cluster the data points well.
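
This similarity can also be checked numerically rather than visually. A hedged sketch, assuming both scripts above were run in the same session with the same K, so that final_centers (handwritten) and kmeans.cluster_centers_ (sklearn) are both in scope: match each handwritten center to its nearest sklearn center and print the gap.

import numpy as np

# Assumes final_centers and kmeans come from the two scripts above,
# both run with the same K
hand = np.array(final_centers)    # handwritten centers
skl = kmeans.cluster_centers_     # sklearn centers

for c in hand:
    gap = np.linalg.norm(skl - c, axis=1).min()
    print(f"center {np.round(c, 1)} -> nearest sklearn center is {gap:.1f} away")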

Conclusion and experience

The K-means algorithm is a commonly used clustering algorithm for grouping data points into K clusters. In this experiment, we used 600 images from the VOC dataset, taking the annotated bounding box of each image (its height and width) as a data point, and applied the K-means algorithm to group these data points into K clusters.

classes.txt file:

201	158
171	330
94	137
300	180
175	250
190	265
150	146
222	274
102	372
213	122
19	43
202	297
163	348
174	356
29	53
31	81
85	105
77	159
102	140
333	482
148	229
97	200
133	186
52	76
256	306
411	332
30	115
151	202
164	233
283	328
394	237
107	153
151	128
99	139
118	318
240	311
420	371
153	188
75	148
54	52
197	326
14	34
196	250
295	374
230	167
206	161
105	164
41	28
119	108
328	360
252	414
279	429
135	251
101	156
75	169
123	311
238	298
132	157
79	64
15	106
52	172
159	327
82	90
82	252
273	305
281	211
205	291
456	330
223	372
199	118
116	162
231	212
19	25
334	346
68	222
116	179
165	206
222	461
91	277
36	25
86	155
162	251
173	372
255	228
74	171
296	440
118	158
288	271
120	87
31	87
300	206
131	195
69	109
71	186
300	298
330	190
222	187
56	135
192	276
95	300
209	166
100	309
455	315
38	42
89	177
303	401
277	200
216	357
221	246
130	106
232	263
340	498
126	213
162	343
465	221
130	280
144	223
499	356
35	60
260	372
64	153
181	161
55	153
42	78
182	295
178	333
460	485
121	354
142	227
299	304
194	147
478	332
236	441
132	108
56	45
242	374
30	73
40	27
46	57
230	228
251	221
217	356
104	264
108	150
26	35
172	275
261	199
17	30
272	197
324	408
10	24
76	45
160	215
274	373
248	201
128	104
311	329
413	176
267	382
160	331
255	175
97	224
306	240
367	252
198	219
222	260
214	292
225	358
66	167
146	137
96	344
353	498
167	100
287	191
445	373
372	331
474	328
117	185
386	334
124	202
74	68
27	83
405	418
57	121
214	225
166	469
347	362
209	437
251	302
188	167
30	110
155	198
227	225
231	290
314	188
13	20
243	206
23	51
385	330
26	31
164	280
355	235
385	353
77	184
148	288
82	134
220	309
366	341
150	104
126	318
163	473
37	135
315	485
187	242
339	484
236	177
159	176
339	402
260	274
145	277
231	237
246	270
158	117
49	139
276	373
60	167
281	482
60	190
191	382
325	317
252	298
147	235
64	71
67	127
280	318
212	437
184	165
165	288
61	188
290	319
62	115
301	232
478	144
254	169
106	123
70	70.1
100	223
97	130
96	282
201	309
110	183
99	214
159	186
92	266
82	150
151	248
226	319
100	113
195	192
471	326
202	238
98	216
478	331
159	160
402	374
220	138
239	261
248	176
108	118
297	372
155	287
30	57
192	163
19	23
112	429
363	251
83	173
134	373
341	440
309	321
190	476
120	149
67	233
30	35
102	196
68	188
62	158
305	425
196	178
184	354
121	140
165	243
121	320
314	315
198	170
190	376
215	184
193	114
148	161
138	222
262	203
301	487
361	210
87	216
183	381
318	337
401	275
64	55
43	49
254	137
316	270
439	268
41	32
155	133
223	175
46	50
142	161
381	276
71	199
81	55
184	287
304	276
162	213
81	59
341	229
85	63
187	275
74	256
121	109
167	354
160	200
346	466
202	320
289	453
303	182
422	266
49	56
156	194
267	124
333	178
173	127
185	178
326	485
177	280
222	245
313	277
99	152
74	98
410	188
148	51
161	140
428	428.1
318	317
65	117
496	330
255	166
274	245
100	114
158	138
74	130
184	273
260	204
90	67
150	246
126	96
190	233
170	324
301	288
356	292
462	340
297	332
48	97
343	349
57	131
110	79
58	70
253	226
23	25
466	323
179	260
198	215
341	219
76	66
324	255
218	170
376	446
88	216
146	338
280	265
216	298
222	185
268	175
194	414
118	214
273	234
62	149
366	239
181	188
258	198
42	20
224	401
30	22
108	257
139	285
428	339
140	162
92	90
314	184
263	206
180	170
246	223
127	67
403	327
189	273
280	317
288	272
56	118
77	72
38	27
468	368
96	164
169	149
240	190
219	383
232	135
136	366
145	306
361	206
165	209
305	428
105	232
305	222
182	139
108	141
32	38
123	83
425	247
201	261
61	133
88	88.1
400	364
100	191
109	107
122	92
107	340
329	213
152	133
147	130
57	134
251	187
31	57
449	231
347	207
164	292
314	199
175	198
48	63
74	76
121	120
98	81
52	38
106	163
298	230
344	278
249	201
432	371
43	23
82	220
152	92
236	111
190	189
228	173
134	322
290	246
82	48
220	182
40	65
338	272
103	302
453	315
138	200
339	224
165	128
184	155
256	389
407	259
293	180
264	351
283	175
334	218
303	345
127	139
166	252
70	51
175	166
439	256
247	257
321	448
207	204
271	370
164	261
306	303
303	342
155	118
405	358
177	330
96	71
420	174
62	87
76	57
475	340
163	190
167	164
177	238
190	104
357	329
97	77
163	213
46	38
43	34
49	37
113	99
421	313
32	31
410	482
128	173
366	430
39	29
457	232
36	66
485	468
118	112
89	77
132	107
233	304
425	330
112	79
102	117
452	295
71	48
46	89
267	229
85	163
326	269
161	214
409	332
299	180
49	29
116	118
209	137
264	132
273	269
162	105
202	171
70	163
97	170
286	355
323	174
117	161
117	214
223	220
138	95
110	100
468	333
57	55
168	186
27	19
189	220
141	134
371	362
46	30
253	280
135	106
321	377
68	65
182	260
126	218
162	165
111	125
312	258
357	238
461	388
240	176
177	150
156	116
321	250
31	43
65	52
186	183
163	160
147	196
82	64
219	214
101	131
247	154
70	42
37	31
113	186
145	171
14	11
