CS231n Assignment 1: KNN Classification

Assignment Overview

Two things are critical when learning ML and DL: understanding the most basic algorithms, and being able to reimplement them from scratch in code. Only with both is there any hope of building more complex algorithms and systems. Otherwise you are just calling packages and running open-source code, forever repeating other people's work without your own understanding, and unable to apply the algorithms to real tasks.

Fortunately there is cs231n, a course that uses DL for computer vision as its entry point and covers basic methods such as KNN classification, linear classifiers, and neural networks. Its assignments also forbid off-the-shelf packages: everything must be written in Python with numpy.

The first problem of the first assignment asks you to classify the cifar10 dataset with the k-Nearest Neighbor algorithm, to use cross-validation to find a suitable value of K, and to implement two distance metrics: L1 (the sum of absolute differences) and L2 (the square root of the sum of squared differences).
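
Written out for flattened image vectors $x$ and $y$, the two metrics are

$$d_1(x,y)=\sum_i \lvert x_i-y_i\rvert, \qquad d_2(x,y)=\sqrt{\sum_i (x_i-y_i)^2}.$$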

Before running the program, download the cifar10 dataset from its official site (http://www.cs.toronto.edu/~kriz/cifar.html); be sure to get the python version. Then update the training-set and test-set paths in the code accordingly.

Without further ado, the code.

Source Code

# -*- coding: utf-8 -*-
"""
Created on Sat Sep 22 17:02:59 2018

@author: wjp_ctt
"""

import pickle
import random
import numpy as np
import matplotlib.pyplot as plt

#Load one cifar10 batch file into a dict (keys are bytes objects in Python 3)
def unpickle(file):
    with open(file, 'rb') as fo:
        data_dict = pickle.load(fo, encoding='bytes')
    return data_dict

#Create the validation folds: sample k_fold*num_validation distinct training
#indices and arrange them with one fold per column
def get_validation_set(k_fold, num_validation, training_data):
    num_training=np.size(training_data, 0)
    validation_set=random.sample(range(0,num_training),k_fold*num_validation)
    validation_set=np.reshape(validation_set,[num_validation, k_fold])
    return validation_set
    
#L1 distance: sum of absolute differences over all pixels
#(rows index training samples, columns index testing samples)
def L1_loss(training_data, testing_data):
    num_training=np.size(training_data,0)
    num_testing=np.size(testing_data,0)
    l1_loss = np.zeros([num_training, num_testing])
    for i in range(0,num_training):
        for j in range(0,num_testing):
            #cast to a signed type first: subtracting uint8 arrays wraps around
            diff=training_data[i,:].astype(np.int32)-testing_data[j,:]
            l1_loss[i,j]=np.sum(np.abs(diff))
    return l1_loss

#L2 distance: square root of the sum of squared differences
def L2_loss(training_data, testing_data):
    num_training=np.size(training_data,0)
    num_testing=np.size(testing_data,0)
    l2_loss = np.zeros([num_training, num_testing])
    for i in range(0,num_training):
        for j in range(0,num_testing):
            diff=training_data[i,:].astype(np.int32)-testing_data[j,:]
            #the sqrt does not change the neighbor ranking, but matches the definition above
            l2_loss[i,j]=np.sqrt(np.sum(np.square(diff)))
    return l2_loss

#KNN classifier: majority vote among the k nearest neighbors of each testing sample
def knn(loss,k,testing_data,training_labels,testing_labels):
    num_testing=np.size(testing_data,0)
    labels=np.zeros([num_testing],dtype=int)
    #argpartition along axis 0 puts the k smallest distances first in each column
    result=training_labels[np.argpartition(loss,k,axis=0)][:k]
    for j in range(0,num_testing):
        #count votes per label; ties are broken in favor of the larger label
        tu=sorted([(np.sum(result[:,j]==i),i) for i in result[:,j]])
        labels[j]=tu[-1][1]
    correct=np.where(labels==testing_labels)
    correct_num=np.size(correct[0])
    accuracy=correct_num/num_testing
    return accuracy

#K-fold cross-validation over the candidate k values
#(uses the global training_data, training_labels and validation_accuracy)
def k_fold_validation(k_fold, k_candidate, num_validation, validation_set):
    print('Doing k_fold validation...\n')
    for i in k_candidate:
        for j in range(0, k_fold):
            #hold out fold j as the validation split, train on the other folds
            validation_training=np.delete(validation_set,j,axis=1)
            validation_training=np.reshape(validation_training,[(k_fold-1)*num_validation])
            validation_testing=training_data[validation_set[:,j],:]
            loss=L1_loss(training_data[validation_training,:],validation_testing)
            #the rows of loss index the held-in subset, so pass that subset's labels
            accuracy=knn(loss,i,validation_testing,training_labels[validation_training],training_labels[validation_set[:,j]])
            validation_accuracy[i-1, j]=accuracy
    mean=np.mean(validation_accuracy,axis=1)
    var=np.var(validation_accuracy,axis=1)
    plt.errorbar(k_candidate, mean,yerr=var)
    plt.show()
    k=np.argmax(mean)+1
    print('The most suitable k is %d\n'%(k))
    return k
    

#Build the training set
training_data=np.zeros([50000,3072],dtype=np.uint8)
training_filenames=np.zeros([50000],dtype=object)
training_labels=np.zeros([50000],dtype=int)
for i in range(0,5):
    #change this to the path where you keep the cifar10 training batches
    file_name='cifar-10-python/cifar-10-batches-py/data_batch_'+str(i+1)
    temp=unpickle(file_name)
    training_data[i*10000+0:i*10000+10000,:]=temp.get(b'data')
    training_filenames[i*10000+0:i*10000+10000]=temp.get(b'filenames')
    training_labels[i*10000+0:i*10000+10000]=temp.get(b'labels')
print('Training data loaded: 50000 samples from 10 categories!\n')

#Build the testing set (change file_name to the path of your cifar10 test batch)
file_name='cifar-10-python/cifar-10-batches-py/test_batch'
temp=unpickle(file_name)
testing_data=temp.get(b'data')
testing_filenames=temp.get(b'filenames')
#convert the label list to an array so it can be compared elementwise in knn
testing_labels=np.array(temp.get(b'labels'))
print('Testing data loaded: 10000 samples from 10 categories!\n')

#Randomly sample validation folds from the training set
k_fold=5
num_validation=2000
k_candidate = range(1,16)
validation_accuracy=np.zeros([np.size(k_candidate), k_fold])
validation_set=get_validation_set(k_fold, num_validation, training_data)
print('Validation data created from training data: %d folds and %d samples for each fold.\n'%(k_fold, num_validation))

#Cross-validate to choose k
k=k_fold_validation(k_fold, k_candidate, num_validation, validation_set)

#Compute the L1 distances (the L2 version is left commented out below)
print('Calculating the distances between training and testing samples...\n')
l1_loss=L1_loss(training_data,testing_data[0:1000,:])
#l2_loss=L2_loss(training_data[0:100,:],testing_data[0:10,:])
   
#Run KNN classification on the first 1000 test images
print('Doing KNN classification...\n')
accuracy=knn(l1_loss,k,testing_data[0:1000,:],training_labels,testing_labels[0:1000])
print('accuracy is ',accuracy)
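
As an aside: the nested loops in L1_loss and L2_loss dominate the runtime. A fully loop-free L2 distance follows from the expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x·y. Here is a minimal sketch; the helper name L2_loss_vectorized is illustrative and not part of the program above:

#Illustrative sketch (not in the original program): fully vectorized L2 distance
#with the same [num_training, num_testing] output shape as L2_loss above
def L2_loss_vectorized(training_data, testing_data):
    train=training_data.astype(np.float64)
    test=testing_data.astype(np.float64)
    #||x||^2 + ||y||^2 - 2*x.y, broadcast over all training/testing pairs
    sq=np.sum(train**2,axis=1)[:,None]+np.sum(test**2,axis=1)[None,:]-2.0*train.dot(test.T)
    #clip tiny negative values caused by floating-point round-off before the sqrt
    return np.sqrt(np.maximum(sq,0))

Because the work moves from Python loops into a single matrix multiplication, this is dramatically faster on the 50000-by-1000 distance matrix used here.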

Program Output

Training data loaded: 50000 samples from 10 categories!

Testing data loaded: 10000 samples from 10 categories!

Validation data created from training data: 5 folds and 2000 samples for each fold.

Doing k_fold validation...
The most suitable k is 3

Calculating the distances between training and testing samples...

Doing KNN classification...

accuracy is  0.252

Results

Cross-validation selects K = 3. With 50000 training images and 1000 test images, the final accuracy is 25.2%. The accuracy is low because this method only measures distances between raw pixel values, and individual pixels rarely capture the overall structure of an image. Later posts will implement more accurate methods.

Reprinted from blog.csdn.net/wjp_ctt/article/details/82827867