cs231n assignment1 Image Features

This part is about describing images with extracted features rather than raw pixel values, in order to see how feature processing affects recognition accuracy. We test with an SVM and a Two-Layer Neural Network and compare against the earlier raw-pixel results.

Some notes from the assignment:

For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel of the HSV color space. We form the final feature vector for each image by concatenating the HOG feature vector and the color histogram feature vector.

Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together should work better than using either alone. Verifying this assumption would be a good thing to try for your own interest.

The `hog_feature` and `color_histogram_hsv` functions each operate on a single image and return a feature vector for that image. The `extract_features` function takes a set of images and a list of feature functions, evaluates each feature function on each image, and stores the results in a matrix in which each row is the concatenation of all feature vectors for a single image.

The HOG and HSV feature functions are implemented in cs231n/features.py.
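
To make the description above concrete, here is a minimal sketch (my own simplified stand-in, not the actual cs231n implementation) of how such an extractor evaluates a list of feature functions on every image and concatenates the results row by row; the real `extract_features` from cs231n/features.py is what the code below actually uses.

import numpy as np

def extract_features_sketch(imgs, feature_fns, verbose=False):
    # imgs: array of shape (N, H, W, C); feature_fns: list of functions,
    # each mapping a single image to a 1-D feature vector.
    num_images = imgs.shape[0]
    # Run every feature function on the first image to learn the total feature dimension
    first = [np.asarray(fn(imgs[0])).ravel() for fn in feature_fns]
    feats = np.zeros((num_images, sum(f.size for f in first)))
    feats[0] = np.concatenate(first)
    for i in range(1, num_images):
        feats[i] = np.concatenate([np.asarray(fn(imgs[i])).ravel() for fn in feature_fns])
        if verbose and i % 1000 == 0:
            print('Done extracting features for %d / %d images' % (i, num_images))
    return feats  # shape (N, total_feature_dim): one concatenated row per image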

Coding:

  • Load the data
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from cs231n.features import color_histogram_hsv, hog_feature


def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    # Load the raw CIFAR-10 data
    cifar10_dir = r'F:\pycharmFile\KNN\cifar-10-batches-py'  # raw string so the backslashes stay literal

    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # Subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]

    return X_train, y_train, X_val, y_val, X_test, y_test

X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
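
A quick sanity check on the loaded arrays can help; with the default split sizes above and CIFAR-10's 32x32 RGB images, the expected shapes are (49000, 32, 32, 3), (1000, 32, 32, 3) and (1000, 32, 32, 3).

# Sanity check: print the shapes of the subsampled splits
for name, arr in [('X_train', X_train), ('X_val', X_val), ('X_test', X_test)]:
    print(name, arr.shape)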
  • HOG + HSV
from cs231n.features import *

num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
print(X_train_feats.shape)
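
As the notes suggest, it is worth verifying that the combined features actually beat either feature type on its own. One way to do that (an optional extra, not part of the original notebook) is to build HOG-only and color-only feature matrices with the same `extract_features` call and run the same classifier sweep on each.

# Optional check: compare HOG-only, color-only, and combined features.
# Each matrix can be preprocessed and fed to the SVM / network below in
# exactly the same way to compare validation accuracies.
feature_sets = {
    'hog_only':   [hog_feature],
    'color_only': [lambda img: color_histogram_hsv(img, nbin=num_color_bins)],
    'combined':   feature_fns,
}
for name, fns in feature_sets.items():
    feats_tr = extract_features(X_train, fns)
    feats_va = extract_features(X_val, fns)
    print('%s: train %s, val %s' % (name, feats_tr.shape, feats_va.shape))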
  • Feature scaling, then append a bias dimension

# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat

# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat

# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
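
The column of ones appended above is the standard bias trick: once every feature vector carries a constant 1, a linear classifier can fold its bias vector into the weight matrix and compute scores with a single matrix product. A tiny illustration with made-up toy shapes (not part of the assignment code):

# Bias trick illustration (toy sizes, hypothetical data): with the extra
# constant-1 feature, X_ext.dot(W_ext) equals X.dot(W) + b.
n, d, c = 4, 3, 10
X_toy = np.random.randn(n, d)
W_toy, b_toy = np.random.randn(d, c), np.random.randn(c)
X_toy_ext = np.hstack([X_toy, np.ones((n, 1))])  # append bias dimension -> (n, d + 1)
W_toy_ext = np.vstack([W_toy, b_toy])            # bias folded in as last row -> (d + 1, c)
assert np.allclose(X_toy_ext.dot(W_toy_ext), X_toy.dot(W_toy) + b_toy)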
  • Test with an SVM
# Train SVM on features
# Use the validation set to tune the learning rate and regularization strength

from cs231n.classifiers.linear_classifier import LinearSVM

learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]

results = {}
best_val = -1
best_svm = None

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained classifer in best_svm. You might also want to play          #
# with different numbers of bins in the color histogram. If you are careful    #
# you should be able to get accuracy of near 0.44 on the validation set.       #
################################################################################
for rate in learning_rates:
    for regular in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=rate, reg=regular, num_iters=1000)
        y_train_pred = svm.predict(X_train_feats)
        acc_train = np.mean(y_train_pred == y_train)
        y_val_pred = svm.predict(X_val_feats)
        acc_val = np.mean(y_val_pred == y_val)
        results[(rate, regular)] = (acc_train, acc_val)
        if best_val < acc_val:
            best_val = acc_val
            best_svm = svm
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))

print('best validation accuracy achieved during cross-validation: %f' % best_val)

# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)

# Visualize examples that the classifier labeled as each class but whose true label is different
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
    idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
    idxs = np.random.choice(idxs, examples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
        plt.imshow(X_test[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls_name)
plt.show()

Inline question 1: 

Describe the misclassification results that you see. Do they make sense?

They do make some sense. For example, many bird images are misclassified as plane: their head and body shapes look similar. Likewise, many trucks are misclassified as car, and those two classes also share an obvious resemblance.

SVM accuracy before feature processing vs. after: (result screenshots omitted)

  • Test with a Two-Layer Neural Network

Pay attention to tuning the hyperparameters.

# Neural Network on image features

# Remove the bias dimension added earlier; the two-layer network learns its own biases
print(X_train_feats.shape)
X_train_feats = X_train_feats[:, :-1]
X_val_feats = X_val_feats[:, :-1]
X_test_feats = X_test_feats[:, :-1]
print(X_train_feats.shape)


from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10

net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
best_val = -1
best_stats = None
results = {}

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to    #
# cross-validate various parameters as in previous sections. Store your best   #
# model in the best_net variable.                                              #
################################################################################
learning_rates = [1e-1, 2e-1, 3e-1]
regularization_strengths = [5e-4, 1e-3, 5e-3, 1e-2, 1e-1, 0.5, 1]
for lr in learning_rates:
    for rg in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=1000, batch_size=200,
                          learning_rate=lr, learning_rate_decay=0.95,
                          reg=rg)
        y_train_pred = net.predict(X_train_feats)
        acc_train = np.mean(y_train_pred == y_train)
        y_val_pred = net.predict(X_val_feats)
        acc_val = np.mean(y_val_pred == y_val)
        results[(lr, rg)] = (acc_train, acc_val)

        if best_val < acc_val:
            best_net = net
            best_val = acc_val
            best_stats = stats
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

for lr, reg in results:
    train_acc, val_acc = results[(lr, reg)]
    print('lr: ', lr, ' reg: ', reg, ' train acc: ', train_acc, ' val_acc: ', val_acc)


# Run your best neural net classifier on the test set. You should be able
# to get more than 55% accuracy.

test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)

Neural network accuracy before feature processing vs. after: (result screenshots omitted)

It is clear that the choice of feature representation has a substantial impact on recognition accuracy.

Reposted from blog.csdn.net/li_k_y/article/details/86666652